\begin{document}
\title[On a Problem of Saffari]{Generalizations on a Problem of Saffari}
\normalsize \author[A. P. Mangerel]{Alexander P. Mangerel} \address{Department of Mathematics\\ University of Toronto\\ Toronto, Ontario, Canada} \email{[email protected]} \begin{abstract} We provide a generalization of a problem first considered by Saffari and fully solved by Saffari, Erd\H{o}s and Vaughan on direct factor pairs, to arbitrary finite families of direct factors, and solve it using a method of Daboussi. We end with a few related open problems.\end{abstract} \maketitle
\section{Introduction} A common problem in analytic and combinatorial number theory is to determine statistical information on the sizes of sets of products of integers from a given sequence. For instance, the Davenport-Erd\H{o}s theorem states that given any sequence $\mathcal{A} \subseteq \mathbb{N}$, its set of multiples $\mathcal{M}(\mathcal{A}) := \{ma : a \in \mathcal{A}, m \in \mathbb{N}\}$ has logarithmic density, i.e., for $\mathcal{C} := \mathcal{M}(\mathcal{A})$, the limit \begin{equation*} \delta(\mathcal{C}) := \lim_{x \rightarrow \infty} \frac{1}{\log x} \sum_{n \leq x \atop n \in \mathcal{C}} \frac{1}{n} \end{equation*} exists (see Chapter 5 of \cite{HaR}). Saffari \cite{Saf1} considered an inverse problem in which the set of products dictates statistical information regarding the sequences that formed these products, including a particular case in which the sequences were well-behaved in the following sense: \begin{mydef} Let $\mathcal{A},\mathcal{B} \subseteq \mathbb{N}$ be such that $1 \in \mathcal{A} \cap \mathcal{B}$. Then $\mathcal{A}$ and $\mathcal{B}$ are said to be \emph{direct factors} of $\mathbb{N}$ if for each $n \in \mathbb{N}$ there exists a unique pair $(a,b) \in \mathcal{A} \times \mathcal{B}$ such that $n = ab$. \end{mydef} Recall that a sequence $\mathcal{S}$ is said to have \textit{natural density} if the limit $\lim_{x \rightarrow \infty} x^{-1}\sum_{n \leq x \atop n \in \mathcal{S}} 1$ exists. This limit is called the \textit{(natural) density} of $\mathcal{S}$ and is denoted by $d\mathcal{S}$. In his 1976 paper, Saffari proved the following theorem: \begin{theorem} Let $\mathcal{A},\mathcal{B}\subseteq \mathbb{N}$ be a pair of direct factors of $\mathbb{N}$. If $H(\mathcal{S}):= \sum_{s \in \mathcal{S}} \frac{1}{s}$ denotes the harmonic sum over a set $\mathcal{S}$ and $H(\mathcal{A}) < \infty$, then $\mathcal{A}$ and $\mathcal{B}$ have natural density. 
In particular, $d\mathcal{A} = 1/H(\mathcal{B}) = 0$ and $d\mathcal{B} = 1/H(\mathcal{A})$. \end{theorem} In 1979, Saffari, in joint work with Erd\H{o}s and Vaughan \cite{Saf2}, subsequently proved that, in the case when $H(\mathcal{A}) = \infty$ as well, the natural density of $\mathcal{B}$ is also zero. Daboussi gave a simplified proof of both of these results shortly thereafter \cite{Dab}. \\ Motivated by this initial problem, we generalize the result in the following direction: \begin{mydef} Let $m \geq 2$ and let $\mathcal{A}_j \subseteq \mathbb{N}$ for $1 \leq j \leq m$. Call $\{\mathcal{A}_1,\ldots,\mathcal{A}_m\}$ an \emph{$m$-family of direct factors} for $\mathbb{N}$ if for each $n \in \mathbb{N}$ there exists a unique $m$-tuple $(a_1,\ldots,a_m) \in \mathcal{A}_1 \times \cdots \times \mathcal{A}_m$ such that $n = a_1 \cdots a_m$. \end{mydef} It is natural to ask whether there is a similar relationship between the densities of the $\mathcal{A}_i$, should they exist, in terms of the properties of the other $m-1$ sequences. We answer this question in the affirmative: \begin{theorem} With the notation above, $d\mathcal{A}_i = \prod_{j=1 \atop j \neq i}^m H(\mathcal{A}_j)^{-1}$, where the right side is interpreted as zero when $H(\mathcal{A}_j) = \infty$ for some $j$. \end{theorem} The proof follows a similar thread of ideas as that of Daboussi, but with certain necessary modifications. In any case, we provide supplementary elaboration where needed. \\ We can construct examples of the families described in Definition 2: \\
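The simplest example is classical: every $n \in \mathbb{N}$ factors uniquely as $n = k^2 \ell$ with $\ell$ squarefree, so the perfect squares and the squarefree numbers form a $2$-family (that is, a direct factor pair). Since $H(\{k^2 : k \in \mathbb{N}\}) = \zeta(2) = \pi^2/6 < \infty$, Theorem 1 recovers the classical density $6/\pi^2$ of the squarefree numbers. \\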
i) Let $q$ be any positive integer (written $q$ here to avoid clashing with the family size $m$) and let $\{r_1,\ldots,r_{\phi(q)}\}$ be an ordering of the $\phi(q)$ residue classes coprime to $q$. Let $\mathcal{A}_j := \{n \in \mathbb{N} : p | n \Rightarrow p \equiv r_j \text{ (mod $q$)}\}$, the set of all integers composed only of primes congruent to $r_j$ mod $q$. \\
ii) Let $K/\mathbb{Q}$ be a Galois extension and let $\mathcal{A}_d$ denote the set of integers divisible only by rational primes such that the primes lying above them have relative degree $d$, where $d | [K:\mathbb{Q}]$. This partitions the primes and thus gives a family of direct factors indexed by the divisors of the degree of the field extension. \\ \\ In the remainder of the paper, we denote by $P^+(n)$ and $P^-(n)$ the largest and smallest prime factors, respectively, of a positive integer $n$. \section{Proof of Theorem 2} \begin{proof}
First, fix $y \geq 2$. For each $n \in \mathbb{N}$ set $n_y := \prod_{p^{\nu}||n \atop p \leq y} p^{\nu}$ and let $\mathcal{A}_{i,y} := \{n : n_y \in \mathcal{A}_i\}$. Also, for each $i$ let $\pi_i(n)$ denote the $i$th component $a_i \in \mathcal{A}_i$ of the $m$-tuple into which $n$ decomposes (this being well-defined by hypothesis). We remark that $P^+(ab) \leq y$ if and only if $P^+(a),P^+(b) \leq y$, and hence \begin{equation*} \prod_{p \leq y} (1-p^{-1})^{-1} = \sum_{P^+(n) \leq y} \frac{1}{n} = \sum_{P^+(a_1\cdots a_m) \leq y \atop a_i \in \mathcal{A}_i} \frac{1}{a_1\cdots a_m} = \prod_{i = 1}^m \left(\sum_{P^+(a_i) \leq y \atop a_i \in \mathcal{A}_i} \frac{1}{a_i} \right), \end{equation*} whence for each $i$, we have (provided each $\mathcal{A}_j$ is nonempty and $y$ is chosen large enough to produce a non-empty sum) \begin{equation*} \sum_{P^+(a_i) \leq y} \frac{1}{a_i} = \prod_{p \leq y} (1-p^{-1}) \prod_{j = 1 \atop j \neq i}^m \left(\sum_{P^+(a_j) \leq y \atop a_j \in \mathcal{A}_j} \frac{1}{a_j}\right)^{-1}. \end{equation*} In preparation for the remainder of the proof, we prove the following \begin{lemma} \label{LEM1} The density $d\mathcal{A}_{i,y}$ exists, and is equal to $\prod_{j=1 \atop j \neq i}^m \left(\sum_{P^+(a) \leq y \atop a \in \mathcal{A}_j} \frac{1}{a}\right)^{-1}$. Moreover, if $x > 0$, $A_i(x) := \sum_{a \leq x \atop a \in \mathcal{A}_{i}} 1$ and $A_{i,y}(x) := \sum_{n \leq x \atop n \in \mathcal{A}_{i,y}} 1$, then $A_i(x) \leq A_{i,y}(x)$. \end{lemma} \begin{proof}[Proof of Lemma \ref{LEM1}] This is an elaboration of the proof of Daboussi. We have \begin{align*} x^{-1}\sum_{n \leq x \atop n_y \in \mathcal{A}_i} 1 &= x^{-1}\sum_{a \leq x \atop P^+(a) \leq y, a \in \mathcal{A}_i} \sum_{\ell \leq \frac{x}{a} \atop P^-(\ell) > y} 1 = \sum_{a \leq x \atop P^+(a) \leq y, a \in \mathcal{A}_i} \frac{1}{a} \cdot \left(\frac{a}{x}\sum_{\ell \leq \frac{x}{a} \atop P^-(\ell) > y} 1\right). 
\end{align*} Note that $d\{n : P^-(n) > y\} = \prod_{p \leq y} (1-p^{-1})$ by the inclusion-exclusion principle, so the inner sum, normalized by $\frac{a}{x}$, is convergent, while the outer sum also converges (indeed it is increasing and bounded by the product $\prod_{p \leq y} (1-p^{-1})^{-1}$ for fixed $y$). Applying a discrete version of the dominated convergence theorem (say, defined by the sequence of functions $\{g_x(t)\}_x$ with $g_x(t) := f(\frac{x}{t})1_{(1,x)}(t)$, where $f(u) := u^{-1}\sum_{\ell \leq u \atop P^-(\ell) > y} 1$) with Stieltjes integrals \begin{equation*} \int_1^{x} g_x(t) \, d\left(\sum_{a \leq t \atop P^+(a) \leq y, a \in \mathcal{A}_i} \frac{1}{a}\right) \end{equation*} we arrive at the existence of the limit \begin{equation*} d\mathcal{A}_{i,y} = \lim_{x \rightarrow \infty} x^{-1}\sum_{n \leq x \atop n_y \in \mathcal{A}_{i}} 1 = \prod_{p \leq y} (1-p^{-1}) \sum_{P^+(a) \leq y \atop a \in \mathcal{A}_i} \frac{1}{a}, \end{equation*} which shows the first part of the claim. \\ \indent Now for each $i$, define $\phi_i: \mathcal{A}_{i} \rightarrow \mathcal{A}_{i,y}$ to be the mapping $a \mapsto \pi_i(a_y)\frac{a}{a_y}$. Note that this is well-defined: $\frac{a}{a_y}$ has no prime factors $\leq y$, and $\pi_i(a_y) \in \mathcal{A}_{i}$ with $P^+(\pi_i(a_y)) \leq y$ by definition, so the $y$-smooth part of $\phi_i(a)$ lies in $\mathcal{A}_{i}$, i.e., $\phi_i(a) \in \mathcal{A}_{i,y}$, as required. We claim that $\phi_i$ is injective, in which case \begin{equation*}
A_{i,y}(x) = \sum_{n \leq x \atop n_y \in \mathcal{A}_{i}} 1 \geq \sum_{n \leq x \atop n_y \in \mathcal{A}_{i}} |\phi_i^{-1}(n)| \geq \sum_{a \leq x \atop a\in \mathcal{A}_{i}} 1 = A_i(x) \end{equation*} which is the claim of the statement (the last inequality holds since $\phi_i(a) \leq a$, so every $a \leq x$ contributes). Indeed, if $a,a' \in \mathcal{A}_{i}$ are such that $\phi_i(a) = \phi_i(a')$ then, since $\frac{a_y}{\pi_i(a_y)} = \prod_{j \neq i} \pi_j(a_y)$, we have $a\prod_{j \neq i}\pi_j(a_y') = a'\prod_{j \neq i} \pi_j(a_y)$. Since the decompositions of integers into products of elements from the $\mathcal{A}_{j}$ are unique, and $a,a' \in \mathcal{A}_{i}$ while $\pi_j(a_y),\pi_j(a_y') \in \mathcal{A}_{j}$ for each $j \neq i$, comparing the $\mathcal{A}_{i}$-components of the two decompositions gives $a = a'$, and we're done. \end{proof} Lemma \ref{LEM1} allows us to immediately deduce that \begin{equation*} \overline{d}\mathcal{A}_{i} \leq d\mathcal{A}_{i,y} = \prod_{j=1 \atop j \neq i}^m \left(\sum_{P^+(a) \leq y \atop a \in \mathcal{A}_{j}} \frac{1}{a}\right)^{-1}. \end{equation*} This tells us, in particular, that if $H(\mathcal{A}_j) = \infty$ for some $j \neq i$ then $d\mathcal{A}_i$ exists and is equal to zero (by taking $y \rightarrow \infty$). \\ \indent We are left to check the case in which $H(\mathcal{A}_j) < \infty$ for every $j$. We need a lower bound to match the upper bound in the lemma to finish the proof. In this direction, we establish the following \begin{lemma} The following lower bound holds: \begin{equation*} \underline{d}\mathcal{A}_{i} \geq d\mathcal{A}_{i,y} + 1-d\mathcal{A}_{i,y}\prod_{j = 1 \atop j \neq i}^m \left(\sum_{a_j \in \mathcal{A}_j} \frac{1}{a_j} \right) \end{equation*} for each $1 \leq i \leq m$. \end{lemma} \begin{proof} In what follows, let $1_i$ denote the characteristic function of $\mathcal{A}_{i}$ for each $i$. Remark that $a \in \mathcal{A}_{i}$ if, and only if, the $m$-tuple representing $a$ consists of 1 at every component except possibly the $i$th component. 
It follows that an integer $a$ with representation $a = a_1 \cdots a_m$ lies in $\mathcal{A}_{i}$ if, and only if, $(1_j-\delta)(a_j) = 0$ for each $j \neq i$, where $\delta(n)$ is 1 or 0 according to whether $n=1$ or not. \\
\indent For each $k \notin \mathcal{A}_{i}$ there exists a nonempty set $S_k \subseteq \{1,\ldots,m\}\backslash \{i\}$ such that $\pi_j(k) \neq 1$ if and only if $j \in S_k$, and, conversely, any such set corresponds to elements of the complement of $\mathcal{A}_{i}$. For each $S \subseteq \{1,\ldots,m\}\backslash \{i\}$ let $\mathcal{V}_S$ denote the set of integers $k \notin \mathcal{A}_i$ such that $S_k = S$, and write $f_S$ for the characteristic function of $\mathcal{V}_S$. Then $\{\mathcal{V}_S : S \subseteq \{1,\ldots,m\}\backslash\{i\}, |S| > 0\}$ forms a partition of $\mathbb{N}\backslash \mathcal{A}_i$, whence \begin{align*}
x^{-1}\sum_{k \leq x} 1_i(k) &= 1-x^{-1}\sum_{S \subseteq \{1,\ldots,m\}\backslash \{i\} \atop |S| > 0} \sum_{k \leq x} f_S(k) = 1-\sum_{S \subseteq \{1,\ldots,m\}\backslash \{i\} \atop |S| > 0} \sum_{k \leq x \atop \pi_j(k) \neq 1 \leftrightarrow j \in S} \frac{1}{k}\cdot \frac{k}{x}\sum_{\ell \leq \frac{x}{k}} 1_i(\ell) \\
&\geq 1-\sum_{S \subseteq \{1,\ldots,m\}\backslash \{i\} \atop |S| > 0} \sum_{k \leq x \atop \pi_j(k) \neq 1 \leftrightarrow j \in S} \frac{1}{k}\cdot \frac{k}{x}A_{i,y}\left(\frac{x}{k}\right). \end{align*} Appealing once again to the Dominated Convergence Theorem, we have \begin{equation*}
\underline{d}\mathcal{A}_i \geq 1-d\mathcal{A}_{i,y}\sum_{S \subseteq \{1,\ldots,m\}\backslash \{i\} \atop |S| > 0} \sum_{k: \pi_j(k) \neq 1 \leftrightarrow j \in S} \frac{1}{k}. \end{equation*} As a result of the partition created, we have \begin{equation*} \sum_{k: \pi_j(k) \neq 1 \leftrightarrow j \in S} \frac{1}{k} = \prod_{j \in S} \left(\sum_{a_j \in \mathcal{A}_j} \frac{1}{a_j} - 1\right), \end{equation*} and so the sum above becomes, after introducing the contribution for $S = \emptyset$, \begin{align*} \underline{d}\mathcal{A}_i &\geq 1-d\mathcal{A}_{i,y}\sum_{S \subseteq \{1,\ldots,m\}\backslash \{i\}}\prod_{j \in S} \left(\sum_{a_j \in \mathcal{A}_j} \frac{1}{a_j} - 1\right) + d\mathcal{A}_{i,y}\\ &= d\mathcal{A}_{i,y} + 1-d\mathcal{A}_{i,y}\prod_{j = 1 \atop j \neq i}^m \left(1+\left(\sum_{a_j \in \mathcal{A}_j} \frac{1}{a_j} -1\right)\right) = d\mathcal{A}_{i,y}+1-d\mathcal{A}_{i,y}\prod_{j = 1 \atop j \neq i}^m \left(\sum_{a_j \in \mathcal{A}_j} \frac{1}{a_j}\right), \end{align*} which proves the lemma. \end{proof} To finish the proof, we write $\sum_{a_j \in \mathcal{A}_j} \frac{1}{a_j} = \sum_{a_j \in \mathcal{A}_{j} \atop P^+(a_j) \leq y} \frac{1}{a_j} + \sum_{a_j \in \mathcal{A}_j \atop P^+(a_j) > y} \frac{1}{a_j} =: H_y(\mathcal{A}_j) + R_y(\mathcal{A}_j)$, noting that $R_y(\mathcal{A}_j)$ vanishes as $y \rightarrow \infty$. We have by the first lemma that \begin{equation*} \underline{d}\mathcal{A}_{i} \geq d\mathcal{A}_{i,y} - d\mathcal{A}_{i,y} \left(\prod_{j = 1 \atop j \neq i}^m \left(H_y\left(\mathcal{A}_{j}\right) + R_y\left(\mathcal{A}_{j}\right)\right) - \prod_{j = 1 \atop j \neq i}^m H_y\left(\mathcal{A}_{j}\right)\right). \end{equation*} Remark that when the bracketed term is expanded, the product $\prod_{j = 1 \atop j \neq i}^m H_y(\mathcal{A}_{j})$ cancels, and each remaining term carries at least one factor $R_y(\mathcal{A}_j)$ for some $j \neq i$. 
As all $H(\mathcal{A}_j)$ are assumed finite, these remaining terms go to zero as $y \rightarrow \infty$, and hence, for any $\epsilon > 0$ we can choose $y$ (depending only on $\epsilon$ and $m$) large enough such that \begin{equation*} \underline{d}\mathcal{A}_{i} \geq d\mathcal{A}_{i,y} - \epsilon. \end{equation*} Thus, $d\mathcal{A}_i$ exists and is equal to $\lim_{y \rightarrow \infty} d\mathcal{A}_{i,y}$, implying the theorem. \end{proof} \section{Open Problems and other Generalizations} Instead of considering collections of integer sequences representing all positive integers uniquely, we could restrict to representations of some subsequence of $\mathbb{N}$. \begin{mydef} Let $S \subseteq \mathbb{N}$. Call $\mathcal{A}_1,\mathcal{A}_2$ a \emph{pair of direct factors for $S$} if for each $s \in S$ there exists a unique pair $(a_1,a_2) \in \mathcal{A}_1 \times \mathcal{A}_2$ such that $s = a_1a_2$. \end{mydef} Note that it may be that $S \subsetneq \{a_1a_2 : (a_1,a_2) \in \mathcal{A}_1 \times \mathcal{A}_2\}$. All we require is that the map $(a,b) \mapsto ab$ be an injection on the preimage of $S$. We seek to know whether any analogous relationship will exist between $\mathcal{A}_1$ and $\mathcal{A}_2$ according to the properties of $S$ (which, for instance, might require $S$ to possess natural density).\\ \indent Another natural question is to classify the set of direct factors of $\mathbb{N}$, and more generally, of sequences $S$ of the type considered in answering the above problem. We may remark, for example, that there is no $\mathcal{A}$ such that $(\mathcal{A},\mathcal{A})$ is a direct factor pair even if we do not distinguish between $(a,a')$ and $(a',a)$. Indeed, $\mathcal{A}$ must contain every prime and 1, implying that it cannot contain any squares of primes and hence must contain all cubes of primes. In this case, however, it will not contain any fourth powers of primes since otherwise one should have $p^4 = p\cdot p^3 = p^4 \cdot 1$. 
As a result, the unique representation of $p^5$ must be $p^5 = 1 \cdot p^5$, forcing $p^5 \in \mathcal{A}$; but then $p^6 = p \cdot p^5 = p^3 \cdot p^3$ admits two representations, a contradiction.\\
\indent Conversely, it is possible for a sequence to have infinitely many direct factor pairs. Indeed, let $S \subset \mathbb{N}$ be a primitive sequence, i.e., such that for any two $s',s \in S$ with $s' < s$ we have $s' \nmid s$. Let $\{S_1,S_2\}$ be a partition of $S$ and set $S' := S \cup \{1\}$ and $S_j' := S_j \cup \{1\}$ for $j = 1,2$. Then clearly each $s \in S'$ has the form $s = s\cdot 1$ with $s \in S_1'$ or $s \in S_2'$, and moreover if $s = s_1s_2$ then one of $s_1$ and $s_2$ must be 1, since otherwise $s_j | s$ with $1 < s_j < s$, contradicting primitivity. If $S$ is an infinite such sequence (these do exist, an example being furnished by the set $\{p_i^i : i \geq 1 \}$, where $p_i$ denotes the $i$th prime) then there are infinitely many such partitions, providing distinct direct factor pairs.
\end{document}
Research article | Published: 08 July 2019
Application of artificial neural network model in diagnosis of Alzheimer's disease
Naibo Wang1,2,
Jinghua Chen1,
Hui Xiao1,
Lei Wu1,
Han Jiang3 &
Yueping Zhou1
Alzheimer's disease (AD) has become a public health crisis globally due to its increasing incidence. The purpose of this study was to establish an early warning model using an artificial neural network (ANN) for early diagnosis of AD and to explore early sensitive markers for AD.
A population-based nested case-control study design was used. 89 new AD cases with good compliance who were willing to provide urine and blood specimens were selected from a cohort of 2482 community-dwelling elderly aged 60 years and over from 2013 to 2016. For each case, two controls living nearby were identified. Biomarkers for AD in urine and blood, neuropsychological functions and epidemiological parameters were included to analyze potential risk factors of AD. A back-propagation neural network with a three-layer topology was applied to develop the early warning model and compared with logistic regression, k-nearest neighbor (kNN) and support vector machine (SVM) models. The performance of all models was measured by sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV) and the area under the curve (AUC), and was validated using bootstrap resampling.
The average age of the AD group was about 5 years higher than that of the non-AD controls (P < 0.001). The AD group included a significantly larger proportion of subjects with a family history of dementia, compared with the non-AD group. After adjusting for age and gender, the concentrations of urinary AD7c-NTP and blood aluminum were significantly higher in the AD group than in the non-AD group (2.01 ± 1.06 vs 1.03 ± 0.43 and 1.74 ± 0.62 vs 1.24 ± 0.41, respectively), but the concentration of selenium in the AD group (2.26 ± 0.59) was significantly lower than that in the non-AD group (2.61 ± 1.07). All models were established using the 18 variables that differed significantly between AD patients and controls as independent variables. The ANN model outperformed the other classifiers: its AUC was 0.897, and it obtained, on average, an accuracy of 92.13%, a sensitivity of 87.28% and a specificity of 94.74%.
Increased risk of AD may be associated with older age among senior citizens in urban communities. Urinary AD7c-NTP is clinically valuable for early diagnosis. The established ANN model obtained high accuracy and diagnostic efficiency, and could be a low-cost, practicable tool for the screening and diagnosis of AD among community residents.
With the growing proportion of the aging population, the incidence of Alzheimer's disease (AD) has been increasing, which has undoubtedly become a global public health crisis [1]. AD seriously affects patients' quality of life, and there are no ideal drugs or methods for clinical treatment [2]. The 2016 Alzheimer's Disease Facts and Figures report stated that AD led to 84,767 deaths recorded by official death certificates in 2013 and became the fifth leading cause of death in elderly Americans over 65. In 2015, the care provided to patients with AD and other dementias totaled nearly 18.1 billion hours, valued at more than $221 billion [1]. These data show that substantial burdens have been placed on families, society and the state. In China, Alzheimer's disease is becoming the fastest growing fatal disease, and at least 9.5 million patients have been diagnosed so far. Nearly 1 million new cases are found every year, and the number of new cases is increasing year by year.
Although researchers have revealed a great deal concerning AD, much is yet to be discovered about its precise pathogenesis, owing to its complex causes involving genetic, environmental and metabolic factors. A wide range of explorations in social medicine, epidemiology and molecular medicine has been carried out, and most hold the view that AD is caused by multidimensional factors related to physiology, psychology and sociology [3,4,5,6,7,8,9,10]. Screening for AD relies heavily on clinical manifestations and neuropsychological scales including the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), Activities of Daily Living (ADL) and Global Deterioration Scale (GDS). These approaches are practicable for patients with advanced AD, whereas early detection will be key to preventing, slowing and stopping Alzheimer's disease. Therefore, experts and scholars around the world have actively explored the prediction of AD, concentrating on risk factors and biomarkers. The accumulation of the protein beta-amyloid outside neurons and an abnormal form of the protein tau inside neurons are two of several changes believed to contribute to the development of AD. Biomarkers believed to be useful for the detection of AD include amyloid beta (Aβ), T-tau, P-tau, ApoE-ε4 and Alzheimer-associated neuronal thread protein (AD7c-NTP) [11,12,13,14].
The main existing methods of AD diagnosis are magnetic resonance imaging (MRI) of the brain, genetic detection of certain proteins, and other pathological examinations. These methods, intended mainly for clinical patients, cost considerable time and money, and they are especially unsuitable for early screening and diagnosis in large populations. With the deepening of etiological research on AD, studies of multidimensional factors concerning environment, inheritance and life behavior have been carried out. Many experts have taken advantage of these etiological results to establish different mathematical models [15,16,17], which has substantially advanced research on the early diagnosis and prediction of AD. Because of the complexity of AD, the relationships between the various risk factors are not independent, so most traditional statistical models are not very applicable. However, an artificial neural network is a nonlinear adaptive processing system composed of a large number of artificial neurons connected to each other in an appropriate way [18]. With the ability to approximate any function with arbitrary precision, this model can solve problems of uncertain or ambiguous medical information more effectively. In our previous study, an ANN containing information on trace elements and neurotransmitters was established and could accurately distinguish between AD patients and non-AD controls [19]. In this study, epidemiological parameters, scales of neuropsychological function and biomarkers were combined, and a back-propagation neural network with a three-layer topology was applied to develop an early warning model, in order to provide an effective method for the early diagnosis of AD and to explore early sensitive markers for AD.
Study subjects
The cohort for our project was established in the Hongdu community, which has a large and stable population, in Nanchang, China in 2012. Senior citizens aged 60 years and over in this community were recruited, and a follow-up cohort of 2482 residents had been formed by the end of 2012. A nested case-control study design was applied: 89 new AD cases with good compliance who were willing to provide urine and blood specimens were selected from the cohort from 2013 to 2016. Two controls were selected whenever a new case was identified. The criterion for selecting controls was the nearest residential address; if suitable controls lived in the same building, those living on the closest floor were chosen. AD diagnosis was performed by experienced neurologists according to ICD-10 and the National Institute of Neurological and Communicative Disorders and Stroke—Alzheimer's Disease and Related Disorders Association criteria [20].
A standardized questionnaire was developed for collecting data on 19 items in four dimensions: demographic characteristics (age, gender, education, previous occupation, marital status, monthly family income), behavioural characteristics (tobacco smoking, alcohol drinking, physical exercise, social activities, living alone or not, personality), medical history (diabetes, hypertension, Parkinson's disease, traumatic brain injury, family history of dementia) and neuropsychological performance assessments (MMSE and ADL). The MMSE scale can comprehensively and accurately reflect the intelligence status and cognitive impairment of the subjects. The ADL scale can quickly assess the subjects' ability to perform daily living activities. Both scales contribute to the screening of AD. The behavioural characteristics were defined as follows: 1) smoking was defined as one or more cigarettes per day for more than 1 year before the investigation; 2) alcohol drinking was defined as drinking alcohol at least twice a week for more than 1 year before the investigation; 3) the frequency of physical exercise and social activities was categorized as "regularly" (more than 3 times a week), "sometimes" (1–2 times a week) and "never" (less than once a week). In addition, individuals with a family history of dementia refer to subjects with first-degree relatives with dementia. The data were collected by strictly trained investigators, and the information on medical history was obtained from health records stored in the community Health Service Centre. Incomplete questionnaires were completed by interviewing the subjects in their homes again.
Laboratory assays
Five biomarkers were assayed: trace elements (aluminum, selenium and zinc) and Aβ42 in blood, and AD7c-NTP in urine. The concentration of urinary AD7c-NTP was tested in strict accordance with an ELISA kit (Hanke, Hainan, China), as was the concentration of blood Aβ42 with an ELISA kit (Excell Biology, Shanghai, China). The detection of trace elements was carried out using atomic absorption spectroscopy (AAS) with a graphite atomizer, against national reference standards.
Statistical analysis
Data were entered and analyzed using EpiData 3.1 and SPSS 21.0. Descriptive characteristics were presented as mean and SD for continuous variables or frequency and percentage for categorical variables. Comparisons between the AD and non-AD groups were made with the t-test for continuous variables and chi-square analysis for categorical variables. Generalized linear regression was used to adjust for age and gender when comparing the scores of MMSE and ADL and the concentrations of biomarkers in urine and blood between cases and controls. The level of statistical significance was set at a 2-tailed p < 0.05.
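As an illustration of the chi-square comparison for categorical variables, the 2×2 test statistic can be computed directly. This is a sketch, not the authors' SPSS output; the counts are reconstructed approximately from the family-history percentages reported in the Results (16/89 cases vs 11/178 controls).

```python
# Pearson chi-square statistic for a 2x2 table.
# Rows: AD / non-AD; columns: exposed / unexposed.
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Approximate counts for family history of dementia:
# AD: 16 of 89; controls: 11 of 178.
chi2 = chi_square_2x2(16, 73, 11, 167)
significant = chi2 > 3.841  # chi-square critical value, df = 1, alpha = 0.05
```

For these counts the statistic is about 9.1, well above the df = 1 critical value, matching the significant group difference reported below.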
ANN model
Because the sigmoid transfer function requires inputs on a comparable scale, the original data were normalized to the range 0 to 1; this avoids large training errors caused by differences in magnitude between the input and output data. All data were normalized using the range (min-max) method, \( x_i^{\prime}=\frac{x_i-x_{\min}}{x_{\max}-x_{\min}} \).
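A minimal sketch of this min-max normalization (an illustrative helper, not the authors' SPSS procedure):

```python
# Min-max (range) normalization: rescale each value of a variable to [0, 1].
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:  # constant column: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

ages = [60, 70, 80]
scaled = min_max_normalize(ages)  # [0.0, 0.5, 1.0]
```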
After being converted appropriately, the data were randomly divided into a training set (70% of the samples) and a testing set (30% of the samples). This ratio was chosen after comparing two alternative splits, 66.7%/33.3% and 75%/25%; the best performance of the ANN was obtained with 70% of the data for training and the rest for testing.
Taking the complexity of the network, training time and "over-fitting" into account, the neural network designed in this study consists of three layers (one input layer, one hidden layer and one output layer). The input layer consists of 18 neurons (the 18 variables that differed statistically significantly between the cases and the controls) as network inputs. Each neuron performs a weighted summation of its inputs. The activation function was the sigmoid function \( f(x)=\frac{1}{1+e^{-x}} \). The training algorithm for the ANN was the most widely used back-propagation (BP) algorithm. It is generally believed that a BP-ANN model needs a sample 5–10 times the number of variables in the input layer to ensure reliability and external validity [21], and our sample size meets this demand.
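The three-layer BP architecture described above can be sketched in a few lines. This is a toy NumPy implementation on synthetic data, not the authors' SPSS model; the 18 inputs match the study, but the hidden-layer size, learning rate and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for the study's data: 18 input features, binary label.
n, d, h = 300, 18, 6                      # samples, input neurons, hidden neurons
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float).reshape(-1, 1)

# One hidden layer, sigmoid activations, trained by plain back-propagation.
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(H @ W2 + b2)
    err = out - y                         # cross-entropy gradient wrt output pre-activation
    dW2 = H.T @ err / n                   # backward pass
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH / n
    db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
acc = float(((out > 0.5) == (y > 0.5)).mean())
```

After training, the network separates the two synthetic classes with high accuracy, mirroring the role of the 18-input BP network in the study.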
The ANN model was established with the SPSS Statistics client and was evaluated using diagnostic-test indices including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV) and area under the curve (AUC). Bootstrap resampling with 1000 resamples was used to validate the ANN model.
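The diagnostic indices follow from the confusion matrix, and AUC can be computed by the rank (Mann-Whitney) formulation; a plain-Python sketch (illustrative, not the SPSS output):

```python
# Diagnostic-test indices from binary labels and predictions (1 = AD).
def diagnostics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# AUC via the rank (Mann-Whitney) method: the probability that a random
# positive case receives a higher score than a random negative one.
def auc(y_true, scores):
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

m = diagnostics([1, 1, 0, 0], [1, 0, 0, 0])
a = auc([1, 1, 0, 0], [0.9, 0.4, 0.3, 0.2])
```

A bootstrap validation, as used in the study, would repeatedly resample the test set with replacement and average these indices over the 1000 resamples.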
Logistic regression, k-nearest neighbor (kNN) and support vector machine (SVM) models
To test the advantage of the ANN algorithm, a logistic regression model, k-nearest neighbor (kNN) and a support vector machine (SVM) were applied, using the same 18 variables that differed significantly between AD patients and controls as independent variables, for comparison with the ANN.
For the logistic regression model, previous occupation, marital status and personality were set as dummy variables, and 0.5 was used as the prediction threshold. The proportions of the training and testing sets were the same as for the ANN, and bootstrap resampling with 1000 resamples was likewise used to validate efficacy. To evaluate and compare the predictive accuracy of these models, we also calculated sensitivity, specificity, accuracy, PPV, NPV and AUC.
R software version 3.5.2 (R Development Core Team, Vienna, Austria) was used for our analysis. The following R packages for machine learning approaches were used: caret, e1071 and nnet.
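A sketch of such a side-by-side comparison (scikit-learn stand-ins rather than the caret/e1071/nnet workflow actually used, run on synthetic 18-variable data; all settings are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 18 study variables with a binary AD label.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 18))
y = (X.sum(axis=1) > 0).astype(int)

# Same 70%/30% split as used for the ANN.
Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.7, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
accuracy = {}
for name, model in models.items():
    model.fit(Xtr, ytr)
    accuracy[name] = (model.predict(Xte) == yte).mean()
```

In practice each model's sensitivity, specificity, PPV, NPV and AUC would be computed on bootstrap resamples of the test set, as described above, before comparing against the ANN.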
Demographic characteristics
Table 1 shows that the average age of the AD group (77.44 ± 6.82) was about 5 years higher than that of the non-AD group (72.49 ± 6.86), and the proportion of females among the cases (69.66%) was larger than that among the controls (49.44%). Moreover, significant differences were found in education, previous occupation, marital status and monthly family income.
Table 1 Demographic characteristics of AD and non-AD groups
Behavioural characteristics
There was no statistically significant difference in smoking or frequency of social activity between the cases and the controls. The non-AD group had a significantly higher frequency of physical exercise and a lower proportion of living alone (11.80% vs 24.72%). Additionally, alcohol drinking and type of personality differed significantly between the cases and the controls. More details can be found in Table 2.
Table 2 Behavioural characteristics of AD and non-AD groups n (%)
The AD group included significantly larger proportions of subjects with a history of diabetes (39.33% vs 26.97%), Parkinson's disease (8.99% vs 2.25%) and family history of dementia (17.98% vs 6.18%), compared with the non-AD group. There was no statistical difference in the proportions of hypertension and traumatic brain injury between the cases and the controls (Table 3).
Table 3 History of diseases of AD and non-AD groups n (%)
Neuropsychological functions
Among the cases, the score of MMSE was significantly lower (17.64 ± 5.38 vs 26.57 ± 3.63), and the score of ADL was significantly higher (31.73 ± 11.71 vs 15.26 ± 7.90), compared with the controls after adjusting for age and gender (Table 4).
Table 4 Comparisons of the scores of MMSE and ADL between cases and controls mean ± SD
Biomarkers in urine and blood
The concentrations of urinary AD7c-NTP and of aluminum in blood were significantly higher in the AD group than in the non-AD group (2.01 ± 1.06 vs 1.03 ± 0.43 and 1.74 ± 0.62 vs 1.24 ± 0.41, respectively). The concentration of selenium in the AD group (2.26 ± 0.59) was significantly lower than that in the non-AD group (2.61 ± 1.07). However, there were no statistically significant differences in blood Aβ42 or zinc between cases and controls. Age and gender were adjusted for when comparing these variables (Table 5).
Table 5 Comparisons of concentration of biomarkers between the cases and the controls mean ± SD
The comparison of ANN, logistic regression, kNN and SVM
Table 6 presents the classification efficacy of the four established models for AD in the resampled testing sets. In the testing sets, the average sensitivity of the ANN model was 87.28% and its specificity was 94.74%. The accuracy of the ANN was 92.13%, higher than that of the logistic regression model. The areas under the curve (AUC) for the four models were 0.897, 0.804, 0.832 and 0.864, respectively.
Table 6 The efficacy of ANN, logistic regression, kNN and SVM in testing sets
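The AUC figures above can be computed from predicted probabilities alone. The following Python sketch (illustrative only, not the study's R code) uses the equivalent rank-sum (Mann-Whitney) formulation of the AUC:

```python
def auc_score(y_true, y_prob):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation:
    the probability that a randomly chosen case is ranked above a randomly
    chosen control, with ties counting one half."""
    pos = [p for t, p in zip(y_true, y_prob) if t == 1]
    neg = [p for t, p in zip(y_true, y_prob) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```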
The results of this study indicate that older age, lower education level and lower monthly family income may increase a person's risk of developing AD in urban communities in China. Age is the greatest known risk factor for AD; however, AD is not a normal part of aging, and age alone is not sufficient to cause the disease. Brain function declines markedly with aging, and the most obvious manifestation of an aging nervous system is cognitive decline, including declines in memory, attention, learning ability and visual function [22, 23]. Knowledge, as a stimulus, can promote the growth of dendrites and axons in brain cells, improving the compensatory capacity of the aging brain and reducing the degree of cognitive impairment. Therefore, the incidence of AD differs across individuals of different educational levels [1]. Furthermore, educational level may affect a person's occupation and social status, which may also influence the development of AD. In this study, the monthly family income of the non-AD group was significantly higher than that of the AD group. A higher monthly family income means a richer daily life and more medical resources, suggesting that it is a protective factor for AD [24].
Many studies have indicated that cognitive decline is significantly associated with dietary habits and lifestyle factors such as physical exercise and reading [25,26,27,28,29]. It is worth noting that in our study there were significantly more drinkers in the non-AD group than in the AD group, suggesting that moderate alcohol drinking may help prevent AD. It has been reported that alcohol consumption can reduce the risk of AD, possibly through the interaction of polyphenols with Tau protein [30]. However, only the frequency of drinking, not the amount consumed, was recorded in our study, which may partly explain this result. Current research suggests that diabetes can increase blood viscosity and lead to cognitive impairment [31]. Insulin resistance may also raise the level of inflammatory factors and lower the utilization of blood glucose; this pathological mechanism could accelerate the accumulation of amyloid and toxic substances in the brain, increasing the risk of AD [32]. In addition, individuals who have a parent, brother or sister with AD are more likely to develop the disease than those without a first-degree relative with AD [33, 34], which may be explained, at least in part, by genetic factors.
A certain degree of elevation of AD7c-NTP in brain tissue, cerebrospinal fluid and urine can be found in patients with early- and middle-stage AD, and the level of AD7c-NTP is positively correlated with disease severity [35]. One study demonstrated that the sensitivity and specificity of urinary AD7c-NTP can exceed 90% when screening for AD [36]. Increased AD7c-NTP in brain tissue can enter the blood through the blood-brain barrier and eventually pass into the urine by glomerular filtration. Compared with brain tissue and cerebrospinal fluid, urine specimens are non-invasive, relatively cheap and easily available, so urinary AD7c-NTP could serve as a potential and valuable molecular biomarker for the early diagnosis of AD. There was no difference between AD patients and controls in blood Aβ42 in this study, indicating that its clinical value needs further investigation. The concentrations of aluminium and selenium remained significantly different between the AD and non-AD groups, as in our previous study [23]. Zinc, by contrast, was related to aging in our recent research but did not appear to be associated with AD.
In view of the characteristics of AD, such as its slow onset, difficult treatment, and heavy disease and social burdens, it is critical to develop simple, economical, reliable and efficient methods for early detection and diagnosis, which is the aim of this study. Several early diagnosis and prediction models for AD already exist, but they have limitations. Hye [37] used 10 plasma proteins associated with AD to predict the disease but found that their sensitivity and specificity were all below 90%. Some ANN models applied to the diagnosis of AD have outperformed the model in this study. For example, Grossi [38] used the counts of neurofibrillary tangles and neuritic plaques in the cerebral cortex and hippocampus as input variables to build an ANN model. Although it could perfectly distinguish Alzheimer's patients from controls, with an accuracy of 100%, its input variables mostly involve invasive clinical examinations or complex laboratory tests, which may not be practical for screening large populations. Even though our team previously established an ANN containing information on trace elements and neurotransmitters with a relatively high accuracy of 92.5% [19], it is not convenient to obtain all those biological data. In the ANN model established here, the input variables, consisting of demographic characteristics, behavioural characteristics, medical history, neuropsychological performance and biomarkers, were rigorously selected and significantly associated with AD. Additionally, the ANN algorithm showed an advantage in predictive efficacy over the other classifiers. This ANN model is more comprehensive, economical and accessible than clinical examinations such as CT and MRI.
Increased risk of AD may be associated with older age. Lower education level and monthly family income, a family history of dementia and physical inactivity may all contribute to the development of AD. Urinary AD7c-NTP is clinically valuable for the early diagnosis of AD, whereas the value of blood Aβ42 needs further investigation. The final established ANN, which integrates epidemiological parameters, neuropsychological functions and biomarkers, achieved high diagnostic precision and efficiency. It can be viewed as a low-cost, practicable tool for the screening and diagnosis of AD.
The data sets of the current study are available from the corresponding author on reasonable request.
AD7c-NTP: Alzheimer-associated neuronal thread protein

ADL: Activities of Daily Living

Aβ: Amyloid beta

BP: Back propagation

GDS: Global Deterioration Scale

MMSE: Mini-Mental State Examination

MoCA: Montreal Cognitive Assessment
Gaugler J, James B, Johnson T, et al. 2016 Alzheimer's disease facts and figures [J]. Alzheimers Dement. 2016;12(4):459–509.
Appleby BS, Nacopoulos D, Milano N, et al. A review: treatment of Alzheimer's disease discovered in repurposed agents [J]. Dement Geriatr Cogn Disord. 2013;35(1–2):1–22.
Cerman E, Eraslan M, Cekic O. Age-related macular degeneration and Alzheimer disease [J]. Turk J Med Sci. 2015;45(5):1004–9.
Kang JH, Weuve J, Grodstein F. Postmenopausal hormone therapy and risk of cognitive decline in community-dwelling aging women [J]. Neurology. 2004;63(1):101–7.
Wang L, Roe CM, Snyder AZ, et al. Alzheimer disease family history impacts resting state unctional connectivity [J]. Ann Neurol. 2012;72(4):571–7.
Bemelmans SASA, Tromp K, Bunnik EM, et al. Psychological, behavioral and social effects of disclosing Alzheimer's disease biomarkers to research participants: a systematic review [J]. Alzheimers Res Ther. 2016;8:46.
Herrmann N, Harimoto T, Balshaw R, et al. Risk factors for progression of Alzheimer disease in a Canadian population: the Canadian outcomes study in dementia (COSID)[J]. Can J Psychiatr. 2015;60(4):189–99.
Weuve J, Hebert LE, Scherr PA, et al. Prevalence of Alzheimer disease in US states [J]. Epidemiology. 2015;26(1):e4–6.
Robertson IH. A noradrenergic theory of cognitive reserve: implications for Alzheimer's disease [J]. Neurobiol Aging. 2013;34(1):298–308.
Haass C, Selkoe DJ. Soluble protein oligomers in neurodegeneration: lessons from the Alzheimer's amyloid beta-peptide [J]. Nat Rev Mol Cell Biol. 2007;8(2):101–12.
Almeida RP, Schultz SA, Austin BP, et al. Effect of cognitive reserve on age-related changes in cerebrospinal fluid biomarkers of Alzheimer disease [J]. Jama Neurology. 2015;72(6):699–706.
Aggarwal NT, Shah RC, Bennett DA. Alzheimer's disease: unique markers for diagnosis & new treatment modalities [J]. Indian J Med Res. 2015;142(4):369–82.
Lim YY, Villemagne VL, Pietrzak RH, et al. APOE ε4 moderates amyloid-related memory decline in preclinical Alzheimer's disease [J]. Neurobiol Aging. 2015;36(3):1239–44.
Almeida RP, Schultz SA, Austin BP, et al. Cognitive reserve and age-related changes in Alzheimer disease [J]. Jama Neurology. 2015;72(6):935–8.
Wang SH, Du S, Zhang Y, et al. Alzheimer's disease detection by Pseudo Zernike moment and linear regression classification [J]. CNS & Neurol Disord Drug Targets. 2017;16(1):11–5.
Zhang Y, Dong Z, Phillips P, et al. Detection of subjects and brain regions related to Alzheimer's disease using 3D MRI scans based on eigenbrain and machine learning [J]. Front Comput Neurosci. 2015;9(9):66.
Wang S, Zhang Y, Liu G, et al. Detection of Alzheimer's disease by three-dimensional displacement field estimation in structural magnetic resonance imaging [J]. J Alzheimers Dis. 2015;50(1):233–48.
Hwang YN, Lee JH, Kim GY, et al. Classification of focal liver lesions on ultrasound images by extracting hybrid textural features and using an artificial neural network [J]. Biomed Mater Eng. 2015;26(s1):S1599–611.
Tang J, Wu L, Huang H, et al. Back propagation artificial neural network for community Alzheimer's disease screening in China [J]. Neural Regen Res. 2013;8(3):270–6.
Tamaoka A. [Alzheimer's disease: definition and National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA)][J]. Nihon Rinsho. 2011;69(Suppl 10(Pt 2)):240–5.
Goh ATC. Back-propagation neural networks for modeling complex systems [J]. Artif Intell Eng. 1995;9(3):143–51.
Prins ND, van der Flier WM, Brashear HR, et al. Predictors of progression from mild cognitive impairment to dementia in the placebo-arm of a clinical trial population [J]. J Alzheimers Dis. 2013;36(1):79–85.
Huang HL, Lei WU, Yi-Feng WU, et al. Epidemiological analysis for community Alzheimer's patients and their related elements, neurotransmitter in blood [J]. Chin J Dis Control Prev. 2012;16(5):382–7.
Mcdowell I, Xi G, Lindsay J, et al. Mapping the connections between education and dementia [J]. J Clin Exp Neuropsychol. 2007;29(2):127–41.
Li JQ, Tan L, Wang HF, et al. Risk factors for predicting progression from mild cognitive impairment to Alzheimer's disease: a systematic review and meta-analysis of cohort studies [J]. J Neurol Neurosurg Psychiatry. 2015;87(5):476–84.
Hughes TF, Ganguli M. Modifiable midlife risk factors for late-life cognitive impairment and dementia [J]. Curr Psychiatr Rev. 2009;5(2):73–92.
Bherer L, Erickson KI, Liu-Ambrose T. A review of the effects of physical activity and exercise on cognitive and brain functions in older adults [J]. J Aging Res. 2013;2013:657508.
Noice T, Noice H, Kramer AF. Participatory arts for older adults: a review of benefits and challenges [J]. Gerontologist. 2014;54(5):741–53.
Esteve ME, Gil AC. [Reading as a protective factor against cognitive decline] [J]. Gac Sanit. 2013;27(1):68–71.
Guéroux M, Pinaud-Szlosek M, Fouquet E, et al. How wine polyphenols can fight Alzheimer disease progression: towards a molecular explanation [J]. Tetrahedron. 2015;71(20):3163–70.
Butterfield DA, Di Domenico F, Barone E. Elevated risk of type 2 diabetes for development of Alzheimer disease: a key role for oxidative stress in brain [J]. Biochim Biophys Acta. 2014;1842(9):1693–706.
Sebastiao I, Candeias E, Santos MS, et al. Insulin as a bridge between type 2 diabetes and Alzheimer disease - how anti-diabetics could be a solution for dementia [J]. Front Endocrinol (Lausanne). 2014;5:110.
Sweet RA, Bennett DA, Graffradford NR, et al. Assessment and familial aggregation of psychosis in Alzheimer's disease from the National Institute on Aging late onset Alzheimer's disease family study [J]. Brain. 2010;133(4):1155–62.
Feldman AL, Johansson AL, Lambert PC, et al. Familial coaggregation of Alzheimer's disease and Parkinson's disease: systematic review and meta-analysis [J]. Neuroepidemiology. 2014;42(2):69–80.
Zhang JJ, Shi SS. A literature review of AD7c-ntp as a biomarker for Alzheimer's disease [J]. Ann Indian Acad Neurol. 2013;16(3):307–9.
Hao JH, Jiang LI, He L. The detection and significance of urinary AD7c-NTP in patients with Alzhemier disease [J]. China Trop Med. 2011;11(08):993–4.
Hye A, Riddoch-Contreras J, Baird AL, et al. Plasma proteins predict conversion to dementia from prodromal disease [J]. Alzheimers Dement. 2014;10(6):799–807.
Grossi E, Buscema MP, Snowdon D, et al. Neuropathological findings processed by artificial neural networks (ANNs) can perfectly distinguish Alzheimer's patients from controls in the Nun study [J]. BMC Neurol. 2007;7(1):1–7.
We appreciate the participation and cooperation of all the participants.
Jiangxi Province Key Laboratory of Preventive Medicine, Nanchang University, Nanchang, 330006, People's Republic of China
Naibo Wang, Jinghua Chen, Hui Xiao, Lei Wu & Yueping Zhou
Jiangxi Centre for Health Education and Promotion, Nanchang, China
Second Affiliated Hospital, Nanchang University, Nanchang, China
Han Jiang
LW and HJ designed the study. NW analyzed the data and drafted the manuscript. JC and HX performed the laboratory assays and participated in analyzing the data. LW and YZ jointly revised the manuscript. All authors contributed to and have approved the final manuscript.
This study was financially supported by National Natural Science Foundation of China (NCFC 81260441 and NCFC 81560550).
Correspondence to Lei Wu or Han Jiang.
The study was approved by the Institutional Review Board of Nanchang University, and written informed consent was obtained from all participants. Participants who had lost cognitive capacity were deemed incapable of giving informed consent, and in those cases informed consent was obtained from their immediate family members.
Urban communities | CommonCrawl |
\begin{document}
\begin{frontmatter}
\title{High-Dimensional Inference: Confidence Intervals, $p$-Values and \texttt{R}-Software \texttt{hdi}} \runtitle{High-Dimensional Inference: Confidence Intervals, $p$-Values and \texttt{R}-Software \texttt{hdi}}
\begin{aug}
\author[A]{\fnms{Ruben}~\snm{Dezeure}\corref{}\ead[label=e1]{[email protected]}}, \author[A]{\fnms{Peter}~\snm{B\"uhlmann}\ead[label=e2]{[email protected]}}, \author[A]{\fnms{Lukas}~\snm{Meier}\ead[label=e3]{[email protected]}} \and \author[A]{\fnms{Nicolai}~\snm{Meinshausen}\ead[label=e4]{[email protected]}}
\runauthor{Dezeure, B\"uhlmann, Meier and Meinshausen}
\address[A]{Ruben Dezeure is a Ph.D. student, Peter B\"uhlmann is Professor, Lukas Meier is Senior Scientist and Nicolai Meinshausen is Professor, Seminar for Statistics, ETH Z\"{u}rich, CH-8092 Z\"{u}rich, Switzerland \printead{e1,e2}, \printead*{e3,e4}.}
\end{aug}
\begin{abstract} We present a (selective) review of recent frequentist high-dimensional inference methods for constructing $p$-values and confidence intervals in linear and generalized linear models. We include a broad, comparative empirical study which complements the viewpoint from statistical methodology and theory. Furthermore, we introduce and illustrate the \texttt{R}-package \texttt{hdi} which easily allows the use of different methods and supports reproducibility. \end{abstract}
\begin{keyword} \kwd{Clustering} \kwd{confidence interval} \kwd{generalized linear model} \kwd{high-dimensional statistical inference} \kwd{linear model} \kwd{multiple testing} \kwd{$p$-value} \kwd{\texttt{R}-software} \end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec1}
Over the last 15 years, a lot of progress has been achieved in high-dimensional statistics where the number of parameters can be much larger than sample size, covering (nearly) optimal point estimation, efficient computation and applications in many different areas; see, for example, the books by \citet{hastetal09}, \citet{pbvdg11} or the review article by \citet{fanlv10}. The core task of statistical inference accounting for uncertainty, in terms of frequentist confidence intervals and hypothesis testing, is much less developed. Recently, a few methods for assigning $p$-values and constructing confidence intervals have been suggested (\citep{WR08}; \citep{memepb09}; \citep{pb13}; \citep{zhangzhang11}; \citep{covtest14}; \citep{vdgetal13}; \citep{jamo13b}; \citep{meins13}).
The current paper has three main pillars: (i) a (selective) review of the development in frequentist high-dimensional inference methods for $p$-values and confidence regions; (ii) presenting the first broad, comparative empirical study among different methods, mainly for linear models: since the methods are mathematically justified under noncheckable and sometimes noncomparable assumptions, a thorough simulation study should lead to additional insights about reliability and performance of various procedures; (iii) presenting the \texttt{R}-package \texttt{hdi} (\emph{h}igh-\emph{d}imensional \emph{i}nference) which makes it easy to apply many of the different inference methods to high-dimensional generalized linear models. In addition, we include a recent line of methodology that allows the detection of significant groups of highly correlated variables which could not be inferred as individually significant single variables (\cite{meins13}). The review and exposition in \citet{bumeka13} is vaguely related to points (i) and (iii) above, but focuses much more on an application-oriented viewpoint and covers much less statistical methodology, theory and computational detail.
Our comparative study, point (ii) mentioned above, exhibits interesting results indicating that more ``stable'' procedures based on Ridge-estimation or random sample splitting with subsequent aggregation are somewhat more reliable for type I error control than asymptotically power-optimal methods. Such results cannot be obtained by comparing the underlying assumptions of different methods, since these assumptions are often too crude and far from necessary. As expected, we are unable to pinpoint a method which is (nearly) best in all considered scenarios. In view of this, we also want to offer a collection of useful methods to the community, in terms of our \texttt{R}-package \texttt{hdi} mentioned in point (iii) above.
\section{Inference for Linear Models}\label{sec.LM}
We consider first a high-dimensional linear model, while extensions are discussed in Section~\ref{sec.GLM}:
\begin{equation} \label{mod.lin} Y = \mathbf{X}\beta^0 + \varepsilon, \end{equation}
with $n \times p$ fixed or random design matrix $\mathbf{X}$, $n \times 1$ response and error vectors $Y$ and $ \varepsilon$, respectively. The errors are assumed to be independent of $\mathbf{X}$ (for random design) with i.i.d. entries having $\mathbb{E}[\varepsilon_i] = 0$. We allow for high-dimensional settings where $p \gg n$. In further development, the active set or the set of relevant variables
\[ S_0 = \bigl\{j;\beta^0_j \neq0, j=1, \ldots,p\bigr\}, \]
as well as its cardinality $s_0 = |S_0|$, are important quantities. The main goals of this section are the construction of confidence intervals and $p$-values for individual regression parameters $\beta^0_j (j=1,\ldots ,p)$ and corresponding multiple testing adjustment. The former is a highly nonstandard problem in high-dimensional settings, while for the latter we can use standard well-known techniques. When considering both goals simultaneously, though, one can develop more powerful multiple testing adjustments. The Lasso (\cite{tibs96}) is among the most popular procedures for estimating the unknown parameter $\beta^0$ in a high-dimensional linear model. It exhibits desirable or sometimes even optimal properties for point estimation such as prediction of $\mathbf{X}\beta^0$ or of a new response
$Y_{\mathrm{new}}$, estimation in terms of $\|\hat{\beta} - \beta^0\|_q$ for $q = 1,2$, and variable selection or screening; see, for example, the book of \citet{pbvdg11}. For assigning uncertainties in terms of confidence intervals or hypothesis testing, however, the plain Lasso seems inappropriate. It is very difficult to characterize the distribution of the estimator in the high-dimensional setting; \citet{knfu00} derive asymptotic results for fixed dimension as sample size $n \to\infty$ and already for such simple situations, the asymptotic distribution of the Lasso has point mass at zero. This implies, because of noncontinuity of the distribution, that standard bootstrapping and subsampling schemes are delicate to apply and uniform convergence to the limit seems hard to achieve. The latter means that the estimator is exposed to undesirable super-efficiency problems, as illustrated in Section~\ref{subsec.comparlm}. All the problems mentioned are expected to apply not only for the Lasso but also for other sparse estimators as well.
In high-dimensional settings and for general fixed design $\mathbf{X}$, the regression parameter is not identifiable. However, when making some restrictions on the design, one can ensure that the regression vector is identifiable. The so-called compatibility condition on the design $\mathbf{X}$ (\cite{vandeGeer:07a}) is a rather weak assumption (\cite{van2009conditions}) which guarantees identifiability and oracle (near) optimality results for the Lasso. For the sake of completeness, the compatibility condition is described in Appendix~\ref{subsec.appadd}.
When assuming the compatibility condition with constant $\phi_0^2$ ($\phi_0^2$ is close to zero for rather ill-posed designs, and sufficiently larger than zero for well-posed designs), the Lasso has the following property: for Gaussian errors and if $\lambda \asymp\sqrt{\log(p)/n}$, we have with high probability that
\begin{equation}
\label{lasso-ell1} \bigl\|\hat{\beta} - \beta^0\bigr\|_1 \le4 s_0 \lambda/\phi_0^2. \end{equation}
Thus, if $s_0 \ll\sqrt{n/\log(p)}$ and $\phi_0^2 \ge M > 0$, we have
$\|\hat{\beta} - \beta^0\|_1 \to0$ and, hence, the parameter $\beta^0$ is identifiable.
Another often used assumption, although not necessary by any means, is the so-called beta-min assumption:
\begin{equation}
\label{beta-min} \min_{j \in S_0}\bigl |\beta^0_j\bigr| \ge\beta_{\mathrm{min}}, \end{equation}
for some choice of constant $\beta_{\mathrm{min}} > 0$. The result in (\ref{lasso-ell1}) immediately implies the screening property: if $\beta_{\mathrm{min}} > 4 s_0 \lambda/\phi_0^2$, then
\begin{equation} \label{screening} \hat{S} = \{j; \hat{\beta}_j \neq0\} \supseteq S_0. \end{equation}
Thus, the screening property holds when assuming the compatibility and beta-min condition. The power of the screening property is a massive dimensionality reduction (in the original variables) because $|\hat{S}| \le \min(n,p)$; thus, if $p \gg n$, the selected set $\hat{S}$ is much smaller than the full set of $p$ variables. Unfortunately, the required conditions are overly restrictive and exact variable screening seems rather unrealistic in practical applications (\cite{pbmand13}).
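To spell out why (\ref{lasso-ell1}) implies (\ref{screening}) under the beta-min condition (an elementary step, made explicit here for completeness), note that for any $j \in S_0$,

```latex
\[
|\hat{\beta}_j| \;\ge\; \bigl|\beta^0_j\bigr| - \bigl|\hat{\beta}_j - \beta^0_j\bigr|
\;\ge\; \beta_{\mathrm{min}} - \bigl\|\hat{\beta} - \beta^0\bigr\|_1
\;\ge\; \beta_{\mathrm{min}} - 4 s_0 \lambda/\phi_0^2 \;>\; 0,
\]
```

so $\hat{\beta}_j \neq 0$ for every $j \in S_0$, that is, $\hat{S} \supseteq S_0$.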
\subsection{Different Methods}\label{subsec.lm-methods}
We describe here three different methods for construction of statistical hypothesis tests or confidence intervals. Alternative procedures are presented in Sections~\ref{subsec.othermeth} and \ref{subsec.comparlm}.
\subsubsection{Multi sample-splitting}\label{subsec.multisample-split}
A generic way for deriving $p$-values in hypotheses testing is given by splitting the sample with indices $\{1,\ldots,n\}$ into two equal halves denoted by $I_1$ and $I_2$, that is,
$I_r \subset\{1,\ldots,n\}\ (r=1,2)$ with $|I_1| = \lfloor n/2 \rfloor$,
$|I_2| = n - \lfloor n/2 \rfloor$, $I_1 \cap I_2 = \varnothing$ and $I_1 \cup I_2 = \{1,\ldots, n\}$. The idea is to use the first half $I_1$ for variable selection and the second half $I_2$ with the reduced set of selected variables (from $I_1$) for statistical inference in terms of $p$-values. Such a sample-splitting procedure avoids the over-optimism to use the data twice for selection and inference after selection (without taking the effect of selection into account).
Consider a method for variable selection based on the first half of the sample:
\[ \hat{S}(I_1) \subset\{1,\ldots,p\}. \]
A prime example is the Lasso which selects all the variables whose corresponding estimated regression coefficients are different from zero. We then use the second half of the sample $I_2$ for constructing
$p$-values, based on the selected variables $\hat{S}(I_1)$. If the cardinality $|\hat{S}(I_1)| \le n/2 \le|I_2|$, we can run ordinary least squares estimation using the subsample $I_2$ and the selected variables $\hat{S}(I_1)$, that is, we regress $Y_{I_2}$ on
$\mathbf{X}_{I_2}^{(\hat{S}(I_1))}$ where the sub-indices denote the sample half and the super-index stands for the selected variables, respectively. Thereby, we implicitly assume that the matrix $\mathbf{X}_{I_2}^{(\hat{S}(I_1))}$ has full rank $|\hat{S}(I_1)|$. Thus, from such a procedure, we obtain $p$-values $P_{t\mbox{-}\mathrm{test},j}$ for testing $H_{0,j}: \beta^0_j = 0$, for $j \in\hat{S}(I_1)$, from the classical $t$-tests, assuming Gaussian errors or relying on asymptotic justification by the central limit theorem. To be more precise, we define (raw) $p$-values
\begin{eqnarray*} P_{\mathrm{raw},j} = \cases{ P_{t\mbox{-}\mathrm{test},j} \mbox{ based on $Y_{I_2}, \mathbf{X}_{I_2}^{(\hat{S}(I_1))}$},\vspace*{2pt}\cr
\quad \hspace*{10pt}\mbox{if } j \in\hat {S}(I_1), \vspace*{2pt} \cr 1, \quad \mbox{if } j \notin\hat{S}(I_1).} \end{eqnarray*}
An interesting feature of such a sample-splitting procedure is the adjustment for multiple testing. For example, if we wish to control the familywise error rate over all considered hypotheses $H_{0,j} (j=1,\ldots ,p)$, a naive approach would employ a Bonferroni--Holm correction over the
$p$ tests. This is not necessary: we only need to control over the considered $|\hat{S}(I_1)|$ tests in $I_2$. Therefore, a Bonferroni corrected $p$-value for $H_{0,j}$ is given by
\[
P_{\mathrm{corr},j} = \min\bigl(P_{\mathrm{raw},j} \cdot\bigl|\hat{S}(I_1)\bigr|,1 \bigr). \]
In high-dimensional scenarios, $p \gg n > \lfloor n/2 \rfloor\geq
|\hat{S}(I_1)|$, where the latter inequality is an implicit assumption which holds for the Lasso (under weak assumptions), and thus, the correction factor employed here is rather small. Such corrected $p$-values control the familywise error rate in multiple testing when assuming the screening property in (\ref{screening}) for the selector $\hat{S} = \hat{S}(I_1)$ based on the first half $I_1$ only, exactly as stated in Fact~\ref{th1} below. The reason is that the screening property ensures that the reduced model is a correct model, and hence the result is not surprising. In practice, the screening property typically does not hold exactly, but it is not a necessary condition for constructing valid $p$-values (\cite{pbmand13}).
The idea about sample-splitting and subsequent statistical inference is implicitly contained in \citet{WR08}. We summarize the whole procedure as follows:
\emph{Single sample-splitting for multiple testing of $H_{0,j}$ among $j=1,\ldots,p$}:
\begin{longlist}[1.]
\item[1.] Split (partition) the sample $\{1,\ldots,n\} = I_1 \cup I_2$ with $I_1 \cap I_2
= \varnothing$ and $|I_1| = \lfloor n/2 \rfloor$ and $|I_2| = n - \lfloor n/2 \rfloor$.
\item[2.] Using $I_1$ only, select the variables $\hat{S} \subseteq\{ 1,\ldots
,p\}$. Assume or enforce that $|\hat{S}| \le|I_1| = \lfloor n/2 \rfloor
\le|I_2|$.
\item[3.] Denote the design matrix with the selected set of variables by $\mathbf{X}^{(\hat{S})}$. Based on $I_2$ with data $(Y_{I_2},\mathbf{X}_{I_2}^{(\hat{S})})$, compute $p$-values $P_{\mathrm {raw,j}}$ for $H_{0,j}$, for $j \in\hat{S}$, from classical least squares estimation
[i.e., $t$-test which can be used since $|\hat{S}(I_1)| \le|I_2|$]. For $j \notin\hat{S}$, assign $P_{\mathrm{raw},j} = 1$.
\item[4.] Correct the $p$-values for multiple testing: consider
\[
P_{\mathrm{corr},j} = \min\bigl(P_{\mathrm{raw},j} \cdot|\hat{S}|,1\bigr), \]
which is an adjusted $p$-value for $H_{0,j}$ for controlling the familywise error rate. \end{longlist}
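As a concrete illustration of the four steps above, here is a self-contained Python sketch (ours, not part of the \texttt{hdi} package). To keep it dependency-light, a simple marginal-correlation screener retaining $k$ variables stands in for the Lasso in step 2; the procedure itself only requires some sparse variable screening method:

```python
import numpy as np
from scipy import stats


def single_split_pvalues(X, y, k, rng):
    """Single sample-splitting: screen on one half, test on the other.

    A marginal-correlation screener keeping k variables stands in for the
    Lasso; raw p-values come from classical OLS t-tests on the second half,
    then are Bonferroni-corrected over the selected set only.
    """
    n, p = X.shape
    # Step 1: random split into two halves I1, I2.
    idx = rng.permutation(n)
    I1, I2 = idx[: n // 2], idx[n // 2:]
    # Step 2: variable screening on the first half (stand-in for the Lasso).
    corr = np.abs(np.array([np.corrcoef(X[I1, j], y[I1])[0, 1]
                            for j in range(p)]))
    S = np.sort(np.argsort(corr)[-k:])
    # Step 3: classical OLS t-tests on the second half, restricted to S.
    Xs = np.column_stack([np.ones(len(I2)), X[I2][:, S]])
    beta, *_ = np.linalg.lstsq(Xs, y[I2], rcond=None)
    df = len(I2) - Xs.shape[1]
    resid = y[I2] - Xs @ beta
    sigma2 = resid @ resid / df
    cov = sigma2 * np.linalg.inv(Xs.T @ Xs)
    tstat = beta[1:] / np.sqrt(np.diag(cov)[1:])
    p_raw = 2 * stats.t.sf(np.abs(tstat), df)
    # Step 4: Bonferroni over the |S| tests only; p-value 1 off the set S.
    p_corr = np.ones(p)
    p_corr[S] = np.minimum(p_raw * len(S), 1.0)
    return p_corr
```

Note that the correction factor is $|\hat{S}|$ rather than $p$, and variables not selected in the first half receive the trivial $p$-value 1.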
\begin{figure}\label{fig:pval_lottery}
\end{figure}
A major problem of the single sample-splitting method is its sensitivity with respect to the choice of the split of the sample: different sample splits lead to wildly different $p$-values. We call this undesirable phenomenon a $p$-value lottery, and Figure~\ref{fig:pval_lottery} provides an illustration. To overcome the ``$p$-value lottery,'' we can run the sample-splitting method $B$ times, with $B$ large. Thus, we obtain a collection of $p$-values for the $j$th hypothesis $H_{0,j}$:
\[ P_{\mathrm{corr},j}^{[1]},\ldots,P_{\mathrm{corr},j}^{[B]}\quad (j=1, \ldots,p). \]
The task is now to aggregate these into a single $p$-value. The $p$-values $\{P_{\mathrm{corr},j}^{[b]}; b=1,\ldots,B\}$ are dependent, since all the different half samples are part of the same full sample, so an appropriate aggregation method needs to be developed. A simple solution is to use an empirical $\gamma$-quantile with $0 < \gamma < 1$:
\begin{eqnarray*} &&Q_j(\gamma)\\ &&\quad = \min \bigl(\mbox{emp. $\gamma$-quantile}\bigl \{P_{\mathrm{corr},j}^{[b]}/\gamma ; b=1,\ldots,B\bigr\},\\ &&\qquad 1 \bigr). \end{eqnarray*}
For example, with $\gamma= 1/2$, this amounts to taking the sample median $\{P_{\mathrm{corr},j}^{[b]}; b=1,\ldots,B\}$ and multiplying it with the factor 2. A bit more sophisticated approach is to choose the best and properly scaled $\gamma$-quantile in the range $(\gamma_{\mathrm{min}},1)$ (e.g., $\gamma_{\mathrm{min}} = 0.05$), leading to the aggregated $p$-value
\begin{eqnarray} \label{aggreg} P_j = \min \Bigl(\bigl(1 - \log(\gamma_{\mathrm{min}}) \bigr) \inf_{\gamma\in (\gamma_{\mathrm{min}},1)} Q_j(\gamma), 1 \Bigr) \nonumber \\[-8pt] \\[-8pt] \eqntext{(j=1, \ldots,p).} \end{eqnarray}
Thereby, the factor $(1 - \log(\gamma_{\mathrm{min}}))$ is the price to be paid for searching for the best $\gamma\in (\gamma_{\mathrm{min}},1)$. This Multi sample-splitting procedure has been proposed and analyzed in \citet{memepb09}, and we summarize it below. Before doing so, we remark that the aggregation of dependent $p$-values as described above is a general principle as described in Appendix~\ref{subsec.appadd}.
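A minimal Python sketch of this aggregation rule (ours; the infimum over $\gamma$ is approximated on a finite grid):

```python
import numpy as np


def aggregate_pvalues(p_splits, gamma_min=0.05):
    """Aggregate dependent per-split p-values as in the Multi
    sample-splitting method: search over quantile levels gamma in
    (gamma_min, 1), paying the (1 - log(gamma_min)) penalty for the search.
    The infimum over gamma is approximated on a grid."""
    p_splits = np.asarray(p_splits, dtype=float)
    gammas = np.linspace(gamma_min, 1.0, 100)
    # Q(gamma) = min(empirical gamma-quantile of {p_b / gamma}, 1)
    q = np.array([min(np.quantile(p_splits / g, g), 1.0) for g in gammas])
    return min((1.0 - np.log(gamma_min)) * q.min(), 1.0)
```

For instance, with $\gamma = 1/2$ this essentially doubles the sample median of the per-split $p$-values, as described in the text.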
\emph{Multi sample-splitting for multiple testing of $H_{0,j}$ among $j=1,\ldots,p$}:
\begin{longlist}[1.]
\item[1.] Apply the single sample-splitting procedure $B$ times, leading to $p$-values $\{P_{\mathrm{corr},j}^{[b]}; b=1,\ldots ,B\}$. Typical choices are $B=50$ or $B=100$.
\item[2.] Aggregate these $p$-values as in (\ref{aggreg}), leading to $P_{j}$ which are adjusted $p$-values for $H_{0,j} (j=1,\ldots,p)$, controlling the familywise error rate. \end{longlist}
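As an illustration, the aggregation rule (\ref{aggreg}) can be sketched in a few lines (a sketch only, assuming \texttt{numpy} is available; the finite grid over $\gamma$ used to approximate the infimum is our simplification):

```python
import numpy as np

def aggregate_pvalues(p, gamma_min=0.05):
    """Aggregate B dependent p-values from B sample splits into a single
    p-value, following the quantile rule of equation (aggreg)."""
    p = np.asarray(p, dtype=float)
    # Finite grid over (gamma_min, 1) approximating the infimum.
    gammas = np.linspace(gamma_min, 1.0, 100)
    # Q_j(gamma) = min( empirical gamma-quantile of {p_b / gamma}, 1 ).
    q = np.minimum(np.quantile(p, gammas) / gammas, 1.0)
    # The factor (1 - log(gamma_min)) is the price for searching over gamma.
    return min((1.0 - np.log(gamma_min)) * q.min(), 1.0)
```

With $\gamma$ fixed at $1/2$ instead of searching, this reduces to twice the sample median, as described above.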
The Multi sample-splitting method enjoys the property that the resulting $p$-values are approximately reproducible and not subject to a ``$p$-value lottery'' anymore, and it controls the familywise error rate under the following assumptions:
\begin{longlist}[(A1)]
\item[(A1)] The screening\vspace*{1pt} property as in (\ref{screening}) for the first half of the sample: $\mathbb{P}[\hat{S}(I_1) \supseteq S_0] \ge1 - \delta$ for some $0 < \delta< 1$.
\item[(A2)] The reduced design matrix for the second half of the sample satisfies
$\mathrm{rank}(\mathbf{X}_{I_2}^{(\hat{S}(I_1))}) = |\hat{S}(I_1)|$. \end{longlist}
\begin{theo}[{[\citet{memepb09}]}]\label{th1} Consider a linear model as in (\ref{mod.lin}) with fixed design $\mathbf{X}$ and Gaussian errors. Assume \textup{(A1)--(A2)}. Then, for a significance level $0 < \alpha< 1$ and denoting by $B$ the number of sample splits,
\[ \mathbb{P}\biggl[\bigcup_{j \in S_0^c} I(P_j \le \alpha)\biggr] \le\alpha+ B \delta, \]
that is, the familywise error rate (FWER) is controlled up to the additional (small) value $B \delta$. \end{theo}
A proof is given in Meinshausen, Meier and B{\"u}hlmann (\citeyear{memepb09}). We note that the Multi sample-splitting method can be used in conjunction with any reasonable, sparse variable screening method fulfilling (A1) for very small $\delta> 0$ and
(A2); and it does not necessarily rely on the Lasso for variable screening. See also Section~\ref{subsec.othersparsemeth}.\vspace*{1pt} Assumption (A2) typically holds for the Lasso satisfying $|\hat{S}(I_1)| \le|I_1| = \lfloor n/2 \rfloor\le|I_2| = n - \lfloor n/2 \rfloor$.
\emph{The screening property} (A1). The screening property (A1) with very small $\delta> 0$ is not a necessary condition for constructing valid $p$-values and can be replaced by a zonal assumption requiring the following: there is a gap between large and small regression coefficients, and there are not too many small nonzero regression coefficients (\cite{pbmand13}). Still, such a zonal assumption imposes a requirement on the unknown $\beta^0$ and the absolute values of its components: but inferring whether coefficients are sufficiently different from zero is the essence of the question in hypothesis testing, and one would like to carry out such a test without an assumption on the true values.
The Lasso satisfies (A1) with $\delta\to0$ when assuming the compatibility condition (\ref{compat}) on the design $\mathbf{X}$, the sparsity assumption $s_0 = o(\sqrt{n/\log(p)})$ [or $s_0 = o(n/\log(p))$ when requiring a restricted eigenvalue assumption] and a beta-min condition (\ref{beta-min}), as shown in (\ref{screening}). Other procedures also exhibit the screening property such as the adaptive Lasso (\cite{zou06}), analyzed in detail in \citet{geer11}, or methods with concave regularization penalty such as SCAD (\cite{fan2001variable}) or MC$+$ (\cite{zhang2010}). As criticized above, the required beta-min assumption should be avoided when constructing a hypothesis test about the unknown components of $\beta^0$.
Fact~\ref{th1} has a corresponding asymptotic formulation where the dimension $p = p_n$ and the model depend on sample size $n$: if (A1) is replaced by $\mathbb{P}[\hat{S}(I_{1;n}) \supseteq S_{0;n}] \to 1$ as $n \to\infty$, then, for a fixed number $B$ of sample splits, $\limsup_{n \to\infty} \mathbb{P}[\bigcup_{j \in S_{0;n}^c} I(P_j \le\alpha)] \le\alpha$.
In such an asymptotic setting, the Gaussian assumption in Fact~\ref{th1} can be relaxed by invoking the central limit theorem (for the low-dimensional part).
The Multi sample-splitting method is very generic: it can be used for many other models, and its basic assumptions are an approximate screening property
(\ref{screening}) and that the cardinality $|\hat{S}(I_1)| < |I_2|$ so that we only have to deal with a fairly low-dimensional inference problem. See, for example, Section~\ref{sec.GLM} for GLMs. An extension for testing group hypotheses of the form $H_{0,G}: \beta_j = 0$ for all $j \in G$ is indicated in Section~\ref{subsec.assfree}.
Confidence intervals can be constructed based on the duality with the $p$-values from equation (\ref{aggreg}). A procedure is described in detail in Appendix~\ref{subsec.appmssplitci}. The idea to invert the $p$-value method is to apply a bisection method having a point in and a point outside of the confidence interval. To verify if a point is inside the \emph{aggregated} confidence interval, one looks at the fraction of confidence intervals from the splits which cover the point.
\subsubsection{Regularized projection: De-sparsifying the Lasso}\label{subsec.desparslasso}
We describe here a method, first introduced by \citet{zhangzhang11}, which does not require an assumption about $\beta^0$ except for sparsity.
It is instructive to give a motivation starting with the low-dimensional setting where $p < n$ and $\mathrm{rank}(\mathbf{X}) = p$. The $j$th component of the ordinary least squares estimator $\hat{\beta}_{\mathrm{OLS};j}$ can be obtained as follows. Do an OLS regression of $\mathbf{X}^{(j)}$ versus all other variables $\mathbf{X}^{(-j)}$ and denote the corresponding residuals by $Z^{(j)}$. Then
\begin{equation} \label{proj-ols} \hat{\beta}_{\mathrm{OLS};j} = Y^T Z^{(j)}/ \bigl(\mathbf{X}^{(j)}\bigr)^T Z^{(j)} \end{equation}
can be obtained by a linear projection. In a high-dimensional setting, the residuals $Z^{(j)}$ would be equal to zero and the projection is ill-posed.
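The projection formula (\ref{proj-ols}) is the Frisch--Waugh--Lovell identity and can be verified numerically; the following sketch (variable names are ours, \texttt{numpy} assumed) checks it against the $j$th coefficient of the full OLS fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, j = 50, 5, 2                          # low-dimensional: p < n
X = rng.standard_normal((n, p))
Y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# Residuals Z^(j) from an OLS regression of X^(j) versus all other columns.
X_rest = np.delete(X, j, axis=1)
coef, *_ = np.linalg.lstsq(X_rest, X[:, j], rcond=None)
Z_j = X[:, j] - X_rest @ coef

# Formula (proj-ols): the j-th OLS coefficient via a linear projection.
beta_j_proj = (Y @ Z_j) / (X[:, j] @ Z_j)

# For comparison: the full OLS fit; beta_j_proj equals beta_ols[j].
beta_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
```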
For the high-dimensional case with $p > n$, the idea is to pursue a regularized projection. Instead of ordinary least squares regression, we use a Lasso regression of $\mathbf{X}^{(j)}$ versus $\mathbf{X}^{(-j)}$ with corresponding residual vector $Z^{(j)}$: such a penalized regression involves a regularization parameter $\lambda_j$ for the Lasso, and hence $Z^{(j)} = Z^{(j)}(\lambda_j)$. As in (\ref{proj-ols}), we immediately obtain (for any vector $Z^{(j)}$)
\begin{eqnarray} \label{proj-lasso} \qquad \frac{Y^T Z^{(j)}}{(\mathbf{X}^{(j)})^T Z^{(j)}} &=& \beta^0_j + \sum _{k \neq j} P_{jk} \beta^0_k + \frac{\varepsilon^T Z^{(j)}}{(\mathbf{X}^{(j)})^T Z^{(j)}}, \nonumber \\[-8pt] \\[-8pt] \nonumber P_{jk}&=& \bigl(\mathbf{X}^{(k)}\bigr)^T Z^{(j)}/\bigl(\mathbf{X}^{(j)}\bigr)^T Z^{(j)}. \end{eqnarray}
We note that in the low-dimensional case with $Z^{(j)}$ being the residuals from ordinary least squares, due to orthogonality, $P_{jk} = 0$. When using the Lasso-residuals for $Z^{(j)}$, we do not have exact orthogonality and a bias arises. Thus, we make a bias correction by plugging in the Lasso estimator $\hat{\beta}$ (of the regression $Y$ versus $\mathbf{X}$): the bias-corrected estimator is
\begin{equation} \label{despars-lasso} \hat{b}_j = \frac{Y^T Z^{(j)}}{(\mathbf{X}^{(j)})^T Z^{(j)}} - \sum _{k \neq j} P_{jk} \hat{\beta}_k. \end{equation}
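The decomposition (\ref{proj-lasso}) is exact algebra for any vector $Z^{(j)}$ with $(\mathbf{X}^{(j)})^T Z^{(j)} \neq 0$, whether or not $Z^{(j)}$ comes from a Lasso regression. A minimal numerical check (our own construction of $Z$, \texttt{numpy} assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, j = 30, 60, 0                         # high-dimensional: p > n
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:3] = [1.0, -2.0, 0.5]                # sparse truth
eps = rng.standard_normal(n)
Y = X @ beta0 + eps

# Any Z works; correlating it with X^(j) keeps the denominator away from 0.
Z = X[:, j] + 0.1 * rng.standard_normal(n)
denom = X[:, j] @ Z
P = (X.T @ Z) / denom                       # P_jk = (X^(k))^T Z / (X^(j))^T Z

lhs = (Y @ Z) / denom                       # left-hand side of (proj-lasso)
rhs = beta0[j] + sum(P[k] * beta0[k] for k in range(p) if k != j) \
      + (eps @ Z) / denom                   # right-hand side of (proj-lasso)
```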
Using (\ref{proj-lasso}), we obtain
\begin{eqnarray*} \sqrt{n}\bigl(\hat{b}_j - \beta^0_j\bigr) &= &\frac{n^{-1/2} \varepsilon^T Z^{(j)}}{n^{-1} (\mathbf{X}^{(j)})^T Z^{(j)}}\\ &&{} + \sum_{k \neq j} \sqrt{n} P_{jk}\bigl(\beta_k^0 - \hat{\beta}_k \bigr). \end{eqnarray*}
The first term on the right-hand side has a Gaussian distribution, when assuming Gaussian errors; otherwise, it has an asymptotic Gaussian distribution assuming that $\mathbb{E}|\varepsilon_i|^{2 + \kappa} < \infty$ for $\kappa> 0$ (which suffices for the Lyapunov CLT). We will argue in Appendix~\ref{subsec.appadd} that the second term is negligible under the following assumptions:
\begin{longlist}[(B1)]
\item[(B1)] The design matrix $\mathbf{X}$ has compatibility constant bounded away from zero, and the sparsity is $s_0 = o(\sqrt{n}/\log(p))$.
\item[(B2)] The rows of $\mathbf{X}$ are fixed realizations of i.i.d. random vectors $\sim{\mathcal N}_p(0,\Sigma)$, and the minimal eigenvalue of $\Sigma$ is bounded away from zero.
\item[(B3)] The inverse $\Sigma^{-1}$ is row-sparse with $s_j = \sum_{k \neq j} I((\Sigma^{-1})_{jk} \neq0) = o(n/\log(p))$. \end{longlist}
\begin{theo}[(\cite{zhangzhang11}; van~de Geer et\break al., \citeyear{vdgetal13})]\label{th2} Consider a linear model as in (\ref{mod.lin}) with fixed design and Gaussian errors. Assume \textup{(B1)}, \textup{(B2)} and \textup{(B3)} (or an $\ell_1$-sparsity assumption on the rows of $\Sigma^{-1}$).
Then
\begin{eqnarray*} \sqrt{n} \sigma_{\varepsilon}^{-1} \bigl(\hat{b} - \beta^0\bigr) &=& W + \Delta,\quad W \sim{\mathcal N}_p(0,\Omega), \\ \Omega_{jk} &=& \frac{n(Z^{(j)})^T Z^{(k)}}{[(\mathbf{X}^{(j)})^T Z^{(j)}][(\mathbf{X}^{(k)})^T Z^{(k)}]}, \\
\|\Delta\|_{\infty} &=& o_P(1). \end{eqnarray*}
[We note that this statement holds with probability tending to one, with respect to the design $\mathbf{X}$ with rows drawn i.i.d. from ${\mathcal N}_p(0,\Sigma)$ as assumed in \textup{(B2)}.] \end{theo}
The asymptotic implications of Fact~\ref{th2} are as follows:
\[ \sigma_{\varepsilon}^{-1} \Omega_{jj}^{-1/2} \sqrt{n} \bigl(\hat{b}_j - \beta^0_j\bigr) \Rightarrow{\mathcal N}(0,1), \]
from which we can immediately construct a confidence interval or hypothesis test by plugging in an estimate $\hat{\sigma}_{\varepsilon}$ as briefly discussed in Section~\ref{subsec.addissues}. From a theoretical perspective, it is more elegant to use the square root Lasso (\cite{belloni2011square}) for the construction of $Z^{(j)}$; then one can drop (B3) [or the $\ell_1$-sparsity version of (B3)] (\cite{vdg14}). In fact, all that we then need is formula (\ref{ell1bound})
\[
\bigl\|\hat{\beta} - \beta^0\bigr\|_1 = o_P\bigl(1/ \sqrt{\log(p)}\bigr). \]
From a practical perspective, it seems to make essentially no difference whether\vspace*{1pt} one takes the square root or plain Lasso for the construction of the $Z^{(j)}$'s.
More general than the statements in Fact~\ref{th2}, the following holds assuming (B1)--(B3) (\cite{vdgetal13}): the asymptotic variance $\sigma_{\varepsilon}^2 \Omega_{jj}$ reaches the Cram\'{e}r--Rao lower bound, which equals $\sigma_{\varepsilon}^2
(\Sigma^{-1})_{jj}$ [which is bounded away from zero, due to (B2)], and the estimator $\hat{b}_j$ is efficient in the sense of semiparametric inference. Furthermore, the convergence in Fact~\ref{th2} is uniform over the subset of the parameter space where the number of nonzero coefficients $\|\beta^0\|_0$ is small and, therefore, we obtain \emph{honest} confidence intervals and tests. In particular, both of these results say that all the complications in post-model selection do not arise (\cite{leebpoetsch03}), and yet $\hat{b}_j$ is optimal for construction of confidence intervals of a single coefficient $\beta^0_j$.
From a practical perspective, we need to choose the regularization parameters $\lambda$ (for the Lasso regression of $Y$ versus $\mathbf{X}$) and $\lambda_j$ [for the nodewise Lasso regressions (\cite{mebu06}) of $\mathbf{X}^{(j)}$ versus all other variables $\mathbf{X}^{(-j)}$]. Regarding the former, we advocate a choice using cross-validation; for the latter, we favor a proposal for a smaller $\lambda_j$ than the one from CV, and the details are described in Appendix~\ref{subsec.appadd}.
Furthermore, for a group $G \subseteq\{1,\ldots ,p\}$, we can test a group hypothesis $H_{0,G}: \beta^0_j = 0$ for all $j \in G$ by considering the test-statistic
\[ \max_{j \in G} \sigma_{\varepsilon}^{-1}
\Omega_{jj}^{-1/2} \sqrt{n} |\hat{b}_j| \Rightarrow \max_{j \in G} \Omega_{jj}^{-1/2}
|W_j|, \]
where the limit on the right-hand side occurs if the null-hypothesis
$H_{0,G}$ holds true. The distribution of $\max_{j \in G} |\Omega_{jj}^{-1/2} W_j|$ can be easily simulated from dependent Gaussian random variables. We also remark that sum-type statistics for large groups cannot be easily treated because $\sum_{j \in G} |\Delta_j|$ might get out of control.
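The simulation of the null distribution of $\max_{j \in G} |\Omega_{jj}^{-1/2} W_j|$ can be sketched as follows (a hypothetical helper, \texttt{numpy} assumed):

```python
import numpy as np

def group_test_pvalue(stat_obs, Omega, G, n_sim=10000, seed=0):
    """Monte Carlo p-value for H_{0,G}: compare the observed max-type
    statistic with draws of max_{j in G} |Omega_jj^{-1/2} W_j|,
    where W ~ N_p(0, Omega)."""
    rng = np.random.default_rng(seed)
    W = rng.multivariate_normal(np.zeros(Omega.shape[0]), Omega, size=n_sim)
    scale = np.sqrt(np.diag(Omega))[G]
    null_max = np.max(np.abs(W[:, G]) / scale, axis=1)
    return float(np.mean(null_max >= stat_obs))
```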
\subsubsection{Ridge projection and bias correction}\label{subsec.ridge-proj}
Related to the desparsified Lasso estimator $\hat{b}$ in (\ref{despars-lasso}) is an approach based on Ridge estimation. We sketch here the main properties and refer to \citet{pb13} for a detailed treatment.
Consider
\[ \hat{\beta}_{\mathrm{Ridge}} = \bigl(n^{-1} \mathbf{X}^T \mathbf{X}+ \lambda I\bigr)^{-1} n^{-1} \mathbf{X}^T Y. \]
A major source of bias occurring in Ridge estimation when $p > n$ comes from the fact that the Ridge estimator is estimating a projected parameter
\[ \theta^0 = P_{R} \beta^0,\quad P_{R} = \mathbf{X}^T \bigl(\mathbf{X}\mathbf{X}^T\bigr)^{-}\mathbf{X}, \]
where $(\mathbf{X}\mathbf{X}^T)^{-}$ denotes a generalized inverse of $\mathbf{X}\mathbf{X}^T$. The minor bias for $\theta^0$ then satisfies
\begin{eqnarray*}
\max_j\bigl|\mathbb{E}[\hat{\beta}_{\mathrm{Ridge};j}] -
\theta^0_j\bigr| \le\lambda\bigl\| \theta^0
\bigr\|_2 \lambda_{\mathrm{min} \neq0}(\hat{\Sigma})^{-1}, \end{eqnarray*}
where $\lambda_{\mathrm{min} \neq0}(\hat{\Sigma})$ denotes the minimal nonzero eigenvalue of $\hat{\Sigma}$ (\cite{shadeng11}). The quantity can be made small by choosing $\lambda$ small. Therefore, for $\lambda\searrow0^+$ and assuming Gaussian errors, we have that
\begin{equation} \label{Ridge-distr}\quad \sigma_{\varepsilon}^{-1} \bigl(\hat{ \beta}_{\mathrm{Ridge}} - \theta^0\bigr) \approx W,\quad W \sim{\mathcal N}_p(0, \Omega_R), \end{equation}
where $\Omega_R = (\hat{\Sigma} + \lambda I)^{-1} \hat{\Sigma} (\hat {\Sigma} + \lambda I)^{-1}/n$. Since
\[ \frac{\theta^0_j}{P_{R;jj}} = \beta^0_j + \sum _{k \neq j} \frac{P_{R;jk}}{P_{R;jj}} \beta^0_k, \]
the major bias for $\beta^0_j$ can be estimated and corrected with
\[ \sum_{k \neq j} \frac{P_{R;jk}}{P_{R;jj}} \hat{ \beta}_k, \]
where $\hat{\beta}$ is the ordinary Lasso. Thus, we construct a bias-corrected Ridge estimator, which addresses the potentially substantial difference between $\theta^0$ and the target $\beta^0$:
\begin{eqnarray} \label{corr-Ridge} \hat{b}_{R;j} = \frac{\hat{\beta}_{\mathrm{Ridge};j}}{P_{R;jj}} - \sum _{k \neq j} \frac{P_{R;jk}}{P_{R;jj}} \hat{\beta}_k, \nonumber \\[-8pt] \\[-8pt]
\eqntext{j=1, \ldots,p.} \end{eqnarray}
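A minimal sketch of the bias-corrected Ridge estimator (\ref{corr-Ridge}); the initial sparse estimator $\hat{\beta}$ (the Lasso in the text) is passed in as an argument so that no particular Lasso implementation is assumed:

```python
import numpy as np

def ridge_projection(X, Y, beta_hat, lam=1e-4):
    """Bias-corrected Ridge estimator b_R of (corr-Ridge).

    beta_hat: initial sparse estimator (the Lasso in the text); any
    sparse fit can be plugged in here."""
    n, p = X.shape
    Sigma_hat = X.T @ X / n
    beta_ridge = np.linalg.solve(Sigma_hat + lam * np.eye(p), X.T @ Y / n)
    # Projection P_R = X^T (X X^T)^- X onto the row space of X.
    P_R = X.T @ np.linalg.pinv(X @ X.T) @ X
    b_R = np.empty(p)
    for j in range(p):
        # sum_{k != j} P_R;jk beta_hat_k, the estimated bias.
        corr = P_R[j] @ beta_hat - P_R[j, j] * beta_hat[j]
        b_R[j] = (beta_ridge[j] - corr) / P_R[j, j]
    return b_R
```

In the low-dimensional full-rank case $P_R$ is the identity, the correction vanishes and $\hat{b}_R$ reduces to the Ridge estimator itself.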
Based on (\ref{Ridge-distr}), we derive in Appendix~\ref{subsec.appadd} that
\begin{eqnarray} \label{Ridge-repr}&&\sigma_{\varepsilon}^{-1} \Omega_{R;jj}^{-1/2} \bigl(\hat{b}_{R;j} - \beta^0_j\bigr)\nonumber\\ &&\quad \approx \Omega_{R;jj}^{-1/2} W_j / P_{R;jj}\nonumber \\ &&\qquad{}+ \sigma_{\varepsilon}^{-1} \Omega_{R;jj}^{-1/2} \Delta_{R;j},\quad W \sim{\mathcal N}_p(0, \Omega_R), \\
&&|\Delta_{R;j}| \le\Delta_{R\mathrm{bound};j} \nonumber\\ &&\hspace*{22pt}\quad:= \max_{k \neq j} \biggl\vert \frac{P_{R;jk}}{P_{R;jj}}\biggr\vert \bigl(\log(p)/n\bigr)^{1/2 - \xi}, \nonumber\end{eqnarray}
with the typical choice $\xi= 0.05$. Sufficient conditions for deriving (\ref{Ridge-repr}) are assumption (B1) and that the sparsity satisfies $s_0 =O((n/\log(p))^{\xi})$ for $\xi$ as above.
Unlike in Fact~\ref{th2}, the term $\Delta_{R;j}$ is typically not negligible, and we correct the Gaussian part in (\ref{Ridge-repr}) by the upper bound $\Delta_{R\mathrm{bound};j}$. For example, for testing $H_{0,j}: \beta^0_j = 0$ we use the upper bound for the $p$-value
\begin{eqnarray*} 2\bigl(1 - \Phi\bigl(\sigma_{\varepsilon}^{-1}\Omega_{R;jj}^{-1/2}
|P_{R;jj}|\bigl(|\hat {b}_{R;j}| - \Delta_{R\mathrm{bound};j}\bigr)_+\bigr) \bigr). \end{eqnarray*}
Similarly, for two-sided confidence intervals with coverage $1-\alpha$ we use
\begin{eqnarray*} & &[\hat{b}_{R;j} -c_j,\hat{b}_{R;j} + c_j], \\
& &c_j = \Delta_{R\mathrm{bound};j} + \sigma_{\varepsilon} \Omega _{R;jj}^{1/2}/|P_{R;jj}| \Phi^{-1}(1- \alpha/2). \end{eqnarray*}
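In code, the $p$-value upper bound and the confidence interval read as follows (a sketch; all inputs are assumed to have been computed from the Ridge projection fit, and only the Python standard library is used):

```python
from statistics import NormalDist

def ridge_pvalue_and_ci(b_Rj, Omega_Rjj, P_Rjj, Delta_bound,
                        sigma_eps, alpha=0.05):
    """Upper-bound p-value for H_{0,j}: beta^0_j = 0, and a (1 - alpha)
    two-sided confidence interval for the bias-corrected Ridge estimator."""
    Phi = NormalDist().cdf
    # z = sigma^{-1} Omega_{R;jj}^{-1/2} |P_{R;jj}| (|b_{R;j}| - Delta)_+
    z = (abs(P_Rjj) / (sigma_eps * Omega_Rjj ** 0.5)
         * max(abs(b_Rj) - Delta_bound, 0.0))
    pval = 2.0 * (1.0 - Phi(z))
    # c_j = Delta + sigma * Omega_{R;jj}^{1/2} / |P_{R;jj}| * Phi^{-1}(1-a/2)
    half = (Delta_bound + sigma_eps * Omega_Rjj ** 0.5 / abs(P_Rjj)
            * NormalDist().inv_cdf(1.0 - alpha / 2.0))
    return pval, (b_Rj - half, b_Rj + half)
```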
For testing a group hypothesis for $G \subseteq\{1,\ldots,p\}$, $H_{0,G}: \beta^0_j = 0$ for all $j \in G$, we can proceed similarly as at the end of Section~\ref{subsec.desparslasso}: under the null-hypotheses $H_{0,G}$, the statistic $\sigma_{\varepsilon}^{-1}
\max_{j \in G} \Omega_{R;jj}^{-1/2} |\hat{b}_{R;j}|$ has a distribution which is approximately stochastically upper bounded by
\begin{eqnarray*} \max_{j \in G} \bigl(\Omega_{R;jj}^{-1/2}
|W_j| / |P_{R;jj}| + \sigma_{\varepsilon}^{-1}
\Omega_{R;jj}^{-1/2} |\Delta_{R;j}|\bigr); \end{eqnarray*}
see also (\ref{Ridge-repr}). When invoking an upper bound for $\Delta_{R\mathrm{bound};j} \ge
|\Delta_{R;j}|$ as in (\ref{Ridge-repr}), we can easily simulate this distribution from dependent Gaussian random variables, which in turn can be used to construct a $p$-value; we refer for further details to \citet{pb13}.
\subsubsection{Additional issues: Estimation of the error variance and multiple testing correction}\label{subsec.addissues}
Unlike the Multi sample-splitting procedure in Section~\ref{subsec.multisample-split}, the desparsified Lasso and Ridge projection method outlined in Sections~\ref{subsec.desparslasso}--\ref{subsec.ridge-proj} require to plug-in an estimate of $\sigma_{\varepsilon}$ and to adjust for multiple testing. The scaled Lasso (\cite{sunzhang11}) leads to a consistent estimate of the error variance: it is a fully automatic method which does not need any specification of a tuning parameter. In \citet{reidtibsh13}, an empirical comparison of various estimators suggests that the estimator based on a residual sum of squares of a cross-validated Lasso solution often yields good finite-sample performance.
Regarding the adjustment when doing many tests for individual regression parameters or groups thereof, one can use any valid standard method to correct the $p$-values from the desparsified Lasso or Ridge projection method. The prime examples are the Bonferroni--Holm procedure for controlling the familywise error rate and the method from \citet{benyek01} for controlling the false discovery rate. An approach for familywise error control which explicitly takes the dependence among the multiple hypotheses is proposed in \citet{pb13}, based on simulations for dependent Gaussian random variables.
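For concreteness, the Bonferroni--Holm step-down adjustment mentioned above can be sketched as follows (a standard textbook procedure, not specific to the present methods; \texttt{numpy} assumed):

```python
import numpy as np

def holm_adjust(pvals):
    """Bonferroni-Holm adjusted p-values, controlling the FWER."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Step-down: multiply the k-th smallest p-value by (m - k + 1)
        # and enforce monotonicity of the adjusted values.
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(running_max, 1.0)
    return adj
```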
\subsubsection{Conceptual differences between the methods}
We briefly outline here conceptual differences while Section~\ref{subsec.comparlm} presents empirical results.
The Multi sample-splitting method is very generic and in the spirit of Breiman's appeal for stability (\citeauthor{brei96}, \citeyear{brei96,brei96b}), it enjoys some kind of stability due to multiple sample splits and aggregation; see also the discussion in Sections~\ref{subsec.othersparsemeth} and \ref{subsec.mainass}. The disadvantage is that, in the worst case, the method needs a beta-min or a weaker zonal assumption on the underlying regression parameters: this is somewhat unpleasant since a significance test should \emph{find out} whether a regression coefficient is sufficiently large or not.
Both the desparsified Lasso and Ridge projection procedures do not make any assumption on the underlying regression coefficient except sparsity. The former is most powerful and asymptotically optimal if the design were generated from a population distribution whose inverse covariance matrix is sparse. Furthermore, the convergence is uniform over all sparse regression vectors and, hence, the method yields honest confidence regions or tests. The Ridge projection method does not require any assumption on the fixed design but does not reach the asymptotic Cram\'{e}r--Rao efficiency bound. The construction with the additional correction term in (\ref{delta-bound}) leads to reliable type I error control at the cost of power.
In terms of computation, the Multi sample-splitting and Ridge projection method are substantially less demanding than the desparsified Lasso.
\subsubsection{Other sparse methods than the Lasso}\label{subsec.othersparsemeth}
All the methods described above are used ``in default mode'' in conjunction with the Lasso (see also Section~\ref{subsec.hdilin}). This is not necessary, and other estimators can be used.
For the Multi sample-splitting procedure, assumptions (A1) with $\delta \to 0$ and (A2) are sufficient for asymptotic correctness; see Fact~\ref{th1}. These assumptions hold for many reasonable sparse estimators when requiring a beta-min assumption and some sort of identifiability condition such as the restricted eigenvalue or the compatibility condition on the design matrix~$\mathbf{X}$; see also the discussion after Fact~\ref{th1}. It is unclear whether one could gain substantially by using a different screening method than the Lasso. In fact, the Lasso has been empirically found to perform rather well for screening in comparison to the elastic net (\cite{zou2005regularization}), marginal correlation screening (\cite{fanlv07}) or thresholded Ridge regression; see \citet{pbmand13}.
For the desparsified Lasso, the error of the estimated bias correction can be controlled by using a bound for
$\|\hat{\beta} - \beta^0\|_1$. If we require (B2) and (B3) [or an $\ell_1$ sparsity assumption instead of (B3)], the estimation error in the bias correction, based on an estimator $\hat{\beta}$ in (\ref{despars-lasso}), is asymptotically negligible if
\begin{equation}
\label{ell1bound} \bigl\|\hat{\beta} - \beta^0\bigr\|_1 = o_P\bigl(1/\sqrt{\log(p)}\bigr). \end{equation}
This bound is implied by (B1) and (B2) for the Lasso, but other estimators exhibit this bound as well, as mentioned below. When using such another estimator, the wording ``desparsified Lasso'' does not make sense anymore. Furthermore, when using the square root Lasso for the construction of $Z^{(j)}$, we only need (\ref{ell1bound}) to obtain asymptotic normality with the $\sqrt{n}$ convergence rate (\cite{vdg14}).
For the Ridge projection method, a bound for $\|\hat{\beta} - \beta^0\|_1$ is again the only assumption such that the procedure is asymptotically valid. Thus, for the corresponding bias correction, other methods than the Lasso can be used.
We briefly mention a few other methods for which there is reason to believe that (A1) with very small $\delta> 0$ and (A2), or the bound in (\ref{ell1bound}), hold: the adaptive Lasso (\cite{zou06}), analyzed in greater detail in \citet{geer11}; the MC$+$ procedure with its high-dimensional mathematical analysis (\cite{zhang2010}); or methods with a concave regularization penalty such as SCAD (\cite{fan2001variable}), analyzed in broader generality and detail in \citet{fan2014}. If assumptions (A1) with small $\delta> 0$ and (A2) fail for the Multi sample-splitting method, the multiple sample splitting still allows one to check the stability of the $p$-values $P_{\mathrm{corr},j}^{[b]}$ across $b$ (i.e., across sample splits). If the variable screening is unstable, many of the $P_{\mathrm{corr},j}^{[b]}$ (across $b$) will be equal to 1; the aggregation therefore only produces small $p$-values if most of the per-split $p$-values are stable and small. See also \citet{manbu13}, Section~5. In connection with the desparsified Lasso, a failure of the single sufficient condition in (\ref{ell1bound}), when using, for example, the square root Lasso for the construction of the $Z^{(j)}$'s, might result in too large a bias. In the absence of resampling or Multi sample splitting, it seems difficult to diagnose such a failure (of the desparsified Lasso or Ridge projection method) with real data.
\subsection{\texttt{hdi} for Linear Models}\label{subsec.hdilin} In the \texttt{R}-package \texttt{hdi}, available on R-Forge (\cite{hdipackage}), we provide implementations for the Multi sample-splitting, the Ridge projection and the desparsified Lasso method.
Using the \texttt{R} functions is straightforward:
\begin{verbatim}
> outMssplit <- multi.split(x = x, y = y)
> outRidge   <- ridge.proj(x = x, y = y)
> outLasso   <- lasso.proj(x = x, y = y)
\end{verbatim}
For users who are very familiar with the procedures, we provide flexible options. For example, one can easily use an alternative model selection or another ``classical'' fitting procedure via the arguments \texttt{model.selector} and \texttt{classical.fit} in \texttt{multi.split}. The default options should be satisfactory for standard usage.
All procedures return $p$-values and confidence intervals. The Ridge and desparsified Lasso methods return both single testing $p$-values as well as multiple testing corrected $p$-values, unlike the Multi sample-splitting procedure which only returns multiple testing corrected $p$-values. The confidence intervals are for individual parameters only (corresponding to single hypothesis testing).
The single testing $p$-values and the multiple testing corrected $p$-values are extracted from the fit as follows:
\begin{verbatim} > outRidge$pval > outRidge$pval.corr \end{verbatim}
By default, we correct for controlling the familywise error rate for the $p$-values \texttt{pval.corr}.
Confidence intervals are obtained through the usual \texttt{confint} interface. Below we extract the 95\% confidence intervals for those $p$-values that are smaller than \texttt{0.05}:
\begin{verbatim} > confint(outMssplit,
parm = which(outMssplit$pval.corr
<= 0.05),
level = 0.95) \end{verbatim}
Because the desparsified Lasso method is quite computationally intensive, we provide the option to parallelize it over a user-specified number of cores.
We refer to the manual of the package for more detailed information.
\subsection{Other Methods}\label{subsec.othermeth}
Recently, other procedures have been suggested for construction of $p$-values and confidence intervals.
Residual-type bootstrap approaches are proposed and analyzed in \citet{chatter13} and \citet{liuyu13}. A problem with these approaches is the nonuniform convergence to a limiting distribution and exposure to the super-efficiency phenomenon: if the true parameter equals zero, a confidence region might be the singleton $\{0\}$ (due to a finite amount of bootstrap resampling), while for nonzero true parameter values, the coverage might be very poor or the confidence interval very wide.
The covariance test (\cite{covtest14}) is another proposal which relies on the solution path of the Lasso and provides $p$-values for conditional tests that all relevant variables enter the Lasso solution path first. It is related to post-selection inference, mentioned in Section~\ref{subsec.postsel}.
In \citet{jamo13b}, a procedure was proposed that is very similar to the one described in Section~\ref{subsec.desparslasso}, with the only difference being that $Z^{(j)}$ is picked as the solution of a convex program rather than obtained from the Lasso. The method aims to relax the sparsity assumption (B3) for the design.
A conservative \emph{Group-bound} method which needs no regularity assumption for the design, for example, no compatibility assumption (\ref{compat}), has been proposed by \citet{meins13}. The method has the capacity to automatically determine whether a regression coefficient is identifiable or not, and this makes the procedure very robust against ill-posed designs. The main motivation of the method is in terms of testing groups of correlated variables, and we discuss it in more detail in Section~\ref{subsec.assfree}.
While all the methods mentioned above are considered in a comparative simulation study in Section~\ref{subsec.comparlm}, we mention here some others. The idea of estimating a low-dimensional component of a high-dimensional parameter is also worked out in \citet{belloni2012sparse}, \citet{beletal13}, bearing connections to the approach of desparsifying the Lasso. Based on stability selection (\cite{mebu10}), \citet{shah13} propose a version which leads to $p$-values for testing individual regression parameters. Furthermore, there are new and interesting proposals for controlling the false discovery rate, in a ``direct way'' (\citeauthor{bogdan13} \citeyear{bogdan13,bogdan14}; \cite{foygcand14}).
\subsection{Main Assumptions and Violations}\label{subsec.mainass}
We discuss here some of the main assumptions, potential violations and some corresponding implications calling for caution when aiming for confirmatory conclusions.
\textit{Linear model assumption}. The first one is that the linear (or some other) model is correct. This might be rather unrealistic and, thus, it is important to interpret the output of software or a certain method. Consider a nonlinear regression model
\begin{eqnarray*} &&\mbox{random design}:\quad Y_0 = f^0(X_0) + \eta_0, \\ &&\mbox{fixed design}:\quad Y = f^0(\mathbf{X}) + \eta, \end{eqnarray*}
where, with some slight abuse of notation, $f^0(\mathbf{X}) = (f^0(\mathbf{X}_1),\ldots, f^0(\mathbf{X}_n))^T$. We assume for the random design model that $\eta_0$ is independent of $X_0$, $\mathbb{E}[\eta_0] = 0$, $\mathbb{E}[f^0(X_0)] = 0$, $\mathbb{E} [X_0] = 0$, and the data are $n$ i.i.d. realizations of $(X_0,Y_0)$; for the fixed design model, the $n \times1$ random vector $\eta$ has i.i.d. components with $\mathbb{E}[\eta_i]=0$. For the random design model, we consider
\begin{eqnarray} \label{betaproj} Y_0 &=& \bigl(\beta^0\bigr)^T X_0 + \varepsilon_0,\nonumber \\ \varepsilon_0 &=& f^0(X_0) - \bigl(\beta^0\bigr)^T X_0 + \eta_0, \\ \beta^0 &=&\operatorname{argmin}_{\beta} \mathbb{E}\bigl[\bigl(f^0(X_0) - \beta^T X_0\bigr)^2\bigr]\nonumber \end{eqnarray}
[where the latter is unique if $\operatorname{Cov}(X_0)$ is positive definite]. We note that $\mathbb{E}[\varepsilon_0|X_0] \neq0$ while $\mathbb{E}[\varepsilon_0] = 0$ and, therefore, the inference should be \emph{unconditional} on $\mathbf{X}$ and is to be interpreted for the projected parameter $\beta^0$ in (\ref{betaproj}). Furthermore, for correct asymptotic inference of the projected parameter $\beta^0$, a modified estimator for the asymptotic variance of the estimator is needed; and then both the Multi sample-splitting and the desparsified Lasso are asymptotically correct (assuming similar conditions as if the model were correct). The Multi sample-splitting method is well suited for the random design case because the sample splitting (resampling type) is coping well with i.i.d. data. This is in contrast to fixed design, where the data is not i.i.d. and the Multi sample-splitting method for a misspecified linear model is typically not working anymore. The details are given in \citet{pbvdg15}.
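A small Monte Carlo illustration of the projected parameter $\beta^0$ in (\ref{betaproj}): for a nonlinear $f^0$ of our choosing and standard Gaussian design, the least-squares fit targets the best linear approximation, not a ``true'' nonlinear parameter (\texttt{numpy} assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200000
X0 = rng.standard_normal((N, 2))
f0 = X0[:, 0] + 0.5 * X0[:, 1] ** 3         # a nonlinear example f^0

# beta^0 = argmin_beta E[(f^0(X_0) - beta^T X_0)^2]; for standard normal
# coordinates this equals (1, 1.5), since E[X^4] = 3 in the second term.
beta_proj, *_ = np.linalg.lstsq(X0, f0, rcond=None)
```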
For a fixed design model with $\mathrm{rank}(\mathbf{X}) = n$, we can always write
\[ Y = \mathbf{X}\beta^0 + \varepsilon,\quad \varepsilon= \eta \]
for many solutions $\beta^0$. For ensuring that the inference is valid, one should consider a sparse $\beta^0$, for example, the basis pursuit solution from compressed sensing (\cite{candes2006near}) as one among many solutions. Thus, inference should be interpreted for a \emph{sparse} solution $\beta^0$, in the sense that a confidence interval for the $j$th component would cover this $j$th component of all sufficiently sparse solutions $\beta^0$. For the high-dimensional fixed design case, there is no misspecification with respect to linearity of the model; misspecification might happen, though, if there is no solution $\beta ^0$ which fulfills a required sparsity condition. The details are given again in \citet{pbvdg15}.
The assumption about constant error variance might not hold. We note that in the random design case of a nonlinear model as above, the error in (\ref{betaproj}) has nonconstant variance when conditioning on $\mathbf{X}$, but, unconditionally, the noise is homoscedastic. Thus, as outlined, the inference for a random design linear model is asymptotically valid (unconditional on $\mathbf{X}$) even though the conditional error distribution given $\mathbf{X}$ has nonconstant variance.
\textit{Compatibility or incoherence-type assumption}. The methods in Section~\ref{subsec.lm-methods} require an identifiability assumption such as the compatibility condition on the design matrix $\mathbf{X}$ described in (\ref{compat}). The procedure in Section~\ref{subsec.assfree} does not require such an assumption: if a component of the regression parameter is not identifiable, the method will not claim significance. Hence, some robustness against nonidentifiability is offered with such a method.
\textit{Sparsity.} All the described methods require some sparsity assumption on the parameter vector $\beta^0$ [if the model is misspecified, this concerns the parameter $\beta^0$ as in (\ref{betaproj}) or the basis pursuit solution]; see the discussion of (A1) after Fact~\ref{th1} or assumption (B1). Such sparsity assumptions can be somewhat relaxed to require only weak sparsity in terms of
$\|\beta^0\|_r$ for some $0 < r < 1$, allowing that many or all regression parameters are nonzero but sufficiently small (cf. \cite{vdg15}; \cite{pbvdg15}).
When the truth (or the linear approximation of the true model) is nonsparse, the methods are expected to break down. With the Multi sample-splitting procedure, however, a violation of sparsity might be detected: for nonsparse problems, a sparse variable screening method will typically be unstable, with the consequence that the resulting aggregated $p$-values tend not to be small; see also Section~\ref{subsec.othersparsemeth}.
Finally, we note that for the desparsified Lasso, the sparsity assumption (B3) or its weaker version can be dropped when using the square root Lasso; see the discussion after Fact~\ref{th2}.
\textit{Hidden variables}. The problem of hidden variables is most prominent in the area of causal inference (cf. \cite{pearl00}). In the presence of hidden variables, the presented techniques need to be adapted, adopting ideas from, for example, the framework of EM-type estimation (cf. \cite{dempster1977maximum}), low-rank methods (cf. \cite{chandrasekaran2012}) or the FCI technique from causal inference (cf. \cite{sgs00}).
\subsection{A Broad Comparison}\label{subsec.comparlm} We compare a variety of methods on the basis of multiple testing corrected $p$-values and single testing confidence intervals. The methods we look at are the multiple sample-splitting method \emph{MS-Split} (Section~\ref{subsec.multisample-split}), the desparsified Lasso method \emph{Lasso-Pro} (Section~\ref{subsec.desparslasso}), the Ridge projection method \emph{Ridge} (Section~\ref{subsec.ridge-proj}), the covariance test \emph{Covtest} (Section~\ref{subsec.othermeth}), the method by Javanmard and Montanari \emph{Jm2013} (Section~\ref{subsec.othermeth}) and the two bootstrap procedures mentioned in Section~\ref{subsec.othermeth} [\emph{Res-Boot} corresponds to \citet{chatter13} and \emph{liuyu} to \citet{liuyu13}].
\subsubsection{Specific details for the methods}
For the estimation of the error variance, for the Ridge projection or the desparsified Lasso method, the scaled Lasso is used as mentioned in Section~\ref{subsec.addissues}.
For the choice of tuning parameters for the nodewise Lasso regressions (discussed in Section~\ref{subsec.desparslasso}), we look at the two alternatives of using either cross-validation or our more favored alternative procedure (denoted by Z\&Z) discussed in Appendix~\ref{subsec.appadd}.
We do not consider the bootstrap procedures in connection with multiple testing adjustment because the number of bootstrap samples required to reach far enough into the tails of the distribution becomes prohibitively large; some additional importance sampling might help to address this issue.
Regarding the covariance test, the procedure does not directly provide $p$-values for the hypotheses we are interested in. For the sake of comparison, though, we use the interpretation as in \citet{covtestpblmvdg14}. This interpretation has no theoretical justification and serves mainly as a heuristic; the results of the covariance test procedure should therefore be interpreted with caution.
For the method \emph{Jm2013}, we used our own implementation instead of the code provided by the authors. The reason is that we had already implemented our own version when we discovered that code was available, and our version was (orders of magnitude) better in terms of error control. Faced with the dilemma of fair comparison, we stuck to the better-performing alternative.
\subsubsection{Data used}\label{subsubsec.data} For the empirical results, simulated design matrices as well as design matrices from real data are used. The simulated design matrices are generated $\sim\mathcal{N}_p(0,\Sigma)$ with covariance matrix $\Sigma $ of the following three types:
\begin{eqnarray*}
&&\mbox{Toeplitz:}\quad \Sigma_{j,k} = 0.9^{|j-k|}, \\ &&\mbox{Exp.decay:}\quad \bigl(\Sigma^{-1}\bigr)_{j,k} =
0.4^{|j-k|/5}, \\ &&\mbox{Equi.corr:}\quad \Sigma_{j,k} \equiv0.8 \quad\mbox{for all } j \neq k, \\ &&\hspace*{56pt}\Sigma_{j,j} \equiv1\quad \mbox{ for all } j. \end{eqnarray*}
The sample size and dimension are fixed at $n=100$ and $p=500$, respectively. We note that the Toeplitz type has a banded inverse $\Sigma^{-1}$ and, vice versa, the Exp.decay type exhibits a banded $\Sigma$. The design matrix RealX, from real gene expression data of \emph{Bacillus subtilis} ($n=71$, $p=4088$), was kindly provided by DSM (Switzerland) and is publicly available (\cite{bumeka13}). To make the problem somewhat comparable in difficulty to the simulated designs, the number of variables is reduced to $p=500$ by taking the variables with highest empirical variance.
The cardinality of the active set is picked to be one of two levels $s_0 \in\{3,15\}$.
For each of the active set sizes, we look at 6 different ways of picking the sizes of the nonzero coefficients:
\begin{eqnarray*} &&\mbox{Randomly generated}:\quad U(0,2), U(0,4), U(-2,2), \\ &&\mbox{A fixed value}:\quad 1, 2 \mbox{ or } 10. \end{eqnarray*}
The positions of the nonzero coefficients as columns of the design $\mathbf X$ are picked at random. Results where the nonzero coefficients were positioned to be the first $s_0$ columns of $\mathbf X$ can be found in the supplemental article (\cite{supplement}).
Once we have the design matrix $\mathbf X$ and coefficient vector $\beta^0$, the responses $Y$ are generated according to the linear model equation with $\varepsilon\sim\mathcal{N}(0,1)$.
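As a concrete illustration, this data-generating process can be sketched in Python (a stand-in for the paper's R-based simulations; `make_sigma` and `simulate` are hypothetical names, and the $U(0,2)$ coefficient choice with a random active set is used as one of the variants listed above):

```python
import numpy as np

def make_sigma(kind, p, rho=0.8):
    """Population covariance for the three simulated design types."""
    idx = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    if kind == "toeplitz":
        return 0.9 ** idx
    if kind == "expdecay":
        # the *inverse* covariance is the banded object here
        return np.linalg.inv(0.4 ** (idx / 5.0))
    if kind == "equicorr":
        return np.full((p, p), rho) + (1.0 - rho) * np.eye(p)
    raise ValueError(kind)

def simulate(kind, n=100, p=500, s0=3, seed=0):
    """Draw X ~ N_p(0, Sigma), a sparse beta^0 and Y from the linear model."""
    rng = np.random.default_rng(seed)
    sigma = make_sigma(kind, p)
    X = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    beta = np.zeros(p)
    support = rng.choice(p, size=s0, replace=False)  # random active set positions
    beta[support] = rng.uniform(0, 2, size=s0)       # e.g. U(0,2) coefficients
    Y = X @ beta + rng.standard_normal(n)            # epsilon ~ N(0,1)
    return X, beta, Y
```

In the actual experiments $n=100$ and $p=500$; smaller dimensions keep the sketch fast.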
\begin{figure*}
\caption{Familywise error rate (FWER), average number of false positives [AVG(V)] and power for multiple testing based on various methods for a linear model. The desired control level for the FWER is $\alpha=0.05$. The average number of false positives AVG(V) for each method is shown in the middle. The design matrix is of type \emph{Toeplitz}, with the active set size being $s_0=3$ (top) and $s_0=15$ (bottom).}
\label{fig:lintoeplitz}
\end{figure*}
\begin{figure*}
\caption{See caption of Figure \protect\ref{fig:lintoeplitz} with the only difference being the type of design matrix. In this plot, the design matrix type is \emph{Exp.decay}.}
\label{fig:linexpdecay}
\end{figure*}
\subsubsection{$p$-values}\label{subsubsec.pvals} We investigate multiple testing corrected $p$-values for two-sided testing of the null hypotheses $H_{0,j}: \beta^0_j = 0$ for $j=1,\ldots ,p$. We report the power and the familywise error rate (FWER) for each method:
\begin{eqnarray*} \mbox{Power}& =& \sum_{j \in S_0} \mathbb{P}[H_{0,j}\mbox{ is rejected}]/s_0, \\ \mbox{FWER} &=& \mathbb{P}\bigl[\exists j \in S_0^c : H_{0,j}\mbox{ is rejected}\bigr]. \end{eqnarray*}
We calculate
empirical versions of these quantities based on fitting 100 simulated responses $Y$ coming from newly generated $\varepsilon$.
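The empirical versions of ``Power'' and ``FWER'' (and the average number of false positives reported below) can be computed from a matrix of rejection indicators; the following Python sketch uses hypothetical names and assumes the true active set $S_0$ is known, as it is in a simulation:

```python
import numpy as np

def empirical_power_fwer(rejected, active_set):
    """rejected: (n_sim, p) boolean matrix; entry (i, j) indicates whether
    H_{0,j} was rejected in simulation run i. active_set: indices in S_0."""
    rejected = np.asarray(rejected, dtype=bool)
    p = rejected.shape[1]
    s0 = np.zeros(p, dtype=bool)
    s0[list(active_set)] = True
    power = rejected[:, s0].mean()              # avg. fraction of S_0 rejected
    fwer = rejected[:, ~s0].any(axis=1).mean()  # fraction of runs with >=1 false positive
    avg_v = rejected[:, ~s0].sum(axis=1).mean() # average number of false positives V
    return power, fwer, avg_v
```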
For every design type, active set size and coefficient type combination we obtain 50 data points of the empirical versions of ``Power'' and ``FWER,'' from 50 independent simulations. For each data point, a new $\mathbf X$, $\beta^0$ (if not fixed) and set of active positions $S_0 \subseteq \{1,\ldots, p\}$ are generated; thus, the 50 data points indicate the variability with respect to the three quantities in the data generation (for the same covariance model of the design, the same model for the regression parameter and its active set positions). The data points are grouped in plots by design type and active set size.
We also report the average number of false positives \texttt{AVG(V)} over all data points per method next to the FWER plot.
The results, illustrating the performance for various methods, can be found in Figures~\ref{fig:lintoeplitz}, \ref{fig:linexpdecay}, \ref{fig:linequi} and \ref{fig:linrealx}.
\begin{figure*}
\caption{See caption of Figure \protect\ref{fig:lintoeplitz} with the only difference being the type of design matrix. In this plot, the design matrix type is \emph{Equi.corr}.}
\label{fig:linequi}
\end{figure*}
\begin{figure*}
\caption{See caption of Figure \protect\ref{fig:lintoeplitz} with the only difference being the type of design matrix. In this plot, the design matrix type is \emph{RealX}.}
\label{fig:linrealx}
\end{figure*}
\begin{figure*}
\caption{Confidence intervals and their coverage rates for 100 realizations of a linear model with fixed design of dimensions $n=100$, $p=500$. The design matrix was of type Toeplitz and the active set was of size $s_0=3$. The nonzero coefficients were chosen by sampling once from the uniform distribution $U[0,2]$. For each method, 18 coefficients are shown from left to right, with the 100 estimated 95\%-confidence intervals drawn for each coefficient. The first 3 coefficients are the nonzero coefficients in descending order of value. The other 15 coefficients, to the right of the first 3, were chosen to be those coefficients with the worst coverage. The size of each coefficient is illustrated by the height of a black horizontal bar. To illustrate the coverage of the confidence intervals, each confidence interval is colored either red or black depending on the inclusion of the true coefficient in the interval. Black means the true coefficient was covered by the interval. The numbers written above the coefficients are the number of confidence intervals, out of 100, that covered the truth. All confidence intervals are on the same scale such that one can easily see which methods have wider confidence intervals. To summarize the coverage for all zero coefficients $S_0^c$ (including those not shown on the plot), the rounded average coverage of those coefficients is given to the right of all coefficients.}
\label{fig:lincitoeplitz}
\end{figure*}
\subsubsection{Confidence intervals} We investigate confidence intervals for the one particular setup of the Toeplitz design, active set size $s_0=3$ and coefficients $\beta^0_j \sim U[0,2]\ (j \in S_0)$. The active set positions are chosen to be the first $s_0$ columns of $\mathbf X$. The results we show will correspond to a single data point in the $p$-value results.
In Figure~\ref{fig:lincitoeplitz}, 100 confidence intervals are plotted for each coefficient for each method. These confidence intervals are the results of fitting 100 different responses $Y$ resulting from newly generated $\varepsilon$ error terms.
For the Multi sample-splitting method from Section~\ref{subsec.multisample-split}, if a variable did not get selected often enough in the sample splits, there is not enough information to draw a confidence interval for it. This is represented in the plot by only drawing confidence intervals when this was not the case. If the (uncheckable) beta-min condition (\ref{beta-min}) were fulfilled, we would know that those confidence intervals cover zero.
For the bootstrapping methods, an invisible confidence interval is the result of the coefficient being set to zero in all bootstrap iterations.
\subsubsection{Summarizing the empirical results} As a first observation, the impact of the sparsity of the problem on performance cannot be denied. The power clearly gets worse for $s_0=15$ for the Toeplitz and Exp.decay setups. The FWER becomes too high for quite a few methods for $s_0=15$ in the cases of Equi.corr and RealX.
For the sparsity $s_0=3$, the Ridge projection method manages to control the FWER as desired for all setups. In the case of $s_0=15$, it is the Multi sample-splitting method that comes out best in comparison to the other methods. Generally speaking, good error control tends to be associated with a lower power, which is not too surprising since we are dealing with the trade-off between type I and type II errors. The desparsified Lasso method turns out to be a less conservative alternative with imperfect but reasonable FWER control as long as the problem is sparse enough ($s_0=3$). The method has a slightly too high FWER for the Equi.corr and RealX setups, but FWER around 0.05 for the Toeplitz and Exp.decay designs. Using the Z\&Z tuning procedure helps the error control, as can be seen most clearly in the Equi.corr setup.
The results for the simulations where the positions for the nonzero coefficients were not randomly chosen, presented in the supplemental article (\cite{supplement}), largely give the same picture. In comparison to the results presented before, the Toeplitz setup is easier while the Exp.decay setup is more challenging. The Equi.corr results are very similar to the ones from before, which is to be expected from the covariance structure.
Looking into the confidence interval results, it is clear that the confidence intervals of the Multi sample-splitting method and the Ridge projection method are wider than the rest. For the bootstrapping methods, the super-efficiency phenomenon mentioned in Section~\ref{subsec.othermeth} is visible. It is important to note here that the smallest nonzero coefficient, the third column, has very poor coverage from these methods.
We can conclude that the coverage of the zero coefficients is decent for all methods and that the coverage of the nonzero coefficients is in line with the error rates for the $p$-values.
Confidence interval results for many other setup combinations are provided in the supplemental article (\cite{supplement}). The observations are to a large extent the same.
\section{Generalized Linear Models}\label{sec.GLM}
Consider a generalized linear model
\begin{eqnarray*} & &Y_1,\ldots,Y_n\quad \mbox{independent}, \\
& &g\bigl(\mathbb{E}[Y_i|X_i = x]\bigr) = \mu^0 + \sum_{j=1}^p \beta^0_j x^{(j)}, \end{eqnarray*}
where $g(\cdot)$ is a real-valued, known link function. As before, the goal is to construct confidence intervals and statistical tests for the unknown parameters $\beta^0_1,\ldots,\beta^0_p$, and maybe $\mu^0$ as well.
\subsection{Methods}\label{subsec.GLMmethods}
The Multi sample-splitting method can be modified for GLMs in an obvious way: the variable screening step using the first half of the data can be based on the $\ell_1$-norm regularized MLE, and $p$-values and confidence intervals using the second half of the sample are constructed from the asymptotic distribution of the (low-dimensional) MLE. Multiple testing correction and aggregation of the $p$-values from multiple sample splits are done exactly as for linear models in Section~\ref{subsec.multisample-split}.
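For concreteness, the quantile-based rule of Meinshausen, Meier and B\"uhlmann for aggregating the $p$-values of one hypothesis across sample splits can be sketched in Python as follows (the grid over $\gamma$ and the function name are illustrative choices):

```python
import numpy as np

def aggregate_pvalues(pvals, gamma_min=0.05):
    """Aggregate per-split p-values P^(1), ..., P^(B) for one hypothesis
    into a single p-value, searching over quantile levels gamma."""
    pvals = np.asarray(pvals, dtype=float)
    gammas = np.linspace(gamma_min, 1.0, 100)
    # Q(gamma) = min(1, empirical gamma-quantile of {P^(b) / gamma})
    q = np.array([min(1.0, np.quantile(pvals / g, g)) for g in gammas])
    # the factor (1 - log gamma_min) pays for the search over gamma
    return min(1.0, (1.0 - np.log(gamma_min)) * q.min())
```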
A desparsified Lasso estimator for GLMs can be constructed as follows (\cite{vdgetal13}): The $\ell_1$-norm regularized MLE $\hat{\theta}$ for the parameters $\theta ^0 = (\mu^0,\beta^0)$ is desparsified with a method based on the Karush--Kuhn--Tucker (KKT) conditions for $\hat{\theta}$, leading to an estimator with an asymptotic Gaussian distribution. The Gaussian distribution can then be used to construct confidence intervals and hypothesis tests.
\subsection{Weighted Squared Error Approach}\label{subsec.GLMweighted}
The problem can be simplified in such a way that we can apply the approaches for the linear model from Section~\ref{sec.LM}. This can be done for all types of generalized linear models (as shown in Appendix~\ref{subsec.app.general.wsqerr}), but we restrict ourselves in this section to the specific case of logistic regression. Logistic regression is usually fitted by applying the iteratively reweighted least squares (IRLS) algorithm where at every iteration one solves a weighted least squares problem (\cite{hastetal09}).
The idea is now to apply a standard $\ell_1$-penalized fitting of the model, build up the weighted least squares problem at the $\ell_1$-solution and then apply our linear model methods to this problem.
We write $\hat{\pi}_i$, $i = 1, \ldots,n$, for the estimated probabilities of the binary outcome and $\hat{\pi}$ for the vector of these probabilities.
\begin{figure*}
\caption{Familywise error rate (FWER) and power for multiple testing based on various methods for logistic regression. The desired control level for the FWER is $\alpha=0.05$. The design matrix is of type \emph{Toeplitz} in the top plot and \emph{Equi.corr} in the bottom plot. If the method name contains a capital \texttt{G}, it is the modified glm version, otherwise the linear model methods are using the weighted squared error approach.}
\label{fig:glmsimul}
\end{figure*}
From \citet{hastetal09}, the adjusted response variable becomes
\[ Y_{\mathrm{adj}} = \mathbf X \hat{\beta} + \mathbf W^{-1}(Y-\hat{ \pi}), \]
and the weighted least squares problem is
\[ \hat{\beta}_{\mathrm{new}} = \operatorname{argmin}_{\beta} (Y_{\mathrm{adj}} - \mathbf X \beta)^T \mathbf W (Y_{\mathrm{adj}} - \mathbf X \beta), \]
with weights
\[ \mathbf W = \pmatrix{ \hat{\pi}_1(1-\hat{ \pi}_1) & 0 & \ldots& 0\vspace*{2pt} \cr 0 & \hat{ \pi}_2(1-\hat{\pi}_2) & \ddots& \vdots\vspace*{2pt} \cr \vdots& \ddots& \ddots& 0\vspace*{2pt} \cr 0 & \ldots& 0 & \hat{ \pi}_n(1-\hat{\pi}_n) } \hspace*{-0.5pt}. \]
We rewrite $Y_{w} = \sqrt{\mathbf W} Y_{\mathrm{adj}}$ and $\mathbf X_w = \sqrt{\mathbf W} \mathbf X$ to get
\[ \hat{\beta}_{\mathrm{new}} = \operatorname{argmin}_{\beta} (Y_w - \mathbf X_w \beta)^T(Y_w - \mathbf X_w \beta). \]
The linear model methods can now be applied to $Y_{w}$ and $\mathbf X_{w}$, whereby the estimate $\hat{\sigma}_{\varepsilon}$ has to be set to the value 1. We note that in the low-dimensional case, the resulting $p$-values (with unregularized residuals $Z_j$) are very similar to the $p$-values provided by the standard \texttt{R}-function \texttt{glm}.
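The transformation to $(Y_w, \mathbf X_w)$ is straightforward to carry out explicitly; a minimal Python sketch (with illustrative names, assuming an $\ell_1$-penalized logistic fit $\hat\beta$ is already given) reads:

```python
import numpy as np

def weighted_ls_transform(X, y, beta_hat):
    """Build the weighted least squares problem at the l1-penalized
    logistic fit beta_hat; return the transformed response and design."""
    eta = X @ beta_hat
    pi_hat = 1.0 / (1.0 + np.exp(-eta))      # estimated probabilities pi_i
    w = pi_hat * (1.0 - pi_hat)              # diagonal entries of W
    y_adj = eta + (y - pi_hat) / w           # adjusted response Y_adj
    sw = np.sqrt(w)
    return sw * y_adj, sw[:, None] * X       # Y_w, X_w
```

A quick sanity check: by construction, $\mathbf X_w^T(Y_w - \mathbf X_w\hat\beta) = \mathbf X^T(Y - \hat\pi)$, the logistic score at $\hat\beta$.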
\subsection{Small Empirical Comparison}
We provide a small empirical comparison of the methods mentioned in Sections~\ref{subsec.GLMmethods} and \ref{subsec.GLMweighted}. When applying the linear model procedures, we use the naming from Section~\ref{subsec.comparlm}. The new GLM-specific methods from Section~\ref{subsec.GLMmethods} are referred to by their linear model names with a capital G added to them.
For simulating the data, we use a subset of the variations presented in Section~\ref{subsubsec.data}. We only look at Toeplitz and Equi.corr and an active set size of $s_0=3$. The number of variables is fixed at $p=500$, but the sample size is varied $n\in\{100,200,400\}$. The coefficients were randomly generated:
\begin{eqnarray*} \mbox{Randomly generated}:\quad U(0,1), U(0,2), U(0,4). \end{eqnarray*}
The nonzero coefficient positions are chosen randomly in one case and fixed as the first $s_0$ columns of $\mathbf X$ in the other.
For every combination (of type of design, type of coefficients, sample size and coefficient positions), 100 responses $Y$ are simulated to calculate empirical versions of the ``Power'' and ``FWER'' described in Section~\ref{subsubsec.pvals}. In contrast to the $p$-value results from Section~\ref{subsubsec.pvals}, there is only one resulting data point per setup combination (i.e., no additional replication with new random covariates, random coefficients and random active set). For each method, there are 18 data points, corresponding to 18 settings, in each plot. The results can be found in Figure~\ref{fig:glmsimul}.
Both the modified GLM methods and the weighted squared error approach work adequately. The Equi.corr setup does prove to be challenging for \emph{Lasso-ProG}.
\subsection{\texttt{hdi} for Generalized Linear Models} In the \texttt{hdi} \texttt{R}-package (\cite{hdipackage}) we also provide the option to use the Ridge projection method and the desparsified Lasso method with the weighted squared error approach.
We provide the option to specify the \texttt{family} of the response $Y$ as done in the \texttt{R}-package \texttt{glmnet}:
\begin{verbatim}
> outRidge <- ridge.proj(x = x, y = y,
                         family = "binomial")
> outLasso <- lasso.proj(x = x, y = y,
                         family = "binomial")
\end{verbatim}
$p$-values and confidence intervals are extracted in the exact same way as for the linear model case; see Section~\ref{subsec.hdilin}.
\section{Hierarchical Inference in the Presence of Highly Correlated Variables}\label{sect.hierinf}
The previous sections and methods assume in some form or another that the effects are strong enough to enable accurate estimation of the contribution of \emph{individual variables}.
Variables are often highly correlated in high-dimensional data. With a small sample size, it is impossible to attribute any effect to an individual variable if the correlation within a block of variables is too high. Confidence intervals for individual variables are then very wide and uninformative. Asking for confidence intervals for individual variables thus leads to poor power of all procedures considered so far. Perhaps even worse, under high correlation between variables the coverage of some procedures will also be unreliable, as the necessary conditions for correct coverage (such as the compatibility assumption) are violated.
In such a scenario, the individual effects are not granular enough to be resolved. However, it might yet still be possible to attribute an effect to a group of variables. The groups can arise naturally due to a specific structure of the problem, such as in applications of the \emph{group Lasso} (\cite{yuan06}).
Perhaps more often, the groups are derived via hierarchical clustering (\cite{hartigan1975clustering}), using the correlation structure or some other distance between the variables. The main idea is as follows. A hierarchy ${\mathcal T}$ is a set of clusters or groups $\{{\mathcal C}_k; k\}$ with ${\mathcal C}_k \subseteq \{1,\ldots,p\}$. The root node (cluster) contains all variables $\{1,\ldots,p\}$. For any two clusters ${\mathcal C}_k, {\mathcal C}_{\ell}$, either one cluster is a subset of the other or they have an empty intersection. Usually, a hierarchical clustering has an additional notion of a level such that, on each level, the corresponding clusters build a partition of $\{1,\ldots,p\}$. We consider a hierarchy ${\mathcal T}$ and first test the root node cluster ${\mathcal C}_0 = \{1,\ldots,p\}$ with hypothesis $H_{0,{\mathcal C}_0}: \beta_1 = \beta_2 = \cdots= \beta_p = 0$. If this hypothesis is rejected, we test the next clusters ${\mathcal C}_k$ in the hierarchy (all clusters whose supersets are the root node cluster ${\mathcal C}_0$ only): the corresponding cluster hypotheses are $H_{0,{\mathcal C}_k}: \beta_j = 0$ for all $j \in{\mathcal C}_k$. For the hypotheses which can be rejected, we consider all smaller clusters whose only supersets are clusters which have been rejected by the method before, and we continue to go down the tree hierarchy until no more cluster hypotheses can be rejected.
With the hierarchical scheme in place, we still need a test for the null hypothesis $H_{0,{\mathcal C}}$ of a cluster of variables. The tests have different properties; for example, whether a multiplicity adjustment is necessary will depend on the chosen test. We will describe below some methods that are useful for testing the effect of a group of variables and which can be used in such a hierarchical approach. The nice and interesting feature of the procedures is that they adapt automatically to the level of the hierarchical tree: if a signal of a small cluster of variables is strong, and if that cluster is sufficiently uncorrelated from all other variables or clusters, the cluster will be detected as significant. Vice-versa, if the signal is weak or if the cluster has too high a correlation with other variables or clusters, the cluster will not become significant. For example, a single variable cannot be detected as significant if it is too highly correlated with other variables or clusters.
\subsection{Group-Bound Confidence Intervals Without Design Assumptions}\label{subsec.assfree}
The \emph{Group-bound} proposed in \citet{meins13} gives confidence intervals for the $\ell_1$-norm $\|\beta^0_{{\mathcal C}_k}\|_1$ of a group ${{\mathcal C}_k}\subseteq\{1,\ldots,p\}$ of variables. If the lower bound of the $1-\alpha$ confidence interval is larger than 0, then the null hypothesis $\beta^0_{{\mathcal C}_k}\equiv0$ can be rejected for this group. The method combines a few properties:
\begin{longlist}[(iii)]
\item[(i)] The confidence intervals are valid without an assumption like the compatibility condition (\ref{compat}). In general, they are conservative, but if the compatibility condition holds, they have good ``power'' properties (in terms of length) as well.
\item[(ii)] The test is hierarchical. If a set of variables can be rejected, all supersets will also be rejected. And vice-versa, if a group of variables cannot be rejected, none of its subsets can be rejected.
\item[(iii)] The estimation accuracy has an optimal detection rate under the so-called group effect compatibility condition, which is weaker than the compatibility condition necessary to detect the effect of individual variables.
\item[(iv)] The power of the test is unaffected by adding highly or even perfectly correlated variables to the group ${\mathcal C}_k$. The compatibility condition would fail to yield a nontrivial bound, but the group effect compatibility condition is unaffected by the addition of perfectly correlated variables to a group. \end{longlist}
The price to pay for the assumption-free nature of the bound is a weaker power than with previously discussed approaches when the goal is to detect the effect of individual variables. However, for groups of highly correlated variables, the approach can be much more powerful than simply testing all variables in the group.
\begin{figure*}
\caption{A visualization of the hierarchical testing scheme as described in the beginning of Section~\protect\ref{sect.hierinf}, for the examples described in Section~\protect\ref{subsec.illustrations}. One moves top-down through the output of a hierarchical clustering scheme, starting at the root node. For each cluster encountered, the null hypothesis that all the coefficients of that particular cluster are 0 is tested. A rejection is visualized by a red semi-transparent circle at a vertical position that corresponds to the size of the cluster. The chosen significance level was $\alpha=0.05$. The children of significant clusters in the hierarchy are connected by a black line. The process is repeated by testing the null hypotheses for all those children clusters until no more hypotheses could be rejected. The ordering of the hierarchy in the horizontal direction has no meaning and was chosen for a clean separation of children hierarchies. The hierarchical clustering and orderings are the same for all 6 plots since the design matrix was the same. Two different examples were looked at (corresponding to top and bottom row, resp.) and four different methods were applied to these examples. The desparsified Lasso and the Ridge method gave identical results and were grouped in the two plots on the left, while results from the hierarchical Multi sample-splitting method are presented in the middle column and the results for the Group-bound method are shown in the right column. In example 1, the responses were simulated with 2 clusters of highly correlated variables of size 3 having coefficients different from zero. In example 2, the responses were simulated with 2 clusters of highly correlated variables of sizes 11 and 21 having coefficients different from zero. More details about the examples can be found in Section \protect\ref{subsec.illustrations}.}
\label{fig:treeridge}
\end{figure*}
We remark that previously developed tests can be adapted to the context of hierarchical testing of groups with hierarchical adjustment for familywise error control (\cite{Meins08}); for the Multi sample-splitting method, this is described next.
\subsection{Hierarchical Multi Sample-Splitting}\label{subsec.mssplitgroup}
The Multi sample-splitting method (Section~\ref{subsec.multisample-split}) can be adapted to the context of hierarchical testing of groups by using hierarchical adjustment of\vadjust{\goodbreak} familywise error control (\cite{Meins08}). When testing a cluster hypothesis $H_{0,{\mathcal C}}$, one can use a modified form of the partial $F$-test for high-dimensional settings; the multiple testing adjustment due to the multiple cluster hypotheses considered can be taken care of by a hierarchical adjustment scheme proposed in \citet{Meins08}. A detailed description of the method, denoted here by \emph{Hier. MS-Split}, together with theoretical guarantees is given in \citet{manbu13}.
\subsection{Simultaneous Inference with the Ridge or Desparsified Lasso Method}\label{subsec.simulcovridgelasso}
Simultaneous inference for all possible groups can be achieved by considering $p$-values $P_j$ of individual hypotheses $H_{0,j}: \beta^0_j = 0$ ($j=1,\ldots,p$) and adjusting them for simultaneous coverage, namely, $P_{\mathrm{adjusted},j} = P_j \cdot p$. The individual $p$-values $P_j$ can be obtained by the Ridge or desparsified Lasso method in Section~\ref{sec.LM}.
We can then test any group hypothesis $H_{0,G}: \beta_j^0 = 0$ for all $j \in G$ by simply checking whether $\min_{j \in G} P_{\mathrm{adjusted},j} \le \alpha$, and we can consider as many group hypotheses as we want without any further multiple testing adjustment.
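This simultaneous group-testing rule amounts to a one-liner; a Python sketch with illustrative names:

```python
import numpy as np

def group_test(pvals, group, alpha=0.05):
    """Reject H_{0,G} iff min_{j in G} of the adjusted p-values
    P_adjusted,j = min(1, p * P_j) is <= alpha."""
    pvals = np.asarray(pvals, dtype=float)
    p = pvals.size
    p_adjusted = np.minimum(pvals * p, 1.0)
    return bool(p_adjusted[list(group)].min() <= alpha)
```

Because the adjustment already guarantees simultaneous coverage over all $p$ individual hypotheses, any number of groups may be tested this way without further correction.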
\subsection{Illustrations}\label{subsec.illustrations}
A semi-real data example is shown in Figure~\ref{fig:treeridge}, where the predictor variables are taken from the Riboflavin data set (\cite{bumeka13})\vadjust{\goodbreak} ($n=71, p=4088$) and the coefficient vector is taken to have entries 0, except for 2 clusters of highly correlated variables. In example 1, the clusters both have size 3 with nonzero coefficient sizes equal to 1 for all the variables in the clusters and Gaussian noise level $\sigma=0.1$. In example 2, the clusters are bigger and have different sizes 11 and 21; the coefficient sizes for all the variables in the clusters is again 1, but the Gaussian noise level here is chosen to be $\sigma=0.5$.
In the first example, 6 out of the 6 relevant variables are discovered as individually significant by the \emph{Lasso-Pro}, \emph{Ridge} and \emph{MS-Split} methods (as outlined in Sections~\ref{subsec.multisample-split}--\ref{subsec.desparslasso}), after adjusting for multiplicity.
In the second example, the methods cannot reject the single variables individually any longer. The results for the \emph{Group-bound} estimator are shown in the right column. The \emph{Group-bound} can reject groups of 4 and 31 variables in the first example, each containing a true cluster of 3 variables. The method can also detect a group of 2 variables (a subset of the cluster of~4) which contains 2 out of the 3 highly correlated variables. In the second example, a group of 34 variables is rejected with the \emph{Group-bound} estimator, containing 16 of the group of 21 important variables. The smallest group of variables containing the cluster of 21 that the method can detect is of size 360. It can thus be detected that the variables jointly have a substantial effect even though the null hypothesis cannot be rejected for any variable individually. The hierarchical Multi sample-splitting method (outlined in Section~\ref{subsec.mssplitgroup}) manages to detect the same clusters as the \emph{Group-bound} method. It even goes one step further by detecting a smaller subcluster.
\begin{figure}
\caption{The power for the rejection of the group-hypothesis of all variables (top) and the power for the rejection of the group-hypothesis of the variables in blocks highly correlated with $S_0$ variables (bottom). The design matrix used is of type \emph{Block Equi.corr} which is similar to the Equi.corr setup in that $\Sigma$ is block diagonal with blocks (of size $20 \times20$) being the~$\Sigma$ of Equi.corr. The power is plotted as a function of the correlations in the blocks, quantified by~$\rho$. The Ridge-based method loses power as the correlation between variables increases, while the group bound, Hier. MS-Split and Lasso-Pro methods can maintain power close to 1 for both measures of power.}
\label{fig:testgrouppower}
\end{figure}
\begin{figure}
\caption{The power for the rejection of the group-hypothesis of all $S_0$ variables (top) and type I error rate corresponding to the rejection of the group-hypothesis of all $S_0^c$ variables (bottom) for the design matrix of type \emph{Block Equi.corr} when changing the correlation $\rho$ between variables. The design matrix type is described in detail in the caption of Figure \protect\ref{fig:testgrouppower} and in the text. The desparsified Lasso, Hier. MS-Split and the Ridge-based method lose power as the correlation between variables increases, while the \emph{Group-bound} cannot reject the small group of variables $S_0$ (3~in this case). The desparsified Lasso and MS-Split methods also exceed the nominal type I error rate for high correlations (as the design assumptions break down), whereas the Ridge-based method and the \emph{Group-bound} are both within the nominal 5\% error rate for every correlation strength.}
\label{fig:testgroup}
\end{figure}
We also consider the following simulation model.
The design matrix was chosen such that the population covariance matrix $\Sigma$ is block diagonal, with blocks of dimension $20 \times20$ of the same type as $\Sigma$ for Equi.corr (see Section~\ref{subsubsec.data}) but with off-diagonal entries $\rho$ instead of $0.8$. The dimensions of the problem were $p=500$ variables, $n=100$ samples and noise level $\sigma=1$. There were only 3 nonzero coefficients, with three different signal levels $U[0,2]$, $U[0,4]$ and $U[0,8]$ used for the simulations. Aside from varying the signal level, we studied two cases: in one, all the nonzero coefficients were contained in a single highly correlated block; in the other, each of these variables was in a different block.\vadjust{\goodbreak} We look at 3 different measures of power. One can define the power as the fraction of the 100 repeated simulations in which the method managed to reject the group of all variables $G = \{1,\ldots,p\}$. This is shown at the top in Figure~\ref{fig:testgrouppower}. Alternatively, one can look at the rejection rate of the hypothesis for the group $G$ that contains all variables in the highly correlated blocks that contain a variable from $S_0$. This is the plot at the bottom in Figure~\ref{fig:testgrouppower}. Finally, one can look at the rejection rate of the hypothesis where the group $G$ contains only the variables in $S_0$ (of size 3 in this case). The type I error we define to be the fraction of the simulations in which the method rejected the group hypothesis $H_{0,S_0^c}$ where all regression coefficients are equal to zero. These last two measures are presented in Figure~\ref{fig:testgroup}.
The power of the Ridge-based method (\cite{pb13}) drops substantially for high correlations. The power of the \emph{Group-bound} stays close to 1 at the level of the highly correlated groups (Block-power) and above (Power $G=\{1,\ldots,p\}$) throughout the entire range of correlation values. The \emph{Lasso-Pro} and \emph{MS-Split} perform well here as well. The power of the\vadjust{\goodbreak} \emph{Group-bound} is 0 when attempting to reject the small groups $H_{0,S_0}$. The type I error rate is supposed to be controlled at level $\alpha=0.05$ by all three methods. However, the \emph{Lasso-Pro} and the hierarchical \emph{MS-Split} methods fail to control the error rates, with the type I error rate even approaching 1 for large values of the correlation. The \emph{Group-bound} and Ridge-based estimator have, in contrast, a type I error rate close to 0 for all values of the correlation.
For highly correlated groups of variables, trying to detect the effect of individual variables thus has two inherent dangers: the power to detect interesting groups of variables might be very low, and the assumptions of the methods might be violated, which invalidates the type I error control. The assumption-free \emph{Group-bound} method provides a powerful test for the group effects even if variables are perfectly correlated, but suffers in power, relatively speaking, when variables are not highly correlated.
\subsection{\texttt{hdi} for Hierarchical Inference} An implementation of the \emph{Group-bound} method is provided in the \texttt{hdi} \texttt{R}-package (\cite{hdipackage}).
For specific groups, one can provide a vector or a list of vectors where the elements of the vector specify the desired columns of $\mathbf{X}$ to be tested for. The following code tests the group hypothesis if the group contains all variables:
\begin{verbatim}
> group <- 1:ncol(x)
> outGroupBound <- groupBound(x = x, y = y,
    group = group, alpha = 0.05)
> rejection <- outGroupBound > 0
\end{verbatim}
Note that one needs to specify the significance level~$\alpha$.
One can also let the method itself apply the hierarchical clustering scheme as described at the beginning of Section~\ref{sect.hierinf}.
This works as follows:
\begin{verbatim}
> outClusterGroupBound <- clusterGroupBound(x = x,
    y = y, alpha = 0.05)
\end{verbatim}
The output contains all clusters that were tested for significance in \texttt{members}. The corresponding lower bounds are contained in \texttt{lowerBound}.
To extract the significant clusters, one can do
\begin{verbatim}
> significant.cluster.numbers <-
    which(outClusterGroupBound$lowerBound > 0)
> significant.clusters <-
    outClusterGroupBound$members[[significant.cluster.numbers]]
\end{verbatim}
Figures in the style of Figure~\ref{fig:treeridge} can be produced by using the function \texttt{plot} on \texttt{outClusterGroupBound}.
Note that one can specify the distance matrix used for the hierarchical clustering, as done for \texttt{hclust}.
To test group hypotheses $H_{0,G}$ for the Ridge and desparsified Lasso method as described in Section~\ref{subsec.simulcovridgelasso}, one uses the output from the original single parameter fit, as illustrated for the group of all variables:
\begin{verbatim}
> outRidge <- ridge.proj(x = x, y = y)
> outLasso <- lasso.proj(x = x, y = y)
> group <- 1:ncol(x)
> outRidge$groupTest(group)
> outLasso$groupTest(group)
\end{verbatim}
To apply a hierarchical clustering scheme as done in \texttt{clusterGroupBound}, one calls \texttt{clusterGroupTest}:
\begin{verbatim}
> outRidge$clusterGroupTest(alpha = 0.95)
\end{verbatim}
To summarize, the \texttt{R}-package provides functions to test individual groups as well as to test according to a hierarchical clustering scheme for the methods \emph{Group-bound}, Ridge and desparsified Lasso. An implementation of the hierarchical Multi sample-splitting method is not provided at this point in time.
\section{Stability Selection and Illustration with \texttt{hdi}}
Stability selection (\cite{mebu10}) is another methodology to guard against false positive selections, by controlling the expected number of false positives $\mathbb{E}[V]$. The focus is on selection of a single or a group of variables in a regression model, or on a selection of more general discrete structures such as graphs or clusters. For example, for a linear model in
(\ref{mod.lin}) and with a selection of single variables, stability selection provides a subset of variables $\hat{S}_{\mathrm{stable}}$ such that for $V = |\hat {S}_{\mathrm{stable}}
\cap S_0^c|$ we have that $\mathbb{E}[V] \le M$, where $M$ is a prespecified number.
For selection of single variables in a regression model, the method does not need a beta-min assumption, but the theoretical analysis of stability selection for controlling $\mathbb{E}[V]$ relies on a restrictive exchangeability condition (which, e.g., is ensured by a restrictive condition on the design matrix). This exchangeability condition seems far from necessary though (\cite{mebu10}). A refinement of stability selection is given in \citet{shah13}.
An implementation of the stability selection procedure is available in the \texttt{hdi} \texttt{R}-package. It is called in a very similar way as the other methods. If we want to control, for example, $\mathbb{E}[V] \le1$, we use
\begin{verbatim}
> outStability <- stability(x = x, y = y, EV = 1)
\end{verbatim}
The ``stable'' predictors are available in the element \texttt{select}.
The default model selection algorithm is the Lasso (the first $q$ variables entering the Lasso paths).\vadjust{\goodbreak} The option \texttt{model.selector} allows one to apply a user-defined model selection function.
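The subsampling mechanics behind stability selection can be sketched in a few lines of Python. The selector below (the $q$ columns with the largest absolute inner product with the centered response) is a hypothetical stand-in for the first $q$ variables entering the Lasso path, and thresholding the selection frequency at a fixed value is a simplification of the actual $\mathbb{E}[V]$ calibration; the function name is ours:

```python
import numpy as np

def stability_selection(X, y, q=5, threshold=0.6, B=100, seed=0):
    """Selection frequencies over B random half-samples.

    Hypothetical selector: the q columns with largest |inner product|
    with the centered response (a stand-in for the first q variables
    entering the Lasso path).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)
        score = np.abs(X[idx].T @ (y[idx] - y[idx].mean()))
        counts[np.argsort(score)[-q:]] += 1  # top-q by score
    freq = counts / B
    return np.flatnonzero(freq >= threshold), freq

# Toy data: only variable 0 has a true effect.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = 3.0 * X[:, 0] + rng.standard_normal(200)
stable, freq = stability_selection(X, y)
```

On this toy data the truly relevant variable is selected in essentially every half-sample, so it survives the frequency threshold.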
\section{R Workflow Example} We go through a possible \texttt{R} workflow based on the Riboflavin data set (\cite{bumeka13}) and methods provided in the \texttt{hdi} \texttt{R}-package:
\begin{verbatim}
> library(hdi)
> data(riboflavin)
\end{verbatim}
We assume a linear model and we would like to investigate which effects are statistically significant on a significance level of $\alpha=0.05$. Moreover, we want to construct the corresponding confidence intervals.
We start by looking at the individual variables. We want a conservative approach and, based on the results from Section~\ref{subsec.comparlm}, we choose the Ridge projection method for its good error control:
\begin{verbatim}
> outRidge <- ridge.proj(x = riboflavin$x,
    y = riboflavin$y)
\end{verbatim}
We investigate if any of the multiple testing corrected $p$-values are smaller than our chosen significance level:
\begin{verbatim}
> any(outRidge$pval.corr <= 0.05)
[1] FALSE
\end{verbatim}
We calculate the 95\% confidence intervals for the first 3 predictors:
\begin{verbatim}
> confint(outRidge, parm = 1:3, level = 0.95)
            lower    upper
AADK_at -0.8848403 1.541988
AAPA_at -1.4107374 1.228205
ABFA_at -1.3942909 1.408472
\end{verbatim}
Disappointed with the lack of significance for testing individual variables, we want to investigate whether we can find a significant group instead. From the procedure proposed for the Ridge method in Section~\ref{sect.hierinf}, we know that if the Ridge method cannot find any significant individual variables, it will not find a significant group either.
We apply the Group-bound method with its clustering option to try to find a significant group:
\begin{verbatim}
> outClusterGroupBound <-
    clusterGroupBound(x = riboflavin$x,
      y = riboflavin$y, alpha = 0.05)
> significant.cluster.numbers <-
    which(outClusterGroupBound$lowerBound > 0)
> significant.clusters <-
    outClusterGroupBound$members[[significant.cluster.numbers]]
> str(significant.clusters)
 num [1:4088] 1 2 3 4 5 6 7 8 9 10 ...
\end{verbatim}
Only a single group, being the root node of the clustering tree, is found significant.
These results are in line with the results achievable in earlier studies of the same data set in \citet{bumeka13} and \citet{vdgetal13}.
\section{Concluding Remarks}
We present a (selective) overview of recent developments in frequentist high-dimensional inference for constructing confidence intervals and assigning \mbox{$p$-}values for the parameters in linear and generalized linear models. We include some methods which are able to detect significant groups of highly correlated variables which cannot
be individually detected as single variables. We complement the methodology and theory viewpoints with a broad empirical study. The latter indicates that more ``stable'' procedures based on Ridge estimation or sample splitting with subsequent aggregation might be more reliable for type I error control, at the price of losing power; asymptotically, power-optimal methods perform nicely in well-posed scenarios but are more exposed to fail for error control in more difficult settings where the design or degree of sparsity are more ill-posed. We introduce the \texttt{R}-package \texttt{hdi} which allows the user to choose from a collection of frequentist inference methods and eases reproducible research.
\subsection{Post-Selection and Sample Splitting Inference}\label{subsec.postsel}
Since the main assumptions outlined in Section~\ref{subsec.mainass} might be unrealistic in practice, one can consider a different route.
The view and ``POSI'' (Post-Selection Inference) method by \citet{berketal13} makes inferential statements which are protected against all possible submodels and, therefore, the procedure is not exposed to the issue of having selected an ``inappropriate'' submodel. The way in which \citet{berketal13} deal with misspecification of the (e.g., linear) model is closely related to addressing this issue with the Multi sample splitting or desparsified Lasso method; see Section~\ref{subsec.mainass} and \citet{pbvdg15}. The method by \citet{berketal13} is conservative, as it protects against any possible submodel, and it is not feasible yet for high-dimensional problems. \citet{wass14} briefly describes the ``HARNESS'' (High-dimensional Agnostic Regression Not Employing Structure or Sparsity) procedure: it is based on single data splitting and making inference for the selected submodel from the first half of the data. When giving up on the goal to infer the true or best approximating parameter $\beta^0$ in (\ref{betaproj}), one can drop many of the main assumptions which are needed for high-dimensional inference.
The ``HARNESS'' is related to post-selection inference where the inefficiency of sample splitting is avoided. Some recent work includes exact post-selection inference, where the full data is used for selection and inference: it aims to avoid the potential inefficiency of single sample splitting and to be less conservative than ``POSI'', thereby restricting the focus to a class of selection procedures which are determined by affine inequalities, including the Lasso and least angle regression (\cite{lee13}; \cite{taylor14}; \cite{fithian14}).
Under some conditions, the issue of selective inference can be addressed by using an adjustment factor (\cite{beye05}): this could be done by adjusting the output of our high-dimensional inference procedures, for example, from the \texttt{hdi} \texttt{R}-package.
\begin{appendix}\label{app}
\section*{Appendix}
\setcounter{subsection}{0} \subsection{Additional Definitions and Descriptions}\label{subsec.appadd}
\emph{Compatibility condition} (\cite{pbvdg11}, page 106). Consider a fixed design matrix $\mathbf{X}$. We define the following:
The compatibility condition holds if for some $\phi_0 >0$ and all $\beta$
satisfying $\|\beta_{S_0^c}\|_1 \le3 \|\beta_{S_0}\|_1$,
\begin{eqnarray}
\label{compat} \|\beta_{S_0}\|_1^2 \le \beta^T \hat{\Sigma} \beta s_0/\phi_0^2,\quad \hat{\Sigma} = n^{-1} \mathbf{X}^T \mathbf{X}. \end{eqnarray}
Here $\beta_{A}$ denotes the components $\{\beta_j;j \in A\}$ where $A \subseteq\{1,\ldots,p\}$. The number $\phi_0$ is called the compatibility constant.
\emph{Aggregation of dependent $p$-values.} Aggregation of dependent $p$-values can be generically done as follows.
\begin{lemm}[{[Implicitly contained in \citet{memepb09}]}] Assume that we have\vadjust{\goodbreak} $B$ $p$-values $P^{(1)},\ldots ,P^{(B)}$ for testing a null-hypothesis $H_0$, that is, for every $b \in\{1,\ldots ,B\}$ and any $0 < \alpha< 1$, $\mathbb{P}_{H_0}[P^{(b)} \le\alpha] \le \alpha$. Consider for any $0 < \gamma< 1$ the empirical $\gamma$-quantile
\begin{eqnarray*} &&Q(\gamma) \\ &&\quad= \min \bigl(\mbox{empirical $\gamma$-quantile} \bigl \{P^{(1)}/\gamma,\ldots,P^{(B)}/\gamma\bigr\},\\ &&\qquad 1 \bigr), \end{eqnarray*}
and the minimum value of $Q(\gamma)$, suitably corrected with a factor, over the range $(\gamma_{\mathrm{min}},1)$ for some positive (small) $0<\gamma_{\mathrm{min}} < 1$:
\begin{eqnarray*} P = \min \Bigl(\bigl(1 - \log(\gamma_{\mathrm{min}})\bigr) \min _{\gamma\in (\gamma_{\mathrm{min}},1)} Q(\gamma), 1 \Bigr). \end{eqnarray*}
Then, both $Q(\gamma)$ [for any fixed $\gamma\in(0,1)$] and $P$ are conservative $p$-values satisfying for any $0 < \alpha< 1$, $\mathbb{P}_{H_0}[Q(\gamma) \le\alpha] \le \alpha$ or $\mathbb{P}_{H_0}[P \le\alpha] \le\alpha$, respectively. \end{lemm}
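The construction in the lemma is easy to state in code. The following Python sketch computes $Q(\gamma)$ for fixed $\gamma$ and the $\gamma$-adaptive $P$; the finite grid over $(\gamma_{\mathrm{min}},1)$ is a discretization chosen here for illustration, and the helper names are ours:

```python
import numpy as np

def Q(pvals, gamma):
    """Q(gamma) = min(empirical gamma-quantile of {P^(b)/gamma}, 1)."""
    return min(np.quantile(np.asarray(pvals) / gamma, gamma), 1.0)

def aggregate(pvals, gamma_min=0.05, grid_size=100):
    """P = min((1 - log(gamma_min)) * inf_gamma Q(gamma), 1),
    with the infimum taken over a finite grid in (gamma_min, 1)."""
    gammas = np.linspace(gamma_min, 1.0, grid_size)
    return min((1.0 - np.log(gamma_min)) * min(Q(pvals, g) for g in gammas),
               1.0)

pvals = [0.01, 0.20, 0.03, 0.50, 0.02]
p_agg = aggregate(pvals)
```

The correction factor $1-\log(\gamma_{\mathrm{min}})$ is the price paid for searching over $\gamma$; with $\gamma_{\mathrm{min}}=0.05$ it is roughly 4.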
\emph{Bounding the error of the estimated bias correction in the desparsified Lasso.} We will argue now why the error from the bias correction
\[ \sum_{k \neq j} \sqrt{n} P_{jk}\bigl(\hat{ \beta}_k - \beta^0_k\bigr) \]
is negligible. From the KKT conditions when using the Lasso of $\mathbf{X}^{(j)}$ versus $\mathbf{X}^{(-j)}$, we have (B{\"u}hlmann and van~de Geer, \citeyear{pbvdg11}, cf. Lemma~2.1)
\begin{equation}
\label{KKT} \max_{k \neq j} 2 \bigl|n^{-1}
\bigl(X^{(k)}\bigr)^T Z^{(j)}\bigr| \le \lambda_j. \end{equation}
Therefore,
\begin{eqnarray*}
&&\biggl|\sqrt{n} \sum_{k \neq j} P_{jk}\bigl(\hat{
\beta}_k - \beta^0_k\bigr)\biggr| \\ &&\quad\le\sqrt{n}
\max_{k\neq j} |P_{jk}| \bigl\|\hat{\beta} -
\beta^0\bigr\|_1 \\
&&\quad\le2 \sqrt{n} \lambda_j\bigl \|\hat{\beta} - \beta^0
\bigr\|_1 \bigl(n^{-1} \bigl(\mathbf{X}^{(j)} \bigr)^T Z^{(j)}\bigr)^{-1}. \end{eqnarray*}
Assuming sparsity and the compatibility condition (\ref{compat}), and when choosing $\lambda_j \asymp\sqrt{\log(p)/n}$, one can show that
$(n^{-1} (\mathbf{X}^{(j)})^T Z^{(j)})^{-1} = O_P(1)$ and $\|\hat{\beta} -
\beta^0\|_1 = O_P(s_0 \sqrt{\log(p)/n})$ [for the latter, see (\ref{lasso-ell1})]. Therefore,
\begin{eqnarray*}
&&\biggl|\sqrt{n} \sum_{k \neq j} P_{jk}\bigl(\hat{
\beta}_k - \beta^0_k\bigr)\biggr| \\ &&\quad\le O_P\bigl(\sqrt{n} s_0 \sqrt{\log(p)/n} \lambda_j\bigr) \\ &&\quad= O_P\bigl(s_0 \log(p) n^{-1/2}\bigr), \end{eqnarray*}
where the last bound follows by assuming $\lambda_j \asymp \sqrt{\log(p)/n}$. Thus, if $s_0 \ll n^{1/2} / \log(p)$, the error from bias correction is asymptotically negligible.
\emph{Choice of $\lambda_j$ for desparsified Lasso.} We see from (\ref{KKT}) that the numerator of the error in the bias correction term (i.e., the $P_{jk}$'s) decreases as $\lambda_j \searrow 0$; for controlling the denominator, $\lambda_j$ should not be too small, to ensure that the denominator [i.e., $n^{-1} (\mathbf{X}^{(j)})^T Z^{(j)}$] behaves reasonably (staying away from zero) for a fairly large range of $\lambda_j$.
Therefore, the strategy is as follows:
\begin{longlist}[1.]
\item[1.] Compute a Lasso regression of $\mathbf{X}^{(j)}$ versus all other variables $\mathbf{X}^{(-j)}$ using CV, and the corresponding residual vector is denoted by $Z^{(j)}$.
\item[2.] Compute $\|Z^{(j)}\|_2^2/((\mathbf{X}^{(j)})^T Z^{(j)})^2$ which is the asymptotic variance of $\hat{b}_j/\sigma_{\varepsilon}$, assuming that the error in the bias correction is negligible.
\item[3.] Increase the variance by 25\%, that is,
$V_j = 1.25 \|Z^{(j)}\|_2^2/((\mathbf{X}^{(j)})^T Z^{(j)})^2$.
\item[4.] Search for the smallest $\lambda_j$ such that the corresponding residual vector $Z^{(j)}(\lambda_j)$ satisfies
\begin{eqnarray*}
\bigl\|Z^{(j)}(\lambda_j)\bigr\|_2^2/\bigl( \bigl(\mathbf{X}^{(j)}\bigr)^T Z^{(j)}( \lambda_j)\bigr)^2 \le V_j. \end{eqnarray*}
\end{longlist}
This procedure is similar to the choice of $\lambda_j$ advocated in \citet{zhangzhang11}.
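Steps 1--4 can be mimicked numerically. In the sketch below, ridge regression is substituted for the Lasso (purely so that the step-1 residuals have a closed form and the example stays self-contained); the tracked quantity is the variance proxy $\|Z^{(j)}\|_2^2/((\mathbf{X}^{(j)})^T Z^{(j)})^2$ from steps 2--4, and the function names are ours:

```python
import numpy as np

def variance_criterion(xj, Z):
    """||Z||_2^2 / ((X^(j))^T Z)^2, the asymptotic variance proxy of step 2."""
    return (Z @ Z) / (xj @ Z) ** 2

def choose_lambda(X, j, lambdas):
    """Steps 1-4, with ridge regression in place of the Lasso so that
    the residuals have a closed form (a simplification of step 1)."""
    xj = X[:, j]
    Xmj = np.delete(X, j, axis=1)
    p = Xmj.shape[1]

    def residual(lam):
        beta = np.linalg.solve(Xmj.T @ Xmj + lam * np.eye(p), Xmj.T @ xj)
        return xj - Xmj @ beta

    # Steps 1-3: criterion at a default (median) lambda, inflated by 25%.
    V_j = 1.25 * variance_criterion(xj, residual(np.median(lambdas)))
    # Step 4: smallest lambda whose residuals still satisfy the bound.
    for lam in sorted(lambdas):
        if variance_criterion(xj, residual(lam)) <= V_j:
            return lam
    return max(lambdas)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
lam_star = choose_lambda(X, 0, lambdas=[0.1, 0.5, 1.0, 5.0, 10.0])
```

Since the criterion at the default $\lambda$ is $V_j/1.25 \le V_j$ by construction, the search always returns a $\lambda_j$ no larger than the default.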
\emph{Bounding the error of bias correction for the Ridge projection.} The goal is to derive the formula (\ref{Ridge-repr}). Based on (\ref{Ridge-distr}), we have
\begin{eqnarray*} &&\sigma_{\varepsilon}^{-1} \Omega_{R;jj}^{-1/2} \bigl(\hat{b}_{R;j} - \beta^0_j\bigr)\\ &&\quad\approx \Omega_{R;jj}^{-1/2} W_j / P_{R;jj} \\ &&\qquad{}+ \sigma_{\varepsilon}^{-1} \Omega_{R;jj}^{-1/2} \Delta_{R;j},\quad W \sim{\mathcal N}_p(0, \Omega_R), \nonumber \\
&&|\Delta_{R;j}|\le\max_{k \neq j} \biggl\vert \frac
{P_{R;jk}}{P_{R;jj}}\biggr\vert \bigl\|\hat{\beta} - \beta^0
\bigr\|_1. \end{eqnarray*}
In relation to the result in Fact~\ref{th2} for the desparsified Lasso, the problem here is that the behaviors of $\max_{k \neq j} |P_{R;jj}^{-1}
P_{R;jk}|$ and of the diagonal elements $\Omega_{R;jj}$ are hard to control, but, fortunately, these quantities are fixed and observed for fixed design $\mathbf{X}$.
By invoking the compatibility constant for the design~$\mathbf{X}$, we obtain the bound for $\|\hat{\beta} - \beta^0\|_1 \le s_0 4\lambda/\phi_0$ in (\ref{lasso-ell1}) and, therefore, we can upper-bound
\[
|\Delta_{R;j}| \le4 s_0 \lambda/\phi_0^2 \max_{k \neq j} \biggl\vert \frac{P_{R;jk}}{P_{R;jj}}\biggr\vert .\vadjust{\goodbreak} \]
Asymptotically, for Gaussian errors, we have with high probability
\begin{eqnarray}
\label{delta-bound} |\Delta_{R;j}| &=& O\biggl(s_0 \sqrt{ \log(p)/n} \max_{k \neq j}\biggl\vert \frac{P_{R;jk}}{P_{R;jj}}\biggr \vert \biggr) \nonumber \\[-8pt] \\[-8pt] \nonumber &\le& O\biggl(\bigl(\log(p)/n\bigr)^{1/2 - \xi}\max_{k \neq j} \biggl\vert \frac{P_{R;jk}}{P_{R;jj}}\biggr\vert \biggr), \end{eqnarray}
where the last inequality holds due to assuming $s_0 =O((n/\log(p))^{\xi})$ for some $0 < \xi< 1/2$. In practice, we use the bound from (\ref{delta-bound}) in the form
\begin{eqnarray*} \Delta_{R\mathrm{bound};j} := \max_{k \neq j} \biggl\vert \frac{P_{R;jk}}{P_{R;jj}}\biggr\vert \bigl(\log(p)/n\bigr)^{1/2 - \xi}, \end{eqnarray*}
with the typical choice $\xi= 0.05$.
\subsection{Confidence Intervals for Multi Sample-Splitting}\label {subsec.appmssplitci} We construct confidence intervals that satisfy the duality with the $p$-values from equation (\ref{aggreg}), and, thus, they are corrected already for multiplicity:
\begin{eqnarray*} &&\mbox{$(1-\alpha)$\% CI} \\ &&\quad= \mbox{Those values } c \mbox{ for which the $p$-value }\geq\\ &&\qquad \alpha\mbox{ for testing the null hypothesis } H_{0,j}:\beta_j=c, \\ &&\quad=\mbox{Those } c \mbox{ for which the $p$-value resulting from}\\ &&\qquad\mbox{the $p$-value aggregation procedure is} \geq\alpha, \\
&&\quad= \{c | P_j \geq\alpha\}, \\
&&\quad= \Bigl\{c | (1-\log{\gamma_{\mathrm{min}}})\inf_{\gamma\in(\gamma_{\mathrm{min}},1)} Q_j(\gamma) \geq\alpha\Bigr\}, \\
&&\quad= \bigl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1): (1-\log{ \gamma_{\mathrm{min}}}) Q_j(\gamma) \geq\alpha\bigr\}, \\
&&\quad= \bigl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1):\\ &&\qquad \min\bigl(1,\mathrm{emp.}\ \gamma\ \mathrm{quantile} \bigl(P_{\mathrm{corr};j}^{[b]}\bigr)/\gamma\bigr)\geq\\ &&\qquad \alpha/(1-\log{\gamma_{\mathrm{min}}})\bigr\} , \\
&&\quad= \bigl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1):\\ &&\qquad \mathrm{emp.}\ \gamma\ \mathrm{quantile} \bigl(P_{\mathrm{corr};j}^{[b]}\bigr)/\gamma\geq\\ &&\qquad\alpha/(1-\log{ \gamma_{\mathrm{min}}})\bigr\} , \\
&&\quad = \biggl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1):\\ &&\qquad \mathrm{emp.}\ \gamma\ \mathrm{quantile} \bigl(P_{\mathrm{corr};j}^{[b]}\bigr) \geq\frac{\alpha \gamma}{(1-\log{\gamma_{\mathrm{min}}})}\biggr \}. \end{eqnarray*}
We will use the notation $\gamma^{[b]}$ for the position of $P_{\mathrm{corr};j}^{[b]}$ in the ordering of the corrected $p$-values $P_{\mathrm{corr};j}^{[i]}$ by increasing value, divided by $B$.
We can now rewrite our former expression in a form explicitly using our information from every sample split
\begin{eqnarray*} && \mbox{$(1-\alpha)$\% CI} \\
&&\quad= \biggl\{c |\forall b =1,\ldots,B: \bigl(\gamma^{[b]} \leq \gamma_{\mathrm{min}}\bigr)\\ &&\qquad{}\lor\biggl(P_{\mathrm{corr};j}^{[b]} \geq \frac{\alpha\gamma^{[b]}}{(1-\log{\gamma_{\mathrm{min}}})}\biggr) \biggr\} \\
&&\quad= \biggl\{c | \forall b =1,\ldots,B: \bigl(\gamma^{[b]} \leq\gamma_{\mathrm{min}}\bigr)\\ &&\qquad{}\lor
\biggl(c \in\mbox{ the } \biggl(1-\frac{\alpha\gamma^{[b]}}{(1-\log{\gamma_{\mathrm{min}}})|\hat
{S}^{[b]}|} \biggr)\\ &&\qquad{}\cdot 100\% \mbox{ CI for split $b$}\biggr) \biggr\}.
\end{eqnarray*}
For single testing (not adjusted for multiplicity), the corresponding confidence interval becomes
\begin{eqnarray*} & &\mbox{$(1-\alpha)$\% CI} \\
&&\quad = \biggl\{c | \forall b =1,\ldots,B: \bigl(\gamma^{[b]} \leq \gamma_{\mathrm{min}}\bigr)\\ &&\qquad{}\lor\biggl(c \in\mbox{ the } \biggl(1- \frac{\alpha\gamma^{[b]}}{(1-\log{\gamma_{\mathrm{min}}})} \biggr)\\ &&\qquad{}\cdot 100\% \mbox{ CI for split $b$}\biggr) \biggr\}. \end{eqnarray*}
Given two starting points, one inside the confidence interval and one outside of it, one can apply the bisection method to find the boundary between these points.
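The bisection step can be sketched generically: given any $p$-value function that is monotone in the hypothesized value $c$ near the interval boundary, interval halving locates the edge of $\{c \mid \mbox{$p$-value} \geq \alpha\}$. The two-sided Gaussian $p$-value below is purely illustrative and is not the Multi sample-splitting $p$-value itself:

```python
import math

def pvalue(c, estimate=2.0, se=0.5):
    """Illustrative two-sided Gaussian p-value for the hypothesis beta_j = c."""
    z = abs(estimate - c) / se
    return math.erfc(z / math.sqrt(2.0))  # = 2 * (1 - Phi(z))

def upper_bound(alpha=0.05, inside=2.0, outside=10.0, tol=1e-8):
    """Bisect between a point inside the CI and a point outside of it."""
    while outside - inside > tol:
        mid = 0.5 * (inside + outside)
        if pvalue(mid) >= alpha:
            inside = mid      # not rejected: mid is still inside the CI
        else:
            outside = mid     # rejected: mid is outside the CI
    return 0.5 * (inside + outside)

ub = upper_bound()  # approaches estimate + 1.96 * se for alpha = 0.05
```

In the Multi sample-splitting setting, the same loop is run with the aggregated $p$-value in place of the Gaussian one.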
\subsection{Weighted Squared Error Approach for General GLM}\label{subsec.app.general.wsqerr} We describe the approach presented in Section~\ref{subsec.GLMweighted} in a more general way. One algorithm for fitting generalized linear models is to calculate the maximum likelihood estimates $\hat{\beta}$ by applying iterative weighted least squares (\cite{mccullagh1989generalized}).
As in Section~\ref{subsec.GLMweighted}, the idea is to apply a standard $\ell_1$-penalized fitting of the model, then build up the weighted least squares problem at the $\ell_1$-solution and apply our linear model methods to this problem.
From \citet{mccullagh1989generalized}, using the notation $\hat{z}_i = g^{-1}((\mathbf X \hat{\beta})_i), i=1 , \ldots, n$, the adjusted response variable becomes
\begin{eqnarray} Y_{i,\mathrm{adj}} = (\mathbf X \hat{\beta})_i + (Y_i- \hat{z}_i) \frac
{\partial g(z)}{\partial z} \bigg|_{z=\hat{z}_i},\nonumber\\
\eqntext{i = 1 , \ldots, n .} \end{eqnarray}
We then get a weighted least squares problem
\[ \hat{\beta}_{\mathrm{new}} = \operatorname{argmin}_{\beta} (Y_{\mathrm{adj}} - \mathbf X \beta)^T \mathbf W (Y_{\mathrm{adj}} - \mathbf X \beta), \]
with weights
\begin{eqnarray*} &&\mathbf W^{-1} \\ &&\quad= \left(\matrix{\displaystyle \biggl(\frac{\partial g(z)}{\partial z}
\biggr)^2 \bigg|_{z=\hat{z}_1} V(\hat{z}_1) & 0 \vspace*{2pt}\cr 0 & \displaystyle\biggl(\frac{\partial g(z)}{\partial z}\biggr)^2
\bigg|_{z=\hat{z}_2} V(\hat{z}_2) \vspace*{2pt} \cr \vdots& \ddots\vspace*{2pt} \cr 0 & \ldots } \right. \\ &&\quad\hspace*{20pt}\left.\matrix{\ldots& 0 \vspace*{2pt}\cr \ddots& \vdots \vspace*{2pt}\cr \ddots& 0 \vspace*{2pt}\cr 0 & \displaystyle\biggl(
\frac{\partial g(z)}{\partial z}\biggr)^2 \bigg|_{z=\hat{z}_n} V(\hat{z}_n)} \right), \end{eqnarray*}
with variance function $V(z)$.
The variance function $V(z)$ is related to the variance of the response $Y$. To more clearly define this relation, we assume that the response $Y$ has a distribution of the form described in \citet{mccullagh1989generalized}:
\[ f_Y(y;\theta,\phi) = \exp{\bigl[\bigl(y \theta- b(\theta)\bigr)/a( \phi) + c(y,\phi)\bigr]}, \]
with known functions $a(\cdot)$, $b(\cdot)$ and $c(\cdot)$. $\theta$ is the canonical parameter and $\phi$ is the dispersion parameter.
As defined in \citet{mccullagh1989generalized}, the variance function is then related to the variance of the response in the following way:
\[ \operatorname{Var}(Y) = b^{\prime\prime}(\theta)a(\phi)=V\bigl(g^{-1}\bigl(\mathbf X \beta^0\bigr)\bigr) a(\phi). \]
We rewrite $Y_{w} = \sqrt{\mathbf W} Y_{\mathrm{adj}}$ and $X_w = \sqrt{\mathbf W} \mathbf X$ to get
\[ \hat{\beta}_{\mathrm{new}} = \operatorname{argmin}_{\beta} (Y_w - \mathbf X_w \beta)^T(Y_w - \mathbf X_w \beta). \]
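For the logit link these quantities take a familiar form: $g(z)=\log(z/(1-z))$ gives $\partial g/\partial z = 1/(z(1-z))$ and $V(z)=z(1-z)$, hence $\mathbf W = \operatorname{diag}(\hat z_i(1-\hat z_i))$. A minimal numerical sketch (logistic case only; the function name is ours):

```python
import numpy as np

def logistic_weighted_ls_data(X, y, beta_hat):
    """Adjusted response and weights for the logit link.

    g(z) = log(z / (1 - z)), so dg/dz = 1/(z(1-z)) and V(z) = z(1-z);
    hence W^{-1} = (dg/dz)^2 V(z) = 1/(z(1-z)), i.e. W = diag(z(1-z)).
    """
    eta = X @ beta_hat                        # (X beta_hat)_i
    z = 1.0 / (1.0 + np.exp(-eta))            # z_hat_i = g^{-1}((X beta_hat)_i)
    y_adj = eta + (y - z) / (z * (1.0 - z))   # adjusted response Y_adj
    w = z * (1.0 - z)                         # diagonal of W
    sqrt_w = np.sqrt(w)
    return sqrt_w * y_adj, sqrt_w[:, None] * X, w   # Y_w, X_w, diag(W)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
beta_hat = np.array([1.0, -0.5, 0.0, 0.25])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_hat)))
y_w, X_w, w = logistic_weighted_ls_data(X, y, beta_hat)
```

The reweighted pair $(Y_w, \mathbf X_w)$ is then handed to the linear model machinery as described above.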
The linear model methods can now be applied to $Y_{w}$ and $\mathbf X_{w}$, whereby the estimate $\hat{\sigma}_{\varepsilon}$ has to be set to the value 1. \end{appendix}
\section*{Acknowledgments} We would like to thank some reviewers for insightful and constructive comments.
\begin{supplement}[id=suppA]
\stitle{Supplement to ``High-Dimensional Inference:\break \mbox{Confidence} Intervals, $p$-Values and \texttt{R}-Software \texttt{hdi}''} \slink[doi]{10.1214/15-STS527SUPP} \sdatatype{.pdf} \sfilename{sts527\_supp.pdf} \sdescription{The supplemental article contains additional empirical results.} \end{supplement}
\end{document}
\begin{document}
\title[]{Gagliardo--Nirenberg type inequalities using Fractional Sobolev spaces and Besov spaces}
\author{Nguyen Anh Dao}
\address{Nguyen Anh Dao: School of Economic Mathematics and Statistics, University of Economics Ho Chi Minh City (UEH), Vietnam}
\email{[email protected]}
\date{\today}
\begin{abstract} Our main purpose is to establish Gagliardo--Nirenberg type inequalities using fractional homogeneous Sobolev spaces, and homogeneous Besov spaces. In particular, we extend some of the results obtained by the authors in \cite{Brezis1, Brezis2, Brezis3, DaoLamLu1, Miyazaki, Van}.
\end{abstract}
\subjclass[2010]{Primary 46E35; Secondary 46B70.}
\keywords{Gagliardo--Nirenberg's inequality, fractional Sobolev space, Besov space, maximal function.\\}
\maketitle
\section{Introduction}
In this paper, we are interested in
the following Gagliardo--Nirenberg inequality:
\\
For every $0\leq \alpha_1<\alpha_2$, and for $1\leq p_1, p_2, q \leq \infty$, there holds
\begin{equation}\label{-10}
\|f\|_{\dot{W}^{\alpha_1, p_1}} \lesssim \| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{q} } \|f \|^\frac{\alpha_1}{\alpha_2}_{\dot{W}^{\alpha_2, p_2}}
\,,
\end{equation}
where $$\frac{1}{p_1} = \frac{1}{q} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2} \frac{\alpha_1}{\alpha_2} \,,$$
and $\dot{W}^{\alpha,p}(\mathbb{R}^n)$ denotes the homogeneous Sobolev space (see its definition in Section 2).
\\
It is known that inequalities of this type play an important role in the analysis of PDEs. When $\alpha_i$, $i=1,2$, are nonnegative integers, \eqref{-10} was obtained independently by Gagliardo \cite{Gag} and Nirenberg \cite{Nir}.
After that, the inequalities of this type have been studied by many authors in \cite{Brezis1, Brezis2, Brezis3, CDDD, Dao1,DaoLamLu1,DaoLamLu2,Le, LuWheeden,Lu2, Miyazaki,MeRi2003, Van}, and the references cited therein.
\\
The case $q=\infty$ can be considered as a limiting case of \eqref{-10}, i.e:
\begin{equation}\label{-12}
\|D^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{L^\infty} \|D^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,, \quad \forall f\in L^\infty(\mathbb{R}^n)\cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)\,,
\end{equation}
with $p_1=\frac{p_2\alpha_2}{\alpha_1}$.
Obviously, this inequality fails if $\alpha_1=0$. \\
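Let us note that the exponent relation $p_1=\frac{p_2\alpha_2}{\alpha_1}$ in \eqref{-12} is forced by scaling. Applying \eqref{-12} to $f_\lambda(x):=f(\lambda x)$, $\lambda>0$, and using
\[
\|D^{\alpha_i} f_\lambda\|_{L^{p_i}} = \lambda^{\alpha_i-\frac{n}{p_i}} \|D^{\alpha_i} f\|_{L^{p_i}}\,, \quad i=1,2\,, \qquad \|f_\lambda\|_{L^\infty}=\|f\|_{L^\infty}\,,
\]
we see that the inequality can hold for all $\lambda>0$ only if the powers of $\lambda$ on both sides match, that is,
\[
\alpha_1-\frac{n}{p_1} = \frac{\alpha_1}{\alpha_2}\Big(\alpha_2-\frac{n}{p_2}\Big)\,, \quad \mbox{i.e. } p_1=\frac{p_2\alpha_2}{\alpha_1}\,.
\]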
A partial improvement of \eqref{-12} in terms of the {\rm BMO} space was obtained by Meyer and Rivi\`ere \cite{MeRi2003}:
\begin{equation}\label{-13}
\|D f\|^2_{L^{4}} \lesssim \|f\|_{ \rm{BMO} } \| D^2 f\|_{ L^2 } \,,
\end{equation}
for all $f\in {\rm BMO}(\mathbb{R}^n) \cap W^{2,2}(\mathbb{R}^n)$. Thanks to \eqref{-13}, the authors proved a regularity result for a class of stationary Yang--Mills fields in high dimension.
\\
After that, \eqref{-13} was extended to higher derivatives by the authors in \cite{Strz,Miyazaki}. Precisely, the following holds:
\begin{equation}\label{-14}
\|D^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{{\rm BMO}} \|D^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,,
\end{equation}
for all $f\in {\rm BMO}(\mathbb{R}^n) \cap W^{\alpha_2,p_2}(\mathbb{R}^n)$, $p_2>1$.
\\
Recently, the author et al. \cite{DaoLamLu1} improved \eqref{-14} by means of the homogeneous Besov spaces. For convenience, we recall the result here.
\begin{theorem}[see Theorem 1.2, \cite{DaoLamLu1}] \label{TheC} \sl
Let $m, k$ be integers with $1\leq k<m$. For every $s\geq 0$, let $f \in \mathcal{S}'(\mathbb{R}^n)$ be such that
$ D^m f\in L^{p}(\mathbb{R}^n)$, $1\leq p<\infty$; and $f\in\dot{B}^{-s}(\mathbb{R}^n)$. Then, we have $D^k f\in L^r(\mathbb{R}^n)$, $r=p \left( \frac{m+s}{k+s} \right)$, and
\begin{equation}\label{-15}
\|D^k f\|_{L^r} \lesssim \|f\|^{\frac{m-k}{m+s}}_{\dot{B}^{-s}}
\left\|D^m f \right\|^\frac{k+s}{m+s}_{L^p} \,,
\end{equation}
where we denote $\dot{B}^{\sigma} = \dot{B}^{\sigma,\infty}_{\infty}$, $\sigma\in\mathbb{R}$ (see the definition of Besov spaces in Section 2).
\end{theorem}
\begin{remark}
Obviously, \eqref{-15} is stronger than \eqref{-14} when $s=0$ since ${\rm BMO}(\mathbb{R}^n) \hookrightarrow \dot{B}^{0}(\mathbb{R}^n)$. We emphasize that \eqref{-15} is still true for $k=0$ when $s>0$.
\end{remark}
\begin{remark}
In studying the space ${\rm BV}(\mathbb{R}^2)$, A. Cohen et al. \cite{CDPX}
proved \eqref{-15} for the case $k=0$, $m=p=1$, $s=n-1$, $r=\frac{n}{n-1}$ by using wavelet decompositions (see \cite{Le} for the case $k=0$, $m=1$, $p\geq 1$, $r=p\big(\frac{1+s}{s}\big)$, with $s>0$).
\end{remark}
Inequality \eqref{-10} in terms of fractional Sobolev spaces has been investigated by the authors in \cite{Brezis1, Brezis2, Brezis3,Van} and the references therein. Surprisingly, there is a borderline for the limiting case of the Gagliardo--Nirenberg type inequality. In \cite{Brezis1}, Brezis--Mironescu proved that the following inequality
\begin{align}\label{-16}
\|f\|_{W^{\alpha_1,p_1}}\lesssim \|f\|^\theta_{W^{\alpha,p}} \|f\|^{1-\theta}_{W^{\alpha_2,p_2}} \,,
\end{align}
with $\alpha_1=\theta \alpha +(1-\theta)\alpha_2$, $\frac{1}{p_1}=\frac{\theta}{p}+\frac{1-\theta}{p_2}$, and $\theta\in(0,1)$
holds if and only if
\begin{equation}\label{special-cond} \alpha-\frac{1}{p}< \alpha_2-\frac{1}{p_2} \,.\end{equation}
As a consequence of this result,
the following inequality
\[
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|_{L^\infty} \|D f\|_{L^1}
\]
fails whenever $0<\alpha_1<1$.
\\
We note that the limiting case of \eqref{-16} reads as:
\begin{equation}\label{-17}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|_{L^\infty}\|f\|_{\dot{W}^{\alpha_2,p_2}} \,,
\end{equation}
where $\alpha_1<\alpha_2$, and $\alpha_1 p_1=\alpha_2 p_2$.
\\
When $\alpha_2<1$, Brezis--Mironescu improved \eqref{-17} by means of ${\rm BMO}(\mathbb{R}^n)$ using the Littlewood--Paley decomposition. Very recently, Van Schaftingen \cite{Van} studied \eqref{-17} for the case $\alpha_2=1$ on a convex open set $\Omega\subset \mathbb{R}^n$ satisfying a certain condition. In particular, he proved that
\begin{equation}\label{-20}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{{\rm BMO}}
\|D f\|^{\alpha_1}_{L^{p_2}}
\end{equation}
where $0<\alpha_1<1$, $p_1\alpha_1=p_2$, $p_2>1$.
\\
Inspired by the above results, we would like to study \eqref{-10} by means of fractional Sobolev spaces and Besov spaces. Moreover, we also improve the limiting cases \eqref{-17}, \eqref{-20} in terms of $\dot{B}^0(\mathbb{R}^n)$.
\subsection*{Main results}
Our first result is to improve \eqref{-10} by using fractional Sobolev spaces, and homogeneous Besov spaces.
\begin{theorem}\label{Mainthe} Let $\sigma>0$, and $0\leq \alpha_1<\alpha_2<\infty$. Let $1\leq p_1, p_2 \leq \infty$ be such that $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, and $p_2(\alpha_2+\sigma)>1$. If $f\in \dot{B}^{-\sigma}(\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. Moreover, there is a positive constant $C=C(n,\alpha_1,\alpha_2,p_2, \sigma)$ such that
\begin{equation}\label{-3}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}
\,.
\end{equation}
\end{theorem}
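Let us also remark that the relation $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$ is the one dictated by scaling. Indeed, for $f_\lambda(x)=f(\lambda x)$, $\lambda>0$, one has $\|f_\lambda\|_{\dot{W}^{\alpha_i,p_i}}=\lambda^{\alpha_i-\frac{n}{p_i}}\|f\|_{\dot{W}^{\alpha_i,p_i}}$, $i=1,2$, and $\|f_\lambda\|_{\dot{B}^{-\sigma}}\approx\lambda^{-\sigma}\|f\|_{\dot{B}^{-\sigma}}$; hence, applying \eqref{-3} to $f_\lambda$ and letting $\lambda\to 0^+$ and $\lambda\to\infty$ forces
\[
\alpha_1-\frac{n}{p_1}=-\sigma\cdot \frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}+\Big(\alpha_2-\frac{n}{p_2}\Big) \frac{\alpha_1+\sigma}{\alpha_2+\sigma}\,,
\]
which is equivalent to $\frac{1}{p_1}=\frac{1}{p_2}\big(\frac{\alpha_1+\sigma}{\alpha_2+\sigma}\big)$.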
\begin{remark} Note that
\eqref{-3} is not true for the limiting case $\sigma= \alpha_1=0$, $p_1=\infty$, even when \eqref{special-cond} holds, i.e., $\alpha_2-\frac{1}{p_2}>0$. Indeed, if it were the case, then \eqref{-3} becomes
\[
\|f\|_{L^{\infty}} \lesssim \|f\|_{\dot{B}^{0}} \,.
\]
Obviously, this inequality cannot hold, since
$L^\infty(\mathbb{R}^n)\hookrightarrow {\rm BMO}(\mathbb{R}^n)\hookrightarrow \dot{B}^0(\mathbb{R}^n)$.
\end{remark}
However, if $\alpha_1$ is positive, then
\eqref{-3} holds true with $\sigma=0$. This assertion is contained in the following theorem.
\begin{theorem}\label{Mainthe1} Let
$\alpha_2> \alpha_1>0$, and let $1\leq p_1, p_2\leq \infty$ be such that $p_1=\frac{\alpha_2 p_2}{\alpha_1}$, and $\alpha_2 p_2>1$. If $f\in \dot{B}^{0}(\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. Moreover, we have
\begin{equation}\label{4.1}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}}
\,.
\end{equation}
\end{theorem}
Our paper is organized as follows. In the next section, we provide the definitions of fractional Sobolev spaces and homogeneous Besov spaces. Section 3 is devoted to the proofs of Theorems \ref{Mainthe}, \ref{Mainthe1}. Moreover, we also obtain the homogeneous version of \eqref{-16} with an elementary proof, see Lemma \ref{Lem-Hom-Sobolev}.
Finally, we prove $\|f\|_{\dot{W}^{s,p}}\approx \|f\|_{\dot{B}^{s,p}}$ for $0<s<1$, $1\leq p<\infty$ in the Appendix section.
\section{Definitions and preliminary results}
\subsection{Fractional Sobolev spaces}
\begin{definition}\label{Def-frac-Sob} For any $0<\alpha<1$, and for $1\leq p<\infty$,
we denote by $\dot{W}^{\alpha,p}(\mathbb{R}^n)$ (resp. $W^{\alpha,p}(\mathbb{R}^n)$) the homogeneous fractional
Sobolev space (resp. the inhomogeneous fractional Sobolev space), endowed with the semi-norm:
\[ \|f\|_{\dot{W}^{\alpha,p}} =
\left(\displaystyle \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{|f(x+h)-f(x)|^p}{|h|^{n+\alpha p}} dhdx \right)^{\frac{1}{p}} \,,
\]
and the norm
\[
\|f\|_{W^{\alpha,p}} = \left(\|f\|^p_{L^p} + \|f\|^p_{\dot{W}^{\alpha,p}} \right)^\frac{1}{p}\,.
\]
\end{definition}
When $\alpha\geq 1$, we can define the higher order fractional Sobolev space as follows:
\\
Denote by $\floor{\alpha}$ the integer part of $\alpha$. Then, we define
\[
\|f\|_{\dot{W}^{\alpha,p}} =\left\{ \begin{array}{cl}
&\|D^{\floor{\alpha}} f\|_{L^p} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+.
\vspace*{0.1in}\\
& \|D^{\floor{\alpha}}f\|_{\dot{W}^{\alpha-\floor{\alpha},p}} ,\quad\text{otherwise}\,.
\end{array}\right.
\]
In addition, we also define
\[
\|f\|_{W^{\alpha,p}} =\left\{ \begin{array}{cl}
&\left( \|f\|^p_{L^p} + \|D^{\alpha} f\|^p_{L^p} \right)^{\frac{1}{p}} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+.
\\
& \left( \|f\|^p_{W^{\floor{\alpha},p}} + \|D^{\floor{\alpha}} f\|^p_{\dot{W}^{\alpha-\floor{\alpha},p}} \right)^{\frac{1}{p}} ,\quad\text{otherwise}\,.
\end{array}\right.
\]
\subsection*{Notation} Throughout the paper, we adopt the notation
$\dot{W}^{\alpha,\infty}(\mathbb{R}^n)=\dot{C}^{\alpha}(\mathbb{R}^n)$, $\alpha\in(0,1)$; and $\dot{W}^{0,p}(\mathbb{R}^n)=L^p(\mathbb{R}^n)$, $1\leq p\leq \infty.$
\\
In addition, we always denote by $C$ a positive constant which may change
from line to line. Moreover, the notation $C(\alpha, p,n)$ means that $C$ depends
only on $\alpha, p,n$.
Next, we write $A \lesssim B$ if there exists a constant $c > 0$ such
that $A \leq cB$, and we write $A \approx B$ if and only if $A \lesssim B \lesssim A$.
\subsection{Besov spaces}
To define the homogeneous Besov spaces, we recall the Littlewood--Paley decomposition (see \cite{Triebel}). Let $\phi\in\mathcal{S}(\mathbb{R}^n)$ be such that
$ {\rm supp}( \hat{\phi})\subset \left\{ \frac{1}{2} < |\xi| < 2 \right\}$ and
$$\sum_{j\in \mathbb{Z}} \hat{\phi}(2^{-j} \xi ) =1 \quad \text{for every } \xi\neq 0 \,,$$
and let $\phi_j$ be the inverse Fourier transform of the $j$-th component of this dyadic decomposition, i.e., $\hat{\phi}_j(\xi)=\hat{\phi}(2^{-j}\xi)$.
\\
Next, let us put
$$ \mathcal{Z}(\mathbb{R}^n) = \left\{ f \in \mathcal{S}(\mathbb{R}^n), D^\alpha \hat{f}(0) = 0,\, \forall\alpha \in \mathbb{N}^n, \text{ multi-index} \right\} \,,$$
where $\mathcal{S}(\mathbb{R}^n)$ is the Schwartz space as usual.
\begin{definition}\label{Def1} For every $s\in\mathbb{R}$, and for every $1\leq p, q\leq \infty$, the homogeneous Besov space is denoted by
$$\dot{B}^s_{p,q} =\left\{ f\in \mathcal{Z}'(\mathbb{R}^n): \|f\|_{\dot{B}^s_{p,q}} <\infty \right\} \,,$$
with
$$
\|f\|_{\dot{B}^s_{p,q}} = \left\{ \begin{array}{cl}
&\left( \displaystyle \sum_{j\in\mathbb{Z}}
2^{jsq} \|\phi_j * f\|^q_{L^p} \right)^\frac{1}{q}\,, \text{ if }\, 1\leq q<\infty,
\\
& \displaystyle\sup_{ j \in\mathbb{Z} } \left\{ 2^{js} \|\phi_j * f\|_{L^p} \right\} \,, \text{ if }\, q=\infty \,.
\end{array} \right. $$
When $p=q=\infty$, we denote $\dot{B}^s_{\infty,\infty}=\dot{B}^s$ for short.
\end{definition}
The following characterization of $\dot{B}^{s}_{\infty,\infty}$ is useful for our proof below.
\begin{theorem}[see Theorem 4, p. 164, \cite{Peetre}]\label{ThePeetre} Let $\big\{\varphi_\varepsilon\big\}_\varepsilon$ be a sequence of functions such that
\[\left\{
\begin{array}{cl}
&{\rm supp}(\varphi_\varepsilon)\subset B(0,\varepsilon) , \quad \big\{ \frac{1}{2\varepsilon}\leq |\xi|\leq \frac{2}{\varepsilon} \big\}\subset \big\{\widehat{\varphi_\varepsilon}(\xi) \not=0 \big\} ,
\vspace*{0.1in}\\
&\int_{\mathbb{R}^n} x^\gamma \varphi_\varepsilon (x)\, dx =0 ,\, \text{for all multi-indices $\gamma$ with }\, |\gamma|<k, \text{ where $k$ is a given integer},
\vspace*{0.1in}\\
& \big|D^\gamma \varphi_\varepsilon(x)\big| \leq C \varepsilon^{-(n+|\gamma|)}\, \text{ for every multi-index } \gamma\,.
\end{array}
\right.\]
Assume $s<k$. Then, we have
\[
f\in \dot{B}^s(\mathbb{R}^n) \Leftrightarrow
\sup_{\varepsilon>0}
\left\{\varepsilon^{-s} \|\varphi_\varepsilon * f\|_{L^\infty} \right\} < \infty \,.
\]
\end{theorem}
We end this section by recalling the following result (see \cite{DaoLamLu1}).
\begin{proposition}[Lifting operator]\label{Pro1}
Let $s\in\mathbb{R}$, and let $\gamma$ be a multi-index. Then,
$\partial^\gamma$ maps $\dot{B}^s(\mathbb{R}^n) \rightarrow \dot{B}^{s-|\gamma|}(\mathbb{R}^n)$.
\end{proposition}
\section{Proof of the Theorems}
\subsection{Proof of Theorem \ref{Mainthe}}
We first prove Theorem \ref{Mainthe} for the case $0\leq \alpha_1<\alpha_2\leq 1$. After that, we consider the remaining case $\alpha_2>1$.
\\
{\bf i) Step 1: $0\leq \alpha_1<\alpha_2 \leq 1$.} We divide our argument into the following cases.
\\
{\bf a) The case $p_1=p_2=\infty$, $0< \alpha_1<\alpha_2 <1$.} Then, \eqref{-3} becomes
\begin{equation}\label{1.0}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{C}^{\alpha_2}} \,.
\end{equation}
To prove \eqref{1.0}, we use a characterization of homogeneous Besov space $\dot{B}^{s}$ in Theorem \ref{ThePeetre}, and the fact that $\dot{B}^s(\mathbb{R}^n)$ coincides with $\dot{C}^s(\mathbb{R}^n)$, $s\in(0,1)$ (see \cite{Grevholm}).
\\
Then, let us recall the sequence $\{\varphi_\varepsilon\}_{\varepsilon>0}$ in Theorem \ref{ThePeetre}.
\\
For $\delta>0$, we write
\begin{align}\label{2.-1}
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} &=\varepsilon^{\alpha_2-\alpha_1} \varepsilon^{-\alpha_2}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon<\delta\big\}}+ \varepsilon^{-(\alpha_1+\sigma)} \varepsilon^{\sigma}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon\geq \delta\big\}}
\\
&\leq \delta^{\alpha_2-\alpha_1} \|f\|_{\dot{B}^{\alpha_2}}
+\delta^{-(\alpha_1+\sigma)} \|f\|_{\dot{B}^{-\sigma}} \,. \nonumber
\end{align}
Minimizing the right hand side of the last inequality with respect to $\delta$ yields
\begin{align*}
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty}\lesssim \|f\|_{\dot{B}^{-\sigma}}^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}\,.
\end{align*}
Since the last inequality holds for every $\varepsilon>0$, then we obtain \eqref{1.0}.
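Here and in what follows, we repeatedly minimize expressions of the form $A\delta^{a}+B\delta^{-b}$. For the reader's convenience, we record the elementary fact that, for $A, B, a, b>0$,
\[
\min_{\delta>0}\left( A\delta^{a} + B\delta^{-b} \right) = C(a,b)\, A^{\frac{b}{a+b}} B^{\frac{a}{a+b}}\,,
\]
the minimum being attained at $\delta_*=\big(\frac{bB}{aA}\big)^{\frac{1}{a+b}}$. In \eqref{2.-1}, this is applied with $a=\alpha_2-\alpha_1$, $b=\alpha_1+\sigma$, $A=\|f\|_{\dot{B}^{\alpha_2}}$, and $B=\|f\|_{\dot{B}^{-\sigma}}$.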
\begin{remark}\label{Rem2} It is not difficult to observe that the above proof also adapts to the following two cases:
\begin{enumerate}
\item[$\bullet$] $\alpha_1=0$, $\alpha_2<1$, $\sigma>0$. Then, we have
\begin{equation}\label{2.-3}
\|f\|_{L^\infty} \lesssim \|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\sigma}{\alpha_2+\sigma}}\,.
\end{equation}
\item[$\bullet $] $0<\alpha_1<\alpha_2<1$, $\sigma=0$. Then, we have
\begin{equation}\label{2.-2}
\|f\|_{\dot{B}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1}{\alpha_2}}\,.
\end{equation}
\end{enumerate}
This is Theorem \ref{Mainthe1} when $p_i=\infty$, $i=1, 2$.
\end{remark}
To end part {\bf a)}, it remains to prove \eqref{-3} for the case $\alpha_2=1$. That is
\begin{equation}\label{2.-4}
\|f\|_{\dot{B}^{\alpha_1}} \lesssim \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|Df\|_{L^\infty}^{\frac{\alpha_1+\sigma}{1+\sigma}}\,.
\end{equation}
The proof is similar to the one of \eqref{1.0}. Hence, it suffices to prove that
\begin{equation}\label{2.-5}
\varepsilon^{-\alpha_1} \|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon<\delta\big\}} \lesssim \delta^{1-\alpha_1} \|Df\|_{L^\infty} \,. \end{equation}
Indeed, using the vanishing moments of $\varphi_\varepsilon$ and the mean value theorem yields
\begin{align*}
\big|\varphi_\varepsilon * f(x)\big| &=\big| \int_{B(0,\varepsilon)} (f(x)-f(x-y)) \varphi_\varepsilon (y)\, dy \big|
\\
&\leq \int_{B(0,\varepsilon)} \|Df\|_{L^\infty} |y| |\varphi_\varepsilon (y)|\, dy \leq \varepsilon \|\varphi_\varepsilon\|_{L^1} \|Df\|_{L^\infty} \lesssim \varepsilon \|Df\|_{L^\infty} \,.
\end{align*}
Thus, \eqref{2.-5} follows easily.
\\
By repeating the proof of \eqref{2.-1}, we obtain \eqref{2.-4}.
\\
{\bf b) The case $p_i<\infty, \,i=1,2$.} Then, the proof follows by way of the following lemmas.
\begin{lemma}\label{Lem10}
Let $0<\alpha< 1$, and $1\leq p<\infty$. For every $s>0$, if $f\in \dot{B}^{-s}(\mathbb{R}^n)\cap \dot{W}^{\alpha,p}(\mathbb{R}^n)$, then there exists a positive constant $C=C(s,\alpha,p)$ such that
\begin{equation}\label{1.1}
|f(x)| \leq C
\| f \|_{\dot{B}^{-s}}^\frac{\alpha}{s+\alpha} \big[
\mathbf{G}_{\alpha,p}(f)(x)\big]^{\frac{s}{s+\alpha}}\,, \quad \text{for } x\in\mathbb{R}^n\,,
\end{equation}
with $$\mathbf{G}_{\alpha,p}(f)(x)= \displaystyle\sup_{\varepsilon>0} \left(\fint_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{\alpha p}} dy \right)^\frac{1}{p} \,.$$
\end{lemma}
\begin{remark}\label{Rem6} When $\alpha=1$, \eqref{1.1} becomes
\begin{equation}\label{1.1b}
|f(x)|\leq C\|f\|_{\dot{B}^{-s}}^\frac{1}{s+1} \big[\mathbf{M}(|Df|)(x)\big]^{\frac{s}{s+1}}\,, \quad \text{for } x\in\mathbb{R}^n\,.
\end{equation}
This inequality was obtained by the authors in \cite{DaoLamLu1}. As a result, we get
\begin{equation}\label{1.1a}
\|f\|_{L^{p_1}} \lesssim \| f \|_{\dot{B}^{-s}}^\frac{1}{s+1}
\|Df\|^{\frac{s}{s+1}}_{L^{p_2}}\,,
\end{equation}
with $p_1=p_2\big(\frac{s+1}{s}\big)$, $p_2\geq 1$.
\\
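Let us briefly indicate how this follows from \eqref{1.1b} when $p_2>1$: taking the $L^{p_1}$-norm of \eqref{1.1b} and noting that $\frac{s p_1}{s+1}=p_2$, we get
\[
\|f\|_{L^{p_1}} \lesssim \|f\|^{\frac{1}{s+1}}_{\dot{B}^{-s}} \big\| \mathbf{M}(|Df|) \big\|^{\frac{s}{s+1}}_{L^{p_2}} \lesssim \|f\|^{\frac{1}{s+1}}_{\dot{B}^{-s}} \|Df\|^{\frac{s}{s+1}}_{L^{p_2}} \,,
\]
by the $L^{p_2}$-boundedness of $\mathbf{M}$; the case $p_2=1$ requires a different argument, for which we refer to \cite{DaoLamLu1}.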
This is also Theorem \ref{Mainthe} when $\alpha_1=0$, $\alpha_2=1$, $s=\sigma>0$.
\end{remark}
\begin{remark}\label{Rem4} Obviously, for $1\leq p<\infty$ we have $\|\mathbf{G}_{\alpha,p}(f)\|_{L^p}\lesssim \|f\|_{\dot{W}^{\alpha,p}}$, and $\mathbf{G}_{\alpha,1}(f)(x)\leq \mathbf{G}_{\alpha,p}(f)(x)$ for $x\in\mathbb{R}^n$.
\\
Next, applying Lemma \ref{Lem10} to $s=\sigma, \alpha=\alpha_2$, $p=p_2$, and taking the $L^{p_1}$-norm of \eqref{1.1} yield
\[
\|f\|_{L^{p_1}} \lesssim
\|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \left( \int
\big|\mathbf{G}_{\alpha_2,p_2}(f)(x)\big|^{\frac{\sigma p_1}{\sigma+\alpha_2}} \, dx \right)^{1/p_1} \leq \|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \|f\|^{\frac{\sigma}{\sigma+\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\]
with $p_1= p_2 \big(\frac{\sigma+\alpha_2}{\sigma}\big)$.
\\
Hence, we obtain Theorem \ref{Mainthe} for the case $\alpha_1=0$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{Lem10}] Let us recall the sequence $\{\varphi_\varepsilon\}_{\varepsilon>0}$ above. Then, we have from the triangle inequality that
\begin{align*}
|f(x)|
\leq | \varphi_\varepsilon * f(x)| + | f(x)-\varphi_\varepsilon * f(x)| =:\mathbf{I}_1+ \mathbf{I}_2 \,.
\end{align*}
We first estimate $\mathbf{I}_1$ in terms of $\dot{B}^{-s}$. Thanks to Theorem \ref{ThePeetre}, we get
\begin{align}\label{1.2}
\mathbf{I}_1 = \varepsilon^{-s} \varepsilon^{s} | \varphi_\varepsilon * f(x)| \leq C \varepsilon^{-s} \| f \|_{\dot{B}^{-s}} \,.
\end{align}
For $\mathbf{I}_2$, applying H\"older's inequality yields
\begin{align}\label{1.3}
\mathbf{I}_2 &\leq \int_{B(0,\varepsilon)} |f(x)-f(x-y)| |\varphi_\varepsilon (y)| \, dy = \varepsilon^{\frac{n}{p}+\alpha} \int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|}{\varepsilon^{\frac{n}{p}+\alpha}} |\varphi_\varepsilon (y)| \, dy \nonumber
\\
&\leq \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{p'}} \left(\int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{n+\alpha p}} dy \right)^\frac{1}{p} \nonumber
\\
&\lesssim \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{\infty}} \big|B(0,\varepsilon)\big|^{\frac{1}{p'}} \mathbf{G}_{\alpha,p}(f)(x) \lesssim \varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(f)(x) \,.
\end{align}
Note that the last inequality follows by using the fact $\|\varphi_\varepsilon\|_{L^{\infty}}\leq C \varepsilon^{-n}$.
\\
By combining \eqref{1.2} and \eqref{1.3}, we obtain
\[|f(x)|\leq C\left(\varepsilon^{-s} \| f \|_{\dot{B}^{-s}} +\varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(f)(x)\right) \,. \]
Since the indicated inequality holds true for every $\varepsilon>0$, minimizing its right hand side with respect to $\varepsilon$ yields the desired result. \\
Hence, we complete the proof of Lemma \ref{Lem10}.
\end{proof}
Next, we have the following lemma.
\begin{lemma}\label{Lem11}
Let $0<\alpha_1<\alpha_2< 1$. Let $1\leq p_1, p_2 <\infty$, and $r>1$ be such that
\begin{equation}\label{1.6}
\frac{1}{p_1}= \frac{1}{r} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2}\frac{\alpha_1}{\alpha_2} \,.
\end{equation}
If $f\in L^r(\mathbb{R}^n)\cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. In addition, there exists a constant $C=C(\alpha_1,\alpha_2,p_1,p_2,n)>0$ such that
\begin{equation}\label{1.8}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} }
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lem11}]
For any set $\Omega$ in $\mathbb{R}^n$, let us denote $\fint_\Omega f(x) \, dx = \frac{1}{|\Omega|} \int_{\Omega} f(x) \, dx$.
\\
For any $x, z\in\mathbb{R}^n$, we have from the triangle inequality and change of variables that
\begin{align*}
\big|f(x+z)-f(x)\big|&\leq \big|f(x+z) - \fint_{B(x,|z|)} f(y)\,dy \big|+\big|f(x) - \fint_{B(x,|z|)} f(y)\,dy \big|
\\
&\leq \fint_{B(x,|z|)} \big|f(x+z) -f(y) \big| \, dy+ \fint_{B(x,|z|)} \big|f(x)-f(y) \big| \, dy
\\
&\leq C(n) \left( \fint_{B(0,2|z|)} \big|f(x+z) -f(x+z+y) \big| \, dy+ \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy \right)\,.
\end{align*}
With the last inequality noted, and by using a change of variables, we get
\begin{align}\label{1.10}
\int\int \frac{|f(x+z)-f(x)|^{p_1}}{|z|^{n+\alpha_1 p_1}} dzdx \lesssim \int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}}\,.
\end{align}
Next, for every $p\geq 1$ we show that
\begin{align}\label{1.13}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p}(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}}.
\end{align}
Thanks to Remark \ref{Rem4}, it suffices to show that \eqref{1.13} holds for $p=1$.
\\
Indeed, we have
\begin{align}\label{1.14}
\int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|^{\alpha_2}} \, dy\right)^{p_1} \frac{|z|^{\alpha_2 p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber \\
&\lesssim \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1} \int_{\{|z|<t\}} \frac{1}{|z|^{n+(\alpha_1-\alpha_2) p_1}} dz \nonumber
\\
&\lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1}\,.
\end{align}
On the other hand, it is not difficult to observe that
\begin{align}\label{1.15}
\int_{|z|\geq t} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&\lesssim \big[\mathbf{M}(f)(x)\big]^{p_1} \left( \int_{|z|\geq t}\frac{dz}{|z|^{n+\alpha_1 p_1}} \right) \nonumber
\\
&\lesssim t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align}
From \eqref{1.14} and \eqref{1.15}, we obtain
\begin{align*}
\int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align*}
Minimizing the right hand side of the last inequality yields \eqref{1.13}.
\\
Then, it follows from \eqref{1.13} that
\[
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \int \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p_2}(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}} dx\,.
\]
Note that $\alpha_2 p_2>\alpha_1 p_1$, and $r=\frac{p_1p_2(\alpha_2-\alpha_1)}{\alpha_2p_2-\alpha_1p_1}$, see \eqref{1.6}.
Then, applying H\"older's inequality with $\big((\frac{\alpha_2p_2}{\alpha_1p_1})^\prime, \frac{\alpha_2p_2}{\alpha_1p_1}\big)$ to the right hand side of the last inequality yields
\begin{align*}
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}}
&\lesssim \big\|\mathbf{M}(f)\big\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|\mathbf{G}_{\alpha_2,p_2}\|^{\frac{\alpha_1p_1}{\alpha_2}}_{L^{p_2}} \,.
\end{align*}
Thanks to Remark \ref{Rem4}, and by the fact that $\mathbf{M}$ maps $L^r(\mathbb{R}^n)$ into $L^r(\mathbb{R}^n)$ for $r>1$, we deduce from the last inequality that
\begin{align}\label{1.18}
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \|f\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|f\|^{\frac{\alpha_1 p_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align}
Combining \eqref{1.10} and \eqref{1.18} yields \eqref{1.8}.
\\
Hence, we obtain Lemma \ref{Lem11}.
\end{proof}
Now, we can apply Lemma \ref{Lem10} and Lemma \ref{Lem11} successively to obtain Theorem \ref{Mainthe} for the case $0<\alpha_2<1$. Indeed, we apply \eqref{1.1} with $s=\sigma$, $\alpha=\alpha_2$, $p=p_2$. Then,
\begin{align}\label{1.20}
\|f\|_{L^q} & \lesssim
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}^{\frac{\sigma}{\alpha_2+\sigma}}_{\alpha_2,p_2}\big\|_{L^q} =
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}_{\alpha_2,p_2}\big\|^{\frac{\sigma}{\alpha_2+\sigma}}_{L^{p_2}} \leq
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\big(\frac{\alpha_2+\sigma}{\sigma}\big)$.
\\
Since $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, it follows from \eqref{1.6} that $r=q>1$.
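For the reader's convenience, we verify this identity. By \eqref{1.6}, $r=\frac{p_1p_2(\alpha_2-\alpha_1)}{\alpha_2 p_2-\alpha_1 p_1}$, and substituting $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$ yields
\[
r=\frac{p_2\,(\alpha_2-\alpha_1)(\alpha_2+\sigma)}{\alpha_2(\alpha_1+\sigma)-\alpha_1(\alpha_2+\sigma)}=\frac{p_2\,(\alpha_2-\alpha_1)(\alpha_2+\sigma)}{\sigma(\alpha_2-\alpha_1)}=p_2\Big(\frac{\alpha_2+\sigma}{\sigma}\Big)=q\,,
\]
since $\alpha_2(\alpha_1+\sigma)-\alpha_1(\alpha_2+\sigma)=\sigma(\alpha_2-\alpha_1)$.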
\\
Next, applying Lemma \ref{Lem11} yields
\begin{equation}\label{1.19}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim
\| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} }
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\,.
\end{equation}
Hence, we obtain Theorem \ref{Mainthe} for the case $0\leq \alpha_1< \alpha_2<1$, $p_i<\infty$, $i=1,2$.
\\
To end {\bf Step 1}, it remains to study the case $\alpha_2=1$, i.e.,
\begin{equation} \label{1.22a} \|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|Df\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}}
\,.
\end{equation}
This can be done if we show that
\begin{equation}\label{1.21}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\| f \|^{1-\alpha_1}_{ L^{r} }
\|Df\|^{\alpha_1}_{L^{p_2}} \,,
\end{equation}
with $1\leq r<\infty$, $\frac{1}{p_1}= \frac{1-\alpha_1}{r} + \frac{\alpha_1}{p_2}$.
\\
Indeed, a combination of \eqref{1.21} and \eqref{1.1a} implies that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{1-\alpha_1}_{L^r} \|Df\|^{\alpha_1}_{L^{p_2}} \lesssim \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|D f\|^{\frac{\sigma(1-\alpha_1)}{1+\sigma}}_{L^{p_2}} \|Df\|^{\alpha_1}_{L^{p_2}} = \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|D f\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}}\,.
\end{align*}
Note that $p_1=p_2\big( \frac{1+\sigma}{\alpha_1+\sigma}\big)$, and $r=p_2\big( \frac{1+\sigma}{\sigma}\big)$.
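Let us check that these exponents match the relation $\frac{1}{p_1}= \frac{1-\alpha_1}{r} + \frac{\alpha_1}{p_2}$ in \eqref{1.21}:
\[
\frac{1-\alpha_1}{r} + \frac{\alpha_1}{p_2} = \frac{\sigma(1-\alpha_1)}{p_2(1+\sigma)} + \frac{\alpha_1}{p_2} = \frac{\alpha_1+\sigma}{p_2(1+\sigma)} = \frac{1}{p_1}\,,
\]
as required.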
\\
Hence, we obtain Theorem \ref{Mainthe} when $\alpha_2=1$.
\\
Now, it remains to prove \eqref{1.21}. We note that \eqref{1.21} was proved for $p_2=1$ (see, e.g., \cite{Brezis3, CDPX}). In fact, one can modify the proofs in \cite{Brezis3, CDPX} in order to obtain \eqref{1.21} for the case $1<p_2<\infty$. However, for completeness, we give the proof of \eqref{1.21} for $1<p_2<\infty$.
\\
To obtain the result, we prove a version of \eqref{1.13} in terms of $\mathbf{M}(|Df|)(x)$ instead of $\mathbf{G}_{1,p}(x)$. Precisely, we show that
\begin{align}\label{1.23}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1}
\end{align}
for $x\in\mathbb{R}^n$.
\\
Indeed, it follows from the mean value theorem and a change of variables that
\begin{align*}
\fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy &\lesssim \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|y|} \, dy
\\
&= \fint_{B(0,2|z|)} \frac{\big|\int^1_0 D f(x+\tau y) \cdot y\, d\tau\big| }{|y|} \, dy
\\
&\leq \int^1_0\fint_{B(x,2\tau|z|)} | D f(\zeta) | \, d\zeta d\tau
\leq \int^1_0 \mathbf{M}(|Df|)(x) \, d\tau = \mathbf{M}(|Df|)(x) \,.
\end{align*}
Thus,
\begin{align}\label{1.24}
\int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy\right)^{p_1} \frac{|z|^{p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&\lesssim \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \int_{\{|z|<t\}} |z|^{-n+(1-\alpha_1)p_1} \, dz \nonumber
\\
&\lesssim t^{(1-\alpha_1)p_1} \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \,.
\end{align}
From \eqref{1.24} and \eqref{1.15}, we obtain
\begin{align}\label{1.25}
\int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(1-\alpha_1) p_1} \left[ \mathbf{M}(|Df|)(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align}
Hence, \eqref{1.23} follows by minimizing the right hand side of \eqref{1.25} with respect to $t$.
If $p_2>1$, then we apply H\"older's inequality in \eqref{1.23} in order to get
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz dx}{|z|^{n+\alpha_1 p_1}}
\\
&\lesssim \int\big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1} dx
\\
&\leq \|\mathbf{M}(f)\|^{(1-\alpha_1)p_1}_{L^r} \|\mathbf{M}(|Df|)\|^{\alpha_1 p_1}_{L^{p_2}}
\\
&\lesssim \|f\|^{(1-\alpha_1)p_1}_{L^r} \|Df\|^{\alpha_1 p_1}_{L^{p_2}}\,,
\end{align*}
where $r>1$ satisfies $\frac{1}{p_1}= \frac{1-\alpha_1}{r} + \frac{\alpha_1}{p_2}$. Note that the last inequality follows from the $L^{p}$-boundedness of $\mathbf{M}$, $p>1$.
Thus, we get \eqref{1.21}.
\\
This puts an end to the proof of {\bf Step 1}.
\\
{\bf ii) Step 2.}
Now, we can prove Theorem \ref{Mainthe} for the remaining case $\alpha_2>1$.
At the beginning, let us denote $\alpha_i=\floor{\alpha_i}+s_i$, $i=1,2$. Then, we divide the proof into the following cases.
\\
{\bf a) The case $\floor{\alpha_2}=\floor{\alpha_1}$:}
By applying Theorem \ref{Mainthe} to $D^{\floor{\alpha_1}} f$, $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$; and by Proposition \ref{Pro1}, we obtain
\begin{align*}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\| D^{\floor{\alpha_1}} f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} \\
&\lesssim \big\| f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}}
\big\| D^{\floor{\alpha_2}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} = \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}}
\big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align*}
with $p_1=p_2\big(\frac{s_2+\sigma_{{\rm new}}}{s_1+\sigma_{{\rm new}}}\big)=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$.
\\
Hence, we get the conclusion for this case.
\\
{\bf b) The case $\floor{\alpha_2}>\floor{\alpha_1}$:} If $s_2>0$, then we can apply Theorem \ref{Mainthe} to $D^{\floor{\alpha_2}} f$, $\sigma_{{\rm new}}=\sigma+\floor{\alpha_2}$. Therefore,
\begin{align}\label{1.30}
\big\| D^{\floor{\alpha_2}} f \big\|_{L^q} \lesssim \big\|D^{\floor{\alpha_2}} f \big\|^{\frac{s_2}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{B}^{-(\sigma+\floor{\alpha_2})}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\sigma+\floor{\alpha_2}}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{W}^{s_2,p_2}} \lesssim \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big)$. Again, the last inequality follows from the lifting property in Proposition \ref{Pro1}.
\\
Next, applying Theorem \ref{Mainthe} to $D^{\floor{\alpha_1}} f$, $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$ yields
\begin{align}\label{1.31}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \nonumber
\\
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \,,
\end{align}
with $q_1= p_1\big(\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}\big)$.
\\
If $\floor{\alpha_2}=\floor{\alpha_1}+1$, then observe that $q=q_1$. Thus, we deduce from \eqref{1.30} and \eqref{1.31} that
\begin{align*}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}}
\left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}= \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
This yields \eqref{-3}.
\\
Note that $\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}+\frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)}=\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}$ since $\floor{\alpha_2}=\floor{\alpha_1}+1$.
\\
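Indeed, since $1+\sigma+\floor{\alpha_1}=\floor{\alpha_2}+\sigma$, putting both fractions over the common denominator $(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)$ gives
\[
\frac{(1-s_1)(\alpha_2+\sigma)+s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)}=\frac{(1+s_2-s_1)(\floor{\alpha_1}+1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)}=\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}\,,
\]
where we used $\alpha_2-\alpha_1=1+s_2-s_1$.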
If $\floor{\alpha_2}>\floor{\alpha_1}+1$, then we apply \cite[Theorem 1.2]{DaoLamLu1} to $k=\floor{\alpha_1}+1$, $m=\floor{\alpha_2}$. Thus,
\begin{align}\label{1.32}
\big\| D^{\floor{\alpha_1}+1} f \big\|_{L^{q_1}}\lesssim \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\,,
\end{align}
with $q_2=q_1 \big(\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}\big)$.
\\
Combining \eqref{1.31} and \eqref{1.32} yields
\begin{align}\label{1.33}
\big\|f \big\|_{\dot{W}^{\alpha_1,p_1}}
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \left( \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}}
\big\|D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\right)^{\frac{\alpha_1+\sigma}{1+\floor{\alpha_1}+\sigma}} \nonumber
\\
&= \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}}
\big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}} \,.
\end{align}
Observe that $q=q_2=p_2 \big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big) $. Thus, it follows from \eqref{1.33} and \eqref{1.30} that
\begin{align*}
\big\| f \big\|_{\dot{W}^{\alpha_1,p_1}}
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}} \left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}
\\
&=\big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
A straightforward computation shows that
\[\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)+ \frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)} = \frac{\alpha_2-\alpha_1}{\alpha_2+\sigma} \,.\]
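For the reader's convenience, we sketch this computation. With the temporary shorthand $a=\alpha_1+\sigma$, $b=\floor{\alpha_1}+1+\sigma$, $c=\floor{\alpha_2}+\sigma$, $d=\alpha_2+\sigma$ (used only here), one has $1-s_1=b-a$, $\floor{\alpha_2}-\floor{\alpha_1}-1=c-b$, and $s_2=d-c$, so the left-hand side telescopes:
\[ \frac{b-a}{b} + \frac{(c-b)a}{bc} + \frac{(d-c)a}{cd} = 1 - a\Big( \frac{1}{b} - \frac{c-b}{bc} - \frac{d-c}{cd}\Big) = 1 - a\Big( \frac{1}{c} - \frac{d-c}{cd}\Big) = 1-\frac{a}{d} = \frac{\alpha_2-\alpha_1}{\alpha_2+\sigma} \,.\]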
This completes the proof of Theorem \ref{Mainthe} when $s_2>0$.
\\
The case $s_2=0$ can be handled in the same way, and we leave the details to the reader. The proof of Theorem \ref{Mainthe} is now complete.
\subsection{Proof of Theorem \ref{Mainthe1}} We begin by recalling the notation $\alpha_i=\floor{\alpha_i}+s_i$, $i=1, 2$. We divide the proof into the following two cases.
\\
{\bf i) The case $p_1=p_2=\infty$.} If $0<\alpha_1<\alpha_2<1$, then \eqref{4.1} becomes
\begin{equation}\label{2.0}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,.
\end{equation}
Inequality \eqref{2.0} follows easily from the proof of \eqref{1.0} with $\sigma=0$; we leave the details to the reader.
\\
If $0<\alpha_1<1\leq \alpha_2$, and $\alpha_2$ is an integer, then \eqref{4.1} reads:
\begin{equation}\label{2.0a}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,.
\end{equation}
To obtain \eqref{2.0a}, we utilize the vanishing moments of $\varphi_\varepsilon$ in Theorem \ref{ThePeetre}. In fact, let us fix $k>\alpha_2$.
Then, Taylor's theorem yields
\begin{align}\label{2.0c}
|\varphi_\varepsilon * f(x)| &=\left|\int\big( f(x-y)-f(x)\big) \varphi_\varepsilon(y)\, dy \right| \nonumber
\\
&=\left| \int \left(\sum_{|\gamma|< \alpha_2} \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma + \sum_{|\gamma|= \alpha_2} \frac{D^\gamma f(\zeta)}{\gamma!} (-y)^\gamma \right) \varphi_\varepsilon(y)\,dy \right|
\nonumber \\
&=\left| \int \sum_{|\gamma|= \alpha_2} \frac{D^{\gamma} f(\zeta)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\end{align}
for some $\zeta$ on the segment joining $x$ and $x-y$. Note that $$ \int \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy=0$$ for every multi-index $\gamma$ with $|\gamma|<k$, by the vanishing moments of $\varphi_\varepsilon$.
\\
Hence, we get from \eqref{2.0c} that
\[ |\varphi_\varepsilon * f(x)| \lesssim \|\nabla^{\alpha_2} f\|_{L^\infty} \int_{B(0,\varepsilon)} |y|^{\alpha_2} |\varphi_\varepsilon(y)|\,dy\lesssim \varepsilon^{\alpha_2} \|\nabla^{\alpha_2} f\|_{L^\infty}\,.\]
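The last bound is elementary: since $|y|\leq \varepsilon$ on $B(0,\varepsilon)$ and, under the standard scaling $\varphi_\varepsilon(y)=\varepsilon^{-n}\varphi(y/\varepsilon)$ of Theorem \ref{ThePeetre}, $\|\varphi_\varepsilon\|_{L^1}=\|\varphi\|_{L^1}$, we have
\[ \int_{B(0,\varepsilon)} |y|^{\alpha_2}|\varphi_\varepsilon(y)|\,dy \leq \varepsilon^{\alpha_2} \|\varphi_\varepsilon\|_{L^1} = \varepsilon^{\alpha_2}\|\varphi\|_{L^1} \lesssim \varepsilon^{\alpha_2}\,. \]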
Inserting the last inequality into \eqref{2.-1} yields
\[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|\nabla^{\alpha_2} f\|_{L^\infty}
+\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,.
\]
Minimizing the right-hand side of the above inequality over $\delta>0$, we get
\[
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,.
\]
This implies \eqref{2.0a}.
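For completeness, we record the minimization: writing $A=\|\nabla^{\alpha_2} f\|_{L^\infty}$ and $B=\|f\|_{\dot{B}^{0}}$ (notation used only here), the choice $\delta=(B/A)^{1/\alpha_2}$ balances the two terms, since
\[ A\,\delta^{\alpha_2-\alpha_1} + B\,\delta^{-\alpha_1}\Big|_{\delta=(B/A)^{1/\alpha_2}} = 2\, A^{\frac{\alpha_1}{\alpha_2}} B^{1-\frac{\alpha_1}{\alpha_2}} \,.\]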
\\
If $0<\alpha_1<1\leq \alpha_2$, and $\alpha_2$ is not an integer, then \eqref{4.1} reads:
\begin{equation}\label{2.0b}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \left\|\nabla^{\floor{\alpha_2}} f\right\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,.
\end{equation}
To obtain \eqref{2.0b}, we apply \eqref{2.0c} with $\alpha_2$ replaced by $\floor{\alpha_2}$. Thus,
\begin{align*}
|\varphi_\varepsilon * f(x)| &=\left| \int \sum_{|\gamma|=\floor{\alpha_2}} \frac{D^{\gamma} f(\zeta)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\\
&=\left| \int \sum_{|\gamma|= \floor{\alpha_2}} \frac{D^{\gamma} f(\zeta) - D^{\gamma} f(x)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\\
&\lesssim \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int |x-\zeta|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy
\\
&\leq\|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int_{B(0,\varepsilon)} |y|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy \lesssim \varepsilon^{\alpha_2}
\|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \,.
\end{align*}
Thus,
\[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}}
+\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,. \]
Arguing as in the proof of \eqref{2.0a}, we also obtain \eqref{2.0b}.
\\
In conclusion, Theorem \ref{Mainthe1} is proved in the case $0<\alpha_1<1$.
\\
Now, if $\alpha_1\geq 1$, then \eqref{4.1} becomes
\begin{equation}\label{2.0d}
\| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|D^{\floor{\alpha_2}} f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,.
\end{equation}
Again, we note that $\|\cdot\|_{\dot{C}^{s_i}}$ is replaced by $\|\cdot\|_{L^\infty}$ whenever $s_i=0$, $i=1,2$.
\\
To obtain \eqref{2.0d}, we apply Theorem \ref{Mainthe} to $f_{{\rm new}}=D^{\floor{\alpha_1}} f$ with $\sigma =\floor{\alpha_1}$.
\\
Hence, it follows from Proposition \ref{Pro1} that
\begin{align*}
\|f\|_{\dot{C}^{\alpha_1}} =\| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}}
&\lesssim
\big\|D^{\floor{\alpha_1}} f\big\|^{\frac{\alpha_2-\floor{\alpha_1}-s_1}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{B}^{-\floor{\alpha_1}}} \big\|D^{\floor{\alpha_1}} f\big\|^{\frac{s_1+\sigma}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{C}^{\alpha_2-\floor{\alpha_1}}}
\\
&\lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\big\|D^{\floor{\alpha_2}}f\big\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} = \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,.
\end{align*}
This completes the proof of Theorem \ref{Mainthe1} in the case $p_1=p_2=\infty$.
\\
{\bf ii) The case $p_i<\infty, i=1,2$.}
We first consider the case $0<\alpha_1<1$.
\\
{\bf a)} If $\alpha_2\in(\alpha_1, 1)$, then we utilize the equivalence $\|\cdot\|_{\dot{W}^{s,p}} \approx \|\cdot\|_{\dot{B}^{s}_{p,p}}$ for $s\in(0,1)$, $p\geq 1$ (see Proposition \ref{Pro-cha} in the Appendix). Therefore, \eqref{4.1} is equivalent to the following inequality
\begin{equation}\label{2.1}
\|f\|_{\dot{B}^{\alpha_1}_{p_1,p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^{0}}\|f\|^\frac{\alpha_1}{\alpha_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \,.
\end{equation}
Note that $\alpha_1 p_1 = \alpha_2 p_2$. Hence,
\begin{align}\label{2.2}
2^{-j \alpha_1 p_1} \|f*\varphi_j\|_{L^{p_1}}^{p_1} \leq 2^{-j \alpha_2 p_2} \|f*\varphi_j\|_{L^{p_2}}^{p_2} \|f*\varphi_j\|_{L^{\infty}}^{p_1-p_2} \leq 2^{-j \alpha_2 p_2} \|f*\varphi_j\|_{L^{p_2}}^{p_2} \|f\|_{\dot{B}^0}^{p_1-p_2} \,.
\end{align}
This implies that
\[ \|f\|^{p_1}_{\dot{B}^{\alpha_1}_{p_1,p_1}} \leq \|f\|^{p_1-p_2}_{\dot{B}^{0}} \|f\|^{p_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \]
which is \eqref{2.1}.
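We note in passing that the first inequality in \eqref{2.2} relies on the elementary interpolation bound
\[ \|g\|^{p_1}_{L^{p_1}} = \int |g|^{p_2} |g|^{p_1-p_2}\,dx \leq \|g\|^{p_1-p_2}_{L^\infty} \|g\|^{p_2}_{L^{p_2}} \,,\]
applied to each Littlewood--Paley piece of $f$; it is applicable here because $\alpha_1 p_1=\alpha_2 p_2$ and $\alpha_1<\alpha_2$ force $p_1>p_2$.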
\\
{\bf b)} If $\alpha_2= 1$, then we show that
\begin{equation}\label{2.3}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim\|f\|^{1-\alpha_1}_{\dot{B}^{0}} \|Df\|^{\alpha_1}_{L^{p_2}}\,.
\end{equation}
To obtain \eqref{2.3}, we prove the homogeneous version of \eqref{-16}.
\begin{lemma}\label{Lem-Hom-Sobolev}
Let $0<\alpha_0<\alpha_1 <\alpha_2\leq 1$, and $p_0\geq 1$ be such that $\alpha_0 -\frac{1}{p_0}<\alpha_2-\frac{1}{p_2}$, and
\[
\frac{1}{p_1} = \frac{\theta}{p_0} + \frac{1-\theta}{p_2} ,\quad \theta= \frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0}
\,. \]
Then, we have
\begin{equation}\label{2.4}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0} }_{\dot{W}^{\alpha_0,p_0}} \|f\|^{\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0}}_{\dot{W}^{\alpha_2,p_2}},\quad \forall f\in \dot{W}^{\alpha_0,p_0} (\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n) \,.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lem-Hom-Sobolev}] The proof is quite similar to that of Lemma \ref{Lem10}.
Indeed, it rests on the following pointwise estimates.
\\
If $f\in \dot{W}^{\alpha_0,p_0} (\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then
\begin{align}\label{2.5a}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x) \big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}
\end{align}
provided that $\alpha_2<1$, and
\begin{align}\label{2.5}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}
\end{align}
if $\alpha_2=1$.
\\
The proof of \eqref{2.5a} (resp. \eqref{2.5}) is similar to that of \eqref{1.13} (resp. \eqref{1.23}); one only needs to replace $\mathbf{M}(f)(x)$ by $\mathbf{G}_{\alpha_0,p_0}(f)(x)$ in \eqref{1.13} (resp. \eqref{1.23}).
\\
In fact, H\"{o}lder's inequality gives
\begin{align}\label{2.6}
\int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&\leq \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big|^{p_0} \, dy\right)^{\frac{p_1}{p_0}} \frac{dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&= \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x) - f(x+y) \big|^{p_0}}{|z|^{\alpha_0 p_0}} \, dy\right)^{\frac{p_1}{p_0}} \frac{|z|^{\alpha_0 p_1}dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&\lesssim \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \int_{\{|z|\geq t\}} |z|^{-n-(\alpha_1-\alpha_0)p_1} \, dz\nonumber
\\
&\lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \,.
\end{align}
If $\alpha_2<1$, then it follows from \eqref{2.6} and \eqref{1.14} that
\begin{align*}
\int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(\alpha_2-\alpha_1)p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{p_1} \,.
\end{align*}
Thus, \eqref{2.5a} follows by minimizing the right-hand side of the above inequality over $t>0$.
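Explicitly, when both maximal functions are nonzero, the balancing choice
\[ t=\left( \frac{\mathbf{G}_{\alpha_0,p_0}(f)(x)}{\mathbf{G}_{\alpha_2,p_2}(f)(x)} \right)^{\frac{1}{\alpha_2-\alpha_0}} \]
equalizes the two terms on the right-hand side and produces the product of powers in \eqref{2.5a}.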
\\
Next, applying H\"older's inequality in \eqref{2.5a} with $\big(\frac{p_0(\alpha_2-\alpha_0)}{p_1(\alpha_2-\alpha_1)},\frac{p_2(\alpha_2-\alpha_0)}{p_1(\alpha_2-\alpha_1)}\big)$ yields
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\\
&\lesssim \int \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1} dx
\\
&\leq \big\| \mathbf{G}_{\alpha_0,p_0}(f) \big\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{L^{p_0}} \big\| \mathbf{G}_{\alpha_2,p_2}(f) \big\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{L^{p_2}}
\\
&\leq \|f\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_0,p_0}} \|f\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
Note that the last inequality follows from Remark \ref{Rem4}. Hence, we get \eqref{2.4} for $\alpha_2<1$.
\\
If $\alpha_2=1$, then
it follows from \eqref{2.6} and \eqref{1.24} that
\begin{align*}
\int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{p_1}
\end{align*}
which implies \eqref{2.5} after minimizing over $t>0$.
\\
By applying H\"older's inequality with $\big(\frac{p_0(1-\alpha_0)}{p_1(1-\alpha_1)},\frac{p_2(1-\alpha_0)}{p_1(1-\alpha_1)}\big)$, we obtain
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \\
&\lesssim \int_{\mathbb{R}^n} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1} dx
\\
&\leq \big\|\mathbf{G}_{\alpha_0,p_0}(f)\big\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{L^{p_0}} \big\|\mathbf{M}(|Df|) \big\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}}
\\
&\lesssim \|f\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}} \,.
\end{align*}
This yields \eqref{2.4} for $\alpha_2=1$.
\\
This completes the proof of Lemma \ref{Lem-Hom-Sobolev}.
\end{proof}
Now, we apply Lemma \ref{Lem-Hom-Sobolev} with $\alpha_2=1$ to obtain
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1, p_1}}\lesssim \|f\|^{\frac{1-\alpha_1}{1-\alpha_0}}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{\frac{\alpha_1-\alpha_0}{1-\alpha_0}}_{L^{p_2}} \,,
\end{align*}
where $\alpha_0, p_0$ are chosen as in Lemma \ref{Lem-Hom-Sobolev}.
\\
Next, it follows from \eqref{2.1} (combined with Proposition \ref{Pro-cha}) that
\[ \|f\|_{\dot{W}^{\alpha_0, p_0}} \lesssim \|f\|^{1-\frac{\alpha_0}{\alpha_1}}_{\dot{B}^0} \|f\|^{\frac{\alpha_0}{\alpha_1}}_{\dot{W}^{\alpha_1, p_1}} \,.\]
Combining the last two inequalities and absorbing the term $\|f\|_{\dot{W}^{\alpha_1,p_1}}$ into the left-hand side yields the desired result.
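To spell out this combination: substituting the second inequality into the first gives
\[ \|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{\frac{(\alpha_1-\alpha_0)(1-\alpha_1)}{\alpha_1(1-\alpha_0)}}_{\dot{B}^0}\, \|f\|^{\frac{\alpha_0(1-\alpha_1)}{\alpha_1(1-\alpha_0)}}_{\dot{W}^{\alpha_1,p_1}}\, \|Df\|^{\frac{\alpha_1-\alpha_0}{1-\alpha_0}}_{L^{p_2}} \,.\]
Since $\frac{\alpha_0(1-\alpha_1)}{\alpha_1(1-\alpha_0)}<1$, the middle factor can be absorbed into the left-hand side (for $f$ with $\|f\|_{\dot{W}^{\alpha_1,p_1}}<\infty$), and raising the resulting inequality to the power $\frac{\alpha_1(1-\alpha_0)}{\alpha_1-\alpha_0}$ yields exactly \eqref{2.3}.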
\\
{\bf The case $\alpha_2>1$}.
\\
If $\alpha_2$ is not an integer, then we apply Theorem \ref{Mainthe} with $\sigma=\floor{\alpha_2}$ to get
\begin{align}\label{2.7}
\| D^{\floor{\alpha_2}} f \|_{L^q} \lesssim
\| D^{\floor{\alpha_2}} f \|^{\frac{s_2}{\alpha_2}}_{\dot{B}^{-\floor{\alpha_2}}} \|D^{\floor{\alpha_2}} f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{s_2,p_2}} \lesssim
\|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\frac{\alpha_2}{\floor{\alpha_2}}$.
Recall that $\alpha_2=\floor{\alpha_2}+s_2$.
If $\floor{\alpha_2}=1$, then it follows from \eqref{2.3} and the last inequality that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^q} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left( \|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\alpha_1}=\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^0}\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align*}
with $q=\alpha_1 p_1=\alpha_2 p_2$, since $\floor{\alpha_2}=1$.
\\
This yields \eqref{4.1} when $\floor{\alpha_2}=1$.
If $\floor{\alpha_2}>1$, then we apply Theorem \ref{TheC} to get
\begin{align*}
\|Df\|_{L^{q_1}} \lesssim \|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \,,
\end{align*}
with $q_1=\alpha_1 p_1$, and $q_2 = \frac{q_1}{\floor{\alpha_2}} = \frac{\alpha_2 p_2}{\floor{\alpha_2}}$.
\\
A combination of the last inequality, \eqref{2.7}, and \eqref{2.3} implies that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}} &\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^{q_1}} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left(\|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \right)^{\alpha_1}
\\
& \lesssim\|f\|^{1-\frac{\alpha_1}{\floor{\alpha_2}}}_{\dot{B}^0} \left(\|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1}{\floor{\alpha_2}}}
= \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
Hence, we obtain \eqref{4.1} when $\floor{\alpha_2}>1$.
\\
The case where $\alpha_2>1$ is an integer can be treated similarly; we leave the details to the reader.
\section{Appendix}
\begin{proposition}\label{Pro-cha} Let $\alpha\in(0,1)$ and $1\leq p<\infty$. Then, the following holds
\begin{equation}\label{5.1}
\|f\|_{\dot{W}^{\alpha,p}} \approx \|f\|_{\dot{B}^{\alpha}_{p,p}} ,\quad \forall f\in \mathcal{S}(\mathbb{R}^n) \,.
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{Pro-cha}] To obtain the result, we follow the proof of Grevholm \cite{Grevholm}.
\\
First of all, for any $s\in(0,1)$, $1\leq p<\infty$, it is known that (see, e.g., \cite{Leoni, Triebel})
\[
\|f\|_{\dot{W}^{s,p}} \approx \left( \sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+s p}} \right)^{1/p} ,\quad \forall f\in W^{s,p}(\mathbb{R}^n)\,,
\]
where $\Delta_{te_k} f (x) = f(x+te_k)-f(x)$, and $e_k$ is the $k$-th vector of the canonical basis in $\mathbb{R}^n$, $k=1,\dots,n$.
\\
Thanks to this result, \eqref{5.1} is equivalent to the following equivalence
\begin{align}\label{5.1a}
\sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \approx \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,.
\end{align}
We first show that
\begin{equation} \label{5.1c}
\sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \lesssim \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,.
\end{equation}
It suffices to prove that
\begin{equation} \label{5.1b}
\int^\infty_0 \big\|\Delta_{te_1} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \lesssim \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,.
\end{equation}
Indeed, let $\varphi\in\mathcal{S}(\mathbb{R}^n)$ be such that ${\rm supp}(\hat{\varphi})\subset \big\{ \frac{1}{2}< |\xi|< 2 \big\}$, $\hat{\varphi}(\xi) \not=0$ in $\big\{ \frac{1}{4}< |\xi|<1 \big\}$, $\varphi_j(x)= 2^{-jn}\varphi(2^{-j}x)$ for $j\in\mathbb{Z}$, and $\displaystyle\sum_{j\in\mathbb{Z}} \hat{\varphi_j}(\xi) =1$ for $\xi\not=0$.
\\
Next, let us set
$$\widehat{\psi}_j(\xi) = \big(e^{it\xi_1}-1\big) \widehat{\varphi}_j(\xi)\,, \quad \xi=(\xi_1,...,\xi_n) \,.$$
Note that for any $g\in\mathcal{S}(\mathbb{R}^n)$ $$\mathcal{F}^{-1}\big\{(e^{it\xi_1}-1) \widehat{g}\big\} = g(x+te_1)- g(x) \,,$$
where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.
\\
Since ${\rm supp}(\widehat{\varphi}_j) \cap {\rm supp}(\widehat{\varphi}_l) =\emptyset$ whenever $|l-j|\geq 2$, we have
\begin{align}\label{5.2}
\psi_j * f &= \psi_j * \left(\sum_{i\in\mathbb{Z}} \varphi_i \right) * f = \psi_j * \big(\varphi_{j-1} + \varphi_j +\varphi_{j+1} \big) * f \,.
\end{align}
Applying Young's inequality yields
\begin{align}\label{5.3}
\| \psi_j * \varphi_{j} * f \|_{L^p}& \leq \| \psi_j \|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber
\\
&= \big\| \mathcal{F}^{-1}\big\{ (e^{it\xi_1}-1) \widehat{\varphi}_j(\xi) \big\}\big\|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber
\\
&= \| \varphi_j(.+te_1)-\varphi_{j}(.)\|_{L^1}
\| \varphi_{j} * f \|_{L^p} \leq C\| \varphi_{j} * f \|_{L^p} \,,
\end{align}
where $C=C_\varphi$ is independent of $j$.
\\
On the other hand, we observe that
\begin{align*}
\big|\varphi_j(x+te_1)-\varphi_{j}(x)\big|&=\big| \int^1_0 D\varphi_{j} (x + \tau t e_1) \cdot te_1 \, d\tau \big|
\\
&\leq t\int^1_0 \big|D\varphi_{j} (x + \tau t e_1) \big| \, d\tau = t 2^{-j} 2^{-jn} \int^1_0 \big|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big| \, d\tau \,.
\end{align*}
Therefore,
\begin{align}\label{5.5}
\| \varphi_j(.+te_1)-\varphi_{j}(.)\|_{L^1} &\leq t 2^{-j} 2^{-jn} \int^1_0 \big\|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big\|_{L^1} \, d\tau \nonumber
\\
& = t 2^{-j} \int^1_0 \|D\varphi\|_{L^1} \, d\tau = C(\varphi) \, t 2^{-j} \,.
\end{align}
Combining \eqref{5.2}, \eqref{5.3} and \eqref{5.5} yields
\begin{equation}\label{5.6}
\| \psi_j * f \|_{L^p} \lesssim \min\{1,t2^{-j}\} \sum_{l=j-1}^{j+1}\| \varphi_{l} * f \|_{L^p}\,,\quad j\in\mathbb{Z}\,.
\end{equation}
Now, recall that\, $f(x+te_1)-f(x) = \displaystyle\sum_{j\in\mathbb{Z}} \psi_j * f(x)$ in $\mathcal{S}^\prime(\mathbb{R}^n)$.
Then, we deduce from \eqref{5.6} that
\begin{align*}
\int^\infty_0 \int_{\mathbb{R}^n} \frac{|f(x+te_1)-f(x)|^p}{t^{1+\alpha p}} \, dx dt &= \int_{0}^{\infty} \big\| \sum_{j\in \mathbb{Z}} \psi_j * f\big\|_{L^p}^p \frac{dt}{t^{1+\alpha p}}
\\
&\lesssim \sum_{k\in\mathbb{Z}} \int^{2^k}_{2^{k-1}} \sum_{j\in \mathbb{Z}} \min\{1,t^p 2^{-jp}\} \| \varphi_j * f\|_{L^p}^p \frac{dt}{t^{1+\alpha p}}
\\
&\lesssim \sum_{k\in\mathbb{Z}} 2^{-k\alpha p} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} \| \varphi_j * f\|_{L^p}^p
\\
&= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} 2^{-(k-j)\alpha p} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right]
\\
&= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{2^{-(k-j)\alpha p},2^{(k-j)(1-\alpha)p}\} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right]
\\
&\leq \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} 2^{-|k-j|\delta} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right],\quad \delta=\min\{\alpha p , (1-\alpha)p\}
\\
&\leq C_\delta \sum_{k\in\mathbb{Z}} \left[ 2^{-k\alpha p} \|\varphi_k * f\|_{L^p}^p \right] = C_\delta \|f\|_{\dot{B}^{\alpha}_{p,p}}^p \,.
\end{align*}
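In the final two steps we interchanged the order of summation and used the elementary geometric bound
\[ \sum_{k\in\mathbb{Z}} 2^{-|k-j|\delta} = 1 + 2\sum_{m\geq 1} 2^{-m\delta} = \frac{1+2^{-\delta}}{1-2^{-\delta}} =: C_\delta <\infty \,,\quad j\in\mathbb{Z}\,,\]
which is valid since $\delta=\min\{\alpha p, (1-\alpha)p\}>0$ for $\alpha\in(0,1)$.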
Similarly, we also obtain
\[ \int^\infty_0 \int_{\mathbb{R}^n} \frac{|f(x+te_k)-f(x)|^p}{t^{1+\alpha p}} \, dx dt \lesssim \|f\|_{\dot{B}^{\alpha}_{p,p}}^p ,\quad k=2,\dots,n \,.\]
This yields \eqref{5.1c}.
\\
For the converse, let $\{\varphi_j\}_{j\in\mathbb{Z}}$ be the sequence above. By following \cite[page 246]{Grevholm}, we can construct a function $\psi\in\mathcal{S}(\mathbb{R}^n)$ such that $\hat{\psi}(\xi) =1$ on $\{1/2 \leq|\xi|\leq 2\}$, and $\widehat{\psi}=\displaystyle\sum^n_{k=1} \widehat{h}^{k}$, where $h^{k}\in\mathcal{S}(\mathbb{R}^n)$ satisfies
\begin{align}\label{5.8a}
\sup_{t\in(2^{j-1}, 2^j)} \big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{L^1} \leq C , \quad k=1,\dots,n \,,
\end{align}
where $h^k_j(x) = 2^{-jn}h^k(2^{-j}x)$, and the constant $C>0$ is independent of $k$ and $j$. Actually, we only need \eqref{5.8a} to hold with $\big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{\mathcal{M}}$ in place of
$\big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{L^1}$, where $\mathcal{M}$ is the space of bounded measures on $\mathbb{R}^n$, and $\|\mu\|_{\mathcal{M}}$ is the total variation of $\mu$.
\\
Next, from the construction of the functions $h^k$, $k=1,\dots,n$, there exists a universal constant $C_1>0$ such that
\begin{align*}
\big\|\mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} \big\|_{L^1} \leq C_1 \big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{L^1}\,.
\end{align*}
With the last inequality noted, we deduce from \eqref{5.8a} that
\begin{align}\label{5.8c}
\sup_{t\in(2^{j-1}, 2^j)} \big\|\mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} \big\|_{L^1} \leq C C_1 , \quad k=1,\dots,n \,.
\end{align}
Now, observe that
$$h^k_j *f = \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} * \Delta_{te_k} f \,.$$
Thus, it follows from the triangle inequality, \eqref{5.8c}, and Young's inequality that
\begin{align}\label{5.9}
\big\|\psi_j* f\big\|_{L^p} &= \big\|\sum_{k=1}^n h^k_j *f \big\|_{L^p} \leq \sum_{k=1}^n \big\| h^k_j *f \big\|_{L^p} = \sum_{k=1}^n\big\| \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} * \Delta_{te_k} f \big\|_{L^p} \nonumber
\\
&\leq \sum_{k=1}^n\big\| \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} \big\|_{L^1} \big\| \Delta_{te_k} f \big\|_{L^p} \nonumber
\\
&\lesssim
\sum_{k=1}^n \big\| \Delta_{te_k} f \big\|_{L^p} \,, \quad \text{for all } t\in(2^{j-1},2^j)\,.
\end{align}
On the other hand, it is clear that $\hat{\psi}(\xi) \hat{\varphi} (\xi)=\hat{\varphi} (\xi)$ since ${\rm supp}(\hat{\varphi})\subset \{1/2 \leq|\xi|\leq 2\}$.
\\
Hence, we obtain from \eqref{5.9} that
\begin{align*}
\| \varphi_{j} *f \|^p_{L^p} = \| \varphi_{j} * \psi_j * f \|^p_{L^p} \leq \| \varphi_{j} \|^p_{L^1} \|\psi_j * f\|^p_{L^p} \lesssim \sum_{k=1}^n \big\|\Delta_{te_k} f \big\|^p_{L^p}
\end{align*}
for all $t\in (2^{j-1},2^j)$.
\\
Thus,
\begin{align*}
\sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \| \varphi_{j} *f \|^p_{L^p} &\lesssim \sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \sum_{k=1}^n \fint^{2^j}_{2^{j-1}} \big\|\Delta_{te_k} f \big\|^p_{L^p} \,dt \lesssim \sum_{k=1}^n \int^\infty_0 \|\Delta_{te_k} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}}
\end{align*}
which yields
\[ \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \lesssim \sum_{k=1}^n \int^\infty_0 \|\Delta_{te_k} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}} \,. \]
This completes the proof of Proposition \ref{Pro-cha}.
\end{proof}
\textbf{Acknowledgement.} The research is funded by University of Economics Ho Chi Minh City, Vietnam.
\begin{thebibliography}{99}
\bibitem{Brezis1} {H. Brezis and P. Mironescu,} Gagliardo--Nirenberg inequalities and non-inequalities: The full story, Ann. I. H. Poincar\'e-AN {\bf 35} (2018), 1355-1376.
\bibitem{Brezis2} {H. Brezis and P. Mironescu,} {Where Sobolev interacts with Gagliardo--Nirenberg}, Jour. Funct. Anal. {\bf 277} (2019), 2839-2864.
\bibitem{Brezis3} {H. Brezis, J. Van Schaftingen and Po-Lam Yung,} {A surprising formula for Sobolev norms}, Proc. Nat. Acad. Sci. U.S.A. {\bf118}, no. 8 (2021) e2025254.
\bibitem{CDDD} {A. Cohen, W. Dahmen, I. Daubechies and R. DeVore,} {Harmonic analysis of the space BV}, Rev. Mat. Iberoam. {\bf19} (2003), 235-263.
\bibitem{CDPX} {A. Cohen, R. DeVore, P. Petrushev and H. Xu}, Nonlinear approximation and the space ${\rm BV} (\mathbb{R}^2)$, Amer. J. Math. {\bf 121} (1999), 587-628.
\bibitem{Dao1}{N. A. Dao, J. I. D\'iaz and Q. H. Nguyen,} Fractional Sobolev inequalities revisited: the maximal function approach, Rend. Lincei Mat. Appl. {\bf 31} (2020), 225-236.
\bibitem{DaoLamLu1} {N. A. Dao, N. Lam and G. Lu,} Gagliardo--Nirenberg and Sobolev interpolation
inequalities on Besov spaces, Proc. Amer. Math. Soc. {\bf 150} (2022), 605-616.
\bibitem{DaoLamLu2} {N. A. Dao, N. Lam and G. Lu}, Gagliardo--Nirenberg type
inequalities on Lorentz, Marcinkiewicz, and Weak-$L^\infty$ spaces, Proc. Amer. Math. Soc. {\bf150} (2022), 2889-2900.
\bibitem{Gag} {E. Gagliardo}, Ulteriori proprietà di alcune classi di funzioni in più variabili, Ric. Mat. 8 (1959) 24–51.
\bibitem{Grevholm} B. Grevholm, On the structure of the spaces $\mathcal{L}^{p,\lambda}_k$, Math. Scand. {\bf26} (1970), 241-254.
\bibitem{Le} {M. Ledoux}, On improved Sobolev embedding theorems, Math. Res. Lett. \textbf{10} (2003), 659-669.
\bibitem{Leoni} {G. Leoni}, A first course in Sobolev Spaces, Graduate Studies in Mathematics, Vol. 105, American Mathematical Society, Providence, Rhode Island.
\bibitem{Lu2} {G. Lu}, Polynomials, higher order Sobolev extension theorems and interpolation inequalities on weighted Folland-Stein spaces on stratified groups. Acta Math. Sin. (Engl. Ser.) 16 (2000), no. 3, 405-444.
\bibitem{LuWheeden} {G. Lu and R. Wheeden}, Simultaneous representation and approximation formulas and high-order Sobolev embedding theorems on stratified groups. Constr. Approx. 20 (2004), no. 4, 647-668.
\bibitem{MeRi2003} {Y. Meyer and T. Rivi\`ere}, A partial regularity result for a class of stationary Yang–Mills fields, Rev. Mat. Iberoamericana {\bf 19} (2003), 195-219.
\bibitem{Miyazaki} {Y. Miyazaki}, A short proof of the Gagliardo-Nirenberg inequality with {\rm BMO} term, Proc. Amer. Math. Soc. {\bf 148} (2020), 4257-4261.
\bibitem{Nir} L. Nirenberg, On elliptic partial differential equations, Ann. Sc. Norm. Super. Pisa 3 (13) (1959), 115-162.
\bibitem{Peetre} J. Peetre, New thoughts on Besov spaces, Published by
Mathematics Department, Duke University, 1976.
\bibitem{Stein} E. Stein. Singular Integrals and Differentiability Properties of Functions. Princeton University Press, Princeton, 1970.
\bibitem{Strz} {P. Strzelecki}, {Gagliardo-Nirenberg inequalities with a {\rm BMO} term}, Bull. London Math. Soc. {\bf 38} (2006), 294-300.
\bibitem{Van} {J. Van Schaftingen}, {Fractional Gagliardo--Nirenberg interpolation inequality and bounded mean oscillation}, ArXiv:2208.14691.
\bibitem{Triebel} H. Triebel, Theory of function spaces, Monogr. Math., vol. 78, Birkhäuser Verlag, Basel, 1983.
\end{thebibliography}
\end{document}
\documentclass[12pt,reqno]{amsart}
\usepackage{txfonts}
\usepackage{amsmath, amsfonts, amssymb, amsthm, amscd, amsbsy, latexsym, amsxtra} \usepackage{fancyhdr} \usepackage[usenames,dvipsnames,svgnames,x11names,hyperref]{xcolor} \usepackage{geometry} \usepackage{graphicx} \usepackage[pagebackref]{hyperref} \usepackage{showidx} \usepackage{showkeys} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \setcounter{MaxMatrixCols}{30}
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}} \newcommand{\floor}[1]{\lfloor #1 \rfloor}
\hypersetup{colorlinks,breaklinks,
linkcolor={Fuchsia},
citecolor={ForestGreen},
urlcolor={NavyBlue}} \geometry{
a4paper,
total={8.5in,11.5in},
left=1in,
right=1in,
top=1in,
bottom=1in, } \newtheorem{theorem}{Theorem}[section] \theoremstyle{plain} \newtheorem{acknowledgement}{Acknowledgement}[section] \newtheorem{algorithm}{Algorithm}[section] \newtheorem{axiom}{Axiom}[section] \newtheorem{case}{Case}[section] \newtheorem{claim}{Claim}[section] \newtheorem{conclusion}{Conclusion}[section] \newtheorem{condition}{Condition}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{criterion}{Criterion}[section] \newtheorem{definition}{Definition}[section] \newtheorem{example}{Example}[section] \newtheorem{exercise}{Exercise}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{notation}{Notation}[section] \newtheorem{problem}{Problem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{remark}{Remark}[section] \newtheorem{solution}{Solution}[section] \newtheorem{summary}{Summary}[section] \numberwithin{equation}{section} \allowdisplaybreaks \def\Xint#1{\mathchoice
{\XXint\displaystyle\textstyle{#1}} {\XXint\textstyle\scriptstyle{#1}} {\XXint\scriptstyle\scriptscriptstyle{#1}} {\XXint\scriptscriptstyle\scriptscriptstyle{#1}} \!\int} \def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$ }
\vcenter{\hbox{$#2#3$ }}\kern-.6\wd0}} \def\Xint={\Xint=} \def\Xint-{\Xint-} \begin{document}
\title[]{Gagliardo--Nirenberg type inequalities using Fractional Sobolev spaces and Besov spaces}
\author{Nguyen Anh Dao}
\address{Nguyen Anh Dao: School of Economic Mathematics and Statistics, University of Economics Ho Chi Minh City, Viet Nam}
\email{[email protected]}
\date{\today}
\begin{abstract} Our main purpose is to establish Gagliardo--Nirenberg type inequalities in homogeneous fractional Sobolev spaces and homogeneous Besov spaces. In particular, we extend some of the recent results obtained in \cite{Brezis1, Brezis2, Brezis3, DaoLamLu1, Miyazaki, Van}.
\end{abstract}
\subjclass[2010]{Primary 46E35; Secondary 46B70.}
\keywords{Gagliardo--Nirenberg's inequality, Besov spaces, maximal function.\\}
\maketitle
\section{Introduction}
In this paper, we are interested in
the following Gagliardo--Nirenberg inequality:
\\
For every $0\leq \alpha_1<\alpha_2$, and for $1\leq p_1, p_2, q \leq \infty$, there holds
\begin{equation}\label{-10}
\|f\|_{\dot{W}^{\alpha_1, p_1}} \lesssim \| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{q} } \|f \|^\frac{\alpha_1}{\alpha_2}_{\dot{W}^{\alpha_2, p_2}}
\,,
\end{equation}
where $$\frac{1}{p_1} = \frac{1}{q} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2} \frac{\alpha_1}{\alpha_2} \,,$$
and $\dot{W}^{\alpha,p}(\mathbb{R}^n)$ denotes the homogeneous Sobolev space (see the definition in Section 2).
\\
It is well known that inequalities of this type play an important role in the analysis of PDEs. When $\alpha_i$, $i=1,2$, are nonnegative integers, \eqref{-10} was obtained independently by Gagliardo \cite{Gag} and Nirenberg \cite{Nir}.
Since then, inequalities of this type have been studied by many authors; see \cite{Brezis1, Brezis2, Brezis3, CDDD,DaoLamLu1,DaoLamLu2,Le, LuWheeden,Lu2, Miyazaki,MeRi2003, Van} and the references cited therein.
\\
The case $q=\infty$ can be considered as a limiting case of \eqref{-10}, i.e.,
\begin{equation}\label{-12}
\|\nabla^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{L^\infty} \|\nabla^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,, \quad \forall f\in L^\infty(\mathbb{R}^n)\cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)\,,
\end{equation}
with $p_1=\frac{p_2\alpha_2}{\alpha_1}$.
Obviously, this inequality fails if $\alpha_1=0$. \\
A partial improvement of \eqref{-12} in terms of the ${\rm BMO}$ space was obtained by Meyer and Rivi\`ere \cite{MeRi2003}:
\begin{equation}\label{-13}
\| \nabla f\|^2_{L^{4}} \lesssim \|f\|_{ \rm{BMO} } \| \nabla^2 f\|_{ L^2 } \,,
\end{equation}
for all $f\in {\rm BMO}(\mathbb{R}^n) \cap W^{2,2}(\mathbb{R}^n)$. Thanks to \eqref{-13}, the authors proved a regularity result for a class of stationary Yang--Mills fields in high dimension.
\\
After that, \eqref{-13} was extended to higher derivatives by the authors in \cite{Strz,Miyazaki}. Precisely, there holds
\begin{equation}\label{-14}
\|\nabla^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{{\rm BMO}} \|\nabla^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,,
\end{equation}
for all $f\in {\rm BMO}(\mathbb{R}^n) \cap W^{\alpha_2,p_2}(\mathbb{R}^n)$, $p_2>1$.
\\
Recently, the authors in \cite{DaoLamLu1} improved \eqref{-14} by means of the homogeneous Besov spaces. For convenience, we recall the result here.
\begin{theorem}[see Theorem 1.2, \cite{DaoLamLu1}] \label{TheC} \sl
Let $m, k$ be integers with $1\leq k<m$. For every $s\geq 0$, let $f \in \mathcal{S}'(\mathbb{R}^n)$ be such that
$ D^m f\in L^{p}(\mathbb{R}^n)$, $1\leq p<\infty$; and $f\in\dot{B}^{-s}(\mathbb{R}^n)$. Then, we have $D^k f\in L^r(\mathbb{R}^n)$, $r=p \left( \frac{m+s}{k+s} \right)$, and
\begin{equation}\label{-15}
\|D^k f\|_{L^r} \lesssim \|f\|^{\frac{m-k}{m+s}}_{\dot{B}^{-s}}
\left\|D^m f \right\|^\frac{k+s}{m+s}_{L^p} \,,
\end{equation}
where we denote $\dot{B}^{\sigma} = \dot{B}^{\sigma,\infty}_{\infty}$, $\sigma\in\mathbb{R}$ (see the definition of Besov spaces in Section 2).
\end{theorem}
\begin{remark}
Obviously, \eqref{-15} is stronger than \eqref{-14} when $s=0$ since ${\rm BMO}(\mathbb{R}^n) \hookrightarrow \dot{B}^{0}(\mathbb{R}^n)$. We emphasize that \eqref{-15} is still true for $k=0$ when $s>0$.
\end{remark}
We would like to mention that
in studying the space ${\rm BV}(\mathbb{R}^2)$, A. Cohen et al. \cite{CDPX}
proved \eqref{-15} for the case $k=0, m=p=1, s=n-1, r=\frac{n}{n-1}$ by using wavelet decompositions (see \cite{Le} for the case $k=0, m=1, p\geq 1, r=p\big(\frac{1+s}{s}\big)$, with $s>0$).
\\
Inequality \eqref{-10} in terms of fractional Sobolev spaces has been investigated by the authors in \cite{Brezis1, Brezis2, Brezis3,Van} and the references therein. Surprisingly, there is a borderline condition for the Gagliardo--Nirenberg type inequality in this setting. That is
\begin{align}\label{-16}
\|f\|_{W^{\alpha_1,p_1}}\lesssim \|f\|^\theta_{W^{\alpha,p}} \|f\|^{1-\theta}_{W^{\alpha_2,p_2}} \,,
\end{align}
with \[ \alpha_1=\theta \alpha +(1-\theta)\alpha_2,\, \frac{1}{p_1}=\frac{\theta}{p}+\frac{1-\theta}{p_2},\, \text{and } \theta\in(0,1) \,.\]
In \cite{Brezis1}, Brezis--Mironescu proved that
\eqref{-16} holds if and only if
\begin{equation}\label{special-cond} \alpha-\frac{1}{p}< \alpha_2-\frac{1}{p_2} \,.\end{equation}
As a consequence of this result,
the following inequality
\[
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{L^\infty} \|\nabla f\|^{\alpha_1}_{L^1}
\]
fails whenever $0<\alpha_1<1$ and $p_1=\frac{1}{\alpha_1}$.
\\
The limiting case of \eqref{-16} reads as:
\begin{equation}\label{-17}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{L^\infty}\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{equation}
where $\alpha_1<\alpha_2$, and $\alpha_1 p_1=\alpha_2 p_2$.
\\
When $\alpha_2<1$, Brezis--Mironescu improved \eqref{-17} by means of ${\rm BMO}(\mathbb{R}^n)$ using the Littlewood--Paley decomposition. Very recently, Van Schaftingen \cite{Van} studied \eqref{-17} for $\alpha_2=1$. Precisely, he proved that
\begin{equation}\label{-20}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{{\rm BMO}} \|\nabla f\|^{\alpha_1}_{L^{p_2}}
\end{equation}
where $0<\alpha_1<1$, $p_1\alpha_1=p_2$, $p_2>1$.
\\
Inspired by the above results, we would like to study \eqref{-10} by means of fractional Sobolev spaces and Besov spaces. Moreover, we also improve the limiting cases \eqref{-17}, \eqref{-20} in terms of $\dot{B}^0(\mathbb{R}^n)$.
\subsection*{Main result}
Our first result is to improve \eqref{-10} to fractional Sobolev spaces, and homogeneous Besov spaces.
\begin{theorem}\label{Mainthe} Let $\sigma>0$, and $0\leq \alpha_1<\alpha_2<\infty$. Let $1\leq p_1, p_2 \leq \infty$ be such that $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, and $p_2(\alpha_2+\sigma)>1$. If $f\in \dot{B}^{-\sigma}(\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. Moreover, there is a positive constant $C=C(n,\alpha_1,\alpha_2,p_2, \sigma)$ such that
\begin{equation}\label{-3}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}
\,.
\end{equation}
\end{theorem}
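The relation $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$ is dictated by scale invariance, by the following standard dilation argument. For $f_\lambda(x):=f(\lambda x)$, $\lambda>0$, one has
\begin{align*}
\|f_\lambda\|_{\dot{W}^{\alpha,p}} = \lambda^{\alpha-\frac{n}{p}}\|f\|_{\dot{W}^{\alpha,p}} \,, \qquad \|f_\lambda\|_{\dot{B}^{-\sigma}} = \lambda^{-\sigma}\|f\|_{\dot{B}^{-\sigma}} \,,
\end{align*}
so applying \eqref{-3} to $f_\lambda$ for every $\lambda>0$ forces
\[
\alpha_1-\frac{n}{p_1} = -\sigma\cdot\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma} + \Big(\alpha_2-\frac{n}{p_2}\Big)\frac{\alpha_1+\sigma}{\alpha_2+\sigma} \,,
\]
which is equivalent to the indicated relation between $p_1$ and $p_2$.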
\begin{remark} Note that
\eqref{-3} is not true for the limiting case $\sigma=\alpha_1=0$, $p_1=\infty$, even if \eqref{special-cond} holds, i.e., $\alpha_2-\frac{1}{p_2}>0$. Indeed, if it were true, then \eqref{-3} would become
\[
\|f\|_{L^{\infty}} \lesssim \|f\|_{\dot{B}^{0}} \,.
\]
Obviously, this inequality cannot hold, since
$L^\infty(\mathbb{R}^n)\hookrightarrow {\rm BMO}(\mathbb{R}^n)\hookrightarrow \dot{B}^0(\mathbb{R}^n)$, and these embeddings are strict.
\end{remark}
However, if $\alpha_1$ is positive, then
\eqref{-3} holds true with $\sigma=0$, as stated in the following theorem.
\begin{theorem}\label{Mainthe1} Let
$\alpha_2> \alpha_1>0$, and let $1\leq p_1, p_2\leq \infty$ be such that $p_1=\frac{\alpha_2 p_2}{\alpha_1}$, and $\alpha_2 p_2>1$. If $f\in \dot{B}^{0}(\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. Moreover, we have
\begin{equation}\label{4.1}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}}
\,.
\end{equation}
\end{theorem}
Our paper is organized as follows. We provide the definitions of the fractional Sobolev spaces and the homogeneous Besov spaces in the next section. Section 3 is devoted to the proofs of Theorems \ref{Mainthe}, \ref{Mainthe1}. Moreover, we also obtain the homogeneous version of \eqref{-16} with an elementary proof from Lemma \ref{Lem11}.
Finally, we give a characterization of the fractional homogeneous Sobolev spaces via the homogeneous Besov spaces in the last section.
\section{Definitions and preliminary results}
\subsection{Fractional Sobolev spaces}
\begin{definition}\label{Def-frac-Sob} For any $0<\alpha<1$, and for $1\leq p<\infty$,
we denote by $\dot{W}^{\alpha,p}(\mathbb{R}^n)$ (resp. $W^{\alpha,p}(\mathbb{R}^n)$) the homogeneous fractional
Sobolev space (resp. the inhomogeneous fractional Sobolev space), endowed with the semi-norm:
\[ \|f\|_{\dot{W}^{\alpha,p}} =
\left(\displaystyle \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{|f(x+h)-f(x)|^p}{|h|^{n+\alpha p}} dhdx \right)^{\frac{1}{p}} \,,
\]
and the norm
\[
\|f\|_{W^{\alpha,p}} = \left(\|f\|^p_{L^p} + \|f\|^p_{\dot{W}^{\alpha,p}} \right)^\frac{1}{p}\,.
\]
\end{definition}
When $\alpha\geq 1$, we can define the higher-order fractional Sobolev space as follows:
\\
Denote by $\floor{\alpha}$ the integer part of $\alpha$. Then, we define
\[
\|f\|_{\dot{W}^{\alpha,p}} =\left\{ \begin{array}{cl}
&\|D^{\floor{\alpha}} f\|_{L^p} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+.
\vspace*{0.1in}\\
& \|D^{\floor{\alpha}}f\|_{\dot{W}^{\alpha-\floor{\alpha},p}} ,\quad\text{otherwise}\,.
\end{array}\right.
\]
In addition, we also define
\[
\|f\|_{W^{\alpha,p}} =\left\{ \begin{array}{cl}
&\left( \displaystyle\sum_{j=0}^{\alpha} \|D^{j} f\|^p_{L^p} \right)^{\frac{1}{p}} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+.
\\
& \left( \|f\|^p_{W^{\floor{\alpha},p}} + \|D^{\floor{\alpha}} f\|^p_{\dot{W}^{\alpha-\floor{\alpha},p}} \right)^{\frac{1}{p}} ,\quad\text{otherwise}\,.
\end{array}\right.
\]
\subsection*{Notation} Throughout the paper, we adopt the notation
$\dot{W}^{\alpha,\infty}(\mathbb{R}^n)=\dot{C}^{\alpha}(\mathbb{R}^n)$, $\alpha\in(0,1)$; and $\dot{W}^{0,p}(\mathbb{R}^n)=L^p(\mathbb{R}^n)$, $1\leq p\leq \infty.$
\\
In addition, we always denote by $C$ a positive constant, which may change
from line to line. Moreover, the notation $C(\alpha, p,n)$ means that $C$ depends only
on $\alpha, p,n$.
Next, we write $A \lesssim B$ if there exists a constant $c > 0$ such
that $A \leq cB$. And we write $A \approx B$ if $A \lesssim B \lesssim A$.
\subsection{Besov spaces}
To define the homogeneous Besov spaces, we recall the Littlewood--Paley decomposition (see \cite{Triebel}). Let $\phi_j$ be the inverse Fourier transform of the $j$-th component of the dyadic decomposition, i.e., $\hat{\phi}_j(\xi)=\hat{\phi}(2^{-j}\xi)$, where
$ {\rm supp}( \hat{\phi})\subset \left\{ \frac{1}{2} < |\xi| < 2 \right\}$ and
$$\sum_{j\in \mathbb{Z}} \hat{\phi}(2^{-j} \xi ) =1 \quad \text{for all }\, \xi\neq 0 \,.$$
\\
Next, let us put
$$ \mathcal{Z}(\mathbb{R}^n) = \left\{ f \in \mathcal{S}(\mathbb{R}^n):\, D^\alpha \hat{f}(0) = 0 \text{ for every multi-index } \alpha \in \mathbb{N}^n \right\} \,,$$
where $\mathcal{S}(\mathbb{R}^n)$ is the Schwartz space as usual.
\begin{definition}\label{Def1} For every $s\in\mathbb{R}$, and for every $1\leq p, q\leq \infty$, the homogeneous Besov space is denoted by
$$\dot{B}^s_{p,q} =\left\{ f\in \mathcal{Z}'(\mathbb{R}^n): \|f\|_{\dot{B}^s_{p,q}} <\infty \right\} \,,$$
with
$$
\|f\|_{\dot{B}^s_{p,q}} = \left\{ \begin{array}{cl}
&\left( \displaystyle \sum_{j\in\mathbb{Z}}
2^{jsq} \|\phi_j * f\|^q_{L^p} \right)^\frac{1}{q}\,, \text{ if }\, 1\leq q<\infty,
\\
& \displaystyle\sup_{ j \in\mathbb{Z} } \left\{ 2^{js} \|\phi_j * f\|_{L^p} \right\} \,, \text{ if }\, q=\infty \,.
\end{array} \right. $$
When $p=q=\infty$, we denote $\dot{B}^s_{\infty,\infty}=\dot{B}^s$ for short.
\end{definition}
The following characterization of $\dot{B}^{s}_{\infty,\infty}$ is useful for our proof below.
\begin{theorem}[see Theorem 4, p. 164, \cite{Peetre}]\label{ThePeetre} Let $\big\{\varphi_\varepsilon\big\}_\varepsilon$ be a sequence of functions such that
\[\left\{
\begin{array}{cl}
&{\rm supp}(\varphi_\varepsilon)\subset B(0,\varepsilon) , \quad \big\{ \frac{1}{2\varepsilon}\leq |\xi|\leq \frac{2}{\varepsilon} \big\}\subset \big\{\widehat{\varphi_\varepsilon}(\xi) \not=0 \big\} ,
\vspace*{0.1in}\\
&\int_{\mathbb{R}^n} x^\gamma \varphi_\varepsilon (x)\, dx =0 ,\, \text{for all multi-indices }\, |\gamma|<k, \text{ where $k$ is a given integer},
\vspace*{0.1in}\\
& \big|D^\gamma \varphi_\varepsilon(x)\big| \leq C \varepsilon^{-(n+|\gamma|)}\, \text{ for every multi-index } \gamma\,.
\end{array}
\right.\]
Assume $s<k$. Then, we have
\[
f\in \dot{B}^s(\mathbb{R}^n) \Leftrightarrow
\sup_{\varepsilon>0}
\left\{\varepsilon^{-s} \|\varphi_\varepsilon * f\|_{L^\infty} \right\} < \infty \,.
\]
\end{theorem}
We end this section by recalling the following result (see \cite{DaoLamLu1}).
\begin{proposition}[Lifting operator] \label{Pro1}
Let $s\in\mathbb{R}$, and let $\gamma$ be a multi-index. Then,
$\partial^\gamma$ maps $\dot{B}^s(\mathbb{R}^n) \rightarrow \dot{B}^{s-|\gamma|}(\mathbb{R}^n)$.
\end{proposition}
\section{Proof of the Theorems}
\subsection{Proof of Theorem \ref{Mainthe}}
We first prove Theorem \ref{Mainthe} for the case $0\leq \alpha_1<\alpha_2\leq 1$. After that, we consider $\alpha_i\geq 1$, $i=1,2$.
\\
{\bf i) Step 1: $0\leq \alpha_1<\alpha_2 \leq 1$.} We divide our argument into the following cases.
\\
{\bf a) The case $p_1=p_2=\infty$, $0< \alpha_1<\alpha_2 <1$.} Then, \eqref{-3} becomes
\begin{equation}\label{1.0}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{C}^{\alpha_2}} \,.
\end{equation}
To prove \eqref{1.0}, we use the characterization of the homogeneous Besov space $\dot{B}^{s}$ in Theorem \ref{ThePeetre}, and the fact that $\dot{B}^s(\mathbb{R}^n)$ coincides with $\dot{C}^s(\mathbb{R}^n)$ for $s\in(0,1)$ (see \cite{Grevholm}).
\\
Then, let us recall the sequence $\{\varphi_\varepsilon\}_{\varepsilon>0}$ from Theorem \ref{ThePeetre}.
\\
For $\delta>0$, we write
\begin{align}\label{2.-1}
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} &=\varepsilon^{\alpha_2-\alpha_1} \varepsilon^{-\alpha_2}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon<\delta\big\}}+ \varepsilon^{-(\alpha_1+\sigma)} \varepsilon^{\sigma}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon\geq \delta\big\}}
\\
&\leq \delta^{\alpha_2-\alpha_1} \|f\|_{\dot{B}^{\alpha_2}}
+\delta^{-(\alpha_1+\sigma)} \|f\|_{\dot{B}^{-\sigma}} \,. \nonumber
\end{align}
Minimizing the right-hand side of the last inequality with respect to $\delta$ yields
\begin{align*}
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty}\lesssim \|f\|_{\dot{B}^{-\sigma}}^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}\,.
\end{align*}
Since the last inequality holds for every $\varepsilon>0$, we obtain \eqref{1.0}.
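For completeness, the optimization in $\delta$ is elementary: for $A, B>0$, and $a=\alpha_2-\alpha_1>0$, $b=\alpha_1+\sigma>0$, the function $\delta\mapsto A\delta^{a}+B\delta^{-b}$ attains its minimum at $\delta_*=\big(\frac{bB}{aA}\big)^{\frac{1}{a+b}}$, so that
\begin{align*}
\min_{\delta>0}\left( A\delta^{a}+B\delta^{-b} \right) \approx A^{\frac{b}{a+b}}\, B^{\frac{a}{a+b}} \,.
\end{align*}
Taking $A=\|f\|_{\dot{B}^{\alpha_2}}$ and $B=\|f\|_{\dot{B}^{-\sigma}}$ gives precisely the exponents $\frac{\alpha_1+\sigma}{\alpha_2+\sigma}$ and $\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}$ above.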
\begin{remark}\label{Rem2} It is not difficult to observe that the above proof also adapts to the following two cases:
\begin{enumerate}
\item[$\bullet$] $\alpha_1=0$, $\alpha_2<1$, $\sigma>0$. Then, we have
\begin{equation}\label{2.-3}
\|f\|_{L^\infty} \lesssim \|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\sigma}{\alpha_2+\sigma}}\,.
\end{equation}
\item[$\bullet $] $0<\alpha_1<\alpha_2<1$, $\sigma=0$. Then,
\begin{equation}\label{2.-2}
\|f\|_{\dot{B}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1}{\alpha_2}}\,.
\end{equation}
\end{enumerate}
This is Theorem \ref{Mainthe1} when $p_i=\infty$, $i=1, 2$.
\end{remark}
{\bf b) The case $p_i<\infty, \,i=1,2$.} Let us first consider $0<\alpha_2<1$. Then, the proof follows by way of the following lemmas.
\begin{lemma}\label{Lem10}
Let $0<\alpha<1$, and $1\leq p<\infty$. For every $s>0$, if $f\in \dot{B}^{-s}(\mathbb{R}^n)\cap \dot{W}^{\alpha,p}(\mathbb{R}^n)$, then there exists a positive constant $C=C(s,\alpha,p)$ such that
\begin{equation}\label{1.1}
|f(x)| \leq C
\| f \|_{\dot{B}^{-s}}^\frac{\alpha}{s+\alpha} \big[
\mathbf{G}_{\alpha,p}(f)(x)\big]^{\frac{s}{s+\alpha}}\,, \quad \text{for } x\in\mathbb{R}^n\,,
\end{equation}
with $$\mathbf{G}_{\alpha,p}(f)(x)= \displaystyle\sup_{\varepsilon>0} \left(\fint_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{\alpha p}} dy \right)^\frac{1}{p} \,.$$
\end{lemma}
\begin{remark}\label{Rem4} Obviously, for $p\geq 1$ we have $\|\mathbf{G}_{\alpha,p}(f)\|_{L^p}\lesssim \|f\|_{\dot{W}^{\alpha,p}}$, and $\mathbf{G}_{\alpha,1}(f)(x)\leq \mathbf{G}_{\alpha,p}(f)(x)$ for $x\in\mathbb{R}^n$.
\end{remark}
\begin{remark}\label{Rem3}
Applying Lemma \ref{Lem10} with $s=\sigma$, $\alpha=\alpha_2$, $p=p_2$, and taking the $L^{p_1}$-norm of \eqref{1.1} yields
\[
\|f\|_{L^{p_1}} \lesssim
\|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \left( \int
\big|\mathbf{G}_{\alpha_2,p_2}(f)(x)\big|^{\frac{\sigma p_1}{\sigma+\alpha_2}} \, dx \right)^{1/p_1} \lesssim \|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \|f\|^{\frac{\sigma}{\sigma+\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\]
with $p_2= {\frac{\sigma p_1}{\sigma+\alpha_2}}$.
\\
Hence, we obtain Theorem \ref{Mainthe} for the case $\alpha_1=0$.
\end{remark}
\begin{remark}\label{Rem1} If $\alpha=1$, then it follows from the mean value theorem that
\[ \mathbf{G}_{\alpha,p}(f)(x) \lesssim \mathbf{M}(|Df|)(x),\quad \text{for a.e. } x\in\mathbb{R}^n \,, \]
see \eqref{1.24}. Hence,
\[ |f(x)| \leq C
\| f \|_{\dot{B}^{-s}}^\frac{1}{s+1} \big[
\mathbf{M}(|Df|)(x)\big]^{\frac{s}{s+1}}\,, \quad \text{for } x\in\mathbb{R}^n\,. \]
As a result, one obtains
\begin{equation}\label{1.1a}
\|f\|_{L^{p_1}} \lesssim \| f \|_{\dot{B}^{-s}}^\frac{1}{s+1}
\|Df\|^{\frac{s}{s+1}}_{L^{p_2}}\,,
\end{equation}
with $p_1=p_2\big(\frac{s+1}{s}\big)$, $p_2\geq 1$ (see, e.g., \cite{DaoLamLu1,Le}). This is also Theorem \ref{Mainthe} when $\alpha_1=0$, $\alpha_2=1$, and $\sigma=s>0$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{Lem10}] Let us recall the sequence $\{\varphi_\varepsilon\}_{\varepsilon>0}$ above. Then, we write
\begin{align*}
|f(x)| = | \varphi_\varepsilon * f(x) + f(x)-\varphi_\varepsilon * f(x)|
\leq | \varphi_\varepsilon * f(x)| + | f(x)-\varphi_\varepsilon * f(x)| :=\mathbf{I}_1+ \mathbf{I}_2 \,.
\end{align*}
We first estimate $\mathbf{I}_1$ in terms of $\dot{B}^{-s}$. Thanks to Theorem \ref{ThePeetre}, we get
\begin{align}\label{1.2}
\mathbf{I}_1 = \varepsilon^{-s} \varepsilon^{s} | \varphi_\varepsilon * f(x)| \leq C \varepsilon^{-s} \| f \|_{\dot{B}^{-s}} \,.
\end{align}
For $\mathbf{I}_2$, applying H\"older's inequality yields
\begin{align}\label{1.3}
\mathbf{I}_2 &\leq \int_{B(0,\varepsilon)} |f(x)-f(x-y)| \varphi_\varepsilon (y) \, dy = \varepsilon^{\frac{n}{p}+\alpha} \int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|}{\varepsilon^{\frac{n}{p}+\alpha}} \varphi_\varepsilon (y) \, dy \nonumber
\\
&\leq \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{p'}} \left(\int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{n+\alpha p}} dy \right)^\frac{1}{p} \nonumber
\\
&\lesssim \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{\infty}} \big|B(0,\varepsilon)\big|^{\frac{1}{p'}} \mathbf{G}_{\alpha,p}(x) \lesssim \varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(x) \,.
\end{align}
Note that the last inequality follows by using the fact $\|\varphi_\varepsilon\|_{L^{\infty}}\leq C \varepsilon^{-n}$.
\\
By combining \eqref{1.2} and \eqref{1.3}, we obtain
\[|f(x)|\leq C\left(\varepsilon^{-s} \| f \|_{\dot{B}^{-s}} +\varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(x)\right) \,. \]
Since the indicated inequality holds for every $\varepsilon>0$, minimizing its right-hand side with respect to $\varepsilon$ yields the desired result.
\\
Hence, we complete the proof of Lemma \ref{Lem10}.
\end{proof}
Next, we have the following result.
\begin{lemma}\label{Lem11}
Let $0<\alpha_1<\alpha_2< 1$. Let $1\leq p_1, p_2 <\infty$, and $r>1$ be such that
\begin{equation}\label{1.6}
\frac{1}{p_1}= \frac{1}{r} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2}\frac{\alpha_1}{\alpha_2} \,.
\end{equation}
If $f\in L^r(\mathbb{R}^n)\cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. In addition, there exists a constant $C=C(\alpha_1,\alpha_2,p_1,p_2,n)>0$ such that
\begin{equation}\label{1.8}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} }
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lem11}]
For any set $\Omega$ in $\mathbb{R}^n$, let us denote $\fint_\Omega f(x) \, dx = \frac{1}{|\Omega|} \int_{\Omega} f(x) \, dx$.
\\
For any $x, z\in\mathbb{R}^n$, we have from the triangle inequality and change of variables that
\begin{align*}
\big|f(x+z)-f(x)\big|&\leq \big|f(x+z) - \fint_{B(x,|z|)} f(y)\,dy \big|+\big|f(x) - \fint_{B(x,|z|)} f(y)\,dy \big|
\\
&\leq \fint_{B(x,|z|)} \big|f(x+z) -f(y) \big| \, dy+ \fint_{B(x,|z|)} \big|f(x)-f(y) \big| \, dy
\\
&\leq C(n) \left( \fint_{B(0,2|z|)} \big|f(x+z) -f(x+z+y) \big| \, dy+ \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy \right)\,.
\end{align*}
With the last inequality noted, and by using a change of variables, we get
\begin{align}\label{1.10}
\int\int \frac{|f(x+z)-f(x)|^{p_1}}{|z|^{n+\alpha_1 p_1}} dzdx \lesssim \int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}}\,.
\end{align}
Next, for every $p\geq 1$ we show that
\begin{align}\label{1.13}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p}(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}}.
\end{align}
Thanks to Remark \ref{Rem4}, it suffices to show that \eqref{1.13} holds for $p=1$.
\\
Indeed, we have
\begin{align}\label{1.14}
\int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|^{\alpha_2}} \, dy\right)^{p_1} \frac{|z|^{\alpha_2 p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber \\
&\lesssim \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1} \int_{\{|z|<t\}} \frac{1}{|z|^{n+(\alpha_1-\alpha_2) p_1}} dz \nonumber
\\
&\lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1}\,.
\end{align}
On the other hand, it is not difficult to observe that
\begin{align}\label{1.15}
\int_{|z|\geq t} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&\lesssim \big[\mathbf{M}(f)(x)\big]^{p_1} \left( \int_{|z|\geq t}\frac{dz}{|z|^{n+\alpha_1 p_1}} \right) \nonumber
\\
&\lesssim t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align}
From \eqref{1.14} and \eqref{1.15}, we obtain
\begin{align*}
\int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align*}
Minimizing the right hand side of the last inequality yields \eqref{1.13}.
\\
Then, it follows from \eqref{1.13} that
\[
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \int \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p_2}(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}} dx\,.
\]
Note that $\alpha_2 p_2>\alpha_1 p_1$, and $r=\frac{p_1p_2(\alpha_2-\alpha_1)}{\alpha_2p_2-\alpha_1p_1}$, see \eqref{1.6}.
Then, applying H\"older's inequality with $\big((\frac{\alpha_2p_2}{\alpha_1p_1})^\prime, \frac{\alpha_2p_2}{\alpha_1p_1}\big)$ to the right hand side of the last inequaltiy yields
\begin{align*}
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}}
&\lesssim \big\|\mathbf{M}(f)\big\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|\mathbf{G}_{\alpha_2,p_2}\|^{\frac{\alpha_1p_1}{\alpha_2}}_{L^{p_2}} \,.
\end{align*}
Thanks to Remark \ref{Rem4}, and by the fact that $\mathbf{M}$ maps $L^r(\mathbb{R}^n)$ into $L^r(\mathbb{R}^n)$ for $r>1$, we deduce from the last inequality that
\begin{align}\label{1.18}
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \|f\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|f\|^{\frac{\alpha_1 p_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align}
Combining \eqref{1.10} and \eqref{1.18} yields \eqref{1.8}.
\\
Hence, we obtain Lemma \ref{Lem11}.
\end{proof}
Now, we can apply Lemma \ref{Lem10} and Lemma \ref{Lem11} successively to obtain Theorem \ref{Mainthe} for the case $0<\alpha_2<1$. Indeed, we apply \eqref{1.1} with $s=\sigma$, $\alpha=\alpha_2$, $p=p_2$. Then,
\begin{align}\label{1.20}
\|f\|_{L^q} & \lesssim
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}^{\frac{\sigma}{\alpha_2+\sigma}}_{\alpha_2,p_2}\big\|_{L^q} =
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}_{\alpha_2,p_2}\big\|^{\frac{\sigma}{\alpha_2+\sigma}}_{L^{p_2}} \leq
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\big(\frac{\alpha_2+\sigma}{\sigma}\big)$.
\\
Since $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, it follows from \eqref{1.6} that $r=q>1$.
\\
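Indeed, solving \eqref{1.6} for $r$, we compute
\begin{align*}
\frac{1}{r}\Big(1-\frac{\alpha_1}{\alpha_2}\Big) = \frac{1}{p_1}-\frac{\alpha_1}{\alpha_2 p_2} = \frac{\alpha_1+\sigma}{p_2(\alpha_2+\sigma)}-\frac{\alpha_1}{\alpha_2 p_2} = \frac{\sigma(\alpha_2-\alpha_1)}{\alpha_2 p_2 (\alpha_2+\sigma)} \,,
\end{align*}
so that $r= p_2\big(\frac{\alpha_2+\sigma}{\sigma}\big)=q$.
\\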
Next, applying Lemma \ref{Lem11} yields
\begin{equation}\label{1.19}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim
\| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} }
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\,.
\end{equation}
Hence, we obtain Theorem \ref{Mainthe} for the case $\alpha_2<1$, $p_i<\infty$, $i=1,2$.
\\
{\bf c) $\alpha_2=1$.} Then, \eqref{-3} reads as:
\begin{equation} \label{1.22a} \|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|Df\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}}
\,.
\end{equation}
To obtain the result, we first recall \cite[Lemma 3.1]{DaoLamLu1}. Precisely, if $f\in \dot{B}^{-\sigma}(\mathbb{R}^n)$, $\sigma>0$, and $D f\in L^{p_2}(\mathbb{R}^n)$, $1\leq p_2\leq \infty$, then we have $f\in L^r(\mathbb{R}^n)$, and
\begin{equation}\label{1.22} \|f\|_{L^r} \lesssim \|f\|^{\frac{1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|D f\|^{\frac{\sigma}{1+\sigma}}_{L^{p_2}} ,\quad r=p_2\big(\frac{1+\sigma}{\sigma}\big) \,.
\end{equation}
Now, we prove a version of \eqref{1.13} in terms of $\mathbf{M}(|Df|)(x)$ instead of $\mathbf{G}_{1,p}(x)$. Precisely, we show that
\begin{align}\label{1.23}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1}
\end{align}
for $x\in\mathbb{R}^n$.
\\
Indeed, it follows from the mean value theorem and a change of variables that
\begin{align*}
\fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy &\lesssim \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|y|} \, dy
\\
&= \fint_{B(0,2|z|)} \frac{\big|\int^1_0 D f(x+\tau y) \cdot y\, d\tau\big| }{|y|} \, dy
\\
&\leq \int^1_0\fint_{B(x,2\tau|z|)} | D f(\zeta) | \, d\zeta d\tau
\\
& \leq \int^1_0 \mathbf{M}(|Df|)(x) \, d\tau = \mathbf{M}(|Df|)(x) \,.
\end{align*}
Thus,
\begin{align}\label{1.24}
\int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy\right)^{p_1} \frac{|z|^{p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&\lesssim \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \int_{\{|z|<t\}} |z|^{-n+(1-\alpha_1)p_1} \, dz \nonumber
\\
&\lesssim t^{(1-\alpha_1)p_1} \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \,.
\end{align}
From \eqref{1.24} and \eqref{1.15}, we obtain
\begin{align}\label{1.25}
\int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(1-\alpha_1) p_1} \left[ \mathbf{M}(|Df|)(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align}
Hence, \eqref{1.23} follows by minimizing the right hand side of \eqref{1.25} with respect to $t$.
\\
Now, if $p_2=1$, then it is known that the estimate
\begin{equation}\label{1.21}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\| f \|^{1-\alpha_1}_{ L^{r} }
\|Df\|^{\alpha_1}_{L^{1}}
\end{equation}
holds for $1\leq r<\infty$, with $\frac{1}{p_1}= \frac{1-\alpha_1}{r} + \alpha_1$ (see, e.g., \cite{Brezis3, CDPX}).
\\
Therefore, \eqref{1.22a} follows from \eqref{1.22} and \eqref{1.21}.
Otherwise, if $p_2>1$, then we apply H\"older's inequality to \eqref{1.23} in order to get
\begin{align*}
\int \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz dx}{|z|^{n+\alpha_1 p_1}}
&\lesssim \int\big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1} dx
\\
&\leq \|\mathbf{M}(f)\|^{(1-\alpha_1)p_1}_{L^r} \|\mathbf{M}(|Df|)\|^{\alpha_1 p_1}_{L^{p_2}}
\\
&\lesssim \|f\|^{(1-\alpha_1)p_1}_{L^r} \|Df\|^{\alpha_1 p_1}_{L^{p_2}}\,,
\end{align*}
where $r=p_2\big(\frac{1+\sigma}{\sigma}\big)$ fulfills \eqref{1.6}.
\\
With the last inequality noted, we deduce from \eqref{1.10} and \eqref{1.22} that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{1-\alpha_1}_{L^r} \|Df\|^{\alpha_1}_{L^{p_2}} \lesssim \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|D f\|^{\frac{\sigma(1-\alpha_1)}{1+\sigma}}_{L^{p_2}} \|Df\|^{\alpha_1}_{L^{p_2}} = \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|D f\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}}\,.
\end{align*}
Hence, we obtain \eqref{-3} when $\alpha_2=1$.
\\
This puts an end to the proof of {\bf Step 1}.
\\
{\bf Step 2.}
Now, we can prove Theorem \ref{Mainthe} for the case $\alpha_1\geq 1$.
To begin, let us write $\alpha_i=\floor{\alpha_i}+s_i$, $i=1,2$. Then, we divide the proof into the following cases.
\\
{\bf i) The case $\floor{\alpha_2}=\floor{\alpha_1}$:}
By applying the case of Theorem \ref{Mainthe} proved in {\bf Step 1} to $D^{\floor{\alpha_1}} f$, with $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$; and by Proposition \ref{Pro1}, we obtain
\begin{align*}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\| D^{\floor{\alpha_1}} f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} \\
&\lesssim \big\| f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}}
\big\| D^{\floor{\alpha_2}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} = \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}}
\big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align*}
with $p_1=p_2\big(\frac{s_2+\sigma_{{\rm new}}}{s_1+\sigma_{{\rm new}}}\big)=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$.
\\
Hence, we get the conclusion for this case.
\\
{\bf ii) The case $\floor{\alpha_2}>\floor{\alpha_1}$:} If $s_2>0$, then we can apply the case of Theorem \ref{Mainthe} proved in {\bf Step 1} to $D^{\floor{\alpha_2}} f$, with $\sigma_{{\rm new}}=\sigma+\floor{\alpha_2}$. Therefore,
\begin{align}\label{1.30}
\big\| D^{\floor{\alpha_2}} f \big\|_{L^q} \lesssim \big\|D^{\floor{\alpha_2}} f \big\|^{\frac{s_2}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{B}^{-(\sigma+\floor{\alpha_2})}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\sigma+\floor{\alpha_2}}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{W}^{s_2,p_2}} \lesssim \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big)$. Again, the last inequality follows from the lifting property in Proposition \ref{Pro1}.
\\
Next, applying the case of Theorem \ref{Mainthe} proved in {\bf Step 1} to $D^{\floor{\alpha_1}} f$, with $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$, yields
\begin{align}\label{1.31}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \nonumber
\\
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \,,
\end{align}
with $q_1= p_1\big(\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}\big)$.
\\
If $\floor{\alpha_2}=\floor{\alpha_1}+1$, then observe that $q=q_1$. Thus, we deduce from \eqref{1.30} and \eqref{1.31} that
\begin{align*}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}}
\left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}= \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
This yields \eqref{-3}.
Note that $\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}+\frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)}=\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}$ since $\floor{\alpha_2}=\floor{\alpha_1}+1$.
\\
If $\floor{\alpha_2}>\floor{\alpha_1}+1$, then we apply \cite[Theorem 1.2]{DaoLamLu1} with $k=\floor{\alpha_1}+1$ and $m=\floor{\alpha_2}$. Thus,
\begin{align}\label{1.32}
\big\| D^{\floor{\alpha_1}+1} f \big\|_{L^{q_1}}\lesssim \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\,,
\end{align}
with $q_2=q_1 \big(\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}\big)$.
\\
Combining \eqref{1.31} and \eqref{1.32} yields
\begin{align}\label{1.33}
\big\|f \big\|_{\dot{W}^{\alpha_1,p_1}}
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \left( \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}}
\big\|D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\right)^{\frac{\alpha_1+\sigma}{1+\floor{\alpha_1}+\sigma}} \nonumber
\\
&= \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}}
\big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}} \,.
\end{align}
Observe that $q=q_2=p_2 \big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big) $. Thus, it follows from \eqref{1.33} and \eqref{1.30} that
\begin{align*}
\big\| f \big\|_{\dot{W}^{\alpha_1,p_1}}
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}} \left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}
\\
&=\big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
A straightforward computation shows that
\[\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)+ \frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)} = \frac{\alpha_2-\alpha_1}{\alpha_2+\sigma} \,.\]
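For the reader's convenience, one way to organize this computation is to first combine the two leftmost terms. Writing $a=\floor{\alpha_1}+1$ and $b=\floor{\alpha_2}$, so that $\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}=\frac{a-\alpha_1}{a+\sigma}$, we have
\begin{align*}
\frac{a-\alpha_1}{a+\sigma}+\frac{(b-a)(\alpha_1+\sigma)}{(b+\sigma)(a+\sigma)}=\frac{(a-\alpha_1)(b+\sigma)+(b-a)(\alpha_1+\sigma)}{(b+\sigma)(a+\sigma)}=\frac{(b-\alpha_1)(a+\sigma)}{(b+\sigma)(a+\sigma)}=\frac{\floor{\alpha_2}-\alpha_1}{\floor{\alpha_2}+\sigma}\,,
\end{align*}
and adding the remaining term $\frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)}$ then reduces to the identity noted in the case $\floor{\alpha_2}=\floor{\alpha_1}+1$.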
This completes the proof of Theorem \ref{Mainthe} for $s_2>0$.
\\
The case $s_2=0$ can be treated similarly; in fact, \eqref{1.30} is not needed in that case.
\\
Hence, the proof of Theorem \ref{Mainthe} is complete.
\subsection{Proof of Theorem \ref{Mainthe1}}We first recall the notation $\alpha_i=\floor{\alpha_i}+s_i$, $i=1, 2$, and divide the proof into the following two cases.
\\
{\bf i) The case $p_1=p_2=\infty$.} If $0<\alpha_1<\alpha_2<1$, then \eqref{4.1} becomes
\begin{equation}\label{2.0}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,.
\end{equation}
Inequality \eqref{2.0} follows easily from the proof of \eqref{1.0} with $\sigma=0$; we leave the details to the reader.
\\
If $0<\alpha_1<1\leq \alpha_2$, and $\alpha_2$ is an integer, then \eqref{4.1} reads as:
\begin{equation}\label{2.0a}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,.
\end{equation}
To obtain \eqref{2.0a}, we utilize the vanishing moments of $\varphi_\varepsilon$ in Theorem \ref{ThePeetre}. In fact, let us fix $k>\alpha_2$.
Then, it follows from the Taylor series that
\begin{align}\label{2.0c}
|\varphi_\varepsilon * f(x)| &=\left|\int\big( f(x-y)-f(x)\big) \varphi_\varepsilon(y)\, dy \right| \nonumber
\\
&=\left| \int \left(\sum_{|\gamma|< \alpha_2} \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma + \sum_{|\gamma|= \alpha_2} \frac{D^\gamma f(\zeta)}{\gamma!} (-y)^\gamma \right) \varphi_\varepsilon(y)\,dy \right|
\nonumber \\
&=\left| \int \sum_{|\gamma|= \alpha_2} \frac{D^{\gamma} f(\zeta)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\end{align}
for some $\zeta$ on the segment joining $x$ and $x-y$. Note that $$ \int \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy=0$$ for every multi-index $\gamma$ with $|\gamma|<k$, thanks to the vanishing moments of $\varphi_\varepsilon$.
\\
Hence, we get from \eqref{2.0c} that
\[ |\varphi_\varepsilon * f(x)| \lesssim \|\nabla^{\alpha_2} f\|_{L^\infty} \int_{B(0,\varepsilon)} |y|^{\alpha_2} |\varphi_\varepsilon(y)|\,dy\lesssim \varepsilon^{\alpha_2} \|\nabla^{\alpha_2} f\|_{L^\infty}\,.\]
Inserting the last inequality into \eqref{2.-1} yields
\[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|\nabla^{\alpha_2} f\|_{L^\infty}
+\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,.
\]
By minimizing the right-hand side of the last inequality in $\delta$, we get
\[
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,.
\]
This implies \eqref{2.0a}.
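We note in passing that the minimization here is elementary: assuming both norms are nonzero, the balancing choice
\[
\delta=\left(\frac{\|f\|_{\dot{B}^0}}{\|\nabla^{\alpha_2} f\|_{L^\infty}}\right)^{\frac{1}{\alpha_2}}
\]
makes the two terms on the right-hand side equal, and yields the stated bound up to a multiplicative constant.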
\\
If $0<\alpha_1<1\leq \alpha_2$, and $\alpha_2$ is not an integer, then \eqref{4.1} reads as:
\begin{equation}\label{2.0b}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \left\|\nabla^{\floor{\alpha_2}} f\right\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,.
\end{equation}
To obtain \eqref{2.0b}, we apply \eqref{2.0c} to $\floor{\alpha_2}$. Thus,
\begin{align*}
|\varphi_\varepsilon * f(x)| &=\left| \int \sum_{|\gamma|=\floor{\alpha_2}} \frac{D^{\gamma} f(\zeta)}{\floor{\alpha_2}!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\\
&=\left| \int \sum_{|\gamma|= \floor{\alpha_2}} \frac{D^{\gamma} f(\zeta) - D^{\gamma} f(x)}{\floor{\alpha_2}!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\\
&\lesssim \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int |x-\zeta|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy
\\
&\leq\|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int_{B(0,\varepsilon)} |y|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy \lesssim \varepsilon^{\alpha_2}
\|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \,.
\end{align*}
Thus,
\[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}}
+\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,. \]
Arguing as in the proof of \eqref{2.0a}, we also obtain \eqref{2.0b}.
\\
In conclusion, Theorem \ref{Mainthe1} is proved in the case $0<\alpha_1<1$.
\\
Now, if $\alpha_1\geq 1$, then \eqref{4.1} becomes
\begin{equation}\label{2.0d}
\| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|D^{\floor{\alpha_2}} f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,.
\end{equation}
Again, we note that $\|\cdot\|_{\dot{C}^{s_i}}$ is replaced by $\|\cdot\|_{L^\infty}$ whenever $s_i=0$, $i=1,2$.
\\
To obtain \eqref{2.0d}, we apply Theorem \ref{Mainthe} to $f_{{\rm new}}=D^{\floor{\alpha_1}} f$ with $\sigma =\floor{\alpha_1}$. Hence, it follows from Proposition \ref{Pro1} that
\begin{align*}
\|f\|_{\dot{C}^{\alpha_1}} =\| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}}
&\lesssim
\big\|D^{\floor{\alpha_1}} f\big\|^{\frac{\alpha_2-\floor{\alpha_1}-s_1}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{B}^{-\floor{\alpha_1}}} \big\|D^{\floor{\alpha_1}} f\big\|^{\frac{s_1+\sigma}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{C}^{\alpha_2-\floor{\alpha_1}}}
\\
&\lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\big\|D^{\floor{\alpha_2}}f\big\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} = \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,.
\end{align*}
This completes the proof of Theorem \ref{Mainthe1} in the case $p_1=p_2=\infty$.
\\
{\bf ii) The case $p_i<\infty, i=1,2$.}
We first consider the case $0<\alpha_1<1$.
\\
{\bf a)} If $\alpha_2\in(\alpha_1, 1)$, then we utilize the equivalence $\|\cdot\|_{\dot{W}^{s,p}} \approx \|\cdot\|_{\dot{B}^{s}_{p,p}}$ for $s\in(0,1)$, $p\geq 1$; see Proposition \ref{Pro-cha} in the Appendix. Therefore, \eqref{4.1} is equivalent to the following inequality
\begin{equation}\label{2.1}
\|f\|_{\dot{B}^{\alpha_1}_{p_1,p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^{0}}\|f\|^\frac{\alpha_1}{\alpha_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \,.
\end{equation}
Note that $\alpha_1 p_1 = \alpha_2 p_2$. Hence,
\begin{align}\label{2.2}
2^{-j \alpha_1 p_1} \|f*\varphi_j\|_{L^{p_1}}^{p_1} \leq 2^{-j \alpha_2 p_2} \|f*\varphi_j\|_{L^{p_2}}^{p_2} \|f*\varphi_j\|_{L^{\infty}}^{p_1-p_2} \leq 2^{-j \alpha_2 p_2} \|f*\varphi_j\|_{L^{p_2}}^{p_2} \|f\|_{\dot{B}^0}^{p_1-p_2} \,.
\end{align}
This implies that
\[ \|f\|^{p_1}_{\dot{B}^{\alpha_1}_{p_1,p_1}} \leq \|f\|^{p_1-p_2}_{\dot{B}^{0}} \|f\|^{p_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \]
which is \eqref{2.1}.
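For the reader's convenience, the passage from the last display to \eqref{2.1} only uses the relations
\[
\frac{p_2}{p_1}=\frac{\alpha_1}{\alpha_2} \quad\text{and}\quad \frac{p_1-p_2}{p_1}=1-\frac{\alpha_1}{\alpha_2}\,,
\]
both consequences of the scaling condition $\alpha_1 p_1 = \alpha_2 p_2$, after taking $p_1$-th roots.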
\\
{\bf b)} If $\alpha_2= 1$, then we show that
\begin{equation}\label{2.3}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim\|f\|^{1-\alpha_1}_{\dot{B}^{0}} \|Df\|^{\alpha_1}_{L^{p_2}}\,.
\end{equation}
To obtain \eqref{2.3}, we need the following lemma.
\begin{lemma}\label{Lem-Hom-Sobolev}
Let $0<\alpha_0<\alpha_1 <\alpha_2\leq 1$, and $p_0\geq 1$ be such that $\alpha_0 -\frac{1}{p_0}<\alpha_2-\frac{1}{p_2}$, and
\[
\frac{1}{p_1} = \frac{\theta}{p_0} + \frac{1-\theta}{p_2} ,\quad \theta= \frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0}
\,. \]
Then, we have
\begin{equation}\label{2.4}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0} }_{\dot{W}^{\alpha_0,p_0}} \|f\|^{\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0}}_{\dot{W}^{\alpha_2,p_2}},\quad \forall f\in \dot{W}^{\alpha_0,p_0} (\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n) \,.
\end{equation}
Note that $\|f\|_{\dot{W}^{\alpha_2,p_2}}$ becomes $\|Df\|_{L^{p_2}}$ if $\alpha_2=1$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lem-Hom-Sobolev}]
The proof follows by way of the following result.
\\
If $f\in \dot{W}^{\alpha_0,p_0} (\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then the following estimates hold:
\begin{align}\label{2.5a}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x) \big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}
\end{align}
provided that $\alpha_2<1$, and
\begin{align}\label{2.5}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}
\end{align}
if $\alpha_2=1$.
\\
The proof of \eqref{2.5a} (resp. \eqref{2.5}) is similar to that of \eqref{1.13} (resp. \eqref{1.23}); one only needs to replace $\mathbf{M}(f)(x)$ by $\mathbf{G}_{\alpha_0,p_0}(f)(x)$ in \eqref{1.13} (resp. \eqref{1.23}).
\\
In fact, we have from H\"{o}lder's inequality
\begin{align}\label{2.6}
\int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&\leq \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big|^{p_0} \, dy\right)^{\frac{p_1}{p_0}} \frac{dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&= \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x) - f(x+y) \big|^{p_0}}{|z|^{\alpha_0 p_0}} \, dy\right)^{\frac{p_1}{p_0}} \frac{|z|^{\alpha_0 p_1}dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&\lesssim \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \int_{\{|z|\geq t\}} |z|^{-n-(\alpha_1-\alpha_0)p_1} \, dz\nonumber
\\
&\lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \,.
\end{align}
If $\alpha_2<1$, then it follows from \eqref{2.6} and \eqref{1.14} that
\begin{align*}
\int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(\alpha_2-\alpha_1)p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{p_1} \,.
\end{align*}
Thus, \eqref{2.5a} follows by minimizing the right-hand side of the last inequality in $t$.
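Concretely, assuming both quantities are nonzero, the balancing value
\[
t=\left(\frac{\mathbf{G}_{\alpha_0,p_0}(f)(x)}{\mathbf{G}_{\alpha_2,p_2}(f)(x)}\right)^{\frac{1}{\alpha_2-\alpha_0}}
\]
makes the two terms on the right-hand side both equal to $\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}$.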
\\
Next, applying H\"older's inequality in \eqref{2.5a} with $\big(\frac{p_0(\alpha_2-\alpha_0)}{p_1(\alpha_2-\alpha_1)},\frac{p_2(\alpha_2-\alpha_0)}{p_1(\alpha_2-\alpha_1)}\big)$ yields
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\\
&\lesssim \int \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1} dx
\\
&\leq \big\| \mathbf{G}_{\alpha_0,p_0}(f) \big\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{L^{p_0}} \big\| \mathbf{G}_{\alpha_2,p_2}(f) \big\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{L^{p_2}}
\\
&\leq \|f\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_0,p_0}} \|f\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
Note that the last inequality is obtained by Remark \ref{Rem4}. Hence, we get \eqref{2.4} for $\alpha_2<1$.
\\
If $\alpha_2=1$, then
it follows from \eqref{2.6} and \eqref{1.24} that
\begin{align*}
\int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{p_1}
\end{align*}
which implies \eqref{2.5}.
\\
By applying H\"older's inequality with $\big(\frac{p_0(1-\alpha_0)}{p_1(1-\alpha_1)},\frac{p_2(1-\alpha_0)}{p_1(1-\alpha_1)}\big)$, we obtain
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \\
&\lesssim \int_{\mathbb{R}^n} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1} dx
\\
&\leq \big\|\mathbf{G}_{\alpha_0,p_0}(f)\big\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{L^{p_0}} \big\|\mathbf{M}(|Df|) \big\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}}
\\
&\lesssim \|f\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}} \,.
\end{align*}
This yields \eqref{2.4} for $\alpha_2=1$.
\\
This completes the proof of Lemma \ref{Lem-Hom-Sobolev}.
\end{proof}
Now, we apply Lemma \ref{Lem-Hom-Sobolev} when $\alpha_2=1$ in order to obtain
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1, p_1}}\lesssim \|f\|^{\frac{1-\alpha_1}{1-\alpha_0}}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{\frac{\alpha_1-\alpha_0}{1-\alpha_0}}_{L^{p_2}} \,,
\end{align*}
where $\alpha_0, p_0$ are chosen as in Lemma \ref{Lem-Hom-Sobolev}.
\\
Next, applying \eqref{2.1} to the pair $(\alpha_0, p_0)$, $(\alpha_1, p_1)$ yields
\[ \|f\|_{\dot{W}^{\alpha_0, p_0}} \lesssim \|f\|^{1-\frac{\alpha_0}{\alpha_1}}_{\dot{B}^0} \|f\|^{\frac{\alpha_0}{\alpha_1}}_{\dot{W}^{\alpha_1, p_1}} \,.\]
Combining the last two inequalities yields the desired result.
\\
{\bf The case $\alpha_2>1$}.
\\
If $\alpha_2$ is not an integer, then we apply Theorem \ref{Mainthe} with $\sigma=\floor{\alpha_2}$ to get
\begin{align}\label{2.7}
\| D^{\floor{\alpha_2}} f \|_{L^q} \lesssim
\| D^{\floor{\alpha_2}} f \|^{\frac{s_2}{\alpha_2}}_{\dot{B}^{-\floor{\alpha_2}}} \|D^{\floor{\alpha_2}} f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{s_2,p_2}} \lesssim
\|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\frac{\alpha_2}{\floor{\alpha_2}}$.
Recall that $\alpha_2=\floor{\alpha_2}+s_2$.
If $\floor{\alpha_2}=1$, then it follows from \eqref{2.3} and the last inequality that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^q} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left( \|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\alpha_1}=\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^0}\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align*}
with $q=\alpha_1 p_1=\alpha_2 p_2$ since $\floor{\alpha_2}=1$.
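The equality of the exponents of $\|f\|_{\dot{B}^0}$ above can be checked directly: since $s_2=\alpha_2-1$ when $\floor{\alpha_2}=1$,
\[
(1-\alpha_1)+\frac{s_2 \alpha_1}{\alpha_2} = \frac{\alpha_2-\alpha_1\alpha_2+(\alpha_2-1)\alpha_1}{\alpha_2}=\frac{\alpha_2-\alpha_1}{\alpha_2}\,.
\]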
\\
This yields \eqref{4.1} when $\floor{\alpha_2}=1$.
If $\floor{\alpha_2}>1$, then we can apply Theorem \ref{TheC} in order to get
\begin{align*}
\|Df\|_{L^{q_1}} \lesssim \|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \,,
\end{align*}
with $q_1=\alpha_1 p_1$, and $q_2 = \frac{q_1}{\floor{\alpha_2}} = \frac{\alpha_2 p_2}{\floor{\alpha_2}}$.
\\
A combination of the last inequality, \eqref{2.7}, and \eqref{2.3} implies that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}} &\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^{q_1}} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left(\|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \right)^{\alpha_1}
\\
& \lesssim\|f\|^{1-\frac{\alpha_1}{\floor{\alpha_2}}}_{\dot{B}^0} \left(\|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1}{\floor{\alpha_2}}}
= \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
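In the last equality we used the elementary identity, with $s_2=\alpha_2-\floor{\alpha_2}$,
\[
\left(1-\frac{\alpha_1}{\floor{\alpha_2}}\right)+\frac{s_2}{\alpha_2}\cdot\frac{\alpha_1}{\floor{\alpha_2}} = 1-\frac{\alpha_1}{\floor{\alpha_2}}\cdot\frac{\alpha_2-s_2}{\alpha_2}=1-\frac{\alpha_1}{\alpha_2}\,.
\]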
Hence, we obtain \eqref{4.1} when $\floor{\alpha_2}>1$.
\\
The case where $\alpha_2>1$ is an integer can be treated similarly; we leave the details to the reader.
\section{Appendix}
\begin{proposition}\label{Pro-cha} Let $\alpha\in(0,1)$ and $1\leq p<\infty$. Then, the following equivalence holds
\begin{equation}\label{5.1}
\|f\|_{\dot{W}^{\alpha,p}} \approx \|f\|_{\dot{B}^{\alpha, p}_p} ,\quad \forall f\in \mathcal{S}(\mathbb{R}^n) \,.
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{Pro-cha}]
For any $s\in(0,1)$, $1\leq p<\infty$, it is known that (see, e.g., \cite{Leoni})
\[
\|f\|_{\dot{W}^{s,p}} \approx \left( \sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+s p}} \right)^{1/p} ,\quad \forall f\in W^{s,p}(\mathbb{R}^n)\,,
\]
where $\Delta_{te_k} f (x) = f(x+te_k)-f(x)$, $k=1,\dots,n$.
\\
Thanks to this result, it suffices to prove that
\begin{align}\label{5.1a}
\sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \approx \|f\|^p_{\dot{B}^{\alpha,p}_p} \,.
\end{align}
We first show that
\begin{equation} \label{5.1b}
\sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \lesssim \|f\|^p_{\dot{B}^{\alpha,p}_p} \,.
\end{equation}
Indeed, let $\varphi\in\mathcal{S}(\mathbb{R}^n)$ be such that ${\rm supp}(\hat{\varphi})\subset \big\{ \frac{1}{2}< |\xi|< 2 \big\}$, $\hat{\varphi}(\xi) \not=0$ in $\big\{ \frac{1}{4}< |\xi|<1 \big\}$, $\varphi_j(x)= 2^{-jn}\varphi(2^{-j}x)$ for $j\in\mathbb{Z}$, and $\displaystyle\sum_{j\in\mathbb{Z}} \hat{\varphi_j}(\xi) =1$ for $\xi\not=0$.
\\
Next, let us set \,
$\widehat{\psi}_j(\xi) = \big(e^{it\xi_1}-1\big) \widehat{\varphi}_j(\xi)$, $\xi=(\xi_1,...,\xi_n)$. Note that for any $g\in\mathcal{S}(\mathbb{R}^n)$ $$\mathcal{F}^{-1}\big\{(e^{it\xi_1}-1) \widehat{g}\big\} = g(x+te_1)- g(x) \,,$$
where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.
\\
Since ${\rm supp}(\widehat{\varphi}_j) \cap {\rm supp}(\widehat{\varphi}_l) =\emptyset$ whenever $|l-j|\geq 2$, we have
\begin{align}\label{5.2}
\psi_j * f &= \psi_j * \left(\sum_{i\in\mathbb{Z}} \varphi_i \right) * f = \psi_j * \big(\varphi_{j-1} + \varphi_j +\varphi_{j+1} \big) * f \,.
\end{align}
Applying Young's inequality yields
\begin{align}\label{5.3}
\| \psi_j * \varphi_{j} * f \|_{L^p}& \leq \| \psi_j \|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber
\\
&= \big\| \mathcal{F}^{-1}\big\{ (e^{it\xi_1}-1) \widehat{\varphi}_j(\xi) \big\}\big\|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber
\\
&= \| \varphi_j(.+te_1)-\varphi_{j}(.)\|_{L^1}
\| \varphi_{j} * f \|_{L^p} \leq C\| \varphi_{j} * f \|_{L^p} \,,
\end{align}
where $C=C_\varphi$ is independent of $j$.
\\
On the other hand, we observe that
\begin{align*}
\big|\varphi_j(x+te_1)-\varphi_{j}(x)\big|&=\big| \int^1_0 D\varphi_{j} (x + \tau t e_1) \cdot te_1 \, d\tau \big|
\\
&\leq t\int^1_0 \big|D\varphi_{j} (x + \tau t e_1) \big| \, d\tau = t 2^{-j} 2^{-jn} \int^1_0 \big|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big| \, d\tau \,.
\end{align*}
Therefore,
\begin{align}\label{5.5}
\| \varphi_j(.+te_1)-\varphi_{j}(.)\|_{L^1} &\leq t 2^{-j} 2^{-jn} \int^1_0 \big\|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big\|_{L^1} \, d\tau \nonumber
\\
& = t 2^{-j} \int^1_0 \|D\varphi\|_{L^1} \, d\tau = C_\varphi \, t 2^{-j} \,.
\end{align}
Combining \eqref{5.2}, \eqref{5.3} and \eqref{5.5} yields
\begin{equation}\label{5.6}
\| \psi_j * f \|_{L^p} \lesssim \min\{1,t2^{-j}\} \sum_{|l-j|\leq 1}\| \varphi_{l} * f \|_{L^p} \,,\quad j\in\mathbb{Z}\,.
\end{equation}
Now, recall that $$f(x+te_1)-f(x) = \sum_{j\in\mathbb{Z}} \psi_j * f(x)$$ in $\mathcal{S}^\prime(\mathbb{R}^n)$.
\\
With this fact noted, we deduce from \eqref{5.6} that
\begin{align*}
\int^\infty_0 \int_{\mathbb{R}^n} \frac{|f(x+te_1)-f(x)|^p}{t^{1+\alpha p}} \, dx dt &= \int_{0}^{\infty} \big\| \sum_{j\in \mathbb{Z}} \psi_j * f\big\|_{L^p}^p \frac{dt}{t^{1+\alpha p}}
\\
&\lesssim \sum_{k\in\mathbb{Z}} \int^{2^k}_{2^{k-1}} \sum_{j\in \mathbb{Z}} \min\{1,t^p 2^{-jp}\} \| \varphi_j * f\|_{L^p}^p \frac{dt}{t^{1+\alpha p}}
\\
&\lesssim \sum_{k\in\mathbb{Z}} 2^{-k\alpha p} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} \| \varphi_j * f\|_{L^p}^p
\\
&= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} 2^{-(k-j)\alpha p} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right]
\\
&= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{2^{-(k-j)\alpha p},2^{(k-j)(1-\alpha)p}\} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right]
\\
&\leq \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} 2^{-|k-j|\delta} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right],\quad \delta=\min\{\alpha p , (1-\alpha)p\}
\\
&\leq C_\delta \sum_{k\in\mathbb{Z}} \left[ 2^{-k\alpha p} \|\varphi_k * f\|_{L^p}^p \right] = C_\delta \|f\|_{\dot{B}_p^{\alpha,p}}^p \,.
\end{align*}
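In the last two steps above we interchanged the order of summation in $j$ and $k$ and used the elementary bound
\[
\sum_{k\in\mathbb{Z}} 2^{-|k-j|\delta} = 1+\frac{2\cdot 2^{-\delta}}{1-2^{-\delta}} \,,\quad j\in\mathbb{Z}\,,
\]
which is finite precisely because $\delta=\min\{\alpha p, (1-\alpha)p\}>0$, i.e., because $\alpha\in(0,1)$.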
By the same argument, we also obtain
\[ \int^\infty_0 \int_{\mathbb{R}^n} \frac{|f(x+te_k)-f(x)|^p}{t^{1+\alpha p}} \, dx dt \lesssim \|f\|_{\dot{B}_p^{\alpha,p}}^p ,\quad k=2,\dots,n \,.\]
This yields \eqref{5.1b}.
\\
For the converse, let $\{\varphi_j\}_{j\in\mathbb{Z}}$ be the sequence above. Following \cite{Grevholm}, we can construct a function $\psi$ such that
$\hat{\psi}(\xi)=1$ for $1/2 \leq|\xi|\leq 2$, $\psi_j(x)=2^{-jn}\psi(2^{-j}x)$, and
\begin{align}\label{5.8}
\sup_{t\in(2^{j-1}, 2^j)} \Big\|\mathcal{F}^{-1}\Big\{\frac{\hat{\psi}_j(\xi)}{e^{it\xi_1}-1}\Big\} \Big\|_{L^1} \leq C
\end{align}
for $j\in\mathbb{Z}$, where $C$ is independent of $j$.
\\
Observe that
\[
\widehat{\psi_j *f} (\xi)= \hat{\psi_j} (\xi) \hat{f}(\xi) = \frac{\widehat{\psi_j}(\xi)}{e^{it\xi_1}-1} \big(e^{it\xi_1}-1\big)\hat{f}(\xi) = \frac{\widehat{\psi_j}(\xi)}{e^{it\xi_1}-1} \widehat{\Delta_{te_1} f}(\xi) \,.
\]
Thus,
\[ \psi_j *f(x) = \mathcal{F}^{-1}\big\{\frac{\widehat{\psi_j}(\xi)}{e^{it\xi_1}-1}\big\} * \Delta_{te_1} f (x) ,\quad \text{in }\mathcal{S}^\prime(\mathbb{R}^n) \,. \]
By \eqref{5.8} and the last identity, applying Young's inequality yields
\begin{align}\label{5.9}
\| \psi_{j}*f \|^p_{L^p} = \big\| \mathcal{F}^{-1}\big\{\frac{\widehat{\psi_j}(\xi)}{e^{it\xi_1}-1}\big\} * \Delta_{te_1} f \big\|^p_{L^p} \leq \big\| \mathcal{F}^{-1}\big\{\frac{\widehat{\psi_j}(\xi)}{e^{it\xi_1}-1}\big\} \big\|^p_{L^1} \big\| \Delta_{te_1} f \big\|^p_{L^p} \lesssim \|\Delta_{te_1} f\|^p_{L^p}
\end{align}
for all $t\in (2^{j-1},2^j)$.
\\
On the other hand, it is clear that $\hat{\psi}(\xi) \hat{\varphi} (\xi)=\hat{\varphi} (\xi)$ since ${\rm supp}(\hat{\varphi})\subset \big\{ \frac{1}{2}< |\xi|< 2 \big\}$; hence $\varphi_j = \varphi_j * \psi_j$ for every $j\in\mathbb{Z}$.
\\
Then, it follows from \eqref{5.9} and this fact that
\begin{align*}
\| \varphi_{j} *f \|^p_{L^p} = \| \varphi_{j} * \psi_j * f \|^p_{L^p} \leq \| \varphi_{j} \|^p_{L^1} \|\psi_j * f\|^p_{L^p} \lesssim \|\Delta_{te_1} f\|^p_{L^p}
\end{align*}
for all $t\in (2^{j-1},2^j)$.
\\
This implies that
\begin{align*}
\sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \| \varphi_{j} *f \|^p_{L^p} &\lesssim \sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \fint^{2^j}_{2^{j-1}} \|\Delta_{te_1} f\|^p_{L^p} \,dt \lesssim \int^\infty_0 \|\Delta_{te_1} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}} \,.
\end{align*}
Hence,
\[ \|f\|^p_{\dot{B}^{\alpha,p}_p} \lesssim \sum_{k=1}^n \int^\infty_0 \|\Delta_{te_k} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}} \,. \]
This completes the proof of Proposition \ref{Pro-cha}.
\end{proof}
\textbf{Acknowledgement.} The research is funded by University of Economics Ho Chi Minh City, Vietnam.
\begin{thebibliography}{99}
\bibitem{Brezis1} {H. Brezis and P. Mironescu,} Gagliardo--Nirenberg inequalities and non-inequalities: The full story, Ann. I. H. Poincar\'e-AN {\bf 35} (2018), 1355-1376.
\bibitem{Brezis2} {H. Brezis and P. Mironescu,} {Where Sobolev interacts with Gagliardo--Nirenberg}, Jour. Funct. Anal. {\bf 277} (2019), 2839-2864.
\bibitem{Brezis3} {H. Brezis, J. Van Schaftingen and Po-Lam Yung,} {A surprising formula for Sobolev norms}, Proc. Nat. Acad. Sci. U.S.A. {\bf118}, no. 8 (2021) e2025254.
\bibitem{CDDD} {A. Cohen, W. Dahmen, I. Daubechies and R. DeVore,} {Harmonic analysis of the space BV}, Rev. Mat. Iberoam. {\bf19} (2003), 235-263.
\bibitem{CDPX} {A. Cohen, R. DeVore, P. Petrushev and H. Xu}, Nonlinear approximation and the space ${\rm BV} (\mathbb{R}^2)$, Amer. J. Math. {\bf 121} (1999), 587-628.
\bibitem{DaoLamLu1} {N. A. Dao, N. Lam and G. Lu,} Gagliardo--Nirenberg and Sobolev interpolation
inequalities on Besov spaces, Proc. Amer. Math. Soc. {\bf 150} (2022), 605-616.
\bibitem{DaoLamLu2} {N. A. Dao, N. Lam and G. Lu}, Gagliardo--Nirenberg type
inequalities on Lorentz, Marcinkiewicz, and Weak-$L^\infty$ spaces, Proc. Amer. Math. Soc. {\bf150} (2022), 2889-2900.
\bibitem{Gag} {E. Gagliardo}, Ulteriori proprietà di alcune classi di funzioni in più variabili, Ric. Mat. {\bf 8} (1959), 24-51.
\bibitem{Grevholm} B. Grevholm, On the structure of the spaces $\mathcal{L}^{p,\lambda}_k$, Math. Scand. {\bf26} (1970), 241-254.
\bibitem{Le} {M. Ledoux}, On improved Sobolev embedding theorems, Math. Res. Lett. \textbf{10} (2003), 659-669.
\bibitem{Leoni} {G. Leoni}, A First Course in Sobolev Spaces, Graduate Studies in Mathematics, Vol. 105, American Mathematical Society, Providence, Rhode Island, 2009.
\bibitem{Lu2} {G. Lu}, Polynomials, higher order Sobolev extension theorems and interpolation inequalities on weighted Folland-Stein spaces on stratified groups. Acta Math. Sin. (Engl. Ser.) 16 (2000), no. 3, 405-444.
\bibitem{LuWheeden} {G. Lu and R. Wheeden}, Simultaneous representation and approximation formulas and high-order Sobolev embedding theorems on stratified groups. Constr. Approx. 20 (2004), no. 4, 647-668.
\bibitem{MeRi2003} {Y. Meyer and T. Rivi\`ere}, A partial regularity result for a class of stationary Yang–Mills fields, Rev. Mat. Iberoamericana {\bf 19} (2003), 195-219.
\bibitem{Miyazaki} {Y. Miyazaki}, A short proof of the Gagliardo-Nirenberg inequality with {\rm BMO} term, Proc. Amer. Math. Soc. {\bf 148} (2020), 4257-4261.
\bibitem{Nir} L. Nirenberg, On elliptic partial differential equations, Ann. Scuola Norm. Sup. Pisa (3) {\bf 13} (1959), 115-162.
\bibitem{Peetre} J. Peetre, New thoughts on Besov spaces, Published by
Mathematics Department, Duke University, 1976.
\bibitem{Stein} E. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, Princeton, 1970.
\bibitem{Strz} {P. Strzelecki}, {Gagliardo-Nirenberg inequalities with a {\rm BMO} term}, Bull. London Math. Soc. {\bf 38} (2006), 294-300.
\bibitem{Van} {J. Van Schaftingen}, {Fractional Gagliardo--Nirenberg interpolation inequality and bounded mean oscillation}, ArXiv:2208.14691.
\bibitem{Triebel} H. Triebel, Theory of function spaces II. Birkhäuser (1992).
\end{thebibliography}
\end{document}
\documentclass[12pt,reqno]{amsart}
\usepackage{txfonts}
\usepackage{amsmath, amsfonts, amssymb, amsthm, amscd, amsbsy, latexsym, amsxtra} \usepackage{fancyhdr} \usepackage[usenames,dvipsnames,svgnames,x11names,hyperref]{xcolor} \usepackage{geometry} \usepackage{graphicx} \usepackage[pagebackref]{hyperref} \setcounter{MaxMatrixCols}{30}
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}} \newcommand{\floor}[1]{\lfloor #1 \rfloor}
\hypersetup{colorlinks,breaklinks,
linkcolor={Fuchsia},
citecolor={ForestGreen},
urlcolor={NavyBlue}} \geometry{
a4paper,
total={8.5in,11.5in},
left=1in,
right=1in,
top=1in,
bottom=1in, } \newtheorem{theorem}{Theorem}[section] \theoremstyle{plain} \newtheorem{acknowledgement}{Acknowledgement}[section] \newtheorem{algorithm}{Algorithm}[section] \newtheorem{axiom}{Axiom}[section] \newtheorem{case}{Case}[section] \newtheorem{claim}{Claim}[section] \newtheorem{conclusion}{Conclusion}[section] \newtheorem{condition}{Condition}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{criterion}{Criterion}[section] \newtheorem{definition}{Definition}[section] \newtheorem{example}{Example}[section] \newtheorem{exercise}{Exercise}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{notation}{Notation}[section] \newtheorem{problem}{Problem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{remark}{Remark}[section] \newtheorem{solution}{Solution}[section] \newtheorem{summary}{Summary}[section] \numberwithin{equation}{section} \allowdisplaybreaks \def\Xint#1{\mathchoice
{\XXint\displaystyle\textstyle{#1}} {\XXint\textstyle\scriptstyle{#1}} {\XXint\scriptstyle\scriptscriptstyle{#1}} {\XXint\scriptscriptstyle\scriptscriptstyle{#1}} \!\int} \def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$ }
\vcenter{\hbox{$#2#3$ }}\kern-.6\wd0}} \def\Xint={\Xint=} \def\Xint-{\Xint-} \begin{document}
\title[]{Gagliardo--Nirenberg type inequalities using Fractional Sobolev spaces and Besov spaces}
\author{Nguyen Anh Dao}
\address{Nguyen Anh Dao: Institute of Applied Mathematics, University of Economics Ho Chi Minh City, Viet Nam}
\email{[email protected]}
\tableofcontents
\date{\today}
\begin{abstract} Our main purpose is to establish Gagliardo--Nirenberg type inequalities using fractional homogeneous Sobolev spaces and homogeneous Besov spaces. In particular, we extend some of the recent results obtained in \cite{Brezis1, Brezis2, Brezis3, DaoLamLu1, Miyazaki, Van}.
\end{abstract}
\subjclass[2010]{Primary 46E35; Secondary 46B70.}
\keywords{Gagliardo--Nirenberg's inequality, Besov spaces, maximal function.\\}
\maketitle
\section{Introduction}
In this paper, we are interested in
the following Gagliardo--Nirenberg inequality:
\\
For every $0\leq \alpha_1<\alpha_2$, and for $1\leq p_1, p_2, q \leq \infty$, there holds
\begin{equation}\label{-10}
\|f\|_{\dot{W}^{\alpha_1, p_1}} \lesssim \| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{q} } \|f \|^\frac{\alpha_1}{\alpha_2}_{\dot{W}^{\alpha_2, p_2}}
\,,
\end{equation}
where $$\frac{1}{p_1} = \frac{1}{q} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2} \frac{\alpha_1}{\alpha_2} \,,$$
and $\dot{W}^{\alpha,p}(\mathbb{R}^n)$ denotes the homogeneous Sobolev space (see the definition in Section 2).
\\
It is known that inequalities of this type play an important role in the analysis of PDEs. When $\alpha_i$, $i=1,2$, are nonnegative integers, \eqref{-10} was obtained independently by Gagliardo \cite{Gag} and Nirenberg \cite{Nir}.
Since then, inequalities of this type have been studied by many authors; see \cite{Brezis1, Brezis2, Brezis3, CDDD,DaoLamLu1,DaoLamLu2,Le, LuWheeden,Lu2, Miyazaki,MeRi2003, Van} and the references cited therein.
\\
The case $q=\infty$ can be considered as a limiting case of \eqref{-10}, i.e.,
\begin{equation}\label{-12}
\|\nabla^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{L^\infty} \|\nabla^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,, \quad \forall f\in L^\infty(\mathbb{R}^n)\cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)\,,
\end{equation}
with $p_1=\frac{p_2\alpha_2}{\alpha_1}$.
Obviously, this inequality fails if $\alpha_1=0$. \\
A partial improvement of \eqref{-12} in terms of the {\rm BMO} space was obtained by Meyer and Rivi\`ere \cite{MeRi2003}:
\begin{equation}\label{-13}
\| \nabla f\|^2_{L^{4}} \lesssim \|f\|_{ \rm{BMO} } \| \nabla^2 f\|_{ L^2 } \,,
\end{equation}
for all $f\in {\rm BMO}(\mathbb{R}^n) \cap W^{2,2}(\mathbb{R}^n)$. Thanks to \eqref{-13}, the authors proved a regularity result for a class of stationary Yang--Mills fields in high dimension.
\\
After that, \eqref{-13} was extended to higher-order derivatives by the authors in \cite{Strz,Miyazaki}. Precisely, there holds
\begin{equation}\label{-14}
\|\nabla^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{{\rm BMO}} \|\nabla^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,,
\end{equation}
for all $f\in {\rm BMO}(\mathbb{R}^n) \cap W^{\alpha_2,p_2}(\mathbb{R}^n)$, $p_2>1$.
\\
Recently, the author and his collaborators \cite{DaoLamLu1} improved \eqref{-14} by means of homogeneous Besov spaces. For convenience, we recall the result here.
\begin{theorem}[see Theorem 1.2, \cite{DaoLamLu1}] \label{TheC} \sl
Let $m, k$ be integers with $1\leq k<m$. For every $s\geq 0$, let $f \in \mathcal{S}'(\mathbb{R}^n)$ be such that
$ D^m f\in L^{p}(\mathbb{R}^n)$, $1\leq p<\infty$; and $f\in\dot{B}^{-s}(\mathbb{R}^n)$. Then, we have $D^k f\in L^r(\mathbb{R}^n)$, $r=p \left( \frac{m+s}{k+s} \right)$, and
\begin{equation}\label{-15}
\|D^k f\|_{L^r} \lesssim \|f\|^{\frac{m-k}{m+s}}_{\dot{B}^{-s}}
\left\|D^m f \right\|^\frac{k+s}{m+s}_{L^p} \,,
\end{equation}
where we denote $\dot{B}^{\sigma} = \dot{B}^{\sigma,\infty}_{\infty}$, $\sigma\in\mathbb{R}$ (see the definition of Besov spaces in Section 2).
\end{theorem}
\begin{remark}
Obviously, \eqref{-15} is stronger than \eqref{-14} when $s=0$ since ${\rm BMO}(\mathbb{R}^n) \hookrightarrow \dot{B}^{0}(\mathbb{R}^n)$. We emphasize that \eqref{-15} is still true for $k=0$ when $s>0$.
\end{remark}
We would like to mention that
in studying the space ${\rm BV}(\mathbb{R}^2)$, A. Cohen et al., \cite{CDPX}
proved \eqref{-15} for the case $k=0, m=p=1, s=n-1, r=\frac{n}{n-1}$ by using wavelet decompositions (see \cite{Le} for the case $k=0, m=1, p\geq 1, r=p\big(\frac{1+s}{s}\big)$, with $s>0$).
\\
Inequality \eqref{-10} in terms of fractional Sobolev spaces has been investigated by the authors in \cite{Brezis1, Brezis2, Brezis3,Van} and the references therein. Surprisingly, there is a borderline for the limiting case of the Gagliardo--Nirenberg type inequality. That is
\begin{align}\label{-16}
\|f\|_{W^{\alpha_1,p_1}}\lesssim \|f\|^\theta_{W^{\alpha,p}} \|f\|^{1-\theta}_{W^{\alpha_2,p_2}} \,,
\end{align}
with \[ \alpha_1=\theta \alpha +(1-\theta)\alpha_2,\, \frac{1}{p_1}=\frac{\theta}{p}+\frac{1-\theta}{p_2},\, \text{and } \theta\in(0,1) \,.\]
In \cite{Brezis1}, Brezis--Mironescu proved that
\eqref{-16} holds if and only if
\begin{equation}\label{special-cond} \alpha-\frac{1}{p}< \alpha_2-\frac{1}{p_2} \,.\end{equation}
As a consequence of this result,
the following inequality
\[
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{L^\infty} \|\nabla f\|^{\alpha_1}_{L^1}
\]
fails whenever $0<\alpha_1<1$, where $p_1=\frac{1}{\alpha_1}$.
\\
The limiting case of \eqref{-16} reads as:
\begin{equation}\label{-17}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|_{L^\infty}\|f\|_{\dot{W}^{\alpha_2,p_2}} \,,
\end{equation}
where $\alpha_1<\alpha_2$, and $\alpha_1 p_1=\alpha_2 p_2$.
\\
When $\alpha_2<1$, Brezis--Mironescu improved \eqref{-17} by means of ${\rm BMO}(\mathbb{R}^n)$ using the Littlewood--Paley decomposition. Very recently, Van Schaftingen \cite{Van} studied \eqref{-17} for $\alpha_2=1$. Precisely, he proved that
\begin{equation}\label{-20}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{{\rm BMO}}
\|D f\|^{\alpha_1}_{L^{p_2}}
\end{equation}
where $0<\alpha_1<1$, $p_1\alpha_1=p_2$, $p_2>1$.
\\
Inspired by the above results, we would like to study \eqref{-10} by means of fractional Sobolev spaces and Besov spaces. Moreover, we also improve the limiting cases \eqref{-17}, \eqref{-20} in terms of $\dot{B}^0(\mathbb{R}^n)$.
\subsection*{Main result}
Our first result is to improve \eqref{-10} to fractional Sobolev spaces, and homogeneous Besov spaces.
\begin{theorem}\label{Mainthe} Let $\sigma>0$, and $0\leq \alpha_1<\alpha_2<\infty$. Let $1\leq p_1, p_2 \leq \infty$ be such that $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, and $p_2(\alpha_2+\sigma)>1$. If $f\in \dot{B}^{-\sigma}(\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. Moreover, there is a positive constant $C=C(n,\alpha_1,\alpha_2,p_2, \sigma)$ such that
\begin{equation}\label{-3}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}
\,.
\end{equation}
\end{theorem}
\begin{remark} Note that
\eqref{-3} is not true for the limiting case $\sigma= \alpha_1=0, p_1=\infty$, even if \eqref{special-cond} holds, i.e., $\alpha_2-\frac{1}{p_2}>0$. Indeed, if it were the case, then \eqref{-3} would become
\[
\|f\|_{L^{\infty}} \lesssim \|f\|_{\dot{B}^{0}} \,.
\]
Obviously,
this inequality cannot hold, since the embedding
$L^\infty(\mathbb{R}^n)\hookrightarrow {\rm BMO}(\mathbb{R}^n)\hookrightarrow \dot{B}^0(\mathbb{R}^n)$ is strict.
\end{remark}
However, if $\alpha_1$ is positive, then
\eqref{-3} holds true with $\sigma=0$, as stated in the following theorem.
\begin{theorem}\label{Mainthe1} Let
$\alpha_2> \alpha_1>0$, and let $1\leq p_1, p_2\leq \infty$ be such that $p_1=\frac{\alpha_2 p_2}{\alpha_1}$, and $\alpha_2 p_2>1$. If $f\in \dot{B}^{0}(\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. Moreover, we have
\begin{equation}\label{4.1}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}}
\,.
\end{equation}
\end{theorem}
Our paper is organized as follows. We provide the definitions of the fractional Sobolev spaces and the homogeneous Besov spaces in the next section. Section 3 is devoted to the proofs of Theorems \ref{Mainthe}, \ref{Mainthe1}. Moreover, we also obtain the homogeneous version of \eqref{-16} with an elementary proof from Lemma \ref{Lem11}.
Finally, we give a characterization of the fractional homogeneous Sobolev spaces via the homogeneous Besov spaces in the last section.
\section{Definitions and preliminary results}
\subsection{Fractional Sobolev spaces}
\begin{definition}\label{Def-frac-Sob} For any $0<\alpha<1$, and for $1\leq p<\infty$,
we denote by $\dot{W}^{\alpha,p}(\mathbb{R}^n)$ (resp. $W^{\alpha,p}(\mathbb{R}^n)$) the homogeneous fractional
Sobolev space (resp. the inhomogeneous fractional Sobolev space), endowed with the semi-norm:
\[ \|f\|_{\dot{W}^{\alpha,p}} =
\left(\displaystyle \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{|f(x+h)-f(x)|^p}{|h|^{n+\alpha p}} dhdx \right)^{\frac{1}{p}} \,,
\]
and the norm
\[
\|f\|_{W^{\alpha,p}} = \left(\|f\|^p_{L^p} + \|f\|^p_{\dot{W}^{\alpha,p}} \right)^\frac{1}{p}\,.
\]
\end{definition}
When $\alpha\geq 1$, we can define the higher order fractional Sobolev space as follows:
\\
Denote by $\floor{\alpha}$ the integer part of $\alpha$. Then, we define
\[
\|f\|_{\dot{W}^{\alpha,p}} =\left\{ \begin{array}{cl}
&\|D^{\floor{\alpha}} f\|_{L^p} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+.
\vspace*{0.1in}\\
& \|D^{\floor{\alpha}}f\|_{\dot{W}^{\alpha-\floor{\alpha},p}} ,\quad\text{otherwise}\,.
\end{array}\right.
\]
In addition, we also define
\[
\|f\|_{W^{\alpha,p}} =\left\{ \begin{array}{cl}
&\left( \displaystyle\sum_{j=0}^{\alpha} \|D^{j} f\|^p_{L^p} \right)^{\frac{1}{p}} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+.
\\
& \left( \|f\|^p_{W^{\floor{\alpha},p}} + \|D^{\floor{\alpha}} f\|^p_{\dot{W}^{\alpha-\floor{\alpha},p}} \right)^{\frac{1}{p}} ,\quad\text{otherwise}\,.
\end{array}\right.
\]
\subsection*{Notation} Throughout the paper, we adopt the notation
$\dot{W}^{\alpha,\infty}(\mathbb{R}^n)=\dot{C}^{\alpha}(\mathbb{R}^n)$, $\alpha\in(0,1)$; and $\dot{W}^{0,p}(\mathbb{R}^n)=L^p(\mathbb{R}^n)$, $1\leq p\leq \infty.$
\\
In addition, we always denote by $C$ a constant which may change
from line to line. Moreover, the notation $C(\alpha, p,n)$ means that $C$ depends only
on $\alpha, p,n$.
Next, we write $A \lesssim B$ if there exists a constant $c > 0$ such
that $A \leq cB$, and we write $A \approx B$ if and only if $A \lesssim B \lesssim A$.
\subsection{Besov spaces}
To define the homogeneous Besov spaces, we recall the Littlewood--Paley decomposition (see \cite{Triebel}). Let $\phi_j$ be the inverse Fourier transform of the $j$-th component of the dyadic decomposition, i.e., $\hat{\phi}_j(\xi)=\hat{\phi}(2^{-j}\xi)$, where
$$\sum_{j\in \mathbb{Z}} \hat{\phi}(2^{-j} \xi ) =1 \quad \text{for all } \xi\neq 0 \,,$$
and
$ {\rm supp}( \hat{\phi})\subset \left\{ \frac{1}{2} < |\xi| < 2 \right\}$.
\\
Next, let us put
$$ \mathcal{Z}(\mathbb{R}^n) = \left\{ f \in \mathcal{S}(\mathbb{R}^n), D^\alpha \hat{f}(0) = 0,\, \forall\alpha \in \mathbb{N}^n, \text{ multi-index} \right\} \,,$$
where $\mathcal{S}(\mathbb{R}^n)$ is the Schwartz space as usual.
\begin{definition}\label{Def1} For every $s\in\mathbb{R}$, and for every $1\leq p, q\leq \infty$, the homogeneous Besov space is defined by
$$\dot{B}^s_{p,q} =\left\{ f\in \mathcal{Z}'(\mathbb{R}^n): \|f\|_{\dot{B}^s_{p,q}} <\infty \right\} \,,$$
with
$$
\|f\|_{\dot{B}^s_{p,q}} = \left\{ \begin{array}{cl}
&\left( \displaystyle \sum_{j\in\mathbb{Z}}
2^{jsq} \|\phi_j * f\|^q_{L^p} \right)^\frac{1}{q}\,, \text{ if }\, 1\leq q<\infty,
\\
& \displaystyle\sup_{ j \in\mathbb{Z} } \left\{ 2^{js} \|\phi_j * f\|_{L^p} \right\} \,, \text{ if }\, q=\infty \,.
\end{array} \right. $$
When $p=q=\infty$, we denote $\dot{B}^s_{\infty,\infty}=\dot{B}^s$ for short.
\end{definition}
The following characterization of $\dot{B}^{s}_{\infty,\infty}$ is useful for our proof below.
\begin{theorem}[see Theorem 4, p. 164, \cite{Peetre}]\label{ThePeetre} Let $\big\{\varphi_\varepsilon\big\}_\varepsilon$ be a sequence of functions such that
\[\left\{
\begin{array}{cl}
&{\rm supp}(\varphi_\varepsilon)\subset B(0,\varepsilon) , \quad \big\{ \frac{1}{2\varepsilon}\leq |\xi|\leq \frac{2}{\varepsilon} \big\}\subset \big\{\widehat{\varphi_\varepsilon}(\xi) \not=0 \big\} ,
\vspace*{0.1in}\\
&\int_{\mathbb{R}^n} x^\gamma \varphi_\varepsilon (x)\, dx =0 ,\, \text{for all multi-indexes }\, |\gamma|<k, \text{ where $k$ is a given integer},
\vspace*{0.1in}\\
& \big|D^\gamma \varphi_\varepsilon(x)\big| \leq C \varepsilon^{-(n+|\gamma|)}\, \text{ for every multi-index } \gamma\,.
\end{array}
\right.\]
Assume $s<k$. Then, we have
\[
f\in \dot{B}^s(\mathbb{R}^n) \Leftrightarrow
\sup_{\varepsilon>0}
\left\{\varepsilon^{-s} \|\varphi_\varepsilon * f\|_{L^\infty} \right\} < \infty \,.
\]
\end{theorem}
We end this section by recalling the following result (see \cite{DaoLamLu1}).
\begin{proposition}[Lifting operator]\label{Pro1}
Let $s\in\mathbb{R}$, and let $\gamma$ be a multi-index. Then,
$\partial^\gamma$ maps $\dot{B}^s(\mathbb{R}^n) \rightarrow \dot{B}^{s-|\gamma|}(\mathbb{R}^n)$.
\end{proposition}
\section{Proof of the Theorems}
\subsection{Proof of Theorem \ref{Mainthe}}
We first prove Theorem \ref{Mainthe} for the case $0\leq \alpha_1<\alpha_2\leq 1$. After that, we consider $\alpha_i\geq 1$, $i=1,2$.
\\
{\bf i) Step 1: $0\leq \alpha_1<\alpha_2 \leq 1$.} We divide our argument into the following cases.
\\
{\bf a) The case $p_1=p_2=\infty$, $0< \alpha_1<\alpha_2 <1$.} Then, \eqref{-3} becomes
\begin{equation}\label{1.0}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{C}^{\alpha_2}} \,.
\end{equation}
To prove \eqref{1.0}, we use a characterization of homogeneous Besov space $\dot{B}^{s}$ in Theorem \ref{ThePeetre}, and the fact that $\dot{B}^s(\mathbb{R}^n)$ coincides with $\dot{C}^s(\mathbb{R}^n)$, $s\in(0,1)$ (see \cite{Grevholm}).
\\
Then, let us recall the sequence $\{\varphi_\varepsilon\}_{\varepsilon>0}$ in Theorem \ref{ThePeetre}.
\\
For $\delta>0$, we write
\begin{align}\label{2.-1}
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} &=\varepsilon^{\alpha_2-\alpha_1} \varepsilon^{-\alpha_2}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon<\delta\big\}}+ \varepsilon^{-(\alpha_1+\sigma)} \varepsilon^{\sigma}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon\geq \delta\big\}}
\\
&\leq \delta^{\alpha_2-\alpha_1} \|f\|_{\dot{B}^{\alpha_2}}
+\delta^{-(\alpha_1+\sigma)} \|f\|_{\dot{B}^{-\sigma}} \,. \nonumber
\end{align}
Minimizing the right hand side of the indicated inequality with respect to $\delta$ yields
\begin{align*}
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty}\lesssim \|f\|_{\dot{B}^{-\sigma}}^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}\,.
\end{align*}
Since the last inequality holds for every $\varepsilon>0$, then we obtain \eqref{1.0}.
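For the reader's convenience, we record the elementary optimization used here and repeatedly below: for $A, B>0$ and $a, b>0$, a direct computation gives
\[
\min_{\delta>0}\left( \delta^{a} A + \delta^{-b} B \right) = c(a,b)\, A^{\frac{b}{a+b}} B^{\frac{a}{a+b}} \,, \quad \text{attained at } \delta_{*} = \left( \frac{bB}{aA} \right)^{\frac{1}{a+b}} .
\]
Applying this with $a=\alpha_2-\alpha_1$, $b=\alpha_1+\sigma$, $A=\|f\|_{\dot{B}^{\alpha_2}}$, and $B=\|f\|_{\dot{B}^{-\sigma}}$ gives precisely the exponents $\frac{\alpha_1+\sigma}{\alpha_2+\sigma}$ and $\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}$ above.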
\begin{remark}\label{Rem2} It is not difficult to observe that the above proof also applies to the two following cases:
\begin{enumerate}
\item[$\bullet$] $\alpha_1=0$, $\alpha_2<1$, $\sigma>0$. Then, we have
\begin{equation}\label{2.-3}
\|f\|_{L^\infty} \lesssim \|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\sigma}{\alpha_2+\sigma}}\,.
\end{equation}
\item[$\bullet $] $0<\alpha_1<\alpha_2<1$, $\sigma=0$. Then,
\begin{equation}\label{2.-2}
\|f\|_{\dot{B}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1}{\alpha_2}}\,.
\end{equation}
\end{enumerate}
This is Theorem \ref{Mainthe1} when $p_i=\infty$, $i=1, 2$.
\end{remark}
To end part {\bf a)}, it remains to prove \eqref{-3} for the case $\alpha_2=1$. That is
\begin{equation}\label{2.-4}
\|f\|_{\dot{B}^{\alpha_1}} \lesssim \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|Df\|_{L^\infty}^{\frac{\alpha_1+\sigma}{1+\sigma}}\,.
\end{equation}
The proof is similar to the one of \eqref{1.0}. Hence, it suffices to prove that
\begin{equation}\label{2.-5}
\varepsilon^{-\alpha_1} \|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon<\delta\big\}} \leq \delta^{1-\alpha_1} \|Df\|_{L^\infty} \,. \end{equation}
Indeed, using the vanishing moments of $\varphi_\varepsilon$ and the mean value theorem yields
\begin{align*}
\big|\varphi_\varepsilon * f(x)\big| &=\big| \int_{B(0,\varepsilon)} (f(x)-f(x-y)) \varphi_\varepsilon (y)\, dy \big|
\\
&\leq \int_{B(0,\varepsilon)} \|Df\|_{L^\infty} |y| |\varphi_\varepsilon (y)|\, dy \leq \varepsilon \|\varphi_\varepsilon\|_{L^1} \|Df\|_{L^\infty} \lesssim \varepsilon \|Df\|_{L^\infty} \,.
\end{align*}
Thus, \eqref{2.-5} follows easily.
\\
By repeating the proof of \eqref{2.-1}, we obtain \eqref{2.-4}.
\\
{\bf b) The case $p_i<\infty, \,i=1,2$.} Then, the proof follows by way of the following lemmas.
\begin{lemma}\label{Lem10}
Let $0<\alpha< 1$, and $1\leq p<\infty$. For every $s>0$, if $f\in \dot{B}^{-s}(\mathbb{R}^n)\cap \dot{W}^{\alpha,p}(\mathbb{R}^n)$, then there exists a positive constant $C=C(s,\alpha,p)$ such that
\begin{equation}\label{1.1}
|f(x)| \leq C
\| f \|_{\dot{B}^{-s}}^\frac{\alpha}{s+\alpha} \big[
\mathbf{G}_{\alpha,p}(f)(x)\big]^{\frac{s}{s+\alpha}}\,, \quad \text{for } x\in\mathbb{R}^n\,,
\end{equation}
with $$\mathbf{G}_{\alpha,p}(f)(x)= \displaystyle\sup_{\varepsilon>0} \left(\fint_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{\alpha p}} dy \right)^\frac{1}{p} \,.$$
\end{lemma}
\begin{remark}\label{Rem6} When $\alpha=1$, \eqref{1.1} becomes
\begin{equation}\label{1.1b}
|f(x)|\leq C\|f\|_{\dot{B}^{-s}}^\frac{1}{s+1} \big[\mathbf{M}(|Df|)(x)\big]^{\frac{s}{s+1}}\,, \quad \text{for } x\in\mathbb{R}^n\,.
\end{equation}
This inequality was obtained by the authors in \cite{DaoLamLu1}. As a result, we get
\begin{equation}\label{1.1a}
\|f\|_{L^{p_1}} \lesssim \| f \|_{\dot{B}^{-s}}^\frac{1}{s+1}
\|Df\|^{\frac{s}{s+1}}_{L^{p_2}}\,,
\end{equation}
with $p_1=p_2\big(\frac{s+1}{s}\big)$, $p_2\geq 1$.
\\
This is also Theorem \ref{Mainthe} when $\alpha_1=0$, $\alpha_2=1$, $s=\sigma>0$.
\end{remark}
\begin{remark}\label{Rem4} Obviously, for $1\leq p<\infty$ we have $\|\mathbf{G}_{\alpha,p}(f)\|_{L^p}\lesssim \|f\|_{\dot{W}^{\alpha,p}}$, and $\mathbf{G}_{\alpha,1}(f)(x)\leq \mathbf{G}_{\alpha,p}(f)(x)$ for $x\in\mathbb{R}^n$.
\\
Next, applying Lemma \ref{Lem10} with $s=\sigma, \alpha=\alpha_2$, $p=p_2$, and taking the $L^{p_1}$-norm of \eqref{1.1} yields
\[
\|f\|_{L^{p_1}} \lesssim
\|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \left( \int
\big|\mathbf{G}_{\alpha_2,p_2}(f)(x)\big|^{\frac{\sigma p_1}{\sigma+\alpha_2}} \, dx \right)^{1/p_1} \leq \|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \|f\|^{\frac{\sigma}{\sigma+\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\]
with $p_1= p_2 \big(\frac{\sigma+\alpha_2}{\sigma}\big)$.
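Note that in the middle step above we used the identity
\[
\frac{\sigma p_1}{\sigma+\alpha_2} = p_2 \,,
\]
which is immediate from $p_1= p_2 \big(\frac{\sigma+\alpha_2}{\sigma}\big)$.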
\\
Hence, we obtain Theorem \ref{Mainthe} for the case $\alpha_1=0$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{Lem10}] Let us recall the sequence $\{\varphi_\varepsilon\}_{\varepsilon>0}$ above. Then, we have from the triangle inequality that
\begin{align*}
|f(x)|
\leq | \varphi_\varepsilon * f(x)| + | f(x)-\varphi_\varepsilon * f(x)| =: \mathbf{I}_1+ \mathbf{I}_2 \,.
\end{align*}
We first estimate $\mathbf{I}_1$ in terms of $\dot{B}^{-s}$. Thanks to Theorem \ref{ThePeetre}, we get
\begin{align}\label{1.2}
\mathbf{I}_1 = \varepsilon^{-s} \varepsilon^{s} | \varphi_\varepsilon * f(x)| \leq C \varepsilon^{-s} \| f \|_{\dot{B}^{-s}} \,.
\end{align}
For $\mathbf{I}_2$, applying H\"older's inequality yields
\begin{align}\label{1.3}
\mathbf{I}_2 &\leq \int_{B(0,\varepsilon)} |f(x)-f(x-y)| |\varphi_\varepsilon (y)| \, dy = \varepsilon^{\frac{n}{p}+\alpha} \int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|}{\varepsilon^{\frac{n}{p}+\alpha}} |\varphi_\varepsilon (y)| \, dy \nonumber
\\
&\leq \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{p'}} \left(\int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{n+\alpha p}} dy \right)^\frac{1}{p} \nonumber
\\
&\lesssim \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{\infty}} \big|B(0,\varepsilon)\big|^{\frac{1}{p'}} \mathbf{G}_{\alpha,p}(f)(x) \lesssim \varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(f)(x) \,.
\end{align}
Note that the last inequality follows by using the fact $\|\varphi_\varepsilon\|_{L^{\infty}}\leq C \varepsilon^{-n}$.
\\
By combining \eqref{1.2} and \eqref{1.3}, we obtain
\[|f(x)|\leq C\left(\varepsilon^{-s} \| f \|_{\dot{B}^{-s}} +\varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(f)(x)\right) \,. \]
Since the indicated inequality holds for every $\varepsilon>0$, minimizing its right hand side with respect to $\varepsilon$ yields the desired result. \\
Hence, we complete the proof of Lemma \ref{Lem10}.
\end{proof}
Next, we have the following lemma.
\begin{lemma}\label{Lem11}
Let $0<\alpha_1<\alpha_2< 1$. Let $1\leq p_1, p_2 <\infty$, and $r>1$ be such that
\begin{equation}\label{1.6}
\frac{1}{p_1}= \frac{1}{r} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2}\frac{\alpha_1}{\alpha_2} \,.
\end{equation}
If $f\in L^r(\mathbb{R}^n)\cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\mathbb{R}^n)$. In addition, there exists a constant $C=C(\alpha_1,\alpha_2,p_1,p_2,n)>0$ such that
\begin{equation}\label{1.8}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} }
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lem11}]
For any set $\Omega$ in $\mathbb{R}^n$, let us denote $\fint_\Omega f(x) \, dx = \frac{1}{|\Omega|} \int_{\Omega} f(x) \, dx$.
\\
For any $x, z\in\mathbb{R}^n$, we have from the triangle inequality and change of variables that
\begin{align*}
\big|f(x+z)-f(x)\big|&\leq \big|f(x+z) - \fint_{B(x,|z|)} f(y)\,dy \big|+\big|f(x) - \fint_{B(x,|z|)} f(y)\,dy \big|
\\
&\leq \fint_{B(x,|z|)} \big|f(x+z) -f(y) \big| \, dy+ \fint_{B(x,|z|)} \big|f(x)-f(y) \big| \, dy
\\
&\leq C(n) \left( \fint_{B(0,2|z|)} \big|f(x+z) -f(x+z+y) \big| \, dy+ \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy \right)\,.
\end{align*}
With the last inequality noted, and by using a change of variables, we get
\begin{align}\label{1.10}
\int\int \frac{|f(x+z)-f(x)|^{p_1}}{|z|^{n+\alpha_1 p_1}} dzdx \lesssim \int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}}\,.
\end{align}
Next, for every $p\geq 1$ we show that
\begin{align}\label{1.13}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p}(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}}.
\end{align}
Thanks to Remark \ref{Rem4}, it suffices to show that \eqref{1.13} holds for $p=1$.
\\
Indeed, we have
\begin{align}\label{1.14}
\int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|^{\alpha_2}} \, dy\right)^{p_1} \frac{|z|^{\alpha_2 p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber \\
&\lesssim \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1} \int_{\{|z|<t\}} \frac{1}{|z|^{n+(\alpha_1-\alpha_2) p_1}} dz \nonumber
\\
&\lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1}\,.
\end{align}
On the other hand, it is not difficult to observe that
\begin{align}\label{1.15}
\int_{|z|\geq t} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&\lesssim \big[\mathbf{M}(f)(x)\big]^{p_1} \left( \int_{|z|\geq t}\frac{dz}{|z|^{n+\alpha_1 p_1}} \right) \nonumber
\\
&\lesssim t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align}
From \eqref{1.14} and \eqref{1.15}, we obtain
\begin{align*}
\int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align*}
Minimizing the right hand side of the last inequality yields \eqref{1.13}.
\\
Then, it follows from \eqref{1.13} that
\[
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \int \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p_2}(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}} dx\,.
\]
Note that $\alpha_2 p_2>\alpha_1 p_1$, and $r=\frac{p_1p_2(\alpha_2-\alpha_1)}{\alpha_2p_2-\alpha_1p_1}$, see \eqref{1.6}.
Then, applying H\"older's inequality with $\big((\frac{\alpha_2p_2}{\alpha_1p_1})^\prime, \frac{\alpha_2p_2}{\alpha_1p_1}\big)$ to the right hand side of the last inequaltiy yields
\begin{align*}
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}}
&\lesssim \big\|\mathbf{M}(f)\big\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|\mathbf{G}_{\alpha_2,p_2}\|^{\frac{\alpha_1p_1}{\alpha_2}}_{L^{p_2}} \,.
\end{align*}
Thanks to Remark \ref{Rem4}, and by the fact that $\mathbf{M}$ maps $L^r(\mathbb{R}^n)$ into $L^r(\mathbb{R}^n)$ for $r>1$, we deduce from the last inequality that
\begin{align}\label{1.18}
\int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \|f\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|f\|^{\frac{\alpha_1 p_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align}
Combining \eqref{1.10} and \eqref{1.18} yields \eqref{1.8}.
\\
Hence, we obtain Lemma \ref{Lem11}.
\end{proof}
Now, we can apply Lemma \ref{Lem10} and Lemma \ref{Lem11} successively to get Theorem \ref{Mainthe} for the case $0<\alpha_2<1$. Indeed, we apply \eqref{1.1} with $s=\sigma$, $\alpha=\alpha_2$, $p=p_2$. Then,
\begin{align}\label{1.20}
\|f\|_{L^q} & \lesssim
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}^{\frac{\sigma}{\alpha_2+\sigma}}_{\alpha_2,p_2}\big\|_{L^q} =
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}_{\alpha_2,p_2}\big\|^{\frac{\sigma}{\alpha_2+\sigma}}_{L^{p_2}} \leq
\|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\big(\frac{\alpha_2+\sigma}{\sigma}\big)$.
\\
Since $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, then it follows from \eqref{1.6} that $r=q>1$.
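Indeed, the identity $r=q$ can be checked directly from \eqref{1.6}:
\[
\frac{1}{r}\left(1-\frac{\alpha_1}{\alpha_2}\right) = \frac{1}{p_1}-\frac{1}{p_2}\,\frac{\alpha_1}{\alpha_2} = \frac{\alpha_1+\sigma}{p_2(\alpha_2+\sigma)}-\frac{\alpha_1}{\alpha_2 p_2} = \frac{\sigma(\alpha_2-\alpha_1)}{\alpha_2 p_2 (\alpha_2+\sigma)} \,,
\]
so that $\frac{1}{r}=\frac{\sigma}{p_2(\alpha_2+\sigma)}$, i.e., $r=p_2\big(\frac{\alpha_2+\sigma}{\sigma}\big)=q$.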
\\
Next, applying Lemma \ref{Lem11} yields
\begin{equation}\label{1.19}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim
\| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} }
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\,.
\end{equation}
Hence, we obtain Theorem \ref{Mainthe} for the case $0\leq \alpha_1< \alpha_2<1$, $p_i<\infty$, $i=1,2$.
\\
To end {\bf Step 1}, it remains to study the case $\alpha_2=1$, i.e.,
\begin{equation} \label{1.22a} \|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|Df\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}}
\,.
\end{equation}
This can be done if we show that
\begin{equation}\label{1.21}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C
\| f \|^{1-\alpha_1}_{ L^{r} }
\|Df\|^{\alpha_1}_{L^{p_2}} \,,
\end{equation}
with $1\leq r<\infty$, $\frac{1}{p_1}= \frac{1-\alpha_1}{r} + \frac{\alpha_1}{p_2}$.
\\
Indeed, a combination of \eqref{1.21} and \eqref{1.1a} implies that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{1-\alpha_1}_{L^r} \|Df\|^{\alpha_1}_{L^{p_2}} \lesssim \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|D f\|^{\frac{\sigma(1-\alpha_1)}{1+\sigma}}_{L^{p_2}} \|Df\|^{\alpha_1}_{L^{p_2}} = \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}}
\|D f\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}}\,.
\end{align*}
Note that $p_1=p_2\big( \frac{1+\sigma}{\alpha_1+\sigma}\big)$, and $r=p_2\big( \frac{1+\sigma}{\sigma}\big)$.
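These values follow from the relation on $r$ in \eqref{1.21}: since $p_1=p_2\big( \frac{1+\sigma}{\alpha_1+\sigma}\big)$, we have
\[
\frac{1-\alpha_1}{r} = \frac{1}{p_1}-\frac{\alpha_1}{p_2} = \frac{\alpha_1+\sigma}{p_2(1+\sigma)}-\frac{\alpha_1}{p_2} = \frac{\sigma(1-\alpha_1)}{p_2(1+\sigma)} \,,
\]
which gives $r=p_2\big(\frac{1+\sigma}{\sigma}\big)$; moreover, $\frac{\sigma(1-\alpha_1)}{1+\sigma}+\alpha_1=\frac{\alpha_1+\sigma}{1+\sigma}$, which is the exponent of $\|Df\|_{L^{p_2}}$ obtained above.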
\\
Hence, we obtain Theorem \ref{Mainthe} when $\alpha_2=1$.
\\
Now, it remains to prove \eqref{1.21}. We note that \eqref{1.21} was proved for $p_2=1$ (see, e.g., \cite{Brezis3, CDPX}). In fact, one can modify the proofs in \cite{Brezis3, CDPX} in order to obtain \eqref{1.21} for the case $1<p_2<\infty$. However, for completeness, we give the proof of \eqref{1.21} for $1<p_2<\infty$.
\\
To obtain the result, we prove a version of \eqref{1.13} in terms of $\mathbf{M}(|Df|)(x)$ instead of $\mathbf{G}_{1,p}(x)$. Precisely, we show that
\begin{align}\label{1.23}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1}
\end{align}
for $x\in\mathbb{R}^n$.
\\
Indeed, it follows from the mean value theorem and a change of variables that
\begin{align*}
\fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy &\lesssim \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|y|} \, dy
\\
&= \fint_{B(0,2|z|)} \frac{\big|\int^1_0 D f(x+\tau y) \cdot y\, d\tau\big| }{|y|} \, dy
\\
&\leq \int^1_0\fint_{B(x,2\tau|z|)} | D f(\zeta) | \, d\zeta d\tau
\leq \int^1_0 \mathbf{M}(|Df|)(x) \, d\tau = \mathbf{M}(|Df|)(x) \,.
\end{align*}
Thus,
\begin{align}\label{1.24}
\int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy\right)^{p_1} \frac{|z|^{p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&\lesssim \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \int_{\{|z|<t\}} |z|^{-n+(1-\alpha_1)p_1} \, dz \nonumber
\\
&\lesssim t^{(1-\alpha_1)p_1} \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \,.
\end{align}
From \eqref{1.24} and \eqref{1.15}, we obtain
\begin{align}\label{1.25}
\int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(1-\alpha_1) p_1} \left[ \mathbf{M}(|Df|)(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,.
\end{align}
Hence, \eqref{1.23} follows by minimizing the right hand side of \eqref{1.25} with respect to $t$.
If $p_2>1$, then we apply H\"older's inequality in \eqref{1.23} in order to get
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz dx}{|z|^{n+\alpha_1 p_1}}
\\
&\lesssim \int\big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1} dx
\\
&\leq \|\mathbf{M}(f)\|^{(1-\alpha_1)p_1}_{L^r} \|\mathbf{M}(|Df|)\|^{\alpha_1 p_1}_{L^{p_2}}
\\
&\lesssim \|f\|^{(1-\alpha_1)p_1}_{L^r} \|Df\|^{\alpha_1 p_1}_{L^{p_2}}\,,
\end{align*}
where $r>1$ satisfies $\frac{1}{p_1}= \frac{1-\alpha_1}{r} + \frac{\alpha_1}{p_2}$. Note that the last inequality follows from the $L^{p}$-boundedness of $\mathbf{M}$, $p>1$.
Thus, we get \eqref{1.21}.
\\
This puts an end to the proof of {\bf Step 1}.
\\
{\bf ii) Step 2.}
Now, we can prove Theorem \ref{Mainthe} for the case $\alpha_1\geq 1$.
To begin with, let us write $\alpha_i=\floor{\alpha_i}+s_i$, $i=1,2$. Then, we divide the proof into the following cases.
\\
{\bf a) The case $\floor{\alpha_2}=\floor{\alpha_1}$:}
By applying Theorem \ref{Mainthe} to $D^{\floor{\alpha_1}} f$, $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$; and by Proposition \ref{Pro1}, we obtain
\begin{align*}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\| D^{\floor{\alpha_1}} f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} \\
&\lesssim \big\| f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}}
\big\| D^{\floor{\alpha_2}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} = \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}}
\big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align*}
with $p_1=p_2\big(\frac{s_2+\sigma_{{\rm new}}}{s_1+\sigma_{{\rm new}}}\big)=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$.
\\
Hence, we get the conclusion for this case.
\\
{\bf b) The case $\floor{\alpha_2}>\floor{\alpha_1}$:} If $s_2>0$, then we can apply Theorem \ref{Mainthe} to $D^{\floor{\alpha_2}} f$, $\sigma_{{\rm new}}=\sigma+\floor{\alpha_2}$. Therefore,
\begin{align}\label{1.30}
\big\| D^{\floor{\alpha_2}} f \big\|_{L^q} \lesssim \big\|D^{\floor{\alpha_2}} f \big\|^{\frac{s_2}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{B}^{-(\sigma+\floor{\alpha_2})}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\sigma+\floor{\alpha_2}}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{W}^{s_2,p_2}} \lesssim \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big)$. Again, the last inequality follows from the lifting property in Proposition \ref{Pro1}.
\\
Next, applying Theorem \ref{Mainthe} to $D^{\floor{\alpha_1}} f$, $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$ yields
\begin{align}\label{1.31}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \nonumber
\\
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \,,
\end{align}
with $q_1= p_1\big(\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}\big)$.
\\
If $\floor{\alpha_2}=\floor{\alpha_1}+1$, then observe that $q=q_1$. Thus, we deduce from \eqref{1.30} and \eqref{1.31} that
\begin{align*}
\big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}}
\left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}= \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
This yields \eqref{-3}.
\\
Note that $\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}+\frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)}=\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}$ since $\floor{\alpha_2}=\floor{\alpha_1}+1$.
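For the reader's convenience, here is a sketch of this verification. Writing $a=\floor{\alpha_1}+\sigma$, so that $1+\sigma+\floor{\alpha_1}=a+1=\floor{\alpha_2}+\sigma$, $\alpha_1+\sigma=a+s_1$, and $\alpha_2+\sigma=a+1+s_2$, we compute
\begin{align*}
\frac{1-s_1}{a+1}+\frac{s_2(a+s_1)}{(a+1+s_2)(a+1)}
=\frac{(1-s_1)(a+1+s_2)+s_2(a+s_1)}{(a+1)(a+1+s_2)}
=\frac{(a+1)(1-s_1+s_2)}{(a+1)(a+1+s_2)}
=\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}\,,
\end{align*}
since $\alpha_2-\alpha_1=1+s_2-s_1$ when $\floor{\alpha_2}=\floor{\alpha_1}+1$.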
\\
If $\floor{\alpha_2}>\floor{\alpha_1}+1$, then we apply \cite[Theorem 1.2]{DaoLamLu1} with $k=\floor{\alpha_1}+1$ and $m=\floor{\alpha_2}$. Thus,
\begin{align}\label{1.32}
\big\| D^{\floor{\alpha_1}+1} f \big\|_{L^{q_1}}\lesssim \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\,,
\end{align}
with $q_2=q_1 \big(\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}\big)$.
\\
Combining \eqref{1.31} and \eqref{1.32} yields
\begin{align}\label{1.33}
\big\|f \big\|_{\dot{W}^{\alpha_1,p_1}}
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \left( \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}}
\big\|D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\right)^{\frac{\alpha_1+\sigma}{1+\floor{\alpha_1}+\sigma}} \nonumber
\\
&= \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}}
\big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}} \,.
\end{align}
Observe that $q=q_2=p_2 \big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big) $. Thus, it follows from \eqref{1.33} and \eqref{1.30} that
\begin{align*}
\big\| f \big\|_{\dot{W}^{\alpha_1,p_1}}
&\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}} \left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}
\\
&=\big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
A straightforward computation shows that
\[\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)+ \frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)} = \frac{\alpha_2-\alpha_1}{\alpha_2+\sigma} \,.\]
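In detail: set $a=\floor{\alpha_1}+\sigma$ and $b=\floor{\alpha_2}+\sigma$, so that $1+\sigma+\floor{\alpha_1}=a+1$, $\alpha_1+\sigma=a+s_1$, and $\alpha_2+\sigma=b+s_2$. The first two terms combine as
\begin{align*}
\frac{1-s_1}{a+1}+\frac{(b-a-1)(a+s_1)}{b(a+1)}=\frac{b(1-s_1)+(b-a-1)(a+s_1)}{b(a+1)}=\frac{(a+1)(b-a-s_1)}{b(a+1)}=\frac{b-a-s_1}{b}\,,
\end{align*}
and then
\begin{align*}
\frac{b-a-s_1}{b}+\frac{s_2(a+s_1)}{(b+s_2)b}=\frac{(b-a-s_1)(b+s_2)+s_2(a+s_1)}{b(b+s_2)}=\frac{b(b+s_2-a-s_1)}{b(b+s_2)}=\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}\,.
\end{align*}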
This completes the proof of Theorem \ref{Mainthe} in the case $s_2>0$.
\\
The case $s_2=0$ can be handled similarly; we leave the details to the reader.
\\
Hence, the proof of Theorem \ref{Mainthe} is complete.
\subsection{Proof of Theorem \ref{Mainthe1}} Let us first recall the notation $\alpha_i=\floor{\alpha_i}+s_i$, $i=1, 2$. We divide the proof into the following two cases.
\\
{\bf i) The case $p_1=p_2=\infty$.} If $0<\alpha_1<\alpha_2<1$, then \eqref{4.1} becomes
\begin{equation}\label{2.0}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,.
\end{equation}
Inequality \eqref{2.0} follows easily from the proof of \eqref{1.0} with $\sigma=0$; we leave the details to the reader.
\\
If $0<\alpha_1<1\leq \alpha_2$ and $\alpha_2$ is an integer, then \eqref{4.1} reads as:
\begin{equation}\label{2.0a}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,.
\end{equation}
To obtain \eqref{2.0a}, we utilize the vanishing moments of $\varphi_\varepsilon$ in Theorem \ref{ThePeetre}. In fact, let us fix an integer $k>\alpha_2$.
Then, it follows from the Taylor expansion that
\begin{align}\label{2.0c}
|\varphi_\varepsilon * f(x)| &=\left|\int\big( f(x-y)-f(x)\big) \varphi_\varepsilon(y)\, dy \right| \nonumber
\\
&=\left| \int \left(\sum_{|\gamma|< \alpha_2} \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma + \sum_{|\gamma|= \alpha_2} \frac{D^\gamma f(\zeta)}{\gamma!} (-y)^\gamma \right) \varphi_\varepsilon(y)\,dy \right|
\nonumber \\
&=\left| \int \sum_{|\gamma|= \alpha_2} \frac{D^{\gamma} f(\zeta)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\end{align}
for some $\zeta$ on the line segment joining $x$ and $x-y$. Note that $$ \int \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy=0$$ for every multi-index $\gamma$ with $|\gamma|<k$, thanks to the vanishing moments of $\varphi_\varepsilon$.
\\
Hence, we get from \eqref{2.0c} that
\[ |\varphi_\varepsilon * f(x)| \lesssim \|\nabla^{\alpha_2} f\|_{L^\infty} \int_{B(0,\varepsilon)} |y|^{\alpha_2} |\varphi_\varepsilon(y)|\,dy\lesssim \varepsilon^{\alpha_2} \|\nabla^{\alpha_2} f\|_{L^\infty}\,.\]
Inserting the last inequality into \eqref{2.-1} yields
\[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|\nabla^{\alpha_2} f\|_{L^\infty}
+\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,.
\]
By minimizing the right hand side of this inequality in $\delta$, we get
\[
\varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,.
\]
This implies \eqref{2.0a}.
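The elementary minimization used in this step is worth recording, since it recurs below: for $A, B>0$ and $0<\alpha_1<\alpha_2$, choosing $\delta=(B/A)^{1/\alpha_2}$ balances the two terms, so that
\[
\inf_{\delta>0}\left(\delta^{\alpha_2-\alpha_1} A+\delta^{-\alpha_1} B\right)\leq 2\, A^{\frac{\alpha_1}{\alpha_2}} B^{\frac{\alpha_2-\alpha_1}{\alpha_2}}\,.
\]
Applying this with $A=\|\nabla^{\alpha_2} f\|_{L^\infty}$ and $B=\|f\|_{\dot{B}^0}$ gives exactly the bound above.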
\\
If $0<\alpha_1<1\leq \alpha_2$ and $\alpha_2$ is not an integer, then \eqref{4.1} reads as:
\begin{equation}\label{2.0b}
\|f\|_{\dot{C}^{\alpha_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \left\|\nabla^{\floor{\alpha_2}} f\right\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,.
\end{equation}
To obtain \eqref{2.0b}, we apply \eqref{2.0c} with $\floor{\alpha_2}$ in place of $\alpha_2$. Thus,
\begin{align*}
|\varphi_\varepsilon * f(x)| &=\left| \int \sum_{|\gamma|=\floor{\alpha_2}} \frac{D^{\gamma} f(\zeta)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\\
&=\left| \int \sum_{|\gamma|= \floor{\alpha_2}} \frac{D^{\gamma} f(\zeta) - D^{\gamma} f(x)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right|
\\
&\lesssim \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int |x-\zeta|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy
\\
&\leq\|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int_{B(0,\varepsilon)} |y|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy \lesssim \varepsilon^{\alpha_2}
\|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \,.
\end{align*}
Thus,
\[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}}
+\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,. \]
Arguing as in the proof of \eqref{2.0a}, we also obtain \eqref{2.0b}.
\\
In conclusion, Theorem \ref{Mainthe1} is proved in the case $0<\alpha_1<1$.
\\
Now, if $\alpha_1\geq 1$, then \eqref{4.1} becomes
\begin{equation}\label{2.0d}
\| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}} \lesssim
\|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|D^{\floor{\alpha_2}} f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,.
\end{equation}
Again, we note that $\|\cdot\|_{\dot{C}^{s_i}}$ is replaced by $\|\cdot\|_{L^\infty}$ whenever $s_i=0$, $i=1,2$.
\\
To obtain \eqref{2.0d}, we apply Theorem \ref{Mainthe} to $f_{{\rm new}}=D^{\floor{\alpha_1}} f$ with $\sigma =\floor{\alpha_1}$.
\\
Hence, it follows from Proposition \ref{Pro1} that
\begin{align*}
\|f\|_{\dot{C}^{\alpha_1}} =\| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}}
&\lesssim
\big\|D^{\floor{\alpha_1}} f\big\|^{\frac{\alpha_2-\floor{\alpha_1}-s_1}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{B}^{-\floor{\alpha_1}}} \big\|D^{\floor{\alpha_1}} f\big\|^{\frac{s_1+\sigma}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{C}^{\alpha_2-\floor{\alpha_1}}}
\\
&\lesssim
\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\big\|D^{\floor{\alpha_2}}f\big\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} = \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}}
\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,.
\end{align*}
This completes the proof of Theorem \ref{Mainthe1} in the case $p_1=p_2=\infty$.
\\
{\bf ii) The case $p_i<\infty, i=1,2$.}
We first consider the case $0<\alpha_1<1$.
\\
{\bf a)} If $\alpha_2\in(\alpha_1, 1)$, then we utilize the equivalence $\|\cdot\|_{\dot{W}^{s,p}} \approx \|\cdot\|_{\dot{B}^{s}_{p,p}}$ for $s\in(0,1)$, $p\geq 1$; see Proposition \ref{Pro-cha} in the Appendix. Therefore, \eqref{4.1} is equivalent to the following inequality
\begin{equation}\label{2.1}
\|f\|_{\dot{B}^{\alpha_1}_{p_1,p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^{0}}\|f\|^\frac{\alpha_1}{\alpha_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \,.
\end{equation}
Note that $\alpha_1 p_1 = \alpha_2 p_2$. Hence,
\begin{align}\label{2.2}
2^{-j \alpha_1 p_1} \|f*\phi_j\|_{L^{p_1}}^{p_1} \leq 2^{-j \alpha_2 p_2} \|f*\phi_j\|_{L^{p_2}}^{p_2} \|f*\phi_j\|_{L^{\infty}}^{p_1-p_2} \leq 2^{-j \alpha_2 p_2} \|f*\phi_j\|_{L^{p_2}}^{p_2} \|f\|_{\dot{B}^0}^{p_1-p_2} \,.
\end{align}
This implies that
\[ \|f\|^{p_1}_{\dot{B}^{\alpha_1}_{p_1,p_1}} \leq \|f\|^{p_1-p_2}_{\dot{B}^{0}} \|f\|^{p_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \]
which is \eqref{2.1}.
\\
{\bf b)} If $\alpha_2= 1$, then we show that
\begin{equation}\label{2.3}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim\|f\|^{1-\alpha_1}_{\dot{B}^{0}} \|Df\|^{\alpha_1}_{L^{p_2}}\,.
\end{equation}
To obtain \eqref{2.3}, we prove the homogeneous version of \eqref{-16}.
\begin{lemma}\label{Lem-Hom-Sobolev}
Let $0<\alpha_0<\alpha_1 <\alpha_2\leq 1$, and $p_0\geq 1$ be such that $\alpha_0 -\frac{1}{p_0}<\alpha_2-\frac{1}{p_2}$, and
\[
\frac{1}{p_1} = \frac{\theta}{p_0} + \frac{1-\theta}{p_2} ,\quad \theta= \frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0}
\,. \]
Then, we have
\begin{equation}\label{2.4}
\|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0} }_{\dot{W}^{\alpha_0,p_0}} \|f\|^{\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0}}_{\dot{W}^{\alpha_2,p_2}},\quad \forall f\in \dot{W}^{\alpha_0,p_0} (\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n) \,.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lem-Hom-Sobolev}]
The proof follows by way of the following result.
\\
If $f\in \dot{W}^{\alpha_0,p_0} (\mathbb{R}^n) \cap \dot{W}^{\alpha_2,p_2}(\mathbb{R}^n)$, then the following estimates hold:
\begin{align}\label{2.5a}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x) \big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}
\end{align}
provided that $\alpha_2<1$, and
\begin{align}\label{2.5}
\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}
\end{align}
if $\alpha_2=1$.
\\
The proof of \eqref{2.5a} (resp. \eqref{2.5}) is similar to that of \eqref{1.13} (resp. \eqref{1.23}); we only need to replace $\mathbf{M}(f)(x)$ by $\mathbf{G}_{\alpha_0,p_0}(f)(x)$ in \eqref{1.13} (resp. \eqref{1.23}).
\\
In fact, by H\"{o}lder's inequality we have
\begin{align}\label{2.6}
\int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
&\leq \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big|^{p_0} \, dy\right)^{\frac{p_1}{p_0}} \frac{dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&= \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x) - f(x+y) \big|^{p_0}}{|z|^{\alpha_0 p_0}} \, dy\right)^{\frac{p_1}{p_0}} \frac{|z|^{\alpha_0 p_1}dz}{|z|^{n+\alpha_1 p_1}} \nonumber
\\
&\lesssim \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \int_{\{|z|\geq t\}} |z|^{-n-(\alpha_1-\alpha_0)p_1} \, dz\nonumber
\\
&\lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \,.
\end{align}
If $\alpha_2<1$, then it follows from \eqref{2.6} and \eqref{1.14} that
\begin{align*}
\int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(\alpha_2-\alpha_1)p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{p_1} \,.
\end{align*}
Thus, \eqref{2.5a} follows by minimizing the right hand side of the indicated inequality in $t$.
\\
Next, applying H\"older's inequality in \eqref{2.5a} with the pair of exponents $\big(\frac{p_0(\alpha_2-\alpha_0)}{p_1(\alpha_2-\alpha_1)},\frac{p_2(\alpha_2-\alpha_0)}{p_1(\alpha_1-\alpha_0)}\big)$ yields
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}}
\\
&\lesssim \int \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1} dx
\\
&\leq \big\| \mathbf{G}_{\alpha_0,p_0}(f) \big\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{L^{p_0}} \big\| \mathbf{G}_{\alpha_2,p_2}(f) \big\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{L^{p_2}}
\\
&\leq \|f\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_0,p_0}} \|f\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
Note that the last inequality is obtained by Remark \ref{Rem4}. Hence, we get \eqref{2.4} for $\alpha_2<1$.
\\
If $\alpha_2=1$, then
it follows from \eqref{2.6} and \eqref{1.24} that
\begin{align*}
\int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{p_1}
\end{align*}
which implies \eqref{2.5} after minimizing the right hand side in $t$.
\\
By applying H\"older's inequality with $\big(\frac{p_0(1-\alpha_0)}{p_1(1-\alpha_1)},\frac{p_2(1-\alpha_0)}{p_1(\alpha_1-\alpha_0)}\big)$, we obtain
\begin{align*}
\|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \\
&\lesssim \int_{\mathbb{R}^n} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1} dx
\\
&\leq \big\|\mathbf{G}_{\alpha_0,p_0}(f)\big\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{L^{p_0}} \big\|\mathbf{M}(|Df|) \big\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}}
\\
&\lesssim \|f\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}} \,.
\end{align*}
This yields \eqref{2.4} for $\alpha_2=1$.
\\
Hence, we complete the proof of Lemma \ref{Lem-Hom-Sobolev}.
\end{proof}
Now, we apply Lemma \ref{Lem-Hom-Sobolev} with $\alpha_2=1$ in order to obtain
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1, p_1}}\lesssim \|f\|^{\frac{1-\alpha_1}{1-\alpha_0}}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{\frac{\alpha_1-\alpha_0}{1-\alpha_0}}_{L^{p_2}} \,,
\end{align*}
where $\alpha_0, p_0$ are chosen as in Lemma \ref{Lem-Hom-Sobolev}.
\\
Next, \eqref{2.1} (combined with Proposition \ref{Pro-cha}) gives
\[ \|f\|_{\dot{W}^{\alpha_0, p_0}} \lesssim \|f\|^{1-\frac{\alpha_0}{\alpha_1}}_{\dot{B}^0} \|f\|^{\frac{\alpha_0}{\alpha_1}}_{\dot{W}^{\alpha_1, p_1}} \,.\]
Combining the last two inequalities yields the desired result.
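Concretely, with $\theta=\frac{1-\alpha_1}{1-\alpha_0}$, substituting the second inequality into the first gives
\[
\|f\|_{\dot{W}^{\alpha_1, p_1}} \lesssim \left( \|f\|^{1-\frac{\alpha_0}{\alpha_1}}_{\dot{B}^0} \|f\|^{\frac{\alpha_0}{\alpha_1}}_{\dot{W}^{\alpha_1, p_1}} \right)^{\theta} \|Df\|^{1-\theta}_{L^{p_2}} \,.
\]
Assuming $\|f\|_{\dot{W}^{\alpha_1,p_1}}<\infty$ (which can be arranged by a standard approximation argument), we may absorb the factor $\|f\|^{\theta\alpha_0/\alpha_1}_{\dot{W}^{\alpha_1,p_1}}$ into the left-hand side; since $1-\theta\frac{\alpha_0}{\alpha_1}=\frac{\alpha_1-\alpha_0}{\alpha_1(1-\alpha_0)}$, simplifying the exponents yields
\[
\|f\|_{\dot{W}^{\alpha_1, p_1}} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^{p_2}}\,,
\]
which is \eqref{2.3}.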
\\
{\bf The case $\alpha_2>1$}.
\\
If $\alpha_2$ is not an integer, then we apply Theorem \ref{Mainthe} with $\sigma=\floor{\alpha_2}$ to get
\begin{align}\label{2.7}
\| D^{\floor{\alpha_2}} f \|_{L^q} \lesssim
\| D^{\floor{\alpha_2}} f \|^{\frac{s_2}{\alpha_2}}_{\dot{B}^{-\floor{\alpha_2}}} \|D^{\floor{\alpha_2}} f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{s_2,p_2}} \lesssim
\|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align}
with $q=p_2\frac{\alpha_2}{\floor{\alpha_2}}$.
Recall that $\alpha_2=\floor{\alpha_2}+s_2$.
If $\floor{\alpha_2}=1$, then it follows from \eqref{2.3} and the last inequality that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^q} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left( \|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\alpha_1}=\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^0}\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,,
\end{align*}
with $q=\alpha_1 p_1=\alpha_2 p_2$ since $\floor{\alpha_2}=1$.
\\
This yields \eqref{4.1} when $\floor{\alpha_2}=1$.
If $\floor{\alpha_2}>1$, then we can apply Theorem \ref{TheC} in order to get
\begin{align*}
\|Df\|_{L^{q_1}} \lesssim \|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \,,
\end{align*}
with $q_1=\alpha_1 p_1$, and $q_2 = \frac{q_1}{\floor{\alpha_2}} = \frac{\alpha_2 p_2}{\floor{\alpha_2}}$.
\\
A combination of the last inequality, \eqref{2.7}, and \eqref{2.3} implies that
\begin{align*}
\|f\|_{\dot{W}^{\alpha_1,p_1}} &\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^{q_1}} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left(\|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \right)^{\alpha_1}
\\
& \lesssim\|f\|^{1-\frac{\alpha_1}{\floor{\alpha_2}}}_{\dot{B}^0} \left(\|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0}
\|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1}{\floor{\alpha_2}}}
= \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{align*}
Hence, we obtain \eqref{4.1} when $\floor{\alpha_2}>1$.
\\
The case where $\alpha_2>1$ is an integer can be handled similarly; we leave the details to the reader.
\section{Appendix}
\begin{proposition}\label{Pro-cha} Let $\alpha\in(0,1)$ and $1\leq p<\infty$. Then the following holds true
\begin{equation}\label{5.1}
\|f\|_{\dot{W}^{\alpha,p}} \approx \|f\|_{\dot{B}^{\alpha}_{p,p}} ,\quad \forall f\in \mathcal{S}(\mathbb{R}^n) \,.
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{Pro-cha}] To obtain the result, we follow the proof of Grevholm \cite{Grevholm}.
\\
First of all, for any $s\in(0,1)$, $1\leq p<\infty$, it is known that (see, e.g., \cite{Leoni, Triebel})
\[
\|f\|_{\dot{W}^{s,p}} \approx \left( \sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+s p}} \right)^{1/p} ,\quad \forall f\in W^{s,p}(\mathbb{R}^n)\,,
\]
where $\Delta_{te_k} f (x) = f(x+te_k)-f(x)$, and $e_k$ is the $k$-th vector of the canonical basis in $\mathbb{R}^n$, $k=1,\dots,n$.
\\
Thanks to this result, \eqref{5.1} is equivalent to
\begin{align}\label{5.1a}
\sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \approx \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,.
\end{align}
Then, we first show that
\begin{equation} \label{5.1c}
\sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \lesssim \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,.
\end{equation}
It suffices to prove that
\begin{equation} \label{5.1b}
\int^\infty_0 \big\|\Delta_{te_1} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \lesssim \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,.
\end{equation}
Indeed, let $\varphi\in\mathcal{S}(\mathbb{R}^n)$ be such that ${\rm supp}(\hat{\varphi})\subset \big\{ \frac{1}{2}< |\xi|< 2 \big\}$, $\hat{\varphi}(\xi) \not=0$ in $\big\{ \frac{1}{4}< |\xi|<1 \big\}$, $\varphi_j(x)= 2^{-jn}\varphi(2^{-j}x)$ for $j\in\mathbb{Z}$, and $\displaystyle\sum_{j\in\mathbb{Z}} \hat{\varphi_j}(\xi) =1$ for $\xi\not=0$.
\\
Next, let us set
$$\widehat{\psi}_j(\xi) = \big(e^{it\xi_1}-1\big) \widehat{\varphi}_j(\xi)\,, \quad \xi=(\xi_1,...,\xi_n) \,.$$
Note that for any $g\in\mathcal{S}(\mathbb{R}^n)$, $$\mathcal{F}^{-1}\big\{(e^{it\xi_1}-1) \widehat{g}\big\} = g(x+te_1)- g(x) \,,$$
where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform.
\\
Since ${\rm supp}(\widehat{\varphi}_j) \cap {\rm supp}(\widehat{\varphi}_l) =\emptyset$ whenever $|l-j|\geq 2$, we have
\begin{align}\label{5.2}
\psi_j * f &= \psi_j * \left(\sum_{i\in\mathbb{Z}} \varphi_i \right) * f = \psi_j * \big(\varphi_{j-1} + \varphi_j +\varphi_{j+1} \big) * f \,.
\end{align}
Applying Young's inequality yields
\begin{align}\label{5.3}
\| \psi_j * \varphi_{j} * f \|_{L^p}& \leq \| \psi_j \|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber
\\
&= \big\| \mathcal{F}^{-1}\big\{ (e^{it\xi_1}-1) \widehat{\varphi}_j(\xi) \big\}\big\|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber
\\
&= \| \varphi_j(\cdot+te_1)-\varphi_{j}(\cdot)\|_{L^1}
\| \varphi_{j} * f \|_{L^p} \leq C\| \varphi_{j} * f \|_{L^p} \,,
\end{align}
where $C=C_\varphi$ is independent of $j$.
\\
On the other hand, we observe that
\begin{align*}
\big|\varphi_j(x+te_1)-\varphi_{j}(x)\big|&=\big| \int^1_0 D\varphi_{j} (x + \tau t e_1) \cdot te_1 \, d\tau \big|
\\
&\leq t\int^1_0 \big|D\varphi_{j} (x + \tau t e_1) \big| \, d\tau = t 2^{-j} 2^{-jn} \int^1_0 \big|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big| \, d\tau \,.
\end{align*}
Therefore,
\begin{align}\label{5.5}
\| \varphi_j(\cdot+te_1)-\varphi_{j}(\cdot)\|_{L^1} &\leq t 2^{-j} 2^{-jn} \int^1_0 \big\|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big\|_{L^1} \, d\tau \nonumber
\\
& = t 2^{-j} \int^1_0 \|D\varphi\|_{L^1} \, d\tau = C(\varphi) \, t 2^{-j} \,.
\end{align}
Combining \eqref{5.2}, \eqref{5.3} and \eqref{5.5} yields
\begin{equation}\label{5.6}
\| \psi_j * f \|_{L^p} \lesssim \min\{1,t2^{-j}\} \sum_{|l-j|\leq 1}\| \varphi_{l} * f \|_{L^p} \,,\quad \forall j\in\mathbb{Z}\,.
\end{equation}
Now, recall that\, $f(x+te_1)-f(x) = \displaystyle\sum_{j\in\mathbb{Z}} \psi_j * f(x)$ in $\mathcal{S}^\prime(\mathbb{R}^n)$.
Then, we deduce from \eqref{5.6} that
\begin{align*}
\int^\infty_0 \int_{\mathbb{R}^n} \frac{|f(x+te_1)-f(x)|^p}{t^{1+\alpha p}} \, dx dt &= \int_{0}^{\infty} \big\| \sum_{j\in \mathbb{Z}} \psi_j * f\big\|_{L^p}^p \frac{dt}{t^{1+\alpha p}}
\\
&\lesssim \sum_{k\in\mathbb{Z}} \int^{2^k}_{2^{k-1}} \sum_{j\in \mathbb{Z}} \min\{1,t^p 2^{-jp}\} \| \varphi_j * f\|_{L^p}^p \frac{dt}{t^{1+\alpha p}}
\\
&\lesssim \sum_{k\in\mathbb{Z}} 2^{-k\alpha p} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} \| \varphi_j * f\|_{L^p}^p
\\
&= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} 2^{-(k-j)\alpha p} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right]
\\
&= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{2^{-(k-j)\alpha p},2^{(k-j)(1-\alpha)p}\} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right]
\\
&\leq \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} 2^{-|k-j|\delta} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right],\quad \delta=\min\{\alpha p , (1-\alpha)p\}
\\
&\leq C_\delta \sum_{k\in\mathbb{Z}} \left[ 2^{-k\alpha p} \|\varphi_k * f\|_{L^p}^p \right] = C_\delta \|f\|_{\dot{B}^{\alpha}_{p,p}}^p \,.
\end{align*}
Similarly, we also obtain
\[ \int^\infty_0 \int_{\mathbb{R}^n} \frac{|f(x+te_k)-f(x)|^p}{t^{1+\alpha p}} \, dx dt \lesssim \|f\|_{\dot{B}^{\alpha}_{p,p}}^p ,\quad k=2,\dots,n \,.\]
Combining these estimates yields \eqref{5.1c}.
\\
For the converse, let $\{\varphi_j\}_{j\in\mathbb{Z}}$ be the sequence above. By following \cite[page 246]{Grevholm}, we can construct a function $\psi\in\mathcal{S}(\mathbb{R}^n)$ such that $\hat{\psi}(\xi) =1$ on $\{1/2 \leq|\xi|\leq 2\}$ and $\widehat{\psi}=\displaystyle\sum^n_{k=1} \widehat{h}^{k}$, where $h^{k}\in\mathcal{S}(\mathbb{R}^n)$ satisfies
\begin{align}\label{5.8a}
\sup_{t\in(2^{j-1}, 2^j)} \big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{L^1} \leq C , \quad k=1,\dots,n \,,
\end{align}
where $h^k_j(x) = 2^{-jn}h^k(2^{-j}x)$, and the constant $C>0$ is independent of $k$ and $j$.
\\
Note that
$$h^k_j *f = \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} * \Delta_{te_k} f \,.$$
With this fact noted, it follows from the triangle inequality, \eqref{5.8a}, and Young's inequality that
\begin{align*}
\big\|\psi_j* f\big\|_{L^p} &= \big\|\sum_{k=1}^n h^k_j *f \big\|_{L^p} \leq \sum_{k=1}^n \big\| h^k_j *f \big\|_{L^p} = \sum_{k=1}^n\big\| \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} * \Delta_{te_k} f \big\|_{L^p}
\\
&\leq \sum_{k=1}^n\big\| \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} \big\|_{L^1} \big\| \Delta_{te_k} f \big\|_{L^p} \leq \sum_{k=1}^n\big\| \frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1} \big\|_{L^1} \big\| \Delta_{te_k} f \big\|_{L^p}
\\
&\lesssim
\sum_{k=1}^n \big\| \Delta_{te_k} f \big\|_{L^p} \,, \quad \text{uniformly on } t\in(2^{j-1},2^j)\,.
\end{align*}
This implies that
\begin{align}\label{5.9}
\big\| \psi_{j}*f \big\|^p_{L^p} \lesssim\sum_{k=1}^n \big\|\Delta_{te_k} f \big\|^p_{L^p} \,, \quad \text{uniformly on } t\in(2^{j-1},2^j)\,.
\end{align}
On the other hand, it is clear that $\hat{\psi}(\xi) \hat{\varphi} (\xi)=\hat{\varphi} (\xi)$ since ${\rm supp}(\hat{\varphi})\subset \{1/2 \leq|\xi|\leq 2\}$.
\\
Hence, it follows from \eqref{5.9} that
\begin{align*}
\| \varphi_{j} *f \|^p_{L^p} = \| \varphi_{j} * \psi_j * f \|^p_{L^p} \leq \| \varphi_{j} \|^p_{L^1} \|\psi_j * f\|^p_{L^p} \lesssim \sum_{k=1}^n \big\|\Delta_{te_k} f \big\|^p_{L^p}
\end{align*}
uniformly on $t\in (2^{j-1},2^j)$.
\\
Thus,
\begin{align*}
\sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \| \varphi_{j} *f \|^p_{L^p} &\lesssim \sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \sum_{k=1}^n \fint^{2^j}_{2^{j-1}} \big\|\Delta_{te_k} f \big\|^p_{L^p} \,dt \lesssim \sum_{k=1}^n \int^\infty_0 \|\Delta_{te_k} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}}
\end{align*}
which yields
\[ \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \lesssim \sum_{k=1}^n \int^\infty_0 \|\Delta_{te_k} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}} \,. \]
This completes the proof of Proposition \ref{Pro-cha}.
\end{proof}
\textbf{Acknowledgement.} The research is funded by University of Economics Ho Chi Minh City, Vietnam.
\end{document}
\begin{document}
\title[Moduli of log twisted $\mathcal{N} =1$ SUSY curves]{Moduli of log twisted $\mathcal{N} =1$ SUSY curves}
\author{Yasuhiro Wakabayashi}
\date{}
\markboth{Yasuhiro Wakabayashi}{}
\maketitle
\footnotetext{Y. Wakabayashi: Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo, 153-8914, Japan;}
\footnotetext{e-mail: {\tt [email protected]};}
\footnotetext{2010 {\it Mathematical Subject Classification}: Primary 81R60, Secondary 17A70;}
\footnotetext{Key words: superscheme, supersymmetry, super Riemann surface, compactification, twisted curve}
\begin{abstract} The goal of the present paper is to construct a smooth compactification of the moduli superstack classifying pointed $\mathcal{N} =1$ SUSY (= $\text{SUSY}_1$) curves.
This construction is based on the Abramovich-Jarvis-Chiodo compactification of the moduli stack classifying spin curves. First, we give a general framework of a theory of log superschemes (or more generally, log superstacks). Then, we introduce the notion of a pointed (stable) log twisted $\text{SUSY}_1$ curve; it may be thought of as a logarithmic and twisted generalization of the classical notion of a pointed $\text{SUSY}_1$ curve, as well as
a supersymmetric analogue of the notion of a pointed (log) twisted curve. The main result of the present paper asserts that the moduli superstack classifying pointed stable log twisted $\text{SUSY}_1$ curves may be represented by a log superstack whose underlying superstack is a superproper and supersmooth Deligne-Mumford superstack. Consequently, this moduli superstack forms a smooth compactification different from the compactification proposed by P. Deligne.
\end{abstract} \tableofcontents
\section*{Introduction}
\subsection*{0.1} \label{S01}
The goal of the present paper is {\it to provide a rigorous construction of a smooth compactification of the moduli superstack classifying pointed $\mathcal{N} =1$ SUSY curves}. Throughout the present paper, we abbreviate ``$\mathcal{N} =1$ SUSY'' to ``$\text{SUSY}_1$'' for simplicity. Recall that $\text{SUSY}_1$ curves and their analytic counterparts, called super Riemann surfaces,
have been widely studied (intensively in the 1980s) in the physics literature on supersymmetry.
Super Riemann surfaces are defined to be
complex supermanifolds of superdimension $1|1$ satisfying an additional superconformal condition (cf. e.g., ~\cite{LR}). Super Riemann surfaces
play the role of
the correct supersymmetric analogue of Riemann surfaces, and their moduli superspace plays a role analogous to that of the moduli space classifying Riemann surfaces in bosonic string theory (cf. ~\cite{Frie}). Indeed, just as the world sheet of a bosonic string carries the structure of a Riemann surface, the world sheet in superstring theory is a super Riemann surface. Also, perturbative calculations in superstring theory are carried out by integration over this moduli superspace.
Besides having such physical applications, the theory of super Riemann surfaces and their moduli is interesting in its own right from the mathematical viewpoint. In order to achieve a deep understanding of this theory (from the mathematical viewpoint or another),
it is worth asking the following question regarding their global structure:
\begin{quote}\textit{What is a natural (smooth) compactification of the moduli superspace classifying super Riemann surfaces (or more generally, pointed $\text{SUSY}_1$ curves)?} \end{quote}
\subsection*{0.2} \label{S02}
To answer this question,
P. Deligne constructed, in his letter to Y. Manin (cf. ~\cite{Del}), a smooth compactification of the moduli superstack classifying (unpointed) $\text{SUSY}_1$ curves; it may be thought of as
an analogue of the Deligne-Mumford compactification of the moduli stack classifying proper smooth algebraic curves, and it is obtained by adding certain divisors at infinity parametrizing $\text{SUSY}_1$ curves with nodes.
The main difference from the bosonic case is that, in constructing the compactification, we need to allow
two different types of degeneration of $\text{SUSY}_1$ curves, called Neveu-Schwarz and Ramond degenerations.
We refer to ~\cite{Witten1}, \S\,6, for a detailed exposition, including the physical viewpoint, of the moduli superspace classifying $\text{SUSY}_1$ curves (with marked points).
In the present paper, we consider a smooth compactification different from the one constructed by P. Deligne, which moreover covers the case of pointed $\text{SUSY}_1$ curves.
\subsection*{0.3} \label{S03}
Let us describe the main theorem of the present paper. Let $S_0$ be a noetherian affine scheme over $\mathbb{Z} [\frac{1}{2}]$, $\lambda$ a positive even integer which is invertible in $S_0$, and $(g,r)$ a pair of nonnegative integers such that $r$ is even and $2g-2 +r>0$.
Write $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$ (cf. (\ref{EE13})) for the category of fs log superschemes (cf. Definition \ref{d2b}) over $S_0$. Also, write \begin{equation} {^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}} \end{equation} (cf. (\ref{EE12})) for the category fibered in groupoids over $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$ classifying families of {\it stable log twisted $\text{SUSY}_1$ curves of type $(g, r, \lambda)$} (cf. Definition \ref{D03}) parametrized by log superschemes in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$.
Then, our main result is the following theorem.
(See Definitions \ref{d089} (ii), \ref{d0299}, \ref{d08gg9}, and \ref{d2} for the definitions of the various notions appearing in the statement.)
\begin{intthm}
\label{y019}
\leavevmode\\
\ \ \
${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ may be represented by
a log superstack whose underlying superstack is a superproper and supersmooth Deligne-Mumford superstack over $S_0$ of relative superdimension $3g-3+r | 2g-2+ \frac{r}{2}$.
\end{intthm}
\subsection*{0.4} \label{S04}
Let us make a remark on the main result just described. Denote by ${^{\S_1} \mathfrak{M}}_{g,r}^\circledS$ the moduli superstack classifying $r$-pointed (supersmooth) $\text{SUSY}_1$ curves of genus $g$ (in the classical sense) over log superschemes in $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS \mathrm{log}}_{/S_0}$. Since any pointed (supersmooth) $\text{SUSY}_1$ curve is a stable log twisted $\text{SUSY}_1$ curve, we have a natural inclusion ${^{\S_1} \mathfrak{M}}_{g,r}^\circledS \hookrightarrow {^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$; it is an open immersion whose image is dense and coincides with the locus in which the log structure of ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ becomes trivial.
Thus, the moduli log superstack ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$, being our central character, forms a smooth compactification of ${^{\S_1} \mathfrak{M}}_{g,r}^\circledS$ (different from the compactification by P. Deligne). A feature of our compactification is that we add divisors at infinity which parametrize pointed $\text{SUSY}_1$ curves (equipped with a logarithmic structure) admitting at most
a single type of degeneration.
Next, recall the discussion of the non-projectedness (cf. ~\cite{Witten2}, \S\,2), as well as the non-splitness (cf. Definition \ref{d0d99ff}), of ${^{\S_1} \mathfrak{M}}_{g,r}^\circledS$ considered in ~\cite{Witten2}. By applying (an argument similar to) the argument in {\it loc.\,cit.}\,to our situation, we will be able to verify the non-projectedness (and hence, the non-splitness) of (the underlying superstack of) ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ even when $S_0$ is not necessarily $\mathrm{Spec} (\mathbb{C})$ (where $\mathbb{C}$ denotes the field of complex numbers).
Indeed,
since the non-projectedness of ${^{\S_1} \mathfrak{M}}_{g,r}^\circledS$ implies the non-projectedness of $ {^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$, it follows from Theorems 1.1-1.3 in {\it loc.\,cit.}\,that $ {^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ is non-projected for many cases of $(g,r)$.
This means that ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ cannot be reconstructed from purely bosonic moduli stacks
in an elementary fashion and, in some sense, needs to be studied independently.
\subsection*{0.5} \label{S05}
Let us briefly explain the points of our discussion and the organization of the present paper. In \S\,1, we give (and recall) a general framework of a theory of superschemes, and more generally, superstacks. Then, we define, in \S\,2, a logarithmic structure on a superscheme, as well as a superstack. The motivation for introducing the notion of a log superstack (i.e., a superstack equipped with a logarithmic structure) is to consider a supersymmetric analogue of (stable) pointed twisted curves with a canonical log structure. (We refer to ~\cite{AV1}, Definition 4.3.1, and ~\cite{Chi1}, Definition 2.4.1, for the definition of a pointed twisted curve, and to ~\cite{O1}, Theorem 3.5, for the canonical logarithmic structure defined on a pointed twisted curve.)
By means of the various notions defined in \S\S\,1-2, we present, in \S\,3, the definition of a (stable) pointed log twisted $\text{SUSY}_1$ curve as, roughly speaking, a certain pointed
log superstack of superdimension $1|1$ equipped with an additional superconformal structure (cf. Definition \ref{D02} and Definition \ref{D03}). Thus, for a suitable triple $(g,r, \lambda)$ of nonnegative integers, one may obtain the category ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ fibered in groupoids, as we introduced above,
classifying stable log twisted $\text{SUSY}_1$ curves of type $(g,r, \lambda)$.
Denote by $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$ the restriction of ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ to the full subcategory $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\mathrm{log} \subseteq \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$ consisting of fs log schemes (in the classical sense). As discussed in \S\,4, the key point is that giving a family of pointed log twisted $\text{SUSY}_1$ curves parametrized by an fs log scheme (i.e., an object in $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$) is equivalent to giving a family of pointed log twisted curves equipped with an additional datum called a pointed spin structure (cf. Definition \ref{De2}). (This observation for the case of unpointed smooth $\text{SUSY}_1$ curves is classical and well-known.)
This implies (cf. Proposition \ref{P66}) that $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$ is canonically isomorphic to the moduli stack ${^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$ (cf. (\ref{EE33})) classifying $\lambda$-stable log twisted curves of type $(g,r)$ equipped with a pointed spin structure. On the other hand, D. Abramovich, T. J. Jarvis, and A. Chiodo proved (cf. ~\cite{AJ1}, Theorem 1.5.1 and ~\cite{Chi1}, Corollary 4.11) that ${^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$ ($\cong ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$) may be represented by a proper smooth Deligne-Mumford stack, which forms a compactification of the moduli stack classifying pointed smooth spin curves (in the classical sense). Thus, by thickening this Deligne-Mumford stack in the fermionic directions in such a way that a universal stable log twisted $\text{SUSY}_1$ curve exists (uniquely), we construct, in \S\,5, a log superstack representing ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ and satisfying the desired conditions described in Theorem A.
\hspace{-4mm}{\bf Acknowledgement} \leavevmode\\
\ \ \ The author cannot express enough his sincere and deep gratitude to all those who gave him the opportunity to study mathematics or imparted to him the great joy of doing so. The author intends the present paper as a letter of gratitude to them. The author was partially supported by the Grant-in-Aid for Scientific Research (KAKENHI No.\,15J02721) and the FMSP program at the Graduate School of Mathematical Sciences of the University of Tokyo.
\section{Superschemes and superstacks}
The aim of this section is to give a brief introduction to the theory of superschemes (or more generally, superstacks). We first recall the notion of a superscheme (cf. Definition \ref{d3}) and then discuss basic properties of superschemes and morphisms between them.
In particular, we define a super\'{e}tale morphism (cf. Definition \ref{d2}), which is a supersymmetric analogue of an \'{e}tale morphism in the classical sense. By means of this sort of morphism, one obtains a category of superschemes equipped with a Grothendieck
pretopology,
which will be denoted by $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$ (cf. (\ref{FF01})), and moreover, obtains the definition of a (Deligne-Mumford) superstack (cf. Definitions \ref{D029} and \ref{d0299}). Finally, we show (cf. Proposition \ref{p0207}) that any Deligne-Mumford stack admits a complete versal family which is isomorphic to a split (cf. Definition \ref{d0d99ff}) and supersmooth (cf. Definition \ref{d0819}) superscheme.
Basic references for the notion of a {\it superscheme} are, e.g., ~\cite{CR}, ~\cite{G}, and ~\cite{Man}.
Let $R_0$ be a noetherian ring over $\mathbb{Z} [\frac{1}{2}]$. Throughout the present paper, {\it all schemes are assumed to be locally noetherian schemes over the affine scheme $S_0 := \mathrm{Spec}(R_0)$ and all morphisms of schemes are assumed to be locally of finite presentation.}
\subsection{Superschemes} \label{S11} \leavevmode\\
First, recall the definition of a superscheme as follows.
\begin{defi} \label{d3}\leavevmode\\
\begin{itemize} \item[(i)]
A {\bf superscheme} (over $S_0$) is a pair $X^\circledS := (X_b, \mathcal{O}_{X^\circledS})$ consisting of a (locally noetherian) scheme $X_b$ over $S_0$ and a {\it coherent} sheaf of superalgebras $\mathcal{O}_{X^\circledS}$ over $\mathcal{O}_{X_b}$
such that the natural morphism $\mathcal{O}_{X_b} \rightarrow \mathcal{O}_{X^\circledS}$ is injective and its image coincides with
the bosonic (i.e., even) part of $\mathcal{O}_{X^\circledS}$.
We write $\mathcal{O}_{X_f}$ for the fermionic (i.e., odd) part of $\mathcal{O}_{X^\circledS}$ and identify $\mathcal{O}_{X_b}$ with the bosonic part via the injection $\mathcal{O}_{X_b} \hookrightarrow \mathcal{O}_{X^\circledS}$ (hence, $\mathcal{O}_{X^\circledS} = \mathcal{O}_{X_b} \oplus \mathcal{O}_{X_f}$).
\item[(ii)]
Let $X^\circledS := (X_b, \mathcal{O}_{X^\circledS})$ and $Y^\circledS := (Y_b, \mathcal{O}_{Y^\circledS})$ be two superschemes (over $S_0$).
A {\bf morphism of superschemes (over $S_0$)} from $Y^\circledS$ to $X^\circledS$ is a pair $f^\circledS := (f_b, f^\flat)$ consisting of a morphism $f_b : Y_b \rightarrow X_b$ of schemes (over $S_0$, which is locally of finite presentation) and a morphism of superalgebras $f^\flat : f_b^*(\mathcal{O}_{X^\circledS}) \ (:= \mathcal{O}_{Y_b} \otimes_{f_b^{-1}(\mathcal{O}_{X_b})} f_b^{-1}(\mathcal{O}_{X^\circledS})) \rightarrow \mathcal{O}_{Y^\circledS}$ over $\mathcal{O}_{Y_b}$.
\end{itemize}
\end{defi}
We always identify any scheme $X_b$ (over $S_0$) with a superscheme $X^\circledS := (X_b, \mathcal{O}_{X^\circledS})$ with $\mathcal{O}_{X_f} =0$.
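To illustrate Definition \ref{d3} with an elementary example (ours, not taken from the references cited above): take $X_b := \mathrm{Spec}(R_0)$ and let $\psi$ be a single fermionic (i.e., odd) indeterminate, so that $\psi^2 = 0$. Then the pair \begin{align} X^\circledS := (X_b, \ \mathcal{O}_{X_b} \oplus \mathcal{O}_{X_b} \psi) \end{align} is a superscheme whose bosonic part is $\mathcal{O}_{X_b}$ and whose fermionic part is $\mathcal{O}_{X_f} = \mathcal{O}_{X_b} \psi$; it coincides with the superaffine space $\mathbb{A}^{0|1}_{S_0}$ introduced later in this section.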
\begin{rema} \label{r486} \leavevmode\\
\ \ \
The above definition of a superscheme may differ from the usual one in the sense that the fermionic part of the structure sheaf of a superscheme is usually not assumed to be coherent.
In fact, the coherence condition is usually regarded as an additional condition on a superscheme, which is, in ~\cite{G}, Definition 2.6, referred to as ``{\it fermionically of finite presentation}''.
But, in the present paper, we only deal with superschemes fermionically of finite presentation, in order that the super\'{e}tale morphisms, as well as the supersmooth morphisms, defined later are well-behaved.
This is why we define the notion of a superscheme as above.
\end{rema}
Let $X^\circledS$ be a superscheme and $\mathcal{F}$ a left $\mathcal{O}_{X^\circledS}$-supermodule. We write \begin{align} \mathcal{F}_b \ \ \ (\text{resp.}, \ \mathcal{F}_f) \end{align}
for the bosonic (resp., fermionic) part of $\mathcal{F}$ (hence $\mathcal{F}= \mathcal{F}_b \oplus \mathcal{F}_f$).
$\mathcal{F}$ may be considered as a right $\mathcal{O}_{X^\circledS}$-supermodule equipped with an
$\mathcal{O}_{X^\circledS}$-action
given
by $m \cdot a := (-1)^{|m| \cdot |a|} a \cdot m$ for homogeneous local sections $a \in \mathcal{O}_{X^\circledS}$, $m \in \mathcal{F}$ (where $|-|$ denotes the parity function).
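The following routine verification (supplementing the text) shows that this sign rule indeed defines a right action: for homogeneous local sections $a, b \in \mathcal{O}_{X^\circledS}$ and $m \in \mathcal{F}$, the supercommutativity relation $b \cdot a = (-1)^{|a| \cdot |b|} a \cdot b$ yields \begin{align} (m \cdot a) \cdot b = (-1)^{|m| \cdot |a| + (|m| + |a|) \cdot |b|} \cdot b \cdot a \cdot m = (-1)^{|m| \cdot (|a| + |b|)} \cdot (ab) \cdot m = m \cdot (ab). \end{align}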
By an {\bf $\mathcal{O}_{X^\circledS}$-supermodule}, we shall mean simply a left $\mathcal{O}_{X^\circledS}$-supermodule, which is often treated as a right $\mathcal{O}_{X^\circledS}$-supermodule by this consideration. Also,
by a {\bf supervector bundle (of superrank $m|n$)} on $X^\circledS$ (where both $m$ and $n$ are nonnegative integers), we mean a locally free (left) $\mathcal{O}_{X^\circledS}$-supermodule (of superrank $m|n$).
Let $f^\circledS := (f_b, f^\flat): Y^\circledS \rightarrow X^\circledS$ be a morphism of superschemes. If we are given an $\mathcal{O}_{X^\circledS}$-supermodule (resp., an $\mathcal{O}_{Y^\circledS}$-supermodule) $\mathcal{F}$, then one may define, via the natural morphism $f_b^{-1}(\mathcal{O}_{X^\circledS}) \rightarrow \mathcal{O}_{Y^\circledS}$, the pull-back (resp., direct image) of $\mathcal{F}$ to be the $\mathcal{O}_{Y^\circledS}$-supermodule (resp., the $\mathcal{O}_{X^\circledS}$-supermodule) \begin{align} f^{\circledS *}(\mathcal{F}) := \mathcal{O}_{Y^\circledS} \otimes_{f^{-1}_b (\mathcal{O}_{X^\circledS})} f^{-1}_b(\mathcal{F}) \ \ (\text{resp.,} \ f_*^\circledS (\mathcal{F}) := f_{b*}(\mathcal{F})). \end{align}
\begin{defi} \label{d32}\leavevmode\\
\ \ \ Let $S^\circledS$ be a superscheme.
\begin{itemize}
\item[(i)]
Let
$X^\circledS$ and $Y^\circledS$ be superschemes over $S^\circledS$ and $f^\circledS \ (:= (f_b, f^\flat)) : Y^\circledS \rightarrow X^\circledS$ a morphism of superschemes over $S^\circledS$. We shall say that $f^\circledS$ is a {\bf closed immersion (over $S^\circledS$)} if $f_b : Y_b \rightarrow X_b$ is a closed immersion
and $f^\flat : f_b^*(\mathcal{O}_{X^\circledS}) \rightarrow \mathcal{O}_{Y^\circledS}$ is surjective. \item[(ii)] Let $X^\circledS$ be a superscheme over $S^\circledS$. A {\bf closed subsuperscheme} of $X^\circledS$ is an equivalence class of closed immersions into $X^\circledS$, where two morphisms $f_1^\circledS: Y^\circledS_1 \rightarrow X^\circledS$, $f^\circledS_2 : Y^\circledS_2 \rightarrow X^\circledS$ over $S^\circledS$ are {\bf equivalent} if there exists an isomorphism $\iota^\circledS : Y^\circledS_1 \isom Y^\circledS_2$ satisfying that $f_2^\circledS \circ \iota^\circledS = f^\circledS_1$. If $f^\circledS : Y^\circledS \rightarrow X^\circledS$ is a closed immersion, then we shall write $[f^\circledS]$ for the closed subsuperscheme of $X^\circledS$ represented by $f^\circledS$.
\end{itemize}
\end{defi}
Let $X^\circledS := (X_b, \mathcal{O}_{X^\circledS})$ be a superscheme.
By means of the morphism \begin{equation} \label{e22} \beta^\circledS_X : X^\circledS \rightarrow X_b \end{equation}
corresponding to the inclusion $\mathcal{O}_{X_b} \hookrightarrow \mathcal{O}_{X^\circledS}$, $X^\circledS$ may be thought of as a superscheme over the scheme $X_b$.
The construction of $\beta_X^\circledS$ is evidently functorial in $X^\circledS$, that is to say, $\beta^\circledS_X \circ f^\circledS = f_b \circ \beta_Y^\circledS$ for any superscheme $Y^\circledS$ and any morphism
$f^\circledS \ (:= (f_b, f^\flat)) : Y^\circledS \rightarrow X^\circledS$.
Denote by \begin{align} \mathcal{N}_{X^\circledS} \end{align} the superideal of $\mathcal{O}_{X^\circledS}$ generated by $\mathcal{O}_{X_f}$.
We shall write \begin{align} \label{e480} \tau^\circledS_X : X_t \rightarrow X^\circledS \end{align}
for the closed
immersion corresponding to the quotient $\mathcal{O}_{X^\circledS} \twoheadrightarrow \mathcal{O}_{X^\circledS}/\mathcal{N}_{X^\circledS}$. Hence, $X_t$ forms a scheme, and
the composite \begin{align} \label{B01} \gamma_X := \beta_X^\circledS \circ \tau_X^\circledS : X_t \rightarrow X_b \end{align} forms a closed immersion of schemes corresponding to the quotient $\mathcal{O}_{X_b} \twoheadrightarrow \mathcal{O}_{X_b}/\mathcal{O}_{X_f}^2$ ($= \mathcal{O}_{X^\circledS}/\mathcal{N}_{X^\circledS}$) by the nilpotent ideal $\mathcal{O}_{X_f}^2 \subseteq \mathcal{O}_{X_b}$. Any morphism $f^\circledS : Y^\circledS \rightarrow X^\circledS$ induces a morphism $f_t : Y_t \rightarrow X_t$ of schemes satisfying that $f^\circledS \circ \tau^\circledS_Y = \tau_X^\circledS \circ f_t$ and $f_b \circ \gamma_Y = \gamma_X \circ f_t$. In particular, any morphism $Z \rightarrow X^\circledS$ (where $Z$ is a scheme) decomposes as $Z \rightarrow X_t \stackrel{\tau_X^\circledS}{\rightarrow} X^\circledS$ for a unique morphism $Z \rightarrow X_t$ of schemes.
Finally, for each nonnegative integer $n$, we write \begin{align} \label{e467} \mathrm{gr}_{X^\circledS}^n := \mathcal{N}^n_{X^\circledS} /\mathcal{N}_{X^\circledS}^{n+1}, \end{align}
which may be thought of as an $\mathcal{O}_{X_t}$-module.
\subsection{Morphisms of superschemes} \label{S12} \leavevmode\\
We shall consider analogues of flat morphisms and \'{e}tale morphisms for superschemes.
Let $f^\circledS \ (:= (f_b, f^\flat)) : Y^\circledS \rightarrow X^\circledS$ be a morphism of superschemes.
\begin{defi} \label{d302d}\leavevmode\\
\ \ \
We shall say that $f^\circledS$ is {\bf bosonic} if for any scheme $Z$ together with a morphism $Z \rightarrow X^\circledS$, the fiber product
$Y^\circledS \times_{f^\circledS, X^\circledS} Z$ is a scheme.
(Here, we note that the superschemes and morphisms between them form a category, in which the fiber products
exist. See ~\cite{CR}, Corollary 10.3.9.)
\end{defi}
\begin{defi} \label{d312d}\leavevmode\\
\ \ \
We shall say that $f^\circledS$ is {\bf superflat} if for any point $y$ of $Y_b$ the homomorphism $\mathcal{O}_{X^\circledS, f_b (y)} \rightarrow \mathcal{O}_{Y^\circledS, y}$ of local rings induced by $f^\circledS$ is flat.
\end{defi}
\begin{rema} \label{r48} \leavevmode\\
\ \ \
Suppose that $f^\circledS$ is bosonic and superflat.
According to ~\cite{G}, Lemma 2.7 and Proposition 2.1 (cf. Remark \ref{r486}), the following properties hold (although the results of {\it loc.\,cit.} assume that $S_0 = \mathrm{Spec}(\mathbb{C})$, one may prove the same assertions in our general case): \begin{itemize} \item[(i)] The homomorphism $f^\flat$ induces, by restriction, isomorphisms \begin{align} \label{E1} f_b^*(\mathcal{O}_{X_f}) \isom \mathcal{O}_{Y_f} \ \ \text{and} \ \ f_b^*(\mathcal{O}^2_{X_f}) \isom \mathcal{O}^2_{Y_f} \end{align} (hence, we have $ f_b^*(\mathcal{N}_{X^\circledS}) \isom \mathcal{N}_{Y^\circledS}$). In particular, the natural morphisms \begin{align} \label{E3} Y^\circledS \rightarrow Y_b \times_{f_b, X_b, \beta^\circledS_X} X^\circledS \ \ \text{and} \ \ Y_t \rightarrow Y^\circledS \times_{f^\circledS, X^\circledS, \tau^\circledS_X} X_t \end{align}
are isomorphisms. \item[(ii)]
The underlying morphism $f_b : Y_b \rightarrow X_b$ is flat (in the classical sense).
\end{itemize}
\end{rema}
\begin{defi} \label{d2}\leavevmode\\
\ \ \
We shall say that $f^\circledS$ is {\bf super\'{e}tale} if $f^\circledS$ is bosonic and superflat, and the flat morphism $f_b : Y_b \rightarrow X_b$ (cf. Remark \ref{r48} (ii) above) is unramified.
\end{defi}
\begin{prop} \label{p0607} \leavevmode\\
\ \ \ For a superscheme $Z^\circledS$ over $S_0$, we shall denote by $\mathfrak{E} \mathfrak{t}_{/Z^\circledS}$ the category defined as follows: \begin{itemize} \item[$\bullet$] The {\it objects} are super\'{e}tale morphisms $W^\circledS \rightarrow Z^\circledS$ of superschemes to $Z^\circledS$; \item[$\bullet$] The {\it morphisms} from $W_1^\circledS \rightarrow Z^\circledS$ to $W_2^\circledS \rightarrow Z^\circledS$ (where both $W_1^\circledS \rightarrow Z^\circledS$ and $W_2^\circledS \rightarrow Z^\circledS$ are objects of this category) are morphisms $W_1^\circledS \rightarrow W_2^\circledS$ of superschemes over $Z^\circledS$. \end{itemize}
Then,
the functor
\begin{align} \label{FF02}
\mathfrak{E} \mathfrak{t}_{/X^\circledS} \isom \mathfrak{E} \mathfrak{t}_{/X_t}
\end{align} determined by
base-change $Y^\circledS \mapsto Y^\circledS \times_{X^\circledS, \tau_X^\circledS} X_t$ is an equivalence of categories.
In particular, if ${X'}^\circledS$ and ${X''}^\circledS$ are superschemes over $S_0$ such that
$(X'_t)_\mathrm{red} \cong (X''_t)_\mathrm{red}$ (where $(-)_{\mathrm{red}}$ denotes the reduced scheme associated with the scheme $(-)$), then we have $\mathfrak{E} \mathfrak{t}_{/{X'}^\circledS} \cong \mathfrak{E} \mathfrak{t}_{/{X''}^\circledS}$.
\end{prop}
\begin{proof} We shall construct a functor $\mathfrak{E} \mathfrak{t}_{/X_t} \rightarrow \mathfrak{E} \mathfrak{t}_{/X^\circledS}$. Let $Y_0 \rightarrow X_t$ be an object in $\mathfrak{E} \mathfrak{t}_{/ X_t}$ (i.e., $Y_0 \rightarrow X_t$ is \'{e}tale in the classical sense). Since $X_b$ is a nilpotent thickening of $X_t$ (via the closed immersion $\gamma_X$),
$Y_0$ extends
{\it uniquely} to an \'{e}tale scheme $Y_1$ over $X_b$.
The superscheme $Y_1 \times_{X_b} X^\circledS$ (together with the projection to $X^\circledS$) is an object of $\mathfrak{E} \mathfrak{t}_{/X^\circledS}$ whose image under the functor (\ref{FF02}) is isomorphic to $Y_0$. The assignment $Y_0 \mapsto Y_1 \times_{X_b} X^\circledS$ is well-defined and functorial with respect to $Y_0$, and hence, determines a functor $\mathfrak{E} \mathfrak{t}_{/X_t} \rightarrow \mathfrak{E} \mathfrak{t}_{/X^\circledS}$. This functor is verified to be the inverse to the functor (\ref{FF02}).
This completes the proof of Proposition \ref{p0607}.
\end{proof}
\subsection{The category of superschemes} \label{S13} \leavevmode\\
Write \begin{align} \label{FF01} \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0} \ \ (\text{resp.,} \ \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS) \end{align} for the category whose {\it objects} are
schemes (resp., superschemes) and whose {\it morphisms} are morphisms of schemes (resp., morphisms of superschemes). By the natural inclusion $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0} \hookrightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$, we identify $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}$ with a full subcategory of $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$. The fiber products and finite coproducts exist in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$, and the inclusion $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0} \hookrightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$ preserves the fiber products and finite coproducts.
Unless there is a fear of confusion, by a {\bf stack (over $S_0$)}, we mean a stack over the site $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{}_{/S_0}$ with respect to the \'{e}tale pretopology. We shall equip $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS}_{/S_0}$ with the Grothendieck pretopology consisting of coverings $\{ U^\circledS_i \rightarrow X^\circledS \}_{i \in I}$, where each $U^\circledS_i \rightarrow X^\circledS$ is a super\'{e}tale morphism such that (the underlying morphism between schemes of) $\coprod_{i \in I} U^\circledS_i \rightarrow X^\circledS$ is surjective; we shall refer to this pretopology as the {\bf super\'{e}tale pretopology}. One verifies that the property on a morphism in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$ of being bosonic (resp., superflat; resp., super\'{e}tale) is closed under composition and base-change, and satisfies descent for super\'{e}tale coverings.
\subsection{Split superschemes and affine superschemes} \label{S14} \leavevmode\\
Let $\underline{X}$ be a scheme and
$\mathcal{E}$ a coherent $\mathcal{O}_{\underline{X}}$-module. Consider the exterior algebra $\bigwedge^\bullet_{\mathcal{O}_{\underline{X}}} \mathcal{E}$ ($:= \bigoplus_{i \geq 0} \bigwedge^i_{\mathcal{O}_{\underline{X}}} \mathcal{E}$) associated with $\mathcal{E}$ over $\mathcal{O}_{\underline{X}}$. The $\mathcal{O}_{\underline{X}}$-subalgebra $\bigwedge^{\mathrm{even}}_{\mathcal{O}_{\underline{X}}} \mathcal{E}$ ($:= \bigoplus_{i : \mathrm{even}} \bigwedge^i_{\mathcal{O}_{\underline{X}}} \mathcal{E}$) defines (since it is commutative) a relative affine space $\mathcal{S} pec (\bigwedge^{\mathrm{even}}_{\mathcal{O}_{\underline{X}}} \mathcal{E})$ over $\underline{X}$.
Also, $\bigwedge^\bullet_{\mathcal{O}_{\underline{X}}} \mathcal{E}$ may be thought of as a coherent $\mathcal{O}_{\mathcal{S} pec (\bigwedge^{\mathrm{even}}_{\mathcal{O}_{\underline{X}}} \mathcal{E})}$-module. Thus, we obtain a superscheme \begin{align} \label{E12} \langle \underline{X}, \mathcal{E} \rangle^\circledS := (\mathcal{S} pec ({\bigwedge}_{\mathcal{O}_{\underline{X}}}^{\mathrm{even}} \mathcal{E}), {\bigwedge}_{\mathcal{O}_{\underline{X}}}^\bullet \mathcal{E}). \end{align}
The inclusion $\mathcal{O}_{\underline{X}} \ (= \bigwedge^0_{\mathcal{O}_{\underline{X}}} \mathcal{E}) \hookrightarrow \bigwedge^\bullet_{\mathcal{O}_{\underline{X}}} \mathcal{E}$ defines a morphism
\begin{align} \label{E14} \langle \beta \rangle^\circledS_{\underline{X}, \mathcal{E}} : \langle \underline{X}, \mathcal{E} \rangle^\circledS \rightarrow \underline{X} \end{align} of superschemes.
\begin{defi} \label{d0d99ff} \leavevmode\\ \ \ \ We shall say that a superscheme $Z^\circledS$ is {\bf split} if $Z^\circledS \cong \langle \underline{X}, \mathcal{E} \rangle^\circledS$ for some scheme $\underline{X}$ and a coherent $\mathcal{O}_{\underline{X}}$-module $\mathcal{E}$.
\end{defi}
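For a split superscheme $X^\circledS := \langle \underline{X}, \mathcal{E} \rangle^\circledS$, the constructions of \S\,1.1 may be made explicit (a routine verification, supplementing the text): the fermionic part of the structure sheaf is $\bigwedge^{\mathrm{odd}}_{\mathcal{O}_{\underline{X}}} \mathcal{E}$ ($:= \bigoplus_{i : \mathrm{odd}} \bigwedge^i_{\mathcal{O}_{\underline{X}}} \mathcal{E}$), the superideal $\mathcal{N}_{X^\circledS}$ coincides with $\bigoplus_{i \geq 1} \bigwedge^i_{\mathcal{O}_{\underline{X}}} \mathcal{E}$, and hence \begin{align} X_t \cong \underline{X} \ \ \ \text{and} \ \ \ \mathrm{gr}^n_{X^\circledS} \cong {\bigwedge}^n_{\mathcal{O}_{\underline{X}}} \mathcal{E} \end{align} for each nonnegative integer $n$.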
Next, we shall recall the notion of an affine superscheme.
\begin{defi} \label{d0d99} \leavevmode\\ \ \ \ If $R := R_b \oplus R_f$ is a superring, then we shall write \begin{align} \mathrm{Spec}(R)^\circledS \end{align}
for the superspectrum of $R$ (i.e., ``$\underline{\mathrm{Spec}} R$'' in the sense of ~\cite{CR}, Definition 10.1.1). We shall say that a superscheme $X^\circledS$ is {\bf affine} if it is isomorphic to $\mathrm{Spec}(R)^\circledS$ for some superring $R$ (in particular, both $X_b$ and $X_t$ are affine schemes).
\end{defi}
If both $m$ and $n$ are nonnegative integers and $S^\circledS$ is a superscheme, then we shall write \begin{align}
\mathbb{A}_{S^\circledS}^{m |n} & := S^\circledS \times _{S_0} \mathrm{Spec}(R_0 [t_1, \cdots, t_m] \otimes_{R_0} {\bigwedge}_{R_0} (R_0 \psi_1 \oplus \cdots \oplus R_0 \psi_n))^\circledS \\
\big(& \ = S^\circledS \times_{S_0} \langle \mathbb{A}_{S_0}^{m|0}, \bigoplus_{l=1}^n \mathcal{O}_{\mathbb{A}_{S_0}^{m|0}} \psi_l\rangle^\circledS \big), \notag \end{align} where $t_1, \cdots, t_m$ are ordinary indeterminates and $\psi_1, \cdots, \psi_n$ are fermionic (i.e., anticommuting) indeterminates.
The following assertion is immediately verified from the definition of $\mathbb{A}^{m|n}_{(-)}$.
\begin{prop} \label{P0} \leavevmode\\
\ \ \ Let $f^\circledS : Y^\circledS \rightarrow X^\circledS$ be a morphism of superschemes.
Then, the functorial (with respect to $Y^\circledS$) map of sets \begin{align} \label{FF04}
\{ \widetilde{f}^\circledS \in \mathrm{Hom}_{\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS} (Y^\circledS, \mathbb{A}_{X^\circledS}^{m|n}) \ | \ \mathrm{pr} \circ \widetilde{f}^\circledS = f^\circledS \} & \rightarrow \Gamma (Y_b, \mathcal{O}_{Y_b})^{\oplus m} \oplus \Gamma (Y_b, \mathcal{O}_{Y_f})^{\oplus n} \\ \widetilde{f}^\circledS \hspace{10mm} & \mapsto ((\widetilde{f}^\flat (t_l))_{l=1}^m, (\widetilde{f}^\flat (\psi_{l'}))_{l'=1}^n) \notag \end{align}
is bijective, where $\mathrm{pr}$ denotes the natural projection $\mathbb{A}_{X^\circledS}^{m|n} \rightarrow X^\circledS$.
\end{prop}
\subsection{Supersmooth morphisms} \label{S15} \leavevmode\\
Let $m$ and $n$ be nonnegative integers.
\begin{defi} \label{d0819}
\leavevmode\\ \ \ \ Let $f^\circledS : Y^\circledS \rightarrow X^\circledS$ be a morphism of superschemes.
We shall say that $f^\circledS$ is {\bf supersmooth of relative superdimension $m|n$}
if there exists, super\'{e}tale locally on $Y^\circledS$, a super\'{e}tale morphism $Y^\circledS \rightarrow X^\circledS \times_{S_0} \mathbb{A}_{S_0}^{m|n}$ over $X^\circledS$.
\end{defi}
\begin{rema} \label{r488} \leavevmode\\
\ \ \ Let $n$ be a nonnegative integer. A morphism $f^\circledS :Y^\circledS \rightarrow X^\circledS$ of superschemes
is supersmooth of relative superdimension $n|0$
if and only if $f^\circledS$ is superflat and $f_b : Y_b \rightarrow X_b$ is, in the classical sense, smooth of relative dimension $n$ (i.e., all nonempty fibers are equidimensional of dimension $n$). In particular, $f^\circledS$ is supersmooth of relative superdimension $0|0$ if and only if it is super\'{e}tale.
\end{rema}
\begin{prop} \label{p0201} \leavevmode\\
\ \ \
Let $X^\circledS$ be
a superscheme.
Then, the following two conditions (a) and (b) are equivalent:
\begin{itemize}
\item[(a)]
$X^\circledS$ is supersmooth over $S_0$ of relative superdimension $m|n$;
\item[(b)] $X_t$ is smooth over $S_0$ of relative dimension $m$, the $\mathcal{O}_{X_t}$-module $\mathrm{gr}_{X^\circledS}^1$ is locally free of rank $n$, and there exists, super\'{e}tale locally on $X^\circledS$, an isomorphism $X^\circledS \isom \langle X_t, \mathrm{gr}_{X^\circledS}^1 \rangle^\circledS$ which makes the diagram \begin{align} \xymatrix{
& X_t \ar[ld]_{\tau^\circledS_X} \ar[rd]^{\tau^\circledS_{ \langle X_t, \mathrm{gr}_{X^\circledS}^1 \rangle^\circledS}}&
\\
X^\circledS \ar[rr]^{\sim} & & \langle X_t, \mathrm{gr}_{X^\circledS}^1 \rangle^\circledS } \end{align} commute.
\end{itemize}
In particular, $X^\circledS$ is split and supersmooth over $S_0$ of relative superdimension $m|n$ if and only if $X^\circledS \cong \langle \underline{X}, \mathcal{E}\rangle^\circledS$ for some smooth scheme $\underline{X}$ over $S_0$ of relative dimension $m$ and some vector bundle $\mathcal{E}$ on $\underline{X}$ of rank $n$.
\end{prop}
\begin{proof} Since the latter assertion follows directly from the former assertion, it suffices to prove only the former assertion, i.e., the equivalence (a) $\Leftrightarrow$ (b).
The implication (b) $\Rightarrow$ (a) is clear. We shall prove (a) $\Rightarrow$ (b). After possibly replacing $X^\circledS$ with its super\'{e}tale covering, we may assume, without loss of generality, that $X_b$ is affine and there exists a (globally defined) super\'{e}tale morphism $\pi^\circledS \ (:= (\pi_b, \pi^\flat)) : X^\circledS \rightarrow \mathbb{A}_{S_0}^{m|n}$ over $S_0$. Then, $\pi^\flat$ restricts to isomorphisms
\begin{align} \label{E5}
\pi_b^*(\mathcal{O}_{(\mathbb{A}_{S_0}^{m|n})_f}^i) \cong \mathcal{O}_{X_f}^i \hspace{3mm} (i =1,2)
\end{align}
(cf. Remark \ref{r48}).
In particular,
the commutative square diagram \begin{align} \xymatrix{ X_t \ar[r]^{\gamma_X} \ar[d]_{ \pi_t} & X_b \ar[d]^{\pi_b}\\
(\mathbb{A}_{S_0}^{m|n})_t \ar[r]_{\gamma_{\mathbb{A}_{S_0}^{m|n}}} & (\mathbb{A}_{S_0}^{m|n})_b } \end{align} is cartesian.
It follows that $X_t$ is \'{e}tale over $(\mathbb{A}_{S_0}^{m|n})_t$ ($=\mathbb{A}_{S_0}^{m|0}$), and hence, smooth over $S_0$ of relative dimension $m$. Since $X_b$ is affine, there exists a morphism $\iota_X : X_b \rightarrow X_t$ over $S_0$ satisfying that $\iota_X \circ \gamma_X = \mathrm{id}_{X_t}$. The isomorphisms (\ref{E5}) yield an isomorphism \begin{align} \label{e097}
\mathrm{gr}^1_{\pi^\flat} : (\pi_t^* (\mathrm{gr}^1_{\mathbb{A}_{S_0}^{m|n}}) =) \ \pi_b^* (\mathrm{gr}^1_{\mathbb{A}_{S_0}^{m|n}}) \isom \mathrm{gr}^1_{X^\circledS}. \end{align} In particular, we have $\mathrm{gr}^1_{X^\circledS} \cong \mathcal{O}^{\oplus n}_{X_t}$. By Proposition \ref{P0}, one may find
a morphism
\begin{align}
\widetilde{\iota}_X^\circledS : X^\circledS \rightarrow \langle X_t, \mathrm{gr}_{X^\circledS}^1 \rangle^\circledS
\end{align}
whose composite with the projection $\langle \beta \rangle_{X_t, \mathrm{gr}_{X^\circledS}^1}^\circledS : \langle X_t, \mathrm{gr}_{X^\circledS}^1 \rangle^\circledS \rightarrow X_t$ coincides with $\iota_X \circ \beta_X^\circledS : X^\circledS \rightarrow X_t$ and which makes
the diagram \begin{align} \xymatrix{X^\circledS \ar[rr]^{\widetilde{\iota}_X^\circledS} \ar[rd]_{\pi^\circledS}& & \langle X_t, \mathrm{gr}_{X^\circledS}^1 \rangle^\circledS \ar[dl]^{{\pi'}^\circledS} \\
& \mathbb{A}^{m|n}_{S_0} & } \end{align} commute, where
${\pi'}^\circledS$ denotes the morphism determined (by means of the canonical isomorphism $\mathbb{A}^{m|n}_{S_0} \isom \langle \mathbb{A}^{m|0}_{S_0}, \pi_t^* (\mathrm{gr}^1_{\mathbb{A}_{S_0}^{m|n}}) \rangle^\circledS$) by both $\pi_t : X_t \rightarrow \mathbb{A}^{m|0}_{S_0}$ and the morphism $\mathrm{gr}^1_{\pi^\flat}$.
Observe that both $\pi^\circledS$ and ${\pi'}^\circledS$ are superflat (since $\langle X_t, \mathrm{gr}_{X^\circledS}^1 \rangle^\circledS \cong \mathbb{A}_{S_0}^{m|n} \times_{\mathbb{A}_{S_0}^{m|0}, \pi_t} X_t$ and $\pi_t$ is \'{e}tale).
On the other hand,
$\widetilde{\iota}_X^\circledS$ restricts, via base-change by $\tau^\circledS_{\mathbb{A}_{S_0}^{m|n}} : (\mathbb{A}_{S_0}^{m|n})_t \rightarrow \mathbb{A}_{S_0}^{m|n}$, to
the identity morphism of $X_t$. This implies that $\widetilde{\iota}_X^\circledS$ is an isomorphism, and hence completes the proof of Proposition \ref{p0201}.
\end{proof}
\subsection{Superstacks} \label{S16} \leavevmode\\
\begin{defi}\label{D029}\leavevmode\\
\begin{itemize} \item[(i)] A {\bf superstack (over $S_0$)} is a category fibered in groupoids $Z^\circledS \rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS}_{/S_0}$
over $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS}_{/S_0}$ which is a stack with respect to the super\'{e}tale pretopology.
\item[(ii)] Let $Z^\circledS_1 \rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS}_{/S_0}$ and $Z^\circledS_2 \rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS}_{/S_0}$ be superstacks. A {\bf morphism of superstacks} from $Z^\circledS_1$ to $Z^\circledS_2$ is a functor $Z^\circledS_1 \rightarrow Z^\circledS_2$ over $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS}_{/S_0}$.
\end{itemize} \end{defi}
One verifies immediately that superstacks and morphisms of superstacks form a $2$-category, in which $2$-fiber products and finite coproducts exist. The natural inclusion from $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$ into this category preserves these fiber products and coproducts.
\begin{rema} \label{r4a8} \leavevmode\\
\ \ \
For a superscheme $X^\circledS$, the set-valued contravariant functor $\mathrm{Hom}_{\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}}(-, X^\circledS)$ on $\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}$ is verified (from a standard argument in descent theory) to be a sheaf on $\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}$ with respect to the super\'{e}tale pretopology (cf. ~\cite{G}, Lemma 2.8).
We always identify any superscheme $X^\circledS$ with the superstack corresponding to the sheaf $\mathrm{Hom}_{\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}}(-, X^\circledS)$.
\end{rema}
\begin{rema} \label{r4b8} \leavevmode\\
\ \ \
If $\underline{Z}$ is a stack over $S_0$, then, in a natural manner, one may consider it as a superstack. More precisely, let us define the category $\underline{Z}^{\circledS \text{-}\mathrm{triv}} \rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$ fibered in groupoids over $\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}$ as follows: \begin{itemize} \item[$\bullet$] The {\it objects} in $\underline{Z}^{\circledS \text{-}\mathrm{triv}}$ are pairs $(X^\circledS, x)$ consisting of a superscheme $X^\circledS$ and a morphism $x : X_b \rightarrow \underline{Z}$ of stacks; \item[$\bullet$] The {\it morphisms} from an object $(Y^\circledS, y)$ to an object $(X^\circledS, x)$ are morphisms $f^\circledS : Y^\circledS \rightarrow X^\circledS$ of superschemes satisfying that $x \circ f_b \cong y$; \item[$\bullet$] The {\it functor} $\underline{Z}^{\circledS \text{-}\mathrm{triv}} \rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}$ is given by assigning $(X^\circledS, x) \mapsto X^\circledS$ (for any object $(X^\circledS, x)$ in $\underline{Z}^{\circledS \text{-}\mathrm{triv}}$) and $f^\circledS \mapsto f^\circledS$ (for any morphism $f^\circledS$ in $\underline{Z}^{\circledS \text{-}\mathrm{triv}}$). \end{itemize} Then, $\underline{Z}^{\circledS \text{-}\mathrm{triv}}$ forms a superstack. The assignment $\underline{Z} \mapsto \underline{Z}^{\circledS \text{-}\mathrm{triv}}$ determines a fully faithful functor from the category of stacks over $S_0$ to the category of superstacks over $S_0$. In this manner, we always consider any stack as a superstack.
\end{rema}
\begin{rema} \label{r4c8} \leavevmode\\
\ \ \
Let $Z^\circledS \rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}$ be a superstack. The restriction of this superstack to
the subcategory $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0} \subseteq \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$ forms
a stack \begin{align} \label{E6} Z_t \rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0} \end{align}
over $S_0$.
If, moreover, $Z^\circledS$ may be represented by a superscheme $X^\circledS$ (i.e., $Z^\circledS \cong \mathrm{Hom}_{\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}}(-, X^\circledS)$), then $Z_t$ (in the sense of (\ref{E6})) may be represented by $X_t$ (in the sense of (\ref{e480})).
Moreover, if $W^\circledS \rightarrow Z^\circledS$ is a morphism of superstacks, then it induces a morphism $W_t \rightarrow Z_t$ of stacks.
\end{rema}
\begin{defi} \label{d089}
\leavevmode\\ \ \ \
Let $f^\circledS : Y^\circledS \rightarrow X^\circledS$ be a morphism of superstacks. \begin{itemize} \item[(i)]
We shall say that $f^\circledS$ is {\bf representable} if, for any morphism ${X'}^\circledS \rightarrow X^\circledS$ of superstacks (where ${X'}^\circledS$ is a superscheme), the fiber product $Y^\circledS \times_{f^\circledS, X^\circledS} {X'}^\circledS$ is a superscheme. \item[(ii)] We shall say that $f^\circledS$ is {\bf superproper} if the underlying morphism $f_t : Y_t \rightarrow X_t$ of stacks is proper in the classical sense.
\end{itemize}
\end{defi}
\subsection{Deligne-Mumford superstacks} \label{S17} \leavevmode\\
\begin{defi} \label{d0299} \leavevmode\\ \ \ \
We shall say that a superstack $Z^\circledS$ is {\bf Deligne-Mumford}
if it satisfies the following two conditions:
\begin{itemize}
\item[(i)]
The diagonal morphism $Z^{\circledS} \rightarrow Z^{\circledS} \times_{S_0} Z^{\circledS}$ is representable and the associated (representable) morphism $Z_t \rightarrow Z_t \times_{S_0} Z_t$ of stacks is separated and quasi-compact in the classical sense;
\item[(ii)]
There exists a superscheme $U^\circledS$ over $S_0$ together with a representable
morphism $U^\circledS \rightarrow Z^\circledS$ of superstacks over $S_0$ such that
for each superscheme $V^\circledS$ over $Z^\circledS$, the morphism
$U^\circledS \times_{Z^\circledS} V^\circledS \rightarrow V^\circledS$ of superschemes (where $U^\circledS \times_{Z^\circledS} V^\circledS$ is necessarily a superscheme thanks to condition (i)) is surjective and super\'{e}tale.
\end{itemize}
We shall refer to such
a superscheme $U^\circledS$ (together with $U^\circledS \rightarrow Z^\circledS$)
as a {\bf complete versal family for $Z^\circledS$}.
\end{defi}
\begin{defi} \label{d08gg9}
\leavevmode\\ \ \ \ Let $f^\circledS : Y^\circledS \rightarrow X^\circledS$ be a morphism of Deligne-Mumford superstacks over $S_0$, and let $m$, $n$ be nonnegative integers.
We shall say that $f^\circledS$ is {\bf super\'{e}tale} (resp., {\bf supersmooth of relative superdimension $m|n$}) if for any $2$-commutative diagram
\begin{align}
\xymatrix{
V^\circledS \ar[r] \ar[rd]_{h^\circledS} & Y'^{\circledS} \ar[r] \ar[d] \ar@{}[rd]|{\Box} & Y^\circledS \ar[d]^{f^\circledS}
\\
& U^\circledS \ar[r] & X^\circledS,
}
\end{align}
where $U^\circledS$ and $V^\circledS$ are complete versal families for $X^\circledS$ and $Y'^{\circledS} := Y^\circledS \times_{X^\circledS} U^\circledS$ respectively, the morphism $h^\circledS : V^\circledS \rightarrow U^\circledS$ of superschemes is super\'{e}tale in the sense of Definition \ref{d2} (resp., supersmooth of relative superdimension $m |n$ in the sense of Definition \ref{d0819}).
\end{defi}
\begin{rema} \label{r7gg8} \leavevmode\\
\ \ \ Let $Z^\circledS$ be a Deligne-Mumford superstack.
Then, the structure sheaf $\mathcal{O}_{Z^\circledS}$ on $Z^\circledS$ is defined to be the super\'{e}tale sheaf on $Z^\circledS$ determined by $\Gamma (T^\circledS, \mathcal{O}_{Z^\circledS}) := \Gamma (T_b, \mathcal{O}_{T^\circledS})$
for any superscheme $T^\circledS$ together with a super\'{e}tale morphism $T^\circledS \rightarrow Z^\circledS$.
Moreover, one may define the notion of an $\mathcal{O}_{Z^\circledS}$-supermodule, as usual (cf. the discussion following Remark \ref{r486}).
\end{rema}
\subsection{Groupoids in the category of superschemes} \label{S18} \leavevmode\\
Now, we recall that if we are given a {\it groupoid} $\Gamma$, then it may be described as a certain collection of data $(U_0, R_0, s_0, t_0, c_0)$, where $U_0$ and $R_0$ denote the sets of {\it objects} and {\it arrows} of $\Gamma$ respectively, $s_0$ and $t_0$ denote the {\it source} and {\it target} maps $R_0 \rightarrow U_0$ respectively, and $c_0$ denotes the {\it composition} map $R_0 \times_{t_0, U_0, s_0} R_0 \rightarrow R_0$.
\begin{defi}\label{D019}\leavevmode\\ \ \ \ A {\bf groupoid in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$} is a collection of data \begin{align} R^\circledS \stackrel{}{\rightrightarrows} U^\circledS :=(U^\circledS, R^\circledS, s^\circledS, t^\circledS, c^\circledS), \end{align} where \begin{itemize} \item[$\bullet$] $U^\circledS$ and $R^\circledS$ are superschemes;
\item[$\bullet$]
$s^\circledS, t^\circledS : R^\circledS \rightarrow U^\circledS$ and $c^\circledS : R^\circledS \times_{s^\circledS, U^\circledS, t^\circledS} R^\circledS \rightarrow R^\circledS$ are morphisms of superschemes
\end{itemize}
such that for any $T^\circledS \in \mathrm{Ob} (\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS}_{/S_0})$ the quintuple \begin{align} (R^\circledS \rightrightarrows U^\circledS) (T^\circledS) := (U^\circledS(T^\circledS), R^\circledS(T^\circledS), s^\circledS(T^\circledS), t^\circledS (T^\circledS), c^\circledS(T^\circledS)) \end{align}
forms a groupoid (in the above sense) which is functorial with respect to $T^\circledS$. In a similar vein, one may obtain the definition of a {\bf groupoid in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}$}. \end{defi}
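To illustrate the definition, here is a sketch (not used elsewhere in this paper, with $G^\circledS$ a hypothetical group object in $\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}$ acting on the right on a superscheme $U^\circledS$ via $a^\circledS : U^\circledS \times_{S_0} G^\circledS \rightarrow U^\circledS$): the familiar action groupoid construction carries over, namely

```latex
\begin{align*}
R^\circledS := U^\circledS \times_{S_0} G^\circledS, \qquad
s^\circledS := \mathrm{pr}_{U^\circledS}, \qquad
t^\circledS := a^\circledS,
\end{align*}
% Composition is induced by the multiplication of G^{\circledS}: on
% T^{\circledS}-points, c^{\circledS}((u, g), (u', g')) := (u, g g')
% whenever u' = a^{\circledS}(u, g).
```

For each $T^\circledS \in \mathrm{Ob} (\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0})$, the resulting quintuple of $T^\circledS$-points is then the usual groupoid associated to a group action.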
As in the usual case of stacks, one may associate, to each groupoid $R^\circledS \rightrightarrows U^\circledS$
in $\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0}$, a superstack
\begin{align}
[R^\circledS \rightrightarrows U^\circledS]
\end{align}
over $S_0$.
More precisely, $[R^\circledS \rightrightarrows U^\circledS]$ is the {\it stackification}
(with respect to the super\'{e}tale pretopology) of the category fibered in groupoids $ [R^\circledS \rightrightarrows U^\circledS]'$ determined by $R^\circledS \rightrightarrows U^\circledS$ (i.e., the fiber of $ [R^\circledS \rightrightarrows U^\circledS]'$ over $T^\circledS \in \mathrm{Ob} (\mathfrak{S} \mathfrak{c} \mathfrak{h}^\circledS_{/S_0})$ is the groupoid $(R^\circledS \rightrightarrows U^\circledS) (T^\circledS)$ defined above). Denote by \begin{align} \label{e4056} \pi_{R^\circledS \rightrightarrows U^\circledS}^\circledS : U^\circledS \rightarrow [R^\circledS \rightrightarrows U^\circledS] \end{align}
the natural projection.
\begin{rema} \label{r78} \leavevmode\\
\ \ \ Let $Z^\circledS$ be a Deligne-Mumford superstack.
\begin{itemize}
\item[(i)] One verifies that there exists an isomorphism $z^\circledS : [R^\circledS \rightrightarrows U^\circledS] \isom Z^\circledS$ of superstacks, where $R^\circledS \rightrightarrows U^\circledS :=(U^\circledS, R^\circledS, s^\circledS, t^\circledS, c^\circledS)$ is a groupoid in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$, such that the three morphisms $s^\circledS$, $t^\circledS$, and $z^\circledS \circ \pi_{R^\circledS \rightrightarrows U^\circledS}^\circledS$ are super\'{e}tale. Indeed, if $U^\circledS$ is
an arbitrary complete versal family
for our $Z^\circledS$, then one may obtain, by starting with the data $U^\circledS$, the desired groupoid $R^\circledS \rightrightarrows U^\circledS$ in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$
as follows: \begin{itemize}
\item[$\bullet$]
$R^\circledS := U^\circledS \times_{Z^\circledS} U^\circledS$;
\item[$\bullet$] $s^\circledS$ and $t^\circledS$ are the second and first projections $U^\circledS \times_{Z^\circledS} U^\circledS \ (=R^\circledS) \rightarrow U^\circledS$ respectively;
\item[$\bullet$] $c^\circledS$ is the projection \begin{align} & \ (U^\circledS \times_{Z^\circledS} U^\circledS) \times_{s^\circledS, U^\circledS, t^\circledS} (U^\circledS \times_{Z^\circledS} U^\circledS) \ (= R^\circledS \times_{s^\circledS, U^\circledS, t^\circledS} R^\circledS)\\
\rightarrow & \ U^\circledS \times_{Z^\circledS} U^\circledS \ (= R^\circledS) \notag \end{align}
into the $(1, 4)$-th factor. \end{itemize}
We shall refer to such a groupoid $R^\circledS \rightrightarrows U^\circledS$ (together with such an isomorphism $z^\circledS$) as a {\bf representation} of $Z^\circledS$.
\item[(ii)]
Let $R^\circledS \rightrightarrows U^\circledS$ be as in (i) and denote by
$R_t \rightrightarrows U_t$ the groupoid in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}$ defined to be $R_t \rightrightarrows U_t := (U_t, R_t, s_t, t_t, c_t)$.
Then, the isomorphism $z^\circledS$ induces an isomorphism
\begin{align} \label{E9} z_t : [R_t \rightrightarrows U_t] \isom Z_t
\end{align} of stacks.
\item[(iii)] Let $R^\circledS \rightrightarrows U^\circledS$ be as in (i) again. Then, $R_b \rightrightarrows U_b := (U_b, R_b, s_b, t_b, c_b)$ forms a groupoid in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}$ and we obtain a stack \begin{align} \label{E17} Z_b := [R_b \rightrightarrows U_b] \end{align} together with a morphism $\beta^\circledS_{Z} : Z^\circledS \rightarrow Z_b$. For any super\'{e}tale morphism $T^\circledS \rightarrow Z^\circledS$ (where $T^\circledS$ is a superscheme), there exists an \'{e}tale morphism $T_b \rightarrow Z_b$ which makes the square diagram \begin{align} \xymatrix{ T^\circledS \ar[r] \ar[d]_{\beta^\circledS_T}& Z^\circledS \ar[d]^{\beta^\circledS_Z} \\ T_b \ar[r] & Z_b } \end{align} commute; moreover, this square is cartesian. In particular, the structure sheaf $\mathcal{O}_{Z_b}$ of $Z_b$ may be identified, via $\beta^\circledS_Z$, with the bosonic part of $\mathcal{O}_{Z^\circledS}$. If, moreover, $Z^\circledS$ may be represented by a superscheme $X^\circledS := (X_b, \mathcal{O}_{X^\circledS})$, then $Z_b$ and $\beta_Z^\circledS$ (in the sense of (\ref{E17})) are isomorphic to $X_b$ and $\beta^\circledS_X$ (in the sense of (\ref{e22})) respectively. \end{itemize}
\end{rema}
\begin{prop} \label{p0207} \leavevmode\\
\ \ \
Let $Z^\circledS$ be a superstack and let $m$, $n$ be nonnegative integers.
Then, the following three conditions (a), (b), and (c) are equivalent:
\begin{itemize}
\item[(a)]
$Z^\circledS$ is a supersmooth Deligne-Mumford superstack over $S_0$ of relative superdimension $m|n$;
\item[(b)]
$Z^\circledS$ is a Deligne-Mumford superstack for which there exists a complete versal family of the form $\langle \underline{U}, \mathcal{E}_{\underline{U}} \rangle^\circledS$, where $\underline{U}$ denotes a smooth locally noetherian scheme over $S_0$ of relative dimension $m$ and $\mathcal{E}_{\underline{U}}$ denotes a vector bundle on $\underline{U}$ of rank $n$;
\item[(c)]
$Z^\circledS$ admits a representation $R^\circledS \rightrightarrows U^\circledS := (U^\circledS, R^\circledS, s^\circledS, t^\circledS, c^\circledS)$ satisfying the following properties: \begin{itemize}
\item[(c-1)] $U^\circledS = \langle \underline{U}, \mathcal{E}_{\underline{U}} \rangle^\circledS$, where $\underline{U}$ is a smooth scheme over $S_0$ of relative dimension $m$ and $\mathcal{E}_{\underline{U}}$ is a rank $n$ vector bundle on $\underline{U}$;
\item[(c-2)] Both $s^\circledS$ and $t^\circledS$ are super\'{e}tale and the morphism $(s_t, t_t) : R_t \rightarrow \underline{U} \times_{S_0} \underline{U}$ ($= U_t \times_{S_0} U_t$) is separated and quasi-compact.
\end{itemize}
\end{itemize}
\end{prop}
\begin{proof}
The equivalence (a) $\Leftrightarrow$ (c) follows immediately from Proposition \ref{p0201} and the definition of a Deligne-Mumford stack. The implication (b) $\Rightarrow$ (c) is clear. Let us consider (c) $\Rightarrow$ (b). First, we prove that the diagonal morphism $\varDelta_Z^\circledS : Z^\circledS \rightarrow Z^\circledS \times_{S_0} Z^\circledS$ is representable. Let $V^\circledS$ be an object in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$, and let $x^\circledS, y^\circledS : V^\circledS \rightarrow Z^\circledS$ be morphisms in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$. To prove that $Z^\circledS \times_{\varDelta_Z^\circledS, Z^\circledS \times_{S_0} Z^\circledS, (x^\circledS, y^\circledS)} V^\circledS$ is in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$, one may replace (thanks to the descent property in the super\'{e}tale pretopology) $V^\circledS$ with a super\'{e}tale covering of it. Hence, we may suppose, without loss of generality, that both $x^\circledS$ and $y^\circledS$ lift to morphisms $\widetilde{x}^\circledS, \widetilde{y}^\circledS : V^\circledS \rightarrow U^\circledS$. Then, we have \begin{align} Z^\circledS \times_{\varDelta_Z^\circledS, Z^\circledS \times_{S_0} Z^\circledS, (x^\circledS, y^\circledS)} V^\circledS
\isom R^\circledS \times_{(s^\circledS, t^\circledS), U^\circledS \times_{S_0} U^\circledS, (\widetilde{x}^\circledS, \widetilde{y}^\circledS)} V^\circledS, \end{align} where the right-hand side is evidently an object in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$. Thus, $\varDelta_Z^\circledS$ is representable. Moreover, since both $s^\circledS$ and $t^\circledS$ are super\'{e}tale, this representability immediately implies that the projection $\pi_{R^\circledS \rightrightarrows U^\circledS}^\circledS$ is representable, surjective, and super\'{e}tale. Finally, by means of the isomorphism (\ref{E9}), the latter condition of (c-2) implies that the diagonal morphism $Z_t \rightarrow Z_t \times_{S_0} Z_t$ is separated and quasi-compact. This completes the proof of the implication (c) $\Rightarrow$ (b), and consequently, the proof of Proposition \ref{p0207}.
\end{proof}
\section{Logarithmic structures on superschemes}
In this section, we shall briefly give a general formulation of log superschemes (or, more generally, log superstacks). The notion of a logarithmic structure on a superscheme, as well as on a superstack (cf. Definition \ref{d2a} (i)),
is a supersymmetric generalization of the classical notion of logarithmic structure in the sense of J. M. Fontaine and L. Illusie.
(Basic references for the notion of {\it logarithmic structure} on a scheme are, e.g., ~\cite{KATO} and ~\cite{ILL}.) One of the most important concepts in log supergeometry is log supersmoothness (cf. Definition \ref{D0519} (ii)).
At the end of this section, we show (cf. Proposition \ref{p0404} (ii) and Corollary \ref{c0404} (i)-(iii)) how log supersmooth deformations of a log superstack or a morphism of log superstacks are controlled by the sheaf of logarithmic superderivations (cf. Definition \ref{D0192} for the definition of a logarithmic superderivation).
\subsection{Logarithmic structures} \label{S21} \leavevmode\\
\begin{defi} \label{d2a}\leavevmode\\
\ \ \
\begin{itemize} \item[(i)]
\ \ \ Let $X^\circledS := (X_b, \mathcal{O}_{X^\circledS})$ be a superscheme (resp., a superstack). A {\bf logarithmic structure} (or {\bf log structure} for short) on $X^\circledS$ is a logarithmic structure
$\alpha_{X_b} : \mathcal{M}_{X_b} \rightarrow \mathcal{O}_{X_b}$ on $X_b$
(where $\mathcal{M}_{X_b}$ denotes an \'{e}tale sheaf of commutative monoids on $X_b$).
A {\bf log superscheme} (resp., {\bf log superstack}) is a triple
\begin{equation}
Y^{\circledS\mathrm{log}} := (Y_b, \mathcal{O}_{Y^\circledS}, \mathcal{M}_{Y_b} \stackrel{ \alpha_{Y_b}}{\rightarrow} \mathcal{O}_{Y_b})
\end{equation} consisting of a superscheme (resp., a superstack) $Y^\circledS := (Y_b, \mathcal{O}_{Y^\circledS})$ and a log structure $\alpha_{Y_b}$ on $Y_b$ (hence, of $Y^\circledS$). We shall refer to $Y_b^\mathrm{log} := (Y_b, \alpha_{Y_b})$
as the {\bf underlying log scheme}
(resp., {\bf underlying log stack}) of $Y^{\circledS \mathrm{log}}$
and refer to $Y^\circledS$ as the
{\bf underlying superscheme} (resp., {\bf underlying superstack}) of $Y^{\circledS \mathrm{log}}$. Denote by $\beta_Y^{\circledS \mathrm{log}} : Y^{\circledS \mathrm{log}} \rightarrow Y_b^\mathrm{log}$ the morphism of log superschemes extending $\beta_Y^\circledS$.
\item[(ii)]
\ \ \
Let $X^{\circledS \mathrm{log}} := (X_b, \mathcal{O}_{X^\circledS}, \alpha_{X_b})$ and $Y^{\circledS \mathrm{log}} := (Y_b, \mathcal{O}_{Y^\circledS}, \alpha_{Y_b})$ be two log superschemes (resp., log superstacks). A {\bf morphism of log superschemes} (resp., {\bf morphism of log superstacks}) from $Y^{\circledS \mathrm{log}}$ to $X^{\circledS \mathrm{log}}$ is a triple \begin{align} f^{\circledS \mathrm{log}} := (f_b : Y_b \rightarrow X_b, f^\flat : f_b^* (\mathcal{O}_{X^\circledS}) \rightarrow \mathcal{O}_{Y^\circledS}, f^\sharp_b : f_b^{-1}(\mathcal{M}_{X_b}) \stackrel{}{\rightarrow} \mathcal{M}_{Y_b}), \end{align}
where $f^\circledS := (f_b, f^\flat)$ forms a morphism $Y^\circledS \rightarrow X^\circledS$ between the underlying superschemes (resp., underlying superstacks) and $f^\mathrm{log}_b := (f_b, f_b^\sharp)$ forms a morphism $Y_b^\mathrm{log} \rightarrow X_b^\mathrm{log}$ between the underlying log schemes (resp., underlying log stacks). \end{itemize}
\end{defi}
\begin{defi} \label{d2b}\leavevmode\\
\ \ \ An {\bf fs log superscheme} (resp., {\bf fs log superstack}) is a log superscheme (resp., log superstack) whose underlying log scheme (resp., underlying log stack) is fine and saturated.
\end{defi}
Let $\underline{X}^\mathrm{log} := (\underline{X}, \alpha_{\underline{X}})$ be an fs log scheme over $S_0$ and $\mathcal{E}$ a coherent $\mathcal{O}_{\underline{X}}$-module. Then, we shall write \begin{align} \label{E13} \langle \underline{X}, \mathcal{E} \rangle^{\circledS\mathrm{log}} \end{align}
for the log superscheme defined to be $\langle \underline{X}, \mathcal{E} \rangle$ (cf. (\ref{E12})) equipped with the log structure pulled back from $\underline{X}^\mathrm{log}$ via
$\langle \beta \rangle^\circledS_{\underline{X}, \mathcal{E}}$ (cf. (\ref{E14})).
\begin{defi} \label{d2c}\leavevmode\\
\ \ \ Let $X^{\circledS \mathrm{log}}$ and $Y^{\circledS \mathrm{log}}$ be log superschemes (resp., log superstacks) and $f^{\circledS \mathrm{log}} : Y^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$ a morphism of log superschemes (resp., a morphism of log superstacks).
We shall say that $ f^{\circledS \mathrm{log}}$ is {\bf strict super\'{e}tale} (resp., {\bf a strict closed immersion}) if $f^\circledS$ is super\'{e}tale (resp., a closed immersion) and $f^\mathrm{log}_b$ is strict, in the sense of ~\cite{ILL}, \S\,1.2.
\end{defi}
We shall write
\begin{align} \label{EE13}
\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS \mathrm{log}}_{/S_0} \end{align} for the category whose {\it objects} are fs log superschemes and whose {\it morphisms} are morphisms of log superschemes. Also, write $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\mathrm{log}$ for the full subcategory of $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS \mathrm{log}}_{/S_0}$ consisting of fs log schemes (i.e., fs log superschemes $X^{\circledS \mathrm{log}}$ with $\mathcal{O}_{X_f} =0$). The fiber products and finite coproducts in $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS \mathrm{log}}_{/S_0}$ exist, and $\mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS \mathrm{log}}_{/S_0}$ admits the Grothendieck pretopology given by strict super\'{e}tale morphisms; we shall refer to it as the {\bf strict super\'{e}tale pretopology}. In a natural manner, any log superstack may be thought of as a stack
over $ \mathfrak{S} \mathfrak{c} \mathfrak{h}^{\circledS \mathrm{log}}_{/S_0}$ with respect to the strict super\'{e}tale pretopology.
\subsection{Logarithmic superdifferentials} \label{S22} \leavevmode\\
Let $S^{\circledS \mathrm{log}} : = (S_b, \mathcal{O}_{S^\circledS}, \alpha_{S_b})$ and $X^{\circledS \mathrm{log}} := (X_b, \mathcal{O}_{X^\circledS}, \alpha_{X_b})$ be fs log superschemes and
$f^{\circledS \mathrm{log}} \ (:= (f_b, f^\flat, f_b^\sharp)) : X^{\circledS \mathrm{log}} \rightarrow S^{\circledS \mathrm{log}}$ a morphism of log superschemes.
In the following, we shall define (in a functorial manner) a ``{\it log super}'' analogue of the sheaf of relative differential $1$-forms, i.e., an $\mathcal{O}_{X^\circledS}$-supermodule $\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$
together with a universal derivation $d : \mathcal{O}_{X^\circledS} \rightarrow \Omega_{X^{\circledS \mathrm{log}}/S^{\circledS\mathrm{log}}}$. Let us write $\varDelta_X^\circledS : X^\circledS \rightarrow X^\circledS \times_{S^\circledS} X^\circledS$ for the diagonal morphism and write $\mathcal{J} := \mathrm{Ker} (\mathcal{O}_{X^\circledS \times_{S^\circledS} X^\circledS} \rightarrow \varDelta_{X*}^\circledS(\mathcal{O}_{X^\circledS}))$. Then, we shall define
\begin{align} \Omega_{X^\circledS/S^\circledS} := \varDelta_X^{\circledS *}(\mathcal{J}/\mathcal{J}^2) \end{align}
and write
$d : \mathcal{O}_{X^\circledS} \rightarrow \Omega_{X^\circledS/S^\circledS}$ for the $f^{-1}_b(\mathcal{O}_{S^\circledS})$-linear morphism given by assigning $a \mapsto d(a) := \overline{(a \otimes 1 - 1\otimes a)}$ for any local section $a \in \mathcal{O}_{X^\circledS}$. For example, if $X^{\circledS} = \mathbb{A}^{m |n}_{S^\circledS}$, then we have \begin{align} \label{E44} \Omega_{X^\circledS/S^\circledS} \cong (\bigoplus_{i=1}^m \mathcal{O}_{X^\circledS} d (t_i)) \oplus (\bigoplus_{i=1}^n \mathcal{O}_{X^\circledS} d (\psi_i)), \end{align}
where $d (t_i)$ ($i =1, \cdots, m$) are bosonic elements in $\Omega_{X^\circledS/S^\circledS}$ and $d (\psi_i)$ ($i = 1, \cdots, n$) are fermionic elements.
Moreover, let us define the $\mathcal{O}_{X^{\circledS}}$-supermodule $\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ to be \begin{align} \Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} := (\Omega_{X^\circledS/S^\circledS} \oplus (\mathcal{O}_{X^\circledS} \otimes_\mathbb{Z} \mathcal{M}^\mathrm{gr}_{X_b}))/\mathcal{N}, \end{align} where \begin{itemize} \item[(i)] $ \mathcal{M}^\mathrm{gr}_{X_b}$ denotes the groupification of $\mathcal{M}_{X_b}$ whose local sections are bosonic (hence we obtain the $\mathcal{O}_{X^\circledS}$-supermodule $\mathcal{O}_{X^\circledS} \otimes_\mathbb{Z} \mathcal{M}^\mathrm{gr}_{X_b}$); \item[(ii)]
$\mathcal{N}$ denotes the $\mathcal{O}_{X^\circledS}$-subsupermodule generated locally by local sections of the following forms: \begin{itemize}
\item[$\bullet$] $(d (\alpha_{X_b} (a)), 0) - (0, \alpha_{X_b} (a) \otimes a)$ with $a \in \mathcal{M}_{X_b}$;
\item[$\bullet$] $(0, 1\otimes a)$ with $a \in \mathrm{Im} (f_b^{-1} (\mathcal{M}_{S_b}) \stackrel{f_b^\sharp}{\rightarrow} \mathcal{M}_{X_b})$. \end{itemize}
\end{itemize} The class of $(0, 1\otimes a)$ for $a \in \mathcal{M}_{X_b}$ in $\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ is denoted by $d \mathrm{log} (a)$. Finally, we write \begin{equation} \mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} := \Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}^\vee, \end{equation} i.e., the dual $\mathcal{O}_{X^\circledS}$-supermodule of $\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$.
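As a sanity check (a standard computation from classical log geometry, transcribed here in the purely bosonic case and not asserted anywhere above), take $X^\circledS = \mathbb{A}^{1|0}_{S^\circledS}$ with coordinate $t$, and let $\mathcal{M}_{X_b}$ be the log structure generated by $t$ (i.e., associated to the divisor $\{t = 0\}$). The first type of generator of $\mathcal{N}$ identifies the class of $(d (t), 0)$ with that of $(0, t \otimes t)$, so that

```latex
\begin{align*}
d (t) = t \cdot d \mathrm{log} (t),
\qquad
\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}
\cong \mathcal{O}_{X^\circledS} \cdot d \mathrm{log} (t),
\end{align*}
```

a free $\mathcal{O}_{X^\circledS}$-supermodule of rank $1|0$; after inverting $t$, one recovers $\Omega_{X^\circledS/S^\circledS}$, since $d \mathrm{log} (t) = t^{-1} d (t)$ there.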
The following Propositions \ref{p0404f} and \ref{p04304} may be verified immediately.
\begin{prop} \label{p0404f} \leavevmode\\
\ \ \ Let us consider a cartesian square diagram \begin{align} \xymatrix{
Y^{\circledS \mathrm{log}} \ar[r]^{f^{\circledS \mathrm{log}}} \ar[d] \ar@{}[rd]|{\Box} & X^{\circledS \mathrm{log}} \ar[d] \\ T^{\circledS \mathrm{log}} \ar[r] & S^{\circledS \mathrm{log}} } \end{align} in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$. Then, the natural $\mathcal{O}_{Y^\circledS}$-linear morphism \begin{align} f^{\circledS *} (\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}) \rightarrow \Omega_{Y^{\circledS \mathrm{log}}/T^{\circledS \mathrm{log}}} \end{align} is an isomorphism.
\end{prop}
\begin{prop} \label{p04304} \leavevmode\\
\begin{itemize} \item[(i)] Let $X^{\circledS \mathrm{log}}$ and $Y^{\circledS \mathrm{log}}$ be fs log superschemes over $S^{\circledS \mathrm{log}}$ and $f^{\circledS \mathrm{log}} : Y^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$
a morphism of log superschemes. Then, there exists an exact sequence \begin{align} \label{E20} f^{\circledS *} (\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}) \rightarrow \Omega_{Y^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} \rightarrow \Omega_{Y^{\circledS \mathrm{log}}/X^{\circledS \mathrm{log}}} \rightarrow 0 \end{align} of $\mathcal{O}_{Y^{\circledS}}$-supermodules. \item[(ii)] Suppose further that $f^{\circledS \mathrm{log}}$ is strict super\'{e}tale. Then, $\Omega_{Y^{\circledS \mathrm{log}}/X^{\circledS \mathrm{log}}} =0$ and the first arrow $f^{\circledS *} (\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}) \rightarrow \Omega_{Y^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ in (\ref{E20}) is an isomorphism.
\end{itemize}
\end{prop}
\subsection{Logarithmic superderivations} \label{S23} \leavevmode\\
Let $S^{\circledS \mathrm{log}}$, $X^{\circledS \mathrm{log}}$, and $f^{\circledS \mathrm{log}} : X^{\circledS \mathrm{log}} \rightarrow S^{\circledS \mathrm{log}}$ be as at the beginning of the previous subsection.
\begin{defi}\label{D0192}\leavevmode\\ \ \ \ Let $\mathcal{E}$ be an $\mathcal{O}_{X^\circledS}$-supermodule. A {\bf logarithmic superderivation of $(\mathcal{O}_{X^\circledS}, \mathcal{M}_{X_b})$ (over $S^{\circledS\mathrm{log}}$) with value in $\mathcal{E}$} is a pair $\partial := (D, \delta)$, where \begin{itemize} \item[$\bullet$]
$D$ is a superderivation $\mathcal{O}_{X^\circledS} \rightarrow \mathcal{E}$ over $S^\circledS$, i.e., an $f_b^{-1} (\mathcal{O}_{S^\circledS})$-linear morphism satisfying that
\begin{align}
D (a \cdot b) = D (a) \cdot b + (-1)^{|D| \cdot |a|} a \cdot D(b)
\end{align}
for any local sections $a, b \in \mathcal{O}_{X^\circledS}$ (where $|D|$ denotes the parity of $D$);
\item[$\bullet$]
$\delta$ is a monoid homomorphism $\mathcal{M}_{X_b} \rightarrow \mathcal{E}$ such that
\begin{align}
D (\alpha_{X_b} (m)) = \alpha_{X_b} (m) \cdot \delta (m)
\end{align}
for any local section $m \in \mathcal{M}_{X_b}$; \item[$\bullet$] $D (f^{-1}_b (b)) = \delta (f^\sharp_b (n)) =0$ for any sections $b \in \mathcal{O}_{S^\circledS}$ and $n \in \mathcal{M}_{S_b}$. \end{itemize} If $\partial := (D, \delta)$ is a logarithmic superderivation, then we usually just write $\partial (a)$ and $\partial(m)$ (where $a \in \mathcal{O}_{X^\circledS}$ and $m \in \mathcal{M}_{X_b}$) for $D (a)$ and $\delta (m)$ respectively. \end{defi}
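Continuing the hypothetical example $X^\circledS = \mathbb{A}^{1|0}_{S^\circledS}$ with log structure generated by the coordinate $t$ (so that a local section of $\mathcal{M}_{X_b}$ has the form $u\, t^k$ with $u$ a unit and $k \geq 0$), the pair dual to $d \mathrm{log} (t)$ is a logarithmic superderivation:

```latex
\begin{align*}
\partial := \Bigl( D := t \tfrac{\partial}{\partial t}, \ \
\delta (u\, t^k) := \tfrac{D (u)}{u} + k \Bigr),
\qquad
D (\alpha_{X_b} (u\, t^k)) = D (u) t^k + k\, u\, t^k
= \alpha_{X_b} (u\, t^k) \cdot \delta (u\, t^k),
\end{align*}
```

so the compatibility $D (\alpha_{X_b} (m)) = \alpha_{X_b} (m) \cdot \delta (m)$ holds; note that $\delta$ is a homomorphism from the multiplicative monoid $\mathcal{M}_{X_b}$ to the additive monoid $\mathcal{O}_{X^\circledS}$.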
\begin{rema} \label{r408} \leavevmode\\
\ \ \
Let $\mathcal{E}$ be an $\mathcal{O}_{X^\circledS}$-supermodule. Denote by \begin{align} \label{E45} \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \mathcal{E}) \end{align}
the set of logarithmic superderivations of $(\mathcal{O}_{X^\circledS}, \mathcal{M}_{X_b})$ over $S^{\circledS \mathrm{log}}$ with value in $\mathcal{E}$. The structure of $\mathcal{O}_{X^\circledS}$-supermodule on $\mathcal{E}$ gives rise to a structure of $\Gamma (X_b, \mathcal{O}_{X^\circledS})$-supermodule on $\mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \mathcal{E})$. In particular, $\mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \mathcal{E})$ decomposes as \begin{align} \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \mathcal{E}) = \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \mathcal{E})_b \oplus \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \mathcal{E})_f. \end{align} It is clear that there exists a universal logarithmic superderivation \begin{align} \label{E46} d \in \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}})_b. \end{align}
That is to say,
the morphism
\begin{align} \label{E47}
\mathrm{Hom}_{\mathcal{O}_{X^\circledS}} (\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}, \mathcal{E}) \isom & \ \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; \mathcal{E}) \\
h \hspace{10mm} \mapsto & \hspace{7mm} h \circ d \notag
\end{align}
is an isomorphism
of $\Gamma (X_b, \mathcal{O}_{X^\circledS})$-supermodules.
In particular, (since the isomorphism (\ref{E47}) is compatible with restriction to each open subscheme of $X_b$) the case of $\mathcal{E} = \mathcal{O}_{X^\circledS}$ implies that
the dual $\mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ of $\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ is isomorphic to the sheaf
given by assigning $U \mapsto \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}} \times_{X_b} U; \mathcal{O}_{X^\circledS} |_{U})$ (for any open subscheme $U$ of $X_b$).
By taking account of ~\cite{Og}, Proposition 1.1.7, one verifies that
$\mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ admits a structure of Lie superalgebra over $f_b^{-1}(\mathcal{O}_{S^\circledS})$ with bracket operation given by
\begin{align}
[\partial_1, \partial_2] := (D_1\circ D_2 - (-1)^{|D_1| \cdot |D_2|} D_2 \circ D_1, D_1 \circ \delta_2 - (-1)^{|D_1| \cdot |D_2|} D_2 \circ \delta_1)
\end{align} for any homogeneous logarithmic superderivations $\partial_1 := (D_1, \delta_1)$ and $\partial_2 := (D_2, \delta_2)$.
\end{rema}
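To illustrate the bracket operation in the simplest case, suppose again that the log structures are trivial and that $X^\circledS = \mathbb{A}^{1|1}_{S^\circledS}$, with even coordinate $x$ and odd coordinate $\psi$. Consider the odd superderivation $D := \frac{\partial}{\partial \psi} + \psi \cdot \frac{\partial}{\partial x}$. Since $|D| = 1$, we have $[D, D] = 2 \cdot D \circ D$, and a direct computation on a local section of the form $a + \psi \cdot b$ (where $a$, $b$ are even and independent of $\psi$) gives
\begin{align}
D (a + \psi \cdot b) = b + \psi \cdot \frac{\partial a}{\partial x}, \hspace{10mm} (D \circ D) (a + \psi \cdot b) = \frac{\partial a}{\partial x} + \psi \cdot \frac{\partial b}{\partial x}.
\end{align}
Hence $[D, D] = 2 \cdot \frac{\partial}{\partial x}$; this computation underlies the superconformal structures considered later.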
\begin{rema} \label{r4018} \leavevmode\\
\ \ \
The discussions in \S\,2.2 and \S\,2.3 (especially
Propositions \ref{p0404f} and \ref{p04304}) generalize naturally to the case where $X^{\circledS \mathrm{log}}$ is a log superstack.
In fact, $\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ is constructed in such a way that if $t^{\circledS \mathrm{log}} : T^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$ (where $T^{\circledS \mathrm{log}}$ is a superscheme) is
a strict super\'{e}tale morphism, then we have a functorial (with respect to $T^{\circledS \mathrm{log}}$) isomorphism $t^{\circledS *}(\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}) \cong \Omega_{T^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$.
\end{rema}
\subsection{Log supersmooth morphisms} \label{S24} \leavevmode\\
Let $S^{\circledS \mathrm{log}}$ be an fs log superscheme, $X^{\circledS \mathrm{log}}$ an fs log superstack, and $f^{\circledS \mathrm{log}} : X^{\circledS \mathrm{log}}\rightarrow S^{\circledS \mathrm{log}}$ a morphism of log superstacks.
\begin{defi}\label{D0519}\leavevmode\\ \ \ \
Let $m$, $n$ be nonnegative integers.
\begin{itemize} \item[(i)]
An {\bf $(m|n)$-chart} on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ is a triple \begin{align} (Y^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}}), \end{align} where \begin{itemize}
\item[$\bullet$] $Y^{\circledS \mathrm{log}}$ is an affine fs log superscheme
together with a strict super\'{e}tale morphism $Y^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$ over $S^{\circledS \mathrm{log}}$;
\item[$\bullet$] $U^\mathrm{log}$ is an fs log affine scheme
together with an integral log smooth morphism $U^\mathrm{log} \rightarrow S_b^\mathrm{log}$ of relative dimension $m$;
\item[$\bullet$]
$\eta^{\circledS \mathrm{log}}$ is an isomorphism $Y^{\circledS \mathrm{log}} \isom U^\mathrm{log} \times_{S_b} \mathbb{A}^{0|n}_{S^{\circledS}}$ over $S^{\circledS \mathrm{log}}$.
\end{itemize} \item[(ii)]
We shall say that $X^{\circledS \mathrm{log}}$ is {\bf log supersmooth over $S^{\circledS \mathrm{log}}$ of relative superdimension $m|n$} if there exists a collection $\{ (Y^{\circledS \mathrm{log}}_\gamma, U^\mathrm{log}_\gamma, \eta^{\circledS \mathrm{log}}_\gamma)\}_\gamma$ of $(m|n)$-charts on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ for which the morphism $\coprod_\gamma Y^{\circledS \mathrm{log}}_\gamma \rightarrow X^{\circledS \mathrm{log}}$ is a strict super\'{e}tale covering of $X^{\circledS \mathrm{log}}$.
\end{itemize} \end{defi}
\begin{rema} \label{r4078} \leavevmode\\
\ \ \
It is clear that if both $S^{\circledS \mathrm{log}}$ and $X^{\circledS \mathrm{log}}$ have trivial log structures, then $X^{\circledS \mathrm{log}}$ is log supersmooth over $S^{\circledS \mathrm{log}}$ of relative superdimension $m|n$ if and only if $X^\circledS$ is supersmooth of relative superdimension $m|n$, in the sense of Definition \ref{d08gg9}, (ii).
\end{rema}
\begin{prop} \label{p0404} \leavevmode\\
\ \ \
Suppose that $X^{\circledS \mathrm{log}}$ is log supersmooth over $S^{\circledS \mathrm{log}}$ of relative superdimension $m |n$ for some nonnegative integers $m$, $n$. Then, the following assertions hold.
\begin{itemize} \item[(i)]
The $\mathcal{O}_{X^\circledS}$-supermodule $\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ is a supervector bundle of superrank $m|n$.
\item[(ii)] Let us consider a commutative square diagram \begin{align} \xymatrix{ T^{\circledS \mathrm{log}} \ar[d]_{i^{\circledS \mathrm{log}}} \ar[r]^{t_X^{\circledS \mathrm{log}}}& X^{\circledS \mathrm{log}} \ar[d]^{f^{\circledS \mathrm{log}}} \\ \widetilde{T}^{\circledS \mathrm{log}} \ar[r]^{\widetilde{t}_S^{\circledS \mathrm{log}}} & S^{\circledS \mathrm{log}},} \end{align} where $T^\circledS$ is affine and $i^{\circledS \mathrm{log}}$ is a strict closed immersion defined by a square nilpotent superideal $\mathcal{J}$ of $\mathcal{O}_{\widetilde{T}^\circledS}$. (Hence, $\mathcal{J}$ may be thought of as an $\mathcal{O}_{T^\circledS}$-supermodule.) We shall write
\begin{align} \label{e04889} \mathcal{F} := \mathcal{H} om_{\mathcal{O}_{T^\circledS}} (t_X^{\circledS *}(\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}), \mathcal{J})_{b} \ \big(\cong (t_X^{\circledS *}(\mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} )\otimes \mathcal{J})_b \big)
\end{align} i.e.,
the $\mathcal{O}_{T_b}$-submodule of $\mathcal{H} om_{\mathcal{O}_{T^\circledS}} (t_X^{\circledS *}(\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}), \mathcal{J})$ consisting of $\mathcal{O}_{T^\circledS}$-linear homomorphisms $t_X^{\circledS *}(\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}) \rightarrow \mathcal{J}$
of even parity. Also, we shall write \begin{align} \mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}}) \end{align}
for
the strict super\'{e}tale sheaf on $\widetilde{T}^{\circledS \mathrm{log}}$ which, to any strict super\'{e}tale morphism $\alpha^{\circledS \mathrm{log}} : \widetilde{T}_1^{\circledS \mathrm{log}} \rightarrow \widetilde{T}^{\circledS \mathrm{log}}$, assigns
the set of
morphisms $\widetilde{t}_{1, X}^{\circledS \mathrm{log}} : \widetilde{T}_1^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$ which make the diagram
\begin{align} \xymatrix{
T^{\circledS \mathrm{log}} \times_{\widetilde{T}^{\circledS \mathrm{log}}} \widetilde{T}_1^{\circledS \mathrm{log}} \ar[d]_{i^{\circledS \mathrm{log}}} \ar[r]^{}& X^{\circledS \mathrm{log}} \ar[d]^{f^{\circledS \mathrm{log}}} \\
\widetilde{T}_1^{\circledS \mathrm{log}} \ar[ru]_{\widetilde{t}_{1, X}^{\circledS \mathrm{log}}}\ar[r]_{\widetilde{t}_S^{\circledS \mathrm{log}}\circ \alpha^{\circledS \mathrm{log}} } & S^{\circledS \mathrm{log}},} \end{align} commute, where the upper horizontal arrow denotes the composite of $t_X^{\circledS \mathrm{log}}$ and the projection $ T^{\circledS \mathrm{log}} \times_{\widetilde{T}^{\circledS \mathrm{log}}} \widetilde{T}_1^{\circledS \mathrm{log}} \rightarrow T^{\circledS \mathrm{log}}$ to the first factor.
Then, $\mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}})$ is nonempty (i.e., admits locally a section), and moreover, admits canonically a structure of affine space
\begin{align} \label{E49}
\mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}}) \times i_{b*}(\mathcal{F}) & \rightarrow \mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}}) \\
(\widetilde{t}^{\circledS \mathrm{log}}_X, \partial) \hspace{5mm} & \mapsto \ \ \ \widetilde{t}^{\circledS \mathrm{log}}_X \boxplus^\dagger \partial \notag
\end{align}
modeled on $i_{b*}(\mathcal{F})$. \end{itemize}
\end{prop}
\begin{proof} Assertion (i) follows from (\ref{E44}), Proposition \ref{p04304} (ii), and ~\cite{KATO}, Proposition (3.10).
Next, we shall prove the former assertion of (ii), i.e., that $\mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}})$ is nonempty. After possibly replacing $\widetilde{T}^{\circledS \mathrm{log}}$ with its strict super\'{e}tale covering, one may assume, without loss of generality, that $X^{\circledS \mathrm{log}} = U^\mathrm{log} \times_{S_b} \mathbb{A}_{S^\circledS}^{0|n}$ for some fs log affine scheme $U^\mathrm{log}$ together with an integral log smooth morphism $f_U^\mathrm{log} : U^\mathrm{log} \rightarrow S_b^\mathrm{log}$ of relative dimension $m$. Consider the commutative diagram \begin{align}
\begin{CD} T_b^{\mathrm{log}}
@> \underline{t}_X^\mathrm{log} >> U^\mathrm{log} \\ @V i_b^{\mathrm{log}} VV @VV f_U^\mathrm{log} V \\ \widetilde{T}_b^{\mathrm{log}} @>> ({\widetilde{t}_S})_b^{\mathrm{log}} > S_b^\mathrm{log}, \end{CD} \end{align} where the upper horizontal arrow $\underline{t}_X^\mathrm{log}$ denotes the composite of $(t_X)_b^\mathrm{log} : T_b^\mathrm{log} \rightarrow X_b^\mathrm{log}$ and the natural projection $X_b^\mathrm{log} \rightarrow U^\mathrm{log}$.
Since $f^\mathrm{log}_U$ is log smooth,
there exists a morphism $\widetilde{t}_{b, U}^\mathrm{log} : \widetilde{T}_b^\mathrm{log} \rightarrow U^\mathrm{log}$
such that
$\widetilde{t}_{b, U}^\mathrm{log} \circ i_b^\mathrm{log} = \underline{t}_X^\mathrm{log}$
and
$f_U^\mathrm{log} \circ \widetilde{t}_{b, U}^\mathrm{log} = (\widetilde{t}_S)_b^\mathrm{log}$. On the other hand, let us consider the functorial bijection (\ref{FF04}) obtained in Proposition \ref{P0} (in the case where $(m, n) =(0, 1)$).
Then, the composite of $t_X^{\circledS \mathrm{log}} : T^{\circledS \mathrm{log}} \stackrel{}{\rightarrow} U^\mathrm{log} \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ and the projection
$U^\mathrm{log} \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \stackrel{}{\rightarrow} \mathbb{A}^{0 |1}_{S^\circledS}$ to the second factor extends, strict super\'{e}tale locally on $\widetilde{T}^{\circledS \mathrm{log}}$,
to a morphism $\widetilde{t}_\mathbb{A}^{\circledS \mathrm{log}} : \widetilde{T}^{\circledS \mathrm{log}} \rightarrow \mathbb{A}^{0|1}_{S^\circledS}$. Thus, the morphism \begin{align}
(\widetilde{t}_{b, U}^\mathrm{log} \circ \beta^{\circledS \mathrm{log}}_{\widetilde{T}^{}}, \widetilde{t}_\mathbb{A}^{\circledS \mathrm{log}}) : \widetilde{T}^{\circledS \mathrm{log}} \rightarrow U^\mathrm{log} \times_{S_b} \mathbb{A}_{S^\circledS}^{0|1} \end{align}
determines a section of $\mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}})$.
Next, we shall prove the latter assertion of (ii). One may assume, without loss of generality, that there exists an element $\widetilde{t}_{X, 0}^{\circledS \mathrm{log}} \in \Gamma (\widetilde{T}^{}_b, \mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}}))$. Since we have an isomorphism
\begin{align} \Gamma (\widetilde{T}^{}_b, i_{b*}(\mathcal{F})) \ ( \isom \mathrm{Hom}_{\mathcal{O}_{X^{\circledS}}} (\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}, t_{X*}^{\circledS} (\mathcal{J}))_b) \isom \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; t_{X*}^{\circledS} (\mathcal{J}))_b
\end{align} (cf. (\ref{E47})),
it suffices to construct a functorial (with respect to $\widetilde{T}^{\circledS \mathrm{log}}$) bijection \begin{align} \label{E48} \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; (\widetilde{t}_{X, 0}^\circledS)_* (\mathcal{J}))_b \isom \Gamma (\widetilde{T}^{}_b, \mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}})). \end{align}
Denote by $\widetilde{t}_{X, 0}^{\circledS \dagger} : \mathcal{O}_{X^\circledS} \rightarrow t_{X*}^\circledS (\mathcal{O}_{\widetilde{T}^\circledS})$ and $\widetilde{t}_{X, 0}^{\circledS \ddagger} : \mathcal{M}_{X_b} \rightarrow t_{X*}^\circledS (\mathcal{M}_{\widetilde{T}_b})$ the morphisms arising naturally from $\widetilde{t}_{X, 0}^{\circledS}$ (where we consider both $\mathcal{O}_{\widetilde{T}^\circledS}$ and $\mathcal{M}_{\widetilde{T}_b}$ as sheaves on $T_b$ via the underlying homeomorphism between topological spaces of $i^{\circledS \mathrm{log}}$). Let us take an element \begin{align} \partial := (\mathcal{O}_{X^\circledS} \stackrel{D}{\rightarrow} (\widetilde{t}_{X, 0}^\circledS)_* (\mathcal{J}), \mathcal{M}_{X_b} \stackrel{\delta}{\rightarrow} (\widetilde{t}_{X, 0}^\circledS)_* (\mathcal{J})) \end{align}
of $ \mathrm{Def}_{S^{\circledS}} (X^{\circledS \mathrm{log}}; (\widetilde{t}_{X, 0}^\circledS)_* (\mathcal{J}))_b$. By applying the inclusions \begin{align} (\widetilde{t}_{X, 0}^\circledS)_* (\mathcal{J}) \hookrightarrow (\widetilde{t}_{X, 0}^\circledS)_*(\mathcal{O}_{\widetilde{T}^\circledS})
\ \ \text{and} \ \ (\widetilde{t}_{X, 0}^\circledS)_* (1 + \mathcal{J}) \hookrightarrow (\widetilde{t}_{X, 0}^\circledS)_* (\mathcal{M}_{\widetilde{T}_b}), \end{align}
one may obtain two maps $\widetilde{t}_{X, 0}^{\circledS \dagger} + D$ and $\widetilde{t}_{X, 0}^{\circledS \ddagger} + \delta$ given by \begin{align} \widetilde{t}_{X, 0}^{\circledS \dagger} + D : & \ \mathcal{O}_{X^\circledS} \rightarrow (\widetilde{t}_{X, 0}^\circledS)_*(\mathcal{O}_{\widetilde{T}^\circledS}), & \widetilde{t}_{X, 0}^{\circledS \ddagger} + \delta : & \ \mathcal{M}_{X_b} \rightarrow (\widetilde{t}_{X, 0}^\circledS)_* (\mathcal{M}_{\widetilde{T}_b}) \\ & \ \ a \ \ \ \mapsto \ \ \ a + D (a) & & \ \ \ b \ \ \ \mapsto \ \ \ 1 + \delta (b) \notag \end{align} for any local sections $a \in \mathcal{O}_{X^\circledS}$, $b \in \mathcal{M}_{X_b}$.
By the definition of a logarithmic superderivation, the pair $(\widetilde{t}_{X, 0}^{\circledS \dagger} + D, \widetilde{t}_{X, 0}^{\circledS \ddagger} + \delta)$ determines a new morphism $\widetilde{t}_{X, 0}^{\circledS} \boxplus^\dagger \partial : \widetilde{T}^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$ in $\mathcal{D} e f_{\widetilde{T}^{\circledS \mathrm{log}}}(t_X^{\circledS \mathrm{log}})$.
One verifies immediately that this assignment $\partial \mapsto \widetilde{t}_{X, 0}^{\circledS} \boxplus^\dagger \partial $ determines the desired bijection (\ref{E48}). This completes the proof of Proposition \ref{p0404}. \end{proof}
\subsection{Log supersmooth liftings}\label{S25} \leavevmode\\
\begin{defi}\label{D0719}\leavevmode\\ \ \ \ Let $X^{\circledS \mathrm{log}}$ and $S^{\circledS \mathrm{log}}$
be as in Proposition \ref{p0404}.
Also, let $i_S^{\circledS \mathrm{log}} : S^{\circledS \mathrm{log}} \rightarrow \widetilde{S}^{\circledS \mathrm{log}}$ be a strict closed immersion
determined by a nilpotent superideal $\mathcal{J}$ on $\mathcal{O}_{\widetilde{S}^\circledS}$.
\begin{itemize} \item[(i)] By a {\bf log supersmooth lifting} of $X^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$, we mean a triple \begin{align} \widetilde{\mathbb{X}} := (\widetilde{X}^{\circledS \mathrm{log}}, \widetilde{f}^{\circledS \mathrm{log}}, i_X^{\circledS \mathrm{log}}) \end{align}
consisting of a log superstack $\widetilde{X}^{\circledS \mathrm{log}}$, a log supersmooth morphism $\widetilde{f}^{\circledS \mathrm{log}} : \widetilde{X}^{\circledS \mathrm{log}} \rightarrow \widetilde{S}^{\circledS \mathrm{log}}$, and a strict closed immersion $i^{\circledS \mathrm{log}}_X : X^{\circledS \mathrm{log}} \rightarrow \widetilde{X}^{\circledS \mathrm{log}}$ which make the square diagram \begin{align} \xymatrix{
X^{\circledS \mathrm{log}} \ar[d]_{f^{\circledS \mathrm{log}}} \ar[r]^{i_X^{\circledS \mathrm{log}}}
& \widetilde{X}^{\circledS \mathrm{log}} \ar[d]^{\widetilde{f}^{\circledS \mathrm{log}}}\\
S^{\circledS \mathrm{log}} \ar[r]_{i_S^{\circledS \mathrm{log}}} & \widetilde{S}^{\circledS \mathrm{log}} } \end{align} commutative and cartesian. \item[(ii)] Let $\widetilde{\mathbb{X}}_l := (\widetilde{X}_l^{\circledS \mathrm{log}}, \widetilde{f}_l^{\circledS \mathrm{log}}, i_{X_l}^{\circledS \mathrm{log}})$ ($l =1,2$)
be log supersmooth liftings of $X^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$. An {\bf isomorphism of log supersmooth liftings} from $\widetilde{\mathbb{X}}_1$ to $\widetilde{\mathbb{X}}_2$ is an isomorphism $j^{\circledS \mathrm{log}} : \widetilde{X}^{\circledS \mathrm{log}}_1 \isom \widetilde{X}^{\circledS \mathrm{log}}_2$ such that $\widetilde{f}^{\circledS \mathrm{log}}_2 \circ j^{\circledS \mathrm{log}} = \widetilde{f}^{\circledS\mathrm{log}}_1$ and $j^{\circledS \mathrm{log}} \circ i^{\circledS \mathrm{log}}_{X_1} = i^{\circledS \mathrm{log}}_{X_2}$. \end{itemize} \end{defi}
\begin{rema} \label{r4} \leavevmode\\
\ \ \
Suppose that we are given a log supersmooth lifting $\widetilde{\mathbb{X}} := (\widetilde{X}^{\circledS \mathrm{log}}, \widetilde{f}^{\circledS \mathrm{log}}, i_{X}^{\circledS \mathrm{log}})$ of $X^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ and a strict super\'{e}tale morphism $Y^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$ over $S^{\circledS \mathrm{log}}$.
Then, by Proposition \ref{p0607}, there exists a log supersmooth lifting of $Y^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ which is uniquely determined up to isomorphism.
We denote this log supersmooth lifting by
\begin{align} \label{E64}
\widetilde{\mathbb{X}} |_{Y^{\circledS \mathrm{log}}} := (\widetilde{X}^{\circledS \mathrm{log}} |_{Y^{\circledS \mathrm{log}}}, \widetilde{f}^{\circledS \mathrm{log}} |_{Y^{\circledS \mathrm{log}}}, i_{X}^{\circledS \mathrm{log}} |_{Y^{\circledS \mathrm{log}}}).
\end{align}
\end{rema}
\begin{cor} \label{c0404} \leavevmode\\
\ \ \ Let us keep the notation in Definition \ref{D0719}. Suppose further that $\mathcal{J}$ is square nilpotent
(hence $\mathcal{J}$ may be thought of as an $\mathcal{O}_{S^\circledS}$-supermodule). Also, write \begin{align} \mathcal{F} := \mathcal{H} om_{\mathcal{O}_{X^\circledS}} (\Omega_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}, \mathcal{J} \mathcal{O}_{X^\circledS})_{b}. \end{align} (Note that, since $X^{\circledS}$ is superflat over $S^\circledS$, we have $\mathcal{F} \cong (\mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}\otimes f^{\circledS *}(\mathcal{J}))_b$.)
\begin{itemize} \item[(i)]
Suppose that we are given a log supersmooth lifting $\widetilde{\mathbb{X}} :=(\widetilde{X}^{\circledS \mathrm{log}}, \widetilde{f}^{\circledS \mathrm{log}}, i_X^{\circledS \mathrm{log}})$ of $X^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$. Then, the group of automorphisms of $\widetilde{\mathbb{X}}$
is canonically isomorphic to $\Gamma (X_b, \mathcal{F})$.
\item[(ii)] Suppose that we are given two log supersmooth liftings $\widetilde{\mathbb{X}}_l := (\widetilde{X}_l^{\circledS \mathrm{log}}, \widetilde{f}_l^{\circledS \mathrm{log}}, i_{X_l}^{\circledS \mathrm{log}})$ ($l =1, 2$) of $X^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$. Then, there exists a strict super\'{e}tale covering $Y^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$
such that $\widetilde{\mathbb{X}}_1 |_{Y^{\circledS \mathrm{log}}} \isom \widetilde{\mathbb{X}}_2 |_{Y^{\circledS \mathrm{log}}}$.
In particular, if there exists a log supersmooth lifting of $X^{\circledS\mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$,
then the set of isomorphism classes of log supersmooth liftings of $X^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ forms canonically an affine space modeled on
$H^1 (X_b, \mathcal{F})$.
\item[(iii)] A log supersmooth lifting of $X^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ exists if $H^2 (X_b, \mathcal{F}) =0$.
\end{itemize}
\end{cor}
\begin{proof} Assertions (i), (ii), and (iii) follow from Proposition \ref{p0404}, (ii), together with a routine argument from the classical theory of (log) smoothness. \end{proof}
\section{Stable log twisted $\text{SUSY}_1$ curves}
In this section, we shall
consider, by means of various notions defined previously, supersymmetric analogues of a pointed log twisted curve (with a canonical logarithmic structure). We first recall (in \S\,\ref{S31}) the definition of a twisted curve and prove the Riemann-Roch theorem for twisted curves (cf. Theorem \ref{p04033}), which will be used in, e.g., computing the superdimension of the relevant moduli introduced later. Then, log twisted $(1|1)$-curves (cf. Definition \ref{D0219f} (i))
are defined and characterized by local models, which are the fiber products of a (locally defined) log twisted curve and the affine superspace of superdimension $0|1$. Moreover, by introducing a logarithmic and twisted analogue of superconformal structure, we obtain the notion of a (pointed) log twisted $\text{SUSY}_1$ curve (cf. Definition \ref{D02}), which is a central object of the present paper.
As shown in Corollary \ref{c01033}, a basic property is that local models of a (pointed) log twisted $\text{SUSY}_1$ curve may be chosen to be suitable with respect to log supersmooth deformation.
Finally, we introduce the fibered category ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ (cf. (\ref{EE12})) classifying stable log twisted $\text{SUSY}_1$ curves of prescribed type $(g,r, \lambda)$ (cf. Definition \ref{D03}).
\subsection{The Riemann-Roch theorem for twisted curves} \label{S31} \leavevmode\\
In this section, let us review the notion of a twisted curve and consider the Riemann-Roch theorem for twisted curves.
Here, recall that the {\it tameness} condition on a Deligne-Mumford stack $Z$ means that for every geometric point $q : \mathrm{Spec} (k) \rightarrow Z$ the group $\mathrm{Aut} (q)$ of its stabilizers has order prime to the characteristic of the algebraically closed field $k$.
\begin{defi} \label{d0112}
\leavevmode\\
\ \ \ Let $\underline{S}$ be a scheme.
\begin{itemize} \item[(i)] A {\bf local twisted curve} over $\underline{S}$ is a flat morphism $\underline{f} : \underline{U} \rightarrow \underline{S}$ of tame Deligne-Mumford stacks satisfying the following three conditions (i-1)-(i-3):
\begin{itemize}
\item[(i-1)] The geometric fibers of $\underline{f}$ are purely $1$-dimensional and, \'{e}tale locally on $\underline{U}$, isomorphic to nodal curves;
\item[(i-2)]
The smooth locus $\underline{U}^\mathrm{sm}$ of $\underline{U}$ (over $\underline{S}$) is an algebraic space;
\item[(i-3)] For each node $q$ of a geometric fiber of $\underline{f}$, there exists a commutative diagram: \begin{align} \label{e012} \xymatrix{ V \ar[r]^{\hspace{-3mm}c} \ar[dr]_d& [T/\mu_{l'}] \ar[r]^b& R \ar[d]^a \\ & \underline{U} \ar[r]^{\underline{f}} & \underline{S}, } \end{align} where
\begin{itemize} \item[$\bullet$] $R = \mathrm{Spec} (A)$ for some commutative ring $A$ and $a$ denotes an \'{e}tale neighborhood of $\underline{f} (q) \in \underline{S}$; \item[$\bullet$] $T = \mathrm{Spec}(A[z, w]/(zw -t))$ for some $t \in A$; \item[$\bullet$] $[T/\mu_{l'}]$ denotes the quotient stack of $T$ by $\mu_{l'}$, where $l'$ is a positive integer, $\mu_{l'}$ denotes the group scheme over $R$ of $l'$-th roots of unity, and the action of $\mu_{l'}$ is given by $(z,w) \mapsto (\xi \cdot z, \xi^{-1} \cdot w)$ for any $\xi \in \mu_{l'}$. \item[$\bullet$]
$b$ denotes the natural projection, $c$ denotes an \'{e}tale morphism
and $d$ is an \'{e}tale neighborhood of $q$. \end{itemize} \end{itemize}
\item[(ii)] Let $g$ be a nonnegative integer. A {\bf twisted curve (of genus $g$)} over $\underline{S}$ is a local twisted curve over $\underline{S}$ which is proper and whose coarse moduli space becomes a semistable curve (of genus $g$) over $\underline{S}$ (cf. ~\cite{Chi1}, Definition 2.4.1). \end{itemize} \end{defi}
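For instance, the coarse moduli space of the local model $[T/\mu_{l'}]$ appearing in (i-3) may be described explicitly: the subring of $\mu_{l'}$-invariants of $A[z, w]/(zw - t)$ is generated by $u := z^{l'}$ and $v := w^{l'}$ (together with $t = zw$), subject to the single relation $uv = t^{l'}$. Hence the coarse moduli space of $[T/\mu_{l'}]$ is $\mathrm{Spec} (A[u, v]/(uv - t^{l'}))$; that is, passing to the coarse moduli space replaces the parameter $t$ of the twisted node with $t^{l'}$.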
We prove the following assertion, which is the Riemann-Roch theorem for line bundles on a twisted curve.
\begin{thm} \label{p04033} \leavevmode\\
\ \ \
Let $X$ be a twisted curve of genus $g$ ($\geq 0$) over an algebraically closed field $k$, and let $\mathcal{L}$ be a line bundle on $X$ of total degree $m$. (Here, it follows from ~\cite{Chi1}, Proposition 2.5.6, that $m$ is necessarily an integer.)
We shall write
\begin{align}
\chi (X, \mathcal{L}) := \mathrm{dim}_k(H^0(X, \mathcal{L})) - \mathrm{dim}_k (H^1 (X, \mathcal{L})).
\end{align}
Then, we have that $H^2 (X, \mathcal{L}) = 0$ and $ \chi (X, \mathcal{L}) = \mathrm{deg} (\mathcal{L}) - g +1$.
\end{thm}
\begin{proof}
Write $|X|$ for the coarse moduli space of $X$ and $\pi : X \rightarrow |X |$ for the projection. First, we shall compare $\chi (X, \mathcal{L})$ with $\chi (|X|, \pi_*(\mathcal{L}))$ (i.e., the Euler characteristic of $\pi_*(\mathcal{L})$ in the classical sense). Denote by $e_1, \cdots, e_s : \mathrm{Spec} (k) \rightarrow |X|$ the nodes of $|X|$.
For each $i \in \{ 1, \cdots, s \}$,
the fiber product $X \times_{|X|, e_i} \mathrm{Spec} (k)$ is isomorphic to the classifying stack $B (\mathrm{Aut} (\widetilde{e}_i))$ of the group $\mathrm{Aut} (\widetilde{e}_i)$ of stabilizers of the (unique) point $\widetilde{e}_i \in X(k)$ over $e_i$. Since $\mathrm{Aut} (\widetilde{e}_i)$ is, by the definition of a twisted curve, isomorphic to $\mu_{l_i}$ for some $l_i \geq 1$,
the pull-back $\widetilde{e}_i^* (\mathcal{L})$ of $\mathcal{L}$ may be thought of as the trivial $k$-module $k$ (of rank one) equipped with a $\mu_{l_i}$-action. Write \begin{align}
E^{\mathrm{nt}} := \{ i \ | \ 1 \leq i \leq s \ \text{and the $\mu_{l_i}$-action on $\widetilde{e}_i^* (\mathcal{L})$ is nontrivial} \}. \end{align}
If $i \in E^\mathrm{nt}$, then one verifies immediately that
\begin{align} \label{E0010}
H^q (\mu_{l_i}, \widetilde{e}_i^* (\mathcal{L})) =0 \ (q =0, 2) \ \text{and} \ H^1 (\mu_{l_i}, \widetilde{e}_i^* (\mathcal{L})) =k. \end{align} On the other hand, if $i \notin E^\mathrm{nt}$, then
\begin{align} \label{E0011}
H^q (\mu_{l_i}, \widetilde{e}_i^* (\mathcal{L})) =0 \ (q =1, 2) \ \text{and} \ H^0 (\mu_{l_i}, \widetilde{e}_i^* (\mathcal{L})) =k. \end{align}
Now, let us consider the Leray spectral sequence \begin{align} \label{S001}
E_2^{p, q} := H^p (|X|, \mathbb{R}^q \pi_* (\mathcal{L})) \Longrightarrow H^{p+q} (X, \mathcal{L}) \end{align}
associated with $\mathcal{L}$. Since $\mathbb{R}^q \pi_*(-)$ ($q \geq 1$) vanishes on the smooth locus $|X|^{\mathrm{sm}}$ of $|X|$, we have that \begin{align} \label{E0005}
H^j (|X|, \mathbb{R}^i \pi_*(-)) = 0 \ \ \text{unless $j =0$ or $(i, j) = (0, 1)$}. \end{align}
Moreover, for $q \geq 1$, $\mathbb{R}^q \pi_* (\mathcal{L})$ is isomorphic to $\bigoplus_{i=1}^s e_{i*} (H^q (\mu_{l_i}, \widetilde{e}_i^* (\mathcal{L})))$. Hence, it follows from (\ref{E0010}), (\ref{E0011}), and (\ref{E0005}) that $H^2 (X, \mathcal{L}) = 0$ (which completes the proof of the former equality) and
\begin{align} \label{E0006}
& \ \mathrm{dim}_k (H^1 (X, \mathcal{L})) - \mathrm{dim}_k (H^1 (|X|, \pi_*(\mathcal{L}))) \\
=& \ \sum_{i\in E^\mathrm{nt}} \mathrm{dim}_k (H^1 (\mu_{l_i}, \widetilde{e}_i^* (\mathcal{L})))\notag \\
=& \ \sharp E^\mathrm{nt}. \notag
\end{align}
Since the equality $H^0 (X, \mathcal{L}) = H^0 (|X|, \pi_* (\mathcal{L}))$ is evidently verified,
we have \begin{align} \label{E0004}
\chi (X, \mathcal{L}) - \chi (|X|, \pi_*(\mathcal{L})) = \sharp E^\mathrm{nt}. \end{align}
Next, we shall compare the total degree of $\mathcal{L}$ and $\pi^*(\pi_*(\mathcal{L}))$. For each $i \in \{ 1, \cdots, s \}$,
the formal neighborhood $\widetilde{T}_i$ of $X$ at $\widetilde{e}_i$ is isomorphic to the quotient stack $[\mathrm{Spec}(R)/\mu_{l_i}]$, where $R := k[[z, w]]/(zw)$ and the $\mu_{l_i}$-action on $\mathrm{Spec}(R)$ is given by $(z, w) \mapsto (\xi \cdot z, \xi^{-1} \cdot w)$ for any $\xi \in \mu_{l_i}$.
In particular, if $T_i$ denotes the formal neighborhood of $|X|$ at $e_i$, then we have $T_i \cong \mathrm{Spec} (R^{l_i})$, where $R^{l_i} := k[[z^{l_i}, w^{l_i}]]/(z^{l_i}w^{l_i})$, and the morphism $\widetilde{T}_i \rightarrow T_i$ induced by $\pi$ is given by the natural inclusion $R^{l_i} \hookrightarrow R$.
A choice of trivialization $\mathcal{L} |_{\mathrm{Spec}(R)} \isom \mathcal{O}_{\mathrm{Spec}(R)}$ allows us to identify the total space of the line bundle $\mathcal{L} |_{\widetilde{T}_i}$ with the quotient stack $[(\widetilde{T}_i \times \mathrm{Spec} (k[t]))/\mu_{l_i}]$, where the $\mu_{l_i}$-action is given by $(z, w, t) \mapsto (\xi \cdot z, \xi^{-1} \cdot w, \xi^{m_{i}}\cdot t)$ for some integer $m_i$ with $0 \leq m_i \leq l_i$.
Then, the $\mathcal{O}_{T_i}$-module $\pi_*(\mathcal{L}) |_{T_i}$ corresponds to the ideal $(z^{l_i - m_{i}}, w^{m_i}) \subseteq R^{l_i}$. The restriction to $\widetilde{T}_i$ of the natural morphism $\pi^*(\pi_*(\mathcal{L})) \rightarrow \mathcal{L}$ may be identified with the natural inclusion of the ideal $(z^{l_i - m_{i}}, w^{m_i}) \subseteq R$. If $i \notin E^\mathrm{nt}$ (resp., $i \in E^\mathrm{nt}$), then the length of $\mathcal{L}/ \pi^*(\pi_*(\mathcal{L}))$ at $\widetilde{e}_i$ is $0$ (resp., $\frac{1}{l_i} \cdot \mathrm{length} (R/(z^{l_i - m_{i}}, w^{m_i})) = 1$). Since $\pi^*(\pi_*(\mathcal{L})) \rightarrow \mathcal{L}$ is injective and its cokernel is only supported at $\bigcup_{i=1}^s \mathrm{Im} (\widetilde{e}_i)$,
we have \begin{align} \label{E0003} \mathrm{deg} (\mathcal{L}) = \mathrm{deg} (\pi^* (\pi_*(\mathcal{L}))) + \sum_{i \in E^\mathrm{nt}} 1 = \mathrm{deg} (\pi_*(\mathcal{L})) + \sharp E^\mathrm{nt}. \end{align} By combining (\ref{E0004}) and (\ref{E0003}), we have the equality $\chi (X, \mathcal{L}) = \mathrm{deg} (\mathcal{L}) -g+1$, as desired.
\end{proof}
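As a consistency check of Theorem \ref{p04033}, consider the case where $\mathcal{L} = \mathcal{O}_X$. Then the $\mu_{l_i}$-action on $\widetilde{e}_i^*(\mathcal{O}_X)$ is trivial for every $i$, so $E^\mathrm{nt} = \emptyset$ and $\pi_*(\mathcal{O}_X) = \mathcal{O}_{|X|}$; the asserted equality then reduces to the classical equality $\chi (|X|, \mathcal{O}_{|X|}) = 1 - g$ for the semistable curve $|X|$.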
\subsection{Log twisted $(1|1)$-curves} \label{S32} \leavevmode\\
Let $\underline{S}$ be a scheme and
$\underline{f} : \underline{U} \rightarrow \underline{S}$ a local twisted curve over $\underline{S}$.
According to (the proof of) ~\cite{O1}, Theorem 3.6, there exist canonical log structures \begin{align} \alpha_{\underline{U}}^{\underline{f}} : \mathcal{M}_{\underline{U}} \rightarrow \mathcal{O}_{\underline{U}} \ \ \text{and} \ \ \alpha^{\underline{f}}_{\underline{S}} : \mathcal{M}_{\underline{S}}\rightarrow \mathcal{O}_{\underline{S}} \end{align}
on $\underline{U}$ and $\underline{S}$ respectively (where we denote the resulting log stacks by
$\underline{U}^{\underline{f}\text{-}\mathrm{log}}$ and $\underline{S}^{\underline{f}\text{-}\mathrm{log}}$ respectively), and moreover, a special morphism
\begin{align}
\underline{f}^{\underline{f}\text{-}\mathrm{log}} := (\underline{f}, \underline{f}^\flat : \underline{f}^{-1}(\mathcal{M}_{\underline{S}})\rightarrow \mathcal{M}_{\underline{U}}) :\underline{U}^{\underline{f}\text{-}\mathrm{log}} \rightarrow \underline{S}^{\underline{f}\text{-}\mathrm{log}}
\end{align}
(cf. ~\cite{O1}, Theorem 3.5 for the definition of ``{\it special}") extending $\underline{f}$. The data $(\alpha_{\underline{U}}^{\underline{f}}, \alpha^{\underline{f}}_{\underline{S}}, \underline{f}^{\underline{f}\text{-}\mathrm{log}})$ is uniquely determined up to unique isomorphism.
\begin{defi}\label{D0219h}\leavevmode\\ \ \ \ Let $\underline{S}$ be a scheme and $\alpha_{\underline{S}}^1 : \mathcal{M}^1_{\underline{S}} \rightarrow \mathcal{O}_{\underline{S}}$, $\alpha^2_{\underline{S}} : \mathcal{M}_{\underline{S}}^2 \rightarrow \mathcal{O}_{\underline{S}}$ two log structures on $\underline{S}$. We shall say that a morphism $(\underline{S}, \alpha^1_{\underline{S}}) \rightarrow (\underline{S}, \alpha^2_{\underline{S}})$ of log schemes is {\bf log-like} (over $\underline{S}$) if its underlying endomorphism of $\underline{S}$ coincides with the identity morphism. \end{defi}
\begin{defi}\label{D0219g}\leavevmode\\ \ \ \ Let $\underline{S}^\mathrm{log}$ be an fs log scheme.
A {\bf log local twisted curve over $\underline{S}^\mathrm{log}$} is a morphism $\underline{f}^\mathrm{log} : \underline{U}^\mathrm{log} \rightarrow \underline{S}^\mathrm{log}$ of log stacks satisfying the following two conditions: \begin{itemize} \item[(i)] The underlying morphism $\underline{f} : \underline{U} \rightarrow \underline{S}$ is a local twisted curve over $\underline{S}$; \item[(ii)] There exist log-like morphisms $\underline{S}^\mathrm{log} \rightarrow \underline{S}^{\underline{f}\text{-}\mathrm{log}}$ and $\underline{U}^\mathrm{log} \rightarrow \underline{U}^{\underline{f}\text{-}\mathrm{log}}$ over $\underline{S}$ and $\underline{U}$ respectively which make the square diagram \begin{align} \xymatrix{ \underline{U}^\mathrm{log} \ar[r] \ar[d]_{\underline{f}^\mathrm{log}} & \underline{U}^{\underline{f}\text{-}\mathrm{log}} \ar[d]^{ \underline{f}^{\underline{f}\text{-}\mathrm{log}}} \\ \underline{S}^\mathrm{log} \ar[r] & \underline{S}^{\underline{f}\text{-}\mathrm{log}} } \end{align} commutative and cartesian. \end{itemize} \end{defi}
Now, let us fix an fs log superscheme $S^{\circledS \mathrm{log}}$.
\begin{defi}\label{D0219f}\leavevmode\\
\begin{itemize} \item[(i)]
A {\bf log twisted $(1|1)$-curve} over $S^{\circledS \mathrm{log}}$ is a log superstack $X^{\circledS \mathrm{log}}$ over $S^{\circledS \mathrm{log}}$ such that $X^\circledS/S^\circledS$ is proper and,
for each geometric point $q$ of $X_b$,
there exists a $(1|1)$-chart $(Y^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}})$ on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ around $q$ such that $U^\mathrm{log}$ is a log local twisted curve over $S_b^\mathrm{log}$. We shall refer to such a $(1|1)$-chart $(Y^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}})$ as a {\bf log twisted $(1|1)$-chart} (around $q$) on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$. \item[(ii)]
Let $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ be a log twisted $(1|1)$-curve over $S^{\circledS \mathrm{log}}$. Then, the induced stack $X_t$ is a twisted curve over $S_t$.
We shall say that
$X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ is {\bf of genus $g$} if $X_t/S_t$ is of genus $g$ in the sense of ~\cite{Chi1}, Definition 2.4.1. \end{itemize} \end{defi}
\subsection{Pointed log twisted $(1|1)$-curves} \label{S33} \leavevmode\\
In the rest of the present paper, let us fix a pair of nonnegative integers $(g,r)$ satisfying $2g-2+r >0$.
\begin{defi}
\label{D010}\leavevmode\\ \ \ \
An {\bf $r$-pointed log twisted $(1|1)$-curve of genus $g$} over $S^{\circledS \mathrm{log}}$ is a collection of data \begin{equation} \label{e015} \mathfrak{X}^{\circledS \bigstar} :=\big(X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{ [\sigma^\circledS_i]
\}_{i =1}^r \big), \end{equation} where \begin{itemize} \item[$\bullet$]
$X^{\circledS \mathrm{log}}$ denotes a log twisted $(1|1)$-curve of genus $g$ over $S^{\circledS \mathrm{log}}$;
\item[$\bullet$]
$ [\sigma^\circledS_i]$ (for each $i =1, \cdots, r $) denotes a closed subsuperscheme of $X^\circledS$ over $S^\circledS$ represented by a closed immersion $\sigma_i^\circledS : \mathbb{A}^{0|1}_{S^\circledS} \rightarrow X^\circledS$ over $S^\circledS$,
\end{itemize}
satisfying the following conditions:
\begin{itemize}
\item[(i)]
$\mathbb{A}^{0|1}_{S^\circledS} \times_{\sigma^\circledS_i, X^\circledS, \sigma^\circledS_j} \mathbb{A}^{0|1}_{S^\circledS} = \emptyset$
for any pair $(i, j)$ with $i \neq j$;
\item[(ii)] The smooth locus $X_t^{\mathrm{sm}}$ of $X_t$ (over $S_t$) may be represented by a scheme over $S_t$ and the image $\mathrm{Im} ( (\sigma_i)_t)$ of each $(\sigma_i)_t$ lies in $X_t^{\mathrm{sm}}$.
\end{itemize} Let $\mathfrak{X}^{\circledS \bigstar} := (X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{[\sigma_i^\circledS] \}_{i=1}^r)$
be an $r$-pointed log twisted $(1|1)$-curve of genus $g$ over $S^{\circledS \mathrm{log}}$. Then, the collection of data \begin{align} \label{e016} \mathfrak{X}_t^\bigstar := (X_t / S_t, \{ [ (\sigma_i)_t ] \}_{i=1}^r) \end{align} forms an $r$-pointed twisted curve of genus $g$ over $S_t$ (in the sense of ~\cite{AV1}, Definition 4.1.2); we shall refer to it as the {\bf underlying (pointed) twisted curve} of $\mathfrak{X}^{\circledS \bigstar}$.
\end{defi}
\begin{prop} \label{p0468} \leavevmode\\
\ \ \
Let $\mathfrak{X}^{\circledS \bigstar} := (X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{ [\sigma_i^\circledS ]\}_{i=1}^r)$ be an $r$-pointed log twisted $(1|1)$-curve of genus $g$ over $S^{\circledS \mathrm{log}}$. (We shall fix a representative $\sigma_i^\circledS$ of $[\sigma^\circledS_i]$ for each $i$.) Also, we shall fix $i \in \{ 1, \cdots, r \}$ and a geometric point $q$ of $\mathrm{Im}((\sigma_i)_b)$.
Then, there exists
a collection of data
\begin{align} \label{E001} \mathbb{U}^\bigstar := (Y^{\circledS \mathrm{log}} \stackrel{\pi^{\circledS \mathrm{log}}}{\rightarrow} X^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}}, \Sigma^U, \sigma^U),
\end{align}
where
\begin{itemize}
\item[$\bullet$]
the triple $(Y^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}})$ is a log twisted $(1|1)$-chart around $q$ on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$; \item[$\bullet$] $\pi^{\circledS \mathrm{log}}$ denotes the structure morphism of $Y^{\circledS \mathrm{log}}$ over $X^{\circledS \mathrm{log}}$; \item[$\bullet$] $\Sigma^U$ is an \'{e}tale scheme over $S_b$;
\item[$\bullet$] $\sigma^U$ is a closed immersion $\Sigma^U \rightarrow U$ over $S_b$
\end{itemize} such that the square diagram \begin{align} \begin{CD}
\Sigma^U \times_{S_b} \mathbb{A}_{S^\circledS}^{0|1} @> \mathrm{pr}_2^\circledS >> \mathbb{A}^{0|1}_{S^\circledS}
\\
@V \sigma^U \times \mathrm{id} VV @VV \sigma_i^\circledS V
\\
U \times_{S_b} \mathbb{A}_{S^\circledS}^{0 |1} @>> \pi^\circledS \circ (\eta^\circledS)^{-1} > X^\circledS \end{CD} \end{align}
is commutative and cartesian, where $\mathrm{pr}_2^\circledS$ denotes the projection to the second factor. We shall refer to such a collection of data $\mathbb{U}^\bigstar$ as a {\bf pointed log twisted $(1|1)$-chart} (around $q$) on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$. \end{prop}
\begin{proof}
We may suppose, without loss of generality, that $S^\circledS$ is affine. Let us take a log twisted $(1|1)$-chart $(Y^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}})$ around $q$ on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ such that there is no nodal point in $U$.
It follows from Proposition \ref{p0607} that there exist an \'{e}tale scheme $\Sigma^U$ over $S_b$
and a closed immersion
$\sigma^{Y \circledS} : \Sigma^U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow Y^\circledS$
which make the diagram \begin{align} \xymatrix{
\Sigma^U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}
\ar[r]^{\hspace{5mm}\mathrm{pr}_2^\circledS} \ar[d]_{\sigma^{Y \circledS}} & \mathbb{A}^{0|1}_{S^\circledS} \ar[d]^{\sigma_i^{\circledS}} \\ Y^\circledS \ar[r]_{\pi^{\circledS}}& X^{\circledS} } \end{align} commutative and cartesian.
Consider the composite
\begin{align}
\sigma_{\eta}^{Y\circledS} : \Sigma^U \times_{S_b} \mathbb{A}_{S^\circledS}^{0|1} \stackrel{\sigma^{Y \circledS}}{\rightarrow} Y^\circledS \stackrel{\eta^\circledS}{\rightarrow} U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}.
\end{align}
Applying Lemma \ref{L0102} below, we obtain an automorphism $u^\circledS$ of $U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ over $S^\circledS$ such that $u^\circledS \circ \sigma_{\eta}^{Y\circledS} = \sigma^U \times \mathrm{id}_{\mathbb{A}^{0|1}_{S^\circledS}}$ for some closed immersion $\sigma^U : \Sigma^U \rightarrow U$. Notice that (since there is no nodal point in $U$)
the log structure of $U^\mathrm{log}$ coincides with the pull-back from $S_b^\mathrm{log}$. Hence, $u^\circledS$ extends to an automorphism $u^{\circledS \mathrm{log}}$ of $U^\mathrm{log} \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ over $S^{\circledS \mathrm{log}}$. Thus, the collection of data \begin{align} \mathbb{U}^\bigstar := (Y^{\circledS \mathrm{log}}, U^\mathrm{log}, u^{\circledS \mathrm{log}} \circ \eta^{\circledS \mathrm{log}}, \Sigma^U, \sigma^U) \end{align} obtained in this manner forms the desired collection. This completes the proof of Proposition \ref{p0468}. \end{proof}
The following lemma was used in the proof of Proposition \ref{p0468}.
\begin{lemma}
\label{L0102} \leavevmode\\
\ \ \
Suppose that $S^\circledS$ is affine.
Let $\Sigma$ be an affine scheme over $S_b$ and $U$ a smooth affine scheme over $S_b$.
Then, for any closed immersion $\sigma^\circledS : \Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ over $S^{\circledS}$,
there exists an automorphism $u^\circledS$ of $U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ over $S^\circledS$ such that the composite $u^\circledS \circ \sigma^\circledS : \Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ is of the form $\sigma_0 \times \mathrm{id}_{\mathbb{A}^{0 |1}_{S^\circledS}}$ for some closed immersion $\sigma_0 : \Sigma \rightarrow U$ over $S_b$.
\end{lemma}
\begin{proof}
In the following, let us construct two morphisms $\delta_1^\circledS : U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow U$ and $\delta_2^\circledS : U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow \mathbb{A}^{0|1}_{S^\circledS}$.
First, we shall consider $\delta_2^\circledS$.
The map $\Gamma (U, \mathcal{O}_{(U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS})_f}) \rightarrow \Gamma (\Sigma, \mathcal{O}_{(\Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS})_f})$ induced by the closed immersion $\sigma^\circledS$ is surjective. Hence, by Proposition \ref{P0}, the map \begin{align} \label{EE44}
\mathrm{Hom}_{S^\circledS}(U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}, \mathbb{A}^{0|1}_{S^\circledS}) & \rightarrow \mathrm{Hom}_{S^\circledS} (\Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}, \mathbb{A}^{0|1}_{S^\circledS}) \\ h^\circledS \hspace{15mm} & \mapsto \hspace{10mm} h^\circledS \circ \sigma^\circledS \notag \end{align} obtained by composing with $\sigma^\circledS$ is surjective. Then, let us take $\delta_2^\circledS$ to be an inverse image
of the projection $\Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow \mathbb{A}^{0|1}_{S^\circledS}$ via the surjection (\ref{EE44}).
Next, we shall consider $\delta_1^\circledS$.
We shall write \begin{align}
\sigma_1^\circledS : \Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \stackrel{\sigma^\circledS}{\rightarrow} U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \stackrel{\mathrm{pr}^\circledS}{\rightarrow} U,
\end{align} where the second arrow $\mathrm{pr}^\circledS$ denotes the projection to the first factor. Also, write
\begin{align}
\widetilde{\sigma}_t^\circledS : \Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow \Sigma \stackrel{\sigma_t}{\rightarrow} U,
\end{align}
where the first arrow denotes the projection $\Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow \Sigma$ to the first factor.
Moreover, denote by $\mathcal{J}_{\Sigma}$ and $\mathcal{J}_U$
the (square nilpotent) ideal of $\mathcal{O}_{(\Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS})_b}$ and $\mathcal{O}_{(U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS})_b}$ corresponding to the closed immersions
\begin{align}
\gamma_{\Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}} : \Sigma \rightarrow (\Sigma \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS})_b \ \ \text{and} \ \
\gamma_{U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}} : U \rightarrow (U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS})_b
\end{align}
respectively.
Then, there exists an element $\partial \in \Gamma (\Sigma, \sigma_t^*(\mathcal{T}_{U/S_b})\otimes \mathcal{J}_\Sigma)$ such that $(\sigma_1)_b \boxplus^\dagger \partial = (\widetilde{\sigma}_t^{\circledS})_b$
(cf. Proposition \ref{p0404} (ii) for the definition of ``$\boxplus^\dagger$"). Since the morphism \begin{align} \label{FF07} \Gamma (U, \mathcal{T}_{U/S_b} \otimes \mathcal{J}_U) \rightarrow \Gamma (\Sigma, \sigma_t^*(\mathcal{T}_{U/S_b})\otimes \mathcal{J}_\Sigma) \end{align}
induced by $\sigma_t$
is surjective, we obtain an inverse image $\widetilde{\partial} \in \Gamma (U, \mathcal{T}_{U/S_b}\otimes \mathcal{J}_U) $ of $\partial$ via (\ref{FF07}). Thus, we obtain a morphism \begin{align}
\delta^\circledS_1 := (\mathrm{pr}_b \boxplus^\dagger \widetilde{\partial}) \circ \beta^\circledS_{U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}} : U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \rightarrow U \end{align} over $S_b$.
It follows from the definitions of $\delta_1^\circledS$ and $\delta_2^\circledS$ that the endomorphism $u^\circledS := (\delta_1^\circledS, \delta_2^\circledS)$ of $U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ turns out to be the desired automorphism. This completes the proof of Lemma \ref{L0102}. \end{proof}
\begin{rema} \label{r4f828} \leavevmode\\
\ \ \
Let $\mathfrak{X}^{\circledS \bigstar} := (X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{ [ \sigma_i^\circledS ] \}_{i=1}^r)$ be an $r$-pointed log twisted $(1|1)$-curve of genus $g$ over $S^{\circledS \mathrm{log}}$, $q$ a geometric point of $X_b$, and $\mathbb{U} := (Y^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}})$ a log twisted $(1|1)$-chart around $q$ on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$. Suppose that we are given a strict super\'{e}tale morphism $\pi_Y^{\circledS \mathrm{log}} : Y'^{\circledS \mathrm{log}} \rightarrow Y^{\circledS \mathrm{log}}$ such that $Y'^\circledS$ is affine and the image of the composite $Y'^{\circledS \mathrm{log}} \stackrel{\pi_Y^{\circledS \mathrm{log}}}{\rightarrow} Y^{\circledS \mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$ contains $q$. Then, by Proposition \ref{p0607}, there exist a strict \'{e}tale morphism $\pi_U^{\mathrm{log}} : U'^{\mathrm{log}} \rightarrow U^\mathrm{log}$ and an isomorphism
$\eta'^{\circledS \mathrm{log}} : Y'^{\circledS \mathrm{log}} \isom U'^{\mathrm{log}} \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ over $S^{\circledS \mathrm{log}}$ (hence $U'$ is affine) which make the square diagram \begin{align} \xymatrix{
Y'^{\circledS \mathrm{log}} \ar[r]^{\eta'^{\circledS \mathrm{log}}} \ar[d]_{\pi_Y^{\circledS \mathrm{log}}} & U'^{\mathrm{log}} \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} \ar[d]^{\pi_U^{\mathrm{log}} \times \mathrm{id}} \\
Y^{\circledS \mathrm{log}} \ar[r]^{\eta^{\circledS \mathrm{log}}} & U^\mathrm{log} \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS} } \end{align} commutative and cartesian. (Such a pair $(U'^{\mathrm{log}}, \eta'^{\circledS \mathrm{log}})$ is uniquely determined up to isomorphism.)
We shall write \begin{align}
\mathbb{U} |_{Y'^{\circledS \mathrm{log}}} := (Y'^{\circledS \mathrm{log}}, U'^{\mathrm{log}}, \eta'^{\circledS \mathrm{log}}), \end{align}
which forms a log twisted $(1|1)$-chart around $q$ on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$.
Suppose further that $q \in \mathrm{Im} ((\sigma_i)_b)$ and we are given
$\Sigma^U$ and $\sigma^U$ as in (\ref{E001}) for which the collection of data $\mathbb{U}^\bigstar := (Y^{\circledS \mathrm{log}}, U^\mathrm{log}, \eta^{\circledS \mathrm{log}}, \Sigma^U, \sigma^U)$ forms
a pointed log twisted $(1|1)$-chart around $q$ on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$.
Let us write \begin{align} \Sigma^{U'} := \Sigma^U \times_{\sigma^U, U, \pi_U} U' \ \ \text{and} \ \ \sigma^{U'} := \sigma^U \times \mathrm{id}_{U'} : \Sigma^{U'} \rightarrow U'. \end{align}
Then, \begin{align} \label{E1001}
\mathbb{U}^\bigstar |_{Y'^{\circledS \mathrm{log}}} := (Y'^{\circledS \mathrm{log}}, U'^{\mathrm{log}}, \eta'^{\circledS \mathrm{log}}, \Sigma^{U'}, \sigma^{U'}) \end{align}
forms a pointed log twisted $(1|1)$-chart around $q$ on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$.
\end{rema}
\subsection{Superconformal structures} \label{S34} \leavevmode\\
Let us fix an $r$-pointed log twisted $(1|1)$-curve $\mathfrak{X}^{\circledS \bigstar} :=(X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{ [\sigma_i^{\circledS}]\}_{i =1}^r)$ of genus $g$ over $S^{\circledS \mathrm{log}}$. We shall construct a new log structure on $X^\circledS$ as follows. The ideal sheaf $\mathcal{I}_i \subseteq \mathcal{O}_{X_b}$ ($i =1, \cdots, r$) defining the closed immersion $(\sigma_i)_b$ is, by Proposition \ref{p0468}, an invertible sheaf.
As explained in ~\cite{KATO}, Complement 1, it corresponds to a log structure $\alpha_{X_b}^{\sigma_i} : \mathcal{M}_{X_b}^{\sigma_i} \rightarrow \mathcal{O}_{X_b}$. We shall write \begin{align} \mathcal{M}^\bigstar_{X_b} := \mathcal{M}_{X_b} \oplus_{\mathcal{O}_{X_b}^\times} (\mathcal{M}^{\sigma_1}_{X_b} \oplus_{\mathcal{O}^\times_{X_b}} \cdots \oplus_{\mathcal{O}^\times_{X_b}} \mathcal{M}^{\sigma_r}_{X_b}) \end{align} and define a log structure $\alpha^{\bigstar}_{X_b}$
to be the amalgam \begin{align} \alpha^\bigstar_{X_b} := (\alpha_{X_b}, (\alpha^{\sigma_1}_{X_b}, \cdots, \alpha_{X_b}^{\sigma_r})) : \mathcal{M}^\bigstar_{X_b} \rightarrow \mathcal{O}_{X_b}. \end{align} We shall denote by \begin{align} X^{\circledS \bigstar\text{-}\mathrm{log}} := (X^{\circledS }, \alpha^\bigstar_{X_b}) \end{align} the resulting log superstack over $S^{\circledS \mathrm{log}}$, which admits a natural morphism $X^{\circledS \bigstar\text{-}\mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$. If $f^{\circledS \mathrm{log}} : X^{\circledS \mathrm{log}} \rightarrow S^{\circledS \mathrm{log}}$ denotes the structure morphism of $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$, then we shall write \begin{align} f^{\circledS \bigstar\text{-}\mathrm{log}} : X^{\circledS \bigstar \text{-}\mathrm{log}} \rightarrow S^{\circledS \mathrm{log}} \end{align}
for the composite of $f^{\circledS \mathrm{log}}$ with $X^{\circledS \bigstar \text{-}\mathrm{log}} \rightarrow X^{\circledS \mathrm{log}}$. One verifies that $ X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}$ is log supersmooth of relative superdimension $1|1$. In particular, the $\mathcal{O}_{X^\circledS }$-supermodule $\mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}$ (as well as $\Omega_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}$) is a supervector bundle of superrank $1|1$ (cf. Proposition \ref{p0404} (i)).
\begin{defi} \label{D02}\leavevmode\\
\begin{itemize} \item[(i)]
Let $X^{\circledS \mathrm{log}}$ be a log supersmooth superscheme over $S^{\circledS \mathrm{log}}$ of relative superdimension $1|1$. A {\bf superconformal structure} on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ is a subsupervector bundle $\mathcal{D}$ of superrank $0|1$ of $\mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ (i.e., $\mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}/\mathcal{D}$ is a supervector bundle of superrank $1|0$) such that the {\it $\mathcal{O}_{X^\circledS}$-linear} morphism \begin{align} \label{e032} (\mathcal{D}^{\otimes 2} :=) \ \mathcal{D} \otimes_{\mathcal{O}_{X^\circledS}} \mathcal{D} &\rightarrow \mathcal{T}_{X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}/\mathcal{D} \\ \partial_1 \otimes \partial_2 \hspace{5mm} &\mapsto \frac{1}{2} \cdot\overline{ [\partial_1, \partial_2]} \notag \end{align} (where $\partial_1$ and $\partial_2$ are local sections of $\mathcal{D}$) is an isomorphism. \item[(ii)] An {\bf $r$-pointed log twisted $\text{SUSY}_1$ curve of genus $g$} over $S^{\circledS \mathrm{log}}$ is a collection of data \begin{equation} {^{\S_1} \mathfrak{Y}}_{}^{\circledS \bigstar} :=(Y^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{[\sigma_i^\circledS] \}_{i=1}^r, \mathcal{D}) \end{equation}
consisting of an $r$-pointed log twisted $(1|1)$-curve $(Y^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{[\sigma_i^\circledS ] \}_{i=1}^r)$ of genus $g$ over $S^{\circledS \mathrm{log}}$ and a superconformal structure $\mathcal{D}$ on $Y^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}$. \end{itemize}
\end{defi}
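For orientation, the following direct computation (a sketch, carried out on the local model $Z^\circledS := U \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ with an \'{e}tale relative coordinate $z$ and odd coordinate $\psi$, i.e., the case $a = 1$ of Proposition \ref{p01033} below) verifies the isomorphism condition (\ref{e032}) for the prototypical distribution $\mathcal{D} := \mathcal{O}_{Z^\circledS} \cdot D$ with $D := \partial_\psi + \psi \cdot \partial_z$:

```latex
% Using \partial_\psi^2 = 0, (\psi \partial_z)^2 = 0 (as \psi^2 = 0 and
% \partial_z (\psi) = 0), and the super Leibniz rule, we obtain
\begin{align*}
\frac{1}{2} \cdot [D, D] \ = \ D^{2}
\ &= \ \partial_\psi^{2}
   + \big( \partial_\psi \circ \psi \partial_z
         + \psi \partial_z \circ \partial_\psi \big)
   + (\psi \partial_z)^{2} \\
\ &= \ 0 + \partial_z + 0 \ = \ \partial_z.
\end{align*}
% Hence (\ref{e032}) sends the generator D \otimes D to the class of
% \partial_z, which generates the quotient; it is thus an isomorphism.
```

In particular, $\mathcal{T}_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}/\mathcal{D}$ is freely generated by the image of $\partial_z$, so $\mathcal{D}$ is a superconformal structure in the sense of (i) above.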
\begin{defi} \label{D021}\leavevmode\\ \ \ \
For $j \in \{1, 2\}$, let $S_j^{\circledS \mathrm{log}}$ be an fs log superscheme
and
${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}_j := (f_j^{\circledS \mathrm{log}} : X_j^{\circledS \mathrm{log}} \rightarrow S_j^{\circledS \mathrm{log}}, \{ [\sigma^\circledS_{j, i}] \}_{i=1}^r, \mathcal{D}_j)$
an $r$-pointed log twisted $\text{SUSY}_1$ curve of genus $g$ over $S_j^{\circledS \mathrm{log}}$. \begin{itemize} \item[(i)]
A {\bf superconformal morphism} from ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}_1$ to ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}_2$ is a pair \begin{align} {^{\S_1} \Phi}^{\circledS \bigstar} : = (\Phi^{\circledS \mathrm{log}}, \phi^{\circledS \mathrm{log}})
\end{align} consisting of two morphisms $\Phi^{\circledS \mathrm{log}} : X_1^{\circledS \mathrm{log}} \rightarrow X_2^{\circledS \mathrm{log}}$, $\phi^{\circledS \mathrm{log}} : S_1^{\circledS \mathrm{log}} \rightarrow S_2^{\circledS \mathrm{log}}$ such that
\begin{itemize} \item[$\bullet$]
the square diagram
\begin{align} \xymatrix{ X_1^{\circledS \mathrm{log}} \ar[r]^{\Phi^{\circledS \mathrm{log}}} \ar[d]_{f_1^{\circledS \mathrm{log}}} & X_2^{\circledS \mathrm{log}} \ar[d]^{f_2^{\circledS \mathrm{log}}} \\ S_1^{\circledS \mathrm{log}} \ar[r]_{\phi^{\circledS \mathrm{log}}} & S_2^{\circledS \mathrm{log}} }
\end{align}
is commutative and cartesian;
\item[$\bullet$] $[\sigma^\circledS_{1, i}] = \Phi^{\circledS *} ([\sigma^\circledS_{2, i}])$ (for any $i \in \{ 1, \cdots, r \}$) and
$\mathcal{D}_1 = \Phi^{\circledS *} (\mathcal{D}_2)$ via the isomorphism $\mathcal{T}_{X_1^{\circledS \bigstar\text{-}\mathrm{log}}/S_1^{\circledS \mathrm{log}}}\isom \Phi^{\circledS*}(\mathcal{T}_{X_2^{\circledS \bigstar\text{-}\mathrm{log}}/S_2^{\circledS \mathrm{log}}})$ induced by $\Phi^{\circledS \mathrm{log}}$.
\end{itemize} \item[(ii)] Suppose further that $S_1^{\circledS \mathrm{log}} = S_2^{\circledS \mathrm{log}}$ ($=: S^{\circledS \mathrm{log}}$). A {\bf superconformal isomorphism over $S^{\circledS \mathrm{log}}$} from ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}_1$ to ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}_2$
is a superconformal morphism ${^{\S_1} \Phi}^{\circledS \bigstar} :=(\Phi^{\circledS \mathrm{log}}, \phi^{\circledS \mathrm{log}}) :{^{\S_1} \mathfrak{X}}^{\circledS \bigstar}_1 \rightarrow {^{\S_1} \mathfrak{X}}^{\circledS \bigstar}_2$
such that $\phi^{\circledS \mathrm{log}} = \mathrm{id}_{S^{\circledS \mathrm{log}}}$ and $\Phi^{\circledS \mathrm{log}}$ is an isomorphism.
\end{itemize}
\end{defi}
In the following Proposition \ref{p01033}, we give an explicit description of superconformal structures (cf., e.g., ~\cite{Witten2}, Lemma 3.1, for the case of smooth $\text{SUSY}_1$ curves over $\mathbb{C}$, i.e., super Riemann surfaces).
\begin{prop} \label{p01033} \leavevmode\\
\ \ \
Let $U^\mathrm{log}$ be a log smooth scheme over $S_b^\mathrm{log}$ of relative dimension $1$.
(In particular, $Z^{\circledS \mathrm{log}} := U^\mathrm{log} \times_{S_b} \mathbb{A}^{0|1}_{S^\circledS}$ is a log supersmooth superscheme over $S^{\circledS \mathrm{log}}$ of relative superdimension $1|1$).
Suppose that
we are given an element $z \in \Gamma (U, \mathcal{M}_{U})$ such that $\Omega_{U^\mathrm{log}/S_b^\mathrm{log}} \cong \mathcal{O}_U \cdot d \mathrm{log} (z)$.
Let us regard $d \mathrm{log} (z)$ and $d (\psi)$ as sections of $\Omega_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ via the projections $Z^{\circledS \mathrm{log}} \rightarrow U^\mathrm{log}$ and $Z^{\circledS \mathrm{log}} \rightarrow \mathbb{A}^{0|1}_{S^\circledS}$ respectively; these sections give a decomposition
$\Omega_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} \cong \mathcal{O}_{Z^\circledS} \cdot d (\psi) \oplus \mathcal{O}_{Z^{\circledS}} \cdot d \mathrm{log} (z)$.
In particular, we have
\begin{align}
\mathcal{T}_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} \cong \mathcal{O}_{Z^\circledS} \cdot \partial_\psi \oplus \mathcal{O}_{Z^{\circledS}} \cdot \partial_z,
\end{align}
where $\{ \partial_\psi, \partial_z \}$ is the dual basis of $\{ d (\psi), d \mathrm{log} (z) \}$.
Then, the following assertions are satisfied.
\begin{itemize}
\item[(i)]
For each $a \in \Gamma (Z_b, \mathcal{O}_{Z_b}^\times)$, the subsupermodule
\begin{align}
\mathcal{D}_a := \mathcal{O}_{Z^\circledS} \cdot (\partial_\psi + a \psi \cdot \partial_z)
\end{align}
of $\mathcal{T}_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$ forms a superconformal structure on $Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$.
Moreover, the assignment $a \mapsto \mathcal{D}_a$ determines a bijection between the set $\Gamma (Z_b, \mathcal{O}_{Z_b}^\times)$ and the set of superconformal structures on $Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$.
\item[(ii)] Let us take two superconformal structures on $Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ of the form $\mathcal{D}_a$, $\mathcal{D}_b$ for some $a$, $b \in \Gamma (Z_b, \mathcal{O}_{Z_b}^\times)$. Suppose that there exists an element $c \in \Gamma (Z_b, \mathcal{O}_{Z_b}^\times)$ such that $c^2 \cdot a = b$. (According to Proposition \ref{p0607}, such an element $c$ exists after possibly replacing $U$ with its \'{e}tale covering.)
If we write $\iota_{c}$ for the automorphism of $Z^{\circledS \mathrm{log}}$ over $U^\mathrm{log} \times_{S_b} S^\circledS$ given by assigning $\psi \mapsto c \cdot \psi$, then the isomorphism $\mathcal{T}_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} \isom \iota_{c}^* (\mathcal{T}_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}})$ induces an isomorphism $\mathcal{D}_a \isom \iota_{c}^*(\mathcal{D}_b)$.
\end{itemize}
\end{prop}
\begin{proof}
First, we consider assertion (i). For each $a \in \Gamma (Z_b, \mathcal{O}_{Z_b}^\times)$, we have \begin{align} & \ \ \ \ \frac{1}{2} \cdot [\partial_\psi + a \psi \cdot \partial_z, \partial_\psi + a \psi \cdot \partial_z] \\ & = \partial_\psi^2 + ((a \psi \cdot \partial_z) \circ \partial_\psi + \partial_\psi \circ (a \psi \cdot \partial_z)) + (a \psi \cdot \partial_z)^2 \notag \\ & = 0 + a \cdot \partial_z + 0 \notag \\ & = a \cdot \partial_z. \notag \end{align} Hence, (since the sections $\{ \partial_\psi + a \psi \cdot \partial_z, a \cdot \partial_z \}$ generate $\mathcal{T}_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}$) $\mathcal{D}_a$ forms a superconformal structure on $Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$.
Next, we shall consider the bijectivity of the assignment $a \mapsto \mathcal{D}_a$. Let $\mathcal{D}$ be a superconformal structure on $Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$. There exists an open covering $\{ Z_\gamma \}_\gamma$ of $Z_b$ such that each restriction $\mathcal{D} |_{Z_\gamma}$ may be generated by some $\partial_\gamma \in \Gamma (Z_\gamma, \mathcal{D})$.
The section $\partial_\gamma$ may be described as $\partial_\gamma := a_\gamma \psi \cdot \partial_z + b_\gamma \cdot \partial_\psi$ (where $a_\gamma, b_\gamma \in \Gamma (Z_\gamma, \mathcal{O}_{Z_b})$).
Then,
\begin{align}
\partial^{2}_\gamma := \frac{1}{2} \cdot [ \partial_\gamma, \partial_\gamma ] = a_\gamma b_\gamma \cdot \partial_z + a_\gamma (\partial_z (b_\gamma)) \psi \cdot \partial_\psi.
\end{align}
Since $\{\partial_\gamma, \partial^{2}_\gamma \}$ generates $\mathcal{T}_{Z^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}}|_{Z_\gamma}$, both $a_\gamma$ and $b_\gamma$ lie in $\Gamma (Z_\gamma, \mathcal{O}_{Z_b}^\times)$.
Thus, there exists a {\it unique} element $\partial'_\gamma$ (i.e., $\partial'_\gamma := b_\gamma^{-1} \cdot \partial_\gamma$) in $\Gamma (Z_\gamma, \mathcal{D})$ of the form $\partial_\psi + a'_\gamma \psi \cdot \partial_z$ (for some $a'_\gamma \in \Gamma (Z_\gamma, \mathcal{O}_{Z_b}^\times)$).
In particular, the local sections $\{ \partial'_\gamma \}_\gamma$ may be glued together to give an element of $\Gamma (Z_b, \mathcal{D})$ of the form $\partial_\psi + a' \psi \cdot \partial_z$ (for a unique $a' \in \Gamma (Z_b, \mathcal{O}_{Z_b}^\times)$). This assignment $\mathcal{D} \mapsto a'$ determines an inverse to the assignment $a \mapsto \mathcal{D}_a$. Consequently, $a \mapsto \mathcal{D}_a$ is bijective, as desired.
Finally, assertion (ii) follows immediately from the definition of $\iota_c$. \end{proof}
In particular, we have the following assertion.
\begin{cor} \label{c01033} \leavevmode\\
\ \ \
Let ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar} := (X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}, \{ [\sigma_i^\circledS] \}_{i=1}^r, \mathcal{D})$ be an $r$-pointed log twisted $\text{SUSY}_1$ curve of genus $g$ over $S^{\circledS \mathrm{log}}$.
Then, there exists a collection of data \begin{align} \label{E50} \{ (Y_\gamma^{\circledS \mathrm{log}} \stackrel{\pi_\gamma^{\circledS \mathrm{log}}}{\rightarrow}X^{\circledS \mathrm{log}}, U_\gamma^\mathrm{log}, \eta_\gamma^{\circledS \mathrm{log}}, z_\gamma)\}_\gamma, \end{align} where \begin{itemize} \item[$\bullet$]
$\{ (Y_\gamma^{\circledS \mathrm{log}} \stackrel{\pi_\gamma^{\circledS \mathrm{log}}}{\rightarrow}X^{\circledS \mathrm{log}}, U_\gamma^\mathrm{log}, \eta_\gamma^{\circledS \mathrm{log}}) \}_\gamma$ is a collection of log twisted $(1|1)$-charts on $X^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}$ such that $\coprod_\gamma Y^{\circledS \mathrm{log}}_\gamma \rightarrow X^{\circledS \mathrm{log}}$ is a strict super\'{e}tale covering of $X^{\circledS \mathrm{log}}$; \item[$\bullet$] Each $z_\gamma$ is an element of $\Gamma (U_\gamma, \mathcal{M}_{U_\gamma})$ such that $d \mathrm{log} (z_\gamma)$ generates $\Omega_{U_\gamma^\mathrm{log}/S_b^\mathrm{log}}$ and
the superconformal structure $\mathcal{D} |_{Y_\gamma^{\circledS \mathrm{log}}}$ on $Y^{\circledS \mathrm{log}}_\gamma/S^{\circledS \mathrm{log}}$ obtained by restricting $\mathcal{D}$ to $Y_\gamma^{\circledS \mathrm{log}}$ coincides with \begin{align}
\mathcal{O}_{U_\gamma \times_{S_b}\mathbb{A}_{S^\circledS}^{0|1}} \cdot (\partial_\psi + \psi \cdot \partial_{z_\gamma}) \subseteq \mathcal{T}_{U_\gamma^\mathrm{log} \times_{S_b}\mathbb{A}_{S^\circledS}^{0|1}/S^{\circledS \mathrm{log}}} \end{align}
(where $\{ \partial_\psi, \partial_{z_\gamma} \}$ is the dual basis of $\{ d (\psi), d \mathrm{log} (z_\gamma) \}$) via the isomorphism $\mathcal{T}_{Y_\gamma^{\circledS \mathrm{log}}/S^{\circledS \mathrm{log}}} \isom (\eta_\gamma^{\circledS})^*(\mathcal{T}_{U_\gamma^\mathrm{log} \times_{S_b}\mathbb{A}_{S^\circledS}^{0|1}/S^{\circledS \mathrm{log}}})$ induced by $\eta_\gamma^\circledS$. \end{itemize}
\end{cor}
\subsection{Kodaira-Spencer morphisms} \label{S35} \leavevmode\\
Let ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar} := (f^{\circledS \mathrm{log}} : X^{\circledS \mathrm{log}} \rightarrow S^{\circledS \mathrm{log}}, \{ [\sigma_i^\circledS] \}_{i=1}^r, \mathcal{D})$ be an $r$-pointed log twisted $\text{SUSY}_1$ curve of genus $g$ over $S^{\circledS \mathrm{log}}$.
Let us define an $f_b^{-1}(\mathcal{O}_{S^\circledS})$-subsupermodule $\mathcal{T}_{X^{\circledS \bigstar\text{-}\mathrm{log}}/S_0}^\mathcal{D}$
(resp., $\mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}^\mathcal{D}$)
of $\mathcal{T}_{X^{\circledS \bigstar\text{-}\mathrm{log}}/S_0}$ (resp., $\mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}$) to be \begin{align}
\mathcal{T}_{X^{\circledS \bigstar\text{-}\mathrm{log}}/S_0}^\mathcal{D} & := \{ \partial \in \mathcal{T}_{X^{\circledS \bigstar\text{-}\mathrm{log}}/S_0} \ | \ [\partial, \mathcal{D}] \subseteq \mathcal{D} \} \\ (\text{resp.,} \ \mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}^\mathcal{D} & := \mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}} \cap \mathcal{T}_{X^{\circledS \bigstar\text{-}\mathrm{log}}/S_0}^\mathcal{D}). \notag \end{align} Since $X^{\circledS \bigstar \text{-}\mathrm{log}}$ is log supersmooth over $S^{\circledS \mathrm{log}}$, the dual of the sequence (\ref{E20}) gives rise to a short exact
sequence of $\mathcal{O}_{X^\circledS}$-supermodules:
\begin{align} \label{S0011} 0 \rightarrow \mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}^\mathcal{D} \rightarrow \mathcal{T}_{X^{\circledS \bigstar\text{-}\mathrm{log}}/S_0}^\mathcal{D} \rightarrow f^{\circledS *}(\mathcal{T}_{S^{\circledS \mathrm{log}}/S_0}) \rightarrow 0. \end{align} (Here, the pulled-back $\mathcal{O}_{S^\circledS}$-supermodule $f^{\circledS *}(-)$ via $f^\circledS$ defined preceding Definition \ref{d32} may also be defined in our situation, i.e., where $X^\circledS$ is a superstack.) The higher direct image $\mathbb{R}^1 f_{b*}(\mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}^\mathcal{D})$ naturally admits a structure of $\mathcal{O}_{S^\circledS}$-supermodule. Denote by $\mathbb{R}^1 f_{*}^\circledS(\mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}^\mathcal{D})$ the resulting $\mathcal{O}_{S^\circledS}$-supermodule. The connecting homomorphism of (\ref{S0011}) yields an $\mathcal{O}_{S^\circledS}$-linear morphism \begin{align} \mathcal{K} \mathcal{S}({^{\S_1} \mathfrak{X}}^{\circledS \bigstar}) : \mathcal{T}_{S^{\circledS \mathrm{log}}/S_0} \rightarrow \mathbb{R}^1 f_{*}^\circledS (\mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}^\mathcal{D}), \end{align} which is referred to as the {\bf Kodaira-Spencer morphism} of ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$.
The following proposition will be used in the discussion in Remark \ref{r4} and Proposition \ref{P001}.
\begin{prop}
\label{p010} \leavevmode\\
\ \ \
There exists a canonical $f_b^{-1}(\mathcal{O}_{S^\circledS})$-linear isomorphism $\mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}^\mathcal{D} \isom \mathcal{D}^{\otimes 2}$.
\end{prop}
\begin{proof} The assertion follows from an argument similar to that in the proof of ~\cite{LR}, Lemma 2.1, together with Corollary \ref{c01033} of the present paper. \end{proof}
\subsection{Stable log twisted $\text{SUSY}_1$ curves} \label{S36} \leavevmode\\
Let $\lambda$ be
an even positive integer invertible in $S_0$. Let us recall from ~\cite{Chi1}, Definition 4.1.3 and Remark 4.2.6, the notion of a $\lambda$-stable twisted curve. We shall write
\begin{align} {^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda} \end{align}
for the moduli stack classifying $r$-pointed $\lambda$-stable twisted curves over $S_0$ of genus $g$. It is a geometrically connected, proper, and smooth Deligne-Mumford stack over $S_0$ of relative dimension $3g-3+r$ (cf. ~\cite{Chi1}, Corollary 4.2.8). Denote by $(\mathfrak{C}, \{ [\sigma_{\mathfrak{C}, i}] \}_{i=1}^r)$ the tautological $r$-pointed $\lambda$-stable twisted curve over ${^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}$. Both ${^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}$ and $\mathfrak{C}$
admit canonically log structures (cf. ~\cite{O1}, Theorem 1.9). If ${^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\mathrm{log}}$ and $\mathfrak{C}^\mathrm{log}$ denote the resulting log stacks, then the structure morphism of $\mathfrak{C}$ over ${^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}$ extends to a log smooth morphism $\mathfrak{C}^\mathrm{log} \rightarrow {^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\mathrm{log}}$.
Let $s^\mathrm{log} : \underline{S}^\mathrm{log} \rightarrow {^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\mathrm{log}}$ be a morphism whose underlying morphism of stacks classifies an $r$-pointed twisted curve $\underline{\mathfrak{X}}^\bigstar := (\underline{X}/\underline{S}, \{ [\underline{\sigma}_i]\}_{i=1}^r)$ of genus $g$. Then, by equipping $\underline{X}$ with the log structure pulled-back from $\mathfrak{C}^\mathrm{log} \times_{{^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\mathrm{log}}, s^\mathrm{log}} \underline{S}^\mathrm{log} $ via the isomorphism $\underline{X} \isom \mathfrak{C} \times_{{^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}, s} \underline{S}$ induced by $s$, we have a log stack \begin{align} \label{E1111} \underline{X}^{\bigstar \text{-} \mathrm{log}} \end{align}
together with a log smooth morphism $\underline{X}^{\bigstar \text{-} \mathrm{log}} \rightarrow \underline{S}^\mathrm{log}$.
Moreover, let us write $\mathfrak{M}_{g,r}$ for the moduli stack classifying $r$-pointed proper {\it smooth} curves over $S_0$ of genus $g$.
By the natural inclusion $\mathfrak{M}_{g,r} \hookrightarrow {^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}$, we may regard $\mathfrak{M}_{g,r}$ as a dense open substack of ${^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}$. Also, this open locus coincides with
the locus on which the log structure of ${^\text{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\mathrm{log}}$ is trivial.
\begin{defi}
\label{D03}\leavevmode\\ \ \ \
A {\bf stable log twisted $\text{SUSY}_1$ curve of type $(g,r, \lambda)$}
over $S^{\circledS \mathrm{log}}$ is an $r$-pointed log twisted $\text{SUSY}_1$ curve of genus $g$ over $S^{\circledS \mathrm{log}}$ whose underlying pointed twisted curve is $\lambda$-stable.
\end{defi}
Let $(g, r, \lambda)$ be
a triple of nonnegative integers such that $2g-2+r >0$ and $\lambda$ is even.
Then, the stable log twisted $\text{SUSY}_1$ curves of type $(g, r, \lambda)$
over log superschemes
and superconformal morphisms between them form a category fibered in groupoids over $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$:
\begin{align} \label{EE12} {^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}} \hspace{2mm} &\rightarrow \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}} \\
{^{\S_1} \mathfrak{X}}^{\circledS \bigstar} \ \text{(over $S^{\circledS \mathrm{log}}$)} \hspace{2mm} &\mapsto \hspace{5mm} S^{\circledS \mathrm{log}}. \notag \end{align}
One verifies from a standard argument in descent theory that ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}} $ forms a stack with respect to the strict super\'{e}tale pretopology in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$.
We shall denote by \begin{align} \label{EE11} ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^{\mathrm{log}} \end{align} the restriction of ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ to the full subcategory $\mathfrak{S} \mathfrak{c}\mathfrak{h}_{/S_0}^{\mathrm{log}} \subseteq \mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$. The assignment
sending each stable log twisted $\text{SUSY}_1$ curve over an fs log scheme to its underlying pointed twisted curve
determines a morphism
$({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^{\mathrm{log}} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r,\lambda}$; it extends to a morphism
\begin{align} \label{e010} ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^{\mathrm{log}} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}^\mathrm{log}_{g,r,\lambda} \end{align}
of log stacks.
\section{Superconformal structure vs. spin structure}
This section is devoted to understanding the structure of the reduced stack $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^{\mathrm{log}}$ of ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$. The point is that giving a pointed log twisted $\text{SUSY}_1$ curve over a log scheme is, via a natural procedure, equivalent to giving a pointed log twisted curve equipped with an additional datum called a pointed spin structure (cf. Definition \ref{De2}).
Thus, if ${^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$ (cf. (\ref{EE33})) denotes the moduli stack classifying $\lambda$-stable log twisted curves of type $(g,r)$ equipped with a parabolic spin structure, then it is canonically isomorphic to the reduced stack $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$ of ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$, as shown in Proposition \ref{P66}.
In the following, {\it we suppose that $r$ is even}.
\subsection{Parabolic spin structures} \label{S41} \leavevmode\\
Let $\underline{S}^\mathrm{log}$ be an fs log scheme and $\underline{\mathfrak{X}}^\bigstar := (\underline{X}/\underline{S}, \{[ \underline{\sigma}_i]\}_{i=1}^r)$ be an $r$-pointed twisted curve of genus $g$ over the underlying scheme $\underline{S}$ of $\underline{S}^\mathrm{log}$. Hence, by the discussion preceding Definition \ref{D03}, we have a log smooth morphism $\underline{X}^{\bigstar \text{-} \mathrm{log}} \rightarrow \underline{S}^{\mathrm{log}}$ and $\Omega_{\underline{X}^{\bigstar \text{-} \mathrm{log}}/\underline{S}^{\mathrm{log}}}$ is a line bundle of total degree $2g-2+r$.
Note that for each $i \in \{1, \cdots, r \}$, there exists a canonical isomorphism \begin{align} \Lambda_i : \underline{\sigma}_i^* (\Omega_{\underline{X}^{\bigstar \text{-} \mathrm{log}}/\underline{S}^{\mathrm{log}}}) \isom \mathcal{O}_{\underline{S}} \end{align}
which maps any local section of the form $\underline{\sigma}_i^* (d \mathrm{log} (x))$ to $1 \in \mathcal{O}_{\underline{S}}$, where $x$ is a local function defining the closed substack $[ \underline{\sigma}_i]$ of $\underline{X}$. We shall write \begin{equation} \mathfrak{S} \mathfrak{p} \mathfrak{i} \mathfrak{n}_{\underline{\mathfrak{X}}^\bigstar} \end{equation}
for the groupoid defined as follows: \begin{itemize} \item[$\bullet$]
The {\it objects} in $\mathfrak{S} \mathfrak{p} \mathfrak{i} \mathfrak{n}_{\underline{\mathfrak{X}}^\bigstar}$ are pairs $(\mathcal{L}, \eta)$,
where $\mathcal{L}$ denotes a line bundle on $\underline{X}$ such that $\underline{\sigma}_i^*(\mathcal{L}) \cong \mathcal{O}_{\underline{S}}$ for any $i \in \{ 1, \cdots, r \}$ and $\eta$ denotes an isomorphism $\mathcal{L}^{\otimes 2} \isom \Omega_{\underline{X}^{\bigstar \text{-} \mathrm{log}}/\underline{S}^{\mathrm{log}}}$.
\item[$\bullet$] The {\it morphisms} from $(\mathcal{L}, \eta)$ to $(\mathcal{L}', \eta')$ (where both $(\mathcal{L}, \eta)$ and $(\mathcal{L}', \eta')$ are objects in $\mathfrak{S} \mathfrak{p} \mathfrak{i} \mathfrak{n}_{\underline{\mathfrak{X}}^\bigstar}$) are
isomorphisms $\iota : \mathcal{L} \isom \mathcal{L}'$ satisfying the equality $\eta' \circ \iota^{\otimes 2} = \eta$.
\end{itemize}
\begin{defi} \label{De2}\leavevmode\\ \ \ \ We shall refer to such a pair $(\mathcal{L}, \eta)$ (i.e., an object of $\mathfrak{S} \mathfrak{p} \mathfrak{i} \mathfrak{n}_{\underline{\mathfrak{X}}^\bigstar}$) as a {\bf pointed spin structure} on $\underline{\mathfrak{X}}^\bigstar$.
\end{defi}
\begin{rema} \label{r9009} \leavevmode\\
\ \ \
Suppose that we are given a line bundle $\mathcal{L}_0$ on $\underline{X}$ together with an isomorphism $\eta_0 : \mathcal{L}_0^{\otimes 2} \isom \Omega_{\underline{X}^{\bigstar \text{-} \mathrm{log}}/\underline{S}^{\mathrm{log}}}$.
Since the composite $\Lambda_i \circ \underline{\sigma}_i^*(\eta_0)$ (for each $i \in \{ 1, \cdots, r \}$) is an isomorphism $\underline{\sigma}_i^*(\mathcal{L}_0)^{\otimes 2} \isom \mathcal{O}_{\underline{S}}$, the line bundle $\underline{\sigma}_i^*(\mathcal{L}_0)$ defines a $\mu_2$-torsor over $\underline{S}$. Hence,
after possibly base-changing $\underline{\mathfrak{X}}^\bigstar$ via an \'{e}tale covering $\underline{S}' \rightarrow \underline{S}$ of $\underline{S}$,
the pair $(\mathcal{L}_0, \eta_0)$ becomes a pointed spin structure on $\underline{\mathfrak{X}}^\bigstar$. Indeed, if $\underline{S}_i$ denotes the total space of the $\mu_2$-torsor corresponding to $\underline{\sigma}_i^*(\mathcal{L}_0)$, then it suffices to choose the \'{e}tale covering $\underline{S}' = \underline{S}_1 \times_{\underline{S}} \underline{S}_2 \times_{\underline{S}} \cdots \times_{\underline{S}} \underline{S}_r$ of $\underline{S}$.
\end{rema}
Denote by \begin{equation} \label{EE33} {^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}} \end{equation} the category fibered in groupoids over ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}$ whose fiber over a morphism $\underline{S} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}$ (where $\underline{S}$ is a scheme) classifying an $r$-pointed $\lambda$-stable twisted curve $\underline{\mathfrak{X}}^\bigstar$ is the groupoid $\mathfrak{S} \mathfrak{p} \mathfrak{i} \mathfrak{n}_{\underline{\mathfrak{X}}^\bigstar}$.
One verifies from ~\cite{Chi1}, Corollary 4.2.8 (and the fact that $\mathrm{deg} (\Omega_{\underline{X}^{\bigstar \text{-} \mathrm{log}}/\underline{S}^{\underline{f}\text{-}\mathrm{log}}}) = 2g-2+r$ is even) that ${^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}$ may be represented by
a smooth proper Deligne-Mumford stack over $S_0$ of relative dimension $3g-3+r$, and that the forgetful morphism \begin{align} \label{e040}
{^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda} \end{align}
is finite and \'{e}tale.
Indeed, according to the discussion in Remark \ref{r9009},
${^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}$ turns out to be finite and \'{e}tale over the moduli stack classifying $r$-pointed $\lambda$-stable twisted curves $(\underline{X}/\underline{S}, \{ [\underline{\sigma}_i] \}_{i=1}^r)$ of genus $g$ equipped with a square root of $\Omega_{\underline{X}^{\bigstar \text{-} \mathrm{log}}/\underline{S}^{\underline{f}\text{-}\mathrm{log}}}$.
We equip ${^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}$ with
the log structure pulled-back from ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^\mathrm{log}$.
Write
\begin{align}
{^{\mathrm{tw}} \overline{\mathfrak{M}}}^\mathrm{log}_{g,r, \lambda, \mathrm{spin}}
\end{align} for the resulting fs log stack (hence, (\ref{e040}) extends to ${^{\mathrm{tw}} \overline{\mathfrak{M}}}^\mathrm{log}_{g,r, \lambda, \mathrm{spin}} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^\mathrm{log}$).
\subsection{From $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$ to ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$} \label{S42} \leavevmode\\
The main goal of this section is to prove Proposition \ref{P66} described at the end of this section, i.e., to construct an equivalence of categories $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log} \isom {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$. To this end, we first construct
a morphism $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$ over ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r,\lambda}^\mathrm{log}$.
Let $\underline{S}^\mathrm{log}$ be an fs log scheme and $\underline{S}^\mathrm{log} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$ a morphism classifying
a stable log twisted $\text{SUSY}_1$ curve
\begin{align}
{^{\S_1} \mathfrak{X}}^{\circledS \bigstar} := (f^{\circledS \mathrm{log}} : X^{\circledS \mathrm{log}} \rightarrow \underline{S}^\mathrm{log}, \{ [\sigma^{\circledS}_i] \}_{i=1}^r, \mathcal{D})
\end{align}
of type $(g, r, \lambda)$ over $\underline{S}^\mathrm{log}$. The morphism $\gamma_X : X_t \rightarrow X_b$ is an isomorphism, and allows us to identify
the $r$-pointed $\lambda$-stable twisted curve \begin{align} \mathfrak{X}_b^\bigstar := (f_b : X_b \rightarrow \underline{S}, \{ [(\sigma_i)_b]\}_{i=1}^r) \end{align}
with the underlying pointed twisted curve of $(X^{\circledS \mathrm{log}}/\underline{S}^\mathrm{log}, \{ [\sigma^{\circledS}_i]\}_{i=1}^r)$.
One verifies that there exist a line bundle $\mathcal{L}^\bigstar$ on $X_b$ and an isomorphism $\Upsilon : X^{\circledS} \isom \langle X_b, \mathcal{L}^\bigstar \rangle^\circledS$ over $\underline{S}$ which sends $[\sigma_i^\circledS]$ (for each $i \in \{1, \cdots, r\}$) to the closed subsuperscheme of $\langle X_b, \mathcal{L}^\bigstar \rangle^\circledS$ represented by the closed immersion $\langle \underline{S}, (\sigma_i)_b^*(\mathcal{L}^\bigstar)\rangle \rightarrow \langle X_b, \mathcal{L}^\bigstar \rangle^\circledS$ extending $(\sigma_i)_b$. In particular, we have \begin{align} \label{FF09} (\sigma_i)_b^*(\mathcal{L}^\bigstar) \cong \mathcal{O}_{\underline{S}}. \end{align}
The isomorphism $\Upsilon$ gives rise to an isomorphism \begin{align} \label{e06} \Omega_{X^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} \isom ( \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}}\Omega_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} ) \oplus ( \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}} \mathcal{L}^\bigstar ) \end{align} of $\mathcal{O}_{X^\circledS}$-supermodules, where, in the right-hand side, the sections of the forms $(1 \otimes a, 0)$ and $(0, 1\otimes b)$ (for some $a \in \Omega_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}}$ and $b \in \mathcal{L}^\bigstar$)
are defined to be bosonic and fermionic sections respectively. Consider its dual \begin{align} \label{e0998} \mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} \isom (\mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}}\mathcal{T}_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} ) \oplus ( \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}} \mathcal{L}^{\bigstar \vee}). \end{align} It follows from Proposition \ref{p01033} (i) that the composite morphism \begin{align} \label{e077} \mathcal{D} & \ \hookrightarrow \mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} \\ & \stackrel{(\ref{e0998})}{\isom} (\mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}}\mathcal{T}_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} ) \oplus ( \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}} \mathcal{L}^{\bigstar \vee}) \notag \\ & \ \twoheadrightarrow \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}} \mathcal{L}^{\bigstar \vee} \notag \end{align} (where the third morphism denotes the projection to the second factor)
between supervector bundles of superrank $0|1$
is surjective, and hence, an isomorphism. Moreover, we have a composite isomorphism \begin{align} \label{e088} \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}}\mathcal{T}_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} & \rightarrow (\mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}}\mathcal{T}_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} ) \oplus ( \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}} \mathcal{L}^{\bigstar \vee}) \\ & \stackrel{(\ref{e0998})^{-1}}{\isom} \mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} \notag \\ & \twoheadrightarrow \mathcal{T}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} /\mathcal{D} \notag \end{align} (where the first morphism denotes the inclusion into the first factor). The isomorphism (\ref{e032}) in our situation may be described
as an isomorphism \begin{align} \label{e036} (\mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}} (\mathcal{L}^{\bigstar \vee})^{\otimes 2} =) \ (\mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}} \mathcal{L}^{\bigstar \vee})^{\otimes 2} \isom \mathcal{O}_{X^\circledS} \otimes_{\mathcal{O}_{X_b}}\mathcal{T}_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} \end{align}
via the composite isomorphisms (\ref{e077}) and (\ref{e088}). Restricting this isomorphism to the bosonic part and taking its dual yields an isomorphism $\eta^\bigstar : (\mathcal{L}^\bigstar)^{\otimes 2} \isom \Omega_{X_b^{\bigstar\text{-}\mathrm{log}}/\underline{S}^\mathrm{log}}$. Thus, the pair $(\mathcal{L}^\bigstar, \eta^\bigstar)$ forms (thanks to (\ref{FF09}))
a pointed spin structure on $\mathfrak{X}_b^\bigstar$.
If $\underline{S} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}$ denotes the classifying morphism of $(\mathcal{L}^\bigstar, \eta^\bigstar)$, then it
extends uniquely to a morphism $\underline{S}^\mathrm{log} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}^\mathrm{log}_{g,r, \lambda, \mathrm{spin}}$ over ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r,\lambda}^\mathrm{log}$. The assignment ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar} \mapsto (\mathcal{L}^\bigstar, \eta^\bigstar)$ is functorial with respect to $\underline{S}^\mathrm{log}$ and hence, determines a morphism \begin{align} \label{e445} ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log} \end{align}
over ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r,\lambda}^\mathrm{log}$.
\subsection{From ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$ to $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$} \label{S43} \leavevmode\\
Conversely, we shall construct a morphism ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda})^\mathrm{log}_t$. Let $\underline{S}^\mathrm{log}$ be an fs log scheme and $\underline{S}^\mathrm{log} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r,\lambda, \mathrm{spin}}^\mathrm{log}$ a morphism whose underlying morphism $\underline{S} \rightarrow {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r,\lambda, \mathrm{spin}}$ classifies
a spin structure $(\mathcal{L}, \eta)$ on an $r$-pointed $\lambda$-stable twisted curve $\underline{\mathfrak{X}}^\bigstar := (\underline{f} : \underline{X} \rightarrow \underline{S}, \{ [\underline{\sigma}_i]\}_{i=1}^r)$. (In particular, we have a morphism $\underline{S}^\mathrm{log} \rightarrow \underline{S}^{\underline{f} \text{-} \mathrm{log}}$.) By fixing an isomorphism $\underline{\sigma}_i^*(\mathcal{L}) \isom \mathcal{O}_{\underline{S}}$ (for each $i = 1, \cdots, r$), we obtain a composite closed immersion
\begin{align} \underline{\sigma}^\circledS_{(\mathcal{L}, \eta), i} :
\mathbb{A}^{0|1}_{\underline{S}} \isom \langle \underline{S}, \underline{\sigma}_i^*(\mathcal{L}) \rangle^\circledS \rightarrow
\langle \underline{X}, \mathcal{L} \rangle^\circledS \end{align}
extending $\underline{\sigma}_i$.
We shall write $\underline{X}^\mathrm{log} := \underline{X}^{\underline{f} \text{-} \mathrm{log}} \times_{\underline{S}^{\underline{f} \text{-} \mathrm{log}}} \underline{S}^\mathrm{log}$, and hence, obtain a log superstack $\langle \underline{X}, \mathcal{L} \rangle^{\circledS \mathrm{log}}$ (cf. (\ref{E13})) over $\underline{S}^\mathrm{log}$. The collection of data \begin{align} \underline{\mathfrak{X}}^{\circledS \bigstar}_{(\mathcal{L}, \eta)} := (\langle \underline{X}, \mathcal{L} \rangle^{\circledS \mathrm{log}}/\underline{S}^\mathrm{log}, \{ [\underline{\sigma}^\circledS_{(\mathcal{L}, \eta), i}]\}_{i=1}^r) \end{align}
forms an $r$-pointed log twisted $(1|1)$-curve of genus $g$ over $\underline{S}^\mathrm{log}$.
Since $\mathcal{O}_{\langle \underline{X}, \mathcal{L} \rangle^\circledS} \cong \mathcal{O}_{\underline{X}}\oplus \mathcal{L}$, we obtain (cf. (\ref{e06})) a composite isomorphism \begin{align} \label{e057}
& \ \ \ \ \ \mathcal{T}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}} /\underline{S}^{\mathrm{log}}} \\
& \isom (\mathcal{O}_{\langle \underline{X}, \mathcal{L} \rangle^\circledS} \otimes_{\mathcal{O}_{\underline{X}}} \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f}\text{-}\mathrm{log}}}) \oplus
(\mathcal{O}_{\langle \underline{X}, \mathcal{L} \rangle^\circledS} \otimes_{\mathcal{O}_{\underline{X}}} \mathcal{L}^\vee)\notag \\ & \isom \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f}\text{-}\mathrm{log}}} \oplus \mathcal{L}^{\vee}\oplus \mathcal{L}^{\vee } \oplus \mathcal{O}_{\underline{X}} \notag \end{align} of $\mathcal{O}_{\underline{X}}$-modules. Consider the $\mathcal{O}_{\langle \underline{X}, \mathcal{L} \rangle^\circledS}$-linear injection \begin{align} \label{E30}
(\mathcal{O}_{\langle \underline{X}, \mathcal{L} \rangle^\circledS} \otimes_{\mathcal{O}_{\underline{X}}} \mathcal{L}^\vee =:) \ \mathcal{L}^{\vee } \oplus \mathcal{O}_{\underline{X}} & \hookrightarrow \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f}\text{-}\mathrm{log}}} \oplus \mathcal{L}^{\vee}\oplus \mathcal{L}^{\vee } \oplus \mathcal{O}_{\underline{X}} \\ (\,a, \ b\,) \hspace{3mm} &\mapsto \hspace{15mm} (\,0, \ \ a, \ \ a, \ \ b\,).\notag \end{align}
Write $\mathcal{D}_{(\mathcal{L}, \eta)}$ for the subsupervector bundle (of superrank $0|1$) of $\mathcal{T}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}} /\underline{S}^{\mathrm{log}}}$ corresponding, via the composite isomorphism (\ref{e057}), to the image of (\ref{E30}). Then, one verifies immediately that the collection of data \begin{align} {^\S \underline{\mathfrak{X}}}^{\circledS \bigstar}_{(\mathcal{L}, \eta)} := (\langle \underline{X}, \mathcal{L} \rangle^{\circledS \mathrm{log}}/\underline{S}^\mathrm{log}, \{ [\underline{\sigma}^\circledS_{(\mathcal{L}, \eta), i}]\}_{i=1}^r, \mathcal{D}_{(\mathcal{L}, \eta)}) \end{align} forms a stable log twisted $\text{SUSY}_1$ curve of type $(g, r, \lambda)$ over $\underline{S}^\mathrm{log}$ whose underlying pointed twisted curve is isomorphic to $\underline{\mathfrak{X}}^\bigstar$. It determines a morphism $\underline{S}^\mathrm{log} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda})^\mathrm{log}_t$. By letting $\underline{S}^\mathrm{log}$ vary over the fs log schemes, we obtain a morphism \begin{align} \label{e345} {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda})^\mathrm{log}_t \end{align}
over ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r,\lambda}^\mathrm{log}$.
\begin{rema} \label{r4} \leavevmode\\
\ \ \
We keep the above notation.
By applying Proposition \ref{p010}, we have a composite isomorphism of $\underline{f}^{-1} (\mathcal{O}_{\underline{S}})$-modules:
\begin{align} \label{e059}
\mathcal{T}^{\mathcal{D}_{(\mathcal{L}, \eta)}}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} & \isom (\mathcal{D}_{(\mathcal{L}, \eta)})^{\otimes 2} \\
& \isom \mathcal{O}_{\langle \underline{X}, \mathcal{L} \rangle^\circledS} \otimes_{\mathcal{O}_{\underline{X}}} (\mathcal{L}^{\vee})^{\otimes 2} \notag \\ & \isom \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f} \text{-}\mathrm{log}}} \oplus \mathcal{L}^\vee, \notag
\end{align} where the second isomorphism follows from the definition of $\mathcal{D}_{(\mathcal{L}, \eta)}$ (cf. (\ref{E30})). Let us equip $ \mathcal{T}^{\mathcal{D}_{(\mathcal{L}, \eta)}}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}}$ with a structure of $\mathcal{O}_{\langle \underline{X}, \mathcal{L} \rangle^\circledS}$-supermodule via the first isomorphism in (\ref{e059}). Then, (\ref{e059}) induces two isomorphisms of $\mathcal{O}_{\underline{X}}$-modules \begin{align} \label{E006} ( \mathcal{T}^{\mathcal{D}_{(\mathcal{L}, \eta)}}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}})_b \cong \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f} \text{-}\mathrm{log}}}, \hspace{5mm} ( \mathcal{T}^{\mathcal{D}_{(\mathcal{L}, \eta)}}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}})_f \cong \mathcal{L}^\vee. \end{align} In particular, we have \begin{align} \label{E066} \mathrm{deg} (( \mathcal{T}^{\mathcal{D}_{(\mathcal{L}, \eta)}}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}})_b) = -2g+2-r, \hspace{5mm} \mathrm{deg} (( \mathcal{T}^{\mathcal{D}_{(\mathcal{L}, \eta)}}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}})_f) = -g+1-\frac{r}{2}. \end{align}
The inclusion $ \mathcal{T}^{\mathcal{D}_{(\mathcal{L}, \eta)}}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}} \hookrightarrow \mathcal{T}_{\langle \underline{X}, \mathcal{L} \rangle^{\circledS \bigstar \text{-}\mathrm{log}}/\underline{S}^\mathrm{log}}$ corresponds, via the isomorphisms (\ref{e057}) and (\ref{e059}), to
the inclusion
\begin{align} \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f} \text{-}\mathrm{log}}} \oplus \mathcal{L}^\vee & \rightarrow \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f} \text{-}\mathrm{log}}} \oplus \mathcal{L}^{\vee}\oplus \mathcal{L}^{\vee } \oplus \mathcal{O}_{\underline{X}} \\ (\,a, \ b\,) \hspace{3mm} &\mapsto \hspace{22mm} (\,a, \ b, \ b, \ 0\,).\notag
\end{align}
\end{rema}
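Indeed, the degrees recorded in (\ref{E066}) may be verified directly from the definition of a pointed spin structure: since $\eta : \mathcal{L}^{\otimes 2} \isom \Omega_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f} \text{-}\mathrm{log}}}$ and $\mathrm{deg} (\Omega_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f} \text{-}\mathrm{log}}}) = 2g-2+r$, we have
\begin{align}
\mathrm{deg} (\mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/\underline{S}^{\underline{f} \text{-}\mathrm{log}}}) = -2g+2-r, \hspace{5mm} \mathrm{deg} (\mathcal{L}^\vee) = -\frac{1}{2}(2g-2+r) = -g+1-\frac{r}{2}.
\end{align}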
\begin{prop} \label{Pt66} \leavevmode\\
\ \ \
Let ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar} := (f^{\circledS \mathrm{log}} : X^{\circledS \mathrm{log}} \rightarrow S^{\circledS \mathrm{log}}, \{ [\sigma_i^\circledS] \}_{i=1}^r, \mathcal{D})$ be a stable log twisted $\text{SUSY}_1$ curve of type $(g, r, \lambda)$ over $S^{\circledS \mathrm{log}}$. Then, $\mathbb{R}^2f_{*}^\circledS (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}) =0$. Also, the $\mathcal{O}_{S^\circledS}$-supermodule $\mathbb{R}^1f_{*}^\circledS (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}})$ is locally free of rank $3g-3+r | 2g-2+\frac{r}{2}$ and the formation of $\mathbb{R}^1f_{*}^\circledS (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}})$ commutes with base-change with respect to $S^\circledS$. Moreover, if $S^\circledS$ is affine (i.e., $S^{\circledS} = S_b$) and $\mathcal{F}$ is a quasi-coherent $\mathcal{O}_{S_b}$-module, then the natural morphism \begin{align} \mathbb{R}^1f_{b*} ((\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S_b^{\mathrm{log}}})_{(-)}) \otimes \mathcal{F} \rightarrow \mathbb{R}^1f_{b*} ((\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S_b^{\mathrm{log}}} \otimes_{f^{-1}_b (\mathcal{O}_{S_b})} f^{-1}_b (\mathcal{F}))_{(-)}), \end{align} where $(-)$ denotes either ``$b$" or ``$f$", is an isomorphism.
\end{prop}
\begin{proof} In the following, we consider $\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}$ as being equipped with a structure of $\mathcal{O}_{X^{\circledS}}$-supermodule by transporting the structure of $\mathcal{O}_{X^{\circledS}}$-supermodule on $\mathcal{D}^{\otimes 2}$ via the isomorphism $\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}} \isom \mathcal{D}^{\otimes 2}$ obtained in Proposition \ref{p010}. Let us take an algebraically closed field $k$ and a morphism $v^{\circledS} : \mathrm{Spec} (k) \rightarrow S^\circledS$, where we equip $\mathrm{Spec} (k)$ with the log structure pulled-back from $S^{\circledS \mathrm{log}}$ via $v^\circledS$. Write $\mathrm{Spec}(k)^\mathrm{log}$ for the resulting log scheme and $v^{\circledS \mathrm{log}} : \mathrm{Spec}(k)^\mathrm{log} \rightarrow S^{\circledS \mathrm{log}}$ for the morphism extending $v^\circledS$. Also, write $(X_v^{\circledS \mathrm{log}}/k^{\mathrm{log}}, \{ [\sigma_{i, v}^\circledS ] \}_{i=1}^r, \mathcal{D}_v)$ for the stable log twisted $\text{SUSY}_1$ curve over $\mathrm{Spec}(k)^\mathrm{log}$ defined as the base-change of ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$ via $v^{\circledS \mathrm{log}}$.
It follows from
Theorem \ref{p04033} that $H^2 ((X_v)_b, \mathcal{T}^{\mathcal{D}_v}_{X_v^{\circledS \bigstar \text{-}\mathrm{log}}/k^{\mathrm{log}}}) =0$.
By replacing $v^\circledS$ with the various points of $S^\circledS$, one verifies from ~\cite{Har}, Ch.\,III, Theorem 12.11, (a) (or ~\cite{QL}, \S\,5, Remark 3.21, (c)) that
$\mathbb{R}^2f_{b*} (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}}) =0$.
(Here, note that although the result of {\it loc.\,cit.} deals only with the case of schemes, one may prove its analogue for superstacks by a similar argument.
Thus, in the proof of this proposition, we shall apply the result of {\it loc.\,cit.} in the form of this superstack analogue.) Hence, the natural morphism
\begin{align}
v^{\circledS *}(\mathbb{R}^1f_{b*} (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}})) \rightarrow H^1 ((X_v)_b, \mathcal{T}^{\mathcal{D}_v}_{X_v^{\circledS \bigstar \text{-}\mathrm{log}}/k^{\mathrm{log}}})
\end{align}
is surjective for all $v^\circledS$ (cf. ~\cite{Har}, Ch.\,III, Theorem 12.11, (b)) and hence is an isomorphism. Moreover, the last reference or ~\cite{QL}, \S\,5, Remark 3.21, (c) (applied to the case $i=1$) shows that $\mathbb{R}^1f_{b*} (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}})$ is locally free. The rank of this $\mathcal{O}_{S^\circledS}$-supermodule coincides with the dimension of $H^1 ((X_v)_b, \mathcal{T}^{\mathcal{D}_v}_{X_v^{\circledS \bigstar \text{-}\mathrm{log}}/k^{\mathrm{log}}})$. But, it follows from (\ref{E066}) and Theorem \ref{p04033} that \begin{align} \mathrm{dim}_k (H^1 ((X_v)_b, (\mathcal{T}^{\mathcal{D}_v}_{X_v^{\circledS \bigstar \text{-}\mathrm{log}}/k^{\mathrm{log}}})_b)) = 3g-3+r \end{align} and \begin{align} \mathrm{dim}_k (H^1 ((X_v)_b, (\mathcal{T}^{\mathcal{D}_v}_{X_v^{\circledS \bigstar \text{-}\mathrm{log}}/k^{\mathrm{log}}})_f)) = 2g-2+\frac{r}{2}, \end{align} as desired. The compatibility of the formation of $\mathbb{R}^1f_{*}^\circledS (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}})$ with base-change over superschemes over $S^\circledS$ follows from the above discussion and the discussion in {\it loc.\,cit.}.
This completes the proof of the former assertion.
The latter assertion follows from ~\cite{QL}, \S\,5, Remark 3.21, (c). \end{proof}
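As an informal consistency check on the rank computed above (an illustration only, not part of the proof), suppose that $X_v$ is an untwisted smooth curve. The bosonic part of $\mathcal{T}^{\mathcal{D}_v}_{X_v^{\circledS \bigstar \text{-}\mathrm{log}}/k^{\mathrm{log}}}$ is then the log tangent bundle, of degree $2-2g-r$, while the fermionic part is the dual of a square root of the log cotangent bundle, of degree $-\frac{2g-2+r}{2}$ (cf. (\ref{e896})). Both degrees are negative by stability, so $h^0 = 0$ in each case, and the Riemann--Roch theorem yields \begin{align} h^1 = g-1-(2-2g-r) = 3g-3+r, \hspace{10mm} h^1 = g-1+\frac{2g-2+r}{2} = 2g-2+\frac{r}{2}, \end{align} in agreement with the asserted rank $3g-3+r \, | \, 2g-2+\frac{r}{2}$.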
\subsection{ ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$ is isomorphic to $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$} \label{S44} \leavevmode\\
One verifies that the morphisms (\ref{e445}) and (\ref{e345}) obtained previously
are mutually inverse. Thus, we obtain the following Proposition \ref{P66}. In particular, for each $r$-pointed $\lambda$-stable twisted curve $\underline{\mathfrak{X}}^\bigstar : = (\underline{f} : \underline{X} \rightarrow \underline{S}, \{ [\underline{\sigma}_{i} ]\}_{i=1}^r)$ of genus $g$ over a scheme $\underline{S}$, there exists a canonical equivalence of categories between $\mathfrak{S} \mathfrak{p} \mathfrak{i} \mathfrak{n}_{\underline{\mathfrak{X}}^\bigstar}$ and the groupoid of stable log twisted $\text{SUSY}_1$ curves ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$ over $S^{\underline{f} \text{-} \mathrm{log}}$ having $\underline{\mathfrak{X}}^\bigstar$ as the underlying pointed twisted curve.
\begin{prop} \label{P66} \leavevmode\\
\ \ \ There exists a canonical isomorphism of fibered categories \begin{align} \label{E37} ({^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda})^\mathrm{log}_t \isom {^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}
\end{align} over ${^\mathrm{tw} \overline{\mathfrak{M}}}_{g,r, \lambda}^\mathrm{log}$. In particular, $({^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda})^\mathrm{log}_t$ may be represented by a proper smooth Deligne-Mumford stack over $S_0$ of relative dimension $3g -3 +r$.
\end{prop}
\section{Deformations of stable log twisted $\text{SUSY}_1$ curves}
In this final section, we prove the main assertion, i.e., Theorem A. As discussed in \S\,\ref{S54}, the main step of the proof is to construct a canonical fermionic deformation of $ {^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$ ($\cong ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})^\mathrm{log}_t$ by Proposition \ref{P66}) in such a way that a universal stable log twisted $\text{SUSY}_1$ curve exists (uniquely). To this end, we develop (in \S\S\,5.1-5.2) log smooth deformation theory concerning log twisted $\text{SUSY}_1$ curves. By applying the results obtained in these discussions, one may construct (cf. Proposition \ref{p01}) a universal family of stable log twisted $\text{SUSY}_1$ curves over a fermionic deformation of a representation (in the sense of Remark \ref{r78} (i)) of $ {^{\mathrm{tw}} \overline{\mathfrak{M}}}_{g,r, \lambda, \mathrm{spin}}^\mathrm{log}$.
It gives rise to a representation (by a groupoid in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\circledS$) of ${^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda}^{\circledS \mathrm{log}}$ itself
(equipped with a natural log structure).
This immediately yields Theorem A, as desired (cf. \S\,\ref{S54} for the detailed discussion).
\subsection{Deformation spaces of stable log twisted $\text{SUSY}_1$ curves} \label{S51} \leavevmode\\
Let $\widetilde{S}^{\circledS \mathrm{log}}$ be an fs log superscheme and
$S^{\circledS \mathrm{log}}$ be a strict closed subsuperscheme of $\widetilde{S}^{\circledS \mathrm{log}}$ determined by a nilpotent superideal $\mathcal{I} \subseteq \mathcal{O}_{\widetilde{S}^\circledS}$ contained in $\mathcal{N}_{\widetilde{S}^\circledS}$.
Let
${^{\S_1} \mathfrak{X}}^{\circledS \bigstar} :=(f^{\circledS \mathrm{log}} : X^{\circledS \mathrm{log}} \rightarrow S^{\circledS \mathrm{log}}, \{ [\sigma^\circledS_{i}]\}_{i=1}^r, \mathcal{D})$
be a stable log twisted $\text{SUSY}_1$ curve of type $(g,r, \lambda)$ over $S^{\circledS \mathrm{log}}$. Write ${^{\S_1} \underline{\mathfrak{X}}}^{\circledS \bigstar} := (\underline{f}^{\circledS \mathrm{log}} : \underline{X}^{\circledS \mathrm{log}} \rightarrow S^\mathrm{log}_t, \{ [\underline{\sigma}_i]\}_{i=1}^r , \underline{\mathcal{D}})$ for the base-change of ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$ via the strict closed immersion $\tau_{S}^{\circledS \mathrm{log}} : S_t^\mathrm{log} \rightarrow S^{\circledS \mathrm{log}}$ extending $\tau_{S}^\circledS$.
Also, write
\begin{align} \mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar}) \end{align}
for the set of superconformal isomorphism classes of stable log twisted $\text{SUSY}_1$ curves of type $(g,r, \lambda)$
over $\widetilde{S}^{\circledS \mathrm{log}}$ extending ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$.
\begin{prop} [cf. ~\cite{LR}, Lemma 2.4] \label{P001} \leavevmode\\
\ \ \
Suppose that $\widetilde{S}^\circledS$ is affine and that $\mathcal{N}_{\widetilde{S}^\circledS} \mathcal{I} =0$ (which implies that $\mathcal{I}$ is square nilpotent and may be thought of as an $\mathcal{O}_{S_t}$-module). Then, $\mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar})$ is nonempty and has a canonical structure \begin{align} \label{E003} \mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar}) \times H^1 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_b^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_b(\mathcal{I}))_b) & \rightarrow \mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar}) \\ ( \ a, \hspace{5mm} b \ ) \hspace{65mm}& \mapsto \hspace{5mm} a \boxplus^\ddagger b
\notag \end{align}
of affine space modeled on $H^1 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_b^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_b(\mathcal{I}))_b)$. Also, if ${^{\S_1} \widetilde{\mathfrak{X}}}^{\circledS \bigstar}$ is a stable log twisted $\text{SUSY}_1$ curve in $\mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar})$, then
there is no nontrivial superconformal automorphism of ${^{\S_1} \widetilde{\mathfrak{X}}}^{\circledS \bigstar}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ which restricts to the identity morphism of ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$. \end{prop}
\begin{proof}
First, we shall prove that $\mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1}\mathfrak{X}}^{\circledS \bigstar})$ is nonempty. Let us take a collection of data
\begin{align} \mathbb{U}_I := \{ (Y_{\gamma}^{\circledS \mathrm{log}} \stackrel{\pi_\gamma^{\circledS \mathrm{log}}}{\rightarrow} X^{\circledS \mathrm{log}}, U_{\gamma}^\mathrm{log}, \eta^{\circledS\mathrm{log}}_{\gamma}, z_{\gamma}) \}_{\gamma \in I} \end{align} (indexed by a set $I$) obtained by applying Corollary \ref{c01033} to our ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$.
For each $i \in \{ 1, \cdots, r\}$, we shall write $I_i$ for the subset of $I$ consisting of the elements $\gamma$ such that $Y_{\gamma}^\circledS \times_{X^\circledS, \sigma^\circledS_{ i}} \mathbb{A}^{0|1}_{S^\circledS} \neq \emptyset$. By Proposition \ref{p0468}, we may assume, without loss of generality, that for each $\gamma \in I_i$ there exists a pair $(\Sigma_i^{U_\gamma}, \sigma_i^{U_\gamma})$ for which the collection of data \begin{align} \mathbb{U}^\bigstar_\gamma := (Y_\gamma^{\circledS \mathrm{log}}, U_\gamma^{\mathrm{log}}, \eta_\gamma^{\circledS \mathrm{log}}, \Sigma_i^{U_\gamma}, \sigma_i^{U_\gamma}) \end{align}
satisfies the condition described in Proposition \ref{p0468}. For each $\gamma \in I$ there exists (since $Y_{\gamma}^\circledS$ is affine) a log supersmooth lifting $\widetilde{\mathbb{Y}}_\gamma := (\widetilde{Y}^{\circledS \mathrm{log}}_\gamma, \widetilde{f}_\gamma^{\circledS \mathrm{log}}, \widetilde{i}^{\circledS \mathrm{log}}_{Y_\gamma})$ of $Y^{\circledS \mathrm{log}}_{\gamma}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ (cf. Corollary \ref{c0404} (iii)) and a log smooth lifting $\widetilde{U}_\gamma^\mathrm{log}$ of $U_\gamma^\mathrm{log}$ over $\widetilde{S}_b^\mathrm{log}$ together with an isomorphism $\widetilde{\eta}_\gamma^{\circledS \mathrm{log}} : \widetilde{Y}_\gamma^{\circledS \mathrm{log}} \isom \widetilde{U}^{\mathrm{log}}_\gamma \times_{\widetilde{S}_b} \mathbb{A}_{\widetilde{S}^\circledS}^{0|1}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ lifting $\eta^{\circledS \mathrm{log}}_\gamma$.
Also, $z_\gamma$ lifts to an element $\widetilde{z}_\gamma \in \mathcal{M}_{\widetilde{U}_\gamma}$. The $\mathcal{O}_{\widetilde{U}_\gamma^\mathrm{log} \times_{\widetilde{S}_b}\mathbb{A}_{\widetilde{S}^\circledS}}$-subsupermodule $\mathcal{O}_{\widetilde{U}_\gamma^\mathrm{log} \times_{\widetilde{S}_b}\mathbb{A}_{\widetilde{S}^\circledS}} \cdot (\partial_\psi + \psi \cdot \partial_{\widetilde{z}_\gamma}) \subseteq \mathcal{T}_{\widetilde{U}_\gamma^\mathrm{log} \times_{\widetilde{S}_b}\mathbb{A}_{\widetilde{S}^\circledS}/\widetilde{S}^{\circledS \mathrm{log}}}$ defines, via $\widetilde{\eta}^{\circledS \mathrm{log}}_\gamma$, a superconformal structure $\widetilde{\mathcal{D}}_{\gamma}$ on $\widetilde{Y}^{\circledS \mathrm{log}}_\gamma/\widetilde{S}^{\circledS \mathrm{log}}$ extending $\mathcal{D} |_{Y^{\circledS \mathrm{log}}_\gamma}$. After possibly replacing $\mathbb{U}_I$ with its refinement in an evident sense (cf. Remark \ref{r4f828}), we may suppose that the following three properties (i)-(iii) are satisfied: \begin{itemize}
\item[(i)]
For each pair $(\gamma_1, \gamma_2)$ with $Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}} := Y_{\gamma_1}^{\circledS \mathrm{log}} \times_{X^{\circledS \mathrm{log}}} Y_{\gamma_2}^{\circledS \mathrm{log}} \neq \emptyset$
(hence $Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}$ is affine),
there exists an isomorphism \begin{align}
\nu^{\circledS \mathrm{log}}_{\gamma_1, \gamma_2} : \widetilde{\mathbb{Y}}_{\gamma_1}^{\circledS \mathrm{log}} |_{Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}} \isom \widetilde{\mathbb{Y}}_{\gamma_2}^{\circledS \mathrm{log}} |_{Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}} \end{align}
of log supersmooth liftings which sends
$\widetilde{\mathcal{D}}_{\gamma_1} |_{Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}}$ to $\widetilde{\mathcal{D}}_{\gamma_2} |_{Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}}$ (cf. Corollary \ref{c0404} (ii) and Proposition \ref{p01033}). \item[(ii)] If $\gamma \in I_i$ ($i =1, \cdots, r$), then
there exists a pair $(\Sigma_i^{\widetilde{U}_\gamma}, \sigma_i^{\widetilde{U}_\gamma})$ consisting of a scheme $\Sigma_i^{\widetilde{U}_\gamma}$ over $\widetilde{S}_b$ with $\Sigma_i^{\widetilde{U}_\gamma} \times_{\widetilde{S}_b} S_b \cong \Sigma_i^{U_\gamma}$ and a closed immersion $\sigma_i^{\widetilde{U}_\gamma} : \Sigma_i^{\widetilde{U}_\gamma} \rightarrow \widetilde{U}_\gamma$ over $\widetilde{S}_b$ extending $\sigma_i^{U_\gamma}$;
\item[(iii)]
For each pair $(\gamma_1, \gamma_2) \in I_i \times I_i$ ($i =1, \cdots, r$) with $Y^{\circledS \mathrm{log}}_{\gamma_1, \gamma_2} \neq \emptyset$,
the restrictions to $Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}$ of two composites
\begin{align}
\Sigma_i^{\widetilde{U}_{\gamma_l}} \times_{\widetilde{S}_b} \mathbb{A}^{0|1}_{\widetilde{S}^\circledS} \stackrel{\sigma_i^{\widetilde{U}_{\gamma_l}} \times \mathrm{id}}{\rightarrow} \widetilde{U}_{\gamma_l} \times_{\widetilde{S}_b} \mathbb{A}^{0|1}_{\widetilde{S}^\circledS} \stackrel{(\widetilde{\eta}_{\gamma_l}^\circledS)^{-1}}{\rightarrow} \widetilde{Y}^{\circledS}_{\gamma_l}
\end{align}
($l =1, 2$) are compatible (in an evident sense) via $\nu^{\circledS \mathrm{log}}_{\gamma_1, \gamma_2}$. \end{itemize}
If a triple $(\gamma_1, \gamma_2, \gamma_3) \in I^{\times 3}$ satisfies that \begin{align} Y_{\gamma_1, \gamma_2, \gamma_3}^{\circledS \mathrm{log}} := Y_{\gamma_1}^{\circledS \mathrm{log}} \times_{X^{\circledS \mathrm{log}}} Y_{\gamma_2}^{\circledS \mathrm{log}} \times_{X^{\circledS \mathrm{log}}} Y_{\gamma_3}^{\circledS \mathrm{log}} \neq \emptyset, \end{align}
then there exists uniquely an element \begin{align} \nu_{\gamma_1, \gamma_2, \gamma_3}^\ddagger \in & \Gamma ((Y_{\gamma_1, \gamma_2, \gamma_3})_b, (\mathcal{T}^\mathcal{D}_{X^{\circledS \bigstar \text{-}\mathrm{log}}/S^{\circledS \mathrm{log}}} \otimes_{f_b^{-1}(\mathcal{O}_{S^\circledS})} f_b^{-1}(\mathcal{I}))_b) \\
\big( & = \Gamma ((Y_{\gamma_1, \gamma_2, \gamma_3})_t, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b)\big) \notag \end{align}
such that \begin{align} \nu_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}} \circ \nu_{\gamma_2, \gamma_3}^{\circledS \mathrm{log}} \circ \nu_{\gamma_3, \gamma_1}^{\circledS \mathrm{log}} = \mathrm{id}_{Y^{\circledS \mathrm{log}}_{\gamma_1, \gamma_2, \gamma_3}} \boxplus^\ddagger \nu_{\gamma_1, \gamma_2, \gamma_3}^\ddagger. \end{align} The collection of elements $\{\nu_{\gamma_1, \gamma_2, \gamma_3}^\ddagger \}_{\gamma_1, \gamma_2, \gamma_3}$ determines an element \begin{align} \nu^\ddagger \in H^2 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b). \end{align} But, since $S^\circledS$ is affine and $\mathrm{dim}(\underline{X}_b/S_t) =1$,
we have
\begin{align}
H^2 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b) = 0
\end{align}
(in particular, $\nu^\ddagger =0$). Thus, after possibly replacing $\mathbb{U}_I$ with its refinement and
replacing each $\nu_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}$ with a suitable isomorphism $\widetilde{\mathbb{Y}}_{\gamma_1}^{\circledS \mathrm{log}} |_{Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}} \isom \widetilde{\mathbb{Y}}_{\gamma_2}^{\circledS \mathrm{log}} |_{Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}}$, the log superschemes $\{ \widetilde{Y}_\gamma^{\circledS \mathrm{log}} \}_{\gamma \in I}$
may be glued together to a log supersmooth superstack $\widetilde{X}^{\circledS \mathrm{log}}$ over $\widetilde{S}^{\circledS \mathrm{log}}$.
For each $i \in \{ 1, \cdots, r\}$, the morphisms $\{ (\sigma_i^{\widetilde{U}_\gamma} \times \mathrm{id}) \circ (\widetilde{\eta}_\gamma^\circledS )^{-1} : \Sigma_i^{\widetilde{U}_\gamma} \times_{\widetilde{S}_b} \mathbb{A}^{0|1}_{\widetilde{S}^\circledS} \rightarrow \widetilde{Y}^{\circledS}_\gamma \}_{\gamma \in I}$ may be glued together to
a closed immersion $\widetilde{\sigma}^\circledS_i : \mathbb{A}^{0|1}_{\widetilde{S}^\circledS} \rightarrow \widetilde{X}^\circledS$ over $\widetilde{S}^\circledS$ extending $\sigma_i^\circledS$, for which
the collection of data $(\widetilde{X}^{\circledS \mathrm{log}}/\widetilde{S}^{\circledS \mathrm{log}}, \{ \widetilde{\sigma}_i^\circledS \}_{i=1}^r)$ forms an $r$-pointed log twisted $(1|1)$-curve of genus $g$ over $\widetilde{S}^{\circledS \mathrm{log}}$.
Moreover, $\{ \widetilde{\mathcal{D}}_\gamma \}_{\gamma \in I}$
may be glued together to a superconformal structure $\widetilde{\mathcal{D}}$ on
$\widetilde{X}^{\circledS \bigstar \text{-} \mathrm{log}}/\widetilde{S}^{\circledS \mathrm{log}}$ extending $\mathcal{D}$. The collection of data \begin{align} {^{\S_1} \widetilde{\mathfrak{X}}}^{\circledS \bigstar} := (\widetilde{X}^{\circledS \mathrm{log}}/\widetilde{S}^{\circledS \mathrm{log}}, \{ [\widetilde{\sigma}^\circledS_i] \}_{i=1}^r, \widetilde{\mathcal{D}}) \end{align}
forms a stable log twisted $\text{SUSY}_1$ curve of type $(g,r, \lambda)$ over $\widetilde{S}^{\circledS \mathrm{log}}$ which restricts to ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$.
Consequently, $\mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar})$ is nonempty.
Also, by combining the above discussion with standard arguments in deformation theory, one verifies that $\mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar})$ admits a structure of affine space modeled on $H^1 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b)$ as described in (\ref{E003}).
Finally, the above argument implies the remaining portion of the proposition. Indeed, the group of superconformal automorphisms of an arbitrary ${^{\S_1} \widetilde{\mathfrak{X}}}^{\circledS \bigstar} \in \mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar})$
is canonically isomorphic to
$H^0 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b)$. If $(\mathcal{L}^\bigstar, \eta^\bigstar)$ denotes the spin structure on $(\underline{X}_t/S_t , \{[ \underline{\sigma}_i ]\}_{i=1}^r)$ corresponding to $\underline{\mathcal{D}}$ (cf. Proposition \ref{P66}), then we obtain a sequence of isomorphisms
\begin{align} \label{e896} & \ (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b \\
\isom & \
(\underline{\mathcal{D}}^{\otimes 2} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b \notag \\
\isom & \ ((\mathcal{O}_{\underline{X}^\circledS}\otimes \mathcal{L}^{\bigstar\vee})^{\otimes 2} \otimes_{(\underline{f}_t)^{-1}(\mathcal{O}_{S_t})} \underline{f}_t^{-1}(\mathcal{I}))_b \notag \\
\isom & \ (\mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/S_t^\mathrm{log}} \otimes_{\underline{f}_t^*(\mathcal{O}_{S_t})}\underline{f}_t^*(\mathcal{I}_b)) \oplus (\mathcal{L}^{\bigstar \vee} \otimes_{\underline{f}_t^{*}(\mathcal{O}_{S_t})}\underline{f}_t^{*}(\mathcal{I}_f)), \notag \end{align} where the first isomorphism follows from Proposition \ref{p010} and the second isomorphism follows from (\ref{E30}). Here, note that the natural morphisms \begin{align} H^0(\underline{X}_b, \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/S_t^\mathrm{log}}) \otimes_{\Gamma (S_t, \mathcal{O}_{S_t})} \Gamma (S_t, \mathcal{I}_b) \rightarrow H^0(\underline{X}_b, (\mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/S_t^\mathrm{log}} \otimes_{\underline{f}_t^*(\mathcal{O}_{S_t})}\underline{f}_t^*(\mathcal{I}_b))) \end{align} and \begin{align} H^0 (\underline{X}_b, \mathcal{L}^{\bigstar \vee}) \otimes_{\Gamma (S_t, \mathcal{O}_{S_t})} \Gamma (S_t, \mathcal{I}_f) \rightarrow H^0 (\underline{X}_b, (\mathcal{L}^{\bigstar \vee} \otimes_{\underline{f}_t^{*}(\mathcal{O}_{S_t})}\underline{f}_t^{*}(\mathcal{I}_f))) \end{align} are surjective. On the other hand, the fact that $\mathrm{deg} (\mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/S_t^\mathrm{log}} ) <0$ and $\mathrm{deg} (\mathcal{L}^{\bigstar \vee})<0$ implies the equalities $H^0(\underline{X}_b, \mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/S_t^\mathrm{log}}) = H^0 (\underline{X}_b, \mathcal{L}^{\bigstar \vee}) =0$. 
Hence, we have \begin{align} & \ H^0 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_t^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_t(\mathcal{I}))_b) \\ = & \ H^0(\underline{X}_b, (\mathcal{T}_{\underline{X}^{\bigstar \text{-}\mathrm{log}}/S_t^\mathrm{log}} \otimes_{\underline{f}_t^*(\mathcal{O}_{S_t})}\underline{f}_t^*(\mathcal{I}_b))) \oplus H^0 (\underline{X}_b, (\mathcal{L}^{\bigstar \vee} \otimes_{\underline{f}_t^{*}(\mathcal{O}_{S_t})}\underline{f}_t^{*}(\mathcal{I}_f))) \notag \\ = & \ 0. \notag \end{align} This implies that there is no nontrivial superconformal automorphism of any ${^{\S_1} \widetilde{\mathfrak{X}}}^{\circledS \bigstar} \in \mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar})$. This completes the proof of Proposition \ref{P001}. \end{proof}
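Let us record, for the reader's convenience, a minimal sketch of the standard mechanism underlying the affine structure (\ref{E003}); this is only a summary of the preceding discussion. Given two elements of $\mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar})$, their restrictions to each $Y_\gamma^{\circledS \mathrm{log}}$ may be identified, and on each overlap $Y_{\gamma_1, \gamma_2}^{\circledS \mathrm{log}}$ the two choices of local identification differ by a section \begin{align} \mu_{\gamma_1, \gamma_2} \in \Gamma ((Y_{\gamma_1, \gamma_2})_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_b^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_b(\mathcal{I}))_b). \end{align} The collection $\{ \mu_{\gamma_1, \gamma_2} \}_{\gamma_1, \gamma_2}$ forms a \v{C}ech $1$-cocycle, and its class in $H^1 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-}\mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_b^{-1}(\mathcal{O}_{S_t})} \underline{f}^{-1}_b(\mathcal{I}))_b)$ is the unique element translating the first deformation to the second under $\boxplus^\ddagger$.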
\subsection{Deformations of morphisms} \label{S52} \leavevmode\\
We keep the notation in the previous subsection. Moreover, let $T^{\circledS \mathrm{log}}$ be an fs log superscheme which is
log supersmooth over $S_0$ (of some relative superdimension) and
${^{\S_1} \mathfrak{Y}}^{\circledS \bigstar} := (f'^{\circledS \mathrm{log}} : Y^{\circledS \mathrm{log}} \rightarrow T^{\circledS \mathrm{log}}, \{ [\sigma'_i ] \}_{i=1}^r, \mathcal{D}')$ a stable log twisted $\text{SUSY}_1$ curve of type $(g,r, \lambda)$ over $T^{\circledS \mathrm{log}}$ such that $\mathcal{K}\mathcal{S}({^{\S_1} \mathfrak{Y}}^{\circledS \bigstar})$ is an isomorphism. Suppose that we are given a morphism $s^{\circledS \mathrm{log}} : S^{\circledS \mathrm{log}} \rightarrow T^{\circledS \mathrm{log}}$ of log superschemes via which the base-change of ${^{\S_1} \mathfrak{Y}}^{\circledS \bigstar}$ is isomorphic to ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$ over $S^{\circledS \mathrm{log}}$.
The following proposition is immediately verified from the various definitions involved, including the affine structures described in Proposition \ref{p0404} (ii) and Proposition \ref{P001}.
\begin{prop} [cf. ~\cite{LR}, Lemma 2.5] \label{p03} \leavevmode\\
\ \ \
Suppose that $\widetilde{S}^{\circledS}$ is affine and that $\mathcal{N}_{\widetilde{S}^\circledS} \mathcal{I} =0$. Denote by $\mathrm{KS}({^{\S_1} \mathfrak{Y}}^{\circledS \bigstar}; \mathcal{I})$ the composite isomorphism \begin{align} \label{E002} \Gamma (S_b, (s^{\circledS *}(\mathcal{T}_{T^{\circledS \mathrm{log}}/S_0}) \otimes \mathcal{I})_b) & \isom \Gamma (S_b, (s^{\circledS *}(\mathbb{R}^1 f'_{b*} (\mathcal{T}^{\mathcal{D}'}_{Y^{\circledS \bigstar \text{-} \mathrm{log}}/T^{\circledS \mathrm{log}}})) \otimes \mathcal{I})_b) \\ & \isom \Gamma (S_b, (\mathbb{R}^1 \underline{f}_{b*} (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-} \mathrm{log}}/S_t^{\mathrm{log}}}) \otimes \mathcal{I})_b) \notag \\ & \isom H^1 (\underline{X}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{X}^{\circledS \bigstar \text{-} \mathrm{log}}/S_t^{\mathrm{log}}} \otimes_{\underline{f}_b^{*}(\mathcal{O}_{S_t})} \underline{f}^*_b(\mathcal{I}))_b), \notag \end{align} where the first isomorphism arises from $\mathcal{K}\mathcal{S}({^{\S_1} \mathfrak{Y}}^{\circledS \bigstar})$ and both the second and third isomorphisms arise from Proposition \ref{Pt66}.
Consider the map of sets \begin{align} s^\circledast : \Gamma (\widetilde{S}_b, \mathcal{D} ef_{\widetilde{S}^{\circledS \mathrm{log}}}(s^{\circledS \mathrm{log}})) \rightarrow \mathrm{Def}_{\widetilde{S}^{\circledS \mathrm{log}}} ({^{\S_1} \mathfrak{X}}^{\circledS \bigstar}) \end{align} given by pulling-back ${^{\S_1} \mathfrak{Y}}^{\circledS \bigstar}$.
Then, this map satisfies the equality
\begin{align} \label{e997} s^\circledast (\widetilde{s}^{\circledS \mathrm{log}} \boxplus^\dagger \zeta)
=s^\circledast (\widetilde{s}^{\circledS \mathrm{log}}) \boxplus^\ddagger \mathrm{KS}({^{\S_1} \mathfrak{Y}}^{\circledS \bigstar}; \mathcal{I}) (\zeta)
\end{align} for any $\widetilde{s}^{\circledS \mathrm{log}} \in \Gamma (\widetilde{S}_b, \mathcal{D} ef_{\widetilde{S}^{\circledS \mathrm{log}}}(s^{\circledS \mathrm{log}}))$ and $\zeta \in \Gamma (S_b, (s^{\circledS *}(\mathcal{T}_{T^{\circledS \mathrm{log}}/S_0}) \otimes \mathcal{I})_b)$.
In particular, (since $\mathrm{KS}({^{\S_1} \mathfrak{Y}}^{\circledS \bigstar}; \mathcal{I})$ is an isomorphism)
$s^\circledast$ is bijective, and hence, $\Gamma (\widetilde{S}_b, \mathcal{D} ef_{\widetilde{S}^{\circledS \mathrm{log}}}(s^{\circledS \mathrm{log}}))$ is nonempty.
\end{prop}
\begin{cor}[cf. ~\cite{LR}, Theorem 2.7] \label{p04} \leavevmode\\
\ \ \ Suppose that we are given a stable log twisted $\text{SUSY}_1$ curve ${^{\S_1} \widetilde{\mathfrak{X}}}^{\circledS \bigstar}$ over $\widetilde{S}^{\circledS \mathrm{log}}$ extending ${^{\S_1} \mathfrak{X}}^{\circledS \bigstar}$.
Then, there exists uniquely an extension $\widetilde{s}^{\circledS \mathrm{log}} : \widetilde{S}^{\circledS \mathrm{log}} \rightarrow T^{\circledS \mathrm{log}}$ of $s^{\circledS \mathrm{log}}$ via which the base-change of ${^{\S_1} \mathfrak{Y}}^{\circledS \bigstar}$ is isomorphic to ${^{\S_1} \widetilde{\mathfrak{X}}}^{\circledS \bigstar}$.
\end{cor}
\begin{proof} The assertion may be proved directly by applying Proposition \ref{p03} inductively to the case where the pair $(S^{\circledS \mathrm{log}}, \widetilde{S}^{\circledS \mathrm{log}})$ is taken to be $(\widetilde{S}^{\circledS \mathrm{log}}_n, \widetilde{S}^{\circledS \mathrm{log}}_{n+1})$ (where, for each $n \geq 0$, we denote by $\widetilde{S}^\circledS_n$ the strict closed subsuperscheme of $\widetilde{S}^{\circledS}$ determined by $\mathcal{N}_{\widetilde{S}^\circledS}^{n+1}$).
\end{proof}
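For the reader's convenience, we make explicit why the hypothesis $\mathcal{N} \cdot \mathcal{I} = 0$ of Proposition \ref{p03} holds at each step of this induction (with $(\widetilde{S}^{\circledS}, \mathcal{I})$ replaced by $(\widetilde{S}^{\circledS}_{n+1}, \mathcal{I}_n)$). The superideal of $\widetilde{S}^\circledS_n$ in $\widetilde{S}^\circledS_{n+1}$ is \begin{align} \mathcal{I}_n := \mathcal{N}_{\widetilde{S}^\circledS}^{n+1}/\mathcal{N}_{\widetilde{S}^\circledS}^{n+2}, \end{align} and the nilpotent ideal $\mathcal{N}_{\widetilde{S}^\circledS_{n+1}}$ is the image of $\mathcal{N}_{\widetilde{S}^\circledS}$ in $\mathcal{O}_{\widetilde{S}^\circledS}/\mathcal{N}_{\widetilde{S}^\circledS}^{n+2}$; hence $\mathcal{N}_{\widetilde{S}^\circledS_{n+1}} \cdot \mathcal{I}_n \subseteq \mathcal{N}_{\widetilde{S}^\circledS}^{n+2}/\mathcal{N}_{\widetilde{S}^\circledS}^{n+2} = 0$, as required.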
\subsection{Canonical liftings over complete versal families} \label{S53} \leavevmode\\
\begin{prop}[cf. ~\cite{LR}, Theorem 2.8] \label{p01} \leavevmode\\
\ \ \ Let $\underline{T}^\mathrm{log}$ be an affine log smooth scheme in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^\mathrm{log}$, and let ${^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar} := (\underline{f}^{\circledS \mathrm{log}} : \underline{Y}^{\circledS \mathrm{log}} \rightarrow \underline{T}^\mathrm{log}, \{[ \underline{\sigma}_{i}] \}_{i=1}^r, \underline{\mathcal{D}})$ be a stable log twisted $\text{SUSY}_1$ curve of type $(g,r, \lambda)$ over $\underline{T}^\mathrm{log}$ such that the classical Kodaira-Spencer map \begin{align} \mathcal{K} \mathcal{S} (\underline{Y}_b^{\bigstar \text{-}\mathrm{log}}/\underline{T}^{\mathrm{log}}) : \mathcal{T}_{\underline{T}^{\mathrm{log}}/S_0} \rightarrow \mathbb{R}^1 \underline{f}_{b*} (\mathcal{T}_{\underline{Y}_b^{\bigstar \text{-}\mathrm{log}}/\underline{T}^{\mathrm{log}}}) \end{align}
of $\underline{Y}_b^{\bigstar \text{-}\mathrm{log}} \rightarrow \underline{T}^{\mathrm{log}}$ (cf. (\ref{E1111})) is an isomorphism.
Let us write
\begin{align} \label{e678}
\langle \underline{T} \rangle^{\circledS \mathrm{log}} :=\langle \underline{T}, \mathbb{R}^1 \underline{f}_{b *} ((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\mathrm{log}}})_f)^\vee \rangle^{\circledS \mathrm{log}}.
\end{align} Then, there exists a stable log twisted $\text{SUSY}_1$ curve ${^{\S_1} \mathfrak{Y}}_{\dagger}^{\circledS \bigstar}$ of type $(g,r, \lambda)$ over $\langle \underline{T} \rangle^{\circledS \mathrm{log}}$ which restricts to ${^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar}$ via $\tau_{\langle \underline{T} \rangle}^{\circledS \mathrm{log}} : \underline{T}^\mathrm{log} \rightarrow \langle \underline{T} \rangle^{\circledS \mathrm{log}}$ and whose Kodaira-Spencer map $\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar})$ is an isomorphism.
Moreover, such a stable log twisted $\text{SUSY}_1$ curve is unique in the following sense: if ${^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar}$ and ${^{\S_1} \mathfrak{Y}}_\ddagger^{\circledS \bigstar}$ are stable log twisted $\text{SUSY}_1$ curves of type $(g,r, \lambda)$ over $\langle \underline{T} \rangle^{\circledS \mathrm{log}}$ which restrict to ${^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar}$ and whose Kodaira-Spencer maps are isomorphisms, then there exists uniquely a superconformal isomorphism ${^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar} \isom {^{\S_1} \mathfrak{Y}}_\ddagger^{\circledS \bigstar}$ over $\langle \underline{T} \rangle^{\circledS \mathrm{log}}$ which restricts to the identity morphism of ${^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar}$.
\end{prop}
\begin{proof} The uniqueness portion follows from the uniqueness assertion of Corollary \ref{p04}. We shall prove the existence portion. For each nonnegative integer $n$, we shall write $\langle \underline{T} \rangle_{n}^{\circledS \mathrm{log}}$ for the strict closed subsuperscheme of $\langle \underline{T} \rangle^{\circledS \mathrm{log}}$ corresponding to the ideal $\mathcal{N}_{\langle \underline{T} \rangle^\circledS}^{n+1}$. Since $\langle \underline{T} \rangle_1^{\circledS \mathrm{log}}$ is simply $(\underline{T}, \mathcal{O}_{\underline{T}} \oplus \mathbb{R}^1 \underline{f}_{b *} ((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\mathrm{log}}})_f)^\vee)$, we obtain the trivial deformation ${^{\S_1} \mathfrak{Y}}_{1, \mathrm{triv}}^{\circledS \bigstar}$ of ${^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar}$ over $\langle \underline{T} \rangle_{1}^{\circledS \mathrm{log}}$ by pulling-back via the projection $\langle \underline{T} \rangle_{1}^{\circledS \mathrm{log}} \rightarrow \underline{T}^\mathrm{log}$.
By applying Proposition \ref{p03} and considering the point of the affine space $\mathrm{Def}_{\langle \underline{T} \rangle_{1}^{\circledS \mathrm{log}}} ({^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar})$ representing ${^{\S_1} \mathfrak{Y}}_{1, \mathrm{triv}}^{\circledS \bigstar}$ as its origin, we have a canonical composite bijection \begin{align} \label{e3402} & \hspace{8mm} \mathrm{Def}_{\langle \underline{T} \rangle_{1}^{\circledS \mathrm{log}}} ({^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar}) \\ & \isom H^1 (\underline{Y}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar\text{-}\mathrm{log}}/\underline{T}^\mathrm{log}}\otimes \underline{f}_b^{*}( \mathbb{R}^1 \underline{f}_{b *} ((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\mathrm{log}}})_f)^\vee))_b) \notag \\ & \isom H^1 (\underline{Y}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar\text{-}\mathrm{log}}/\underline{T}^\mathrm{log}})_f \otimes \underline{f}_b^{*}( \mathbb{R}^1 \underline{f}_{b *} ((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\mathrm{log}}})_f)^\vee))\notag \\ & \isom H^1 (\underline{Y}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar\text{-}\mathrm{log}}/\underline{T}^\mathrm{log}})_f) \otimes_{H^0(\underline{T}, \mathcal{O}_{\underline{T}})} H^1 (\underline{Y}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar\text{-}\mathrm{log}}/\underline{T}^\mathrm{log}})_f)^\vee \notag \\
& \isom \mathrm{End}_{H^0(\underline{T}, \mathcal{O}_{\underline{T}})}(H^1 (\underline{Y}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar\text{-}\mathrm{log}}/\underline{T}^\mathrm{log}})_f)), \notag \end{align} where the third bijection follows from Proposition \ref{Pt66}.
If we write ${^{\S_1} \mathfrak{Y}}_{1}^{\circledS \bigstar}$ for the stable log twisted $\text{SUSY}_1$ curve corresponding to \begin{align} \mathrm{id}_{H^1 (\underline{Y}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar\text{-}\mathrm{log}}/\underline{T}^\mathrm{log}})_f)} \in \mathrm{End}_{H^0(\underline{T}, \mathcal{O}_{\underline{T}})}(H^1 (\underline{Y}_b, (\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar\text{-}\mathrm{log}}/\underline{T}^\mathrm{log}})_f)) \end{align}
via (\ref{e3402}), then its Kodaira-Spencer map turns out to be an isomorphism. By Proposition \ref{P001}, ${^{\S_1} \mathfrak{Y}}_{1}^{\circledS \bigstar}$ may be deformed to a stable log twisted $\text{SUSY}_1$ curve \begin{align} {^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar} := (f^{\circledS \mathrm{log}}_\dagger : Y_\dagger^{\circledS \mathrm{log}} \rightarrow \langle \underline{T} \rangle^{\circledS \mathrm{log}}, \{ [\sigma_{\dagger, i}^\circledS] \}_{i=1}^r, \mathcal{D}_\dagger) \end{align}
of type $(g,r,\lambda)$ over $\langle \underline{T} \rangle^{\circledS \mathrm{log}}$.
We shall prove that the Kodaira-Spencer map $\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar})$ is an isomorphism. To this end, it suffices to prove that its restriction along the reduced space $\underline{T}$ is an isomorphism.
(Indeed, by Proposition \ref{Pt66}, $\mathbb{R}^1 f^\circledS_{\dagger *}(\mathcal{T}^{\mathcal{D}_\dagger}_{Y_\dagger^{\circledS \bigstar \text{-} \mathrm{log}}/\langle \underline{T} \rangle^{\circledS \mathrm{log}}})$ and $\mathcal{T}_{\langle \underline{T} \rangle^{\circledS \mathrm{log}}/S_0}$ are locally free of the same rank). By the definition of $\langle \underline{T} \rangle^{\circledS \mathrm{log}}$, the pull-back of $\mathcal{T}_{\langle \underline{T} \rangle^{\circledS \mathrm{log}}/S_0}$ via $\tau^\circledS_{\langle \underline{T} \rangle} : \underline{T} \rightarrow \langle \underline{T} \rangle$ admits a canonical isomorphism \begin{align} \label{e3333} \tau_{\langle \underline{T} \rangle}^{\circledS *}(\mathcal{T}_{\langle \underline{T} \rangle^{\circledS \mathrm{log}}/S_0}) \isom \mathcal{T}_{\underline{T}^\mathrm{log}/S_0} \oplus \mathbb{R}^1 \underline{f}_{b *} ((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\circledS \mathrm{log}}})_f). \end{align} On the other hand, the pull-back of $\mathbb{R}^1 f^\circledS_{\dagger *}(\mathcal{T}^{\mathcal{D}_\dagger}_{Y_\dagger^{\circledS \bigstar \text{-} \mathrm{log}}/\langle \underline{T} \rangle^{\circledS \mathrm{log}}})$ admits a canonical composite isomorphism
\begin{align} \label{e3323} & \ \tau_{\langle \underline{T} \rangle}^{\circledS *}(\mathbb{R}^1 f_{\dagger *}^\circledS(\mathcal{T}^{\mathcal{D}_\dagger}_{Y_\dagger^{\circledS \bigstar \text{-} \mathrm{log}}/\langle \underline{T} \rangle^{\circledS \mathrm{log}}})) \\
\isom & \ \mathbb{R}^1 \underline{f}_{*}^\circledS(\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\circledS \mathrm{log}}}) \notag \\
\isom & \ \mathbb{R}^1 \underline{f}_{b*}((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\circledS \mathrm{log}}})_b) \oplus \mathbb{R}^1 \underline{f}_{b*}((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\circledS \mathrm{log}}})_f) \notag \\
\isom & \
\mathbb{R}^1 \underline{f}_{b*}(\mathcal{T}_{\underline{Y}_b^{\bigstar \text{-} \mathrm{log}}/\underline{T}^{\mathrm{log}}})
\oplus \mathbb{R}^1 \underline{f}_{*}((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\circledS \mathrm{log}}})_f) \notag
\end{align}
where the first isomorphism follows from Proposition \ref{Pt66} and the third isomorphism follows from (\ref{E006}).
One may verify immediately from the various definitions involved that the pull-back $\tau_{\langle \underline{T} \rangle}^{\circledS *}(\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar}))$ of $\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar})$ makes the square diagram \begin{align} \begin{CD} \tau_{\langle \underline{T} \rangle}^{\circledS *}(\mathcal{T}_{\langle \underline{T} \rangle^{\circledS \mathrm{log}}/S_0}) @> (\ref{e3333}) > \sim > \mathcal{T}_{\underline{T}^\mathrm{log}/S_0} \oplus \mathbb{R}^1 f_{b *} ((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\circledS \mathrm{log}}})_f) \\ @V \tau_{\langle \underline{T} \rangle}^{\circledS *}(\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar})) V V @VV \mathcal{K} \mathcal{S} (\underline{Y}_b^{\bigstar \text{-}\mathrm{log}}/\underline{T}^\mathrm{log}) \oplus \mathrm{id} V \\ \tau_{\langle \underline{T} \rangle}^{\circledS *}(\mathbb{R}^1 f_{\dagger *}^\circledS(\mathcal{T}^{\mathcal{D}_\dagger}_{Y_\dagger^{\circledS \bigstar \text{-} \mathrm{log}}/\langle \underline{T} \rangle^{\circledS \mathrm{log}}})) @> \sim > (\ref{e3323}) > \mathbb{R}^1 \underline{f}_{b*}(\mathcal{T}_{\underline{Y}_b^{\bigstar \text{-} \mathrm{log}}/\underline{T}^{\mathrm{log}}})
\oplus \mathbb{R}^1 \underline{f}_{b *}((\mathcal{T}^{\underline{\mathcal{D}}}_{\underline{Y}^{\circledS \bigstar \text{-} \mathrm{log}}/\underline{T}^{\circledS \mathrm{log}}})_f) \end{CD} \end{align} commute. Hence, since we have assumed that $\mathcal{K} \mathcal{S} (\underline{Y}^{\bigstar \text{-}\mathrm{log}}/\underline{T}^\mathrm{log})$ is an isomorphism, $ \tau_{\langle \underline{T} \rangle}^{\circledS *}(\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar}))$ is an isomorphism, as desired.
This completes the proof of Proposition \ref{p01}. \end{proof}
\subsection{The proof of Theorem A} \label{S54} \leavevmode\\
In this final section, we shall prove Theorem A, the main result of the present paper. Since $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t$
is a smooth Deligne-Mumford stack over $S_0$ (cf. Proposition \ref{P66}), there exists an isomorphism $[\underline{R} \rightrightarrows \underline{U}] \isom ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t$ over $S_0$ for some groupoid $\underline{R} \rightrightarrows \underline{U} := (\underline{U}, \underline{R}, \underline{s}, \underline{t}, \underline{c})$ in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}$ such that both $\underline{U}$ and $\underline{R}$ are smooth affine schemes over $S_0$ of relative dimension $3g-3+r$, and both $\underline{s}$ and $\underline{t}$ are \'{e}tale. Denote by $\pi_{\underline{U}} : \underline{U} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t$ the natural projection (hence $\pi_{\underline{R}} := \pi_{\underline{U}} \circ \underline{s} = \pi_{\underline{U}} \circ \underline{t}$). Write $\underline{U}^\mathrm{log}$ (resp., $\underline{R}^\mathrm{log}$) for the log scheme defined to be $\underline{U}$ (resp., $\underline{R}$) equipped with the log structure pulled-back from $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$. In particular, $\pi_{\underline{U}}$ (resp., $\pi_{\underline{R}}$) extends to a morphism $\pi_{\underline{U}}^\mathrm{log} : \underline{U}^\mathrm{log} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$ (resp., $\pi_{\underline{R}}^\mathrm{log} : \underline{R}^\mathrm{log} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t^\mathrm{log}$) of log stacks.
Moreover, $\underline{s}, \underline{t} : \underline{R} \rightarrow \underline{U}$ extend to morphisms $\underline{s}^\mathrm{log}, \underline{t}^\mathrm{log} : \underline{R}^\mathrm{log} \rightarrow \underline{U}^\mathrm{log}$, and $\underline{c}$ extends to
a morphism $\underline{c}^\mathrm{log} : \underline{R}^\mathrm{log} \times_{\underline{s}^\mathrm{log}, \underline{U}^\mathrm{log}, \underline{t}^\mathrm{log}} \underline{R}^\mathrm{log} \rightarrow \underline{R}^\mathrm{log}$.
Let us write \begin{align} & {^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar} := (\underline{Y}^{\circledS \mathrm{log}}/\underline{U}^\mathrm{log}, \{[\sigma^\circledS_{\underline{Y}, i}] \}_{i=1}^r, \mathcal{D}_{\underline{Y}}) \\ (\text{resp.,} \ & {^{\S_1} \underline{\mathfrak{X}}}^{\circledS \bigstar} := (\underline{X}^{\circledS \mathrm{log}}/\underline{R}^\mathrm{log}, \{[\sigma^\circledS_{\underline{X}, i}] \}_{i=1}^r, \mathcal{D}_{\underline{X}})) \notag \end{align} for the stable log twisted $\text{SUSY}_1$ curve over $\underline{U}^\mathrm{log}$ (resp., $\underline{R}^\mathrm{log}$) classified by $\pi^\mathrm{log}_{\underline{U}}$ (resp., $\pi^\mathrm{log}_{\underline{R}}$).
The base-changes of ${^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar}$ via $\underline{s}^\mathrm{log}$ and $\underline{t}^\mathrm{log}$ are, by definition, both isomorphic to ${^{\S_1} \underline{\mathfrak{X}}}^{\circledS \bigstar}$.
The Kodaira-Spencer morphisms $\mathcal{K} \mathcal{S} ({^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar})$, $\mathcal{K} \mathcal{S} ({^{\S_1} \underline{\mathfrak{X}}}^{\circledS \bigstar})$ are isomorphisms. Here, let us define $\langle \underline{U} \rangle^{\circledS \mathrm{log}}$ (resp., $\langle \underline{R} \rangle^{\circledS \mathrm{log}}$) to be the log superscheme obtained from $\underline{U}^{\mathrm{log}}$ (resp., $\underline{R}^{\mathrm{log}}$) as in (\ref{e678}), which is split and log supersmooth over $S_0$ of relative superdimension $3g-3+r | 2g-2+\frac{r}{2}$ (by Proposition \ref{Pt66}). It follows from Proposition \ref{p01} that ${^{\S_1} \underline{\mathfrak{Y}}}^{\circledS \bigstar}$ (resp., ${^{\S_1} \underline{\mathfrak{X}}}^{\circledS \bigstar}$) may be deformed to a stable log twisted $\text{SUSY}_1$ curve \begin{align} & {^{\S_1} \mathfrak{Y}}_\dagger^{\circledS \bigstar} := (f^{\circledS \mathrm{log}}_{Y, \dagger} : Y_\dagger^{\circledS \mathrm{log}}\rightarrow \langle \underline{U} \rangle^{\circledS \mathrm{log}}, \{ [\sigma_{Y, \dagger, i}^\circledS] \}_{i=1}^r, \mathcal{D}_{Y, \dagger}) \\ (\text{resp.,} \ & {^{\S_1} \mathfrak{X}}_\dagger^{\circledS \bigstar} := (f^{\circledS \mathrm{log}}_{X, \dagger} : X_\dagger^{\circledS \mathrm{log}}\rightarrow \langle \underline{R} \rangle^{\circledS \mathrm{log}}, \{ [\sigma_{X, \dagger, i}^\circledS] \}_{i=1}^r, \mathcal{D}_{X, \dagger})) \notag \end{align}
over $\langle \underline{U} \rangle^{\circledS \mathrm{log}}$ (resp., $\langle \underline{R} \rangle^{\circledS \mathrm{log}}$) whose Kodaira-Spencer map is an isomorphism. Hence, by Corollary \ref{p04}, there exist morphisms $\langle \underline{s} \rangle^{\circledS \mathrm{log}}, \langle \underline{t} \rangle^{\circledS \mathrm{log}} : \langle \underline{R} \rangle^{\circledS \mathrm{log}} \rightarrow \langle \underline{U} \rangle^{\circledS \mathrm{log}}$ via which the base-changes of ${^{\S_1} \mathfrak{Y}}_{\dagger}^{\circledS \bigstar}$ are isomorphic to ${^{\S_1} \mathfrak{X}}_\dagger^{\circledS \bigstar}$ and which make the square diagrams \begin{align} \label{dg01} \xymatrix{
\underline{R}^\mathrm{log} \ar[r]^{\underline{s}^\mathrm{log}} \ar[d]_{\tau^{\circledS \mathrm{log}}_{ \langle \underline{R} \rangle}}& \underline{U}^\mathrm{log} \ar[d]^{\tau^{\circledS \mathrm{log}}_{ \langle \underline{U} \rangle}} \\
\langle \underline{R} \rangle^{\circledS \mathrm{log}} \ar[r]_{\langle \underline{s} \rangle^{\circledS \mathrm{log}}} & \langle \underline{U} \rangle^{\circledS \mathrm{log}} } \hspace{10mm} \xymatrix{
\underline{R}^\mathrm{log} \ar[r]^{\underline{t}^\mathrm{log}} \ar[d]_{\tau^{\circledS \mathrm{log}}_{ \langle \underline{R} \rangle}}& \underline{U}^\mathrm{log} \ar[d]^{\tau^{\circledS \mathrm{log}}_{ \langle \underline{U} \rangle}} \\
\langle \underline{R} \rangle^{\circledS \mathrm{log}} \ar[r]_{\langle \underline{t} \rangle^{\circledS \mathrm{log}}} & \langle \underline{U} \rangle^{\circledS \mathrm{log}} } \end{align} commute. Moreover, we obtain a morphism \begin{align} \langle \underline{c}\rangle^{\circledS \mathrm{log}} : \langle \underline{R} \rangle^{\circledS \mathrm{log}} \times_{ \langle \underline{s} \rangle^{\circledS \mathrm{log}}, \langle \underline{U} \rangle^{\circledS \mathrm{log}}, \langle \underline{t} \rangle^{\circledS \mathrm{log}}} \langle \underline{R} \rangle^{\circledS \mathrm{log}} \rightarrow \langle \underline{R} \rangle^{\circledS \mathrm{log}} \end{align} extending
the morphism $\underline{c}^{\mathrm{log}}$. The uniqueness assertion in Corollary \ref{p04} implies that the collection of data \begin{align} \langle \underline{R} \rangle^{\circledS} \rightrightarrows \langle \underline{U} \rangle^{\circledS} := (\langle \underline{U} \rangle^{\circledS}, \langle \underline{R} \rangle^{\circledS}, \langle \underline{s} \rangle^{\circledS}, \langle \underline{t} \rangle^{\circledS}, \langle \underline{c} \rangle^{\circledS}) \end{align} forms a groupoid in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS}$.
For $\Box = \underline{s}$ or $\underline{t}$, we shall denote by \begin{align} d \langle \Box \rangle^\circledS : \mathcal{T}_{\langle \underline{R} \rangle^{\circledS \mathrm{log}}/S_0} \rightarrow \langle \Box \rangle^{\circledS *} (\mathcal{T}_{\langle \underline{U} \rangle^{\circledS \mathrm{log}}/S_0}) \end{align}
the differential of $\langle \Box \rangle^\circledS$
relative to $S_0$. Then, $d \langle \Box \rangle^\circledS$ is an isomorphism since the square diagram \begin{align} \xymatrix{ \mathcal{T}_{\langle \underline{R} \rangle^{\circledS \mathrm{log}}/S_0} \ar[r]^{d \langle \Box \rangle^\circledS } \ar[d]^{\wr}_{\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{X}}_{\dagger}^{\circledS \bigstar})}& \langle \Box \rangle^{\circledS *} (\mathcal{T}_{\langle \underline{U} \rangle^{\circledS \mathrm{log}}/S_0}) \ar[d]_{\wr}^{\langle \Box \rangle^{\circledS *}(\mathcal{K} \mathcal{S} ({^{\S_1} \mathfrak{Y}}_{\dagger}^{\circledS \bigstar}))} \\ \mathbb{R}^1 f^\circledS_{X, \dagger} (\mathcal{T}^{\mathcal{D}_{X, \dagger}}_{X_{\dagger}^{\circledS \bigstar \text{-} \mathrm{log}}/ \langle \underline{R} \rangle^{\circledS \mathrm{log}}}) \ar[r]^{\hspace{-5mm} \sim} & \langle \Box \rangle^{\circledS *} (\mathbb{R}^1 f^\circledS_{Y, \dagger} (\mathcal{T}^{\mathcal{D}_{Y, \dagger}}_{Y_{\dagger}^{\circledS \bigstar \text{-} \mathrm{log}}/ \langle \underline{U} \rangle^{\circledS \mathrm{log}}})) } \end{align} commutes and is cartesian (where the lower horizontal arrow is an isomorphism by Proposition \ref{Pt66}). Hence, for each $n \geq 0$, the morphism
$\mathrm{gr}^n_{\langle \underline{U} \rangle^\circledS} \rightarrow \mathrm{gr}^n_{\langle \underline{R} \rangle^\circledS} $ induced by $d \langle \Box \rangle^\circledS$ is an isomorphism.
It follows immediately that $\langle \underline{s} \rangle_b, \langle \underline{t} \rangle_b : \langle \underline{R} \rangle_b \rightarrow \langle \underline{U} \rangle_b$ are \'{e}tale (since $\underline{s}$ and $\underline{t}$ are \'{e}tale) and that the two morphisms
\begin{align}
(\langle \underline{s} \rangle^\circledS, \beta^\circledS_{\langle \underline{R} \rangle_b}), (\langle \underline{t} \rangle^\circledS, \beta^\circledS_{\langle \underline{R} \rangle_b}) : \langle \underline{R} \rangle^{\circledS} \rightarrow \langle \underline{U} \rangle^{\circledS} \times_{\langle \underline{U} \rangle_b} \langle \underline{R} \rangle_b
\end{align}
are isomorphisms. Thus, both $\langle \underline{s} \rangle^\circledS$ and $\langle \underline{t} \rangle^\circledS$ are super\'{e}tale.
By Proposition \ref{p0207}, $[\langle \underline{R} \rangle^{\circledS} \rightrightarrows \langle \underline{U} \rangle^{\circledS}]$ forms a supersmooth Deligne-Mumford superstack over $S_0$ of relative superdimension $3g-3+r | 2g-2+\frac{r}{2}$; it is superproper over $S_0$ since $({^{\S_1} \overline{\mathfrak{M}}}_{g,r, \lambda})_t$ is proper over $S_0$ in the classical sense.
Moreover, the log structures of the various constituents in $\langle \underline{R} \rangle^{\circledS} \rightrightarrows \langle \underline{U} \rangle^{\circledS}$
give rise to a log structure on
the superstack $[\langle \underline{R} \rangle^{\circledS} \rightrightarrows \langle \underline{U} \rangle^{\circledS}]$. Let us write $[\langle \underline{R} \rangle^{\circledS} \rightrightarrows \langle \underline{U} \rangle^{\circledS}]^\mathrm{log}$ for the resulting log superstack and write $\pi^{\circledS \mathrm{log}}_{\langle \underline{U} \rangle} : \langle \underline{U} \rangle^{\circledS \mathrm{log}} \rightarrow {^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda}^{\circledS \mathrm{log}}$
for the classifying morphism
of ${^{\S_1} \mathfrak{Y}}_{\dagger}^{\circledS \bigstar}$.
Then, $\pi^{\circledS \mathrm{log}}_{\langle \underline{U} \rangle}$ factors through a morphism \begin{align} \Theta^{\circledS \mathrm{log}} : [\langle \underline{R} \rangle^{\circledS} \rightrightarrows \langle \underline{U} \rangle^{\circledS}]^\mathrm{log} \rightarrow {^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda}^{\circledS \mathrm{log}}. \end{align}
To complete the proof of Theorem A, it suffices to prove that $\Theta^{\circledS \mathrm{log}}$ is an isomorphism.
Consider the surjectivity portion. Let $S^{\circledS \mathrm{log}}$ be an object in $\mathfrak{S} \mathfrak{c} \mathfrak{h}_{/S_0}^{\circledS \mathrm{log}}$ and $s^{\circledS \mathrm{log}} : S^{\circledS \mathrm{log}} \rightarrow {^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda}^{\circledS \mathrm{log}}$ a morphism of log superstacks, which induces a morphism $s_t^\mathrm{log} : S_t^\mathrm{log} \rightarrow ({^{\S_1} \overline{\mathfrak{M}}}_{g,r,\lambda})_t^{\mathrm{log}}$. There exists a strict \'{e}tale covering $\pi_{\underline{s}}^\mathrm{log} : \underline{S}^{' \mathrm{log}} \rightarrow S_t^\mathrm{log}$ of $S_t^\mathrm{log}$ and a morphism ${\underline{s}'}^{\mathrm{log}} : \underline{S}^{' \mathrm{log}} \rightarrow \underline{U}^\mathrm{log}$ such that $s_t^{\mathrm{log}} \circ \pi_{\underline{s}}^\mathrm{log} \cong \pi^\mathrm{log}_{\underline{U}} \circ {\underline{s}'}^{\mathrm{log}}$. By Proposition \ref{p0607}, there exists a strict super\'{e}tale morphism $\pi_{s}^{\circledS \mathrm{log}} : S^{' \circledS \mathrm{log}} \rightarrow S^{\circledS \mathrm{log}}$ which fits into the following cartesian square diagram \begin{align} \xymatrix{
\underline{S}^{'\mathrm{log}} \ar[r]^{\pi_{\underline{s}}^\mathrm{log}} \ar[d] \ar@{}[rd]|{\Box} & S_t^\mathrm{log} \ar[d]^{\tau^{\circledS \mathrm{log}}_S} \\ S^{' \circledS \mathrm{log}} \ar[r]_{\pi_{s}^{\circledS \mathrm{log}}} & S^{\circledS \mathrm{log}}. }
\end{align} (In particular, the left-hand vertical arrow coincides with $\tau^{\circledS \mathrm{log}}_{S'}$.)
By Corollary \ref{p04}, the morphism ${\underline{s}'}^{\mathrm{log}}$ extends to a morphism ${s'}^{\circledS \mathrm{log}} : S^{' \circledS \mathrm{log}} \rightarrow \langle \underline{U} \rangle^{\circledS \mathrm{log}}$. The uniqueness assertion of Corollary \ref{p04} implies that $\pi_{\langle \underline{U} \rangle}^{\circledS \mathrm{log}} \circ {s'}^{\circledS \mathrm{log}} \cong s^{\circledS \mathrm{log}} \circ \pi_s^{\circledS \mathrm{log}}$. This shows the surjectivity of $\Theta^{\circledS \mathrm{log}}$.
The injectivity portion follows from an argument technically similar to the above discussion. This completes the proof of Theorem A.
\end{document}
Planets X, Y and Z take 360, 450 and 540 days, respectively, to rotate around the same sun. If the three planets are lined up in a ray having the sun as its endpoint, what is the minimum positive number of days before they are all in the exact same locations again?
We are asked to find the least common multiple of 360, 450 and 540. We prime factorize \begin{align*}
360 &= 2^3\cdot 3^2\cdot 5 \\
450 &= 2 \cdot3^2 \cdot 5^2 \\
540 &= 2^2\cdot 3^3 \cdot 5
\end{align*} and take the largest exponent for each of the primes to get a least common multiple of $2^3\cdot 3^3\cdot 5^2=\boxed{5400}$.
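The prime-factorization argument can be sanity-checked in one line of Python; `math.lcm` (available since Python 3.9) folds the pairwise identity $\mathrm{lcm}(a,b) = ab/\gcd(a,b)$ over its arguments, which is equivalent to taking the largest exponent of each prime:

```python
import math

# Least common multiple of the three orbital periods (in days).
# math.lcm reduces lcm(a, b) = a * b // gcd(a, b) across all arguments.
print(math.lcm(360, 450, 540))  # → 5400
```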
Cattle genome-wide analysis reveals genetic signatures in trypanotolerant N'Dama
Soo-Jin Kim, Sojeong Ka, Jung-Woo Ha, Jaemin Kim, DongAhn Yoo, Kwondo Kim, Hak-Kyo Lee, Dajeong Lim, Seoae Cho, Olivier Hanotte, Okeyo Ally Mwai, Tadelle Dessie, Stephen Kemp, Sung Jong Oh & Heebal Kim

BMC Genomics, volume 18, Article number: 371 (2017)
Indigenous cattle in Africa have adapted to various local environments to acquire superior phenotypes that enhance their survival under harsh conditions. While many studies investigated the adaptation of overall African cattle, genetic characteristics of each breed have been poorly studied.
We performed a comparative genome-wide analysis to assess evidence for subspeciation at the genetic level in trypanotolerant N'Dama cattle. We analysed genetic variation patterns in N'Dama from the genomes of 101 cattle, including 48 samples of five indigenous African cattle breeds and 53 samples of various commercial breeds. Analysis of SNP variances between cattle breeds using wMI, XP-CLR, and XP-EHH detected genes containing N'Dama-specific genetic variants and their potential associations. Functional annotation analysis revealed that these genes are associated with ossification, the neurological system, and the immune system. In particular, the genes involved in bone formation indicate that local adaptation of N'Dama may engage skeletal growth as well as immune systems.
Our results imply that N'Dama might have acquired distinct genotypes associated with growth and regulation of regional diseases including trypanosomiasis. Moreover, this study offers significant insights into identifying genetic signatures for natural and artificial selection of diverse African cattle breeds.
Cattle are vital resources for the African economy and society. Approximately 150 breeds of indigenous cattle have been found in sub-Saharan Africa [1]. Indigenous African cattle, which have inhabited geographically isolated regions for a long time, have been subjected to environmental pressure. This imposed strong adaptive constraints on African cattle, and thus led to selection of individuals fitter to the harsh conditions [2]. In particular, some breeds (e.g. Gobra zebu and N'Dama) have acquired tolerance to local diseases that are known to significantly decrease the survival and productivity of African livestock [3]. In addition to the environmental factors, artificial selection has resulted in characteristic phenotypes in a few breeds (e.g. Ankole, Boran, Kenana and Ogaden), which enhanced the production of dairy products and beef [4, 5].
Rapid development of large-scale genetic variant inventories has brought attention to the identification of genes or loci controlling phenotypic traits [6]. This has triggered extensive genome-wide analyses, which are expected to ultimately improve our understanding of the role of unique genetic signatures in adaptation to environmental conditions. Recently, several genome analyses were performed to study the genetic backgrounds as well as the diversity of multiple breeds of African cattle [7,8,9,10,11]. For instance, a genome-wide SNP analysis of the small East African Zebu revealed candidate loci to improve sustainable livestock productivity in the tropics [11]. Discovery of such regions in the genome enables us to detect distinct genetic variants that are related to phenotypic traits of a certain breed and facilitates functional annotation of the genome.
African trypanosomiasis is a matter of great concern that can lead to serious economic losses and health crises in Africa. Trypanosomes are infectious agents transmitted by the tsetse fly. They can cause lethal diseases in mammals including humans and livestock. In particular, the T. congolense, T. vivax and T. brucei groups are the main African pathogenic trypanosomes for cattle [12]. Most cattle, including non-African and some African breeds (Boran, Kenana and Ogaden), are highly susceptible to trypanosome infection. Several studies have demonstrated that each breed of cattle shows an innately different degree of tolerance to trypanosomiasis when exposed to natural infection by wild-type tsetse flies in the field [13, 14]. To be specific, the N'Dama breed is naturally less susceptible to trypanosomiasis than other cattle, and hence can survive better and maintain high productivity in trypanosomiasis-endemic areas [13, 15]. Moreover, trypanotolerant breeds including N'Dama are also less susceptible to other critical infectious diseases in Africa such as helminthiasis [13], ticks and tick-borne diseases [3], and streptothricosis [16]. Hence, a recent study looked into trypanotolerance, one of the interesting physiological traits of indigenous African cattle. A Bayesian-based method was applied to the genome data of African cattle to detect genetic divergence that may be associated with trypanosomiasis [7]. Moreover, a systematic approach using an experimental cross between N'Dama and Boran revealed several QTLs and candidate genes controlling tolerance to trypanosomiasis in cattle [17,18,19,20].
Many studies on tolerance to cattle trypanosomiasis have mainly focused on comparing the N'Dama and Boran breeds. However, few studies have carried out comparative research between N'Dama and other trypano-susceptible breeds. Herein, we concentrate on the analysis of genetic variation between N'Dama and Ogaden cattle in order to discover N'Dama-specific genetic signatures. Ogaden cattle are one of the representative breeds that serve as a valuable economic resource, including the production of beef and dairy products, but they are known to be susceptible to trypanosomiasis [2].
In this study, a comparative genome-wide analysis of diverse cattle breeds was carried out to identify the genetic distinctiveness of the N'Dama breed. We investigated the genomes of five indigenous African breeds and four commercial breeds using combined methods based on information-theoretic and statistical approaches. This study identified new genetic patterns from the cattle genome, and also detected selective pressures which cause an increase in genetic differentiation among populations. The proposed approaches to the analysis of the selected SNPs confirmed the differences in genomic patterns between N'Dama and other cattle breeds. Moreover, the genes carrying the identified N'Dama-specific genetic variations are related to the regulation of ossification, the neurological system, and immune system development, which might be involved in the evolution of N'Dama-specific phenotypes including tolerance to African trypanosomiasis. This study provides insights into detecting breed-specific genetic signatures from the genome.
We performed a comparative genome-wide analysis of diverse cattle breeds to discover genetic signatures of N'Dama cattle using the combined methods based on information-theoretic and statistical approaches (Fig. 1).
Schematic overview of systemic analysis on cattle genome for identifying genetic signatures of subspeciation in trypanotolerant N'Dama
Summary of sequencing, assembly and SNP detection
In total, ~6.5 billion reads (~644 Gbp of sequence, ~11× genome coverage) were generated from individual genomes of five indigenous African cattle breeds (Ankole, Boran, Kenana, N'Dama and Ogaden) and four commercial cattle breeds (Angus, Hanwoo, Holstein and Jersey). The reads were aligned to the reference genome sequence UMD 3.1 with an average alignment rate of 98.84%, covering 98.56% of the reference genome (Additional file 1: Table S1). A total of ~37 million SNPs were obtained after filtering potential PCR duplicates and correcting misalignments (Additional file 1: Table S2). Moreover, we observed 94.92% overall genotype concordance between the BovineSNP50 Genotyping BeadChip and the re-sequencing results across the samples, which offers confidence in the accuracy of SNP calling (Additional file 1: Table S3).
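As an illustration, the genotype concordance reported above is a simple agreement rate between chip calls and sequencing-derived calls at sites genotyped on both platforms. A minimal sketch (the genotype vectors below are hypothetical, not the study's data):

```python
# Genotype calls coded as allele dosages (0/1/2); None marks a missing call.
def genotype_concordance(chip_calls, seq_calls):
    """Fraction of sites, called on both platforms, where the calls agree."""
    pairs = [(c, s) for c, s in zip(chip_calls, seq_calls)
             if c is not None and s is not None]
    return sum(c == s for c, s in pairs) / len(pairs)

chip = [0, 1, 2, 1, None, 0]
seq  = [0, 1, 2, 0, 2,    0]
print(genotype_concordance(chip, seq))  # 4 of 5 comparable sites agree → 0.8
```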
Identification of discriminative SNPs based on mutual information
The candidate SNPs to distinguish N'Dama from other cattle breeds were extracted using an information-theoretic method, mutual information (MI), which estimates the strength of association between SNP positions and breeds. Thus, our analysis was designed to detect discriminative SNPs with a high dependence between the haplotypes of two adjacent loci and breeds. Approximately 260,000 SNPs were identified by averaging the results between N'Dama and the other five breeds, along with 2,793 common genes (Additional file 1: Figure S1 and S2). The extracted SNPs showed high MI values (maximum value = 0.691) and significant p-values (2.13e-6). To overcome any bias caused by the small sample size, a lower p-value threshold (p-values less than 1.0e-3) was selected for estimating statistical significance compared to those in other studies [21]. Overall, these results showed that the haplotype patterns in N'Dama were clearly different from those in other cattle. Moreover, the regions containing the extracted SNPs can serve as potential markers to distinguish the N'Dama breed.
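A plug-in estimate of MI between a discrete genotype (or two-locus haplotype) variable and the breed label can be computed directly from joint counts. The sketch below uses toy data rather than the study's genotypes:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in MI estimate (in bits) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A haplotype that perfectly tracks two equally frequent breeds attains
# the maximum MI of 1 bit; an unrelated haplotype gives MI near 0.
haplo = ["AA", "AA", "AG", "AG"]
breed = ["NDama", "NDama", "Ogaden", "Ogaden"]
print(mutual_information(haplo, breed))  # → 1.0
```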
Difference in distribution of the SNPs identified by MI among Boran, Ogaden and N'Dama breeds
The paired datasets of the three different cattle breeds including Boran, Ogaden, and N'Dama were generated as N'Dama-Boran, N'Dama-Ogaden, and Boran-Ogaden in order to identify the difference in the distribution of the identified SNPs. We computed MI values between each SNP position variable and the breed variable from the paired datasets. The total 37,363,436 SNP positions were annotated with 16,699 genes for analysing the difference of the MI distributions between N'Dama and other breeds. For the analysis, (i) the maximum, (ii) the mean, and (iii) the sum of MI values of all the SNPs in a gene were calculated in addition to (iv) the number of SNPs counted for each gene. Figure 2a shows the distributions of the mean and the maximum values of the MI of SNPs in each gene for all three pair datasets, I(N;B), I(N;O), and I(B;O). Also shown in Fig. 2a, I(B;O) values were lower compared to those of I(N;B) and I(N;O). This signifies that the N'Dama breed had SNP patterns which are distinguishable from the Boran and Ogaden breeds. Such differences were likely to be associated with unique properties of the N'Dama breed such as African trypanosomiasis tolerance. The differences of N'Dama from the other two breeds were also clearly shown in Fig. 2b, which compares the distributions of ratios for the MI values of I(N;B), I(N;O), and I(B;O). While the distributions of I(N;B) and I(N;O) were similar, those of I(B;O) clearly showed a different pattern. Considering the differential distribution of SNPs which led to the larger MI values, we suggest that N'Dama has distinctive SNP patterns which may be related to their breed-specific traits including trypanotolerance. Finally, Fig. 2c presents the Kullback-Leibler (KL) divergence values of the MI distribution between the paired datasets of the three breeds. KL divergence is a widely used non-symmetric measure of the difference between two distributions. Larger values of KL divergence indicate larger differences between two distributions.
Thus, this result also indicated that N'Dama is different from Boran and Ogaden breeds with respect to the SNP patterns which may influence N'Dama-specific traits.
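The KL-divergence comparison in Fig. 2c can be reproduced in outline by binning two sets of per-gene MI values into histograms over shared bins and applying the standard formula. The sketch below uses hypothetical, pre-smoothed bin probabilities (bins with zero mass would need smoothing first):

```python
from math import log2

def kl_divergence(p, q):
    """D(P || Q) in bits for two discrete distributions over the same bins."""
    assert len(p) == len(q)
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical binned distributions of per-gene MI values.
p = [0.6, 0.3, 0.1]   # e.g. I(B;O): mass concentrated at low MI
q = [0.2, 0.5, 0.3]   # e.g. I(N;O): mass shifted toward higher MI
print(kl_divergence(p, q))  # > 0; note D(P||Q) != D(Q||P) in general
```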
The difference in the distribution of mutual information (MI) of SNP-annotating genes between the breed pairs including Boran, N'Dama and Ogaden breed. a Distribution of mean and maximum values of MIs between three breed pairs on each gene is presented. All the SNPs are annotated by 16669 genes. X-axis denotes the number of SNPs annotated by a gene, and MI score is shown in y axis. Mean MI is calculated by averaging MI scores of all the SNPs annotated by a gene. Max MI is the maximum value among MI scores of all the SNPs annotated by a genes. I(N;B), I(N;O) and I(B;O) indicate MI between N'Dama and Boran, N'Dama and Ogaden, and Boran and Ogaden breed. b The distributions of MI ratios between Boran, N'Dama and Ogaden breed pairs. Top and bottom graphs are the ratio distribution of the mean and the max MI ratio distributions between three breed pairs, respectively. c The Difference in distributions between Boran, N'Dama and Ogaden breed pairs is calculated by KL divergence
Detection of genetic signatures in N'Dama
We performed the analysis with the weighted mutual information (wMI) in order to scan the genome for breed-specific SNPs. For a given gene, wMI is defined as the sum of two factors: the normalized number of SNPs assigned to the gene and the mean MI value of the SNPs of the gene. The proposed wMI is considered to capture both the degree of genetic variation in the gene and the discriminative information between the breeds. Figure 3 shows the distribution of the significant SNPs identified by wMI across all 30 chromosomes, as well as the intersections of MI and XP-CLR, and of MI and XP-EHH, on each chromosome, and the degree of enrichment of each chromosome according to Fisher's exact test. Fisher's exact test was performed on a 2×2 contingency table composed of two factors: whether the SNP is included in a specific chromosome, and whether the SNP is identified by each measure. We also present the distribution of the genes including significant SNPs identified by the same three measures for each chromosome (Additional file 1: Figure S3). Although SNPs were found in all chromosomes, the number of SNPs was not even across the chromosomes. In particular, when the intersections of MI and XP-CLR, and of MI and XP-EHH, were applied, a relatively large number of SNPs were detected in chromosome 5. These distributions of the SNPs on each chromosome provide information on genomic locations that are likely to have received selection pressure and possess the ability to distinguish the N'Dama and Ogaden breeds.
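A minimal sketch of the wMI score described above. The text does not specify how SNP counts are normalized, so min-max scaling across genes is assumed here; the function name and data layout are illustrative.

```python
def weighted_mi(snp_mi_per_gene):
    """wMI per gene: normalized SNP count plus mean MI of the gene's SNPs.

    snp_mi_per_gene: dict mapping gene name -> list of per-SNP MI values.
    Assumption: SNP counts are min-max normalized across all genes.
    """
    counts = {g: len(v) for g, v in snp_mi_per_gene.items()}
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {g: (counts[g] - lo) / span + sum(v) / len(v)
            for g, v in snp_mi_per_gene.items()}
```

Genes with many SNPs and high average MI thus score highest, matching the intent of combining variation degree with discriminative information.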
Distribution of the numbers and the log ratios of SNPs distinguishing between N'Dama and Ogaden in each chromosome. Black, grey and patterned light grey bars indicate the numbers of SNPs identified by weighted MI, the intersection of MI and XP-CLR, and MI and XP-EHH with a significant p-value (1.0e-2). Blue values denote the enriched chromosomes with a p-value less than 1.0e-2 in Fisher's exact test. Line graphs are the ratios of the identified SNPs to total SNPs for each chromosome. The ratios are in negative log scale, thus a lower value indicates a high proportion of the SNPs distinguishing between N'Dama and Ogaden breeds. For all graphs, left y-axis represents the number of SNPs and right y-axis indicates the ratio value
N'Dama-specific SNPs identified by wMI
Thirty genes containing SNPs distinguishing N'Dama from Ogaden were identified by wMI analysis (Additional file 1: Table S4). We constructed correlation networks with the identified genes. The networks were generated based on the correlation coefficients of gene-level variation degrees, which were obtained by calculating the variation of the SNPs annotated by each gene. The SNP variation is the difference between alleles at the same SNP position across cattle samples. It indicates the degree of homozygosity or heterozygosity of a SNP, defined as the ratio of homozygous or heterozygous alleles over all samples of a breed. For instance, when the allele pair of SNP_1 in most samples of breed_1 is "AA", the homozygosity of SNP_1 for breed_1 is large. The heterozygosity of SNP_1 for breed_2 is high when the SNP_1 allele pair of most breed_2 samples is "AT".
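The homozygosity/heterozygosity ratios underlying these variation profiles can be illustrated as follows; the two-character genotype-string representation is an assumption.

```python
def zygosity_ratios(allele_pairs):
    """Fractions of homozygous and heterozygous calls among one breed's
    samples at one SNP position.

    allele_pairs: per-sample genotype strings such as ["AA", "AT", "AA"].
    Returns (homozygous_ratio, heterozygous_ratio).
    """
    n = len(allele_pairs)
    homo = sum(1 for pair in allele_pairs if pair[0] == pair[1])
    return homo / n, (n - homo) / n
```

For the SNP_1 example above, a breed whose samples are mostly "AA" has a homozygous ratio near 1, while one dominated by "AT" has a heterozygous ratio near 1.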
The constructed network showed that ACCN1, CTNNA2, FHIT and USH2A function as the main hubs of the network (Fig. 4a). The heterozygosity or homozygosity of SNPs in many genes of the network was strongly associated with that in these four genes. ACCN1 encodes a sodium channel protein which is expressed in both the central and peripheral nervous system and regulates neuronal activity in a pH-dependent manner. The diverse physiological roles of ACCN1 in neuronal systems include synaptic plasticity, learning, fear, pain sensation, mechanosensation, and neurodegenerative diseases [22]. CTNNA2 is known as a linker between cadherin receptors and the cytoskeleton that regulates cell-cell adhesion and differentiation in the nervous system, and is implicated in several neurological functions including the control of startle modulation [23]. The FHIT protein is a member of the histidine triad gene family of nucleotide hydrolases involved in purine metabolism. This gene contributes to the regulation of gene expression essential for cell proliferation and survival, and acts as a tumor suppressor [24]. USH2A is found in the basement membrane of the cochlea and the retina, and is believed to take part in the adhesion of pre- and post-synaptic membranes and in nerve fiber guidance. Mutations in the USH2A gene are also responsible for a subtype of Usher syndrome, the most frequent cause of combined deaf-blindness in humans [25].
Genes identified based on the wMI for N'Dama breed. a The correlation network of 30 genes selected using the wMI of SNP types between N'Dama and Ogaden breeds. Circles denote genes including SNPs identified by the wMI and dark grey octagons represent the genes annotated in the result of GO analysis. Dark red edges indicate strong positive correlations and dark green edges show strong negative correlations (gene pairs with a correlation coefficient value larger than 0.45 or smaller than -0.25 are connected). b GO analysis on the extracted genes by adjusting thresholds from the constructed correlation network. A vertical line indicates an FDR-adjusted p-value (0.05). c The genotype profiles of the identified genes including ACCN1, CTNNA2, FHIT, and USH2A are shown. The SNP positions of each gene reveal clearly different patterns between N'Dama and Ogaden breeds. Each logo indicates A (A/A), T (T/T), G (G/G), C (C/C), L (A/T), D (A/G), E (A/C), F (G/T), H (C/T), and I (C/G). Upper table shows the types of SNP alleles for each breed. Values in the parentheses represent the numbers of samples with each allele for four genes
In addition, we performed GO analysis with the genes extracted by thresholding the correlation coefficient (larger than 0.8 or smaller than -0.3) in the constructed network. Enriched terms were related to cognitive functions ('learning', 'learning or memory' and 'cognition'), perceptual systems ('sensory perception of sound' and 'sensory perception of mechanical stimulus') and neurological systems ('neurological system process', 'neuromuscular process', 'synaptic transmission', 'sensory perception of mechanical stimulus', and 'transmission of nerve impulse') (FDR-adjusted p-value < 0.05) (Fig. 4b; Additional file 1: Table S5). This result strongly indicates that N'Dama may be distinguished from the other breeds of African cattle by a neurological system related to the startle response, which requires sensory perception and learning or memory as well as the neuromuscular system. Furthermore, Fig. 4c displays the genotype profiles for each SNP position on the above-mentioned four genes. Interestingly, the genotypes of the identified SNPs revealed different patterns between the N'Dama and Ogaden breeds. Genotypes of N'Dama were biased towards homozygosity and were found to be more homogeneous within the population than those of Ogaden.
N'Dama-specific SNPs identified by MI and XP-CLR
In the next step, we identified genes displaying genetic signatures which may have contributed to the development of N'Dama-specific phenotypes. Two gene lists were created, one containing 2,793 genes obtained from MI analysis and the other containing 220 genes from XP-CLR. The 131 genes found in common between these two lists represent a set of functional genes that facilitated the adaptation of N'Dama to the local environment (Additional file 1: Table S6). A correlation network based on the identified genes demonstrated that the genotypes of SNPs in many genes were negatively associated with a single hub gene encoding a general transcription factor, GTF2IRD1 (Fig. 5a). GTF2IRD1 has been intensely studied in brain and embryo due to its involvement in a rare neurodevelopmental disorder, Williams-Beuren syndrome [26]. Chimge et al. [27] observed overexpression of GTF2IRD1 in mouse embryonic fibroblast cells, and reported that GTF2IRD1 regulates many genes that are involved in a variety of biological processes such as immune response, cell cycle, cell signaling and transcriptional regulation. The expression levels of ATOH7, IL1RL2, OASL and OPRD1 changed after GTF2IRD1 overexpression [27]. When a SNP type is defined based on the degree of heterozygosity or homozygosity of SNPs over all samples, the associations of SNP types between GTF2IRD1 and the above-mentioned target genes were also observed in our correlation network using the combined measure of MI and XP-CLR. These results may reflect modified biological interactions of GTF2IRD1 with its target genes in N'Dama as opposed to other African cattle and commercial breeds. In addition, the genotype profiles of this gene showed differences between the N'Dama and Ogaden breeds (Additional file 1: Figure S4).
Genes identified based on the MI and XP-CLR for the N'Dama breed. a The correlation network of 131 genes selected using the MI and XP-CLR of SNP types between N'Dama and Ogaden breeds is presented. Circles denote genes identified by the intersection of MI and XP-CLR and dark grey octagons represent the genes annotated in the result of GO analysis. Dark red edges indicate strong positive correlations and dark green edges strong negative correlations (gene pairs with a correlation coefficient larger than 0.9 or smaller than -0.4 are connected). b GO analysis on the genes extracted by adjusting thresholds from the constructed correlation network. A vertical line indicates an FDR-adjusted p-value (0.05). c The genotype profiles of the identified genes, CALCR, FGF23 and CDK6, for 10 representative SNP positions of each gene clearly show different patterns between N'Dama and Ogaden breeds. Upper table reveals the types of SNP alleles for each breed. Values in the parentheses denote the numbers of samples with each allele for the three genes
We also carried out GO analysis with the genes selected by thresholding the correlation coefficient (larger than 0.97 or smaller than -0.5) in the constructed network. The significantly enriched terms included 'regulation of hormone secretion' and 'regulation of ossification' (FDR-adjusted p-value < 0.05) (Fig. 5b; Additional file 1: Table S7). The ossification-related terms, enriched by genes including CALCR, FGF23 and CDK6, suggest pathways that may provide deeper insight into some aspects of the N'Dama-specific features. In particular, CALCR is a high-affinity receptor for the peptide hormone calcitonin. This receptor is known to be associated with maintaining calcium homeostasis, enhancing calcium excretion by the kidneys, and it also takes part in regulating osteoclast-mediated bone resorption [28]. FGF23 is a regulator of phosphate homeostasis and vitamin-D metabolism, and is reported to negatively regulate osteoblast differentiation and matrix mineralization [29]. Finally, CDK6, a protein kinase, is an important regulator of cell cycle progression. It also prevents myeloid differentiation by interfering with RUNX1, a transcription factor that regulates the differentiation of hematopoietic stem cells into mature blood cells [30]. Furthermore, we identified IL1RL1 and IL1RL2 in the constructed network, in concordance with the observation that the initial response of the host immune system to trypanosome infection involves the activation of macrophages secreting pro-inflammatory molecules such as IL-1 [31, 32]. In particular, it has previously been reported that T. brucei infections lead to an increase in IL-1 secretion [33]. Apart from the GO analysis, we showed that N'Dama and Ogaden possess distinct patterns of homozygosity and heterozygosity for the SNP alleles of CALCR, FGF23 and CDK6 (Fig. 5c).
Taken together, these results indicate that genetic diversification has occurred between N'Dama and Ogaden in the genes related to the regulation of ossification.
N'Dama-specific SNPs identified by MI and XP-EHH
A total of 117 common genes were identified between the list of 2,793 genes from MI and the 239 genes from XP-EHH (Additional file 1: Table S8). The correlation network analysis performed on these genes showed that the genotypes of SNPs in many of them were negatively related to a hub gene, RASAL1 (Fig. 6a). RASAL1 is a member of the Ras GTPase-activating protein family and was recently reported to be a tumor suppressor gene in several types of cancer [34, 35]. The SNP alleles of RASAL1 in N'Dama also represented homozygous types, unlike the Ogaden breed (Additional file 1: Figure S5).
Genes identified based on the MI and XP-EHH for the N'Dama breed. a The correlation network of 117 genes selected using the MI and XP-EHH of SNP types between N'Dama and Ogaden breeds. Circles denote genes identified by the intersection of MI and XP-EHH and dark grey octagons represent the genes annotated in the result of GO analysis. Dark red edges indicate strong positive correlations and dark green edges strong negative correlations (gene pairs with a correlation coefficient larger than 0.9 or smaller than -0.5 are connected). b GO analysis on the genes selected by adjusting thresholds from the constructed correlation network (excluding miRNAs). A vertical line represents an FDR-adjusted p-value (0.05). c The genotype profiles of the identified genes, SP1, SP7 and CARD11, for each SNP position clearly reveal different patterns between N'Dama and Ogaden breeds. Table shows the types of SNP allele for each breed. Values in the parentheses represent the numbers of samples with each allele for the three genes
GO analysis of the genes extracted by thresholding the correlation coefficient (larger than 0.97 or smaller than -0.5) in the constructed network showed a significantly enriched term, 'immune system development' (FDR-adjusted p-value < 0.05) (Fig. 6b; Additional file 1: Table S9). CARD11, FOXP1 and SP1 were significantly overrepresented in 'immune system development'. In particular, CARD11 is critical for signaling in T- and B-lymphocytes in both the innate and adaptive immune system, and it transmits signals from antigen receptors to the transcription factor NF-kB [36, 37]. FOXP1 belongs to subfamily P of the forkhead box (FOX) transcription factor family, which plays important roles in the regulation of tissue- and cell type-specific gene transcription during embryonic development and adulthood. More specific functions of FOXP1 include the regulation of cardiomyocyte proliferation [38], motor neuron development [39], and B-cell development [40]. In addition, similar to the result from the analysis of MI and XP-CLR, ossification-related terms were enriched with significant p-values (modified Fisher exact p-value < 0.05) due to genes including SP1 and SP7 (Additional file 1: Table S9). SP1 is a zinc finger transcription factor involved in many cellular processes including cell differentiation, apoptosis, immune responses, and osteogenic differentiation of dental stem cells [41]. On the other hand, SP7 is a bone-specific transcription factor that is required for the activation of a range of genes during osteoblast differentiation and bone formation [42]. It has also been reported that some SP7-expressing osteoblast precursors travel through the cartilage template and form stromal cells in the bone marrow space in which hematopoiesis occurs [43, 44]. Fig. 6c presents the SNP profiles of the SP1 and SP7 genes in the N'Dama and Ogaden breeds; the two genes showed opposite zygosity patterns in N'Dama and Ogaden.
These results imply that the SNP variants may be involved in differential gene regulation between the N'Dama and Ogaden breeds.
Furthermore, we also observed that the majority of SNPs found in eight miRNAs (bta-miR-369, bta-miR-377, bta-miR-409b, bta-miR-410, bta-miR-412, bta-miR-541, bta-miR-656, and bta-miR-3957) showed homogeneity in SNP variation (Fig. 6a). Notably, these miRNAs are located in close proximity to one another on chromosome 21 between positions 67,598,000 and 67,604,800, and five of them (bta-miR-369, -377, -409b, -410 and -656) are members of the miR-154 family. Human homologs of the miR-154 family were originally reported to be overexpressed in idiopathic pulmonary fibrosis [45]. In addition, recent evidence suggests an association of this miRNA family with bone development. Li et al. [46] reported that expression of miR-410 and miR-154 is decreased in tension-treated adipose-derived mesenchymal stem cells (ADSCs), and that miR-154 inhibits osteogenic differentiation of ADSCs through the WNT/PCP pathway by directly regulating WNT11. This result indicates that the SNP variants may cause differential expression of these miRNAs, which in turn influences the expression of their target genes in the N'Dama and Ogaden breeds.
Identification of N'Dama-specific missense and nonsense mutations
Finally, we examined variation at the protein level by focusing on non-synonymous SNPs, and investigated whether such variation caused any physiological change in N'Dama cattle. N'Dama-specific missense or nonsense variants, with their annotated genomic locations and coding effects for the identified genes, were observed after applying the three measures (Additional file 1: Table S10). All observed missense or nonsense mutations are summarized in Table 1: 20 missense mutations in 15 protein-coding genes, and one nonsense mutation in RANBP17. Many of the annotated genes are associated with immune (C1RL, EOMES and TPST1), nervous (AMZ1, DDX54, EML1, OPCML, SBF2, SLIT3 and USH2A) and cellular metabolic (ACAD9, CDADC1, NOX5 and TIGAR) systems. The gene description and the related function for each gene are shown in Additional file 1: Table S11.
Table 1 List of the identified genes including N'Dama-specific missense and nonsense mutations. AA, amino acid
Moreover, 15 of the 20 missense mutations resulted in alteration of chemical properties. Eleven mutations were located in functional domains, while the remaining nine were in inter-domain regions (Table 1). AMZ1, C1RL and PIK3C2G exhibited multiple protein mutations. Even though these mutations were not found within functional domains, amino acid properties were changed. Notably, C1RL displayed four mutations, all of which resulted in altered amino acid properties. Two of these mutations were located in functional domains, namely the CUB and trypsin-like serine protease domains. Several proteins containing CUB and trypsin-like serine protease domains are associated with complement activation, tissue remodeling and cellular migration. It has been suggested that C1RL is involved in complement pathways during inflammation, although its physiological role is not well understood [47]. We also found one nonsense variant (rs385712825) with a significant p-value (7.82e-11). This SNP was located in RANBP17, a member of the importin-β superfamily of nuclear transport receptors. In human, RANBP17 is the locus of recurrent chromosome 5 breakpoints detected in T-cell acute lymphoblastic leukemia, and the transcriptional activation of this gene occurs during the hematopoietic process with enhancer elements of the TCR delta gene [48].
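The notion of an "altered chemical property" can be made concrete with a coarse side-chain classification. The grouping below is a common textbook convention, not necessarily the scheme used in this study, so the classes and the helper name are illustrative assumptions.

```python
# Hypothetical coarse grouping of the 20 standard amino acids by
# side-chain chemistry (one-letter codes).
AA_CLASS = {
    **{a: "nonpolar" for a in "GAVLIPMFW"},
    **{a: "polar" for a in "STCYNQ"},
    **{a: "positive" for a in "KRH"},
    **{a: "negative" for a in "DE"},
}

def property_changed(ref_aa, alt_aa):
    """True when a missense substitution crosses chemical-property classes."""
    return AA_CLASS[ref_aa] != AA_CLASS[alt_aa]
```

Under this convention, an Asp-to-Lys change (negative to positive) alters chemical properties, while a Leu-to-Ile change (both nonpolar) does not.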
Furthermore, we compared the amino acids encoded at the 20 missense and one nonsense mutation sites in N'Dama with the corresponding amino acids in the reference cow (UMD 3.1), human and mouse (Fig. 7). Interestingly, the amino acid substitutions at the variant positions were detected only in N'Dama, which clearly distinguishes N'Dama from the other cattle breeds and species. This implies that the mutated alleles caused coding changes leading to alterations in the function of the identified genes.
Amino acid substitutions resulting from the missense and nonsense mutations of the genes identified by wMI and by the intersections of MI/XP-CLR and MI/XP-EHH. The 20 missense and one nonsense variants of the identified genes show distinctive amino acid substitutions in N'Dama compared to the reference cow (UMD 3.1), human and mouse
The development of large-scale genetic variant inventories has triggered a number of studies on the identification of distinct genome patterns which give rise to breed-specific traits. For instance, several studies have attempted to detect genetic divergences that are associated with trypanosomiasis in African cattle from genome data [7, 10]. In this study, a genome-wide comparative analysis was performed with SNP data from various cattle breeds, including African indigenous cattle and commercial breeds, in order to identify the genetic signatures of N'Dama.
Comparison of the N'Dama genome with other indigenous African cattle and commercial breeds resulted in the identification of N'Dama-specific SNPs. MI analysis for the detection of breed-specific SNPs successfully distinguished genotypic profiles among Boran, N'Dama and Ogaden. In addition, the combination of either MI and XP-CLR or MI and XP-EHH allowed us to screen positively selected SNPs in the N'Dama genome that are presumed to have arisen during natural and artificial selection. Genetic regions uncovered by XP-EHH and XP-CLR often represent biologically meaningful variation that may explain adaptive traits. Moreover, it is possible to produce larger lists of likely selective sweeps, and as a result, this may allow us to better understand how selection has affected the variation of a specific breed [49]. Some of the positively selected SNPs located in genic regions were unique to N'Dama when compared to commercial breeds and other mammals. Furthermore, some variants in N'Dama were homogeneous, and these N'Dama-specific variants were also detected in the pool of Ogaden genotypes. Ogaden possessed not only a more heterogeneous but also a larger genetic pool than N'Dama. The numbers of detected SNPs were significantly high in some of the chromosomes (p-values less than 1.0e-2), indicating greater selection pressure on these chromosomes during the evolutionary history of N'Dama.
The correlation network is constructed based on the similarity of genotype between genes. If the SNP variation value of a gene is close to 0, the gene possesses genotypes similar to the reference. On the other hand, if the SNP variation of a gene is higher, the gene is likely to possess relatively more heterozygous or alternative homozygous genotypes. An edge between genes in the correlation networks indicates similarity in their genotypes. If the genotype trends of two genes are homo-homo or hetero-hetero, their correlation will be high (close to 1) and the edge will be red, as shown in Figs. 4a, 5a and 6a. Conversely, if the collective genotype is homozygous for one gene and heterozygous for the other, the correlation will be low (close to -1) and the edge will be green. In the correlation network, hub genes connected by negative correlation edges can be interpreted differently from hub genes with positive edges. Since a hub node is usually important in many networks, highly connected hub genes are expected to play a significant role in biological networks [50]. Thus, the hub genes we found are expected to have potential for distinguishing between N'Dama and Ogaden. In particular, the two negative hub genes GTF2IRD1 and RASAL1 can be considered genes with the opposite zygosity to most of the node genes. We speculate that the homo- or heterozygosity of these two genes is likely to play a distinct role from that of other genes, which can be expected to provide an opportunity to formulate potential hypotheses for investigating biological processes.
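The edge rule described above can be sketched as follows. The Pearson correlation is computed over gene-level SNP-variation profiles, and the default thresholds are the example cut-offs quoted for Fig. 4a (other networks in this study use different cut-offs); the function names are illustrative.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def edge_colour(r, pos=0.45, neg=-0.25):
    """Edge rule: red for strong positive correlation, green for strong
    negative correlation, no edge otherwise (thresholds as in Fig. 4a)."""
    if r > pos:
        return "red"
    if r < neg:
        return "green"
    return None
```

Two genes whose variation profiles rise and fall together are joined by a red edge; a gene trending homozygous while its partner trends heterozygous yields a green edge.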
Comparison of genomes among different cattle breeds using wMI identified statistically significant SNPs and the genes in which these SNPs are located. In the analysis based on the wMI approach, many genes of the constructed network and the majority of enriched GO terms indicated that N'Dama may have a distinctive sensory and neurological system related to the startle response (Fig. 4b; Additional file 1: Table S5). Notably, the investigation of the acoustic startle response in terms of brain and genetic mechanisms has revealed the involvement of genetic factors [51, 52]. For example, there is a wide range of responses across inbred strains in rodents [53,54,55]. It is plausible that indigenous African cattle may possess various levels of startle and fear responses. The mammalian startle response is related to the defence system and has played a critical role in the survival of species throughout evolution [56]. In addition, fear has greatly affected the process of animal domestication, especially when animals become frightened of the people who handle them [57]. This implies that unique neuronal circuitries of the startle response and cognition might have played a critical role in the speciation, adaptation, and domestication of N'Dama cattle. Unfortunately, few studies on the neurology of normal or trypanosomiasis-infected N'Dama cattle exist. Hence, the functional consequences and pathogenic relevance of the neurological features regarding trypanotolerance remain to be elucidated. Although we could not directly associate N'Dama-specific neurological features with trypanotolerance, these results may be considered genetic signatures distinguishing N'Dama from the other cattle breeds due to their statistical significance.
While the wMI method extracted statistically significant SNPs by comparing the genomes of different breeds, XP-EHH and XP-CLR detected SNPs that were influenced by positive selection. Both the combined MI and XP-CLR, and MI and XP-EHH analyses identified genes involved in ossification. This may reflect differences in feed efficiency and growth traits between N'Dama and Ogaden, which may result in the smaller skeletal size of N'Dama. Additionally, N'Dama has developed a superior ability to survive in an unfavourable environment, while Ogaden has been selected for better dairy and beef production. In many genome-wide association studies (GWAS) and genomic predictions of feed efficiency and growth traits in commercial beef and dairy cattle, the positive or negative regulation of ossification and bone mineralization is commonly observed in relation to traits like average daily gain or mid-test metabolic weight [58, 59].
Enrichment in the term "ossification" may also indicate physiological difference between N'Dama and Ogaden. Ossification has several functions: for instance, skeletal growth, mineral storage, blood cell production, and energy storage. The genes associated with ossification were implicated in biological process such as calcium homeostasis (CALCR), phosphate homeostasis and vitamin-D metabolism (FGF23), cell cycle progression (CDK6), and the regulation of transcription (SP1 and SP7) involved in multiple functions (osteogenic formation, bone formation, differentiation, apoptosis, and immune response).
According to previous studies on trypanosomiasis, responses to trypanosome infection in cattle include immunosuppression, inflammatory response and anaemia [17, 19, 60]. CARD11, overrepresented in 'immune system development', plays important roles in the innate and adaptive immune system, and contributes to NF-kB activation in various signalling cascades [36, 37]. The activation of NF-kB is known as a determinant of the intracellular survival and tissue tropism of T. cruzi, the causative agent of Chagas disease [61]. This may suggest that CARD11, which affects NF-kB activation, could have undergone functional changes to effectively control the infection of T. brucei. In addition, haematopoietic stem cells (HSCs) in bone marrow give rise to the different types of mature blood cells and immune cells. Our results imply that N'Dama may possess specific genetic factors that confer immunity to suppress the activity of trypanosomes more effectively. A previous genome-wide study performed with West African cattle revealed that genes involved in the immune response were under strong balancing selection in the trypanotolerant N'Dama breed [7], which also supports the implication suggested by our result. Furthermore, bone marrow function and blood cells have been suggested to take part in the development of trypanosomiasis [62,63,64].
By examining the exonic SNPs that result in missense or nonsense mutations (Table 1 and Additional file 1: Table S10), we identified three main biological processes associated with the immune system (Additional file 1: Table S11). All of the mutations were specific to N'Dama cattle compared to other cattle, mouse and human (Fig. 7). Although these mutations will require validation of their functional and physiological consequences in future studies, we suggest that the biological processes related to immunity may be among the strong candidate systems that give rise to trypanotolerance.
In conclusion, our results illustrate that the trypanotolerant N'Dama displays clear genetic differences compared to other African cattle and commercial breeds. The adaptation of N'Dama to its environment may implicate unique bone formation related to growth traits, immunogenetic mechanisms that allow them to tolerate regional diseases including trypanosomiasis, and neurological processes involved in the development of favorable behaviors for survival. Our analysis provides advanced knowledge of the genetic selection of N'Dama and its adaptation to the local environment.
Samples, DNA resequencing and SNP detection
Whole-blood samples (10 ml) were collected from indigenous African cattle (10 Ankole, 10 Boran, 9 Kenana, 10 N'Dama, 9 Ogaden) and commercial cattle (10 Angus, 10 Jersey, 10 Holstein and 23 Hanwoo). DNA was isolated from the whole blood using the G-DEX IIb Genomic DNA Extraction Kit (iNtRoN Biotechnology, Korea), and paired-end reads were generated from the isolated DNA using the Illumina HiSeq 2000. The Covaris System was used to shear 3 μg of genomic DNA into ~300 bp inserts. The fragments of the sheared DNA were end-repaired, polyA-tailed, adaptor-ligated, and amplified using the TruSeq DNA Sample Prep. Kit (Illumina, USA). Paired-end sequencing was performed on the Illumina HiSeq 2000 platform using the TruSeq SBS Kit v3-HS (Illumina, USA) (https://www.illumina.com/documents/products/datasheets/datasheet_hiseq2000.pdf). Finally, sequence data were generated using the Illumina HiSeq system. The details of the data are described in [65, 66].
The quality check was carried out on the 6.50 billion reads (~644 Gbp), derived from the genomes of five indigenous African cattle breeds (Ankole, Boran, Kenana, N'Dama and Ogaden) and four commercial cattle breeds (Angus, Jersey, Holstein and Hanwoo), via the FastQC package (http://www.bioinformatics.babraham.ac.uk/projects/fastqc). The paired-end sequence reads were aligned to UMD 3.1 using Bowtie [67] with the default parameters (except the "--no-mixed" option). The UMD 3.1 reference genome (ftp://ftp.ensembl.org/pub/release-75/fasta/bos_taurus/) from the Ensembl database (release 75) was used as the bovine reference genome for the assembly. The size of the UMD 3.1 reference genome sequence is 2.67 Gb. The overall alignment rate of the reads to the reference genome was 98.84%, with an average read depth of ~10.8X genome coverage. On average across all samples, the reads covered 98.56% of the reference UMD 3.1 genome (Additional file 1: Table S1).
We used Picard (http://broadinstitute.github.io/picard/) and SAMtools [68] for downstream processing and variant calling. Potential PCR duplicates were filtered using Picard (the "REMOVE_DUPLICATES=true" option in "MarkDuplicates"), and the index files for the reference and bam files were generated with SAMtools. We also conducted local multiple sequence realignment to correct misalignments caused by the presence of INDELs ("RealignerTargetCreator" and "IndelRealigner") and called candidate SNPs ("UnifiedGenotyper" and "SelectVariants") using GATK 3.1 [69]. After the variants were called and exported into the variant call format (VCF), we filtered the variants to minimize false positives ("VariantFiltration"). The variants were filtered with the following options: QUAL (Phred-scaled quality score) < 30; MQ0 (the number of reads with a mapping quality of zero) > 4; QD (variant confidence/quality by depth) < 5; and FS (Phred-scaled p-value using Fisher's exact test) > 200. BEAGLE [70] was used to impute missing genotypes and infer haplotype phases. Finally, we obtained ~37 million SNPs (Additional file 1: Table S2).
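The hard-filter thresholds listed above can be expressed as a small predicate. This is an illustrative re-statement of the cut-offs, not the GATK VariantFiltration implementation; the dict-based INFO-field representation and the behaviour for missing fields are assumptions.

```python
def passes_hard_filter(qual, info):
    """Mirror the hard-filter thresholds applied after variant calling:
    a variant is filtered out when QUAL < 30, MQ0 > 4, QD < 5 or FS > 200.

    qual: Phred-scaled quality score of the variant record.
    info: dict of INFO fields; fields absent from the record do not
          trigger filtering (an assumption of this sketch).
    """
    if qual < 30:
        return False
    if info.get("MQ0", 0) > 4:
        return False
    if info.get("QD", float("inf")) < 5:
        return False
    if info.get("FS", 0) > 200:
        return False
    return True
```
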
We additionally genotyped 45 African cattle samples (for which blood samples were available) using the BovineSNP50 Genotyping BeadChip (Illumina, USA). After filtering out SNPs with a GenCall score below 0.7, loci common to the SNP chip and the DNA resequencing data were extracted and examined to assess concordance (Additional file 1: Table S3).
Moreover, we performed enrichment analysis to detect significant breed-specific SNPs using SnpSift [71], focusing on the non-synonymous SNPs (MISSENSE and NONSENSE). SnpSift CaseControl counts the genotypes present for two factors, and a p-value is then calculated using the Fisher exact and Cochran-Armitage trend tests. In general, one of the factors is the genetic model, which can be dominant, recessive, or co-dominant. The other is the breed information, which in this study was used to identify breed-specific enriched SNPs. Accordingly, we constructed 2-by-2 (dominant or recessive coding / breed-specific group information: the specific breed, N'Dama, versus the others) or 2-by-3 (co-dominant coding / breed-specific group information) contingency tables, and performed the Fisher exact and Cochran-Armitage trend tests for the 2-by-2 and 2-by-3 contingency tables, respectively. A total of 37,363,436 SNPs were tested, and we used Bonferroni correction for multiple testing. After identifying significant N'Dama breed-specific enriched SNPs, we annotated each SNP using snpEff (Table 1 and Additional file 1: Table S10).
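As an illustration of the 2-by-2 test described above, the sketch below computes a one-sided Fisher exact p-value from standard-library combinatorics and applies the Bonferroni correction over all tested SNPs. The genotype counts are hypothetical, and the published analysis used SnpSift rather than custom code of this form.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value P(X >= a) for the 2x2 table
    [[a, b], [c, d]] under the hypergeometric null distribution."""
    n, row1, row2, col1 = a + b + c + d, a + b, c + d, a + c
    return sum(comb(row1, x) * comb(row2, col1 - x)
               for x in range(a, min(row1, col1) + 1)
               if 0 <= col1 - x <= row2) / comb(n, col1)

# Hypothetical counts: rows = minor-allele carrier / non-carrier (dominant
# coding), columns = N'Dama / other breeds.
p = fisher_exact_one_sided(9, 2, 1, 33)

# Bonferroni correction over all 37,363,436 tested SNPs
p_adjusted = min(1.0, p * 37_363_436)
print(p, p_adjusted)
```

Note that with tens of millions of tests, even a very small raw p-value can become non-significant after Bonferroni correction, which is why only strongly breed-enriched SNPs survive.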
To effectively represent breed-specific SNP variation, all SNP alleles of the samples are converted into binary values, 0 or 1. '0' denotes the major allele at a SNP position across all samples, while '1' represents any minor allele, regardless of which allele it is. This biallelic representation explicitly characterizes the ratio of major to minor alleles at each SNP position per breed, thus allowing breed-specific SNPs to be discovered effectively. Specifically, the allele of the i-th SNP is transformed as follows:
$$ SNP_i = \begin{cases} 0, & \text{if } SNP_i^{*} = Major(i) \\ 1, & \text{otherwise} \end{cases} $$
where SNP_i^* is the i-th SNP allele and Major(i) is the most frequent allele at the i-th SNP position across all cattle samples. Within each breed, the values 0 and 1 at a SNP position denote "conserved" and "mutated", respectively.
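The conversion above can be sketched as follows; the function and sample data are illustrative, not the authors' published code.

```python
from collections import Counter

def binarize_snps(allele_matrix):
    """Convert each SNP column to 0 (major allele) / 1 (any minor allele).
    allele_matrix[s][i] is the allele of sample s at SNP position i."""
    n_snps = len(allele_matrix[0])
    # Most frequent allele per position across all samples
    majors = [Counter(sample[i] for sample in allele_matrix).most_common(1)[0][0]
              for i in range(n_snps)]
    return [[0 if sample[i] == majors[i] else 1 for i in range(n_snps)]
            for sample in allele_matrix]

# Three hypothetical samples genotyped at two SNP positions
samples = [["A", "T"], ["A", "C"], ["G", "C"]]
print(binarize_snps(samples))  # [[0, 1], [0, 0], [1, 0]]
```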
Mutual information analysis
Information-theoretic measures have emerged as a useful way to quantify the dependencies among genetic variables [72]. In particular, the mutual information (MI) of two random variables is an entropy-based metric for measuring their mutual dependency [73]. Several studies have used MI to analyze biological phenomena; however, most have been applied to gene expression data [74,75,76,77,78]. This study proposes a hybrid approach that combines MI with statistical methods to detect breed-specific SNPs from large-scale genome sequences.
In genetic association studies, MI can be used to measure the dependencies between genetic factors and phenotypes by defining genetic features and phenotypic classes as random variables. Extracting the discriminative genetic variations from tens of millions of SNPs can be framed as finding the distinctive variables in a very large variable set. Given a set of SNP position variables X = {x_1, …, x_n} and a breed class variable y, we define a function F(X; y) that selects variables by measuring the associations between SNP positions and breed classes:
$$ X^{*} = F(X; y) = \bigcup_{i,j} f(x_i, x_j, y) $$
$$ \mathrm{s.t.}\quad f(x_i, x_j, y) = \begin{cases} \{(x_i, x_j)\}, & \text{if } MIE(x_i, x_j, y) > \theta \\ \varnothing, & \text{otherwise} \end{cases} $$
where X^* is the set of selected SNP variable pairs, x_i and x_j denote two SNP variables on a chromosome, and θ is the threshold for selecting SNP variables. MIE denotes a mutual information estimator.
When the two random variables SNP and C denote a genetic variable and a phenotypic class, respectively, the value set of SNP consists of its possible alleles, and the value set of C is defined as {N'Dama, other cattle}. The MI I(SNP; C) quantifies the reduction in the uncertainty of the phenotypic class C due to the information contained in the genetic variation of SNP:
$$ I(SNP; C) = H(SNP) - H(SNP \mid C) $$
$$ \mathrm{s.t.}\quad H(SNP) = -\sum_{snp \in SNP} p(snp) \log p(snp), \quad H(SNP \mid C) = H(SNP, C) - H(C) $$
where H(SNP) is the entropy of SNP. H(SNP|C) denotes the conditional entropy of SNP for a given C, and it can be found using the chain rule. Thus, by the definition of the entropy H, the MI can be reformulated with the joint probability distribution p(SNP, C) as follows:
$$ I(SNP; C) = \sum_{snp \in SNP} \sum_{c \in C} p(snp, c) \log \frac{p(snp, c)}{p(snp)\, p(c)} $$
I(SNP; C) is nonnegative and is only zero when p(SNP, C) = p(SNP)p(C), indicating that there is no association between SNP and C. Intuitively, then, MIEs can be used for measuring the main effect of a genetic variable SNP on the breed C.
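The plug-in estimate of I(SNP; C) from the joint distribution above can be sketched as follows; the per-sample data are hypothetical and this is not the authors' published code.

```python
from math import log2
from collections import Counter

def mutual_information(snp_values, classes):
    """Empirical (plug-in) estimate of I(SNP; C) in bits from paired
    per-sample observations."""
    n = len(snp_values)
    joint = Counter(zip(snp_values, classes))
    p_snp, p_c = Counter(snp_values), Counter(classes)
    # Sum p(snp, c) * log2( p(snp, c) / (p(snp) * p(c)) ) over observed pairs
    return sum((k / n) * log2((k / n) / ((p_snp[s] / n) * (p_c[c] / n)))
               for (s, c), k in joint.items())

# Hypothetical binarized SNP that perfectly separates the two classes:
# MI equals H(C) = 1 bit, the maximum for a two-class variable.
snp   = [0, 0, 1, 1]
breed = ["N'Dama", "N'Dama", "other", "other"]
print(mutual_information(snp, breed))  # 1.0

# An independent SNP gives MI = 0
print(mutual_information([0, 1, 0, 1], breed))  # 0.0
```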
In this study, we calculated conditional mutual information (conditional MI) to quantify the associations among three or more variables as the MIE function and to measure the influence of two-locus haplotypes on the breeds. Conditional MI is defined as follows:
$$ I(C; SNP_1 \mid SNP_2) = I(C; SNP_1, SNP_2) - I(C; SNP_2). $$
We defined I(C; SNP_1, SNP_2) as an MIE. MIEs can be obtained via the chain rule for MI:
$$ I(C; SNP_1, SNP_2) = I(C; SNP_1 \mid SNP_2) + I(C; SNP_2), $$
$$ \mathrm{s.t.}\quad I(C; SNP_1 \mid SNP_2) = \sum_{s_2 \in SNP_2} \sum_{s_1 \in SNP_1} \sum_{c \in C} p_{C, SNP_1, SNP_2}(c, s_1, s_2) \log \frac{p_{SNP_2}(s_2)\, p_{C, SNP_1, SNP_2}(c, s_1, s_2)}{p_{C, SNP_2}(c, s_2)\, p_{SNP_1, SNP_2}(s_1, s_2)} $$
The MIE quantifies the associations between SNPs at two loci and breeds. I(C; SNP_1 | SNP_2) is also nonnegative and becomes zero when no dependency exists among the three variables. This property makes the method suitable for identifying distinct two-locus haplotypes that determine the phenotype of cattle.
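The chain-rule computation above can be sketched as follows; the XOR-like example data are hypothetical and this is not the authors' published code.

```python
from math import log2
from collections import Counter

def mi(x, y):
    """Empirical (plug-in) mutual information I(X; Y) in bits."""
    n = len(x)
    joint, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

def conditional_mi(classes, snp1, snp2):
    """I(C; SNP1 | SNP2) via the chain rule: I(C; SNP1, SNP2) - I(C; SNP2)."""
    return mi(list(zip(snp1, snp2)), classes) - mi(snp2, classes)

# Hypothetical XOR-like pattern: neither SNP alone predicts the breed,
# but the two-locus haplotype determines it completely.
breed = [0, 0, 1, 1]
snp1  = [0, 1, 0, 1]
snp2  = [0, 1, 1, 0]
print(conditional_mi(breed, snp1, snp2))  # 1.0 bit
```

This is exactly the situation the conditional MI targets: each SNP is individually uninformative (MI = 0), yet the pair carries a full bit of information about the breed.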
The weighted MI between the i-th gene and the breed variable C (wMI) is defined by interpolating between the number of SNPs annotated to the gene and the mean MI of the gene:
$$ wMI_i = \alpha\, \overline{I}(g_i; C) + (1 - \alpha) \frac{|g_i|}{\max_{g \in G} |g|} $$
$$ \overline{I}(g_i; C) = \frac{1}{|g_i|} \sum_{SNP \in g_i} I(SNP; C) $$
where g_i is the set of SNPs annotated to the i-th gene and α is a constant that balances the two factors. A gene receives a larger wMI when it contains more SNPs and when the mean MI between its SNPs and the breed variable is higher.
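The wMI formula can be sketched as below; the gene names, per-SNP MI values, and the choice α = 0.5 are all illustrative (the paper does not state the value of α it used).

```python
def weighted_mi(gene_snp_mis, alpha=0.5):
    """wMI_i = alpha * (mean MI of gene i) + (1 - alpha) * |g_i| / max_g |g|.
    gene_snp_mis maps a gene name to the I(SNP; C) values of its SNPs.
    alpha = 0.5 is an illustrative choice, not the paper's setting."""
    max_size = max(len(mis) for mis in gene_snp_mis.values())
    return {gene: alpha * sum(mis) / len(mis)
                  + (1 - alpha) * len(mis) / max_size
            for gene, mis in gene_snp_mis.items()}

# Hypothetical per-SNP MI values for two genes
scores = weighted_mi({"geneA": [0.8, 0.6, 0.7], "geneB": [0.9]})
print(scores)  # geneA: 0.5*0.7 + 0.5*1.0 = 0.85; geneB: 0.5*0.9 + 0.5/3
```

Here geneA outscores geneB despite a lower mean MI, because the size term rewards genes carrying more annotated SNPs.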
XP-CLR and XP-EHH tests
We performed cross-population composite likelihood ratio (XP-CLR) and cross-population extended haplotype homozygosity (XP-EHH) tests to detect selective pressures in the N'Dama and Ogaden cattle. XP-CLR scores were computed using XP-CLR 1.0 (https://reich.hms.harvard.edu/software) to observe selective sweeps by modeling the multi-locus allele frequency differentiation between two populations [21, 49]. We used non-overlapping sliding windows of 50 kb, a maximum of 600 SNPs per window, and down-weighted the contribution of SNPs with a correlation level above 0.95 to the XP-CLR results. Regions with XP-CLR scores in the top 1% of the empirical distribution (XP-CLR > 224.2) were designated as candidate sweeps in the N'Dama and Ogaden breeds (Additional file 1: Table S12).
In addition, we used XP-EHH to identify loci under selection based on a comparison of genome-wide SNP genotypes between populations. XP-EHH scores were calculated using the xpehh software (http://hgdp.uchicago.edu/Software/) to detect alleles that have increased in frequency to the point of fixation or near-fixation in one of the populations; that is, it detects SNPs under selection in one population but not in the other, so extreme XP-EHH scores point to selection in a particular population. XP-EHH scores are also directional: a positive score suggests that selection occurred in population A, while a negative score indicates that selection probably occurred in population B [21, 79]. The genome was divided into non-overlapping segments of 50 kb to facilitate the comparison of genomic regions across populations, and the maximum XP-EHH score of all SNPs in each segment was then calculated. To account for SNP frequency, we binned genomic windows by their numbers of SNPs in increments of 500 SNPs. Within each bin, the empirical p-value of window i is defined as the fraction of windows with a statistic value greater than that of i [21, 80]. A positive XP-EHH value indicates selection in the N'Dama, whereas a negative score signifies selection in the Ogaden.
We selected regions with positive XP-EHH scores at empirical p-values below 1%, which can be considered strong signals in the N'Dama breed (Additional file 1: Table S13). Finally, the genomic regions selected by the XP-CLR and XP-EHH tests were annotated to the closest genes (UMD 3.1). Genes that partially or completely span the window regions (−25 to +25 kb) were defined as candidate genes.
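The per-bin empirical p-value procedure described above can be sketched as follows; the window data are hypothetical and this is not the authors' published code.

```python
from bisect import bisect_right
from collections import defaultdict

def empirical_p_values(windows, bin_size=500):
    """windows: list of (n_snps, max_xpehh_score), one entry per 50-kb
    segment. Windows are binned by SNP count in increments of bin_size;
    within each bin, the empirical p-value of a window is the fraction of
    windows in that bin with a strictly greater score."""
    bins = defaultdict(list)
    for idx, (n_snps, score) in enumerate(windows):
        bins[n_snps // bin_size].append((idx, score))
    p = [0.0] * len(windows)
    for members in bins.values():
        scores = sorted(s for _, s in members)
        m = len(members)
        for idx, s in members:
            # count of windows in this bin scoring strictly higher
            p[idx] = (m - bisect_right(scores, s)) / m
    return p

# Hypothetical (SNP count, max XP-EHH score) per window
windows = [(120, 1.0), (180, 2.5), (260, 0.4), (710, 3.1)]
pvals = empirical_p_values(windows)
print(pvals)
```

Binning by SNP count before ranking avoids penalizing sparse windows, whose maximum score is drawn from fewer SNPs.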
Construction of gene interaction networks based on genetic variations between the breeds
A gene correlation network characterizes the correlated variation of genes across cattle breeds. The patterns of genetic variation based on the converted SNP alleles, which distinguish the cattle breeds, are used to build the gene-gene interaction networks. The networks are constructed from the annotated genes and their quantitative gene-level variation as follows:
(1) We select the genes at a significance level of p-value < 1.0e-3 with respect to wMI, the intersection of MI and XP-CLR, and the intersection of MI and XP-EHH.
(2) The allele pair at each selected SNP is converted into a three-level value (0, 1 or 2) reflecting variation status by summing the pair. The converted SNP value is 0 or 2 when the allele pair at a position is major homozygous or alternative homozygous, respectively; when the pair is heterozygous, the value is 1. For example, suppose the alleles at a SNP position are "AA", "AT", or "TT", and "A" is the major allele; the SNP value at this position is then converted into 0, 1 or 2, respectively, for each sample. This value is defined as the SNP variation of each sample.
(3) The selected SNPs are annotated to the genes in which they are located.
(4) We compute the mean of the SNP values calculated in (2) for each gene. This mean value is defined as the variation of the gene.
(5) We calculate the Pearson correlation coefficient of all gene pairs from the gene variations of the cattle breed samples computed in (4):
$$ Corr(g_i, g_j) = \frac{Cov(g_i, g_j)}{\sigma_i \sigma_j} $$
where g_i and g_j denote the i-th and j-th gene variations, σ_i is the standard deviation of g_i, and Cov(g_i, g_j) is the covariance of g_i and g_j.
(6) A gene corresponds to a node, and two genes with a significant correlation coefficient are connected to each other.
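The correlation and edge-construction steps can be sketched as follows; the gene names, variation values, and the 0.9 threshold are illustrative, not the authors' published code or settings.

```python
from statistics import mean, stdev

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

def gene_network_edges(gene_variation, threshold=0.9):
    """Connect gene pairs whose per-sample gene variations (mean 0/1/2 SNP
    coding per gene) are strongly correlated. The threshold is illustrative;
    the paper selects both positive and negative thresholds."""
    genes = list(gene_variation)
    edges = []
    for i, gi in enumerate(genes):
        for gj in genes[i + 1:]:
            r = pearson(gene_variation[gi], gene_variation[gj])
            if abs(r) >= threshold:
                edges.append((gi, gj, round(r, 3)))
    return edges

# Hypothetical gene variations across four samples
variation = {"geneA": [0.0, 1.0, 2.0, 1.5],
             "geneB": [0.1, 1.1, 2.1, 1.6],   # tracks geneA closely
             "geneC": [2.0, 0.5, 1.0, 0.0]}
edges = gene_network_edges(variation)
print(edges)  # [('geneA', 'geneB', 1.0)]
```

Each surviving pair becomes an edge between two gene nodes, as in step (6).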
To investigate N'Dama-specific traits, including trypanotolerance, the gene interaction networks were constructed from the N'Dama and Ogaden breeds, and positive and negative thresholds were selected for connecting two genes. We implemented the code ourselves using the SciPy package for Python 2.7 to calculate correlation coefficients between the extracted genes, and used Cytoscape 3.2.1 for network visualization.
Finally, we conducted functional analysis of the genes in the constructed networks using the Database for Annotation, Visualization and Integrated Discovery (DAVID) ver. 6.7 (https://david-d.ncifcrf.gov/tools.jsp) [81] to statistically determine over-representation of GO categories. GO analyses were carried out with the default parameters in DAVID: GO level "all", a count threshold (the minimum number of genes for the corresponding GO term) of 2, and an EASE threshold of 0.1. The EASE score is a modified, more conservative version of the Fisher exact p-value [82]. We also used the FDR to correct for multiple testing.
ADSCs:
Adipose-derived mesenchymal stem cells
GWAS:
Genome-wide association study
HSCs:
Haematopoietic stem cells
KL-divergence:
Kullback–Leibler divergence
MI:
Mutual information
miR:
MicroRNA
SNP:
Single-nucleotide polymorphism
VCF:
Variant call format
wMI:
Weighted mutual information
WNT/PCP:
WNT/planar cell polarity
XP-CLR:
Cross-population composite likelihood ratio
XP-EHH:
Cross-population extended haplotype homozygosity
Rege JEO. The state of African cattle genetic resources I. Classification framework and identification of threatened and extinct breeds. Anim Genet Res Inf. 1999;25:1–25.
Mwai O, Hanotte O, Kwon YJ, Cho S. African indigenous cattle: unique genetic resources in a rapidly changing world. Asian-Australas J Anim Sci. 2015;28:911–21.
Mattioli RC, Pandey VS, Murray M, Fitzpatrick JL. Immunogenetic influences on tick resistance in African cattle with particular reference to trypanotolerant N'Dama (Bos taurus) and trypanosusceptible Gobra zebu (Bos indicus) cattle. Acta Trop. 2000;75:263–77.
Kugonza DR, Nabasirye M, Mpairwe D, Hanotte O, Okeyo AM. Productivity and morphology of Ankole cattle in three livestock production systems in Uganda. Anim Genet Res Inf. 2011;48:13–22.
Yousif IA, Fadlelmoula AA. Characterisation of Kenana cattle breed and its production environment. FAO Anim Genet Res Inf. 2006;38:47–56.
Mackay TF, Stone EA, Ayroles JF. The genetics of quantitative traits: challenges and prospects. Nat Rev Genet. 2009;10:565–77.
Gautier M, Flori L, Riebler A, Jaffrézic F, Laloé D, Gut I, et al. A whole genome Bayesian scan for adaptive genetic divergence in West African cattle. BMC Genomics. 2009;10:550.
Murray GG, Woolhouse M, Tapio M, Mbole-Kariuki MN, Sonstegard TS, Thumbi SM, et al. Genetic susceptibility to infectious disease in East African Shorthorn Zebu: a genome-wide analysis of the effect of heterozygosity and exotic introgression. BMC Evol Biol. 2013;13:246.
Decker JE, McKay SD, Rolf MM, Kim J, Molina Alcalá A, Sonstegard TS, et al. Worldwide patterns of ancestry, divergence, and admixture in domesticated cattle. PLoS Genet. 2014;10:e1004254.
Smetko A, Soudre A, Silbermayr K, Müller S, Brem G, Hanotte O, et al. Trypanosomosis: potential driver of selection in African cattle. Front Genet. 2015;6:137.
Bahbahani H, Clifford H, Wragg D, Mbole-Kariuki MN, Tassell CV, Sonstegard T, et al. Signatures of positive selection in East African Shorthorn Zebu: A genome-wide single nucleotide polymorphism analysis. Sci Rep. 2015;5:11729.
Hoare CA. The Trypanosomes of Mammals: A Zoological Monograph. Oxford: Blackwell Scientific; 1972.
Murray M, Morrison W, Whitelaw D. Host susceptibility to African trypanosomiasis: trypanotolerance. Adv Parasitol. 1982;21:1–68.
Roelants G, Fumoux F, Pinder M, Queval R, Bassinga A, Authie E. Identification and selection of cattle naturally resistant to African trypanosomiasis. Acta Trop. 1987;44:55–66.
Murray M, Trail J, Davis C, Black S. Genetic resistance to African trypanosomiasis. J Infect Dis. 1984;149:311–9.
Coleman C. Cutaneous streptothricosis of cattle in West Africa. Vet Rec. 1967;81:251–4.
Hanotte O, Ronin Y, Agaba M, Nilsson P, Gelhaus A, Horstmann R, et al. Mapping of quantitative trait loci controlling trypanotolerance in a cross of tolerant West African N'Dama and susceptible East African Boran cattle. Proc Natl Acad Sci U S A. 2003;100:7443–8.
Orenge CO, Munga L, Kimwele C, Kemp S, Korol A, Gibson J, et al. Expression of trypanotolerance in N'Dama x Boran crosses under field challenge in relation to N'Dama genome content. BMC Proc. 2011;5:S23.
Noyes H, Brass A, Obara I, Anderson S, Archibald AL, Bradley DG, et al. Genetic and expression analysis of cattle identifies candidate genes in pathways responding to Trypanosoma congolense infection. Proc Natl Acad Sci U S A. 2011;108:9304–9.
Orenge CO, Munga L, Kimwele C, Kemp S, Korol A, Gibson JP, et al. Trypanotolerance in N'Dama x Boran crosses under natural trypanosome challenge: effect of test-year environment, gender, and breed composition. BMC Genet. 2012;13:87.
Pickrell JK, Coop G, Novembre J, Kudaravalli S, Li JZ, Absher D, et al. Signals of recent positive selection in a worldwide sample of human populations. Genome Res. 2009;19:826–37.
Kellenberger S, Schild L. International Union of Basic and Clinical Pharmacology. XCI. structure, function, and pharmacology of acid-sensing ion channels and the epithelial Na + channel. Pharmacol Rev. 2015;67:1–35.
Park C, Falls W, Finger JH, Longo-Guess CM, Ackerman SL. Deletion in Catna2, encoding alpha N-catenin, causes cerebellar and hippocampal lamination defects and impaired startle modulation. Nat Genet. 2002;31:279–84.
Saldivar JC, Shibata H, Huebner K. Pathology and biology associated with the fragile FHIT gene and gene product. J Cell Biochem. 2010;109:858–65.
Reiners J, Nagel-Wolfrum K, Jürgens K, Märker T, Wolfrum U. Molecular basis of human Usher syndrome: deciphering the meshes of the Usher protein network provides insights into the pathomechanisms of the Usher disease. Exp Eye Res. 2006;83:97–119.
van Hagen JM, van der Geest JN, van der Giessen RS, Lagers-van Haselen GC, Eussen HJ, Gille JJ, et al. Contribution of CYLN2 and GTF2IRD1 to neurological and cognitive symptoms in Williams Syndrome. Neurobiol Dis. 2007;26:112–24.
Chimge NO, Mungunsukh O, Ruddle F, Bayarsaihan D. Expression profiling of BEN regulated genes in mouse embryonic fibroblasts. J Exp Zool B Mol Dev Evol. 2007;308:209–24.
Davey RA, Turner AG, McManus JF, Chiu WS, Tjahyono F, Moore AJ, et al. Calcitonin receptor plays a physiological role to protect against hypercalcemia in mice. J Bone Miner Res. 2008;23:1182–93.
Ohnishi M, Razzaque MS. Osteo-renal cross-talk and phosphate metabolism by the FGF23-Klotho system. Contrib Nephrol. 2013;180:1–13.
Fujimoto T, Anderson K, Jacobsen SE, Nishikawa SI, Nerlov C. Cdk6 blocks myeloid differentiation by interfering with Runx1 DNA binding and Runx1-C/EBPalpha interaction. EMBO J. 2007;26:2361–70.
Baral TN. Immunobiology of African trypanosomes: need of alternative interventions. Biomed Res Int. 2010;23:2010.
Duxbury R, Sadun E, Wellde B, Anderson J, Muriithi I. Immunization of cattle with x-irradiated African trypanosomes. Trans R Soc Trop Med Hyg. 1972;66:349–50.
Sileghem M, Darji A, Hamers R, De Baetselier P. Modulation of IL-1 production and IL-1 release during experimental trypanosome infections. Immunology. 1989;68:137.
Ohta M, Seto M, Ijichi H, Miyabayashi K, Kudo Y, Mohri D, et al. Decreased expression of the RAS-GTPase activating protein RASAL1 is associated with colorectal tumor progression. Gastroenterology. 2009;136:206–16.
Liu D, Yang C, Bojdani E, Murugan AK, Xing M. Identification of RASAL1 as a major tumor suppressor gene in thyroid cancer. J Natl Cancer Inst. 2013;105:1617–27.
Pomerantz JL, Denny EM, Baltimore D. CARD11 mediates factor-specific activation of NF-kappaB by the T cell receptor complex. EMBO J. 2002;21:5184–94.
Hara H, Wada T, Bakal C, Kozieradzki I, Suzuki S, Suzuki NM, et al. The MAGUK family protein CARD11 is essential for lymphocyte activation. Immunity. 2003;18:763–75.
Wang Y, Morrisey E. Regulation of cardiomyocyte proliferation by Foxp1. Cell Cycle. 2010;9:4251–2.
Adams KL, Rousso DL, Umbach JA, Novitch BG. Foxp1-mediated programming of limb-innervating motor neurons from mouse and human embryonic stem cells. Nat Commun. 2015;6:6778.
Fuxa M, Skok JA. Transcriptional regulation in early B cell development. Curr Opin Immunol. 2007;19:129–36.
Felthaus O, Viale-Bouroncle S, Driemel O, Reichert TE, Schmalz G, Morsczeck C. Transcription factors TP53 and SP1 and the osteogenic differentiation of dental stem cells. Differentiation. 2012;83:10–6.
Long F. Building strong bones: molecular regulation of the osteoblast lineage. Nat Rev Mol Cell Biol. 2011;13:27–38.
Maes C, Kobayashi T, Selig MK, Torrekens S, Roth SI, Mackem S, et al. Osteoblast precursors, but not mature osteoblasts, move into developing and fractured bones along with invading blood vessels. Dev Cell. 2010;19:329–44.
Ono N, Ono W, Nagasawa T, Kronenberg HM. A subset of chondrogenic cells provides early mesenchymal progenitors in growing bones. Nat Cell Biol. 2014;16:1157–67.
Milosevic J, Pandit K, Magister M, Rabinovich E, Ellwanger DC, Yu G, et al. Profibrotic role of miR-154 in pulmonary fibrosis. Am J Respir Cell Mol Biol. 2012;47:879–87.
Li J, Hu C, Han L, Liu L, Jing W, Tang W, et al. MiR-154-5p regulates osteogenic differentiation of adipose-derived mesenchymal stem cells under tensile stress through the Wnt/PCP pathway by targeting Wnt11. Bone. 2015;78:130–41.
Lin N, Liu S, Li N, Wu P, An H, Yu Y, et al. A novel human dendritic cell-derived C1r-like serine protease analog inhibits complement-mediated cytotoxicity. Biochem Biophys Res Commun. 2004;321:329–36.
Bernard OA, Busson-LeConiat M, Ballerini P, Mauchauffé M, Della Valle V, Monni R, et al. A new recurrent and specific cryptic translocation, t(5;14)(q35;q32), is associated with expression of the Hox11L2 gene in T acute lymphoblastic leukemia. Leukemia. 2001;15:1495–504.
Chen H, Patterson N, Reich D. Population differentiation as a test for selective sweeps. Genome Res. 2010;20:393–402.
Langfelder P, Mischel PS, Horvath S. When Is Hub Gene Selection Better than Standard Meta-Analysis? PLoS One. 2013;8:e61505.
Andraso GM. A comparison of startle response in two morphs of the brook stickleback (Culaea inconstans): further evidence for a trade-off between defensive morphology and swimming ability. Evol Ecol. 1997;11:83–90.
Hale ME, Long Jr JH, McHenry MJ, Westneat MW. Evolution of behavior and neural control of the fast-start escape response. Evolution. 2002;56:993–1007.
Glowa JR, Hansen CT. Differences in response to an acoustic startle stimulus among forty-six rat strains. Behav Genet. 1994;24:79–84.
Willott JF, Tanner L, O'Steen J, Johnson KR, Bogue MA, Gagnon L. Acoustic startle and prepulse inhibition in 40 inbred strains of mice. Behav Neurosci. 2003;117:716–27.
Balogh SA, Wehner JM. Inbred mouse strain differences in the establishment of long-term fear memory. Behav Brain Res. 2003;140:97–106.
Gogan P. The startle and orienting reactions in man. A study of their characteristics and habituation. Brain Res. 1970;18:117–35.
Hemsworthlt PH, Barnett JL, Coleman GJ. The human-animal relationship in agriculture and its consequences for the animal. Anim Welf. 1993;2:33–51.
Saatchi M, Schnabel RD, Taylor JF, Garrick DJ. Large-effect pleiotropic or closely linked QTL segregate within and across ten US cattle breeds. BMC Genomics. 2014;15:442.
Saatchi M, Beever JE, Decker JE, Faulkner DB, Freetly HC, Hansen SL, et al. QTLs associated with dry matter intake, metabolic mid-test weight, growth and feed efficiency have little overlap across 4 beef cattle studies. BMC Genomics. 2014;15:1004.
O'Gorman GM, Park SDE, Hill EW, Meade KG, Coussens PM, Agaba M, et al. Transcriptional profiling of cattle infected with Trypanosoma congolense highlights gene expression signatures underlying trypanotolerance and trypanosusceptibility. BMC Genomics. 2009;10:207.
Hall BS, Tam W, Sen R, Pereira ME. Cell-specific activation of nuclear factor-κB by the parasite Trypanosoma cruzi promotes resistance to intracellular infection. Mol Biol Cell. 2000;11:153–60.
Dargie JD, Murray PK, Murray M, Grimshaw WR, McIntyre WI. Bovine trypanosomiasis: the red cell kinetics of N'Dama and Zebu cattle infected with Trypanosoma congolense. Parasitology. 1979;78:271–86.
Amole BO, Clarkson Jr AB, Shear HL. Pathogenesis of anemia in Trypanosoma brucei-infected mice. Infect Immun. 1982;36:1060–8.
Mabbott N, Sternberg J. Bone marrow nitric oxide production and development of anemia in Trypanosoma brucei-infected mice. Infect Immun. 1995;63:1563–6.
Kim J, Hanotte O, Mwai OA, Dessie T, Bashir S, Diallo B, et al. The genome landscape of indigenous African cattle. Genome Biol. 2017;18:34.
Taye M, Kim J, Yoon SH, Lee W, Hanotte O, Dessie T, et al. Whole genome scan reveals the genetic signature of African Ankole cattle breed and potential for higher quality beef. BMC Genet. 2017;18:11.
Li R, Fan W, Tian G, Zhu H, He L, Cai J, et al. The sequence and de novo assembly of the giant panda genome. Nature. 2009;463:311–7.
Nekrutenko A, Taylor J. Next-generation sequencing data interpretation: enhancing reproducibility and accessibility. Nat Rev Genet. 2012;13:667–72.
Browning SR, Browning BL. Rapid and accurate haplotype phasing and missing-data inference for whole-genome association studies by use of localized haplotype clustering. Am J Hum Genet. 2007;81:1084–97.
Cingolani P, Patel VM, Coon M, Nguyen T, Land SJ, Ruden DM, et al. Using Drosophila melanogaster as a model for genotoxic chemical mutational studies with a new program, SnpSift. Front Genet. 2012;3:35.
Anastassiou D. Computational analysis of the synergy among multiple interacting genes. Mol Syst Biol. 2007;3:83.
Cover TM, Thomas JA. Elements of information theory. 2nd ed. New York: Wiley; 2006.
Zhang X, Zhao XM, He K, Lu L, Cao Y, Liu J, et al. Inferring gene regulatory networks from gene expression data by path consistency algorithm based on conditional mutual information. Bioinformatics. 2012;28:98–104.
Giorgi FM, Lopez G, Woo JH, Bisikirska B, Califano A, Bansal M. Inferring protein modulation from gene expression data using conditional mutual information. PLoS One. 2014;9:e109569.
Villaverde AF, Ross J, Morán F, Banga JR. MIDER: network inference with mutual information distance and entropy reduction. PLoS One. 2014;9:e96732.
Wang YX, Huang H. Review on statistical methods for gene network reconstruction using expression data. J Theor Biol. 2014;362:53–61.
Barman S, Kwon YK. A novel mutual information-based Boolean network inference method from time-series gene expression data. PLoS One. 2017;12:e0171097.
Sabeti PC, Varilly P, Fry B, Lohmueller J, Hostetter E, Cotsapas C, et al. Genome-wide detection and characterization of positive selection in human populations. Nature. 2007;449:913–8.
Granka JM, Henn BM, Gignoux CR, Kidd JM, Bustamante CD, Feldman MW. Limited evidence for classic selective sweeps in African populations. Genetics. 2012;192:1049–64.
Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009;4:44–57.
Hosack DA, Dennis G, Sherman BT, Lane HC, Lempicki RA. Identifying biological themes within lists of genes with EASE. Genome Biol. 2003;4:R70.
This work was supported by Cooperative Research Program for Agriculture Science & Technology Development (No. PJ01040603), Rural Development Administration (RDA), Republic of Korea for the design of the study and data collection. Also, it was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. 2016R1D1A1B03935676), Republic of Korea for the data analysis and interpretation of the study.
Sequences are available from GenBank with the Bioproject accession numbers PRJNA312138 (African cattle), PRJNA318087 (Angus), PRJNA210521 (Holstein), PRJNA318089 (Jersey) and PRJNA210523 (Korean native cattle, Hanwoo).
SJK designed the experiment and method, performed the analysis of genome data, and drafted the manuscript. SK (S. Ka) carried out biological interpretation from the results and wrote the manuscript. JWH designed and implemented the method and wrote the methods in the manuscript. JK, KK and DL analysed the data. DAY and SC wrote and corrected the manuscript. HKL, OH, OAM, TD, and SK (S. Kemp) collected samples, generated data from the sample, and contributed to interpretation of the results. OSJ and HK supervised and managed the whole study. All authors read and approved the final manuscript.
Collection of blood samples was performed in accordance with the guidelines given by the relevant agricultural institutions (Ol Pejeta Conservancy, Kenya (Ankole); International Livestock Research Institute, Kapiti Ranch (Boran); Ministry of Animal Resources, Fisheries and Range, Sudan (Kenana); Direction Nationale de l'Élevage, Guinea (N'Dama); Institute of Biodiversity, Ethiopia (Ogaden)). All methods involving animal works were approved by the Institutional Animal Care and Use Committee of the National Institute of Animal Science in Korea under approval numbers 2012-C-005 (National Institute of Animal Science, Korea (Holstein); National Institute of Animal Science and Kyungpook National University, Korea (Hanwoo)) and NIAS-2014-093 (National Institute of Animal Science, Korea (Angus and Jersey)). Blood samples from African indigenous cattle were collected after obtaining the consent from the local authorities and owners of the animals.
Department of Agricultural Biotechnology and Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul, 08826, Republic of Korea
Soo-Jin Kim
, Sojeong Ka
& Heebal Kim
C&K Genomics, Seoul National University Research Park, Seoul, 151-919, Republic of Korea
, Jaemin Kim
, DongAhn Yoo
, Kwondo Kim
, Seoae Cho
Clova, NAVER Corp., Seongnam, 13561, Republic of Korea
Jung-Woo Ha
Interdisciplinary Program in Bioinformatics, Seoul National University, Seoul, 08826, Republic of Korea
DongAhn Yoo
Department of Animal Biotechnology, Chonbuk National University, Jeonju, 66414, Republic of Korea
Hak-Kyo Lee
Division of Animal Genomics and Bioinformatics, National Institute of Animal Science, RDA, Jeonju, 55365, Republic of Korea
Dajeong Lim
University of Nottingham, School of Life Sciences, Nottingham, NG7 2RD, UK
Olivier Hanotte
International Livestock Research Institute, Addis Ababa, Ethiopia
& Tadelle Dessie
International Livestock Research Institute, Box 30709-00100, Nairobi, Kenya
Okeyo Ally Mwai
& Stephen Kemp
The Centre for Tropical Livestock Genetics and Health, The Roslin Institute, University of Edinburgh, Easter Bush Campus, Edinburgh, Scotland, UK
Stephen Kemp
National Institute of Animal Science, RDA, Wanju, 55365, Republic of Korea
Sung Jong Oh
Correspondence to Sung Jong Oh or Heebal Kim.
Supplementary figures and tables. (PDF 2201 kb)
Cattle genome
Trypanotolerant N'Dama
Genetic signatures
Comparative genome-wide analysis
Non-human and non-rodent vertebrate genomics | CommonCrawl |
How might SpinLaunch actually spin something fast enough to launch it into orbit?
update: TechCrunch's "SpinLaunch spins up a $35M round to continue building its space catapult" is worth a read and contains this cool photo.
Ars Technica's Edition 1.34 of the Rocket Report! says:
SpinLaunch signs deal with Spaceport America. Spaceport America has announced that SpinLaunch has signed a lease to conduct tests at the facility in southern New Mexico and that the company will invest up to $7 million in facilities there, Parabolic Arc reports. The company considered several locations for the test site, but the New Mexico-based site provided the best mix of affordability and location.
A novel approach ... SpinLaunch is developing a kinetic-energy launch system that would spin in a circle at up to 5,000 miles per hour before it is released to fly to space. The system would not use any propellants, and the company has reportedly raised $40 million in venture-capital funding. We're intrigued but will remain skeptical until we see some test flights. (submitted by Ken the Bin)
That's 2222 meters/sec so I'm guessing they are only talking about building a suborbital demo? Or does it have a propulsive "2nd stage"?
The Wikipedia article SpinLaunch doesn't say much about how this is going to work:
SpinLaunch intends to develop a space launch technology that aims to reduce dependency on traditional chemical rockets. Instead, a novel technology will use a large centrifuge to store energy and will then rapidly transfer that energy into a catapult to send a payload to space at up to 4,800 kilometres per hour (3,000 mph). If successful, the acceleration concept is projected to be both lower cost and use much less power, with the price of a single space launch reduced to under US$500,000.[2] The speed required to maintain Low Earth orbit is 27,000 kilometres per hour (17,000 mph).
The last sentence is a bit unusual as it seems to be a disconnected factoid, as if it wants to remind us that the company's numbers are deeply sub-orbital without coming out and saying "their current speed is way too low to go to orbit!"
Their website doesn't seem to address the issue either.
Is there any engineering information out there on the feasibility of spinning something to orbital launch velocity while on the ground and then letting it go? I don't need the blueprints, but at least an informed discussion or educated speculation.
$\begingroup$ I note that at no point in the article is the word "orbit" used. $\endgroup$ – Russell Borogove Jan 29 '19 at 16:10
$\begingroup$ Even if they launched at 8km/s the payload would try to return to the launch point, without some kind of circularization burn at apogee. $\endgroup$ – Russell Borogove Jan 29 '19 at 16:26
$\begingroup$ It's not clear if their catapult is mechanical or electromagnetic. They might just be using a flywheel instead of the more common capacitor banks to power some form of railgun or coilgun. $\endgroup$ – Steve Linton Jan 29 '19 at 16:46
$\begingroup$ I've quoted this a couple of times before, but: "Many novel launch schemes need some amount of help from rockets. What kills a lot of them is doing a tradeoff study of just enlarging the rocket part and getting rid of the non-rocket part. Surprisingly often, that works out to be better and cheaper." --Henry Spencer $\endgroup$ – Russell Borogove Jan 29 '19 at 17:57
$\begingroup$ @RussellBorogove That quote about novel launch schemes could not be quoted too often. $\endgroup$ – Uwe Jan 29 '19 at 20:35
There is lots of information on spinning things fast. The main problem is that at high speeds, the centrifugal force exceeds the tensile strength of the material.
The Bloodhound SSC team ran into this limit when designing the wheels for their car. At 1600 km/h, the wheel rims (with a diameter of 900 mm) experience 50,000 G. SpinLaunch wants to go 5 times faster than that?
Smaller objects can go faster: you can get ultracentrifuges that operate at 1 MG.
There's also a balance problem. An ultracentrifuge has to be finely balanced, or it'll break up. When you launch an object from a spinning contraption, your contraption instantly becomes unbalanced and starts wobbling.
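As a sanity check on the Bloodhound figure, the centripetal acceleration follows directly from $a = v^2/r$. A minimal sketch (the 0.45 m rim radius and 1600 km/h speed are the numbers quoted above; treating the rim speed as equal to the road speed is an assumption):

```python
# Centripetal acceleration at the rim of a spinning wheel: a = v^2 / r
v = 1600 / 3.6   # rim speed, m/s (1600 km/h)
r = 0.45         # rim radius, m (900 mm diameter wheel)
g = 9.81         # standard gravity, m/s^2

a = v ** 2 / r   # centripetal acceleration, m/s^2
print(f"{a / g:,.0f} g")  # roughly 45,000 g
```

The quoted 50,000 G is slightly higher, presumably because the designed rim speed exceeds the car's road speed, but the order of magnitude checks out.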
Hobbes
$\begingroup$ To avoid imbalance, two objects of equal mass should be launched simultaneously in opposite directions. One up into the sky and the other one down into a deep hole in the ground. $\endgroup$ – Uwe Jan 29 '19 at 18:41
$\begingroup$ "a novel technology will use a large centrifuge to store energy and will then rapidly transfer that energy into a catapult to send a payload to space" No launch of an object from a spinning contraption but from a catapult driven by the spinning wheel. $\endgroup$ – Uwe Jan 29 '19 at 19:57
$\begingroup$ A catapult that will take the form of an arm that rotates on an axis. An arm that has to reach 5000 mph according to their claim. So, an object that rotates at 5000 mph. $\endgroup$ – Hobbes Jan 29 '19 at 20:03
$\begingroup$ But there may be linear catapults too, for instance aircraft catapults used on aircraft carriers. $\endgroup$ – Uwe Jan 29 '19 at 20:25
$\begingroup$ I think you are mixing up tangential velocity (mph) with angular velocity (rpm). If angular velocity is fixed, centrifugal force scales linearly with radius, thus smaller is better. For SpinLaunch, tangential velocity is fixed, and centrifugal force scales inversely with radius, so the much longer (I assume) catapult arm will not be under such high centrifugal force. $\endgroup$ – Lex Jan 31 '19 at 2:13
One thing I wondered about is whether this idea is plausible at all. I think it's pretty clearly not for reasons I'll go into below, but the initial question is can you make something strong enough to do what you want to do ignoring practical considerations?
So, first of all let's consider a simplified thing: two equal masses connected by some kind of light cable being spun, and at some point you'll let go of one of the masses (and deal with the other one, and the cable, somehow...) The question is whether you can make the cable strong enough.
Let the masses be $m$, the cable have length $2r$, and the angular velocity of the thing be $\omega$. The masses are moving with speed $v = r\omega$, and the centripetal acceleration is $a = r\omega^2$. So the tension in the cable is
$$T = m r \omega^2 = \frac{mv^2}{r}$$
Let the tensile strength of the cable be $u$; then the strength of the cable is $\pi u d^2/4$, where $d$ is the diameter of the cable.
So we can rearrange this to get $d$, which is the interesting thing: we need $d$ to be really small otherwise our approximation goes horribly wrong as the cable is not light and you have to do harder sums.
So in the light-cable approximation then you get:
$$d \ge 2v\sqrt{\frac{m}{\pi r u}}$$
(I have convinced myself that this is OK dimensionally, anyway).
So, let's assume you want to give something escape velocity, and you're going to use carbon nanotubes to make the cable. Let's assume:
$m = 1\,\mathrm{kg}$;
$r$ = $100\,\mathrm{m}$, so the diameter of the thing is going to be $200\,\mathrm{m}$, which I'm assuming is the largest structure you can plausibly build and protect (see below);
$v = 1.2\times 10^4\,\mathrm{ms^{-1}}$ (a bit over escape velocity for the Earth: orbital velocity is less of course, but it's not that much less);
$u = 10^{10}\,\mathrm{Pa}$, which is perhaps plausible.
So this gives
$$d \ge 1.35\,\mathrm{cm}$$
So, well you could probably build such a thing, but I'm pretty sure the 'light cable' assumption is wrong and you'd have to take account of the mass of the cable. This might kill you, but my intuition is it won't.
One additional thing we can work out (thanks to Christopher James Huff for pointing out that I probably should) is what the centripetal acceleration of the thing is just before launch. From the expressions $v = r\omega$ and $a = r\omega^2$ it's easy to get $a$ in terms of $v$ and $r$:
$$a = \frac{v^2}{r}$$
This shows why larger structures are better, but also why higher launch velocities are bad news. For our proposed $100\,\mathrm{m}$ radius launcher, at escape velocity, we get $a \approx 1.4\times 10^6\,\mathrm{ms^{-2}} \approx 145000\,g$, where $g$ is the acceleration due to gravity. The object we're launching is going to have to be very, very tough.
Things get better the larger you make the structure, because the acceleration goes down as it gets bigger. But I think there are practical limits to how big you can make the structure. In particular if the cable breaks just before launch then the objects you are about to launch will hit the structure at roughly escape velocity. For my $1\,\mathrm{kg}$ masses the energy you need to absorb is $1.4\times 10^8\,\mathrm{J}$, which is the equivalent of about $34\,\mathrm{kg}$ of TNT. And you probably want to launch substantially more than that mass.
Indeed, when you let go the mass you want to launch then you have to deal with the other mass anyway. If you want to launch a tonne, then you have to deal with something equivalent to exploding $17\times 10^3\,\mathrm{kg}$ of TNT. This is equivalent to a large conventional bomb (an earlier version of this answer compared it to the Trinity test because I got kilogrammes & tonnes confused when thinking about it: it's nowhere near that).
This is why I assume you can't build a really large structure: if you want to launch a significant mass then you need to deal with something equivalent to the explosion of a very large conventional bomb happening inside the structure, anywhere. This has to be a really substantial structure, and building a really large one will be very, very expensive.
Note that this sort of thing is a problem for any kinetic-energy launch system: if you are going to launch a mass $m$ at velocity $v$ then it's going to have energy of $mv^2/2$ at the point of launch, and you need be ready to dissipate that energy if it is released really abruptly. Of course a rocket-based system also has to deal with dissipating all the energy stored in the fuel, but fuel explosions are a lot less abrupt than something hitting you, and they also have the advantage that the object causing the trouble is moving relatively slowly so you can reliably predict where the trouble will be.
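The numbers above are easy to reproduce. A minimal sketch under the same assumptions ($1\,\mathrm{kg}$ masses, $100\,\mathrm{m}$ radius, $10^{10}\,\mathrm{Pa}$ cable, release at $1.2\times 10^4\,\mathrm{ms^{-1}}$; the energy figure counts both masses, since letting go of one leaves you to deal with the other):

```python
from math import pi, sqrt

m = 1.0        # mass of each spun object, kg
r = 100.0      # spin radius (half the cable length), m
v = 1.2e4      # release speed, m/s (a bit over escape velocity)
u = 1e10       # assumed cable tensile strength, Pa
g = 9.81       # standard gravity, m/s^2
TNT = 4.184e6  # energy released per kg of TNT, J

d = 2 * v * sqrt(m / (pi * r * u))  # minimum cable diameter, light-cable limit
a = v ** 2 / r                      # centripetal acceleration just before release
E = m * v ** 2                      # kinetic energy of both masses: 2 * (m v^2 / 2)

print(f"cable diameter >= {100 * d:.2f} cm")      # ~1.35 cm
print(f"acceleration   ~  {a / g:,.0f} g")        # ~147,000 g
print(f"stored energy  ~  {E / TNT:.0f} kg TNT")  # ~34 kg TNT
```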
Why I think the whole idea is silly in practice
Quite apart from the fact that doing anything serious with this involves containing explosions equivalent to large conventional bombs and building payloads which can withstand tens or hundreds of thousands of gravities of acceleration, there is a question of what happens to the object you have launched. In particular this object is travelling at escape velocity through dense atmosphere. I'm not competent to do the sums, but I imagine that this is just catastrophic: how much energy does it lose? How much faster do you have to launch it as a result, making everything even worse? How hot does it get, and what do you have to make it out of to ensure it can survive? What happens to anything near the launch site?
I think it's all just mad: this whole idea is a silly toy.
tfb
$\begingroup$ Something that extracts tens of millions of dollars from other people's pockets is an effective silly toy. Thanks for your analysis, this certainly does sound daunting! $\endgroup$ – uhoh Jan 23 '20 at 0:20
$\begingroup$ @uhoh: yes, it may be a very effective way of moving money from people to other people: it's just not an effective way of getting to space! $\endgroup$ – tfb Jan 23 '20 at 10:07
$\begingroup$ @uhoh: in fact I have just founded the tfb super-magnetic-em-drive-kinetic-energy launch company (it lives in a drawer of my desk) and we're about to get $10 million of seed funding. My purchase of a large property in central London is entirely unrelated. $\endgroup$ – tfb Jan 23 '20 at 12:17
$\begingroup$ It's worth noting that the payload in your example is accelerating at around 150000 gravities, probably more than any reasonable payload container capable of orbital rendezvous can handle, never mind the payload itself. This decreases inversely with radius, which...isn't enough. As I mentioned in a comment to another answer, even SpinLaunch's much lower launch velocity results in extreme accelerations, while requiring the projectile to do most of the work in getting to orbit via rocket propulsion. $\endgroup$ – Christopher James Huff Jan 25 '20 at 16:41
$\begingroup$ @LeoS: I was intentionally assuming special magic cables which are ten times as good as anything we can currently make to avoid the 'oh, but advances in technology will fix all our problems' response. $\endgroup$ – tfb Jan 26 '20 at 12:19
Jacques Drèze
Jacques H. Drèze (5 August 1929[3] – 25 September 2022) was a Belgian economist noted for his contributions to economic theory, econometrics, and economic policy as well as for his leadership in the economics profession. Drèze was the first President of the European Economic Association in 1986 and was the President of the Econometric Society in 1970.
Jacques H. Drèze
Born: 5 August 1929, Verviers, Belgium[1]
Died: 25 September 2022 (aged 93)
Academic career
Institutions: Center for Operations Research and Econometrics, Université catholique de Louvain; University of Chicago (1963–1968); Cornell University (1971–1977)
Fields: Economic theory; Economic policy; Econometrics; Statistics; Operations research
School or tradition: Mathematical economics
Alma mater: Université de Liège (Licencié); Columbia University (PhD)
Doctoral advisor: William Vickrey
Doctoral students: Franz Palm
Influences: Kenneth J. Arrow; Robert J. Aumann; Edmond Malinvaud; Franco Modigliani; Leonard Jimmie Savage
Contributions: Drèze equilibrium with price rationing; Disequilibrium macroeconomics and econometrics; Policies to combat unemployment; Public economics; Bayesian simultaneous equations
Awards: President, Econometric Society (1970); President, International Economic Association (1996–1999); Inaugural President, European Economic Association (1985–1986); Walras-Bowley Lecture (1976); Yrjö Jahnsson Lecturer (1983); Foreign Honorary Member of the American Economic Association; Foreign Associate, National Academy of Sciences; Foreign Honorary Member, American Academy of Arts and Sciences; Foreign Member, Royal Dutch Academy of Arts and Sciences; Corresponding Fellow, British Academy (1990)[2]
Information at IDEAS / RePEc
Jacques Drèze was also the father of five sons. One son is the economist, Jean Drèze, who is known for his work on poverty and hunger in India (some of which has been in collaboration with Amartya K. Sen); another son, Xavier Drèze, was a professor of Marketing at UCLA.
Contributions to economics
Drèze's contributions to economics combine policy-relevance and mathematical techniques.
Indeed, models basically play the same role in economics as in fashion: they provide an articulated frame on which to show off your material to advantage ... ; a useful role, but fraught with the dangers that the designer may get carried away by his personal inclination for the model, while the customers may forget that the model is more streamlined than reality.[4]
Economics of uncertainty and insurance
Between games of strategy and games against nature, there remains a middle ground where uncertainties are partially controllable by the decision-maker—situations labelled "games of strength and skill" by von Neumann and Morgenstern, or "moral hazard" in subsequent work. Such problems of moral hazard have been discussed by Jacques Drèze in his dissertation, leading to the 1961 paper (8), whose analysis was generalized in 1987 (76), and simplified in 2004 (123). Drèze's theory allows for preferences depending on the state of the environment. Rational behaviour is again characterised by subjective expected utility maximisation, where the utility is state-dependent, and the maximisation encompasses the choice of an optimal subjective probability from an underlying feasible set.
With reference to state-dependent preferences and moral hazard, a natural application of long-standing interest to economists concerns the provision of safety, for instance through road investments that are aimed at saving lives. In this area, Jacques Drèze introduced in 1962 (12) the "willingness-to-pay" approach, which is now widely adopted. That approach rests on individual preferences aggregated as per the theory of public goods. The willingness to pay approach thus fits squarely in economic theory.
The work of Jacques Drèze on the economics of uncertainty through the mid-eighties is collected in his volume of Essays on Economic Decisions under Uncertainty (B2), published in 1987.
The book is organised in seven parts, covering successively decision theory, market allocation, consumption, production, the firm under incomplete markets, labor and public decisions. Under market allocation comes an important paper (21) on the interpretation and properties of the general equilibrium model pioneered in Arrow (1953). The more significant piece in the next part is a classic paper with Franco Modigliani on savings and portfolio choice under uncertainty (28). There follow three papers on industry equilibrium (17, 42, 62).
In 1975 Drèze contributed "the first general equilibrium analysis of quantity rationing necessitated by prices failing to adjust to equate supply and demand", which "have introduced significant elements of realism to the basic model, . . . given insight, and . . . have had a major impact on subsequent work. For example, the fix-price approach to Keynesian macroeconomics . . . grows largely out of the Drèze paper."[5]
In the early seventies, motivated by the potential role of price rigidities for enhancing risk-sharing efficiency, Jacques Drèze undertook to define equilibria with price rigidities and quantity constraints and to study their properties in a general equilibrium context. His 1975 paper (36, circulated in 1971) introduces the so-called "Drèze equilibrium" at which supply (resp. demand) is constrained only when prices are downward (resp. upward) rigid, whereas a preselected commodity (e.g. money) is never rationed. Existence is proved for arbitrary bounds on prices, through an original approach repeatedly used ever since. That paper is a widely cited classic. It was followed by several others (51, 55, 63, 75), exploring properties of the new concept. Of particular significance to future developments is a joint paper with Pierre Dehez (55), which establishes the existence of Drèze equilibria with no rationing of the demand side. These are called "supply-constrained equilibria". They correspond to the empirically relevant macroeconomic situations.
Macroeconomic consequences of microeconomics
In the meantime, Jean-Pascal Bénassy (1975) and Yves Younès (1975) had approached the same problem from a macroeconomic angle, for the more restrictive case of fixed prices. There developed a lively interest in fixed price economies, and specifically in a three-good macroeconomic model, first formulated by Robert Barro and Herschel Grossman (1971) and then studied extensively by Edmond Malinvaud (1977). That model invited empirical estimation. The new statistical challenges posed by "disequilibrium econometrics" were attacked at CORE by two students of Jacques Drèze, namely Henri Sneessens (1981) and Jean-Paul Lambert (1988). Following a joint paper by Drèze and Sneessens (71), a major project (the European Unemployment Program) directed by Jacques Drèze and Richard Layard led to estimation of a common disequilibrium model in ten countries (B4, 93, 94). The results of that successful effort were to inspire policy recommendations in Europe for several years.[6]
The next steps in the theoretical research came with the work of John Roberts on supply-constrained equilibria at competitive prices, and then with the dissertation of Jean-Jacques Herings at Tilburg (1987, 1996). In both cases, there appear results on existence of a continuum of Drèze equilibria.
Following the work of Roberts and Herings, Drèze (113) proved existence of equilibria with arbitrarily severe rationing of supply. Next, in a joint paper with Herings and others (132), Drèze established the generic existence of a continuum of Pareto-ranked supply-constrained equilibria for a standard economy with some fixed prices.
An intuitive explanation of that surprising result is this: if some prices are fixed and the remaining are flexible, the level of the latter prices relative to the former introduces a degree of freedom that accounts for the multiplicity of equilibria; globally, less rationing is associated with a higher price level; the multiplicity of equilibria thus formalises a trade-off between inflation and unemployment, comparable to a Phillips curve. In this analysis, the continuum is interpreted as reflecting co-ordination failures, not short-run price dynamics à la Phillips. The fact that price-wage rigidities can sustain co-ordination failures adds a new twist to explanations of involuntary unemployment. At the same time, multiple equilibria create problems for the definition of expectations, and introduce a new dimension of uncertainty.
Starting with a paper in Econometrica by Dierker, Guesnerie and Neuefeind (1985), a theory of general equilibrium has developed for economies with non-convex production sets, where firms follow well-defined pricing rules. In particular, existence theorems of increasing generality cover (to some extent, because of various differences in assumptions) the case of Ramsey-Boiteux pricing. Those interested primarily in applications might express skepticism, perhaps even horrified skepticism, upon realizing that 90 pages of a serious economics journal—a 1988 issue of The Journal of Mathematical Economics---were devoted to existence proofs of equilibrium in non-convex economies, under alternative formulations of the assumption that marginal cost pricing entails bounded losses at normalized prices. Still, I think that economic research must cover the whole spectrum from concrete applications to that level of abstraction.[7]
Theory of the firm
Drèze gave a public lecture on "Human Capital and Risk Bearing" (48). The innovative idea here is the transposition of the reasoning underlying the theory of "implicit labour contracts" to the understanding of wage rigidities and unemployment benefits. When markets are incomplete, so that workers cannot insure the risks associated with their future terms of employment, competitive clearing of spot labour markets is not second-best efficient: wage rigidities cum unemployment benefits offer scope for improvement.
The lecture develops this theme informally. The conclusion, stated with specific reference to labour markets, has more general validity. It applies to any situation where the uninsurable uncertainty about future prices results in welfare costs. Even though price rigidities entail a loss of productive efficiency, this can be more than offset by a gain of efficiency in risk-sharing. What may be specific to the labour market is the realistic possibility of controlling (minimum) wages and organising unemployment compensation. The analysis implies that the claim that wage flexibility is efficient requires qualification.
For "price-wage rigidities", the presence of rigidities receives an explanation in Section Seven of Drèze's lecture: Under incomplete markets, wage rigidities contribute to risk-sharing efficiency. The theme of the 1979 lecture (48) is taken up in several papers (91, 95, 101), exploring the definition and implementation of second-best wage rigidities.
Since then, Jacques Drèze has examined ways of reconciling flexibility of labour costs to firms with risk-sharing efficiency of labour incomes, if needed through wage subsidies (119, 125, 131).
Labor managed firms
Disequilibrium
Drèze has suggested that research needs both to search for "microeconomic foundations for macroeconomics" and to consider the "macroeconomic consequences of microeconomics", and Drèze has contributed chiefly to the latter project.
Econometrics and the European Unemployment Programme
Two young French economists, Jean-Pascal Bénassy (1975) and Yves Younès (1975), approached the same problem from a macroeconomic angle, for the more restrictive case of fixed prices. There developed a lively interest in fixed-price economies, and specifically in a three-good macroeconomic model, first formulated by Robert Barro and Herschel Grossman (1971) and then studied extensively by Edmond Malinvaud (1977).
That model invited empirical estimation. The new statistical challenges posed by "disequilibrium econometrics" were attacked at CORE by two students of Jacques Drèze, namely Henri Sneessens (1981) and Jean-Paul Lambert (1988), whose dissertations were published and widely read. Drèze and Sneessens proposed and estimated a disequilibrium model of Belgium's open economy (71). This model became the prototypical model estimated by the European Unemployment Programme, which under the guidance of Drèze and Richard Layard developed similar models for ten countries (B4, 93, 94). The results of that successful effort were to inspire policy recommendations in Europe for several years.[6]
Following the emergence of European unemployment in the 1970s, Jacques Drèze worked with Franco Modigliani on macroeconomic policies. There resulted a paper (56), which contains some methodological innovations (an early formulation of the "union-wage model", and Bayesian synthesis of classical estimates from several models). It also contains an innovative discussion of work sharing, a topic to which Drèze returned in (73).
In the 1980s and early 1990s, Drèze wrote about the policy front, campaigning for two-sided policies of demand stimulation and supply-side restructuring (100). With Edmond Malinvaud, Drèze organized a group of thirteen Belgian and French economists who wrote "Growth and employment: the scope for a European initiative" (103, 104): This position paper advocated an ambitious program of public investments coupled with elimination of social security contributions by employees on minimum wages. That paper has influenced the programs of reduced contributions on low wages introduced recently in several countries, especially France and Belgium.
The logic of these two-handed policies stands out more sharply in the light of the work on co-ordination failures (124, section 6). These failures are more naturally remedied through demand stimulation. But the failures are apt to be recurrent, so that deficit spending could lead to continued growth of the public debt. Accordingly, demand stimulation should take the form of socially profitable investments, with returns covering the debt service. Substituting profitable investments and variable social security contributions for deficit spending and straight wage rigidities, the proposed two-handed policies differ from either orthodox Keynesianism or New Keynesian policies.
I am impressed by the depth and breadth of knowledge that a serious public economist dreams of commanding. The methodological spectrum includes at one end practical and institutional aspects of public utility pricing, taxation or health care provision, which give the field its substantive content. The real problems encountered in these and many other areas offer scope for the general equilibrium mathematical analysis of second-best policies. At the far end of the spectrum is abstract modelling of economies with non-convex technologies or uncertainty and incomplete markets. Confronted by this spectrum, duly illustrated here, I feel neither despairing nor resigned to narrow specialization, but probably over-extended.[7]
Bayesian econometrics of simultaneous equations
See also: Bayesian statistics, Simultaneous equation methods (econometrics), Structural equation modeling, and Arnold Zellner
One important by-product of the theory of rational decisions under uncertainty has been the emergence of the Bayesian approach to statistics, which views problems of statistical decision as no different from other decision problems, and problems of statistical inference as concerned with the revision of subjective probabilities on the basis of observations.
Bayesian analysis of structural econometric models raises specific difficulties, linked to the so-called "identification problem", readily illustrated by a single market: we observe prices and quantities at the intersection of supply and demand, whereas we wish to estimate the demand and supply curves. The development of suitable Bayesian methods for this problem followed circulation in 1962 of a discussion paper by Drèze,[8] fully developed in several subsequent papers (34, 39, 41, 61). The "Drèze Prior" is introduced in (39).
Leadership
Jacques Drèze has been involved in helping to found several institutions that have strengthened economic research in Europe, notably the Center for Operations Research and Econometrics (CORE), the European Doctoral Program in Quantitative Economics (EDP) and the European Economic Association (EEA).
CORE was created in 1966, and rapidly grew into a leading research centre of international significance. Jacques Drèze was the instigator, the organiser, the first Director and a long-time President of CORE. His outside connections were critical in gathering outside support and in attracting foreign members or visitors.
As expressed by Robert Aumann, CORE is "a unique breeding ground; a place where cross-fertilisation leads to the conception of new ideas, as well as a womb – a warm, supportive environment in which these ideas can grow and mature". The research output at CORE since 1966 consists to date of some 110 books, 125 doctoral dissertations, 1700 published articles; Discussion Papers now average 85 per year.
Also, CORE has served as a model, emulated in other European countries, often at the hands of former CORE members or visitors: Bonn, GREQAM in Marseille, CentER at Tilburg and Delta in Paris.
Doctoral study and the European Doctoral Program in Quantitative Economics
It is also at CORE, and again at the initiative of Jacques Drèze, that EDP was conceived in 1975. Two ideas came together:
• An institution should not organise its own doctoral program if it cannot do as well as leading institutions elsewhere.
• Education for research is greatly enhanced if students attend at least two institutions, being thereby led to hear contrasting opinions and form their own!
These ideas were realised under EDP, where several universities organise a joint doctoral program, with all students attending at least two institutions and having access to supervisors from both. Some 120 students have graduated under this program, which again has been emulated by others in Europe.
European Economic Association
In 1985 the EEA was conceived by Jean Gabszewicz and Jacques Thisse, both of CORE. The first secretary was CORE's Louis Phlips and Jacques Drèze was the first President. Today the EEA sponsors the Journal of the European Economic Association (JEEA), holds annual meetings, and organizes summer schools for young researchers.
Personal biography
Born in Verviers (Belgium) in 1929, Jacques Drèze studied economics at the nearby Université de Liège, then took a PhD at Columbia University, with a thesis on "Individual Decision Making under Partially Controllable Uncertainty" supervised by William Vickrey. After a first academic job at Carnegie Mellon University in Pittsburgh, he joined the Université Catholique de Louvain in 1958 and remained there—apart from visiting appointments at Northwestern University, the University of Chicago, and Cornell University—until his retirement from teaching and administration in 1989. After his nominal retirement, he remained active in research.
In 1980 he became Foreign Member of the Royal Netherlands Academy of Arts and Sciences.[9]
Jacques Drèze had five sons, including the economist and anti-hunger activist Jean Drèze, who has collaborated on three books with Amartya K. Sen. His first son, Benoît Drèze, is a Belgian politician. Another son, Xavier Drèze, was a marketing professor at UCLA.[10]
Drèze died on 25 September 2022, at the age of 93.[11][12]
Notes
1. "From uncertainty to macroeconomics and back: An interview with Jacques Drèze, by Omar Licandro and Pierre Dehez" (PDF). Archived from the original (PDF) on 3 July 2007. Retrieved 11 September 2011.
2. "Professor Jacques Dreze".
3. "Drèze, Jacques H. - Social Networks and Archival Context".
• JHD. "(Uncertainty and) The Firm in General Equilibrium Theory". The Economic Journal, Vol. 95, Supplement: Conference Papers (1985), pp. 1–20.
4. John Roberts, "Equilibrium without Market Clearing", page 147, in Cornet and Tulkens.
5. Dehez 2006.
6. Drèze, Jacques H. (1995). "Forty years of public economics: A personal perspective". Journal of Economic Perspectives. Vol. 9, no. 2. pp. 111–130.
7. Bauwens, Luc; van Dijk, Herman K. (1990). "Bayesian Limited Information Analysis Revisited". Economic decision-making : games, econometrics, and optimisation : contributions in honour of Jacques H. Drèze. Amsterdam: North-Holland. pp. 385–424. ISBN 0-444-88422-X.
8. "Jacques Drèze". Royal Netherlands Academy of Arts and Sciences. Archived from the original on 14 September 2018. Retrieved 17 July 2015.
9. Five sons are listed in the dedication to
• JHD. 1989. Labour Management, Contracts and Capital Markets: A General Equilibrium Approach. [1983 Yrjö Jahnsson Lectures]. Basil Blackwell
10. Belgische topeconoom Jacques Drèze overleden (26 September 2022) tijd.be (in Dutch)
11. "On aurait besoin d'un Jacques Drèze dans cette crise" (in French). L'Echo. 2 October 2022. Retrieved 5 October 2022.
Bibliography
Books by Jacques Drèze
These enumerated citations and comments were based on the curriculum vitae of Jacques Drèze (2009-03-06):
• 1. Allocation under Uncertainty: Equilibrium and Optimality (Ed.), Macmillan, London, 1974.
• 2. Essays on Economic Decisions under Uncertainty, Cambridge University Press, Cambridge, 1987.
• Twenty reprinted papers, organised under 7 headings: individual decision theory, markets and prices, consumer decisions, producer decisions, theory of the firm, human capital and labour contracts, public decisions.
• 3. Labour Management, Contracts and Capital Markets, A General Equilibrium Approach, Oxford, 1989.
• An extended version of the 1983 Yrjö Jahnsson Lectures, dealing with the pure theory of labour-managed, then stock-market economies; stock-market economics with labour contracts; labour management versus labour contracts under incomplete capital markets; and some macroeconomic aspects.
• 4. Europe's Unemployment Problem (Ed.), MIT Press, Cambridge (Mass.), 1990. (With C. Bean, J.P. Lambert, F. Mehta and H. Sneessens, Eds)
• Papers prepared under the European Unemployment Program, a 10-country research initiative supervised by Richard Layard and Drèze in 1986–88. The country papers adopted a common econometric framework inspired by work on Belgium by Drèze and Henri Sneessens (see article [71]). Includes a 65-page synthesis by Charles Bean and Drèze.
• 5. Underemployment Equilibria: Essays in Theory, Econometrics and Policy, Cambridge University Press, Cambridge, 1991.
• Eighteen reprinted papers, organised under 8 headings: overview, equilibria with price rigidities, efficiency of constrained equilibria, public goods and the public sector, price adjustments, wage policies, econometrics, and policy.
• 6. Money and Uncertainty: Inflation, Interest, Indexation, Edizioni Dell' Elefante, Roma, 1992.
• An extended version of the 1992 Paolo Baffi Lecture at Banca d'Italia, dealing successively with a positive theory of positive inflation, with interest rates policies and with wage indexation.
• 7. Pour l'emploi, la croissance et l'Europe, De Boeck Université, 1995.
• Ten papers (some initially written in French, some translated from English) dealing successively with growth and employment, technical progress and low-skilled employment, European macroeconomic policies, work sharing, Europe's capital city and a status for regions within a Europe of nations. Most papers are based on lectures addressed to non-specialist audiences.
Selected articles by Jacques Drèze
These enumerated citations and comments come from the curriculum vitae of Jacques Drèze (2009-03-06):
• 7. "Quelques réflexions sereines sur l'adaptation de l'industrie belge au Marché Commun", Comptes Rendus de la Société d'Economie Politique de Belgique, Bruxelles, 275, 3–37, 1960; translated as "The Standard Goods Hypothesis" in The European Internal Market: Trade and Competition, Eds. A. Jacquemin and A. Sapir, 13–32, Oxford University Press, Oxford, 1989.
• Product differentiation and economies of scale as a new source of comparative advantage—a pillar of the extensive theory of "intra-industry trade", and of some more recent developments following the paper by Krugman in AER 1980.
• 8. "Les fondements logiques de l'utilité cardinale et de la probabilité subjective", in La Decision, Colloques Internationaux du CNRS, Paris, 73–97, 1961.
• Extension of individual decision theory to moral hazard and state-dependent preferences, based on unpublished PhD Thesis, revised and more systematically presented in [76].
• 12. "L'utilité sociale d'une vie humaine", Revue Francaise de Recherche Opérationnelle, 23, 93–118, 1962.
• Introduces the "willingness to pay" approach to the demand for safety.
• 13. "Some Postwar Contributions of French Economists to Theory and Public Policy", American Economic Review, 54, 2, 1–64, 1964.
• An extensive review (with some extensions) of the work of the French marginalist school (Allais, Boiteux, Massé, ...), with additional sections on intertemporal allocation (Allais, Malinvaud, ...) and French planning.
• 21. "Market Allocation under Uncertainty", European Economic Review, 2, 2, 133–165, 1971.
• Interpretation of the Arrow-Debreu contingent-markets model. An early statement and demonstration of the martingale property of prices for contingent claims.
• 23. "A Tâtonnement Process for Public Goods", Review of Economic Studies, 38, 2, 133–150, 1971. (With D. de la Vallée Poussin.)
• Introduces the well-known MDP process for public goods, demonstrates convergence and provides an early analysis of incentive compatibility.
• 25. "Discount Rates for Public Investments in Closed and Open Economies", Economica, 38, 152, 395–412, 1971; reprinted in Cost-Benefit Analysis, A.C. Harberger and G.P. Jenkins Eds, Edward Elgar, 2002, and in Discounting and Environmental Policy, J. Scheraga, Ed., Ashgate, 2002. (With A. Sandmo.)
• Second-best analysis of the choice of a discount rate for public investment (previously confined to partial analysis). The social discount rate should be a weighted average of rates of return on specific investments, with weights reflecting marginal shares.
• 26. "Cores and Prices in an Exchange Economy with an Atomless Sector", Econometrica 40, 6, 1090–1108, 1972. (With J. Jaskold Gabszewicz, D. Schmeidler and K. Vind.)
• For an exchange economy with both an atomless sector and atoms, the paper gives alternative sufficient conditions for a core allocation to have a competitive restriction to the atomless sector.
• 27. "Econometrics and Decision Theory", Econometrica, 40, 1, 1–17, 1972.
• Presidential address to the Econometric Society; summarises Drèze's work on Bayesian Econometrics (see also [61]) and expounds complementarities between economic theory, decision theory, econometrics and mathematical programming.
• 28. "Consumption Decisions under Uncertainty", Journal of Economic Theory, 5, 3, 308–335, 1972. (With F. Modigliani.)
• Clearly distinguished time preferences from risk preferences and temporal versus timeless uncertainty, while expounding results on savings and portfolio choices under uncertainty.
• 33. "Investment under Private Ownership: Optimality, Equilibrium and Stability" in Allocation under Uncertainty: Equilibrium and Optimality, Macmillan, chap. 9, 1974.
• Develops the "incomplete markets" model of general equilibrium under uncertainty, with a single commodity per state, as an extension of the special model (fixed coefficients) introduced in the seminal paper by Diamond in AER 1967; proves basic results (most notably non-convexity, but also existence of stockholders' equilibria, their inefficiency, and stability of stock-market valuation of investments).
• 34. "Bayesian Theory of Identification in Simultaneous Equations Models" in Studies in Bayesian Econometrics and Statistics, Eds. S.E. Fienberg and A. Zellner, North-Holland, 1974.
• Based on an unpublished manuscript of 1962. Introduces the Bayesian concept of identification and applies it to SEM; together with [39, 41, 44], forms the core of the material summarised in [61] and outlined in [27].
• 36. "Existence of an Exchange Equilibrium under Price rigidities", International Economic Review, 16, 2, 301–320, 1975.
• Introduces an equilibrium concept for market economies operating under price rigidities (the so-called Drèze equilibrium) and a now widely used method of proving existence. Covers both real and nominal rigidities defined by upper and/or lower bounds on individual prices.
• 38. "Pricing, Spending and Gambling Rules for Non-Profit Organisations" in Public and Urban Economics, Essays in Honor of William S. Vickrey, Ed. R.E. Grieson, Lexington Books, 59–89, 1976. (With M. Marchand.)
• Second-best theory applied to non-profit organisations, including Ramsey-Boiteux pricing, criteria for capital accumulation or consumption and guidelines for risk-taking.
• 39. "Bayesian Limited Information Analysis of the Simultaneous Equations Model", Econometrica 44, 5, 1045–1075, 1976.
• The fundamental paper on Bayesian methods for SEM, including the use of ratio-form poly-t densities.
• 40. "Some Theory of Labour Management and Participation", Econometrica, 44, 6, 1125–1139, 1976.
• Walras lecture to the 1975 World Congress of the Econometric Society. Preview of book [3]. Includes the first general-equilibrium analysis of labour management. Under labour-mobility across firms, labour-management equilibria replicate competitive equilibria.
• 41. "Bayesian Full Information Analysis of Simultaneous Equations", Journal of the American Statistical Association 71, 356, 919–923, 1976. (With J.-A. Morales.)
• Extension of [39] from limited to full information: a broader class of prior densities and a more informative analysis, at greater computational cost.
Vision and projects
• "From uncertainty to macroeconomics and back: an interview with Jacques Drèze", Pierre Dehez and Omar Licandro. Macroeconomic Dynamics, 9, 2005, 429–461.
• Jacques H. Drèze. 1972. "Econometrics and decision theory [Presidential address to the Econometric Society]" Econometrica, 40(1): 1–18. [J. H. Drèze 1987. Essays on Economic Decisions Under Uncertainty. Cambridge UP]:
• Jacques H. Drèze. 1987. "Underemployment Equilibria: From Theory to Econometrics and Policy" [First Congress of the European Economic Association, Presidential Address] European Economic Review, 31: 9–34. In Drèze 1993
• Gérard Debreu. 1991. "Address in honor of Jacques Drèze". Pages 3–6 in W. A. Barnett, B. Cornet, C. D'Aspremont, J. Gabszewicz, A. Mas-Colell, eds. Equilibrium Theory and Applications. Cambridge U. P.
Unemployment
• Jacques H. Drèze, Charles R. Bean, JP Lambert. 1990. Europe's Unemployment Problem. MIT Press. This book has chapter-versions of the following refereed articles:
• Henri R. Sneessens and Jacques H. Drèze. 1986. "A Discussion of Belgian unemployment, combining traditional concepts and disequilibrium econometrics." Economica 53: S89—S119. [Supplement: Charles Bean, Richard Layard, and Stephen Nickell, eds. 1986. The Rise in Unemployment. Blackwell]
• Jacques H. Drèze and Charles Bean. 1990. "European unemployment: Lessons from a multicountry econometric study." Scandinavian Journal of Economics Vol 92, No. 2: 135–165 [Bertil Holmlund and Karl-Gustaf Löfgren, eds. Unemployment and Wage Determination in Europe. Blackwell. 3–33. In Drèze 1993.]
• Jacques H. Drèze. 1993. Underemployment Equilibria: Essays in Theory, Econometrics, and Policy. Cambridge UP. This collection contains the following essay:
• Jacques H. Drèze; Torsten Persson; Marcus Miller. "Work-Sharing: Some Theory and Recent European Experience". Economic Policy, Vol. 1, No. 3 (Oct. 1986), pp. 561–619.
Dissertations of PhD students
• Sneessens, Henri B. 1981. Theory and Estimation of Macroeconomic Rationing Models. Springer-Verlag Lecture Notes in Economics and Mathematical Systems, Volume 191.
• Lambert, Jean-Paul. 1988. Disequilibrium Macroeconomic Models: Theory and Estimation of Rationing Models Using Business Survey Data. Cambridge UP.
Economic policy, especially for Europe
• Drèze, Jacques H.; Malinvaud, Edmond. 1994. 'Growth and employment: The scope for a European initiative', European Economic Review 38, 3–4: 489–504.
• Drèze, Jacques; Malinvaud, E.; De Grauwe, P.; Gevers, L.; Italianer, A.; Lefebvre, O.; Marchand, M.; Sneessens, H.; Steinherr, A.; Champsaur, Paul; Charpin, J.-M.; Fitoussi, J.-P.; Laroque, G. (1994). "Growth and employment: the scope for a European initiative". European Economy, Reports and Studies. 1: 75–106.
• Drèze, Jacques H.; Henri Sneessens (1996). 'Technical development, competition from low-wage economies and low-skilled unemployment', Swedish Economic Policy Review. 185–214.
• Drèze, Jacques H. (2000). "Economic and social security in the twenty-first century, with attention to Europe". Scandinavian Journal of Economics. 102 (3): 327–348. CiteSeerX 10.1.1.21.7509. doi:10.1111/1467-9442.00204.
Theory of the firm, especially labor in the firm
• Jacques H. Drèze (1989). Labour Management, Contracts, and Capital Markets: A General Equilibrium Approach. B. Blackwell. ISBN 978-0-631-13784-9.
• Dreze, Jacques H. (1976). "Some Theory of Labor Management and Participation". Econometrica. 44 (6): 1125–1139. doi:10.2307/1914250. ISSN 0012-9682. JSTOR 1914250.
• Dreze, Jacques H. (1985). "(Uncertainty and) The Firm in General Equilibrium Theory" (PDF). The Economic Journal. 95 (Supplement: Conference Papers (1985)): 1–20. doi:10.2307/2232866. JSTOR 2232866. Archived from the original (PDF) on 6 October 2017. Retrieved 6 October 2017.
Public economics
• Dreze, Jacques (1997). "Research and Development in Public Economics: William Vickrey's Inventive Quest of Efficiency". Scandinavian Journal of Economics. 99 (2): 179–198. doi:10.1111/1467-9442.00057. ISSN 0347-0520.
• Dreze, Jacques H. (Spring 1995). "Forty years of public economics: A personal perspective". The Journal of Economic Perspectives. 9 (2): 111–130. doi:10.1257/jep.9.2.111. JSTOR 2138169.
• Dreze, H.; de la Vallee Poussin, D. (April 1971). "A Tâtonnement Process for Public Goods". The Review of Economic Studies. 38 (2): 133–150. doi:10.2307/2296777. JSTOR 2296777.
Planning and regional economics
• Jacques Drèze; Paul De Grauwe; Jeremy Edwards. "Regions of Europe: A Feasible Status, to Be Discussed". Economic Policy, Vol. 8, No. 17 (Oct. 1993), pp. 265–307
• Abraham Charnes; Jacques Drèze; Merton Miller. "Decision and Horizon Rules for Stochastic Planning Problems: A Linear Example". Econometrica, Vol. 34, No. 2. (Apr. 1966), pp. 307–330.
Statistics and Bayesian econometrics: Simultaneous equations and the Louvain School
• JHD. "Bayesian Limited Information Analysis of the Simultaneous Equations Model". Econometrica, Vol. 44, No. 5 (Sep. 1976), pp. 1045–1075.
• JHD and Juan-Antonio Morales. "Bayesian Full Information Analysis of Simultaneous Equations". Journal of the American Statistical Association. Vol. 71, No. 356 (Dec. 1976), pp. 919–923.
• JHD and Jean-François Richard. 1983. "Bayesian Analysis of Simultaneous Equation Systems". Chapter 9, pages 517–598, in Handbook of Econometrics, Volume I, edited by Zvi Griliches and Michael D. Intriligator. (Book 2 of Handbooks in Economics, edited by Kenneth J. Arrow and Michael D. Intriligator) North-Holland.
Colleagues
• Luc Bauwens, Michel Lubrano, Jean-François Richard. 1999. Bayesian Inference in Dynamic Econometric Models. Oxford University Press. (JHD wrote the "Foreword", pages v–vi)
• Jean Pierre Florens, Michel Mouchart, Jean-Marie Rolin. 1990. Elements of Bayesian Statistics. Pure and Applied Mathematics, Volume 134. Marcel Dekker.
CORE
• Bernard Cornet and Henry Tulkens, eds. Contributions to Operations Research and Economics. The twentieth anniversary of CORE. Papers from the symposium held in Louvain-la-Neuve, January 1987. Edited by. MIT Press, Cambridge, MA, 1989. xii+561 pp. ISBN 0-262-03149-3
References
• Arrow, K. J. (1964). "Le rôle des valeurs boursières pour la répartition la meilleure des risques", in Econométrie, CNRS, Paris, 41–47 (translated as "The role of securities in the optimal allocation of risk-bearing". Review of Economic Studies. 31 (2): 91–96. doi:10.2307/2296188. JSTOR 2296188. S2CID 154606108.
• Barro, R. J.; Grossman, H.I. (1971). "A general disequilibrium model of income and employment". American Economic Review. 61: 82–93.
• Benassy, J. P. (1975). "Neo-Keynesian disequilibrium theory in a monetary economy" (PDF). Review of Economic Studies. 42 (4): 502–23. doi:10.2307/2296791. JSTOR 2296791.
• Boiteux, M (1951). "La tarification au coût marginal et les demandes aléatoires". Cahiers du Séminaire d'Économétrie. 1 (1): 56–69. doi:10.2307/20075348. JSTOR 20075348.
• Dehez, Pierre (August 2006), "About Jacques H. Drèze".
• Diamond, Peter A. (1967). "The role of a stock market in a general equilibrium model with technological uncertainty". American Economic Review. 42: 759–76.
• Grandmont, J. M. (1977). "Temporary general equilibrium theory". Econometrica. 45 (3): 535–72. doi:10.2307/1911674. JSTOR 1911674.
• Herings, J. J. (1996), Static and Dynamic Aspects of General Disequilibrium Theory, Kluwer.
• Magill, Michael and Martine Quinzii (1996), Theory of Incomplete Markets, MIT Press.
• Malinvaud, E. (1977), The Theory of Unemployment Reconsidered, Basil Blackwell.
• Radner, R. (1967). "Equilibre des marchés à terme et au comptant en cas d'incertitude". Cahiers du Séminaire d'Econométrie. 17: 35–52.
• Radner, R. (1972). "Existence of Equilibrium of Plans, Prices and Price Expectations in a Sequence of Markets". Econometrica. 40 (2): 289–304. doi:10.2307/1909407. JSTOR 1909407. S2CID 15497940.
• Roberts, J. (1987). "An equilibrium model with involuntary unemployment at flexible, competitive prices and wages". American Economic Review. 77: 856–74.
• Rosen, S (1985). "Implicit contracts: a survey". Journal of Economic Literature. 23: 1144–75.
• Zellner, A. (1971), An Introduction to Bayesian Inference in Econometrics, Wiley.
Multilevel Monte Carlo method
Multilevel Monte Carlo (MLMC) methods in numerical analysis are algorithms for computing expectations that arise in stochastic simulations. As with standard Monte Carlo methods, they rely on repeated random sampling, but the samples are taken at different levels of accuracy. MLMC methods can greatly reduce the computational cost of standard Monte Carlo methods by taking most samples with low accuracy and correspondingly low cost, and only very few samples with high accuracy and correspondingly high cost.
Goal
The goal of a multilevel Monte Carlo method is to approximate the expected value $\operatorname {E} [G]$ of the random variable $G$ that is the output of a stochastic simulation. Suppose this random variable cannot be simulated exactly, but there is a sequence of approximations $G_{0},G_{1},\ldots ,G_{L}$ with increasing accuracy, but also increasing cost, that converges to $G$ as $L\rightarrow \infty $. The basis of the multilevel method is the telescoping sum identity,[1]
$\operatorname {E} [G_{L}]=\operatorname {E} [G_{0}]+\sum _{\ell =1}^{L}\operatorname {E} [G_{\ell }-G_{\ell -1}],$
that is trivially satisfied because of the linearity of the expectation operator. Each of the expectations $\operatorname {E} [G_{\ell }-G_{\ell -1}]$ is then approximated by a Monte Carlo method, resulting in the multilevel Monte Carlo method. Note that taking a sample of the difference $G_{\ell }-G_{\ell -1}$ at level $\ell $ requires a simulation of both $G_{\ell }$ and $G_{\ell -1}$.
The MLMC method works if the variances $\operatorname {V} [G_{\ell }-G_{\ell -1}]\rightarrow 0$ as $\ell \rightarrow \infty $, which will be the case if both $G_{\ell }$ and $G_{\ell -1}$ approximate the same random variable $G$. By the Central Limit Theorem, this implies that one needs fewer and fewer samples to accurately approximate the expectation of the difference $G_{\ell }-G_{\ell -1}$ as $\ell \rightarrow \infty $. Hence, most samples will be taken on level $0$, where samples are cheap, and only very few samples will be required at the finest level $L$. In this sense, MLMC can be considered as a recursive control variate strategy.
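The coupling that makes the variance of the corrections decay can be illustrated with a small sketch. The geometric Brownian motion, its parameters and the call payoff below are illustrative assumptions, not from the source; the point is that consecutive levels share the same Brownian increments, so the sample variance of $G_{\ell }-G_{\ell -1}$ shrinks as $\ell $ grows:

```python
import math
import random

def coupled_payoffs(level, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """One coupled sample (G_l, G_{l-1}) of a European call payoff on a
    geometric Brownian motion, discretised by Euler-Maruyama.

    The fine path uses 2**level steps; the coarse path reuses the same
    Brownian increments summed in pairs, so both approximate the same G."""
    n = 2 ** level
    h = T / n
    fine = coarse = x0
    dw_pair = 0.0
    for i in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        fine += mu * fine * h + sigma * fine * dw
        dw_pair += dw
        if i % 2 == 1:  # two fine steps correspond to one coarse step
            coarse += mu * coarse * 2 * h + sigma * coarse * dw_pair
            dw_pair = 0.0
    payoff = lambda x: max(x - 1.0, 0.0)  # call option with strike 1
    return payoff(fine), payoff(coarse)

rng = random.Random(0)
variances = {}
for level in (2, 4, 6):
    diffs = [a - b for a, b in (coupled_payoffs(level, rng) for _ in range(20000))]
    m = sum(diffs) / len(diffs)
    variances[level] = sum((d - m) ** 2 for d in diffs) / (len(diffs) - 1)
    print(level, variances[level])
```

With these choices the sample variance drops by roughly a factor of four per two levels, consistent with $\operatorname {V} [G_{\ell }-G_{\ell -1}]=O(h_{\ell })$ for an Euler–Maruyama discretisation with a Lipschitz payoff.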
Applications
The first application of MLMC is attributed to Mike Giles,[2] in the context of stochastic differential equations (SDEs) for option pricing; however, earlier traces are found in the work of Heinrich in the context of parametric integration.[3] Here, the random variable $G=f(X(T))$ is known as the payoff function, and the sequence of approximations $G_{\ell }$, $\ell =0,\ldots ,L$, uses an approximation to the sample path $X(t)$ with time step $h_{\ell }=2^{-\ell }T$.
The application of MLMC to problems in uncertainty quantification (UQ) is an active area of research.[4][5] An important prototypical example of these problems are partial differential equations (PDEs) with random coefficients. In this context, the random variable $G$ is known as the quantity of interest, and the sequence of approximations corresponds to a discretization of the PDE with different mesh sizes.
An algorithm for MLMC simulation
A simple level-adaptive algorithm for MLMC simulation is given below in pseudo-code.
$L\gets 0$
repeat
Take warm-up samples at level $L$
Compute the sample variance on all levels $\ell =0,\ldots ,L$
Define the optimal number of samples $N_{\ell }$ on all levels $\ell =0,\ldots ,L$
Take additional samples on each level $\ell $ according to $N_{\ell }$
if $L\geq 2$ then
Test for convergence
end
if not converged then
$L\gets L+1$
end
until converged
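The loop above can be rendered as a short, self-contained Python sketch. The test problem (estimating $\operatorname {E} [X(1)]$ for a geometric Brownian motion with drift 0.05 and volatility 0.2, whose exact mean is $e^{0.05}$), the warm-up size and the crude convergence test are illustrative assumptions, not prescribed by the source:

```python
import math
import random

def sampler(level, rng):
    """One coupled sample of the correction G_l - G_{l-1}, where G_l is an
    Euler-Maruyama estimate of X(1) for dX = 0.05 X dt + 0.2 X dW, X(0) = 1,
    using 2**level time steps (at level 0 the sample is G_0 itself)."""
    n = 2 ** level
    h = 1.0 / n
    fine = coarse = 1.0
    dw_pair = 0.0
    for i in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        fine += 0.05 * fine * h + 0.2 * fine * dw
        dw_pair += dw
        if level > 0 and i % 2 == 1:  # coarse path reuses the same increments
            coarse += 0.05 * coarse * 2 * h + 0.2 * coarse * dw_pair
            dw_pair = 0.0
    return fine - coarse if level > 0 else fine

def take(stats, level, n, rng):
    """Draw n samples at the given level and accumulate running sums."""
    for _ in range(n):
        d = sampler(level, rng)
        stats[0] += 1
        stats[1] += d
        stats[2] += d * d

def variance(stats):
    return max(stats[2] / stats[0] - (stats[1] / stats[0]) ** 2, 1e-12)

def mlmc(eps, rng, n_warmup=200):
    stats = []  # per level: [N, sum, sum of squares]
    L = 0
    while True:
        stats.append([0, 0.0, 0.0])
        take(stats[L], L, n_warmup, rng)
        # optimal sample sizes: N_l proportional to sqrt(V_l / C_l)
        Vs = [variance(s) for s in stats]
        Cs = [2.0 ** l for l in range(L + 1)]
        K = (2.0 / eps ** 2) * sum(math.sqrt(v * c) for v, c in zip(Vs, Cs))
        for l, s in enumerate(stats):
            extra = int(math.ceil(K * math.sqrt(Vs[l] / Cs[l]))) - s[0]
            if extra > 0:
                take(s, l, extra, rng)
        # crude convergence test: the last correction term estimates the bias
        if L >= 2 and abs(stats[L][1] / stats[L][0]) < eps / math.sqrt(2.0):
            break
        L += 1
    # telescoping sum of the per-level means
    return sum(s[1] / s[0] for s in stats)

est = mlmc(0.01, random.Random(1))
print(est)  # close to exp(0.05), up to the tolerance eps
```

The sample-size rule $N_{\ell }\propto {\sqrt {V_{\ell }/C_{\ell }}}$ is the standard choice that minimises total cost for a given target variance; the convergence test here simply checks that the last correction term is small.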
Extensions of MLMC
Recent extensions of the multilevel Monte Carlo method include multi-index Monte Carlo,[6] where more than one direction of refinement is considered, and the combination of MLMC with the Quasi-Monte Carlo method.[7][8]
See also
• Monte Carlo method
• Monte Carlo methods in finance
• Quasi-Monte Carlo methods in finance
• Uncertainty quantification
• Partial differential equations with random coefficients
References
1. Giles, M. B. (2015). "Multilevel Monte Carlo Methods". Acta Numerica. 24: 259–328. arXiv:1304.5472. doi:10.1017/s096249291500001x. S2CID 13805654.
2. Giles, M. B. (2008). "Multilevel Monte Carlo Path Simulation". Operations Research. 56 (3): 607–617. CiteSeerX 10.1.1.121.713. doi:10.1287/opre.1070.0496. S2CID 3000492.
3. Heinrich, S. (2001). "Multilevel Monte Carlo Methods". Large-Scale Scientific Computing. Lecture Notes in Computer Science. Vol. 2179. Springer. pp. 58–67. doi:10.1007/3-540-45346-6_5. ISBN 978-3-540-43043-8.
4. Cliffe, A.; Giles, M. B.; Scheichl, R.; Teckentrup, A. (2011). "Multilevel Monte Carlo Methods and Applications to Elliptic PDEs with Random Coefficients" (PDF). Computing and Visualization in Science. 14 (1): 3–15. doi:10.1007/s00791-011-0160-x. S2CID 1687254.
5. Pisaroni, M.; Nobile, F. B.; Leyland, P. (2017). "A Continuation Multi Level Monte Carlo Method for Uncertainty Quantification in Compressible Inviscid Aerodynamics" (PDF). Computer Methods in Applied Mechanics and Engineering. 326 (C): 20–50. doi:10.1016/j.cma.2017.07.030. S2CID 10379943. Archived from the original (PDF) on 2018-02-14.
6. Haji-Ali, A. L.; Nobile, F.; Tempone, R. (2016). "Multi-Index Monte Carlo: When Sparsity Meets Sampling". Numerische Mathematik. 132 (4): 767–806. arXiv:1405.3757. doi:10.1007/s00211-015-0734-5. S2CID 253742676.
7. Giles, M. B.; Waterhouse, B. (2009). "Multilevel Quasi-Monte Carlo Path Simulation" (PDF). Advanced Financial Modelling, Radon Series on Computational and Applied Mathematics. De Gruyter: 165–181.
8. Robbe, P.; Nuyens, D.; Vandewalle, S. (2017). "A Multi-Index Quasi-Monte Carlo Algorithm for Lognormal Diffusion Problems". SIAM Journal on Scientific Computing. 39 (5): A1811–C392. arXiv:1608.03157. Bibcode:2017SJSC...39S.851R. doi:10.1137/16M1082561. S2CID 42818387.
Varadhan's lemma
In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small in terms of a rate function for the variables.
Statement of the lemma
Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let ϕ : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition
$\lim _{M\to \infty }\limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}\,\mathbf {1} {\big (}\phi (Z_{\varepsilon })\geq M{\big )}{\big ]}{\big )}=-\infty ,$
where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition
$\limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\gamma \phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}{\big )}<\infty .$
Then
$\lim _{\varepsilon \to 0}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}=\sup _{x\in X}{\big (}\phi (x)-I(x){\big )}.$
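As a concrete, entirely illustrative sanity check (our own toy example, not part of the lemma): take Z_ε ~ N(0, ε), which satisfies the large deviation principle with good rate function I(x) = x²/2, and φ(x) = x. Both sides of the conclusion then equal sup_x (x − x²/2) = 1/2, which a short Python quadrature confirms:

```python
import numpy as np

def varadhan_lhs(eps, phi=lambda x: x):
    """eps * log E[exp(phi(Z_eps)/eps)] for Z_eps ~ N(0, eps), by quadrature."""
    x = np.linspace(-4.0, 6.0, 200001)
    expo = phi(x)/eps - x**2/(2.0*eps)       # log of exp(phi/eps) * Gaussian kernel
    m = expo.max()                           # factor out the max to avoid overflow
    trapz = getattr(np, "trapezoid", np.trapz)
    integral = trapz(np.exp(expo - m), x) / np.sqrt(2.0*np.pi*eps)
    return eps * (m + np.log(integral))

def varadhan_rhs(phi=lambda x: x, I=lambda x: x**2/2.0):
    """sup_x (phi(x) - I(x)), computed on the same grid."""
    x = np.linspace(-4.0, 6.0, 200001)
    return np.max(phi(x) - I(x))
```

Here the left-hand side is in fact equal to 1/2 for every ε (the Gaussian integral is exact), so the limit is visible already at moderate ε.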
See also
• Laplace principle (large deviations theory)
References
• Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See theorem 4.3.1)
\begin{document}
\title{A new family of links topologically, but not smoothly,\\concordant to the Hopf link}
\author{Christopher W. Davis} \address{Department of Mathematics, The University of Wisconsin at Eau Claire} \email{[email protected]} \urladdr{http://people.uwec.edu/daviscw/}
\author{Arunima Ray} \address{Department of Mathematics, Brandeis University} \email{[email protected]} \urladdr{http://people.brandeis.edu/~aruray/}
\date{\today} \subjclass[2010]{57M25} \keywords{Hopf link, link concordance}
\begin{abstract} We give new examples of 2--component links with linking number one and unknotted components that are topologically concordant to the positive Hopf link, but not smoothly so -- in addition they are not smoothly concordant to the positive Hopf link with a knot tied in the first component. Such examples were previously constructed by Cha--Kim--Ruberman--Strle; we show that our examples are distinct in smooth concordance from theirs. \end{abstract}
\maketitle
\section{Introduction}
The study of smooth and topological knot concordance can be considered to be a model for the significant differences between the smooth and topological categories in four dimensions. For instance, mirroring the fact that there exist 4--manifolds that are homeomorphic but not diffeomorphic, there exist knots that are topologically slice but not smoothly slice, i.e.\ knots that are topologically concordant to the unknot, but not smoothly so (see, for example, \cite{ChaPow14a, CHHo13, CHo15, End95, Gom86, HeK12, HeLivRub12, Hom14}). Similarly, one might ask whether there are links that are topologically concordant to the Hopf link, but not smoothly so. Infinitely many examples of such links were constructed by Cha--Kim--Ruberman--Strle in \cite{ChaKimRubStr12}. We construct another infinite family that we show to be distinct from the known examples in smooth concordance.
In the following, all links will be considered to be ordered and oriented. Two links will be said to be concordant (resp.\ topologically concordant) if their (ordered, oriented) components cobound smooth (resp.\ topologically locally flat) properly embedded annuli in $S^3 \times [0,1]$. From now on, when we say the Hopf link, we refer to the positive Hopf link, i.e.\ the components are oriented so that the linking number is one.
\begin{figure}
\caption{The (untwisted) satellite operation on knots. The boxes containing `$3$' indicate that all the strands passing vertically through the box should be given three full positive twists, to account for the writhe in the given diagram of $K$.}
\label{fig:satellite}
\end{figure}
Any 2--component link with
second component unknotted corresponds to a knot inside a solid torus, i.e.\ a \textit{pattern}, by carving out a regular neighborhood of the second component in $S^3$. Any pattern $P$ induces a function on the knot concordance group $\mathcal{C}$ via the usual satellite construction, called a \textit{satellite operator}, given by \begin{align*} P:\mathcal{C}&\rightarrow\mathcal{C}\\ K&\mapsto P(K) \end{align*} where $P(K)$ is the satellite knot with companion $K$ and pattern $P$; see Fig.~\ref{fig:satellite} and~\cite[p.\ 111]{Ro90} for more details.
We will occasionally work in a slight generalization of usual concordance, which we now describe. We say that two knots are \textit{exotically concordant} if they cobound a smooth, properly embedded annulus in a smooth 4--manifold \textit{homeomorphic} to $S^3\times [0,1]$ (but not necessarily \textit{diffeomorphic}). The set of knots $\mathcal{K}$ modulo exotic concordance forms an abelian group called the \textit{exotic knot concordance group}, denoted by $\mathcal{C}^\text{ex}$. If the 4--dimensional smooth Poincar\'e Conjecture is true, then $\mathcal{C}=\mathcal{C}^\text{ex}$ \cite{CDR14}. Any pattern $P$ induces a well-defined satellite operator $P:\mathcal{C}^\text{ex}\rightarrow \mathcal{C}^\text{ex}$ mapping $K\mapsto P(K)$.
Studying satellite operators can yield information about link concordance using the following proposition.
\begin{proposition}[Proposition 2.3 of \cite{CDR14}]\label{prop:diff_functions}If the 2--component links $L_0$ and $L_1$ are concordant (or even exotically concordant), then the corresponding patterns $P_0$ and $P_1$ induce the same satellite operator on $\mathcal{C}^\text{ex}$, i.e.\ for any knot $K$, if the links $L_0$ and $L_1$ are smoothly concordant, then the knots $P_0(K)$ and $P_1(K)$ are exotically concordant. If $L_0$ and $L_1$ are topologically concordant, then $P_0(K)$ and $P_1(K)$ are topologically concordant. \end{proposition}
In the above formulation, the Hopf link corresponds to the identity function on $\mathcal{C}^\text{ex}$, and therefore, a 2--component link can be seen to be distinct from the Hopf link in smooth concordance if it induces a non-identity satellite operator on $\mathcal{C}^\text{ex}$. Similarly, the Hopf link with a knot $J$ tied into the first component corresponds to the connected-sum operator $C_J$ (where $C_J(K)=J\#K$ for all knots $K$), and therefore, a 2--component link can be seen to be distinct in smooth concordance from all links obtained from the Hopf link by tying a knot into the first component if the induced satellite operator is distinct from that of all connected-sum operators.
This strategy can be considered to be a generalization of the method of distinguishing links by Dehn filling one component of the link or `blowing down one component', which can be seen to be the same as performing a twisted satellite operation using the unknot as companion (see, for example, \cite{ChaKo99, ChaKimRubStr12}, and item (7) of the highly useful list provided in \cite[Section 1]{FriedlPow14}).
For a winding number one pattern $P$ inside $ST=S^1\times D^2$ the standard unknotted solid torus, we let $\eta(P)$ denote the meridian $\{1\}\times \partial D^2$, oriented so that $\text{lk}(P,\eta(P))=1$. Then it is easy to see that $P$ is the pattern corresponding to the link $(P,\eta(P))$. Moreover, patterns form a monoid (see~\cite[Section 2.1]{DR13}), and so we can consider the iterated patterns $P^i$, where $P^i(K)=P(P(\cdots(K)\cdots))$. $P^0$ is the core of an unknotted standard solid torus and we see that for any pattern $P$, the link $(P^0,\eta(P^0))$ is the Hopf link.
\begin{figure}
\caption{Whenever we draw a circle containing a `$\text{Wh}_3$', such as on the right, we mean the tangle shown on the left. The boxes containing `$-2$' indicate that all the strands passing vertically through the box should be given two full negative twists. As a result, the link in Fig.~\ref{fig:wh3link} is the link obtained by Whitehead doubling both components of the Whitehead link. }
\label{fig:wh3}
\end{figure}
Consider the link obtained by Whitehead doubling each component of the Whitehead link. By the symmetry of the link, we see that this is the same link as the one obtained by Whitehead doubling one component of the Hopf link three times (see Figs.~\ref{fig:wh3} and \ref{fig:wh3link}). For the rest of the paper, $L\equiv (Q,\eta)$ will refer to the link shown in Figure~\ref{fig:patternlink}, and $Q$ will denote the corresponding pattern or satellite operator.
\begin{figure}
\caption{$\text{Wh}_3$, the link obtained by Whitehead doubling both components of the Whitehead link, or alternatively Whitehead doubling one of the components of the Hopf link three times. }
\label{fig:wh3link}
\end{figure}
\begin{figure}
\caption{The link $L\equiv (Q,\eta)$.}
\label{fig:patternlink}
\end{figure}
\begin{theorem}\label{thm:main}The links $\{(Q^i,\eta(Q^i))\}$ are each topologically concordant to the Hopf link, but are distinct from the Hopf link (and one another) in smooth concordance. Moreover, they are distinct in smooth concordance from each link obtained from the Hopf link by tying a knot into the first component. For $i\geq 4$, they are distinct in smooth concordance from the Cha--Kim--Ruberman--Strle examples. \end{theorem}
\subsection*{Acknowledgments} While the authors were already in the initial stages of this project, the idea was independently suggested to the second author by an anonymous referee for~\cite{Ray15}.
\section{Proofs}
The results of this section comprise Theorem~\ref{thm:main}.
\begin{proposition}\label{prop:topconc1}The 2--component link $(Q,\eta(Q))$ shown in Fig.~\ref{fig:patternlink} is topologically concordant to the Hopf link. \end{proposition}
\begin{figure}
\caption{Proof of Proposition~\ref{prop:topconc1}}
\label{fig:proof1}
\end{figure}
\begin{proof} Let $L$ denote the link $(Q, \eta(Q))$.
By resolving a single crossing we get the link $(a, b, c)$ shown in Fig.~\ref{fig:proof1}. Thus, there is a cobordism from $(Q, \eta(Q))$ to $(a,b,c)$ consisting of an annulus $A$ bounded by $\eta(Q)\sqcup b$ and a pair of pants $P$ bounded by $Q\sqcup a \sqcup c$.
Freedman proved that the link depicted in Fig.~\ref{fig:wh3link}, sometimes referred to as $\text{Wh}_3$, is topologically slice in~\cite{Freed88}. We label the components of Figure~\ref{fig:proof1} as $a$, $b$, and $c$ as shown. Note that $b\sqcup c$ is $\text{Wh}_3$, the link shown in Fig.~\ref{fig:wh3link}, and $a\sqcup b$ is the Hopf link. Let $\Delta_b, \Delta_c\subseteq B^4$ be disjoint slice disks for $b$ and $c$. By removing a regular neighborhood of a point on $\Delta_b$, we see that in $S^3\times [0,1]$ the component $c$ bounds a topological slice disk $\Delta_c$ that is disjoint from a regular neighborhood of an annulus cobounded by $b$ in $S^3\times \{0\}$ and an unknot $U\subseteq S^3\times \{1\}$. Since $a$ is just a meridian of $b$, we can find an annulus entirely within this regular neighborhood, cobounded by $a\subset S^3\times \{0\}$ and a meridian of $U\subseteq S^3\times\{1\}$.
Gluing these to $A\sqcup P$ gives a concordance between $L$ and the Hopf link. \end{proof}
\begin{remark}\label{rem:davis} An alternative approach to Proposition~\ref{prop:topconc1} was suggested to the authors by Jim Davis, namely that if the multivariable Alexander polynomial of $L = (Q, \eta(Q))$ is one then $L$ is topologically concordant to the Hopf link by~\cite{Davis06}. We performed the computation using tools developed in ~\cite{Cooper79, CimFlo}, which we describe here. Given any link $L$ one can find a 2--complex $F$ called a \textit{C--complex} bounded by $L$ (see Figure~\ref{fig:C-complex}). Similar to the Seifert matrix one can generate a matrix by studying linking numbers between curves on $F$ and their pushoffs. In \cite[Corollary 3.4]{CimFlo} and \cite[Chapter 2, Corollary 2.2]{Cooper79} it is shown that this matrix gives a presentation for the Alexander module of $L$. With respect to the C--complex $F$ and basis for $H_1(F)$ in Figure~\ref{fig:C-complex} that matrix is $$ A = \left[\begin{array}{cccc} 0&1&0&0\\t_1&0&0&0\\0&0&0&1\\0&0&t_2&t_2-1\end{array}\right]. $$ Since $\det(A) = t_1t_2$, the Alexander module of $L$ is trivial, and so by~\cite{Davis06} $L$ is topologically concordant to the Hopf link.
\begin{figure}
\caption{A C--complex $F=F_1\cup F_2$ for $L = (Q,\eta(Q))$; the four basis curves for $H_1$ are shown. }
\label{fig:C-complex}
\end{figure}
This result, along with Theorem~\ref{thm:main}, shows that our links give another answer to the question posed by Jim Davis in~\cite[p.\ 266]{Davis06}, as did the examples of Cha--Kim--Ruberman--Strle. \end{remark}
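The determinant computation in Remark~\ref{rem:davis} is easy to spot-check. The following Python fragment (a numeric check at a few sample parameter values, not a symbolic proof) evaluates the matrix $A$ copied from the remark and confirms $\det(A) = t_1 t_2$:

```python
import numpy as np

def c_complex_matrix(t1, t2):
    # presentation matrix A for the Alexander module from the C-complex F
    return np.array([[0.0, 1.0, 0.0, 0.0],
                     [t1,  0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0],
                     [0.0, 0.0, t2,  t2 - 1.0]])

# det(A) should equal t1*t2 (a unit), checked at sample parameter values
for t1, t2 in [(2.0, 3.0), (0.5, -1.25), (7.0, 0.1)]:
    assert abs(np.linalg.det(c_complex_matrix(t1, t2)) - t1*t2) < 1e-9
```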
\begin{proposition}[See also Proposition 2.15 of \cite{DR13}]\label{prop:topconc2} Each link of the form $(Q^i, \eta(Q^i))$, $i\geq 1$, is topologically concordant to the Hopf link.\end{proposition}
\begin{proof}This is essentially the proof that satellite operators are well-defined on concordance classes of knots. Since $(Q,\eta(Q))$ is topologically concordant to the Hopf link, we have two disjoint annuli $A_1$ and $A_2$ in $S^3\times [0,1]$, such that $A_1\cap S^3\times \{0\}=Q$, $A_2\cap S^3\times \{0\}=\eta(Q)$, and $(A_1 \sqcup A_2)\cap S^3\times \{1\}$ is the Hopf link. Cut out a regular neighborhood of $A_1$, and replace it with $ST\times[0,1]$, where $ST=S^3 - N(\eta)$ is a standard unknotted solid torus containing the pattern knot $Q$. The resulting manifold can be seen to be homeomorphic to $S^3\times [0,1]$. We obtain the link $(Q^2,\eta(Q^2))\subseteq S^3\times \{0\}$, the link $(Q,\eta(Q))\subseteq S^3\times \{1\}$, and a topological concordance between them in this new $S^3\times [0,1]$ given by $(Q\times [0,1])\sqcup A_2$.
By iterating this process, we see that for each $i\geq 1$, the link $(Q^{i+1},\eta(Q^{i+1}))$ is topologically concordant to $(Q^i,\eta(Q^i))$. This completes the proof since, by Proposition \ref{prop:topconc1}, $(Q,\eta(Q))$ is topologically concordant to the Hopf link. \end{proof}
\begin{proposition}\label{prop:smoothlydistinct}The members of the family $\{(Q^i,\eta(Q^i))\mid i\geq 0\}$ are distinct from one another in smooth concordance. Moreover, for $i\geq 1$, they are each distinct in smooth concordance from any link obtained from the Hopf link by tying a knot in the first component. \end{proposition}
Recall that the link $(Q^0,\eta(Q^0))$ is the Hopf link, and therefore, the first statement above says that the links $(Q^i,\eta(Q^i))$ are distinct from the Hopf link in smooth concordance.
\begin{proof}[Proof of Proposition~\ref{prop:smoothlydistinct}] For the first statement, consider the following proposition.
\begin{proposition}[\cite{Ray15}]\label{prop:distinctiterates}If $P$ is a winding number one pattern such that $P(U)$ is unknotted, where $U$ is the unknot, and $P$ has a Legendrian diagram $\mathcal{P}$ with $\text{tb}(\mathcal{P}) > 0$ and $\text{tb}(\mathcal{P}) + \text{rot}(\mathcal{P}) \geq 2$, then the iterated patterns $P^i$ induce distinct functions on $\mathcal{C}^\text{ex}$, i.e.\ there exists a knot $K$ such that $P^i(K)$ is not exotically concordant to $P^j(K)$, for each pair of distinct $i,j\geq 0$. \end{proposition}
A Legendrian diagram $\mathcal{Q}$ for $Q$ with $\text{tb}(\mathcal{Q})=2$ and $\text{rot}(\mathcal{Q})=0$ is shown in Fig.~\ref{fig:legpattern}. It is clear that $Q(U)$ is unknotted. The first statement then follows from Proposition~\ref{prop:diff_functions}.
\begin{figure}
\caption{A Legendrian diagram $\mathcal{Q}$ for the satellite operator $Q$. Note that this depicts a knot in a solid torus. }
\label{fig:legpattern}
\end{figure}
If $(Q^i, \eta(Q^i))$ were concordant to a link obtained from the Hopf link by tying a knot $J$ into the first component, we know from Proposition~\ref{prop:diff_functions} that $Q^i(K)$ would be exotically concordant to $C_J(K)=J\#K$ for all knots $K$. By letting $K=U$ the unknot, since $Q^i(U)$ is unknotted, we see that $J$ must be exotically concordant to the unknot and as a result, $Q^i(K)$ is exotically concordant to $K$ for all knots $K$. But this contradicts Proposition~\ref{prop:distinctiterates} above, since $K=Q^0(K)$. \end{proof}
\begin{figure}
\caption{The examples $\ell_J$ of Cha--Kim--Ruberman--Strle.}
\label{fig:oldexamples}
\end{figure}
The links constructed by Cha--Kim--Ruberman--Strle in \cite{ChaKimRubStr12} are of the form $\ell_J$ shown in Fig.~\ref{fig:oldexamples}. The box containing the letter $J$ indicates that all strands passing through the box should be tied into 0--framed parallels of a knot $J$. Cha--Kim--Ruberman--Strle showed that $\ell_J$ is topologically concordant to the Hopf link for all knots $J$, and that if $J$ is a knot with $\tau(J)>0$, $\ell_J$ is distinct from the Hopf link in smooth concordance. They also showed that if $J(n)=T(2,2n+1)$, the $(2,2n+1)$ torus knot, each member of the family $\{\ell_{J(n)}\}$ is smoothly distinct from the Hopf link (but topologically concordant to the Hopf link).
\begin{figure}
\caption{The patterns associated with the links $\ell_J$.}
\label{fig:oldexamplespatterns}
\end{figure}
\begin{proposition}\label{prop:distinctfromCKRS} The links $\{(Q^i,\eta(Q^i))\mid i\geq 4\}$ are distinct in smooth concordance from the links $\ell_J$ constructed by Cha--Kim--Ruberman--Strle~\cite{ChaKimRubStr12}.\end{proposition}
\begin{proof}The Cha--Kim--Ruberman--Strle examples, as 2--component links with unknotted components, yield patterns as shown in Figure~\ref{fig:oldexamplespatterns}. Let $L_J$ denote the pattern knot obtained from the link $\ell_J$, and $L_J(K)$ denote the satellite knot obtained by applying $L_J$ to a knot $K$. From \cite[Theorem 1.2]{Rob12} we see that $$-n_+(L_J)-w \leq \tau(L_J(K)) - \tau(\widetilde{L_J}) - w\tau(K) \leq n_+(L_J) + w$$
where $w$ is the winding number of $L_J$, $\widetilde{L_J} = L_J(U)$ is the result of erasing the second component of $\ell_J$, and $n_+(L_J)$ and $n_-(L_J)$ are the least numbers of positive and negative intersections, respectively, between $L_J$ and the meridian of the solid torus containing it. We see that $n_+(L_J)=2$, $n_-(L_J)=1$ and $w=1$. Since $\widetilde{L_J}$ is unknotted, $\tau(\widetilde{L_J})=0$. Therefore, if we let $K=RHT$ the right-handed trefoil, $$-2\leq \tau(L_J(RHT)) \leq 4,$$ since $\tau(RHT)=1$. Note that this does not depend on the choice of $J$.
We will show that $\tau(Q^i(RHT)) >4$ for $i\geq 4$. By Proposition~\ref{prop:diff_functions}, this will complete the proof. Our main tool will be \cite[Theorem 1]{Plam04}, which states that if $\mathcal{K}$ is a Legendrian representative for a knot $K$, then $$\text{tb}(\mathcal{K})+\lvert\text{rot}(\mathcal{K})\rvert\leq 2\tau(K)-1.$$
We first build Legendrian representatives of the satellite knots $Q^i(RHT)$. We have a Legendrian diagram $\mathcal{Q}$ for the pattern $Q$ (Figure~\ref{fig:legpattern}). We stabilize twice to get another Legendrian diagram $\mathcal{Q}'$ for $Q$ with $\text{tb}(\mathcal{Q}')=0$ and $\text{rot}(\mathcal{Q}')=2$. We can perform the Legendrian satellite operation on this Legendrian diagram by itself to get Legendrian diagrams $\mathcal{Q}'^i$ for the iterated patterns $Q^i$ since $\text{tb}(\mathcal{Q}')=0$ (see \cite{Ng01} for background on the Legendrian satellite construction and \cite[Section 2.3]{Ray15} for details on this particular construction on patterns/satellite operators). By \cite[Lemma 2.4]{Ray15}, since $Q$ has winding number one, we see that $$\text{tb}(\mathcal{Q}'^i)=i\cdot \text{tb}(\mathcal{Q}')=0$$ and $$\text{rot}(\mathcal{Q}'^i)=i\cdot\text{rot}(\mathcal{Q}')=2i.$$
\begin{figure}
\caption{A Legendrian representative $\mathcal{K}$ for the right-handed trefoil.}
\label{fig:legtrefoil}
\end{figure}
Consider the Legendrian representative $\mathcal{K}$ for the right-handed trefoil given in Fig.~\ref{fig:legtrefoil}. Since $\text{tb}(\mathcal{K})=0$, we can perform the Legendrian satellite operation on $\mathcal{K}$ using the pattern $\mathcal{Q}'^i$ to get a Legendrian representative $\mathcal{Q}'^i(\mathcal{K})$ for the untwisted satellite $Q^i(RHT)$, and by \cite[Remark 2.4]{Ng01} we see that, since the winding number of $Q^i$ is one, $$\text{tb}(\mathcal{Q}'^i(\mathcal{K}))=\text{tb}(\mathcal{Q}'^i)+\text{tb}(\mathcal{K})=0$$ and $$\text{rot}(\mathcal{Q}'^i(\mathcal{K}))=\text{rot}(\mathcal{Q}'^i)+\text{rot}(\mathcal{K})=2i+1.$$
Then using \cite[Theorem 1]{Plam04}, we see that $$\text{tb}(\mathcal{Q}'^i(\mathcal{K}))+\lvert\text{rot}(\mathcal{Q}'^i(\mathcal{K}))\rvert\leq 2\tau(Q^i(RHT))-1,$$ that is, $$i+1\leq \tau(Q^i(RHT)).$$ Therefore, if $i\geq 4$, $\tau(Q^i(RHT))>4$ as needed. \end{proof}
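The arithmetic at the end of the proof is simple enough to package in a few lines of Python (a sketch; the $\text{tb}$ and $\text{rot}$ inputs are the values computed above for $\mathcal{Q}'$ and $\mathcal{K}$, and the bound is the slice--Bennequin inequality of \cite{Plam04}):

```python
def tau_lower_bound(i, tb_P=0, rot_P=2, tb_K=0, rot_K=1):
    """Slice-Bennequin lower bound (tb + |rot| + 1)/2 for tau(Q^i(RHT))."""
    tb = i*tb_P + tb_K            # tb is additive since tb(Q') = 0
    rot = i*rot_P + rot_K         # rot is additive for winding number one
    return (tb + abs(rot) + 1) // 2

# i + 1 <= tau(Q^i(RHT)); in particular the bound exceeds 4 once i >= 4
assert all(tau_lower_bound(i) == i + 1 for i in range(1, 10))
```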
\begin{remark} It is natural to ask whether our method would work for the links obtained by using $\text{Wh}_1$ or $\text{Wh}_2$ instead of $\text{Wh}_3$, since those have many fewer crossings; these links are shown in Figure~\ref{fig:wh1and2}. The pattern corresponding to the link obtained by using $\text{Wh}_1$ is called the \textit{Mazur pattern}, and has been widely studied, e.g.\ in~\cite{CFHeHo13, CDR14, Ray15, Lev14}. In~\cite{CFHeHo13} it was shown that the link using $\text{Wh}_1$ is not topologically concordant to the Hopf link. For the link using $\text{Wh}_2$, we can use a C--complex as in Remark~\ref{rem:davis} to compute the multivariable Alexander polynomial, which turns out to be $$-t_1^2t_2^2+2t_1^2t_2-t_1^2+2t_1t_2^2-3t_1t_2+2t_1-t_2^2+2t_2-1.$$ This can be used to show that this link is not topologically concordant to the Hopf link, using Kawauchi's result on the Alexander polynomials of concordant links in~\cite{Kaw78}.
\begin{figure}
\caption{The links obtained by using $\text{Wh}_1$ (left) and $\text{Wh}_2$ (right).}
\label{fig:wh1and2}
\end{figure}
\end{remark}
Using our methods, we can prove the following theorem.
\begin{theorem} Any 2--component link $(P,\eta)$ with linking number one, unknotted components, and Alexander polynomial one, where the corresponding pattern has a Legendrian diagram $\mathcal{P}$ with $\text{tb}(\mathcal{P})>0$ and $\text{tb}(\mathcal{P})+\text{rot}(\mathcal{P})\geq 2$, yields a family of links $(P^i,\eta(P^i))$ that are each topologically concordant to the Hopf link, but are smoothly distinct from one another and the Hopf link. \end{theorem}
For most values of $i$ (including $i\geq 4$, but possibly more values), the above links will be distinct in smooth concordance from the Cha--Kim--Ruberman--Strle examples, using the proof of Proposition~\ref{prop:distinctfromCKRS}. Using a more general version of Proposition~\ref{prop:distinctiterates} we can weaken our assumption that the first component of the link is unknotted, and instead require it to be slice and the pattern to be strong winding number one (see~\cite{Ray15}).
\end{document}
August 2007, 17(3): 671-689. doi: 10.3934/dcds.2007.17.671
On Ulam approximation of the isolated spectrum and eigenfunctions of hyperbolic maps
Gary Froyland 1,
School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia
Received February 2006 Revised July 2006 Published December 2006
Perron-Frobenius operators and their eigendecompositions are increasingly being used as tools of global analysis for higher dimensional systems. The numerical computation of large, isolated eigenvalues and their corresponding eigenfunctions can reveal important persistent structures such as almost-invariant sets; however, often little can be said rigorously about such calculations. We attempt to explain some of the numerically observed behaviour by constructing a hyperbolic map with a Perron-Frobenius operator whose eigendecomposition is representative of numerical calculations for hyperbolic systems. We explicitly construct an eigenfunction associated with an isolated eigenvalue and prove that a special form of Ulam's method well approximates the isolated spectrum and eigenfunctions of this map.
Keywords: almost-invariant set, Ulam's method, isolated spectrum, eigenfunction, hyperbolic map, Perron-Frobenius operator.
Mathematics Subject Classification: Primary: 37M25, 37C30; Secondary: 37D2.
Citation: Gary Froyland. On Ulam approximation of the isolated spectrum and eigenfunctions of hyperbolic maps. Discrete & Continuous Dynamical Systems, 2007, 17 (3) : 671-689. doi: 10.3934/dcds.2007.17.671
Discrepancy function
In structural equation modeling, a discrepancy function is a mathematical function which describes how closely a structural model conforms to observed data; it is a measure of goodness of fit. Larger values of the discrepancy function indicate a poor fit of the model to data. In general, the parameter estimates for a given model are chosen so as to make the discrepancy function for that model as small as possible. Analogous concepts in statistics are known as goodness of fit or statistical distance, and include deviance and divergence.
Examples
There are several basic types of discrepancy functions, including maximum likelihood (ML), generalized least squares (GLS), and ordinary least squares (OLS), which are considered the "classical" discrepancy functions.[1] Discrepancy functions all meet the following basic criteria:
• They are non-negative, i.e., always greater than or equal to zero.
• They are zero only if the fit is perfect, i.e., if the model and parameter estimates perfectly reproduce the observed data.
• The discrepancy function is a continuous function of the elements of S, the sample covariance matrix, and Σ(θ), the "reproduced" estimate of S obtained by using the parameter estimates and the structural model.
In order for "maximum likelihood" to meet the first criterion, it is used in a revised form as the deviance.
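To make the criteria concrete, here is a small numpy sketch of the classical maximum-likelihood discrepancy, F_ML(S, Σ) = ln det Σ + tr(SΣ⁻¹) − ln det S − p; the random test matrices are arbitrary illustrative data, not from any particular model:

```python
import numpy as np

def f_ml(S, Sigma):
    """Classical ML discrepancy between the sample covariance S and the
    model-implied covariance Sigma (both symmetric positive definite)."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
S = A @ A.T + 4*np.eye(4)             # an arbitrary SPD "sample" covariance

assert abs(f_ml(S, S)) < 1e-9          # zero exactly when the fit is perfect
assert f_ml(S, S + np.eye(4)) > 0      # positive for any misfitting model
```

The two assertions mirror the first two criteria above: the function is zero when the model reproduces S exactly, and strictly positive otherwise.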
See also
• Constructions of low-discrepancy sequences
• Discrepancy theory
• Low-discrepancy sequence
References
1. "Discrepancy Functions Used in SEM". Retrieved 2008-08-18.
\begin{document}
\newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example}
\newcommand{\Proof}{\noindent \textbf{Proof. }}
\newcommand{\qedbox}{{\vrule width 6pt height 6pt depth 0pt}}
\newcommand{\note}[1]{%
\noindent \framebox{\begin{minipage}[c]{\textwidth} {\tt #1} \end{minipage}}}
\newcommand{\stress}{\boldsymbol{\sigma}} \newcommand{\strain}{\boldsymbol{\epsilon}}
\newcommand{\argmin}{ \mathrm{argmin} \,} \newcommand{\argmax}{ \mathrm{argmax} \,}
\newcommand{\weakto}{\rightharpoonup} \newcommand{\weakstarto}{\stackrel{*}{\rightharpoonup}}
\newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}}
\newcommand{\coloneq}{\hspace{1pt}\raisebox{0.74pt}{\scalebox{0.8}{:}}\hspace{-2.2pt}=}
\newcommand{\FF}{\boldsymbol{F}} \newcommand{\EE}{\strain}
\newcommand{\F}{\mathcal{F}} \newcommand{\E}{\mathcal{E}} \newcommand{\D}{\mathcal{D}}
\newcommand{\U}{\mathcal{U}} \newcommand{\Z}{\mathcal{Z}}
\def\Xint#1{\mathchoice
{\XXint\displaystyle\textstyle{#1}}
{\XXint\textstyle\scriptstyle{#1}}
{\XXint\scriptstyle\scriptscriptstyle{#1}}
{\XXint\scriptscriptstyle\scriptscriptstyle{#1}}
\!\int} \def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$}
\vcenter{\hbox{$#2#3$}}\kern-.5\wd0}}
\def\ddashint{\Xint=}
\def\dashint{\Xint-}
\newcommand{\tr}{\text{tr} \,}
\thispagestyle{empty}
\phantom{1}
{\LARGE \noindent {\bf Analysis of staggered evolutions for nonlinear energies \\[3mm] in phase field fracture} }
\begin{small}
{\bf S.~Almi} \\
Fakult\"at f\"ur Mathematik - TUM \\
Boltzmannstr.~3 - 85748 Garching bei M\"unchen - Germany \\
{[email protected]} \\[3mm]
{\bf M.~Negri} \\
Department of Mathematics - University of Pavia \\
Via A.~Ferrata 1 - 27100 Pavia - Italy \\
{[email protected]}
\noindent {\bf Abstract.} We consider a class of separately convex phase field energies employed in fracture mechanics, featuring non-interpenetration
and a general softening behavior. We analyze the time-discrete evolutions generated by a staggered minimization scheme, where the fracture irreversibility is modeled by a monotonicity constraint on the phase field variable. We characterize the time-continuous limits of the discrete solutions in terms of balanced viscosity evolutions, parametrized by their arc-length with respect to the $L^2$-norm (for the phase field) and the $H^1$-norm (for the displacement field). By a careful study of the energy balance we deduce that time-continuous evolutions may still exhibit an alternate behavior at discontinuity times.
\noindent {\bf AMS Subject Classification.} 49M25, 49J45, 74B05, 74R05, 74R10.
\end{small}
\section{Introduction} \label{Intro}
Over the last decades, the use of phase field models in computational fracture mechanics has been constantly increasing and has found many interesting applications. In the original formulation of~\cite{MR1745759} for the quasi-static evolution of brittle fracture in linearly elastic bodies, the propagation of a crack, here represented by a phase field function~$z$, is described in terms of equilibrium configurations (i.e., critical points) of the \emph{Ambrosio-Tortorelli functional} \begin{equation}\label{intro1}
\mathcal{G}_{\varepsilon}(u,z):= \tfrac{1}{2} \int_{\Omega} (z^{2}+\eta_{\varepsilon}) \stress(u){\,:\,}\strain(u)\,\mathrm{d} x + G_{c} \int_{\Omega} \varepsilon |\nabla{z}|^{2} + \tfrac{1}{4\varepsilon} (1-z)^{2}\,\mathrm{d} x \,, \end{equation} where~$\Omega$ is an open bounded subset of~$\R^{n}$ with Lipschitz boundary~$\partial\Omega$, $u\in H^{1}(\Omega;\R^{n})$ is the displacement field,~$\strain(u)$ denotes the symmetric part of the gradient of~$u$,~$\stress(u):=\mathbb{C}\strain(u)$ is the stress,~$\mathbb{C}$ being the usual elasticity tensor, $\varepsilon$ and~$\eta_{\varepsilon}$ are two small positive parameters, and~$G_{c}$ is the toughness, a positive constant related to the physical properties of the material under consideration (from now on we impose $G_{c}=1$). In~\eqref{intro1} the function~$z\in H^{1}(\Omega)$ is supposed to take values in~$[0,1]$, where~$z(x)=1$ if the material is safe at~$x$, while $z(x)=0$ means that the elastic body~$\Omega$ presents a crack at~$x$. Hence, the zero level set of~$z$ represents the fracture and~$z$ can be interpreted as a regularization of a crack set.
The advantage in using phase field models like~\eqref{intro1} lies in their ability to handle the complexity of moving cracks, making the numerical implementation of the fracture process feasible even in rather involved geometrical settings. Indeed, energies of the form~$\mathcal{G}_{\varepsilon}$, defined on Sobolev spaces, can be easily discretized in finite element spaces or by finite differences. Furthermore, equilibrium configurations for~$\mathcal{G}_{\varepsilon}$ can be efficiently computed by means of \emph{alternate minimization} algorithms (see, e.g.,~\cite{MR2341850, MR1745759, MR2669398}), where~$\mathcal{G}_{\varepsilon}$ is iteratively minimized first w.r.t.~$u$ and then w.r.t.~$z$. This implies, in view of the quadratic nature of the functional, that at each step of the algorithm only a linear system has to be solved.
Starting from the seminal paper~\cite{MR1075076}, the connection between~\eqref{intro1} and brittle fracture mechanics has been drawn from a theoretical point of view by studying the~$\Gamma$-convergence as~$\varepsilon\to 0$ of~$\mathcal{G}_{\varepsilon}$ in BV-like spaces. A first result has been obtained in~\cite{MR2074682} in an~$SBD^{2}$ setting, while the generalization to~$GSBD^{2}$~\cite{MR3082250} has been presented in~\cite{ChambolleCrismale_D, MR3247391}. In this context, the limit functional~$\mathcal{G}_{0}$ is defined as \begin{equation}\label{intro2} \mathcal{G}_{0}(u):= \tfrac{1}{2} \int_{\Omega} \stress(u){\,:\,}\strain(u)\,\mathrm{d} x + \mathcal{H}^{n-1}(J_{u}) \qquad\text{for $u\in GSBD^{2}(\Omega)$}, \end{equation} where~$J_{u}$ denotes the approximate discontinuity set of~$u$ and therefore represents, in a suitable sense, a crack set.
While the $\Gamma$-convergence analysis ensures the convergence of minimizers of~\eqref{intro1} to minimizers of~\eqref{intro2}, and hence provides a rigorous justification of the phase field model~\eqref{intro1} at a static level, much less is known about the convergence of evolutions, in particular those obtained by alternate minimization schemes.
A first analysis of convergence of these algorithms has been carried out in the recent paper~\cite{KneesNegri_M3AS17}, together with a full description of the limit evolutions in the language of rate-independent processes (see, e.g.,~\cite{MR2182832, MielkeRoubicek} and references therein). The techniques developed in~\cite{KneesNegri_M3AS17} have then been applied to a finite dimensional approximation of~\eqref{intro1} in~\cite{Almi2017}.
Let us briefly discuss the result obtained in~\cite{KneesNegri_M3AS17}. In dimension $n=2$, let~$[0,T]$ be a time interval and consider, for instance, a time dependent boundary condition $u = g(t)$ on~$\partial \Omega$ and initial conditions~$u_0$ and~$z_0$, with $0 \le z_0 \le 1$. Proceeding by time discretization, for every $k\in\mathbb{N}\setminus\{0\}$ let $\tau_k \coloneq T/k$ be a time increment and denote $t^{k}_{i} \coloneq i \tau_k$, for $i=0,\ldots,k$. A discrete-in-time evolution is constructed by the following procedure: at time~$t^{k}_{i}$ we define the sequences~$u^{k}_{i,j}$ and~$z^{k}_{i,j}$, for $j\geq 0$, by setting $u^{k}_{i,0}:=u^{k}_{i-1}$, $z^{k}_{i,0}:=z^{k}_{i-1}$, and \begin{eqnarray}
&& u^{k}_{i,j+1}:=\argmin\,\{ \mathcal{G}_{\varepsilon}(u,z^{k}_{i,j}):\, u\in H^{1}(\Omega;\R^{2}),\, u=g(t^{k}_{i}) \text{ on~$\partial\Omega$} \}\,, \label{altmin}\\[1mm] && z^{k}_{i,j+1}:=\argmin\,\{\mathcal{G}_{\varepsilon}(u^{k}_{i,j+1},z):\,z\in H^{1}(\Omega),\, z\leq z^{k}_{i,j}\}\,. \label{altmin2} \end{eqnarray} In the limit $j\to \infty$, the algorithm~\eqref{altmin}-\eqref{altmin2} computes a limit pair~$(u^{k}_{i},z^{k}_{i})\in H^{1}(\Omega;\R^{2})\times H^{1}(\Omega)$, which turns out to be an equilibrium configuration of~$\mathcal{G}_{\varepsilon}$. We notice here that in the minimization~\eqref{altmin2} a \emph{strong irreversibility} is imposed, which forces the phase field variable~$z$ to decrease at each iteration. A complete convergence result for the scheme~\eqref{altmin}-\eqref{altmin2} with the \emph{weaker} constraint~$z\leq z^{k}_{i-1}$ is still out of reach in our quasi static setting. We mention that a first result in this direction has been obtained in the work~\cite{A-B-N17} in the context of gradient flows, i.e., adding to the minimum problem~\eqref{altmin2} an~$L^{2}$-penalization of the distance between~$z^{k}_{i,j}$ and~$z^{k}_{i-1}$. Clearly the two above constraints are equivalent if we consider the simpler scheme with only one iteration of~\eqref{altmin}-\eqref{altmin2}, that has been employed in many mathematical papers (see, for instance,~\cite{MR3249813, MR2106765, MR3021776, MR3332887, MR3893258, Thomas13}). We also point out that the restriction to a two dimensional setting is rather technical, and is due to Sobolev embeddings that hold only in~$\Omega\subseteq \R^{2}$.
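The mechanism behind the staggered scheme~\eqref{altmin}-\eqref{altmin2} — each half-step is an exact partial minimization, so the energy cannot increase along the iterations — can be illustrated on a scalar toy problem. The energy below is our own caricature (two springs in series, one of them degradable, under an end displacement), not the functional $\mathcal{G}_{\varepsilon}$; all names and parameter values are illustrative assumptions.

```python
# Scalar caricature of the staggered (alternate minimization) scheme:
# G(u, z) = 0.5*(z^2 + eta)*u^2 + 0.5*(g - u)^2 + 0.5*(1 - z)^2,
# with u the internal node displacement and z the damage-like variable.
eta, g = 1e-3, 2.0

def G(u, z):
    return 0.5 * (z**2 + eta) * u**2 + 0.5 * (g - u)**2 + 0.5 * (1 - z)**2

def staggered(z0, iters=50):
    z, u, energies = z0, 0.0, []
    for _ in range(iters):
        u = g / (1.0 + z**2 + eta)        # exact minimizer in u for fixed z
        z = min(z, 1.0 / (1.0 + u**2))    # minimizer in z under z <= previous iterate
        energies.append(G(u, z))
    return u, z, energies

u, z, energies = staggered(z0=1.0)
# Each half-step minimizes G in one variable, so the energy is non-increasing.
assert all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
```

Since the model energy is separately convex, each update has a closed form, and the asserted monotonicity is the discrete analogue of the energy estimates exploited for the actual scheme.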
In order to study the limit as the time step~$\tau_{k}$ tends to~$0$, it is not technically convenient to investigate the limit of each configuration~$(u^{k}_{i,j}, z^{k}_{i,j})$. On the contrary, in~\cite{KneesNegri_M3AS17} the authors provide a global description of the evolution by introducing an arc-length reparametrization of time, that is, a reparametrization based on the distance between the steps of the scheme~\eqref{altmin}-\eqref{altmin2}. This is reminiscent of the usual approach to viscous approximation (see, e.g.,~\cite{MR3021776, MR3332887, MR3893258} in the context of phase field). The crucial point in~\cite{KneesNegri_M3AS17} is the choice of the norms used to compute the arc-length of the algorithm: while in the viscous setting it is natural to employ the viscosity norm, in~\eqref{altmin}-\eqref{altmin2} it is not clear whether there are preferable norms. Nevertheless, by its \emph{quadratic structure}, the functional~$\mathcal{G}_{\varepsilon}$ induces two weighted $H^{1}$-norms for~$u$ and~$z$, respectively, that are therefore referred to as \emph{energy norms}. With respect to these particular norms, it turns out that the affine interpolation curves between two consecutive states of the algorithm~\eqref{altmin}-\eqref{altmin2} are actually gradient flows of~$\mathcal{G}_{\varepsilon}$, whose lengths can be uniformly bounded. Gluing together all the interpolations and reparametrizing time, we obtain a piecewise linear curve with bounded velocity connecting all the states of the minimizing scheme and satisfying a discrete energy balance. As $k\to\infty$, the limit of these interpolation curves is a \emph{parametrized Balanced Viscosity evolution} complying with an equilibrium condition and an energy-dissipation balance (we refer to~\cite{MR3531671, MielkeRoubicek} for more details on this kind of solutions).
Despite the sound mathematical result, when reading~\cite{KneesNegri_M3AS17} one immediately notices that the whole convergence analysis strongly depends on the specific structure of the functional~\eqref{intro1}. This remark becomes clear if we try to repeat the above strategy with a different phase field energy, such as \begin{equation}\label{intro3}
\mathcal{I}_{\varepsilon}(u,z):= \tfrac{1}{2} \int_{\Omega} W(z, \strain(u))\,\mathrm{d} x + \int_{\Omega} \varepsilon |\nabla{z}|^{2} + f_{\varepsilon}(z) \,\mathrm{d} x\,, \end{equation} where some nonlinearities~$W$ and~$f_{\varepsilon}$ have been introduced, which make the functional~$\mathcal{I}_{\varepsilon}$ not separately quadratic. In this new context, there is no clear notion of energy norms. Hence, when trying to define an arc-length reparametrization of time, we would be forced to choose \emph{a priori} some norms in order to estimate the distance between two steps of the alternate minimization algorithm. Moreover, since~$\mathcal{I}_{\varepsilon}$ is strongly nonlinear, we can no longer expect that the linear interpolation between two consecutive states of the minimization algorithm represents a gradient flow of~$\mathcal{I}_{\varepsilon}$. Therefore, the convergence of a numerical scheme of the form~\eqref{altmin}-\eqref{altmin2} for~$\mathcal{I}_{\varepsilon}$ does not trivially follow from the results of~\cite{KneesNegri_M3AS17} and needs further analysis, which is indeed the goal of the present paper.
More precisely, we focus, always in a two dimensional setting, on the phase field model introduced in~\cite{A-M-M09, ComiPerego_IJSS01}. The basic idea of the model is that an elastic material behaves differently under tension or compression, and a crack can appear or evolve only under tension. This means that the presence of a phase field should not affect the ability of the elastic body to store energy under compression. Hence, differently from~\eqref{intro1}, the factor~$z^{2}+\eta_{\varepsilon}$ cannot pre-multiply the whole stress~$\stress(u)$. On the contrary, a splitting of~$\strain(u)$ into its volumetric $\strain_{v}(u):= \tfrac{1}{2} \text{tr} \, \strain(u)\boldsymbol{I}$ and deviatoric~$\strain_{d}(u):= \strain(u) - \strain_{v}(u)$ components has to be considered, where the symbol $\text{tr} \,$ stands for the trace of a matrix. In order to further distinguish between tension and compression, we introduce~$\strain^{\pm}_{v}(u):= \tfrac{1}{2} (\text{tr} \, \strain(u))_{\pm}\boldsymbol{I}$,~$(\cdot)_{\pm}$ denoting the positive and negative part, respectively. With this notation at hand, the elastic energy density~$W$ in~\eqref{intro3} takes the form \begin{equation}\label{intro5}
W(z,\strain):= h(z) \big(\mu |\strain_{d}|^{2}+\kappa|\strain_{v}^{+}|^{2} \big) +\kappa|\strain_{v}^{-}|^{2} \qquad\text{for every~$z\in\R$ and every~$\strain\in\mathbb{M}^{2}_{s}$} , \end{equation} where~$h$ is a suitable \emph{degradation function},~$\mu,\kappa>0$ are two positive constants related to the Lam\'e coefficients of the material, and~$\mathbb{M}^{2}_{s}$ denotes the space of symmetric matrices of order two with real coefficients. Introducing a time dependent boundary condition~$g\colon [0,T]\to H^{1}(\Omega;\R^{2})$, the complete energy functional reads, for $t\in[0,T]$, $u\in H^{1}_{0}(\Omega;\R^{2})$, and~$z\in H^{1}(\Omega)$, as \begin{equation}\label{intro6}
\F_{\varepsilon}(t, u,z):= \tfrac{1}{2} \int_{\Omega} W(z, \strain(u + g(t)))\,\mathrm{d} x + \int_{\Omega} \varepsilon |\nabla{z}|^{2} + f_{\varepsilon}(z) \,\mathrm{d} x \,. \end{equation} We note that the explicit time dependence in~$\F_{\varepsilon}$ has been introduced in order to fix once and for all the ambient space~$H^{1}_{0}(\Omega;\R^{2})$ for the displacement variable~$u$. This means that the real displacement will be $u+g(t)$, but the unknown of the problem is only~$u$. The advantage of this choice will become clear in the discussions of Section~\ref{s.gf}. More importantly, we remark again that in~\eqref{intro5} and~\eqref{intro6} we allow for nonlinearities~$h$ and~$f_{\varepsilon}$ different from the usual $z^{2}+\eta_{\varepsilon}$ and $\tfrac{1}{4\varepsilon}(1-z)^{2}$. This freedom is well justified by the existing literature on phase field fracture mechanics (see, for instance,~\cite{ MR3304294, PhysRevLett.87.045501, Wu_JMPS17}), where the modeling of different phenomena, such as brittle or cohesive fracture growth, results in the choice of different degradation profiles. Here, we will assume~$h \in C^{1,1}_{\mathrm{loc}}(\R)$, convex, positive, non-decreasing in~$[0,+\infty)$, and with minimum at~$0$, and $f_{\varepsilon} \in C^{1,1}_{\mathrm{loc}}(\R)$ strongly convex, non-negative, and with minimum at~$1$. We refer to Section~\ref{s.setting} for the precise setting.
The asymptotic behavior of~$\F_{\varepsilon}$ has been recently investigated in~\cite{MR3780140}. In dimension~$n=2$ and with the usual degradation functions $h(z) = z^{2}+\eta_{\varepsilon}$ and $f_{\varepsilon}(z)= \tfrac{1}{4\varepsilon}(1-z)^{2}$, it has been shown that~$\F_{\varepsilon}$ $\Gamma$-converges as~$\varepsilon\to 0$ to the functional \begin{displaymath} \F_{0}(u):= \left\{ \begin{array}{ll}
\displaystyle \tfrac{1}{2}\int_{\Omega} \big( 2\mu|\strain_{d}(u)|^{2} + \kappa |\strain_{v}(u)|^{2}\big) \,\mathrm{d} x + \mathcal{H}^{1}(J_{u}) & \text{if $u\in SBD(\Omega)$, $[ u ] {\,\cdot\,}\nu_{u} \geq 0$ $\mathcal{H}^{1}$-a.e. in~$J_{u}$,}\\
\displaystyle \vphantom{\int}+\infty &\text{otherwise},
\end{array}\right. \end{displaymath} where~$ [u]$ stands for the approximate jump of~$u$ on~$J_{u}$ and~$\nu_{u}$ is the approximate unit normal to~$J_{u}$. The condition $[ u ] {\,\cdot\,}\nu_{u} \geq 0$ on~$J_{u}$ represents a \emph{linear non-interpenetration} constraint which, in the fracture mechanics language, forces the lips of the crack set~$J_{u}$ to not interpenetrate.
In this paper we are interested in the study of the convergence of alternate minimization algorithms for the evolution problem of the phase field model~\eqref{intro6}. To simplify the notation, we fix $\varepsilon:=\tfrac{1}{2}$ and denote with~$\F$ the functional~$\F_{\frac{1}{2}}$. Given $T>0$, $\tau_{k}:= T/k$, and $t^{k}_{i}:= i\tau_{k}$, we consider the following iterative procedure, similar to~\eqref{altmin}-\eqref{altmin2}: at time $t^{k}_{i}$, we set $u^{k}_{i,0}:= u^{k}_{i-1}$, $z^{k}_{i,0}:= z^{k}_{i-1}$ and, for $j\geq 1$, we define \begin{eqnarray} && u^{k}_{i,j} := \argmin\,\{ \F (t^{k}_{i}, u, z^{k}_{i,j-1}): \, u\in H^{1}_{0}(\Omega;\R^{2}) \}\,, \label{altmin3}\\[1mm] && z^{k}_{i,j} := \argmin\,\{ \F (t^{k}_{i}, u^{k}_{i,j}, z): \, z\in H^{1}(\Omega),\, z\leq z^{k}_{i,j-1} \}\,. \label{altmin4} \end{eqnarray} In the limit as $j\to\infty$ we detect a critical point $(u^{k}_{i}, z^{k}_{i})$ of~$\F$. In order to analyze the limit of the time-discrete evolution~$(u^{k}_{i}, z^{k}_{i})$ as the time step~$\tau_{k}\to 0$, we follow the general scheme of~\cite{KneesNegri_M3AS17}. First, we want to interpolate between all the steps of the scheme~\eqref{altmin3}-\eqref{altmin4} and reparametrize time w.r.t.~an arc-length parameter. As already mentioned, we have to face here the fact that the energy~$\F$ is highly nonlinear and not separately quadratic. This implies that there are no intrinsic norms stemming out from the functional, as it happens in~\cite{KneesNegri_M3AS17}. In our framework, instead, we a priori fix the $H^{1}$-norm for the displacement field~$u$ and the $L^{2}$-norm for the phase field~$z$. Our choices, made clear in Section~\ref{s.gf}, are guided by the possibility to construct suitable gradient flows connecting consecutive states of our alternate minimization algorithm. 
In particular, being~$\F$ differentiable w.r.t.~$u$, by classical results we get the existence of a gradient flow of~$\F$ in the $H^{1}$-norm starting from~$u^{k}_{i,j-1}$ and ending in~$u^{k}_{i,j}$. When constructing a gradient flow for~$z$ connecting~$z^{k}_{i,j-1}$ and~$z^{k}_{i,j}$, instead, we have to deal with the irreversibility condition~$z\leq z^{k}_{i,j-1}$ which forces us to work with the weaker $L^{2}$-norm (we refer to Theorem~\ref{t.3} for more details). As a byproduct of our construction, the total length of the scheme is uniformly bounded in~$k$. Hence, gluing all the gradient flows together and reparametrizing time we obtain a sequence of curves~$(t_{k}, u_{k}, z_{k})$ with bounded velocity interpolating between all the states of the minimization scheme and satisfying, once again, discrete in time equilibrium and energy balance.
In the limit as $k\to\infty$, we prove the convergence to a parametric BV evolution $(t,u,z)\colon [0,S]\to [0,T]\times H^{1}_{0}(\Omega;\R^{2})\times H^{1}(\Omega)$, which we characterize in terms of \emph{equilibrium} and \emph{energy-dissipation balance} as follows (see Theorem~\ref{t.1} for further details): \begin{itemize} \item[$(i)$] for every $s\in(0,S)$ such that $t'(s)>0$ \begin{displaymath}
| \partial_{u}\F| (t(s), u(s), z(s))=0\qquad\text{and}\qquad | \partial_{z}^{-}\F| (t(s), u(s), z(s))=0\,, \end{displaymath} \item[$(ii)$] for every $s\in[0,S]$ \begin{displaymath} \begin{split}
\F(t(s),& \ u(s),z(s)) = \F(0,u_0,z_0)-\int_{0}^{s}|\partial_{u}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| u' (\sigma) \|_{H^1} \mathrm{d}\sigma\\
&-\int_{0}^{s} \!\! |\partial_{z}^{-}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| z' (\sigma) \|_{L^2} \mathrm{d}\sigma +\int_{0}^{s} \!\! \mathcal{P}( t(\sigma),u(\sigma), z(\sigma)) \,t'(\sigma)\,\mathrm{d}\sigma\,, \end{split} \end{displaymath} \end{itemize}
where $| \partial_{u}\F|$ and~$| \partial_{z}^{-}\F|$ denote the slopes of~$\F$ w.r.t.~$u$ and~$z$, respectively (see Definition~\ref{d.slope}), and~$\mathcal{P}$ is the power expended by the external forces (the boundary datum~$g$ in our case), defined in~\eqref{e.power}.
Roughly speaking, the equilibrium condition~$(i)$ says that at continuity times, i.e., when $t'(s)>0$, the pair~$(u(s), z(s))$ is an equilibrium configuration for~$\F$, while the energy-dissipation balance~$(ii)$ gives us a complete description of the behavior of a solution at discontinuity times. As it was already noticed in~\cite{KneesNegri_M3AS17}, the characterization~$(i)$-$(ii)$ is very similar to the one obtained in~\cite{MR3021776, MR3332887, MR3893258} with a vanishing viscosity approach. The main advantage of the iterative minimization~\eqref{altmin3}-\eqref{altmin4} is that we do not have to add a fictitious viscosity term. Moreover, our constructive scheme is closer to the numerical applications, where alternate minimization schemes are usually adopted.
We conclude with a short description of the main steps of the proof of~$(i)$ and~$(ii)$. The convergence of~$(t_{k}, u_{k}, z_{k})$ is obtained by a compactness argument. By the nonlinearity of~$\F$, we actually need a pointwise strong convergence of~$u_{k}$ in~$H^{1}(\Omega;\R^{2})$, which is shown in Proposition~\ref{p.compactness} by studying convergence of gradient flows. The equilibrium~$(i)$ and the lower energy-dissipation inequality then follow from lower semicontinuity of the functional~$\F$ and of the slopes~$|\partial_{u} \F|$ and~$|\partial_{z}^{-} \F|$, discussed in Section~\ref{prop:slopes}. The technically hard part comes with the upper energy-dissipation inequality, where we pay for the choice of the $L^{2}$-norm to estimate the arc-length of the algorithm~\eqref{altmin3}-\eqref{altmin4} w.r.t.~$z$. Comparing with~\cite{KneesNegri_M3AS17}, indeed, we cannot employ a chain rule argument, since the evolution~$z$ is qualitatively the reparametrization of an $L^{2}$-gradient flow, instead of an $H^{1}$-gradient flow. For this reason, we need to exploit a Riemann sum argument (see, e.g.,~\cite{MR2186036, Negri_ACV}). In this respect, we have to face the lack of summability of the slope $|\partial_{z}^{-} \F| (t(\cdot), u(\cdot), z(\cdot))$, which does not follow from energy estimates, since we are only able to control $|\partial_{z}^{-} \F| (t(\cdot), u(\cdot), z(\cdot)) \| z'(\cdot) \|_{L^{2}}$. This problem is overcome by a careful analysis of the evolution of~$z$. The idea is to gain the summability of~$|\partial_{z}^{-} \F| (t(\cdot), u(\cdot), z(\cdot))$ outside the set $\{\| z' \|_{L^{2}}=0 \}$. This allows us to perform a further change of variable and employ a Riemann sum argument in the new variable. As a byproduct of our analysis, we also show that a limit evolution~$(t,u,z)$ may still exhibit an alternate behavior at discontinuity times. We refer to Section~\ref{s.inequality} and Appendix~\ref{AppB} for the full details.
\tableofcontents
\section{Setting and statement of the main result}\label{s.setting}
\subsection{Elastic energy density with anisotropic softening}
Let us first introduce some notation. We denote by~$\mathbb{M}^{2}$ the space of squared matrices of order~$2$ (with real entries) and by~$\mathbb{M}^{2}_{s}$ the subspace of symmetric matrices. For every $\FF\in\mathbb{M}^{2}$, its volumetric and deviatoric part, respectively, are denoted by \begin{equation*} \FF_{v}\coloneq \tfrac{1}{2}(\text{tr} \, \FF) \boldsymbol{I}\qquad \text{and} \qquad \FF_{d}\coloneq \FF-\FF_{v}\,, \end{equation*} where $\text{tr} \, \FF$ stands for the trace of~$\FF$ and~$\boldsymbol{I}$ is the identity matrix. We notice that $\FF_{v}{\,:\,}\FF_{d}=0$, where the symbol~$:$ indicates the usual scalar product between matrices. As a consequence, we have that \begin{displaymath}
|\FF|^{2}=|\FF_{v}|^{2}+|\FF_{d}|^{2}\qquad\text{for every $\FF\in\mathbb{M}^{2}$}\,, \end{displaymath}
where $| \cdot |$ denotes the Frobenius norm. Furthermore, we set \begin{displaymath} \FF_{v}^{\pm}\coloneq \tfrac{1}{2}(\text{tr} \, \FF)_{\pm}\boldsymbol{I}\,, \end{displaymath}
where~$(\cdot)_{+}$ and~$(\cdot)_{-}$ denote positive and negative part, respectively. Clearly, $| \FF_v |^2 = | \FF^+_v |^2 + | \FF^-_v |^2$.
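These decomposition identities are elementary but easy to get wrong by a factor of $\tfrac{1}{2}$; the following sketch checks them numerically for a concrete $2\times 2$ matrix (the function and variable names are ours, chosen only for illustration).

```python
import numpy as np

def vol_dev_split(F):
    """Volumetric/deviatoric split of a 2x2 matrix, together with the
    signed volumetric parts F_v^+ and F_v^- used in the text."""
    t = np.trace(F)
    Fv  = 0.5 * t * np.eye(2)
    Fd  = F - Fv
    Fvp = 0.5 * max(t, 0.0) * np.eye(2)   # built from (tr F)_+
    Fvm = 0.5 * max(-t, 0.0) * np.eye(2)  # built from (tr F)_-
    return Fv, Fd, Fvp, Fvm

frob2 = lambda A: float(np.sum(A * A))    # squared Frobenius norm

F = np.array([[1.0, 2.0], [-0.5, -3.0]])  # tr F = -2, so the "-" part is active
Fv, Fd, Fvp, Fvm = vol_dev_split(F)
assert abs(float(np.sum(Fv * Fd))) < 1e-12               # F_v : F_d = 0
assert abs(frob2(F) - frob2(Fv) - frob2(Fd)) < 1e-12     # |F|^2 = |F_v|^2 + |F_d|^2
assert abs(frob2(Fv) - frob2(Fvp) - frob2(Fvm)) < 1e-12  # |F_v|^2 = |F_v^+|^2 + |F_v^-|^2
```

The orthogonality $\FF_v : \FF_d = 0$ holds because $\FF_v$ is a multiple of the identity and $\FF_d$ is trace-free, which is exactly what the assertions verify.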
With this notation, for a (strain) matrix $\EE\in \mathbb{M}^{2}_{s}$ we first rewrite the linear elastic energy density as \begin{align}
\Psi(\EE)&= \tfrac{\lambda}{2} |\text{tr} \,\EE|^{2}+\mu|\EE|^{2}=\lambda(|\EE_{v}^{+}|^{2}+|\EE_{v}^{-}|^{2})+\mu(|\EE_{v}^{+}|^{2}+|\EE_{v}^{-}|^{2}+|\EE_{d}|^{2}) \nonumber\\
&=\mu|\EE_{d}|^{2}+\kappa|\EE_{v}^{+}|^{2}+\kappa|\EE_{v}^{-}|^{2}=\Psi_{+}(\EE)+\Psi_{-}(\EE)\,,\label{e.density} \end{align}
where we have set $\kappa\coloneq\lambda+\mu$, $\Psi_{+}(\EE)\coloneq \mu|\EE_{d}|^{2}+\kappa|\EE_{v}^{+}|^{2}$, and $\Psi_{-}(\EE)\coloneq \kappa|\EE_{v}^{-}|^{2}$. We assume that $\mu >0$ and that $\kappa >0$.
Our phase field model, inspired by~\cite{A-M-M09, MR3780140}, does not allow for fracture under compression, i.e.~where $( \text{tr} \, \EE)_-\neq 0$; this is obtained by employing
an \emph{elastic energy density} of the form \begin{equation}\label{e.enelden} W(z,\EE)\coloneq h(z) \Psi_{+}(\EE)+ \Psi_{-}(\EE)\qquad\text{for $z\in\R$ and $\EE\in\mathbb{M}^{2}_{s}$}\,, \end{equation} where $z$ is the phase field variable and $h$ is the {\it softening} or {\it degradation function}. We assume that $h$ is convex, of class $C^{1,1}_{\mathrm{loc}}(\R)$, and that \begin{equation}
\text{ $h(z)\geq h(0) >0$ for every $z\in\R$.} \label{hp2}
\end{equation} Note that, under these assumptions, $h$ is non-decreasing in $[0,+\infty)$. We denote $\eta : = h (0) > 0$. Note that $W ( z, \cdot) $ is differentiable w.r.t.~$\strain$ and that \begin{equation} \label{stress} \partial_{\strain} W(z,\strain ) = 2 h(z) \big( \mu \strain_{d} + \kappa \strain_{v}^{+} \big) - 2\kappa \strain_{v}^{-} \,. \end{equation} Further properties of the energy density $W$ are provided in Section~\ref{prop:energy}.
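As a sanity check of~\eqref{stress}, the closed-form derivative can be compared with a central finite difference. The snippet below does this for the classical degradation choice $h(z)=z^{2}+\eta$ (the text only assumes $h$ convex and $C^{1,1}_{\mathrm{loc}}$; this specific $h$, the material constants, and all names are illustrative assumptions of ours).

```python
import numpy as np

mu, kappa, eta = 1.0, 1.5, 1e-3
h = lambda z: z**2 + eta                  # one admissible degradation function

def parts(E):
    t = np.trace(E)
    Ev  = 0.5 * t * np.eye(2)
    Ed  = E - Ev
    Evp = 0.5 * max(t, 0.0) * np.eye(2)
    Evm = 0.5 * max(-t, 0.0) * np.eye(2)
    return Ed, Evp, Evm

def W(z, E):                              # W(z,E) = h(z)*Psi_+(E) + Psi_-(E)
    Ed, Evp, Evm = parts(E)
    f2 = lambda A: float(np.sum(A * A))
    return h(z) * (mu * f2(Ed) + kappa * f2(Evp)) + kappa * f2(Evm)

def dW(z, E):                             # closed form (13): 2h(z)(mu*Ed + kappa*Ev+) - 2kappa*Ev-
    Ed, Evp, Evm = parts(E)
    return 2.0 * h(z) * (mu * Ed + kappa * Evp) - 2.0 * kappa * Evm

E = np.array([[0.3, -0.2], [-0.2, -0.7]])  # symmetric strain with tr E < 0
z, eps = 0.6, 1e-6
G = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        dE = np.zeros((2, 2)); dE[i, j] = eps
        G[i, j] = (W(z, E + dE) - W(z, E - dE)) / (2 * eps)  # central difference
assert np.allclose(G, dW(z, E), atol=1e-5)
```

Away from $\text{tr}\,\EE = 0$ the density is smooth, so the central difference matches the closed-form expression up to rounding; the chosen $\EE$ activates the compressive branch $-2\kappa\strain_v^-$.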
\subsection{Energy, slopes and power}
The reference configuration $\Omega$ is assumed to be a bounded, connected, open subset of~$\R^2$ with Lipschitz boundary~$\partial\Omega$. We denote by $\partial_D \Omega$ a non-empty subset of $\partial \Omega$ made of finitely many, relatively open, connected components. We consider a time interval $[0,T]$ and, for every $t \in [0,T]$, admissible displacements of the form $u + g(t)$ where $u$ belongs to $\U\coloneq \{u\in H^{1}(\Omega;\R^{2}):\, u=0 \text{ on~$\partial_{D}\Omega$}\}$ while the ``boundary datum'' $g$ belongs to $W^{1,q}([0,T];W^{1,p}(\Omega;\R^{2}))$ with $q\in (1,+\infty)$ and $p\in(2,+\infty)$. The phase field $z$ belongs instead to $\Z\coloneq H^{1}(\Omega)\cap L^{\infty}(\Omega)$ (even though in the evolution it will take values in $[0,1]$).
For $p\in[1,+\infty]$, we denote by~$\|\cdot\|_{W^{1,p}}$ and by~$\|\cdot\|_{L^p}$ the usual $W^{1,p}$ and~$L^{p}$-norms, respectively; we use also the notation~$\|\cdot\|_{H^{1}}$ for the $H^{1}$-norm.
Then, for every $(t,u,z) \in [0,T] \times \U \times \Z$ we define the \emph{elastic energy} as \begin{equation}\label{e.E} \begin{split}
\E ( t , u , z ) &\coloneq\int_{\Omega}W\big(z,\strain(u+g(t))\big)\,\mathrm{d} x = \int_\Omega h(z) \Psi_{+}\big(\strain(u+g(t))\big)+ \Psi_{-}\big( \strain(u+g(t))\big) \,\mathrm{d} x \\
&= \int_{\Omega}h(z)\big(\mu |\strain_{d}(u+g(t))|^{2}+\kappa|\strain_{v}^{+}(u+g(t))|^{2}\big)\,\mathrm{d} x+\int_{\Omega}\kappa|\strain_{v}^{-}(u+g(t))|^{2}\,\mathrm{d} x\,, \end{split} \end{equation} where $\strain(u+g(t))$ denotes the symmetric part of the gradient of the displacement $u+g(t)\in H^{1}(\Omega;\R^{2})$. We introduce the \emph{dissipation pseudo-potential} for the phase field $z\in\Z$ as \begin{equation}\label{e.D}
\D(z)\coloneq\tfrac{1}{2}\int_{\Omega} |\nabla{z}|^{2}+f(z) \,\mathrm{d} x\,. \end{equation} We assume that $f : \mathbb{R} \to \mathbb{R}$ is strongly convex, of class $C^{1,1}_{\mathrm{loc}}$ and that $0 \le f(1) \le f(z)$ for every $z \in \mathbb{R}$. The \emph{total energy} of the system $\F\colon[0,T]\times\U\times\Z\to[0,+\infty)$ is defined as the sum of elastic energy and dissipation pseudo-potential. Hence, for every $t\in[0,T]$, every $u\in\U$, and every $z\in\Z$ we set \begin{equation}\label{e.F} \F(t,u,z)\coloneq \E(t,u,z)+\D(z)\,. \end{equation}
In our study of quasi-static evolutions we will often employ the following slopes for the functional~$\F$, w.r.t.~the displacement~$u$ and the phase field~$z$.
\begin{definition}\label{d.slope} Let $(t,u,z) \in [0,T] \times \U \times \Z$. We define \begin{align}
|\partial_{u}\F|(t,u,z)&\coloneq \limsup_{ w\to u \text{ \rm in~$H^1$}}
\frac{(\F(t,w,z)-\F(t,u,z))_{-}}{\|w-u\|_{H^{1}}}\,,\label{e.slope1}\\
|\partial_{z}^{-}\F|(t,u,z)&\coloneq \limsup_{ v \to z^- \text{ \rm in $L^2$}}
\frac{(\F(t,u,v)-\F(t,u,z))_{-}}{\|v-z\|_{L^2}}\,,\label{e.slope2} \end{align} where $v \to z^-$ in $L^2$ means that $v \le z$ and $v \to z$ in $L^2$ (with $v \in \Z$). \end{definition}
For the properties of the slopes we refer to Section~\ref{prop:slopes}.
\begin{remark} Note that here we employ a unilateral $L^2$-slope while in \cite{KneesNegri_M3AS17} we used a unilateral $H^1$-slope. \end{remark}
In order to simplify the notation later on, for a.e.~$t \in [0,T]$, every $u\in\U$, and every $z\in\Z$, we define the {\it power functional} \begin{equation}\label{e.power} \mathcal{P}(t, u,z)\coloneq \int_{\Omega}\partial_{\EE}W(z, \EE (u+g(t))) {\,:\,} \EE (\dot{g}(t)) \, \mathrm{d} x \,, \end{equation} where $\dot{g}$ denotes the time derivative of $g$. We notice that for a.e.~$t\in[0,T]$, every $u\in \U$, and every $z\in\Z$ we have \begin{equation}\label{e.pw} \partial_{t} \F(t, u, z) = \mathcal{P}(t,u,z)\,. \end{equation}
\subsection{Time-discrete evolutions and their time-continuous limit}
First, let us briefly describe the discrete alternate minimization scheme, without entering into the technical details. Let the initial condition be $u_0 \in \U$ and $z_0 \in\Z$ with $0 \le z_0 \le 1$ and \begin{equation}\label{e.14.18}
u_{0} = \argmin\,\{ \F(0,u,z_{0}):\, u \in \U\} \qquad\text{and}\qquad z_{0} = \argmin\,\{ \F(0,u_{0},z):\, z\in \Z,\, z\leq z_{0}\}\,. \end{equation} For $k \in \mathbb{N}$, $k \neq 0$, consider a time step $\tau_{k}:=T/k$ and let $t^{k}_{i}:= i\tau_{k}$ for every $i=0,\ldots, k$. The time-discrete evolution $( u^k_i , z^k_i)$ is defined by induction w.r.t.~the index $i\in\mathbb{N}$, as follows. We set $u^{k}_{0}\coloneq u_{0}$, $z^{k}_{0}\coloneq z_{0}$. In order to define~$u^k_{i}$ and~$z^k_{i}$, given~$u^k_{i-1}$ and~$z^k_{i-1}$, we need the auxiliary sequences~$u^{k}_{i,j}$ and~$z^k_{i,j}$ defined in this way: set $u^{k}_{i,0}:= u^{k}_{i-1}$ and $z^{k}_{i,0}:=z^{k}_{i-1}$, and, by induction w.r.t.~the index $j\in\mathbb{N}$, define by alternate minimization \begin{eqnarray*} &&\displaystyle u^{k}_{i,j+1}:=\argmin\,\{\F(t^{k}_{i},u,z^{k}_{i,j}):\,u\in \U \}\,, \\[2mm] &&\displaystyle z^{k}_{i,j+1}:=\argmin\,\{\F(t^{k}_{i},u^{k}_{i,j + 1},z):\, z\in \Z,\, z\leq z^{k}_{i,j}\}\,. \end{eqnarray*} We set $z^k_{i}=\lim_{j \to \infty} z^k_{i,j}$ and $u^k_{i}=\lim_{j \to \infty} u^k_{i,j}$ (existence of these sequences and of their limits will be proven in the sequel).
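The interplay between the outer time loop and the inner staggered loop can be sketched in a scalar caricature (the model energy, the monotone load, and all names below are our own illustration, not the paper's functional). The point is that passing the phase field of the previous time step forward as the constraint makes it non-increasing along the whole evolution.

```python
# Toy time-discrete evolution: outer loop over load steps t_i = i/k,
# inner staggered loop at fixed time, with irreversibility z <= previous iterate.
# Model energy: F(t, u, z) = 0.5*(z^2 + eta)*u^2 + 0.5*(g(t) - u)^2 + 0.5*(1 - z)^2
eta = 1e-3
g = lambda t: 3.0 * t                     # monotone "boundary" load

def time_step(t, z_prev, iters=30):
    z = z_prev
    for _ in range(iters):
        u = g(t) / (1.0 + z**2 + eta)     # argmin in u for fixed z
        z = min(z, 1.0 / (1.0 + u**2))    # argmin in z under z <= previous iterate
    return u, z

k, z_hist, z = 10, [], 1.0
for i in range(1, k + 1):
    u, z = time_step(i / k, z)            # previous z enters as the constraint
    z_hist.append(z)

# Irreversibility: the phase field is non-increasing along the evolution.
assert all(b <= a for a, b in zip(z_hist, z_hist[1:]))
assert 0.0 <= z_hist[-1] <= z_hist[0] <= 1.0
```

Under the growing load the damage-like variable decreases monotonically in time, mirroring property $(c)$ of Theorem~\ref{t.1} in this toy setting.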
In order to study the limit as $k \to \infty$, i.e.~as the time step $\tau_k$ vanishes, it will be technically convenient to interpolate all the configurations $u^k_{i,j}$ and $z^k_{i,j}$ by suitable rescaled gradient flows; this will ultimately provide, for every index $k$, an ``arc-length'' parametrization $s \mapsto ( t_k (s) , u_k (s) , z_k (s))$ from a fixed interval $[0,S]$ to $[0,T] \times \U \times \Z$ which interpolates all the configurations $u^k_{i,j}$ and $z^k_{i,j}$.
In the parametrized framework, we will use the following terminology.
\begin{definition}\label{d.continuitypoint}
A point $s \in [0,S]$ is a \emph{continuity point} for $(t,u,z)$ if for every $\delta>0$ there exists $s_\delta$ such that $| s_\delta -s | < \delta$ and $t( s_\delta ) \neq t (s)$. On the contrary, $s\in[0,S]$ is a \emph{discontinuity point} of~$(t,u,z)$ if~$t$ is constant in a neighborhood of~$s$. \end{definition}
We are now ready to give the main result of this paper.
\begin{theorem}\label{t.1}
Up to subsequences, not relabelled, the parametrizations $(t_k, u_k, z_k)$ converge to a parametrization $(t,u,z)\colon[0,S]\to[0,T]\times\U\times\Z$ with $(t(0) , u(0) , z(0)) = (0,u_0,z_0)$, which satisfies the following properties: \begin{itemize}
\item [$(a)$] \emph{Regularity}: $(t,u,z)\in W^{1,\infty}([0,S];[0,T]\times H^1 ( \Omega ; \R^2) \times L^{2}(\Omega))$, $z\in L^{\infty}([0,S];H^1(\Omega))$, and, for a.e.~$s\in[0,S]$, \begin{displaymath}
|t'(s)|+\|u'(s)\|_{H^{1}}+\|z'(s)\|_{L^2}\leq 1\,, \end{displaymath} where the symbol~$'$ denotes the derivative w.r.t.~the parametrization variable $s$;
\item [$(b)$] \emph{Time parametrization}: the function $t\colon [0,S]\to[0,T]$ is non-decreasing and surjective;
\item [$(c)$] \emph{Irreversibility}: the function $z\colon[0,S]\to\Z$ is non-increasing and $0 \le z(s) \leq 1$ for every $0\leq s \leq S$;
\item [$(d)$] \emph{Equilibrium}: for every continuity point $s\in[0,S]$ of $(t,u,z)$ \begin{displaymath}
|\partial_{u}\F|(t(s),u(s),z(s))=0\qquad\text{and}\qquad|\partial_{z}^{-}\F|(t(s),u(s),z(s))=0 \,; \end{displaymath}
\item [$(e)$] \emph{Energy-dissipation equality}: for every $s\in[0,S]$ \begin{equation}\label{e.eneq} \begin{split}
\F(t(s),& \ u(s),z(s)) = \F(0,u_0,z_0)-\int_{0}^{s}|\partial_{u}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| u' (\sigma) \|_{H^1} \mathrm{d}\sigma\\
&-\int_{0}^{s} \!\! |\partial_{z}^{-}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| z' (\sigma) \|_{L^2} \mathrm{d}\sigma +\int_{0}^{s} \!\! \mathcal{P}( t(\sigma),u(\sigma), z(\sigma)) \,t'(\sigma)\,\mathrm{d}\sigma\,, \end{split} \end{equation}
where we intend that $|\partial_{z}^{-}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| z' (\sigma) \|_{L^2}=0$ whenever~$\| z'(\sigma)\|_{L^{2}}=0$ (including the case $|\partial_{z}^{-}\F|(t(\sigma),u(\sigma),z(\sigma))=+\infty$). \end{itemize} Any evolution satisfying the above properties will be called a \emph{parametrized Balanced Viscosity evolution}~\cite{MR3531671}.
\end{theorem}
The proof of this theorem is contained in Section~\ref{s.prooft1}.
\begin{remark} We note that the equilibrium condition~\eqref{e.14.18} is not strictly necessary. However, it allows us to shorten some proofs, without affecting the convergence analysis or the behavior of solutions.
\end{remark}
\begin{remark}
The convention $|\partial_{u}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| u' (\sigma) \|_{H^1}=0$ when~$\| u'(\sigma)\|_{H^{1}}=0$ is not necessary, since~$u(\sigma)\in H^{1}(\Omega;\R^{2})$, which implies that $|\partial_{u}\F|(t(\sigma), u(\sigma), z(\sigma)) < +\infty$ for every~$\sigma\in[0,S]$. \end{remark}
\begin{remark} In Section \ref{s.inequality} we prove a refined energy-dissipation identity which implies (see Appendix~\ref{AppB}) that the limit evolution may still present an alternate behavior at discontinuity points. \end{remark}
\section{Lemmata} \label{s.lemmata}
In this section we collect some technical results that will be useful in the forthcoming discussions.
\subsection{Properties of the energy}
We first show some basic properties of the elastic energy density $W$.
\begin{lemma}\label{l.HMWw} The function $W\colon\R\times\mathbb{M}^{2}_{s}\to[0,+\infty)$ is of class~$C^{1,1}_{\mathrm {loc}}$. Moreover, there exist two positive constants $c,C$ such that for every $z\in [0,1]$ and every $\strain_{1},\strain_{2} \in\mathbb{M}^{2}_{s}$ the following holds: \begin{itemize}
\item[$(a)$] $\big(\partial_{\strain} W(z,\strain_{1})-\partial_{\strain}W(z,\strain_{2})\big){\,:\,}(\strain_{1}-\strain_{2})\geq c|\strain_{1}-\strain_{2}|^{2}$;\label{e.2}
\item[$(b)$] $\big|\partial_{\strain} W(z,\strain_{1})-\partial_{\strain}W(z,\strain_{2})\big|\leq C |\strain_{1}-\strain_{2}|$.\label{e.3} \end{itemize} Since $\partial_{\strain} W ( z , 0 ) = 0$ it follows also that for every $\strain \in \mathbb{M}^2_s$ we have \begin{itemize}
\item[$(c)$] $| \partial_{\strain} W ( z , \strain ) | \leq C | \strain | $. \label{e.4} \end{itemize} \end{lemma}
\begin{proof} Write \begin{displaymath}
\partial_{\strain}W(z,\strain_{1})-\partial_{\strain}W(z,\strain_{2})= 2 h(z)\big( \mu (\strain_{1,d}-\strain_{2,d}) + \kappa (\strain_{1,v}^{+}-\strain_{2,v}^{+})\big) - 2\kappa(\strain_{1,v}^{-}-\strain_{2,v}^{-})\,. \end{displaymath} By linearity and orthogonality, to prove~$(a)$ it is enough to check that \begin{eqnarray*}
& h(z) \mu (\strain_{1,d}-\strain_{2,d}) {\,:\,} (\strain_{1,d}-\strain_{2,d}) \geq \eta \mu | \strain_{1,d}-\strain_{2,d}|^{2}\,, \\[1mm]
& h(z)\kappa (\strain_{1,v}^{+}-\strain_{2,v}^{+}){\,:\,} (\strain_{1,v}-\strain_{2,v}) - \kappa(\strain_{1,v}^{-}-\strain_{2,v}^{-}){\,:\,}(\strain_{1,v}-\strain_{2,v}) \geq c | \strain_{1,v} - \strain_{2,v} |^{2}\,. \end{eqnarray*} The first inequality is straightforward. For the second we can write the left hand side in terms of traces as \begin{equation} \label{e.piuemeno} \begin{split} \tfrac12 h(z)\kappa &\big((\text{tr} \,\strain_{1})_{+}-(\text{tr} \,\strain_{2})_{+}\big)(\text{tr} \,\strain_{1}-\text{tr} \,\strain_{2})- \tfrac12 \kappa \big((\text{tr} \,\strain_{1})_{-}-(\text{tr} \,\strain_{2})_{-}\big)(\text{tr} \,\strain_{1}-\text{tr} \,\strain_{2}) . \end{split} \end{equation} Let $c = \tfrac12 \kappa \min \{ h(z) , 1 \} \ge \tfrac12 \kappa \eta >0 $.
Since $(\cdot)_+$ is monotone non-decreasing we get $$
\tfrac12 h(z)\kappa \big((\text{tr} \,\strain_{1})_{+}-(\text{tr} \,\strain_{2})_{+}\big)(\text{tr} \,\strain_{1}-\text{tr} \,\strain_{2}) \ge c \big((\text{tr} \,\strain_{1})_{+}-(\text{tr} \,\strain_{2})_{+}\big)(\text{tr} \,\strain_{1}-\text{tr} \,\strain_{2}) . $$
Using the fact that $-(\cdot)_-$ is non-decreasing, we can argue in a similar way for the second term in \eqref{e.piuemeno} and get $$
- \tfrac12 \kappa \big((\text{tr} \,\strain_{1})_{-}-(\text{tr} \,\strain_{2})_{-}\big)(\text{tr} \,\strain_{1}-\text{tr} \,\strain_{2}) \ge - c \big((\text{tr} \,\strain_{1})_{-}-(\text{tr} \,\strain_{2})_{-}\big)(\text{tr} \,\strain_{1}-\text{tr} \,\strain_{2}) . $$ Taking the sum of the last two inequalities gives the required estimate.
Finally, $(b)$ follows from \eqref{stress} thanks to the fact that $z \in [0,1]$ and $h$ is continuous.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
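As a quick finite-dimensional sanity check of the strong monotonicity in $(a)$, the sketch below uses a scalar analogue of the tension/compression split: the ``stress'' $\partial_\strain W(z,\strain)=2\kappa\big(h(z)\strain_+ + \strain_-\big)$ with a hypothetical degradation $h$ and hypothetical constants (these are illustrative stand-ins, not the paper's $W$), and verifies numerically that the monotonicity gap with $c=\kappa\min\{h(z),1\}$ is nonnegative.

```python
import numpy as np

# Scalar analogue of the split stress in (a) of the Lemma (hypothetical, 1-D):
#   dW(z, e) = 2*KAPPA*(h(z)*max(e, 0) + min(e, 0)),
# i.e. stiffness h(z)*KAPPA in traction and KAPPA in compression.
ETA, KAPPA = 0.1, 2.0

def h(z):
    return ETA + z**2          # h(z) >= ETA > 0

def dW(z, e):
    return 2.0 * KAPPA * (h(z) * max(e, 0.0) + min(e, 0.0))

def monotonicity_gap(z, e1, e2):
    # (dW(z,e1) - dW(z,e2))*(e1 - e2) - c*(e1 - e2)**2, c = KAPPA*min(h(z), 1)
    c = KAPPA * min(h(z), 1.0)
    return (dW(z, e1) - dW(z, e2)) * (e1 - e2) - c * (e1 - e2)**2
```

Splitting into the cases $\strain_1,\strain_2 \ge 0$, both $\le 0$, and mixed signs reproduces, in one dimension, exactly the monotonicity argument of the proof.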
We notice that for every $t\in[0,T]$, every $u,\varphi\in\U$, and every $z,\psi\in\Z$ we can express the partial derivatives of~$\F ( t, \cdot, \cdot)$ w.r.t.~$u$ and~$z$ as \begin{align*} \partial_{u}\F(t,u,z)[\varphi]&=\int_{\Omega} \partial_{\strain}W(z,\strain(u + g(t) )){\,:\,}\strain(\varphi)\,\mathrm{d} x\\ &=2\int_{\Omega}h(z)\big(\mu\strain_{d}(u + g(t)){\,:\,}\strain_{d}(\varphi)+\kappa\strain_{v}^{+}(u + g(t)) {\,:\,}\strain_{v}(\varphi)\big)\,\mathrm{d} x-2\kappa\int_{\Omega}\strain_{v}^{-}(u + g(t)) {\,:\,}\strain_{v}(\varphi)\,\mathrm{d} x\,,\\[1mm]
\partial_{z}\F(t,u,z)[\psi]&=\int_{\Omega} \partial_z W(z,\strain(u+ g(t))) \psi \,\mathrm{d} x+\int_{\Omega}\nabla{z}{\,\cdot\,}\nabla\psi\,\mathrm{d} x + \int_{\Omega} f'(z) \psi\,\mathrm{d} x \\
& = \int_{\Omega}h'(z) \psi \Psi_+ \big( \strain (u + g(t)) \big) \,\mathrm{d} x +\int_{\Omega}\nabla{z}{\,\cdot\,}\nabla\psi\,\mathrm{d} x + \int_{\Omega} f'(z) \psi\,\mathrm{d} x\,. \end{align*}
\begin{remark} \label{r.sepstrconv}\textnormal{ It is important to note that the energy $\F ( t , \cdot ,\cdot)$ is separately strongly convex in $\U \times \Z$, with respect to the $H^1$-norms. More precisely, there exists $C>0$ such that, uniformly w.r.t.~$t \in [0,T]$ and $u \in \U$, it holds \begin{equation} \label{e.strconv-z}
\partial_z \F ( t, u , z_2 ) [ z_2 - z_1] - \partial_z \F ( t , u , z_1) [z_2 - z_1] \ge C \| z_2 - z_1 \|^2_{H^1} \,, \end{equation} indeed, by convexity of $h$ and strong convexity of $f$, we can write the left hand side as \begin{align*}
\int_\Omega [ h' (z_2) - h'(z_1) ] (z_2 - z_1) \Psi_+ \big( \strain (u + g(t)) \big) & + | \nabla (z_2 -z_1) |^2 + [ f' (z_2) - f'(z_1)] ( z_2 - z_1) \, dx \ge \\
& \ge \int_\Omega | \nabla (z_2 -z_1) |^2 + c \, ( z_2 - z_1)^2 \, dx . \end{align*} In a similar way, there exists $C>0$ such that, uniformly w.r.t.~$t \in [0,T]$ and $z \in \Z$, it holds \begin{equation} \label{e.strconv-u}
\partial_u \F ( t, u_2 , z ) [ u_2 - u_1] - \partial_u \F ( t , u_1 , z) [u_2 - u_1] \ge C \| u_2 - u_1 \|^2_{H^1} \,, \end{equation} indeed, by (a) in Lemma \ref{l.HMWw} the left hand side reads \begin{align*}
\int_{\Omega} \big( \partial_{\strain}W(z,\strain(u_2 + g(t) )) - \partial_{\strain} W(z,\strain(u_1 + g(t) )) \big) {\,:\,}\strain(u_2 - u_1)\,\mathrm{d} x
& \ge C \| \strain (u_2 + g(t) ) - \strain (u_1 + g(t)) \|^2_{L^2} \\
& = C \| \strain (u_2 - u_1) \|^2_{L^2} \ge C' \| u_2 - u_1 \|^2_{H^1} , \end{align*} where we used Korn inequality for the last estimate. In particular, the elastic energy $\E (t , \cdot , z)$ is strongly convex.} \end{remark}
\begin{lemma} \label{l.lscFE} Let $(t_m , u_m , z_m) \in [0,T] \times \U \times \Z$. If $t_m \to t$, $u_m \rightharpoonup u$ in $H^1(\Omega , \mathbb{R}^2)$, and $z_m \weakto z$ in $H^1(\Omega)$ then \begin{equation}\label{e.lscFE}
\E ( t , u ,z) \le \liminf_{m \to \infty} \E ( t_m , u_m ,z_m) , \qquad
\F ( t , u ,z) \le \liminf_{m \to \infty} \F ( t_m , u_m ,z_m) . \end{equation} \end{lemma}
\begin{proof}
Recalling definition~\eqref{e.enelden}, the elastic energy density~$W(z,\cdot)$ is convex in~$\mathbb{M}^{2}$ for every~$z \in \R$. Hence, we are in a position to apply~\cite[Theorem~7.5]{Fonseca2007} in order to deduce the first inequality in~\eqref{e.lscFE}. The second inequality follows immediately since the dissipation pseudo-potential~$\mathcal{D}$ is lower semicontinuous w.r.t.~weak convergence in~$H^{1}(\Omega)$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\subsection{Higher integrability and continuity of the displacement field} \label{prop:energy}
We now establish uniform continuous-dependence estimates for the minimizers of the functional~$\F(t,\cdot,z)$, which follow from~\cite[Theorem~1.1]{HerzogMeyerWachsmuth_JMAA11}.
In the following, for every $\beta \in(1,+\infty)$ we denote \begin{displaymath} W^{1,\beta}_{D}(\Omega;\R^{2})\coloneq\{u\in W^{1,\beta}(\Omega;\R^{2}):\,u=0\text{ on~$\partial_{D}\Omega$}\} \end{displaymath} and let $W^{-1,\beta'}_{D}(\Omega;\R^{2})$ be its dual. Furthermore, given $z\in\Z$ and $g\in W^{1,\beta}(\Omega;\R^{2})$, we define the operator $A_{z,g}\colon W^{1,\beta}_{D}(\Omega;\R^{2})\to W^{-1,\beta}_{D}(\Omega;\R^{2})$ as
\begin{equation}\label{e.53} \langle A_{z,g}(u),\varphi\rangle\coloneq \int_{\Omega}\partial_{\strain}W(z,\strain(u+g)){\,: \,}\strain(\varphi)\,\mathrm{d} x, \text{ for $\varphi\in W^{1,\beta'}_{D}(\Omega;\R^{2})$.} \end{equation} With this notation, if $\xi \in W_D^{-1,\beta}(\Omega;\R^2)$ then $u = A_{z,g}^{-1}(\xi)$ if and only if $ u \in W^{1,\beta}_D(\Omega;\R^{2})$ is the solution of the variational problem $$
\int_{\Omega}\partial_{\strain}W(z,\strain(u+g)){\,: \,}\strain(\varphi)\,\mathrm{d} x = \langle \xi , \varphi \rangle ,
\text{ for every $\varphi \in W^{1,\beta'}_D(\Omega;\R^{2}) $.} $$
\begin{lemma} \label{l.HMWTh}
Let us fix $p>2$ and $M>0$. Then, there exists $\tilde{p}\in(2,p)$ such that the operator $A_{z,g} \colon W^{1,\beta}_{D}(\Omega;\R^{2})\to W^{-1,\beta}_{D}(\Omega;\R^{2})$ is invertible for every $\beta\in[2,\tilde{p}]$, every $g\in W^{1,p}(\Omega;\R^{2})$, and every $z\in\Z$ with $\|z\|_{\infty}\leq M$. In particular, there exist two constants $C_{1},C_{2}>0$ (independent of~$g$, $z$, and $\beta \in [2,\tilde{p}]$) such that \begin{equation}\label{e.54}
\|A_{z,g}^{-1}(\xi)\|_{W^{1,\beta}}\leq C_{1}(\|\xi\|_{W_D^{-1,\beta}} + \| g \|_{W^{1,\beta}}) \quad\text{and}\quad \|A^{-1}_{z,g}(\xi_{1}) - A^{-1}_{z,g}(\xi_{2})\|_{W^{1,\beta}}\leq C_{2}\|\xi_{1}-\xi_{2}\|_{W_D^{-1,\beta}} \end{equation} for every $\xi,\xi_{1},\xi_{2}\in W^{-1,\beta}_{D}(\Omega;\R^{2})$. \end{lemma}
\begin{proof} The inequalities~\eqref{e.54} follow from a direct application of~\cite[Theorem~1.1 and Remark~1.3]{HerzogMeyerWachsmuth_JMAA11}, whose hypotheses are satisfied in view of Lemma~\ref{l.HMWw}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
By a direct application of Lemma~\ref{l.HMWTh}, for $M=1$, we deduce the next corollary.
\begin{corollary}\label{c.3} Let $g\in W^{1,q}([0,T];W^{1,p}(\Omega;\R^{2}))$ for $q\in(1,+\infty)$ and $p\in(2,+\infty)$. Let $\tilde{p} \in (2,p)$ be as in Lemma \ref{l.HMWTh}.
Then, there exists a positive constant~$C_{1}$ such that for every $\beta \in[2,\tilde{p}]$, $t\in [0,T]$, and $z \in \Z$ with $0 \le z \le 1$ it holds \begin{equation}\label{e.pbound}
\| u \|_{W^{1,\beta}}\leq C_{1} \| g(t)\|_{W^{1,p}}\,, \end{equation} where $u :=\argmin\,\{\F(t,w,z):\,w\in\U\}$.
Moreover, there exists $\nu\in(2,+\infty)$ and a positive constant $C_2$ such that for every $\beta\in[2,\tilde{p})$, $t_{1},t_{2}\in[0,T]$, and $z_{1},z_{2}\in\Z$ with $0 \le z_i \le 1$ (for $i=1,2$) it holds \begin{equation}\label{e.25}
\|u_{1}-u_{2}\|_{W^{1,\beta}}\leq C_{2} (\|g(t_{1})-g(t_{2})\|_{W^{1,p}}+\|z_{1}-z_{2}\|_{L^\nu})\,, \end{equation} where $u_{i}=\argmin\,\{\F(t_i,w,z_i):\,w\in\U\}$ (for $i=1,2$) and $\frac{1}{\nu} = \frac{1}{\beta} - \frac{1}{\tilde{p}}$.
\end{corollary}
\begin{proof} Inequality~\eqref{e.pbound} is a direct consequence of Lemma~\ref{l.HMWTh}. Indeed, since $1 \le \beta' \le 2$, the Euler-Lagrange equation \begin{displaymath} \int_{\Omega}\partial_{\strain}W(z,\strain(u +g(t))){\,:\,}\strain(\varphi)\,\mathrm{d} x=0\qquad\text{for every $\varphi\in\U = W^{1,2}_D(\Omega ; \R^2) \supset W^{1,\beta'}_D ( \Omega ; \R^2)$} \end{displaymath} gives $u = A^{-1}_{z,g(t)}(0)$.
Applying Lemma~\ref{l.HMWTh} we deduce that $u\in W^{1,\beta}_{D}(\Omega;\R^{2})$ for every $\beta \in [2, \tilde{p})$ and~\eqref{e.pbound} is satisfied.
Let us now show~\eqref{e.25}. Using the Euler-Lagrange equation for $u_2$, we get \begin{align*} \int_{\Omega} \partial_{\strain} & W(z_{1},\strain(u_{2}+g(t_{1}))){\,:\,}\strain(\varphi)\,\mathrm{d} x = \\
&=\int_{\Omega} h(z_{1}) \big( \mu\strain_{d}(u_{2}+g(t_{1})){\,:\,}\strain_{d}(\varphi)+\kappa\strain_{v}^{+}(u_{2}+g(t_{1})){\,:\,}\strain_{v}(\varphi) \big) - \kappa \strain_{v}^{-}(u_{2}+g(t_{1})){\,:\,}\strain_{v}(\varphi)\,\mathrm{d} x \\ & \quad - \int_{\Omega} h(z_{2}) \big( \mu\strain_{d}(u_{2}+g(t_{2})){\,:\,}\strain_{d}(\varphi)+\kappa\strain_{v}^{+}(u_{2}+g(t_{2})){\,:\,}\strain_{v}(\varphi) \big) - \kappa \strain_{v}^{-}(u_{2}+g(t_{2})){\,:\,}\strain_{v}(\varphi)\,\mathrm{d} x \\
&=\int_{\Omega}(h(z_{1})-h(z_{2}))(\mu\strain_{d}(u_{2}+g(t_{1})){\,:\,}\strain_{d}(\varphi)+\kappa\strain_{v}^{+}(u_{2}+g(t_{1})){\,:\,}\strain_{v}(\varphi))\,\mathrm{d} x \nonumber\\ &\quad +\mu\int_{\Omega}h(z_{2})\strain_{d}(g(t_{1})-g(t_{2})){\,:\,}\strain_{d}(\varphi)\,\mathrm{d} x\\ &\quad + \kappa\int_{\Omega} h(z_{2}) ( \strain_{v}^{+} ( u_{2} + g(t_{1}) ) - \strain_{v}^{+} ( u_{2} + g(t_{2})) ) {\,:\,} \strain_{v} ( \varphi ) \, \mathrm{d} x \nonumber \\ &\quad +\kappa \int_{\Omega} \big(\strain_{v}^{-}(u_{2}+g(t_{2}))-\strain_{v}^{-}(u_{2}+g(t_{1}))\big){\,:\,}\strain_{v}(\varphi)\,\mathrm{d} x =:\langle \xi,\varphi\rangle\,, \nonumber \end{align*} where $\xi\in L^{\beta}(\Omega;\mathbb{M}^{2})$ for every $\beta\in[2,\tilde{p})$. Therefore, $u_2 = A^{-1}_{z_1, g(t_1)} ( \xi)$ while $u_1 = A^{-1}_{z_1, g(t_1)} (0)$; applying the second estimate of Lemma~\ref{l.HMWTh}, we get that there exists a positive constant~$C_{2}$ (independent of~$z_{i}$, $t_{i}$, and $\beta \in [2, \tilde{p})$) such that \begin{equation} \label{e.56}
\|u_{1}-u_{2}\|_{W^{1,\beta}}\leq C_{2}\|\xi\|_{L^\beta}\,. \end{equation} Let $\tfrac{1}{\beta} = \tfrac{1}{\nu} + \tfrac{1}{\tilde{p}}$, then by H\"older inequality we have \begin{equation}\label{e.57}
\|\xi\|_{L^\beta}\leq \|h(z_{1})-h(z_{2})\|_{L^\nu}\|u_{2}+g(t_{1})\|_{W^{1,\tilde{p}}}+ C(1 + \|h\|_{L^\infty(0,1)} ) \|g(t_{1})-g(t_{2})\|_{W^{1,\beta}}\,. \end{equation}
Since $0 \le z_{i} \le 1$ and $h\in C^{1}(\R)$, we have that $\| h(z_{1}) - h(z_{2})\|_{L^\nu}\leq C\|z_{1}-z_{2}\|_{L^\nu}$ for some positive constant $C$. Combining~\eqref{e.56} and~\eqref{e.57} we obtain~\eqref{e.25}, and the proof is concluded.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\subsection{Continuous dependence of the phase field} \label{prop:phase}
\begin{proposition}\label{p.5} Let $\tilde{p}\in(2,+\infty)$ be as in Lemma~\ref{l.HMWTh}. Let $t_{1}, t_{2} \in[0,T]$, $u_{1},u_{2}\in W^{1,\tilde{p}}(\Omega;\R^{2})$, and $z_{0},z_{1},z_{2}\in H^{1}(\Omega;[0,1])$ be such that \begin{equation}\label{e.26} z_{i}=\argmin\,\{\F(t_{i} ,u_{i},z):\,z\in \Z ,\,z\leq z_{i-1}\}\qquad\text{for $i=1,2$}\,. \end{equation} Then there exists a positive constant~$C$, independent of~$t_{i}$,~$u_{i}$, and~$z_{i}$, such that \begin{equation}\label{e.27}
\|z_{1}-z_{2}\|_{H^{1}}\leq C(\|u_{1}+ g(t_{1}) \|_{W^{1,\tilde{p}}} + \| u_{2} + g(t_{2}) \|_{W^{1,\tilde{p}}})( \| u_{1} - u_{2} \|_{H^{1}} + \| g(t_{1}) - g(t_{2}) \|_{H^{1}}) \,. \end{equation}
\end{proposition}
\begin{proof} We adapt the proof of \cite[Lemma A.2]{KneesNegri_M3AS17}. By~\eqref{e.26}, for every~$\varphi\in H^{1}(\Omega)$, $\varphi\leq0$, we have \begin{equation}\label{e.28} \partial_{z}\F(t_{i}, u_{i},z_{i})[\varphi]\geq0\qquad\text{for~$i=1,2$}\,. \end{equation} Moreover, for $\varphi=z_{2}-z_{1}$ we get \begin{equation}\label{e.29} \partial_{z}\F(t_{2},u_{2},z_{2})[z_{2}-z_{1}]=0\,. \end{equation} Therefore, combining~\eqref{e.28} and~\eqref{e.29}, we obtain \begin{equation*}\label{e.30} \big(\partial_{z}\F(t_{2}, u_{2}, z_{2}) - \partial_{z} \F ( t_{1}, u_{1}, z_{1} ) \big ) [ z_{2} - z_{1} ] \leq 0 \,. \end{equation*} Adding and subtracting the term $\partial_{z}\F(t_{1},u_{1},z_{2})[z_{2}-z_{1}]$ to the previous inequality, we get \begin{equation}\label{e.31} \big(\partial_{z}\F(t_{1},u_{1},z_{2})-\partial_{z}\F(t_{1},u_{1},z_{1})\big)[z_{2}-z_{1}]\leq\big(\partial_{z}\F(t_{1},u_{1},z_{2})-\partial_{z}\F(t_{2},u_{2},z_{2})\big)[z_{2}-z_{1}]\,. \end{equation} The left-hand side of~\eqref{e.31} reads as \begin{align*}
\| \nabla{z_{1}} - \nabla{z_{2}}\|_{L^2}^{2} & + \int_{\Omega}(h'(z_{2})-h'(z_{1}))(z_{2}-z_{1})\big(\mu|\strain_{d}(u_{1} + g(t_{1}) )|^{2}+\kappa|\strain_{v}^{+}(u_{1}+g(t_{1}))|^{2}\big)\, \mathrm{d} x \\ & + \int_\Omega ( f'(z_2) - f'(z_1) ) ( z_2 - z_1) \, \mathrm{d} x \,. \end{align*} Being $h$ convex, the second term in the previous expression is nonnegative. By the strong convexity of $f$ we have $$
( f' (z_2) - f'(z_1) ) ( z_2 - z_1) \ge C | z_2 - z_1 |^2 . $$ Thus, we can continue in~\eqref{e.31} with \begin{equation*} \begin{split}
C' \|z_{1}-z_{2}\|_{H^{1}}^{2} & \leq \big(\partial_{z}\F(t_{1},u_{1},z_{2})-\partial_{z}\F(t_{1},u_{1},z_{1})\big)[z_{2}-z_{1}] \le \big(\partial_{z}\F(t_{1},u_{1},z_{2})-\partial_{z}\F(t_{2},u_{2},z_{2})\big)[z_{2}-z_{1}] \end{split} \end{equation*} where the right hand side reads \begin{align*} & \big(\partial_{z}\F(t_{1},u_{1},z_{2}) -\partial_{z}\F(t_{2},u_{2},z_{2})\big)[z_{2}-z_{1}] \\
& = \int_{\Omega} h'(z_{2}) (z_{2}-z_{1}) \big(\mu |\strain_{d}(u_{1}+ g(t_{1}))|^2 - \mu |\strain_{d}( u_{2} + g(t_{2}))|^2 + \kappa|\strain_{v}^{+}(u_{1}+ g(t_{1}))|^2 - \kappa |\strain_{v}^{+}(u_{2}+ g(t_{2}))|^2 \big)\, \mathrm{d} x \,. \end{align*} Since $z_{2}\in H^{1}(\Omega;[0,1])$, we have that~$h'(z_{2})$ is bounded. Moreover, \begin{align*}
|\strain_{v}^{+}(u_{1}+ g(t_{1}))| - |\strain_{v}^{+}(u_{2}+ g(t_{2}))| & = ( \text{tr} \, ( \strain (u_{1}+ g(t_{1})) ) )_+ - ( \text{tr} \, ( \strain (u_{2}+ g(t_{2})) ) )_+ \\
& \le | \text{tr} \, ( \strain ( u_1 +g(t_{1})- u_2 - g(t_{2})) ) | \\ & = |\strain_{v} (u_{1} + g(t_{1}) - u_2 - g(t_{2}) )| . \end{align*} Thus, there exists a positive constant $C$ such that \begin{align*}\label{e.33}
\|z_{1}-z_{2}\|_{H^{1}}^{2} \leq & \ C \int_{\Omega} |z_{2}-z_{1}|\big( |\strain_{d}(u_{1}+ g(t_{1}))| + |\strain_{d}(u_{2} + g(t_{2})) | \big) | \strain (u_{1} + g(t_{1}) - u_{2} - g(t_{2}))| \, \mathrm{d} x\\
& + C\int_{\Omega}| z_{2} - z_{1} | \big( | \strain_{v}^{+}(u_{1} + g(t_{1}))| + |\strain_{v}^{+}(u_{2} + g(t_{2})) | \big) | \strain (u_{1} + g(t_{1}) - u_{2} - g(t_{2}))| \, \mathrm{d} x . \end{align*}
By hypothesis, we have $u_{1},u_{2}\in W^{1,\tilde{p}}(\Omega;\R^{2})$. Hence, applying H\"older inequality with $ \tfrac{1}{\alpha} + \tfrac{1}{\tilde{p}} + \tfrac12 = 1$ we get that \begin{equation*}\label{e.34}
\|z_{1}-z_{2}\|_{H^{1}}^{2} \leq C \| z_{2} - z_{1} \|_{L^\alpha} ( \| u_{1} + g(t_{1}) \|_{ W^{ 1, \tilde{p} } } + \| u_{2} + g(t_{2}) \|_{ W^{1,\tilde{p}}} ) \| u_{1} + g(t_{1}) - u_{2} - g(t_{2}) \|_{H^{1}} . \end{equation*}
Inequality~\eqref{e.27} follows by triangle inequality and by Sobolev embedding in dimension~$2$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\begin{proposition}\label{p.6} Let $\tilde{p}\in(2,+\infty)$ be as in Lemma~\ref{l.HMWTh}. Let $( t_{k}, u_k , z_k) \in [0,T] \times \U \times \Z$ with $0 \le z_k \le 1$ and \begin{displaymath} u_{k}=\argmin\,\{\F(t_{k},v,z_{k}):\,v\in\U\} \qquad \text{for every~$k$}\,. \end{displaymath}
If $t_{k}\to t$, $z_{k}\rightharpoonup z$ in~$H^{1}(\Omega)$, and $u\coloneq\argmin\,\{\F(t,v,z):\, v\in\U\}$, then $u_{k}\to u$ in $W^{1,\beta}(\Omega;\R^{2})$ for every $\beta\in[2,\tilde{p})$. \end{proposition}
\begin{proof} In view of the hypotheses of the proposition and of Corollary~\ref{c.3}, we have that the sequence~$u_{k}$ is a Cauchy sequence in~$W^{1,\beta}(\Omega;\R^{2})$ for every $\beta\in[2,\tilde{p})$. We denote by~$\overline{u}$ the limit function. By the strong convergence in~$W^{1,\beta}(\Omega;\R^{2})$, it is easy to see that~$\overline{u}$ is the solution of $\min\,\{\F(t,v,z):\,v\in\U\}$. Hence, by uniqueness of the minimizer we have $\overline{u}=u$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\subsection{Properties of the slopes} \label{prop:slopes}
Now, we can give a convenient characterization of the slopes introduced in Definition~\ref{d.slope}.
\begin{remark}\label{p.1} Let $(t, u, z) \in [0,T] \times \U \times \Z$, then \begin{align}
|\partial_{u}\F|(t,u,z)&=\max \,\{-\partial_{u}\F(t,u,z)[\varphi]:\,\varphi\in \U,\, \|\varphi\|_{H^{1}}\leq 1\}\,,\label{e.slope3}\\[1mm]
|\partial_{z}^{-}\F|(t,u,z)&=\sup\,\{-\partial_{z}\F(t,u,z)[\psi]:\,\psi\in \Z,\,\psi\leq 0,\,\|\psi\|_{L^2}\leq 1\}\label{e.slope4}\,. \end{align} For the proof of \eqref{e.slope3} we refer for instance to \cite[Proposition 1.4.4]{AGS}. For the proof of \eqref{e.slope4} it is sufficient to employ the arguments of~\cite[Lemma~2.3]{A-B-N17} or \cite[Lemma 2.2]{Negri_ACV}. \end{remark}
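The characterization \eqref{e.slope4} has a transparent finite-dimensional counterpart: for a smooth energy with gradient $g$, the supremum of $-\langle g,\psi\rangle$ over $\psi\le 0$, $\|\psi\|_2\le 1$ is attained at $\psi=-g_+/\|g_+\|_2$ and equals $\|g_+\|_2$, i.e.\ only the components along which the energy can decrease under the constraint contribute. The sketch below (with hypothetical numerical values) checks this closed form against direct sampling of feasible directions.

```python
import numpy as np

# Finite-dimensional illustration of the unilateral slope:
#   |d^- F|(z) = sup{ -<grad, psi> : psi <= 0, ||psi||_2 <= 1 } = ||grad_+||_2,
# attained at psi = -grad_+/||grad_+||_2.
def unilateral_slope(grad):
    g_plus = np.maximum(grad, 0.0)   # only positive components help, since psi <= 0
    return np.linalg.norm(g_plus)

def slope_by_sampling(grad, n_samples=5000, seed=1):
    # brute-force lower bound: sample random feasible directions psi
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_samples):
        psi = -np.abs(rng.normal(size=grad.shape))    # feasible: psi <= 0
        psi = psi / max(np.linalg.norm(psi), 1.0)     # feasible: ||psi||_2 <= 1
        best = max(best, float(-grad @ psi))
    return best
```

The sampled value can only approach the closed form from below, which mirrors the fact that \eqref{e.slope4} is a supremum over the constrained test functions $\psi$.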
The next two lemmata are devoted to the lower semicontinuity and continuity of the slopes.
\begin{lemma}\label{l.2} Let $(t_k, u_k, z_k) \in [0,T] \times \U \times \Z$ be such that $t_{k} \to t$ in $[0,T]$, $u_{k}\rightharpoonup u$ weakly in~$H^1( \Omega , \mathbb{R}^2)$, and $z_{k}\rightharpoonup z$ weakly in~$H^{1}(\Omega)$ with $0\leq z_{k}\leq 1$, for every~$k$. Then \begin{displaymath}
|\partial_{z}^{-}\F|(t,u,z)\leq\liminf_{k \to \infty}|\partial_{z}^{-}\F|(t_{k},u_{k},z_{k})\,. \end{displaymath} \end{lemma}
\begin{proof}
By Remark~\ref{p.1}, for every~$\psi\in\Z$ such that $\psi\leq0$ and $\|\psi\|_{L^2}\leq1$ we have that \begin{equation}\label{e.10} \begin{split}
|\partial_{z}^{-}\F|(t_{k},u_{k},z_{k})\geq-\int_{\Omega} & h' (z_{k}) \psi \Psi_+ \big( \strain(u_{k} + g(t_{k})) \big) \,\mathrm{d} x -\int_{\Omega} \nabla{z_{k}}{\,\cdot\,}\nabla{\psi} + f'( z_{k} ) \psi \,\mathrm{d} x\,. \end{split} \end{equation} Since~$h\in C^{1,1}_{\mathrm{loc}}(\R)$ is non-decreasing in~$[0,+\infty)$ and $z_{k} \to z$ in~$L^{r}( \Omega )$ for every $r<+ \infty$ with $0\leq z_{k} \leq 1$, we deduce that $-h'(z_{k})\psi\geq0$ for every~$k$ and that~$h'(z_{k})\psi\to h'(z)\psi$ in~$L^{r}(\Omega)$ for every $r<+\infty$. In a similar way $f'(z_k) \psi \to f'(z) \psi$ in $L^1(\Omega)$. Hence, passing to the liminf in~\eqref{e.10} as $k\to+\infty$ and applying for instance~\cite[Theorem~7.5]{Fonseca2007} we deduce that \begin{displaymath}
\liminf_{k \to \infty}|\partial_{z}^{-}\F|(t_{k},u_{k},z_{k}) \geq-\int_{\Omega} h' (z) \psi \Psi_+ \big( \strain(u + g(t)) \big) \,\mathrm{d} x -\int_{\Omega} \nabla{z}{\,\cdot\,}\nabla{\psi} + f'(z) \psi \,\mathrm{d} x = -\partial_{z} \F (t, u, z)[\psi] \,. \end{displaymath} We conclude by taking the supremum over~$\psi$ in the previous inequality.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\begin{lemma}\label{l.3} Let $(t_k, u_k, z_k) \in [0,T] \times \U \times \Z$ be such that $t_{k}\to t$ in~$[0,T]$, $u_{k} \to u$ in~$H^1( \Omega , \mathbb{R}^2)$, and $z_{k}\rightharpoonup z$ in~$H^1(\Omega)$ with $0\leq z_{k}\leq 1$, for every~$k$. Then \begin{displaymath}
|\partial_{u}\F|(t,u,z)=\lim_{k \to \infty}\,|\partial_{u}\F|(t_{k},u_{k},z_{k})\,. \end{displaymath} \end{lemma}
\begin{proof}
By Remark~\ref{p.1}, for every $\varphi\in\U$ with~$\|\varphi\|_{H^1}\leq 1$ we have that \begin{equation}\label{e.21} \begin{split}
|\partial_{u}\F|(t_{k},u_{k},z_{k}) \geq -\int_{\Omega} \partial_{\strain} W( z_{k}, \strain (u_{k} + g(t_{k}) ){\,:\,} \strain (\varphi) \,\mathrm{d} x .
\end{split} \end{equation} Remember that $\partial_{\strain} W(z,\strain ) = 2 h(z) \big( \mu \strain_{d} + \kappa \strain_{v}^{+} \big) - 2\kappa \strain_{v}^{-}$. Since $u_{k}\to u$ in~$H^1(\Omega, \R^2)$ and since $g$ belongs to $W^{1,q}([0,T];W^{1,p}(\Omega;\R^{2}))$, we have that $\strain_{d}(u_{k}+g(t_{k}))\to \strain_{d}(u+g(t))$ and $\strain^{\pm}_{v}(u_{k}+g(t_{k}))\to \strain_{v}^{\pm}(u+g(t))$ in $L^{2}(\Omega;\mathbb{M}^{2})$. Being $0\leq z_{k}\leq 1$ and $z_{k}\to z$ in~$L^{2}(\Omega)$, we have that $h(z_{k})(\mu\strain_{d}(u_{k}+g(t_{k}))+\kappa\strain^{+}_{v}(u_{k}+g(t_{k})))$ converges to $h(z)(\mu\strain_{d}(u+g(t))+\kappa\strain_{v}^{+}(u+g(t)))$ in~$L^{2}(\Omega;\mathbb{M}^{2})$. Therefore, $ \partial_{\strain} W( z_{k}, \strain (u_{k} + g(t_{k}) )$ converges to $ \partial_{\strain} W( z, \strain (u + g(t) )$ in~$L^{2}(\Omega;\mathbb{M}^{2})$ and, passing to the liminf in~\eqref{e.21}, we obtain \begin{displaymath} \begin{split}
\liminf_{k\to\infty}\,|\partial_{u}\F|(t_{k},u_{k},z_{k})\geq - \int_{\Omega} & \ \partial_{\strain} W( z, \strain (u + g(t))) {\,:\,} \strain(\varphi) \, \mathrm{d} x = - \partial_{u}\F(t,u,z)[\varphi]\,. \end{split} \end{displaymath}
Passing to the supremum over $\varphi\in\U$ with $\|\varphi\|_{H^1}\leq 1$, we deduce that \begin{equation}\label{e.22}
|\partial_{u}\F|(t,u,z)\leq \liminf_{k\to\infty}\,|\partial_{u}\F|(t_{k},u_{k},z_{k})\,. \end{equation}
As for the opposite inequality, for every~$k$ let $\varphi_{k}\in\U$ with $\|\varphi_{k}\|_{H^1}\leq 1$ be such that $|\partial_{u}\F|(t_{k},u_{k},z_{k})=-\partial_{u}\F(t_{k},u_{k},z_{k})[\varphi_{k}]$. Up to a subsequence, we have that $\varphi_{k}\rightharpoonup \varphi$ weakly in~$H^1(\Omega ; \R^2)$ for some $\varphi\in\U$ with $\|\varphi\|_{H^1}\leq 1$. Hence, by the strong convergence of $\partial_{\strain} W( z_{k}, \strain (u_{k} + g(t_{k})))$, we get that \begin{equation}\label{e.23} \begin{split}
\limsup_{k\to\infty} \, | \partial_{u} \F | ( t_{k}, u_{k}, z_{k} ) & = \limsup_{k\to\infty} \, - \partial_{u} \F ( t_{k}, u_{k}, z_{k} ) [ \varphi_{k} ] \\ &= \limsup_{k\to\infty} \, - \int_{\Omega}\partial_{\strain} W( z_{k}, \strain (u_{k} + g(t_{k}) ) ) {\,:\,}\strain (\varphi_{k} ) \, \mathrm{d} x \\
& = - \int_{\Omega} \partial_{\strain} W( z, \strain (u + g(t) ) ) {\,:\,} \strain (\varphi) \, \mathrm{d} x = - \partial_{u} \F ( t, u, z ) [\varphi] \leq | \partial_{u} \F | ( t, u, z ) \,. \end{split} \end{equation} This concludes the proof of the lemma.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\section{Auxiliary gradient-flows} \label{s.gf}
In this section we present some auxiliary results for two gradient flows which will be employed in the interpolation of the discrete evolutions obtained by alternate minimization.
\subsection[An $H^1$-gradient flow for the displacement field]{An \boldmath{$H^1$}-gradient flow for the displacement field}
Given $t\in[0,T]$ and $z\in \Z$, we start by recalling some results about the system $$ \left\{\begin{array}{ll} u'(l)=-\nabla_{u}\F(t,u(l),z)=-\nabla_{u}\E(t,u(l),z)\,,\\ u(0)=u^{0}\,, \end{array}\right. $$
where $u^0 \in \U$ and $\nabla_u \F ( t, u , z)$ denotes the $H^1$-element representing, by Riesz Theorem, the functional $\partial_u \F ( t, u ,z)$, i.e., $\partial_u \F ( t,u,z) [\phi] = \langle \nabla_u \F ( t,u,z) , \phi \rangle$ for every $\phi\in \U$. Note that $\| \nabla_u \F ( t,u,z) \|_{H^1} = | \partial_u \F | ( t,u,z)$.
\begin{theorem}\label{t.2} Let $(t, u^{0}, z)\in [0,T] \times \U \times \Z$. Then, there exists a unique evolution $u \colon [0,+\infty)\to \U$ such that the following facts hold: \begin{itemize} \item[$(a)$] $u\in W^{1,\infty}([0,+\infty); H^{1}(\Omega;\R^{2}))$ and $u'\in L^{2}([0,+\infty); H^{1}(\Omega;\R^{2}))$;
\item[$(b)$] $u(0)=u^0$ and for a.e.~$l \in[0,+\infty)$ we have $u'(l)=-\nabla_{u}\F(t,u(l),z)$;
\item[$(c)$] for every $\ell \in[0,+\infty)$ \begin{equation}\label{e.38}
\F(t,u(\ell),z)=\F(t,u^0,z)-\tfrac{1}{2}\int_{0}^{\ell}|\partial_{u}\F|^{2}(t,u(l),z)+\|u'(l)\|^{2}_{H^{1}}\,\mathrm{d} l\,; \end{equation}
\item[$(d)$] $u(l)$ converges strongly to $\overline{u}$ in~$H^{1}(\Omega;\R^{2})$ as $l \to+\infty$, where $\overline{u}=\argmin\,\{\F(t,u,z):\,u\in\U\}$. Moreover, \begin{align}
& \F(t,\overline{u},z) =\F(t,u^0,z)-\tfrac{1}{2}\int_{0}^{+\infty}|\partial_{u}\F|^{2}(t,u(l),z)+\|u'(l)\|^{2}_{H^{1}}\,\mathrm{d} l\,, \label{e.41}\\[1mm]
& \|u(l)-\overline{u}\|_{H^{1}} \leq e^{-cl}\|u^0-\overline{u}\|_{H^{1}}\,,\label{e.40} \end{align} where $c$ depends only on the constant appearing in (a) of Lemma \ref{l.HMWw}. \end{itemize} \end{theorem}
\begin{proof} We invoke \cite[Theorem 3.1, Lemma 3.3, and Theorem 3.9]{Brezis_73} for the operator $\mathcal{A}:= \nabla_{u} \E (t, \cdot, z )\colon \U \to \U$. Indeed, $\mathcal{A}$ is maximal monotone, by convexity and continuity of~$\E (t,\cdot, z)$. Moreover, by~$(a)$ of Lemma~\ref{l.HMWw} and by Korn inequality, the operator~$\mathcal{A}$ is strongly monotone, that is, \begin{equation}\label{e.40.1}
\langle \mathcal{A} u - \mathcal{A} v , u - v \rangle \geq c \| u - v \|_{H^{1}}^{2} \,. \end{equation} Therefore, we are in a position to apply~\cite[Theorem 3.1 and Theorem 3.9]{Brezis_73}, which together prove~$(b)$, the bound $u'\in L^{\infty}([0,+\infty); H^{1}(\Omega;\R^{2}))$, the convergence of~$u(l)$ to the limit~$\overline{u}=\argmin\,\{\F(t,u,z):\,u\in\U\}$ in~$H^{1}(\Omega;\R^{2})$ as $l\to+\infty$, and the exponential decay~\eqref{e.40}, where the constant~$c$ coincides with the ellipticity constant of~\eqref{e.40.1}. In view of~\cite[Lemma~3.3]{Brezis_73} we get~$(c)$ and the uniform boundedness of~$u(\cdot)$ in~$H^{1}(\Omega;\R^{2})$. Passing to the limit in~\eqref{e.38} as~$\ell\to+\infty$ and applying the monotone convergence theorem, we deduce~\eqref{e.41} and that $u'\in L^{2}([0,+\infty); H^{1}(\Omega;\R^{2}))$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
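A discrete counterpart of this behavior can be observed on a strongly convex quadratic model energy (a hypothetical stand-in for $\F(t,\cdot,z)$, with made-up eigenvalues and initial datum): an explicit Euler discretization of the gradient flow exhibits both the exponential decay of \eqref{e.40}, with $c$ the smallest eigenvalue, and a total arc length controlled by the initial distance to the minimizer.

```python
import numpy as np

# Explicit Euler sketch of the gradient flow u' = -grad F(u) for the
# hypothetical quadratic model F(u) = 0.5*(u - ubar)^T A (u - ubar),
# A symmetric positive definite (stand-in for the elastic energy).
def gradient_flow(A, u0, ubar, dt=1e-3, n_steps=20_000):
    u = u0.copy()
    traj = [u.copy()]
    for _ in range(n_steps):          # Euler step of u' = -A (u - ubar)
        u = u - dt * A @ (u - ubar)
        traj.append(u.copy())
    return np.array(traj)

A = np.diag([1.0, 2.0, 5.0])          # smallest eigenvalue gives c = 1
u0 = np.array([1.0, -2.0, 0.5])
ubar = np.zeros(3)
traj = gradient_flow(A, u0, ubar)
```

Since $1 - \mathrm{d}t\,\lambda \le e^{-\mathrm{d}t\,\lambda}$ for each eigenvalue $\lambda$, the discrete trajectory inherits the bound $\|u_n-\overline u\| \le e^{-c\,n\,\mathrm{d}t}\,\|u_0-\overline u\|$, and along each eigendirection the motion is monotone, so the total length is finite.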
Moreover, by a \L ojasiewicz \cite{Lojasiewicz_AIF93} argument we have the following result on the length of the flow.
\begin{theorem}\label{t.2,5}
Let~$u$ be the solution of the above gradient flow. Then, either $u(l) \equiv u^0$ or $\| u' (l) \|_{H^1} \neq 0$ for a.e.~$l \in [0, +\infty)$. Moreover, there exists a constant~$\overline{C}$ (independent of $t$, $u^0$, and $z$) such that \begin{equation}\label{e.lengthu}
\int_{0}^{+\infty}\|u'(l)\|_{H^{1}}\,\mathrm{d} l\leq \overline{C} \|u^0-\overline{u}\|_{H^{1}}\,. \end{equation} \end{theorem}
\begin{proof}
If $u^{0}=\overline{u}$, then $u(l)\equiv \overline{u}$, since $\overline{u}=\argmin\,\{\F(t,u,z):\, u\in \U\}$. Let us therefore assume that $u^{0}\neq \overline{u}$. In what follows we denote by~$C$ a generic positive constant which may change from line to line. Let $s(l) = \frac12 \| u(l) - \overline{u} \|^2_{H^1}$. Then, since $\nabla_u \F ( t, \overline{u} , z) = 0$, by $(b)$ in Lemma \ref{l.HMWw} $$
s'(l) = \langle u(l) - \overline{u} , u' (l) \rangle = \langle u(l) - \overline{u} , - \nabla_u \F( t , u(l) , z) + \nabla_u \F ( t , \overline{u} , z) \rangle \ge - C \| u(l) - \overline{u} \|^2_{H^1} = - C' s(l) \,. $$ It follows that $s(l) \ge s(0) e^{-C'l} >0$ and then $u(l) \neq \overline{u}$ for every $l \in [0,+\infty)$. As a consequence $u '(l) = - \nabla_u \F (t, u(l),z) \neq 0$ for a.e.~$l \in [0,+\infty)$.
We now prove the bound \eqref{e.lengthu}. By convexity, for every $l\in [0,+\infty)$ we have \begin{equation}\label{e.18.09}
0 \leq \F (t, u(l), z) - \F (t, \overline{u}, z) \leq \langle \nabla_{u} \F (t, u(l), z) , u(l) - \overline{u} \rangle \leq \| \nabla_{u} \F (t, u(l), z)\|_{H^1} \| u(l) - \overline{u} \|_{H^1} \,. \end{equation} By $(a)$ of Lemma \ref{l.HMWw} and by Korn inequality, we get \begin{displaymath} \begin{split}
C \| u (l) - \overline{u}\|_{H^{1}}^{2}&\leq \left\langle \nabla_{u}\F ( t, \overline{u}, z ) - \nabla_{u} \F ( t, u(l), z) , \overline{u} - u(l) \right\rangle = - \left\langle \nabla_{u} \F ( t, u(l), z) , \overline{u} - u(l) \right\rangle \\
&\leq \| \nabla_{u} \F ( t, u(l), z) \|_{H^1} \| u(l) - \overline{u} \|_{H^1} \,, \end{split} \end{displaymath} which implies \begin{equation}\label{e.18.07}
C \| u(l) - \overline{u} \|_{H^1} \leq \| \nabla_{u} \F ( t, u(l), z) \|_{H^1}\,. \end{equation}
Combining \eqref{e.18.09} and \eqref{e.18.07} we deduce that \begin{equation}\label{e.18.10}
C (\F (t, u(l), z) - \F (t, \overline{u}, z))^{\frac{1}{2}} \leq \| \nabla_u \F (t, u(l), z) \|_{H^1} \,. \end{equation} We now apply a \L ojasiewicz argument: in view of $(b)$ of Theorem \ref{t.2} and of the monotonicity of $l\mapsto \F(t, u(l), z)$, for a.e. $l\in [0,+\infty)$ we have \begin{equation}\label{e.18.36} \begin{split} -2\frac{\mathrm{d}}{\mathrm{d} l}\big( \F ( t, u(l), z ) - \F ( t, \overline{u}, z ) \big)^{\frac12} & = - \big( \F (t, u(l), z ) - \F ( t,\overline{u}, z ) \big)^{-\frac12} \langle \nabla_{u} \F (t, u(l), z), u'(l) \rangle \\
& = \big( \F (t, u(l), z ) - \F ( t,\overline{u}, z ) \big)^{-\frac12} \| \nabla_{u}\F ( t, u(l) , z )\|_{H^1}^{2} \\
& \geq C \| \nabla_{u}\F ( t, u(l) , z )\|_{H^1} = C \| u'(l) \|_{H^{1}} \,. \end{split} \end{equation} Hence, inequality~\eqref{e.18.36} implies that for every~$\ell\in[0,+\infty)$ \begin{equation}\label{e.18.37}
C\int_{0}^{\ell} \| u'(l) \|_{H^{1}} \, \mathrm{d} l \leq - 2 \int_{0}^{\ell} \frac{\mathrm{d}}{\mathrm{d} l} \big ( \F (t, u(l), z) - \F (t ,\overline{u}, z ) \big)^{\frac12} \, \mathrm{d} l \leq 2 \big( \F (t, u^{0}, z ) - \F ( t, \overline{u}, z ) \big)^{\frac12} \,. \end{equation} In the limit as~$\ell\to+\infty$ in~\eqref{e.18.37} we obtain by monotone convergence theorem \begin{equation}\label{e.18.38}
C\int_{0}^{+\infty} \| u'(l) \|_{H^{1}} \, \mathrm{d} l \leq 2 \big( \F (t, u^{0}, z ) - \F ( t, \overline{u}, z ) \big)^{\frac{1}{2}} \,. \end{equation} By convexity and minimality of~$\overline{u}$ we have that \begin{align}
\big( \F ( t, u^{0}, z ) - \F ( t, \overline{u}, z ) \big)^{\frac12} & \leq \big( -\left\langle \nabla_{u} \F ( t, u^{0}, z ) , \overline{u} - u^0 \right\rangle \big)^{\frac12} \nonumber \\
& = \big( \left\langle \nabla_{u} \F ( t, \overline{u}, z ) , \overline{u} - u^0 \right\rangle - \left\langle \nabla_{u} \F ( t, u^{0}, z ) , \overline{u} - u^0 \right\rangle \big)^{\frac12} \,,\label{e.18.43} \\
& \leq C \| u^{0} - \overline{u} \|_{H^{1}} \nonumber \,, \end{align} where in the last inequality we applied~$(b)$ of Lemma~\ref{l.HMWw}. Combining~\eqref{e.18.38} and~\eqref{e.18.43} we conclude~\eqref{e.lengthu}. In particular, we notice that all the constants appearing in~\eqref{e.18.07}-\eqref{e.18.43} do not depend on~$t$,~$u^{0}$,~$z$, and $l$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
As a corollary of Theorems~\ref{t.2} and~\ref{t.2,5}, we define a suitable reparametrization of $l\in[0,+\infty)$ which makes the gradient flows computed in Theorem~\ref{t.2} $1$-Lipschitz continuous. This reparametrization will be exploited in Section~\ref{s.prooft1} for the proof of Theorem~\ref{t.1}.
\begin{corollary}\label{c.gfu} Let~$t$,~$u^{0}$, and~$z$ be as in the statement of Theorem~\ref{t.2}. Let~$u$ be the gradient flow computed in Theorem~\ref{t.2} with initial condition~$u^{0}$. Let us assume that $u^{0} \neq \overline{u}:= \argmin\,\{ \F(t, u, z):\, u\in\U \}$, and let us set \begin{displaymath}
L (u) :=\int_{0}^{+\infty} \| u' (l) \|_{H^{1}}\,\mathrm{d} l \,, \qquad \lambda( \ell ):=\int_{0}^{\ell} \| u' (l) \|_{H^{1}}\,\mathrm{d} l \,, \quad \text{for $\ell \in [0, +\infty ]$} . \end{displaymath}
Moreover, let $\rho\colon [0,L(u)]\to [0,+\infty]$ be defined by $\rho:= \lambda^{-1}$. Then, the function $\omega:= u\circ \rho$ belongs to $W^{1,\infty} ([0,L(u)]; H^{1}(\Omega;\R^{2}) )$, $\| \omega' (s) \|_{H^1} = 1$ for a.e.~$s \in [0, L(u)]$, $\omega(0)=u^{0}$, $\omega(L(u)) = \overline{u}$, and \begin{equation}\label{e.enbal}
\F( t, \omega(s), z) = \F( t, u^{0}, z)-\int_{0}^{s} |\partial_{u} \F|(t, \omega( \sigma ), z ) \| \omega' ( \sigma ) \|_{H^1} \, \mathrm{d} \sigma \qquad \text{for every $s \in [0, L(u) ]$}\,. \end{equation} \end{corollary}
\begin{proof}
We notice that the function $\rho \colon [0,L(u)]\to [0,+\infty]$ is well defined since~$\lambda$ is strictly increasing thanks to Theorem~\ref{t.2,5}, and hence invertible. As a consequence, also~$\omega\colon [0,L(u)] \to \U$ is well defined, continuous, with $\omega(0)= u^{0}$ and $\omega(L(u))= \overline{u}$. Moreover, by Theorem~\ref{t.2,5}~$\rho$ is Lipschitz continuous in~$[0,s]$ for every $s< L(u)$. Thanks to Theorem~\ref{t.2}, we have that $\omega \in W^{1,\infty}([0,s] ; H^{1}(\Omega;\R^{2}))$ with $\| \omega' ( \sigma ) \|_{H^{1}} = 1$ for a.e.~$\sigma\in [0, s]$ and every~$s\in[0,L(u))$. Hence, we deduce that $\omega\in W^{1,\infty}([0,L(u)]; H^{1}(\Omega;\R^{2}))$. Furthermore, by~$(b)$-$(d)$ of Theorem~\ref{t.2} we know that \begin{displaymath}
\F(t,u(\ell),z)=\F(t,u^0,z) - \int_{0}^{\ell} |\partial_{u}\F| ( t, u(l), z ) \| u' (l) \|_{H^{1}}\,\mathrm{d} l \qquad\text{for every $\ell \in [0,+\infty]$}\,. \end{displaymath} By the change of variable $l = \rho(\sigma)$ for~$\sigma \in [0,L(u)]$ we deduce~\eqref{e.enbal}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
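Aside from the proof, the reparametrization $\omega = u\circ\rho$ of Corollary~\ref{c.gfu} is simply the arc-length (unit-speed) reparametrization of the trajectory. As a purely illustrative sketch, not part of the paper's framework, it can be mimicked for a finite-dimensional sampled trajectory, with the Euclidean norm in place of the $H^{1}$-norm; the function name and the test curve below are chosen for the example only:

```python
import numpy as np

def arclength_reparametrize(samples, n_out=200):
    """Reparametrize a sampled curve by arc length (unit speed),
    mimicking omega = u o rho with rho = lambda^{-1}."""
    seg = np.linalg.norm(np.diff(samples, axis=0), axis=1)  # chord lengths
    lam = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative length lambda(l)
    s = np.linspace(0.0, lam[-1], n_out)                    # equispaced arc-length grid
    # invert lambda by piecewise-linear interpolation, coordinate by coordinate
    return np.column_stack([np.interp(s, lam, samples[:, j])
                            for j in range(samples.shape[1])])

# illustrative trajectory: a parabolic arc, sampled uniformly in the parameter
l = np.linspace(0.0, 2.0, 2000)
u = np.column_stack([l, l**2])
w = arclength_reparametrize(u)
```

The cumulative sums play the role of $\lambda$, and interpolation at an equally spaced arc-length grid realizes $\rho = \lambda^{-1}$: consecutive points of `w` are (approximately) equally spaced, the discrete analogue of $\|\omega'(s)\|=1$.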
\begin{remark}\label{r.3}
In the notation of Corollary~\ref{c.gfu}, we notice that, as a consequence of Theorem~\ref{t.2,5}, $L(u)\leq \overline{C} \|u^{0}-\overline{u}\|_{H^{1}}$. \end{remark}
We now prove a continuity property of the gradient flows w.r.t.~the data.
\begin{proposition}\label{p.continuitygradflow} Let $(t_{m}, u^{0}_{m}, z_{m}) \in [0,T] \times \U \times \Z$ be such that $t_{m} \to t_{\infty}$ in~$[0,T]$, $u^{0}_{m} \to u^{0}_{\infty}$ in~$H^1(\Omega ; \R^2)$, and $z_{m}\rightharpoonup z_{\infty}$ weakly in~$H^{1}(\Omega)$. Let $u_{m}, u_{\infty}\colon [0,+\infty)\to \U$ be the gradient flows computed in Theorem~\ref{t.2} with initial data $u_{m}(0)=u^{0}_{m}$ and $u_{\infty}(0)= u_{\infty}^{0}$ and parameters~$(t_{m},z_{m})$ and~$(t_{\infty},z_{\infty})$, respectively.
Then,~$u_{m}$ converges strongly to~$u_{\infty}$ uniformly in $[0,+\infty)$, i.e.,~in $C( [0,+\infty); H^{1}(\Omega;\R^{2}))$. Moreover, if~$l_{m}\to +\infty$ as $m\to\infty$ and $\overline{u}_{\infty}:= \argmin\, \{ \F(t_{\infty}, u, z_{\infty} ):\, u\in\U \}$ then $u_{m}(l_{m}) \to \overline{u}_{\infty}$ in~$H^1(\Omega ; \R^2)$. \end{proposition}
\begin{proof} To prove the desired convergence we want to apply~\cite[Theorem~3.16]{Brezis_73}. In the notation of~\cite{Brezis_73}, we consider the operators~$A_{m} := \nabla_{u} \F ( t_{m}, \cdot, z_{m} ) = \nabla_{u}\E( t_{m}, \cdot, z_{m} )$, and $A_{\infty} := \nabla_{u} \F ( t_{\infty}, \cdot, z_{\infty} ) = \nabla_{u}\E( t_{\infty}, \cdot, z_\infty )$. In view of the hypotheses on~$h$ and~$W$, the operators~$A_{m}$ and~$A_{\infty}$ defined on the Hilbert space~$\U$ (endowed with the~$H^{1}$-norm) are maximal monotone. For~$\lambda>0$ and~$w\in \U$, let us denote with~$\varphi_{m}(\lambda,w)$ the solution of \begin{equation}\label{e.gf2}
\min\,\{\tfrac{1}{2} \|\varphi \|^{2}_{H^{1}} + \lambda \E( t_{m}, \varphi, z_{m} ) - \langle w, \varphi \rangle:\, \varphi\in \U\}\,, \end{equation} where $\langle \cdot , \cdot \rangle$ is the usual duality pairing in~$\U$. By strict convexity of~$\E$ in~$\U$,~$\varphi_{m}(\lambda, w)$ is well-defined, since the solution of the minimum problem~\eqref{e.gf2} is unique. Moreover,~$\varphi_{m}( \lambda, w )$ solves the equation \begin{equation}\label{e.gf3} \varphi_{m}( \lambda, w) + \lambda A_{m} \varphi_{m}(\lambda, w)= w \,, \end{equation} so that $\varphi_{m}(\lambda, w)=(\mathrm{I} + \lambda A_{m})^{-1} w$. In the same way, we can define~$\varphi_{\infty}(\lambda, w)$ as the solution of~\eqref{e.gf2} where we replace~$(t_{m},z_{m})$ with~$(t_{\infty},z_{\infty})$. Again, we have $\varphi_{\infty}(\lambda, w) = ( \mathrm{I} + \lambda A_{\infty})^{-1} w$.
To make use of~\cite[Theorem~3.16]{Brezis_73}, we have to show that for every~$\lambda>0$ and every~$w\in \U$ the function~$\varphi_{m}(\lambda, w)$ converges to~$\varphi_{\infty}(\lambda, w)$ in~$H^{1}(\Omega;\R^{2})$. Using~\eqref{e.gf2} it is easy to see that the sequence~$\varphi_{m} (\lambda, w)$ is bounded in~$H^{1}(\Omega;\R^{2})$, so that, up to a subsequence, we may assume that~$\varphi_{m}( \lambda, w ) \rightharpoonup \overline{\varphi}$ weakly in~$H^{1}(\Omega;\R^{2})$ for some~$\overline{\varphi}\in \U$. We now show that $\overline{\varphi} = \varphi_{\infty}(\lambda, w )$. Indeed, by~\eqref{e.gf2} and by Lemma \ref{l.lscFE} for every $\varphi \in \U$ we have that \begin{equation}\label{e.gf4} \begin{split}
\tfrac{1}{2} \| \overline{\varphi} \|_{H^{1}}^{2} &+ \lambda \E(t_{\infty}, \overline{\varphi}, z_{\infty}) - \langle w, \overline{\varphi}\rangle \\
& \leq\liminf_{m\to\infty}\,\tfrac{1}{2} \|\varphi_{m} (\lambda, w) \|^{2}_{H^{1}} + \lambda \E( t_{m}, \varphi_{m} (\lambda, w), z_{m} ) - \langle w, \varphi_{m} (\lambda, w) \rangle\\
&\leq \limsup_{m\to\infty}\, \tfrac{1}{2} \|\varphi_{m} (\lambda, w) \|^{2}_{H^{1}} + \lambda \E( t_{m}, \varphi_{m} (\lambda, w), z_{m} ) - \langle w, \varphi_{m} (\lambda, w) \rangle\\
& \leq \limsup_{m\to\infty}\, \tfrac{1}{2} \|\varphi \|^{2}_{H^{1}} + \lambda \E( t_{m}, \varphi, z_{m} ) - \langle w, \varphi \rangle =\tfrac{1}{2} \|\varphi \|^{2}_{H^{1}} + \lambda \E( t_{\infty}, \varphi, z_{\infty} ) - \langle w, \varphi \rangle\,, \end{split} \end{equation} which implies that $\overline{\varphi}= \varphi_{\infty}(\lambda, w)$ by uniqueness of minimizer. Repeating the argument of~\eqref{e.gf4} with $\varphi= \overline{\varphi}$, we also deduce that \begin{equation}\label{e.gf5} \begin{split}
\lim_{m\to\infty}\, \tfrac{1}{2} \|\varphi_{m} (\lambda, w) \|^{2}_{H^{1}} & + \lambda \E( t_{m}, \varphi_{m} (\lambda, w), z_{m} ) - \langle w, \varphi_{m} (\lambda, w) \rangle \\
& = \tfrac{1}{2} \|\varphi_{\infty} (\lambda, w) \|^{2}_{H^{1}} + \lambda \E( t_{\infty}, \varphi_{\infty} (\lambda, w), z_{\infty} ) - \langle w, \varphi_{\infty} (\lambda, w) \rangle \,. \end{split} \end{equation} As a consequence of~$(a)$ in Lemma~\ref{l.HMWw},
there exists a constant~$\beta=\beta(\lambda)>0$ such that for every~$t,s \in[0,T]$, every~$z\in\Z$, and every~$u_{1},u_{2}\in\U$ we have
\begin{displaymath}
\begin{split}
\lambda \E(t, u_{1}, z ) - \lambda \E(s , u_{2}, z ) & \geq \lambda \left\langle \nabla_{u} \E(s , u_{2}, z) , u_{1} +g(t)- u_{2} - g(s) \right\rangle \\
& \qquad + \beta \|\strain (u_{1} + g(t)) - \strain( u_{2} + g(s)) \|_{L^2}^{2}\,.
\end{split}
\end{displaymath}
Therefore, for every~$m$ we can write \begin{equation}\label{e.gf6} \begin{split}
\tfrac{1}{2} \| \varphi_{\infty}&( \lambda, w ) \|_{H^{1}}^{2} + \lambda \E(t_{\infty}, \varphi_{\infty} (\lambda, w), z_{m} ) \ - \ \tfrac{1}{2} \| \varphi_{m}( \lambda, w ) \|_{H^{1}}^{2} - \lambda \E(t_{m}, \varphi_{m} (\lambda, w), z_{m} ) \\ & \geq \langle \varphi_{m}( \lambda, w), \varphi_{\infty}(\lambda, w) - \varphi_{m}(\lambda, w)\rangle + \lambda \left\langle \nabla_{u} \E(t_{m}, \varphi_{m}(\lambda, w ), z_{m}) ,\varphi_{\infty} ( \lambda, w) - \varphi_{m} (\lambda, w) \right\rangle \\
& \qquad + \beta \|\strain (\varphi_{m}(\lambda, w) + g(t_{m})) - \strain( \varphi_{\infty}(\lambda, w) + g(t_{\infty})) \|_{L^2}^{2} \\
&= \beta \|\strain (\varphi_{m}(\lambda, w) + g(t_{m})) - \strain( \varphi_{\infty}(\lambda, w) + g(t_{\infty})) \|_{L^2}^{2} \,, \end{split} \end{equation} where, in the last inequality, we have used the minimality of~$\varphi_{m}(\lambda, w)$. We now pass to the limit in~\eqref{e.gf6} as $m\to\infty$. In view of~\eqref{e.gf5} and of the convergences of~$t_{m}$ and~$z_{m}$, the left-hand side of~\eqref{e.gf6} tends to~$0$, so that $\strain (\varphi_{m}(\lambda, w) + g(t_{m}))$ converges to $ \strain( \varphi_{\infty}(\lambda, w) + g(t_{\infty}))$ in~$L^{2}(\Omega;\mathbb{M}^{2}_{s})$. By Korn inequality, we get that $\varphi_{m}(\lambda, w) \to \varphi_{\infty} (\lambda, w)$ in~$H^1(\Omega; \R^2)$.
Therefore, we are in a position to apply~\cite[Theorem~3.16]{Brezis_73}, from which we deduce the convergence of~$u_{m}$ to~$u_{\infty}$ uniformly in~$H^1(\Omega; \R^2)$ on compact subsets of~$[0,+\infty)$. To show the convergence in $L^{\infty}([0,+\infty); H^{1}(\Omega;\R^{2}))$ it remains to control what happens in a neighborhood of~$\infty$. Let us fix~$\delta>0$. By~\eqref{e.40}, for every $l\in[0,+\infty)$ and for every $m\in\mathbb{N}\cup\{\infty\}$ we have \begin{equation}\label{e.11.38}
\| u_{m}(l) - \overline{u}_{m} \|_{H^{1}} \leq e^{-c l} \|u^{0}_{m} - \overline{u}_{m} \|_{H^{1}}\,, \end{equation}
where the constant~$c>0$ does not depend on~$m$. By hypothesis $u^{0}_{m} \to u^{0}_{\infty}$, while applying Proposition~\ref{p.6} we get that $\overline{u}_{m} \to \overline{u}_{\infty}$ in~$H^1(\Omega; \R^2)$ as $m\to\infty$, which implies that $u^{0}_{m} - \overline{u}_{m}$ is bounded in~$H^1(\Omega; \R^2)$. Hence, by~\eqref{e.11.38} there exists $\ell_{\delta}\in[0,+\infty)$ such that $\| u_{m}(l) - \overline{u}_{m} \|_{H^{1}} \leq \tfrac{\delta}{4}$ for every $l \geq \ell_{\delta}$ and every~$m\in\mathbb{N}\cup\{\infty\}$. By triangle inequality, for every $l \geq \ell_{\delta}$ we have \begin{displaymath}
\| u_{m}(l) - u_{\infty}(l) \|_{H^{1}} \leq \|u_{m} (l) - \overline{u}_{m} \|_{H^{1}} + \|\overline{u}_{m} - \overline{u}_{\infty} \|_{H^{1}} + \|\overline{u}_{\infty} - u_{\infty} (l) \|_{H^{1}} \leq \tfrac{\delta}{2} + \|\overline{u}_{m} - \overline{u}_{\infty} \|_{H^{1}}\,, \end{displaymath} from which we deduce that there exists~$m_{\delta}\in\mathbb{N}$ such that \begin{displaymath}
\| u_{m}(l) - u_{\infty}(l) \|_{H^{1}}\leq \delta \qquad \text{for every~$m\geq m_{\delta}$ and every~$l\geq \ell_{\delta}$}\,. \end{displaymath} Combining the previous estimate with the uniform convergence of~$u_{m}$ to~$u_{\infty}$ on compact subsets of~$[0,+\infty)$ we conclude that $u_{m}\to u_{\infty}$ uniformly in $[0,+\infty)$.
Finally, the last part of the statement follows from~\eqref{e.11.38} and from the convergence of~$\overline{u}_{m}$ to~$\overline{u}_{\infty}$ in~$H^{1}(\Omega;\R^{2})$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
As a corollary of Proposition~\ref{p.continuitygradflow}, we deduce a convergence result for the reparametrized functions defined in Corollary~\ref{c.gfu}.
\begin{corollary}\label{c.5} Let $(t_{m}, u^{0}_{m}, z_{m})$, $(t_{\infty}, u^{0}_{\infty}, z_{\infty})$,~$u_{m}$,~$u_{\infty}$, and $\overline{u}_{\infty}$ be as in Proposition~\ref{p.continuitygradflow}. Let $L(u_{m})$, $\omega_{m}$, and~$\rho_{m}$ be as in Corollary~\ref{c.gfu}. Then, for every $s_{m}\in [0,L(u_{m})]$ such that $\rho_{m}(s_{m}) \to \overline{\rho} \in[0,+\infty]$, we have that $\omega_{m}(s_{m})\to u_{\infty}(\overline{\rho})$ in~$H^1(\Omega; \R^2)$, with the convention that $u_{\infty}(+\infty) = \overline{u}_{\infty}$. \end{corollary}
\begin{proof} Let~$\rho_{m}$ and $\rho_{\infty}$ be as in Corollary~\ref{c.gfu}. For every~$m$ let $\ell_{m}:=\rho_{m}(s_{m})$, so that $\omega_{m}(s_{m})=u_{m}\circ \rho_{m} (s_{m}) = u_{m}(\ell_{m})$. By assumption $\ell_{m}\to \overline{\rho}$. Hence, the thesis follows by applying Proposition~\ref{p.continuitygradflow}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\subsection[A unilateral $L^2$-gradient flow for the phase field]{A unilateral \boldmath{$L^2$}-gradient flow for the phase field}
A result similar to Theorem~\ref{t.2} holds also for the phase field~$z$ when we consider the time~$t\in[0,T]$ and the displacement $u\in\U$ as fixed parameters. In this case, however, we will need a unilateral gradient flow in the topology of~$L^{2}(\Omega)$, mainly to take care of the irreversibility condition imposed on the phase field. For this reason, the following result, similar in nature to Theorem~\ref{t.2}, needs to be proven.
\begin{theorem}\label{t.3} Let $\tilde{p}\in(2,+\infty)$ be as in Lemma~\ref{l.HMWTh}, and let $(t, u, z^{0}) \in [0,T] \times W^{1,\tilde{p}}(\Omega;\R^{2}) \times \Z$ with $0\leq z^0\leq 1$. Then, there exists an evolution $z\colon[0,+\infty)\to\Z$ satisfying the following conditions: \begin{itemize}
\item[$(a)$] $z\in L^{\infty}([0,+\infty);H^{1}(\Omega))$ and $z'\in L^{1}([0,+\infty); L^{2}(\Omega))\cap L^{2}([0,+\infty); L^{2}(\Omega))$;
\item[$(b)$] $z(0) = z^0$, $z$ is non-increasing, $0 \le z \le 1$, $\|z'(l)\|_{L^2}=|\partial_{z}^{-}\F|(t,u,z(l))$ for a.e.~$l\in[0,+\infty)$;
\item[$(c)$] for every $\ell \in[0,+\infty)$ it holds \begin{equation}\label{e.42}
\F(t,u,z(\ell))=\F(t,u,z^0)-\tfrac{1}{2}\int_{0}^{\ell} |\partial_{z}^{-}\F|^{2}(t,u,z(l))+\|z'(l)\|_{L^2}^{2}\,\mathrm{d} l\,; \end{equation}
\item[$(d)$] $z(l)$ converges to $\overline{z}$ strongly in $H^1 (\Omega)$ as $l \to+\infty$, where $\overline{z}=\argmin\,\{\F(t,u,z):\, z\in\Z,\, z\leq z^0\}$. Moreover, \begin{align}
\F(t,u,\overline{z})&=\F(t,u,z^0)-\tfrac{1}{2}\int_{0}^{+\infty}|\partial_{z}^{-}\F|^{2}(t,u,z(l))+\|z'(l)\|_{L^2}^{2}\,\mathrm{d} l \,; \label{e.44} \end{align}
\item[$(e)$] there exists $\overline{\ell}\in [0, +\infty]$ such that $\| z'(\ell) \|_{L^2} \neq 0$ for a.e.~$\ell < \overline{\ell}$ and $\| z'(\ell)\|_{L^2}=0$ for a.e.~$\ell \geq \overline{\ell}$;
\item[$(f)$] there exists a constant $\overline{C}>0$ such that \begin{equation}\label{e.lengthz}
\int_{0}^{+\infty}\|z'(l)\|_{L^2}\,\mathrm{d} l\leq \overline{C}(1+\|u+g(t)\|_{W^{1,\tilde{p}}}) \|z^0-\overline{z}\|_{H^{1}}\,. \end{equation} \end{itemize} \end{theorem}
\begin{proof}
We set $\overline{z}=\argmin\,\{\F(t,u,z):\,z\in\Z,\,z\leq z^0\}$. In order to construct a gradient flow $l \mapsto z(l)$ as in the statement of the theorem, we proceed by time-discretization. For $k \in \mathbb{N}\setminus\{0\}$, and every $i\in\mathbb{N}$ we set $l^{k}_{i}\coloneq i / k$ and we solve iteratively the minimum problem \begin{equation} \label{e.1306}
\min\,\{\F(t,u,z)+\tfrac{k}{2}\|z-z^{k}_{i}\|_{L^2}^{2}:\,z\in \Z,\, z\leq z^{k}_{i}\}\,, \end{equation} where $z^{k}_{0}\coloneq z^0$. First, let us prove that $z^{k}_{i}\geq\overline{z}$ for every $k,i$. We proceed by induction w.r.t.~$i$. A similar proof is contained in~\cite{NegriKimura}. By definition $z^0 \ge \overline{z}$. Assume now that $z^k_{i} \ge \overline{z}$. Let us introduce the sets $\Omega^+ = \{ z^k_{i+1} \ge \overline{z} \}$, $\Omega^- = \{ z^k_{i+1} < \overline{z} \}$, and the corresponding energies $$
\F_{|\Omega^\pm} ( t, u , z ) = \int_{\Omega^\pm} W\big(z,\strain(u+g(t))\big)\,\mathrm{d} x + \int_{\Omega^\pm} | \nabla z |^2 + f(z) \, \mathrm{d} x . \\ $$ Let $$ \hat{z} := \max \{ z^k_{i+1} , \overline{z} \} = \begin{cases} \overline{z} & \text{ in $\Omega^-$} , \\
z^k_{i+1} & \text{ in $\Omega^+$,} \end{cases}
\qquad
\check{z} := \min \{ z^k_{i+1} , \overline{z} \} = \begin{cases} z^k_{i+1} & \text{ in $\Omega^-$} , \\
\overline{z} & \text{ in $\Omega^+$.} \end{cases} $$ By minimality of $\overline{z}$ we can write $$
\F ( t , u , \check{z} ) = \F_{|\Omega^+} ( t , u , \overline{z} ) + \F_{|\Omega^-} ( t , u , z^k_{i+1} ) \ge \F ( t, u , \overline{z}) = \F_{|\Omega^+} ( t, u , \overline{z}) + \F_{|\Omega^-} ( t, u , \overline{z}) \, , $$
from which we deduce that $\F_{|\Omega^-} ( t , u , z^k_{i+1} ) \ge \F_{|\Omega^-} ( t, u , \overline{z})$. Since $z^k_{i+1} < \overline{z} \le z^k_i$ in the set $\Omega^-$ we can write \begin{align*}
\F ( t , u , \hat{z} ) + \tfrac{k}{2} \| \hat{z} - z^k_i \|^2_{L^{2}} & = \F_{|\Omega^+} ( t , u , z^k_{i+1} ) + \F_{|\Omega^-} ( t , u , \overline{z} ) + \tfrac{k}{2} \| \hat{z} - z^k_i \|^2_{L^2} \\
& \le \F ( t , u , z^k_{i+1} ) + \tfrac{k}{2} \| z^k_{i+1} - z^k_i \|^2_{L^2} . \end{align*} Hence $\hat{z}$ is a minimizer of \eqref{e.1306}, and by uniqueness of the minimizer we conclude that $z^k_{i+1} = \hat{z} \ge \overline{z}$.
Defining the usual piecewise affine interpolant $z^k$, we get a sequence $z^k$ bounded in $H^{1}_{\mathrm{loc}}([0,+\infty),L^{2}(\Omega))$ and in $L^{\infty}([0,+\infty);H^1(\Omega))$ with $z^k (l) \ge \overline{z}$ for every $l \in [0,+\infty)$. Passing to the limit (up to subsequences) we identify a limit function $z \in H^{1}_{\mathrm{loc}}([0,+\infty);L^{2}(\Omega))\cap L^{\infty}([0,+\infty); H^{1}(\Omega))$, satisfying $z (l) \ge \overline{z}$ for every $l \in [0,+\infty)$ and \begin{equation}\label{e.60}
\F(t,u,z(\ell))\leq \F(t,u,z^0)-\tfrac{1}{2}\int_{0}^{\ell}|\partial_{z}^{-}\F|^{2}(t,u,z(l))+\|z'(l)\|_{L^2}^{2}\,\mathrm{d} l \end{equation} for every $\ell \in[0,+\infty)$. With the usual Riemann sum argument, see e.g.~\cite{Negri_ACV}, we deduce the opposite estimate \begin{displaymath}
\F(t,u,z(\ell))\geq \F(t,u,z^0)-\int_{0}^{\ell}|\partial_{z}^{-} \F | ( t, u, z(l) ) \|z'(l)\|_{L^2} \,\mathrm{d} l \,, \end{displaymath} which implies, by Young inequality, that equality holds in~\eqref{e.60}, hence the energy balance~\eqref{e.42} and the following identities, valid for a.e.~$l\in[0,+\infty)$:
\begin{equation}\label{e.66}
\|z'(l)\|_{L^2}=|\partial_{z}^{-}\F|(t,u,z(l)) \qquad \text{and} \qquad \frac{\mathrm{d}}{\mathrm{d} l}\F(t,u,z(l))=-|\partial_{z}^{-}\F|(t,u,z(l)) \|z'(l)\|_{L^2}\,. \end{equation}
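The mechanism behind~\eqref{e.66} is the equality case of Young's inequality. Writing, for brevity, $a(l):=|\partial_{z}^{-}\F|(t,u,z(l))$ and $b(l):=\|z'(l)\|_{L^{2}}$ (a notation introduced only for this remark), the two opposite energy estimates combine as

```latex
\F(t,u,z^{0}) - \int_{0}^{\ell} a(l)\,b(l)\,\mathrm{d} l
  \;\leq\; \F(t,u,z(\ell))
  \;\leq\; \F(t,u,z^{0}) - \tfrac{1}{2}\int_{0}^{\ell} a^{2}(l)+b^{2}(l)\,\mathrm{d} l\,,
\qquad\text{hence}\qquad
\int_{0}^{\ell} \big(a(l)-b(l)\big)^{2}\,\mathrm{d} l \;\leq\; 0\,,
```

so that $a(l)=b(l)$ for a.e.~$l$, which is the first identity in~\eqref{e.66}; the second then follows from the energy balance.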
Since $l \mapsto z(l)$ is non-increasing and bounded in~$H^{1}(\Omega)$, $z(l)$ admits a limit $\widetilde{z}\in\Z$ as $l \to +\infty$, weakly in $H^{1}(\Omega)$ and strongly in~$L^{2}(\Omega)$. In particular, $\widetilde{z}\geq\overline{z}$; we want to show that equality holds. To this aim, passing to the liminf as $l \to+\infty$ in~\eqref{e.42} we easily obtain that \begin{equation}\label{e.6.16}
\F(t,u,\widetilde{z})\leq\F(t,u,z^0)-\tfrac{1}{2}\int_{0}^{+\infty}|\partial_{z}^{-}\F|^{2}(t,u,z(l))+\|z'(l)\|_{L^2}^{2}\,\mathrm{d} l\,. \end{equation} Coupling~\eqref{e.66} with~\eqref{e.6.16} we get that $z'\in L^{2}([0,+\infty); L^{2}(\Omega))$. Moreover, being~$\F\geq0$, from~\eqref{e.6.16} we obtain that \begin{equation}\label{e.62}
\liminf_{l \to+\infty}\,|\partial^{-}_{z}\F|(t,u,z(l))=0\,. \end{equation}
By Lemma~\ref{l.2} we have that $|\partial_{z}^{-}\F|(t,u,\widetilde{z})=0$, that is,~$\widetilde{z}$ is a solution of \begin{equation}\label{e.61} \min\,\{\F(t,u,z):\,z\in\Z, \, z\leq\widetilde{z}\}\,. \end{equation} Since $\overline{z}\leq\widetilde{z}$, by uniqueness of solution of~\eqref{e.61} we get that $\overline{z}=\widetilde{z}$.
Now, we show that $z(l)\to\overline{z}$ strongly in~$H^1(\Omega)$. Indeed, for every $l \in[0,+\infty)$ we have, by convexity of $z\mapsto\F(t,u,z)$, \begin{equation}\label{e.64}
0 \le \F(t,u,z(l))-\F(t,u,\overline{z})\leq -\partial_{z}\F(t,u,z(l))[\overline{z}-z(l)]\leq|\partial_{z}^{-}\F|(t,u,z(l))\|z(l)-\overline{z}\|_{L^2}\,, \end{equation}
where, in the last inequality, we have used the characterization~\eqref{e.slope2} of the slope w.r.t.~$z$. By~\eqref{e.62}, we know that along a suitable subsequence $l_{j}\to+\infty$ we have $|\partial_{z}^{-}\F|(t,u,z(l_{j}))\to0$, so that $\F(t,u,z(l_{j}))\to\F(t,u,\overline{z})$. By monotonicity of $l\mapsto \F(t,u,z(l))$, we therefore get that $\F(t,u,z(l)) \to \F(t,u,\overline{z})$ as $l\to+\infty$. Hence, we deduce that $\| \nabla z(l) \|_{L^2} \to \| \nabla \overline{z} \|_{L^2}$, which in turn implies the convergence of~$z(l)$ to~$\overline{z}$ in~$H^1(\Omega)$.
In order to prove~$(e)$, we define $\overline{\ell}:=\inf\,\{l \geq 0:\, \F(t,u,z(l))= \F(t,u, \overline{z})\}$. If $\overline{\ell} < +\infty$ then, since $\overline{z}$ is the unique minimizer of $\{\F(t,u,z):\, z\in\Z,\, z\leq z^0\}$, we have $z(l) = \overline{z}$ for every $l \ge \overline{\ell}$.
In general, for a.e.~$l < \overline{\ell}$, we claim that $|\partial_{z}^{-} \F|(t,u, z(l))\neq 0$. By contradiction, if $|\partial_{z}^{-} \F|(t,u, z(l)) = 0$, then $z(l) = \argmin\,\{\F(t,u,z):\, z\in\Z,\, z\leq z(l)\}$. Since $\overline{z}\leq z(l)$, we would get that $z(l)=\overline{z}$, which contradicts the assumption $l< \overline{\ell}$. Therefore, $|\partial_{z}^{-}\F| (t, u, z(l)) \neq0$ for a.e.~$l<\overline{\ell}$. This implies, together with~\eqref{e.66}, that $\| z'(l) \|_{L^2}\neq 0$ for a.e.~$l<\overline{\ell}$.
The proof of property~$(f)$ is similar to the proof of~\eqref{e.lengthu} in Theorem~\ref{t.2,5}, but we have to take care of the monotonicity of~$l\mapsto z(l)$ and of the different norm of the gradient flow. By strong convexity, see~\eqref{e.strconv-z}, there exists a positive constant~$c$ independent of~$z$,~$u$, and~$t$, such that \begin{displaymath} \begin{split}
c\| \overline{z} - z(l) \|_{H^{1}}^{2}&\leq (\partial_{z}\F(t,u,\overline{z})-\partial_{z}\F(t,u,z(l)))[\overline{z}-z(l)] \le -\partial_{z}\F(t,u,z(l))[\overline{z}-z(l)]\\
&\leq|\partial_{z}^{-}\F|(t,u,z(l))\| \overline{z} - z(l) \|_{L^2} \leq|\partial_{z}^{-}\F|(t,u,z(l))\| \overline{z} - z(l) \|_{H^{1}}\,, \end{split} \end{displaymath} which implies \begin{equation}\label{e.63}
\| \overline{z} - z(l) \|_{H^{1}}\leq C|\partial_{z}^{-}\F|(t,u,z(l)) \end{equation} for some positive constant~$C$. Combining~\eqref{e.63} with~\eqref{e.64} we get \begin{equation}\label{e.65}
(\F(t,u,z(l))-\F(t,u,\overline{z}))^{1/2}\leq C |\partial_{z}^{-}\F|(t,u,z(l))\,. \end{equation}
Exploiting~\eqref{e.65}, we can now perform a \L ojasiewicz argument: by~\eqref{e.66},~\eqref{e.65}, and by the monotonicity and absolute continuity of $l\mapsto \F(t,u, z(l))$, for a.e.~$l \in[0, \overline{\ell})$ we have \begin{equation}\label{e.67} \begin{split}
-2\frac{\mathrm{d}}{\mathrm{d} l}\big( \F(t,u,z(l)) - \F(t,u,\overline{z})\big)^{\frac12} & =\big( \F(t,u,z(l)) - \F(t,u,\overline{z})\big)^{-\frac12} |\partial_{z}^{-}\F|(t,u,z(l))\|z'(l)\|_{L^2}\\
& \geq C \|z'(l)\|_{L^2}\,. \end{split} \end{equation} Therefore, inequality~\eqref{e.67} implies that for every $\ell\in[0,\overline{\ell})$ \begin{displaymath}
\int_{0}^{ \ell }\|z'(l)\|_{L^2}\,\mathrm{d} l \leq -2 C \int_{0}^{ \ell }\frac{\mathrm{d}}{\mathrm{d} l}\big( \F(t,u,z(l)) - \F(t,u,\overline{z})\big)^{\frac12}\,\mathrm{d} l \leq 2C \big( \F(t,u,z^0) - \F(t,u,\overline{z})\big)^{\frac12}\,. \end{displaymath} In the limit as~$\ell\to+\infty$, from the previous inequality we get \begin{equation}\label{e.68}
\int_{0}^{ +\infty }\|z'(l)\|_{L^2}\,\mathrm{d} l \leq 2C \big( \F(t,u,z^0) - \F(t,u,\overline{z})\big)^{\frac12}\,. \end{equation} By convexity, we have that \begin{align}
\big( \F(t,u,z^0) - \F(t,u,\overline{z})\big)^{\frac12} & \leq \big( -\partial_{z}\F(t,u,z^0)[\overline{z}-z^0]\big)^{\frac12} \nonumber \\
& \le \big( \partial_{z}\F(t,u,\overline{z})[\overline{z}-z^0] -\partial_{z}\F(t,u,z^0)[\overline{z}-z^0]\big)^{\frac12}\,,\label{e.69} \end{align} where, in the last inequality, we have used the fact that $\partial_{z}\F(t,u,\overline{z})[z^0 - \overline{z}] = 0$, by minimality of~$\overline{z}$.
The right-hand side of~\eqref{e.69} is \begin{displaymath} \begin{split}
\big(\partial_{z}\F(t,u,\overline{z})-\partial_{z}\F(t,u,z^0)\big)[\overline{z}-z^0]&=\int_{\Omega}(h'(\overline{z})-h'(z^0))(\overline{z}-z^0)(\mu|\strain_{d}(u+g(t))|^{2}+\kappa|\strain_{v}^{+}(u+g(t))|^{2})\,\mathrm{d} x\\
&\qquad+\int_{\Omega} \big(f' (\overline{z} ) - f'( z^{0} ) \big)( \overline{z} - z^{0} )\,\mathrm{d} x + \|\nabla {\overline{z}} - \nabla {z^0} \|_{L^2}^{2}\,. \end{split} \end{displaymath} Applying H\"older inequality to the first term of the right-hand side of the previous identity with $\tfrac{1}{\nu} + \tfrac{2}{\tilde{p}}=1$ and recalling that $0 \le z(l) \le z^0 \le 1$, and that $h,f \in C^{1,1}([0,1])$, we deduce that \begin{equation}\label{e.71} \begin{split}
\Big|\big(\partial_{z}\F&(t,u,\overline{z})-\partial_{z}\F(t,u,z^0)\big)[\overline{z}-z^0]\Big|\\
& \leq\int_{\Omega}|h'(\overline{z})-h'(z^0)||\overline{z}-z^0|(\mu|\strain_{d}(u+g(t))|^{2}+\kappa|\strain_{v}^{+}(u+g(t))|^{2})\,\mathrm{d} x + C\| \overline{z} - z^{0} \|_{H^{1}}^{2}\\
& \leq C \int_{\Omega} | \overline{z} - z^0 |^{2} (\mu |\strain_{d}(u+g(t))|^{2} + \kappa |\strain_{v}^{+}(u+g(t))|^{2}) \,\mathrm{d} x + C\| \overline{z} - z^{0} \|_{H^{1}}^{2} \\
& \leq C \|\overline{z}-z^0\|_{2\nu}^{2} \|u+g(t)\|^{2}_{W^{1,\tilde{p}}} + C\| \overline{z} - z^{0} \|_{H^{1}}^{2} \leq C( 1 + \|u+g(t)\|^{2}_{W^{1,\tilde{p}}} ) \|\overline{z}-z^0\|_{H^{1}}^{2} \,. \end{split} \end{equation} Thus, combining inequalities~\eqref{e.68}-\eqref{e.71} we get \begin{displaymath}
\int_{0}^{+\infty}\|z'(l)\|_{L^2}\,\mathrm{d} l\leq C(1+\|u+g(t)\|^{2}_{W^{1,\tilde{p}}})^{\frac12}\|\overline{z}-z^0\|_{H^{1}}\,. \end{displaymath} This concludes the proof of the theorem.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
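To illustrate the discrete scheme~\eqref{e.1306} behind the proof, one may consider a one-dimensional caricature in which $\F$ is replaced by the convex model energy $F(z)=\frac12(z-a)^{2}$: each step solves a proximal problem under the irreversibility constraint $z\le z_{i}$, and in one dimension the constrained minimizer is the clamped unconstrained one. The sketch below, including the model energy, the constants, and all names, is an illustrative assumption and not part of the paper:

```python
def prox_step(a, k, z_prev):
    """One step of min F(z) + (k/2)(z - z_prev)^2 over z <= z_prev,
    with the model energy F(z) = 0.5*(z - a)**2."""
    z_star = (a + k * z_prev) / (1.0 + k)  # unconstrained proximal point
    return min(z_prev, z_star)             # enforce irreversibility z <= z_prev

def unilateral_flow(a, z0, k=50.0, n_steps=5000):
    """Iterate the minimizing-movement scheme with time step 1/k."""
    traj = [z0]
    for _ in range(n_steps):
        traj.append(prox_step(a, k, traj[-1]))
    return traj

traj = unilateral_flow(a=0.2, z0=1.0)   # constraint inactive: decays towards a
stuck = unilateral_flow(a=1.5, z0=1.0)  # constraint active: stays at z0
```

The first trajectory is non-increasing and approaches the constrained minimizer $\min(z^{0},a)=0.2$, mirroring $(b)$ and $(d)$ of Theorem~\ref{t.3}; in the second the irreversibility constraint is active at every step, so the iterates never move.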
As in Corollary~\ref{c.gfu}, we define here a reparametrization of~$l\in[0,+\infty)$ which makes the gradient flow of Theorem~\ref{t.3} $1$-Lipschitz. Again, this reparametrization will be used in Section~\ref{s.prooft1}.
\begin{corollary}\label{c.gfz} Let $\tilde{p}\in(2,+\infty)$ be as in Lemma~\ref{l.HMWTh}, let $(t, u, z^{0}) \in [0,T]\times W^{1,\tilde{p}} (\Omega; \mathbb{R}^2) \times \Z$ with $0\leq z^{0} \leq 1$, let $\overline{z}:=\argmin\,\{\F(t,u,z):\, z\in\Z, \, z\leq z^{0}\}$, and let~$z$ be the gradient flow computed in Theorem~\ref{t.3} with initial condition~$z^{0}$ and parameters~$t$ and~$u$. Given~$\overline{\ell}\in[0,+\infty]$ as in~$(e)$ of Theorem~\ref{t.3}, let us set \begin{displaymath}
L(z) := \int_{0}^{\overline{\ell}} \| z'(l) \|_{L^2}\,\mathrm{d} l \qquad \text{and} \qquad \lambda(\ell) := \int_{0}^{\ell} \| z'(l) \|_{L^2}\,\mathrm{d} l \quad \text{for $\ell \in [ 0, \overline{\ell} ]$}\,. \end{displaymath}
Moreover, let $\rho \colon [0, L(z)]\to[0,\overline{\ell}]$ be defined by $\rho := \lambda^{-1}$. Then, the function $\zeta:= z\circ \rho$ belongs to the space $W^{1,\infty} ([0,L(z)]; L^{2}(\Omega) )$ with $\| \zeta' (s) \|_{L^2} =1$ a.e.~in $[0, L(z)]$, $\zeta(0)=z^{0}$, $\zeta(L(z)) = \overline{z}$, and \begin{equation}\label{e.enbal2}
\F(t,u, \zeta( s) ) = \F ( t, u, z^{0} ) - \int_{0}^{s} |\partial_{z}^{-} \F| (t, u, \zeta(\sigma )) \|\zeta'(\sigma)\|_{L^{2}} \, \mathrm{d} \sigma \,. \end{equation} \end{corollary}
\begin{proof}
We notice that~$\rho=\lambda^{-1}$ is well defined in view of~$(e)$ of Theorem~\ref{t.3}. As a consequence, also $\zeta\colon[0,L(z)]\to \Z$ is well defined and satisfies $\zeta(0)=z^{0}$, $\zeta(L(z)) = \overline{z}$, and $\|\zeta'(s) \|_{L^2}=1$ for a.e.~$s\in[0,L(z)]$. By $(b)$-$(d)$ of Theorem~\ref{t.3} we have that \begin{displaymath}
\F(t, u, z(\ell) ) = \F(t, u, z^{0}) - \int_{0}^{\ell} |\partial_{z}^{-} \F|(t, u, z(l)) \|z'(l)\|_{L^2}\,\mathrm{d} l\qquad\text{for $\ell\in[0,\overline{\ell}]$}\,. \end{displaymath} By the change of coordinate~$l= \rho(\sigma)$ for $\sigma\in[0,L(z)]$ we deduce~\eqref{e.enbal2}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\begin{remark}\label{r.4} In the notation of Corollary~\ref{c.gfz}, we notice that, as a consequence of Theorem~\ref{t.3}, \begin{displaymath}
L(z)\leq \overline{C}(1+\|u+g(t)\|_{W^{1,\tilde{p}}})\| z^{0} - \overline{z} \|_{H^{1}}\,. \end{displaymath} \end{remark}
\section{Proof of the convergence result} \label{s.prooft1}
We develop in this section the proof of Theorem~\ref{t.1}. We follow the main structure of~\cite{KneesNegri_M3AS17}. We start by constructing a time-discrete evolution by an alternate minimization algorithm. Next we interpolate between all the steps of the scheme w.r.t.~an arc-length parameter in a suitable norm. Since the energy~$\F$ is not separately quadratic, in this context there are no intrinsic norms stemming from the functional, as happens in~\cite{KneesNegri_M3AS17}; in our framework, instead, it is natural to use the $H^{1}$-norm for the displacement field~$u$ and the $L^{2}$-norm for the phase field~$z$. The latter technical choice is due to the existence of a unilateral $L^{2}$-gradient flow (see Theorem~\ref{t.3}) which in turn is related to the irreversibility of~$z$ along the whole algorithm.
In Proposition~\ref{p.compactness} we prove compactness of the discrete parametrized evolutions. We characterize the limit evolution in terms of \emph{equilibrium} and \emph{energy-dissipation balance} (see~$(d)$ and~$(e)$ of Theorem~\ref{t.1}). The proof of equilibrium and of the lower energy-dissipation inequality follows from lower semicontinuity of the functional~$\F$ and of the slopes~$|\partial_{u} \F|$ and~$|\partial_{z}^{-} \F|$. The technically hard part comes with the upper energy-dissipation inequality (see Section~\ref{s.inequality}). Comparing with~\cite{KneesNegri_M3AS17}, here we cannot employ a chain rule argument, since the evolution~$z$ is qualitatively the reparametrization of an $L^{2}$-gradient flow, instead of an $H^{1}$-gradient flow. For this reason, we need to exploit a Riemann sum argument (see, e.g.,~\cite{MR2186036, Negri_ACV}). In this respect, the starting point would be the summability of $|\partial_{z}^{-} \F| (t(\cdot), u(\cdot), z(\cdot))$, which does not follow from the energy estimates, since we are only able to control $|\partial_{z}^{-} \F| (t(\cdot), u(\cdot), z(\cdot)) \| z'(\cdot) \|_{L^{2}}$. Nevertheless, we can show that~$|\partial_{z}^{-}\F|(t(\cdot), u(\cdot), z(\cdot)) $ belongs to $L^1$ in the set where~$\|z' \|_{L^{2}} \neq 0$. At this point, we can apply a Riemann sum argument in an auxiliary reparametrized setting which, roughly speaking, concentrates the intervals where~$\|z'\|_{L^{2}}= 0$ to an at most countable set of points, at the price of introducing discontinuities in the displacement evolution, which, however, can be controlled a posteriori via a chain rule argument.
\subsection{Parametrization and discrete energy estimate} \label{s.4.1}
For $k \in \mathbb{N}$, $k \neq 0$ let $\tau_{k}:=T/k$ and $t^{k}_{i}:= i\tau_{k}$ for $i=0,\ldots, k$. We define the discrete evolutions~$u^k_i$ and~$z^k_i$ (in the time nodes $t^k_i$) by induction. We set $u^{k}_{0}\coloneq u_{0}$ and $z^{k}_{0}\coloneq z_{0}$. Given $u^k_{i-1}$ and $z^k_{i-1}$ we define~$u^k_{i}$ and~$z^k_{i}$ with the aid of two auxiliary sequences~$u^{k}_{i,j}$ and~$z^{k}_{i,j}$ defined as follows: let $u^{k}_{i,0}:= u^{k}_{i-1}$ and $z^{k}_{i,0}:=z^{k}_{i-1}$, then for every $j\in\mathbb{N}$ let \begin{eqnarray} &&\displaystyle u^{k}_{i,j+1}:=\argmin\,\{\F(t^{k}_{i},u,z^{k}_{i,j}):\,u\in \U \}\,,\label{e.minu}\\[2mm] &&\displaystyle z^{k}_{i,j+1}:=\argmin\,\{\F(t^{k}_{i},u^{k}_{i,j + 1},z):\, z\in \Z,\, z\leq z^{k}_{i,j}\}\,.\label{e.minz} \end{eqnarray} Note that~$0\leq z^{k}_{i,j} \leq 1$ for every $k,i,j$ and that the sequence $z^k_{i,j}$ is bounded in $H^1(\Omega)$ and non-increasing w.r.t.~$j$; hence, in the limit as $j \to\infty$, $z^{k}_{i,j} \rightharpoonup z^{k}_{i}$ weakly in~$H^{1}(\Omega)$ and, by Proposition~\ref{p.6}, $u^{k}_{i,j} \to u^{k}_{i}$ in~$W^{1,\beta}(\Omega;\R^{2})$ for $\beta\in[2,\tilde{p})$, where $u^k_i$ solves \begin{equation}\label{e.minu2} \min\,\{\F(t^{k}_{i},u,z^{k}_{i}):\, u\in\U\}\,. \end{equation}
Moreover, since $g\in W^{1,q} ([0,T];W^{1,p}(\Omega;\R^{2}))$, by Lemma~\ref{l.HMWTh} we deduce that $u^{k}_{i,j}$ is bounded in~$W^{1,\tilde{p}}(\Omega;\R^{2})$, uniformly w.r.t.~$k,i,j$. By \eqref{e.minz} we know that $| \partial_z^- \F | (t^{k}_{i},u^{k}_{i,j},z^k_{i,j}) = 0$ for $j\geq 1$. Since $u^k_{i,j} \to u^k_i$ in~$W^{1,\beta}(\Omega;\R^{2})$ and $z^{k}_{i,j}\rightharpoonup z^{k}_{i}$ in $H^1(\Omega)$, as a consequence of Lemma~\ref{l.2} we deduce that $| \partial_z^- \F | (t^{k}_{i},u^{k}_{i},z^k_{i}) = 0$. Hence, $z^{k}_{i}$ is the solution of \begin{equation}\label{e.minz2} \min\,\{\F(t^{k}_{i},u^{k}_{i},z):\, z\in\Z,\, z\leq z^{k}_{i}\}\,. \end{equation}
\begin{remark} In general it may happen that the alternate minimization algorithm~\eqref{e.minu}-\eqref{e.minz} converges after a finite number of iterations. This is however a special case of the above scheme and will not be treated separately. \end{remark}
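In finite dimensions, the alternating structure of~\eqref{e.minu}-\eqref{e.minz} can be sketched as follows. The snippet below is a hypothetical toy model, not the functional~$\F$ of this paper: a single elastic degree of freedom~$u$, a scalar phase field~$z\in[0,1]$ with degraded stiffness~$z^{2}+\eta$, and the irreversibility constraint~$z\leq z_{\mathrm{prev}}$ enforced by projection; all names are chosen for illustration only.

```python
# Hypothetical toy energy:
#   F(g,u,z) = 0.5*(z^2 + ETA)*u^2 + 0.5*(g - u)^2 + 0.5*(1 - z)^2,
# minimized alternately in u (quadratic, explicit minimizer) and in z
# (stationary point projected on the admissible set [0, z_prev]).
ETA = 1e-3  # residual stiffness (regularization)

def energy(g, u, z):
    return 0.5 * (z * z + ETA) * u * u + 0.5 * (g - u) ** 2 + 0.5 * (1.0 - z) ** 2

def argmin_u(g, z):
    # exact minimizer of the quadratic map u -> F(g,u,z)
    return g / (1.0 + z * z + ETA)

def argmin_z(u, z_prev):
    # stationary point of z -> F(g,u,z), projected on [0, z_prev]
    z_star = 1.0 / (1.0 + u * u)
    return max(0.0, min(z_prev, z_star))

def alternate_minimization(g, u0, z0, tol=1e-12, max_iter=1000):
    u, z = u0, z0
    energies = [energy(g, u, z)]
    for _ in range(max_iter):
        u_next = argmin_u(g, z)        # u-minimization step
        z_next = argmin_z(u_next, z)   # constrained z-minimization step
        energies.append(energy(g, u_next, z_next))
        converged = abs(u_next - u) + abs(z_next - z) < tol
        u, z = u_next, z_next
        if converged:
            break
    return u, z, energies
```

Each half-step decreases the energy, and the projection makes~$z$ non-increasing along the iterations, mirroring the monotonicity used throughout this section.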
Recalling the results and the notation of Theorem~\ref{t.2} and Corollary~\ref{c.gfu}, for every $k,i,j$ there exists an auxiliary parametrized gradient flow $\omega^{k}_{i,j}\in W^{1,\infty}([0,L( \omega^{k}_{i,j})]; H^{1}(\Omega;\R^{2}))$ such that $\omega^{k}_{i,j}(0) = u^{k}_{i,j}$, $\omega^{k}_{i,j}(L( \omega^{k}_{i,j}) ) = u^{k}_{i,j+1}$, $\| (\omega^{k}_{i,j})'(s) \|_{H^{1}} = 1$ for a.e.~$s\in [0,L( \omega^{k}_{i,j})]$, and
\begin{equation}\label{e.72}
\F(t^{k}_{i}, \omega^{k}_{i,j} (s), z^{k}_{i,j} ) = \F ( t^{k}_{i}, u^{k}_{i,j}, z^{k}_{i,j} ) - \int_{0}^{s} | \partial_{u} \F | ( t^{k}_{i}, \omega^{k}_{i,j} ( \sigma ), z^{k}_{i,j} ) \| (\omega^{k}_{i,j})'(\sigma) \|_{H^{1}}\, \mathrm{d} \sigma \end{equation}
for every $s\in[0,L(\omega^{k}_{i,j})]$. Moreover, in view of Theorem~\ref{t.2,5} and Remark~\ref{r.3},
\begin{equation}\label{e.15.40}
L( \omega^{k}_{i,j}) \leq \bar{C} \|u^{k}_{i,j} - u^{k}_{i,j+1}\|_{H^{1}}
\end{equation}
for some positive constant~$\bar{C}$ independent of~$k,i,j$.
In a similar way, by Theorem~\ref{t.3} there exists an auxiliary parametrized gradient flow $\zeta^{k}_{i,j}$ belonging to $W^{1,\infty}([0, L( \zeta^{k}_{i,j})]; L^{2}(\Omega) )$ such that $\zeta^{k}_{i,j}(0) = z^{k}_{i,j}$, $\zeta^{k}_{i,j}( L( \zeta^{k}_{i,j}) ) = z^{k}_{i,j+1}$, $ (\zeta^{k}_{i,j})'(s) \le0 $ and $\| (\zeta^{k}_{i,j})'(s)\|_{L^2} = 1$ for a.e.~$s\in[0, L(\zeta^{k}_{i,j})]$, and
\begin{equation}\label{e.73}
\F ( t^{k}_{i}, u^{k}_{i,j+1}, \zeta^{k}_{i,j} (s)) = \F ( t^{k}_{i}, u^{k}_{i,j+1}, z^{k}_{i,j} ) - \int_{0}^{s} | \partial_{z}^{-} \F | ( t^{k}_{i}, u^{k}_{i,j+1}, \zeta^{k}_{i,j}( \sigma )) \|(\zeta^{k}_{i,j})'(\sigma) \|_{L^{2}} \,\mathrm{d} \sigma \end{equation} for every $s\in [0, L(\zeta^{k}_{i,j})]$. Furthermore, by Theorem~\ref{t.3} and Remark~\ref{r.4} we have \begin{equation}\label{e.15.52}
L(\zeta^{k}_{i,j}) \leq \bar{C} (1 + \|u^{k}_{i,j} + g(t^{k}_{i}) \|_{W^{1,\tilde{p}}} ) \|z^{k}_{i,j} - z^{k}_{i,j+1}\|_{H^{1}}\,, \end{equation} for some positive constant~$\bar{C}$ independent of~$k,i,j$. In view of the uniform boundedness of~$u^{k}_{i,j}$ in~$W^{1,\tilde{p}}(\Omega;\R^{2})$ (see Corollary~\ref{c.3}) and of the regularity of the boundary datum~$g$, inequality~\eqref{e.15.52} can be rewritten as \begin{equation}\label{e.16.06}
L( \zeta^{k}_{i,j}) \leq \tilde{C} \|z^{k}_{i,j} - z^{k}_{i,j+1}\|_{H^{1}} \end{equation} for some positive constant~$\tilde{C}$ independent of~$k,i,j$.
We now start showing a uniform bound on the arc-length of the alternate minimization scheme~\eqref{e.minu}-\eqref{e.minz}. This is done by estimating the term \begin{equation}\label{e.Sk} S_{k}\coloneq \sum_{i=1}^{k}\sum_{j=0}^{\infty} L ( \omega^{k}_{i,j}) + L ( \zeta^{k}_{i,j}) \end{equation} uniformly w.r.t.~$k\in\mathbb{N}$.
\begin{proposition}\label{p.finitelength} There exists $\overline{S}\in(0,+\infty)$ such that $S_{k}\leq \overline{S}$ for every index~$k$. \end{proposition}
\begin{proof} In this proof we denote with~$C$ a generic positive constant, which could change from line to line.
Thanks to \eqref{e.15.40} and \eqref{e.16.06} we deduce that \begin{equation}\label{e.74}
S_{k}\leq C\sum_{i=1}^{k}\sum_{j=0}^{\infty}\big(\|z^{k}_{i,j}-z^{k}_{i,j+1}\|_{H^1}+\|u^{k}_{i,j}-u^{k}_{i,j+1}\|_{H^{1}}\big)\,, \end{equation} for~$C$ independent of~$k$. Therefore, it is sufficient to prove that the right-hand side of~\eqref{e.74} is uniformly bounded.
For $j=0$, applying Corollary~\ref{c.3}, recalling that $z^k_{i,0}=z^k_{i-1}$ and that the boundary datum varies, we have that \begin{displaymath}
\|u^{k}_{i,1}-u^{k}_{i,0}\|_{H^{1}} \leq C \|g(t^{k}_{i})-g(t^{k}_{i-1})\|_{W^{1,p}} \,. \end{displaymath} Moreover, by Proposition~\ref{p.5}, \begin{displaymath}
\|z^{k}_{i,1}-z^{k}_{i,0}\|_{H^{1}} \leq C( \|u^{k}_{i,1}-u^{k}_{i,0}\|_{H^{1}} + \|g(t^{k}_{i})-g(t^{k}_{i-1})\|_{H^{1}} ) \leq C \|g(t^{k}_{i})-g(t^{k}_{i-1})\|_{W^{1,p}}\,. \end{displaymath}
For $j\geq 1$, in view of Proposition~\ref{p.5}, we have \begin{equation}\label{e.2.07}
\|z^{k}_{i,j}-z^{k}_{i,j+1}\|_{H^{1}} \leq C \|u^{k}_{i,j}-u^{k}_{i,j+1}\|_{H^{1}}\,. \end{equation} By~\eqref{e.25} in Corollary~\ref{c.3}, recalling that for fixed $k$ the boundary datum does not change, we can continue in~\eqref{e.2.07} with \begin{equation}\label{e.2.09}
\|z^{k}_{i,j}-z^{k}_{i,j+1}\|_{H^{1}} \leq C \|z^{k}_{i,j}-z^{k}_{i,j-1}\|_{L^\nu} , \end{equation} for some exponent $\nu \gg 1$. The rest of the proof works as in~\cite[Theorem~4.1]{KneesNegri_M3AS17}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
At this point we are ready to define a new parametrization of the graph of the evolution in terms of the arc-length of the curves $\omega^k_{i,j}$ and $\zeta^k_{i,j}$, connecting all the intermediate steps of the alternate minimization scheme \eqref{e.minu}-\eqref{e.minz}.
We set~$s^{k}_{0}\coloneq 0$, $t_{k}(0)\coloneq 0$, $u_{k}(0)\coloneq u_{0}$, and $z_{k}(0)\coloneq z_{0}$. For every $i\geq 1$, assume that $s^{k}_{i-1}$ is known and let us construct~$s^{k}_{i}$. We define $s^{k}_{i,-1}\coloneq s^{k}_{i-1}$ and, for $j\geq0$, \begin{equation}\label{e.esse} s^{k}_{i,0}\coloneq s^{k}_{i,-1}+\tau_{k}\,,\qquad s^{k}_{i,j+\frac{1}{2}}\coloneq s^{k}_{i,j}+L ( \omega^{k}_{i,j}) \,, \qquad s^{k}_{i,j+1} \coloneq s^{k}_{i,j+\frac{1}{2}}+L(\zeta^{k}_{i,j}) \,. \end{equation} In view of Proposition~\ref{p.finitelength}, the sequence~$s^{k}_{i,j}$ admits a finite limit~$s^{k}_{i}$ as $j\to \infty$. For every $s\in[s^{k}_{i,-1},s^{k}_{i,0}]$ we define \begin{equation}\label{e.interpolantt}
t_{k}(s) \coloneq t^{k}_{i-1}+ (s - s^{k}_{i,-1})\,,\qquad u_{k}(s)\coloneq u^{k}_{i-1}\,,\qquad z_{k}(s)\coloneq z^{k}_{i-1}\,. \end{equation} For $j\geq 0$ and $s\in[s^{k}_{i,j}, s^{k}_{i,j+\frac{1}{2}}]$ we set \begin{eqnarray} && t_{k}(s)\coloneq t^{k}_{i} = t_{k}(s^{k}_{i,j})\,, \nonumber \\[1mm] &&u_{k}(s)\coloneq \left\{\begin{array}{ll}
\omega^{k}_{i,j} (s-s^{k}_{i,j}) & \text{if $u^{k}_{i,j}\neq u^{k}_{i,j+1}$}\,, \label{e.interpolantu} \\ [2mm]
u^{k}_{i,j}=u^{k}_{i,j+1} &\text{otherwise}\,, \end{array}\right.\\[1mm] && z_{k}(s) \coloneq z^{k}_{i, j} = z_{k}(s^{k}_{i,j}) \,.\nonumber \end{eqnarray} Finally, for $s \in [ s^{k}_{i,j+\frac{1}{2}}, s^{k}_{i,j+1} ]$ we define \begin{eqnarray} && t_{k}(s) \coloneq t^{k}_{i} = t_{k} ( s^{k}_{i,j+\frac{1}{2}} ) \,, \nonumber \\ [1mm] && u_{k}(s) \coloneq u^{k}_{i,j+1} = u_{k} ( s^{k}_{i,j+\frac{1}{2}}) \,, \label{e.interpolantz} \\ [1mm] && z_{k}(s) \coloneq \left\{ \begin{array}{ll} \zeta^{k}_{i,j} ( s - s^{k}_{i,j+\frac{1}{2}} ) & \text{if $z^{k}_{i,j}\neq z^{k}_{i,j+1}$} \,, \\ [2mm] z^{k}_{i,j}=z^{k}_{i,j+1} & \text{otherwise}\,. \nonumber \end{array}\right. \end{eqnarray} In the limit as~$s \to s^k_i$, we have that $t_{k}(s) \to t_{k}(s^{k}_{i})=t^{k}_{i}$ and $z_{k} (s) \weakto z^{k}_{i}=: z_{k}(s^{k}_{i})$ in $H^1(\Omega)$. As for~$u_{k}$, by Proposition~\ref{p.6} we know that $u^{k}_{i,j} \to u^{k}_{i}$ in~$W^{1,\beta}(\Omega;\R^{2})$ for every $\beta\in[2,\tilde{p})$. As a consequence of the exponential decay~\eqref{e.40} in Theorem~\ref{t.2}, we also deduce that $u_k ( s) \to u^{k}_{i} =: u_{k}(s^{k}_{i})$ in~$H^{1}(\Omega;\R^{2})$.
In view of Proposition~\ref{p.finitelength}, we may assume that there exists~$0\leq S <+\infty$ such that, up to a constant extension, for every~$k\in\mathbb{N}$ the triple~$(t_{k},u_{k},z_{k})$ is well defined on the interval $[0,S]$, takes values in $[0,T]\times \U\times \Z$, and satisfies $t_{k}(S)=T$. We notice that since~$u^{k}_{i,j}$ are uniformly bounded in~$W^{1,\tilde{p}}(\Omega;\R^{2})$, Theorem~\ref{t.2} implies that $u_{k}(s)$ is bounded in~$H^{1}(\Omega;\R^{2})$ uniformly w.r.t.~$k$ and~$s$. It follows that also~$z^{k}_{i,j}$ are bounded in~$H^{1}(\Omega)$ and hence, by Theorem~\ref{t.3},~$z_{k}(s)$ is bounded in~$H^{1}(\Omega)$ uniformly w.r.t.~$k$ and~$s$. Moreover, as a consequence of Corollaries~\ref{c.gfu} and~\ref{c.gfz} and of the above construction, we have \begin{equation}\label{e.boundlip}
t'_{k}(s) + \| u_{k}'(s) \|_{H^{1}} + \| z_{k}'(s) \|_{L^2} \leq 1\qquad\text{for a.e.~$s\in[0,S]$}\,, \end{equation} so that the triple $(t_{k},u_{k},z_{k})\in W^{1,\infty}([0,S]; [0,T]\times H^{1}(\Omega;\R^{2}) \times L^{2}(\Omega))$ is bounded. We notice that~$t_{k}$, $u_{k}$, and $z_{k}$ coincide with their Lipschitz continuous representatives.
We collect in the following proposition the equilibrium properties and a discrete energy-dissipation inequality satisfied by the triple~$(t_{k},u_{k},z_{k})$.
\begin{proposition}\label{p.7} For every $k,i$ it holds \begin{equation}\label{e.77}
|\partial_{u}\F|(t_{k}(s^{k}_{i}),u_{k}(s^{k}_{i}),z_{k}(s^{k}_{i}))=0\qquad\text{and}\qquad |\partial_{z}^{-}\F|(t_{k}(s^{k}_{i}),u_{k}(s^{k}_{i}),z_{k}(s^{k}_{i}))=0\,. \end{equation} Moreover, for every $s\in[0,S]$ we have \begin{equation}\label{e.78} \begin{split}
\F(t_{k}(s),&u_{k}(s), z_{k}(s))\leq\F(0,u_{0},z_{0})-\int_{0}^{s}|\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| u'_k(\sigma) \|_{H^1} \, \mathrm{d} \sigma \\
&\quad-\int_{0}^{s} |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| z'_k(\sigma) \|_{L^2}\,\mathrm{d} \sigma + \int_{0}^{s} \mathcal{P}(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \,t'_{k}( \sigma )\,\mathrm{d} \sigma \,, \end{split} \end{equation}
where it is understood that $ |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| z'_k(\sigma) \|_{L^2}=0$ whenever $\|z'_{k}(\sigma) \|_{L^{2}}=0$, even if the slope is $+\infty$. \end{proposition}
\begin{proof} The equilibrium equalities in~\eqref{e.77} follow from the construction \eqref{e.interpolantt}-\eqref{e.interpolantz} of the interpolation functions~$t_{k}$,~$u_{k}$, and~$z_{k}$ and from the minimality properties of~$u_{k}(s^{k}_{i}) = u^k_i$ and $z_{k}(s^{k}_{i})=z^k_i$ summarized in~\eqref{e.minu2}-\eqref{e.minz2}.
Let us show~\eqref{e.78}. Without loss of generality, we consider $s\in[0,S_{k}]$, where~$S_{k}$ is defined in~\eqref{e.Sk}. Let~$k$ and~$i\in\{1,\ldots,k\}$ be fixed. For $j=-1$, for every $s\in[s^{k}_{i,-1},s^{k}_{i,0}] = [s^{k}_{i-1},s^{k}_{i,0}]$ we have $u_{k}'(s)= z_{k}'(s)=0$ and, therefore, \begin{align} \F(t_{k}(s),& \,u_{k}(s),z_{k}(s)) = \F(t_{k}(s^{k}_{i-1}),u_{k}(s^{k}_{i-1}),z_{k}(s^{k}_{i-1}))+\int_{s^{k}_{i-1}}^{s} \!\!\! \partial_{t}\F(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \, t'_{k}( \sigma )\,\mathrm{d} \sigma \nonumber \\ & = \F(t_{k}(s^{k}_{i-1}),u_{k}(s^{k}_{i-1}),z_{k}(s^{k}_{i-1}))+\int_{s^{k}_{i-1}}^{s} \!\!\! \mathcal{P} (t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,t'_{k}( \sigma )\,\mathrm{d} \sigma \label{e.80} \\
& \quad - \int_{s^{k}_{i-1}}^{s} \!\!\! |\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| u'_k(\sigma) \|_{H^1}\,\mathrm{d} \sigma - \int_{s^{k}_{i-1}}^{s} \!\!\! |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| z'_k(\sigma) \|_{L^2}\,\mathrm{d} \sigma \nonumber \,, \end{align}
where, in the last equality, we have used the definition of the power functional~$\mathcal{P}$ in~\eqref{e.power} and~\eqref{e.pw}. For every $j\geq 0$, we distinguish between $s\in[s^{k}_{i,j},s^{k}_{i,j+\frac{1}{2}}]$ and $s\in[s^{k}_{i,j+\frac{1}{2}},s^{k}_{i,j+1}]$. In the first case we have $t_{k}'(s)=z_{k}'(s)=0$ while $\| u'_k(s) \|_{H^1}=1$ for a.e.~$s \in[s^{k}_{i,j},s^{k}_{i,j+\frac{1}{2}}]$; then, in view of~\eqref{e.72}, \begin{align}
\F(t_{k}(s), & \, u_{k}(s),z_{k}(s)) = \F(t_{k}(s^{k}_{i,j}),u_{k}(s^{k}_{i,j}),z_{k}(s^{k}_{i,j}))-\int_{s^{k}_{i,j}}^{s} \!\! |\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\, \| u'_k(\sigma) \|_{H^1}\, \mathrm{d} \sigma \nonumber \\
& = \F(t_{k}(s^{k}_{i,j}),u_{k}(s^{k}_{i,j}),z_{k}(s^{k}_{i,j})) - \int_{s^{k}_{i,j}}^{s} \!\! |\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \,\| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \label{e.81} \\
& \quad - \int_{s^{k}_{i,j}}^{s} \!\! |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| z'_k(\sigma) \|_{L^2} \,\mathrm{d} \sigma +\int_{s^{k}_{i,j}}^{s} \!\!\! \mathcal{P} (t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \, t'_{k}( \sigma )\,\mathrm{d} \sigma \nonumber \,. \end{align}
In the second case we have $t_{k}'(s)=u_{k}'(s)=0$ while $\| z'_k(s) \|_{L^2}=1$ for a.e.~$s \in[s^{k}_{i,j+\frac12},s^{k}_{i,j+1}]$; then, by~\eqref{e.73}, \begin{align}
\F(t_{k}(s), & \, u_{k}(s),z_{k}(s)) = \F(t_{k}(s^{k}_{i,j+\frac{1}{2}}),u_{k}(s^{k}_{i,j+\frac{1}{2}}),z_{k}(s^{k}_{i,j+\frac{1}{2}})) - \int_{s^{k}_{i,j+\frac{1}{2}}}^{s} \!\!\!\!\!\!\! |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \,\| z'_k(\sigma) \|_{L^2} \,\mathrm{d} \sigma \nonumber \\
& = \F(t_{k}(s^{k}_{i, j+\frac{1}{2}}),u_{k}(s^{k}_{i,j+\frac{1}{2}}),z_{k}(s^{k}_{i,j+\frac{1}{2}})) - \int_{s^{k}_{i,j+\frac{1}{2}}}^{s} \!\!\!\!\!\!\! |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| z'_k(\sigma) \|_{L^2} \,\mathrm{d} \sigma \label{e.82} \\
& \quad - \int_{s^{k}_{i,j+\frac{1}{2}}}^{s} \!\!\!\! |\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma +\int_{s^{k}_{i, j+\frac{1}{2}}}^{s} \!\!\!\! \mathcal{P} (t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \, t'_{k}( \sigma )\,\mathrm{d} \sigma \nonumber \,. \end{align}
\F(t_{k}(s), & \, u_{k}(s),z_{k}(s)) =\F(t_{k}(s^{k}_{i-1}),u_{k}(s^{k}_{i-1}),z_{k}(s^{k}_{i-1}))-\int_{s^{k}_{i-1}}^{s} \!\!\! |\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{s^{k}_{i-1}}^{s} \!\!\!\! |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| z'_k(\sigma) \|_{L^2} \,\mathrm{d} \sigma + \int_{s^{k}_{i-1}}^{s} \!\!\!\! \mathcal{P} (t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\, t_{k}'( \sigma )\,\mathrm{d} \sigma \,. \end{split} \end{equation*} Passing to the limit as $s\to s^{k}_{i}$, by Lemma~\ref{l.lscFE} we get
\begin{equation*}\label{e.84} \begin{split}
\F(t_{k}(s^{k}_{i}), & \, u_{k}(s^{k}_{i}), z_{k}(s^{k}_{i})) \leq \F(t_{k}(s^{k}_{i-1}),u_{k}(s^{k}_{i-1}),z_{k}(s^{k}_{i-1}))-\int_{s^{k}_{i-1}}^{s^{k}_{i}} \!\!\! |\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{s^{k}_{i-1}}^{s^{k}_{i}} \!\!\! |\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \, \| z'_k(\sigma) \|_{L^2} \,\mathrm{d} \sigma + \int_{s^{k}_{i-1}}^{s^{k}_{i}} \!\!\! \mathcal{P} ( t_{k}( \sigma ), u_{k}( \sigma ), z_{k}( \sigma ) ) \, t_{k}'( \sigma ) \, \mathrm{d} \sigma \,, \end{split} \end{equation*} observing that the passage to the limit in the power integral is straightforward since~$t'_{k}(\sigma)=0$ for $\sigma\in(s^{k}_{i,0}, s^{k}_{i})$.
Finally, iterating the previous estimate w.r.t.~$i$ and combining again~\eqref{e.80}-\eqref{e.82} we deduce~\eqref{e.78}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\subsection{Compactness and lower energy inequality}
In the following proposition we show the compactness of the sequence $(t_{k},u_{k},z_{k})$.
\begin{proposition}\label{p.compactness} There exist a subsequence of~$(t_{k}, u_{k}, z_{k})$ and a triple $(t,u,z)\in W^{1,\infty} ( [0,S] ; [0,T] \times H^{1}(\Omega;\R^{2}) \times L^{2}(\Omega) )$ such that for every sequence~$s_{k}$ converging to~$s\in[0,S]$ we have \begin{displaymath} t_{k}(s_{k})\to t(s)\,,\qquad u_{k}(s_{k})\to u(s) \text{ in~$H^1(\Omega; \R^2)$,}\qquad z_{k}(s_{k}) \rightharpoonup z(s) \text{ weakly in~$H^{1}(\Omega)$}. \end{displaymath}
Moreover, \begin{equation}\label{e.boundlip2}
|t'(s)|+\|u'(s)\|_{H^{1}}+\|z'(s)\|_{L^2}\leq 1\qquad\text{for a.e.~$s\in[0,S]$}\,. \end{equation} In particular, $s \mapsto t(s)$ is non-decreasing and $t(S) = T$. \end{proposition}
\begin{proof} In view of~\eqref{e.boundlip}, we have that there exists a triple $(t,u,z)\in W^{1,\infty}( [0,S] ; [0,T] \times H^{1}(\Omega;\R^{2}) \times L^{2}(\Omega))$ such that, up to a subsequence, $(t_{k},u_{k},z_{k}) \rightharpoonup (t,u,z)$ weakly* in $W^{1,\infty} ( [0,S] ; [0,T] \times H^{1}(\Omega;\R^{2}) \times L^{2}(\Omega))$. In particular, for every $s\in[0,S]$ and every sequence $s_k \to s$ we have \begin{equation}\label{e.1106} t_{k}(s_k)\to t(s)\,, \qquad u_{k}(s_k)\rightharpoonup u(s) \text{ in~$H^1(\Omega;\R^2)$}\,,\qquad z_{k}(s_k)\rightharpoonup z(s) \text{ in~$H^{1}(\Omega)$}\,, \end{equation} the latter being a consequence of the boundedness of~$z_{k}(\sigma)$ in~$H^{1}(\Omega)$ uniformly for~$\sigma\in[0,S]$. Inequality~\eqref{e.boundlip2} can be obtained from~\eqref{e.boundlip} by integration and by weak lower semicontinuity of the norms. It is easy to check that $s \mapsto t(s)$ is non-decreasing and, since $t_{k}(S) = T$ for every~$k$, that $t(S) = T$.
It remains to show that, along the same subsequence,~$u_{k}$ converges strongly in~$H^1(\Omega; \R^2)$ pointwise in $[0,S]$. Let us fix~$s\in[0,S]$. For every~$k$, let $i_{k}\in\{1,\ldots,k\}$ and~$j_{k}\in\mathbb{N} \cup \{-1\}$ be such that $s \in [s^{k}_{i_{k},j_{k}},s^{k}_{i_{k},j_{k}+1})$. We have to distinguish between three different cases: up to a further (non-relabelled) subsequence, either $s\in[s^{k}_{i_{k},-1},s^{k}_{i_{k},0})$, or $s\in[s^{k}_{i_{k},j_{k}},s^{k}_{i_{k},j_{k}+\frac{1}{2}})$, or $s\in[s^{k}_{i_{k},j_{k}+\frac{1}{2}},s^{k}_{i_{k},j_{k}+1})$ for every~$k$, see~\eqref{e.interpolantt}-\eqref{e.interpolantz}.
In the first case, for every index~$k$ we have that $u_{k}(s)=u^{k}_{i_{k}-1}= u_{k}(s^{k}_{i_{k}-1})$ and $z_{k}(s)=z^{k}_{i_{k}-1} = z_{k}(s^{k}_{i_{k}-1})$. If $i_k =1$ for infinitely many~$k$, then $u_{k}(s^{k}_{i_{k}-1})= u_{k}(0)=u_{0}$ and there is nothing to show. Let us therefore assume that $i_{k}\geq 2$ for every~$k$. Hence, by \eqref{e.minu2} we have
\begin{displaymath} u_{k}(s) = u^k_{i_k-1}=\argmin\,\{\F(t^{k}_{i_{k}-1},u,z^{k}_{i_k-1}):\, u\in\U\} = \argmin\,\{\F(t^{k}_{i_{k}-1},u,z_{k}(s)):\, u\in\U\} . \end{displaymath} Since $s^{k}_{i_{k},0}-s^{k}_{i_{k}-1}=\tau_{k}\to0$ as $k\to \infty$ and $t_{k}(s)=t^{k}_{i_{k}-1}+(s-s^{k}_{i_{k}-1})$, we have that $t^{k}_{i_{k}-1}\to t(s)$. Moreover, $z_{k}(s)\rightharpoonup z(s)$ weakly in~$H^{1}(\Omega)$ by~\eqref{e.1106}. Thus, applying Proposition~\ref{p.6} we deduce that $u_{k}(s) \to \bar{u}$ strongly in~$H^1(\Omega;\R^2)$, where $\bar{u} \in \argmin\,\{\F(t(s) ,u,z(s)):\, u\in\U\}$. Since $u_k (s) \weakto u(s)$ by \eqref{e.1106}, it follows that $\bar{u} = u (s)$ and that $u_{k}(s) \to u (s)$ strongly in~$H^1(\Omega;\R^2)$.
In the second case we have $s\in[s^{k}_{i_{k},j_{k}},s^{k}_{i_{k},j_{k}+\frac{1}{2}})$ for every~$k$. Here we want to apply Proposition~\ref{p.continuitygradflow}, and we use explicitly the parametrization~$\rho_{k}$ of the gradient flow~$\omega^{k}_{i_{k},j_{k}}$ from Corollary~\ref{c.gfu}. As a first step, we show that, up to a subsequence, the initial condition $\omega^{k}_{i_{k},j_{k}} (0) = u^k_{i_k,j_k}$ converges to some $u^*$ strongly in $H^1(\Omega;\R^2)$. If, up to a further subsequence, $j_{k}=0$ for every $k$, we have $\omega^{k}_{i_{k}, 0} (0) = u^k_{i_k,0} = u^k_{i_k-1}$, thus by \eqref{e.minu2} we know that \begin{equation*}
u^{k}_{i_{k}-1} = \argmin\,\{ \F ( t^{k}_{i_{k}-1}, u, z^{k}_{i_{k}-1}):\, u\in\U\} \,. \end{equation*} Since $ t^{k}_{i_{k}-1} = t_{k}(s) - \tau_k$, we have that $t^{k}_{i_{k}-1}\to t(s)$ as $k\to\infty$, while, along a subsequence, we have $z^{k}_{i_{k}-1} \rightharpoonup z^*$ weakly in~$H^{1}(\Omega)$ for some $z^*\in \Z$. Therefore, again by Proposition~\ref{p.6} we get that $u^{k}_{i_{k}, 0} \to u^*$ in~$H^{1}(\Omega;\R^{2})$.
In a similar way, if~$j_{k}\geq 1$ for every $k$ large enough, we have~$\omega^{k}_{i_{k}, j_{k}}(0)= u^{k}_{i_{k}, j_{k}}$ and, using~\eqref{e.minu}, we get \begin{equation*}
u^{k}_{i_{k},j_{k}} = \argmin\,\{ \F ( t^{k}_{i_{k}}, u, z^{k}_{i_{k},j_{k}-1}):\, u\in\U\} \,. \end{equation*} Then $t^{k}_{i_{k}} =t_{k}(s) \to t(s)$ as $k\to\infty$, while, up to a subsequence, $z^{k}_{i_{k},j_{k}-1} \rightharpoonup z^*$ weakly in~$H^{1}(\Omega)$ for some $z^*\in \Z$. As above, by Proposition~\ref{p.6} we conclude that $u^{k}_{i_{k}, j_{k}} \to u^{*}$ in~$H^{1}(\Omega;\R^{2})$. In all the cases, we have that the initial condition of the reparametrized gradient flow~$\omega^{k}_{i_{k},j_{k}}$ converges in~$H^1(\Omega;\R^2)$ to some~$u^*$. Now, let us consider the parametrization $\rho_{k}(s - s^{k}_{i_{k},j_{k}})\in [0,+\infty)$. Up to a subsequence, we may assume that~$\rho_{k}(s - s^{k}_{i_{k},j_{k}})\to \overline{\rho}\in[0,+\infty]$. Since $t_{k}(s)\to t(s)$, $z_{k}(s)\rightharpoonup z(s)$ weakly in~$H^{1}(\Omega)$, and $\omega^{k}_{i_{k},j_{k}}(0) \to u^*$ in~$H^1(\Omega;\R^2)$, from Corollary~\ref{c.5} we deduce that $u_{k}(s) = \omega^{k}_{i_{k},j_{k}}(s - s^{k}_{i_{k},j_{k}})$ admits a strong limit~$\widetilde{u}$ in~$H^{1}(\Omega;\R^{2})$. In view of~\eqref{e.1106} $\widetilde{u}= u(s)$ and $u_{k}(s) \to u (s)$ strongly in~$H^1(\Omega;\R^2)$.
Finally, let us consider the case $s\in[s^{k}_{i_{k},j_{k}+\frac{1}{2}},s^{k}_{i_{k},j_{k}+1})$ for every~$k$. Then, $t_{k}(s)=t^{k}_{i_{k}}$ and $u_{k}(s)=u^{k}_{i_{k},j_{k}+1}$. By construction of~$u^{k}_{i_{k},j_{k}+1}$ in~\eqref{e.minu}, we have that \begin{displaymath} u_{k}(s)=u^{k}_{i_{k},j_{k}+1}=\argmin\,\{\F(t^{k}_{i_{k}},u,z^{k}_{i_{k},j_{k}}):\, u\in\U\}\,. \end{displaymath} Again, we know that $t^{k}_{i_k} \to t(s)$ and that, up to subsequence, $z^{k}_{i_{k},j_{k}}\rightharpoonup z^*$ weakly in~$H^{1}(\Omega)$ for some~$z^* \in\Z$. We are in a position to apply again Proposition~\ref{p.6}, which implies the strong convergence of~$u_{k}(s)$ to~$u(s)$ in~$H^1(\Omega;\R^2)$.
Combining the three cases described above, we have shown that every subsequence of~$u_{k}(s)$ admits a further subsequence converging to~$u(s)$ in~$H^1(\Omega;\R^2)$. Hence, the whole sequence~$u_{k}(s)$ converges to~$u(s)$ in~$H^1(\Omega;\R^2)$ for every $s\in [0,S]$. Noticing that, by~\eqref{e.boundlip}, $\| u_{k}( s_{k}) - u_{k}(s) \|_{H^{1}} \leq |s_{k} - s|$, we conclude the proof.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
We are now in a position to prove the lower energy-dissipation inequality for the triple~$(t,u,z)$.
\begin{proposition}\label{p.3} Let $g$, $u_{0}$, and~$z_{0}$ be as in Theorem \ref{t.1}. Let $(t,u,z)\colon[0,S]\to[0,T]\times\U\times\Z$ be given by Proposition \ref{p.compactness}. Then $(t,u,z)$ satisfies~$(a)$-$(d)$ of Theorem \ref{t.1} and for every~$s\in[0,S]$ it holds \begin{equation}\label{e.3e} \begin{split}
\F(t(s),u(s),z(s)) \leq& \,\F(0,u_{0},z_{0})-\int_{0}^{s}|\partial_{u}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| u'(\sigma) \|_{H^1} \,\mathrm{d}\sigma \\
&-\int_{0}^{s}|\partial^-_{z}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| z'(\sigma) \|_{L^2} \,\mathrm{d}\sigma +\int_{0}^{s}\mathcal{P}(t(\sigma),u(\sigma),z (\sigma)) \,t'(\sigma)\,\mathrm{d}\sigma \,. \end{split} \end{equation}
\end{proposition}
\begin{proof}
We have already seen that the function~$s\mapsto t(s)$ is non-decreasing and such that~$t(S)=T$. Thus, condition~$(b)$ is satisfied.
From inequality~\eqref{e.boundlip2} we get~$(a)$. Moreover, since $s\mapsto z_{k}(s)$ is non-increasing for every $k\in\mathbb{N}$, the pointwise limit $s\mapsto z(s)$ is non-increasing as well, so that~$(c)$ holds.
Let us now show property~$(d)$. For every point $s\in[0,S]$ of continuity of~$(t,u,z)$, we can find a sequence $s_{m}\in[0,S]$ such that $s_{m}\to s$ and $t(s_{m})\neq t(s)$ for every~$m$. Without loss of generality, we may assume that $s_{m} \leq s$. Since~$t_{k}$ converges pointwise to~$t$, we can construct a subsequence~$k_{m}$ such that~$t_{k_{m}}(s_{m}) \neq t_{k_{m}}(s)$ for every~$m$. By construction of the interpolation functions~$t_{k_{m}}$ (see~\eqref{e.interpolantt}-\eqref{e.interpolantz}), there exists a sequence of indices~$i_{m}\in\{1,\ldots , k_{m}\}$ such that, up to a further subsequence, one of the following conditions is satisfied: \begin{displaymath} s \in [s^{k_{m}}_{i_{m}-1} , s^{k_{m}}_{i_{m}, 0}) \qquad \text{or}\qquad s_{m} \leq s^{k_{m}}_{i_{m}-1} < s^{k_{m}}_{i_{m}, 0} \leq s \qquad\text{or}\qquad
s^{k_{m}}_{i_{m}-1} < s_{m}\leq s^{k_{m}}_{i_{m}, 0} \leq s\,. \end{displaymath}
In any case, since $|s^{k_{m}}_{i_{m}, 0} - s^{k_{m}}_{i_{m}-1}| \leq \tau_{k_{m}}$ and $s_{m} \to s$, we have that $s^{k_{m}}_{i_{m}-1} \to s$ as $m\to\infty$. In view of~\eqref{e.77} of Proposition~\ref{p.7}, we know that \begin{equation}\label{e.88} \begin{split}
& |\partial_{u}\F| \big (t_{k_{m}}(s^{k_{m}}_{i_{m}-1}), u_{k_{m}}(s^{k_{m}}_{i_{m}-1}),z_{k_{m}}(s^{k_{m}}_{i_{m}-1}) \big ) = 0 \,,\\
& |\partial_{z}^{-}\F| \big( t_{k_{m}}(s^{k_{m}}_{i_{m}-1}), u_{k_{m}}(s^{k_{m}}_{i_{m}-1}),z_{k_{m}}(s^{k_{m}}_{i_{m}-1}) \big) = 0 \,. \end{split} \end{equation} By Proposition~\ref{p.compactness} we know that $t_{k_{m}}(s^{k_{m}}_{i_{m}-1}) \to t(s)$ in~$[0,T]$, $u_{k_{m}}(s^{k_{m}}_{i_{m}-1})\to u(s)$ in~$H^1(\Omega; \R^2)$, and $z_{k_{m}}(s^{k_{m}}_{i_{m}-1})\rightharpoonup z(s)$ weakly in~$H^{1}(\Omega)$. Hence, applying Lemmata~\ref{l.2} and~\ref{l.3} and passing to the limit in~\eqref{e.88} as~$m\to \infty$ we get the equilibrium conditions~$(d)$.
The proof of the lower energy-dissipation inequality~\eqref{e.3e} is divided into two steps. Clearly, the starting point is \eqref{e.78}, i.e., \begin{equation}\label{e.78bis} \begin{split}
\F(t_{k}(s),&u_{k}(s), z_{k}(s))\leq\F(0,u_{0},z_{0})-\int_{0}^{s}|\partial_{u}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| u'_k(\sigma) \|_{H^1} \, \mathrm{d} \sigma \\
&\quad-\int_{0}^{s}|\partial_{z}^{-}\F|(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma ))\,\| z'_k(\sigma) \|_{L^2}\,\mathrm{d} \sigma + \int_{0}^{s} \mathcal{P}(t_{k}( \sigma ),u_{k}( \sigma ),z_{k}( \sigma )) \,t'_{k}( \sigma )\,\mathrm{d} \sigma \,. \end{split} \end{equation}
{\bf Step 1: Slopes.} By \eqref{e.1106} and Lemma \ref{l.lscFE} we get $$
\F ( t(s) , u(s) , z(s) ) \le \liminf_{k \to \infty} \F ( t_k (s) , u_k(s) , z_k(s)) . $$ Let us take the limsup of the right-hand side of \eqref{e.78bis}. The inequality \begin{align}
\int_{0}^{s}|\partial_{u}\F|& (t(\sigma),u(\sigma),z(\sigma))\, \| u'(\sigma) \|_{H^1} + |\partial^-_{z}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| z'(\sigma) \|_{L^2} \,\mathrm{d}\sigma \label{e.balder4} \\
& \le
\liminf_{k \to \infty}
\int_{0}^{s}|\partial_{u}\F|(t_k (\sigma),u_k(\sigma),z_k(\sigma))\, \| u'_k(\sigma) \|_{H^1} + |\partial^-_{z}\F|(t_k(\sigma),u_k(\sigma),z_k(\sigma))\, \| z'_k(\sigma) \|_{L^2} \,\mathrm{d}\sigma \nonumber \end{align} follows, for instance, from \cite[Theorem 3.1]{Balder_RCMP85}. Let us see how our setting fits into the framework and the notation of \cite{Balder_RCMP85}. We set $X = [0,T] \times H^1(\Omega;\R^{2})
\times \{z\in H^{1}(\Omega;[0,1]):\, \|z\|_{H^{1}}\leq R\}$ and $\Xi = H^1 (\Omega ; \R^2) \times L^2 (\Omega)$.
The space~$X$ is endowed with the strong topology in~$[0,T]\times H^{1}(\Omega;\R^{2})$ and the weak topology in the ball of~$H^{1}(\Omega)$. Since the latter is metrizable,~$X$ is a metric space. The space~$\Xi$ is endowed with the weak topology.
For $x = ( t , u , z)$ and for $\xi = ( u' , z' )$ the integrand is $$ l ( x , \xi) = \left\{ \begin{array}{ll}
| \partial_u \F | ( t , u ,z) \ \| u' \|_{H^1} + |
\partial_z^- \F | ( t , u ,z) \ \| z' \|_{L^2} & \text{if $\|z'\|_{L^{2}} \neq 0$}\,,\\[1mm]
| \partial_u \F | ( t , u ,z) \ \| u' \|_{H^1} & \text{if $\|z'\|_{L^{2}} =0$}\,. \end{array}\right. $$
Clearly $l \ge 0$. Let us check that $l ( x ,\cdot)=l(t,u,z,\cdot ,\cdot)$ is convex in $\Xi = H^1(\Omega;\R^2) \times L^2(\Omega)$. If $| \partial_z^- \F| (t,u,z)=+\infty$, then $$ l ( x , \xi) = \left\{ \begin{array}{ll}
+\infty & \text{if $\|z'\|_{L^{2}} \neq 0$}\,,\\[1mm]
| \partial_u \F | ( t , u ,z) \ \| u' \|_{H^1} & \text{if $\|z'\|_{L^{2}} =0$}
\end{array}\right. = | \partial_u \F | ( t , u ,z) \ \| u' \|_{H^1} + \chi_{\{ z' = 0\} }\,, $$
where~$\chi$ denotes the indicator function. Hence, $l(x,\cdot)$ is convex, since it is the sum of convex functions. If~$|\partial_{z}^{-}\F|(t,u,z)<+\infty$, then $$
l ( x , \xi) = | \partial_u \F | ( t , u ,z) \ \| u' \|_{H^1} + | \partial_z^- \F | ( t , u ,z) \ \| z' \|_{L^2} $$
which is convex w.r.t.~$(u',z')$. We now show that $l ( \cdot , \cdot)$ is sequentially lower semicontinuous in $X \times \Xi$. Let $(u'_{k}, z'_{k}) \rightharpoonup (u', z')$ (weakly) in~$\Xi$ and let $(t_{k},u_{k}, z_{k}) \to (t, u, z)$ in the metric of~$X$, that is, $t_{k}\to t$, $u_{k} \to u$ in~$H^{1}(\Omega;\R^{2})$, and $z_{k}\rightharpoonup z$ in~$H^{1}(\Omega)$. We notice that by Lemma~\ref{l.3} and the fact that~$|\partial_{u} \F|<+\infty$ on~$X$, \begin{equation}\label{e.balder1}
|\partial_{u} \F| (t,u,z)\, \|u'\|_{H^{1}} \leq \liminf_{k\to\infty} \, |\partial_{u} \F| (t_{k}, u_{k}, z_{k}) \, \|u'_{k}\|_{H^{1}}\,. \end{equation}
If~$\| z' \|_{L^{2}} =0$, then~\eqref{e.balder1} is enough to show lower semicontinuity of~$l$. If~$\|z'\|_{L^{2}} \neq 0$ and~$|\partial_{z}^{-} \F|( t,u,z)=+\infty$, by the weak lower semicontinuity of the~$L^{2}$-norm we have that $\|z'_{k}\|_{L^{2}}> \delta>0$ for some positive~$\delta$ and for every~$k$ sufficiently large. Thus, \begin{equation}\label{e.balder2}
\liminf_{k\to\infty} \, |\partial_{z}^{-} \F| (t_{k}, u_{k}, z_{k})\, \|z'_{k}\|_{L^{2}} \geq \delta \liminf_{k\to\infty} \, |\partial_{z}^{-} \F| (t_{k}, u_{k}, z_{k}) =+\infty\,, \end{equation}
where the last equality follows from Lemma~\ref{l.2}. If, instead,~$\|z'\|_{L^{2}} \neq 0$ and $|\partial_{z}^{-} \F|( t,u,z) < +\infty$, then Lemma~\ref{l.2} implies \begin{equation}\label{e.balder3}
|\partial_{z}^{-} \F| (t,u,z)\, \|z'\|_{L^{2}} \leq \liminf_{k\to\infty} \, |\partial_{z}^{-} \F| (t_{k}, u_{k}, z_{k}) \, \|z'_{k}\|_{L^{2}} \,. \end{equation} Collecting inequalities~\eqref{e.balder1}-\eqref{e.balder3} we deduce the lower semicontinuity of~$l$.
By Proposition \ref{p.compactness} we know that $x_k = ( t_k , u_k , z_k)$ converges pointwise in~$[0,S]$ to $x = ( t , u , z)$ w.r.t.~the metric of $X$, and thus in measure. Moreover, again by Proposition~\ref{p.compactness}, we have that $\xi_k = ( u'_k , z'_k ) $ converges to $ \xi = (u' , z')$ weakly* in $L^\infty ( (0,S) ; H^1 (\Omega; \R^2) \times L^2 (\Omega) )$ and thus weakly in~$L^1 ( (0,S) ; H^1 (\Omega; \R^2) \times L^2 (\Omega) )$. Hence,~\eqref{e.balder4} holds.
{\bf Step 2: Power.} We claim that \begin{equation}\label{e.91} \int_{0}^{s} \mathcal{P} ( t( \sigma ), u( \sigma ), z( \sigma ) )\, t'( \sigma )\,\mathrm{d} \sigma = \lim_{k\to\infty}\,\int_{0}^{s} \mathcal{P} ( t_{k}( \sigma ), u_{k}( \sigma ), z_{k}( \sigma ) )\, t'_{k}( \sigma )\,\mathrm{d} \sigma \,. \end{equation}
Let us fix $s\in[0,S]$. By definition~\eqref{e.power} of~$\mathcal{P}$ we have that \begin{displaymath} \int_{0}^{s} \mathcal{P}(t_{k}(\sigma), u_{k}(\sigma), z_{k}(\sigma) )\, t'_{k}(\sigma) \,\mathrm{d}\sigma = \int_{0}^{s} \int_{\Omega} \partial_{\strain} W \big( z_{k}(\sigma) , \strain (u_{k}(\sigma) + g(t_{k}(\sigma)) ) \big) {\,:\,} \strain ( \dot{g}(t_{k} (\sigma)) )\, t'_{k}(\sigma) \,\mathrm{d} x\,\mathrm{d}\sigma\,. \end{displaymath} In order to show~\eqref{e.91}, we will prove that $\strain ( \dot{g}(t_{k} (\cdot)) ) t'_{k}(\cdot) \rightharpoonup \strain ( \dot{g}(t (\cdot)) ) t'(\cdot)$ in $L^{q}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$ and that $ \partial_{\strain} W ( z_{k}(\cdot) , \strain (u_{k}(\cdot) + g(t_{k}(\cdot)) )) \to \partial_{\strain} W ( z(\cdot) , \strain (u(\cdot) + g(t(\cdot)) ))$ in $L^{q'}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$.
Let us start with the latter. Remember that $$ \partial_{\strain} W \big( z , \strain (u + g ) \big) = 2 h(z) \big( \mu \strain_{d} ( u +g) + \kappa \strain_{v}^{+} (u+g) \big) - 2\kappa \strain_{v}^{-} ( u + g) . $$ For every $\sigma\in[0,S]$, by Lemma~\ref{l.HMWw} and since~$t_{k}\to t$,~$u_{k}\to u$ in~$H^{1}(\Omega;\R^{2})$, and~$z_{k} \to z$ in~$L^{2}(\Omega)$, we have that, up to a not relabelled subsequence, $ \partial_{\strain} W ( z_{k}(\sigma) , \strain (u_{k}(\sigma) + g(t_{k}(\sigma)) )) \to \partial_{\strain} W ( z(\sigma) , \strain (u(\sigma) + g(t(\sigma)) ))$ a.e.~in~$\Omega$. Since~$z_{k}(\sigma)$ takes values in~$[0,1]$, we can apply~$(c)$ of Lemma~\ref{l.HMWw} to deduce that \begin{equation}\label{e.pow2}
\big| \partial_{\strain} W \big( z_{k}(\sigma) , \strain (u_{k}(\sigma) + g(t_{k}(\sigma)) )\big)\big|\leq C |\strain (u_{k}(\sigma)+ g(t_{k}(\sigma)) ) | \end{equation} for some positive constant~$C$ independent of~$k$. Therefore, by dominated convergence we get that $\partial_{\strain} W ( z_{k}(\sigma) , \strain (u_{k}(\sigma) + g(t_{k}(\sigma)) ))$ converges to~$\partial_{\strain} W ( z(\sigma) , \strain (u(\sigma) + g(t(\sigma)) ))$ strongly in~$L^{2}(\Omega;\mathbb{M}^{2}_{s})$. Moreover, since~$u_{k}$ and~$g\circ t_{k}$ are bounded in $L^{\infty}([0,S];H^{1}(\Omega;\R^{2}))$, in view of~\eqref{e.pow2} and of the previous convergence we deduce that $ \partial_{\strain} W ( z_{k}(\cdot) , \strain (u_{k}(\cdot) + g(t_{k}(\cdot)) ))$ converges to $ \partial_{\strain} W ( z(\cdot) , \strain (u(\cdot) + g(t(\cdot)) ))$ strongly in $L^{q'}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$ (actually, in $L^{\nu}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$ for every $\nu<+\infty$).
As for $\strain ( \dot{g}(t_{k} (\cdot)) ) \,t'_{k}(\cdot)$, we proceed by a density argument. Indeed, by density of $C^{1}_{c}([0,T]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$ in $L^{q}([0,T]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$, for every $\delta>0$ there exists~$E\in C^{1}_{c}([0,T]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$ such that \begin{equation}\label{e.pow3}
\| E - \strain ( \dot{g} ) \|_{L^{q}([0,T]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))}\leq \delta\,. \end{equation} Using a change of variables, that~$t'_{k}(\sigma)\leq 1$ for a.e.~$\sigma\in[0,S]$, and~\eqref{e.pow3}, we have that \begin{equation}\label{e.pow4} \begin{split}
\int_{0}^{S} \| E (t_{k} (\sigma))\,t'_{k}(\sigma) - \strain (\dot{g} ( t_{k}(\sigma)))\, t'_{k}(\sigma)\|_{L^2}^{q}\,\mathrm{d}\sigma &\leq
\int_{0}^{S} \| E (t_{k} (\sigma)) - \strain (\dot{g} ( t_{k}(\sigma)))\|_{L^2}^{q} \, t'_{k}(\sigma)\,\mathrm{d}\sigma\\
& \leq \int_{0}^{T} \|E (t) - \strain (\dot{g}(t))\|_{L^2}^{q}\,\mathrm{d} t \leq \delta^{q}\,. \end{split} \end{equation} The same inequality holds for $E(t(\cdot))\,t'(\cdot) - \strain(\dot{g}(t(\cdot)))\, t'(\cdot)$.
Let us now fix $\varphi \in L^{q'}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$. Simply by adding and subtracting $(E\circ t_{k})\, t'_{k}$ and~$(E\circ t)\, t'$, we have that \begin{equation}\label{e.pow5} \begin{split} \int_{0}^{S} \int_{\Omega} \Big( \strain (\dot{g} ( t_{k}(\sigma)))\, t'_{k}(\sigma) & - \strain (\dot{g} ( t(\sigma)))\, t'(\sigma) \Big) {\,:\,} \varphi\,\mathrm{d} x\,\mathrm{d}\sigma \\ & = \int_{0}^{S} \int_{\Omega} \Big( \strain (\dot{g} ( t_{k}(\sigma)))\, t'_{k}(\sigma) - E (t_{k} (\sigma))\,t'_{k}(\sigma)\Big) {\,:\,} \varphi\,\mathrm{d} x\,\mathrm{d}\sigma\\ & \quad + \int_{0}^{S}\int_{\Omega} \Big( E (t_{k} (\sigma))\,t'_{k}(\sigma) - E(t(\sigma))\,t'(\sigma) \Big ) {\,:\,} \varphi\,\mathrm{d} x\,\mathrm{d}\sigma \\ & \quad + \int_{0}^{S} \int_{\Omega} \Big( E(t(\sigma))\,t'(\sigma) - \strain (\dot{g} ( t(\sigma)))\, t'(\sigma) \Big) {\,:\,} \varphi\,\mathrm{d} x\,\mathrm{d}\sigma\,. \end{split} \end{equation} By~\eqref{e.pow4}, the first term on the right-hand side of~\eqref{e.pow5} can be estimated by \begin{displaymath}
\int_{0}^{S} \int_{\Omega} \Big( \strain (\dot{g} ( t_{k}(\sigma)))\, t'_{k}(\sigma) - E (t_{k} (\sigma))\,t'_{k}(\sigma)\Big) {\,:\,} \varphi\,\mathrm{d} x\,\mathrm{d}\sigma \leq \delta\, \|\varphi\|_{L^{q'}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))}\,. \end{displaymath} The same estimate holds for the third term on the right-hand side of~\eqref{e.pow5}. Recalling that $t_{k} \rightharpoonup t$ weakly* in~$W^{1,\infty}(0,S)$ and that $E\in C^{1}_{c}([0,T]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$, we get that $E \circ t_{k} \to E\circ t$ strongly in $L^{q}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$ and $t'_{k}\varphi \rightharpoonup t' \varphi$ weakly in $L^{q'}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$, so that \begin{displaymath} \lim_{k\to\infty}\, \int_{0}^{S}\int_{\Omega} \Big( E (t_{k} (\sigma))\,t'_{k}(\sigma) - E(t(\sigma))\,t'(\sigma) \Big ) {\,:\,} \varphi\,\mathrm{d} x\,\mathrm{d}\sigma = 0\,. \end{displaymath} Collecting the above inequalities, taking the modulus of~\eqref{e.pow5} and passing to the limsup as $k\to \infty$ we obtain \begin{displaymath}
\limsup_{k\to\infty}\, \left| \int_{0}^{S} \int_{\Omega} \Big( \strain (\dot{g} ( t_{k}(\sigma)))\, t'_{k}(\sigma) - \strain (\dot{g} ( t(\sigma)))\, t'(\sigma) \Big) {\,:\,} \varphi\,\mathrm{d} x\,\mathrm{d}\sigma \right| \leq 2\delta\, \|\varphi\|_{L^{q'}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))}\,. \end{displaymath} Hence, passing to the limit as $\delta\to 0$, by the arbitrariness of~$\varphi\in L^{q'}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$ we deduce that $\strain ( \dot{g}(t_{k} (\cdot)) ) \,t'_{k}(\cdot)$ converges to $\strain ( \dot{g}(t (\cdot)) ) \,t'(\cdot)$ weakly in $L^{q}([0,S]; L^{2}(\Omega;\mathbb{M}^{2}_{s}))$, and this concludes the proof of~\eqref{e.91}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\subsection{Upper energy-dissipation inequality} \label{s.inequality}
This section is devoted to the proof of the inequality \begin{displaymath} \begin{split}
\F(t(s),u(s),z(s)) \geq & \,\F(0,u_{0},z_{0})-\int_{0}^{s}|\partial_{u}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| u' (\sigma) \|_{H^1} \,\mathrm{d}\sigma \\
&-\int_{0}^{s}|\partial^-_{z}\F|(t(\sigma),u(\sigma),z(\sigma))\, \| z' (\sigma) \|_{L^2} \,\mathrm{d}\sigma +\int_{0}^{s}\mathcal{P}(t(\sigma),u(\sigma),z (\sigma)) \,t'(\sigma)\,\mathrm{d}\sigma \end{split} \end{displaymath} for the triple~$(t,u,z)$ defined in Proposition~\ref{p.compactness}.
The function~$z$ belongs to~$W^{1,\infty}([0,S]; L^{2}(\Omega)) \cap L^{\infty}([0,S];H^{1}(\Omega))$. Therefore,~$z$ is differentiable a.e.~in~$(0,S)$ with~$z'(s)\in L^{2}(\Omega)$. We set $z'(s)=0$ at every~$s\in(0,S]$ at which~$z$ is not differentiable. Clearly, this does not change the differentiability properties of~$z$, the representation \begin{displaymath} z(s)=z_{0}+\int_{0}^{s} z'(\sigma)\,\mathrm{d}\sigma, \end{displaymath} and the energy-dissipation inequality above. In what follows we need the following auxiliary piecewise constant interpolation functions \begin{eqnarray} && \underline{t}_{k}(\sigma):= \left\{ \begin{array}{ll} t^{k}_{i-1} & \text{if $\sigma \in [s^{k}_{i,-1}, s^{k}_{i,\frac{1}{2}})$}\,, \label{e.undert}\\ [2mm] t^{k}_{i} &\text{if $\sigma \in [ s^{k}_{i,\frac{1}{2}}, s^{k}_{i} )$} \,, \end{array} \right. \\[2mm] && \underline{u}_{k}(\sigma):= \left\{ \begin{array}{lll} u^{k}_{i-1} & \text{if $\sigma\in[s^{k}_{i,-1},s^{k}_{i,\frac{1}{2}})$}\,,\\[2mm] u^{k}_{i,j+1} & \text{if $\sigma\in[s^{k}_{i,j+\frac{1}{2}}, s^{k}_{i,j+\frac{3}{2}})$, $j\geq 0$}\,, \end{array}\right. \label{e.underu} \end{eqnarray} where~$t^{k}_{i}$,~$u^{k}_{i,j}$,~$s^{k}_{i,j}$, and~$s^{k}_{i}$, $j, k\in\mathbb{N}$, $i=0,\ldots, k$, have been defined in~Section~\ref{s.4.1}.
We now discuss the convergence of~$\underline{t}_{k}$ and~$\underline{u}_{k}$.
\begin{lemma}\label{r.t} The sequence~$\underline{t}_{k}$ converges pointwise in $[0,S]$ and weakly* in $L^{\infty}(0,S)$ to the function~$t(\cdot)$ defined in Proposition~\ref{p.compactness}. \end{lemma}
\begin{proof}
Recalling the definition of the affine interpolation function~$t_{k}$ in~\eqref{e.interpolantt}-\eqref{e.interpolantz}, we have that $\underline{t}_{k}(\sigma) = t_{k}(\sigma)$ if~$\sigma\in [s^{k}_{i,\frac{1}{2}}, s^{k}_{i}]$, while $|\underline{t}_{k}(\sigma) - t_{k}(\sigma)| \leq \tau_{k}$ if $\sigma \in [s^{k}_{i,-1}, s^{k}_{i,\frac{1}{2}}]$. Since $\tau_{k}\to 0$, the claim follows from the convergence of~$t_{k}$ to~$t$ established in Proposition~\ref{p.compactness}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\begin{lemma}\label{p.nonstationaryz} Let $\sigma\in (0,S)$ be such that $z'(\sigma)\neq 0$. Then, $\underline{u}_{k}(\sigma)\to u(\sigma)$ in $H^{1}(\Omega;\R^{2})$. \end{lemma}
\begin{proof} We actually show that there exist a subsequence~$k_{m}$ and a sequence $\sigma_{m} \to \sigma$ such that $\sigma_{{m}} \leq \sigma$ and $u_{k_m}(\sigma_{m})=\underline{u}_{k_{m}}(\sigma)$. The claim then follows from this property and Proposition~\ref{p.compactness}.
Since~$ z'(\sigma) \neq 0$, there exists~$\delta>0$ such that $z(s)\neq z(\sigma)$ for every~$s\in[\sigma-\delta, \sigma)$. Let us fix a sequence $\delta_{m}\searrow 0$. Since, by Proposition~\ref{p.compactness}, $z_{k}$ converges to~$z$ in~$L^{2}(\Omega)$ pointwise in~$[0,S]$, for every~$m$ we can find~$k_{m}> k_{m-1}$ such that \begin{displaymath} z_{k_{m}}(\sigma - \delta_{m}) \neq z_{k_{m}}(\sigma). \end{displaymath}
We deduce that there exists a point~$\sigma^{-}_{m}\in (\sigma - \delta_{m}, \sigma)$ with $z'_{k_{m}}(\sigma_{m}^{-})\neq 0$. Moreover, by definition of the interpolation function~$z_{k}$ in~\eqref{e.interpolantt}-\eqref{e.interpolantz}, $z_{k_{m}}$ changes only in intervals of the form~$[s^{k_{m}}_{i,j+\frac{1}{2}} , s^{k_{m}}_{i,j+1})$. Therefore, there exist suitable indices $i_{m},j_{m}$ such that $\sigma_{m}^{-}\in [s^{k_{m}}_{i_{m}, j_{m}+\frac{1}{2}} , s^{k_{m}}_{i_{m}, j_{m}+1})$.
For every~$m$, there exist two indices~$\lambda_{m}, \gamma_{m}$ such that $\sigma\in[s^{k_{m}}_{\lambda_{m},\gamma_{m}}, s^{k_{m}}_{\lambda_{m}, \gamma_{m}+1})$. We now distinguish three different cases, according to the value of~$\gamma_{m}$ (along an infinite sequence of indices $m_{n}$ not explicitly indicated): \begin{itemize} \item if~$\gamma_{m} = -1$, then $\sigma\in [s^{k_{m}}_{\lambda_{m}-1}, s^{k_{m}}_{\lambda_{m},0})$ and $\underline{u}_{k_{m}}(\sigma)= u^{k_{m}}_{\lambda_{m}-1} = u_{k_{m}}(\sigma)$, so that we can simply set $\sigma_{{m}}:=\sigma$;
\item if~$\gamma_{m}\geq 0$ and~$\sigma\in[s^{k_{m}}_{\lambda_{m},\gamma_{m}+\frac{1}{2}}, s^{k_{m}}_{\lambda_{m},\gamma_{m}+1})$, then $\underline{u}_{k_{m}}(\sigma) = u^{k_{m}}_{\lambda_{m},\gamma_{m}+1} = u_{k_{m}}(\sigma)$, and, as before, we set $\sigma_{m} := \sigma$;
\item if ~$\gamma_{m}\geq 0$ and~$\sigma\in [ s^{k_{m}}_{\lambda_{m},\gamma_{m}}, s^{k_{m}}_{\lambda_{m}, \gamma_{m}+\frac{1}{2}})$, then~$\underline{u}_{k_{m}}(\sigma) = u^{k_{m}}_{\lambda_{m},\gamma_{m}} = u_{k_{m}}(s^{k_{m}}_{\lambda_{m},\gamma_{m}})$. Since~$\sigma_{m}^{-} < \sigma$ with $\sigma_{m}^{-}\in [s^{k_{m}}_{i_{m},j_{m} + \frac{1}{2}}, s^{k_{m}}_{i_{m},j_{m}+1})$, we have that either~$i_{m}<\lambda_{m}$ or~$i_{m}=\lambda_{m}$ and $j_{m}<\gamma_{m}$; in any case \begin{displaymath} \sigma_{m}^{-} \leq s^{k_{m}}_{i_{m},j_{m}+1} \leq s^{k_{m}}_{\lambda_{m}, \gamma_{m}} \leq \sigma\,. \end{displaymath} Since~$\sigma_{m}^{-}\to \sigma$, we also deduce that $s^{k_{m}}_{\lambda_{m},\gamma_{m}} \to \sigma$, so that we set $\sigma_{m} := s^{k_{m}}_{\lambda_{m},\gamma_{m}}$.
\end{itemize}
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\begin{lemma} \label{p.almoststationaryz} Let $\sigma \in (0,S]$. Assume that there exists a sequence $\sigma_{m}\nearrow \sigma$ such that $\sigma_{m}< \sigma$ and $z'(\sigma_{m})\neq 0$ for every~$m$. Then, $\underline{u}_{k}(\sigma)\to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$. \end{lemma}
\begin{proof}
By Lemma~\ref{p.nonstationaryz}, $\underline{u}_{k}(\sigma_{m}) \to u(\sigma_{m})$ in~$H^{1}(\Omega;\R^{2})$ as $k\to\infty$, for every $m\in\mathbb{N}$. Since~$u$ is continuous with values in~$H^{1}(\Omega;\R^{2})$, we have $u(\sigma_{m})\to u(\sigma)$, and by a diagonal argument we can extract a subsequence~$k_{m}$ such that $\underline{u}_{k_{m}}(\sigma_{m}) \to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$ as $m\to\infty$.
To conclude that also $\underline{u}_{k_{m}}(\sigma)\to u(\sigma)$ we discuss the relative position of~$\sigma_{m}$ and~$\sigma$. As in the previous lemma, the following cases may occur: \begin{itemize} \item if $\sigma \in[s^{k_{m}}_{\lambda_{m},-1},s^{k_{m}}_{\lambda_{m},0})$, then $\underline{u}_{k_{m}}(\sigma) = u^{k_{m}}_{\lambda_{m}-1}= u_{k_{m}}(\sigma)$ and $u_{k_{m}}(\sigma)\to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$;
\item if $\sigma \in[s^{k_{m}}_{\lambda_{m},\gamma_{m}+\frac{1}{2}},s^{k_{m}}_{\lambda_{m},\gamma_{m}+1})$, then $\underline{u}_{k_{m}}(\sigma)=u^{k_{m}}_{\lambda_{m}, \gamma_{m}+1} = u_{k_{m}}(\sigma)$ and $u_{k_{m}}(\sigma)\to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$;
\item if $\sigma_{m},\sigma \in[s^{k_{m}}_{\lambda_{m},\gamma_{m}},s^{k_{m}}_{\lambda_{m},\gamma_{m}+\frac{1}{2}})$, then $\underline{u}_{k_{m}}(\sigma)=\underline{u}_{k_{m}}(\sigma_{m}) \to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$;
\item if $\sigma \in[s^{k_{m}}_{\lambda_{m},\gamma_{m}}, s^{k_{m}}_{\lambda_{m},\gamma_{m}+\frac{1}{2}})$ and $\sigma_{m}\in [s^{k_{m}}_{i_{m},j_{m}}, s^{k_{m}}_{i_{m},j_{m}+1})$ with~$(i_{m},j_{m})\neq (\lambda_{m},\gamma_{m})$ then, being $\sigma_{m}<\sigma$, we have that either~$i_{m}<\lambda_{m}$ or~$i_{m}=\lambda_{m}$ and $j_{m}<\gamma_{m}$. In both cases we have $\sigma_{m} \leq s^{k_{m}}_{i_{m}, j_{m}+1} \leq s^{k_{m}}_{\lambda_{m},\gamma_{m}} \leq \sigma$. Thus, the sequence of nodes~$s^{k_{m}}_{\lambda_{m},\gamma_{m}}$ converges to~$\sigma$. We deduce that $\underline{u}_{k_{m}}(\sigma)= u_{k_{m}}(s^{k_{m}}_{\lambda_{m},\gamma_{m}}) \to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$. \end{itemize}
Since the above argument applies to any subsequence~$k_{j}$ of~$k$, the convergence of the whole sequence~$\underline{u}_{k}(\sigma)$ to~$u(\sigma)$ follows.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
Let us define the set \begin{equation}\label{e.U} U := \{ \sigma\in(0,S] : \,\text{there exists a sequence $\sigma_{m}\nearrow \sigma$ such that $\sigma_{m} \leq \sigma$ and $z'(\sigma_{m})\neq 0$} \} \,. \end{equation} In view of Lemmata~\ref{p.nonstationaryz} and~\ref{p.almoststationaryz}, for every~$\sigma\in U$ we have $\underline{u}_{k}(\sigma)\to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$. On the contrary, we still have no information on the set $U^{c}:= (0,S]\setminus U$. In the following lemma we describe the structure of~$U^{c}$.
\begin{lemma}\label{l.Uc}
There exist countably many $s^{-}_{i} < s^{+}_{i}$ in $[0,S]$, such that \begin{displaymath} U^{c} = \bigcup_{i\in\mathbb{N}} (s^{-}_{i},s^{+}_{i}]\,, \end{displaymath} where the intervals $(s^{-}_{i},s^{+}_{i}]$ are pairwise disjoint.
\end{lemma}
\begin{proof} First, note that $$U^{c} = \{ \sigma\in (0,S] : \, z'(\sigma)=0 \text{ and there is no sequence $\sigma_{m}\nearrow \sigma$ such that $z'(\sigma_{m}) \neq 0$} \}. $$ Clearly~$\sigma\in U^{c}$ if and only if there is no sequence~$\sigma_{m}\nearrow \sigma$ such that $z'(\sigma_{m})\neq 0$. This implies that $z$ is constant in a left neighborhood of $\sigma$. Then, if~$z$ is differentiable at~$\sigma$ we have~$z'(\sigma)=0$; if it is not differentiable at~$\sigma$, then~$z'(\sigma)=0$ by convention.
It follows that every~$\sigma\in U^{c}$ admits a left-neighborhood~$U_{\sigma}$ of~$\sigma$ in~$(0,S]$ on which~$z'$ vanishes, so that~$U_{\sigma}\subseteq U^{c}$.
We first write~$U^{c}$ as the union of its connected components \begin{displaymath} U^{c} = \bigcup_{ \alpha\in A} I_{\alpha}\,, \end{displaymath} where~$A$ is some index set. From what we have seen above, each~$I_{\alpha}$ contains at least one interval. Therefore,~$U^{c}$ can actually be written as the union of countably many connected components: \begin{displaymath} U^{c} = \bigcup_{i\in\mathbb{N}} I_{i}\,. \end{displaymath} For every~$i\in\mathbb{N}$ there exist $s^{-}_{i}< s^{+}_{i}$ such that $(s^{-}_{i},s^{+}_{i})\subseteq I_{i}\subseteq [s^{-}_{i},s^{+}_{i}]$. Since every point in $U^c$ admits a left neighborhood contained in $U^c$, we deduce that~$s^{-}_{i}\notin I_{i}$. On the other hand, $s^{+}_{i} \in I_{i}$. Indeed, $z'(\sigma)=0$ for every~$\sigma\in I_{i}$, so that~$z$ is constant on~$I_{i}$. Hence, if~$z$ is differentiable at~$s^{+}_{i}$ we get~$z'(s^{+}_{i})=0$; if~$z$ is not differentiable at~$s^{+}_{i}$, then~$z'(s^{+}_{i})=0$ by convention. This implies that~$s^{+}_{i}\in I_{i}$. All in all, we have proved that each connected component~$I_{i}$ is of the form~$(s^{-}_{i},s^{+}_{i}]$ for suitable~$s^{-}_{i} < s^{+}_{i}$ in~$[0,S]$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
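To illustrate the structure of~$U$ and~$U^{c}$, consider the following hypothetical toy case (the data are chosen only for illustration and are not part of the construction above): $S=1$ and~$z$ constant exactly on~$[\frac{1}{3},\frac{2}{3}]$, with $z'(\sigma)\neq 0$ for every $\sigma\in(0,\frac{1}{3})\cup(\frac{2}{3},1)$. Then no sequence $\sigma_{m}\nearrow\sigma$ with $z'(\sigma_{m})\neq 0$ exists for $\sigma\in(\frac{1}{3},\frac{2}{3}]$, while such a sequence does exist for $\sigma=\frac{1}{3}$, so that

```latex
% Hypothetical toy case: S=1, z constant exactly on [1/3,2/3].
\begin{displaymath}
U^{c} = \left( \tfrac{1}{3}, \tfrac{2}{3} \right]\,, \qquad
U = \left( 0, \tfrac{1}{3} \right] \cup \left( \tfrac{2}{3}, 1 \right]\,,
\end{displaymath}
```

in agreement with the half-open decomposition provided by Lemma~\ref{l.Uc}.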
We set $R:= S-|U^c|$ and define the absolutely continuous function \begin{equation}\label{e.beta} \beta(s) := \int_{0}^{s} \mathbf{1}_{U}( \sigma ) \, \mathrm{d} \sigma\qquad \text{for every $s\in[0,S]$} \,. \end{equation}
\begin{lemma}\label{l.beta} The following facts hold: \begin{itemize} \item[$(a)$] $\beta \colon [0,S]\to[0,R]$ is $1$-Lipschitz continuous, non-decreasing, and surjective;
\item[$(b)$] let $B:=\{\sigma\in[0,S]:\, \text{$\beta$ is not differentiable in~$\sigma$ or $\beta'(\sigma)\neq \mathbf{1}_{U}(\sigma)$} \}\cup U^{c}$, then $|\beta(B)|=0$; \item[$(c)$] if $\beta$ is constant in~$[a,b]$, then~$z$ is constant in~$[a,b]$ and~$(a,b]\subseteq U^{c}$. \end{itemize} \end{lemma}
\begin{proof}
It is clear that $\beta$ is non-decreasing and 1-Lipschitz continuous. Moreover,~$\beta(0)=0$ and~$\beta(S)=|U|= S - |U^{c}| = R$, so that $\beta$ is onto~$[0,R]$.
Since $|\{\sigma\in[0,S]:\, \text{$\beta$ is not differentiable in~$\sigma$ or $\beta'(\sigma)\neq \mathbf{1}_{U}(\sigma)$} \}|=0$ and~$\beta$ is Lipschitz, we have that \begin{displaymath}
|\beta ( \{\sigma\in[0,S]:\, \text{$\beta$ is not differentiable in~$\sigma$ or $\beta'(\sigma)\neq \mathbf{1}_{U}(\sigma)$} \})|=0\,. \end{displaymath}
As for $\beta(U^{c})$, we have that if~$s\in U^{c}$, then there exist $i\in\mathbb{N}$ and~$s^{-}_{i} < s^{+}_{i}$ such that $s\in (s^{-}_{i},s^{+}_{i}]\subseteq U^{c}$. Hence, $\beta(s)=\beta(s^{-}_{i})= \beta(s^{+}_{i})$ and $\beta (U^c) = \{ \beta ( s_i^-) \}_{i \in \N}$, thus $|\beta(U^{c})|=0$. All in all, we have shown that~$|\beta(B)|=0$, so that~$(b)$ holds.
As for~$(c)$, we have that if~$\beta$ is constant in~$[a,b]$ then~$\beta'=0$ in~$(a,b)$. Since~$\beta'(\sigma)=\mathbf{1}_{U}(\sigma)$ a.e.~in~$(0,S]$, we deduce that $|(a,b)\setminus U^{c}|=0$. Therefore, for a.e.~$\sigma\in(a,b)$ we have~$z'(\sigma)=0$, which, together with the continuity of~$z$ in~$[0,S]$, implies that~$z$ is constant on~$[a,b]$. From this we get that~$z'(\sigma)=0$ for every~$\sigma\in (a,b)$. Hence~$(a,b)\subseteq U^{c}$. As for the point~$b$, the only possibility to have~$b\in U$ is that~$z$ is differentiable at~$b$ with~$z'(b) \neq 0$. But this cannot happen, since~$z$ is constant on~$[a,b]$. Thus,~$b\in U^{c}$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
We now introduce the ``right-inverse'' of~$\beta$: \begin{equation}\label{e.alpha} \alpha(r):= \min\,\{ s\in[0,S]:\beta(s)=r\}\,. \end{equation} The function~$\alpha$ is well-defined on~$[0,R]$ because of the continuity of~$\beta$. Its main properties are listed in the next lemma, where~$\alpha^{\pm}$ denote the left and right limits of~$\alpha$, whenever they exist.
\begin{lemma}\label{l.alpha} The following facts hold: \begin{itemize} \item[$(a)$] $\alpha$ is strictly increasing and left-continuous;
\item[$(b)$] $\beta( \alpha (r)) = \beta(\alpha^{+}(r)) = r$ for every~$r\in[0,R]$, $\alpha(\beta(s))\leq s$ for every $s\in[0,S]$ and $\alpha(\beta(s))=s$ for every $s\in U$;
\item[$(c)$]$\alpha$ is differentiable in $(0,R)\setminus \beta(B)$ with $\alpha'(r) = 1$. \end{itemize} \end{lemma}
\begin{proof} The function~$\alpha$ is strictly increasing since~$\beta$ is non-decreasing and~$\beta(\alpha(r))=r$ by construction. Hence, left and right limits of $\alpha$ exist at every point of $(0,R)$.
In order to prove the left-continuity of~$\alpha$, we first notice that, by construction, we have $\beta(\alpha(r))=r$ for $r\in[0,R]$. Since~$\alpha$ is strictly increasing, it is clear that $\alpha^{-}( \bar{r} ) = \lim_{r\nearrow \bar{r}} \alpha(r) \leq \alpha(\bar{r})$. To show the opposite inequality, we consider the equality $\beta(\alpha(r)) = r$ and pass to the limit as~$r\nearrow \bar{r}$, which gives $\beta(\alpha^{-}( \bar{r})) = \bar{r}$. From the definition of~$\alpha$ we deduce that $\alpha(\bar{r})\leq \alpha^{-}(\bar{r})$. Therefore, $\alpha^{-}(\bar{r}) = \alpha(\bar{r})$ and~$\alpha$ is left-continuous.
Let us now prove~$(b)$. The equality~$\beta(\alpha(r))=r$ has been already shown, while the equality~$\beta(\alpha^{+}(r))=r$ follows by definition of~$\alpha^{+}(r)$ and the continuity of~$\beta$. For every~$s\in[0,S]$, it is clear by construction that~$\alpha(\beta(s)) \leq s$. Let us now consider $s\in U$. By contradiction, let us assume that $\alpha(\beta(s))<s$. Then, the function~$\beta$ is constant in the interval~$[ \alpha(\beta(s)), s]$. By~(c) of Lemma~\ref{l.beta} we have that $z$ is constant in the interval~$[\alpha(\beta(s)), s]$ and $(\alpha(\beta(s)), s]\subseteq U^{c}$, which is a contradiction. Therefore, we must have~$\alpha(\beta(s))=s$.
Let us now show~$(c)$.
We start by proving that every~$\bar{r} \in (0,R)\setminus \beta(U^{c})$ is a continuity point of~$\alpha$. In view of~$(a)$, we only have to show that $\alpha(r) \to \alpha(\bar{r})$ for $r\searrow \bar{r}$. By contradiction, let us assume that \begin{displaymath} \alpha( \bar{r}) < \lim_{r\searrow \bar{r}}\, \alpha(r) = \alpha^{+}( \bar{r} ) \,. \end{displaymath} Then by~$(b)$ and by monotonicity of~$\beta$ we have that~$\beta$ is constant in the interval $[\alpha(\bar{r}) , \alpha^{+} (\bar{r} )]$. From~$(c)$ of Lemma~\ref{l.beta}, we deduce that~$z$ is constant on the same interval and $(\alpha( \bar{r} ) , \alpha^{+} (\bar{r} ) ] \subseteq U^{c}$. Therefore, $\bar{r}\in \beta(U^{c})$, which is a contradiction. Hence,~$\alpha$ is continuous at~$\bar{r}$.
In view of~$(a)$, we already know that $\alpha\in BV(0,R)$. We now prove that every $\bar{r} \in (0,R) \setminus \beta ( B )$ is a point of differentiability of~$\alpha$ with~$\alpha'( \bar{r} )=1$. By the previous argument,~$\bar{r}$ is a continuity point of~$\alpha$. For $h\in\R$ with~$|h|$ small enough, let us write \begin{displaymath} \frac{ \alpha(\bar{r}+h)-\alpha( \bar{r})}{h} = \frac{\alpha(\bar{r}+h)-\alpha( \bar{r})}{(\bar{r} + h) - \bar{r}} = \frac{\alpha(\bar{r}+h)-\alpha( \bar{r})}{\beta(\alpha(\bar{r}+h)) - \beta(\alpha( \bar{r}))}\,. \end{displaymath} As~$h\to0$ we have, by continuity of~$\alpha$ at~$\bar{r}$, that~$\alpha (\bar{r}+h) \to \alpha(\bar{r})$. Hence, passing to the limit in the previous equality we get \begin{displaymath} \lim_{h\to 0}\, \frac{ \alpha(\bar{r}+h)-\alpha( \bar{r})}{h} = \frac{1}{\beta'(\alpha(\bar{r}))} = 1\,, \end{displaymath} where we have used the fact that $\bar{r} \notin \beta(B)$, so that $\alpha( \bar{r}) \notin B$ and~$\beta$ is differentiable at~$\alpha( \bar{r} )$ with~$\beta'(\alpha(\bar{r})) = \mathbf{1}_{U}( \alpha( \bar{r} ))=1$. Thus, we have proved that~$\alpha$ is differentiable at every~$\bar{r}\in(0,R)\setminus\beta(B)$ and~$\alpha'(\bar{r})=1$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
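As a sanity check of Lemmata~\ref{l.beta} and~\ref{l.alpha}, consider a hypothetical toy case (chosen only for illustration): $S=1$ and $U^{c}=(\frac{1}{3},\frac{2}{3}]$, so that $R= S - |U^{c}| = \frac{2}{3}$ and the functions in~\eqref{e.beta} and~\eqref{e.alpha} can be computed explicitly:

```latex
% Hypothetical toy case: S=1, U^c=(1/3,2/3], R=2/3.
\begin{displaymath}
\beta(s)= \left\{ \begin{array}{ll}
s & \text{if $s\in[0,\tfrac{1}{3}]$}\,,\\[1mm]
\tfrac{1}{3} & \text{if $s\in[\tfrac{1}{3},\tfrac{2}{3}]$}\,,\\[1mm]
s-\tfrac{1}{3} & \text{if $s\in[\tfrac{2}{3},1]$}\,,
\end{array}\right.
\qquad
\alpha(r)= \left\{ \begin{array}{ll}
r & \text{if $r\in[0,\tfrac{1}{3}]$}\,,\\[1mm]
r+\tfrac{1}{3} & \text{if $r\in(\tfrac{1}{3},\tfrac{2}{3}]$}\,.
\end{array}\right.
\end{displaymath}
```

Here~$\alpha$ is strictly increasing and left-continuous with a single jump at $r=\frac{1}{3}$, where $\alpha(\frac{1}{3})=\frac{1}{3}$ and $\alpha^{+}(\frac{1}{3})=\frac{2}{3}$; moreover, $\beta(\alpha(r))=\beta(\alpha^{+}(r))=r$ for every~$r\in[0,\frac{2}{3}]$, $\alpha(\beta(s))=s$ precisely for $s\in U\cup\{0\}$, and $\alpha'=1$ off $\beta(U^{c})=\{\frac{1}{3}\}$, consistently with~$(a)$-$(c)$.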
We now consider the reparametrized functions \begin{displaymath} \tilde{t}:= t\circ \alpha\,,\qquad \tilde{z}:=z\circ \alpha\,, \qquad \tilde{u}:=u\circ\alpha \,. \end{displaymath}
\begin{lemma}\label{l.tildez} $\tilde{z}\in W^{1,\infty}([0,R]; L^{2}(\Omega))$ with Lipschitz constant~$1$. Moreover, $\tilde{z}'=z'\circ\alpha$ a.e.~in $[0,R]$. \end{lemma}
\begin{proof} Let $\rho, r\in[0,R]$ with $\rho<r$. Since~$z\in W^{1,\infty}([0,S]; L^{2}(\Omega))$, we have that \begin{displaymath}
\| \tilde{z} (r) - \tilde{z} (\rho) \|_{L^2} = \|z( \alpha (r) ) - z ( \alpha ( \rho ) ) \|_{L^2} \leq \int_{ \alpha ( \rho ) }^{\alpha ( r ) } \| z'( \sigma ) \|_{L^2} \, \mathrm{d} \sigma\,. \end{displaymath}
Since~$z' = 0$ in~$U^{c}$ and~$\|z'(s)\|_{L^2} \leq 1$ for a.e.~$s\in[0,S]$ by~\eqref{e.boundlip2}, we can continue in the previous chain of inequalities with \begin{displaymath}
\| \tilde{z} (r) - \tilde{z} (\rho) \|_{L^2} \leq \int_{ \alpha ( \rho ) }^{ \alpha ( r ) } \| z'( \sigma ) \|_{L^2} \mathbf{1}_{U}( \sigma ) \, \mathrm{d} \sigma \leq \int_{ \alpha ( \rho ) }^{ \alpha ( r ) } \mathbf{1}_{ U } ( \sigma ) \, \mathrm{d} \sigma = \beta(\alpha( r) ) - \beta ( \alpha( \rho) ) = r - \rho\,, \end{displaymath} where we have used the definition~\eqref{e.beta} of~$\beta$ and~$(b)$ of Lemma~\ref{l.alpha}.
Let $C:= \{s\in[0,S]:\, \text{$z$ is not differentiable at~$s$}\}$. Since~$|C|=0$ and~$\beta$ is Lipschitz continuous, we have that $|\beta ( C ) | = 0$. Let us show that~$\tilde{z}$ is differentiable at every~$\bar{r}\in(0,R)\setminus(\beta(B) \cup \beta(C))$. Indeed, we notice that for such~$\bar{r}$ we have, by~$(c)$ in Lemma~\ref{l.alpha}, that~$\alpha$ is differentiable at~$\bar{r}$ with~$\alpha' ( \bar{r} ) = 1$. Moreover, since~$\bar{r}\notin \beta(C)$, from the definition~\eqref{e.alpha} of~$\alpha$ we deduce that~$\alpha(\bar{r})\notin C$, so that~$z$ is differentiable at~$\alpha( \bar{r} )$. Therefore, for~$r \neq \bar{r}$ we can write \begin{equation} \label{e.tildezdiff} \frac{ z ( \alpha ( r )) - z ( \alpha ( \bar{r} ) )} {r - \bar{r}} = \frac{ z ( \alpha ( r )) - z ( \alpha ( \bar{r} ) )} {\alpha(r) - \alpha(\bar{r})}\,\frac{\alpha(r) - \alpha(\bar{r})}{r - \bar{r}}\,, \end{equation} since~$\alpha$ is strictly increasing by~$(a)$ of Lemma~\ref{l.alpha}. In view of the previous considerations, we can pass to the limit in~\eqref{e.tildezdiff} as $r\to\bar{r}$ obtaining \begin{displaymath} \lim_{ r\to\bar{r}}\, \frac{ z ( \alpha ( r )) - z ( \alpha ( \bar{r} ) )} {r - \bar{r}} = z'(\alpha(\bar{r})) \, \alpha'(\bar{r}) = z'(\alpha(\bar{r}))\,. \end{displaymath}
In conclusion, we have shown that~$\tilde{z}$ is differentiable at every~$\bar{r}\in(0,R)\setminus (\beta(B)\cup \beta(C))$ with~$\tilde{z}'(\bar{r}) = z'(\alpha(\bar{r}))$. Since~$|\beta(B) \cup \beta(C)|=0$, we get that~$\tilde{z}' = z' \circ \alpha$ a.e.~in~$[0,R]$, and this concludes the proof of the lemma.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
We now go back to the proof of the upper-energy inequality. In the following two lemmata we further investigate the summability of the unilateral slope~$|\partial_{z}^{-}\F|$ on the set~$U$.
\begin{lemma}\label{l.L1-1}
The function $s \mapsto | \partial_z^- \F | ( t(s), u(s) , z(s) ) \, \mathbf{1}_{U} (s)$ belongs to $L^1(0,S)$.
\end{lemma}
\begin{proof}
To prove this property, we slightly modify the energy inequality~\eqref{e.78} of Proposition~\ref{p.7} making use of the piecewise constant interpolation function~$\underline{u}_{k}$ defined in~\eqref{e.underu} on the interval~$[0,S]$.
Let $k$ and~$i\in\{1,\ldots,k\}$ be fixed. For $j=-1$, for every $s\in[s^{k}_{i-1},s^{k}_{i,0}]$ we have $u_{k}'(s)= z_{k}'(s)=0$ and, by~\eqref{e.77}, $|\partial_{z}^{-}\F|(\underline{t}_{k}(s), \underline{u}_{k}(s), z_{k}(s)) = 0$. Therefore, \begin{align} \F(t_{k}(s), u_{k}(s), & \ z_{k}(s)) =
\F(t_{k}(s^{k}_{i-1}), u_{k}(s^{k}_{i-1}),z_{k}(s^{k}_{i-1})) - \int_{s^{k}_{i-1}}^{s} \!\!\! |\partial_{u}\F|(t_{k}(\sigma), u_{k}( \sigma ),z_{k}( \sigma ))\,\| u'_k(\sigma) \|_{H^1}\,\mathrm{d} \sigma \nonumber \\
& - \int_{s^{k}_{i-1}}^{s} \!\!\! |\partial_{z}^{-}\F|( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ),z_{k}( \sigma )) \,\mathrm{d} \sigma + \int_{s^{k}_{i-1}}^{s} \mathcal{P} ( t_{k}(\sigma), u_{k}(\sigma), z_{k}(\sigma) ) \, t'_{k}(\sigma) \, \mathrm{d} \sigma \,. \label{e.100} \end{align}
For every $j\geq 0$, we distinguish between $s\in[s^{k}_{i,j},s^{k}_{i,j+\frac{1}{2}}]$ and $s\in[s^{k}_{i,j+\frac{1}{2}},s^{k}_{i,j+1}]$. In the first case we have $t_{k}'(s)=z_{k}'(s)=0$ and $\| u'_k(s) \|_{H^1}=1$ for a.e.~$s \in[s^{k}_{i,j},s^{k}_{i,j+\frac{1}{2}}]$. For $j=0$ we have $\underline{t}_{k}(s) = t^{k}_{i-1}$, $\underline{u}_{k}(s)= u^{k}_{i-1}$, $z_{k}(s) = z^{k}_{i-1}$, and $|\partial_{z}^{-} \F| ( \underline{t}_{k}(s), \underline{u}_{k}(s), z_{k}(s))=0$ again by~\eqref{e.77}. If $j\geq 1$, then $\underline{t}_{k}(s) = t^{k}_{i}$, $\underline{u}_{k}(s) = u^{k}_{i,j}$, $z_{k}(s)= z^{k}_{i,j}$, and $|\partial_{z}^{-} \F | ( \underline{t}_{k}(s), \underline{u}_{k}(s), z_{k}(s))=0$ by~\eqref{e.minz}. Hence, we rewrite~\eqref{e.72} as \begin{align}
\F & (t_{k}(s), u_{k}(s), z_{k}(s)) = \F ( t_{k}(s^{k}_{i,j}), u_{k} ( s^{k}_{i,j} ) , z_{k} ( s^{k}_{i,j}) ) - \int_{s^{k}_{i,j}}^{s} \!\! | \partial_{u} \F | ( t_{k}(\sigma), u_{k}( \sigma ), z_{k}( \sigma ) ) \, \| u'_{k}(\sigma) \|_{H^{1}}\, \mathrm{d} \sigma \nonumber \\
& = \F ( t_{k}(s^{k}_{i,j}), u_{k}(s^{k}_{i,j}),z_{k}(s^{k}_{i,j})) - \int_{s^{k}_{i,j}}^{s} \!\! |\partial_{u} \F | ( t_{k}(\sigma), u_{k} ( \sigma ), z_{k} ( \sigma ) ) \, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \label{e.101} \\
&\quad - \int_{s^{k}_{i,j}}^{s} \!\! | \partial_{z}^{-} \F |( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ),z_{k}( \sigma )) \,\mathrm{d} \sigma + \int_{s^{k}_{i,j}}^{s} \mathcal{P} ( t_{k}(\sigma), u_{k}(\sigma), z_{k}(\sigma)) \, t'_{k}(\sigma) \,\mathrm{d}\sigma \nonumber \,.
\end{align}
In the case $s\in [s^{k}_{i,j+\frac{1}{2}}, s^{k}_{i, j+1})$ we have $t_{k}'(s)=u_{k}'(s)=0$, $\| z'_k(s) \|_{L^2}=1$ for a.e.~$s \in[s^{k}_{i,j+\frac12},s^{k}_{i,j+1}]$, $\underline{t}_{k}(s) = t^{k}_{i}$, and~$\underline{u}_{k}(s) = u^{k}_{i,j+1}$. Then, we rewrite~\eqref{e.73} as \begin{align}
\F & ( t_{k}(s), u_{k}(s), z_{k}(s)) = \F( t_{k}(s^{k}_{i,j+\frac{1}{2}}), u_{k}(s^{k}_{i,j+\frac{1}{2}}), z_{k}(s^{k}_{i,j+\frac{1}{2}}) ) - \! \int_{s^{k}_{i,j+\frac{1}{2}}}^{s} \!\!\!\!\!\!\!\!\! | \partial_{z}^{-} \F |( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma ) ) \,\| z'_{k}(\sigma)\|_{L^{2}}\, \mathrm{d} \sigma \nonumber \\
& = \F( t_{k}( s^{k}_{i,j+\frac{1}{2}}), u_{k}(s^{k}_{i, j + \frac{1}{2}}), z_{k}(s^{k}_{i, j +\frac{1}{2}})) - \int_{s^{k}_{i,j+\frac{1}{2}}}^{s} \!\!\!\!\!\!\!\!\! |\partial_{u}\F| ( t_{k}(\sigma), u_{k}( \sigma ), z_{k}( \sigma ) ) \, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \label{e.102} \\
& \quad - \int_{s^{k}_{i,j+\frac{1}{2}}}^{s} \!\!\!\!\!\!\!\!\! | \partial_{z}^{-} \F |( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ),z_{k}( \sigma ) )\,\mathrm{d} \sigma + \int_{s^{k}_{i, j+\frac{1}{2}}}^{s} \!\!\!\!\!\!\!\! \mathcal{P} ( t_{k}( \sigma ), u_{k}( \sigma ), z_{k}( \sigma ) ) \, t'_{k}( \sigma )\,\mathrm{d} \sigma \nonumber \,. \end{align} Summing up~\eqref{e.100}-\eqref{e.102}, we deduce that for every $s\in[s^{k}_{i-1},s^{k}_{i})$ it holds
\begin{align*}
\F( t_{k}(s), u_{k}(s), z_{k}(s)) = & \ \F( t_{k}(s^{k}_{i-1}), u_{k}(s^{k}_{i-1}), z_{k}(s^{k}_{i-1}) ) - \int_{s^{k}_{i-1}}^{s} \!\!\! |\partial_{u} \F | ( t_{k}(\sigma), u_{k}( \sigma ), z_{k}( \sigma )) \, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{s^{k}_{i-1}}^{s} \!\!\!\! |\partial_{z}^{-} \F | ( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma )) \,\mathrm{d} \sigma + \int_{s^{k}_{i-1}}^{s} \!\!\!\! \mathcal{P} ( t_{k}( \sigma ), u_{k}( \sigma ), z_{k}( \sigma ) ) \, t'_{k}( \sigma )\,\mathrm{d} \sigma \,.
\end{align*}
Passing to the limit as $s\to s^{k}_{i}$ by Lemma \ref{l.lscFE} we get
\begin{equation*} \begin{split}
\F( t_{k}(s^{k}_{i}), u_{k}(s^{k}_{i}), & \ z_{k}(s^{k}_{i})) \leq \F ( t_{k}(s^{k}_{i-1}), u_{k}(s^{k}_{i-1}), z_{k}(s^{k}_{i-1})) - \int_{s^{k}_{i-1}}^{s^{k}_{i}} \!\!\! |\partial_{u} \F | ( t_{k}(\sigma), u_{k}( \sigma ), z_{k}( \sigma ))\, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{s^{k}_{i-1}}^{s^{k}_{i}} \!\!\! | \partial_{z}^{-} \F | ( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma )) \,\mathrm{d} \sigma + \int_{s^{k}_{i-1}}^{s^{k}_{i}} \!\!\!\! \mathcal{P} ( t_{k}( \sigma ), u_{k}( \sigma ), z_{k}( \sigma ) ) \, t'_{k}( \sigma )\,\mathrm{d} \sigma \,.
\end{split} \end{equation*} Iterating the previous estimates, we deduce that for every $s\in[0,S]$ \begin{equation}\label{e.103} \begin{split}
\F( t_{k}(s), u_{k}(s), & \ z_{k}(s)) \leq \F( 0, u_{0}, z_{0}) - \int_{0}^{s} \!\! |\partial_{u} \F | ( t_{k}(\sigma), u_{k}( \sigma ), z_{k}( \sigma ))\, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{0}^{s} \!\! | \partial_{z}^{-} \F | ( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma )) \,\mathrm{d} \sigma + \int_{0}^{s} \!\! \mathcal{P} ( t_{k}( \sigma ), u_{k}( \sigma ), z_{k}( \sigma ) ) \, t_{k}'( \sigma ) \, \mathrm{d} \sigma \,. \end{split} \end{equation} We take the liminf on the left-hand side of~\eqref{e.103} and use the lower semicontinuity of the energy. We take the limsup on the right-hand side of~\eqref{e.103} and apply the same argument as in the proof of Proposition~\ref{p.3} for the first and the last integral, while we apply Fatou's lemma to the second integral. Thus we obtain \begin{displaymath} \begin{split}
\F( t(s), u (s), z (s)) &\leq \F(0, u_{0}, z_{0}) - \liminf_{k\to\infty} \, \int_{0}^{s} \!\! |\partial_{u} \F | ( t_{k}(\sigma), u_{k}( \sigma ), z_{k}( \sigma ))\, \| u'_k(\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
&\quad- \liminf_{k\to\infty}\, \int_{0}^{s} \!\!\! | \partial_{z}^{-} \F | (\underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma )) \,\mathrm{d} \sigma + \limsup_{k\to\infty} \, \int_{0}^{s} \!\! \mathcal{P}(t_{k}(\sigma), u_{k}(\sigma), z_{k}(\sigma))\,t'_{k}(\sigma) \, \mathrm{d} \sigma \\
& \leq \F( 0, u_{0}, z_{0}) - \int_{0}^{s} \!\! |\partial_{u} \F | ( t(\sigma), u ( \sigma ), z ( \sigma ))\, \| u' (\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
&\quad - \int_{0}^{s} \!\!\! \liminf_{k\to\infty}\, | \partial_{z}^{-} \F | ( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma )) \,\mathrm{d} \sigma + \int_{0}^{s} \!\! \mathcal{P}(t (\sigma), u (\sigma), z (\sigma) )\,t' (\sigma) \, \mathrm{d} \sigma \,. \end{split} \end{displaymath} For every~$\sigma \in U$ we know thanks to Lemmata~\ref{p.nonstationaryz} and~\ref{p.almoststationaryz} that $\underline{u}_{k}(\sigma)\to u(\sigma)$ in~$H^{1}(\Omega;\R^{2})$, while from Lemma~\ref{r.t} we get that $\underline{t}_{k} \to t$ pointwise in~$[0,S]$. Hence, by Lemma~\ref{l.2} we can continue in the previous inequality with \begin{equation}\label{e.120} \begin{split}
\F( t(s), & u (s), z (s)) \leq \F( 0, u_{0}, z_{0}) - \int_{0}^{s} \!\! |\partial_{u} \F | ( t (\sigma), u ( \sigma ), z ( \sigma ))\, \| u' (\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{0}^{s} \!\! | \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) \, \mathbf{1}_{U}(\sigma) \, \mathrm{d} \sigma - \int_{0}^{s} \!\! \liminf_{k\to\infty} | \partial_{z}^{-} \F | ( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma )) \, \mathbf{1}_{U^{c}} (\sigma) \, \mathrm{d} \sigma \\ & +\int_{0}^{s} \mathcal{P}(t(\sigma), u(\sigma), z(\sigma)) \, t'(\sigma) \, \mathrm{d} \sigma \,. \end{split} \end{equation}
Since $u \in W^{1,\infty} ([0,S]; H^{1}(\Omega;\R^{2}))$, $g\in W^{1,q}([0,T]; W^{1,p}(\Omega;\R^{2}))$ for some $p>2$ and $q>1$, and~$0 \leq z(s) \leq 1$ for every $s\in[0,S]$, the power functional $\mathcal{P} (t(\cdot), u(\cdot), z(\cdot))\, t'(\cdot)$ belongs to~$L^{1}(0,S)$. Therefore, since the energy functional~$\F$ and the slopes~$|\partial_{u} \F|$ and~$|\partial_{z}^{-} \F|$ are nonnegative, we deduce from~\eqref{e.120} that~$|\partial^{-}_{z} \F| ( t(\cdot), u(\cdot), z(\cdot))\,\mathbf{1}_{U}(\cdot) \in L^{1}(0,S)$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
\begin{lemma}{\bf (Riemann sum).} \label{l.L1-2} The function $r \mapsto | \partial_z^- \F | ( \tilde{t} (r), \tilde{u}(r) , \tilde{z}(r))$ belongs to $L^1(0,R)$. Moreover, for every $R' \in (0, R\,]$ there exists a sequence of subdivisions $\{ r^m_n : n=0,\ldots,N_m\}$ with $$
r^{m}_{0} = 0 , \quad r^{m}_{N_{m}} = R' , \quad \lim_{m \to \infty} \big( \max_{n=0,...,N_m-1} \, ( r^m_{n+1} - r^m_{n} ) \,\big) = 0 , $$ such that the simple functions $$
F^m (r) := \sum_{n=0}^{N_{m}-1} | \partial_z^- \F | ( \tilde{t}(r^m_n), \tilde{u}(r^m_n) , \tilde{z}(r^m_n) )
\left\| \frac{\tilde{z} (r^m_{n+1}) - \tilde{z} (r^m_n) }{r^m_{n+1} - r^m_{n}} \right\|_{L^2} \! \mathbf{1}_{( r^m_n , \, r^m_{n+1})} (r) $$
converge to $| \partial_z^- \F |( \tilde{t}(\cdot), \tilde{u}(\cdot) , \tilde{z}(\cdot)) \| \tilde{z}' (\cdot) \|_{L^2}$ strongly in $L^1(0,R')$ (as $m \to \infty$). \end{lemma}
\begin{proof} Since~$\beta\colon[0,S]\to[0,R]$ is Lipschitz continuous, surjective, and $\beta' = \mathbf{1}_{U}$ a.e.~in~$[0,S]$, by the change of variable formula we have for every Borel measurable function~$g\colon [0,S]\to[0,+\infty]$ \begin{displaymath} \int_{[0,S]} g(\sigma) \, \mathbf{1}_{U} (\sigma)\,\mathrm{d}\sigma = \int_{[0,R]} \sum_{\sigma\in \beta^{-1} ( r)} g(\sigma)\,\mathrm{d} r\,. \end{displaymath}
If~$r\in[0,R]\setminus\beta(U^{c})$, then $\{\sigma \in \beta^{-1}(r)\} = \{\alpha(r)\}$. Indeed, $\beta(\alpha(r))= r$ and if there exists $s > \alpha(r)$ such that $\beta(s)=r$, then $(\alpha(r), s]\subseteq U^{c}$ by~$(c)$ of Lemma~\ref{l.beta} and~$r\in\beta(U^{c})$, which is a contradiction. Since~$|\beta(U^{c})|=0$, from the previous equality we obtain \begin{equation}\label{e.changeofvariable} \int_{[0,S]} g(\sigma) \, \mathbf{1}_{U} (\sigma)\,\mathrm{d}\sigma = \int_{[0,R]} g(\alpha(r))\,\mathrm{d} r\,. \end{equation}
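The mechanism behind~\eqref{e.changeofvariable} can be checked numerically in a one-dimensional toy case. The following Python sketch is purely illustrative: the set $U=(0,1)\cup(2,3)\subset[0,3]$, the integrand $g$, and the midpoint quadrature are ad hoc choices, not part of the proof. It builds $\beta(s)=|U\cap[0,s]|$ (so that $\beta'=\mathbf{1}_{U}$ a.e.) together with a generalized inverse $\alpha$, and compares the two sides of the identity.

```python
# Toy check of the change-of-variable identity
#   \int_0^S g(s) 1_U(s) ds = \int_0^R g(alpha(r)) dr
# with U = (0,1) u (2,3) in [0,S], S = 3, R = |U| = 2.

def beta(s):
    # beta(s) = |U ∩ [0, s]|, so beta' = 1_U a.e.
    return min(s, 1.0) + max(0.0, min(s, 3.0) - 2.0)

def alpha(r):
    # a generalized inverse of beta on [0, R]
    return r if r <= 1.0 else r + 1.0

def g(s):
    return s * s  # any nonnegative Borel function; s^2 is convenient

def midpoint(f, a, b, n=20000):
    # composite midpoint quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

indicator_U = lambda s: 1.0 if (0.0 < s < 1.0 or 2.0 < s < 3.0) else 0.0
lhs = midpoint(lambda s: g(s) * indicator_U(s), 0.0, 3.0)
rhs = midpoint(lambda r: g(alpha(r)), 0.0, 2.0)
print(lhs, rhs)  # both close to 20/3
```

Both quadratures return values close to the exact integral $20/3$, which illustrates why the Lebesgue-null set $\beta(U^{c})$ does not contribute.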
We now apply~\eqref{e.changeofvariable} to the function~$|\partial_{z}^{-}\F|(t(\cdot), u(\cdot), z(\cdot))$: \begin{displaymath}
\int_{0}^{S} | \partial_z^- \F | ( t(s), u (s) , z (s)) \, \mathbf{1}_{U}(s) \, \mathrm{d} s = \int_{0}^{R} | \partial_z^- \F | ( \tilde{t}(r), \tilde{u}(r) , \tilde{z}(r)) \, \mathrm{d} r \,. \end{displaymath}
Hence $| \partial_z^- \F |( \tilde{t}(\cdot), \tilde{u}(\cdot) , \tilde{z}(\cdot)) $ belongs to~$L^1(0,R')$ for every $R' \in (0,R\,]$ by Lemma~\ref{l.L1-1}. Thus, by classical results (see, e.g.,~\cite[Lemma~4.12]{MR2186036}) there exists a sequence of subdivisions $\{ r^m_n \}$ with $$
r^{m}_{0} = 0 , \quad r^{m}_{N_{m}} = R' , \quad \lim_{m \to \infty} \big( \max_{n=0,...,N_m-1} \, ( r^m_{n+1} - r^m_{n} ) \,\big) = 0 , $$ such that the simple functions $$
F^m (r) := \sum_{n=0}^{N_{m}-1} | \partial_z^- \F | ( \tilde{t} (r^m_n), \tilde{u}(r^m_n) , \tilde{z}(r^m_n) ) \, \mathbf{1}_{( r^m_n , \, r^m_{n+1})} (r) $$
converge to $| \partial_z^- \F |( \tilde{t} (\cdot), \tilde{u}(\cdot) , \tilde{z}(\cdot)) $ strongly in~$L^1 (0,R')$.
Invoking for instance \cite[Lemma D.1]{NegriKimura}, for a.e.~$r \in (0,R')$ it holds $$
\sum_{n=0}^{N_{m}-1} \left\| \frac{\tilde{z} (r^m_{n+1}) - \tilde{z} (r^m_n) }{r^m_{n+1} - r^m_{n}} \right\|_{L^2} \! \mathbf{1}_{( r^m_n , \, r^m_{n+1})} (r) \to \| \tilde{z}' (r) \|_{L^2} \, . $$
The claim follows by dominated convergence, since~$\| \tilde{z}' (r) \|_{L^2} \leq 1$ for a.e.~$r\in[0,R']$.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
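The convergence of the simple functions $F^m$ can be visualized in a scalar caricature. In the Python sketch below (an illustration only: $f$ stands in for the slope $|\partial_z^-\F|$ and $z$ for the parametrized phase field, both chosen ad hoc), the Riemann-type sums $\sum_n f(r_n)\,|z(r_{n+1})-z(r_n)|$ approach $\int_0^{R'} f(r)\,|z'(r)|\,\mathrm{d}r$ as the mesh of the subdivision is refined.

```python
# 1-d caricature of the simple functions F^m: with a scalar slope
# surrogate f and a scalar "phase field" z, the sums
#   sum_n f(r_n) * |z(r_{n+1}) - z(r_n)|
# are Riemann sums for \int_0^{R'} f(r) |z'(r)| dr.
import math

R_prime = 1.0
f = lambda r: 1.0 + r        # stand-in for the slope
z = lambda r: math.exp(-r)   # non-increasing and 1-Lipschitz on [0, 1]

def riemann_sum(m):
    # uniform subdivision r_n = n * R'/m, n = 0, ..., m
    nodes = [R_prime * n / m for n in range(m + 1)]
    return sum(f(nodes[n]) * abs(z(nodes[n + 1]) - z(nodes[n]))
               for n in range(m))

exact = 2.0 - 3.0 / math.e   # \int_0^1 (1 + r) e^{-r} dr
approx = riemann_sum(4000)
print(approx, exact)
```

Here uniform subdivisions suffice because $z$ is smooth; the lemma needs the subdivisions of~\cite[Lemma~4.12]{MR2186036} precisely because $|\partial_z^-\F|(\tilde t,\tilde u,\tilde z)$ is merely an $L^1$ function.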
We are now in a position to prove the upper energy-dissipation inequality.
\begin{proposition} \label{p.R-sum} Let $s \in (0,S]$ and $(t,u,z)$ be the triple obtained in Proposition~\ref{p.compactness}. Then, \begin{displaymath} \begin{split}
\F ( t(s), u (s) , z(s) ) \ge & \ \F(0, u_0 , z_0) - \int_0^{s} | \partial^-_z \F | ( t(\sigma), u(\sigma) , z(\sigma) ) \| z' (\sigma) \|_{L^2} \, \mathrm{d} \sigma \\
&-\int_{0}^{s} | \partial_u \F | ( t(\sigma), u(\sigma) , z(\sigma) ) \| u' (\sigma) \|_{H^1} \, \mathrm{d} \sigma +\int_{0}^{s}\mathcal{P}(t(\sigma), u(\sigma), z(\sigma) ) \, t'(\sigma)\,\mathrm{d} \sigma \,. \end{split} \end{displaymath} \end{proposition}
\noindent \textbf{Proof. } We divide the proof into two steps.
{\bf Step 1: \boldmath{$s \in U $}.} Let $ R' = \beta(s)$; since $s\in U$, we have $R' > 0$. Let~$\{r^m_n\}$ be a sequence of subdivisions of~$[0, R']$ as provided by Lemma~\ref{l.L1-2}. We recall that $\tilde{u} (r) = u \circ \alpha( r)$ and $\tilde{z} (r) = z \circ \alpha( r)$. Thus, by the regularity of~$t$ and of~$u$, the chain rule yields \begin{align*}
\F & ( \tilde{t}(r^{m}_{n+1}), \tilde{u} ( r^{m}_{n+1}) , \tilde{z} ( r^{m}_{n+1}) ) = \F ( t \circ \alpha (r^{m}_{n+1}), u \circ \alpha (r^{m}_{n+1}) , z \circ \alpha ( r^{m}_{n+1}) ) \\
& = \F ( t \circ \alpha (r^m_n), u \circ \alpha (r^{m}_{n}) , z \circ \alpha ( r^{m}_{n+1}) ) + \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\!\! \partial_u \F ( t(\sigma), u ( \sigma) , z \circ \alpha ( r^{m}_{n+1} ) ) [ u' (\sigma) ] \, \mathrm{d} \sigma \\
& \quad + \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \mathcal{P}(t(\sigma), u(\sigma), z \circ \alpha (r^{m}_{n+1})) \, t'(\sigma) \, \mathrm{d} \sigma \\
& = \F ( \tilde{t}(r^m_n), \tilde{u} (r^{m}_{n}) , \tilde{z} ( r^{m}_{n+1}) ) + \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\!\! \partial_u \F ( t(\sigma), u ( \sigma) , z \circ \alpha ( r^{m}_{n+1} ) ) [ u' (\sigma) ] \, \mathrm{d} \sigma\\
& \quad + \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\!\! \mathcal{P}(t(\sigma), u(\sigma), z \circ \alpha (r^{m}_{n+1})) \, t'(\sigma) \, \mathrm{d} \sigma \,. \end{align*} Using the convexity of the energy $\F ( t, u , \cdot)$ we can write \begin{align*}
\F ( \tilde{t}(r^m_n), \tilde{u} ( r^{m}_{n}) , \tilde{z} ( r^{m}_{n+1}) )
& \ge \F ( \tilde{t}(r^m_n), \tilde{u} ( r^m_n) , \tilde{z} ( r^{m}_{n}) )
+ \partial_z \F ( \tilde{t}(r^m_n), \tilde{u} ( r^{m}_{n}) , \tilde{z} ( r^{m}_{n}) ) [ \tilde{z} (r^{m}_{n+1} ) - \tilde{z} (r^{m}_{n} ) ] \\
& \ge \F ( \tilde{t}(r^m_n), \tilde{u} ( r^m_n) , \tilde{z} ( r^{m}_{n}) )
- | \partial^-_z \F | ( \tilde{t}(r^m_n), \tilde{u} ( r^{m}_{n}) , \tilde{z} ( r^{m}_{n}) ) \, \| \tilde{z} (r^{m}_{n+1} ) - \tilde{z} (r^{m}_{n} ) \|_{L^2} \\
& = \F ( \tilde{t}(r^m_n), \tilde{u} ( r^m_n) , \tilde{z} ( r^{m}_{n}) )
- \int_{r^m_n}^{r^{m}_{n+1}} \!\!\!\! | \partial^-_z \F | ( \tilde{t}(r^m_n), \tilde{u} ( r^{m}_{n}) , \tilde{z} ( r^{m}_{n}) ) \, \left\| \frac{ \tilde{z} (r^{m}_{n+1} ) - \tilde{z} (r^{m}_{n} ) }{r^{m}_{n+1}-r^m_n} \right\|_{L^2} \!\!\!\! \mathrm{d} \rho . \end{align*} In conclusion, for every index $n=0,...,N_{m}-1$ we have \begin{align*}
\F ( \tilde{t}(r^{m}_{n+1}), \tilde{u} ( r^{m}_{n+1}) , \tilde{z} ( r^{m}_{n+1}) ) \ge & \ \F ( \tilde{t}(r^m_n), \tilde{u} ( r^m_n) , \tilde{z} ( r^{m}_{n}) ) \\
& - \int_{r^m_n}^{r^{m}_{n+1}} \!\!\! | \partial^-_z \F | ( \tilde{t}(r^m_n), \tilde{u} ( r^{m}_{n}) , \tilde{z} ( r^{m}_{n}) ) \, \left\| \frac{ \tilde{z} (r^{m}_{n+1} ) - \tilde{z} (r^{m}_{n} ) }{r^{m}_{n+1}-r^m_n} \right\|_{L^2} \!\!\!\! \mathrm{d} \rho \, \\
& + \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\! \partial_u \F ( t(\sigma), u ( \sigma) , z \circ \alpha ( r^{m}_{n+1} ) ) [ u' (\sigma) ] \, \mathrm{d} \sigma \\
& + \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\!\! \mathcal{P}(t(\sigma), u(\sigma), z \circ \alpha (r^{m}_{n+1})) \, t'(\sigma) \, \mathrm{d} \sigma \,. \end{align*} Note that $\tilde{u} (r^m_0) = \tilde{u} (0) = u_0$ and that $\alpha(R') = \alpha(\beta(s)) = s$ because $s \in U$. Thus, \begin{displaymath} \tilde{u} (r^m_{N_{m}}) = \tilde{u} ( R' ) = u \circ \alpha (R') = u ( s)\,. \end{displaymath} In a similar way, $\tilde{z} (r^m_0) = z_0$ and $\tilde{z} ( r^m_{N_{m}})= z(s)$. Therefore, iterating the previous inequality for $n=0,...,N_{m}-1$ yields \begin{align}
\F ( t(s), u (s) , z(s) ) & \ge \F ( 0, u_0 , z_0 )
- \sum_{n=0}^{N_{m}-1} \int_{r^m_n}^{r^{m}_{n+1}} | \partial^-_z \F | ( \tilde{t}(r^m_n), \tilde{u} ( r^{m}_{n}) , \tilde{z} ( r^{m}_{n}) ) \, \left\| \frac{ \tilde{z} (r^{m}_{n+1} ) - \tilde{z} (r^{m}_{n} ) }{r^{m}_{n+1}-r^m_n} \right\|_{L^2} \!\!\!\! \mathrm{d} \rho \nonumber \\
& \quad + \sum_{n=0}^{N_{m}-1} \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\! \partial_u \F ( t(\sigma), u ( \sigma) , z \circ \alpha ( r^{m}_{n+1} ) ) [ u' (\sigma) ] \, \mathrm{d} \sigma \label{e.marameo} \\
& \quad + \sum_{n=0}^{N_{m}-1} \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\!\! \mathcal{P}(t(\sigma), u(\sigma), z \circ \alpha (r^{m}_{n+1})) \, t'(\sigma) \, \mathrm{d} \sigma \nonumber \,. \end{align}
We now pass to the limit as~$m\to\infty$. By Lemma \ref{l.L1-2} we know that the first sum in~\eqref{e.marameo} converges to $$
\int_0^{R'} | \partial_z^- \F |( \tilde{t}(\rho), \tilde{u}( \rho ) , \tilde{z}(\rho)) \| \tilde{z}' (\rho) \|_{L^2} \, \mathrm{d} \rho . $$
By the change of variable formula~\eqref{e.changeofvariable} with $g(\sigma)= |\partial_{z}^{-} \F| (t(\sigma), u(\sigma), z(\sigma)) \|z'(\sigma)\|_{L^{2}}$, and recalling the definition of~$\tilde{t}$,~$\tilde{u}$,~$\tilde{z}$ and that~$\tilde{z}'= z' \circ \alpha$ a.e.~in $[0,R]$ by Lemma~\ref{l.tildez}, we get \begin{align}
\int_0^{R'} | \partial_z^- \F |( \tilde{t}(\rho), \tilde{u}( \rho ) , \tilde{z}(\rho)) \| \tilde{z}' (\rho) \|_{L^2} \, \mathrm{d} \rho & = \int_0^s | \partial_z^- \F |( t(\sigma), u ( \sigma ) , z (\sigma)) \| z' (\sigma) \|_{L^2} \,\mathbf{1}_{U} ( \sigma) \, \mathrm{d} \sigma \nonumber \\
& = \int_0^s | \partial_z^- \F |( t(\sigma), u ( \sigma ) , z (\sigma)) \| z' (\sigma) \|_{L^2} \, \mathrm{d} \sigma \,, \label{e.cucu} \end{align}
where in the last equality we have used the fact that $z'=0$ in $[0,S] \setminus U$, and hence $| \partial_z^- \F | \| z' \|_{L^2}=0$.
We claim that the second and the third sums in~\eqref{e.marameo} converge to \begin{align}
\int_0^s \partial_u \F ( t(\sigma), u ( \sigma ) , z (\sigma)) [ u' (\sigma) ] \, \mathrm{d} \sigma \qquad \text{and} \qquad \int_{0}^{s} \mathcal{P}( t(\sigma), u(\sigma), z(\sigma))\, t'(\sigma) \, \mathrm{d} \sigma\,, \label{e.uccu} \end{align} respectively. We notice that if the claim holds, then, passing to the limit in~\eqref{e.marameo} as~$m\to\infty$ and using~\eqref{e.cucu} we would get \begin{displaymath} \begin{split}
\F ( t(s), u (s) , z(s) ) \ge & \ \F( 0, u_0 , z_0) - \int_0^{s} | \partial_u \F | ( t(\sigma), u(\sigma) , z(\sigma) ) \| u' (\sigma) \|_{H^1} \, \mathrm{d} \sigma \\
& - \int_{0}^{s} | \partial^-_z \F | ( t(\sigma), u(\sigma) , z(\sigma) ) \| z' (\sigma) \|_{L^2} \, \mathrm{d} \sigma + \int_{0}^{s} \mathcal{P}(t(\sigma), u(\sigma), z(\sigma)) \, t'(\sigma) \, \mathrm{d} \sigma \,. \end{split} \end{displaymath} Let us prove the claim. Fix $\bar\sigma \in (0, s)$ and let $n$ (depending on $\bar{\sigma}$ and $m$) be such that $\bar\sigma \in [\alpha(r^m_n) , \alpha (r^{m}_{n+1}) )$. Note that, since $\alpha$ may be discontinuous, it may happen that $\alpha (r^{m}_{n+1}) - \alpha (r^m_n) \not\to 0$. However, we can write $$
z ( \bar\sigma) = z ( \alpha (r^m_n) ) + \int_{\alpha(r^m_n)}^{\bar\sigma} z' ( \sigma) \, \mathrm{d} \sigma $$ and thus \begin{align*}
\| z ( \bar \sigma) - z ( \alpha (r^m_n) ) \|_{L^2} & \le \int_{\alpha(r^m_n)}^{\bar\sigma} \| z' ( \sigma) \|_{L^2} \, \mathrm{d} \sigma = \int_{\alpha(r^m_n)}^{\bar\sigma} \| z' ( \sigma) \|_{L^2}\, \mathbf{1}_{U}(\sigma) \, \mathrm{d} \sigma \\
& \le \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \,\mathbf{1}_{U}(\sigma) \, \mathrm{d} \sigma = \beta \circ \alpha(r^{m}_{n+1}) - \beta \circ \alpha(r^m_n)= r^{m}_{n+1} - r^m_n \to 0 , \end{align*}
where in the last limit we have used the vanishing-mesh property of the subdivisions $r^m_n$. Arguing in the same way, we also prove that $\|z(\alpha(r^{m}_{n+1})) - z(\alpha(r^m_n))\|_{L^2} \to 0$ as $m\to \infty$, which implies that $\| z ( \bar{\sigma} ) - z ( \alpha ( r^{m}_{n+1} ) ) \|_{L^2} \to 0$ as well. Hence, we have shown that the sequence $\sum_{n=0}^{N_{m}-1} z(\alpha(r^{m}_{n+1}))\,\mathbf{1}_{[\alpha(r^m_n), \alpha(r^{m}_{n+1}))}$ converges to~$z$ pointwise in~$(0,s)$ with respect to the $L^{2}(\Omega)$-norm.
Recall that $$
\partial_{u} \F( t(\sigma), u (\sigma) , z \circ \alpha (r^{m}_{n+1})) [u'(\sigma)] =
\int_{\Omega} \partial_{\strain} W \big(z \circ \alpha (r^{m}_{n+1}) ,\strain(u (\sigma) + g \circ t (\sigma) ) \big){\,:\,}\strain(u'(\sigma))\,\mathrm{d} x . $$
By $(c)$ in Lemma \ref{l.HMWw} we have $| \partial_{\strain} W \big(z \circ \alpha (r^{m}_{n+1}),\strain(u (\sigma) + g \circ t (\sigma) ) \big) | \le C | \strain(u (\sigma) + g \circ t (\sigma) ) | $. Let us consider a subsequence (not relabelled) such that $z \circ \alpha (r^{m}_{n+1}) \to z (\sigma)$ a.e.~in $\Omega$. Then, since $W$ is of class $C^1$, $$ \partial_{\strain} W \big(z \circ \alpha (r^m_{n+1}) ,\strain(u (\sigma) + g \circ t (\sigma) ) \big) \to \partial_{\strain} W \big(z ( \sigma) ,\strain(u (\sigma) + g \circ t (\sigma) ) \big) \quad \text{a.e.~in $\Omega$.} $$ By dominated convergence, $$ \partial_{u}\F( t(\sigma), u (\sigma) ,z \circ \alpha (r^{m}_{n+1}) )[u'(\sigma)] \to \partial_{u}\F ( t(\sigma), u (\sigma) ,z (\sigma))[u'(\sigma)] \quad\text{for a.e.~$\sigma\in[0,s]$}. $$
Applying again dominated convergence (in the integral over~$[0,s]$) we prove the first part of the claim~\eqref{e.uccu}, i.e., \begin{displaymath}
\lim_{m\to\infty} \sum_{n=0}^{N_{m}-1} \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\! \partial_u \F ( t(\sigma), u ( \sigma) , z \circ \alpha ( r^{m}_{n+1} ) ) [ u' (\sigma) ] \, \mathrm{d} \sigma = \int_{0}^{s} \partial_{u} \F (t(\sigma), u(\sigma), z(\sigma)) [u'(\sigma)]\, \mathrm{d} \sigma\,. \end{displaymath}
Following the proof of~\eqref{e.91} we also obtain the second part of the claim~\eqref{e.uccu}, that is, \begin{displaymath} \lim_{m\to\infty} \sum_{n=0}^{N_{m}-1} \int_{\alpha(r^m_n)}^{\alpha(r^{m}_{n+1})} \!\!\! \mathcal{P}(t(\sigma), u(\sigma), z \circ \alpha (r^{m}_{n+1})) \, t'(\sigma) \, \mathrm{d} \sigma = \int_{0}^{s} \mathcal{P}(t(\sigma), u(\sigma), z(\sigma))\, t'(\sigma) \, \mathrm{d} \sigma \,. \end{displaymath}
{\bf Step 2: \boldmath{ $s \in U^{c}$.}} In this case $s \in ( s^-_i , s^+_i]$ for some index $i \in \N$. In the interval $[ s^{-}_{i} , s]$ we have $z(\sigma) = z (s^-_i)$ and~$z'(\sigma)=0$, while~$t$ and~$u$ are of class $W^{1,\infty}$. Thus, we can write \begin{align*}
\F ( t(s), u ( s) , z (s)) & = \F ( t(s), u(s) , z(s^-_i) ) = \F ( t(s^{-}_{i}) , u (s^-_i) , z (s^-_i) )
+ \int_{s^-_i}^{s} \partial_u \F ( t(\sigma), u ( \sigma) , z(s^-_i)) [ u' ( \sigma) ] \, \mathrm{d} \sigma \\
& \quad +\int_{s^{-}_{i}}^{s} \mathcal{P} (t(\sigma), u(\sigma), z(s^{-}_{i}))\, t'(\sigma)\, \mathrm{d} \sigma \\
& \ge \F ( t(s^{-}_{i}), u (s^-_i) , z (s^-_i) )
- \int_{s^-_i}^{s} | \partial_u \F | ( t(\sigma), u ( \sigma) , z(\sigma)) \, \| u' ( \sigma) \|_{H^1} \, \mathrm{d} \sigma \\
&\quad -\int_{s^{-}_{i}}^{s} | \partial^-_z \F | ( t(\sigma), u(\sigma) , z(\sigma) ) \, \| z' (\sigma) \|_{L^2} \, \mathrm{d} \sigma + \int_{s^{-}_{i}}^{s} \mathcal{P}(t(\sigma), u(\sigma), z(\sigma)) \, t'(\sigma)\, \mathrm{d} \sigma \,. \end{align*} Since $s^-_i \in U$, we can apply the previous step and conclude the proof.
{\vrule width 6pt height 6pt depth 0pt}
\appendix
\section{Comparing different parametrizations}
In this appendix we will compare, qualitatively, the evolutions of Theorem~\ref{t.1} with those of \cite[Theorem~4.2]{KneesNegri_M3AS17}, or, more precisely, we will compare the evolutions obtained here, employing the $H^1$-norm for~$u$ and the $L^2$-norm for~$z$, with those obtained employing energy norms. As we will see, the evolutions will be qualitatively the same (up to subsequences) even if these norms are not equivalent.
We need to consider the setting of~\cite{KneesNegri_M3AS17}, otherwise energy norms would not be defined. Let $\mathcal{J} \colon [0,T] \times H^1_0 (\Omega ; \mathbb{R}^2) \times H^1 (\Omega; [0,1]) \to [0,+\infty)$ be given by $$
\mathcal{J} (t,u,z) = \tfrac12 \int_\Omega (z^2 + \eta) \stress (u + g(t)){\,:\,} \strain(u + g(t)) \, \mathrm{d} x + \tfrac12 \int_\Omega |\nabla z |^2 + (z-1)^2 \, \mathrm{d} x , $$ where we assume that the boundary datum~$g$ belongs to $C^{1,1}([0,T]; W^{1,p}(\Omega;\R^{2}))$, $p>2$. Note that this energy is separately quadratic, thus it is natural, and technically convenient, to introduce a couple of energy (intrinsic) norms: $$
\| u \|^2_{z} = \int_\Omega (z^2 + \eta) \stress(u){\,:\,} \strain(u)\, \mathrm{d} x ,
\qquad
\| z \|^2_{u} = \int_\Omega | \nabla z |^2 + z^2 ( 1 + \stress(u){\,:\,} \strain(u)) \, \mathrm{d} x , $$ which correspond, respectively, to the quadratic part of the energies $\mathcal{J}(t, \cdot, z)$ and $\mathcal{J}(t, u, \cdot)$. Accordingly, we employ the slopes \begin{align*}
|\partial_{u} \mathcal{J}|_z (t,u,z) &= \max \,\{-\partial_{u}\mathcal{J}(t,u,z)[\varphi]:\,\varphi \in H^1_0(\Omega ; \R^2),\, \|\varphi\|_{z}\leq 1\}\,,\\[1mm]
|\partial_{z}^{-} \mathcal{J}|_u (t,u,z) &= \max \,\{-\partial_{z} \mathcal{J}(t,u,z)[\xi]:\,\xi \in H^1(\Omega),\,\xi\leq 0,\,\|\xi\|_{u}\leq 1\} \,. \end{align*}
Let us consider again the alternate scheme (at time $t^k_i$) \begin{eqnarray*} &&\displaystyle u^{k}_{i,j+1}:=\argmin \{\mathcal{J}(t^{k}_{i},u,z^{k}_{i,j}):\,u\in \U \} , \\[2mm] &&\displaystyle z^{k}_{i,j+1}:=\argmin \{\mathcal{J}(t^{k}_{i},u^{k}_{i,j + 1},z):\, z\in \Z,\, z\leq z^{k}_{i,j}\}. \end{eqnarray*} We remark that, given $\tau_k$, the families $u^{k}_{i,j}$ and $z^k_{i,j}$ are uniquely determined (by strict separate convexity of the energy).
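A minimal numerical sketch of this alternate scheme, on a one-dimensional surrogate of $\mathcal{J}$ (Python; the coefficients `eta`, `a`, `force` and the toy energy itself are ad hoc choices, not the functional of the paper), shows the two features used above: each sweep minimizes exactly in $u$ and then in $z$ under the irreversibility constraint, and the iterates converge to a separately stationary pair.

```python
# Toy alternate minimization for a separately convex 1-d analogue of J:
#   J(u, z) = 0.5*(z^2 + eta)*a*u^2 - force*u + 0.5*(z - 1)^2,
# minimized alternately in u (exactly) and in z (exactly, subject to
# the irreversibility constraint z <= z_prev and z in [0, 1]).
eta, a, force = 1e-3, 1.0, 2.0

def J(u, z):
    return 0.5 * (z * z + eta) * a * u * u - force * u + 0.5 * (z - 1.0) ** 2

def alternate_minimize(z_prev, tol=1e-12, max_iter=1000):
    z = z_prev
    u = force / ((z * z + eta) * a)   # argmin_u J(., z): quadratic in u
    for _ in range(max_iter):
        # argmin_z J(u, .) over 0 <= z <= z_prev: the unconstrained
        # minimizer of 0.5*a*u^2*z^2 + 0.5*(z-1)^2 is 1/(1 + a*u^2),
        # then clamp to the admissible interval
        z_new = min(z_prev, max(0.0, 1.0 / (1.0 + a * u * u)))
        u_new = force / ((z_new * z_new + eta) * a)
        if abs(z_new - z) + abs(u_new - u) < tol:
            return u_new, z_new
        u, z = u_new, z_new
    return u, z

u_star, z_star = alternate_minimize(z_prev=1.0)
print(u_star, z_star, J(u_star, z_star))
```

Since each half-step is an exact minimization of a strictly convex function, the energy is non-increasing along the sweeps and the limit is uniquely determined by the scheme, mirroring the uniqueness remark for $u^{k}_{i,j}$ and $z^k_{i,j}$.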
Following \cite{KneesNegri_M3AS17}, we interpolate and parametrize the discrete configurations $u^k_{i,j}$ and $z^k_{i,j}$ with respect to the energy norms $\| \cdot \|^2_{z}$ (for the displacement field) and $\| \cdot \|^2_{u}$ (for the phase field). We remark that in this case it is enough to consider piecewise affine interpolation, which actually coincides, for both $u$ and $z$, with the gradient flow in the energy norm. As a result, we get a sequence of arc-length parametrizations $ ( \bar{t}_k , \bar{u}_k, \bar{z}_k)$, bounded in $W^{1,\infty} ( [ 0, R] ; [0,T] \times H^1_0(\Omega ; \mathbb{R}^2) \times H^1 (\Omega))$ and of uniformly finite length, i.e., with $R$ independent of $k \in \mathbb{N}$. Invoking \cite[Lemma 4.3]{KneesNegri_M3AS17}, there exists a subsequence (not relabelled) and a limit $( \bar{t}, \bar{u}, \bar{z})$ in $W^{1,\infty} ( [ 0, R] ; [0,T] \times H^1_0(\Omega ; \mathbb{R}^2) \times H^1 (\Omega))$ such that for every sequence~$r_{k}$ converging to~$r \in[0,R]$ we have \begin{equation} \label{e.barconv} \bar{t}_{k}(r_{k})\to \bar{t} (r)\,,\qquad \bar{u}_{k}(r_{k})\to \bar{u}(r) \text{ in~$H^1_0(\Omega; \R^2)$,}\qquad \bar{z}_{k}(r_{k}) \rightharpoonup \bar{z}(r) \text{ weakly in~$H^{1}(\Omega)$}. \end{equation} Moreover, invoking~\cite[Theorem 4.2]{KneesNegri_M3AS17} the limit evolution satisfies the following properties: \begin{itemize} \item [$(\bar{a})$] \emph{Regularity}: $(\bar{t},\bar{u},\bar{z})\in W^{1,\infty}([0,R];[0,T]\times H^1_0 ( \Omega;\R^{2}) \times H^{1}(\Omega; [0,1]))$, and for a.e.~$r\in[0,R]$ \begin{displaymath}
\bar{t\,}'(r)+\| \bar{u}'(r)\|_{ \bar{z}(r)}+\| \bar{z}'(r)\|_{\bar{u} (r)}\leq 1\,, \end{displaymath} where the symbol~$'$ denotes the derivative w.r.t.~the parametrization variable $r$;
\item [$(\bar{b})$] \emph{Time parametrization}: the function $\bar{t}\colon [0,R]\to[0,T]$ is non-decreasing and surjective;
\item [$(\bar{c})$] \emph{Irreversibility}: the function $\bar{z}$ is non-increasing and $0 \le \bar{z}(r) \leq 1$ for every $0\leq r \leq R$;
\item [$(\bar{d})$] \emph{Equilibrium}: for every continuity point $r\in[0,R]$ of~$(\bar{t},\bar{u},\bar{z})$ \begin{displaymath}
|\partial_{u} \mathcal{J}|_{\bar{z}(r)} (\bar{t}(r),\bar{u}(r),\bar{z}(r))=0\qquad\text{and}\qquad|\partial_{z}^{-} \mathcal{J}|_{\bar{u}(r)} (\bar{t}(r),\bar{u}(r),\bar{z}(r))=0 \,; \end{displaymath}
\item [$(\bar{e})$] \emph{Energy-dissipation equality}: for every $r \in[0,R]$ \begin{equation}\label{e.eneqR} \begin{split}
\mathcal{J}(\bar{t}(r),& \ \bar{u}(r), \bar{z}(r)) = \mathcal{J}(0,u_0,z_0)-\int_{0}^{r}|\partial_{u} \mathcal{J}|_{\bar{z}(\rho)} (\bar{t}(\rho),\bar{u}(\rho),\bar{z}(\rho))\, \| \bar{u}' (\rho) \|_{\bar{z}(\rho)} \mathrm{d}\rho\\
&-\int_{0}^{r} \!\! |\partial_{z}^{-} \mathcal{J}|_{\bar{u}(\rho)} (\bar{t}(\rho),\bar{u}(\rho),\bar{z}(\rho))\, \| \bar{z}' (\rho) \|_{\bar{u}(\rho)} \mathrm{d}\rho +\int_{0}^{r} \!\! \mathcal{P}( \bar{t}(\rho),\bar{u}(\rho), \bar{z}(\rho)) \,\bar{t\,} '(\rho)\,\mathrm{d}\rho\,. \end{split} \end{equation} \end{itemize} In~\cite{KneesNegri_M3AS17} the authors showed property~$(\bar{d})$ for every $r\in[0,R]$ with $\bar{t}'(r)>0$. However, it is not difficult to see that the same equilibrium condition is verified at continuity points.
\noindent Moreover, by \cite[Proposition 4.1]{KneesNegri_M3AS17} we have \begin{itemize} \item [$(\bar{f})$] {\it Non-degeneracy:} there exists $C>0$ such that for a.e.~$r\in[0,R]$ \begin{equation} \label{e.nondegR}
C < \bar{t\,}'(r) + \| \bar{u}'(r) \|_{\bar{z}(r)} + \| \bar{z}'(r) \|_{\bar{u}(r)} . \end{equation} \end{itemize} Finally, note that, by the separate differentiability of the energy, the equilibrium conditions $(\bar{d})$ can be written in an equivalent ``norm-free'' fashion as \begin{itemize}
\partial_{u} \mathcal{J} (\bar{t}(r), \bar{u}(r), \bar{z}(r)) [ \varphi] = 0 ,
\qquad \text{and} \qquad
\partial_{z} \mathcal{J} (\bar{t}(r),\bar{u}(r),\bar{z}(r)) [ \xi ] = 0 , \end{displaymath} for every $\varphi \in H^1_0 ( \Omega ; \mathbb{R}^2)$ and every $\xi \in H^1(\Omega)$ with $\xi \leq 0$. \end{itemize}
At this point, consider the subsequence (not relabelled) converging to the limit~$(t,u,z)$ and let us re-interpolate the discrete configurations~$u^k_{i,j}$ and~$z^k_{i,j}$ with respect to the norms~$\| \cdot \|_{H^1}$ (for the displacement field) and $\| \cdot \|_{L^2}$ (for the phase field) as we did in Section~\ref{s.4.1}. In this way we get a new sequence of parametrizations~$(t_k , u_k ,z_k)$ bounded in $W^{1,\infty} ( [ 0, S] ; [0,T] \times H^1_0(\Omega ; \mathbb{R}^2) \times L^2 (\Omega))$. Clearly, we can apply Proposition~\ref{p.compactness} which provides (up to a further subsequence) a limit parametrization $(t , u ,z) \in W^{1,\infty} ( [ 0, S] ; [0,T] \times H^1_0(\Omega ; \mathbb{R}^2) \times L^2 (\Omega) )$ such that \begin{displaymath} t_{k}(s_{k})\to t(s)\,,\qquad u_{k}(s_{k})\to u(s) \text{ in~$H^1(\Omega; \R^2)$,}\qquad z_{k}(s_{k}) \rightharpoonup z(s) \text{ weakly in~$H^{1}(\Omega)$}, \end{displaymath} for every sequence~$s_{k}$ converging to~$s\in[0,S]$. The limit $(t,u,z)$ satisfies properties $(a)$-$(e)$ of Theorem~\ref{t.1}.
We recall that $(t_k, u_k, z_k)$ is defined in the points~$s^k_{i,j}$ and~$s^k_{i,j+\frac12}$ (see Section~\ref{s.4.1}). In a similar way, the interpolation $(\bar{t}_k, \bar{u}_k, \bar{z}_k)$ is defined in points of the form~$r^k_{i,j}$ and~$r^k_{i,j+\frac12}$ (see Section~4.3 in~\cite{KneesNegri_M3AS17}). Moreover, we notice that the interpolation nodes are different since the underlying parametrizations are different. However, the configurations computed by the alternate minimization scheme are the same. Therefore, we have that \begin{displaymath} t_{k}(s^{k}_{i,j}) = \bar{t}_{k}(r^{k}_{i,j})\, , \quad u_{k}(s^{k}_{i,j}) = \bar{u}_{k}(r^{k}_{i,j}) \,, \quad z_{k}(s^{k}_{i,j}) = \bar{z}_{k}(r^{k}_{i,j})\,. \end{displaymath} The same holds for nodes of the form $s^{k}_{i,j+\frac{1}{2}}$ and $r^{k}_{i,j+\frac{1}{2}}$.
Since~$(\bar{t}_k, \bar{u}_k , \bar{z}_k)$ is piecewise affine while $(t_k , u_k ,z_k)$ is not, a direct comparison of the triples~$(\bar{t}, \bar{u},\bar{z})$ and~$(t,u,z)$ is not immediate. Nevertheless, we can show the following ``equivalence'' of the reparametrizations.
\begin{lemma} \label{l.s<r} There exist two positive constants $C_{1}, C_{2}$ such that for every $k\in\mathbb{N}\setminus\{0\}$ and every $i\in\{1,\ldots, k\}$ \begin{equation} \label{e.s<r} C_{1} (s^{k}_{i+1} - s^{k}_{i}) \leq r^{k}_{i+1} - r^{k}_{i} \leq C_{2} (s^{k}_{i+1} - s^{k}_{i}) \,. \end{equation} \end{lemma}
\begin{proof}
Using the fact that $\| \cdot \|_z$ and $\| \cdot \|_{H^1}$ are equivalent, by Korn's inequality, while~$\|\cdot\|_{u}$ and~$\|\cdot\|_{H^{1}}$ are equivalent by~\cite[Lemma~2.3]{KneesNegri_M3AS17}, by \eqref{e.74} we can write \begin{align*}
s^k_{i+1} - s^k_i
& = \tau_k + \sum_{j=0}^{\infty} \big( L ( \zeta^k_{i,j} ) + L ( w^k_{i,j}) \big) \le \tau_k + C \sum_{j=0}^{\infty} \big( \| z^k_{i,j} - z^k_{i,j+1} \|_{H^1} + \| u^k_{i,j} - u^k_{i,j+1} \|_{H^1} \big) \\
& \le \tau_k + C' \sum_{j=0}^{\infty} \big( r^k_{i,j+1} - r^k_{i,j} \big) \le C' ( r^k_{i+1} - r^k_i ) \,. \end{align*}
On the other hand, by Proposition~\ref{p.5} and Corollary~\ref{c.3} we have that \begin{displaymath}
\| z^k_{i,j} - z^k_{i,j+1} \|_{H^{1}} \leq \left\{ \begin{array}{ll}
C \| u^k_{i,j} - u^k_{i,j+1} \|_{H^1} & \text{if $j\geq 1$}\,,\\[1mm]
C\tau_{k} & \text{if $j=0$}\,. \end{array}\right. \end{displaymath} Hence, again by equivalence of norms we get \begin{align*}
r^k_{i+1} - r^k_i
& = \tau_k + \sum_{j=0}^{\infty} \big( \| z^k_{i,j} - z^k_{i,j+1} \|_{u^{k}_{i,j+1}} + \| u^k_{i,j} - u^k_{i,j+1} \|_{z^{k}_{i,j}} \big) \\
& \le C' \Big( \tau_k + \sum_{j=0}^{\infty} \| u^k_{i,j} - u^k_{i,j+1} \|_{H^{1}} \Big)\le C' ( s^k_{i+1} - s^k_i ) \,. \end{align*} This concludes the proof of~\eqref{e.s<r}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
Let us consider an interval of the form $(s^k_{i_k,j_k+\frac12} , s^k_{i_k,j_k+1}) \subset (0,S)$ and the corresponding interval $(r^k_{i_k,j_k+\frac12} , r^k_{i_k,j_k+1}) \subset (0,R)$. By definition, we have $$
t^k_{i_k} = \bar{t}_k (r) = t_k (s)
\quad \text{ and } \quad
u^k_{i_k,j_k+1} = \bar{u}_k (r) = u_k ( s) , $$ for every $r \in (r^k_{i_k,j_k+\frac12} , r^k_{i_k,j_k+1})$ and every $s \in (s^k_{i_k,j_k+\frac12} , s^k_{i_k,j_k+1})$. By contrast, the phase-field interpolations coincide only at the endpoints, i.e., \begin{equation} \label{e.zuzzu}
\bar{z}_k (r^k_{i_k,j_k+\frac12}) = z_k (s^k_{i_k,j_k+\frac12}) = z^k_{i_k,j_k}
\quad \text{ and } \quad
\bar{z}_k (r^k_{i_k,j_k+1}) = z_k (s^k_{i_k,j_k+1}) = z^k_{i_k,j_k+1}. \end{equation}
Now, up to subsequences (non relabelled), we can assume that, as $k \to \infty$, $$
s^k_{i_k,j_k+\frac12} \to s^- , \qquad s^k_{i_k,j_k+1} \to s^+ , \qquad r^k_{i_k,j_k+\frac12} \to r^- , \qquad r^k_{i_k,j_k+1} \to r^+ . $$ Since the parametrizations are different, in general we should distinguish among the following cases: $s^- = s^+$, $s^- < s^+$, $r^- = r^+$, and $r^- < r^+$; however, the situation is much simpler, thanks to the following lemma.
\begin{lemma} \label{l.s=r} We have $r^- = r^+$ if and only if $s^- = s^+$. \end{lemma}
\begin{proof} Assume that $r^- = r^+$. By compactness, we know that $\bar{z}_k ( r^k_{i_k,j_k+\frac12} ) = z^k_{i_k,j_k} \weakto \bar{z} ( r^-)$ weakly in~$H^{1}(\Omega)$ and that $\bar{z}_k ( r^k_{i_k,j_k+1} ) = z^k_{i_k,j_k+1} \weakto \bar{z} ( r^+)$ weakly in~$H^{1}(\Omega)$. Since $r^- = r^+$, we have $\bar{z}(r^-) = \bar{z}(r^+)$ and
$$\| z^k_{i_k,j_k} - z^k_{i_k,j_k+1} \|_{H^1} \le C \big( r^k_{i_k,j_k+1} - r^k_{i_k,j_k+\frac12} \big) \to 0 . $$ By \eqref{e.16.06} we know that \begin{align*}
\big( s^k_{i_k,j_k+1} - s^k_{i_k,j_k+\frac12} \big) = L (\zeta^k_{i_k,j_k}) \le
C \| z^k_{i_k,j_k} - z^k_{i_k,j_k+1} \|_{H^1} . \end{align*} Thus $s^- = s^+$.
Assume now that $s^- = s^+$. Arguing as above, by \eqref{e.zuzzu} we have $z (s^-) = \bar{z} (r^-) = z(s^+) = \bar{z} (r^+)$. Moreover, since $\bar{t}_k$, $t_k$, $\bar{u}_k$, and $u_k$ are constant in the corresponding intervals, in the limit we have $t (s^-) = \bar{t} (r^-) = t(s^+) = \bar{t} (r^+)$ and $u (s^-) = \bar{u} (r^-) = u(s^+) = \bar{u} (r^+ )$. Hence, if $r^- < r^+$, we would contradict the non-degeneracy condition \eqref{e.nondegR}.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
As a consequence of Lemma~\ref{l.s<r}, the solutions $(\bar{t}, \bar{u}, \bar{z})$ and $(t,u,z)$ coincide at continuity points.
\begin{proposition} \label{p.z=barz} Let $r$ be a continuity point for~$(\bar{t}, \bar{u}, \bar{z})$. Then, there exists a continuity point~$s$ for~$(t,u,z)$ such that $(\bar{t} (r), \bar{u} (r), \bar{z}(r) ) = (t (s), u (s), z (s))$.
Vice versa, if $s$ is a continuity point for~$(t,u,z)$, then there exists a continuity point~$r$ for~$(\bar{t},\bar{u},\bar{z})$ such that $(t (s), u (s), z (s)) = (\bar{t} (r), \bar{u} (r), \bar{z}(r) )$. \end{proposition}
\begin{proof} Fix $\delta >0$. Since $r$ is a continuity point, by monotonicity of~$\bar{t}$ we have $\bar{t}(r + \delta) > \bar{t} (r)$. Since~$\bar{t}_k$ converges pointwise to~$\bar{t}$, we have $ \bar{t}_k (r + \delta ) - \bar{t}_k (r) \ge \tfrac{1}{2} ( \bar{t}(r + \delta) - \bar{t} (r)) > 0$ for every $k$ sufficiently large. As~$\bar{t}_k$ changes only on parametrization intervals of the form $(r^k_{i,-1} , r^k_{i,0})$ and since $\tau_k \to 0$, there exist two indices~$i_k< i'_{k}$ such that \begin{equation}\label{e.stolemma}
r^{k}_{i_{k}-1}\leq r < r^k_{i_k} < r^k_{i_k,0} \leq \ldots \leq r^{k}_{i'_{k}} \leq r + \delta \leq r^{k}_{i'_{k}+1}\,. \end{equation} Hence for every index $j \in \N$, we have $$
r < r^k_{i_k,j+ \frac12} \leq r^k_{i_k,j+1} \leq r^k_{i_k+1,0} < r + \delta . $$ Since $\delta$ can be arbitrarily small, we can find a sequence $(i_{k},j_{k})$ such that $$
r^k_{i_k,j_k+ \frac12} \to r \quad \text{ and } \quad r^k_{i_k,j_k + 1} \to r \,. $$ Then $\bar{z}_{k} ( r^k_{i_k,j_k + 1} ) \weakto \bar{z} (r)$ weakly in $H^1(\Omega)$. Since, by construction, $\bar{z}_{k} ( r^k_{i_k,j_k + 1} ) = z_{k} ( s^k_{i_k,j_k + 1} ) $ we have $z _{k}( s^k_{i_k,j_k + 1} ) \weakto \bar{z} (r)$ weakly in~$H^1(\Omega)$. Up to a subsequence (not relabelled) $s^k_{i_k,j_k + 1} \to s$ and thus $z_{k} ( s^k_{i_k,j_k + 1} ) \weakto z (s)$. We conclude that $z(s) = \bar{z} (r)$. In a similar way $\bar{t} (r) = t(s)$ and $\bar{u} ( r) = u (s)$.
It remains to show that~$s$ is a continuity point for~$(t,u,z)$. To this aim, let us set, up to a subsequence, $s_{\delta}\coloneq \lim_{k} s^{k}_{i'_{k}}$, where the indices~$i'_{k}$ have been defined in~\eqref{e.stolemma}. Hence, applying Lemma~\ref{l.s<r} we deduce that \begin{displaymath}
|s - s_{\delta}| = \lim_{k\to\infty}\, |s^{k}_{i_{k}, j_{k}+1} - s^{k}_{i'_{k}}| \leq \lim_{k\to\infty}\, |s^{k}_{i_{k}} - s^{k}_{i'_{k}}| \leq C \lim_{k\to\infty}\, |r^{k}_{i_{k}} - r^{k}_{i'_{k}}| \leq C\delta\,. \end{displaymath}
By definition of~$s_{\delta}$ we have that $t(s_{\delta})= \lim_{k\to\infty} t_{k}(s^{k}_{i'_{k}}) = \lim_{k\to\infty} \bar{t}_{k}(r^{k}_{i'_{k}})$. By~\eqref{e.stolemma} we get that $| \bar{t}_{k}(r^{k}_{i'_{k}}) - \bar{t}_{k}(r+\delta)|\leq \tau_{k}$, from which we deduce that $t(s_{\delta}) = \bar{t} ( r + \delta) > \bar{t}(r) = t(s)$. This implies that~$s$ is a continuity point for~$(t,u,z)$.
The converse can be shown in a similar way.
{\vrule width 6pt height 6pt depth 0pt}
\end{proof}
At discontinuity points, by contrast, the evolutions $(t,u,z)$ and $(\bar{t}, \bar{u}, \bar{z})$ interpolate the same configurations, but along different paths. To illustrate this, let us consider an interval of the form $(r^k_{i_{k},j_{k}+\frac12}, r^k_{i_{k},j_{k}+1})$ such that $r^k_{i_{k},j_{k}+\frac12} \to r^-$, $r^k_{i_{k},j_{k}+1} \to r^+$ with $r^- < r^+$. As a consequence, both $\bar{t}$ and $\bar{u}$ are constant in $(r^-, r^+)$, and thus no $r \in (r^-, r^+)$ is a continuity point. In this case, we have $$
\bar{z}_{k} ( r^k_{i_{k},j_{k}+\frac12} ) \weakto \bar{z} ( r^-)
\quad \text{ and } \quad
\bar{z}_{k} ( r^k_{i_{k},j_{k}+1} ) \weakto \bar{z} ( r^+) \,. $$ By the non-degeneracy property of $(\bar{t}, \bar{u}, \bar{z})$, we deduce that $\bar{z} ( r^-) \neq \bar{z} ( r^+)$. Moreover, up to a subsequence, we have $$
z_{k} ( s^k_{i_{k},j_{k}+\frac12} ) \weakto z ( s^-)
\quad \text{ and } \quad
z_{k} ( s^k_{i_{k},j_{k}+1} ) \weakto z ( s^+) \,. $$ Since $z\in W^{1,\infty}([0,S]; L^{2}(\Omega))$ and $z(s^{-}) = \bar{z}(r^{-}) \neq \bar{z}(r^{+}) = z(s^{+})$, we get that $s^- \neq s^+$. Recalling that $t$ and $u$ are constant in $(s^-,s^+)$, the energy balance of Theorem \ref{t.1} reads $$
\mathcal{J} ( t(s^-) , u(s^-) , z(s^+) ) = \mathcal{J} ( t(s^-) , u(s^-) , z(s^-) ) - \int_{s^-}^{s^+} \| z' (\sigma) \|_{L^2} \, | \partial^-_v \mathcal{J}| ( t(s^-) , u(s^-) , z(\sigma) ) \, \mathrm{d} \sigma . $$ Thus $z$ is a (normalized) unilateral gradient flow in $L^2$ with initial datum $z(s^-)$. By contrast, $\bar{z}$ is the affine interpolation of $\bar{z} ( r^-)$ and $\bar{z} ( r^+)$. Thus, in general, $\bar{z}$ and $z$ do not coincide on the corresponding intervals $(r^-,r^+)$ and $(s^-,s^+)$, even though they coincide at the endpoints.
\section{On the alternate behavior in discontinuity points \label{AppB}}
Let us consider the set $U$ defined in \eqref{e.U}
and denote by $\mathbf{1}_{U}$ its characteristic function. From \eqref{e.120} we know that \begin{align*}
\F( t(s), u (s), z (s)) \leq & \ \F( 0, u_{0}, z_{0}) - \int_{0}^{s} \!\! |\partial_{u} \F | ( t (\sigma), u ( \sigma ), z ( \sigma ))\, \| u' (\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{0}^{s} \!\! | \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) \, \mathbf{1}_{U}(\sigma) \, \mathrm{d} \sigma - \int_{0}^{s} \!\! \liminf_{k\to\infty} | \partial_{z}^{-} \F | ( \underline{t}_{k}(\sigma), \underline{u}_{k}( \sigma ), z_{k}( \sigma )) \, \mathbf{1}_{U^{c}} (\sigma) \, \mathrm{d} \sigma . \end{align*}
Note that $z' (\sigma) = 0$ for every $\sigma \in U^c$ (see Lemma \ref{l.Uc}); thus, since $\| z' \|_{L^2} \le 1$ a.e.~in $[0,S]$, we have $\| z' \|_{L^2} \le \mathbf{1}_{U}$ a.e.~in $[0,S]$. As a consequence, the above estimate together with \eqref{e.eneq} yields \begin{align*} \F( t(s), u (s), z (s))
\leq & \ \F( 0, u_{0}, z_{0}) - \int_{0}^{s} \!\! |\partial_{u} \F | ( t (\sigma), u ( \sigma ), z ( \sigma ))\, \| u' (\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{0}^{s} \!\! | \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) \, \mathbf{1}_{U}(\sigma) \, \mathrm{d} \sigma +\int_{0}^{s} \mathcal{P}(t(\sigma), u(\sigma), z(\sigma)) \, t'(\sigma) \, \mathrm{d} \sigma \\
\leq & \ \F( 0, u_{0}, z_{0}) - \int_{0}^{s} \!\! |\partial_{u} \F | ( t (\sigma), u ( \sigma ), z ( \sigma ))\, \| u' (\sigma) \|_{H^1} \,\mathrm{d} \sigma \\
& - \int_{0}^{s} \!\! | \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) \, \| z' (\sigma) \|_{L^2} \, \mathrm{d} \sigma +\int_{0}^{s} \mathcal{P}(t(\sigma), u(\sigma), z(\sigma)) \, t'(\sigma) \, \mathrm{d} \sigma \\ = & \ \F ( t(s), u (s), z (s)) . \end{align*} Therefore, all inequalities turn into equalities and thus $$
0 \le \int_{0}^{s} \!\! | \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) \, ( \mathbf{1}_{U}(\sigma) - \| z' (\sigma) \|_{L^2} ) \, \mathrm{d} \sigma = 0 ; $$
hence $| \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) \, ( \mathbf{1}_{U}(\sigma) - \| z' (\sigma) \|_{L^2} ) =0$ a.e.~in $[0,S]$.
Therefore, if $| \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) \neq 0$ for every $\sigma \in (\sigma_1, \sigma_2) \subset U$, then $\| z' (\sigma) \|_{L^2} = 1$ a.e.~in $(\sigma_1, \sigma_2)$, and hence both $t'(\sigma)=0$ and $u'(\sigma)=0$ a.e.~in $(\sigma_1, \sigma_2)$, by (a) in Theorem \ref{t.1}.
This means that in the discontinuity interval $(\sigma_1, \sigma_2)$ only $z$ changes, following a normalized, unilateral $L^2$ gradient flow. In view of this observation, excluding the cases in which $| \partial_{z}^{-} \F | ( t(\sigma), u ( \sigma ), z( \sigma )) = 0$, we expect that in the presence of time-discontinuities the limit evolution is still alternate, and thus not simultaneous in $u$ and $z$. More precisely, consider a discontinuity at time $t$, with a transition from $(u^-,z^-)$ to $(u^+,z^+)$, parametrized in the interval $(\sigma^-, \sigma^+)$. Since~$t$ is the limit of continuity points, the left limit $(u^-,z^-)$ is an equilibrium configuration at time~$t$. We expect that the parametrizations of~$u$ and~$z$, in a right neighborhood of~$\sigma^-$, provide an alternate interpolation of sequences $z^-_m \nearrow z^-$, with $z^-_m \neq z^-$, and $u^-_m \to u^-$ such that $$ \quad \begin{cases}
u^-_{m-1} \in \mbox{argmin} \, \big\{ \F ( t \,, u , z^-_{m} ) \big\},
\\%[3mm]
z^-_{m-1} \in \mbox{argmin} \, \big\{ \F ( t \, , u^-_{m-1} \, , z \, ) : z \le z^-_{m} \big\}.
\end{cases} $$ The non-degeneracy condition $z^-_m \neq z^-$ is due to the fact that~$(u^-,z^-)$ is an equilibrium configuration, and thus a separate minimizer of the energy $\F ( t , \cdot, \cdot)$. Indeed, if $z^-_m = z^-$ for some index $m \in \mathbb{N}$, then, by uniqueness of the minimizer, $u^-_{m-1} = u^-$ and then $z^-_{m-1} = z^-$. By induction and by monotonicity, $z^-_m=z^-$ for every index $m \in \mathbb{N}$, and then $u^-_m = u^-$ for every $m \in \mathbb{N}$; thus, there would be no transition from $(u^-,z^-)$ to $(u^+,z^+)$. In a similar way, we expect sequences $z^+_m \searrow z^+$ and $u^+_m \to u^+$ in a left neighborhood of~$\sigma^+$ such that $$ \quad \begin{cases}
u^+_{m+1} \in \mbox{argmin} \, \big\{ \F ( t \,, u , z^+_{m} ) \big\},
\\%[3mm]
z^+_{m+1} \in \mbox{argmin} \, \big\{ \F ( t \, , u^+_{m+1} \, , z \, ) : z \le z^+_{m} \big\}.
\end{cases} $$ However, in this case we cannot exclude that $z^+_m = z^+$ for some index $m \in \mathbb{N}$. We remark that this qualitative behavior is confirmed by numerical computations.
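The alternating structure described above (minimize in $u$ with $z$ frozen, then minimize in $z$ under the monotonicity constraint) can be illustrated by a toy computation. The following sketch is purely hypothetical: scalar unknowns, an energy $F(u,z)=(u-z)^2+(z-0.5)^2$ of our own choosing, and explicit one-dimensional minimizers; none of it is taken from the analysis above.

```python
# Toy sketch (hypothetical energy, scalar unknowns) of the alternate
# minimization scheme: freeze z and minimize in u, then minimize in z
# subject to the irreversibility constraint z <= z_prev.

def alternate_minimization(z0, steps):
    """Run the staggered scheme for F(u, z) = (u - z)^2 + (z - 0.5)^2."""
    us, zs = [], [z0]
    z = z0
    for _ in range(steps):
        # u-step: (u - z)^2 is minimized at u = z.
        u = z
        # z-step: the unconstrained minimizer of (u - z)^2 + (z - 0.5)^2
        # is z* = (u + 0.5) / 2; enforce the constraint z <= z_prev.
        z = min(z, (u + 0.5) / 2.0)
        us.append(u)
        zs.append(z)
    return us, zs

us, zs = alternate_minimization(z0=1.0, steps=60)
# The z-iterates decrease monotonically (here z_m = 0.5 + 0.5**(m + 1))
# and both sequences approach the fixed point u = z = 0.5.
print(round(us[-1], 6), round(zs[-1], 6))  # -> 0.5 0.5
```

In this toy model the iterates form a strictly decreasing sequence $z_m \searrow 0.5$, mimicking the monotone sequences $z^{\pm}_m$ discussed above; the constraint $z \le z_{\mathrm{prev}}$ is never active here, since the unconstrained minimizer already decreases.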
\section*{Acknowledgments} The work of S.A. was supported by the SFB TRR109. M.N. is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
\end{document}
\begin{document}
\title[Regularized inner products of meromorphic modular forms and higher Green's functions]{Regularized inner products of meromorphic modular forms and higher Green's functions} \author{Kathrin Bringmann} \address{\rm Mathematical Institute, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany} \email{[email protected]} \author{Ben Kane} \address{\rm Mathematics Department, University of Hong Kong, Pokfulam, Hong Kong} \email{[email protected]} \author{Anna-Maria von Pippich} \address{\rm Fachbereich Mathematik, Technische Universit\"at Darmstadt, Schlo{\upshape{\ss}}gartenstr. 7, 64289 Darmstadt, Germany } \email{[email protected]} \date{\today} \subjclass[2010] {11F37, 11F11}
\keywords{CM-values, harmonic Maass forms, higher Green's functions, meromorphic modular forms, polar harmonic Maass forms, regularized Petersson inner products, theta lifts, weakly holomorphic modular forms}
\begin{abstract} In this paper we study generalizations of Poincar\'e series arising from quadratic forms, which naturally occur as outputs of theta lifts. Integrating against them yields evaluations of higher Green's functions. For this we require a new regularized inner product, which is of independent interest. \end{abstract}
\thanks{ The research of the first author is supported by the Alfried Krupp Prize for Young University Teachers of the Krupp foundation and the research leading to these results receives funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant agreement n. 335220 - AQSER. The research of the second author was supported by grants from the Research Grants Council of the Hong Kong SAR, China (project numbers HKU 27300314, 17302515, and 17316416).} \maketitle
\section{Introduction and statement of results}\label{sec:intro} While investigating the Doi--Naganuma lift, Zagier \cite{ZagierRQ} encountered interesting cusp forms of weight $2k$ for ${\text {\rm SL}}_2(\mathbb{Z})$ ($k\in \mathbb{N}_{\geq 2}$), defined for $\ell = \delta >0$ by \begin{equation} \label{fkd} f_{k,\ell}:=\sum_{\mathcal{A} \in \mathcal{Q}_{\ell}/{\text {\rm SL}}_2(\mathbb{Z})} f_{k,\ell,\mathcal{A}}. \end{equation} Here $\mathcal{Q}_{\ell}$ is the set of integral binary quadratic forms of discriminant $\ell \in \mathbb{Z}$, and for $\mathcal{A}$ an ${\text {\rm SL}}_2(\mathbb{Z})$-equivalence class of quadratic forms of discriminant $\ell$, we set ($z\in\mathbb{H}$) \begin{equation*}
f_{\mathcal{A}}(z)=f_{k,\ell, \mathcal{A}}(z):= |\ell|^{\frac{k}{2}} \sum_{Q\in \mathcal{A}} Q(z,1)^{-k}. \end{equation*} Throughout we write $\delta>0$ for positive discriminants and let $-D<0$ denote negative discriminants. Kohnen and Zagier \cite{KohnenZagierRational} showed, using a different normalization, that the even periods of $f_{k,\delta}$ are rational, and Kramer \cite{Kramer} proved that the $f_{k,\delta, \mathcal{A}}$ span the space of weight $2k$ cusp forms. Furthermore, Kohnen and Zagier \cite{KohnenZagier} used the functions $f_{k,\delta}$ to construct a kernel function for the Shimura and Shintani lifts. These may also be realized as theta lifts.
Roughly speaking, a theta lift is a map between modular objects in different spaces. One begins with a theta kernel $\Theta(z,\tau)$, which is modular in both variables. In our setting, both variables lie in $\mathbb{H}$ and $\Theta(z,\tau)$ has integral weight in $z$ and a half-integral weight in $\tau$. Given a function $\tau\mapsto P(\tau)$ transforming with the same weight as $\Theta$ in the $\tau$-variable, one may then define the \begin{it}theta lift\end{it} of $P$ by taking the Petersson inner product $\langle \cdot,\cdot\rangle$ between $\Theta$ and $P$: \[ \Phi(\Theta;P)(z):=\left<P,\Theta(z,\cdot)\right>. \]
Niwa \cite{Niwa} wrote the Shimura and Shintani lifts as theta lifts by using a theta kernel corresponding to an integral quadratic form of signature $(2,1)$, which was later extended by Oda \cite{Oda} to signature $(2,n)$ for $n\in \mathbb{N}$. These lifts fit into the general framework of the theta correspondence between automorphic forms associated to two groups of a dual reductive pair \cite{Howe}. Theta lifts have appeared in a variety of applications, including a relation of Katok and Sarnak \cite{KatokSarnak} between central values of $L$-functions and Fourier coefficients. Paralleling the results in \cite{KatokSarnak}, the realization of $f_{k,\delta}$ as theta lifts gave the non-negativity of twisted central $L$-values \cite{KohnenZagier}.
Natural inputs for theta lifts are Poincar\'e series. In the simplest case, and assuming absolute convergence, these are defined for a translation-invariant function $\varphi$ as \begin{equation*}
\sum_{\gamma \in \Gamma_{\infty} \backslash {\text {\rm SL}}_2(\mathbb{Z})} \varphi|_{\kappa}\gamma (\tau), \end{equation*}
where $\Gamma_{\infty}:= \{\pm \left(\begin{smallmatrix}1&n\\0&1 \end{smallmatrix} \right) : n \in \mathbb{Z} \}$, $\kappa\in\frac{1}{2}\mathbb{Z}$ (throughout the paper we use $\kappa$ for arbitrary weight in $\frac12\mathbb{Z}$ and reserve $k$ for restricted weights), and $|_{\kappa}$ denotes the usual slash operator. A natural choice for $\varphi$ is a term from the Fourier expansions of forms in the space of automorphic forms in which one is interested. In this paper, we consider in particular half-integral weight modular forms and \begin{it}harmonic Maass forms\end{it}, which transform and grow like modular forms but instead of being meromorphic they are annihilated by the weight $\kappa$ {\it hyperbolic Laplace operator} (in the variable $z=x+iy\in\mathbb{H}$), defined by \begin{equation}\label{Laplace} \Delta_{\kappa} := -y^2\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)+i\kappa y\left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\right). \end{equation} We denote the Poincar\'e series constructed by choosing a particular function $\varphi$ from the Fourier expansions of these forms by $P_{k+\frac12,m}$ and $\mathcal{P}_{\frac32-k,m}$ (see \eqref{eqn:gDdef} and \eqref{eqn:PnD3/2-kdef} for the explicit definitions). This gives in particular four relevant cases: for positive weight one can average a cusp form coefficient or a coefficient that grows towards $i\infty$, while in negative weight one can define two kinds of Poincar\'e series, one that grows in the holomorphic part and one that grows in the non-holomorphic part (see \eqref{split} for the decomposition).
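For orientation, the simplest holomorphic instance of this construction (stated here with generic notation; the normalizations in \eqref{eqn:gDdef} and \eqref{eqn:PnD3/2-kdef} may differ) arises from the choice $\varphi(\tau)=e^{2\pi i m\tau}$ with $m\in\mathbb{N}$, which for even $\kappa>2$ yields the absolutely convergent classical Poincar\'e series \begin{displaymath} \sum_{\gamma \in \Gamma_{\infty} \backslash {\text {\rm SL}}_2(\mathbb{Z})} e^{2\pi i m \tau}\big|_{\kappa}\, \gamma \,, \end{displaymath} a cusp form of weight $\kappa$.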
We start with the case of positive weight and define (with $\tau=u+iv\in\mathbb{H}$) the theta kernel
\begin{equation}\label{eqn:thetadef1} \Theta_k(z,\tau):=y^{-2k}v^{\frac{1}{2}}\sum_{D\in \mathbb{Z}}\sum_{Q\in\mathcal{Q}_D}Q(z,1)^k e^{-4\pi Q_{z}^2 v}e^{2\pi i D\tau}. \end{equation} Here, for $Q=[a,b,c]$, we set \begin{equation}\label{eqn:Qzdef}
Q_{z}:=y^{-1}\left(a|z|^2+bx+c\right). \end{equation}
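For later orientation we record the elementary identity relating the two quadratic quantities appearing in \eqref{eqn:thetadef1}; it follows from a direct computation with $Q=[a,b,c]$ and $z=x+iy$: \begin{displaymath} Q_z^2 - \frac{|Q(z,1)|^2}{y^2} = -D \qquad \text{for every } Q\in\mathcal{Q}_D\,. \end{displaymath} In particular, for fixed $D$ the Gaussian factor $e^{-4\pi Q_z^2 v}$ in \eqref{eqn:thetadef1} decays like $e^{-4\pi v\, y^{-2}|Q(z,1)|^2}$, which dominates the polynomial growth of $Q(z,1)^k$.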
It is well-known that the function $z\mapsto \Theta_k(-\overline{z},\tau)$ is modular of weight $2k$ and $\tau\mapsto \Theta_{k}(z,\tau)$ is modular of weight $k+\frac12$. Hence taking the inner product in either variable yields a lift between integral and half-integral weights. For this, we define the following theta lift \begin{equation*} \Phi_k(f)(z):=\Phi(\Theta_k;f)(z). \end{equation*} Using as input positive weight cuspidal Poincar\'e series, one recovers the functions $f_{k,\delta}$: \begin{equation*}
f_{k,\delta}=C_{k,\delta}\cdot\Phi_{k}\left(P_{k+\frac12,\delta}\right) \end{equation*}
with $C_{k,\delta}$ an explicit constant. By the Petersson coefficient formula, the holomorphic projection (recalled below) of the theta kernel $\Theta_k$ yields the generating function \[ \Omega_k(z,\tau):=\sum_{\delta> 0} \delta^{\frac{k-1}{2}}f_{k,\delta}(z) e^{2\pi i \delta\tau}. \] Kohnen and Zagier \cite{KohnenZagier} proved that $z\mapsto\Omega_k(z,\tau)$ is a weight $2k$ cusp form and $\tau\mapsto\Omega_k(z,\tau)$ is a weight $k+\frac12$ cusp form. By integrating in either variable, $\Omega_k$ yields theta lifts from weight $2k$ to $k+\frac12$ and from weight $k+\frac12$ to $2k$; these lifts turn out to yield an alternative construction of the well-known (first) Shintani \cite{Shintani} and Shimura \cite{Shimura} lifts. Hereby, the idea underlying the holomorphic projection operator is simple. Suppose that $f$ is a weight $\kappa$ real-analytic modular form with moderate growth at cusps. Then $g\mapsto \langle g,f\rangle$ yields a linear functional on the space of weight $\kappa$ cusp forms. Since the Petersson inner product is non-degenerate, this functional must be given by $\langle \cdot, F \rangle$ for some weight $\kappa$ cusp form $F$. This $F$ is essentially the holomorphic projection of $f$.
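For the reader's convenience we recall the shape of the Petersson coefficient formula in the classical integral-weight setting (the normalization below is the standard one and need not match the one used for $P_{k+\frac12,\delta}$): if $g(\tau)=\sum_{n\geq 1}c_g(n)e^{2\pi i n\tau}$ is a cusp form of even weight $\kappa>2$ and $P_{\kappa,m}$ denotes the classical holomorphic Poincar\'e series, then \begin{displaymath} \left\langle g, P_{\kappa,m}\right\rangle = \frac{\Gamma(\kappa-1)}{(4\pi m)^{\kappa-1}}\, c_g(m)\,. \end{displaymath} This identity underlies both the computation of $\Omega_k$ and the holomorphic projection argument sketched above.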
If one takes weakly holomorphic Poincar\'e series (i.e., Poincar\'e series which yield meromorphic modular forms with poles only at the cusps) as inputs of the theta lifts, one obtains, instead of the $f_{k,\delta}$'s, the analogous meromorphic modular forms $f_{k,-D}$ defined in \eqref{fkd}. We note that some care is needed if the inputs are no longer cusp forms. Although the naive definition of the inner product usually diverges when taking weakly holomorphic modular forms, one can extend its definition and obtain a regularized theta lift that is meaningful for more general inputs;
we describe this in Section \ref{sec:regularization}. To obtain the functions $f_{k,-D}$ as theta lifts, we use a regularization of Borcherds. The functions $f_{k,-D}$ were first constructed by Bengoechea \cite{Bengoechea} in her thesis.
\begin{theorem}\label{thm:liftfkd} We have \begin{equation*} \Phi_k\left(P_{k+\frac{1}{2},-D}\right)=f_{k,-D}. \end{equation*} \end{theorem} \begin{remarks} \noindent
\noindent \begin{enumerate}[leftmargin=*] \item The theta lift in Theorem \ref{thm:liftfkd} is a special case of a more general theta lift introduced by Borcherds in Theorem 14.3 of \cite{Bo1}. We choose a Poincar\'e series as a distinguished input, while Borcherds had more general inputs. Moreover, Borcherds unfolded against the theta function, while we apply the unfolding method to the Poincar\'e series. As a result, the two approaches yield different representations of the functions $f_{k,-D}$. \item The theta lift $\Phi_k$ maps (parabolic) Poincar\'e series $P_{k+\frac12,\ell}$ to other types of Poincar\'e series; the functions $f_{k,\delta}$ are sums of the hyperbolic Poincar\'e series which appeared in previous work of Petersson \cite{Pe3} (see also \cite{ImOs}), while we see in \eqref{eqn:fkDPsi} that the $f_{k,-D}$ are sums of the elliptic Poincar\'e series defined by Petersson in \cite{Pe1}. This implies that they are elements of $\mathbb{S}_{2k}$, the space of {\it meromorphic cusp forms of weight $2k$} for ${\text {\rm SL}}_2(\mathbb{Z})$, which are meromorphic modular forms that decay like cusp forms towards $i\infty$. \end{enumerate} \end{remarks}
We turn now to the case of negative weight. We use the theta kernel ($k\in \mathbb{N}_{\geq 2}$)
\begin{equation*}
\Theta^*_{1-k} (z,\tau):=v^k \sum_{D\in \mathbb{Z}}\sum_{Q\in \mathcal{Q}_D} Q_{z}Q(z,1)^{k-1} e^{-\frac{4\pi |Q(z,1)|^2 v}{y^2}} e^{-2\pi i D\tau}. \end{equation*} \noindent The function $z\mapsto \Theta_{1-k}^*(z,\tau)$ is modular of weight $2-2k$, and $\tau\mapsto \Theta^*_{1-k}(z,\tau)$ is modular of weight $\frac32-k$. We set \begin{equation*} \Phi_{1-k}^*(f)(z):= \Phi(\Theta_{1-k}^*;f)\!\left(-\overline{z}\right). \end{equation*}
We then define negative-weight analogues of the functions $f_{k,\ell}$ (with $\ell\in\mathbb{Z}$), namely \begin{equation*} \mathcal{F}_{1-k,\ell}:=\sum_{\mathcal{A} \in \mathcal{Q}_{\ell}/{\text {\rm SL}}_2(\mathbb{Z})} \mathcal{F}_{1-k,\ell,\mathcal{A}}, \end{equation*} where \begin{equation}\label{eqn:Gdef} \mathcal{F}_\mathcal{A}(z)=\mathcal{F}_{1-k,\ell,\mathcal{A}}(z):=\sum_{Q\in \mathcal{A} }\mathbb{P}_{1-k,\ell,Q}(z) \end{equation} with \begin{equation}\label{defineP}
\mathbb{P}_{1-k,\ell,Q}(z):=\frac{i(-1)^k}{2}|\ell|^{\frac{1-k}{2}}\operatorname{sgn}\left(Q_{z}\right) Q\left(z,1\right)^{k-1} \beta\left(\frac{ \ell y^2}{\left|Q\left(z,1\right)\right|^2_{\phantom{-}}};k-\frac{1}{2},\frac{1}{2}\right). \end{equation} Here $\beta\left(Z;a,b\right)$ denotes the {\it incomplete $\beta$-function}, which is defined for $a,b\in \mathbb{C}$ satisfying $\textnormal{Re}(a)$, $\textnormal{Re}(b)>0$ by $\beta\left(Z;a,b\right):=\int_{0}^Z t^{a-1}\left(1-t\right)^{b-1}dt$. Note that we can also write the incomplete $\beta$-function in terms of the hypergeometric function $_2F_1$ (see \eqref{equ-beta-hypergeom}).
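A standard form of this hypergeometric representation, recorded here for orientation (the precise normalization in \eqref{equ-beta-hypergeom} may differ), is \begin{displaymath} \beta\left(Z;a,b\right) = \frac{Z^{a}}{a}\, {}_2F_1\left(a,1-b;a+1;Z\right), \end{displaymath} valid for $\textnormal{Re}(a)>0$ and $0\le Z<1$.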
We recall some of the properties of these functions for $\ell=\delta>0$. The $\mathcal{F}_{1-k,\delta,\mathcal{A}}$, with a different normalization, were investigated by Kohnen and the first two authors in \cite{BKW}, and a variant of these functions was studied by H\"ovel \cite{Hoevel} for $k=1$. It turns out that they are locally harmonic Maass forms. Locally harmonic Maass forms allow jump singularities in the upper half-plane. These functions and their higher-dimensional analogues also appeared as theta lifts in both physics and mathematics -- see the work of Angelantonj, Florakis, and Pioline \cite{PiolineOneLoop} for the former and the work of Viazovska and the first two authors \cite{BKM} for the latter. Namely, in analogy to the positive weight case, we have \cite{BKM, Hoevel} $$ \mathcal{F}_{1-k,\delta}=C_{1-k,\delta}\cdot \Phi_{1-k}\left(\mathcal{P}_{\frac32 -k,\delta}\right), $$ with $C_{1-k,\delta}$ an explicit constant and $\mathcal{P}_{3/2 -k,\delta}$ defined in \eqref{eqn:PnD3/2-kdef}. In addition to their relationship via theta lifts, the functions $\mathcal{F}_{\mathcal A}$ are connected to the functions $f_{\mathcal A}$ via the differential operators $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$, where \begin{equation}\label{XiD} \xi_{\kappa}:=2iy^{\kappa}\overline{\frac{\partial}{\partial \overline{z}}}\quad\textnormal{ and }\quad\mathcal{D}:=\frac{1}{2\pi i } \frac{\partial}{\partial z}. \end{equation} Specifically, we have \begin{align*} \xi_{2-2k}\left(\mathcal{F}_{\mathcal{A}}\right) = \mathcal{C}_{1,k,\delta}\cdot f_{\mathcal{A}}, \quad\quad \mathcal{D}^{2k-1}\left(\mathcal{F}_{\mathcal{A}}\right)= \mathcal{C}_{2,k,\delta}\cdot f_{\mathcal{A}}, \end{align*} where the $\mathcal{C}_{j,k,\delta}$ are explicit constants.
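For orientation, we recall the standard factorization relating the operators in \eqref{XiD} to the hyperbolic Laplace operator \eqref{Laplace}: \begin{displaymath} \Delta_{\kappa} = -\xi_{2-\kappa}\circ\xi_{\kappa}\,. \end{displaymath} In particular, if $\Delta_{2-2k} f=0$, then $\xi_{2-2k}(f)$ is annihilated by $\xi_{2k}$ and is therefore meromorphic of weight $2k$, consistent with the identities displayed above.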
It is unusual for a harmonic Maass form to map to a constant multiple of the same function under $\xi_{2-2k}$ and $\mathcal{D}^{2k-1}$. However, given their uniform definition in \eqref{eqn:Gdef}, it is not surprising that for discriminants $-D<0$ the functions $\mathcal{F}_{1-k,-D,\mathcal{A}}$ have many properties similar to those of $\mathcal{F}_{1-k,\delta,\mathcal{A}}$. In contrast to the case of positive discriminants, for negative discriminants the functions have poles instead of jump singularities. We call functions that behave like harmonic Maass forms away from singularities of this type \begin{it}polar harmonic Maass forms\end{it}. \begin{theorem}\label{thm:Gpolar} \noindent
\noindent \begin{enumerate}[leftmargin=*,label={\rm(\arabic*)}]
\item We have
\[
\Phi_{1-k}^* \left( \mathcal{P}_{\frac{3}{2}-k,-D}\right) =\mathcal{F}_{1-k,-D}.
\]
\item For $\mathcal{A}\in\mathcal{Q}_{-D}\slash{\text {\rm SL}}_2(\mathbb{Z})$, the functions $\mathcal{F}_{\mathcal{A}}$ are weight $2-2k$ polar harmonic Maass forms whose only singularities occur at $\tau_Q$ for $Q\in \mathcal{A}$; here $\tau_Q\in\mathbb H$ is the unique solution to $Q(z,1)=0$. Furthermore, we have
\begin{align}\label{eqn:xiDG}
\xi_{2-2k}\left(\mathcal{F}_{\mathcal{A}}\right)& = f_{\mathcal{A}},& \mathcal{D}^{2k-1}\left(\mathcal{F}_{\mathcal{A}}\right)&= -\frac{(2k-2)!}{(4\pi)^{2k-1}} f_{\mathcal{A}}.
\end{align}
\end{enumerate} \end{theorem} \begin{remarks} \noindent
\noindent \begin{enumerate}[leftmargin=*] \item The difference in the singularities of $\mathcal{F}_{1-k,\ell,\mathcal{A}}$ for discriminants $\ell>0$ and $\ell<0$ comes from the sign factor in \eqref{defineP}. For $\ell>0$, $Q_z=0$ along a geodesic $S_Q$, and the function ``jumps'' as one crosses from one side of $S_Q$ to the other. For $\ell<0$, $\operatorname{sgn}(Q_z)\neq 0$ and $\operatorname{sgn}(Q_z)$ is independent of $z$; namely, $\operatorname{sgn}(Q_z)=1$ for all $z\in\mathbb{H}$ if $Q$ is positive-definite and $\operatorname{sgn}(Q_z)=-1$ for all $z\in\mathbb{H}$ if $Q$ is negative-definite. \item A key step in proving Theorem \ref{thm:Gpolar} (2) is to relate $\mathcal F_\mathcal{A}$ to the higher Green's functions $G_k$ defined in Subsection \ref{sec:Greens} (see Corollary \ref{diffop}). These have appeared in a number of interesting applications, and their evaluations at pairs of CM-points has been intensively studied. Values of higher Green's functions at CM-points are conjectured to be roughly logarithms of algebraic numbers, and a number of cases are known by work of Mellit \cite{Mellit} and Viazovska \cite{Vi}. \end{enumerate} \end{remarks}
Let us now return to the positive weight cusp forms $f_{k,\delta,\mathcal{A}}$. Integrating against them gives cycle integrals. To be more precise, we have \[ \left<f,f_{k,\delta,\mathcal{A}}\right>=\mathcal{C}_{k,\delta}\int_{\Gamma_{Q_0}\,\backslash \, S_{Q_0}}f(z)Q_0(z,1)^{k-1}dz, \] where $\mathcal C_{k,\delta}\in\mathbb{R}$ is an explicit constant, $Q_0 \in \mathcal{A}$ is arbitrary, $S_{Q_0}$ is an oriented geodesic joining the two real roots of $Q_0$, and $\Gamma_{Q_0}\subset {\text {\rm SL}}_2(\mathbb{R})$ is the stabilizer group of $Q_0$. These cycle integrals then occur as coefficients of the (first) Shintani lift. In this paper we take the Petersson inner product of $f_{\mathcal A}$ with other meromorphic cusp forms.
Since the classical inner product diverges, one needs to regularize it. In addition to their inherent interest, extensions of Petersson's inner product yield applications to other areas, including generalized Kac--Moody algebras \cite{GritsenkoNikulin} and the arithmetic of Shimura varieties \cite{BrY}. Those applications used a regularization of Petersson \cite{Pe2}, later rediscovered and generalized by Borcherds \cite{Bo1} and Harvey--Moore \cite{HM}. However, Petersson's inner product $\langle f,f \rangle$ for any (non-cuspidal) $f\in \mathbb{S}_{2k}$ always diverges (see Satz 1 of \cite{Pe2}), so one cannot use it to extend the classical inner product to an inner product on any larger subspace. For the application in this paper, we introduce a new regularized inner product, again denoted by $\left<\cdot,\cdot\right>$ and formally defined in \eqref{eqn:OurReg} below, which extends the domain of the inner product to include all meromorphic cusp forms.
\begin{theorem}\label{thm:innerconverge}
The regularized inner product $\left<f,g\right>$ exists for all $f,g\in\mathbb{S}_{2k}$. It is Hermitian, and it equals Petersson's regularized inner product whenever the latter exists. \end{theorem}
\indent We next consider an application of the inner product to higher Green's functions. To state the formula we let $\omega_{\varrho}$ be the size of the stabilizer $\Gamma_{\varrho}$ of $\varrho\in\mathbb{H}$ with respect to the action of $\operatorname{PSL}_{2}(\mathbb{Z})$.
We require the \begin{it}elliptic expansion\end{it} of a meromorphic modular form $f$ around $\varrho\in\mathbb{H}$, namely \begin{equation}\label{eqn:fEllExp}
f(z)=\left(z-\overline{\varrho}\right)^{-2k}\sum_{n\gg -\infty} c_{f,\varrho}(n) X_{\varrho}(z)^n, \qquad\text{with}
\qquad X_{\varrho}(z):=\frac{z-\varrho}{z-\overline{\varrho}}. \end{equation} Furthermore, set \begin{equation}\label{eqn:bkndef} b_{k,n}:= \frac{(-1)^k (2k-2)!}{2^{3k-2}(k-1)!} \begin{cases} \frac{1}{n!}& \text{if }n\geq k-1,\\ \frac{1}{(2k-2-n)!} &\text{if }n<k-1. \end{cases} \end{equation} Throughout, we let \begin{equation}\label{raising} R_\kappa:=2i\frac{\partial }{\partial z} +\frac{\kappa}{y} \end{equation} be the {\it Maass raising operator}, and denote repeated raising by $R_{\kappa}^{n}:=R_{\kappa+2n-2}\circ\cdots\circ R_{\kappa}$. \begin{theorem}\label{generalint} If $Q_0\in \mathcal{A}\in\mathcal{Q}_{-D}/{\text {\rm SL}}_2(\mathbb{Z})$ and $f\in \mathbb{S}_{2k}$ with poles in the ${\text {\rm SL}}_2(\mathbb{Z})$-orbits of $\mathfrak{z}_1,\dots,\mathfrak{z}_r$ with $\mathfrak{z}_{\ell}=\mathbbm x_\ell+i\mathbbm y_\ell \neq \tau_Q$ for all $Q\in\mathcal{A}$ and $\ell\in \{1,\dots,r\}$, then \begin{multline*} \left<f,f_{\mathcal{A}}\right>=\frac{\pi}{\omega_{\tau_{Q_0}}} \sum_{\ell=1}^r \frac1{\omega_{\mathfrak{z}_\ell}} \Bigg(\sum_{n\geq k} b_{k,n-1}\mathbbm{y}_\ell^{-2k+n} c_{f,\mathfrak{z}_{\ell}}(-n) R_{0}^{n-k}\left(G_k(z,\tau_{Q_0})\right)\\ + \sum_{n=1}^{k-1} b_{k,n-1} \mathbbm{y}_\ell^{-n} c_{f,\mathfrak{z}_{\ell}}(-n) \overline{R_{0}^{k-n}\left(G_k(z,\tau_{Q_0})\right)}\Bigg). \end{multline*} \end{theorem} Particularly interesting is the following special case. \begin{corollary} \label{cor:Greensinner} For every $Q_j\in \mathcal{A}_j\in \mathcal{Q}_{-D_j}\slash{\text {\rm SL}}_2(\mathbb{Z})\ (j=1,2)$ with $\mathcal{A}_{1}\neq \mathcal{A}_2$ we have
$$
\left<f_{\mathcal{A}_1},f_{\mathcal{A}_2}\right>= \pi\,b_{k,k-1} \frac{G_k\!\left(\tau_{Q_1},\tau_{Q_2}\right)}{\omega_{\tau_{Q_1}}\omega_{\tau_{Q_2}}}.
$$ \end{corollary}
\begin{remarks}
\noindent \begin{enumerate}[leftmargin=*] \item For arbitrary $z_1,z_2\in\mathbb{H}$, which are not necessarily CM-points, one may also realize $G_k(z_1,z_2)$ as an inner product. In order to obtain such a relation, one replaces $f_{\mathcal{A}_j}$ with the more general functions $\Psi_{2k,-k}(\cdot,z_j)$, defined in \eqref{eqn:Psidef} below, which have poles at $z_1,z_2\in\mathbb{H}$. Furthermore, since Theorems \ref{thm:innerconverge} and \ref{generalint} can be generalized to arbitrary congruence subgroups, similar relations can be established for the corresponding Green's functions associated to these subgroups. \item Given the interest in $G_{k}$ evaluated at CM-points, one may wonder what further implications Corollary \ref{cor:Greensinner} may have. Possible future directions of study along these lines are discussed in Section \ref{sec:future} below. The relation between higher Green's functions and inner products in Corollary \ref{cor:Greensinner} leads one to search for connections with geometry. In the case $k=1$, which is excluded here, Gross and Zagier related the Green's function evaluated at CM-points to the height pairing of certain Heegner points on modular curves (see Proposition 2.22 in Section II of \cite{GZ}).
This has been generalized to higher $k$ by Zhang, who defined a global height pairing between CM-cycles in certain Kuga--Sato varieties using arithmetic intersection theory, as developed by Gillet and Soul\'e \cite{GilletSoule}. The archimedean part of this height pairing is then given by the values of higher Green's functions evaluated at CM-points (see Propositions 3.4.1 and 4.1.2 of \cite{Zhang}). \item Although we restrict ourselves in the introduction to the case $\mathfrak{z}_{\ell}\neq \tau_Q$ in Theorem \ref{generalint}, and correspondingly $\mathcal{A}_1\neq \mathcal{A}_2$ in Corollary \ref{cor:Greensinner}, this is only done for convenience of notation. By replacing the Green's function with a regularized version, we obtain a more general version of Theorem \ref{generalint} in Theorem \ref{thm:innerGreensGeneral} below, and consequently an extension of Corollary \ref{cor:Greensinner}.
\end{enumerate} \end{remarks}
The paper is organized as follows. In Section \ref{sec:prelim} we recall basic geometric facts and certain special functions, and introduce the relevant modular objects. In Section \ref{sec:regularization} we study regularized inner products and prove Theorem \ref{thm:innerconverge}. In Section \ref{sec:theta} we investigate theta lifts, proving Theorem \ref{thm:liftfkd} and Theorem \ref{thm:Gpolar} (1). Theorem \ref{thm:Gpolar} (2) is established while studying the functions $\mathcal{F}_\mathcal{A}$ in Section \ref{sec:FA}. In Section \ref{sec:residue} we compute regularized inner products in order to prove Theorem \ref{generalint} and Corollary \ref{cor:Greensinner}. We conclude the paper with a discussion of natural questions in Section \ref{sec:future}.
\section{Preliminaries}\label{sec:prelim}
\subsection{CM-points and the hyperbolic distance}
For a positive-definite $Q=[a,b,c]\in\mathcal{Q}_{-D}$ (with $a>0$), we denote the associated CM-point by \begin{equation}\label{eqn:yQval}
\tau_Q=u_Q+iv_Q, \qquad\text{with}\qquad u_Q=-\frac{b}{2a} \quad{\text{ and }}\quad v_Q=\frac{\sqrt{D}}{2a}. \end{equation} We note that for $z=x+iy\in\mathbb{H}$, with $X_{\tau_Q}$ defined in \eqref{eqn:fEllExp}, we have \begin{equation}\label{rewriteQ}
Q(z,1) =\frac{\sqrt{D}}{2v_Q} \left(z-\overline{\tau_Q}\right)^2 X_{\tau_Q}(z). \end{equation} Moreover, for $Q\in \mathcal{Q}_{\ell}$, we often make use of the identity \begin{equation}\label{eqn:Qrewrite}
y^{-2}\left|Q(z,1)\right|^2 =Q_{z}^2+\ell, \end{equation} with $Q_z$ given in \eqref{eqn:Qzdef}. The quantity $Q_z$ naturally occurs when computing the hyperbolic distance $d(z,\mathfrak{z})$ between $z$ and $\mathfrak{z}=\mathbbm{x}+i\mathbbm{y}\in \mathbb{H}$, which is expressed through \begin{equation} \label{eqn:coshgen}
\cosh\left(d(z,\mathfrak{z})\right) = 1+\frac{\left|z-\mathfrak{z}\right|^2}{2y\mathbbm{y}} \end{equation} (see p.~131 of \cite{Beardon}). In particular, when $\mathfrak{z}$ is a CM-point $\tau_Q$ with $Q\in\mathcal{Q}_{-D}$, we have the equality \begin{equation}\label{eqn:coshQz/D}
\cosh\left(d\left(z,\tau_{Q}\right)\right) = \frac{Q_z}{\sqrt{D}}. \end{equation} The combination of \eqref{eqn:Qrewrite} and \eqref{eqn:coshQz/D} gives \begin{equation}\label{eqn:coshrat}
\left(1-\cosh(d(z,\tau_Q))^{2}\right)^{-1}=-\frac{Dy^2}{|Q(z,1)|^2}. \end{equation} Finally, for $z\in \mathbb{H}$ (and fixed $\varrho\in\mathbb{H}$) we set
\begin{equation*}
r_{\varrho}(z):=\tanh\left(\frac{d(z,\varrho)}{2} \right)= \left|X_{\varrho}(z)\right|. \end{equation*} Here the last equality follows from the half-argument formula \[ \tanh\left(\frac{Z}{2}\right) = \sqrt{\frac{\cosh(Z)-1}{\cosh(Z)+1}} \]
combined with \eqref{eqn:coshgen} and $\left|z-\mathfrak{z}\right|^2+4y\mathbbm{y}=\left|z-\overline{\mathfrak{z}}\right|^2$. Using this half-argument formula once again, equation \eqref{eqn:coshQz/D} implies that \begin{equation}\label{1r} 1-r_{\tau_Q}(z)^2= \frac{2}{\cosh\!\left(d\!\left(z,\tau_Q\right)\right) +1}=\frac{2\sqrt{D}}{Q_z+\sqrt{D}}. \end{equation} \subsection{Properties of hypergeometric functions} In this subsection we recall relations between the hypergeometric function and other functions, as well as its transformations that are required for this paper.
For $Z\in\mathbb{C}$ with $|Z|<1$ the hypergeometric function is defined by the series \begin{align}\label{2F1} {}_2 F_1\left(a,b;c;Z\right):=\sum_{n\geq 0} \frac{(a)_n(b)_n}{n!(c)_n} Z^n, \end{align}
with parameters $a,b,c\in\mathbb{C}$, $c$ not a non-positive integer, and $(a)_n:=\prod_{j=0}^{n-1}(a+j)$. Outside the disk $|Z|<1$, the hypergeometric function is defined by analytic continuation. Using the symmetry in \eqref{2F1}, one directly sees that \begin{equation} {}_2 F_1\left(a,b;c;Z\right)={}_2 F_1\left(b,a;c;Z\right). \label{2F1sym} \end{equation} Furthermore, by 15.4.6 of \cite{NIST} we have \begin{equation} {}_2 F_1\left(a,b;b;Z\right)=\left(1-Z\right)^{-a}. \label{2F1 special} \end{equation} We also require the following transformation law from 15.8.1 of \cite{NIST}, valid when $1-Z\notin\mathbb{R}^-$: \begin{equation}\label{2F1tr} {}_2 F_1\left(a,b;c;Z\right)=\left(1-Z\right)^{-b}{}_2 F_1\left(c-a,b;c;\frac{Z}{Z-1}\right). \end{equation} By 15.5.3 of \cite{NIST} (with $n=1$) we have
\begin{equation}\label{diff2F1}
\frac{\partial}{\partial Z}\left(Z^{a}\,{_2F_1}\left(a,b;c;Z\right)\right)=aZ^{a-1}\,{_2F_1}\left(a+1,b;c;Z\right).
\end{equation} By 8.17.7 of \cite{NIST} the
hypergeometric function is related to the incomplete $\beta$-function via \begin{equation}\label{equ-beta-hypergeom} \beta(Z;a,b)=\frac{Z^a}{a}\, {}_2F_1\left(a,1-b;a+1;Z\right).
\end{equation} Using 15.8.14 of \cite{NIST} then implies that for $1-Z\not\in\mathbb{R}^-$ we have \begin{equation}\label{betatran}
\beta(Z;2k-1,1-k)=2^{2k-2}i(-1)^k\beta\left(\frac{Z^2}{4(Z-1)}; k-\frac12, \frac{1}{2}\right). \end{equation}
\subsection{Polar harmonic Maass forms and their elliptic expansions}
For $\gamma=\left(\begin{smallmatrix}a &b\\c &d\end{smallmatrix}\right)\in{\text {\rm SL}}_2(\mathbb{Z})$ and $f\colon \mathbb{H}\to\mathbb{C}$, the weight $\kappa\in\frac{1}{2}\mathbb{Z}$ {\it slash-action} is defined by $$
f|_{\kappa}\gamma(\tau):= (c\tau +d)^{-\kappa} f(\gamma \tau) \begin{cases}
1 & \text{ if } \kappa \in \mathbb{Z},\\ \left(\frac{c}{d}\right) \varepsilon_d^{2\kappa} & \text{ if } \kappa \in \frac{1}{2}\mathbb{Z} \backslash \mathbb{Z}\text{ and }\gamma\in\Gamma_0(4), \end{cases} $$ with the extended Legendre symbol ($\frac{\cdot}{\cdot}$) and \[ \varepsilon_d:=\begin{cases} 1 &\text{if }d\equiv 1\pmod{4},\\ i&\text{if }d\equiv 3\pmod{4}.\end{cases} \] We assume throughout that $N\in\mathbb{N}$ is divisible by $4$ if $\kappa\in\frac12+\mathbb{Z}$.
\begin{definitionno} For $N\in\mathbb{N}$, a \begin{it}weight $\kappa\in \frac{1}{2}\mathbb{Z}$ polar harmonic Maass form on $\Gamma_0(N)$\end{it} is a real-analytic function $\mathcal{M} \colon \mathbb{H} \rightarrow \mathbb{C}$ that satisfies the following conditions, outside finitely many singularities in $\Gamma_0(N)\backslash(\mathbb{H}\cup \mathbb{Q} \cup \{i\infty\})$: \noindent
\begin{enumerate}[leftmargin=*]
\item For all $\gamma\in\Gamma_0(N)$ we have $\mathcal{M}|_{\kappa} \gamma =\mathcal{M}$. \item We have $\Delta_\kappa(\mathcal{M})=0$, with the hyperbolic Laplacian defined in \eqref{Laplace}. \item For every $\varrho\in\mathbb{H}$ there exists $n\in\mathbb{N}$ such that $(\tau-\varrho)^{n}\mathcal{M}(\tau)$ is bounded for $r_{\varrho}(\tau)\ll_{\mathcal{M}} 1$. We say that $\mathcal{M}$ has a \begin{it}singularity of finite order\end{it} at $\varrho$ if this condition is satisfied. \item The function $\mathcal{M}$ grows at most linear exponentially at the cusps. \end{enumerate} If the only singularities of $\mathcal{M}$ lie at the cusps, then $\mathcal{M}$ is a \begin{it}harmonic Maass form\end{it}. \end{definitionno} For $\kappa<1$, the Fourier expansion of a polar harmonic Maass form at $i\infty$ has a natural splitting of the shape \begin{equation} \label{split} \mathcal{M}(\tau)=\mathcal{M}^+(\tau)+\mathcal{M}^-(\tau), \end{equation} where the {\it holomorphic} and {\it non-holomorphic parts} (at $i\infty$) are defined by the following series, that converge for $v$ sufficiently large, with $c_{\mathcal M}^{\pm}(n)\in\mathbb{C}$: \begin{align*} \mathcal{M}^+(\tau)&:=\sum_{n\gg -\infty} c^+_{\mathcal{M}}(n) e^{2\pi i n \tau},\\ \mathcal{M}^-(\tau)&:=c^-_{\mathcal{M}}(0) v^{1-\kappa}+\sum_{0\neq n\ll \infty} c^-_{\mathcal{M}}(n) \Gamma\left(1-\kappa,-4\pi n v\right)e^{2\pi i n\tau}. \end{align*} Here $\Gamma(r,Z):=\int_{Z}^{\infty} t^{r-1} e^{-t} dt$ denotes the \begin{it}incomplete gamma function\end{it}. Expansions of this type also exist for $\kappa\geq 1$, but the term in $\mathcal{M}^-$ containing $v^{1-\kappa}$ is replaced with a logarithmic term for $\kappa=1$, and more care is needed for the terms containing an incomplete gamma function when both parameters are negative. There are also similar expansions at the other cusps. 
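As a consistency check (a verification aid, not part of the argument), one can confirm symbolically that the individual terms of $\mathcal{M}^-$ are annihilated by $\Delta_{\kappa}$. The sketch below uses sample values $\kappa=\frac{1}{2}$ and $n=-3$ and assumes the standard shape $\Delta_{\kappa}=-v^{2}\left(\partial_{u}^{2}+\partial_{v}^{2}\right)+i\kappa v\left(\partial_{u}+i\partial_{v}\right)$ for the Laplacian from \eqref{Laplace}, which is not restated in this section:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)
tau = u + sp.I*v

def laplacian(expr, kappa):
    # weight kappa hyperbolic Laplacian (standard form assumed here)
    return (-v**2*(sp.diff(expr, u, 2) + sp.diff(expr, v, 2))
            + sp.I*kappa*v*(sp.diff(expr, u) + sp.I*sp.diff(expr, v)))

kappa, n = sp.Rational(1, 2), -3   # sample weight and index; note -4*pi*n*v > 0

# a single incomplete-gamma term of M^- ...
term = sp.uppergamma(1 - kappa, -4*sp.pi*n*v)*sp.exp(2*sp.pi*sp.I*n*tau)
assert sp.simplify(laplacian(term, kappa)) == 0

# ... and the term c^-(0) v^(1-kappa)
assert sp.simplify(laplacian(v**(1 - kappa), kappa)) == 0
```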
One reason that the splitting into holomorphic and non-holomorphic parts is natural is that the holomorphic part is annihilated by the operator $\xi_{\kappa}$ defined in \eqref{XiD}. Both parts can have singularities; the singularities in the holomorphic part are poles, while one can determine the kind of singularities in the non-holomorphic part by noting that its image under $\xi_{\kappa}$ is meromorphic. The terms in the expansion which grow as $v\to\infty$ are called the \begin{it}principal part of $\mathcal{M}$ \end{it}(at $i\infty$); namely, for $\kappa<1$ these are the terms in $\mathcal{M}^+$ with $n<0$ and those terms in $\mathcal{M}^-$ with $n\geq 0$. The coefficients $c_{\mathcal M}^-$ are closely related to coefficients of meromorphic modular forms of weight $2-\kappa$, following from the fact that if $\mathcal{M}$ is modular of weight $\kappa$, then $\xi_{\kappa}(\mathcal{M})$ is modular of weight $2-\kappa$. Thus $\xi_{\kappa}$ maps weight $\kappa$ polar harmonic Maass forms to weight $2-\kappa$ meromorphic modular forms.
Solving the second-order differential equation coming from \eqref{Laplace}, one obtains an elliptic expansion of polar harmonic Maass forms that parallels the expansion \eqref{eqn:fEllExp} for meromorphic cusp forms. The resulting expansion is given in Proposition 2.2 of \cite{BKweight0}, which appears, as Pioline later pointed out, as a special case of Theorem 1.1 of \cite{Fay}. To describe it, under the restriction $0\leq Z<1$, $a\in\mathbb{N}$, and $b\in\mathbb{Z}$, we set \begin{equation}\label{eqn:beta0def} \beta_0\left(Z; a,b\right):=\beta\left(Z; a,b\right)-\mathcal{C}_{a,b} \hspace{7mm} \text{ with }\hspace{7mm} \mathcal{C}_{a,b}:=\sum_{\substack{0\leq j\leq a-1\\ j\neq -b}} \binom{a-1}{j}\frac{(-1)^j}{j+b}. \end{equation} Making the change of variables $t \mapsto 1-t$ in the integral representation and then applying the Binomial Theorem, we obtain \begin{equation}\label{betid}
\beta_0 (Z;a,b) = \sum_{ \substack{0\leq j \leq a-1 \\ j\neq -b}} \binom{a-1}{j} \frac{(-1)^{j+1}}{j+b} (1-Z)^{j+b} +\delta_{1-a\leq b \leq 0} \binom{a-1}{-b} (-1)^{b+1} \log (1-Z). \end{equation} Here for a property $S$, $\delta_S=1$ if $S$ is satisfied and $\delta_S=0$ otherwise.
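The closed form \eqref{betid} can be tested numerically against the integral representation of $\beta$ and the definition \eqref{eqn:beta0def}; the following sketch (sample values $a=3$, $b=-1$, i.e.\ $k=2$ and $n=1$, chosen so that the logarithmic term occurs) is a verification aid only:

```python
from mpmath import mp, quad, binomial, log, mpf

mp.dps = 30
a, b = 3, -1          # sample values a = 2k-1 with k = 2 and b = -n with n = 1
Z = mpf('0.3')

# beta(Z;a,b) computed directly from the integral representation
beta_inc = quad(lambda t: t**(a - 1)*(1 - t)**(b - 1), [0, Z])

# the constant C_{a,b} from (eqn:beta0def)
C = sum((-1)**j*binomial(a - 1, j)/(j + b) for j in range(a) if j != -b)
beta0 = beta_inc - C

# closed form (betid): finite sum plus the logarithmic term when 1-a <= b <= 0
closed = sum(binomial(a - 1, j)*(-1)**(j + 1)/(j + b)*(1 - Z)**(j + b)
             for j in range(a) if j != -b)
if 1 - a <= b <= 0:
    closed += binomial(a - 1, -b)*(-1)**(b + 1)*log(1 - Z)

assert abs(beta0 - closed) < mpf('1e-25')
```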
For every $\varrho\in \mathbb{H}$ and every polar harmonic Maass form $\mathcal{M}$ of weight $2-2k$ (or, more generally, any function $\mathcal{M}$ that is annihilated by $\Delta_{2-2k}$ and has a singularity of finite order at $\varrho$), there exist $c_{\mathcal{M},\varrho}^{\pm}(n)\in\mathbb{C}$ such that for $r_{\varrho}(z)\ll_{\varrho} 1$ one has \begin{equation}\label{eqn:expw} \mathcal{M}(z)=\left(z-\overline{\varrho}\right)^{2k-2}\left(\sum_{n\gg -\infty}c_{\mathcal{M},\varrho}^+(n) X_{\varrho}(z)^n + \sum_{n\ll\infty }c_{\mathcal{M},\varrho}^-(n) \beta_0\left(1-r_{\varrho}(z)^2;2k-1,-n\right) X_{\varrho}(z)^n\right). \end{equation}
The \begin{it}meromorphic\end{it} and the \begin{it}non-meromorphic parts\end{it} of the elliptic expansion around $\varrho$ are \begin{align*} \mathcal{M}_{\varrho}^+ (z)&:=\left(z-\overline{\varrho}\right)^{2k-2}\sum_{n\gg -\infty}c_{\mathcal{M},\varrho}^+(n) X_{\varrho}(z)^n,\\ \mathcal{M}_{\varrho}^- (z)&:=\left(z-\overline{\varrho}\right)^{2k-2}\sum_{n\ll \infty} c_{\mathcal{M},\varrho}^-(n)\beta_0\left(1-r_{\varrho}(z)^2;2k-1,-n\right) X_{\varrho}(z)^n. \end{align*} We refer to the terms in \eqref{eqn:expw} that grow as $z\to \varrho$ as the \begin{it}principal part around $\varrho$\end{it} and denote them by $\mathscr{P}_{\mathcal{M},\varrho}$; the corresponding meromorphic and non-meromorphic parts of $\mathscr{P}_{\mathcal{M},\varrho}$ are \begin{align*}
\mathscr{P}_{\mathcal{M},\varrho}^{+}(z)&:=\left(z-\overline{\varrho}\right)^{2k-2}\sum_{n<0}c_{\mathcal{M},\varrho}^+(n)X_\varrho(z)^n,\\
\mathscr{P}_{\mathcal{M},\varrho}^{-}(z)&:= \left(z-\overline{\varrho}\right)^{2k-2}\sum_{n\geq 0} c_{\mathcal{M},\varrho}^-(n)\beta_0\left(1-r_{\varrho}(z)^2;2k-1,-n\right) X_{\varrho}(z)^n. \end{align*}
\begin{remark} Note that the principal parts of the Fourier expansions around all cusps and the principal parts of the elliptic expansions uniquely determine the form. Indeed, Proposition 3.5 of \cite{BruinierFunke} implies that harmonic Maass forms $\mathcal{M}$ without any singularities must satisfy $\xi_{2-2k}(\mathcal{M})=0$ and there are no non-trivial negative-weight holomorphic modular forms. \end{remark}
\subsection{Differential operators}\label{sec:diffops} Recall the raising operator defined in \eqref{raising}. If $g$ has eigenvalue $\lambda$ and weight $\kappa$, then $R_\kappa^\ell(g)$ ($\ell\in\mathbb{N}_0$) has weight $\kappa+2\ell$ and eigenvalue $\lambda+\kappa\ell+\ell(\ell-1)$. The following lemma may easily be verified by induction on $\ell$.
\begin{lemma}\label{lem:raiserepeat}
For $\ell\in\mathbb{N}_0$ and $g:\mathbb{H}\to\mathbb{C}$ satisfying $\Delta_\kappa(g)=\lambda g$, we have
\begin{equation*}
R_{-\kappa-2\ell}^{\ell}\left( y^{2\ell+\kappa}\overline{R_\kappa^\ell\left(g(z)\right)}\right) = y^\kappa \prod\limits_{j=1}^{\ell}\left(-\overline{\lambda}-j(j+\kappa-1)\right)\overline{g(z)}.
\end{equation*}
\end{lemma} The next lemma rewrites the elliptic coefficients of a meromorphic function $f$ in terms of the raising operator and $\eta:=\textnormal{Im}(\varrho)$. Its proof may be found in Proposition 17 of \cite{BGHZ}. \begin{lemma}\label{lem:ellexpraise}
If $f: \mathbb{H}\to\mathbb{C}$ is a meromorphic function that is holomorphic in some neighborhood of $\varrho\in\mathbb{H}$ and $\kappa\in\mathbb{Z}$, then for $z$ in this neighborhood we have
\[
f(z)=(2i\eta)^{\kappa} (z-\overline{\varrho})^{-\kappa}\sum_{n\geq 0} \frac{\eta^n}{n!}R_{\kappa}^n(f(\varrho)) X_{\varrho}(z)^{n}.
\] \end{lemma}
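Lemma \ref{lem:ellexpraise} lends itself to a numerical test. The sketch below (a verification aid with the sample choices $f(z)=e^{2\pi iz}$, $\kappa=2$, and $\varrho=i$, so that $\eta=1$) computes the iterated raising operators symbolically and compares the truncated expansion with $f$ at a nearby point:

```python
import cmath
import math
import sympy as sp

x, y = sp.symbols('x y', real=True)

def raising(expr, kappa):
    # R_kappa = 2i d/dz + kappa/y, with d/dz = (d/dx - i d/dy)/2
    return sp.I*(sp.diff(expr, x) - sp.I*sp.diff(expr, y)) + kappa/y*expr

kappa = 2
f = sp.exp(2*sp.pi*sp.I*(x + sp.I*y))   # sample function f(z) = e^{2 pi i z}

# values R_kappa^n(f)(rho) at rho = i (so eta = 1)
N = 18
vals, expr = [], f
for n in range(N):
    vals.append(complex(expr.subs({x: 0, y: 1})))
    expr = raising(expr, kappa + 2*n)

z0, rho = 0.1 + 1.05j, 1j
X = (z0 - rho)/(z0 - rho.conjugate())
series = (2j)**kappa*(z0 - rho.conjugate())**(-kappa)*sum(
    vals[n]/math.factorial(n)*X**n for n in range(N))

assert abs(series - cmath.exp(2j*cmath.pi*z0)) < 1e-9
```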
We also recall that raising and differentiation are related through Bol's identity ($k\in\mathbb{N}$) \begin{equation}\label{Bol}
\mathcal D^{2k-1}=-(4\pi)^{1-2k} R_{2-2k}^{2k-1}. \end{equation} We note that the constant $\mathcal{C}_{a,b}$ in \eqref{eqn:beta0def} is chosen so that the operator $\mathcal{D}^{2k-1}$ acts nicely on $\mathcal{M}_{\varrho}^-$. Namely, most of the terms in $\mathcal{M}_{\varrho}^-$ are annihilated by $\mathcal{D}^{2k-1}$. One can use this to conclude that $\mathcal{D}^{2k-1}$ maps polar harmonic Maass forms of weight $2-2k$ to meromorphic modular forms of weight $2k$. One can also easily show that $\xi_{2-2k}$ maps polar harmonic Maass forms of weight $2-2k$ to meromorphic modular forms of weight $2k$. This operator may also be written in terms of the raising operator. More precisely, for every $g:\mathbb H\rightarrow \mathbb{C}$ we have the equality \begin{equation}\label{XiR} \xi_{\kappa}\left(y^{-\kappa}\overline{g(z)}\right)=R_{-\kappa}(g(z)). \end{equation}
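Bol's identity \eqref{Bol} is in fact an identity of differential operators on smooth functions. Assuming $\mathcal D=\frac{1}{2\pi i}\frac{\partial}{\partial z}$ (its definition is not restated in this section), one can confirm it symbolically for the sample value $k=2$ on a generic test function:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)   # generic smooth test function

def dz(e):
    return (sp.diff(e, x) - sp.I*sp.diff(e, y))/2   # d/dz

def raising(e, kappa):
    return 2*sp.I*dz(e) + kappa/y*e   # Maass raising operator R_kappa

k = 2
# R_{2-2k}^{2k-1} = R_{2k-4} o ... o R_{2-2k}; for k = 2 this is R_2 o R_0 o R_{-2}
expr = f
for j in range(2*k - 1):
    expr = raising(expr, 2 - 2*k + 2*j)

# Bol: D^{2k-1} = -(4 pi)^{1-2k} R_{2-2k}^{2k-1}, with D = (1/(2 pi i)) d/dz assumed
D3 = dz(dz(dz(f)))/(2*sp.pi*sp.I)**3
assert sp.simplify(sp.expand(D3 + (4*sp.pi)**(1 - 2*k)*expr)) == 0
```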
\subsection{Poincar\'e series}
In this section we review the Maass--Poincar\'e series (see Theorem 3.1 of \cite{Fay}) with singularities at the cusps, which are used as inputs of the theta lifts of Theorems \ref{thm:liftfkd} and \ref{thm:Gpolar} (1), and Petersson's meromorphic Poincar\'e series \cite{Pe3,Pe1}, which are closely connected to $f_{\mathcal{A}}$.
To construct the Maass--Poincar\'e series we define, for $Z\in\mathbb{R}\backslash \{0\}$, the expression \begin{equation*}
\mathcal{M}_{\kappa,s}\left(Z\right):=\left|Z\right|^{-\frac{\kappa}{2}}M_{\frac{\kappa}{2}\operatorname{sgn}(Z),\, s-\frac{1}{2}}\left(|Z|\right), \end{equation*} with $M_{\mu,\nu}$ the usual $M$-Whittaker function. For $\mu,s\in \mathbb{C}$ with $\textnormal{Re}\left(s\pm \mu \right)>0$ and $Z\in\mathbb{R}^+$ we have \[ M_{\mu,s-\frac{1}{2}}(Z)=Z^{s}e^{\frac{Z}{2}}\frac{\Gamma(2s)}{\Gamma\left(s+\mu\right)\Gamma\left(s-\mu\right)}\int_0^1 t^{s+\mu-1}(1-t)^{s-\mu-1}e^{-Zt}dt. \] For $s= \pm \mu$, we have the well-known identities \begin{equation}\label{eqn:MWhitSpec} M_{\mu,\mu-\frac12}(Z)=e^{-\frac{Z}{2}} Z^{\mu}\text{ and } M_{-\mu,\mu-\frac{1}{2}}(Z) = e^{\frac{Z}{2}}Z^{\mu}. \end{equation}
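The special values \eqref{eqn:MWhitSpec} follow at once from the ${}_1F_1$-representation of $M_{\mu,\nu}$; a short numerical check with mpmath (sample values $\mu=\frac{3}{2}$ and $Z=0.7$, chosen arbitrarily):

```python
from mpmath import mp, whitm, exp, mpf

mp.dps = 30
mu, Z = mpf('1.5'), mpf('0.7')   # sample values

# M_{mu, mu-1/2}(Z) = e^{-Z/2} Z^mu
assert abs(whitm(mu, mu - mpf('0.5'), Z) - exp(-Z/2)*Z**mu) < mpf('1e-25')

# M_{-mu, mu-1/2}(Z) = e^{Z/2} Z^mu
assert abs(whitm(-mu, mu - mpf('0.5'), Z) - exp(Z/2)*Z**mu) < mpf('1e-25')
```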
For $m\in\mathbb{Z}\setminus\{0\}$ and $\kappa\in\frac{1}{2}\mathbb{Z}$, the function $$ \psi_{\kappa,m,s}\left(\tau\right):=
\left(4\pi |m|\right)^{\frac{\kappa}{2}} \mathcal{M}_{\kappa,s}\left(4\pi m v\right)e^{2\pi i mu} $$
is then an eigenfunction of $\Delta_{\kappa}$ with eigenvalue $(s-\frac{\kappa}{2})(1-s-\frac{\kappa}{2})$. Denoting by $|_{\kappa}\text {\rm pr}$ the identity for $\kappa\in\mathbb{Z}$ and Kohnen's projection operator (see p. 250 of \cite{Kohnen}) for $\kappa\notin\mathbb{Z}$, one concludes that for $\sigma:=\textnormal{Re}(s)>1$, the following Poincar\'e series are also eigenfunctions of $\Delta_{\kappa}$ and have weight $\kappa$: \begin{equation} \label{eqn:Psdef}
P_{\kappa,m,s}:=\sum_{\gamma\in \Gamma_{\infty}\backslash \Gamma_0(4)} \psi_{\kappa,\operatorname{sgn}(\kappa)m,s}\Big|_{\kappa}\gamma\Big|_{\kappa}\text {\rm pr}. \end{equation} If $s=1-\frac{\kappa}{2}$ or $s=\frac{\kappa}{2}$, then the functions $P_{\kappa,m,s}$ are harmonic.
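The eigenvalue claim for $\psi_{\kappa,m,s}$ can be tested numerically at a sample point. As before, we assume the standard shape $\Delta_{\kappa}=-v^{2}\left(\partial_{u}^{2}+\partial_{v}^{2}\right)+i\kappa v\left(\partial_{u}+i\partial_{v}\right)$ from \eqref{Laplace}; the parameter choices below are illustrative:

```python
from mpmath import mp, whitm, exp, pi, mpf, mpc, diff

mp.dps = 25
kappa, m, s = mpf('0.5'), 1, mpf('0.8')   # sample weight, index, spectral parameter

def psi(u, v):
    # psi_{kappa,m,s}(tau) for m > 0
    Z = 4*pi*m*v
    return ((4*pi*abs(m))**(kappa/2)*Z**(-kappa/2)
            *whitm(kappa/2, s - mpf('0.5'), Z)*exp(2*pi*mpc(0, 1)*m*u))

u0, v0 = mpf('0.3'), mpf('1.2')
# weight kappa hyperbolic Laplacian (standard form assumed), via numerical derivatives
lap = (-v0**2*(diff(psi, (u0, v0), (2, 0)) + diff(psi, (u0, v0), (0, 2)))
       + mpc(0, 1)*kappa*v0*(diff(psi, (u0, v0), (1, 0))
                             + mpc(0, 1)*diff(psi, (u0, v0), (0, 1))))

eigen = (s - kappa/2)*(1 - s - kappa/2)
assert abs(lap - eigen*psi(u0, v0)) < mpf('1e-10')*abs(psi(u0, v0))
```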
The Poincar\'e series satisfy the growth condition \[
P_{\kappa,m,s}(\tau)-\psi_{\kappa,m,s}\left(\tau\right)\big|_{\kappa}\text {\rm pr} = O\left(v^{1-\textnormal{Re}(s)-\frac{\kappa}{2}}\right). \] Here we simply abbreviate the operator appearing on the right-hand side of the identity on page 250 of \cite{Kohnen} by $\text {\rm pr}$, despite the fact that $\psi_{\kappa,m,s}$ is not modular. In the special case that $\kappa=k+\frac12>1$ and $s=\frac{\kappa}{2}$, we normalize the resulting weakly holomorphic Poincar\'e series as \begin{equation}\label{eqn:gDdef}
P_{k+\frac{1}{2},m}:=\frac{6(4\pi)^{\frac{k}{2}-\frac{1}{4}}}{(k-1)!|m|^{\frac{1}{4}}}P_{k+\frac{1}{2},m,\frac{k}{2}+\frac{1}{4}}. \end{equation} For $\kappa=\frac32-k<1$ we set \begin{equation}\label{eqn:PnD3/2-kdef}
\mathcal{P}_{\frac{3}{2}-k,m}:=-\frac{6 (4\pi)^{\frac{k}{2}-\frac{1}{4}} }{(k-1)!|m|^{\frac{1}{4}}(2k-1)} P_{\frac{3}{2}-k,m,\frac{k}{2}+\frac{1}{4}}. \end{equation}
We turn next to Poincar\'e series with singularities in the upper half-plane, defining, for $\kappa>2$ even and $n\in\mathbb{Z}$, \begin{equation}\label{eqn:Psidef}
\Psi_{\kappa,n}(z,\mathfrak{z}):=\sum_{\gamma\in{\text {\rm SL}}_2 (\mathbb{Z})}\!\Big(\left(z-\overline{\mathfrak{z}}\right)^{-\kappa}X_{\mathfrak{z}}(z)^n\Big)\bigg|_{\kappa,z}\gamma. \end{equation} The function $z\mapsto \Psi_{\kappa,n}(z,\mathfrak{z})$ lies in $\mathbb{S}_{\kappa}$, while $\mathfrak{z}\mapsto\mathbbm{y}^{\kappa+n}\Psi_{\kappa,n}(z,\mathfrak{z})$ is modular of weight $-2n-\kappa$ (see page 72 of \cite{Pe1}). Moreover, the functions $z\mapsto \Psi_{\kappa,n}(z,\mathfrak{z})$ vanish identically if $n\not\equiv -\kappa/2\pmod{\omega_{\mathfrak{z}}}$ and are cusp forms in $z$ if $n\in\mathbb{N}_0$. Furthermore, the set $\{\Psi_{\kappa,n}(z,\mathfrak{z}): \mathfrak{z}\in\mathbb{H}, n\in\mathbb{Z}\}$ spans $\mathbb{S}_{\kappa}$ (see S\"atze 7 and 9 of \cite{Pe2}). The principal part of $z\mapsto \Psi_{\kappa,m}(z,\mathfrak{z})$ has a simple shape. To be more precise, set $f(z)=(2\omega_{\mathfrak{z}})^{-1}\Psi_{\kappa,m}(z,\mathfrak{z})$ and write $c(n):=c_{f,\varrho}(n)-\delta_{n=m}\delta_{\mathfrak{z}=\varrho}$, where in the latter $\delta$-term, and throughout the paper in similar identities, we consider $\mathfrak z$ and $\varrho$ as elements of ${\text {\rm SL}}_2(\mathbb{Z})\backslash\mathbb{H}$. Using this notation, Satz 7 of \cite{Pe2} implies that \begin{equation}\label{eqn:Psiexp} \left(2\omega_{\mathfrak{z}}\right)^{-1}\Psi_{\kappa,m}\left(z,\mathfrak{z}\right)=\left(z-\overline{\varrho} \right)^{-\kappa}\left(\delta_{\mathfrak{z}=\varrho}X_{\varrho}(z)^m +\sum_{n\geq 0} c(n) X_{\varrho}(z)^n\right). \end{equation}
Moreover, $f_{\mathcal{A}}$ is a specialization of $\Psi_{2k,-k}$, as given in the following straightforward lemma. \begin{lemma}\label{lem:fPsi} With $Q_0\in\mathcal{A}\in\mathcal{Q}_{-D}/{\text {\rm SL}}_2(\mathbb{Z})$ we have \begin{equation}\label{eqn:fkDPsi} f_{\mathcal{A}}(z)=\frac{\left(2v_{Q_0}\right)^{k}}{2\omega_{\tau_{Q_0}}} \Psi_{2k,-k}\left(z,\tau_{Q_0}\right).
\end{equation} \end{lemma}
\subsection{Higher Green's functions}\label{sec:Greens} For $z, \mathfrak{z}\in \mathbb{H}$ and $s\in\mathbb{C}$ with $\sigma>1$, the \textit{automorphic Green's function $G_s$} on ${\text {\rm SL}}_2(\mathbb{Z})\backslash\mathbb{H}$ is given by \begin{equation*}
G_s(z,\mathfrak{z}): = \sum_{\gamma\in{\text {\rm SL}}_2(\mathbb{Z})}g^{\mathbb{H}}_{s}(z,\gamma\mathfrak{z}), \end{equation*} where \begin{equation*} g^{\mathbb{H}}_{s}(z,\mathfrak{z}):=-\frac{2^{s-1}\,\Gamma(s)^2}{ \Gamma(2s)} \cosh(d(z,\mathfrak{z}))^{-s}{_2F_1}\left(\frac{s}{2},\frac{s+1}{2};s+\frac{1}{2};\frac{1}{\cosh(d(z,\mathfrak{z}))^2}\right). \end{equation*} Note that we have the equality $g^{\mathbb{H}}_{s}(z,\mathfrak{z})=-Q_{s-1}(\cosh(d(z,\mathfrak{z})))$, with $Q_{\nu}$ the associated Legendre function of the second kind. Furthermore, note that there are different normalizations of $G_s$ in the literature; our normalization agrees with the one from \cite{Mellit}. Automorphic Green's functions can be defined for arbitrary Fuchsian groups of the first kind, and hence in particular for any congruence group. They also arise as the resolvent kernel for the hyperbolic Laplacian (see, e.g.~\cite{Fay, Hejhal}).
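As a sanity check of the identity $g^{\mathbb{H}}_{s}(z,\mathfrak{z})=-Q_{s-1}(\cosh(d(z,\mathfrak{z})))$, one may compare the hypergeometric expression with the closed form $Q_0(X)=\frac12\log\frac{X+1}{X-1}$ at $s=1$, and with the classical Heine integral representation $Q_{\nu}(X)=\int_0^\infty\big(X+\sqrt{X^2-1}\cosh t\big)^{-\nu-1}dt$ at a generic sample value of $s$ (a verification aid only):

```python
from mpmath import mp, gamma, hyp2f1, cosh, sqrt, log, quad, inf, mpf

mp.dps = 25

def g(s, X):
    # the formula for g^H_s, with X = cosh(d(z, frak z))
    return (-2**(s - 1)*gamma(s)**2/gamma(2*s)*X**(-s)
            *hyp2f1(s/2, (s + 1)/2, s + mpf('0.5'), 1/X**2))

X = cosh(mpf('0.9'))   # sample hyperbolic distance d = 0.9

# s = 1: Q_0(X) = (1/2) log((X+1)/(X-1))
assert abs(g(1, X) + log((X + 1)/(X - 1))/2) < mpf('1e-20')

# generic s: Heine's integral for Q_{s-1}(X)
s = mpf('1.7')
Q = quad(lambda t: (X + sqrt(X**2 - 1)*cosh(t))**(-s), [0, inf])
assert abs(g(s, X) + Q) < mpf('1e-18')
```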
In the case $s=k\in\mathbb{N}_{>1}$, the function $G_k: \mathbb{H}\times\mathbb{H}\to\mathbb{C}$ is called a \textit{higher Green's function}. It is uniquely characterized by the following properties: \begin{enumerate}[leftmargin=*] \item The function $G_k$ is smooth and real-valued on $\mathbb{H}\times\mathbb{H}\setminus \{(z, \gamma z):\gamma\in{\text {\rm SL}}_2(\mathbb{Z}), z\in\mathbb{H}\}.$ \item For $\gamma_1, \gamma_2\in{\text {\rm SL}}_2(\mathbb{Z})$ we have $G_k(\gamma_1 z, \gamma_2\mathfrak{z})= G_k(z, \mathfrak{z}).$ \item We have \[ \Delta_{0, z}\!\left(G_k\!\left(z, \mathfrak{z}\right)\right)=\Delta_{0, \mathfrak{z}}\!\left(G_k\!\left(z, \mathfrak{z}\right)\right)=k(1-k)G_k\left(z, \mathfrak{z}\right). \] \item As $z\to\mathfrak{z}$ we have \[ G_k(z, \mathfrak{z})=2\omega_{\mathfrak{z}}\log\left(r_{\mathfrak{z}}(z)\right)+O(1). \] \item As $z$ approaches a cusp, we have $G_k(z, \mathfrak{z})\to 0$. \end{enumerate}
These higher Green's functions have a long history (cf. \cite{Fay,GZ,Hejhal,Mellit}). For example, Gross and Zagier \cite{GZ} conjectured that their evaluations at CM-points are essentially logarithms of algebraic numbers. If $S_{2k}(\Gamma)=\{0\}$, with $\Gamma\subseteq{\text {\rm SL}}_2(\mathbb{Z})$ of finite index, then the conjecture states that \[ G_k(z, \mathfrak{z})=(D_1 D_2)^{\frac{1-k}{2}}\log(\alpha) \] for CM-points $z, \mathfrak{z}$ of discriminants $D_1 \text{ and } D_2$ respectively and some algebraic number $\alpha$. Various cases of this conjecture have been proved. For example, Mellit \cite{Mellit} proved the case with $k=2$ and $\mathfrak{z}=i$ for $\Gamma={\text {\rm SL}}_2(\mathbb{Z})$, and also interpreted $\alpha$ as an intersection number of certain algebraic cycles. Further cases were then investigated by Viazovska \cite{Vi}.
\section{Regularized Petersson inner products and the proof of Theorem \ref{thm:innerconverge}}\label{sec:regularization}
\subsection{Known regularized inner products}\label{3.1} The classical Petersson inner product of two weight $\kappa\in \frac{1}{2}\mathbb{Z}$ (holomorphic) modular forms $f$ and $g$ on $\Gamma_0(N)$ such that $fg$ is a cusp form is given by \begin{equation}\label{Petint} \langle f,g \rangle := \frac{1}{\left[{\text {\rm SL}}_2(\mathbb{Z}):\Gamma_0(N)\right]}\int_{\Gamma_0(N)\backslash \mathbb{H}}
f(z) \overline{g(z)} y^{\kappa}\frac{dx dy}{y^2}. \end{equation}
Although \eqref{Petint} generally diverges for meromorphic modular forms, one may still define a regularized inner product in some cases. The first to do so appears to have been Petersson \cite{Pe2}. If all of the poles of $f$ and $g$ are at the cusps, Petersson's regularization was later rediscovered and extended by Harvey--Moore \cite{HM} and Borcherds \cite{Bo1}, and subsequently used by Bruinier \cite{Bruinier} and others to obtain a regularized integral that exists in many cases. Indeed, Petersson gave explicit necessary and sufficient conditions for the existence of his regularized inner product in Satz 1a of \cite{Pe2}. In particular, for $f,g\in\mathbb{S}_{\kappa}$, his regularization exists if and only if for every $n<0$ and $\varrho \in \mathbb{H}$ we have \begin{equation}\label{eqn:regexist} c_{f,\varrho}(n)c_{g,\varrho}(n)=0; \end{equation} the conditions are similar if $f$ and $g$ have singularities at the cusps. This regularization is used to define theta lifts of functions with singularities, some of which are evaluated in this paper.
To give a full definition we require some notation. We let $F_T$ be the restriction of the standard fundamental domain for ${\text {\rm SL}}_2(\mathbb{Z})$ to those $z$ with $y\leq T$, and let $F_T(N):=\bigcup_{\gamma\in\Gamma_0(N)\backslash {\text {\rm SL}}_2(\mathbb{Z})} \gamma F_T.$ For functions $f$ and $g$ transforming like modular forms of weight $\kappa\in \frac{1}{2}\mathbb{Z}$ for $\Gamma_0(N)$ we define \begin{equation}\label{eqn:innerweaklyPe} \left<f,g\right> :=\frac{1}{\left[{\text {\rm SL}}_2(\mathbb{Z}):\Gamma_0(N)\right]} \lim_{T\to\infty}\int_{F_T(N)} f(z)\overline{g(z)} y^{\kappa}\frac{dx dy}{y^2}, \end{equation} in the case the integral exists.
The definition above may be interpreted as cutting out neighborhoods around cusps and letting the hyperbolic volume of the neighborhood shrink to zero. If poles exist in $\mathbb{H}$, the construction in \cite{Pe2} is similar. For $f,g\in\mathbb{S}_{2k}$ with poles at $\mathfrak{z}_1,\dots,\mathfrak{z}_r\in {\text {\rm SL}}_2(\mathbb{Z})\backslash\mathbb{H}$, we choose a fundamental domain $F^*$ such that the representatives of $\mathfrak{z}_1,\dots,\mathfrak{z}_r$ in $F^*$ (also denoted by $\mathfrak{z}_{\ell}$) all lie in the interior of $\Gamma_{\mathfrak{z}_\ell}F^*$. We then set $\mathcal{B}_{\varepsilon}(\mathfrak{z}):=\{ z\in\mathbb{H} : r_{\mathfrak{z}}(z)<\varepsilon\}$, and define Petersson's regularized inner product as \begin{equation}\label{eqn:innermeroPe} \langle f,g \rangle := \lim_{\varepsilon_1,\dots,\varepsilon_r\to 0^+} \int_{F^*{\backslash} \bigcup_{\ell=1}^r \mathcal{B}_{\varepsilon_{\ell}}\left(\mathfrak{z}_\ell\right)} f(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2}. \end{equation} Like \eqref{eqn:innerweaklyPe}, the regularized inner product \eqref{eqn:innermeroPe} does not always exist. The inner product \eqref{eqn:innerweaklyPe} has recently been further extended to an inner product on all weakly holomorphic modular forms by Diamantis, Ehlen, and the first author in \cite{BDE}, and we address the extension of \eqref{eqn:innermeroPe} to all meromorphic cusp forms in the next section.
\subsection{A new regularization} In this section we restrict to $\kappa=2k\in 2\mathbb{Z}$ and $N=1$, but the construction can be easily generalized to subgroups. We also assume that $f$ and $g$ decay like cusp forms towards the cusps, but this restriction can be removed by combining with the regularization from Subsection \ref{3.1}. We choose a fundamental domain $F^*$ as in Subsection \ref{3.1} and denote the poles of $f$ and $g$ in $F^*$ by $\mathfrak{z}_1,\dots,\mathfrak{z}_{r}$.
For an analytic function $A(s)$ in $s=(s_1,\dots,s_r)$, denote by $\mathrm{CT}_{s=0}A(s)$ the constant term of the meromorphic continuation of $A(s)$ around $s_1=\cdots =s_{r}=0$, and define \begin{equation}\label{eqn:OurReg} \left<f,g\right>:= \operatorname{CT}_{s=0}\left(\int_{{\text {\rm SL}}_2(\mathbb{Z})\backslash \mathbb{H}} f(z) H_s(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2}\right) \end{equation} where \[ H_s (z) = H_{s_1,\dots, s_r, \mathfrak{z}_1,\dots,\mathfrak{z}_r} (z) := \prod_{\ell=1}^r h_{s_{\ell},\mathfrak{z}_\ell} (z). \] Here for $\mathfrak{z}_\ell\in F^*$ and $z\in\mathbb{H}$ we set $h_{s_{\ell},\mathfrak{z}_\ell}(z):=r_{\mathfrak{z}_\ell}(\gamma z)^{2s_{\ell}}$, with $\gamma \in {\text {\rm SL}}_2(\mathbb{Z})$ chosen such that $\gamma z\in F^*$. Note that $r_{\mathfrak{z}_\ell}(\gamma z)\to 0$ as $z\to \gamma^{-1}\mathfrak{z}_\ell$, so the integral in \eqref{eqn:OurReg} converges for $\sigma\gg 0$, where this notation means that for every $1\leq \ell\leq r$, $\sigma_{\ell}:=\textnormal{Re}(s_{\ell})\gg 0$. One can show that the regularization is independent of the choice of fundamental domain.
\begin{proof}[Proof of Theorem \ref{thm:innerconverge}]
For $\delta>0$ sufficiently small, we may assume that the $\mathcal{B}_{\delta}(\mathfrak{z}_\ell)$ are disjoint and split off the integral over those $z$ that lie in one of these balls.
If $z\notin \mathcal{B}_{\delta}(\mathfrak{z}_\ell)$ for all $\ell$, then one can bound the integrand locally uniformly for $s$ contained in a small open neighborhood around $0$. Hence we conclude that \begin{equation}\label{eqn:largereval} \operatorname{CT}_{s=0}\left(\int_{F^*\backslash\bigcup_{\ell=1}^{r} \mathcal{B}_{\delta}\left(\mathfrak{z}_\ell\right)} f(z)H_s(z) \overline{g(z)} y^{2k}\frac{ dx dy}{y^2}\right)=\int_{F^* \backslash \bigcup_{\ell=1}^{r} \mathcal{B}_{\delta}\left(\mathfrak{z}_\ell\right)} f(z)\overline{g(z)}y^{2k}\frac{ dx dy}{y^2}. \end{equation} Thus we are left to show existence of the meromorphic continuation to a small open neighborhood around $0$ of \begin{equation}\label{eqn:ballint} \int_{\mathcal{B}_{\delta}(\mathfrak{z}_\ell)\cap F^*} f(z) H_s(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2}. \end{equation} By construction, $\mathfrak{z}_\ell$ lies in the interior of $\Gamma_{\mathfrak{z}_\ell}F^*$, so we may assume that $\mathcal{B}_{\delta}(\mathfrak{z}_\ell)\subseteq \Gamma_{\mathfrak{z}_\ell}F^*$. To rewrite \eqref{eqn:ballint} as an integral over the entire ball $\mathcal{B}_{\delta}(\mathfrak{z}_\ell)$, we decompose the ball into the disjoint union \begin{equation}\label{eqn:Ballsplit} \mathcal{B}_{\delta}(\mathfrak{z}_\ell) =\overset{\bullet}{\bigcup}_{\gamma\in \Gamma_{\mathfrak{z}_\ell}} \gamma\left( \mathcal{B}_{\delta}\left(\mathfrak{z}_\ell\right)\cap F^*\right). \end{equation} Moreover, bounding $h_{s_m,\mathfrak{z}_{m}}(z)$ locally uniformly for $\sigma_m>-\varepsilon$ for $m\neq \ell$, we may plug in $s_m=0$, and hence the invariance of the integrand under ${\text {\rm SL}}_2(\mathbb{Z})$ implies that the constant term at $s=0$ of \eqref{eqn:ballint} is the constant term at $s_{\ell}=0$ of $$ \mathcal{I}_{s_{\ell},\mathfrak{z}_\ell,\delta}(f,g):= \frac{1}{\omega_{\mathfrak{z}_\ell}}\int_{\mathcal{B}_{\delta}(\mathfrak{z}_\ell)} f(z) h_{s_{\ell},\mathfrak{z}_\ell}(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2}. 
$$ Setting $R:=r_{\mathfrak{z}_\ell}(z)$ for $z\in \mathcal{B}_{\delta}(\mathfrak{z}_\ell)$, we have for $\gamma\in \Gamma_{\mathfrak{z}_\ell}$ the equality \begin{equation}\label{eqn:requal} h_{s_{\ell},\mathfrak{z}_\ell}(z)=r_{\mathfrak{z}_\ell}(\gamma z)^{2s_{\ell}}=R^{2s_{\ell}}. \end{equation} We now closely follow the proof of Satz 1a in \cite{Pe2}. We rewrite \[
y=\frac{\mathbbm{y}_{\ell}\left(1-r_{\mathfrak{z}_{\ell}}^2(z)\right)}{\left|1-X_{\mathfrak{z}_{\ell}}(z)\right|^2} \] for $\mathfrak{z}_{\ell}=\mathbbm{x}_\ell+i\mathbbm{y}_\ell$ and compute \[ dz=\frac{2iy}{\left(1-X_{\mathfrak{z}_{\ell}}(z)\right)^2} dX_{\mathfrak{z}_{\ell}}(z). \] Hence changing variables $X_{\mathfrak{z}_\ell}(z) = Re^{i\vartheta}$ and inserting the elliptic expansions \eqref{eqn:fEllExp} of $f, g$ around $\varrho=\mathfrak{z}_\ell$ yields \begin{align} \nonumber\mathcal{I}_{s_{\ell},\mathfrak{z}_\ell,\delta}(f,g)&= \frac{1}{\omega_{\mathfrak{z}_\ell}} \int_{\mathcal{B}_\delta(\mathfrak{z}_\ell)} f(z) \overline{g(z)}r_{\mathfrak{z}_\ell}(z)^{2s_{\ell}} y^{2k} \frac{dx dy}{y^2}\\ \nonumber &=\frac{4}{\omega_{\mathfrak{z}_\ell}\left(4 \mathbbm{y}_\ell\right)^{2k}}\sum_{m,n\gg -\infty} c_{f,\mathfrak{z}_\ell}(n)\overline{c_{g,\mathfrak{z}_\ell}(m)}\int_{0}^{\delta} \int_{0}^{2\pi} e^{i(n-m)\vartheta}R^{n+m+2s_{\ell}}\left(1-R^2\right)^{2k-2} R d\vartheta dR\\ &=\frac{8\pi}{\omega_{\mathfrak{z}_\ell}\left(4\mathbbm{y}_\ell\right)^{2k}}\sum_{n\gg -\infty} c_{f,\mathfrak{z}_\ell}(n)\overline{c_{g,\mathfrak{z}_\ell}(n)} \int_{0}^{\delta} R^{1+2n+2s_{\ell}}\left(1-R^2\right)^{2k-2} dR.\label{eqn:innerconverge} \end{align} Plugging in the binomial expansion of $(1-R^2)^{2k-2}$, the
remaining integral in \eqref{eqn:innerconverge} becomes \begin{equation*}
\sum_{j=0}^{2k-2} (-1)^j \binom{2k-2}{j} \int_{0}^{\delta} R^{1+2(n+j)+2s_{\ell}} dR = \sum_{j=0}^{2k-2} (-1)^j \binom{2k-2}{j} \frac{\delta^{2\left(n+j+1+s_{\ell}\right)}}{2\left(n+j+1+s_{\ell}\right)}. \end{equation*} Since this is meromorphic at $s_{\ell}=0$, its constant term at $s_{\ell}=0$ exists, yielding the existence of the inner product.
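As an independent sanity check of the termwise evaluation above, the following short numerical computation (a sketch, not part of the argument; the sample values of $n$, $k$, $s_{\ell}$, and $\delta$ are arbitrary, with $s_{\ell}$ taken real and positive so that the integral converges absolutely) confirms that the integral in \eqref{eqn:innerconverge} agrees with the closed form obtained from the binomial expansion:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum(f(a + i * h) * (4 if i % 2 else 2) for i in range(1, n))
    return s * h / 3

def lhs(n, k, s, delta):
    # \int_0^delta R^{1+2n+2s} (1-R^2)^{2k-2} dR, evaluated numerically
    return simpson(lambda R: R**(1 + 2*n + 2*s) * (1 - R*R)**(2*k - 2), 0.0, delta)

def rhs(n, k, s, delta):
    # the closed form coming from the binomial expansion of (1-R^2)^{2k-2}
    return sum((-1)**j * math.comb(2*k - 2, j)
               * delta**(2*(n + j + 1 + s)) / (2*(n + j + 1 + s))
               for j in range(2*k - 1))

assert abs(lhs(2, 3, 0.3, 0.7) - rhs(2, 3, 0.3, 0.7)) < 1e-9
```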
We next prove that the inner product is Hermitian. For $f,g\in \mathbb{S}_{2k}$, let $F_{f,g}$ denote the meromorphic continuation of the function defined for $s\in\mathbb{C}^r$ with $\sigma_{\ell}\gg 0$ by \[ F_{f,g}(s):=\int_{ F^*} f(z)\overline{g(z)} H_s(z) y^{2k} \frac{dxdy}{y^2}. \] Since $\langle f, g\rangle$ always exists, $F_{f,g}$ has an expansion around $s=0$ of the shape \[ F_{f,g}(s)=\sum_{n=(n_1,\ldots,n_r)\in\mathbb{Z}^r} a_{f,g}(n) s_1^{n_1}\cdot\cdots\cdot s_r^{n_r} \] with $\langle f,g\rangle=a_{f,g}(0)$. Since $r_{\mathfrak{z}_\ell}(z)\in\mathbb{R}$, we have $\overline{H_{\overline{s}}(z)}=H_{s}(z)$, and thus \[ \overline{\langle g,f\rangle}=\overline{a_{g,f}(0)}=\operatorname{CT}_{s=0}\!\left(\overline{F_{g,f}\left(\overline{s}\right)}\right)=\operatorname{CT}_{s=0}\!\left(\int_{ F^*} f(z)\overline{g(z)}\ \overline{H_{\overline{s}}(z)} y^{2k} \frac{dxdy}{y^2}\right) =\langle f,g\rangle. \]
We finally show that the new regularization agrees with Petersson's, wherever his exists. Setting $\mathcal{B}\left( \mathfrak{z}_\ell,\varepsilon,\delta\right):=\left\{ z\in\, \mathbb{H}\, : \varepsilon<r_{\mathfrak{z}_\ell}(z)<\delta\right\},$ Petersson's regularization equals \begin{multline}\label{eqn:Petreg} \lim_{\varepsilon_1,\dots,\varepsilon_r\to 0^+} \int_{ F^*{\backslash}\bigcup_{\ell=1}^r \mathcal{B}_{\varepsilon_{\ell}}\left(\mathfrak{z}_\ell\right)} f(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2}\\ =\int_{ F^*{\backslash} \bigcup_{\ell=1}^{r} \mathcal{B}_{\delta}\left(\mathfrak{z}_\ell\right)} f(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2} +\lim_{\varepsilon_1,\dots,\varepsilon_r\to 0^+} \int_{ F^*\cap\bigcup_{\ell=1}^{r} \mathcal{B}\left(\mathfrak{z}_\ell,\varepsilon_{\ell},\delta\right)} f(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2}. \end{multline} The first term on the right-hand side of \eqref{eqn:Petreg} is precisely the right-hand side of \eqref{eqn:largereval}. It thus remains to prove that \begin{equation}\label{eqn:limball} \lim_{\varepsilon_{\ell}\to 0^+} \int_{ F^{*}\cap\mathcal{B}\left(\mathfrak{z}_\ell,\varepsilon_{\ell},\delta\right)}f(z) \overline{g(z)} y^{2k}\frac{dx dy}{y^2} =\operatorname{CT}_{s_{\ell}=0}\mathcal{I}_{s_{\ell},\mathfrak{z}_\ell,\delta}(f,g). 
\end{equation} By the existence condition \eqref{eqn:regexist}, we may plug in $s_{\ell}=0$ in \eqref{eqn:innerconverge}, and obtain that $$ \operatorname{CT}_{s_{\ell}=0}\mathcal{I}_{s_{\ell},\mathfrak{z}_\ell,\delta}(f,g)=\frac{8\pi}{\omega_{\mathfrak{z}_\ell}\left(4\mathbbm{y}_\ell\right)^{2k}}\sum_{n\geq 0} c_{f,\mathfrak{z}_\ell}(n)\overline{c_{g,\mathfrak{z}_\ell}(n)} \int_{0}^{\delta} R^{1+2n}\left(1-R^2\right)^{2k-2} dR.$$ Using \eqref{eqn:Ballsplit} and then following the calculation in \eqref{eqn:innerconverge}, plugging the elliptic expansion into the left-hand side of \eqref{eqn:limball} yields that the two regularizations match since \[ \lim_{\varepsilon_{\ell}\to 0^+} \int_{\mathcal{B}\left(\mathfrak{z}_\ell,\varepsilon_{\ell},\delta\right)}f(z) \overline{g(z)} y^{2k} \frac{dx dy}{y^2} = \frac{8\pi}{\left(4\mathbbm{y}_\ell\right)^{2k}}\sum_{n\geq 0} c_{f,\mathfrak{z}_\ell}(n)\overline{c_{g,\mathfrak{z}_\ell}(n)} \int_{0}^{\delta} R^{1+2n}\left(1-R^2\right)^{2k-2}dR. \] \end{proof}
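For the reader's convenience, the change of variables used in the proof rests on the identity $y=\mathbbm{y}\left(1-r_{\varrho}(z)^2\right)/\left|1-X_{\varrho}(z)\right|^2$, where, as usual in this setting, $X_{\varrho}(z)=\frac{z-\varrho}{z-\overline{\varrho}}$ and $r_{\varrho}(z)=|X_{\varrho}(z)|$. A quick numerical check at arbitrary sample points (a sketch only, assuming these standard definitions) reads:

```python
# sample points in the upper half-plane (arbitrary choices)
z, rho = complex(0.37, 1.21), complex(-0.5, 0.8)
X = (z - rho) / (z - rho.conjugate())   # X_rho(z)
r2 = abs(X)**2                          # r_rho(z)^2
# y = Im(rho) * (1 - r_rho(z)^2) / |1 - X_rho(z)|^2
assert abs(z.imag - rho.imag * (1 - r2) / abs(1 - X)**2) < 1e-12
```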
\section{Theta lifts and the proofs of Theorems \ref{thm:liftfkd} and \ref{thm:Gpolar} (1)}\label{sec:theta}
\subsection{Proof of Theorem \ref{thm:liftfkd}}
Before proving Theorem \ref{thm:liftfkd}, we note a (well-known) fact about the theta kernel $\Theta_k$. \begin{lemma} The function
$\tau\mapsto \Theta_k(z,\tau)$ decays exponentially as $\tau\to i\infty$ and grows at most polynomially towards the other cusps. \end{lemma} \begin{proof}
To show the exponential decay towards $i\infty$, we use \eqref{eqn:Qrewrite} to rewrite the absolute value of the exponential in the definition \eqref{eqn:thetadef1} as $e^{-2\pi v (Q_z^2 +|Q(z,1)|^2/y^2)}$, and then note that $Q_z^2 +|Q(z,1)|^2/y^2>0$ for $Q\neq [0,0,0]$. Modularity then implies the claim at the other cusps. \end{proof}
We specifically apply the theta lift $\Phi_k$ to the weight $k+\frac12$ weakly holomorphic Poincar\'e series $P_{k+\frac12,-D}$ defined in \eqref{eqn:gDdef} in order to obtain Theorem \ref{thm:liftfkd}. \begin{proof}[Proof of Theorem \ref{thm:liftfkd}] A standard unfolding argument (see, e.g., \cite{Zagiernotrapid}) combined with \eqref{eqn:MWhitSpec} gives that $\Phi_k(P_{k+\frac12,-D,\frac{k}{2}+\frac14})$ equals \begin{multline}\label{Zagiersplit}
\frac{(4\pi D)^{\frac{k}{2}+\frac{1}{4}}}{6}\lim_{T\to\infty}\left(\int_0^{T} \int_0^1 e^{-2\pi iD\tau}\overline{\Theta_k (z, \tau)}v^{k+\frac{1}{2}}\frac{dudv}{v^2}\vphantom{-\sum_{c\geq 1}\sum_{a\!\!\pmod{c}^*}\int_{S_{\frac{a}{c}}} e^{-2\pi iD\tau}\overline{\Theta_k (z, \tau)}v^{k+\frac{1}{2}}\frac{dudv}{v^2}}\right.
\\
\left.\vphantom{\int_0^{T} \int_0^1 e^{-2\pi iD\tau}\overline{\Theta_k (z, \tau)}v^{k+\frac{1}{2}}\frac{dudv}{v^2}}
-\sum_{c\geq 1}\sum_{a\!\!\pmod{c}^*}\int_{S_{\frac{a}{c}}} e^{-2\pi iD\tau}\overline{\Theta_k (z, \tau)}v^{k+\frac{1}{2}}\frac{dudv}{v^2}\right), \end{multline} where $a$ runs over residues modulo $c$ that are coprime to $c$ and for each $a$ and $c$ we denote by $S_{\frac{a}{c}}$ the disc of radius $(2c^2T)^{-1}$ tangent to the real axis at $\frac{a}{c}$. Note that the factor $\frac16=[\text{SL}_2(\mathbb Z):\Gamma_0(4)]$ comes from the fact that the inner product is taken over $\Gamma_0(4)$. Following an argument similar to the proof of Theorem 1.1 (2) in \cite{BKM}, the polynomial growth of $\tau\mapsto\Theta_k (z,\tau)$ towards the cusps yields that the second term of \eqref{Zagiersplit} does not contribute in the limit $T\to\infty$. To evaluate the integral in the first term of \eqref{Zagiersplit}, we plug in the defining series \eqref{eqn:thetadef1} and integrate over $u$ to obtain, as $T\rightarrow\infty$, the expression \[ \frac{(4\pi D)^{\frac{k}{2}+\frac{1}{4}}}{6}y^{-2k}\sum_{Q\in\mathcal{Q}_{-D}}Q\left(\overline{z},1\right)^{k}\int_0^{\infty} e^{4\pi Dv-4 \pi Q_z^2 v}v^{k-1}dv. \] The claim now easily follows using \eqref{eqn:Qrewrite} to show that the integral on $v$ equals $$
\int_0^{\infty}e^{-\frac{4\pi |Q(z,1)|^2 v}{y^2}}v^{k-1}dv =\frac{(k-1)!}{(4\pi)^k}
y^{2k}|Q(z,1)|^{-2k}. $$ \end{proof}
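The final step uses the elementary evaluation $\int_0^{\infty}e^{-av}v^{k-1}\,dv=\frac{(k-1)!}{a^{k}}$ for $a>0$, applied with $a=4\pi|Q(z,1)|^2/y^2$. The following numerical sketch (sample values of $a$ and $k$; the truncation point and step count are arbitrary choices) confirms it:

```python
import math

def gamma_tail(a, k, T=60.0, n=40000):
    # composite Simpson approximation of \int_0^T e^{-a v} v^{k-1} dv;
    # for moderate a > 0 the tail beyond T = 60 is negligible at this precision
    h = T / n
    f = lambda v: math.exp(-a * v) * v**(k - 1)
    s = f(0.0) + f(T) + sum(f(i * h) * (4 if i % 2 else 2) for i in range(1, n))
    return s * h / 3

a, k = 1.7, 5
assert abs(gamma_tail(a, k) - math.factorial(k - 1) / a**k) < 1e-8
```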
\subsection{Proof of Theorem \ref{thm:Gpolar} (1)} The goal of this section is to compute the image of the Maass--Poincar\'e series $P_{\frac32-k,-D,s}$ defined in \eqref{eqn:Psdef} under $\Phi_{1-k}^*$, and to connect these images to the functions $\mathcal F_{\mathcal{A}}$. We do so in the following theorem, which extends Theorem \ref{thm:Gpolar} (1). \begin{theorem}\label{thm:thetalift} For $s\in\mathbb{C}$ with $\sigma>1$ we have \begin{align}\notag &\Phi_{1-k}^*\left(P_{\frac{3}{2}-k,-D,s}\right)(z)\\ \label{eqn:Phivals} &\qquad\qquad=\frac{D^s\,\Gamma\left(s+\frac{k}{2}-\frac14\right)}{6(4\pi)^{\frac{k}{2}-\frac14}}\sum_{Q\in \mathcal{Q}_{-D}} Q_z^{-2s-k+\frac{3}{2}} Q(z,1)^{k-1}{_2F_1}\left(s+\frac{k}{2}-\frac{1}{4},s+\frac{k}{2}-\frac{3}{4}; 2s; \frac{D}{Q_z^2}\right). \end{align} In particular, we have the equality \[ \Phi_{1-k}^* \left( \mathcal{P}_{\frac{3}{2}-k,-D}\right) =\mathcal{F}_{1-k,-D}. \] \end{theorem} \begin{remark}
After seeing a preliminary version of this paper, Zemel \cite{Zemel} has obtained further theta lifts related to vector-valued versions of $\mathcal{F}_{\mathcal{A}}$. \end{remark} \begin{proof}[Proof of Theorem \ref{thm:thetalift}] One can show that both sides of \eqref{eqn:Phivals} converge absolutely and locally uniformly for $\sigma>1$ and $ z\notin\{ \tau_Q: Q\in \mathcal{A}\} $ and hence are analytic in $s$. It thus suffices to show \eqref{eqn:Phivals} for $\sigma>k-\frac32$. Following the proof of Theorem \ref{thm:liftfkd} and noting that the map $[a,b,c]\mapsto [a,-b,c]$ is an involution on $\mathcal{Q}_{-D}$, we obtain \begin{equation}\label{eqn:Phi*eval} \Phi_{1-k}^*\left(P_{\frac{3}{2}-k,-D,s}\right)(z)=
\frac{(4\pi D)^{\frac{1}{4}-\frac{k}{2}}}{6} \sum_{Q\in\mathcal{Q}_{-D}} Q_z Q(z,1)^{k-1}\mathcal{I}_s\left(\frac{Dy^2}{|Q(z,1)|^2}\right), \end{equation} where $$ \mathcal{I}_s(Z):=\int_0^{\infty}\mathcal{M}_{\frac{3}{2}-k,s}\left(v\right) v^{-\frac{1}{2}}e^{-\frac{v}{2}} e^{-\frac{v}{Z}} dv\quad\left(Z\in\mathbb{R}^+\right). $$ Applying formula 7.621.1.~of \cite{GR} yields \begin{equation}\label{eqn:Ieval} \mathcal{I}_{s}(Z) =\Gamma\left(s+\frac{k}{2}-\frac{1}{4}\right)\left( \frac{Z}{Z+1}\right)^{s+\frac{k}{2}-\frac{1}{4}}{_2F_1}\left(s+\frac{k}{2}-\frac{1}{4}, s+\frac{k}{2}-\frac{3}{4}; 2s; \frac{Z}{Z+1}\right). \end{equation} Moreover, \eqref{eqn:Qrewrite} implies that \begin{equation}\label{eqn:rel-w-Qz} \frac{Z}{Z+1}= \frac{D}{Q_z^2} \end{equation}
for $Z:=Dy^2/|Q(z,1)|^2$, and substituting this and \eqref{eqn:Ieval} back into \eqref{eqn:Phi*eval} yields \eqref{eqn:Phivals}.
To prove the second claim, we use \eqref{2F1tr}, \eqref{2F1sym}, and \eqref{equ-beta-hypergeom} to obtain \begin{align}\nonumber
{_2F_1}\left(k,k-\frac{1}{2};k+\frac{1}{2}; W\right)
&= \nonumber
(1-W)^{\frac{1}{2}-k}\,{_2F_1}\!\left(\frac{1}{2},k-\frac{1}{2};k+\frac{1}{2};\frac{W}{W-1}\right)\\ \label{eqn:NISTbeta} &=(-1)^{k-\frac{1}{2}}\left(k-\frac{1}{2}\right)W^{-k+\frac{1}{2}}\,\beta\!\left(\frac{W}{W-1};k-\frac{1}{2},\frac{1}{2}\right).
\end{align}
Employing \eqref{eqn:NISTbeta} with $W:=D/Q_z^2$ and using \eqref{eqn:rel-w-Qz} to evaluate $W/(W-1)=-Z=-Dy^2/|Q(z,1)|^2$, we obtain the second claim, using the fact that $\operatorname{sgn}(Q_z)=1$. \end{proof}
\section{Properties of the function $\mathcal{F}_{\mathcal{A}}$ and proof of Theorem \ref{thm:Gpolar} (2)}\label{sec:FA} \subsection{Relation to higher Green's functions} The goal of this section is to write the functions
$\mathcal{F}_{\mathcal A}$ defined in \eqref{eqn:Gdef} in terms of higher Green's functions.
For this, set
\begin{align}\label{eqn:akndef}
a_{k,n}:= -\frac{(2k-2)!}{2^{k-1}(k-1)!}
\begin{cases}1 & \text{if }n\geq k-1,\\
\frac{n!}{(2k-2-n)!}& \text{if }n<k-1.
\end{cases}
\end{align}
\begin{lemma}\label{lem:diffopseed}
For $Q\in\mathcal{Q}_{-D}$ and $n\in\mathbb{N}_0$ we have
\[
R_{2-2k}^n\!\left(\mathbb{P}_{1-k,-D,Q}(z)\right) = a_{k,n}
\begin{cases} R_0^{n+1-k}\!\left(g_k^{\mathbb{H}}(z,\tau_{Q})\right) & \text{ if }n\geq k-1,\\[0.1cm]
y^{2k-2-2n} \overline{R_{0}^{k-1-n}\!\left(g_k^{\mathbb{H}}(z,\tau_{Q})\right)} & \text{ if } n\leq k-1.
\end{cases}
\]
\end{lemma}
\begin{proof} We first prove that for every $z,\mathfrak z\in\mathbb{H}$ and $j\in\mathbb{N}_0$ we
have \begin{multline}\label{greenraise} R_{0,z}^{j}\!\left(g^{\mathbb{H}}_{k}(z,\mathfrak{z})\right) \\ =\frac{-2^{k-j-1}(k-1)!(k+j-1)!(\overline{z}-\mathfrak{z})^j(\overline{z}-\overline{\mathfrak{z}})^j}{(2k-1)!y^{2j}\mathbbm{y}^j\cosh(d(z,\mathfrak{z}))^{k+j}}\,{_2F_1}\left(\frac{k+j+1}{2},\frac{k+j}{2};k+\frac{1}{2};\frac{1}{\cosh(d(z,\mathfrak{z}))^2}\right). \end{multline}
We note that the images under repeated raising of the Green's function are known in the literature (see, e.g., \cite{Mellit}), but some rewriting is still required to derive the form \eqref{greenraise}. So, for the convenience of the reader, we present a direct proof. To show \eqref{greenraise}, we first compute \begin{equation}\label{raisecos} R_{0,z}\left(\cosh(d(z,\mathfrak{z}))\right)=-\frac{(\overline{z}-\mathfrak z)(\overline{z}-\overline{\mathfrak z})}{2y^2\mathbbm{y}}, \end{equation}
from which we conclude that $R^{2}_{0,z}\left(\cosh(d(z,\mathfrak{z}))\right)=0$. Employing these identities, induction on $j\in\mathbb{N}_0$ gives \begin{multline}\label{raisefirst} R_{0,z}^{j}\!\left(g^{\mathbb{H}}_{k}(z,\mathfrak{z})\right)\\ =-\frac{2^{k-1}\,(k-1)!^2}{ (2k-1)!}\, \left(R_{0,z}\!\left(\cosh(d(z,\mathfrak{z}))\right) \right)^j\frac{\partial^j}{\partial Z^j}\left[Z^{-k}\,{_2F_1}\left(\frac{k}{2},\frac{k+1}{2};k+\frac{1}{2};\frac{1}{Z^2}\right)\right]_{Z=\cosh(d(z,\mathfrak{z}))}. \end{multline}
Next, again by induction on $j\in\mathbb{N}_0$, and employing \eqref{diff2F1}, we obtain
\begin{align*}
\frac{\partial^j}{\partial Z^j}\left(Z^{-k}\,{_2F_1}\left(\frac{k}{2},\frac{k+1}{2};k+\frac{1}{2};\frac{1}{Z^2}\right)\right)
=\frac{(-1)^{j}(k+j-1)!}{(k-1)!Z^{k+j}}
\,{_2F_1}\left(\frac{k+j+1}{2},\frac{k+j}{2};k+\frac{1}{2};\frac{1}{Z^2}\right).
\end{align*}
Plugging this and \eqref{raisecos} into \eqref{raisefirst} gives \eqref{greenraise}.
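The $j$-th derivative formula above can be tested numerically; the sketch below (sample values $k=3$, $Z=2$, so that $1/Z^2<1$ and the hypergeometric series converges; the truncation at $60$ terms is an arbitrary choice) checks the case $j=1$ against a central finite difference:

```python
import math

def poch(a, m):
    # Pochhammer symbol (a)_m
    r = 1.0
    for i in range(m):
        r *= a + i
    return r

def hyp2f1(a, b, c, x, terms=60):
    # truncated Gauss hypergeometric series, valid for |x| < 1
    return sum(poch(a, m) * poch(b, m) / poch(c, m) * x**m / math.factorial(m)
               for m in range(terms))

def F(Z, k):
    # Z^{-k} 2F1(k/2, (k+1)/2; k+1/2; 1/Z^2)
    return Z**(-k) * hyp2f1(k / 2, (k + 1) / 2, k + 0.5, 1 / Z**2)

k, Z, h = 3, 2.0, 1e-5
numerical = (F(Z + h, k) - F(Z - h, k)) / (2 * h)   # d/dZ via central difference
exact = -k * Z**(-(k + 1)) * hyp2f1((k + 2) / 2, (k + 1) / 2, k + 0.5, 1 / Z**2)
assert abs(numerical - exact) < 1e-8
```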
Using \eqref{greenraise}, we next show the $n=0$ case of the assertion of Lemma \ref{lem:diffopseed}, namely
\begin{equation}\label{n0}
\mathbb{P}_{1-k,-D,Q}(z) = -\frac{1}{2^{k-1}(k-1)!} y^{2k-2}\overline{R_{0}^{k-1}\left(g_k^{\mathbb{H}}(z,\tau_Q)\right)}.
\end{equation}
For this we let $j=k-1$ in \eqref{greenraise}, which yields that
\begin{equation}\label{jk1}
R_{0,z}^{k-1}\!\left(g^{\mathbb{H}}_{k}(z,\mathfrak{z})\right)=-\frac{(k-1)!(\overline{z}-\mathfrak{z})^{k-1}(\overline{z}-\overline{\mathfrak{z}})^{k-1}}{(2k-1)y^{2k-2}\mathbbm{y}^{k-1}\cosh(d(z,\mathfrak{z}))^{2k-1}}{_2F_1}\!\left(k,k-\frac{1}{2};k+\frac{1}{2};\frac{1}{\cosh(d(z,\mathfrak{z}))^2}\right).
\end{equation} From now on we choose $\mathfrak{z}=\tau_Q$. Hence, employing \eqref{eqn:NISTbeta}, \eqref{eqn:coshrat}, and \eqref{rewriteQ}, \eqref{jk1} becomes
\begin{align*}
R_{0}^{k-1}\!\left(g^{\mathbb{H}}_{k}\!\left(z,\tau_{Q}\right)\right)=
i(-1)^{k}2^{k-2} (k-1)! D^{\frac{1-k}{2}} y^{2-2k} \overline{Q(z,1)}^{k-1}\beta\!\left( -\frac{Dy^2}{\abs{Q(z,1)}^2}; k-\frac{1}{2},\frac{1}{2}\right).
\end{align*}
Since $Q$ is positive-definite, so that $\operatorname{sgn}(Q_z)=1$, this gives \eqref{n0}.
To finish the proof, we apply raising $n$ times to \eqref{n0}, yielding
\begin{equation}\label{eqn:raiseseeds}
R_{2-2k}^n\left(\mathbb{P}_{1-k,-D,Q}(z)\right)=-\frac{1}{2^{k-1}(k-1)!}R_{2-2k}^n\left(y^{2k-2}\overline{R_0^{k-1}\left(g_k^{\mathbb{H}}(z,\tau_Q)\right)}\right).
\end{equation}
We now distinguish two cases, depending on whether $n\geq k-1$ or $n\leq k-1$.
In the case $n\geq k-1$, the claim follows from applying Lemma \ref{lem:raiserepeat} with $\ell=k-1$ to \eqref{eqn:raiseseeds} and noting that $g_k^{\mathbb{H}}$ is real-valued.
For $n\leq k-1$ the eigenvalue of $R_{0}^{k-1-n}(g_k^{\mathbb{H}})$ is
$ (n+1)(n+2-2k).$
Thus we have, by Lemma \ref{lem:raiserepeat} with $\ell=n$,
\begin{align*}
R_{2-2k}^n\!\left(y^{2k-2}\overline{R_{2k-2-2n}^n\left(R_{0}^{k-1-n}\!\left(g_k^{\mathbb{H}}\!\left(z, \tau_Q \right)\right)\right)}\right) = y^{2k-2-2n}\frac{n!(2k-2)!}{(2k-2-n)!} \overline{R_0^{k-1-n}\!\left(g_k^{\mathbb{H}}\!\left(z, \tau_Q \right)\right)}.
\end{align*}
Plugging this back into \eqref{eqn:raiseseeds} yields the claim.
\end{proof}
We also need regularized versions of $G_k$ and $\mathcal F_{\mathcal{A}}$. For this, define \[
\sideset{}{^{\operatorname{reg}}}\sum\limits_{w\in S} h(w):=\sum_{\substack{w\in S\\ h(w)\neq \infty}} h(w), \]
where $h$ is an arbitrary function taking inputs from some set $S$ and with outputs in $\mathbb{C}\cup \{\infty\}$. Note that different choices of $h$ lead to different subsets of $S$ being excluded on the right-hand side of the above equation. For any operator $\mathcal O$ we then let
$$
\mathcal O\left(\sideset{}{^{\operatorname{reg}}}\sum\limits_{w\in S} h(w)\right):=\sideset{}{^{\operatorname{reg}}}\sum\limits_{w\in S} \mathcal O(h(w)).
$$ Moreover, for $\mathcal{H}(z):=\sum_{w\in S} h_z(w)$ we set \[ \mathcal{H}^{\operatorname{reg}}(z):=\sideset{}{^{\operatorname{reg}}}{\sum}_{w\in S} h_z(w). \]
Note that $\mathcal{H}$ may have several distinct presentations of this type, say both as a sum of $h_z(w)$ over $w\in S$ and as a sum of a different function over another set. The regularization $\mathcal{H}^{\operatorname{reg}}(z)$ depends on the choice of presentation, so we emphasize that the regularization uses the choice of $h_z$ and the set $S$ given in the definition of $\mathcal{H}$.
We obtain the following corollary by applying Lemma \ref{lem:diffopseed} termwise.
\begin{corollary}\label{diffop}
For $Q_0\in\mathcal{A}\in\mathcal{Q}_{-D}/{\text {\rm SL}}_2(\mathbb{Z})$ and $n\in\mathbb{N}_0$ we have
\[
R_{2-2k}^n\!\left(\mathcal F_{\mathcal{A}}^{\operatorname{reg}}(z)\right) = \frac{a_{k,n}}{2\omega_{\tau_{Q_0}}} \begin{cases} R_0^{n+1-k}\!\left(G_k^{\operatorname{reg}}(z,\tau_{Q_0})\right) & \text{ if }n\geq k-1,\\ y^{2k-2-2n} \overline{R_{0}^{k-1-n}\!\left(G_k^{\operatorname{reg}}(z,\tau_{Q_0})\right)} & \text{ if } n\leq k-1,
\end{cases}
\] where $a_{k,n}$ is defined in \eqref{eqn:akndef}. In particular, \[ \mathcal{F}^{\mathrm{reg}}_{\mathcal{A}}(z)=-\frac1{2^k (k-1)!\,\omega_{\tau_{Q_0}}}y^{2k-2}\overline{ R_0^{k-1}\left(G_k^{\mathrm{reg}}\left(z, \tau_{Q_0}\right)\right)}. \]
\end{corollary}
\begin{proof}
Writing $Q=Q_0\circ M$ with $M\in \Gamma_{\tau_{Q_0}}\backslash{\text {\rm SL}}_2(\mathbb{Z})$, we have
\[
R_{2-2k}^n\!\left(\mathcal F_{\mathcal{A}}^{\operatorname{reg}}\right)=\frac{1}{2\omega_{\tau_{Q_0}}}\sideset{}{^{\operatorname{reg}}}\sum_{M\in{\text {\rm SL}}_2(\mathbb{Z})} R_{2-2k}^{n}\!\left(\mathbb{P}_{1-k,-D,Q_0\circ M}\right).
\]
The result then follows from Lemma \ref{lem:diffopseed}, using the relation $\tau_{Q_0\circ M} =M^{-1} \tau_{Q_0}$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:Gpolar} (2)}\label{sec:GQprop}
We now have the necessary pieces to show the modularity of $\mathcal{F}_{\mathcal{A}}$ and its relation to $f_{\mathcal{A}}$ under the differential operators. \begin{proof}[Proof of Theorem \ref{thm:Gpolar} (2)] We first show that \begin{equation}\label{eqn:RkGreens}
R_0^k\left(G_k\!\left(z,\tau_{Q_0}\right)\right)= -2^{k}(k-1)!\,\omega_{\tau_{Q_0}} f_{\mathcal{A}}(z).
\end{equation} For this, we plug $j=k$ into \eqref{greenraise}, which implies that $R_0^k\left(g_k^{\mathbb{H}}(z,\mathfrak{z})\right)$ equals
\begin{align}
-\frac{(k-1)!(\overline{z}-\mathfrak{z})^k(\overline{z}-\overline{\mathfrak{z}})^{k}}{2y^{2k}\mathbbm{y}^k\cosh(d(z,\mathfrak{z}))^{2k}} {_2F_1}\left(k+\frac{1}{2},k;k+\frac{1}{2};\frac{1}{\cosh(d(z,\mathfrak{z}))^2}\right). \label{jk}
\end{align}
Now, by \eqref{2F1sym} and \eqref{2F1 special}, \eqref{jk} becomes
\begin{align*}
-\frac{(k-1)!(\overline{z}-\mathfrak{z})^k(\overline{z}-\overline{\mathfrak{z}})^k}{2y^{2k}\mathbbm{y}^k(\cosh(d(z,\mathfrak{z}))^2-1)^k}.
\end{align*}
Taking $\mathfrak{z}=\tau_Q$, using \eqref{eqn:coshrat} and \eqref{eqn:yQval}, and plugging in termwise, gives \eqref{eqn:RkGreens}.
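The simplification in the last step amounts to the special value ${_2F_1}(a,b;a;x)=(1-x)^{-b}$, here with $a=k+\frac12$ and $b=k$. A numerical check from the defining series (sample parameters only; a sketch, not part of the proof):

```python
import math

def poch(a, m):
    # Pochhammer symbol (a)_m
    r = 1.0
    for i in range(m):
        r *= a + i
    return r

def hyp2f1(a, b, c, x, terms=60):
    # truncated Gauss hypergeometric series, valid for |x| < 1
    return sum(poch(a, m) * poch(b, m) / poch(c, m) * x**m / math.factorial(m)
               for m in range(terms))

k, x = 4, 0.3
assert abs(hyp2f1(k + 0.5, k, k + 0.5, x) - (1 - x)**(-k)) < 1e-10
```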
The statement for the $\xi$-operator now follows from Corollary \ref{diffop}, using \eqref{XiR} and \eqref{eqn:RkGreens}.
We next compute the image of $\mathcal F_{\mathcal A}$ under $\mathcal{D}^{2k-1}$. Using Bol's identity \eqref{Bol} we have, by Corollary \ref{diffop}, the equality
\begin{align*}
\mathcal{D}^{2k-1}\!\left(\mathcal F_{\mathcal{A}}(z)\right)=-\frac{1}{(4\pi)^{2k-1}} \frac{a_{k,2k-1}}{2\omega_{\tau_{Q_0}}}R_{0}^k\left(G_k\left(z,\tau_{Q_0}\right)\right).
\end{align*}
Using \eqref{eqn:RkGreens} then gives the claim.
Finally, Corollary \ref{diffop} immediately implies that $\mathcal{F}_{\mathcal{A}}$ is modular of weight $2-2k$, and the decomposition $\Delta_{2-2k}=-\xi_{2k}\circ \xi_{2-2k}$ together with \eqref{eqn:xiDG} gives that $\mathcal{F}_{\mathcal{A}}$ is annihilated by $\Delta_{2-2k}$. From this one concludes that $\mathcal{F}_{\mathcal{A}}$ is a polar harmonic Maass form. \end{proof}
\subsection{Elliptic expansion of $\mathcal F_{\mathcal{A}}$}
Before stating the elliptic expansion of $\mathcal{F}_{\mathcal{A}}$, we first give a general formula for functions annihilated by the Laplace operator. \begin{lemma}\label{lem:F+ellexpraise} Suppose that $\mathcal{M}:\mathbb{H}\to\mathbb{C}$ satisfies $\Delta_{2-2k}(\mathcal M)=0$ and has a singularity of finite order at $\varrho\in\mathbb H$. Then for $0\leq r_{\varrho}(z)\ll_{\varrho} 1$ we have
\begin{equation}\label{id1}
\left(\mathcal{M}_{\varrho}^+-\mathscr{P}_{\mathcal{M},\varrho}^+\right)(z) = (2i\eta)^{2-2k} \left(z-\overline{\varrho}\right)^{2k-2}\sum_{n\geq 0} \frac{\eta^n}{n!} R_{2-2k}^{n}\left(\mathcal{M}-\mathscr{P}_{\mathcal{M},\varrho}\right)(\varrho) X_{\varrho}(z)^n.
\end{equation}
In particular, if $\mathcal{M}$ does not have a singularity at $\varrho$, then
for $0\leq r_{\varrho}(z)\ll_{\varrho} 1$
we have
\begin{equation}\label{id2}
\mathcal{M}_{\varrho}^+(z) = (2i\eta)^{2-2k} \left(z-\overline{\varrho}\right)^{2k-2}\sum_{n\geq 0} \frac{\eta^n}{n!} R_{2-2k}^{n}\left(\mathcal{M}(\varrho)\right) X_{\varrho}(z)^n.
\end{equation}
\end{lemma} \begin{remarks}
\noindent \begin{enumerate}[leftmargin=*] \item In \eqref{id1}, one first needs to act on an independent variable and then plug in $\varrho$, because $\mathcal{M}-\mathscr{P}_{\mathcal{M},\varrho}$ depends on $\varrho$. In \eqref{id2}, one may directly apply raising. \item
A similar statement is true for the non-meromorphic part $\mathcal{M}_{\varrho}^-$, where raising is instead applied to the conjugate of $\mathcal{M}$. However, we do not work out the details here. \end{enumerate} \end{remarks} \begin{proof}[Proof of Lemma \ref{lem:F+ellexpraise}]
Since $\mathcal{M}_{\varrho}^+-\mathscr{P}_{\mathcal{M},\varrho}^+$ is holomorphic in some region around $\varrho$, Lemma \ref{lem:ellexpraise} provides, for $z$ in some neighborhood of $\varrho$, the expansion \begin{equation*} \left(\mathcal{M}_{\varrho}^+-\mathscr{P}_{\mathcal{M},\varrho}^+\right)(z)=(2i\eta)^{2-2k} \left(z-\overline{\varrho}\right)^{2k-2}\sum_{n\geq 0} \frac{ \eta^n}{n!} R_{2-2k}^{n}\left(\mathcal{M}_{\varrho}^+-\mathscr{P}_{\mathcal{M},\varrho}^+\right)(\varrho) X_{\varrho}(z)^n. \end{equation*} The claim hence follows once we prove that
\[
R_{2-2k}^n\left(\mathcal{M}_{\varrho}^+-\mathscr{P}_{\mathcal{M},\varrho}^+\right)(\varrho)=R_{2-2k}^n\left(\mathcal{M}-\mathscr{P}_{\mathcal{M},\varrho}\right)(\varrho). \] Noting which terms in the expansion \eqref{eqn:expw} grow as $z\to\varrho$, it suffices to show that for all $m\in \mathbb{N}$ and $n\in\mathbb{N}_0$ one has
\begin{equation}\label{eqn:raisevanish}
\left[R_{2-2k,z}^n\!\left((z-\overline{\varrho})^{2k-2}\beta_0\!\left(1-r_{\varrho}(z)^2;2k-1,m\right)X_{\varrho}(z)^{-m}\right)\right]_{z=\varrho}=0.
\end{equation}
To prove \eqref{eqn:raisevanish}, we first use \eqref{betid} to rewrite
\begin{multline}\label{eqn:toraise}
(z-\overline{\varrho})^{2k-2}\beta_0\!\left(1-r_{\varrho}(z)^2;2k-1,m\right)X_{\varrho}(z)^{-m} \\
=\!\!\!\! \sum_{0\leq j\leq 2k-2} \binom{2k-2}{j} \frac{(-1)^{j+1}}{j+m} (z-\overline{\varrho})^{2k-2} X_{\varrho}(z)^j \overline{X_{\varrho}(z)^{j+m}}.
\end{multline}
We next apply $R_{2-2k,z}^{n}$ to \eqref{eqn:toraise}. All of the factors other than $(z-\overline{\varrho})^{2k-2} X_{\varrho}(z)^j$ are annihilated by differentiation in $z$. Note moreover that $\overline{X_{\varrho}(z)^{j+m}}$ vanishes at $z=\varrho$, and also that the limit of $R_{2-2k,z}^{n}((z-\overline{\varrho})^{2k-2}X_{\varrho}(z)^j)$ as $z\to \varrho$ exists because $j\geq 0$ and the resulting function is a polynomial (of degree at most $2k-2$) in $z$ with coefficients depending on $y$ and $\varrho$. Therefore \eqref{eqn:raisevanish} follows. \end{proof}
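For completeness we remark that the rewriting \eqref{eqn:toraise} ultimately rests on the binomial expansion of the incomplete beta function. Assuming the standard normalization $\beta(x;a,b)=\int_0^x t^{a-1}(1-t)^{b-1}\,dt$, the underlying identity $\beta\left(1-r^2;2k-1,m\right)=\sum_{j=0}^{2k-2}(-1)^j\binom{2k-2}{j}\frac{1-r^{2(j+m)}}{j+m}$ can be checked numerically (sample parameters only; a sketch under the stated normalization):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum(f(a + i * h) * (4 if i % 2 else 2) for i in range(1, n))
    return s * h / 3

def inc_beta(x, a, b):
    # beta(x; a, b) = \int_0^x t^{a-1} (1-t)^{b-1} dt  (standard normalization)
    return simpson(lambda t: t**(a - 1) * (1 - t)**(b - 1), 0.0, x)

def binom_form(r, k, m):
    # binomial expansion of beta(1 - r^2; 2k-1, m)
    return sum((-1)**j * math.comb(2*k - 2, j) * (1 - r**(2*(j + m))) / (j + m)
               for j in range(2*k - 1))

k, m, r = 3, 2, 0.6
assert abs(inc_beta(1 - r**2, 2*k - 1, m) - binom_form(r, k, m)) < 1e-9
```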
We next describe the principal part of the elliptic expansion of $\mathcal F_{\mathcal{A}}$ around $\varrho$, and relate the coefficients of its expansion to higher Green's functions.
\begin{lemma}\label{ellipticF} The principal part of $\mathcal F_{\mathcal{A}}$ around $\varrho\in\mathbb{H}$ is \[ \mathscr{P}_{\mathcal F_{\mathcal{A}},\varrho}(z)=\delta_{\varrho=\tau_Q}\mathbb{P}_{1-k,-D,Q}(z), \] where here by $\delta_{\varrho=\tau_Q}$ we mean that $\varrho=\tau_Q$ as points in $\mathbb{H}$ instead of ${\text {\rm SL}}_2(\mathbb{Z})\backslash\mathbb{H}$ as used throughout the paper. The elliptic coefficients of the meromorphic part of $\mathcal F_{\mathcal{A}}$ are given by \[ c_{\mathcal F_{\mathcal{A}},\varrho}^{+}(n)=\frac{b_{k,n}}{\omega_{\tau_{Q_0}} } \begin{cases} \eta^{2-2k+n}R_{0,\varrho}^{n+1-k}\!\left(G_k^{\operatorname{reg}}\!\left(\varrho,\tau_{Q_0}\right)\right)&\text{if }n\geq k-1,\\[0.1cm] \eta^{-n} \overline{R_{0,\varrho}^{k-1-n}\!\left(G_k^{\operatorname{reg}}\!\left(\varrho,\tau_{Q_0}\right)\right)} &\text{if }n\leq k-1, \end{cases} \] where $b_{k,n}$ is defined in \eqref{eqn:bkndef}. \end{lemma}
\begin{proof}
First note that since $0<D\leq Q_z^2$ with $D=Q_z^2$ if and only if $Q(z,1)=0$ by
\eqref{eqn:Qrewrite}, and since the only possible singularities of $\beta(x;a,b)$ are at $x=0$, $x=1$, and $x\to\infty$, the only terms contributing singularities are those with $\varrho=\tau_Q$. Since $Q$ is entirely determined by $\tau_Q$ and $D$, it remains to show that $\mathbb{P}_{1-k,-D,Q}$ is precisely a principal part (i.e., that its elliptic expansion \eqref{eqn:expw} only contains terms that grow towards $\varrho$). The claim hence follows once we show that
\begin{equation}\label{eqn:BetaRels}
\mathbb{P}_{1-k,-D,Q}(z)=2^{2-3k} \!\left(z-\overline{\tau_{Q}}\right)^{2k-2} v_Q^{1-k} \beta_0\left(1-r_{\tau_{Q}}(z)^2; 2k-1,1-k\right) X_{\tau_{Q}}(z)^{k-1}.
\end{equation} To obtain \eqref{eqn:BetaRels}, note that it is not hard to see that the constant $\mathcal C_{2k-1,1-k}$ defined in \eqref{eqn:beta0def} vanishes, and thus \[ \beta_0(Z;2k-1,1-k)=\beta(Z;2k-1,1-k). \] By \eqref{rewriteQ}, the right-hand side of \eqref{eqn:BetaRels} thus equals \[ 2^{1-2k} D^{\frac{1-k}{2}} Q(z,1)^{k-1} \beta\left(1-r_{\tau_{Q}}(z)^2; 2k-1,1-k\right). \] We then use \eqref{betatran}, noting that \eqref{1r} and \eqref{eqn:Qrewrite} imply that \begin{equation*}
-\frac{\left(1-r_{\tau_Q}(z)^2\right)^2}{4r_{\tau_Q}(z)^2}= -\frac{Dy^2}{|Q(z,1)|^2}, \end{equation*} to obtain \[
2^{-1}i(-1)^k D^{\frac{1-k}{2}}Q(z,1)^{k-1} \beta\!\left(\frac{-Dy^2}{|Q(z,1)|^2}; k-\frac{1}{2},\frac{1}{2}\right).
\] Recalling the definition of $\mathbb{P}_{1-k,-D,Q}$ in \eqref{defineP}, this yields the statement for the principal part.
We next evaluate the elliptic coefficients of the meromorphic part. For $n\in\mathbb{N}_0$, Lemma \ref{lem:F+ellexpraise} allows us
to rewrite
\begin{equation*}
c_{\mathcal F_\mathcal{A},\varrho}^+(n)= \frac{ (2i)^{2-2k}}{n!} \eta^{2-2k+n} R_{2-2k}^{n}\left(\mathcal F_{\mathcal{A}}-\mathscr{P}_{\mathcal F_{\mathcal{A}},\varrho}\right)(\varrho).
\end{equation*} Using \eqref{eqn:BetaRels} and acting termwise yields
\begin{equation*}
c_{\mathcal F_\mathcal{A},\varrho}^+(n)= \frac{ (2i)^{2-2k}}{n!}\eta^{2-2k+n}
R_{2-2k}^{n}\left(\mathcal F_{\mathcal{A}}^{\operatorname{reg}}(\varrho)\right).
\end{equation*}
The result then follows from Corollary \ref{diffop}. \end{proof}
\section{Proof of Theorem \ref{generalint} and Corollary \ref{cor:Greensinner}}\label{sec:residue}
In this section we prove a more general version of Theorem \ref{generalint} in Theorem \ref{thm:innerGreensGeneral} below, and then use this to prove Corollary \ref{cor:Greensinner}. In order to do so, we first rewrite the inner product $\langle f,f_{\mathcal A}\rangle$ in terms of the elliptic coefficients of $f$ given in \eqref{eqn:fEllExp}, as well as those of $\mathcal F_{\mathcal{A}}$, evaluated explicitly in Lemma \ref{ellipticF}.
\begin{theorem}\label{thm:wnotz} If $f\in \mathbb{S}_{2k}$ has its poles at $\mathfrak{z}_{1},\dots, \mathfrak{z}_r$ in ${\text {\rm SL}}_2(\mathbb{Z})\backslash\mathbb{H}$, then $$ \left<f,f_{\mathcal{A}}\right>=\pi \sum_{\ell=1}^{r} \frac{1}{\mathbbm{y}_\ell \omega_{\mathfrak{z}_\ell}}
\sum_{n\geq 1} c_{f,\mathfrak{z}_\ell}(-n)c_{\mathcal{F}_{\mathcal{A}},\mathfrak{z}_\ell}^+(n-1). $$ \end{theorem} \begin{proof} Since the functions $\left\{ \Psi_{2k,m}(\cdot,\mathfrak{z}): \mathfrak{z}\in\mathbb{H},\ m\in\mathbb{Z}\right\}$ span $\mathbb{S}_{2k}$, linearity allows us to assume that $f(z)=(2\omega_{\mathfrak{z}})^{-1}\Psi_{2k,m}(z,\mathfrak{z})$ for $m\in\mathbb{Z}$, $\mathfrak{z}\in\mathbb{H}$. We use a trick employed by many authors (cf. \cite{Bo1,BruinierFunke,DITRQ}) for rewriting the inner product. By \eqref{eqn:xiDG} we obtain \begin{equation}\label{regist} \left<f,f_{\mathcal{A}}\right>=\left<f, \xi_{2-2k}\left(\mathcal F_{\mathcal{A}}\right)\right>. \end{equation} We take the implied integral in \eqref{regist} over the cut-off fundamental domain $ F_T^*$, consisting of those $z\in F^*$ for which $z$ is equivalent to a point in $ F_T$ under the action of ${\text {\rm SL}}_2(\mathbb{Z})$, and then let $T\to \infty$. We require a few additional properties of $ F^*$. First we may assume, without loss of generality, that $\tau_{Q_0},\mathfrak{z}\in F^*$. We also claim that since there are no poles of $f$ or $f_{\mathcal{A}}$ for $y\gg 0$, $ F^*$ may be constructed so that for $T\gg 0$, the boundary of $ F_T^*$ includes the line from $-\frac12+iT$ to $\frac12+iT$. Indeed, one can explicitly build $ F^*$ from the standard fundamental domain $ F$ by successively removing partial balls $\mathcal{B}_{\delta}(\mathfrak{z}_\ell)\cap F$ around each pole $\mathfrak{z}_\ell\in \partial F$ that is not an elliptic fixed point and moving them to the other side of the fundamental domain with respect to the imaginary axis to combine with other partial balls around equivalent points $\gamma\mathfrak{z}_\ell\in \partial F$ to form entire balls $\mathcal{B}_{\delta}(\gamma\mathfrak{z}_\ell)$ for some $\gamma\in{\text {\rm SL}}_2(\mathbb{Z})$. Since the part of the fundamental domain with $y\gg 0$ remains unchanged, the boundary of $ F_T^*$ is as desired. 
Moreover, we may choose $\delta>0$ sufficiently small such that $\mathcal{B}_{\delta}(\mathfrak{z}_{\ell})$ is contained inside $\Gamma_{\mathfrak{z}_{\ell}} F^*$ and balls around different points are disjoint. By Stokes' Theorem, using the meromorphicity of $f$ and the vanishing of $f(z)h_{s,\varrho}(z)$ at $z=\varrho$ for $s\in\mathbb{C}$ with $\sigma\gg 0$, \eqref{regist} equals \begin{multline}\label{eqn:polez} -\operatorname{CT}_{s=0}\Bigg(\int_{ F^*} f(z)\overline{ \xi_{0} \left(h_{s_1,\mathfrak{z}}(z)h_{s_2,\tau_{Q_0}}(z)\right)} \mathcal F_{\mathcal A}(z) dxdy \\ +\lim_{T\to\infty}\int_{\partial F_T^*} f(z) h_{s_1,\mathfrak{z}}(z)h_{s_2,\tau_{Q_0}}(z)\mathcal F_{\mathcal A}(z)dz\Bigg). \end{multline} Note that the minus sign occurring in the second term in \eqref{eqn:polez} comes from the computation of the exterior derivative in terms of the $\xi$-operator; see the last two formulas on page 12 of \cite{BruinierFunke} for further details.
Recalling the definition after \eqref{eqn:OurReg}, we note that $z\mapsto h_{s_0,\varrho}(z)$ is invariant under $\Gamma$, and hence the integrand in the second term of \eqref{eqn:polez} is modular of weight $2$. Combining this modularity with the exponential decay of $f$ towards $i\infty$ and the polynomial growth of the other factors, one concludes that the second term vanishes as $T\to\infty$. Using the invariance of the integrand under the action of $\Gamma_{\tau_{Q_0}}$ and $\Gamma_{\mathfrak{z}}$, we then rewrite \eqref{eqn:polez} as \begin{equation}\label{eqn:polez5} -\frac{1}{\omega_{\tau_{Q_0}}\omega_{\mathfrak{z}}}\operatorname{CT}_{s=0}\left(\sum_{\gamma_1\in\Gamma_{\mathfrak{z}}}\sum_{\gamma_2\in\Gamma_{\tau_{Q_0}}}\int_{\gamma_1\gamma_2 F^*} f(z)\overline{\xi_{0}\left(h_{s_1,\mathfrak{z}}(z)h_{s_2,\tau_{Q_0}}(z)\right)} \mathcal F_{\mathcal A}(z)dxdy\right). \end{equation} Note that for $\varrho\in F^*$, no other element of $\Gamma_{\varrho} F^*$ is equivalent to $\varrho$ modulo ${\text {\rm SL}}_2(\mathbb{Z})$, and we have the equality $r_{\varrho}(Mz)=r_{\varrho}(z)$ for every $M\in \Gamma_{\varrho}$ by \eqref{eqn:requal}. Hence the equality $h_{s_j,\varrho}(z)=r_{\varrho}(z)^{2s_j}$ holds for every $z\in \Gamma_{\varrho} F^*$. For $(\varrho,s_0)\in\{(\mathfrak{z},s_1),(\tau_{Q_0},s_2)\}$ we may therefore compute \begin{equation}\label{eqn:xih} \overline{\xi_{0}\left(h_{s_0,\varrho}(z)\right)}=-4s_0\eta r_{\varrho}(z)^{2s_0-2} \frac{X_{\varrho}(z)}{\left(\overline{z}-\varrho\right)^2}. \end{equation}
Thus $1/(s_1s_2)$ times the first integral in \eqref{eqn:polez} restricted to $z\notin \mathcal{B}_{\delta}(\mathfrak{z})\cup \mathcal{B}_{\delta}(\tau_{Q_0})$ converges absolutely and locally uniformly in $s$, and hence the corresponding contribution to the integral is analytic and vanishes at $s=0$. To evaluate the remaining part of \eqref{eqn:polez5}, we first compute $\xi_{0}(h_{s_1,\mathfrak{z}}(z)h_{s_2,\tau_{Q_0}}(z))$ for $z\in \mathcal{B}_{\delta}(\mathfrak{z})\cap F^*$ using the product rule. The term coming from differentiating $h_{s_2,\tau_{Q_0}}$ vanishes in the limit $s_2\to 0$ by \eqref{eqn:xih}. We then use \eqref{eqn:Ballsplit} and \eqref{eqn:polez5} to show that the first integral in \eqref{eqn:polez} over $\mathcal{B}_{\delta}(\mathfrak{z})\cap F^*$ equals $$ -\frac{1}{\omega_{\mathfrak{z}}}\operatorname{CT}_{s_1=0}\left(\int_{\mathcal{B}_\delta(\mathfrak{z})} f(z)\overline{\xi_{0,z}\left(h_{s_1,\mathfrak{z}}(z)\right)} \mathcal F_{\mathcal A}(z) dxdy\right). $$
Plugging \eqref{eqn:xih} into the latter expression, and repeating the argument for $\tau_{Q_0}$ if $\tau_{Q_0}\neq \mathfrak{z}$, \eqref{eqn:polez} becomes \begin{equation} \label{eqn:polez2} \frac{4v_{Q_0}}{\omega_{\tau_{Q_0}}}\mathcal{J}\!\left(\tau_{Q_0}\right)+\delta_{\mathfrak{z}\neq \tau_{Q_0}}\frac{4\mathbbm{y}}{\omega_{\mathfrak{z}}}\mathcal{J}(\mathfrak{z}), \end{equation} where \begin{equation} \label{eqn:inttoeval} \mathcal{J}(\varrho):= \operatorname{CT}_{s_0=0}\left(s_0 \int_{\mathcal{B}_{\delta}(\varrho)} f(z)r_{\varrho}(z)^{2s_0-2}\frac{X_{\varrho}(z)}{\left(\overline{z}-\varrho\right)^2} \mathcal F_{\mathcal A}(z) dxdy \right). \end{equation}
To evaluate $\mathcal{J}(\varrho)$, we insert the elliptic expansion \eqref{eqn:Psiexp} of $f(z)=(2\omega_{\mathfrak{z}})^{-1}\Psi_{2k,m}(z,\mathfrak{z})$ around $\varrho$ and the expansion of $\mathcal F_{\mathcal A}$ using the explicit principal part given in Lemma \ref{ellipticF} (rewritten as in \eqref{eqn:BetaRels}) to
see that the integral in \eqref{eqn:inttoeval} equals \begin{multline}\label{eqn:intb4change}
\frac{1}{\eta^2}\int_{\mathcal{B}_{\delta}(\varrho)}\frac{\eta^2}{\left|z-\overline{\varrho}\right|^4}\left(\delta_{\varrho=\mathfrak{z}} X_{\varrho}(z)^m +\sum_{n\geq 0} c(n) X_{\varrho}(z)^n\right)r_{\varrho}(z)^{2s_0-2}X_{\varrho}(z)\\ \times \Bigg( 2^{2-3k}v_{Q_0}^{1-k}\delta_{\varrho=\tau_{Q_0}}\beta_0\left(1-r_{\tau_{Q_0}}(z)^2;2k-1,1-k\right) X_{\tau_{Q_0}}(z)^{k-1} +\sum_{\ell\geq 0} c_{\mathcal{F}_{\mathcal{A}},\varrho}^+(\ell) X_{\varrho}(z)^{\ell}\\
+ \sum_{\ell<0} c_{\mathcal{F}_{\mathcal{A}},\varrho}^-(\ell) \beta_0\left(1-r_{\varrho}(z)^2;2k-1,-\ell\right)X_{\varrho}(z)^{\ell} \Bigg)dxdy. \end{multline}
Making the change of variables $X_{\varrho}(z)=Re^{i\theta}$ and noting that $\frac{\eta^2}{\left|z-\overline{\varrho}\right|^{4}} dxdy = \frac{R}{4} d\theta dR$ and $r_{\varrho}(z)=R$, we may rewrite \eqref{eqn:intb4change} as \begin{multline*} \frac{1}{4\eta^2}\int_{0}^{\delta}\int_{0}^{2\pi}\left( \delta_{\varrho=\mathfrak{z}}R^m e^{im\theta} +\sum_{n\geq 0} c(n) R^n e^{in\theta}\right)
\Bigg(\delta_{\varrho=\tau_{Q_0}}\frac{ R^{k-1+2s_0}e^{ik\theta}}{2^{3k-2}v_{Q_0}^{k-1}}\beta_0\left(1-R^2;2k-1,1-k\right) \\ +\sum_{\ell\geq 0} c_{\mathcal{F}_{\mathcal{A}},\varrho}^+(\ell) R^{\ell+2s_0}e^{i(\ell+1)\theta}+ \sum_{\ell<0} c_{\mathcal{F}_{\mathcal{A}},\varrho}^-(\ell)\beta_0\left(1-R^2;2k-1,-\ell\right)R^{\ell+2s_0}e^{i(\ell+1) \theta} \Bigg) d\theta dR.
\end{multline*} Expanding, the integral over $\theta$ vanishes unless the power of $e^{i\theta}$ is zero. The latter expression thus equals \begin{multline}\label{eqn:polez3} \frac{\pi }{2\eta^2} \int_{0}^{\delta} \bigg(\frac{\delta_{m=-k}\delta_{\varrho=\tau_{Q}=\mathfrak{z}}}{2^{3k-2}v_{Q_0}^{k-1}}\beta_0\left(1-R^2;2k-1,1-k\right) + \delta_{\varrho=\mathfrak{z}} c_{\mathcal{F}_{\mathcal{A}},\varrho}^+ \left(-m-1\right)\\ +\sum_{n\geq 0} \left(c(n)+\delta_{n=m}\delta_{\varrho=\mathfrak{z}}\right) c_{\mathcal{F}_{\mathcal{A}},\varrho}^- \left(-n-1\right) \beta_0\left(1-R^2;2k-1,n+1\right) \bigg)R^{2s_0-1} dR.
\end{multline} \noindent To determine the residue of \eqref{eqn:polez3} at $s_0=0$, we use \eqref{betid} with $a=2k-1$ and $b=-\ell$ to expand $\beta_0(1-R^2;2k-1,-\ell)$. For $\sigma_0\gg 0$, multiplying the first term in \eqref{betid} by $R^{2s_0-1}$ and integrating then yields $$ \sum_{\substack{0\leq j\leq 2k-2\\ j\neq \ell}} \binom{2k-2}{j}\frac{(-1)^{j+1}}{j-\ell}\int_{0}^{\delta}R^{2(j-\ell+s_0)-1} dR = \sum_{\substack{0\leq j\leq 2k-2\\ j\neq \ell}} \binom{2k-2}{j}\frac{(-1)^{j+1} \delta^{2(j-\ell+s_0)}}{2(j-\ell)(j-\ell+s_0)}, $$ which is holomorphic at $s_0=0$, so the corresponding terms in \eqref{eqn:polez3} give no residue. Hence
\begin{equation}\label{eqn:polez4}
\mathcal{J}(\varrho)=\frac{\pi \delta_{\varrho=\mathfrak{z}}}{2\eta^2 }\operatorname{CT}_{s_0=0}\left( -s_0\frac{\delta_{m=-k}\delta_{\varrho=\tau_{Q_0}}}{(-8v_{Q_0})^{k-1}}\binom{2k-2}{k-1}\int_{0}^{\delta} \log(R)R^{2s_0-1}dR + c_{\mathcal{F}_{\mathcal{A}},\varrho}^+ \left(-m-1\right)\frac{\delta^{2s_0}}{2}\right).
\end{equation} \noindent Using integration by parts for the first summand in \eqref{eqn:polez4}, we
obtain a meromorphic continuation with no constant term, as $$ s_0\int_{0}^{\delta} \log\left(R\right)R^{2s_0-1}dR = \frac{\delta^{2s_0}}{2}\log(\delta) -\frac{1}{4s_0} \delta^{2s_0} =-\frac{1}{4s_0} + O(s_0). $$ Therefore \[ \mathcal{J}(\varrho)=\frac{\pi \delta_{\varrho=\mathfrak{z}}}{4\eta^2}c_{\mathcal{F}_{\mathcal{A}},\varrho}^+(-m-1). \] Plugging this back into \eqref{eqn:polez2} and recalling that this equals \eqref{regist} then gives \begin{align*} \langle f, f_{\mathcal{A}}\rangle & =\left\langle f, \xi_{2-2k}\left(\mathcal{F}_{\mathcal{A}}\right)\right\rangle =\frac{4v_{Q_0}}{\omega_{\tau_{Q_0}}}\mathcal{J}\left(\tau_{Q_0}\right)+\frac{4\mathbbm{y}\delta_{\mathfrak{z}\neq \tau_{Q_0}}}{\omega_{\mathfrak{z}}}\mathcal{J}(\mathfrak{z})\\ &\ =\frac{\pi}{v_{Q_0}\omega_{\tau_{Q_0}}}\delta_{\mathfrak{z}=\tau_{Q_0}}c^+_{\mathcal{F}_{\mathcal{A}},\tau_{Q_0}}(-m-1) +\frac{\pi}{\mathbbm{y}\omega_{\mathfrak{z}}}\delta_{\mathfrak{z}\neq\tau_{Q_0}}c^+_{\mathcal{F}_{\mathcal{A}},\mathfrak{z}}(-m-1)=\frac{\pi}{\mathbbm{y}\omega_{\mathfrak{z}}} c^+_{\mathcal{F}_{\mathcal{A}},\mathfrak{z}}(-m-1). \end{align*} \end{proof}
The following theorem generalizes Theorem \ref{generalint} to also allow poles at $\tau_{Q_0}$. \begin{theorem}\label{thm:innerGreensGeneral} If $Q_0\in \mathcal{A}\in\mathcal{Q}_{-D}\backslash {\text {\rm SL}}_2(\mathbb{Z})$ and $f\in \mathbb{S}_{2k}$ with poles in ${\text {\rm SL}}_2(\mathbb{Z})\backslash \mathbb{H}$ at $\mathfrak{z}_1,\dots,\mathfrak{z}_r$, then \begin{multline*} \left<f,f_{\mathcal{A}}\right>=\frac{\pi}{\omega_{\tau_{Q_0}}}
\sum_{\ell=1}^r \frac1{ \omega_{\mathfrak{z}_\ell}} \Bigg(\sum_{n\geq k}b_{k,n-1} \mathbbm{y}_\ell^{-2k+n} c_{f,\mathfrak{z}_{\ell}}(-n) R_{0}^{n-k}\left(G_k^{\operatorname{reg}}(z,\tau_{Q_0})\right)\\ + \sum_{n=1}^{k-1} b_{k,n-1} \mathbbm{y}_\ell^{-n} c_{f,\mathfrak{z}_{\ell}}(-n) \overline{R_0^{k-n}\left(G_k^{\operatorname{reg}}(z,\tau_{Q_0})\right)} \Bigg). \end{multline*} \end{theorem} \begin{proof} The result follows directly by plugging Lemma \ref{ellipticF} into the statement of Theorem \ref{thm:wnotz}. \end{proof}
We finally prove Corollary \ref{cor:Greensinner}. \begin{proof}[Proof of Corollary \ref{cor:Greensinner}] This follows immediately from Theorem \ref{generalint} and Lemma \ref{lem:fPsi}. \end{proof} \section{Future questions}\label{sec:future}
\noindent We conclude the paper by discussing some possible future directions that one could pursue: \noindent \begin{enumerate}[leftmargin=*] \item Note that by Theorem \ref{generalint}, $f_{\mathcal{A}}$ is orthogonal to cusp forms, which was also proven by Petersson \cite{Pe2}.
Combining the regularizations for growth towards the cusps and towards points in $\mathbb{H}$, one can further prove that $f_{\mathcal{A}}$ is orthogonal to weakly holomorphic modular forms, but we do not carry out the details here. After reading a preliminary version of this paper, Zemel \cite{Zemel} considered some questions related to inner products between weakly holomorphic modular forms and meromorphic cusp forms. \item Images of lifts between integral and half-integral weight weak Maass forms have Fourier expansions that can be written as CM-traces for negative discriminants and cycle integrals for positive discriminants \cite{BO,BruO,DIT}. Thus the appearance of CM-values of ${\text {\rm SL}}_2(\mathbb{Z})$-invariant functions in Theorem \ref{thm:innerGreensGeneral} is natural. Since the generating function of Zagier's cusp forms for positive discriminants yields the (holomorphic) kernel function for the first Shintani lift, one may ask whether there is a connection between CM-traces and the generating function of the $f_{k,-D}$. However, the naive generating function diverges, and furthermore, it would have a dense set of poles in the upper half-plane. It hence might be interesting to investigate instead whether the generating function for the regularized function $f_{k,-D}^{\operatorname{reg}}$ has any connection to CM-traces. \item In light of the connection in Corollary \ref{cor:Greensinner}, it would be interesting to investigate Conjecture 4.4 of \cite{GZ}, concerning $G_k$ evaluated at CM-points. Moreover, since $G_k^{\operatorname{reg}}(\tau_{Q_0},\tau_{Q_0})$ naturally appears when computing $\left<f_{\mathcal{A}},f_{\mathcal{A}}\right>$, one can probably use the regularized higher Green's functions to reformulate the conjecture to include the case when the CM-points agree. 
Given the connections to heights and geometry in \cite{GZ} and \cite{Zhang}, it would also be interesting to see if the identity in Corollary \ref{cor:Greensinner} holds for $k=1$ and higher level in this case. \item In Conjecture 4.4 of \cite{GZ}, Gross and Zagier took linear combinations of $G_{k}$ acted on by Hecke operators, and
conjectured that these linear combinations evaluated at CM-points are essentially logarithms of algebraic numbers whenever the linear combinations satisfy certain relations. These relations are determined by linear equations defined by the Fourier coefficients of weight $2k$ cusp forms. Note that by Corollary \ref{diffop}, $G_{k}(z,\tau_Q)$ is essentially $R_0^{k-1}(\mathcal{F}_{\mathcal{A}}(z))$, while $\mathcal F_{\mathcal{A}}$ is naturally related to $f_{\mathcal{A}}$ via differential operators in Theorem \ref{thm:Gpolar} (2). Translating the condition of Gross and Zagier into a condition on polar harmonic Maass forms might be enlightening in two directions. On the one hand, it might carve out a natural subspace of weight $2k$ meromorphic modular forms (corresponding to the image under $\xi_{2-2k}$
of those polar harmonic Maass forms
satisfying these conditions), which
may satisfy other interesting properties. On the other hand,
by applying the theory of harmonic Maass forms, one may be able to loosen the conditions and investigate what happens for general linear combinations. \end{enumerate}
\end{document}
\begin{definition}[Definition:Alternating Series]
Let $\ds s = \sum_{n \mathop = 1}^\infty a_n$ be a series in the real numbers $\R$.
The series $s$ is an '''alternating series''' {{iff}} the terms of $\sequence {a_n}$ alternate between positive and negative.
\end{definition}
K-trivial set
In mathematics, a set of natural numbers is called a K-trivial set if its initial segments viewed as binary strings are easy to describe: the prefix-free Kolmogorov complexity is as low as possible, close to that of a computable set. Solovay proved in 1975 that a set can be K-trivial without being computable.
The Schnorr–Levin theorem says that random sets have a high initial segment complexity. Thus the K-trivials are far from random. This is why these sets are studied in the field of algorithmic randomness, which is a subfield of Computability theory and related to algorithmic information theory in computer science.
At the same time, K-trivial sets are close to computable. For instance, they are all superlow, i.e. sets whose Turing jump is computable from the Halting problem, and form a Turing ideal, i.e. class of sets closed under Turing join and closed downward under Turing reduction.
Definition
Let K be the prefix-free Kolmogorov Complexity, i.e. given a string x, K(x) outputs the least length of the input string under a prefix-free universal machine. Such a machine, intuitively, represents a universal programming language with the property that no valid program can be obtained as a proper extension of another valid program. For more background of K, see e.g. Chaitin's constant.
We say a set A of the natural numbers is K-trivial via a constant b ∈ $\mathbb {N} $ if
$\forall nK(A\upharpoonright n)\leq K(n)+b$.
A set is K-trivial if it is K-trivial via some constant.[1][2]
Brief history and development
In the early days of the development of K-triviality, attention was paid to separation of K-trivial sets and computable sets.
Chaitin in his 1976 paper [3] mainly studied sets such that there exists b ∈$\mathbb {N} $ with
$\forall nC(A\upharpoonright n)\leq C(n)+b$
where C denotes the plain Kolmogorov complexity. These sets are known as C-trivial sets. Chaitin showed they coincide with the computable sets. He also showed that the K-trivials are computable in the halting problem; sets with this property form the class of $\Delta _{2}^{0}$ sets in the arithmetical hierarchy.
Robert M. Solovay was the first to construct a noncomputable K-trivial set. The construction of a computably enumerable such set was attempted by Calude and Coles,[4] and there were further unpublished constructions by Kummer of a K-trivial set and by Muchnik junior of a set that is low for K.
Developments 1999–2008
In the context of computability theory, a cost function is a computable function
$c:\mathbb {N} \times \mathbb {N} \to \mathbb {Q} ^{\geq 0}.$
For a computable approximation $\langle A_{s}\rangle $ of $\Delta _{2}^{0}$ set A, such a function measures the cost c(n,s) of changing the approximation to A(n) at stage s. The first cost function construction was due to Kučera and Terwijn.[5] They built a computably enumerable set that is low for Martin-Löf-randomness but not computable. Their cost function was adaptive, in that the definition of the cost function depends on the computable approximation of the $\Delta _{2}^{0}$ set being built.
A cost function construction of a K-trivial computably enumerable noncomputable set first appeared in Downey et al.[6]
We say a $\Delta _{2}^{0}$ set A obeys a cost function c if there exists a computable approximation of A, $\langle A_{s}:s\in \omega \rangle $, such that the total cost of all changes is finite: $S=\Sigma _{x,s}c(x,s)[x<s\wedge x{\text{ is the least number such that }}A_{s-1}(x)\neq A_{s}(x)]<\infty .$
K-trivial sets are characterized[7] by obedience to the Standard cost function, defined by
$c_{K}(x,s)=\Sigma _{x<y\leq s}2^{-K_{s}(x)}$ where $K_{s}(x)=\min\{|\sigma |:\mathbb {U} _{s}(\sigma )=x\}$
and $\mathbb {U} _{s}$ is the s-th step in a computable approximation of a fixed universal prefix-free machine $\mathbb {U} $.
Sketch of the construction of a non-computable K-trivial set
In fact the set can be made promptly simple. The idea is to meet the prompt simplicity requirements,
$PS_{e}:|W_{e}|=\infty \Rightarrow \exists s\exists x[x\in W_{e,s}\backslash W_{e,s-1}\wedge x\in A_{s}]$
as well as to keep the costs low. We need the cost function to satisfy the limit condition
$\lim _{x}\sup _{s>x}c(x,s)=0$
namely the supremum over stages of the cost for x goes to 0 as x increases. For instance, the standard cost function has this property. The construction essentially waits until the cost is low before putting numbers into $A$ to meet the promptly simple requirements. We define a computable enumeration $\langle A_{s}:s\in \omega \rangle $ such that
$A_{0}=\emptyset $. At stage s> 0 , for each e < s, if $PS_{e}$ has not been met yet and there exists x ≥ 2e such that $x\in W_{e,s}\backslash W_{e,s-1}$ and $c(x,s)\leq 2^{-e}$, then we put x into $A_{s}$ and declare that $PS_{e}$ is met. End of construction.
To verify that the construction works, note first that A obeys the cost function since at most one number enters A for the sake of each requirement. The sum S is therefore at most
$\Sigma _{e}2^{-e}<\infty .$
Secondly, each requirement is met: if $W_{e}$ is infinite, by the fact that the cost function satisfies the limit condition, some number will eventually be enumerated into A to meet the requirement.
Equivalent characterizations
K-triviality turns out to coincide with some computational lowness notions, saying that a set is close to computable. The following notions capture the same class of sets.[7]
Lowness for K
We say that A is low for K if there is b ∈ $\mathbb {N} $ such that
$\forall nK^{A}(n)+b\geq K(n).$
Here $K^{A}$ is prefix-free Kolmogorov complexity relative to oracle $A$.
Lowness for Martin-Löf-randomness
A is low for Martin-Löf-randomness[8] if whenever Z is Martin-Löf random, it is already Martin-Löf random relative to A.
Base for Martin-Löf-randomness
A is a base for Martin-Löf-randomness if A is Turing reducible to Z for some set Z that is Martin-Löf random relative to A.[7]
More equivalent characterizations of K-triviality have been studied, such as:
1. Lowness for weakly-2-randomness;
2. Lowness for difference-left-c.e. reals (notice here no randomness is mentioned).
Developments after 2008
From 2009 on, concepts from analysis entered the stage. This helped solving some notorious problems.
One says that a set Y is a positive density point if every effectively closed class containing Y has positive lower Lebesgue density at Y. Bienvenu, Hölzl, Miller, and Nies[9] showed that a Martin-Löf random set is Turing incomplete iff it is a positive density point. Day and Miller[10] used this for an affirmative answer to the ML-cupping problem:[11] A is K-trivial iff for every Martin-Löf random set Z such that A⊕Z computes the halting problem, Z by itself already computes the halting problem.
One says that a set Y is a density-one point if every effectively closed class containing Y has Lebesgue density 1 at Y. Any Martin-Löf random set that is not a density-one point computes every K-trivial set, by Bienvenu et al.[12] Day and Miller showed that there is a Martin-Löf random set which is a positive density point but not a density-one point. Thus there is an incomplete Martin-Löf random set which computes every K-trivial set. This affirmatively answered the covering problem first asked by Stephan and then published by Miller and Nies.[13] For a summary see L. Bienvenu, A. Day, N. Greenberg, A. Kucera, J. Miller, A. Nies, and D. Turetsky.[14]
Variants of K-triviality have been studied:
• Schnorr trivial sets where the machines have domain with computable measure.
• strongly jump traceable sets, a lowness property of sets far inside K-triviality.
References
1. A. Nies (2009). Computability and Randomness, Oxford Science Publications, ISBN 978-0199230761
2. Downey, Rodney G., Hirschfeldt, Denis R. (2010), "Algorithmic Randomness and Complexity", ISBN 978-0-387-68441-3
3. Gregory J. Chaitin (1976), "Information-Theoretic Characterizations of Recursive Infinite Strings", Theoretical Computer Science Volume 2, Issue 1, June 1976, Pages 45–48
4. Cristian Calude, Richard J. Coles, Program-Size Complexity of Initial Segments and Domination Reducibility, (1999), proceeding of: Jewels are Forever, Contributions on Theoretical Computer Science in Honor of Arto Salomaa
5. Antonin Kučera and Sebastiaan A. Terwijn (1999), "Lowness for the Class of Random Sets", The Journal of Symbolic Logic Vol. 64, No. 4 (Dec., 1999), pp. 1396–1402
6. Rod G. Downey, Denis R. Hirschfeldt, Andr ́e Nies, Frank Stephan, "Trivial Reals", Electronic Notes in Theoretical Computer Science 66 No. 1 (2002), URL: "Elsevier.nl - de pagina kan niet worden weergegeven". Archived from the original on 2005-10-03. Retrieved 2014-01-03.
7. André Nies, (2005), "Lowness properties and randomness", Advances in Mathematics, Volume 197, Issue 1, 20 October 2005, Pages 274–305
8. Antonin Kučera and Sebastiaan A. Terwijn (1999), "Lowness for the Class of Random Sets", The Journal of Symbolic Logic, Vol. 64, No. 4 (Dec., 1999), pp. 1396–1402
9. Laurent Bienvenu, Rupert Hölzl, Joseph S. Miller, and André Nies, (2012), "The Denjoy alternative for computable functions", Proceedings of the 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012), volume 14 of Leibniz International Proceedings in Informatics, pages 543–554. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2012.
10. J. Miller, A. Day. (2012) "Cupping with random sets", To appear in the Proceedings of the American Mathematical Society
11. Miller and Nies, Randomness and computability: Open questions. Bull. Symb. Logic. 12 no 3 (2006) 390-410
12. Bienvenu, Greenberg, Kucera, Nies and Turetsky, "K-Triviality, Oberwolfach Randomness, and Differentiability", Mathematisches Forschungsinstitut Oberwolfach, Oberwolfach Preprints (OWP), ISSN 1864-7596
13. Miller and Nies, Randomness and computability: Open questions. Bull. Symb. Logic. 12 no 3 (2006) 390–410
14. Computing K-trivial sets by incomplete random sets. Bull. Symbolic Logic. 20, March 2014, pp 80-90.
Review: Who's Counting, by John Allen Paulos
By Elliott Baxby. Posted January 24, 2023
We asked guest author Elliott Baxby to take a look at John Allen Paulos' latest book, Who's Counting.
Mathematics is an increasingly complex subject, and we are often taught it in an abstract manner. John Allen Paulos delves into the hidden mathematics within everyday life, and illustrates how it permeates everything from politics to pop culture – for example, how game show hosts use mathematics for puzzles like the classic Monty Hall problem.
The book is a collection of essays from Paulos' ABC News column together with some original new content written for the book, on a huge range of topics from card shuffling and the butterfly effect to error correcting codes and COVID, and even the Bible code. As it's a collection of separate columns, it doesn't always flow fluently – I did find myself losing focus on some of the topics covered, particularly ones that didn't interest me as much. This was mainly down to the content though – the writing style is extremely accessible and at times witty.
The book included some interesting puzzles and questions, which were challenging and engaging, and included solutions to each problem – very helpful for a Saturday night maths challenge! I even showed some to my friends, who at times were truly puzzled. I loved the idea of puzzles being a means of sneaking cleverly designed mathematical problems onto TV game shows. It goes to show maths is everywhere!
I enjoyed the sections on probability and logic as these are topics I'm particularly interested in. One chapter also explored the constant $e$, where it came from and where else it pops up – a very interesting read. It does deserve more attention, as π seems to be the main mathematical constant you hear about, and I appreciated seeing $e$ being explored in more depth.
This book would suit anyone who seeks to see a different side of mathematics – which we aren't often taught in school – and how it manifests itself in politics and the world around us. That said, it would be better for someone with an A-level mathematics background, as some of the topics could be challenging for a less experienced reader.
It's mostly enjoyable and has a good depth of knowledge, including questions to test your mind. While I didn't find all of it completely engaging, there are definitely some points made in the book that I'll refer back to in the future!
Recurring decimals and 1/7
By David Benjamin. Posted January 9, 2023
This is a guest post by David Benjamin.
Rational numbers, when written in decimal, either have a terminating string of digits, like $\frac{3}{8}=0.375$, or produce an infinite repeating string: one well-known example is $\frac{1}{7}=0.142857142857142857…$, and for a full list of reciprocals and their decimal strings, the Aperiodical's own Christian Lawson-Perfect has built a website which generates a full list.
I've collected some interesting observations about the patterns generated by the cycles of recurring decimals, and in particular several relating to $\frac{1}{7}$.
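Before getting to those observations, here is a short Python sketch (my own addition, not from the original post) that recovers the repeating block of $1/d$ by tracking remainders in the long division; the helper name `repetend` is my own.

```python
def repetend(d):
    """Return the digits of the recurring cycle of 1/d, for d coprime to 10.

    Long division: the cycle begins where a remainder first repeats."""
    digits, r, seen = [], 1, {}
    while r not in seen:
        seen[r] = len(digits)    # remember where this remainder occurred
        r *= 10
        digits.append(r // d)    # next decimal digit
        r %= d
    return digits[seen[r]:]

# repetend(7) gives the familiar six-digit cycle 142857
```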
What does DALL·E 'think' mathematics and a mathematician looks like?
By Tom Briggs. Posted August 10, 2022
DALL·E is an Artificial Intelligence (AI) system that has been designed to generate new images given a text prompt. It's very much like doing a Google image search with one very important difference: DALL·E doesn't try to find existing images to match your query, but creates a handful of new ones that it hopes will fit the bill.
Sequences in the classroom
By David Benjamin. Posted August 8, 2022
Guest author David Benjamin shares some of his favourite ways to use sequences in a teaching context.
As a maths teacher, I've found that sequences are a great way to engage and inspire mathematical reasoning. I thought I'd share some examples of sequences, and sequence-related activities, I've used with success in the past.
John Conway and his fruitful fractions
By David Benjamin. Posted March 25, 2022
Following on from the series of 'Pascal's Triangle and its Secrets' posts, guest author David Benjamin shares another delightful piece of mathematics – this time relating to prime numbers.
At the time of writing the largest known prime number has $24862048$ digits. The number of digits does not reflect the true size of this prime but if we were to type it out at Times New Roman font size 12, it would reach approximately $51.5$ km, or about $32$ miles. Astonishing!
Patrick Laroche from Ocala, Florida discovered this Mersenne prime on December 7, 2018. I was surprised to discover that its exponent $82589933$ is the length of the hypotenuse of a primitive Pythagorean triple, where $82589933^{2} = 30120165^{2} + 76901708^{2}$, as indeed are 8 of the exponents of those currently ranked from 1 to 10.
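Both of these claims are easy to check computationally. The short Python snippet below (my own addition, not part of the original post) verifies the digit count of $2^{82589933}-1$ without ever computing the number itself, and confirms that the stated Pythagorean identity holds and that the triple is primitive.

```python
from math import gcd, log10

p = 82589933                       # exponent of the Laroche Mersenne prime

# 2^p - 1 has floor(p * log10(2)) + 1 decimal digits,
# since 2^p is never an exact power of 10
digits = int(p * log10(2)) + 1

a, b = 30120165, 76901708          # the claimed legs of the triple
is_triple = a * a + b * b == p * p
is_primitive = gcd(a, gcd(b, p)) == 1
```

Running this confirms the 24,862,048-digit count quoted above.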
The Greek mathematician Euclid of Alexandria ($\sim$325 BC-265 BC) was arguably the first to prove that there are an infinite number of primes – and since then, people have been searching for new ones. Some do it for kudos, for the prize money, to test the power of computers and the need to find more of the large primes used to help protect the massive amount of data which is being moved around the internet.
Mersenne primes, named after the French monk Marin Mersenne, are of the form $2^{p} -1$, where the exponent $p$ is also prime. Mersenne primes are easier to test for primality, which is one reason we find so many large ones (all but one of the top ten known primes are Mersenne). When Mersenne primes are converted to binary they become a string of $1$s, which makes them suitable for computer algorithms and an excellent starting point for any search.
Marin Mersenne
Since generally testing numbers for primality is slow, some have tried to find methods to produce primes using a formula. Euler's quadratic polynomial $n^2+n+41$ produces this set of $40$ primes for $n = 0$ to $39$. When $n=40$, the polynomial produces the square number $1681$. Other prime-generating polynomials are listed in this Wolfram Mathworld entry.
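A quick check of Euler's polynomial (my own illustration, not from the original post) confirms the forty primes and the square value at $n=40$:

```python
def is_prime(n):
    """Trial division; more than enough for the small values produced here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def euler(n):
    return n * n + n + 41

# prime for every n from 0 to 39, but euler(40) = 1681 = 41^2
```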
The French mathematician Lejeune Dirichlet proved that the linear polynomial $a+nb$ will produce an infinite set of primes if $a$ and $b$ are coprime for $n=0,1,2,3,4,…$. Then again, it also produces an infinite number of composite numbers! However, this gem: $224584605939537911 + 1813569659748930n$ produces 27 consecutive primes for $n=0$ to $n=26$ – and of course, all the primes are in arithmetic progression.
14 fruitful fractions
The primes are unpredictable, and become less common as they get larger. Consequently there is no formula that will generate all the prime numbers. However, there is a finite sequence of fractions, that – given an infinite amount of time – would generate all the primes, and in sequential order.
They are the fruitful fractions, created by the brilliant Liverpool-born mathematician, John Horton Conway (1937–2020) who, until his untimely death from complications related to COVID-19, was the John von Neumann Emeritus Professor in Applied and Computational Mathematics at Princeton University, New Jersey, USA.
John Horton Conway (Photo: Denise Applewhite, Office of Communications)
The fruitful fractions are
$\frac{17}{91}$ $\frac{78}{85}$ $\frac{19}{51}$ $\frac{23}{38}$ $\frac{29}{33}$ $\frac{77}{29}$ $\frac{95}{23}$ $\frac{77}{19}$ $\frac{1}{17}$ $\frac{11}{13}$ $\frac{13}{11}$ $\frac{15}{14}$ $\frac{15}{2}$ $\frac{55}{1}$
The first time I encountered this set of fractions was in the wonderful book, The Book of Numbers, by Conway and Guy. I was so intrigued as to how Conway came up with his idea that I emailed him to ask. I was delighted to receive an outline of an explanation and even a second set of fractions, neither of which I can now find – it was 1996 and pre-cloud storage! But no worries… Conway explains everything in this lecture, which also demonstrates his passion for mathematics and his ability to express his ideas in a relaxed and humorous way, even when he searches for an error in his proof at around 26 minutes in. The lecture also includes an introduction to Conway's computer language, FRACTRAN, which includes the statement:
'It should now be obvious to you that you can write a one line fraction program that does almost anything, or one and a half lines if you want to be precise'.
Using the fractions to find prime numbers
Here's how the fractions are used to generate primes.
Start with the number $2$
Multiply by each of the fourteen fractions until you find one for which the product is an integer
Starting with this new integer, continue multiplying through the fractions until another integer is produced. (If this process reaches fraction $N=\frac{55}{1}$, the integer's product with N is guaranteed to be another integer as N has a denominator of $1$; the process continues with this new integer being multiplied by fraction A)
Continue multiplying through the set to create more integers
When the integer is a power of $2$, its exponent will be a prime number.
The 19 steps needed to produce the first prime number are:
$2 \overset{ \times M}{\rightarrow} 15 \overset{ \times N}\rightarrow 825\overset{ \times E} \rightarrow 725 \overset{ \times F}\rightarrow 1925\overset{ \times K} \rightarrow 2275 \overset{ \times A}\rightarrow 425 \overset{ \times B}\rightarrow 390 \overset{ \times J}\rightarrow 330 \overset{ \times E}\rightarrow 290 \overset{ \times F}\rightarrow 770 \overset{ \times K}\rightarrow 910\overset{ \times A} \rightarrow 170\overset{ \times B} \rightarrow 156\overset{ \times J} \rightarrow 132\overset{ \times E} \rightarrow 116 \overset{ \times F}\rightarrow 308\overset{ \times K} \rightarrow 364\overset{ \times A} \rightarrow 68 \overset{ \times I}\rightarrow 4 \equiv2^{2}$
The number of steps needed to produce the first 7 primes are shown in the table below:
Prime: 2, 3, 5, 7, 11, 13, 17
Steps: 19, 69, 281, 710, 2375, 3893, 8102
And here is the start and end of the sequence of fractions used to produce the next prime number from $2^{2}$:
$4 \overset{ \times M}{\rightarrow} 30 \overset{ \times M}\rightarrow 225\overset{ \times N} \rightarrow 12375 \overset{ \times E}\rightarrow 10875 \rightarrow \cdots \rightarrow 232 \overset{ \times F}{\rightarrow} 616 \overset{ \times K}\rightarrow 728\overset{ \times A} \rightarrow 136 \overset{ \times I}\rightarrow 8\equiv2^{3}$
The steps needed for the first 34 primes are given as OEIS A007547 and the first 8102 products in the B-list for A007542.
The successive primes are produced almost like magic – but the number of multiplications needed to produce each new prime becomes larger and larger, and so the method, though wonderfully inventive, is not at all efficient.
Edit: Since this article was first published, the exponent $82589933$ of the Laroche prime has been accepted as the next term in the sequence http://oeis.org/A112634
Further Reading on John Conway
Listen to this Numberphile interview with Conway on how he invented the Game of Life
Play the Game of Life
Aperiodical posts about John Conway
Conway's publications on Scholia
Conway and knots: 'I proved this when I was at high school in England'
Graduate Student Solves Decades-Old Conway Knot Problem, in Quanta
A Mathematician's Guide to Wordle
By Ali Lloyd. Posted February 1, 2022
We invited mathematician and wordplay fan Ali Lloyd to share his thoughts on hit internet word game phenomenon Wordle. If you're not familiar with the game, we recommend you go and have a play first.
When I first saw Wordle I said what I saw many other people subsequently say: "Oh, so it's a bit like Mastermind but with words? That's a neat idea".
Al-Salam–Ismail polynomials
In mathematics, the Al-Salam–Ismail polynomials are a family of orthogonal polynomials introduced by Al-Salam and Ismail (1983).
References
• Al-Salam, Waleed A.; Ismail, Mourad E. H. (1983), "Orthogonal polynomials associated with the Rogers–Ramanujan continued fraction", Pacific Journal of Mathematics, 104 (2): 269–283, doi:10.2140/pjm.1983.104.269, ISSN 0030-8730, MR 0684290
Coxeter element
In mathematics, the Coxeter number h is the order of a Coxeter element of an irreducible Coxeter group. It is named after H.S.M. Coxeter.[1]
Not to be confused with Longest element of a Coxeter group.
Definitions
Note that this article assumes a finite Coxeter group. For infinite Coxeter groups, there are multiple conjugacy classes of Coxeter elements, and they have infinite order.
There are many different ways to define the Coxeter number h of an irreducible root system.
A Coxeter element is a product of all simple reflections. The product depends on the order in which they are taken, but different orderings produce conjugate elements, which have the same order.
• The Coxeter number is the order of any Coxeter element.
• The Coxeter number is 2m/n, where n is the rank, and m is the number of reflections. In the crystallographic case, m is half the number of roots; and 2m+n is the dimension of the corresponding semisimple Lie algebra.
• If the highest root is Σmiαi for simple roots αi, then the Coxeter number is 1 + Σmi.
• The Coxeter number is the highest degree of a fundamental invariant of the Coxeter group acting on polynomials.
The Coxeter number for each Dynkin type is given in the following table:
(The Coxeter diagram and Dynkin diagram columns of the original table are images and are omitted here.)

Coxeter group | Reflections m = nh/2[2] | Coxeter number h | Dual Coxeter number | Degrees of fundamental invariants
An [3,3,...,3] | n(n+1)/2 | n + 1 | n + 1 | 2, 3, 4, ..., n + 1
Bn [4,3,...,3] | n² | 2n | 2n − 1 | 2, 4, 6, ..., 2n
Cn | n² | 2n | n + 1 | 2, 4, 6, ..., 2n
Dn [3,3,...,3^{1,1}] | n(n−1) | 2n − 2 | 2n − 2 | n; 2, 4, 6, ..., 2n − 2
E6 [3^{2,2,1}] | 36 | 12 | 12 | 2, 5, 6, 8, 9, 12
E7 [3^{3,2,1}] | 63 | 18 | 18 | 2, 6, 8, 10, 12, 14, 18
E8 [3^{4,2,1}] | 120 | 30 | 30 | 2, 8, 12, 14, 18, 20, 24, 30
F4 [3,4,3] | 24 | 12 | 9 | 2, 6, 8, 12
G2 [6] | 6 | 6 | 4 | 2, 6
H3 [5,3] | 15 | 10 | - | 2, 6, 10
H4 [5,3,3] | 60 | 30 | - | 2, 12, 20, 30
I2(p) [p] | p | p | - | 2, p
The invariants of the Coxeter group acting on polynomials form a polynomial algebra whose generators are the fundamental invariants; their degrees are given in the table above. Notice that if m is a degree of a fundamental invariant then so is h + 2 − m.
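These equivalent descriptions can be checked numerically against a few rows of the table above. A sketch in Python (data tuples transcribed from the table; the identity that the degrees sum to m + n follows from each degree being one more than an exponent, with the exponents summing to m):

```python
# (rank n, reflections m, Coxeter number h, degrees of fundamental invariants)
data = {
    "A4": (4, 10, 5, [2, 3, 4, 5]),
    "F4": (4, 24, 12, [2, 6, 8, 12]),
    "E8": (8, 120, 30, [2, 8, 12, 14, 18, 20, 24, 30]),
}

for name, (n, m, h, degrees) in data.items():
    assert 2 * m == n * h                   # h = 2m/n
    assert max(degrees) == h                # h is the highest degree
    assert sum(degrees) == m + n            # degrees sum to m + n
    # if m' is a degree of a fundamental invariant then so is h + 2 - m'
    assert all(h + 2 - d in degrees for d in degrees)
print("all checks pass")
```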
The eigenvalues of a Coxeter element are the numbers e2πi(m − 1)/h as m runs through the degrees of the fundamental invariants. Since this starts with m = 2, these include the primitive hth root of unity, ζh = e2πi/h, which is important in the Coxeter plane, below.
The dual Coxeter number is 1 plus the sum of the coefficients of simple roots in the highest short root of the dual root system.
Group order
There are relations between the order g of the Coxeter group and the Coxeter number h:[3]
• [p]: 2h/g_p = 1
• [p,q]: 8/g_{p,q} = 2/p + 2/q − 1
• [p,q,r]: 64h/g_{p,q,r} = 12 − p − 2q − r + 4/p + 4/r
• [p,q,r,s]: 16/g_{p,q,r,s} = 8/g_{p,q,r} + 8/g_{q,r,s} + 2/(ps) − 1/p − 1/q − 1/r − 1/s + 1
• ...
For example, [3,3,5] has h=30, so 64*30/g = 12 - 3 - 6 - 5 + 4/3 + 4/5 = 2/15, so g = 1920*15/2 = 960*15 = 14400.
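The bracket relations are easy to mechanize. A sketch in Python using exact rational arithmetic for the [p,q,r] relation, checked against the worked example above and the known orders of A4, B4 and F4:

```python
from fractions import Fraction

def order_pqr(p, q, r, h):
    """Order g of the Coxeter group [p,q,r] from 64h/g = 12 - p - 2q - r + 4/p + 4/r."""
    rhs = Fraction(12) - p - 2 * q - r + Fraction(4, p) + Fraction(4, r)
    return Fraction(64 * h) / rhs

print(order_pqr(3, 3, 5, 30))  # 14400, as computed above
print(order_pqr(3, 3, 3, 5))   # 120  = |A4| = 5!
print(order_pqr(4, 3, 3, 8))   # 384  = |B4| = 2^4 * 4!
print(order_pqr(3, 4, 3, 12))  # 1152 = |F4|
```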
Coxeter elements
Distinct Coxeter elements correspond to orientations of the Coxeter diagram (i.e. to Dynkin quivers): the simple reflections corresponding to source vertices are written first, downstream vertices later, and sinks last. (The choice of order among non-adjacent vertices is irrelevant, since they correspond to commuting reflections.) A special choice is the alternating orientation, in which the simple reflections are partitioned into two sets of non-adjacent vertices, and all edges are oriented from the first to the second set.[4] The alternating orientation produces a special Coxeter element w satisfying $w^{h/2}=w_{0}$, where w0 is the longest element, provided the Coxeter number h is even.
For $A_{n-1}\cong S_{n}$, the symmetric group on n elements, Coxeter elements are certain n-cycles: the product of simple reflections $(1,2)(2,3)\cdots (n{-}1\,n)$ is the Coxeter element $(1,2,3,\dots ,n)$.[5] For n even, the alternating orientation Coxeter element is:
$(1,2)(3,4)\cdots (2,3)(4,5)\cdots =(2,4,6,\ldots ,n{-}2,n,n{-}1,n{-}3,\ldots ,5,3,1).$
There are $2^{n-2}$ distinct Coxeter elements among the $(n{-}1)!$ n-cycles.
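The symmetric-group case is easy to check directly. A sketch in Python composing the adjacent transpositions (applying the rightmost factor first, and representing permutations 0-indexed as lists) and confirming the product is an n-cycle of order n:

```python
from functools import reduce

def transposition(n, i):
    """The simple reflection (i, i+1) in S_n, stored 0-indexed."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return p

def compose(p, q):
    """(p o q)(x) = p(q(x))."""
    return [p[q[x]] for x in range(len(p))]

def order(p):
    ident, q, k = list(range(len(p))), p, 1
    while q != ident:
        q, k = compose(q, p), k + 1
    return k

n = 5
c = reduce(compose, [transposition(n, i) for i in range(1, n)])
print(c, order(c))  # [1, 2, 3, 4, 0] 5 -- the n-cycle (1,2,3,...,n)
```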
The dihedral group Dihp is generated by two reflections that form an angle of $2\pi /2p$, and thus the two Coxeter elements are their product in either order, which is a rotation by $\pm 2\pi /p$.
Coxeter plane
For a given Coxeter element w, there is a unique plane P on which w acts by rotation by 2π/h. This is called the Coxeter plane[6] and is the plane on which w has eigenvalues e2πi/h and e−2πi/h = e2πi(h−1)/h.[7] This plane was first systematically studied in (Coxeter 1948),[8] and subsequently used in (Steinberg 1959) to provide uniform proofs about properties of Coxeter elements.[8]
The Coxeter plane is often used to draw diagrams of higher-dimensional polytopes and root systems – the vertices and edges of the polytope, or roots (and some edges connecting these) are orthogonally projected onto the Coxeter plane, yielding a Petrie polygon with h-fold rotational symmetry.[9] For root systems, no root maps to zero, corresponding to the Coxeter element not fixing any root or rather axis (not having eigenvalue 1 or −1), so the projections of orbits under w form h-fold circular arrangements[9] and there is an empty center, as in the E8 diagram at above right. For polytopes, a vertex may map to zero, as depicted below. Projections onto the Coxeter plane are depicted below for the Platonic solids.
In three dimensions, the symmetry of a regular polyhedron, {p,q}, with one directed Petrie polygon marked, defined as a composite of 3 reflections, has rotoinversion symmetry Sh, [2+,h+], order h. Adding a mirror, the symmetry can be doubled to antiprismatic symmetry, Dhd, [2+,h], order 2h. In orthogonal 2D projection, this becomes dihedral symmetry, Dihh, [h], order 2h.
Coxeter group | A3 (Td) | B3 (Oh) | H3 (Ih)
Regular polyhedron | {3,3} | {4,3}, {3,4} | {5,3}, {3,5}
Symmetry | S4, [2+,4+], (2×); D2d, [2+,4], (2*2) | S6, [2+,6+], (3×); D3d, [2+,6], (2*3) | S10, [2+,10+], (5×); D5d, [2+,10], (2*5)
Coxeter plane symmetry | Dih4, [4], (*4•) | Dih6, [6], (*6•) | Dih10, [10], (*10•)
Petrie polygons of the Platonic solids, showing 4-fold, 6-fold, and 10-fold symmetry.
In four dimensions, the symmetry of a regular polychoron, {p,q,r}, with one directed Petrie polygon marked is a double rotation, defined as a composite of 4 reflections, with symmetry +1/h[Ch×Ch][10] (John H. Conway), (C2h/C1;C2h/C1) (#1', Patrick du Val (1964)[11]), order h.
Coxeter group | A4 | B4 | F4 | H4
Regular polychoron | {3,3,3} | {3,3,4}, {4,3,3} | {3,4,3} | {5,3,3}, {3,3,5}
Symmetry | +1/5[C5×C5] | +1/8[C8×C8] | +1/12[C12×C12] | +1/30[C30×C30]
Coxeter plane symmetry | Dih5, [5], (*5•) | Dih8, [8], (*8•) | Dih12, [12], (*12•) | Dih30, [30], (*30•)
Petrie polygons of the regular 4D solids, showing 5-fold, 8-fold, 12-fold and 30-fold symmetry.
In five dimensions, the symmetry of a regular 5-polytope, {p,q,r,s}, with one directed Petrie polygon marked, is represented by the composite of 5 reflections.
Coxeter group | A5 | B5 | D5
Regular polyteron | {3,3,3,3} | {3,3,3,4}, {4,3,3,3} | h{4,3,3,3}
Coxeter plane symmetry | Dih6, [6], (*6•) | Dih10, [10], (*10•) | Dih8, [8], (*8•)
In dimensions 6 to 8 there are 3 exceptional Coxeter groups; one uniform polytope from each dimension represents the roots of the exceptional Lie groups En. The Coxeter numbers are 12, 18 and 30 respectively.
En groups
Coxeter group | E6 | E7 | E8
Graph | 1_{22} | 2_{31} | 4_{21}
Coxeter plane symmetry | Dih12, [12], (*12•) | Dih18, [18], (*18•) | Dih30, [30], (*30•)
See also
• Longest element of a Coxeter group
Notes
1. Coxeter, Harold Scott Macdonald; Chandler Davis; Erlich W. Ellers (2006), The Coxeter Legacy: Reflections and Projections, AMS Bookstore, p. 112, ISBN 978-0-8218-3722-1
2. Coxeter, Regular polytopes, §12.6 The number of reflections, equation 12.61
3. Regular polytopes, p. 233
4. George Lusztig, Introduction to Quantum Groups, Birkhauser (2010)
5. (Humphreys 1992, p. 75)
6. Coxeter Planes Archived 2018-02-10 at the Wayback Machine and More Coxeter Planes Archived 2017-08-21 at the Wayback Machine John Stembridge
7. (Humphreys 1992, Section 3.17, "Action on a Plane", pp. 76–78)
8. (Reading 2010, p. 2)
9. (Stembridge 2007)
10. On Quaternions and Octonions, 2003, John Horton Conway and Derek A. Smith ISBN 978-1-56881-134-5
11. Patrick Du Val, Homographies, quaternions and rotations, Oxford Mathematical Monographs, Clarendon Press, Oxford, 1964.
References
• Coxeter, H. S. M. (1948), Regular Polytopes, Methuen and Co.
• Steinberg, R. (June 1959), "Finite Reflection Groups", Transactions of the American Mathematical Society, 91 (3): 493–504, doi:10.1090/S0002-9947-1959-0106428-2, ISSN 0002-9947, JSTOR 1993261
• Hiller, Howard Geometry of Coxeter groups. Research Notes in Mathematics, 54. Pitman (Advanced Publishing Program), Boston, Mass.-London, 1982. iv+213 pp. ISBN 0-273-08517-4
• Humphreys, James E. (1992), Reflection Groups and Coxeter Groups, Cambridge University Press, pp. 74–76 (Section 3.16, Coxeter Elements), ISBN 978-0-521-43613-7
• Stembridge, John (April 9, 2007), Coxeter Planes, archived from the original on February 10, 2018, retrieved April 21, 2010
• Stekolshchik, R. (2008), Notes on Coxeter Transformations and the McKay Correspondence, Springer Monographs in Mathematics, arXiv:math/0510216, doi:10.1007/978-3-540-77399-3, ISBN 978-3-540-77398-6, S2CID 117958873
• Reading, Nathan (2010), "Noncrossing Partitions, Clusters and the Coxeter Plane", Séminaire Lotharingien de Combinatoire, B63b: 32
• Bernšteĭn, I. N.; Gelʹfand, I. M.; Ponomarev, V. A., "Coxeter functors, and Gabriel's theorem" (Russian), Uspekhi Mat. Nauk 28 (1973), no. 2(170), 19–33. Translation on Bernstein's website.
Algebraic Functions
By Dominic Milioto
Section 0: Preliminaries
Section 1: Plotting Algebraic Functions
Section 2: An Improved Plotting Method
Section 3: Applying Laurent's Theorem to Algebraic Functions
Section 4: Applying the Residue Theorem to Algebraic Functions
Section 11: Riemann Surfaces
Section 12: Evaluating the Indeterminate Form
Puiseux Series
Section 6: Puiseux Series (Background)
Section 7: Puiseux Series (Examples)
Section 8: Designing doPuiseux
Section 9: Finite power series
Section 10: Region of convergence of algebraic power series
Section 13: Analyzing the Annular Laurent Integrals
Section 14: Analyzing the Annular Laurent Puiseux Series
Mathematica Code
All algebraic functions can be computed fast
An algorithm for determining the radius of convergence of algebraic power series
Mapping to polygonal domains
Schwarz–Christoffel Transformation
Mapping double cover to torus.nb download
Unsolved problems
Analytic continuations
$$ \newcommand{\bint}{\displaystyle{\int\hspace{-10.4pt}\Large\mathit{8}}} \newcommand{\res}{\displaystyle{\text{Res}}} \newcommand{\wvalx}{\underbrace{z^{\lambda_4}(c_4+w_5)}_{w_4}} \newcommand{wvalxx}{\underbrace{z^{\lambda_3}(c_3+\wvalx)}_{w_3}} \newcommand{wvalxxx}{\underbrace{z^{\lambda_2}\{c_2+\wvalxx\}}_{w_2}} \newcommand{wvalxxxx}{z^{\lambda_1}\big(c_1+\wvalxxx\big)} $$
The Newton Polygon Method (expansion about the origin):
The reader is advised to read part 1 of this section for background information pertaining to the method.
We begin by first normalizing, if necessary, the function (see Part 1). We then represent a branch as $$ w(z)=c_1z^{\lambda_1}+c_2 z^{\lambda_1+\lambda_2}+c_3 z^{\lambda_1+\lambda_2+\lambda_3}+\cdots $$ in which $\lambda_1\geq 0$ and $\lambda_i>0$ for $i>1$ and $c_i \in \mathbb{C}$. We find the exponents $\lambda_i$ and coefficients $c_i$ both by constructing a convex hull of points for the (normalized) function and by regular substitution. The points for the hull construction are obtained as follows: we plot the set of points $\left(i,\textbf{ord}(a_i)\right)$ and then extract from this set the lower Newton leg described in the previous section, with one additional requirement: the lower leg is the lower part of the convex hull with negative or zero slope for the first iteration, and then only negative slopes for successive iterations.
First Function:
Consider again, the (normalized) function $$ F(z,w)= (z+z^2)+(1+z) w+w^2 $$ The lower Newton leg is then the two red segments spanning the points $(0,1),(1,0),(2,0)$ shown in Figure 1. And a critical component of the algorithm is the necessity of working with a normalized version of the function: A normalized function will always have the point $(n,0)$ on the convex hull and so guaranteeing that we can always find a lower Newton Leg for the first iteration, that is, the lower portion of the convex hull with negative or zero slope with all remaining points above and to the right of the lower leg. In order to gain experience in identifying the lower Newton Leg of the type of convex hulls encountered in this paper, the interested reader should study several examples and if possible, review and run either the following Mathematica code or other types of available code. The code below generates random convex hulls of the type we study.
mypoints = Table[{n, RandomInteger[{0, 10}]}, {n, 0, 9}]
mypoints = Append[mypoints, {10, 0}]
thePoints = Graphics[{Red, PointSize[0.018], Point@ mypoints}];
p1 = ConvexHullMesh[mypoints,
MeshCellStyle -> {{1, All} -> Red, {0, All} -> Black}];
Show[{p1, thePoints}, Axes -> True, PlotRange -> {{0, 11}, {0, 11}}]
Figure 1: Newton Polygon for $f_0(z,w)$
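For readers without Mathematica, the lower Newton leg can also be sketched in a few lines of Python (an Andrew monotone-chain lower hull; note that, unlike the getSegmentData routine used later, this sketch keeps only hull vertices, so lattice points interior to a segment are dropped):

```python
def lower_hull(points):
    """Lower convex hull of the points, left to right (monotone chain)."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point unless it makes a strict left turn
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def newton_leg(points, first_iteration=True):
    """Hull segments with negative (or, on the first iteration, zero) slope."""
    hull = lower_hull(points)
    legs = []
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        slope = (y2 - y1) / (x2 - x1)
        if slope < 0 or (first_iteration and slope == 0):
            legs.append((slope, y1 - slope * x1))  # (slope, y-intercept)
    return legs

# F(z,w) = (z + z^2) + (1 + z)w + w^2 gives points (0,1), (1,0), (2,0):
print(newton_leg([(0, 1), (1, 0), (2, 0)]))  # [(-1.0, 1.0), (0.0, 0.0)]
```

The two returned pairs correspond to the segments $S_1$ (slope $-1$, intercept $\beta=1$) and $S_2$ (slope $0$, intercept $\beta=0$) discussed below.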
As we discussed in Part 1, we let $$ w_1(z)=z^{\lambda_1}(c_1+w_2) $$ where $w_2=c_2 z^{\lambda_2}+c_3 z^{\lambda_2+\lambda_3}+\cdots$. Our first step of course is to find $\lambda_1$, $\beta_1$ and $c_1$. See Walker1 for a derivation of the method. We obtain the possible values of $\lambda_1$ from the negative of the slopes for the lower Newton leg of the convex hull. And $\beta_i$ is the $y$-intercept of the segment. In Figure 1, we have two legs. One has slope of -1 and the other has zero slope. For the segment $S_1$ spanning $(0,1)$ to $(1,0)$, we have $\beta_1=1$. And for $S_2$ we have $\beta_1=0$. For each segment, we derive a separate power series for $w(z)$. We therefore consider two cases:
Power series for $S_1$
We first recall the recursive formula: $$ F_{i+1}=z^{-\beta_i} F_{i}\left(z,z^{\lambda_i}(c_i+w_{i+1})\right) $$ where we let $F_1(z,w_1)=F(z,w)$, the original function.
In Figure 1, we have $\beta_1=1$, $\lambda_1=1$. We then substitute $w\to w_1=z^{\lambda_1}(c_1+w_2)$ into the first recursive equation: $$ \begin{aligned} F_2(z,w_2)&=z^{-\beta_1} F_1\left(z,z^{\lambda_1}(c_1+w_2)\right)\\ &=z^{-1} F_1\left(z,z^{1}(c_1+w_2)\right)\\ &=1+c_1+w_2+(1+c_1+c_1^2) z+(1+2 c_1) w_2 z+w_2^2 z \end{aligned} $$ We now rely on one of the necessary conditions for the method: The lowest powers in $z$ alone must cancel so that we have $(1+c_1)z^0=0$ and note this is the same form for the characteristic equation for this segment, that is, $E_1(x)=1+x$ and this is precisely how the characteristic equation is derived. So we have $c_1=-1$ which gives us: $$ F_2(z,w_2)=z+(1-z)w_2+z w_2^2 $$ and the next Newton polygon shown in Figure 2.
This gives us $\beta_2=1$, $\lambda_2=1$ with characteristic equation $1+x=0$ or $c_2=-1$. We have now reached a simple segment and can begin the regular substitution discussed in Part 1 by letting the exponents increase by one over the denominator of $\lambda_2$ for some finite number of terms, say $w_3=a_3 z+a_4 z^2+\cdots+a_{11}z^{8}$. Plugging these into the next recursive formula $$ \begin{aligned} F_3(z,w_3)&=z+(1-z)w_3+zw_3^2\\ &=z+(1-z)(a_3z+a_4z^2+\cdots+a_{11}z^8)+z(a_3 z+a_4 z^2+\cdots+a_{11}z^8)^2 \end{aligned} $$ and it is now a simple matter to equate powers of $z$ to zero and compute the remaining constants $a_i$. (Recall we are working with the expression $f(z,w)=0$ so coefficients on both sides of the equation must be equal.) When we do this, we obtain $$ \begin{aligned} -a_2+a_3&=0 \\ a_2^2-a_3+a_4&=0 \\ 2a_2 a_3-a_4+a_5&=0 \\ a_3^2+2 a_2 a_4-a_5+a_6&=0 \\ 2 a_3 a_4+2 a_2 a_5-a_6+a_7 &=0 \\ a_4^2+2 a_3 a_5+2 a_2 a_6-a_7+a_8 &=0 \\ 2a_4 a_5+2 a_3 a_6+2 a_2 a_7-a_8+a_{9} &=0 \\ a_5^2+2 a_4 a_6+2 a_3 a_7+2 a_2 a_8-a_9+a_{10}&=0 \\ 2 a_5 a_6+2 a_4 a_7+2 a_3 a_8+2 a_2 a_9-a_{10}+a_{11}&=0 \\ \end{aligned} $$ and sequentially solving for $a_3$ then $a_4$ and so on we obtain $$ w=-z-z^2-z^3-2 z^4-4 z^5-9 z^6-21 z^7-51 z^8-127 z^9-323 z^{10} $$ However remember from the previous section that this expansion was for the normalized version of our original function and the power series for $f(z,w)$ is the power series for $F(z,w)$ divided by $z^2$ so we obtain $$ w_1(z)=-1-1/z-z-2 z^2-4 z^3-9 z^4-21 z^5-51 z^6-127 z^7-323 z^8 $$
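The sequential solving step is mechanical enough to automate. A sketch in Python (not the article's Mathematica code): writing $w_2=\sum_k b_k z^k$ in $F_2=z+(1-z)w_2+zw_2^2$ and equating each power of $z$ to zero gives $b_1=-1$ and $b_k=b_{k-1}-\sum_{i+j=k-1} b_i b_j$, after which the branch is assembled as $w=z(-1+w_2)$:

```python
def branch_coefficients(n):
    """Coefficients of z^1..z^n in the branch w = -z - z^2 - z^3 - 2z^4 - ..."""
    b = {1: -1}                                 # z^1 coefficient: 1 + b_1 = 0
    for k in range(2, n):
        # [w_2^2] at z^(k-1): convolution of the b's
        conv = sum(b[i] * b[k - 1 - i] for i in range(1, k - 1))
        b[k] = b[k - 1] - conv                  # z^k: b_k - b_(k-1) + conv = 0
    return [-1] + [b[k] for k in range(1, n)]   # assemble w = z*(-1 + w_2)

print(branch_coefficients(10))
# [-1, -1, -1, -2, -4, -9, -21, -51, -127, -323]
```

The output reproduces the coefficients of $z^1$ through $z^{10}$ in the series just derived.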
We now consider segment $S_2$ where we have $\lambda=0$ and $\beta=0$. It's now a simple matter to substitute these into the recursive formula to obtain: $$ \begin{aligned} F(z,w)&=z^{-\beta_1} F\left(z,z^{\lambda_1}(c_1+w_2)\right)\\ &=z^0 F(z,z^0(c_1+w_2) \\ &=(z+z^2)+(z-1)(c_1+w_2)+(c_1+w_2)^2 \\ &=(c_1+c_1^2+z+c_1 z+z^2)+(1+2c_1+z) w+ w^2\\ &=F_2(z,w_2). \end{aligned} $$
We now equate the lowest powers of $z$ to zero and obtain $c_1+c_1^2=0$ once again noting $E_2(x)=x+x^2$ and at this point we wish to focus exclusively on the characteristic equation without having to manually determine the lowest power of $z$. We then choose the non-zero roots to $E_2(x)$ as our values of $c_i$ so that we have $c_1=-1$ and obtain for the second recursion formula: $$ F_2(z,w_2)=z^2+(-1+z) w+w^2. $$ and the Newton polygon shown in Figure 2. This gives us $\beta_2=2$, $\lambda_2=2$ and a characteristic equation of $1-x=0$ or $c_2=1$. And since the segment is simple, we can use one over the denominator of $\lambda_2$ noting that $2=\frac{2}{1}$ (in lowest terms) for the remaining exponents $\lambda_i$. We can then substitute for the first ten terms, $w_3=c_3 z+c_4 z^2+\cdots+c_{10} z^{8}$ into $F_2(z,w_3)$ to obtain: $$ \begin{aligned} F_2(z,w_3)&=z^2+(-1+z)w+w^2 \\ &=z^2+(-1+z)(c_3 z+c_4z^2+\cdots+c_{10}z^{8})+(c_3 z+c_4 z^2+\cdots+c_{10}z^8)^2 \end{aligned} $$ and once again equating coefficients to zero obtain $$ w(z)=-1+z^2+z^3+2 z^4+4 z^5+9 z^6+21 z^7+51 z^8+127 z^9+323 z^{10} $$ likewise remembering that the power series for $f(z,w)$ is that of $w(z)$ divided by $z^2$, we obtain the second power series for the function as $$ w_2(z)=1-1/z^2+z+2 z^2+4 z^3+9 z^4+21 z^5+51 z^6+127 z^7+323 z^8 $$
Second Function
We now wish to work with a function in which the segments (and characteristic equations) do not become simple until several iterations of the recursive equation: $$ F(z,w)=(z^4)+(2 z^2+z^4)w+(1+z^2+z^3)w^2+(z)w^3+(1/4-z/2)w^4+(-(1/2))w^5 $$
Figure 2 shows the Newton polygon for the initial function $F_1(z,w)$. The first thing to note about the lower Newton leg is that each of its segments has three points and so has the potential for multiple roots in its associated characteristic equation.
And we will now dispense with the manual computation of the recursive equations. Readers may wish to review the earlier sections to review. Rather, we will now use getSegmentData described in the Mathematica code page. This routine will automatically compute for us the slope and intercepts for each segment of the lower Newton leg as well as the characteristic equations and their roots and multiplicities. theSegmentData returns a list of lists in the form $\{list1,list2,\cdots,listn\}$, one list for each segment on the lower Newton leg. And each segment list has the form {slopeIntercept,characteristicEquation,rootTally}. For example the following code
theFunction=w^2+2z^2 w+z^4+z^2 w^2+z w^3+1/4 w^4+z^4 w+z^3 w^2-1/2 z w^4-1/2 w^5;
theclist=CoefficientList[theFunction,w];
expList=Exponent[theclist,z,Min];
minVals=MapThread[Coefficient[#1,z,#2]&,{theclist,expList}];
{lcpsave,myPoints,pgp,sg,newtonLeg,mylines,maxx,maxy}=getPolygonSetup[theftemp,theK];
{slopeInterceptList, charEquations, theRootTally} =
getSegmentData[minVals, myPoints, lcpsave, 1];
theSegmentData=MapThread[{#1,#2,#3}&,{slopeInterceptList,charEquations,theRootTally}];
returns in theSegmentData:
{{{2,4},
1+2 x+x^2==0,
{{-1,2}}},
{{0,0},
x^2+x^4/4-x^5/2==0,
{1/6 (1+(217-12 Sqrt[327])^(1/3)+(217+12 Sqrt[327])^(1/3)),1},
{1/6-1/12 (1+I Sqrt[3]) (217-12 Sqrt[327])^(1/3)-1/12 (1-I Sqrt[3]) (217+12 Sqrt[327])^(1/3),1},
{1/6-1/12 (1-I Sqrt[3]) (217-12 Sqrt[327])^(1/3)-1/12 (1+I Sqrt[3]) (217+12 Sqrt[327])^(1/3),1}}}}
Where: $\{2,4\}$ represents $2$ as the (negative) slope and $4$ as the intercept of segment $S_1$. $1+2x+x^2=0$ is the associated characteristic equation for the segment, and $\{\{-1,2\}\}$ is the root tally, that is, $-1$ with multiplicity 2. For the horizontal segment, $S_2$, we next have $\{0,0\}$ as the slope and intercept, $x^2+x^4/4-x^5/2=0$ as its characteristic equation, and a root tally consisting of zero with multiplicity two together with the three remaining simple roots, accounting for all five roots of its characteristic equation.
And note for the first segment, we have non-zero root of multiplicity greater than one. This means that we will have to iterate the recursive equation for this segment. We therefore let $\beta_1=4$, $\lambda_1=2$ and $c_1=-1$ and substitute this into the recursive formula: $$ F_2=z^{-4}F_1(z,z^{2}(-1+w_2));\quad w_2=c_2 z^{\lambda_2}+c_3 z^{\lambda_2+\lambda_3}+\cdots $$ and omit the details of this substitution and rather show the next polygon for this iteration in Figure 3:
We execute getSegmentData with kval=2 for this iteration since this is the second polygon and we are now interested only in segments on the lower Newton leg having negative slopes. theSegmentData returns:
{{{2,4},1/4-x+x^2==0,{{1/2,2}}}}
Recall again, this means the slope is (negative) 2, the intercept is 4. The characteristic equation is next and we see the root tally as $\{1/2,2\}$. So that again we have a multiple root. We let $\lambda_2=2$, $\beta_2=4$ and $c_2=1/2$ and iterate a second recursion: $$ F_3=z^{-4}F_2(z,z^{2}(1/2+w_3));\quad w_3=c_3 z^{\lambda_3}+c_4 z^{\lambda_3+\lambda_4}+\cdots $$ and obtain a third Newton polygon shown in Figure 4 with segment data
{{{1,2},1/4+x+x^2==0,{{-(1/2),2}}}}
Letting $\lambda_3=1$, $\beta_3=2$ and $c_3=-1/2$, we iterate the recursion equation a third time: $$ F_4=z^{-2}F_3(z,z^{1}(-1/2+w_4));\quad w_4=c_4 z^{\lambda_4}+c_5 z^{\lambda_4+\lambda_5}+\cdots $$ and obtain the fourth Newton polygon for the function shown in Figure 5 with segment data:
{{{1/2,1},1/2+x^2==0,{{-(I/Sqrt[2]),1},{I/Sqrt[2],1}}}}
and taking $\lambda_4=1/2$, $\beta_4=1$ and $c_4=\frac{i}{\sqrt{2}}$, we generate $$ F_5=z^{-1}F_4\left[z,z^{1/2}\left(\frac{i}{\sqrt{2}}+w_5\right)\right];\quad w_5=c_5 z^{\lambda_5}+c_6 z^{\lambda_5+\lambda_6}+\cdots $$
We have after four recursions, reached a simple segment and can now use substitution to find the remaining terms using one over the denominator of $\lambda_4$ or $1/2$ as the exponent for the remaining exponents. Using the technique of equating powers to zero described above, we can compute for example, the first 13 terms for the power expansion of this $2$-cycle branch: $$ \begin{aligned} w_1(z)&=-1. z^2+0.5 z^4-0.5 z^5-(0. +0.707107 I) z^{11/2}+(0. +0.441942 I) z^{13/2}+0.5 z^7\\ &+(0. +0.933602 I) z^{15/2}-0.5 z^8-(0. +2.24493 I) z^{17/2}+1.375 z^9+(0. +1.0694 I) z^{19/2}\\ &-1.375 z^{10}+(0. +2.68726 I) z^{21/2} \end{aligned} $$ And using the second root, we would obtain the second conjugate series: $$ w_{2}(z)=-z^2+0.5 z^4-0.5 z^5+0.70711 I z^{11/2}-0.44194 I z^{13/2}+0.5 z^7-0.93360 I z^{15/2}+\cdots $$ And note the first three terms are the same for both conjugate series. And since each conjugate series is a single-valued branch, we could write: $$ w(z)=\begin{cases} w_1(z), & -\pi \leq \theta < \pi \\ w_2(z), & \pi \leq \theta <2\pi \end{cases} $$ so that both single-valued branches are identical up to order 5. We can visually see this similarity by viewing this function in AFRender and unselecting all rings except the first and then display only the $2$-cycle branch corresponding to these power series. Note how both branch surfaces are very close to one another appearing as a single sheet until the plot is magnified.
The characteristic equation for this leg is $x^2+1/4 x^4-1/2 x^5=0$ with the following non-zero root tally: $$ \text{rootTally: }\left\{\left( \begin{array}{cc} \frac{1}{6} \left(\sqrt[3]{217-12 \sqrt{327}}+\sqrt[3]{217+12 \sqrt{327}}+1\right) & ,1 \\ -\frac{1}{12} \sqrt[3]{217+12 \sqrt{327}} \left(1-i \sqrt{3}\right)-\frac{1}{12} \left(1+i \sqrt{3}\right) \sqrt[3]{217-12 \sqrt{327}}+\frac{1}{6} & ,1 \\ -\frac{1}{12} \sqrt[3]{217-12 \sqrt{327}} \left(1-i \sqrt{3}\right)-\frac{1}{12} \left(1+i \sqrt{3}\right) \sqrt[3]{217+12 \sqrt{327}}+\frac{1}{6} & ,1 \\ \end{array} \right)\right\} $$ Each of these roots generate after the first recursive equation, a simple segment having slope of one. After which, we can then use substitution to arrive at the first ten terms of the remaining branches for this function: $$ \begin{aligned} w_1(z)&=1.45054 +0.163939 z+0.926917 z^2-0.0717314 z^3-0.602722 z^4+0.148923 z^5+0.769661 z^6\\ &-0.328224 z^7-1.21161 z^8+0.748091 z^9+2.10653 z^{10}\\ w_2(z)&=(-0.47527+1.07374 I)-(0.581969 +0.330162 I) z+(0.536541 +0.268988 I) z^2+(0.0358657 -0.201663 I) z^3\\ &-(0.198639 -0.35995 I) z^4+(0.425538 +0.122047 I) z^5-(0.38483 +0.423919 I) z^6-(0.335888 -0.857064 I) z^7\\ &+(1.10581 -0.450021 I) z^8-(1.74905 +1.05306 I) z^9+(0.321735 +2.92673 I) z^{10}\\ w_3(z)&=(-0.47527-1.07374 I)-(0.581969 -0.330162 I) z+(0.536541 -0.268988 I) z^2+(0.0358657 +0.201663 I) z^3 \\ &-(0.198639 +0.35995 I) z^4+(0.425538 -0.122047 I) z^5-(0.38483 -0.423919 I) z^6-(0.335888 +0.857064 I) z^7\\ &+(1.10581 +0.450021 I) z^8-(1.74905 -1.05306 I) z^9+(0.321735 -2.92673 I) z^{10} \end{aligned} $$ We should also note that since the original function $f(z,w)$ was of degree five and the first leg generated a $2$-cycle branch, and since each of the roots above were simple, they must each lead to single-cycle branches, the total cycles summing up to five.
Third Function: The degenerate case.
Consider $$ f(z,w)=(1+z+z^2)+(2+z^2)w+(-1+2z^2)w^2+(5+z)w^3 $$ and note this function will lead to the degenerate polygon shown in Figure 6 since the lowest term in each $a_i$ is a constant.
But a degenerate polygon with zero slope is an acceptable lower Newton leg for the first iteration. This gives us the characteristic equation $1+2x-x^2+5x^3=0$, the root tally being $$\left( \begin{array}{ccc} \{-0.341781,1.\} & \{0.27089\, -0.715394 i,1.\} & \{0.27089\, +0.715394 i,1.\} \\ \end{array} \right) $$ And substituting each of these roots into the recursive equation, we get for each root a simple segment shown in Figure 7. And we know from previous cases that we can now use regular substitution with $\lambda_i=1$ for each succeeding exponent and obtain three $1$-cycle branches for this function.
Fourth Function: Polynomials in $w$
Now consider a polynomial strictly in terms of $w$: $$ f(z,w)=(w-1)(w-2)(w-3)^3=w^5-12 w^4+56 w^3-126 w^2+135 w-54 $$ Again, this function will lead to a degenerate polygon like that shown in Figure 6 with the characteristic equation being identical to the polynomial, $-54.+135. x-126. x^2+56. x^3-12. x^4+x^5$ with the roots being $\{1,2,3\}$. Now, when we substitute these roots into the first recursive equation, we obtain a second degenerate polygon with zero slope. For example, when we use the first root, we obtain $f_2(z,w)=(8)w+(-20)w^2+(18)w^3+(-7)w^4+(1)w^5$. Recall from the discussion, that we choose lower Newton legs with negative or zero slope only for the first polygon and then those only with negative slopes for successive iterations. We therefore have reached a termination point and should stop the algorithm. In this case, we would like the algorithm to report:
Series is terminating.
Series 1 is 1.
Fifth Function: Fractional Polynomials
We now consider a function having a fractional polynomial solution: $$ (1 - 3 z + 3 z^2 - z^3) + (-4 + 8 z - 4 z^2) w + (6 - 6 z) w^2 + (-4) w^3 + (1) w^4 $$ Readers should refer to the section on finite series for a derivation of this function having as its solution $w(z)=1+z^{1/4}+z^{1/2}+z^{3/4}$. We would like to process $f(z,w)$ through the Newton polygon algorithm and confirm that after the fourth iteration, that is, after we have computed the coefficient of the $z^{3/4}$ term, we obtain a degenerate polygon indicating the series is finite. And for this test, we will not use regular substitution when we obtain a simple segment but rather continue the recursive process up to the fifth iteration.
Computing the polygon for $f(z,w)$, we obtain the degenerate case shown in the top left plot of Figure 8 and a root tally of $\{1,4\}$ indicating multiple roots. We choose $c_1=1$ and then continue the recursive process through the second, third, and fourth iterations shown in Figure 8 to obtain the four coefficients of the first conjugate series for the solution. We are left with the function $$ \begin{aligned} f_4(z,w)&=(-4+8z^{1/4}-12 z^{1/2}+16 z^{3/4}-12 z+8 z^{5/4}-4 z^{3/2})w \\ &+(6 z^{1/2}-12 z^{3/4}+12 z-12 z^{5/4}+6 z^{3/2})w^2 \\ &+(-4z+4 z^{5/4}-4 z^{3/2})w^3 \\ &+ z^{3/2} w^4 \end{aligned} $$ The Newton polygon for $f_4(z,w)$ is shown in Figure 9 and this is another degenerate polygon indicating that the series has terminated. The reader is encouraged to work out these iterations.
Figure 9: Degenerate polygon for $f_4(z,w)$
Sixth Function
We would now like to use what we've learned to construct a particular type of function: we want a second-degree function which completely ramifies around the origin, that is, one which consists of a single $2$-cycle branch. Using what we know about Newton polygons, it's not difficult to design such a function. For example, the following function would lead to such a geometry: $$ F(z,w)=\left(z+p(z)\right)+\left(z+q(z)\right)w+\left(k+r(z)\right)w^2 $$ where $p(z)$, $q(z)$ and $r(z)$ are polynomials with $\textbf{ord}(p(z))>1$, $\textbf{ord}(q(z))>1$ and $\textbf{ord}(r(z))>0$, and $k$ is any non-zero constant. For example, even the unwieldy function $$ F(z,w)=(z+2z^2+100 z^{10}) +(z+z^6) w+ (1+z^3+z^5+2z^6)w^2 $$ would fit those criteria and indeed is fully-ramified about the origin. We will explain this with the simpler function $$ F(z,w)=z+z w+(1+z)w^2. $$
Figure 9
and its convex hull shown in Figure 9. The characteristic equation is $E_1(c_1)=1+c_1^2$ with simple roots $(i,-i)$. Second, the slope of the segment is $1/2$. So we know immediately $\lambda_1=1/2$ and that all succeeding powers of $z$ will increase by one over the denominator of this slope, or $\lambda_2=1/2$, $\lambda_3=1/2$, etc. That is, this branch is $2$-cycle. So letting $\lambda_1=1/2$, $\beta_1=1$, and taking $c_1=i$ we obtain $$ F(z,w_2)=w_2^2 z+w_2^2+2 i w_2 z+w_2 \sqrt{z}+2 i w_2-z+i \sqrt{z}. $$ And since the characteristic equation is simple for this example, we can next substitute $w_2=c_2 z^{1/2}+c_3 z+c_4 z^{3/2}+\cdots+c_n z^{(n-1)/2}$ for a desired number of terms and solve sequentially for $c_2, c_3,$ etc. The first five terms of one conjugate power expansion for this $2$-cycle branch are: $$ w(z)=i z^{1/2}-z/2-5/8 i z^{3/2}+1/2 z^2+71/128 i z^{5/2} $$ We can either substitute $c_1=-i$ and solve for the second conjugate series or derive it using the method in Section 2.
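Both claims here — the single lower-hull segment giving $\lambda_1=1/2$, and the five-term expansion — can be checked with a short script. This is only a sketch: the support points and the test values of $z$ are read off from $F$ by hand.

```python
from fractions import Fraction
import cmath

# --- lower Newton polygon of F(z,w) = z + z*w + (1+z)*w**2 ---
# support points: (power of w, lowest power of z in that coefficient)
points = [(0, 1), (1, 1), (2, 0)]

hull = [points[0]]
for p in points[1:]:
    while len(hull) >= 2:
        (x1, y1), (x2, y2) = hull[-2], hull[-1]
        if (y2 - y1)*(p[0] - x1) >= (p[1] - y1)*(x2 - x1):
            hull.pop()   # middle point lies on or above the chord
        else:
            break
    hull.append(p)

lam1 = -Fraction(hull[-1][1] - hull[0][1], hull[-1][0] - hull[0][0])
print(hull, lam1)        # [(0, 1), (2, 0)]  lambda_1 = 1/2

# --- check the five-term expansion against F ---
def F(z, w):
    return z + z*w + (1 + z)*w**2

def w5(z):
    s = cmath.sqrt(z)
    return 1j*s - z/2 - 5j/8*s**3 + z**2/2 + 71j/128*s**5

for z in (1e-2, 1e-3):
    print(z, abs(F(z, w5(z))))   # residual shrinks much faster than z
```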
What is the connection between analog signal to noise ratio and signal to noise ratio in the IQ plane in a quadrature demodulation system?
We would like to compute the quantitative relation between analog noise near the LO frequency and the statistics of points found in the IQ plane after IQ demodulation. In order to completely understand the question we first give a detailed description of the IQ demodulation system.
IQ demodulation system
An IQ mixer takes signals at high frequency and brings them to lower frequency so they can be more easily processed. Figure 1 shows a schematic of an IQ mixer. The local oscillator (LO) signal $\cos(\Omega t)$ is used to mix the RF signal down to a lower frequency.
Figure 1: Complete signal processing chain. Microwave frequency signal (and noise) come into the IQ mixer via the RF port. This signal is mixed with a local oscillator (LO) to convert to intermediate frequency signals $I$ and $Q$. The intermediate frequency signals are then filtered to remove the remaining high frequency component (see text) and digitally sampled. Detection of the amplitude and phase of each frequency component is done via discrete Fourier transform in digital logic.
Coherent signal - dc case
Suppose the incoming RF signal were $M \cos(\Omega t + \phi)$. Then the $I$ and $Q$ signals would be \begin{align} I(t) &= \frac{M}{2} \cos(\phi) + \frac{M}{2} \cos(2\Omega t + \phi) \\ Q(t) &= -\frac{M}{2} \sin(\phi) - \frac{M}{2} \sin(2\Omega t + \phi) \, . \end{align} We pass these signals through low pass filters to remove the $2 \Omega$ terms, yielding \begin{align} I_F(t) &= \frac{M}{2} \cos(\phi) \\ Q_F(t) &= -\frac{M}{2} \sin(\phi) \, . \end{align} As we can see, the dc $I$ and $Q$ voltages can be thought of as the Cartesian coordinates giving the amplitude and phase of the original signal. Therefore, the mixer has done its job of allowing us to find the amplitude and phase of a high frequency signal by making only low frequency measurements.
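A minimal numeric sketch of this dc case follows. The sample rate, tone parameters, and the plain average standing in for the low-pass filter are all illustrative choices, not from the text; mixing the RF with $\sin(\Omega t)$ here reproduces the filtered result $Q_F = -(M/2)\sin\phi$ quoted above (sign conventions on the $2\Omega$ term vary between references, but it is removed by the filter anyway).

```python
import numpy as np

fs = 1.0e6                      # sample rate (Hz), illustrative
Omega = 2*np.pi*50e3            # LO angular frequency, illustrative
M, phi = 1.7, 0.6               # amplitude and phase to recover

n_samples = 10000               # exactly 500 LO periods at these values
t = np.arange(n_samples)/fs
rf = M*np.cos(Omega*t + phi)

# mixing products; averaging over whole LO periods removes the 2*Omega
# terms, standing in for the low-pass filter
I_F = np.mean(rf*np.cos(Omega*t))    #  (M/2) cos(phi)
Q_F = np.mean(rf*np.sin(Omega*t))    # -(M/2) sin(phi)

M_est = 2*np.hypot(I_F, Q_F)
phi_est = np.arctan2(-Q_F, I_F)
print(M_est, phi_est)                # ≈ 1.7, 0.6
```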
Coherent signal - ac case
In practice we usually do not demodulate the RF signal to dc. There are several reasons for this:
1. Noise spectral density almost always increases sharply at low frequencies.
2. If we want to simultaneously measure the amplitude and phase of several sinusoidal components at different frequencies we cannot directly demodulate to dc in the analog part of the system.
As an example related to #2, we might have \begin{equation} RF(t) = M_1 \cos([\Omega + \omega_1] t + \phi_1 ) + M_2 \cos([\Omega + \omega_2] t + \phi_2) \, . \end{equation} In order to find the amplitude and phase of both frequency components we must use slightly more complex signal processing. The $I_F$ and $Q_F$ wave forms in this case are \begin{align} I_F(t) &= \frac{M_1}{2} \cos(\omega_1 t + \phi_1) + \frac{M_2}{2} \cos(\omega_2 t + \phi_2) \\ Q_F(t) &= -\frac{M_1}{2} \sin(\omega_1 t + \phi_1) - \frac{M_2}{2} \sin(\omega_2 t + \phi_2) \, . \end{align} In order to find both amplitudes and both phases, we need to essentially perform a Fourier transform. To do this, we digitize the wave forms, yielding \begin{align} I_n &= \frac{M_1}{2} \cos(\omega_1 n \delta t + \phi_1) + \frac{M_2}{2} \cos(\omega_2 n \delta t + \phi_2) \\ Q_n &= - \frac{M_1}{2} \sin(\omega_1 n \delta t + \phi_1) - \frac{M_2}{2} \sin(\omega_2 n \delta t + \phi_2) \end{align} where $\delta t$ is the digital sampling interval. Then, in digital logic we construct the complex series $z_n$ defined by $z_n \equiv I_n + i Q_n$. For the signals written above this is \begin{equation} z_n = \frac{M_1}{2} \exp \left( i \left[ \omega_1 n \delta t + \phi_1 \right] \right) + \frac{M_2}{2} \exp \left( i \left[ \omega_2 n \delta t + \phi_2 \right] \right) \, . \end{equation} If we now, in digital logic, compute the sum \begin{equation} Z(\omega_k) = \frac{1}{N}\sum_{n=0}^{N-1} z_n e^{-i \omega_k n \delta t} \end{equation} we recover the amplitude and phase for the component at frequency $\omega_k$. For example, if we were to compute $Z(\omega_1)$ we would get $(M_1/2) \exp(i \phi_1)$.
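A numeric sketch of the digital detection step, built directly from the $z_n$ and $Z(\omega_k)$ expressions above. The frequencies and record length are illustrative, chosen so that the unwanted tone averages to exactly zero over the record:

```python
import numpy as np

fs = 1.0e6                 # sample rate (illustrative)
N = 1000                   # record length
dt = 1/fs
n = np.arange(N)

M1, phi1 = 1.0, 0.3        # tone 1 (to be recovered)
M2, phi2 = 0.5, -1.1       # tone 2 (should drop out)
w1 = 2*np.pi*10e3          # IF of tone 1
w2 = 2*np.pi*30e3          # IF of tone 2

# the complex series z_n from the text
z = (M1/2)*np.exp(1j*(w1*n*dt + phi1)) + (M2/2)*np.exp(1j*(w2*n*dt + phi2))

# digital detection at omega_1
Z1 = np.mean(z*np.exp(-1j*w1*n*dt))
print(Z1, (M1/2)*np.exp(1j*phi1))   # the two agree
```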
In practice the signal always comes with noise. The effect of the noise is to make $Z(\omega)$ a random variable instead of a deterministic value. In other words, for each $\omega$, $Z(\omega)$ is random and will be different for each realization of the experiment.
We can guess from intuition that in the presence of noise $Z(\omega)$ has a circularly symmetric distribution in the IQ plane with mean equal to the deterministic value $(M/2)\exp(i \phi)$. The question is what exactly is the statistical distribution of $Z$ in the presence of noise?
Because each step in the processing chain is linear we consider a case with only noise and no coherent signal. Denote the noise $\xi(t)$. The $I$ and $Q$ signals are \begin{align}\ I(t) &= \xi(t) \cos(\Omega t) \\ Q(t) &= - \xi(t) \sin(\Omega t) \, . \end{align} We express the effect of the filter as a convolution with the time response function $h$, \begin{equation} I_F(t) = \int_{-\infty}^\infty dt' \, \xi(t') \cos(\Omega t') h(t - t') \end{equation} and similarly for $Q_F$. Note that, because the filter is causal, $h(t)=0$ for $t<0$. The sampling simply selects the value of $I_F$ and $Q_F$ at the times $\{ n \delta t \}$, \begin{equation} I_n = \int_{-\infty}^\infty dt' \, \xi(t') \cos(\Omega t') h(n \delta t - t') \end{equation} and similarly for $Q_n$. Following the construction described above for the digital part of the processing chain we have \begin{equation} Z(\omega) = \frac{1}{N}\sum_{n=0}^{N-1} \int_{-\infty}^\infty dt' \, \xi(t') e^{-i \Omega t'} h(n \delta t - t') e^{-i \omega n \delta t} \, . \end{equation} Our problem is therefore to compute the statistics of this expression.
Changing variables $n \delta t - t' \rightarrow t'$ produces \begin{equation} Z(\omega) = \frac{1}{N} \sum_{n=0}^{N-1} \int_{-\infty}^\infty dt' \, \xi(n \delta t - t') e^{-i \Omega (n \delta t - t')} h(t') e^{-i \omega n \delta t} \, . \end{equation} At this stage we can do a sanity check by computing the average value of $Z(\omega)$. Remember, this is an ensemble average. In other words, we are computing the average value of $Z(\omega)$ which we would find by converting many instances of demodulated noise into IQ points and then taking the mean of all those points. In any case, the result is \begin{align} \langle Z(\omega) \rangle &= \frac{1}{N} \sum_{n=0}^{N-1} \int_{-\infty}^\infty dt' \, \underbrace{\langle\xi(n \delta t - t')\rangle}_0 e^{-i \Omega (n \delta t - t')} h(t') e^{-i \omega n \delta t} \\ &= 0 \, . \end{align} This makes sense as we expect the noise should not change the average value of the demodulated IQ point, but should only add some randomness centered about the deterministic value.
I do not know how to compute the statistics of $Z(\omega)$ directly, so we take an alternative approach by computing instead the mean square of $Z(\omega)$. By the central limit theorem the real and imaginary parts of $Z$ should be at least approximately Gaussian distributed (and, as we'll point out, uncorrelated), so finding the mean square modulus of $Z$ actually tells us all we need to know.
We proceed by directly constructing $|Z(\omega)|^2$ and taking the statistical average (statistical average is denoted by $\langle \cdot \rangle$). \begin{align} \langle \left| Z(\omega) \right| ^2 \rangle &= \int_{-\infty}^\infty \int_{-\infty}^\infty dt' \, dt'' \, \frac{1}{N^2} \sum_{n,m=0}^{N-1} \nonumber \\ & e^{i\Omega (t' - t'')} h(t') h(t'') \langle \xi(n\delta t - t') \xi(m\delta t - t'') \rangle e^{-i(\Omega + \omega)(n - m)\delta t} \, . \qquad (*) \end{align} We now use the Wiener-Khinchin theorem, which says that for a stationary stochastic process $\xi(t)$ the statistical average $\langle \xi(\tau) \xi(0) \rangle$ is related to the power spectral density $S_\xi$ via the following equation: \begin{equation} \langle \xi(\tau) \xi(0) \rangle = \frac{1}{2}\int_{-\infty}^\infty \frac{d\omega}{2\pi} S_\xi(\omega) e^{i \omega \tau} \, . \end{equation} Using this formula for $\langle \xi(n\delta t - t') \xi(m\delta t - t'') \rangle$ yields \begin{align} \langle|Z(\omega)|^2 \rangle &= \frac{1}{2} \int_{-\infty}^\infty \int_{-\infty}^\infty dt' \, dt'' \, \int_{-\infty}^\infty \frac{d\omega'}{2\pi}\frac{1}{N^2} \sum_{n,m=0}^{N-1} \nonumber \\ & e^{i\Omega (t' - t'')} h(t') h(t'') S_\xi(\omega') e^{i\omega' ((n-m)\delta t - (t' - t''))} e^{-i(\Omega + \omega)(n - m)\delta t} \\ &= \frac{1}{2} \int_{-\infty}^\infty \frac{d\omega'}{2\pi} |h(\omega' - \Omega)|^2 S_\xi(\omega') \left| \frac{1}{N} \sum_{n=0}^{N-1} e^{-i(\Omega + \omega - \omega') n \delta t} \right|^2 \\ &= \frac{1}{2N} \int_{-\infty}^\infty \frac{d\omega'}{2\pi} |h(\omega' - \Omega)|^2 S_\xi(\omega') \underbrace{ \frac{1}{N} \left( \frac{\sin([\Omega + \omega - \omega'] \delta t N / 2)}{\sin([\Omega + \omega - \omega']\delta t / 2)} \right)^2 }_{N^{\text{th}}\text{ order Fejer kernel}} \\ &= \frac{1}{2N} \int_{-\infty}^\infty \frac{d\omega'}{2\pi} |h(\omega' - \Omega)|^2 S_\xi(\omega') \mathcal{F}_N([\Omega + \omega - \omega'] \delta t / 2) \end{align} where $\mathcal{F}_N$ is the $N^{\text{th}}$ order Fejer kernel. Changing variables $\Omega - \omega' \rightarrow \omega'$ we get \begin{equation} \langle |Z(\omega)|^2 \rangle = \frac{1}{2N} \int_{-\infty}^\infty \frac{d\omega'}{2\pi} |h(-\omega')|^2 S_\xi(\Omega - \omega') \mathcal{F}_N([\omega' + \omega]\delta t / 2) \, . \end{equation} So far the results have been exact, and precise results can be found by numeric evaluation of the integrals. We now make a series of relatively weak assumptions to arrive at a practical formula. The Fejer kernel $\mathcal{F}_N(x)$ has weight concentrated near $x=0$. Therefore, we integrate over $S_\xi$ only for frequencies near $\Omega$ and so, in this integral, we can approximate $S_\xi$ as a constant $S_\xi(\Omega - \omega') \approx S_\xi(\Omega)$, giving \begin{equation} \langle |Z(\omega)|^2 \rangle = \frac{1}{2N} S_\xi(\Omega) \int_{-\infty}^\infty \frac{d\omega'}{2\pi} |h(-\omega')|^2 \mathcal{F}_N([\omega' + \omega]\delta t / 2) \, . \end{equation} We can already see here that the noise statistics of the demodulated IQ point depend only on the RF spectral density near the LO frequency. This makes sense; the IQ mixer is designed to take signal content near the LO frequency and bring it down to a lower IF where it can be processed. The anti-aliasing filters remove all frequency components which are too far away from the LO.
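The kernel identity used in the derivation, $\left|\frac{1}{N}\sum_n e^{-ixn}\right|^2 = \mathcal{F}_N(x/2)/N$, is easy to confirm numerically; $N$ and the phase step $x$ below are arbitrary test values:

```python
import numpy as np

N = 64          # number of samples (illustrative)
x = 0.37        # plays the role of (Omega + omega - omega')*delta_t

n = np.arange(N)
lhs = abs(np.mean(np.exp(-1j*x*n)))**2               # |(1/N) sum e^{-ixn}|^2
fejer = (np.sin(N*x/2)/np.sin(x/2))**2 / N           # F_N(x/2), Fejer kernel
print(lhs, fejer/N)                                  # equal
```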
The first null of $\mathcal{F}_N(x)$ occurs at $x = 2\pi / N$, and most of the weight is contained in the first few lobes. The first nulls are therefore at \begin{equation} \frac{\omega'_{\text{null}}}{2\pi} = - \frac{\omega}{2\pi} \pm \frac{1}{N \delta t} \, . \end{equation} This means that the integral over $\omega'$ is dominated by frequencies in a range given by the sampling frequency divided by $N$. In most practical applications this range is so small that $h(\omega)$ is roughly constant over this range. If that's the case, we can replace $h(-\omega')$ with $h(\omega)$ (note that $h(-\omega) = h(\omega)$), finding \begin{align} \langle |Z(\omega)|^2 \rangle &= \frac{1}{2N}S_\xi(\Omega)|h(\omega)|^2 \underbrace{ \int_{-\infty}^\infty \frac{d\omega'}{2\pi} \mathcal{F}_N([\omega' + \omega] \delta t / 2) }_{1 / \delta t} \\ &= \frac{S_\xi(\Omega)}{2 T} |h(\omega)|^2 \end{align} where $T \equiv N \delta t$ is the total measurement time.
It is reasonably well known that if a random variable $Z$ has Gaussian and independently distributed real and imaginary parts, and has average squared modulus $R$, then the distributions of the real and imaginary parts of that variable have standard deviation $\sqrt{R/2}$.$^{[a]}$ Therefore, taking our result for $\langle |Z(\omega)|^2 \rangle$, our observation that the real and imaginary parts of $Z$ are Gaussian distributed, and the fact that they're uncorrelated,$^{[b]}$ we know that the standard deviations of the distributions of the real and imaginary parts are \begin{equation} \sigma = \sqrt{S_\xi(\Omega) |h(\omega)|^2 / 4 T} \, . \end{equation} As discussed at the beginning, a signal $M \cos([\Omega + \omega] t + \phi)$ becomes $(M/2)e^{i \phi}$ in the IQ plane. Of course there we ignored the effect of the filter, which simply scales the amplitude to \begin{equation} Z(\omega) = \frac{M |h(\omega)|}{2} e^{i \phi} \, . \end{equation} Suppose, as illustrated in Figure 2, we are using the IQ demodulation system to distinguish between two or more signals, each with a different phase but all with the same amplitude $M$. Due to the noise, each of the possible amplitude/phases leads to a cloud of points in the IQ plane at radial distance $M |h(\omega)|/2$ from the origin. The distance between two clouds' centers is $g(M/2)|h(\omega)|$ where $g$ is a geometrical factor which depends on the phases of the clouds. If the arc angle between two clouds is $\theta$ and each cloud's center is equidistant from the origin then $g = 2 \sin(\theta / 2)$. For example, if the two phases are $\pm\pi/2$ then $g=2 \sin(\pi/2) = 2$. Geometrically this is because the distance between the clouds' centers is twice the distance of either cloud from the origin.
The signal to noise ratio (SNR) is \begin{align} \text{SNR} & \equiv \frac{\text{separation}^2}{2 \times (\text{cloud std deviation})^2} \\ &= \frac{(g M |h(\omega)|/2)^2}{2 S_\xi(\Omega) |h(\omega)|^2 / 4T} \\ &= \frac{(g M)^2 T}{2 S_\xi(\Omega)}\\ &= \frac{g^2 P T}{S_\xi(\Omega)} \, . \end{align} where $P \equiv M^2/2$ is the incoming analog power. Note that the SNR does not depend on $h$. To remember this result, note that the noise power is the spectral density multiplied by a bandwidth $B$. Taking $B = 1/T$ we see that our result just says that the SNR in the IQ plane is exactly equal to the analog SNR multiplied by the geometrical factor $g^2$.
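A quick numeric sanity check, with illustrative values, that $|h(\omega)|$ indeed cancels from the final SNR expression:

```python
import math

M, T = 2.0, 1e-3          # signal amplitude, measurement time (illustrative)
S = 1e-6                  # noise spectral density S_xi(Omega) (illustrative)
theta = math.pi/2         # arc angle between the two clouds
g = 2*math.sin(theta/2)

for h in (0.3, 0.8, 1.5):             # filter gain |h(omega)|
    sigma = math.sqrt(S*h**2/(4*T))   # cloud standard deviation
    sep = g*(M/2)*h                   # distance between cloud centers
    snr = sep**2/(2*sigma**2)
    print(h, snr)                     # same SNR for every h

P = M**2/2
print(g**2*P*T/S)                     # the closed form agrees
```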
Figure 2: Two IQ clouds. The separation between the clouds' centers is proportional to their radial magnitude $M$, but scaled by a geometrical factor $g$. Projected onto the line connecting their centers, each cloud becomes a Gaussian distribution with width $\sqrt{S_\xi(\Omega)|h(\omega)|^2/4T}$.
$[a]$: Look up the chi square distribution.
$[b]$: We can see that the real and imaginary parts of $Z$ are in fact uncorrelated by writing the equivalent of equation $(*)$ but for $\langle \Re Z \Im Z \rangle$. Doing this we'd find that the sum which turned into the Fejer kernel in the case for $\langle |Z|^2 \rangle$ would go to zero (at least approximately) because it would be roughly the overlap of a sine and cosine, which are orthogonal.
Relative motion on an airport walkway A person is walking on a moving walkway in the airport. Her speed with respect to the walkway is 2 $\mathrm{m} / \mathrm{s}$ . The speed of the walkway is 1 $\mathrm{m} / \mathrm{s}$ with respect to the floor. What is her speed with respect to a person walking on the floor in the opposite direction at the speed of 2 $\mathrm{m} / \mathrm{s} ?$
Running on a treadmill Explain how you can run on a treadmill at 3 $\mathrm{m} / \mathrm{s}$ and remain at the same location.
Describe the important parts of the Michelson-Morley experimental setup and explain how this setup could help them determine whether Earth moves with respect to ether. Explain how the setup relates to the previous problem.
Describe what Michelson and Morley would have observed when they rotated their spectrometer if Earth were moving through ether compared to what they would have observed if Earth were not moving through ether.
Person on a bus A person is sitting on a bus that stops suddenly, causing her head to tilt forward. (a) Explain the acceleration of her head from the point of view of an observer on the ground. (b) Explain the acceleration of her head from the point of view of another person on the bus. (c) Which observer is in an inertial reference frame?
Turning on a rotating turntable A matchbox is placed on a rotating turntable. The turntable starts turning faster and faster. At some instant the matchbox flies off the turning table. (a) Draw a force diagram for the box when still on the rotating turntable. (b) Draw a force diagram for the box just before it flies off. (c) Explain why the box flies off only when the turntable reaches a certain speed. In what reference frame are you when providing this explanation? (d) How would a bug sitting on the turntable explain the same situation?
Use your knowledge of electromagnetic waves to give an example illustrating that if the speed of light were different in different inertial reference frames, two inertial frame observers would see the same phenomenon differently. [Hint: Think about radio waves.]
A particle called $\Sigma^{+}$ lives for $0.80 \times 10^{-10} \mathrm{s}$ in its proper reference frame before transforming into two other particles. How long does the $\Sigma^{+}$ seem to live according to a laboratory observer when the particle moves past the observer at a speed of $2.4 \times 10^{8} \mathrm{m} / \mathrm{s}$ ?
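The standard time-dilation relation behind problems of this type, $\Delta t = \gamma\,\Delta t_0$ with $\gamma = 1/\sqrt{1 - v^2/c^2}$, can be sketched numerically (values taken from the problem statement):

```python
import math

c = 3.0e8                    # speed of light, m/s

def gamma(v):
    # Lorentz factor
    return 1/math.sqrt(1 - (v/c)**2)

tau0 = 0.80e-10              # proper lifetime (s)
v = 2.4e8                    # lab speed (m/s); v/c = 0.8, so gamma = 5/3
tau_lab = gamma(v)*tau0
print(tau_lab)               # ≈ 1.33e-10 s
```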
The $\Sigma^{+}$ particle discussed in the previous problem appears to a laboratory observer to live for $1.0 \times 10^{-10}$ s. How fast is it moving relative to the observer?
A person on Earth observes 10 flashes of the light on a passing spaceship in 22 s, whereas the same 10 flashes seem to take 12 s to an observer on the ship. What can you determine using this information?
A spaceship moves away from Earth at a speed of 0.990$c$ . The pilot looks back and measures the time interval for one rotation of Earth on its axis. What time interval does the pilot measure? What assumptions did you make?
Extending life? A free neutron lives about 1000 s before transforming into an electron and a proton. If a neutron leaves the Sun at a speed of $0.999c$, (a) how long does it live according to an Earth observer? (b) Will such a neutron reach Pluto ( $5.9 \times 10^{12} \mathrm{m}$ from the Sun) before transforming? Explain your answers.
A $\Sigma^{+}$ particle lives $0.80 \times 10^{-10} \mathrm{s}$ in its proper reference frame. If it is traveling at 0.90$c$ through a bubble chamber, how far will it move before it disintegrates?
Extending the life of a muon A muon that lives $2.2 \times 10^{-6} \mathrm{s}$ in its proper reference frame is created $10,000$ $\mathrm{m}$ above Earth's surface. At what speed must it move to reach Earth's surface at the instant it disintegrates?
Effect of light speed on the time interval for a track race Suppose the speed of light were 15 $\mathrm{m} / \mathrm{s}$ . You run a $100-\mathrm{m}$ dash in 10 $\mathrm{s}$ according to the timer's clock. How long did the race last according to your watch?
Explain why an object moving past you would seem shorter in the direction of motion than when at rest with respect to you. Draw a sketch to illustrate your reasoning.
Explain why the length of an object that is oriented perpendicular to the direction of motion would be the same for all observers. Draw a sketch to illustrate your reasoning.
You sit in a spaceship moving past the Earth at 0.97 c. Your arm, held straight out in front of you, measures 50 $\mathrm{cm} .$ How long is it when measured by an observer on Earth?
Length of a javelin A javelin hurled by Wonder Woman moves past an Earth observer at 0.90$c .$ Its proper length is 2.7 $\mathrm{m} .$ What is its length according to the Earth observer?
At what speed must a meter stick move past an observer so that it appears to be 0.50 $\mathrm{m}$ long?
Changing the shape of a billboard A billboard is 10 $\mathrm{m}$ high and 15 $\mathrm{m}$ long according to a person standing in front of it. At what speed must a person in a fast car drive by parallel to the billboard's surface so that the billboard appears to be square?
A classmate says that time dilation and length contraction can be remembered in a simple way if you think of a person eating a foot-long "sub" sandwich on a train (the sandwich is oriented parallel to the train's motion). The person on the train finishes the sandwich in 20 min. You, standing on the platform, observe the person eating a shorter sandwich but for a longer time interval. Do you agree with this example? Explain your answer.
Give examples of cases in which two observers record the motion of the same object to have different speeds, to have different directions, and to have different velocities. Provide reasonable values for the relevant velocities in your examples. Sketch each example and explain how each observer arrives at the value of the measured speed.
Now repeat Problem 23, only this time instead of a moving object, use a light flash. Describe what speeds of light different observers should measure according to the second postulate of special relativity.
Life in a slow-light-speed universe Imagine that you live in a universe where the speed of light is 50 $\mathrm{m} / \mathrm{s}$ . You sit on a train moving west at speed 20 $\mathrm{m} / \mathrm{s}$ relative to the track. Your friend moves on a train in the opposite direction at speed 15 $\mathrm{m} / \mathrm{s}$. What is the speed of his train with respect to yours?
More slow-light-speed universe In the scenario described in Problem 25, you and your friend listen to music on the same radio station. What is the speed of the radio waves that your antenna is registering compared to the speed of the waves that your friend's antenna registers if the station is 100 miles to the west of you?
You are on a spaceship traveling at $0.80c$ with respect to a nearby star sending a laser beam to a spaceship following you, which is moving at $0.50c$ in the same direction. (a) What is the speed of the laser beam registered by the second ship's personnel according to the classical addition of the velocities? (b) What is the speed of the laser beam registered by the second ship's personnel according to the relativistic addition of the velocities? (c) What is the speed of the second ship with respect to yours according to the classical addition of the velocities? (d) What is the speed of the second ship with respect to yours according to the relativistic addition of the velocities?
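A sketch of the two composition laws involved in parts (a)-(d); the relativistic law, $u' = (u - v)/(1 - uv/c^2)$, also shows at once why the laser beam is still measured at $c$:

```python
c = 3.0e8  # speed of light, m/s

def rel_velocity(u, v, c=c):
    """Speed of an object moving at u in frame S, as measured from a
    frame moving at v relative to S (relativistic composition law)."""
    return (u - v)/(1 - u*v/c**2)

ship1, ship2 = 0.80*c, 0.50*c
print(ship1 - ship2)               # (c) classical relative speed: 0.30c
print(rel_velocity(ship1, ship2))  # (d) relativistic: 0.50c
print(rel_velocity(c, ship2))      # (b) light is still c for ship 2
```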
Your friend says that it is easy to travel faster than the speed of light; you just need to find the right observer. Give physics-based reasons for why your friend would have such an idea. Then explain whether you agree or disagree with him.
Your friend argues that Einstein's special theory of relativity says that nothing can move faster than the speed of light. (a) Give physics-based reasons for why your friend would have such an idea. (b) What examples of physical phenomena do you know of that contradict this statement? (c) Restate his idea so it is accurate in terms of physics.
An electron is moving at a speed of $0.90c$. Compare its momentum as calculated using a nonrelativistic equation and using a relativistic equation.
Explain why a relativistic expression is needed for fast-moving particles. Why can't we use a classical expression?
If you were to bring an electron from speed zero to $0.95c$ in 10 min, what force would need to be exerted on the electron? What object could possibly exert such a force?
If a proton has a momentum of $3.00 \times 10^{-19} \mathrm{kg} \cdot \mathrm{m} / \mathrm{s},$ what is its speed?
Determine the ratio of an electron's total energy to rest energy when moving at the following speeds: (a) 300 $\mathrm{m} / \mathrm{s}$ , (b) $3.0 \times 10^{8} \mathrm{m} / \mathrm{s},$ (c) $3.0 \times 10^{7} \mathrm{m} / \mathrm{s},$ (d) $1.0 \times 10^{8} \mathrm{m} / \mathrm{s}$ (e) $2.0 \times 10^{8} \mathrm{m} / \mathrm{s},$ and ( $\mathrm{f} ) 2.9 \times 10^{8} \mathrm{m} / \mathrm{s} .$
Solar wind To escape the gravitational pull of the Sun, a proton in the solar wind must have a speed of at least $6.2 \times 10^{5} \mathrm{m} / \mathrm{s}$ . Determine the rest energy, the kinetic energy, and the total energy of the proton.
At what speed must an object move so that its total energy is 1.0$\%$ greater than its rest energy? 10$\%$ greater? Twice its rest energy?
Space travel A $50$-kg space traveler starts at rest and accelerates at $5g$ for 30 days. Determine the person's total energy after 30 days. What assumptions did you make?
A person's total energy is twice his rest energy when he moves at a certain speed. By what factor must his speed now increase to cause another doubling of his total energy?
A proton's energy after passing through the accelerator at Fermilab is 500 times its rest energy. Determine the proton's speed.
A rocket of mass $m$ starts at rest and accelerates to a speed of 0.90$c$ . Determine the change in energy needed for this change in speed.
Determine the total energy, the rest energy, and the kinetic energy of a person with $60-\mathrm{kg}$ mass moving at speed 0.95$c .$
An electron is accelerated from rest across $50,000 \mathrm{V}$ in a machine used to produce X-rays. Determine the electron's speed after crossing that potential difference.
A particle originally moving at a speed 0.90$c$ experiences a 5.0$\%$ increase in speed. By what percent does its kinetic energy increase?
An electron is accelerated from rest across a potential difference of $9.0 \times 10^{9} \mathrm{V}$ . Determine the electron's speed (a) using the nonrelativistic kinetic energy equation and (b) using the relativistic kinetic energy equation. Which is the correct answer?
A particle of mass $m$ initially moves at speed $0.40c$. (a) If the particle's speed is doubled, determine the ratio of its final kinetic energy to its initial kinetic energy. (b) If the particle's kinetic energy increases by a factor of $100,$ by what factor does its speed increase?
Determine the mass of an object whose rest energy equals the total yearly energy consumption of the world $\left(5 \times 10^{20} \mathrm{J}\right)$
Mass equivalent of energy to separate a molecule Separating a carbon monoxide molecule $\mathrm{CO}$ into a carbon and an oxygen atom requires $1.76 \times 10^{-18} \mathrm{J}$ of energy. (a) Determine the mass equivalent of this energy. (b) Determine the fraction of the original mass of a CO molecule $4.67 \times 10^{-26} \mathrm{kg}$ that was converted to energy.
Hydrogen fuel cell A hydrogen-oxygen fuel cell combines 2 $\mathrm{kg}$ of hydrogen with 16 $\mathrm{kg}$ of oxygen to form 18 $\mathrm{kg}$ of water, thus releasing $2.5 \times 10^{8} \mathrm{J}$ of energy. What fraction of the mass has been converted to energy?
Mass to provide human energy needs Determine the mass that must be converted to energy during a 70 -year lifetime to continually provide electric power for a person at a rate of 1000 $\mathrm{W}$ . The production of the electric power from mass is only about 33$\%$ efficient.
EST An electric utility company charges a customer about $6-7$ cents for $10^{6} \mathrm{J}$ of electrical energy. At this rate, estimate the cost of 1 $\mathrm{g}$ of mass if converted entirely to energy.
Mass to produce electric energy in a nuclear power plant A nuclear power plant produces $10^{9} \mathrm{W}$ of electric power and $2 \times 10^{9} \mathrm{W}$ of waste heating. (a) At what rate must mass be converted to energy in the reactor? (b) What is the total mass converted to energy each year?
BIO EST Metabolic energy Estimate the total metabolic energy you use during a day. (You can find more on metabolic rate in the reading passage in Chapter 6.) Determine the mass equivalent of this energy.
Energy from the Sun (a) Determine the energy radiated by the Sun each second by its conversion of $4 \times 10^{9} \mathrm{kg}$ of mass to energy. (b) Determine the fraction of this energy intercepted by Earth, which is $1.50 \times 10^{11} \mathrm{m}$ from the Sun and has a radius of $6.38 \times 10^{6} \mathrm{m} .$
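The arithmetic here is direct; a numeric sketch of the two steps, treating Earth's cross-section as a disk of radius $R$ on a sphere of radius $d$ centered on the Sun:

```python
import math

c = 3.0e8                              # speed of light, m/s
# (a) power from converting 4e9 kg of mass to energy each second: P = (dm/dt) c^2
P = 4.0e9*c**2                         # ≈ 3.6e26 W

# (b) fraction intercepted: Earth's disk area over the full sphere at distance d
d, R_earth = 1.50e11, 6.38e6           # m
frac = (math.pi*R_earth**2)/(4*math.pi*d**2)
print(P, frac)                         # ≈ 3.6e26 W and ≈ 4.5e-10
```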
Why no color change? Why don't the colors of buildings and tree leaves change when we look at them from a flying plane? Shouldn't the trees ahead look more bluish when you are approaching and reddish when you are receding?
Change red light to green In a parallel universe the speed of light in a vacuum is 70.000 $\mathrm{m} / \mathrm{s}$ . How fast should a driver's car move so that a red light looks green?
Effect of the Hubble constant on age and radius of the universe How would the estimated age of the universe change if the new accepted value of the Hubble constant became $100 \mathrm{~km} / \mathrm{s} / \mathrm{Mpc}$? How would the visible radius of the universe change?
Expanding faster New observations suggest that our universe does not expand at a constant rate but instead is expanding at an increasing rate. How does this finding affect the estimation of the age of the universe using Hubble's law?
Baseball Doppler shift In September of 2010 Aroldis Chapman threw what may be the fastest baseball pitch ever recorded at 105 $\mathrm{mi} / \mathrm{h}(47 \mathrm{m} / \mathrm{s}) .$ What would the observed frequency of microwaves reflected from the ball be if the source frequency were 10.525 GHz? What would be the beat frequency between the source frequency and the observed frequency?
Were you speeding? A police officer stops you in a 29 $\mathrm{m} / \mathrm{s}$ $(65 \mathrm{mi} / \mathrm{h})$ speed zone and says you were speeding. The officer's radar has source frequency 33.4 $\mathrm{GHz}$ and observed a $3900-\mathrm{Hz}$ beat frequency between the source frequency and waves reflected back to the radar from your car. Were you speeding? Explain.
Boat trip A boat's speed is 10 m/s. It makes a round trip between stations A and B and then another between stations A and C. Stations A and B are on the same side of the river 0.5 km apart. Stations A and C are on the opposite sides of the river across from each other and also 0.5 km apart. The river flows at 1.5 m/s. What time interval is the round trip between stations A and B and then between A and C?
Space travel An explorer travels at speed $2.90 \times 10^{8} \mathrm{m} / \mathrm{s}$ from Earth to a planet of Alpha Centauri, a distance of 4.3 light-years as measured by an Earth observer. (a) How long does the trip last according to an Earth observer? (b) How long does the trip last for the person on the ship?
EST Extending life Suppose that the speed of light is 8.0 $\mathrm{m} / \mathrm{s}$ . You walk slowly to all of your classes during one semester while a classmate runs at a speed of 7.5 $\mathrm{m} / \mathrm{s}$ during the time you are walking. Estimate your classmate's change in age, as judged by you, and your change in age according to you during that walking time. Indicate how you chose any numbers used in your estimate.
Racecar when $c$ is 100 $\mathrm{m} /$ s Suppose that the speed of light is 100 $\mathrm{m} / \mathrm{s}$ and that you are driving a racecar at speed 90 $\mathrm{m} / \mathrm{s}$ . What time interval is required for you to travel 900 $\mathrm{m}$ along a track's straightaway (a) according to a timer on the track and (b) according to your own clock? (c) How long does the straightaway appear to you? (d) Notice that the speed at which the track moves past is your answer to part (c) divided by your answer to part (b). Does this speed agree with the speed as measured by the stationary timer?
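A sketch of all four parts with $c = 100$ m/s; part (d) checks that the contracted length divided by the dilated time recovers the same 90 m/s:

```python
import math

c, v, L = 100.0, 90.0, 900.0
t_track = L / v                       # (a) track-frame time
gamma = 1.0 / math.sqrt(1.0 - (v/c)**2)
t_car = t_track / gamma               # (b) driver's clock
L_car = L / gamma                     # (c) contracted straightaway
speed_check = L_car / t_car           # (d) should recover 90 m/s
print(t_track, t_car, L_car, speed_check)
```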
EST Cherenkov radiation is electromagnetic radiation emitted when a fast-moving particle such as a proton passes through an insulator at a speed faster than the speed of light in that insulator. The Cherenkov radiation looks like a blue glow in the shape of a cone behind the particle. The radiation is named after Soviet physicist Pavel Cherenkov, who received a Nobel Prize in 1958 for describing the radiation. Estimate the smallest speed of a proton moving in oil that will produce Cherenkov radiation behind it.
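The threshold is the speed of light in the medium, $c/n$. Taking a refractive index of about 1.5 for oil (an assumed, typical value for the estimate):

```python
c = 3.0e8
n_oil = 1.5                  # assumed refractive index of oil
v_min = c / n_oil            # proton must exceed light's speed in the medium
print(f"v_min ~ {v_min:.1e} m/s")  # about two-thirds of c
```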
A pilot and his spaceship of rest mass 1000 kg wish to travel from Earth to planet Scot ML, 30 light-years from Earth. However, the pilot wishes to be only 10 physiological years older when he reaches the planet. (a) At what constant speed must he travel? (b) What is the total energy of his spaceship and the rest energy, according to an Earth observer, while making the trip?
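With distance $d$ in light-years and proper time $\tau$ in years, $\tau = (d/\beta)\sqrt{1-\beta^2}$ solves to $\beta = d/\sqrt{d^2 + \tau^2}$; the total energy is then $E = \gamma m c^2$:

```python
import math

d, tau = 30.0, 10.0          # light-years, proper years
beta = d / math.sqrt(d**2 + tau**2)
gamma = 1.0 / math.sqrt(1.0 - beta**2)

m, c = 1000.0, 3.0e8
E_rest = m * c**2            # rest energy, 9.0e19 J
E_total = gamma * E_rest     # Earth-frame total energy
print(f"beta = {beta:.4f}, E_total = {E_total:.3e} J, E_rest = {E_rest:.1e} J")
```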
Space travel A pilot and her spaceship have a mass of 400 $\mathrm{kg}$. The pilot expects to live 50 more Earth years and wishes to travel to a star that requires 100 years to reach even if she were to travel at the speed of light. (a) Determine the average speed she must travel to reach the star during the next 50 Earth years. (b) To attain this speed, a certain mass $m$ of matter is consumed and converted to the spaceship's kinetic energy. How much mass is needed? (Ignore the energy needed to accelerate the fuel that has not yet been consumed.)
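The same relation $\beta = d/\sqrt{d^2+\tau^2}$ applies, and the fuel mass follows from $K = (\gamma - 1)Mc^2$, i.e. $m = K/c^2 = (\gamma-1)M$:

```python
import math

d, tau = 100.0, 50.0            # light-years, proper years
beta = d / math.sqrt(d**2 + tau**2)
gamma = 1.0 / math.sqrt(1.0 - beta**2)

M = 400.0                       # kg, pilot + ship
m_fuel = (gamma - 1.0) * M      # mass converted to kinetic energy
print(f"beta = {beta:.4f}, m = {m_fuel:.0f} kg")
```

The required fuel mass exceeds the ship's own rest mass, which is part of the point of the exercise.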
(a) A container holding 4 $\mathrm{kg}$ of water is heated from $0^{\circ} \mathrm{C}$ to $60^{\circ} \mathrm{C}$ . Determine the increase in its energy and compare this to the rest energy when at $0^{\circ} \mathrm{C}$ . (b) If the water, initially at $0^{\circ} \mathrm{C},$ is converted to ice at $0^{\circ} \mathrm{C},$ determine the ratio of its energy change to its original rest energy.
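A sketch of the ratios, using the standard specific heat of water, 4186 J/(kg·K), and heat of fusion, $3.34\times10^5$ J/kg:

```python
m = 4.0                 # kg of water
c = 3.0e8
E_rest = m * c**2       # 3.6e17 J at 0 C

dE_heat = m * 4186.0 * 60.0      # heating from 0 C to 60 C
dE_ice = m * 3.34e5              # energy released on freezing at 0 C
print(dE_heat / E_rest, dE_ice / E_rest)   # both ~1e-12: utterly negligible
```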
Which principle can we use to determine the frequency $f_{\mathrm{D}}$ "detected" by the ball as it moves toward the source waves from the radar?
(a) The beat frequency equation
(b) The high-speed Doppler effect equation
(c) The low-speed Doppler effect equation
(d) The time dilation equation
(e) The relationship between wave speed, frequency, and wavelength
Which frequency is closest to the frequency $f_{\mathrm{D}}$ detected by the ball as it moves toward the radar source waves?
(a) 10.525 $\mathrm{GHz}$
(b) $10.525 \mathrm{GHz}+2.0 \times 10^{-6} \mathrm{GHz}$
(c) $10.525 \mathrm{GHz}-2.0 \times 10^{-6} \mathrm{GHz}$
(d) $3 \times 10^{-7} \mathrm{Hz}$
Which principle can we use to determine the frequency $f_{\mathrm{O}}$ detected by the radar from waves reflected from the ball?
(e) The relationship between wave speed, frequency, and wavelength
Which answer is closest to the frequency $f_{\mathrm{O}}$ detected by the radar from the waves reflected from the ball?
(a) Exactly 10.525 GHz
Which principle is used to determine the frequency that the radar measures of the combined source and observed waves?
Which answer is closest to the frequency that the radar measures of the combined source and observed waves?
(a) $2.0 \times 10^{3} \mathrm{Hz} \quad$ (b) $4.0 \times 10^{3} \mathrm{Hz}$
(c) 10.525 $\mathrm{GHz} \quad$ (d) $3 \times 10^{-7} \mathrm{Hz}$
(e) $6 \times 10^{-7} \mathrm{Hz}$
What principle would you use to estimate the distance of 3 $\mathrm{C} 273$ from Earth?
(a) The high-speed Doppler effect equation
(b) The low-speed Doppler effect equation
(c) Hubble's law
Which answer below is closest to the distance of 3 $\mathrm{C} 273$ from the Earth in terms of the distance of the Sun from Earth?
(a) $\approx 10^{3}$ Sun distances $\quad$ (b) $\approx 10^{6}$ Sun distances
(c) $\approx 10^{9}$ Sun distances $\quad$ (d) $\approx 10^{12}$ Sun distances
(e) $\approx 10^{14}$ Sun distances
Which answer below is closest to the power of light and other forms of radiation emitted by 3 $\mathrm{C} 273 ?$
(a) $\approx 10^{8} \mathrm{J} / \mathrm{s} \quad$ (b) $\approx 10^{18} \mathrm{J} / \mathrm{s}$
(c) $\approx 10^{25} \mathrm{J} / \mathrm{s} \quad$ (d) $\approx 10^{32} \mathrm{J} / \mathrm{s}$
(e) $\approx 10^{40} \mathrm{J} / \mathrm{s}$
Which answer below is closest to the mass of 3 $\mathrm{C} 273$ that is converted to light and other forms of radiation each second?
By comparison, the mass of Earth is $6 \times 10^{24} \mathrm{kg}$ .
(a) $\approx 10^{11} \mathrm{kg} / \mathrm{s} \quad$ (b) $\approx 10^{15} \mathrm{kg} / \mathrm{s}$
(c) $\approx 10^{19} \mathrm{kg} / \mathrm{s} \quad$ (d) $\approx 10^{23} \mathrm{kg} / \mathrm{s}$
(e) $\approx 10^{29} \mathrm{kg} / \mathrm{s}$
What is the speed of light emitted by 3 $\mathrm{C} 273$ as detected by an observer on 3 $\mathrm{C} 273 ?$
(a) 1.15$c \quad$ (b) $c$
(c) 0.85$c \quad$ (d) None of these is correct.
If 3 $\mathrm{C} 273$ is moving away from Earth at $0.16 c,$ what speed below is closest to the light speed we on Earth detect coming from 3 $\mathrm{C} 273 ?$
\begin{document}
\title{Remarks on the nonlocal Dirichlet problem\thanks{Research supported in part by National Science Centre (Poland) grant 2014/14/M/ST1/00600 and by the DFG through the CRCs 701 and 1283}} \author{Grzywny, Tomasz \footnote{Wrocław University of Science and
Technology, Faculty of Pure and Applied
Mathematics, 27 Wybrzeże
Wyspiańskiego
50-370 Wrocław, Poland, \emph{email:}
[email protected]}
\and
Kassmann, Moritz \footnote{Universit\"{a}t Bielefeld, Fakult\"{a}t f\"{u}r
Mathematik, Postfach 100131, D-33501 Bielefeld, Germany, \emph{email:}
[email protected]}
\and
Le\.{z}aj, \L{}ukasz \footnote{Wrocław University of Science and
Technology, Faculty of Pure and Applied
Mathematics, 27 Wybrzeże
Wyspiańskiego
50-370 Wrocław, Poland,
\emph{email:} [email protected]} }
\maketitle
\begin{abstract} We study translation-invariant integrodifferential operators that generate L\'{e}vy processes. First, we investigate different notions of what a solution to a nonlocal Dirichlet problem is and we provide the classical representation formula for distributional solutions. Second, we study the question under which assumptions distributional solutions are twice differentiable in the classical sense. Sufficient conditions and counterexamples are provided. \end{abstract}
Keywords: Dirichlet problem, nonlocal operator, L\'evy process, regularity
MSC2010 Subject Classification: 34B05, 47G20, 60J45
\section{Introduction}
The aim of this article is to provide two results on translation-invariant integrodifferential operators, which are not surprising but have not been systematically covered in the literature. Let us briefly explain these results in case of the classical Laplace operator.
The classical result of Weyl says the following. Assume $D \subset \mathbb{R}^d$ is an open set, $f \in C^\infty(D)$, and $u \in \mathcal{D}'(D)$ is a Schwartz distribution satisfying $\Delta u = f$ in the distributional sense, i.e. $\langle u, \Delta \psi \rangle = \langle \psi, f \rangle$ for every $\psi \in C^\infty_c(D)$. Then $u \in C^\infty (D)$ and $\Delta u = f$ in $D$. This is the starting point for the study of distributional solutions to boundary value problems. Our first aim is to study distributional solutions to nonlocal boundary value problems of the form \begin{alignat*}{2} \mathscr{L} u &= f \quad &&\text{in } D\,, \\ u &= g &&\text{in } D^c\,, \end{alignat*} where $\mathscr{L}$ is an integrodifferential operator generating a unimodal L\'evy process. Our second aim is to provide sufficient conditions such that distributional solutions $u$ to the nonlocal Dirichlet problem are twice differentiable in the classical sense. In the case of the Laplace operator, it is well known that Dini continuity of $f: D \to \mathbb{R}$, i.e. finiteness of the integral $\int_0^1 \omega_f(r)/r \, \textnormal{d} r$ for the modulus of continuity $\omega_f$, implies that the distributional solution $u$ to the classical Dirichlet problem satisfies $u \in C^2_{\operatorname{loc}}(D)$. On the other hand, one can construct a continuous function $f:B_1 \to \mathbb{R}$ and a distribution $u \in \mathcal{D}'(B_1)$ such that $\Delta u = f$ in the distributional sense, but $u \notin C^2_{\operatorname{loc}}(B_1)$. These observations were made long ago \cite{HaWi55}. They have been extended to non-translation-invariant operators by several authors \cite{MR521856, MaEi71} and to nonlinear problems \cite{Kov97,
DGM04}. Note that there are many more related contributions including treatments of partial differential equations on non-smooth domains. In the present work we treat the simple linear case for a general class of nonlocal operators generating unimodal L\'evy processes.
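For orientation, a standard pair of examples (included here for the reader's convenience, not taken from the cited sources): every H\"older-continuous $f$ is Dini continuous, since $\omega_f(r) \leqslant C r^\beta$ with $\beta > 0$ gives
\begin{align*}
\int_0^1 \frac{\omega_f(r)}{r} \, \textnormal{d} r \leqslant C \int_0^1 r^{\beta - 1} \, \textnormal{d} r = \frac{C}{\beta} < \infty\,,
\end{align*}
whereas a modulus of continuity such as $\omega(r) = 1/\log(1/r)$ for small $r$ fails the Dini condition, because the substitution $u = \log(1/r)$ yields
\begin{align*}
\int_0^{1/2} \frac{\, \textnormal{d} r}{r \log(1/r)} = \int_{\log 2}^{\infty} \frac{\, \textnormal{d} u}{u} = \infty\,.
\end{align*}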
Let us introduce the objects of our study and formulate our main results. Let $\nu\!: \mathbb{R}^d\setminus\{0\} \to [0,\infty)$ be a function satisfying \begin{align*}
\int \big( 1 \wedge |h|^2\big) \nu(h) \, \textnormal{d} h < \infty \,. \end{align*} The function $\nu$ induces a measure $\nu(\! \, \textnormal{d} h) = \nu(h) \, \textnormal{d} h$, which is called the L\'{e}vy measure. Note that we use the same symbol for the measure as well as for the density. We study operators of the form \begin{align}\label{eq:def-L}
\mathscr{L} u(x) = \lim_{\epsilon \to 0} \int_{|y|>\epsilon} (u(x+y)-u(x))\nu(y) \, \textnormal{d} y\,. \end{align} This expression is well defined if $u$ is sufficiently regular in the neighbourhood of $x \in \mathbb{R}^d$ and satisfies some integrability condition at infinity. We recall that for $\alpha \in (0,2)$ and $\nu(\!\, \textnormal{d}
h)= c_\alpha |h|^{-d-\alpha} \, \textnormal{d} h$ with some appropriate constant $c_\alpha$, the operator $\mathscr{L}$ equals the fractional Laplace operator $-(-\Delta)^{\alpha/2}$ on $C_b^2(\mathbb{R}^d)$. The regularity theory of such operators has been intensively studied recently. For instance, it is well known \cite{MR2555009, MR3168912, MR3293447, MR3536990, MR3447732} that the solution of $-(-\Delta)^{\alpha/2} u=f$ with $f \in C^{\beta}$ belongs to $C^{\alpha+\beta}$ provided that neither $\beta$ nor $\alpha+\beta$ is an integer. The same result in a more general setting is derived in \cite{BK2015}.
Our standing assumption is that $h \to \nu(h)$ is a non-increasing radial function and that there exists a L\'{e}vy measure $\nu^*$ resp. a density $\nu^*$ such that $\nu \leqslant \nu^*$ and \begin{align}\label{growth_condition} \nu^*(r) \leqslant C\nu^*(r+1), \quad r\geqslant r_0 \end{align} for some $r_0, C \geqslant 1$. Given an open set $D \subset \mathbb{R}^d$, denote by $\mathcal{L}^1(D)$ the vector space of all Borel functions $u \in L_{\operatorname{loc}}^1$ satisfying \begin{align}\label{measure_scaling}
\int_D |u(x)| (1 \wedge \nu^*(x)) \, \textnormal{d} x < \infty. \end{align} The condition $u \in \mathcal{L}^1(D)$ is the integrability condition needed to ensure well-posedness in the definition of $\mathscr{L} u$ in distributional sense. Given an open set, we denote by $G_D$ resp. $P_D$ the usual Green resp. the Poisson operator, cf. \autoref{sec:prelims}. For a definition of the Kato class $\mathcal{K}$ and $\mathcal{K}(D)$ see \autoref{def:Kato_class} below. Here is our first result.
\begin{thm}\label{thm:weak_thm}
Let $D$ be a bounded open set. Suppose $f \in L^1(D)$ and $g \in \mathcal{L}^1(D^c)$.
Let $u \in \mathcal{L}^1({\R^{d}}) $ be a distributional solution of the Dirichlet problem
\begin{align}\label{eq:weak_problem}
\begin{array}{rlll}
\mathscr{L} u &=& f & \text{in } D\,, \\
u &=& g & \text{in } D^c\,.
\end{array}
\end{align}
Then $u(x) + G_D[f](x)$ satisfies the mean-value property inside
$D$. Furthermore, if $D$ is a Lipschitz domain and there exists $V \subset \subset D$ such that $f$ and $g \ast \nu$ belong to the Kato class $\mathcal{K}(D \setminus \overline{V})$, then there is a unique solution, which is bounded close to the boundary of $D$:
\begin{align*}
u(x) = - G_D[f](x)+P_D[g](x).
\end{align*} \end{thm} The theorem above says that the distributional solution of \eqref{eq:weak_problem} is unique up to a harmonic function. If, additionally, $D$ is a Lipschitz domain and we impose some regularity, then the solution is unique. Boundedness of $u$, $f$, $g$ would suffice, of course. It is obvious that one has to impose some regularity condition on $f$ in order to prove uniqueness of solutions. Note that, in the case where $\mathscr{L}$ equals the fractional Laplace operator, similar results like \autoref{thm:weak_thm} are proved in \cite{MR1671973}. A result similar to \autoref{thm:weak_thm} has recently been proved in \cite{KKLL2018}. The authors consider a smaller class of operators and concentrate on viscosity solutions instead of distributional solutions.
Variational solutions to nonlocal operators have been studied by several authors, e.g., in \cite{MR3318251, MR3738190}. The problem to determine appropriate function spaces for the data $g$ leads to the notion of nonlocal traces spaces introduced in \cite{DyKa16}. It is interesting that the study of Dirichlet problems for nonlocal operators leads to new questions regarding the theory of function spaces.
The formulation of our second main result requires some further preparation. The assumptions are rather technical because we cover a large class of translation-invariant operators. A condition similar to the following appears in \cite{BGPR2017}. \begin{enumerate}
\item[(A)] $\nu$ is twice continuously differentiable and
there is a positive constant $C$ such that
\begin{align*}
|\nu'(r)|, |\nu''(r)| \leqslant C \nu^*(r) \quad \text{ for } r \geqslant r_0.
\end{align*} \end{enumerate} (A) and \eqref{growth_condition} are essential for proving that functions with the mean-value property are twice continuously differentiable, see \autoref{lem:harm_c2}. We emphasize that in general this is not the case and usually harmonic functions lack sufficient regularity if no additional assumptions are imposed. The reader is referred to \cite[Example $7.5$]{MR3413864}, where a function $f$ with the mean-value property is constructed for which $f'(0)$ does not exist.
Let $G$ be a fundamental solution of $\mathscr{L}$ on ${\R^{d}}$ (see \eqref{G_def} for definition). Note that in the case of the fractional Laplace operator $G(x)
=c_{d,\alpha} |x|^{\alpha - d}$ for $d\neq \alpha$ and some constant $c_{d,\alpha}$. In what follows we will assume the kernel $G$ to satisfy the following growth condition: \begin{enumerate}
\item[(G)] $G \in C^2({\R^{d}} \setminus \{0\})$ and there exists a non-increasing function $S\!: \, (0,\infty) \mapsto [0,\infty)$ and $r_0>0$ such that
\begin{enumerate}
\item[(i)] if $\int_0^{1/2}|G'(t)|t^{d-1} \, \textnormal{d} t=\infty$, then
\begin{align*}
G(r),|G'(r)|,r|G''(r)| \leqslant S(r), \quad r<r_0,
\end{align*}
\item[(ii)] if $\int_0^{1/2}|G'(t)|t^{d-1} \, \textnormal{d} t<\infty$, then additionally $G \in C^3({\R^{d}} \setminus \{0\})$ and
\begin{align*}
G(r),|G'(r)|,|G''(r)|,r|G'''(r)| \leqslant S(r), \quad r<r_0.
\end{align*}
\end{enumerate} \end{enumerate}
\begin{thm}\label{thm:main_thm}
Let $D$ be an open bounded set. Assume that the measure $\nu$ satisfies (A) and \eqref{growth_condition} and the fundamental solution $G$ satisfies (G). Let $g \in \mathcal{L}^1(D^c)$ and $f\!: \, D
\mapsto \mathbb{R}$. If $\int_0^1 |G'(t)|t^{d-1}\, \textnormal{d} t < \infty$ we assume \begin{align}\label{main_thm_cond1} \int_0^{1/2} S(t) \omega_f(t,D) t^{d-1}\, \textnormal{d} t < \infty, \end{align}
or if $\int_0^1 |G'(t)|t^{d-1}\, \textnormal{d} t = \infty$ we assume \begin{align}\label{main_thm_cond2} \int_0^{1/2} S(t) \omega_{\nabla f}(t,D) t^{d-1}\, \textnormal{d} t < \infty\,. \end{align}
Then the solution $u \in \mathcal{L}^1({\R^{d}})$ of the problem
\begin{align}\label{General_problem3}
\left\{ \begin{array}{rlll}
\mathscr{L} u &=& f & \text{in } D, \\
u &=& g & \text{in } D^c.
\end{array} \right.
\end{align}
belongs to $C_{\operatorname{loc}}^2(D)$ and is unique up to a harmonic function (with respect to
$\mathscr{L}$). \end{thm}
\begin{rem}
Either \eqref{main_thm_cond1} or \eqref{main_thm_cond2} implies $f \in \mathcal{K}(D)$, so by \autoref{thm:weak_thm}, if $D$ is a Lipschitz domain and $g \ast \nu \in \mathcal{K}(D)$, then the solution is unique.
The result uses quite involved conditions because the measure $\nu$ interacts with the Dini-type assumptions for the right-hand side function $f$. Looking at examples, we see that the two cases described in the theorem appear naturally. In the fractional Laplacian case
($G(x) =c_{d,\alpha} |x|^{\alpha - d}$), finiteness of the expression
$\int_0^{1/2} |G'(t)|t^{d-1}\, \textnormal{d} t$ depends on the value of $\alpha \in (0,2)$. We show in \autoref{sec:examples} that the conditions hold true when $\mathscr{L}$ is the generator of a rotationally symmetric $\alpha$-stable process, i.e., when $\mathscr{L}$ equals the fractional Laplace operator. Note that \autoref{thm:main_thm} is a new result even in this case. We also study a more general class, e.g., operators of the form $-\varphi(-\Delta)$, where $\varphi$ is a Bernstein function. Note that in the theorem above we do not assume that $g$ is bounded.
\begin{rem} We emphasize that in the case of $\mathscr{L}$ being the fractional Laplace operator of order $\alpha \in (0,2)$ and $f \in C_{\operatorname{loc}}^{2-\alpha}(D)$, it is not true that every solution of $\mathscr{L} u = f$ belongs to $C_{\operatorname{loc}}^2(D)$ as is stated in \cite[Theorem $3.7$]{AJS2018}. A similar phenomenon has been mentioned in \cite{MR2555009} and is visible here as well. Observe that in such case the integrals \eqref{main_thm_cond1} and \eqref{main_thm_cond2} are clearly divergent and consequently, \autoref{thm:main_thm} cannot be applied. We devote \autoref{sec:counterexamples} to the construction of counterexamples for any $\alpha \in (0,2)$. \end{rem}
The article is organized as follows: in \autoref{sec:prelims} we provide the main definitions and some preliminary results. The proof of \autoref{thm:weak_thm} is provided in \autoref{sec:weak-solutions}. \autoref{sec:sufficient_condition} contains several rather technical computations and the proof of \autoref{thm:main_thm}. We discuss the necessity of the assumptions of \autoref{thm:main_thm} through examples in \autoref{sec:counterexamples}. Finally, in \autoref{sec:examples} we provide examples that show that the assumptions of \autoref{thm:main_thm} are natural. \section{Preliminaries}\label{sec:prelims}
In this section we explain our use of notation, define several objects and collect some basic facts. We write $f \asymp g$ when $f$ and $g$ are comparable, that is the quotient $f/g$ stays between two positive constants. To simplify the notation, for a radial function $f$ we use the same symbol to denote its radial profile. In the whole paper $c$ and $C$ denote constants which may vary from line to line. We write $c(a)$ when the constant $c$ depends only on $a$. By $B(x,r)$ we denote the ball of radius $r$ centered at $x$, that
is $B(x,r)=\{y \in {\R^{d}}: \ |y-x|<r\}$. For convenience we set $B_r=B(0,r)$. For an open set $D$ and $x \in D$ we define $\delta_D(x)=\dist(x,\partial D)$ and $\diam(D)=\sup_{x,y \in D}
|x-y|$. The modulus of continuity of a continuous function $f: D \to \mathbb{R}$ is defined by \begin{align*}
\omega_{f}(t,D) = \sup \{|f(x) - f(y)| : \; x,y \in D, |x-y| < t\} \quad (t > 0)\,. \end{align*}
For a differentiable function $f: D \to \mathbb{R}$ we set \begin{align*} \omega_{\nabla f}(t,D) = \max\limits_{i \in \{1, \ldots, d\}} \sup
\{|\partial_{x_i} f(x) - \partial_{x_i} f(y)| : \; x,y \in D, |x-y| < t\} \quad (t > 0)\,. \end{align*}
We say that a Borel measure is isotropic unimodal if it is absolutely continuous on ${\R^{d}} \setminus \{0\}$ with respect to the Lebesgue measure and has a radial, non-increasing density. Given an isotropic unimodal L\'{e}vy measure $\nu(\! \, \textnormal{d}
x)=\nu(|x|) \, \textnormal{d} x$, we define a L\'{e}vy-Khinchine exponent \begin{align*} \psi(\xi) = \int_{{\R^{d}}} \(1-\cos(\xi \cdot x)\) \nu(\! \, \textnormal{d} x), \quad \xi \in {\R^{d}}. \end{align*}
$\psi$ is usually called \emph{the characteristic exponent}. It is well known (e.g. \cite[Lemma $2.5$]{MR3413864}) that if $\nu({\R^{d}})=\infty$, there exists a continuous function $p_t \geqslant 0$ in ${\R^{d}} \setminus \{0\}$ such that \begin{align*} \widehat{p_t}(\xi)=\int_{{\R^{d}}} e^{-i\xi \cdot x} p_t(x) \, \textnormal{d} x=e^{-t \psi(\xi)}, \quad \xi \in {\R^{d}}. \end{align*} The family $\{p_t\}_{t > 0}$ induces a strongly continuous contraction semigroup on $C_0({\R^{d}})$ and $L^2({\R^{d}})$ \begin{align*} P_t f(x) = \int_{{\R^{d}}} f(y)p_t(y-x) \, \textnormal{d} y, \quad x \in {\R^{d}}, \end{align*} whose generator $\mathcal{A}$ has the Fourier symbol $-\psi$. Using the Kolmogorov theorem one can construct a stochastic process $X_t$ with transition densities $p_t(x,y)=p_t(y-x)$, namely $\mathbb{P}^x(X_t \in A)=\int_A p_t(x,y) \, \textnormal{d} y$. Here $\mathbb{P}^x$ is the probability corresponding to a process $X_t$ starting from $x$, that is $\mathbb{P}^x(X_0=x)=1$. By $\mathbb{E}^x$ we denote the corresponding expectation. In fact, $X_t$ is a pure-jump isotropic unimodal L\'{e}vy process in ${\R^{d}}$, that is a stochastic process with stationary and independent increments and c\`{a}dl\`{a}g paths (see for instance \cite{MR1739520}).
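As a quick consistency check (standard, recorded here for the reader's convenience): for the $\alpha$-stable density $\nu(x) = c_\alpha |x|^{-d-\alpha}$, the substitution $y = \lambda x$ gives, for $\lambda > 0$,
\begin{align*}
\psi(\lambda \xi) = \int_{{\R^{d}}} \(1-\cos(\lambda \xi \cdot x)\) c_\alpha |x|^{-d-\alpha} \, \textnormal{d} x = \lambda^{\alpha} \int_{{\R^{d}}} \(1-\cos(\xi \cdot y)\) c_\alpha |y|^{-d-\alpha} \, \textnormal{d} y = \lambda^{\alpha} \psi(\xi),
\end{align*}
so by rotational invariance $\psi(\xi)$ is a constant multiple of $|\xi|^{\alpha}$, and with the normalizing choice of $c_\alpha$ one gets $\psi(\xi) = |\xi|^{\alpha}$, i.e. $\mathscr{L} = -(-\Delta)^{\alpha/2}$.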
One of the objects of significant importance in this paper is the potential kernel defined as follows: \begin{align*} U(x,y) = \int_0^{\infty} p_t(x,y) \, \textnormal{d} t. \end{align*} Clearly $U(x,y)=U(y-x)$. The potential kernel can be defined in our setting if $\int_{B_1}\frac{1}{\psi(\xi)}d\xi<\infty$. In particular, for $d \geqslant 3$ the potential kernel always exists (see \cite[Theorem 37.8]{MR1739520}). If this is not the case, one can consider the compensated potential kernel \begin{align}\label{def-compensated_kernel} W_{x_0}(x-y)=\int_0^{\infty}\(p_t(x-y) - p_t(x_0)\) \, \textnormal{d} t \end{align} for some fixed $x_0 \in {\R^{d}}$. If $d=1$ and $\int_{B_1} \frac{\, \textnormal{d} \xi}{\psi(\xi)}<\infty$, we can set $x_0=0$. In other cases the compensation must be taken with $x_0 \in {\R^{d}} \setminus \{0\}$. For details we refer the reader to \cite{MR3636597} and to \autoref{app:appendix}.
Slightly abusing the notation, we let $W_1$ be \eqref{def-compensated_kernel} for $x_0=(0,...,0,1) \in {\R^{d}}$. Thus, we have arrived at three potential kernels: $U$, $W_0$ and $W_1$. Each one corresponds to a different type of process $X_t$ and an operator associated with it. In order to merge these cases in one object, we let \begin{align}\label{G_def} G(x) = \left\lbrace \begin{array}{rl} U(x), & \text{if } \int_{B_1} \frac{\, \textnormal{d} \xi}{\psi(\xi)}<\infty, \\ W_0(x), & \text{if } d=1, \ \int_{B_1} \frac{\, \textnormal{d} \xi}{\psi(\xi)}=\infty \text{ and } \int_0^{\infty} \frac{1}{1+\psi(\xi)} \, \textnormal{d} \xi< \infty, \\ W_1(x), & \text{otherwise.} \end{array} \right. \end{align} For instance, in the case of $\mathscr{L}=\Delta$ we have \begin{align*} G(x)=\left\{ \begin{array}{ll}
c_d |x|^{2-d}, & d \geqslant 3,\\
\frac{1}{\pi} \ln \frac{1}{|x|}, & d=2,\\
|x|, & d=1. \end{array} \right. \end{align*}
The basic object in the theory of stochastic processes is the first exit time of $X$ from $D$, \begin{align*} \tau_D=\inf \{ t>0\!: \, X_t \notin D\}. \end{align*} Using $\tau_D$ we define an analogue of the generator of $X_t$, namely, the \emph{characteristic operator} or \emph{Dynkin operator}. We say a Borel function $f$ is in the domain $\mathcal{D}_{\mathcal{U}}$ of the Dynkin operator $\mathcal{U}$ if there exists the limit \begin{align*} \mathcal{U} f(x) = \lim_{B \to \{x\}} \frac{\mathbb{E}^x f\( X_{\tau_B} \)-f(x)}{\mathbb{E}^x \tau_B}. \end{align*} Here $B \to \{x\}$ is understood as a limit over all sequences of open sets $B_n$ whose intersection is $\{x\}$ and whose diameters tend to $0$ as $n \to \infty$. The characteristic operator is an extension of $\mathcal{A}$, that is $\mathcal{D}_{\mathcal{A}} \subset \mathcal{D}_{\mathcal{U}}$ and $\mathcal{U}\vert_{\mathcal{D}_{\mathcal{A}}}=\mathcal{A}$. For a broad description of the characteristic operator and its relation to the generator of $X_t$ we refer the reader to \cite[Chapter V]{MR0193671}.
Instead of the whole ${\R^{d}}$, one can consider a process $X$ killed after exiting $D$. By $p_t^D(x,y)$ we denote its transition density (or, in other words, the fundamental solution of $\partial_t-\mathscr{L}$ in $D$). We have \begin{align*} p_t^D(x,y)=p_t(x,y)-\mathbb{E}^x[\tau_D<t;p_{t-\tau_D}(X_{\tau_D},y)], \quad x,y\in {\R^{d}}. \end{align*} It follows that $0\leqslant p_t^D \leqslant p_t.$ By $P_D(x,\, \textnormal{d} z)$ we denote the distribution of $X_{\tau_D}$ with respect to $\mathbb{P}^x$, that is $P_D(x,A) = \mathbb{P}^x(X_{\tau_D} \in A)$. We call $P_D(x,\, \textnormal{d} z)$ a \emph{harmonic measure} and its density $P_D(x,z)$ on ${\R^{d}} \setminus \overline{D}$ with respect to the Lebesgue measure --- a \emph{Poisson kernel}. For $g: D^c \mapsto \mathbb{R}$ we let \begin{align*} P_D[g](x) = \int_{D^c} g(z) P_D(x,\, \textnormal{d} z), \quad x \in D, \end{align*} if the integral exists. For $x \in D^c$ we set $P_D[g](x)=g(x)$.
\begin{rem}\label{rem:PDg_Lspace}
If $D$ is an open bounded set and $g \in \mathcal{L}^1(D^c)$ then $P_D[g] \in \mathcal{L}^1$. Indeed, since $P_D[g] \equiv g$ on $D^c$, it is enough to prove that $P_D[g] \in L^1(D)$. By the mean-value property, for any $B \subset \subset D$ we have $P_B[P_D[g]](x)=P_D[g](x)$ for $x \in B$. It follows, by the Ikeda-Watanabe formula, that for any fixed $x \in B$
\begin{align*}
\infty>\int_{B^c} P_B(x,z) P_D[g](z) \, \textnormal{d} z \geqslant c \int_{A \cap D} P_D[g](z) \, \textnormal{d} z,
\end{align*}
where $A=B^c\cap(B+\mathrm{supp}(\nu)/2)$. Arbitrary choice of $B$ yields the claim. \end{rem}
We define a Green function for the set $D$ \begin{align*} G_D(x,y)=\int_0^{\infty} p_t^D(x,y) \, \textnormal{d} t, \quad x,y \in D, \end{align*} and the Green operator \begin{align*} G_D[f](x) = \int_D G_D(x,y)f(y)\, \textnormal{d} y. \end{align*} We note that $G_D(x,y)$ can be interpreted as the occupation time density up to the exit time $\tau_D$, and $G_D[f]$ --- as the expected value of $\int_0^{\tau_D} f(X_t) \, \textnormal{d} t$. Using this we obtain $G_D[{\bf 1}](x)=\mathbb{E}^x \tau_D$. For bounded sets $D$ we have $\sup_{x \in {\R^{d}}} \mathbb{E}^x \tau_D < \infty$ (\cite{MR632968}, \cite{MR3350043}). By the strong Markov property for any open $\Omega\subset D$ we have \begin{align}\label{G_D_subset} G_D(x,y)=G_\Omega(x,y)+\mathbb{E}^xG_D(X_{\tau_\Omega},y), \quad x,y \in \Omega. \end{align} Obviously we have $G_{{\R^{d}}}=U$. If $U$ is well-defined (finite) a.s., the well-known Hunt formula holds: \begin{align*} G_D(x,y)=U(y-x)-\mathbb{E}^x U(y-X_{\tau_D}), \quad x,y \in D. \end{align*} In the case of compensated potential kernels, a similar formula is valid, namely, \begin{align}\label{eq:Hunt} G_D(x,y)=G(y-x)-\mathbb{E}^x G(y-X_{\tau_D}), \quad x,y \in D. \end{align} See \autoref{thm:recurrent_sweeping_formula}. \begin{defn}\label{mean_value_property}
We say that a function $g:\mathbb{R}^d \to \mathbb{R}$ satisfies the mean-value
property in an open set $D \subset \mathbb{R}^d$ if $g(x)=P_D[g](x)$ for all
$x \in D$. Here we assume that the integral is absolutely convergent. If $g$ has the mean-value property in every bounded open set whose
closure is contained in $D$, then $g$ is said to have the mean-value property
inside $D$. \end{defn}
Clearly if $f$ has the mean-value property inside $D$, then $\mathcal{U} f=0$ in $D$.
In general, functions with the mean-value property lack sufficient regularity if no additional assumptions are imposed. In our setting, however, we can show that they are, in fact, twice continuously differentiable in $D$.
\begin{lem}\label{lem:harm_c2}
Let $g \in \mathcal{L}^1$ and $D$ be an open set. Suppose that (A) and \eqref{growth_condition} hold. If $g$ has the mean-value property inside $D$, then $g \in C^2_{\operatorname{loc}}(D)$. \end{lem}
The proof is similar to the proof of \cite[Theorem 4.6]{BGPR2017} and is omitted.
\begin{defn}[\cite{MR1132313}, \cite{MR3713578}] \label{def:Kato_class}
We say that a Borel function $f$ belongs to the Kato class $\mathcal{K}$ if it
satisfies the following condition
\begin{align}\label{Kato_class}
\lim_{r \to 0} \left[ \sup_{x \in {\R^{d}}} \int^r_0 P_t|f|(x) \, \textnormal{d} t \right] =0.
\end{align}
We say that $f \in \mathcal{K}(D)$, where $D$ is an open set, if $f {\bf 1}_D \in
\mathcal{K}$. \end{defn} This is one of three conditions discussed by Zhao in \cite{MR1132313}. A detailed description of different notions of the Kato class and related conditions can be found in \cite{MR3713578}.
\begin{lem}\label{lem:Kato_green_op_bounded} Let $V \subset \subset D$ and $\rho:=\dist(V,\partial D)$. Suppose $f \in \mathcal{K}(D \setminus \overline{V})$. Then $G_D[f]$ is bounded in $V_1:=\{x \in D \setminus V\! : \delta_D(x)<\rho/2\}$. \end{lem} \begin{proof}
Let $x \in V_1$ and define $V_2:=\{x \in D \setminus V\! : \delta_D(x)<3\rho/4\}$. We have
\begin{align*}
\left\lvert G_D[f{\bf 1}_{V_2^c}](x) \right\rvert \leqslant \int_{V_2^c} G_D(x,y) \left\lvert f(y) \right\rvert \, \textnormal{d} y.
\end{align*}
Let $r=2\sup_{x \in D}|x|$. Then $D \subset B_r$ and by \cite[Theorem $1.3$]{MR3729529}
\begin{align*}
\int_{V_2^c} G_D(x,y) \left\lvert f(y) \right\rvert \, \textnormal{d} y \leqslant \int_{V_2^c} G_{B_r}(x,y) \left\lvert f(y) \right\rvert \, \textnormal{d} y \leqslant c(\rho) \norm{f}_1.
\end{align*}
Moreover, by \eqref{G_D_subset}
\begin{align*}
G_D[f {\bf 1}_{V_2}](x) = G_{D\setminus V}[f{\bf 1}_{V_2}](x)+\mathbb{E}^x G_D[f{\bf 1}_{V_2}]\( X_{\tau_{D \setminus V}} \).
\end{align*}
Observe that
\begin{align*}
\left\lvert \mathbb{E}^x G_D [f {\bf 1}_{V_2}](X_{\tau_{D\setminus V}}) \right\rvert \leqslant \mathbb{E}^x \int_{V_2} G_D(X_{\tau_{D\setminus V}},y) \left\lvert f(y) \right\rvert \, \textnormal{d} y \leqslant c(\rho/4) \norm{f}_1
\end{align*}
again by \cite[Theorem $1.3$]{MR3729529}. Finally, we have
\begin{align*}
\left\lvert G_{D \setminus V}[f{\bf 1}_{V_2}](x) \right\rvert \leqslant G_{D \setminus V}[\left\lvert f \right\rvert {\bf 1}_{D \setminus V} ](x).
\end{align*}
A straightforward application of the proof of \cite[Theorem $4.3$]{MR1329992} to the last term gives the claim. \end{proof}
\begin{prop}
If $f$ satisfies \eqref{main_thm_cond1} then it is uniformly continuous in $D$. If \eqref{main_thm_cond2} holds then $\frac{\partial}{\partial x_i}f$, $i=1,...,d$, is uniformly continuous in $D$. \end{prop} \begin{proof}
Suppose $\frac{\partial}{\partial x_i}f$ for some $i=1,...,d$ is not uniformly continuous, i.e. $\omega_{\nabla f}(t,D)\geqslant c>0$ for $t \leqslant 1$. If \eqref{main_thm_cond2} holds then in particular
\begin{align*}
\infty > \int_0^{1/2} S(t) \omega_{\nabla f}(t,D) t^{d-1}\, \textnormal{d} t \geqslant c
\int_0^{1/2} |G'(t)|t^{d-1} \, \textnormal{d} t,
\end{align*}
which is a contradiction. Now let $\omega_f(t,D) \geqslant c$ for $t \leqslant 1$, and suppose \eqref{main_thm_cond1}. For $d \geqslant 3$ we have
\begin{align*}
\infty > \int_0^{1/2} S(t) \omega_f(t,D) t^{d-1}\, \textnormal{d} t \geqslant c\int_0^{1/2}
|G''(t)| t^{d-1}\, \textnormal{d} t.
\end{align*}
By integration by parts
\begin{align*}
\int_0^{1/2} G''(t) t^{d-1}\, \textnormal{d} t =
G'(t)t^{d-1}\Big|_0^{1/2}-(d-1)\int_0^{1/2}G'(t)t^{d-2} \, \textnormal{d} t.
\end{align*}
Observe that $G'$ is of constant sign. Hence, both $\lim_{t \to 0^+} G'(t)t^{d-1}$ and the integral are finite. In particular, integration by parts once again yields
\begin{align*}
\int_0^{1/2} G'(t)t^{d-2} \, \textnormal{d} t
=G(t)t^{d-2}\Big|_0^{1/2}-(d-2)\int_0^{1/2} G(t)t^{d-3} \, \textnormal{d} t.
\end{align*}
Both $\lim_{t \to 0^+} G(t)t^{d-2}$ and the integral are non-negative, hence both must be finite. By \cite[Propositions $1$ and $2$]{MR3225805} we have $\int^r_{0}G(t)t^{d-1}\, \textnormal{d} t \geqslant c \psi(1/r)^{-1}$. It follows that
\begin{align*}
\int_0^1 G(t)t^{d-3}\, \textnormal{d} t &= c\int_{B_1} \frac{G(|x|)}{|x|^2} \, \textnormal{d} x = c\int_{B_1}
\int_{|x|}^{\infty}\frac{2}{s^3}\, \textnormal{d} s \ G(|x|) \, \textnormal{d} x = c\int_{0}^{\infty} \, \textnormal{d} s
\frac{2}{s^3} \int_{B_{1 \wedge s}} G(|x|) \, \textnormal{d} x \\ &\geqslant c\int_0^1
\frac{1}{\psi(1/s)s^2}\,\frac{\, \textnormal{d} s}{s} \geqslant c\int_1^{\infty}
\frac{u^2}{\psi(u)}\,\frac{\, \textnormal{d} u}{u} \geqslant c\int_1^{\infty} \,\frac{\, \textnormal{d} u}{u}=\infty,
\end{align*}
which is a contradiction; here $c$ denotes a positive constant that may change from place to place, and the second inequality is the substitution $u=1/s$. Now let $d=2$. By the same argument
\begin{align*}
\int_0^{1/2} G''(t)t \, \textnormal{d} t = G'(t)t\Big|_0^{1/2}-\int_0^{1/2} G'(t) \, \textnormal{d} t
\end{align*}
and we conclude that the integral is finite. Hence, $\lim_{t \to 0^+}G(t)<\infty$. By \cite[Theorems 41.5 and 41.9]{MR1739520} we get a contradiction. Finally, for $d=1$ we get that $\lim_{t \to 0^+}G'(t)<\infty$. It follows that $\limsup_{t \to 0^+}G(t)/t<\infty$. Due to \cite[Theorem $16$]{MR1406564} and \cite[Lemma 2.14]{MR3636597} we obtain that
\begin{align*}
\liminf_{x\to\infty}\psi(x)/x^2>0,
\end{align*}
which is a contradiction, since $\limsup_{x\to\infty}\psi(x)/x^2=0$. \end{proof} \begin{lem}\label{lem:conv_fact}
Let $D$ be bounded and open and $k \in \mathbb{N}$. If $g \in C_{\operatorname{loc}}^k({\R^{d}} \setminus \{ 0\}) \cap L_{\operatorname{loc}}^1$ and $f \in C^k(D)$, then $g \ast f \in C_{\operatorname{loc}}^k(D)$. \end{lem}
\begin{proof}
Fix $x_0 \in D$. Let $l=\delta_D(x_0)$. Let $\chi_1, \chi_2 \in C^{\infty}({\R^{d}})$ be such that ${\bf 1}_{B(x_0,l/4)} \leqslant \chi_1 \leqslant {\bf 1}_{B(x_0,l/2)}$ and ${\bf 1}_{B_{l/8}^c} \leqslant \chi_2 \leqslant {\bf 1}_{B_{l/16}^c}$. Observe that $g \ast f = g \ast (f \chi_1) + (g \chi_2) \ast (f(1-\chi_1))$ on $B\left(x_0,l/8\right)$. Since $f \chi_1, g \chi_2 \in C^k({\R^{d}})$, it follows that $g \ast f \in C^k(B(x_0,l/8))$. Since $x_0$ was arbitrary, the claim follows. \end{proof} A consequence of \autoref{lem:conv_fact} is the following corollary. \begin{cor}\label{cor:ind_fact} Let $D$ be open and bounded and $k \in \mathbb{N}$. If $g \in C_{\operatorname{loc}}^k({\R^{d}} \setminus\{ 0\}) \cap L_{\operatorname{loc}}^1$ then $g \ast {\bf 1}_D \in C_{\operatorname{loc}}^k(D)$. \end{cor}
The following lemma is crucial in one of the proofs.
\begin{lem}[{\cite[Proposition 3.2]{MR3729529}}]\label{lem:harmonic_radial_kernel}
Let $X_t$ be an isotropic unimodal L\'{e}vy process in ${\R^{d}}$. For every $r>0$ there is a radial kernel function $\overline{P}_{r}(z)$
and a constant $C(r)>0$ such that $\overline{P}_{r}(z)=C(r)$ for $z \in B_r$, $0 \leqslant \overline{P}_{r}(z) \leqslant C(r)$ for $z \in {\R^{d}}$ and the profile function of $\overline{P}_{r}$ is non-increasing. Furthermore, if $f$ has the mean-value property in $B(x,r)$, then
\begin{align*}
f(x)=\int_{{\R^{d}}} f(z)\overline{P}_{r}(x-z)\, \textnormal{d} z = f \ast \overline{P}_{r}(x).
\end{align*} \end{lem}
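For instance, for Brownian motion one may take for $\overline{P}_{r}$ the normalized indicator of the ball: by the classical mean-value property, a function $f$ which is harmonic in $B(x,r)$ satisfies
\begin{align*}
f(x)=\frac{1}{|B_r|}\int_{B(x,r)} f(z)\, \textnormal{d} z = f \ast \frac{{\bf 1}_{B_r}}{|B_r|}(x),
\end{align*}
so that $\overline{P}_{r}={\bf 1}_{B_r}/|B_r|$ and $C(r)=|B_r|^{-1}$.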
\section{Weak solutions} \label{sec:weak-solutions}
The aim of this section is to prove \autoref{thm:weak_thm}. For the fractional Laplacian, related results are known, cf. \cite[Section $3$]{MR1671973}. A similar result has recently been obtained in \cite{MR3461641} using purely analytic methods instead of the probabilistic ones exploited in \cite{MR1671973}. When the generalization of these results to more general nonlocal operators is immediate, we omit the proof.
\begin{lem}\label{lem:prob_harmonic_to_anihilation}
Suppose $u \in \mathcal{L}^1({\R^{d}})$ has the mean-value property inside $D$ with respect to $X_t$. Then $\mathscr{L}
u=0$ in $D$ in distributional sense. \end{lem} \begin{proof}
Let $\varphi \in C_c^{\infty}(D)$ and $\phi_{\epsilon}$ be a standard mollifier
(i.e. $\phi_{\epsilon} \in C^{\infty}({\R^{d}})$ and $\supp \phi_{\epsilon} =
\overline{B}_{\epsilon}$). Using \eqref{growth_condition} it is easy to check that $\phi_{\epsilon} \ast u \in \mathcal{L}^1({\R^{d}}) \cap C^{\infty}(D)$. Hence, $\mathscr{L}(\phi_{\epsilon} \ast u)$ can be calculated
pointwise for $x \in D$ and we have
\begin{align*}
(\phi_{\epsilon} \ast u, \mathscr{L} \varphi) = (\mathscr{L} (\phi_{\epsilon} \ast u), \varphi).
\end{align*} We consider the Dynkin characteristic operator $\mathcal{U}$. Since it is an
extension of $\mathscr{L}$ and is translation-invariant, we obtain
\begin{align*}
\mathscr{L} (\phi_{\epsilon} \ast u) = \mathcal{U} (\phi_{\epsilon} \ast u) = \phi_{\epsilon}
\ast \mathcal{U} u.
\end{align*}
We have $\mathcal{U} u(x)=0$ for $x \in D$, hence
\begin{align*}
0=(\mathscr{L}(\phi_{\epsilon} \ast u), \varphi)=(\phi_{\epsilon} \ast u, \mathscr{L} \varphi), \quad \varphi \in C_c^{\infty}(D_{\epsilon}),
\end{align*}
where $D_{\epsilon}=\{ x \in D: \delta_D(x)>\epsilon \}$. Passing $\epsilon \to 0$ we get the claim. \end{proof}
The following lemma is a generalization of \cite[Theorem 3.9 and Corollary 3.10]{MR1671973}, where the fractional Laplace operator is considered. \begin{lem}\label{lem:anihilation_to_prob_harmonic_C2}
Let $u \in \mathcal{L}^1({\R^{d}}) \cap C^2_{\operatorname{loc}}(D)$ be a solution of $\mathscr{L} u=0$ in $D$ in distributional sense. Then $u$ has the mean-value property inside $D$. \end{lem} \begin{proof} Since $u \in \mathcal{L}^1({\R^{d}}) \cap C^2_{\operatorname{loc}}(D)$, $\mathscr{L} u(x)$ can be calculated pointwise for $x \in D$. Fix $D_1 \subset \subset D$ and define $\widetilde{u}(x)=P_{D_1}[u](x)$, $x
\in {\R^{d}}$. By the strong Markov property we may assume that $D_1$ is a Lipschitz domain. We claim that $\widetilde{u}$ has the mean-value property in $D_1$.
Indeed, let $D_2$ be an open set relatively compact in $D$ such that
$\overline{D_1} \subset D_2$. There exist functions $u_1$, $u_2$ on $D_1^c$ such that $u=u_1+u_2$, $u_1$ is continuous and bounded on $D_1^c$ and $u_2 \equiv 0$ in $D_2$. We have
\begin{align*}
\widetilde{u}(x) = P_{D_1}[u_1](x)+P_{D_1}[u_2](x),
\quad x \in {\R^{d}}.
\end{align*} The first integral is clearly absolutely convergent. We claim that it is also continuous as a function of $x$ in $\overline{D_1}$. Indeed, by \autoref{lem:harmonic_radial_kernel} it is continuous in $D_1$. Let $x_0 \in \partial D_1$. For every $\epsilon>0$ there exists $\delta>0$ such that
\begin{align*} \left\lvert \int_{D_1^c} P_{D_1}(x,z)u_1(z) \, \textnormal{d} z - u_1(x_0) \right\rvert \leqslant \epsilon + \norm{u_1}_{\infty} \mathbb{P}^x \(\left\lvert X_{\tau_{D_1}}-x_0 \right\rvert >\delta \) . \end{align*}
Since the second term tends to $0$ as $x \to x_0$ (see \cite[Lemmas 2.1 and 2.9]{MR3350043}) and $\epsilon$ was arbitrary, the claim follows.
Furthermore, from
monotonicity of $1 \wedge \nu^*(h)$ we obtain
\begin{align*}
P_{D_1}(x,z) \leqslant \big(1 \wedge \nu^*(\dist(z,D_1)) \big) \mathbb{E}^x \tau_{D_1},
\quad x \in D_1, \ z \in D_2^c.
\end{align*}
Since $u \in \mathcal{L}^1({\R^{d}})$, \eqref{measure_scaling} implies the absolute convergence of the second integral. Since by \cite[Lemma $2.9$ and Remark $2$]{MR3350043} $\mathbb{E}^x \tau_{D_1} \in C_0(D_1)$, it is continuous as well. Hence $\widetilde{u}$ is continuous and has the mean-value property in $D_1$. Note that $\widetilde{u}=u$ on $D_1^c$, since $D_1$ is a Lipschitz domain.
Let $h=\widetilde{u}-u$. We now verify that $h \equiv 0$, so that $u=\widetilde{u}$ has the mean-value property in $D_1$. Since $\mathscr{L} u=0$ in $D_1$, from \autoref{lem:prob_harmonic_to_anihilation} we have $\mathscr{L} h(x)=0$ for $x \in D_1$. Observe that $h$ is continuous and compactly supported. Suppose that it attains a positive maximum at $x_0 \in D_1$; then
\begin{align*} 0=\mathscr{L} h(x_0)= \int_{{\R^{d}}}\(h(y)-h(x_0)\)\nu(x_0-y)\, \textnormal{d} y , \end{align*}
which implies that $h$ is constant on $\mathrm{supp}(\nu)+x_0$. If $D_1\subset \mathrm{supp}(\nu)+x_0$, we get that $h\leqslant 0$. If not, iterating this argument gives for any $n\in\mathbb{N}$ that $h$ is constant on $n\,\mathrm{supp}(\nu)+x_0$, and consequently $h\leqslant 0$. Similarly, $h$ must be non-negative, hence $h \equiv 0$. \end{proof}
\begin{lem}\label{lem:anihilation_to_prob_harmonic}
Let $u \in \mathcal{L}^1({\R^{d}})$ be a solution of $\mathscr{L} u=0$ in $D$ in distributional sense.
Then $u$ has the mean-value property inside $D$. \end{lem}
\begin{proof}
Let $\Omega \subset \subset D$ be a bounded Lipschitz domain. By \cite{MR1825650} and the
Ikeda-Watanabe formula we have that the harmonic measure $P_\Omega(x,\, \textnormal{d} z)$ is
absolutely continuous with respect to the Lebesgue measure. Define $\rho = (1 \wedge \dist (\Omega,D^c))/2$ and let $V=\Omega+B_{\rho}$. For $\epsilon < \rho/2$ we consider standard mollifiers $\phi_{\epsilon}$ (i.e. $\phi_{\epsilon} \in C^{\infty}({\R^{d}})$ and $\supp \phi_{\epsilon} = \overline{B}_{\epsilon}$). Since $\mathscr{L}$ is translation-invariant we have that $\mathscr{L}(\phi_{\epsilon} \ast u)= \mathscr{L} u \ast \phi_{\epsilon}=0$ in $V_{\epsilon}=\{ x \in D: \dist(x,V^c)>\epsilon \}$
in distributional sense. By \autoref{lem:anihilation_to_prob_harmonic_C2} we
obtain
\begin{align*}
\phi_{\epsilon} \ast u(x) = P_\Omega [ \phi_{\epsilon} \ast u] (x), \quad x
\in \Omega.
\end{align*}
Note that $u \in L_{\operatorname{loc}}^1$ implies $\phi_{\epsilon} \ast u \to u$ in $L_{\operatorname{loc}}^1$.
Hence, up to a subsequence
\begin{align*}
\lim\limits_{\epsilon \to 0} \phi_{\epsilon} \ast u(x) = u(x) \quad
\text{a.e.}
\end{align*}
Moreover, since $\phi_{\epsilon}\ast u$ has the mean-value property in
$\overline{V}_{\rho/2}$, by \autoref{lem:harmonic_radial_kernel}
\begin{align*}
\phi_{\epsilon} \ast u(z)
= \phi_{\epsilon} \ast u \ast \overline{P}_{r}(z)
\end{align*}
for a fixed $0<r<\rho/4$. Hence, for any $E \subset \Omega^c$
\begin{align*}
P_\Omega[\left\lvert \phi_{\epsilon} \ast u \right\rvert;V_{\rho/2}\cap E](x) &\leqslant \int_{V_{\rho/2} \cap \Omega^c\cap E} \left\lvert \phi_{\epsilon} \ast u(z) \right\rvert P_\Omega(x,z) \, \textnormal{d} z \\ &= \int_{V_{\rho/2}\cap \Omega^c \cap E} \left\lvert \phi_{\epsilon} \ast u \ast \overline{P}_{r}(z) \right\rvert P_\Omega(x,z) \, \textnormal{d} z \\ &\leqslant \int_{B_{\epsilon}} \phi_{\epsilon}(s) \int_{{\R^{d}}} \left\lvert u(y) \right\rvert \int_{V_{\rho/2} \cap \Omega^c \cap E} \overline{P}_{r}(z-y-s)P_\Omega(x,z) \, \textnormal{d} z \, \textnormal{d} y \, \textnormal{d} s.
\end{align*}
Let $c=2\sup_{x \in V}|x|$. Then from boundedness of $\overline{P}_{r}$
and local integrability of $u$ we get
\begin{align*}
\int_{|y|\leqslant c} |u(y)| \int_{V_{\rho/2} \cap \Omega^c \cap E}
\overline{P}_{r}(z-y-s)P_\Omega(x,z) \, \textnormal{d} z \, \textnormal{d} y &\leqslant C \int_{|y|\leqslant c}
|u(y)| \, \textnormal{d} y \int_{E} P_\Omega(x,z) \, \textnormal{d} z \\ &\leqslant C \norm{u}_{\mathcal{L}^1} \int_{E} P_\Omega(x,z) \, \textnormal{d} z.
\end{align*}
Furthermore, for $|y|>c$ we have $|z-y-s|>r$, hence $\overline{P}_{r}(z-y-s) \leqslant P_{B_r}(0,z-y-s)$. From \eqref{growth_condition} and monotonicity of the L\'{e}vy measure we get
\begin{align*}
P_{B_r}(0,z-y-s) \leqslant \big( 1 \wedge \nu^*(|z-y-s|-r) \big) \mathbb{E}^0 \tau_{B_r} \leqslant C
(1 \wedge \nu^*(|y|)).
\end{align*}
Thus,
\begin{align*}
\int_{|y|>c} |u(y)| \int_{V_{\rho/2}\cap \Omega^c \cap E}
\overline{P}_{r}(z-y-s)P_\Omega(x,z) \, \textnormal{d} z \, \textnormal{d} y
&\leqslant C\norm{u}_{\mathcal{L}^1} \int_E P_\Omega(x,z) \, \textnormal{d} z.
\end{align*}
It follows that $\phi_{\epsilon} \ast u$ are uniformly integrable with respect to the measure $P_\Omega(x,z)\, \textnormal{d} z$ in $V_{\rho/2}$. By the Vitali convergence theorem
\begin{align*} \lim\limits_{\epsilon \to 0} P_\Omega [ \phi_{\epsilon} \ast u; V_{\rho/2} ](x) = P_\Omega [u; V_{\rho/2} ](x). \end{align*}
It remains to show that $\lim_{\epsilon \to 0} P_\Omega [ \phi_{\epsilon} \ast u; V_{\rho/2}^c ] = P_\Omega [u; V_{\rho/2}^c]$. Since $\dist(\Omega,V_{\rho/2}^c)\geqslant\rho/2$, by the Ikeda-Watanabe formula
\begin{align*} P_\Omega [\phi_{\epsilon} \ast u;V^c_{\rho/2}](x) &= \int_{V^c_{\rho/2}} \phi_{\epsilon} \ast u(z) \int_\Omega G_\Omega(x,y) \nu(z-y) \, \textnormal{d} y \, \textnormal{d} z \\ &= \int_{B^c_{\rho/2}} \nu(z) \, \textnormal{d} z \int_\Omega \phi_{\epsilon} \ast u(z+y) {\bf 1}_{V^c_{\rho/2}}(z+y)G_\Omega(x,y) \, \textnormal{d} y. \end{align*}
Using the fact that $\int_\Omega G_\Omega(x,y) \, \textnormal{d} y=\mathbb{E}^x \tau_\Omega<\infty$, $\nu(B_{\rho/2}^c)<\infty$ and $\lim_{\epsilon \to 0} \phi_{\epsilon} \ast u = u$ in $\mathcal{L}^1({\R^{d}})$ we obtain
\begin{align*} \lim_{\epsilon \to 0} P_\Omega [ \phi_{\epsilon} \ast u; V_{\rho/2}^c ] = P_\Omega [u; V_{\rho/2}^c]. \end{align*} Thus $u(x) = P_\Omega[u](x)$ for a.e. $x \in \Omega$. \end{proof}
Combining \autoref{lem:prob_harmonic_to_anihilation} and \autoref{lem:anihilation_to_prob_harmonic} we obtain the following result.
\begin{thm}\label{thm:prob:harmonic_anihilation} Let $D$ be an open set and $u \in \mathcal{L}^1$. Then $u$ has the mean-value property inside $D$ if and only if $\mathscr{L} u = 0$ in distributional sense. \end{thm}
\begin{lem}\label{lem:Gd_weak_solution} Let $D$ be a bounded open set and $f \in L^1(D)$. Then $-G_D[f]$ is a distributional solution of \eqref{eq:weak_problem} with $g \equiv 0$. \end{lem}
\begin{proof}
First assume $f$ is continuous. Then by \cite[Chapter V]{MR0193671} we have \begin{align*} \mathcal{U} G_D[f](x)=-f(x), \quad x \in D. \end{align*}
Let $\phi_{\epsilon}$, $\epsilon>0$, be a standard mollifier. Since $\mathcal{U}$ is an extension of $\mathscr{L}$ and is translation-invariant we get
\begin{align*} \mathscr{L} (\phi_{\epsilon} \ast G_D[f]) = \mathcal{U} (\phi_{\epsilon} \ast G_D[f]) = \phi_{\epsilon} \ast \mathcal{U} G_D[f] = -\phi_{\epsilon} \ast f. \end{align*}
Thus \begin{align*} (-\phi_{\epsilon} \ast G_D[f],\mathscr{L} \varphi) = (\phi_{\epsilon} \ast f,\varphi). \end{align*} Passing $\epsilon \to 0$ we obtain
\begin{align*} (-G_D[f], \mathscr{L} \varphi) = (f,\varphi), \quad \varphi \in C_c^{\infty}({\R^{d}}). \end{align*}
In the general case, since $D$ is bounded, we have $\norm{G_D[f]}_{L^1} \leqslant \norm{G_D [1]}_{\infty} \norm{f {\bf 1}_D}_{L^1}$ and
\begin{align*} \norm{G_D[1]}_{\infty} = \sup\limits_{x \in {\R^{d}}} G_D[1](x) = \sup\limits_{x \in {\R^{d}}} \mathbb{E}^x \tau_D \leqslant \mathbb{E}^0 \tau_{B(0, \diam (D))} < \infty. \end{align*}
Using mollification of $f$ we get the claim. \end{proof}
\begin{proof}[\bf{Proof of \autoref{thm:weak_thm}}]
Let $h = u + G_D[f]$. By \autoref{lem:Gd_weak_solution} $h$ is a harmonic function in distributional sense. Hence, by \autoref{lem:anihilation_to_prob_harmonic} $h$ has the mean-value property, which finishes the first claim.
Now let $f, g \ast \nu \in \mathcal{K}(D \setminus \overline{V})$ and let $D$ be a Lipschitz domain. Then
\begin{align*}
\widetilde{u}(x) = -G_D[f](x) + P_D[g](x)
\end{align*}
is a solution of \eqref{eq:weak_problem} which is bounded near the boundary. Let $U_n \nearrow D$ be a sequence of Lipschitz domains approaching $D$. We have
\begin{align*}
P_{U_n}[h](x) = P_{U_n}[h;D^c](x) + P_{U_n}[h;\overline{D}\setminus U_n](x).
\end{align*}
By the dominated convergence theorem $P_{U_n}[h;D^c](x) \xrightarrow[]{n \to \infty} P_D[h;D^c](x)=P_D[g](x)$. Note that by our additional assumptions on $g$ and $\nu$ we have that $P_D[g]$ is well-defined. Furthermore, since $f \in \mathcal{K}(D \setminus \overline{V})$, there exists $n_0 \in \mathbb{N}$ such that for $n \geqslant n_0$ we have $V \subset U_n$. From boundedness of $u$ and \autoref{lem:Kato_green_op_bounded} we get that $h$ is bounded in $\overline{D} \setminus U_n$ for $n > n_0$ and
\begin{align*} P_{U_n}[h;\overline{D}\setminus U_n](x) \leqslant C P_{U_n}\(x, \overline{D} \setminus U_n\)\,. \end{align*}
By \cite[Theorem 1]{MR1825650} we have \begin{align*} P_{U_n} \( x, \overline{D} \setminus U_n \) \xrightarrow[]{n \to \infty} P_D \(x, \partial D\)=0. \end{align*}
Hence, $u=\widetilde{u}$. \end{proof}
\section{The sufficient condition for twice
differentiability}\label{sec:sufficient_condition}
In this section, we provide auxiliary technical results and the proof of \autoref{thm:main_thm}. Throughout this section we assume that $D \subset \mathbb{R}^d$ is an open bounded set. The following lemmas are modifications of Lemmas 2.2 and 2.3 in \cite{MR521856}.
\begin{lem}\label{lem:lemma22modification} Suppose $f$ is a uniformly continuous function on $D$ and $H(x,y)$ is a continuous function for $x,y \in D$, $x \neq y$ satisfying
\begin{align*}
|H(x,y)| \leqslant F(|x-y|), \quad \left\lvert \frac{\partial H(x,y)}{\partial x_i} \right\rvert \leqslant
\frac{F(|x-y|)}{|x-y|}, \quad i=1,...,d \end{align*}
for some non-increasing function $F\!:(0,\infty) \to [0,\infty)$. If the following holds
\begin{align}\label{int_condition}
\int_0^{1/2} F(t)\omega_f(t,D)t^{d-1}\, \textnormal{d} t < \infty, \end{align}
then the function $g(x) = \int_D H(x,y) \left(f(y) - f(x)\right) \, \textnormal{d} y$ is uniformly continuous in $D$. \end{lem}
\begin{rem}\label{rem:omega*_remark} The integral condition \eqref{int_condition} and boundedness of the integrand for $1/2 \leqslant t \leqslant \diam(D)$ imply that
\begin{align*}
\int_0^{\diam(D)} F(t)\omega_f(t,D)t^{d-1} \, \textnormal{d} t < \infty. \end{align*}
Moreover, \begin{align*} \lim_{h \to 0} h \int_h^{\diam(D)} F(t) \omega_f(t,D)t^{d-2} \, \textnormal{d} t = 0. \end{align*}
Indeed, clearly we have \begin{align*}
h \int_h^{\diam(D)} F(t) \omega_f(t,D)t^{d-2} \, \textnormal{d} t = \int_0^{\diam(D)} {\bf 1}_{[h,\infty)}(t) F(t) \omega_f(t,D)t^{d-1} \frac{h}{t} \, \textnormal{d} t. \end{align*}
Since ${\bf 1}_{[h,\infty)}(t)h/t\leqslant 1$, the claim follows by the dominated convergence theorem. \end{rem}
\begin{proof}
First note that by integration in polar coordinates one can check that
the integral defining $g$ actually exists. Let $\epsilon > 0$. Let $0<h < \delta(D)$ and let $x$ and $z$ be arbitrary fixed points in $D$ such that
$|x-z|=h$. Denote $j(x,y):=H(x,y)\( f(y)-f(x) \)$. Observe that
$|g(x)-g(z)|$ is bounded by the sum of two integrals $I_1$ and $I_2$ of $j(x,\cdot)-j(z,\cdot)$ over the sets $D \cap B(x,2h)$ and $D \setminus B(x,2h)$ respectively. On $D \cap B(x,2h)$ we have
\begin{align*}
I_1 &= \left\lvert \int_{D \cap B(x,2h)} H(x,y) \( f(y)-f(x) \) \, \textnormal{d} y
- \int_{D \cap B(x,2h)} H(z,y) \( f(y)-f(z) \) \, \textnormal{d} y \right\rvert \\
&\leqslant \int_{D \cap B(x,3h)} \left\lvert H(x,y) \right\rvert \left\lvert
f(y)-f(x) \right\rvert \, \textnormal{d} y +\int_{D \cap B(z,3h)} \left\lvert H(z,y)
\right\rvert \left\lvert f(y)-f(z) \right\rvert \, \textnormal{d} y \\ &\leqslant 2 \int_0^{3h}
F(t)\omega_f(t,D)t^{d-1}\, \textnormal{d} t < \frac{\epsilon}{3}
\end{align*}
for sufficiently small $h$. Obviously $I_2\leqslant I_3+I_4$, where
\begin{align*}
I_3 &:= \left\lvert \int_{D \setminus B(x,2h)} \( f(y) - f(z) \)
\( H(x,y) - H(z,y) \) \, \textnormal{d} y \right\rvert, \\
I_4 &:= \left\lvert f(z) - f(x) \right\rvert \left\lvert \int_{D \setminus
B(x,2h)} H(x,y) \, \textnormal{d} y \right\rvert.
\end{align*}
By the mean value theorem
\begin{align*}
I_3 \leqslant |x-z| \sum_{i=1}^d \int_{D \setminus B(x,2h)} \left\lvert
H_{x_i}(\widetilde{x},y) \right\rvert \left\lvert f(y)-f(z) \right\rvert
\, \textnormal{d} y
\end{align*}
for some $\widetilde{x} = \theta x + (1-\theta)z$, $\theta \in (0,1)$. Note
that for $y \in D \setminus B(x,2h)$ we have $|x-y|\geqslant 2|x-z|=2h>0$. It follows
that $|\widetilde{x}-y| \geqslant h$ and consequently
$|z-y| \leqslant |z-\widetilde{x}|+|\widetilde{x}-y|\leqslant 2|\widetilde{x}-y|$.
Thus,
\begin{align*}
I_3 &\leqslant Ch \int_{D \setminus B(x,2h)}
\frac{F(|\widetilde{x}-y|)}{|\widetilde{x}-y|} \left\lvert f(y)-f(z)
\right\rvert \, \textnormal{d} y \leqslant Ch \int_{D \setminus B(x,2h)}
\frac{F\left(|z-y|/2\right)}{|z-y|} \left\lvert f(y)-f(z) \right\rvert \, \textnormal{d} y
\\ &\leqslant Ch \int_{D \setminus B(z,h)}
\frac{F\left(|z-y|/2\right)}{|z-y|}
\left\lvert f(y)-f(z) \right\rvert \, \textnormal{d} y \leqslant Ch \int_{h}^{\diam(D)}
F\left(t/2\right) \omega_f(t,D) t^{d-2}\, \textnormal{d} t \\ &\leqslant Ch
\int_{h/2}^{\diam(D)/2} F(t) \omega_f(2t,D) t^{d-2}\, \textnormal{d} t.
\end{align*}
Thus, by \autoref{rem:omega*_remark} we see that $I_3<\epsilon/3$ for sufficiently small $h$. Finally, \eqref{int_condition} implies
\begin{align*}
I_4 &\leqslant \omega_f(h,D) \int_{D \setminus B(x,2h)} F(|x-y|) \, \textnormal{d} y =
\int_0^{\diam(D)} {\bf 1}_{[2h,\infty)}(t) F(t) \frac{\omega_f(h,D)}{\omega_f(t,D)} \omega_f(t,D) t^{d-1}\, \textnormal{d} t. \end{align*}
Observe that ${\bf 1}_{[2h,\infty)}(t) \frac{\omega_f(h,D)}{\omega_f(t,D)} \leqslant 1$ by monotonicity of $\omega_f(\cdot,D)$. Thus, \eqref{int_condition} justifies the application of the dominated convergence theorem and we obtain
\begin{align*}
\lim_{h \to 0} \omega_f(h,D) \int_{D \setminus B(x,2h)} F(|x-y|) \, \textnormal{d} y = 0. \end{align*}
In particular, $I_4\leqslant \epsilon/3$ for sufficiently small $h$. It follows that $\left\lvert g(x)-g(z) \right\rvert < \epsilon$, if $h$ is sufficiently small. Thus, $g$ is uniformly continuous. \end{proof}
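For instance, for the Riesz kernel $H(x,y)=|x-y|^{\alpha-d}$ with $0<\alpha<d$ one may take $F(t)=Ct^{\alpha-d}$, and \eqref{int_condition} becomes the Dini-type condition
\begin{align*}
\int_0^{1/2} \omega_f(t,D)\, t^{\alpha-1} \, \textnormal{d} t < \infty.
\end{align*}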
\begin{lem}\label{lem:lemma23modification}
Suppose $f$ is a uniformly continuous function on $D$ and $H(x,y)$ is a continuous
function for $x,y \in D$, $x \neq y$ such that $\int\limits_D H(x,y) \, \textnormal{d} y$ is
continuously differentiable with respect to $x$. Assume there exists a
non-increasing function $F\!:(0,\infty) \to [0,\infty)$ such that for $i,j=1,...,d$
\begin{align}\label{diff_properties}
|H(x,y)|, \left\lvert \frac{\partial
H(x,y)}{\partial x_i} \right\rvert \leqslant F(|x-y|), \quad
\left\lvert \frac{\partial^2 H(x,y)}{\partial x_i \partial x_j}
\right\rvert \leqslant \frac{F(|x-y|)}{|x-y|}.
\end{align} If the following holds
\begin{align}\label{lemma23condition}
\int_0^{1/2} F(t)\omega_f(t,D)t^{d-1}\, \textnormal{d} t < \infty,
\end{align}
then $u(x) = \int_D H(x,y)f(y) \, \textnormal{d} y$ is continuously differentiable
with respect to $x \in D$ and
\begin{align}\label{main_lem_thesis}
\frac{\partial u(x)}{\partial x_i} = \int_D \frac{\partial H(x,y)}{\partial
x_i} \( f(y)-f(x) \) \, \textnormal{d} y + f(x) \frac{\partial}{\partial x_i} \int_D
H(x,y) \, \textnormal{d} y, \quad x \in D, \quad i=1,...,d.
\end{align} \end{lem} \begin{proof}
Fix $s>0$. Let $V_s=\{ x \in D: \dist(x,\partial D) \geqslant s \}$. We will show
that \eqref{main_lem_thesis} holds for $x \in B(\overline{x},r)$, where $r>0$
is such that $B(\overline{x},4r)\subset V_s$. For $\epsilon < r$ we consider standard mollifiers $\phi_{\epsilon}(x)$ and set $f_{\epsilon}(x)= \phi_{\epsilon} \ast f$. Note that \begin{align}\label{modulus_inequality} \omega_{f_{\epsilon}}\left( h, B(\overline{x},2r) \right) \leqslant \omega_f(h,D). \end{align}
For $x \in D$ we define $u_{\epsilon}(x) = \int_D H(x,y)f_{\epsilon}(y)\, \textnormal{d} y$. From boundedness of $f_{\epsilon}$ we see that the integral defining $u_{\epsilon}$ is well defined and by the dominated convergence theorem $u_{\epsilon}(x) \to u(x)$ for $x \in V_s$, as ${\epsilon} \to 0$. By \autoref{lem:lemma22modification} applied to $\frac{\partial H(x,y)}{\partial x_i}$ we have that the function
\begin{align}\label{continuity_prop} \int_D \frac{\partial H(x,y)}{\partial x_i}
\( f_{\epsilon}(y)-f_{\epsilon}(x) \)
\, \textnormal{d} y + f_{\epsilon}(x) \frac{\partial}{\partial x_i} \int_D H(x,y)\, \textnormal{d} y \end{align}
is continuous on $V_s$. Let $x \in B(\overline{x},r)$. Integrating \eqref{continuity_prop} with respect to $x_i$ from $\overline{x}_i$ to $x_i$ we obtain a function $\Psi_{\epsilon}(x)$ which is continuously differentiable with respect to $x_i$, with \eqref{continuity_prop} being its derivative. Denote $x=(\tilde{x},x_d)$ and $\overline{x}=(\tilde{x},\overline{x}_d)$, where $\tilde{x}=(x_1,...,x_{d-1})$ and $\overline{x}_d$ is fixed. The Fubini theorem and integration by parts yield
\begin{align*} \Psi_{\epsilon}(x) &= \int_{\overline{x}_d}^{x_d} \( \int_D \frac{\partial H(\tilde{x},s,y)}{\partial s} \( f_{\epsilon}(y)-f_{\epsilon}(\tilde{x},s) \) \, \textnormal{d} y + f_{\epsilon}(\tilde{x},s) \frac{\partial}{\partial s} \int_D H(\tilde{x},s,y) \, \textnormal{d} y \) \, \textnormal{d} s \\ &= \int_D H(\tilde{x},s,y) \(
f_{\epsilon}(y)-f_{\epsilon}(\tilde{x},s) \) \Big|_{\overline{x}_d}^{x_d} \, \textnormal{d} y-\int_D \int_{\overline{x}_d}^{x_d} H(\tilde{x},s,y) \frac{\partial}{\partial s} \( f_{\epsilon}(y)-f_{\epsilon}(\tilde{x},s) \)
\, \textnormal{d} s \, \textnormal{d} y \\ &+ f_{\epsilon}(\tilde{x},s) \int_D H(\tilde{x},s,y) \, \textnormal{d}
y \Big|_{\overline{x}_d}^{x_d} - \int_{\overline{x}_d}^{x_d} \frac{\partial f_{\epsilon}(\tilde{x},s)}{\partial s} \int_D H(\tilde{x},s,y) \, \textnormal{d} y \, \textnormal{d} s = u_{\epsilon}(x) - u_{\epsilon}(\overline{x}). \end{align*}
Thus, for $x \in B(\overline{x},r)$ the partial derivative $\frac{\partial u_{\epsilon}(x)}{\partial x_d}$ exists and is equal to \eqref{continuity_prop}. The same argument applies to any $i=1,...,d$. It remains to prove that \eqref{continuity_prop} converges uniformly to \eqref{main_lem_thesis}, as $\epsilon \to 0$. Since $f_{\epsilon} \to f$ uniformly, as $\epsilon \to 0$, it is enough to prove the convergence of the first integral in \eqref{continuity_prop}. Fix $\delta > 0$. Since $\int_0^{\diam(D)} F(t)\omega_f(t, D) t^{d-1}\, \textnormal{d} t<\infty$, there is $\gamma > 0$ such that $\int_0^{\gamma} F(t)\omega_f(t, D) t^{d-1}\, \textnormal{d} t<\delta/4$. Then \eqref{modulus_inequality} implies
\begin{align}\label{proof_inequality} & \left\lvert \int_{B(x,\gamma)} \frac{\partial H(x,y)}{\partial x_i} \( f_{\epsilon}(y)-f_{\epsilon}(x) \) \, \textnormal{d} y - \int_{B(x,\gamma)} \frac{\partial H(x,y)}{\partial x_i} \( f(y) - f(x) \) \, \textnormal{d} y \right\rvert \nonumber \\ & \leqslant 2 \int_{B(x,\gamma)} \left\lvert \frac{\partial H(x,y)}{\partial x_i} \right\rvert
\omega_f(|x-y|,D) \, \textnormal{d} y \leqslant 2 \int_0^{\gamma} F(t) \omega_f(t,D)
t^{d-1}\, \textnormal{d} t < \frac{\delta}{2}.
\end{align}
On the complement of $B(x,\gamma)$ the function $\left\lvert \frac{\partial
H(x,y)}{\partial x_i} \right\rvert$ is bounded by some constant $C>0$. Choose $\epsilon_0>0$ such that $\norm{f_{\epsilon} -f}_{\infty} \leqslant
\delta/(4 C |D|)$ for $\epsilon<\epsilon_0$. Then
\begin{align*} \left\lvert \int_{D \setminus B(x,\gamma)} \frac{\partial H(x,y)}{\partial x_i} \( f_{\epsilon}(y)-f_{\epsilon}(x)-f(y)+f(x) \) \, \textnormal{d} y \right\rvert
\leqslant 2 \frac{\delta}{4 C |D|} |D| C = \frac{\delta}{2}, \end{align*}
which combined with \eqref{proof_inequality} and arbitrary choice of $\delta$ ends the proof. \end{proof}
Now we are ready to prove \autoref{thm:main_thm}.
\begin{proof}[\textbf{Proof of \autoref{thm:main_thm}}]
Let $u$ be of the form
\begin{align*}
u(x) &= -G_D[f](x) + P_D[g](x) \\
&= -\int_D G(x,y)f(y) \, \textnormal{d} y +
\int_D \mathbb{E}^x G(X_{\tau_D},y)f(y) \, \textnormal{d} y + P_D[g](x) \\ &=: I_1(x) + I_2(x) +
I_3(x).
\end{align*}
Observe that $I_3$ has the mean-value property in $D$; thus, by \autoref{rem:PDg_Lspace} and \autoref{lem:harm_c2} it belongs to $C^2_{\operatorname{loc}}(D)$. Moreover, for $x \in D$, from the symmetry of $G$ and (G) we obtain that both $G$ and its first and second derivatives are bounded either by $S(\delta_D(x))$ or $S(\delta_D(x))/\delta_D(x)$, depending on the
finiteness of $\int_0^{1/2}|G'(t)|t^{d-1}\, \textnormal{d} t$, and we are allowed to differentiate under the integral sign. Hence, it is enough to prove that $g(x):=\int_D G(x,y)f(y) \, \textnormal{d} y$ is in $C^2_{\operatorname{loc}}(D)$. Fix $i,j \in \{1,...,d \}$. Consider two cases.
\begin{enumerate}
\item
Let $\int_0^1 |G'(t)|t^{d-1}\, \textnormal{d} t=\infty$. Fix $x \in D$. From
\autoref{lem:conv_fact} we get
\begin{align*} \frac{\partial}{\partial x_i} g(x) &= \int_{{\R^{d}}} G(x-y) \frac{\partial}{\partial x_i} \left( f \chi_1 \right)(y) \, \textnormal{d} y + \int_{{\R^{d}}} \frac{\partial}{\partial x_i} \left(G \chi_2 \right) (x-y)\left(f {\bf 1}_D\right)(y) \( 1 - \chi_1\)(y) \, \textnormal{d} y \\ &=: w_1(x) + w_2(x), \end{align*}
where the localization functions $\chi_1$ and $\chi_2$ are chosen in dependence of $x$. Note that in the integral defining $w_2$, due to the function $\chi_2$ and (G), integration w.r.t. $y$ takes place in a region where $G$ and its derivative are bounded. Hence, from (G) we see that differentiating under the integral sign is justified. We obtain
\begin{align*}
\frac{\partial}{\partial x_j} w_2(x) = \int_{{\R^{d}}} \frac{\partial^2}{\partial
x_i \partial x_j} \left(G \chi_2 \right) (x-y)\left(f {\bf 1}_{D}\right)(y)
\left[ 1 - \chi_1\right](y) \, \textnormal{d} y.
\end{align*}
If we split $w_1$ into two integrals
\begin{align*}
w_1(x) &= \int_{D_1} G(x-y) \frac{\partial}{\partial y_i} \left( f \chi_1
\right)(y) \, \textnormal{d} y + \int_{D \setminus D_1} G(x-y) \frac{\partial}{\partial y_i}
\left( f \chi_1 \right)(y) \, \textnormal{d} y \\&=: w_3(x) + w_4(x),
\end{align*}
where $D_1 \subset D$ is such that $\chi_1 \big\vert_{D_1} \equiv 1$, then the
same argument can be applied to $w_4$. Thus
\begin{align*}
\frac{\partial}{\partial x_j} w_4(x) = \int_{D \setminus D_1}
\frac{\partial}{\partial x_j} G(x-y) \frac{\partial}{\partial y_i} \left( f
\chi_1 \right)(y) \, \textnormal{d} y.
\end{align*}
Next, observe that
\begin{align*}
\int_0^{\diam(D_1)}S(t) \omega_{\nabla f}(t,D_1)t^{d-1}\, \textnormal{d} t \leqslant
\int_0^{\diam(D)}S(t) \omega_{\nabla f}(t,D)t^{d-1}\, \textnormal{d} t < \infty.
\end{align*}
Moreover, by \autoref{cor:ind_fact} the function $x \mapsto \int_D G(x,y)\, \textnormal{d} y$ is continuously differentiable and from (G) we see that \eqref{diff_properties} of \autoref{lem:lemma23modification} is
satisfied for $H(x,y)=G(|x-y|)$ and $F=S$. Hence, for $h(x) = \frac{\partial}{\partial x_i} f(x)$ we obtain
\begin{align*}
\frac{\partial}{\partial x_j} w_3(x) = \int_D \frac{\partial G(x,y)}{\partial
x_j} \left( h(y)- h(x) \right)\, \textnormal{d} y + h(x) \frac{\partial}{\partial x_j} \int_D
G(x,y)\, \textnormal{d} y.
\end{align*}
\item Now let $\int_0^1 |G'(t)|t^{d-1}\, \textnormal{d} t < \infty$. In this case, by the Fubini theorem and the fundamental theorem of calculus we get
\begin{align*}
\frac{\partial}{\partial x_i} \int_D G(x,y)f(y)\, \textnormal{d} y = \int_D \frac{\partial
G(x,y)}{\partial x_i} f(y)\, \textnormal{d} y.
\end{align*}
A similar argument applied to $H(x,y) = \frac{\partial G(x,y)}{\partial x_i}$
shows that the assumptions of \autoref{lem:lemma23modification} are satisfied
with $F=S$. Note that here we use the additional assumption on $G'''$. Thus,
\begin{align*}
\frac{\partial^2}{\partial x_i \partial x_j} \int_D G(x,y)f(y)\, \textnormal{d} y &= \int_D
\frac{\partial^2 G(x,y)}{\partial x_i \partial x_j} \( f(y)-f(x) \) \, \textnormal{d} y + f(x) \frac{\partial}{\partial x_j} \int_D \frac{\partial
G(x,y)}{\partial x_i}\, \textnormal{d} y \\ &+ \frac{\partial}{\partial x_j} \int_D
\frac{\partial G(x,y)}{\partial x_i} f(y)\, \textnormal{d} y.
\end{align*}
\end{enumerate}
We have proved that $u \in C^2_{\operatorname{loc}}(D)$. Then by \cite[Lemma 4.7]{BGPR2017} the
Dynkin characteristic operator $\mathcal{U}$ coincides with $\mathscr{L}$. Hence $u$ is indeed
a solution of the problem \eqref{General_problem3}.
Now suppose $\widetilde{u}$ is another solution of \eqref{General_problem3}.
By \autoref{thm:weak_thm} we find that it is of the form
\begin{align*}
\widetilde{u}(x) = - G_D[f](x) + P_U[h](x), \quad x \in U,
\end{align*} where $h(x)=\widetilde{u}(x)+G_D[f](x)$ and $U$ is any Lipschitz domain such that $U \subset \subset D$. Fix $x_0 \in D$. Then
$U_0=B(x_0,r) \subset \subset D$ for any $r < \dist (x_0,D^c)$ and obviously $U_0$ is
also Lipschitz. Hence,
\begin{align*}
\widetilde{u}(x)-u(x) = P_{U_0}[\widetilde{h}](x) - P_D[g](x), \quad x \in U_0,
\end{align*}
is harmonic in $U_0$, so it belongs to $C^2_{\operatorname{loc}}(U_0)$. The proof above yields $-G_D[f]
\in C^2_{\operatorname{loc}}(D)$, thus $\widetilde{u}$ is twice continuously differentiable in a
neighbourhood of $x_0$. Since $x_0$ was arbitrary, it follows that every solution
of \eqref{General_problem3} is in $C^2_{\operatorname{loc}}(D)$.
\end{proof}
\section{Counterexamples for the case
``\texorpdfstring{$\alpha+\beta=2$}{a+b=2}''}\label{sec:counterexamples}
In this section we provide several counterexamples for \autoref{thm:main_thm}. These examples are of the nature ``$\alpha+\beta=2$'', i.e., for $\alpha \in (0,2)$ we give a function $f \in C^{2-\alpha}(D)$ for which the solution of the Dirichlet problem \eqref{Frac_lapl_problem} is not twice continuously differentiable inside $D$. In \autoref{sec:examples} we explain how the counterexamples can be modified in order to match the assumptions of \autoref{thm:main_thm}.
Let $D = B_1$. Consider the Dirichlet problem \begin{align}\label{Frac_lapl_problem} \left\{ \begin{array}{rlll} \Delta^{\alpha/2} u &=& f & \text{in } D, \\ u &=& 0 & \text{in } D^c, \\ \end{array} \right. \end{align} where $\alpha \in (0,2)$. It is known (see \cite{MR1671973} or \autoref{thm:weak_thm}) that \eqref{Frac_lapl_problem} is solved by $u(x) = \int_D G_D(x,y)f(y) \, \textnormal{d} y$, where $G_D(x,y)$ is the Green function of the operator $\Delta^{\alpha/2}$ for the domain $D$. By the Hunt formula \begin{align*} G_D(x,y) = G(x,y) - \mathbb{E}^x G(X_{\tau_D},y), \end{align*} where $G$ is the (compensated) potential kernel of the process $X_t$ whose generator is $\Delta^{\alpha/2}$. Note that since $\mathbb{E}^x G(X_{\tau_D},y)$ is $C^{\infty}$, the regularity problem is reduced to the regularity of the function $x \mapsto g(x) = \int_{B(0,1)}G(x,y)f(y) \, \textnormal{d} y = G \ast f(x)$.
\subsection{Case \texorpdfstring{$\alpha \in (0,1)$}{a e (0,1)}} We follow closely the idea from the proof of \autoref{thm:main_thm}, except that at the end we show that the function $w_3$ is not continuously differentiable. From \autoref{lem:conv_fact} we get \begin{align}\label{alpha01ref} \frac{\partial}{\partial x_d} g(x) &= \int_{{\R^{d}}} G(x-y) \frac{\partial}{\partial y_d} \left( f \chi_1 \right)(y) \, \textnormal{d} y + \int_{{\R^{d}}} \frac{\partial}{\partial x_d} \left(G \chi_2 \right) (x-y)\left(f {\bf 1}_{B_1}\right)(y) \( 1 - \chi_1\)(y) \, \textnormal{d} y \nonumber \\ &=: w_1(x) + w_2(x), \end{align} provided that $f \in C_b^1(B_1)$. The functions $\chi_1$ and $\chi_2$ in \eqref{alpha01ref} are chosen for $x_0=0$. Put $f(y) = \left((y_d)_+\right)^{2-\alpha}$ and let us calculate $\frac{\partial^2}{\partial x_d^2}g(x)$ at $x=0$. Since in $w_2$ we are separated from the origin, it follows that
\begin{align*} \frac{\partial}{\partial x_d} w_2(x) = \int_{{\R^{d}}} \frac{\partial^2}{\partial
x_d^2} \left(G \chi_2 \right) (x-y)\left(f {\bf 1}_{B_1}\right)(y) \( 1 - \chi_1\)(y) \, \textnormal{d} y. \end{align*} If we split $w_1$ into \begin{align*} w_1(x) = \int_{B_{1/4}} G(x-y) \frac{\partial}{\partial y_d} \left( f \chi_1 \right)(y) \, \textnormal{d} y + \int_{B_{1/4}^c} G(x-y) \frac{\partial}{\partial y_d} \left( f \chi_1 \right)(y) \, \textnormal{d} y =: w_3(x) + w_4(x), \end{align*} then the same argument applies to $w_4$. Therefore, it remains to calculate the derivative of $w_3$. Observe that on ${B_{1/4}}$ we have $f\chi_1 \equiv f$. To simplify the notation we accept a mild ambiguity and by $h$ we denote, depending on the context, either a real number or a vector in ${\R^{d}}$ of the form $(0,...,0,h)$. Let $h>0$.
\begin{align*} \frac{1}{-h} \left( w_3(-h) - w_3(0) \right) &= \frac{2-\alpha}{-h}
\int_{B_{1/4}} \left( |-h-y|^{\alpha-d} - |y|^{\alpha-d} \right)
((y_d)_+)^{1-\alpha} \, \textnormal{d} y \\ &= (2-\alpha) \int_A \frac{|y|^{\alpha-d} -
|y+h|^{\alpha-d}}{h} y_d ^{1-\alpha} \, \textnormal{d} y=:(2-\alpha)I(h), \end{align*} where $A=B_{1/4} \cap \{y_d>0\}$.
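The pointwise limit used in the Fatou argument below can be computed directly; here is a short sketch (recall that, by our convention, $h$ also denotes the vector $(0,\ldots,0,h)$):

```latex
\frac{\textnormal{d}}{\textnormal{d} h}\Big|_{h=0} |y+h|^{\alpha-d}
  = (\alpha-d)\, |y|^{\alpha-d-2}\, y_d,
\qquad \text{hence} \qquad
\lim_{h \to 0} \frac{|y|^{\alpha-d} - |y+h|^{\alpha-d}}{h}
  = (d-\alpha)\, \frac{y_d}{|y|^{d+2-\alpha}}.
```

Since $d-\alpha>0$, this constant is harmless for the divergence argument and may be absorbed into the generic constant $C$.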
Let $S_1$ be a $d$-dimensional cube contained in $A$, that is \begin{align}\label{S1}
S_1=\{y \in {\R^{d}}: |y_i|<a , 0 < y_d < a, \ i=1,...,d-1 \}, \end{align} where $a=(4\sqrt{d})^{-1}$. Define $S_2 \subset S_1$ by \begin{align}\label{S2}
S_2=\{y \in S_1: |y_i| < y_d, \ i=1,...,d-1\}. \end{align} By the Fatou lemma and the Fubini theorem \begin{align*} \liminf\limits_{h \to 0} I(h) &\geqslant \int\limits_{A} \liminf\limits_{h \to 0}
\frac{|y|^{-d+\alpha}-|y+h|^{-d+\alpha}}{h} {y_d}^{1-\alpha} \, \textnormal{d} y =
(d-\alpha)\int\limits_{A} \frac{y_d}{|y|^{d+2-\alpha}} {y_d}^{1-\alpha} \, \textnormal{d} y \\ &\geqslant
(d-\alpha)\int\limits_{S_2} \frac{y_d}{|y|^{d+2-\alpha}} {y_d}^{1-\alpha} \, \textnormal{d} y \geqslant C \int\limits_{S_2} \frac{y_d}{y_d^{d+2-\alpha}} {y_d}^{1-\alpha} \, \textnormal{d} y = C\int\limits_0^a \,\frac{\, \textnormal{d} y}{y}, \end{align*} since $|y| \leqslant \sqrt{d}\, y_d$ on $S_2$. Hence $\frac{\partial^2}{\partial {x_d}^2} g_- \left(0 \right)=\infty$. \subsection{Case \texorpdfstring{$\alpha = 1$}{a=1}}\label{alpha1_counterex} Let $d=1$. The compensated kernel is of the form $G(x,y)=\frac{1}{\pi}\ln
\frac{1}{|x-y|}$. Note that we cannot apply \cite[Lemma 2.3]{MR521856} because (ii) does not hold. Instead write \begin{align*} \frac{g(x+h)-g(x)}{h}&=\int_{-1}^{1} \frac{G(x+h-y)-G(x-y)}{h} \( f(y)-f(x) \) \, \textnormal{d} y \\ &+f(x)\int_{-1}^1 \frac{G(x+h-y)-G(x-y)}{h} \, \textnormal{d} y=: I_1(h)+I_2(h). \end{align*} Let $f$ be a Lipschitz function. By the mean value theorem
\begin{align*} \lim_{h \to 0} \int_{-1}^{1} \frac{G(x+h-y)-G(x-y)}{h} \( f(y)-f(x) \) \, \textnormal{d} y = \int_{-1}^1 G'(x-y) \( f(y)-f(x) \) \, \textnormal{d} y. \end{align*} Furthermore, denote
\begin{align*} F(x)&:=\int_{-1}^1 G(x-y) \, \textnormal{d} y=-\int_{-1}^1
\ln{|y-x|} \, \textnormal{d} y=-\int_{-1-x}^{1-x}\ln |s| \, \textnormal{d} s \\ &=-\int_0^{1+x}\ln{s} \, \textnormal{d} s -\int_0^{1-x} \ln{s} \, \textnormal{d} s, \end{align*}
where we omit the constant $\frac{1}{\pi}$, which is irrelevant for the argument.
It follows that \begin{align*} \lim_{h \to 0} \int_{-1}^1 \frac{G(x+h-y)-G(x-y)}{h} \, \textnormal{d} y = F'(x) = \ln{\frac{1-x}{1+x}}. \end{align*} Hence, \begin{align}\label{alpha1ref} g'(x) = \int_{-1}^1 G'(x-y) \( f(y)-f(x) \) \, \textnormal{d} y + f(x) F'(x). \end{align} Put $f(y)=y_+ \ln^{-\beta} \left( 1+\left( y^{-1} \right)_+ \right)$, $\beta \in (0,1)$. It is easy to check that $f$ is a Lipschitz function. Let $h<0$. Since $f(y)=0$ for $y \leqslant 0$, from \eqref{alpha1ref} we obtain
\begin{align*} \frac1h \left( g'(h)-g'(0) \right) &=\frac1h \int_0^1 \left(
\frac{1}{|h-y|}-\frac{1}{|y|} \right) y_+ \ln^{-\beta} \left( 1+\frac1y \right) \, \textnormal{d} y \\ &=\frac1h \int_0^1 \left( \frac{1}{y-h}-\frac{1}{y} \right) y \ln^{-\beta} \left( 1+\frac1y \right) \, \textnormal{d} y \\ & = \frac1h \int_0^1 \frac{h}{y(y-h)} y \ln^{-\beta} \left( 1+\frac1y \right) \, \textnormal{d} y = \int_0^1 \frac{1}{y-h} \ln^{-\beta} \left( 1+\frac1y \right) \, \textnormal{d} y. \end{align*}
By the Monotone Convergence Theorem \begin{align}\label{divergent_integral} \int_0^1 \frac{1}{y-h} \ln^{-\beta} \left( 1+\frac1y \right) \, \textnormal{d} y \xrightarrow{h \to 0^-} \int_0^1 \frac1y \ln^{-\beta} \left( 1+\frac1y \right) \, \textnormal{d} y. \end{align}
Since
\begin{align*} \lim_{y \to 0^+} \frac{\ln \left( 1+\frac1y \right)}{\ln \frac1y} = 1, \end{align*}
we conclude that the integral on the right-hand side of \eqref{divergent_integral} is infinite, and hence $g_-''(0) = \infty$. For $d>1$ and $\beta \in (0,1)$ we apply \cite[Lemma 2.3]{MR521856} to the function $f(y)= (y_d)_+ \ln^{-\beta} \left( 1+\left(y_d^{-1}
\right)_+ \right)$ and $G(x,y) = |x-y|^{-d+1}$ in order to obtain
\begin{align}\label{burch_lemma_app} \frac{\partial}{\partial x_d}g(x)=\int_{B_1} \frac{\partial G(x,y)}{\partial
x_d} \left[f(y)-f(x) \right] \, \textnormal{d} y+f(x) \frac{\partial}{\partial x_d} \int_{B_1}G(x,y) \, \textnormal{d} y. \end{align} By \autoref{cor:ind_fact} the condition (iii) of \cite[Lemma 2.3]{MR521856} holds. Denote \begin{align*} H(x,y):=\frac{\partial G(x,y)}{\partial x_d} = (1-d) \frac{\left( x-y
\right)_d}{|x-y|^{d+1}} = -C \frac{\left( x-y \right)_d}{|x-y|^{d+1}}, \end{align*} $C>0$. Let $h>0$. We calculate the left-sided second partial derivative $\frac{\partial^2}{\partial {x_d}^2} g(x)$ at $x=0$.
Note that some of the terms vanish and the remaining limit is \begin{align*} \lim_{h \to 0} \frac{1}{-h} \int\limits_{B_1}\left( H(y+h)-H(y) \right) f(y) \, \textnormal{d} y. \end{align*} Let $f_1(s)=f((0,...,0,s))$. We have
\begin{align}\label{wild_calc} &\int_{B_1}\left( H(y+h)-H(y) \right) f(y) \, \textnormal{d} y = \int_{B_1}\left( H(y+h)-H(y) \right) f_1(y_d) \, \textnormal{d} y \nonumber \\ = &\int_{B_1}\left( H(y+h)-H(y) \right) \int_0^{y_d} f_1'(s) \, \textnormal{d} s \, \textnormal{d} y = \int_0^1 \, \textnormal{d} sf_1'(s) \int_{B_1
\cap \mathbb{H}_s}\left( H(y+h)-H(y) \right) \, \textnormal{d} y, \end{align}
where $\mathbb{H}_s = \left\lbrace y: y_d > s \right\rbrace$. Denote $\tilde{y}=(y_1,...,y_{d-1})$. Then
\begin{align}\label{wild_calc2} &\int_0^1 \, \textnormal{d} sf_1'(s) \int_{B_1 \cap \mathbb{H}_s}\left( H(y+h)-H(y) \right) \, \textnormal{d} y
\nonumber \\ = &\int_0^1 \, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1}\, \textnormal{d} \tilde{y}
\int_s^{\sqrt{1-|\tilde{y}|^2}} \left[ H(y+h)-H(y) \right] \, \textnormal{d} y_d \nonumber \\
= &\int_0^1 \, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1}\, \textnormal{d} \tilde{y} \left[
\int_{s+h}^{\sqrt{1-|\tilde{y}|^2}+h} H(y) \, \textnormal{d} y_d -
\int_s^{\sqrt{1-|\tilde{y}|^2}} H(y) \, \textnormal{d} y_d \right] \nonumber \\ = &\int_0^1
\, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1}\, \textnormal{d} \tilde{y} \left[ \int_s^{s+h} H(y) \, \textnormal{d} y_d -
\int_{\sqrt{1-|\tilde{y}|^2}}^{\sqrt{1-|\tilde{y}|^2}+h} H(y) \, \textnormal{d} y_d \right]
\nonumber \\ = &\int_0^1 \, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1} \left[ \left( G(\tilde{y},s+h) - G(\tilde{y},s) \right) - \left(
G(\tilde{y},\sqrt{1-|\tilde{y}|^2}+h) - G(\tilde{y},\sqrt{1-|\tilde{y}|^2}) \right) \right] \, \textnormal{d} \tilde{y} \nonumber \\ =&: I_1(h)-I_2(h). \end{align}
The Dominated Convergence Theorem implies \begin{align}\label{wild_calc3}
\lim_{h \to 0^+} &\int\limits_0^1 \, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1} \frac{
G(\tilde{y},\sqrt{1-|\tilde{y}|^2}+h) -
G(\tilde{y},\sqrt{1-|\tilde{y}|^2})}{-h} \, \textnormal{d} \tilde{y} \nonumber \\ =-&\int_0^1
\, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1} \lim_{h \to 0^+} \frac{
G(\tilde{y},\sqrt{1-|\tilde{y}|^2}+h) -
G(\tilde{y},\sqrt{1-|\tilde{y}|^2})}{h}
\, \textnormal{d} \tilde{y} \nonumber \\ = -&\int_0^1 \, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1}
H(\tilde{y},\sqrt{1-|\tilde{y}|^2}) \, \textnormal{d} \tilde{y} = -\int_{B_1}
H(\tilde{y},\sqrt{1-|\tilde{y}|^2}) f_1'(y_d) \, \textnormal{d} y. \end{align}
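The boundedness used in the next step can be seen explicitly: the kernel is evaluated at the point $y=(\tilde{y},\sqrt{1-|\tilde{y}|^2})$, which lies on the unit sphere, so

```latex
\left\lvert H(\tilde{y},\sqrt{1-|\tilde{y}|^2}) \right\rvert
  = C\, \frac{\sqrt{1-|\tilde{y}|^2}}
           {\bigl\lvert (\tilde{y},\sqrt{1-|\tilde{y}|^2}) \bigr\rvert^{d+1}}
  = C \sqrt{1-|\tilde{y}|^2} \leqslant C.
```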
Note that the function $H$ under the integral sign is bounded on $B_1$. It follows that \begin{align*} \lim_{h \to 0^+} \frac{I_2(h)}{h} \leqslant C \int_{B_1} f_1'(y_d) \, \textnormal{d} y < \infty. \end{align*}
By the Fatou lemma
\begin{align}\label{wild_calc4}
\liminf_{h \to 0^+} &\int_0^1 \, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1} \frac{
G(\tilde{y},s+h) - G(\tilde{y},s)}{-h} \, \textnormal{d} \tilde{y} \\
&\geqslant \int_0^1 \, \textnormal{d} sf_1'(s) \int_{|\tilde{y}|<1} -H(\tilde{y},s)\, \textnormal{d} \tilde{y} \nonumber = \int_{B_1} -H(y)f_1'(y_d) \, \textnormal{d} y. \end{align}
We have \begin{align*} f_1'(s) = \ln^{-\beta} \left( 1+s^{-1} \right) + \frac{\beta}{s+1} \ln^{-\beta-1} \left( 1+s^{-1} \right), \quad s >0. \end{align*} Thus,
\begin{align*}
\int_{B_1} &\frac{y_d}{|y|^{d+1}} \ln^{-\beta} \left( 1+\left( y_d^{-1}
\right)_+ \right) \, \textnormal{d} y \geqslant \int_{S_2} \frac{y_d}{|y|^{d+1}} \ln^{-\beta} \left( 1+y_d^{-1} \right) \, \textnormal{d} y \\ &\geqslant \int_{S_2} \frac{y_d}{y_d^{d+1}} \ln^{-\beta} \left( 1+y_d^{-1} \right) \, \textnormal{d} y \geqslant C \int_0^a \frac1y \ln^{-\beta} \left( 1+y^{-1} \right) \, \textnormal{d} y. \end{align*}
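The last integral diverges for every $\beta \in (0,1)$; a short verification, using $\ln\left(1+y^{-1}\right) \leqslant 2\ln(1/y)$ for $0<y\leqslant a$ and the substitution $u=\ln(1/y)$:

```latex
\int_0^a \frac{1}{y} \ln^{-\beta} \left( 1+y^{-1} \right) \, \textnormal{d} y
  \geqslant 2^{-\beta} \int_0^a \frac{\, \textnormal{d} y}{y \ln^{\beta}(1/y)}
  = 2^{-\beta} \int_{\ln(1/a)}^{\infty} u^{-\beta} \, \textnormal{d} u = \infty,
```

since $\beta < 1$.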
Hence $\frac{\partial^2}{\partial x_d^2}g_-(0)=\infty$.
\subsection{Case \texorpdfstring{$\alpha \in (1,2)$}{a e
(1,2)}}\label{counterex_12} Let $d=1$. The compensated potential kernel is of the form $G(x,y) = c_{\alpha}
|x-y|^{\alpha-1}$. From \cite[Lemma 2.1]{MR521856} we have \begin{align*} g'(x) = \int_{-1}^1 G'(y-x)f(y) \, \textnormal{d} y. \end{align*}
We compute the second derivative of $g$ for $|x|<1$. Observe that \begin{align*}
I_1(x) := \frac{\textnormal{d}}{\textnormal{d} x} \int_{\substack{|y|<1, \\ |y-x|>\frac{1-|x|}{2}}}
G'(y-x)f(y) \, \textnormal{d} y = \int_{\substack{|y|<1, \\ |y-x|>\frac{1-|x|}{2}}} G''(y-x)f(y) \, \textnormal{d} y. \end{align*} Hence $g''(x) = I_1(x)+I_2(x)$, where \begin{align*}
I_2(x) := \lim_{h \to 0} \int_{|y-x|<\frac{1-|x|}{2}} \frac{G'(y-x-h)-G'(y-x)}{h}f(y) \, \textnormal{d} y. \end{align*} Put $f(y) = (y_+)^{2-\alpha}$. Then \begin{align*} I_2(0) = \lim_{h \to 0} \int_{0}^{1/2} \frac{G'(y-h)-G'(y)}{h} y^{2-\alpha} \, \textnormal{d} y. \end{align*} We compute the left-sided limit. Let $h > 0$. \begin{align*} \int_0^{1/2} \frac{G'(y+h)-G'(y)}{-h}y^{2-\alpha} \, \textnormal{d} y &= C\int_0^{1/2} \frac{y^{\alpha -2} - (y+h)^{\alpha -2}}{h}y^{2-\alpha} \, \textnormal{d} y \\&= C\int_0^{1/2} \frac{1-\left( 1+ h/y \right)^{\alpha -2}}{h} \, \textnormal{d} y \\ &= C\int_0^{1/(2h)} \left(1 - \left(1 + y^{-1} \right)^{\alpha -2} \right) \, \textnormal{d} y \\&= C\int_{2h}^{\infty} \left(1-(1+s)^{\alpha -2} \right) \,\frac{\, \textnormal{d} s}{s^2}. \end{align*}
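The last integral blows up logarithmically as $h \to 0^+$; indeed, since $2-\alpha \in (0,1)$, the concave function $s \mapsto 1-(1+s)^{\alpha-2}$ satisfies $1-(1+s)^{\alpha-2} \geqslant cs$ for $s \in (0,1)$, so for $2h<1$,

```latex
\int_{2h}^{\infty} \left( 1-(1+s)^{\alpha-2} \right) \,\frac{\, \textnormal{d} s}{s^2}
  \geqslant c \int_{2h}^{1} s \,\frac{\, \textnormal{d} s}{s^2}
  = c \ln \frac{1}{2h} \xrightarrow[h \to 0^+]{} \infty.
```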
Thus $g''(0_-)=\infty$. Now let $d>1$. Then $G(x,y) = |x-y|^{-d+\alpha}$. Denote \begin{align*} g(x) = \int_{B_1}G(x,y)f(y) \, \textnormal{d} y, \end{align*} where $f(y) = \left( (y_d)_+ \right)^{2-\alpha}$. \cite[Lemma 2.1]{MR521856} implies \begin{align*} \frac{\partial g(x)}{\partial x_d} = \int_{B_1}\frac{\partial G(x,y)}{\partial
x_d}f(y) \, \textnormal{d} y. \end{align*} We follow closely the argument from the case $\alpha=1$, $d>1$. We introduce the same notation \begin{align*} H(x,y):=\frac{\partial G(x,y)}{\partial x_d} = -(d-\alpha) \frac{\left( x-y
\right)_d}{|x-y|^{d+2-\alpha}} = -C \frac{\left( x-y
\right)_d}{|x-y|^{d+\beta}}, \end{align*} $C>0$, $\beta:=2-\alpha \in (0,1)$. Let $h>0$. By repeating \eqref{wild_calc}--\eqref{wild_calc4} we conclude that it remains to calculate \begin{align*} \int_{B_1}-H(y)f_1'(y_d) \, \textnormal{d} y, \end{align*} where $f_1$ is the same as for $\alpha=1$. Here the derivative has a simpler form. Note that the argument \eqref{wild_calc}--\eqref{wild_calc4} remains valid even though $f_1$ does not belong to $C^1(B_1)$ for $\alpha > 1$. We obtain \begin{align*}
\int_{B_1} \frac{y_d}{|y|^{d+\beta}}\left(y_d\right)_+^{1-\alpha} \, \textnormal{d} y &\geqslant
\int_A \frac{y_d}{|y|^{d+\beta}}y_d^{1-\alpha} \, \textnormal{d} y \geqslant \int_{S_2}
\frac{y_d}{|y|^{d+\beta}}y_d^{1-\alpha} \, \textnormal{d} y \geqslant C \int_{S_2} \frac{y_d}{y_d^{d+\beta}}y_d^{1-\alpha} \, \textnormal{d} y \\ &\geqslant C \int_0^a \,\frac{\, \textnormal{d} y}{y}=\infty, \end{align*} since $|y| \leqslant \sqrt{d}\, y_d$ on $S_2$. Hence $\frac{\partial^2}{\partial x_d^2}g_-(0)=\infty$.
\section{Examples}\label{sec:examples}
In the last section we present some examples of operators $\mathscr{L}$ and of the corresponding Dirichlet problems that allow for an application of \autoref{thm:main_thm}. In \autoref{ex:frac-laplace} we modify the considerations from \autoref{sec:counterexamples} in order to match the assumptions of \autoref{thm:main_thm}. In \autoref{ex:subordinate-BM} we generalize to subordinated Brownian motion. Finally, in \autoref{ex:scaling_prop} we extend the above class and discuss a process which is only assumed to satisfy a lower scaling property related to the characteristic exponent.
\begin{exmp}[fractional Laplace operator]\label{ex:frac-laplace}
Let $X_t$ be a strictly stable process whose generator is the fractional Laplace
operator $-(-\Delta)^{\alpha/2}$, and let $D$ be a bounded open set.
\begin{enumerate}
\item Let $\alpha \in (0,1)$. The potential kernel is of the form
$G(y) = c_{d,\alpha}|y|^{\alpha-d}$ and satisfies \begin{align}\label{ex_Gint}
\int_0^{1/2} |G'(t)|t^{d-1}\, \textnormal{d} t = \infty. \end{align}
Here $S(r)=|G'(r)|$. According to \autoref{thm:main_thm}, there is a $C^2_{\operatorname{loc}}(D)$ solution of \eqref{General_problem3} if the following holds: \begin{align}\label{ex_alpha01_cond}
\int_0^{1/2} |G'(t)|\omega_{\nabla f}(t,D)t^{d-1}\, \textnormal{d} t = \int_0^{1/2} t^{\alpha-2} \omega_{\nabla f}(t,D)\, \textnormal{d} t < \infty. \end{align} Obviously, the function $f(y)=((y_d)_+)^{2-\alpha}$ from the counterexample, which is in $C^{2-\alpha}(D)$, does not satisfy \eqref{ex_alpha01_cond}. On the
other hand, it is well known that for any function which is
$C^{2-\alpha+\epsilon}$, $\epsilon>0$ (e.g.
$\tilde{f}(y)=((y_d)_+)^{2-\alpha+\epsilon}$), the solution of
\eqref{General_problem3} is $C^2_{\operatorname{loc}}(D)$. Clearly, this function satisfies
\eqref{ex_alpha01_cond} as well, so in some sense \autoref{thm:main_thm}
extends already known results. Another sufficient condition is
$\omega_{\nabla f}(t,D) \leqslant C t^{1-\alpha} \ln^{-\beta} \left( 1+ t^{-1} \right)$, $\beta > 1$. Then
\begin{align*}
\int_0^{1/2} |G'(t)|\omega_{\nabla f}(t,D)t^{d-1}\, \textnormal{d} t &= \int_0^{1/2} t^{\alpha-2} t^{1-\alpha} \ln^{-\beta} \left( 1 + t^{-1} \right) \, \textnormal{d} t \\ &\leqslant C\int_0^{1/2} t^{-1} \ln^{-\beta}\left( t^{-1} \right) \, \textnormal{d} t \\ &= C\int_{\ln 2}^{\infty} \,\frac{\! \, \textnormal{d} t}{t^{\beta}} < \infty. \end{align*}
Calculations in the cases below are very similar and therefore will be omitted.
\item Let $\alpha=d=1$. The compensated potential kernel is of
the form $G(y) = \frac1\pi \ln \frac{1}{|y|}$ and \eqref{ex_Gint} holds for
$S(r)=|G'(r)|$. Note that in this case $|G'(r)| \neq c \frac{G(r)}{r}$. By \autoref{thm:main_thm} the solution of \eqref{General_problem3} will be in $C^2_{\operatorname{loc}}(D)$ if
\begin{align*}
\int_0^{1/2} |G'(t)|\omega_{\nabla f}(t,D)t^{d-1}\, \textnormal{d} t = \int_0^{1/2} t^{-1} \omega_{\nabla f}(t,D)\, \textnormal{d} t < \infty. \end{align*}
Hence, it suffices that $\omega_{\nabla f}(t,D)\leqslant C \ln^{-\beta} \left( 1+ t^{-1} \right)$, $\beta>1$. \item Let $\alpha =1, d > 1$.
The potential kernel has the form $G(y) = c_{d,\alpha} |y|^{1-d}$ and
\eqref{ex_Gint} holds for $S(r)=|G'(r)|$. Analogously to the case
$\alpha \in (0,1)$, it suffices that $\omega_{\nabla f}(t,D)\leqslant C \ln^{-\beta}
\left( 1+ t^{-1} \right)$, $\beta > 1$. \item $\alpha \in (1,2), d=1$. The compensated potential kernel
is of the form $G(y)=c_{\alpha} |y|^{\alpha-1}$, $S(r)=|G''(r)|$, and we have
$\int_0^1 |G'(t)|\, \textnormal{d} t<\infty$, thus by \autoref{thm:main_thm}, there will be a $C^2_{\operatorname{loc}}(D)$ solution if
\begin{align}\label{ex_alpha12_cond}
\int_0^{1/2} |G''(t)| \omega_f(t,D) t^{d-1}\, \textnormal{d} t = \int_0^{1/2} t^{\alpha-3} \omega_f(t,D)
\, \textnormal{d} t < \infty.
\end{align}
Clearly the function $f(y)=(y_+)^{2-\alpha}$ from \autoref{sec:counterexamples}
does not satisfy \eqref{ex_alpha12_cond}. In order to correct it we must either
take a function from $C^{2-\alpha+\epsilon}(D)$, $\epsilon > 0$ (e.g.
$\tilde{f}(y)=(y_+)^{2-\alpha+\epsilon}$) or a function whose modulus of
continuity is of the form $\omega_{\tilde{f}}(t,D) = t^{2-\alpha} \ln^{-\beta}
\left( 1+t^{-1} \right)$, $\beta>1$.
\item $\alpha \in (1,2), d\geqslant 2$. The potential kernel has the
form $G(y) = c_{d,\alpha}|y|^{\alpha-d}$ and $S(r)=|G''(r)|$. We have
\begin{align*}
\int_0^{1/2} |G'(t)|t^{d-1}\, \textnormal{d} t < \infty.
\end{align*}
By \autoref{thm:main_thm} we have to take a function $\tilde{f}$ from
$C^{2-\alpha+\epsilon}(D)$ or such that its modulus of continuity has the form
$\omega_{\tilde{f}}(t,D) = t^{2-\alpha} \ln^{-\beta} \left( 1+ t^{-1} \right)$,
$\beta>1$.
\end{enumerate} \end{exmp}
\begin{exmp}[Subordinate Brownian motion]\label{ex:subordinate-BM}
Let $(B_t, t\geqslant 0)$ be a Brownian motion in ${\R^{d}}$ and let $(S_t, t \geqslant 0)$ be a
subordinator independent of $B_t$, i.e. a L\'{e}vy process in $\mathbb{R}$ which
starts from $0$ and has non-negative trajectories. The process $(X_t, t\geqslant 0)$
defined by $X_t=B_{S_t}$ is called a subordinated Brownian motion.
Denote by $\phi$ the Laplace exponent of $S_t$:
\begin{align*}
\mathbb{E} \exp \{ -\lambda S_t \} = \exp \{ -t \phi(\lambda) \}.
\end{align*}
It is well known that $\phi$ is of the form
\begin{align*}
\phi(\lambda) = \gamma \lambda + \int_0^{\infty} \left( 1- e^{-\lambda t} \right)
\,\mu(\! \, \textnormal{d} t)
\end{align*}
where $\mu$ is the L\'{e}vy measure of $S_t$ satisfying $\int_0^{\infty} (1 \wedge t) \mu(\! \, \textnormal{d} t)<\infty$. The corresponding operator is of the
form $\mathscr{L} = -\phi(-\Delta)$ and we have $\psi(\xi) = \phi(|\xi|^2)$. An example of subordinated Brownian motion is the
process from \autoref{ex:frac-laplace} with $\phi(\lambda) =
\lambda^{\alpha/2}$, $\alpha \in (0,2)$. Another example is the geometric stable
process with $\phi(\lambda)=\ln \left(1+\lambda^{\alpha/2} \right)$, $\alpha \in
(0,2)$. Denote by $G_d(r)$ the potential of $d$-dimensional subordinated
Brownian motion $X_t$. From \cite[Theorem
5.17]{MR3646773} we have
\begin{align}\label{subord_green_est}
G_d(r) \asymp r^{-d-2} \frac{\phi'(r^{-2})}{\phi^2(r^{-2})}, \quad r
\to 0^+,
\end{align} if $d \geqslant 3$ and there exist $\beta \in [0,d/2+1)$ and $\alpha>0$ such that $\phi^{-2}\phi'$ satisfies weak lower and upper scaling condition at infinity with exponents $-\beta$ and $-\alpha$, respectively (see \cite{MR3646773}). The same result under slightly stronger assumptions is derived in \cite[Proposition 3.5]{MR2928720}.
For the $d$-dimensional subordinated Brownian motion $X_t$, $d\geqslant 3$, we have
\begin{align*}
G_d(r) = \int_0^{\infty} (4 \pi t)^{-d/2} \exp \left( -\frac{r^2}{4t} \right)
\,u(\! \, \textnormal{d} t),
\end{align*}
where $u(\! \, \textnormal{d} t)$ denotes the potential measure of the subordinator $S_t$.
It follows that
\begin{align}\label{G'_subordinator}
G_d'(r) &= -\int_0^{\infty} (4 \pi t)^{-d/2} \exp \left(
-\frac{r^2}{4t} \right) \frac{2r}{4t} u(\! \, \textnormal{d} t) \nonumber \\ &= -2r \pi \int_0^{\infty} (4
\pi t)^{-(d+2)/2} \exp \left( -\frac{r^2}{4t} \right) u(\! \, \textnormal{d} t) = -2r \pi
G_{d+2}(r).
\end{align}
That and \eqref{subord_green_est} imply
\begin{align*}
\left\lvert G_d'(r) \right\rvert \leqslant Cr \cdot r^{-(d+2)-2}
\frac{\phi'(r^{-2})}{\phi^2(r^{-2})} = C \frac1r r^{-d-2}
\frac{\phi'(r^{-2})}{\phi^2(r^{-2})} \leqslant C \frac{G_d(r)}{r}.
\end{align*}
By induction
\begin{align*}
\left\lvert G_d^{(k)}(r) \right\rvert \leqslant C \frac{G_d(r)}{r^k}, \quad k
\in \mathbb{N}.
\end{align*}
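For instance, for $k=2$, differentiating \eqref{G'_subordinator} once more (and applying the same identity in dimension $d+2$) gives

```latex
G_d''(r) = -2\pi G_{d+2}(r) - 2\pi r\, G_{d+2}'(r)
         = -2\pi G_{d+2}(r) + 4\pi^2 r^2\, G_{d+4}(r),
```

and by \eqref{subord_green_est} applied in dimensions $d+2$ and $d+4$, both terms are bounded by a constant multiple of $r^{-2}\, r^{-d-2}\, \phi'(r^{-2})/\phi^2(r^{-2}) \asymp G_d(r)/r^2$.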
Thus, the conditions involving $G$ and its derivatives required in \autoref{thm:main_thm} hold true for $S(r)=G_d(r)/r^2$. Note that the density of the
L\'{e}vy measure of $X_t$
\begin{align*}
\nu(r) = \int_{0}^{\infty} (4 \pi t)^{-d/2} \exp \left( -\frac{r^2}{4t} \right) \,\mu(\! \, \textnormal{d} t) \end{align*}
belongs to $C^{\infty}$. By \cite[Lemma $7.4$]{BGPR2017} the assumptions of \autoref{thm:main_thm} are satisfied with $\nu^*\equiv \nu$ if $\phi$ is a complete Bernstein function.
Take the geometric stable process with $\phi(\lambda)= \ln \left(
1+\lambda^{\alpha/2} \right)$. Then by \eqref{subord_green_est} and \eqref{G'_subordinator}
\begin{align*} \int_0^{1/2} \left\lvert G_d'(t) \right\rvert t^{d-1}\, \textnormal{d} t &\geqslant C \int_0^{1/2} t^{-d-3} t^{d-1} \frac{\phi'(t^{-2})}{\phi^2(t^{-2})}\, \textnormal{d} t = C \int_0^{1/2} \frac{1}{t^4} \frac{\frac{1}{1+t^{-\alpha}} \frac{1}{t^{\alpha-2}}}{\ln^2\left( 1+t^{-\alpha} \right)}\, \textnormal{d} t \\ &\geqslant C \int_0^{1/2} \frac{1}{t^2} \frac{1}{\ln^2\left(1 + t^{-\alpha}\right)}\, \textnormal{d} t \geqslant C\int_0^{1/2} \frac{1}{t^2 \ln^2 t^{-1}}\, \textnormal{d} t = \infty,
\end{align*}
hence, for the solution of \eqref{General_problem3} to be in $C^2_{\operatorname{loc}}(D)$, it
suffices that the modulus of continuity of the gradient of $f$ is of the form $\omega_{\nabla f}(t,D)=t \ln^{1-\epsilon} \left( 1+ t^{-1}\right)$, $\epsilon \in (0,1)$. \end{exmp}
Before moving to the last example, let us define concentration functions $K$ and $h$ by setting \begin{align*}
K(r) = \frac{1}{r^2} \int_{|x| \leqslant r} |x|^2 \nu (\! \, \textnormal{d} x), \quad r>0,\\
h(r)=\int_{{\R^{d}}} \(1 \wedge \frac{|x|^2}{r^2}\)\nu(\! \, \textnormal{d} x), \quad r>0. \end{align*} \begin{prop}\label{prop:scaling_prop}
Let $d \geqslant 3$. Suppose there exist $c>0$ and $\alpha > 3/2$ such that
\begin{align}\label{scaling_cond}
h(r) \leqslant c\lambda^{\alpha} h(\lambda r), \quad \lambda \leqslant 1, r>0.
\end{align}
Then there exists $c>0$ such that $|U'(r)|\leqslant cU(r)/r$, $|U''(r)|\leqslant cU(r)/r^2$, $|U'''(r)|\leqslant c U(r)/r^3$ for $r>0$. \end{prop} \begin{proof} Observe that for $d \geqslant 3$ the potential $U$ always exists. By \cite[Theorem $3$]{MR3225805} there exists $c>0$ such that
\begin{align*}
U(r)\geqslant \frac{c}{r^d h(r)}, \quad r>0.
\end{align*}
Our aim is to prove (G). By definition and isotropy of $p_t$
\begin{align*}
U(r)=\int_0^{\infty} p_t(\tilde{r}) \, \textnormal{d} t,
\end{align*}
where $\tilde{r}=(0,...,0,r) \in {\R^{d}}$. Since $p_t$ is radially decreasing, by the Tonelli theorem
\begin{align*}
U(r)-U(1) = \int_0^{\infty} \int_1^r \partial_{x_d} p_t(\tilde{y}) \, \textnormal{d} y \, \textnormal{d} t = \int_1^r \int_0^{\infty} \partial_{x_d} p_t(\tilde{y}) \, \textnormal{d} t \, \textnormal{d} y,
\end{align*}
where $\tilde{y} = (0,...,0,y)\in{\R^{d}}$. Hence,
\begin{align*}
U'(r)=\int_0^{\infty} \partial_{x_d} p_t(\tilde{r}) \, \textnormal{d} t, \quad r>0.
\end{align*}
By \cite[Theorem $5.6$ and Corollary $6.8$]{GS2017}
\begin{align*}
\left\lvert \partial^{\beta}_x p_t(x) \right\rvert \leqslant c \( h^{-1}(1/t)\)^{-|\beta|} \varphi_t(x), \quad t>0, x \in {\R^{d}},
\end{align*}
where
\begin{align*}
\varphi_t(x) = \left\{
\begin{array}{ll}
\(h^{-1}(1/t)\)^{-d}, & |x|\leqslant h^{-1}(1/t), \\
tK(|x|)|x|^{-d}, & |x|> h^{-1}(1/t).
\end{array}\right.
\end{align*}
Let us estimate $|U'(r)|$. Below we write $x=\tilde{r}$, so that $|x|=r$. We have
\begin{align*}
|U'(r)| \leqslant \frac{K(|x|)}{|x|^{d}}\int_0^{1/h(|x|)} \frac{t}{
h^{-1}(1/t)} \, \textnormal{d} t + \int_{1/h(|x|)}^{\infty} \frac{\, \textnormal{d} t}{\(h^{-1}(1/t)\)^{d+1}}.
\end{align*}
The scaling property of $h$ for $|x| > h^{-1}(1/t)$ yields
\begin{align*}
h(|x|) \leqslant c \(\frac{h^{-1}(1/t)}{|x|}\)^{\alpha} h(h^{-1}(1/t)).
\end{align*}
It follows that
\begin{align*}
\frac{K(|x|)}{|x|^{d}}\int_0^{1/h(|x|)} \frac{t}{ h^{-1}(1/t)} \, \textnormal{d} t
&\leqslant c \frac{K(|x|)}{|x|^{d+1}}\int_0^{1/h(|x|)} t
\(\frac{1}{th(|x|)}\)^{1/\alpha} \, \textnormal{d} t \\ &\leqslant c \frac{K(|x|)} {|x|^{d+1}}
(h(|x|))^{-1/\alpha} \int_0^{1/h(|x|)} t^{1-1/\alpha} \, \textnormal{d} t.
\end{align*}
For $\alpha > 1/2$ the integral is finite and we get
\begin{align*}
\frac{K(|x|)}{|x|^{d}}\int_0^{1/h(|x|)} \frac{t}{ h^{-1}(1/t)} \, \textnormal{d} t
\leqslant c \frac{K(|x|)}{|x|^{d+1}h(|x|)^2}.
\end{align*}
The comparability of $K$ and $h$ (\cite[Lemma $2.3$]{GS2017}) implies
\begin{align*}
\frac{K(|x|)}{|x|^{d}}\int_0^{1/h(|x|)} \frac{t}{ h^{-1}(1/t)} \, \textnormal{d} t
\leqslant c \frac{1}{|x|^{d+1}h(|x|)} \leqslant c \frac{U(r)}{r}.
\end{align*}
Furthermore, we always have $h(r) \geqslant \lambda^2 h(\lambda r)$ for $\lambda \leqslant 1$ and $r>0$. Thus,
\begin{align*}
\int_{1/h(|x|)}^{\infty} \frac{\, \textnormal{d} t}{\(h^{-1}(1/t)\)^{d+1}} &=
\frac{1}{|x|^{d+1}} \int_{1/h(|x|)}^{\infty}
\frac{|x|^{d+1}}{\(h^{-1}(1/t)\)^{d+1}} \, \textnormal{d} t \\ &\leqslant \frac{1}{|x|^{d+1}}
\int_{1/h(|x|)}^{\infty} \(\frac{1}{th(|x|)}\)^{(d+1)/2} \, \textnormal{d} t.
\end{align*}
Since $d>1$, the integral is finite and we get
\begin{align*}
\int_{1/h(|x|)}^{\infty} \frac{\, \textnormal{d} t}{\(h^{-1}(1/t)\)^{d+1}} \leqslant c
\frac{1}{|x|^{d+1}h(|x|)} \leqslant c \frac{U(r)}{r}.
\end{align*}
Hence, for $\alpha>1/2$ we obtain $|U'(r)|\leqslant cU(r)/r$, $r>0$. By a
similar argument one may conclude that $|U''(r)| \leqslant cU(r)/r^2$ if $\alpha>1$
and $|U'''(r)| \leqslant cU(r)/r^3$ for $\alpha>3/2$. \end{proof}
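To indicate where the threshold $\alpha>1$ enters in the second-order case, here is a sketch: the bound on $\partial^{\beta}_x p_t$ with $|\beta|=2$, combined with the same splitting of the time integral as in the proof, yields

```latex
|U''(r)| \leqslant \frac{K(r)}{r^{d}} \int_0^{1/h(r)}
    \frac{t}{\left( h^{-1}(1/t) \right)^{2}} \, \textnormal{d} t
  + \int_{1/h(r)}^{\infty} \frac{\, \textnormal{d} t}{\left( h^{-1}(1/t) \right)^{d+2}},
```

and on the first range the scaling bound $1/h^{-1}(1/t) \leqslant c\, r^{-1} \left( t h(r) \right)^{-1/\alpha}$ leads to the integral $\int_0^{1/h(r)} t^{1-2/\alpha} \, \textnormal{d} t$, which is finite precisely when $\alpha>1$; the second term is estimated as before. The third derivative is handled analogously and requires $\alpha>3/2$.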
\begin{exmp}\label{ex:scaling_prop}
Let $d \geqslant 3$, $\alpha > 3/2$, and let $X_t$ be a truncated $\alpha$-stable L\'{e}vy process in ${\R^{d}}$, i.e. a process with L\'{e}vy measure $\nu(\! \, \textnormal{d}
x)=|x|^{-d-\alpha} \varphi(x) \, \textnormal{d} x$, where $\varphi$ is a cut-off function, that is, $\varphi \in C^{\infty}({\R^{d}})$ and ${\bf 1}_{B_{1/2}} \leqslant \varphi \leqslant {\bf 1}_{B_1}$. One can easily check that $h(r)\asymp r^{-\alpha} \wedge r^{-2}$. \autoref{prop:scaling_prop} yields that the assumptions of \autoref{thm:main_thm} imposed on the function $G$ are satisfied. Observe that (A) and \eqref{growth_condition} are satisfied for $\nu^*\equiv 0$. In that case the appropriate $\mathcal{L}^1$ space is simply $L_{\operatorname{loc}}^1$. \end{exmp}
\appendix \section{Potential theory for recurrent unimodal L\'{e}vy process}\label{app:appendix}
In this appendix we establish a formula for the Green function of a bounded open set $D$ in the case of a recurrent unimodal L\'{e}vy process $X_t$. Contrary to the transient case, here the potential kernel $U(x)=\int_0^{\infty}p_t(x) \, \textnormal{d} t$ is infinite, so the classical Hunt formula cannot be applied. Instead, one can define the $\lambda$-potential kernel $U^{\lambda}$ by setting \begin{align*}
U^{\lambda}(x)=\int_0^{\infty} e^{-\lambda t} p_t(x) \, \textnormal{d} t. \end{align*}
Similarly, we define the $\lambda$-Green function for an open set $D$
\begin{align*}
G_D^{\lambda}(x,y)=\int_0^{\infty} e^{-\lambda t} p_t^D(x-y) \, \textnormal{d} t.
\end{align*}
Note that both $U^{\lambda}$ and $G_D^{\lambda}$ exist. An analogue of the Hunt formula for $G_D^{\lambda}$ holds, namely, for $x,y \in D$
\begin{align*}
G_D^{\lambda}(x,y)=U^{\lambda}(y-x)-\mathbb{E}^x \[ e^{-\lambda \tau_D} U^{\lambda}(y-X_{\tau_D}) \right]. \end{align*}
\begin{lem}\label{lem:lambdaUlambda_lem}
Let $d \geqslant 1$. For any fixed $x_0 \in {\R^{d}} \setminus \{0\}$ we have $\lambda U^{\lambda}(x_0) \to 0$ as $\lambda \to 0$.
\end{lem}
\begin{proof}
In what follows we introduce a mild ambiguity: by $1$ we denote, depending on the context, either the real number $1$ or the vector
$(0,...,0,1) \in {\R^{d}}$. Set $x_0=1$. Let $f_{\lambda}(r)= \int_{|x|<\sqrt{r}}\, \textnormal{d} x \int_0^{\infty} e^{-\lambda u}p_u(x) \, \textnormal{d} u$. We have
\begin{align*}
L f_{\lambda}(s) &=\int_0^{\infty} e^{-st} f_{\lambda}(t) \, \textnormal{d}
t=\int_0^{\infty} e^{-st} \int_{|x|<\sqrt{t}} \, \textnormal{d} x \int_0^{\infty} e^{-\lambda
u} p_u(x)\, \textnormal{d} u \, \textnormal{d} t \\ &=\int_{{\R^{d}}} \, \textnormal{d} x \int_{t>|x|^2} e^{-st}\, \textnormal{d} t \int_0^{\infty} e^{-\lambda u} p_u(x)\, \textnormal{d} u =\frac1s \int_0^{\infty} e^{-\lambda
u} \int_{{\R^{d}}} e^{-s|x|^2} p_u(x) \, \textnormal{d} x \, \textnormal{d} u.
\end{align*}
By \cite[Lemma $6$]{MR3225805}
\begin{align*}
\int_{{\R^{d}}} e^{-s|x|^2} p_u(x) \, \textnormal{d} x =c_d \int_{{\R^{d}}}
e^{-u\psi(\sqrt{s}x)}e^{-|x|^2/4} \, \textnormal{d} x.
\end{align*}
Hence, we have for $\lambda>0$
\begin{align*}
s L f_{\lambda}(s)=c_d \int_0^{\infty} e^{-\lambda u} \, \textnormal{d} u \int_{{\R^{d}}}
e^{-u \psi(\sqrt{s} \xi)}e^{-|\xi|^2/4} \, \textnormal{d} \xi = c_d \int_{{\R^{d}}}
\frac{1}{\lambda+\psi(\sqrt{s}\xi)} e^{-|\xi|^2/4} \, \textnormal{d} \xi.
\end{align*}
By monotonicity of $f_{\lambda}$ and the identity $\frac{e}{r}\int_r^{\infty} e^{-u/r} \, \textnormal{d} u = 1$,
\begin{align*} f_{\lambda}(r) &= \frac{e}{r} \int_r^{\infty} e^{-u/r} f_{\lambda}(r) \, \textnormal{d} u \leqslant \frac{e}{r} \int_0^{\infty} e^{-u/r} f_{\lambda}(u) \, \textnormal{d} u = \frac{e}{r} L f_{\lambda}(1/r) \\ &= c' \int_{{\R^{d}}} \frac{1}{\lambda+\psi(\sqrt{1/r}\xi)}
e^{-|\xi|^2/4}\, \textnormal{d} \xi.
\end{align*}
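For completeness (a routine verification, not part of the original argument), the first equality in the last display rests only on the normalization of the exponential weight,
```latex
\frac{e}{r}\int_r^{\infty} e^{-u/r}\, \textnormal{d} u
  = \frac{e}{r}\cdot r\,e^{-1} = 1,
\qquad \text{hence} \qquad
f_{\lambda}(r) = \frac{e}{r}\int_r^{\infty} e^{-u/r} f_{\lambda}(r)\, \textnormal{d} u,
```
and the subsequent inequality follows since $f_{\lambda}$ is nondecreasing.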
Since, by \cite[Lemma $1$ and Proposition $1$]{MR3225805}, for $0<|\xi| \leqslant 1$,
\begin{align*}
\sup_{|x|\leqslant 1} \psi(x) \leqslant \frac{4}{|\xi|^2} \sup_{|x|\leqslant |\xi|} \psi(x)
\leqslant c \frac{\psi(\xi)}{|\xi|^2},
\end{align*}
we obtain
\begin{align*}
\lambda U^{\lambda}(1) \leqslant \lambda \frac{f_{\lambda}(1)}{|B_1|} \leqslant
c_d \int_{{\R^{d}}} \frac{\lambda}{\lambda+\psi(\xi)}e^{-|\xi|^2/4} \, \textnormal{d} \xi \leqslant
\frac{\lambda}{\psi(1)} \int_{B_1^c} e^{-|\xi|^2/4} \, \textnormal{d} \xi + \int_{B_1}
\frac{\lambda}{\lambda + |\xi|^2} \, \textnormal{d} \xi.
\end{align*}
The first term on the right-hand side tends to $0$ with $\lambda$, and the second does so by the dominated convergence theorem. Hence, $\lambda U^{\lambda}(1) \to 0$ as $\lambda \to 0$. The extension to arbitrary $x_0$ is immediate.
\end{proof}
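To illustrate \autoref{lem:lambdaUlambda_lem} in a classical special case (an illustration only, not used in the sequel), consider one-dimensional Brownian motion with $\psi(\xi)=\xi^2/2$, for which the resolvent kernel is explicit:
```latex
U^{\lambda}(x)=\int_0^{\infty} e^{-\lambda t}\, \frac{e^{-x^2/(2t)}}{\sqrt{2\pi t}}\, \textnormal{d} t
  = \frac{1}{\sqrt{2\lambda}}\, e^{-\sqrt{2\lambda}\,|x|},
\qquad \text{so} \qquad
\lambda U^{\lambda}(1) = \sqrt{\lambda/2}\; e^{-\sqrt{2\lambda}} \xrightarrow{\lambda \to 0} 0,
```
even though $\lim_{\lambda \to 0} U^{\lambda}(x) = \infty$ for every $x$, reflecting the recurrence of the process.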
\begin{lem}\label{lem:compensation_lemma} Let $x_0 \in {\R^{d}} \setminus \{0\}$ be an arbitrary fixed point. For all $x \in {\R^{d}} \setminus \{0\}$ we have $\int_0^{\infty} \left\lvert p_t(x)-p_t(x_0) \right\rvert \, \textnormal{d} t<\infty$. \end{lem}
\begin{proof}
Let $f \in C_c^{\infty}({\R^{d}})$ be such that ${\bf 1}_{B_{\epsilon}} \leqslant f \leqslant {\bf 1}_{B_{4\epsilon}}$, where $0<4\epsilon<1$. Denote
\begin{align*}
W_{x_0}^{\lambda}(x)&=\int_0^{\infty} e^{-\lambda t} \(p_t(x)-p_t(x_0)\) \, \textnormal{d} t, \quad x \neq 0,\\ W_{x_0}(x) &= \int_0^{\infty}\(p_t(x)-p_t(x_0)\) \, \textnormal{d} t, \quad x \neq 0. \end{align*}
Let $x_0=1$. Observe that \begin{align*} W_1^{\lambda} \ast f(0)=\int_0^{\infty} e^{-\lambda t} \( p_t \ast f(0) - p_t(1)\norm{f}_1 \) \, \textnormal{d} t. \end{align*}
Note that the integrand is positive. Indeed, \begin{align*} p_t \ast f(0)-p_t(1)\norm{f}_1 = \int_{B_{4\epsilon}} \(p_t(y)f(y)-p_t(1)f(y)\) \, \textnormal{d} y>0, \end{align*}
since $p_t$ is radially decreasing and $4\epsilon<1$. Furthermore, \begin{align*} p_t(1)\norm{f}_1 = \int_{B_{4\epsilon}} p_t(1)f(y) \, \textnormal{d} y \geqslant \int_{B_{4\epsilon}} p_t(1+4\epsilon-y)f(y) \, \textnormal{d} y = p_t \ast f(1+4\epsilon). \end{align*}
Hence, by the Fourier inversion theorem \begin{align*} \int_0^{\infty} e^{-\lambda t} \( p_t \ast f(0) - p_t(1)\norm{f}_1 \) \, \textnormal{d} t &\leqslant \int_0^{\infty} e^{-\lambda t} \int_{{\R^{d}}} (1-\cos \((1+4\epsilon)\xi \))\widehat{p_t}(\xi)\left\lvert \widehat{f}(\xi) \right\rvert \, \textnormal{d} \xi \, \textnormal{d} t \\ &\leqslant \int_{{\R^{d}}} (1-\cos \((1+4\epsilon)\xi \)) \frac{\left\lvert \widehat{f}(\xi) \right\rvert}{\psi(\xi)} \, \textnormal{d} \xi. \end{align*}
By the monotone convergence theorem and the fact that $ \left\lvert \widehat{f}(\xi) \right\rvert$ decays faster than any polynomial \begin{align*}
W_1 \ast f(0) = \lim_{\lambda \to 0} W_1^{\lambda} \ast f(0) \leqslant \int_{{\R^{d}}} (1-\cos \((1+4\epsilon)\xi \)) \frac{\left\lvert \widehat{f}(\xi) \right\rvert}{\psi(\xi)} \, \textnormal{d} \xi < \infty. \end{align*}
Hence, \begin{align}\label{W-L1loc} \int_{B_{\epsilon}}W_1(x) \, \textnormal{d} x \leqslant W_1 \ast f(0) < \infty. \end{align}
Since $W_1$ is radially decreasing and positive for $|x|<1$, \eqref{W-L1loc} implies that it may be infinite only at $x=0$. It follows that
$W_1$ is well defined for $0<|x|\leqslant 1$. Similarly, $0 \leqslant W_{x_0}<\infty$ for
$0<|x|\leqslant |x_0|$.
It remains to notice that for $|x|>|x_0|$ we have $W_{x_0}(x) \leqslant 0$ and $\left\lvert W_{x_0}(x) \right\rvert=-W_{x_0}(x) = W_x(x_0)<\infty$ by the first part of the proof. \end{proof}
\autoref{lem:compensation_lemma} allows us to introduce, following \cite{MR0126885}, \cite{MR0099725}, \cite{MR2256481}, a compensated potential kernel by setting for $x \in {\R^{d}} \setminus \{0\}$
\begin{align}\label{compensated_kernel} W_{x_0}(x):=\int_0^{\infty} \( p_t(x)-p_t(x_0) \) \, \textnormal{d} t, \end{align}
where $x_0 \in {\R^{d}} \setminus \{0\}$ is an arbitrary but fixed point. From the proof of \autoref{lem:compensation_lemma} we immediately obtain the following corollary.
\begin{cor}
$W_{x_0}$ is locally integrable in ${\R^{d}}$. \end{cor}
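As a sanity check on \eqref{compensated_kernel} in the simplest recurrent case (an illustration only, not part of the argument): for one-dimensional Brownian motion the compensated kernel has the classical closed form $W_{x_0}(x)=|x_0|-|x|$. The following Python sketch verifies this by numerical quadrature; the substitution $t = v^{-2}$ and the grid parameters are arbitrary choices of this sketch.

```python
import math

def compensated_kernel_1d(x, x0, n=20000):
    """Approximate W_{x0}(x) = int_0^inf (p_t(x) - p_t(x0)) dt for the
    one-dimensional Gaussian heat kernel p_t(x) = exp(-x**2/(2t))/sqrt(2*pi*t).

    Substituting t = v**-2 turns the integral into
    (2*pi)**-0.5 * int_0^inf (2/v**2)*(exp(-x**2*v**2/2) - exp(-x0**2*v**2/2)) dv,
    whose integrand extends continuously to v = 0, so the trapezoidal rule applies.
    """
    p, q = x * x / 2.0, x0 * x0 / 2.0
    v_max = 12.0 / min(abs(x), abs(x0))  # both exponentials are negligible beyond this
    h = v_max / n

    def g(v):
        if v == 0.0:
            return 2.0 * (q - p)  # continuous extension of the integrand at v = 0
        return 2.0 * (math.exp(-p * v * v) - math.exp(-q * v * v)) / (v * v)

    s = 0.5 * (g(0.0) + g(v_max)) + sum(g(i * h) for i in range(1, n))
    return s * h / math.sqrt(2.0 * math.pi)

# The closed form predicts W_{x0}(x) = |x0| - |x|.
print(compensated_kernel_1d(0.5, 1.0))  # close to  0.5
print(compensated_kernel_1d(2.0, 1.0))  # close to -1.0
```

In particular the sign change at $|x|=|x_0|$ matches the dichotomy used in the proof of \autoref{lem:compensation_lemma}.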
\begin{thm}\label{thm:recurrent_sweeping_formula}
Let $d \leqslant 2$, let $D$ be bounded and let $x_0 \in D^c$. Then for $x,y \in D$
\begin{align}
G_D(x,y)=W_{x_0}(y-x)-\mathbb{E}^x W_{x_0}(y-X_{\tau_D}).
\end{align} \end{thm}
\begin{proof} Let $x,y \in D$. Fix $x_0 \in D^c$ and observe that \begin{align}\label{compensation_proof}
G_D^{\lambda}(x,y)&=U^{\lambda}(y-x)-\mathbb{E}^x \left[ e^{-\lambda \tau_D} U^{\lambda}(y-X_{\tau_D}) \right] \nonumber \\ &=U^{\lambda}(y-x)-U^{\lambda}(x_0)-\mathbb{E}^x \left[ e^{-\lambda \tau_D} \left( U^{\lambda}(y-X_{\tau_D})-U^{\lambda}(x_0)\right) \right] \nonumber \\ &+ U^{\lambda}(x_0) \mathbb{E}^x \left[ 1-e^{-\lambda \tau_D} \right]. \end{align} We now let $\lambda \to 0$. The limit of the left-hand side is well defined and equals $G_D(x,y)$. From \autoref{lem:lambdaUlambda_lem} we get
\begin{align*} U^{\lambda}(x_0) \mathbb{E}^x \left[ 1-e^{-\lambda \tau_D} \right] \leqslant \lambda U^{\lambda}(x_0) \sup_{x \in {\R^{d}}} \mathbb{E}^x \tau_D \xrightarrow{\lambda \to 0} 0. \end{align*}
Moreover, from \autoref{lem:compensation_lemma} we obtain that \begin{align} \lim_{\lambda \to 0} \(U^{\lambda}(y-x)- U^{\lambda}(x_0) \) = W_{x_0}(y-x). \end{align}
It remains to show the convergence of the middle term of \eqref{compensation_proof}. Since $U^{\lambda}$ is radially decreasing, $U^{\lambda}(y-X_{\tau_D})-U^{\lambda}(x_0)$ is non-negative on the event $\{|y-X_{\tau_D}|\leqslant|x_0|\}$ and non-positive on its complement. By \autoref{lem:compensation_lemma} and the monotone convergence theorem
\begin{align*} &\lim_{\lambda \to 0} \mathbb{E}^x \left[ e^{-\lambda \tau_D} \left(
U^{\lambda}(y-X_{\tau_D})-U^{\lambda}(x_0)\right);|y-X_{\tau_D}| < |x_0|\right] \\
= &\mathbb{E}^x \left[ W_{x_0}(y-X_{\tau_D});|y-X_{\tau_D}| < |x_0|\right] \leqslant W_{x_0}(\delta_D(y)) < \infty\,. \end{align*}
Observe that the left-hand side of \eqref{compensation_proof} converges to $G_D(x,y)$, which is finite. The remaining expectation on the right-hand side converges as well by the monotone convergence theorem, and since all the other terms are finite, its limit is also finite. We obtain \begin{align*}
\lim_{\lambda \to 0} \mathbb{E}^x \left[ e^{-\lambda \tau_D} \left( U^{\lambda}(y-X_{\tau_D})-U^{\lambda}(x_0)\right) \right] = \mathbb{E}^x W_{x_0}(y-X_{\tau_D})\,, \end{align*}
which ends the proof. \end{proof}
\end{document}
Penn/Temple Probability Seminar
Feb 18 @ Penn: Jiaoyang Huang(Institute for Advanced Study)
Extreme eigenvalue distributions of sparse random graphs
I will discuss the extreme eigenvalue distributions of adjacency matrices of sparse random graphs, in particular the Erdős–Rényi graphs $G(N,p)$ and the random $d$-regular graphs. For Erdős–Rényi graphs, there is a crossover in the behavior of the extreme eigenvalues. When the average degree $Np$ is much larger than $N^{1/3}$, the extreme eigenvalues have asymptotically Tracy-Widom fluctuations, the same as for the Gaussian Orthogonal Ensemble. However, when $N^{2/9}\ll Np\ll N^{1/3}$ the extreme eigenvalues have asymptotically Gaussian fluctuations. The extreme eigenvalues of random $d$-regular graphs are more rigid: we prove that in the regime $N^{2/9}\ll d\ll N^{1/3}$ the extremal eigenvalues are concentrated at scale $N^{-2/3}$ and their fluctuations are governed by the Tracy-Widom statistics. Thus, in the same regime of $d$, $52\%$ of all $d$-regular graphs have the second-largest eigenvalue strictly less than $2\sqrt{d-1}$. These are based on joint works with Roland Bauerschmidt, Antti Knowles, Benjamin Landon and Horng-Tzer Yau.
Feb 11 @ Temple: Vladislav Kargin (Binghamton, SUNY)
Entropy of ribbon tilings
I will talk about ribbon tilings, which were originally introduced and studied by Pak and Sheffield. These are a generalization of domino tilings which, unfortunately, lacks the relations to determinants and spanning trees but still retains some of the nice properties of domino tilings. I will explain how ribbon tilings are connected to multidimensional heights and acyclic orientations, and present some results about the enumeration of these tilings on simple regions. Joint work with Yinsong Chen.
Feb 4 @ Penn: Jian Song(Shandong University)
Scaling limit of a directed polymer among a Poisson field of independent walks
We consider a directed polymer model in dimension 1+1, where the disorder is given by the occupation field of a Poisson system of independent random walks on Z. In a suitable continuum and weak disorder limit, we show that the family of quenched partition functions of the directed polymer converges to the Stratonovich solution of a multiplicative stochastic heat equation with a Gaussian noise whose space-time covariance is given by the heat kernel.
Jan 28 @ Temple: Konstantin Matetski (Columbia)
The KPZ fixed point
The KPZ universality class is a broad collection of models, which includes directed random polymers, interacting particle systems and random interface growth, characterized by unusual scale of fluctuations which also appear in the random matrix theory. The KPZ fixed point is a scaling invariant Markov process which is the conjectural universal limit of all models in the class. A complete description of the KPZ fixed point was obtained in a joint work with Jeremy Quastel and Daniel Remenik. In this talk I will describe how the KPZ fixed point was derived by solving a special model in the class called TASEP.
Jan 21 @ Penn: Jonathan Weare (Courant)
Fast randomized iterative numerical linear algebra for quantum chemistry (and other applications)
I will discuss a family of recently developed stochastic techniques for linear algebra problems involving very large matrices. These methods can be used to, for example, solve linear systems, estimate eigenvalues/vectors, and apply a matrix exponential to a vector, even in cases where the desired solution vector is too large to store. The first incarnations of this idea appeared for dominant eigenproblems arising in statistical physics and in quantum chemistry and were inspired by the real space diffusion Monte Carlo algorithm which has been used to compute chemical ground states since the 1970's. I will discuss our own general framework for fast randomized iterative linear algebra as well as share a very partial explanation for their effectiveness. I will also report on the progress of an ongoing collaboration aimed at developing fast randomized iterative schemes specifically for applications in quantum chemistry. This talk is based on joint work with Lek-Heng Lim, Timothy Berkelbach, Sam Greene, and Rob Webber.
Dec 3 @ Penn: Eyal Lubetzky (Courant)
Maximum of 3D Ising interfaces
Consider the random surface separating the plus and minus phases, above and below the $xy$-plane, in the low temperature Ising model in dimension $d\geq 3$. Dobrushin (1972) showed that if the inverse-temperature $\beta$ is large enough then this interface is localized: it has $O(1)$ height fluctuations above a fixed point, and its maximum height on a box of side length $n$ is $O_P ( \log n )$.
We study the large deviations of the interface in Dobrushin's setting, and derive a shape theorem for its ``pillars'' conditionally on reaching an atypically large height. We use this to obtain a law of large numbers for the maximum height $M_n$ of the interface: $M_n/ \log n$ converges to $c_\beta$ in probability, where $c_\beta$ is given by a large deviation rate in infinite volume. Furthermore, the sequence $(M_n - E[M_n])_{n\geq 1}$ is tight, and even though this sequence does not converge, its subsequential limits satisfy uniform Gumbel tail bounds.
Joint work with Reza Gheissari.
Nov 19 @ Temple: Yu Gu (CMU)
The Edwards-Wilkinson limit of the KPZ equation in d>1
In this talk, I will explain some recent work where we prove that in a certain weak disorder regime, the KPZ equation scales to the Edwards-Wilkinson equation in d>1.
Nov 12 @ Penn: Changji Xu (Chicago)
Sharp threshold for the Ising perceptron model
Consider the discrete cube $\{-1,1\}^N$ and a random collection of half spaces which includes each half space $H(x) := \{y \in \{-1,1\}^N : x \cdot y \geq \kappa \sqrt{N}\}$ for $x \in \{-1,1\}^N$ independently with probability $p$. Is the intersection of these half spaces empty? This is called the Ising perceptron model under Bernoulli disorder. We prove that this event has a sharp threshold; that is, the probability that the intersection is empty increases quickly from $\epsilon$ to $1-\epsilon$ when $p$ increases only by a factor of $1 + o(1)$ as $N \to \infty$.
Nov 5 @ Temple: Michael Damron (Georgia Tech)
Absence of backward infinite paths in first-passage percolation in arbitrary dimension
In first-passage percolation (FPP), one places weights (t_e) on the edges of Z^d and considers the induced metric. Optimizing paths for this metric are called geodesics, and infinite geodesics are infinite paths all whose finite subpaths are geodesics. It is a major open problem to show that in two dimensions, with i.i.d. continuous weights, there are no bigeodesics (doubly-infinite geodesics). In this talk, I will describe work on bigeodesics in arbitrary dimension using "geodesic graph'' measures introduced in '13 in joint work with J. Hanson. Our main result is that these measures are supported on graphs with no doubly-infinite paths, and this implies that bigeodesics cannot be constructed in a translation-invariant manner in any dimension as limits of point-to-hyperplane geodesics. Because all previous works on bigeodesics were for two dimensions and heavily used planarity and coalescence, we must develop new tools based on the mass transport principle. Joint with G. Brito (Georgia Tech) and J. Hanson (CUNY).
Oct 29 @ Penn: Eliran Subag (Courant)
Geometric TAP approach for spherical spin glasses
The celebrated Thouless-Anderson-Palmer approach suggests a way to relate the free energy of a mean-field spin glass model to the solutions of certain self-consistency equations for the local magnetizations. In this talk I will first describe a new geometric approach to define free energy landscapes for general spherical mixed p-spin models and derive from them a generalized TAP representation for the free energy. I will then explain how these landscapes are related to various concepts and problems: the pure states decomposition, ultrametricity, temperature chaos, and optimization of full-RSB models.
Oct 15 @ Penn: Tatyana Shcherbyna(Princeton)
Local regime of random band matrices
Random band matrices (RBM) are natural intermediate models to study eigenvalue statistics and quantum propagation in disordered systems, since they interpolate between mean-field type Wigner matrices and random Schrödinger operators. In particular, RBM can be used to model the Anderson metal-insulator phase transition (crossover) even in 1d. In this talk we will discuss some recent progress in the application of the supersymmetric method (SUSY) and the transfer matrix approach to the analysis of local spectral characteristics of some specific types of 1d RBM.
Oct 8 @ Temple: Li-Cheng Tsai (Rutgers)
Lower-tail large deviations of the KPZ Equation
Consider the solution of the KPZ equation with the narrow wedge initial condition. We prove the one-point, lower-tail Large Deviation Principle (LDP) of the solution, with time $ t\to\infty $ being the scaling parameter, and with an explicit rate function. This result confirms existing physics predictions. We utilize a formula from Borodin and Gorin (2016) to convert the LDP of the KPZ equation to calculating an exponential moment of the Airy point process, and analyze the latter via stochastic Airy operator and Riccati transform.
Oct 1 @ Penn: Amir Dembo (Stanford)
Dynamics for spherical spin glasses: Disorder dependent initial conditions
In this talk, based on a joint work with Eliran Subag, I will explain how to rigorously derive the integro-differential equations that arise in the thermodynamic limit of the empirical correlation and response functions for Langevin dynamics in mixed spherical p-spin disordered mean-field models.
I will then compare the large time asymptotic of these equations in case of a uniform (infinite-temperature) starting point, to what one obtains when starting within one of the spherical bands on which the Gibbs measure concentrates at low temperature, commenting on the existence of an aging phenomenon, and on the relations with the recently discovered geometric structure of the Gibbs measures at low temperature.
Sep 24 @ Temple: Axel Saenz-Rodriguez (Virginia)
Stationary Dynamics in Finite Time for the Totally Asymmetric Simple Exclusion Process
The totally asymmetric simple exclusion process (TASEP) is a Markov process that is the prototypical model for transport phenomena in non-equilibrium statistical mechanics. It was first introduced by Spitzer in 1970, and in the last 20 years it has gained a strong resurgence in the emerging field of "Integrable Probability" due to exact formulas of Johansson in 2000 and Tracy and Widom in 2007 (among other related formulas and results). In particular, these formulas led to great insights regarding fluctuations related to the Tracy-Widom distribution and scalings to the Kardar-Parisi-Zhang (KPZ) stochastic partial differential equation.
In this joint work with Leonid Petrov (University of Virginia), we introduce a new and simple Markov process that maps the distribution of the TASEP at time $t>0$, given step initial time data, to the distribution of the TASEP at some earlier time $t-s>0$. This process "back in time" is closely related to the Hammersley Process introduced by JM Hammersley in 1972, which later found a resurgence in the longest increasing subsequence problem in the work of Aldous and Diaconis in 1995. Hence, we call our process the Backwards Hammersley-type Process (BHP). As a fun application of our results, we have a new proof of the limit shape for the TASEP. The central objects in our constructions and proofs are the Schur point processes and the Yang-Baxter equation for the sl_2 quantum affine Lie algebra. In this talk, we will discuss the background in more detail and will explain the main ideas behind the constructions and proof.
September 17 @Penn: Izabella Stuhl(PSU)
Hard-core models in discrete 2D
Do hard disks in the plane admit a unique Gibbs measure at high density? This is one of the outstanding open problems of statistical mechanics, and it seems natural to approach it by requiring the centers to lie in a fine lattice; equivalently, we may fix the lattice, but let the Euclidean diameter $D$ of the hard disks tend to infinity. Unlike most models in statistical physics, we find non-universality and connections to number theory, with different new phenomena arising in the triangular lattice $\mathbb{A}_2$, the square lattice $\mathbb{Z}^2$ and the hexagonal tiling $\mathbb{H}_2$.
In particular, number-theoretic properties of the exclusion diameter $D$ turn out to be important. We analyze high-density hard-core Gibbs measures via Pirogov-Sinai theory. The first step is to identify periodic ground states, i.e., maximal density disk configurations which cannot be locally 'improved'. A key finding is that only certain `dominant' ground states, which we determine, generate nearby Gibbs measures. Another important ingredient is the Peierls bound separating ground states from other admissible configurations.
Answers are provided in terms of Eisenstein primes for $\mathbb{A}_2$ and norm equations in the ring $\mathbb{Z}[\sqrt{3}]$ for $\mathbb{Z}^2$. The number of high-density hard-core Gibbs measures grows indefinitely with $D$ but non-monotonically. In $\mathbb{Z}^2$ we analyze the phenomenon of 'sliding' and show it is rare.
This is a joint work with A. Mazel and Y. Suhov.
Sep 10 @Temple: Pierre Yves Gaudreau Lamarre(Princeton)
Semigroups for One-Dimensional Schrödinger Operators with Multiplicative White Noise
In this talk, we are interested in the semigroup theory of continuous one-dimensional random Schrödinger Operators with white noise. We will begin with a brief reminder of the rigorous definition of these operators as well as some of the problems in which they naturally arise. Then, we will discuss the proof of a Feynman-Kac formula describing their semigroups. In closing, we will showcase an application of this new semigroup theory to the study of rigidity (in the sense of Ghosh-Peres) of random Schrödinger eigenvalue point processes.
Some of the results discussed in this talk are joint work with Promit Ghosal (Columbia) and Yuchen Liao (Michigan).
September 03 @Penn: Ewain Gwynne (Cambridge)
Existence and uniqueness of the Liouville quantum gravity metric for $\gamma \in (0,2)$
We show that for each $\gamma \in (0,2)$, there is a unique metric associated with $\gamma$-Liouville quantum gravity (LQG). More precisely, we show that for the Gaussian free field $h$ on a planar domain $U$, there is a unique random metric $D_h = ``e^{\gamma h} (dx^2 + dy^2)"$ on $U$ which is uniquely characterized by a list of natural axioms.
The $\gamma$-LQG metric can be constructed explicitly as the scaling limit of \emph{Liouville first passage percolation} (LFPP), the random metric obtained by exponentiating a mollified version of the Gaussian free field. Earlier work by Ding, Dub\'edat, Dunlap, and Falconet (2019) showed that LFPP admits non-trivial subsequential limits. We show that the subsequential limit is unique and satisfies our list of axioms. In the case when $\gamma = \sqrt{8/3}$, our metric coincides with the $\sqrt{8/3}$-LQG metric constructed in previous work by Miller and Sheffield.
Based on four joint papers with Jason Miller, one joint paper with Julien Dubedat, Hugo Falconet, Josh Pfeffer, and Xin Sun, and one joint paper with Josh Pfeffer.
April 30 @ Temple: Tom Alberts (Utah)
The geometry of the last passage percolation problem
Last passage percolation is a well-studied model in probability theory that is simple to state but notoriously difficult to analyze. In recent years it has been shown to be related to many seemingly unrelated things: longest increasing subsequences in random permutations, eigenvalues of random matrices, and long-time asymptotics of solutions to stochastic partial differential equations. Much of the previous analysis of the last passage model has been made possible through connections with representation theory of the symmetric group that comes about for certain exact choices of the random input into the last passage model. This has the disadvantage that if the random inputs are modified even slightly then the analysis falls apart. In an attempt to generalize beyond exact analysis, recently my collaborator Eric Cator (Radboud University, Nijmegen) and I have started using tools of tropical geometry to analyze the last passage model. The tools we use to this point are purely geometric, but have the potential advantage that they can be used for very general choices of random inputs. I will describe the very pretty geometry of the last passage model and our work to use it to produce probabilistic information.
April 16 @ Temple: Jessica Lin (McGill)
Stochastic homogenization for reaction-diffusion equations
I will present several results concerning the stochastic homogenization for reaction-diffusion equations. We consider reaction-diffusion equations with nonlinear, heterogeneous, stationary-ergodic reaction terms. Under certain hypotheses on the environment, we show that the typical large-time, large-scale behavior of solutions is governed by a deterministic front propagation. Our arguments rely on analyzing a suitable analogue of "first passage times" for solutions of reaction-diffusion equations. In particular, under these hypotheses, solutions of heterogeneous reaction-diffusion equations with front-like initial data become asymptotically front-like with a deterministic speed. This talk is based on joint work with Andrej Zlatos.
April 9 @ Temple: Guillaume Dubach (Courant)
Eigenvectors of non-Hermitian random matrices
Eigenvectors of non-Hermitian matrices are non-orthogonal, and their distance to a unitary basis can be quantified through the matrix of overlaps. These variables also quantify the stability of the spectrum, and characterize the joint eigenvalue increments under Dyson-type dynamics. Overlaps first appeared in the physics literature, when Chalker and Mehlig calculated their conditional expectation for complex Ginibre matrices (1998). For the same model, we extend their results by deriving the distribution of the overlaps and their correlations (joint work with P. Bourgade). Similar results are expected to hold in other integrable models, and some have been established for quaternionic Gaussian matrices.
April 2 @ Temple: Timo Seppalainen (UW-Madison)
Geometry of the corner growth model
The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
March 26 @ Penn: Xiaoming Song (Drexel)
Large deviations for functionals of Gaussian processes
We prove large deviation principles for $\int_0^t \gamma(X_s)ds$, where $X$ is a $d$-dimensional Gaussian process and $\gamma(x)$ takes the form of the Dirac delta function $\delta(x)$, $|x|^{-\beta}$ with $\beta\in (0,d)$, or $\prod_{i=1}^d |x_i|^{-\beta_i}$ with $\beta_i\in (0,1)$. In particular, large deviations are obtained for the functionals of $d$-dimensional fractional Brownian motion, sub-fractional Brownian motion and bi-fractional Brownian motion. As an application, the critical exponential integrability of the functionals is discussed.
March 19 @ Penn: Fan Yang (UCLA)
Delocalization of random band matrices
We consider Hermitian random band matrices $H$ in dimension $d$, where the entries $h_{xy}$, indexed by $x,y \in [1,N]^d$, vanish if $|x-y|$ exceeds the band width $W$. It is conjectured that a sharp transition of the eigenvalue and eigenvector statistics occurs at a critical band width $W_c$, with $W_c=\sqrt{N}$ in $d=1$, $W_c=\sqrt{\log N}$ in $d=2$, and $W_c=O(1)$ in $d\ge 3$. Recently, Bourgade, Yau and Yin proved the eigenvector delocalization for 1D random band matrices with generally distributed entries and band width $W\gg N^{3/4}$. In this talk, we will show that for $d\ge 2$, the delocalization of eigenvectors in a certain averaged sense holds under the condition $W\gg N^{2/(2+d)}$. Based on joint work with Bourgade, Yau and Yin.
February 26 @Penn: Tiefeng Jiang (Minnesota)
Distances between Random Orthogonal Matrices and Independent Normals
We study the distance between Haar-orthogonal matrices and independent normal random variables.The distance is measured by the total variation distance, the Kullback-Leibler distance, the Hellinger distance and the Euclidean distance. They appear different features. Optimal rates are obtained. This is a joint work with Yutao Ma.
February 19 @Penn: Duncan Dauvergne (Toronto)
Asymptotic zero distribution of random polynomials
It is well known that the roots of a random polynomial with i.i.d. coefficients tend to concentrate near the unit circle. In particular, the zero measures of such random polynomials converge almost surely to normalized Lebesgue measure on the unit circle if and only if the underlying coefficient distribution satisfies a particular moment condition. In this talk, I will discuss how to generalize this result to random sums of orthogonal (or asymptotically minimal) polynomials.
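A quick numerical illustration of the clustering phenomenon mentioned in the abstract (my own sketch, not from the talk; the degree, the Gaussian coefficient law, the seed and the annulus width are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
deg = 300
coeffs = rng.standard_normal(deg + 1)  # i.i.d. N(0,1) coefficients
roots = np.roots(coeffs)               # the deg complex roots of the polynomial
moduli = np.abs(roots)

# Fraction of roots falling in a thin annulus around the unit circle.
frac = np.mean((moduli > 0.9) & (moduli < 1.1))
print(f"{frac:.2f} of the {roots.size} roots satisfy 0.9 < |z| < 1.1")
```

For i.i.d. coefficients with finite logarithmic moment, this fraction tends to $1$ as the degree grows, consistent with the convergence of the zero measures to normalized Lebesgue measure on the unit circle.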
February 13 @Penn Colloquium: Bálint Virág (Toronto)
Random interfaces and geodesics
Coastlines, the edge of burned paper, the boundary of coffee spots, the game of Tetris: random interfaces surround us. Still, the mathematical theory of the most important case, the "KPZ universality class", has only been cracked very recently. This class is related to traffic models, longest increasing subsequences of random permutations, the RSK correspondence of combinatorics, last passage percolation, integrable systems and the stochastic heat equation. A new random metric in the plane, the "directed landscape" captures the essence of these problems.
February 5 @Penn: Xin Sun (Columbia)
Conformal embedding and percolation on the uniform triangulation
Following Smirnov's proof of Cardy's formula and Schramm's discovery of SLE, a thorough understanding of the scaling limit of critical percolation on the regular triangular lattice has been achieved. Smirnov's proof in fact gives a discrete approximation of the conformal embedding, which we call the Cardy embedding. In this talk I will present a joint project with Nina Holden where we show that the uniform triangulation under the Cardy embedding converges to the Brownian disk under the conformal embedding. Moreover, we prove a quenched scaling limit result for critical percolation on uniform triangulations. I will also explain how this result fits into the larger picture of random planar maps and Liouville quantum gravity.
January 29 @Temple: Alex Moll (Northeastern)
Fractional Gaussian Fields in Geometric Quantization and the Semi-Classical Analysis of Coherent States
The Born Rule (1926) formalized in von Neumann's spectral theorem (1932) gives a precise definition of the random outcomes of quantum measurements as random variables from the spectral theory of non-random matrices. In [M. 2017], the Born rule provided a way to derive limit shapes and global fractional Gaussian field fluctuations for a large class of point processes from the first principles of geometric quantization and semi-classical analysis of coherent states. Rather than take a point process as a starting point, these point process are realized as auxiliary objects in an analysis that starts instead from a classical Hamiltonian system with possibly infinitely-many degrees of freedom that is not necessarily Liouville integrable. In this talk, we present these results with a focus on the case of one degree of freedom, where the core ideas in the arguments are faithfully represented.
January 22 @Penn: Xinyi Li (Chicago)
One-point function estimates and natural parametrization for loop-erased random walk in three dimensions
In this talk, I will talk about loop-erased random walk (LERW) in three dimensions. I will first give an asymptotic estimate on the probability that 3D LERW passes a given point (commonly referred to as the one-point function). I will then talk about how to apply this estimate to show that 3D LERW as a curve converges to its scaling limit in natural parametrization. If time permits, I will also talk about the asymptotics of non-intersection probabilities of 3D LERW with simple random walk. This is a joint work with Daisuke Shiraishi (Kyoto).
December 4 @ Temple: Pascal Maillard (CRM)
The algorithmic hardness threshold for continuous random energy models
I will report on recent work with Louigi Addario-Berry on algorithmic hardness for finding low-energy states in the continuous random energy model of Bovier and Kurkova. This model can be regarded as a toy model for strongly correlated random energy landscapes such as the Sherrington--Kirkpatrick model. We exhibit a precise and explicit hardness threshold: finding states of energy above the threshold can be done in linear time, while below the threshold this takes exponential time for any algorithm with high probability. I further discuss what insights this yields for understanding algorithmic hardness thresholds for random instances of combinatorial optimization problems.
November 27 @ Temple: Sunder Sethuraman (Arizona)
Stick breaking processes, clumping, and Markov chain occupation laws
A GEM (Griffiths-Engen-McCloskey) sequence specifies the (random) proportions in splitting a `resource' infinitely many ways. Such sequences form the backbone of `stick breaking' representations of Dirichlet processes used in nonparametric Bayesian statistics. In this talk, we consider the connections between a class of generalized `stick breaking' processes, an intermediate structure via `clumped' GEM sequences, and the occupation laws of certain time-inhomogeneous Markov chains.
November 13 @ Penn: Abram Magner (Purdue)
Inference and Compression Problems on Dynamic Networks
Networks in the real world are dynamic -- nodes and edges are added and removed over time, and time-varying processes (such as epidemics) run on them. In this talk, I will describe mathematical aspects of some of my recent work with collaborators on statistical inference and compression problems that involve this time-varying aspect of networks. I will focus on two related lines of work: (i) network archaeology -- broadly concerning problems of dynamic graph model validation and inference about previous states of a network given a snapshot of its current state, and (ii) structural compression -- for a given graph model, exhibit an efficient algorithm for invertibly mapping network structures (i.e., graph isomorphism types) to bit strings of minimum expected length. For both classes of problems, I give both information-theoretic limits and efficient algorithms for achieving those limits. Finally, I briefly describe some ongoing projects that continue these lines of work.
November 6 @ Penn: Philippe Sosoe (Cornell)
Applications of CLTs and homogenization for Dyson Brownian Motion to Random Matrix Theory
I will explain how two recent technical developments in Random Matrix Theory allow for a precise description of the fluctuations of single eigenvalues in the spectrum of large symmetric matrices. No prior knowledge of random matrix theory will be assumed. (Based on joint work with B Landon and HT Yau).
October 30 @ Temple: Anirban Basak (ICTS-TIFR)
Sharp transition of invertibility of sparse random matrices
Consider an $n \times n$ matrix with i.i.d.~Bernoulli($p$) entries. It is well known that for $p= \Omega(1)$, i.e.~$p$ is bounded below by some positive constant, the matrix is invertible with high probability. If $p \ll \frac{\log n}{n}$ then the matrix contains zero rows and columns with high probability and hence it is singular with high probability. In this talk, we will discuss the sharp transition of the invertibility of this matrix at $p =\frac{\log n}{n}$. This phenomenon extends to the adjacency matrices of directed and undirected Erd\H{o}s-R\'{e}nyi graphs, and random bipartite graphs. Joint work with Mark Rudelson.
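A crude Monte Carlo experiment (illustrative only; nothing like the proof technique) can display the transition numerically: estimate the fraction of singular Bernoulli(p) matrices as p sweeps across multiples of log(n)/n. The function name and parameters below are my own.

```python
import numpy as np

def singular_fraction(n, p, trials=200, seed=0):
    """Monte Carlo estimate of P(an n x n Bernoulli(p) 0/1 matrix is singular)."""
    rng = np.random.default_rng(seed)
    singular = 0
    for _ in range(trials):
        m = (rng.random((n, n)) < p).astype(float)
        if np.linalg.matrix_rank(m) < n:
            singular += 1
    return singular / trials

# Sweep p across multiples of log(n)/n to see the transition.
n = 40
fractions = {c: singular_fraction(n, c * np.log(n) / n, trials=50) for c in (0.5, 1.0, 2.0)}
```

Below the threshold the singular fraction should be close to 1 (zero rows appear), and above it close to 0, though for small n the crossover is blurred.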
October 23 @ Penn: Julian Gold (Northwestern)
Isoperimetric shapes in supercritical bond percolation
We study the isoperimetric subgraphs of the infinite cluster $\textbf{C}_\infty$ of supercritical bond percolation on $\mathbb{Z}^d$, $d \geq 3$. We prove a shape theorem for these random graphs, showing that upon rescaling they tend almost surely to a deterministic shape. This limit shape is itself an isoperimetric set for a norm we construct. In addition, we obtain sharp asymptotics for a modification of the Cheeger constant of $\textbf{C}_\infty \cap [-n,n]^d$, settling a conjecture of Benjamini for this modified Cheeger constant. Analogous results are shown for the giant component in dimension two, where we use the original definition of the Cheeger constant, and a more complicated continuum isoperimetric problem emerges as a result.
October 9 @ Temple: Janos Englander (Boulder)
The coin turning walk and its scaling limit.
Given a sequence of numbers p_n ∈ [0, 1], consider the following experiment. First, we flip a fair coin and then, at step n, we turn the coin over to the other side with probability p_n, n > 1, independently of the sequence of the previous terms. What can we say about the distribution of the empirical frequency of heads as n → ∞? We show that a number of phase transitions take place as the turning gets slower (i.e. p_n is getting smaller), leading first to the breakdown of the Central Limit Theorem and then to that of the Law of Large Numbers. It turns out that the critical regime is p_n = const/n. Among the scaling limits, we obtain Uniform, Gaussian, Semicircle and Arcsine laws. The critical regime is particularly interesting: when the corresponding random walk is considered, an interesting process emerges as the scaling limit; also, a connection with Polya urns will be mentioned. This is joint work with S. Volkov (Lund) and Z. Wang (Boulder).
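The experiment is easy to simulate; here is a minimal sketch (my own, not the authors' analysis) of the empirical frequency of heads in the critical regime p_n = c/n.

```python
import random

def coin_turning_heads_frequency(n, c, seed=0):
    """Empirical frequency of heads after n steps of the coin turning
    experiment with turning probabilities p_k = min(1, c/k)."""
    rng = random.Random(seed)
    state = rng.random() < 0.5  # fair initial flip: True = heads
    heads = 1 if state else 0
    for k in range(2, n + 1):
        if rng.random() < min(1.0, c / k):
            state = not state  # turn the coin over
        heads += state
    return heads / n
```

With c = 0 the coin is never turned and the frequency is 0 or 1; larger c pushes the frequency toward 1/2, and in between the limit laws described in the abstract appear.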
October 2 @ Temple: Firas Rassoul-Agha (Utah)
Shifted weights and restricted path length in first-passage percolation
We study standard first-passage percolation via related optimization problems that restrict path length. The path length variable is in duality with a shift of the weights. This puts into a convex duality framework old observations about the convergence of geodesic length due to Hammersley, Smythe and Wierman, and Kesten. We study the regularity of the time constant as a function of the shift of weights. For unbounded weights, this function is strictly concave and in case of two or more atoms it has a dense set of singularities. For any weight distribution with an atom at the origin there is a singularity at zero, generalizing a result of Steele and Zhang for Bernoulli FPP. The regularity results are proved by the van den Berg-Kesten modification argument. This is joint work with Arjun Krishnan and Timo Seppalainen.
September 25 @ Penn: Julian Sahasrabudhe (Cambridge)
Zeros of polynomials, the distribution of coefficients, and a problem of J.E. Littlewood
While it is an old and fundamental fact that every (nice enough) even function $f : [-\pi,\pi] \rightarrow \mathbb{C}$ may be uniquely expressed as a cosine series \[ f(\theta) = \sum_{r \geq 0 } C_r\cos(r\theta), \] the relationship between the sequence of coefficients $(C_r)_{r \geq 0 }$ and the behavior of the function $f$ remains mysterious in many aspects. We mention two variations on this theme. First a more probabilistic setting: what can be said about a random variable if we constrain the roots of the probability generating function? We then settle on our main topic; a solution to a problem of J.E. Littlewood about the behavior of the zeros of cosine polynomials with coefficients $C_r \in \{0,1\}$.
September 18 @ Temple: Arjun Krishnan (Rochester)
Stationary coalescing walks on the lattice
Consider a measurable dense family of semi-infinite nearest-neighbor paths on the integer lattice in d dimensions. If the measure on the paths is translation invariant, we completely classify their collective behavior in d=2 under mild assumptions. We use our theory to classify the behavior of semi-infinite geodesics in random translation invariant metrics on the lattice; it applies, in particular, to first- and last-passage percolation. We also construct several examples displaying unexpected behaviors. (joint work with Jon Chaika)
September 11 @ Temple: Thomas Leblé (NYU)
The Sine-beta process: DLR equations and applications
One-dimensional log-gases, or Beta-ensembles, are statistical physics toy models finding their incarnation in random matrix theory. Their limit behavior at microscopic scale is known as the Sine-beta process, its original description involves systems of coupled SDE's. We give a new description of Sine-beta as an "infinite volume Gibbs measure", using the Dobrushin-Lanford-Ruelle (DLR) formalism, and use it to prove the "rigidity" of the process, in the sense of Ghosh-Peres. If time permits, I will mention another application to the study of fluctuations of linear statistics. Joint work with David Dereudre, Adrien Hardy, and Mylene Maida.
September 4 @ Penn: Swee Hong Chan (Cornell)
In between random walk and rotor walk in the square lattice
How much randomness is needed to prove a scaling limit result? In this talk we consider this question for a family of random walks on the square lattice. When the randomness is turned to the maximum, we have the symmetric random walk, which is known to scale to a two-dimensional Brownian motion. When the randomness is turned to zero, we have the rotor walk, for which its scaling limit is an open problem. This talk is about random walks that lie in between these two extreme cases and for which we can prove their scaling limit. This is a joint work with Lila Greco, Lionel Levine, and Boyao Li.
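For concreteness, here is one illustrative convention for a rotor walk (a sketch of the zero-randomness endpoint of the family, not the interpolating walks of the talk): each site carries a rotor cycling through the four directions, the only randomness being the i.i.d. initial rotor directions.

```python
import random

def rotor_walk(steps, seed=0, start=(0, 0)):
    """Rotor walk on Z^2 with i.i.d. uniform initial rotor directions.

    The initial rotors are the only randomness; thereafter, on each visit
    a site's rotor advances one notch through N, E, S, W and the walker
    steps in the rotor's new direction.
    """
    rng = random.Random(seed)
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    rotors = {}  # site -> index of last direction used
    x, y = start
    for _ in range(steps):
        if (x, y) not in rotors:
            rotors[(x, y)] = rng.randrange(4)  # sample initial rotor on first visit
        r = (rotors[(x, y)] + 1) % 4
        rotors[(x, y)] = r
        x, y = x + directions[r][0], y + directions[r][1]
    return (x, y)
```

Given the initial rotors, the motion is fully deterministic, which is exactly why its scaling limit is hard: no fresh randomness enters at any step.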
May 1 @ Penn: Kavita Ramanan (Brown)
Local characterization of dynamics on sparse graphs
Given a sequence of regular graphs G_n whose size goes to infinity, and dynamics that are suitably symmetric, a key question is to understand the limiting dynamics of a typical particle in the system. The case when each G_n is a clique falls under the purview of classical mean-field limits, and it is well known that (under suitable assumptions) the dynamics of a typical particle is governed by a nonlinear Markov process. In this talk, we consider the complementary sparse case when G_n converges in a suitable sense to a countably infinite locally finite graph G, and describe various limit results, both in the setting of diffusions and Markov chains. In particular, when G is a d-regular tree, we obtain an autonomous characterization of the local dynamics of the neighborhood of a typical node. We also obtain a local characterization for the annealed dynamics on a class of Galton-Watson trees. The proofs rely on a certain Markov random field structure of the dynamics on the countably infinite graph G, which may be of independent interest. This is based on various joint works with Ankan Ganguly, Dan Lacker, Mitchell Wortsman and Ruoyu Wu.
Apr 24 @ Penn: Zhenfu Wang (Penn)
Propagation of Chaos via Large Deviation Principle
We present a new method to derive quantitative estimates proving the propagation of chaos for large stochastic or deterministic systems of interacting particles. Our approach requires proving large deviation estimates for non-continuous potentials modified by the limiting law, but it leads to explicit bounds on the relative entropy between the joint law of the particles and the tensorized law at the limit; and it can be applied to very singular kernels that are only in negative Sobolev spaces and include the Biot-Savart law for 2D Navier-Stokes and 2D Euler. Joint work with P.-E. Jabin.
Apr 17 @ Penn: Jay Pantone (Dartmouth)
Local Patterns in Chord Diagrams
A chord diagram with n chords is a set of 2n points in a line connected in n pairs. Chord diagrams, sometimes called matchings, play an important role in mathematical biology, knot theory, and combinatorics, and as a result they have been intensely studied by mathematicians, computer scientists, and biologists alike. We examine enumerative properties of families of chord diagrams that avoid local patterns. In particular, we prove that for all k, the generating function for chord diagrams in which every chord has length at least k is D-finite (and therefore the counting sequence satisfies a linear recurrence with polynomial coefficients). We conjecture that a similar but much more general statement is also true. The proof uses several interesting tools: finite state machines, the sieve method, creative telescoping, and D-finite closure properties. We also give examples of local patterns for which experimental evidence suggests that the generating function is non-D-finite, or worse. This is joint work with Peter Doyle and Everett Sullivan.
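A brute-force enumeration (illustrative only; far from the D-finiteness machinery of the talk) makes the avoidance condition concrete: count chord diagrams on 2n points in which every chord {i, j} has length |i - j| at least k.

```python
def count_chord_diagrams(n, k):
    """Count chord diagrams on 2n points (labeled 0..2n-1 on a line) in
    which every chord has length at least k, by brute force."""
    def rec(points):
        if not points:
            return 1
        total = 0
        first, rest = points[0], points[1:]
        for idx, partner in enumerate(rest):
            if partner - first >= k:  # chord {first, partner} is long enough
                total += rec(rest[:idx] + rest[idx + 1:])
        return total
    return rec(tuple(range(2 * n)))
```

With k = 1 there is no restriction and the count is the double factorial (2n-1)!! = 1, 3, 15, 105, ...; with k = 2 the counts begin 1, 5, ..., and the talk's theorem says each such sequence has a D-finite generating function.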
Apr 3 @ Penn: Josh Rosenberg (Penn)
Quenched survival of Bernoulli percolation on Galton-Watson trees
In this talk I will explore the subject of Bernoulli percolation on Galton-Watson trees. Letting $g(T,p)$ represent the probability a tree $T$ survives Bernoulli percolation with parameter $p$, we establish several results relating to the behavior of $g$ in the supercritical region. These include an expression for the right derivative of $g$ at criticality in terms of the martingale limit of $T$, a proof that $g$ is infinitely continuously differentiable in the supercritical region, and a proof that $g'$ extends continuously to the boundary of the supercritical region. Allowing for some mild moment constraints on the offspring distribution, each of these results is shown to hold for almost surely every Galton-Watson tree. This is based on joint work with Marcus Michelen and Robin Pemantle.
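As a toy companion to these results (which are quenched, i.e. hold for a fixed tree $T$), here is an annealed Monte Carlo sketch, my own and purely illustrative: estimate the probability that Bernoulli(p) bond percolation from the root reaches a given depth of a Galton-Watson tree, generating the tree lazily along open edges.

```python
import random

def percolation_survives(depth, p, offspring, rng):
    """Does Bernoulli(p) bond percolation from the root reach level `depth`
    in a Galton-Watson tree with the given offspring sampler?

    The tree is generated lazily, and exploration stops at the first
    surviving branch, which does not change the probability of the event.
    """
    if depth == 0:
        return True
    for _ in range(offspring(rng)):
        if rng.random() < p and percolation_survives(depth - 1, p, offspring, rng):
            return True
    return False

def estimate_g(depth, p, offspring, trials=1000, seed=0):
    """Monte Carlo estimate of the annealed survival probability to `depth`."""
    rng = random.Random(seed)
    hits = sum(percolation_survives(depth, p, offspring, rng) for _ in range(trials))
    return hits / trials
```

For a deterministic binary tree (offspring always 2) the critical parameter is 1/2, and for supercritical p the estimate stabilizes at a value strictly between 0 and 1 as the depth grows.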
Mar 27 @ Penn: Hanbaek Lyu (Ohio State)
Double jump phase transition in a random soliton cellular automaton
In this talk, we consider the soliton cellular automaton introduced by Takahashi and Satsuma in 1990 with a random initial configuration. We give multiple constructions of a Young diagram describing various statistics of the system in terms of familiar objects like birth-and-death chains and Galton-Watson forests. Using these ideas, we establish limit theorems showing that if the first $n$ boxes are occupied independently with probability $p\in(0,1)$, then the number of solitons is of order $n$ for all $p$, and the length of the longest soliton is of order $\log n$ for $p<1/2$, order $\sqrt{n}$ for $p=1/2$, and order $n$ for $p>1/2$. Additionally, we uncover a condensation phenomenon in the supercritical regime: For each fixed $j\geq 1$, the top $j$ soliton lengths have the same order as the longest for $p\leq 1/2$, whereas all but the longest have order at most $\log n$ for $p>1/2$. As an application, we obtain scaling limits for the lengths of the $k^{\text{th}}$ longest increasing and decreasing subsequences in a random stack-sortable permutation of length $n$ in terms of random walks and Brownian excursions.
Mar 22 @ Penn: Ira Gessel (Brandeis)
Rational functions with nonnegative power series coefficients
I will talk about rational power series in several variables with nonnegative power series coefficients. An example of such a series is 1/(1-x-y-z+4xyz), whose power series coefficients were proved nonnegative by Szego and Kaluza in 1933. I will discuss several methods for proving nonnegativity and also some conjectures.
Feb 20 @ Penn: Cheyne Homberger (Maryland)
Permuted Packings and Permutation Breadth
The breadth of a permutation p is the minimum value of |i - j| + |p(i) - p(j)|, taken over all relevant i and j. Breadth has important consequences to permutation pattern containment, and connections to plane tiling. In this talk we explore the breadth of random permutations using both probabilistic techniques and combinatorial geometry. In particular, we present the expected breadth of a random permutation, the proportion of permutations with a fixed breadth, and a constructive proof for maximizing unique large patterns in permutations. This talk is based on work with both David Bevan and Bridget Tenner and with Simon Blackburn and Pete Winkler.
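Computing the breadth statistic directly is straightforward; the following small sketch (my own, illustrative) takes a permutation in one-line notation.

```python
from itertools import combinations

def breadth(perm):
    """Breadth of a permutation given in one-line notation:
    min over i < j of |i - j| + |perm[i] - perm[j]|."""
    n = len(perm)
    return min(abs(i - j) + abs(perm[i] - perm[j])
               for i, j in combinations(range(n), 2))
```

Since indices and values are distinct, the breadth is always at least 2; the identity achieves exactly 2, while permutations of large breadth are the rare objects connected to tilings in the talk.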
Feb 13 @ Penn: Ewain Gwynne (MIT)
A mating-of-trees approach for graph distances and random walk on random planar maps
I will discuss a general strategy for proving estimates for a certain class of random planar maps, namely, those which can be encoded by a two-dimensional walk with i.i.d. increments via a ``mating-of-trees" type bijection. This class includes the uniform infinite planar triangulation (UIPT) and the infinite-volume limits of random planar maps sampled with probability proportional to the number of spanning trees, bipolar orientations, or Schnyder woods they admit. Using this strategy, we obtain non-trivial estimates for graph distances in certain natural non-uniform random planar maps. We also prove that random walk on the UIPT typically travels graph distance $n^{1/4 + o_n(1)}$ in $n$ units of time and that the spectral dimension of a class of random planar maps (including the UIPT) is a.s. equal to 2---i.e., the return probability to the starting point after $n$ steps is $n^{-1+o(1)}$. Our approach proceeds by way of a strong coupling of the encoding walk for the map with a correlated two-dimensional Brownian motion (Zaitsev, 1998), which allows us to compare our given map with the so-called mated-CRT map constructed from this correlated two-dimensional Brownian motion. The mated-CRT map is closely related to SLE-decorated Liouville quantum gravity due to results of Duplantier, Miller, and Sheffield (2014). So, we can analyze the mated-CRT map using continuum theory and then transfer to other random planar maps via strong coupling. We expect that this approach will have further applications in the future. Based on various joint works with Nina Holden, Tom Hutchcroft, Jason Miller, and Xin Sun.
Jan 30 @ Penn: David Burstein (Swarthmore)
Tools for constructing graphs with fixed degree sequences
Constructing graphs that resemble their empirically observed counterparts is integral for simulating dynamical processes that occur on networks. Since many real-world networks exhibit degree heterogeneity, we consider some challenges in randomly constructing graphs with a given bi-degree sequence in an unbiased way. In particular, we propose a novel method for the asymptotic enumeration of directed graphs that realize a bi-degree sequence, d, with maximum degree d_max = O(S^{1/2 - tau}) for an arbitrarily small positive number tau, where S is the number of edges; the previous best results allow for d_max = o(S^{1/3} ). Our approach is based on two key steps, graph partitioning and degree preserving switches. The former allows us to relate enumeration results to sequences that are easy to handle, while the latter facilitates expansions based on numbers of shared neighbors of pairs of nodes. We will then discuss the implications of our work in the context of other methods, such as Markov Chain Monte Carlo, for generating graphs with a prescribed degree sequence. Joint work with Jonathan Rubin.
Jan 16 @ Penn: Jeffrey Kuan (Columbia)
Algebraic constructions of Markov duality functions
Markov duality in spin chains and exclusion processes has found a wide variety of applications throughout probability theory. We review the duality of the asymmetric simple exclusion process (ASEP) and its underlying algebraic symmetry. We then explain how the algebraic structure leads to a wide generalization of models with duality, such as higher spin exclusion processes, zero range processes, stochastic vertex models, and their multi-species analogues.
Dec 5 @ Penn: Konstantinos Karatapanis (Penn)
One dimensional system arising in stochastic gradient descent
We consider SDEs of the form dX_t = |X_t|^k/t^gamma dt + 1/t^gamma dB_t, for a fixed k in [1, infty). We find the values of gamma in (1/2, 1] such that X_t will not converge to the origin with probability 1. Furthermore, we can show that for the rest of the permissible values the process will converge to the origin with some positive probability. The previous results extend to processes that satisfy dX_t = f(X_t)/t^gamma dt + 1/t^gamma dB_t, when f(x) is comparable to |x|^k in a neighborhood of the origin. As expected, similar results are true for discrete processes satisfying X_{n+1} - X_n = f(X_n)/n^gamma + Y_{n+1}/n^gamma. Here, Y_{n+1} are martingale differences that are almost surely bounded and satisfy E(Y_{n+1}^2 | F_n) > delta > 0.
Nov 14 @ Penn: Miklos Racz (Princeton)
How fragile are information cascades?
It is well known that sequential decision making may lead to information cascades. If the individuals are choosing between a right and a wrong state, and the initial actions are wrong, then the whole cascade will be wrong. We show that if agents occasionally disregard the actions of others and base their action only on their private information, then wrong cascades can be avoided. Moreover, we obtain the optimal asymptotic rate at which the error probability at time t can go to zero. This is joint work with Yuval Peres, Allan Sly, and Izabella Stuhl.
Nov 7 @ Temple: Indrajit Jana (Temple)
Spectrum of Random Band Matrices
We consider the limiting spectral distribution of matrices of the form $\frac{1}{2b_{n}+1} (R + X)(R + X)^{*}$, where $X$ is an $n\times n$ band matrix of bandwidth $b_{n}$ and $R$ is a non-random band matrix of bandwidth $b_{n}$. We show that the Stieltjes transform of the empirical spectral distribution (ESD) of such matrices converges to the Stieltjes transform of a non-random measure, and the limiting Stieltjes transform satisfies an integral equation. For $R=0$, the integral equation yields the Stieltjes transform of the Marchenko-Pastur law.
Oct 24 @ Penn: Lisa Hartung (NYU)
Extreme Level Sets of Branching Brownian Motion
We study the structure of extreme level sets of a standard one dimensional branching Brownian motion, namely the sets of particles whose height is within a fixed distance from the order of the global maximum. It is well known that such particles congregate at large times in clusters of order-one genealogical diameter around local maxima which form a Cox process in the limit. We add to these results by finding the asymptotic size of extreme level sets and the typical height and shape of those clusters which carry such level sets. We also find the right tail decay of the distribution of the distance between the two highest particles. These results confirm two conjectures of Brunet and Derrida. (Joint work with A. Cortines and O. Louidor.)
Oct 17 @ Temple: Atilla Yilmaz (NYU)
Homogenization of a class of 1-D nonconvex viscous Hamilton-Jacobi equations with random potential
There are general homogenization results in all dimensions for (inviscid and viscous) Hamilton-Jacobi equations with random Hamiltonians that are convex in the gradient variable. Removing the convexity assumption has proved to be challenging. There was no progress in this direction until two years ago when the 1-D inviscid case was settled positively and several classes of (mostly inviscid) examples for which homogenization holds were constructed as well as a 2-D inviscid counterexample. Methods that were used in the inviscid case are not applicable to the viscous case due to the presence of the diffusion term. In this talk, I will present a new class of 1-D viscous Hamilton-Jacobi equations with nonconvex Hamiltonians for which homogenization holds. Due to the special form of the Hamiltonians, the solutions of these PDEs with linear initial data have representations involving exponential expectations of controlled Brownian motion in random potential. The effective Hamiltonian is the asymptotic rate of growth of these exponential expectations as time goes to infinity and is explicit in terms of the tilted free energy of (uncontrolled) Brownian motion in random potential. The proof relies on (i) analyzing the large deviation behavior of the controlled Brownian particle which assumes the role of one of the players in an emergent two-player game, (ii) identifying asymptotically optimal control policies and (iii) constructing correctors which lead to exponential martingales. Based on recent joint work with Elena Kosygina and Ofer Zeitouni.
Oct 10 @ Penn: Sourav Chatterjee (Stanford)
Rigidity of the 3D hierarchical Coulomb gas
The mathematical analysis of Coulomb gases, especially in dimensions higher than one, has been the focus of much recent activity. For the 3D Coulomb, there is a famous prediction of Jancovici, Lebowitz and Manificat that if N is the number of particles falling in a given region, then N has fluctuations of order cube-root of E(N). I will talk about the recent proof of this conjecture for a closely related model, known as the 3D hierarchical Coulomb gas. I will also try to explain, through some toy examples, why such unusually small fluctuations may be expected to appear in interacting gases.
Oct 3 @ Penn: Stephen Melczer (Penn)
Lattice Path Enumeration, Multivariate Singularity Analysis, and Probability Theory
The problem of enumerating lattice paths with a fixed set of allowable steps and restricted endpoint has a long history dating back at least to the 19th century. For several reasons, much research on this topic over the last decade has focused on two dimensional lattice walks restricted to the first quadrant, whose allowable steps are "small" (that is, each step has coordinates +/- 1, or 0). In this talk we relax some of these conditions and discuss recent work on walks in higher dimensions, with non-small steps, or with weighted steps. Particular attention will be given to the asymptotic enumeration of such walks using representations of the generating functions as diagonals of rational functions, through the theory of analytic combinatorics in several variables. Several techniques from computational and experimental mathematics will be highlighted, and open conjectures of a probabilistic nature will be discussed.
Sep 26 @ Penn: Evita Nestoridi (Princeton)
Cutoff for random to random
Random to random is a card shuffling model that was created to study strong stationary times. Although the mixing time of random to random has been known to be of order n log n since 2002, cutoff had been an open question for many years, and a strong stationary time giving the correct order for the mixing time is still not known. In joint work with Megan Bernstein, we use the eigenvalues of the random to random card shuffling to prove a sharp upper bound for the total variation mixing time. Combined with the lower bound due to Subag, we prove that this walk exhibits cutoff at (3/4) n log n, answering a conjecture of Diaconis.
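For readers who want to experiment, one step of the shuffle is easy to code (an illustrative sketch, with my own function names; the cutoff result concerns the distribution after about (3/4) n log n such steps).

```python
import random

def random_to_random_step(deck, rng):
    """One step of the random to random shuffle: remove a uniformly random
    card and reinsert it at a uniformly random position."""
    card = deck.pop(rng.randrange(len(deck)))
    deck.insert(rng.randrange(len(deck) + 1), card)
    return deck

def shuffle(n, steps, seed=0):
    """Apply `steps` random to random moves to the sorted deck 0..n-1."""
    rng = random.Random(seed)
    deck = list(range(n))
    for _ in range(steps):
        random_to_random_step(deck, rng)
    return deck
```

Each step is a random transposition-like move that preserves the deck's contents, so iterating it explores the symmetric group.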
Sep 19 @ Penn: Marcus Michelen (Penn)
Invasion Percolation on Galton-Watson Trees
Given an infinite rooted tree, how might one sample, nearly uniformly, from the set of paths from the root to infinity? A number of methods have been studied including homesick random walks, or determining the growth rate of the number of self-avoiding paths. Another approach is to use percolation. The model of invasion percolation almost surely induces a measure on such paths in Galton-Watson trees, and we prove that this measure is absolutely continuous with respect to the limit uniform measure; other properties of invasion percolation are proved as well. This work in progress is joint with Robin Pemantle and Josh Rosenberg.
Sep 12 @ Temple: Nicholas Crawford (Technion)
Stability of Phases and Interacting Particle Systems
In this talk, I will discuss recent work with W. de Roeck on the following natural question: Given an interacting particle system are the stationary measures of the dynamics stable to small (extensive) perturbations? In general, there is no reason to believe this is so and one must restrict the class of models under consideration in one way or another. As such, I will focus in this talk on the simplest setting for which one might hope to have a rigorous result: attractive Markov dynamics (without conservation laws) relaxing at an exponential rate to its unique stationary measure in infinite volume. In this case we answer the question affirmatively. As a consequence we show that ferromagnetic Ising Glauber dynamics is stable to small, non-equilibrium perturbations in the entire uniqueness phase of the inverse temperature/external field plane. It is worth highlighting that this application requires new results on the (exponential) rate of relaxation for Glauber dynamics defined with non-zero external field.
Sep 5 @ Penn: Allan Sly (Princeton)
Large Deviations for First Passage Percolation
We establish a large deviation rate function for the upper tail of first passage percolation, answering a question of Kesten, who established the lower tail in 1986. Moreover, conditional on the large deviation event, we show that the minimal cost path is delocalized, that is, it moves linearly far from the straight line path. Joint work with Riddhipratim Basu (Stanford/ICTS) and Shirshendu Ganguly (UC Berkeley).
May 2 @ Penn: Milan Bradonjic (Rutgers)
Percolation in Weighted Random Connection Model
When modeling the spread of infectious diseases, it is important to incorporate risk behavior of individuals in a considered population. Not only risk behavior, but also the network structure created by the relationships among these individuals as well as the dynamical rules that convey the spread of the disease are the key elements in predicting and better understanding the spread. We propose the weighted random connection model, where each individual of the population is characterized by two parameters: its position and risk behavior. A goal is to model the effect that the probability of transmission between two individuals increases in their individual risk factors and decays in their Euclidean distance. Moreover, the model incorporates a combined risk behavior function for every pair of the individuals, through which the spread can be directly modeled or controlled. The main results are conditions for the almost sure existence of an infinite cluster in the weighted random connection model. We use results on the random connection model and site percolation in Z^2.
Apr 25 @ Temple: Chris Sinclair (U. Oregon)
An introduction to p-adic electrostatics
We consider the distribution of N p-adic particles with interaction energy proportional to the log of the p-adic distance between two particles. When the particles are constrained to the ring of integers of a local field, the distribution of particles is proportional to a power of the p-adic absolute value of the Vandermonde determinant. This leads to a first question: What is the normalization constant necessary to make this a probability measure? This sounds like a triviality, but this normalization constant as a function of extrinsic variables (like number of particles, or temperature) holds much information about the statistics of the particles. Viewed another way, this normalization constant is a p-adic analog of the now famous Selberg integral. While a closed form for this seems out of reach, I will derive a remarkable identity that may hold the key to unlocking more nuanced information about p-adic electrostatics. Along the way we'll find an identity for the generating function of probabilities that a degree N polynomial with p-adic integer coefficients splits completely. Joint work with Jeff Vaaler.
Apr 11 @ Penn: Patrick Devlin (Rutgers)
Biased random permutations are predictable (proof of an entropy conjecture of Leighton and Moitra)
Suppose F is a random (not necessarily uniform) permutation of {1, 2, ... , n} such that |Prob(F(i) < F(j)) -1/2| > epsilon for all i,j. We show that under this assumption, the entropy of F is at most (1-delta)log(n!), for some fixed delta depending only on epsilon [proving a conjecture of Leighton and Moitra]. In other words, if (for every distinct i,j) our random permutation either noticeably prefers F(i) < F(j) or prefers F(i) > F(j), then the distribution inherently carries significantly less uncertainty (or information) than the uniform distribution. Our proof relies on a version of the regularity lemma, a combinatorial bookkeeping gadget, and a few basic probabilistic ideas. The talk should be accessible for any background, and we will gently recall any relevant notions (e.g., entropy) as needed. Those unhappy with the talk are welcome to form an unruly mob to depose the speaker, and pitchforks and torches will be available for purchase. This is from a recent paper joint with Huseyin Acan and Jeff Kahn.
Apr 4 @ Penn: Tobias Johnson (NYU)
Galton-Watson fixed points, tree automata, and interpretations
Consider a set of trees such that a tree belongs to the set if and only if at least two of its root child subtrees do. One example is the set of trees that contain an infinite binary tree starting at the root. Another example is the empty set. Are there any other sets satisfying this property other than trivial modifications of these? I'll demonstrate that the answer is no, in the sense that any other such set of trees differs from one of these by a negligible set under a Galton-Watson measure on trees, resolving an open question of Joel Spencer's. This follows from a theorem that allows us to answer questions of this sort in general. All of this is part of a bigger project to understand the logic of Galton-Watson trees, which I'll tell you more about. Joint work with Moumanti Podder and Fiona Skerman.
Mar 28 @ Temple: Arnab Sen (Minnesota)
Majority dynamics on the infinite 3-regular tree
The majority dynamics on the infinite 3-regular tree can be described as follows. Each vertex of the tree has an i.i.d. Poisson clock attached to it, and when the clock of a vertex rings, the vertex looks at the spins of its three neighbors and flips its spin, if necessary, to come into agreement with the majority of its neighbors. The initial spins of the vertices are taken to be i.i.d. Bernoulli random variables with parameter p. In this talk, we will discuss a couple of new results regarding this model. In particular, we will show that the limiting proportion of 'plus' spins in the tree is continuous with respect to the initial bias p. A key tool in our argument is the mass transport principle. The talk is based on ongoing work with M. Damron.
Mar 21 @ Temple: Paul Bourgade (Courant)
Local extrema of random matrices and the Riemann zeta function
Fyodorov, Hiary & Keating have conjectured that the maximum of the characteristic polynomial of random unitary matrices behaves like extremes of log-correlated Gaussian fields. This allowed them to conjecture the typical size of local maxima of the Riemann zeta function along the critical axis. I will first explain the origins of this conjecture, and then outline the proof for the leading order of the maximum, for unitary matrices and the zeta function. This talk is based on joint works with Arguin, Belius, Radziwill and Soundararajan.
Feb 28 @ Temple: James Melbourne (Delaware)
Bounds on the maximum of the density for certain linear images of independent random variables
We will present a generalization of a theorem of Rogozin that identifies uniform distributions as extremizers of a class of inequalities, and show how the result can reduce specific random variables questions to geometric ones. In particular, by extending "cube slicing" results of K. Ball, we achieve a unification and sharpening of recent bounds on densities achieved as projections of product measures due to Rudelson and Vershynin, and the bounds on sums of independent random variable due to Bobkov and Chistyakov. Time permitting we will also discuss connections with generalizations of the entropy power inequality.
Feb 21 @ Penn: Shirshendu Ganguly (Berkeley)
Large deviation and counting problems in sparse settings
The upper tail problem in the Erdős–Rényi random graph G ~ G(n,p), where every edge is included independently with probability p, is to estimate the probability that the number of copies of a graph H in G exceeds its expectation by a factor 1 + δ. The arithmetic analog considers the count of arithmetic progressions in a random subset of Z/nZ, where every element is included independently with probability p. In this talk, I will describe some recent results regarding the solution of the upper tail problem in the sparse setting, i.e. where p decays to zero as n grows to infinity. The solution relies on non-linear large deviation principles developed by Chatterjee and Dembo and more recently by Eldan, and on solutions to various extremal problems in additive combinatorics.
Feb 14 @ Temple: Mihai Nica (NYU)
Intermediate disorder limits for multi-layer random polymers
The intermediate disorder regime is a scaling limit for disordered systems where the inverse temperature is critically scaled to zero as the size of the system grows to infinity. For a random polymer given by a single random walk, Alberts, Khanin and Quastel proved that under intermediate disorder scaling the polymer partition function converges to the solution to the stochastic heat equation with multiplicative white noise. In this talk, I consider polymers made up of multiple non-intersecting walkers and consider the same type of limit. The limiting object now is the multi-layer extension of the stochastic heat equation introduced by O'Connell and Warren. This result proves a conjecture about the KPZ line ensemble. Part of this talk is based on joint work with I. Corwin.
Feb 07 @ Temple: Fabrice Baudoin (U. Conn)
Stochastic areas and Hopf fibrations
We define and study stochastic area processes associated with Brownian motions on the complex symmetric spaces ℂℙn and ℂℍn. The characteristic functions of those processes are computed and limit theorems are obtained. For ℂℙn the geometry of the Hopf fibration plays a central role, whereas for ℂℍn it is the anti-de Sitter fibration. This is joint work with Jing Wang (UIUC).
Jan 31 @ Penn: Nina Holden (MIT)
How round are the complementary components of planar Brownian motion?
Consider a Brownian motion W in the complex plane started from 0 and run for time 1. Let A(1), A(2),... denote the bounded connected components of C-W([0,1]). Let R(i) (resp. r(i)) denote the out-radius (resp. in-radius) of A(i) for i \in N. Our main result is that E[\sum_i R(i)^2|\log R(i)|^\theta ]<\infty for any \theta <1. We also prove that \sum_i r(i)^2|\log r(i)|=\infty almost surely. These results have the interpretation that most of the components A(i) have a rather regular or round shape. Based on joint work with Serban Nacu, Yuval Peres, and Thomas S. Salisbury.
Jan 24 @ Penn: Charles Burnette (Drexel University)
Abelian Squares and Their Progenies
A polynomial P ∈ C[z_1, ..., z_d] is strongly D^d-stable if P has no zeroes in the closed unit polydisc D^d. For such a polynomial, define its spectral density function as S_P(z) = (P(z)P(1/z))^{-1}. An abelian square is a finite string of the form ww' where w' is a rearrangement of w. We examine a polynomial-valued operator whose spectral density function's Fourier coefficients are all generating functions for combinatorial classes of constrained finite strings over an alphabet of d characters. These classes generalize the notion of an abelian square, and their associated generating functions are the Fourier coefficients of one, and essentially only one, L^2(T^d)-valued operator. Integral representations and asymptotic behavior of the coefficients of these generating functions and a combinatorial meaning to Parseval's equation are given as consequences.
Dec 06 @ Penn: Hao Shen (Columbia)
Some new scaling limit results on ASEP and Glauber dynamics of spin models
We discuss two scaling limit results for discrete models converging to stochastic PDEs. The first is the asymmetric simple exclusion process in contact with sources and sinks at boundaries, called Open ASEP. We prove that under weakly asymmetric scaling the height function converges to the KPZ equation with Neumann boundary conditions. The second is the Glauber dynamics of the Blume-Capel model (a generalization of Ising model), in two dimensions with Kac potential. We prove that the averaged spin field converges to the stochastic quantization equations. A common challenge in the proofs is how to identify the limiting process as the solution to the SPDE, and we will discuss how to overcome the difficulties in the two cases. (Based on joint works with Ivan Corwin and Hendrik Weber)
Nov 29 @ Temple: Jack Hanson (CUNY)
Arm events in invasion percolation
Invasion percolation is a "self-organized critical" distribution on random subgraphs of Z^2, believed to exhibit much of the same behavior as critical percolation models. Self-organization means that this happens spontaneously without tuning some parameter to a critical value. In two dimensions, some aspects of the invasion graph are known to correspond to those in critical models, and some differences are known. We will discuss new results on the probabilities of various "arm events" -- events that connections from the origin to a large distance n are either present or "closed" in the invasion graph. We show that some of these events have probabilities obeying power laws with the same power as in the critical model, while all others differ from the critical model's by a power of n.
Nov 15 @ Penn: Elliot Paquette (Ohio State)
The law of fractional logarithm in the GUE minor process
Consider an infinite array of standard complex normal variables which are independent up to Hermitian symmetry. The eigenvalues of the upper-left NxN submatrices, form what is called the GUE minor process. This largest-eigenvalue process is a canonical example of the Airy process which is connected to many other growth processes. We show that if one lets N vary over all natural numbers, then the sequence of largest eigenvalues satisfies a 'law of fractional logarithm,' in analogy with the classical law of iterated logarithm for simple random walk. This GUE minor process is determinantal, and our proof relies on this. However, we reduce the problem to correlation and decorrelation estimates that must be made about the largest eigenvalues of pairs of GUE matrices, which we hope is useful for other similar problems.
Nov 08 @ Penn: Sébastien Bubeck (Microsoft)
Local max-cut in smoothed polynomial time
The local max-cut problem asks to find a partition of the vertices in a weighted graph such that the cut weight cannot be improved by moving a single vertex (that is the partition is locally optimal). This comes up naturally, for example, in computing Nash equilibrium for the party affiliation game. It is well-known that the natural local search algorithm for this problem might take exponential time to reach a locally optimal solution. We show that adding a little bit of noise to the weights tames this exponential into a polynomial. In particular we show that local max-cut is in smoothed polynomial time (this improves the recent quasi-polynomial result of Etscheid and Roglin). Joint work with Omer Angel, Yuval Peres, and Fan Wei.
Nov 01 @ Penn: Henry Towsner (Penn)
Markov Chains of Exchangeable Structures
The Aldous--Hoover Theorem characterizes arrays of random variables which are exchangeable - that is, the distribution is invariant under permutations of the indices of the array. We consider the extension to exchangeable Markov chains. In order to give a satisfactory classification, we need an extension of the Aldous--Hoover Theorem to "relatively exchangeable" arrays, which are only invariant under some permutations. Different families of permutations lead to different characterization theorems, with the crucial distinction coming from a model theoretic characterization of the way finite arrays can be amalgamated.
Oct 25 @ Penn: Alexey Bufetov (MIT)
Asymptotics of stochastic particle systems via Schur generating functions
We will discuss a new approach to the analysis of the global behavior of stochastic discrete particle systems. This approach links the asymptotics of these systems with properties of certain observables related to the Schur symmetric functions. As applications of this method, we prove the Law of Large Numbers and the Central Limit Theorem for various models of random lozenge and domino tilings, non-intersecting random walks, and decompositions of tensor products of representations of unitary groups. Based on joint works with V. Gorin and A. Knizel.
Oct 18 @ Penn: Sanchayan Sen (Eindhoven)
Random discrete structures: Scaling limits and universality
One major conjecture in probabilistic combinatorics, formulated by statistical physicists using non-rigorous arguments and enormous simulations in the early 2000s, is as follows: for a wide array of random graph models on n vertices and degree exponent \tau>3, typical distance both within maximal components in the critical regime as well as on the minimal spanning tree on the giant component in the supercritical regime scale like n^{\frac{\tau\wedge 4 -3}{\tau\wedge 4 -1}}. In other words, the degree exponent determines the universality class the random graph belongs to. More generally, recent research has provided strong evidence to believe that several objects constructed on a wide class of random discrete structures including (a) components under critical percolation, (b) the vacant set left by a random walk, and (c) the minimal spanning tree, viewed as metric measure spaces converge, after scaling the graph distance, to some random fractals, and these limiting objects are universal under some general assumptions. We report on recent progress in proving these conjectures. Based on joint work with Shankar Bhamidi, Nicolas Broutin, Remco van der Hofstad, and Xuan Wang.
Oct 11 @ Penn: Louigi Addario-Berry (McGill)
The front location for branching Brownian motion with decay of mass
I will describe joint work with Sarah Penington (Oxford). Consider a standard branching Brownian motion whose particles have varying mass. At time t, if a total mass m of particles have distance less than one from a fixed particle x, then the mass of particle x decays at rate m. The total mass increases via branching events: on branching, a particle of mass m creates two identical mass-m particles. One may define the front of this system as the point beyond which there is a total mass less than one (or beyond which the expected mass is less than one). This model possesses much less independence than standard BBM, and martingales are hard to come by. Nonetheless, it is possible to prove that (in a rather weak sense) the front is at distance ~ c t^{1/3} behind the typical BBM front. At a high level, our argument for this may be described as a proof by contradiction combined with fine estimates on the probability Brownian motion stays in a narrow tube of varying width.
Oct 04 @ Temple: Ramon van Handel (Princeton)
Chaining, interpolation, and convexity
A significant achievement of modern probability theory is the development of sharp connections between the boundedness of random processes and the geometry of the underlying index set. In particular, the generic chaining method of Talagrand provides in principle a sharp understanding of the suprema of Gaussian processes. The multiscale geometric structure that arises in this method is however notoriously difficult to control in any given situation. In this talk, I will exhibit a surprisingly simple but very general geometric construction, inspired by real interpolation of Banach spaces, that is readily amenable to explicit computations and that explains the behavior of Gaussian processes in various interesting situations where classical entropy methods are known to fail.
Sep 27 @ Penn: Amanda Lohss (Drexel)
Corners in Tree-Like Tableaux.
Tree–like tableaux are combinatorial objects which exhibit a natural tree structure and are connected to the partially asymmetric simple exclusion process (PASEP). There was a conjecture made on the total number of corners in tree–like tableaux and the total number of corners in symmetric tree–like tableaux. We have proven both conjectures based on a bijection with permutation tableaux and type–B permutation tableaux. In addition, we have shown that the number of diagonal boxes in symmetric tree–like tableaux is asymptotically normal and that the number of occupied corners in a random tree–like tableau is asymptotically Poisson. This extends earlier results of Aval, Boussicault, Nadeau, and Laborde Zubieta, respectively.
Sep 20 @ Temple: Wei Wu (NYU)
Loop erased random walk, uniform spanning tree and bi-Laplacian Gaussian field in the critical dimension.
Critical lattice models are believed to converge to a free field in the scaling limit, at or above their critical dimension. This has been (partially) established for Ising and $\Phi^4$ models for $d \geq 4$. We describe a simple spin model from uniform spanning forests in $\mathbb{Z}^d$ whose critical dimension is 4 and prove that the scaling limit is the bi-Laplacian Gaussian field for $d\ge 4$. At dimension 4, there is a logarithmic correction for the spin-spin correlation and the bi-Laplacian Gaussian field is a log correlated field. The proof also improves the known mean field picture of LERW in $d=4$, by showing that the renormalized escape probability (and arm events) of 4D LERW converge to some "continuum escaping probability". Based on joint works with Greg Lawler and Xin Sun.
Sep 13 @ Penn: Yuri Kifer (Hebrew University)
An Introduction to Limit Theorems for Nonconventional Sums
I'll survey a series of results on limit theorems for nonconventional sums of the form \[ \sum_{n=1}^N F(X_n,X_{2n},...,X_{\ell n}) \] and more general ones, where $\{ X_n\}$ is a sequence of random variables with sufficiently weak dependence.
Sep 06 @ Penn: Jian Ding (Chicago)
Random planar metrics of Gaussian free fields
I will present a few recent results on random planar metrics of two-dimensional discrete Gaussian free fields, including Liouville first passage percolation, the chemical distance for level-set percolation and the electric effective resistance on an associated random network. Besides depicting a fascinating picture for 2D GFF, these metric aspects are closely related to various models of planar random walks.
May 05 @ Penn: Oren Louidor (Technion)
Aging in a logarithmically correlated potential
We consider a continuous time random walk on the box of side length N in Z^2, whose transition rates are governed by the discrete Gaussian free field h on the box with zero boundary conditions, acting as potential: At inverse temperature \beta, when at site x the walk waits an exponential time with mean \exp(\beta h_x) and then jumps to one of its neighbors chosen uniformly at random. This process can be used to model a diffusive particle in a random potential with logarithmic correlations or alternatively as Glauber dynamics for a spin-glass system with logarithmically correlated energy levels. We show that at any sub-critical temperature and at pre-equilibrium time scales, the walk exhibits aging. More precisely, for any \theta > 0 and suitable sequence of times (t_N), the probability that the walk at time t_N(1+\theta) is within O(1) of where it was at time t_N tends to a non-trivial constant as N \to \infty, whose value can be expressed in terms of the distribution function of the generalized arcsine law. This puts this process in the same aging universality class as many other spin-glass models, e.g. the random energy model. Joint work with Aser Cortines-Peixoto and Adela Svejda.
Apr 26 @ Penn: Josh Rosenberg (Penn)
The frog model with drift on R
This paper considers the following scenario. There is a Poisson process on R with intensity f where 0 \le f(x) \le \infty for x \ge 0 and f(x)=0 for x \le 0. The "points" of the process represent sleeping frogs. In addition, there is one active frog initially located at the origin. At time t=0 this frog begins performing Brownian motion with leftward drift C (i.e. its motion is a random process of the form B_t-Ct). Any time an active frog arrives at a point where a sleeping frog is residing, the sleeping frog becomes active and begins performing Brownian motion with leftward drift C, that is independent of the motion of all of the other active frogs. This paper establishes sharp conditions on the intensity function f that determine whether the model is transient (meaning the probability that infinitely many frogs return to the origin is 0), or non-transient (meaning this probability is greater than 0).
Apr 19 @ Penn: Dan Jerison (Cornell)
Markov chain convergence via regeneration
How long does it take for a reversible Markov chain to converge to its stationary distribution? This talk discusses how to get explicit upper bounds on the time to stationarity by identifying a regenerative structure of the chain. I will demonstrate the flexibility of this approach by applying it in two very different cases: Markov chain Monte Carlo estimation on general state spaces, and finite birth and death chains. In the first case, an unusual perspective on the popular ``drift and minorization'' method leads to a simple bound that improves on existing convergence results. In the second case, a hidden connection between reversibility and monotonicity recovers sharp upper bounds on the cutoff window.
Apr 12 @ Penn: Zsolt Pajor-Gyulai (Courant)
Stochastic approach to anomalous diffusion in two dimensional, incompressible, periodic, cellular flows
It is a well known fact that velocity gradients in a flow change the dispersion of a passive tracer. One clear manifestation of this phenomenon is that in systems with homogenization type diffusive long time/large scale behavior, the effective diffusivity often differs greatly from the molecular one. An important aspect of these well known results is that they are only valid on timescales much longer than the inverse molecular diffusivity. We are interested in what happens on shorter timescales (subhomogenization regimes) in a family of two-dimensional incompressible periodic flows that consists only of pockets of recirculation, essentially acting as traps, and infinite flowlines separating these, where significant transport is possible. Our approach is to follow the random motion of a tracer particle and show that under certain scaling it resembles a time-changed Brownian motion. This shows that while the trajectories are still diffusive, the variance grows non-linearly in time.
Apr 05 @ Penn: Boris Hanin (MIT)
Nodal Sets of Random Eigenfunctions of the Harmonic Oscillator
Random eigenfunctions of energy E for the isotropic harmonic oscillator in R^d have a U(d) symmetry and are in some ways analogous to random spherical harmonics of fixed degree on S^d, whose nodal sets have been the subject of many recent studies. However, there is a fundamentally new aspect to this ensemble, namely the existence of allowed and forbidden regions. In the allowed region, the Hermite functions behave like spherical harmonics, while in the forbidden region, Hermite functions are exponentially decaying and it is unclear to what extent they oscillate and have zeros.
The purpose of this talk is to present several results about the expected volume of the zero set of a random Hermite function in both the allowed and forbidden regions as well as in a shrinking tube around the caustic. The results are based on an explicit formula for the scaling limit around the caustic of the fixed energy spectral projector for the isotropic harmonic oscillator. This is joint work with Steve Zelditch and Peng Zhou.
Mar 29 @ Penn: John Pike (Cornell)
Random walks on abelian sandpiles
Given a simple connected graph $G=(V,E)$, the abelian sandpile Markov chain evolves by adding chips to random vertices and then stabilizing according to certain toppling rules. The recurrent states form an abelian group $\Gamma$, the sandpile group of $G$. I will discuss joint work with Dan Jerison and Lionel Levine in which we characterize the eigenvalues and eigenfunctions of the chain restricted to $\Gamma$ in terms of ``multiplicative harmonic functions'' on $V$. We show that the moduli of the eigenvalues are determined up to a constant factor by the lengths of vectors in an appropriate dual Laplacian lattice and use this observation to bound the mixing time of the sandpile chain in terms of the number of vertices and maximum vertex degree of $G$. We also derive a surprising inverse relationship between the spectral gap of the sandpile chain and that of simple random walk on $G$.
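The toppling rule behind the sandpile chain is simple to state; below is a minimal sketch (an illustrative toy, not the paper's construction) that stabilizes a chip configuration on a path of four vertices with sinks attached at both ends, each vertex toppling once it holds at least as many chips as its degree.

```python
def stabilize(chips, neighbors, threshold):
    """Topple every unstable vertex until the configuration is stable.

    chips: dict vertex -> chip count; neighbors: dict vertex -> list of
    neighbors (vertices absent from `chips` act as sinks); threshold:
    dict vertex -> toppling threshold (here, the vertex degree).
    """
    chips = dict(chips)
    unstable = [v for v in chips if chips[v] >= threshold[v]]
    while unstable:
        v = unstable.pop()
        while chips[v] >= threshold[v]:
            chips[v] -= threshold[v]
            for w in neighbors[v]:
                if w in chips:  # chips sent to a sink disappear
                    chips[w] += 1
        unstable = [u for u in chips if chips[u] >= threshold[u]]
    return chips

# Path 0-1-2-3 with sinks at both ends; every vertex has degree 2.
# By the abelian property, the stable configuration reached is
# independent of the order in which vertices are toppled.
neighbors = {0: ["sink", 1], 1: [0, 2], 2: [1, 3], 3: [2, "sink"]}
threshold = {v: 2 for v in neighbors}
print(stabilize({0: 0, 1: 0, 2: 4, 3: 0}, neighbors, threshold))
```

Dropping four chips on vertex 2 triggers a cascade of five topplings, one chip is lost to a sink, and the same stable state results regardless of toppling order.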
Mar 22 @ Temple: Christian Benes (CUNY)
The scaling limit of the loop-erased random walk Green's function
We show that the probability that a planar loop-erased random walk passes through a given edge in the interior of a lattice approximation of a simply connected domain converges, as the lattice spacing goes to zero, to a multiple of the SLE(2) Green's function. This is joint work with Greg Lawler and Fredrik Viklund.
Mar 15 @ Temple: Philippe Sosoe (Harvard)
The chemical distance in critical percolation
The chemical distance is the graph distance inside percolation clusters. In the supercritical phase, this distance is known to be linear with exponential probability, enabling a detailed study of processes like random walks on the infinite cluster. By contrast, at the critical point, the distance is known to be longer than Euclidean by some (unknown) power. I will discuss this and some bounds on distance, as well as a result comparing the chemical distance to the size of the lowest crossing. Joint work with Jack Hanson and Michael Damron.
Mar 01 @ Penn: Sumit Mukherjee (Columbia)
Mean field Ising models
In this talk we consider the asymptotics of the log partition function of an Ising model on a sequence of finite but growing graphs/matrices. We give a sufficient condition for the mean field prediction to the log partition function to be asymptotically tight, which in particular covers all regular graphs with degree going to infinity. We show via several examples that our condition is "almost necessary" as well. As an application of our result, we derive the asymptotics of the log partition function for approximately regular graphs, and bi-regular bi-partite graphs. We also re-derive the asymptotics of the log partition function for a sequence of graphs converging in cut metric. This is joint work with Anirban Basak from Duke University.
Feb 16 @ Temple: Yuri Bakhtin (Courant)
Burgers equation with random forcing
I will talk about the ergodic theory of randomly forced Burgers equation (a basic nonlinear evolution PDE related to fluid dynamics and growth models) in the noncompact setting. The basic objects are one-sided infinite minimizers of random action (in the inviscid case) and polymer measures on one-sided infinite trajectories (in the positive viscosity case). Joint work with Eric Cator, Kostya Khanin, Liying Li.
Feb 09 @ Penn: Nayantara Bhatnagar (Delaware)
Limit Theorems for Monotone Subsequences in Mallows Permutations
The longest increasing subsequence (LIS) of a uniformly random permutation is a well studied problem. Vershik-Kerov and Logan-Shepp first showed that asymptotically the typical length of the LIS is 2sqrt(n). This line of research culminated in the work of Baik-Deift-Johansson who related this length to the GUE Tracy-Widom distribution. We study the length of the LIS and LDS of random permutations drawn from the Mallows measure, introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation p in S_n is proportional to q^Inv(p) where q is a real parameter and Inv(p) is the number of inversions in p. We determine the typical order of magnitude of the LIS and LDS, large deviation bounds for these lengths and a law of large numbers for the LIS for various regimes of the parameter q. In the regime that q is constant, we make use of the regenerative structure of the permutation to prove a Gaussian CLT for the LIS. This is based on joint work with Ron Peled and with Riddhi Basu.
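For a quick feel for the model, here is a minimal sketch (an illustration, not the authors' method) that samples a Mallows(q) permutation via its Lehmer code, whose entries are independent with P(k) proportional to q^k under this measure, and computes the LIS by patience sorting in O(n log n).

```python
import bisect
import random

def mallows(n, q, rng=random):
    # Sample via the Lehmer code L_i = #{j > i : p_j < p_i}: under the
    # Mallows measure the L_i are independent with P(L_i = k) proportional
    # to q^k, and p_i is the (L_i + 1)-th smallest unused value.
    values = list(range(1, n + 1))
    perm = []
    while values:
        m = len(values)
        k = rng.choices(range(m), weights=[q ** j for j in range(m)])[0]
        perm.append(values.pop(k))
    return perm

def lis_length(perm):
    # Patience sorting: the pile tops form an increasing sequence whose
    # length equals that of the longest increasing subsequence.
    piles = []
    for x in perm:
        j = bisect.bisect_left(piles, x)
        if j == len(piles):
            piles.append(x)
        else:
            piles[j] = x
    return len(piles)

rng = random.Random(0)
p = mallows(200, q=0.1, rng=rng)  # strong bias toward the identity
print(lis_length(p))              # close to n, far above 2*sqrt(n)
```

With q well below 1 the permutation is a small perturbation of the identity and the LIS is of linear order, in contrast to the 2sqrt(n) behavior at q = 1.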
Feb 02 @ Penn: Erik Slivken (UC Davis)
Bootstrap Percolation on the Hamming Torus
Bootstrap percolation is a simple-to-describe yet hard-to-analyze process on a graph. It begins with some initial configuration (open or closed) on the vertices. At each subsequent step a vertex may change from closed to open if enough of its neighbors are already open. For a random initial configuration where each vertex is open independently with probability p, how does the probability that eventually every vertex will be open change as p varies? The large neighborhood size of the Hamming torus leads to a distinctly different flavor than previous results on the grid and hypercube. We will focus on Hamming tori with high dimension, giving a detailed description of the long term behavior of the process.
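The process itself is easy to simulate. The sketch below is an illustrative setup (a 5x5 two-dimensional Hamming torus with threshold r = 2, chosen for concreteness and not taken from the talk) running the r-neighbor bootstrap rule to completion.

```python
import itertools
from collections import deque

def bootstrap(neighbors, initially_open, r):
    # Open any closed vertex once it has at least r open neighbors,
    # repeating until no further vertex can be opened.
    opened = set(initially_open)
    count = {v: sum(w in opened for w in nbrs) for v, nbrs in neighbors.items()}
    queue = deque(v for v in neighbors if v not in opened and count[v] >= r)
    while queue:
        v = queue.popleft()
        if v in opened:
            continue
        opened.add(v)
        for w in neighbors[v]:
            count[w] += 1
            if w not in opened and count[w] >= r:
                queue.append(w)
    return opened

# Two-dimensional Hamming torus on {0,...,4}^2: vertices are adjacent
# iff they differ in exactly one coordinate.
n = 5
vertices = list(itertools.product(range(n), repeat=2))
neighbors = {v: [w for w in vertices if sum(a != b for a, b in zip(v, w)) == 1]
             for v in vertices}

# One fully open row plus a single extra vertex opens everything:
# the extra vertex completes its row, and then every column has two
# open vertices.
start = {(0, j) for j in range(n)} | {(1, 0)}
print(len(bootstrap(neighbors, start, r=2)))  # 25
```

The large Hamming neighborhoods are visible even here: a sparse initial set spreads to the whole torus in two waves.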
Jan 26 @ Penn: Vadim Gorin (MIT)
Largest eigenvalues in random matrix beta-ensembles: structures of the limit
Despite numerous articles devoted to its study, the universal scaling limit for the largest eigenvalues in general beta log-gases remains a mysterious object. I will present two new approaches to such edge scaling limits. The outcomes include a novel scaling limit for the differences between largest eigenvalues in submatrices and a Feynman-Kac type formula for the semigroup spanned by the Stochastic Airy Operator. (Based on joint work with M. Shkolnikov.)
Dec 01 @ Penn: Sivak Mkrtchyan (Rochester)
The entropy of Schur-Weyl measures
We will study local and global statistical properties of Young diagrams with respect to a Plancherel-type family of measures called Schur-Weyl measures and use the results to answer a question from asymptotic representation theory. More precisely, we will solve a variational problem to prove a limit-shape result for random Young diagrams with respect to the Schur-Weyl measures and apply the results to obtain logarithmic, order-sharp bounds for the dimensions of certain representations of finite symmetric groups.
Nov 17 @ Penn: Partha Dey (UIUC)
Longest increasing path within the critical strip
Consider a Poisson Point Process of intensity one in the two-dimensional square of side length $n$. In Baik-Deift-Johansson (1999), it was shown that the length of a longest increasing path (an increasing path containing the greatest number of points), when properly centered and scaled, converges to the Tracy-Widom distribution. Later Johansson (2000) showed that all maximal paths lie within the strip of width $n^{2/3+o(1)}$ around the diagonal with high probability. We consider the length $L(n,w)$ of longest increasing paths restricted to lie within a strip of width $w$ around the diagonal and show that, when properly centered and scaled, it converges to a Gaussian distribution whenever $w \ll n^{2/3}$. We also obtain tight bounds on the expectation and variance of $L(n,w)$, which involve an application of the BK inequality and approximation of the optimal restricted path by a locally optimal unrestricted path. Based on joint work with Matthew Joseph and Ron Peled.
Nov 10 @ Penn: Charles Bordenave (Toulouse)
A new proof of Friedman's second eigenvalue Theorem and its extensions
It was conjectured by Alon and proved by Friedman that a random d-regular graph has nearly the largest possible spectral gap; more precisely, the largest absolute value of the non-trivial eigenvalues of its adjacency matrix is at most 2√(d−1) + o(1) with probability tending to one as the size of the graph tends to infinity. We will discuss a new method to prove this statement and give some extensions to random lifts and related models.
Nov 03 @ Penn: Christian Gromoll (UVA)
Fluid limits and queueing policies
There are many different queueing policies discussed in the literature. They tend to be defined in model-specific ways that differ in format from one policy to another, each format suitable for the task at hand (e.g. steady-state derivation, scaling-limit theorem, or proof of some other property). The ad hoc nature of the policy definition often limits the scope of potentially quite general arguments. Moreover, because policies are defined variously, it's difficult to approach classification questions for which the answer presumably spans many policies. In this talk I'll propose a definition of a general queueing policy and discuss exactly what I mean by "general". The setup makes it possible to frame questions about queues in terms of an arbitrary policy and, potentially, to classify policies according to the answer. In this vein, I'll discuss a few results and some ongoing work on proving fluid limit theorems for general policies.
Oct 27 @ Penn: Doug Rizzolo (U Delaware)
Random pattern-avoiding permutations
In this talk we will discuss recent results on the structure of random pattern-avoiding permutations. We will focus on a surprising connection between random permutations avoiding a fixed pattern of length three and Brownian excursion. For example, this connection lets us describe the shape of the graph of a random 231-avoiding permutation of {1,...,n} as n tends to infinity, as well as the asymptotic distribution of fixed points in terms of Brownian excursion. Time permitting, we will discuss work in progress on permutations avoiding longer patterns. This talk is based on joint work with Christopher Hoffman and Erik Slivken.
Oct 20 @ Penn: Tai Melcher (UVA)
Smooth measures in infinite dimensions
A collection of vector fields on a manifold satisfies H\"{o}rmander's condition if any two points can be connected by a path whose tangent vectors lie in the given collection. It is well known that a diffusion which is allowed to travel only in these directions is smooth, in the sense that its transition probability measure is absolutely continuous with respect to the volume measure and has a strictly positive smooth density. Smoothness results of this kind in infinite dimensions are typically not known, the first obstruction being the lack of an infinite-dimensional volume measure. We will discuss some smoothness results for diffusions in a particular class of infinite-dimensional spaces. This is based on joint work with Fabrice Baudoin, Daniel Dobbs, Bruce Driver, Nate Eldredge, and Masha Gordina.
Oct 06 @ Penn: Leonid Petrov (UVA)
Bethe Ansatz and interacting particle systems
I will describe recent advances in bringing a circle of ideas and techniques around Bethe ansatz and Yang–Baxter relation under the probabilistic roof, which provides new examples of stochastic interacting particle systems, and techniques to solve them. In particular, I plan to discuss a new particle dynamics in continuous inhomogeneous medium with features resembling traffic models, as well as queuing systems. This system has phase transitions (discontinuities in the limit shape) and Tracy-Widom fluctuations (even at the point of the phase transition).
Sep 29 @ Temple: David Belius (Courant)
Branching in log-correlated random fields
This talk will discuss how log-correlated random fields show up in diverse settings, including the study of cover times and random matrix theory. This is explained by the presence of an underlying approximate branching structure in each of the models. I will describe the most basic model of the log-correlated class, namely Branching Random Walk (BRW), where the branching structure is explicit, and explain how to adapt ideas developed in the context of BRW to models where the branching structure is not immediately obvious.
Sep 24 @ Penn: Steven Heilman (UCLA)
Strong Contraction and Influences in Tail Spaces
We study contraction under a Markov semi-group and influence bounds for functions all of whose low-level Fourier coefficients vanish. This study is motivated by the explicit construction of 3-regular expander graphs of Mendel and Naor, though our results have no direct implication for the construction of expander graphs. In the positive direction we prove an $L_p$ Poincar\'{e} inequality and moment decay estimates for mean-zero functions and for all $1 < p < \infty$, proving the degree-one case of a conjecture of Mendel and Naor as well as the general-degree case of the conjecture when restricted to Boolean functions. In the negative direction, we answer negatively two questions of Hatami and Kalai concerning extensions of the Kahn-Kalai-Linial and Harper Theorems to tail spaces. For example, we construct a function $f\colon\{-1,1\}^{n}\to\{-1,1\}$ whose Fourier coefficients vanish up to level $c \log n$, with all influences bounded by $C \log n/n$ for some constants $0 < c, C < \infty$. That is, the Kahn-Kalai-Linial Theorem cannot be improved, even if we assume that the first $c\log n$ Fourier coefficients of the function vanish. This implies there is a phase transition in the largest guaranteed influence of functions $f\colon\{-1,1\}^{n}\to\{-1,1\}$, which occurs when the first $g(n)\log n$ Fourier coefficients vanish and $g(n)\to\infty$ as $n\to\infty$ or $g(n)$ is bounded as $n\to\infty$. Joint work with Elchanan Mossel and Krzysztof Oleszkiewicz.
Sep 15 @ Penn: Toby Johnson (USC)
The frog model on trees
Imagine that every vertex of a graph contains a sleeping frog. At time 0, the frog at some designated vertex wakes up and begins a simple random walk. When it lands on a vertex, the sleeping frog there wakes up and begins its own simple random walk, which in turn wakes up any sleeping frogs it lands on, and so on. This process is called the frog model. I'll (mostly) answer a question posed by Serguei Popov in 2003: On an infinite d-ary tree, is the frog model recurrent or transient? That is, is each vertex visited infinitely or finitely often by frogs? The answer is that it depends on d: there's a phase transition between recurrence and transience as d grows. Furthermore, if the system starts with Poi(m) sleeping frogs on each vertex independently, for any d there's a phase transition as m grows. This is joint work with Christopher Hoffman and Matthew Junge.
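The waking dynamics described in the abstract can be sketched as a small Monte Carlo simulation. This is purely illustrative: the tree is truncated at a fixed depth (so it only approximates the infinite d-ary model), and all function and variable names here are invented for the sketch.

```python
import random

def frog_root_visits(d, depth, steps, seed=1):
    """Simulate the frog model on a d-ary tree truncated at `depth`:
    one frog per vertex, all asleep except the root's frog; awake frogs
    do simple random walks and wake any sleeper they land on.
    Returns how many times any frog lands on the root."""
    rng = random.Random(seed)
    awake = [()]          # a vertex is a tuple of child indices; () is the root
    woken = {()}          # vertices whose frog has already been woken
    visits = 0
    for _ in range(steps):
        moved = []
        for v in awake:
            # neighbours: parent (if not the root) and children (if not a leaf)
            nbrs = ([v[:-1]] if v else []) + (
                [v + (i,) for i in range(d)] if len(v) < depth else [])
            w = rng.choice(nbrs)
            if w == ():
                visits += 1
            moved.append(w)
            if w not in woken:
                woken.add(w)
                moved.append(w)   # the newly woken frog starts walking too
        awake = moved
    return visits
```

Running this for varying d gives a rough feel for the recurrence/transience dichotomy the talk addresses, though the truncation means it cannot settle the infinite-tree question.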
Sep 08 @ Penn: Matt Junge (U. Washington)
Splitting hairs (with choice)
Sequentially place n balls into n bins. For each ball, two bins are sampled uniformly and the ball is placed in the emptier of the two. Computer scientists like that this does a much better job of evenly distributing the balls than the "choiceless" version where one places each ball uniformly. Consider the continuous version: Form a random sequence in the unit interval by having the nth term be whichever of two uniformly placed points falls in the larger gap between the previous n-1 points. We confirm the intuition that this sequence is a.s. equidistributed, resolving a conjecture from Itai Benjamini, Pascal Maillard and Elliot Paquette. The history goes back a century to Weyl and more recently to Kakutani. | CommonCrawl |
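The continuous two-choice process described above is easy to simulate. A minimal sketch (illustrative names, not code from the talk):

```python
import bisect
import random

def two_choice_points(n, seed=7):
    """Build a sequence in [0,1): each new term is whichever of two
    uniform candidates falls in the larger gap left by the points so far."""
    rng = random.Random(seed)
    pts = [rng.random()]
    for _ in range(n - 1):
        grid = sorted(pts + [0.0, 1.0])   # gap endpoints, including 0 and 1

        def gap(z):
            i = bisect.bisect(grid, z)
            return grid[i] - grid[i - 1]

        x, y = rng.random(), rng.random()
        pts.append(x if gap(x) >= gap(y) else y)
    return pts
```

Plotting the empirical distribution of the output for large n illustrates the equidistribution result resolved in the talk.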
Local and regional dynamics of chikungunya virus transmission in Colombia: the role of mismatched spatial heterogeneity
Sean M. Moore, Quirine A. ten Bosch, Amir S. Siraj, K. James Soda, Guido España, Alfonso Campo, Sara Gómez, Daniela Salas, Benoit Raybaud, Edward Wenger, Philip Welkhoff & T. Alex Perkins (ORCID: 0000-0002-7518-4014)
BMC Medicine volume 16, Article number: 152 (2018)
Mathematical models of transmission dynamics are routinely fitted to epidemiological time series, which must inevitably be aggregated at some spatial scale. Weekly case reports of chikungunya have been made available nationally for numerous countries in the Western Hemisphere since late 2013, and numerous models have made use of this data set for forecasting and inferential purposes. Motivated by an abundance of literature suggesting that the transmission of this mosquito-borne pathogen is localized at scales much finer than nationally, we fitted models at three different spatial scales to weekly case reports from Colombia to explore limitations of analyses of nationally aggregated time series data.
We adapted the recently developed Disease Transmission Kernel (DTK)-Dengue model for modeling chikungunya virus (CHIKV) transmission, given the numerous similarities of these viruses vectored by a common mosquito vector. We fitted versions of this model specified at different spatial scales to weekly case reports aggregated at different spatial scales: (1) single-patch national model fitted to national data; (2) single-patch departmental models fitted to departmental data; and (3) multi-patch departmental models fitted to departmental data, where the multiple patches refer to municipalities within a department. We compared the consistency of simulations from fitted models with empirical data.
We found that model consistency with epidemic dynamics improved with increasing spatial granularity of the model. Specifically, the sum of single-patch departmental model fits better captured national-level temporal patterns than did a single-patch national model. Likewise, multi-patch departmental model fits better captured department-level temporal patterns than did single-patch departmental model fits. Furthermore, inferences about municipal-level incidence based on multi-patch departmental models fitted to department-level data were positively correlated with municipal-level data that were withheld from model fitting.
Our model performed better when posed at finer spatial scales, due to better matching between human populations with locally relevant risk. Confronting spatially aggregated models with spatially aggregated data imposes a serious structural constraint on model behavior by averaging over epidemiologically meaningful spatial variation in drivers of transmission, impairing the ability of models to reproduce empirical patterns.
Viral diseases transmitted by mosquitoes, including dengue, Zika, chikungunya, and yellow fever, are a rapidly growing problem and together pose a risk to approximately half the world's population [1,2,3]. In the past 5 years, both the Zika (ZIKV) and chikungunya (CHIKV) viruses were introduced into the Western Hemisphere and rapidly spread among naïve populations in South America, Central America, and the Caribbean, resulting in millions of cases and causing a public health crisis [4,5,6,7,8,9]. In addition, hundreds of millions of people are infected by dengue virus (DENV) each year [1]. Due to the influence of environmental conditions on DENV transmission, as well as complex immunological interactions among the four DENV serotypes, many regions experience periodic dengue epidemics [10, 11]. Faced with these large epidemics, limited resources need to be targeted towards areas with the highest transmission and the most vulnerable populations. In addition, public health officials would like to be able to predict where epidemics of these diseases may spread next [12].
Mathematical models can play a critical role in identifying at-risk populations and forecasting the course of an epidemic based on current epidemiological conditions [13,14,15,16]. Models are often fitted to time series of confirmed or suspected cases to estimate epidemiological parameters such as the reproduction number of the pathogen, which can be used to predict how rapidly the epidemic will spread or whether it is expected to die out [17,18,19]. For simplicity, these models often make assumptions about transmission dynamics that do not reflect biological reality [20]. One important assumption that is often made is that the human population is well mixed, which for a mosquito-transmitted pathogen means that each person within a given area has an equal chance of being bitten by any of the mosquitoes within that area [20]. The spatial scale at which this assumption is reasonable is determined primarily by the scales of both human and mosquito movement [21]. Empirical studies have shown that chikungunya clusters at scales of neighborhoods or villages [22, 23], implying that models posed at larger scales may be incompatible with the biology of CHIKV transmission.
Over large spatial scales, e.g., at the national or provincial scale, human populations are unevenly distributed, and population mixing and movement depend on transportation networks, with movement among localities affected by a number of different economic, cultural, geographical, and environmental factors [24,25,26,27]. Contact rates between humans and mosquitoes also vary considerably among locations due to the influence of meteorological variables, such as temperature, rainfall, and relative humidity, on mosquito population dynamics [28,29,30]. As a result of these different factors, exposure within a particular geographic region can be highly heterogeneous, with important implications for disease dynamics. For example, estimates of transmission rates made from models assuming homogeneous mixing can lead to underestimates of the level of effort needed to control the spread of a pathogen [31]. Spatial heterogeneity in human-mosquito contact rates can be incorporated into disease transmission models by subdividing the population and modeling movement between subpopulations [32]. Heterogeneity in human-mosquito contact rates between different subpopulations can be represented by explicitly modeling mosquito population dynamics based on local climate [33].
In late 2013, CHIKV was introduced into the Caribbean and soon spread throughout North and South America, infecting millions of people [13, 34]. The first confirmed cases in Colombia were reported in June 2014, and almost 500,000 cases were reported by the end of 2015. Suspected chikungunya cases were reported at the second administrative level (municipality) in Colombia throughout the epidemic, enabling examination of its spatiotemporal dynamics. By simulating the chikungunya epidemic in Colombia at different spatial scales, we examine how model assumptions about the scale of human-mosquito interactions affect the accuracy of model predictions. Specifically, we simulate disease dynamics at a finer spatial scale than the observed time series used to fit the model and compare these model results to simulations conducted at the coarser spatial scale at which surveillance data were aggregated. A comparison of model fits at different levels of spatial aggregation is used to assess how incorporating spatial heterogeneity in environmental and demographic conditions improves model accuracy and provides additional insights into the epidemiological parameters estimated during the model-fitting process. In addition, simulation results at spatial scales below the level of observation provide estimates of unobserved spatial heterogeneity in epidemic dynamics.
We modeled CHIKV transmission dynamics using a new extension of the Institute for Disease Modeling's (IDM) Epidemiological Modeling Disease Transmission Kernel (EMOD-DTK) software [35]. EMOD is an individual-based disease modeling platform that supports multiple disease transmission routes, including vector-based transmission initially designed to simulate malaria transmission dynamics [35]. We modified the generic vector-transmission model to represent the transmission dynamics of arboviruses transmitted by Aedes aegypti mosquitoes. Modifications to the generic vector model included incorporating life-history parameters specific to Ae. aegypti, including parameters that capture the sensitivity of its life cycle to rainfall and temperature [36]. The modified model also includes the ability to simulate the transmission of multiple serotypes of the same pathogen; however, for CHIKV we assume that there is a single strain. Mosquito life-history parameters, as well as parameters determining the temperature-dependent frequency of feeding on humans, are described elsewhere [36].
Several parameters affecting the transmissibility of CHIKV were estimated from recent studies (Table 1). The probability of an infected individual developing a symptomatic infection was estimated as 0.72 based on the mean of estimates from 13 different studies (Table 2) [37,38,39,40,41,42,43,44,45,46,47,48,49]. An individual's infectiousness, ζ(t), over the duration of infection was assumed to vary according to
$$ \zeta (t)={e}^{-a/{c}_3}, $$
where \( a = c_1 (D_t - c_2)^2 \) and \( D_t \) is the number of days since infection. The values for parameters \( c_1 \), \( c_2 \), and \( c_3 \) were estimated by fitting Eq. (1) to viremia data from [50] and assuming that the dose-response curve for CHIKV was the same as a DENV curve calculated elsewhere [51]. Because that study [50] did not find any significant differences in viremia between asymptomatic and symptomatic infections, we used the same parameter values for asymptomatic and symptomatic individuals. The extrinsic incubation rate, \( \delta_T \), for CHIKV in Ae. aegypti following an infected blood meal depends on the temperature (T, in kelvins) and was assumed to follow the Arrhenius equation, \( \delta_T = a_1 e^{-a_2/T} \), with parameters fit to the exponential representation in [52]. The CHIKV-specific parameters \( a_1 \) and \( a_2 \) were estimated by fitting to data from [53]. We assumed that only 8% of symptomatic infections are reported, consistent with estimates for dengue [54] and similar to the 9% observed for chikungunya in Puerto Rico [38]. The total number of infections reported is the product of the symptomatic rate and the reporting rate for symptomatic infections. To ensure that our model results were not overly dependent on particular values for either the symptomatic rate or the reporting rate, we conducted a sensitivity analysis by fitting the single-patch and multi-patch departmental models for six different departments with combined symptomatic and reporting rates that were 25% lower or higher than the values used in the main analysis (corresponding to a symptomatic rate of 0.54–0.90 when the reporting rate is 0.08, or a reporting rate of 0.06–0.10 when the symptomatic rate is 0.72).
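Eq. (1) can be evaluated directly. In the sketch below the parameter values are illustrative placeholders, not the fitted values from this study:

```python
import math

def infectiousness(days_since_infection, c1=0.05, c2=6.0, c3=1.5):
    """Relative infectiousness zeta(t) = exp(-a / c3), where
    a = c1 * (D_t - c2)**2 as in Eq. (1).
    The c1, c2, c3 defaults are placeholders for illustration only."""
    a = c1 * (days_since_infection - c2) ** 2
    return math.exp(-a / c3)
```

Under this functional form, infectiousness peaks when \( D_t = c_2 \) and falls off symmetrically on either side of the peak.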
Table 1 Estimates for key parameters affecting the transmissibility of chikungunya virus and the probability that an infection is reported. Sources are studies from which values were taken or studies that contained data that were used to estimate parameter values (see Methods for details)
Table 2 Estimates of the probability of an infected individual developing a symptomatic infection from 13 different epidemiological studies
EMOD-DTK is capable of simulating pathogen transmission among humans and mosquitoes in a single patch, as well as spatial dynamics across multiple patches connected by human and mosquito movement. The spatial scales considered in this study are much larger than the typical dispersal distance of Ae. aegypti [55], so all spatial models only allowed for human movement among patches. Within a single patch, humans and mosquitoes are evenly mixed (although heterogeneous biting patterns can be implemented in the model). Mosquito population dynamics were represented by a compartmental model rather than modeled individually to reduce the computational requirements of each simulation. The compartmental model incorporates each life-history stage and simulates adult female mosquito biting and ovipositing behaviors.
CHIKV transmission was simulated in populations at three different spatial scales. First, simulations of the chikungunya epidemic for all of Colombia were run with a single patch representing the entire country. Second, single-patch simulations were run for each of the 32 departments (plus the capital district of Bogotá) individually. Finally, multi-patch simulations were run for each department (except for Bogotá, which consists of a single municipality) with separate patches for each municipality (second administrative unit in Colombia). Within a patch, various aspects of the mosquito population and the extrinsic incubation period of the virus within the mosquito are affected by local climate variables. Parameter values used in all simulations are described in Table 1. Gridded daily temperature, precipitation, and relative humidity from 2013 to 2016 were initially modeled at a 5 km × 5 km resolution [56]. The mean climate values at the country, department, and municipality scales were calculated by taking population-weighted averages of the daily values from the gridded data sets.
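The population-weighted aggregation of the gridded climate fields can be sketched as follows (a generic sketch; the array names are illustrative):

```python
import numpy as np

def population_weighted_series(climate, population):
    """Collapse a (days x grid-cells) climate array to one value per day,
    weighting each grid cell by its population count, as done here to build
    country-, department-, and municipality-level climate series."""
    w = np.asarray(population, dtype=float)
    w = w / w.sum()                              # normalise weights
    return np.asarray(climate, dtype=float) @ w  # weighted mean per day
```

The same routine applies at each spatial scale; only the set of grid cells and their population weights change.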
Due to computational constraints, the size of the human population in some simulations was either scaled down or subsampled. For the single-patch simulations at the national and departmental scales, the mosquito and human populations were both scaled to one tenth of their actual size. The populations in the multi-patch departmental model were not scaled, because the human population sizes are already smaller at the municipality level. In addition, humans were simulated using an adaptive sampling scheme, with a maximum patch population of 50,000 individuals in single-patch simulations and 20,000 in multi-patch simulations. For patches in the multi-patch simulations with fewer than 20,000 residents, everyone in the population is simulated individually. For patches with more than 20,000 residents, the patch population size is set at 20,000 humans and each individual in the simulation is weighted so as to approximate the actual population size (e.g., if the actual population size is 200,000, then each individual in the simulation receives a weighting of 10.0). To test the sensitivity of simulation results to the maximum population size used in the adaptive sampling scheme, we ran simulations for a population of 4.85 million with the maximum population size ranging from 5000 to 50,000 (the sampling factor ranged from ~ 1000:1 to 100:1). Between-simulation variance increased for maximum population sizes < 20,000, but it was not significantly reduced by increasing the maximum size above 20,000 (Additional file 1: Figure S1A). There also did not appear to be any bias in the mean incidence estimates for maximum population sizes of ≥ 20,000 (Additional file 1: Figure S1B).
Epidemiological data and model fitting
We obtained a time series of weekly suspected cases for each municipality in Colombia from the start of the epidemic through the end of the third week of 2016 from the national system of surveillance for public health of Colombia (SIVIGILA). A suspected case was defined as a person having an acute onset of fever (> 38 °C) and severe arthralgia or arthritis not explained by other medical conditions and being a resident or having visited epidemic or endemic areas within 2 weeks prior to the onset of clinical symptoms. In the 2014–2015 period, a laboratory-confirmed case was defined as a suspected case with positive reverse transcription polymerase chain reaction (RT-PCR), and in 2016 confirmed cases included RT-PCR or positive serology.
These time series were used to estimate several model parameters separately at each spatial scale. For both the spatial and non-spatial models, we fitted the model to time series data to estimate (1) the amount of rainfall-associated temporary mosquito larval habitat in each department (2) the decay rate of this temporary habitat, and (3–5) the timing, magnitude, and duration of virus importation into the country or department. For the spatial model, we also fitted a scaling factor that modulated movement rates among municipalities. Therefore, the multi-patch departmental models involved fitting only a single additional parameter relative to the single-patch departmental models and the single-patch national model (six vs. five).
Rainfall-associated temporary larval mosquito habitat in the model increases with rainfall and decays at a rate proportional to the evaporation rate driven by temperature and humidity [35]. The amount of larval habitat is the primary driver of the number of adult mosquitoes per human in simulations. Fitting the larval habitat parameters in the model to the time series of suspected cases allowed us to estimate the ratio of adult mosquitoes per human that recreate the observed transmission dynamics. The amount of temporary rainfall habitat was scaled by the department population size, so that we could compare the relative amounts of larval habitat per person in different departments. For the multi-patch models, a single larval habitat size parameter was fitted for each department, with the amount of habitat in each municipality scaled by the municipality population size so that the amount of larval habitat per person was constant for all municipalities in the department.
The initial introduction of CHIKV was assumed to occur via a single pulse of importation with variable timing, size, and duration. We represented this pulse with a Gaussian probability density function, with the timing of the introduction represented by the mean and the duration represented by the standard deviation. We then multiplied this curve by a scaling factor representing the overall magnitude of the importation pulse [36]. The mean timing was allowed to range between the beginning of 2014 and the end of the study period (the first case in Colombia was reported in June 2014). The standard deviation was between 1 and 50 days, and the magnitude corresponded to between 0.001 to 100 expected cumulative infections, with the actual number of imported infections drawn from a Poisson distribution with a mean equal to the scaled magnitude of the Gaussian. For the spatial models, the initial imported case(s) were assumed to occur in the largest municipality in the department, with introduction into the other municipalities (patches) occurring via simulated human movement.
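The importation pulse — a Gaussian density scaled so its integral equals the importation magnitude, with Poisson-distributed daily counts — can be written out directly (a sketch with illustrative names):

```python
import math
import random

def expected_importations(day, t0, sigma, magnitude):
    """Expected imported infections on `day`: a Gaussian pdf centred at t0
    with spread sigma, scaled so cumulative importations sum to `magnitude`."""
    z = (day - t0) / sigma
    return magnitude * math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def draw_importations(day, t0, sigma, magnitude, rng=None):
    """Realised daily count: a Poisson draw with the mean above
    (simple inversion sampler, adequate for the small daily rates here)."""
    rng = rng or random.Random(0)
    lam = expected_importations(day, t0, sigma, magnitude)
    u, k, p = rng.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k
```

Summing the expected values over the whole pulse recovers the magnitude parameter, which is what makes the three fitted quantities (timing, duration, magnitude) separately interpretable.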
Movement rates among municipalities within a department were estimated using a gravity-like model [57] fitted to department-level migration rates from the most recent census, which were then downscaled to the municipality level based on population, distance, and economic covariates. These migration rates were then scaled to a short-term movement rate with an initial scaling factor that was previously estimated in a study [58] comparing census immigration rates and cellphone-based movement patterns in Kenya. Because that study was conducted in a different country and the scaling factor was very different for different travel lengths (e.g., 2.15 for daily travel but 101.92 for weekly travel), we fitted this range between 1.02 and 101.92, setting the upper bound at the high weekly movement rate seen in Kenya. These movement rates were represented in the model as the fraction of individuals in patch i who travel on a given day to patch j. Movement events are assumed to last for 1 day, with a 100% probability that the individual will return to their home patch.
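A minimal sketch of a gravity-like daily movement matrix is below. The exponents, the daily leave fraction, and the omission of economic covariates are simplifying assumptions; the fitted model also calibrated its scaling factor against census migration and cellphone-derived movement rates.

```python
import numpy as np

def gravity_daily_fractions(pop, dist, leave_frac=0.01, beta=1.0, gamma=2.0):
    """Entry [i, j]: fraction of patch i residents travelling to patch j on a
    given day, split across destinations in proportion to
    pop_j**beta / dist_ij**gamma (a gravity-like kernel)."""
    pop = np.asarray(pop, dtype=float)
    dist = np.asarray(dist, dtype=float)
    with np.errstate(divide="ignore"):       # zero self-distance on diagonal
        attract = pop[None, :] ** beta / dist ** gamma
    np.fill_diagonal(attract, 0.0)           # no self-travel
    return leave_frac * attract / attract.sum(axis=1, keepdims=True)
```

Each row sums to the daily leave fraction, matching the model's representation of movement as the fraction of patch i individuals who travel to patch j on a given day and return after one day.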
Fitting of the transmission model was conducted by maximum likelihood using a gradient ascent iterative optimization algorithm called OptimTool that has been built into the EMOD-DTK software framework. The initial parameter values were drawn from the hypersphere of the specified parameter ranges, centered around an initial best guess for each parameter value, with a mean search radius determined by the number of parameters and the standard deviation of the radius set at 1/10 of the mean. One hundred draws from this parameter space were conducted for each iteration of the model-fitting process. Due to the stochasticity involved in individual-based models, each sample was simulated separately four times, for a total of 400 simulations per iteration. At the end of each iteration step, the log likelihood of each sample was calculated. The number of suspected cases was assumed to be binomially distributed given the population, and, in order to incorporate uncertainty in the infection and reporting rates, the probability of a reported case was assumed to come from a beta distribution, resulting in a beta-binomial likelihood function. Initially, the beta distribution was assumed to be uninformative (α = 1, β = 1), but after simulation results became available, the beta hyperparameters were adjusted to reflect this new information via a Bayesian update. As a result, \( \alpha = 1 + X_i \) and \( \beta = 1 + N_i - X_i \), where \( N_i \) is the population size in patch i and \( X_i \) is the average number of reported cases across simulations [59]. This process was repeated ten times, with parameter draws from each successive iteration based on the log likelihoods from all previous iterations.
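The beta-binomial log likelihood and the hyperparameter update described above can be computed with only standard-library gamma functions (a sketch, not the EMOD-DTK implementation):

```python
from math import lgamma, log

def log_beta_binomial(x, n, alpha, beta):
    """Log pmf of x reported cases out of population n when the per-person
    reporting probability follows a Beta(alpha, beta) distribution."""
    def lbeta(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)
    lchoose = lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
    return lchoose + lbeta(x + alpha, n - x + beta) - lbeta(alpha, beta)

def updated_hyperparameters(pop_i, mean_simulated_cases_i):
    """Bayesian update from the text: alpha = 1 + X_i, beta = 1 + N_i - X_i."""
    return 1.0 + mean_simulated_cases_i, 1.0 + pop_i - mean_simulated_cases_i
```

With the uninformative prior (α = β = 1) the beta-binomial reduces to a uniform distribution over 0 to n cases, which is why the subsequent Bayesian update sharpens the likelihood considerably.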
The accuracies of model estimates were assessed by calculating the mean absolute scaled error (MASE) of the estimated vs. observed weekly suspected case numbers [60]. The MASE calculates the estimation error at each time step (numerator) relative to the prediction from a simple stationary autoregressive lag-1 (AR-1) model:
$$ \mathrm{MASE}=\frac{1}{T}\sum_{t=1}^T\frac{\left|y_t-x_t\right|}{\frac{1}{T-1}\sum_{t=2}^T\left|y_t-y_{t-1}\right|}, $$
where \( y_t \) and \( x_t \) are the observed and estimated numbers of cases for weeks t = 1,…,T. The relative accuracies of the single-patch vs. multi-patch models for each department were then measured by calculating the relative MASE, \( \mathrm{MASE}_m/\mathrm{MASE}_s \), the ratio of the multi-patch model error to the single-patch model error.
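Eq. (2) and the relative MASE translate directly into code:

```python
def mase(observed, estimated):
    """Mean absolute scaled error: mean |y_t - x_t| divided by the mean
    absolute one-step change in the observed series (the in-sample error
    of the naive AR-1 forecast)."""
    T = len(observed)
    mae = sum(abs(y - x) for y, x in zip(observed, estimated)) / T
    naive = sum(abs(observed[t] - observed[t - 1]) for t in range(1, T)) / (T - 1)
    return mae / naive

def relative_mase(observed, est_multi, est_single):
    """relMASE = MASE_m / MASE_s; values below 1 favour the multi-patch model."""
    return mase(observed, est_multi) / mase(observed, est_single)
```

Because the scaling denominator depends only on the observed series, the relative MASE reduces to the ratio of the two models' mean absolute errors on the same department.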
Because the municipality-level observations were not used in the fitting process at the department level, we were able to compare these observations to the predicted municipality-level dynamics from the multi-patch models to assess the model's capability to reproduce disease dynamics at spatial scales below the scale at which the fitting process occurred. The total number of observed cases and cumulative per capita incidence were calculated for each municipality in a department and compared to the estimated case totals and per capita incidence per municipality. Comparisons were made by calculating the Pearson's correlation coefficient for the reported and estimated municipality values within each department using the model results from 100 best-fitting simulations per department. These municipality-level correlations were compared to correlations calculated for a null model that allocates the estimated cases in a department to each municipality within the department using a multinomial distribution with probabilities weighted by municipality population size.
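The population-weighted multinomial null model can be sketched with the standard library alone (the fixed seed is illustrative):

```python
import random

def null_allocation(total_cases, municipality_pops, seed=0):
    """Allocate a department's estimated cases among its municipalities via
    a multinomial draw with probabilities proportional to population size."""
    rng = random.Random(seed)
    pops = [float(p) for p in municipality_pops]
    total_pop = sum(pops)
    counts = [0] * len(pops)
    for _ in range(total_cases):
        u = rng.random() * total_pop
        acc = 0.0
        for i, p in enumerate(pops):     # invert the cumulative weights
            acc += p
            if u < acc:
                counts[i] += 1
                break
    return counts
```

Comparing Pearson correlations of the fitted multi-patch allocations against this null draw isolates how much of the municipality-level signal comes from model dynamics rather than population size alone.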
Fit to national time series
Between the start of 2014 and the third week of 2016, our best-fit national-level model projects a median of 873,318 (95% confidence interval (CI) 0–1,000,353) reported cases, an overestimate of the 481,284 actually reported (Fig. 1a). The 95% CI includes zero because about 19% of the time the importations did not result in any locally acquired cases. Excluding these stochastic fadeouts, the median estimate of reported cases is 886,947 (95% CI 805,164–1,010,590). The best-fit national-level model estimates matched the observations well early in the epidemic through the end of 2014 but overestimated cases following the peak in the second week of 2015, projecting a continued increase in cases until week 15 in 2015. The best-fit estimate for date of introduction was week 7 of 2014 (95% CI week 52, 2013 to week 25, 2014).
Fig. 1 a Weekly number of reported chikungunya cases in Colombia (black), along with the mean and 95% CI from the (green) national-level model. b National-level totals derived by combining the results of the departmental models, using either (blue) a single-patch model per department or (red) the multi-patch models. c Maps of Colombia showing the spatial scale of the different models, with the color coding for the different models used in all figures
The combined total of reported cases predicted by the 33 different single-patch department-level models was 864,296 (95% CI 709,075–892,697), overestimating the observed national total by 79.6% (95% CI 47.3–85.5%). The timing of the epidemic was relatively accurate, but the size of the peak was significantly overestimated, with estimated cases during the peak week being 72.3% (95% CI 23.2–151.1%) above the observed national number of cases (Fig. 1b).
The combined total of reported cases at the national level predicted by the multi-patch department-level models was more accurate than either the national-level model or the combined total from the single-patch department-level models (Fig. 1b). The median estimate of reported cases was 451,920 (95% CI 375,139–511,009), an underestimate of 6.1% (95% CI –6.2 to 22.1%). The number of cases during the week of peak reported cases was underestimated by 11.5% (95% CI –37.0 to 45.1%), and the estimated peak was 2 weeks earlier than the observed peak. However, the estimated peak was only 9.0% below the observed peak (95% CI –40.6 to 49.6%).
Department-level fits
The median MASE across single-patch departmental models was 3.37 (95% CI 0.50–27.46), while the median MASE across all multi-patch departmental models was 1.75 (95% CI 0.50–6.11), for an overall relative MASE of 0.55 (95% CI 0.12–1.90). The MASE of the multi-patch model was lower than the MASE of the single-patch model for the majority of departments (Fig. 2). The 95% CI of the MASE from the single-patch model was not entirely below the MASE from the multi-patch model for any department, while it was entirely above the multi-patch model MASE for 15 departments: Atlantico (10.22–15.83 vs. 1.55–2.81), Caldas (6.7–7.76 vs. 0.95–1.92), Caqueta (3.20–4.99 vs. 1.40–2.86), Cauca (25.09–28.83 vs. 2.67–8.13), Cesar (4.41–9.06 vs. 1.57–1.87), Cordoba (4.35–6.44 vs. 1.01–3.27), Cundinamarca (5.51–6.33 vs. 1.08–1.52), Huila (1.71–3.39 vs. 1.14–1.60), Magdalena (5.72–8.74 vs. 1.64–4.92), Putumayo (3.07–12.32 vs. 1.59–2.76), Quindio (5.14–6.68 vs. 1.49–2.82), Risaralda (10.36–12.75 vs. 1.68–2.14), Santander (11.456–17.01 vs. 2.40–10.97), Valle del Cauca (1.87–4.71 vs. 1.24–1.76), and Vichada (5.26–7.86 vs. 1.06–1.96). In a few departments, the single-patch model overestimated the number of cases by a large margin while the multi-patch model provided a good fit to the observed time series (e.g., Cauca, Santander, and Risaralda; Fig. 3). In the department where the relative MASE for the multi-patch model was the poorest (Narino), the best-fit simulation from the multi-patch model actually reproduced the epidemic well, but overestimated the epidemic size in some simulations, while the single-patch model underestimated the epidemic size.
Fit of multi-patch simulations vs. single-patch simulations to department-level time series for each department in Colombia (excluding Bogotá). Relative model fit is measured via the relative mean scaled error (relMASE) of the single-patch fit to the multi-patch fit, with relMASE < 1 indicating a better fit for the multi-patch model
Comparisons of department-level results for single-patch and multi-patch models. Black dots represent the observed time series, while blue lines represent the 40 best-fitting individual simulations from the single-patch model and red lines represent the best-fitting simulations from the multi-patch model. Darker colored blue and red lines are the single best-fitting simulations
Parameter estimates
The estimated amount of larval habitat per capita was higher in the single-patch than in the multi-patch model for many of the departments (Additional file 1: Figures S2–S9), particularly for departments where the MASE of the multi-patch departmental model was significantly lower than the MASE of the single-patch departmental model. In departments with higher single-patch departmental model MASE values and where the model overestimated epidemic size, the estimated larval habitat decay rates tended to be lower than the estimates from the multi-patch departmental model, which also corresponds to larger mosquito populations in the single-patch departmental models (Fig. 4e, f, Additional file 1: Figures S2–S9). The joint distributions for the parameters that dictate importation timing and magnitude are presented in Additional file 1: Figures S10–S17. Model fits were not overly sensitive to varying the symptomatic or reporting rates, with relative single-patch and multi-patch model fits being qualitatively the same for both lower and higher symptomatic/reporting rates (Additional file 1: Figures S18 and S19). The one exception was the multi-patch departmental model for Antioquia, where the number of reported cases was overestimated with both low and high symptomatic rates, but not at the middle rate used in the main analysis.
a–d The population weighted mean daily temperature in the labeled department along with the daily temperatures for each municipality in the department. e–h The mean daily biting rate from the top 10 simulations for the single-patch and multi-patch models. Panels a, b, e, and f are departments where the single-patch model severely overestimated the epidemic size. Panels c, d, g, and h are departments where the single-patch model did not overestimate the size of the epidemic
Municipality-level fits
Although the multi-patch simulations for each department were only fitted to the department-level time series, the ensemble of municipality-level simulations captured several important aspects of the observed municipal-level dynamics. Overall, the total number of simulated cases per municipality was strongly correlated with the observed number of cases per municipality (across simulation runs: median r = 0.86; interquartile range (IQR) of r = 0.53–0.97). At the same time, a null model (in which the single-patch departmental model results were allocated to municipalities proportional to population) produced similar results (median r = 0.84; IQR 0.52–0.97). A bigger distinction between the multi-patch and single-patch departmental models was seen when examining per capita incidence. In this case, the correlation between observed and simulated per capita incidence for the multi-patch model (median r = 0.17; IQR –0.02 to 0.39) was clearly higher than for the single-patch model (median r = 0.00; IQR –0.13 to 0.13) (Fig. 5). Whereas the result for raw incidence reflects the importance of population size in driving overall case numbers, the result for per capita incidence demonstrates that the parameters and assumptions of the multi-patch model contain information about risk not captured by the data to which the model was fitted. Examples of municipality-level estimates are presented in Fig. 6.
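The comparison of simulated and observed municipal totals, including the population-proportional null model, can be sketched as follows (illustrative helper functions, not the study's analysis code):

```python
import numpy as np

def per_capita_correlations(obs_cases, sim_cases, population):
    """Pearson's r between observed and simulated municipal totals,
    both for raw case counts and for per capita incidence."""
    obs = np.asarray(obs_cases, dtype=float)
    sim = np.asarray(sim_cases, dtype=float)
    pop = np.asarray(population, dtype=float)
    r_raw = np.corrcoef(obs, sim)[0, 1]
    r_capita = np.corrcoef(obs / pop, sim / pop)[0, 1]
    return r_raw, r_capita

def null_allocation(department_total, population):
    """Null model: departmental cases allocated to municipalities
    in proportion to their population sizes."""
    pop = np.asarray(population, dtype=float)
    return department_total * pop / pop.sum()
```

Because the null allocation is proportional to population, its per capita incidence is constant across municipalities, which is why it carries no information about spatial variation in risk.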
Mean and 95% CI from simulations at the municipality level for Valle del Cauca and Antioquia departments. The four largest municipality-level epidemics for each department are shown
Histogram of correlations (Pearson's r) between the observed and simulated cumulative per capita incidence per municipality. Correlations are shown for the multi-patch departmental models (red) and for the null model (blue), in which departmental cases are allocated to each municipality proportional to its population size
At the national level, aggregating simulated epidemics from single-patch departmental models did not improve the estimate of overall epidemic size compared to the single-patch national model fitted directly to national-level data. However, the aggregated single-patch departmental models did improve the shape of the reconstructed national-level epidemic curve, with the timing of the peak correctly estimated in early 2015 rather than several months later, as in the single-patch national model. This result indicates that the single-patch departmental models were more structurally appropriate for their respective time series than was the single-patch national model for its time series, similar to a previous finding about Zika dynamics in Colombia [61]. This result is particularly concerning for the prospect of using a national-level model for forecasting, because the national-level model failed to capture the temporal trajectory of the epidemic, fitting early patterns but overestimating later ones, even under the ideal circumstance of being fitted to the entire time series. This failure suggests structural limitations of a model posed at this scale. Two primary limitations are that: (1) it does not allow the timing of the start of the epidemic to vary locally, and (2) it averages spatial covariates over an extremely large area in a country spanning the Andes to the Amazon. Decisions based on forecasts from such a model could lead to the misallocation of critical resources, or to undue panic if communicated to the public [62].
Going further, the collection of multi-patch departmental models also appeared more structurally appropriate for the department-level time series to which they were fitted, indicating that greater spatial granularity in model structure consistently improved the models' ability to capture temporal dynamics [21], at least down to the municipal level. Indeed, with the multi-patch departmental models, we were able to accurately estimate both the timing and the size of the overall epidemic peak. Both the single-patch and multi-patch departmental models also predicted variability in the national-level time series better than the single-patch national model. Rather than a smooth epidemic curve, there were several noticeable spikes in the national-level time series following the introduction of CHIKV into a new department or large municipality. By estimating introductions into each department, both single-patch and multi-patch departmental models can capture this temporal heterogeneity. The multi-patch departmental model can also simulate introductions at the municipality level, allowing for exploration of which municipalities might have been the most likely entry point for a given department. In general, our results raise concerns about the application of national-level models to national-level time series, as has been done previously for the chikungunya invasion of the Americas [63, 64]. It is essential that population substructure be included in models fitted to national-level data, and our multi-patch model represents a structurally advantageous option, as do others (e.g., [16]).
With respect to departmental dynamics, two major patterns emerged when we compared the relative fits of the single-patch and multi-patch departmental models. First, for many of the departments where the relative MASE of the multi-patch model was substantially lower, the single-patch model provided a poorer fit to the observed data because it overestimated the size of the epidemic (e.g., Antioquia, Atlantico, Risaralda, and Santander). In these departments, the single-patch model may have overestimated the amount of available larval mosquito habitat, or estimated a slower decay in larval habitat size following rainfall. Because the climate variables were averaged for the entire department, the mean temperature in many departments was less suitable for Ae. aegypti and CHIKV transmission than it was in some of the municipalities within the department (Fig. 4a–d). This may be especially true for a mountainous country such as Colombia, consistent with general expectations that the nature of spatial autocorrelation affects the type of bias that results from spatial aggregation [65]. If climate suitability is lower, then more larval habitat is needed to achieve the same number of infectious mosquitoes per human (Fig. 4e–h). Because the entire department is homogeneously mixed, everyone in the department experiences a similar risk of infection, and the size of the epidemic is overestimated. The multi-patch models, however, may contain municipalities where the climate is not suitable for efficient CHIKV transmission, lowering the portion of the population at risk of infection and appropriately matching geographic variation in human demography with geographic variation in climate. This issue of appropriately matching populations with factors driving exposure is a general and pervasive issue in spatial epidemiology, affecting not only vector-borne diseases but even non-communicable diseases such as leukemia [66].
The second major pattern was displayed by single-patch departmental models where the timing of the peak and the final epidemic size fit relatively well, but the duration of the epidemic was underestimated. In these departments (e.g., Huila, Meta, and Tolima), the single-patch model overestimated the initial increase in cases at the start of the epidemic, and then underestimated how long it would take for the epidemic to fade out after the peak. The multi-patch model may have done a better job of estimating the rapid increase in cases at the start of the epidemic because the conditions in one or more municipalities were highly suitable for rapid transmission compared to mean climate conditions across the department. Once the peak was reached, these departments could also experience a slower decline in cases because municipalities with less favorable conditions would take longer to reach their local peaks. In addition, the spatial structuring of the human population and movement within a structured population slows the spread of the epidemic within the department [67]. These results mirror recent work [68] on influenza dynamics made possible by fine-scale spatial data, which showed that a combination of detailed human geographic data and mobility patterns is important for being able to recreate spatially heterogeneous epidemic patterns below larger scales of spatial aggregation.
No single pattern or set of patterns was observed in departments where the multi-patch model did not improve on the fit of the single-patch departmental model. In several departments, such as Bolivar and Norte de Santander, the single-patch departmental model provided a good fit to the data, leaving little room for improvement with the multi-patch model. There were several departments with smaller outbreaks, particularly Boyaca and Nariño, where the multi-patch rather than the single-patch departmental model had a tendency to overestimate the size of the epidemic. For both of these departments, the mean estimate from the multi-patch departmental model was actually a better fit, but the variance between simulations was greater, likely due to the additional stochasticity that arises from the possibility of stochastic fadeout occurring in each municipality in a multi-patch model. There were also several departments with smaller population sizes that had relative MASE scores near one. These departments, such as Amazonas and Vaupes, had few cases, and as a result neither the single-patch nor the multi-patch models estimated that an outbreak had occurred.
Impressively, our assumptions about transmission dynamics within and among municipalities turned out to be good enough to enable estimation, to at least some degree, of per capita incidence below the spatial scale of the data to which the model was fitted. Implicitly, the single-patch departmental model assumes that residents of all municipalities within a department experience equal risk of infection. Not surprisingly, there was variation in risk among residents of different municipalities, and our multi-patch departmental model provided estimates of that risk that were positively correlated with per capita incidence based on suspected case numbers. Because no data below the departmental scale were used to inform those estimates, this result provides a clear indication that the parameters and assumptions of the multi-patch departmental model contain some degree of positive predictive value. Models of mosquito-borne pathogen transmission usually ignore within-patch heterogeneity [20] and instead default to assuming well-mixed interactions at whatever scale data are available. Our results suggest that this may often be a mistake, given the potential for copious high-resolution data on spatial drivers of transmission [56] and an improved understanding of human mobility patterns [57] to enable successful model predictions at finer scales than that at which data are available. Although gravity models are often capable of reproducing patterns of epidemic spread similar to alternative models of human movement [69], incorporating human movement data from sources such as cell phone metadata can improve model estimates of spread and timing compared to a gravity model [32]. 
Human movement data or transportation infrastructure information may be particularly useful for modeling epidemic spread in geographically diverse countries like Colombia, where the distance between locations may not be representative of their connectivity due to intervening mountain ranges or rainforests that restrict human movement.
Although the EMOD-DTK modeling framework is flexible in many respects, we made several simplifications that could be viewed as limitations of this study. First, while the 1122 municipalities do represent a granular view of the country, there may be relevant heterogeneities at even finer spatial scales. Dengue spatial foci have been estimated to occur at neighborhood scales [70, 71], and heterogeneity in both blood-feeding and microclimate has been demonstrated down to the household scale [30, 72]. Theoretical results indicate that these extremely fine-scale heterogeneities may not be easily captured by even modestly aggregated models [21]. Second, we assumed a single, homogeneous larval mosquito habitat for each municipality within a department. In reality, these habitats are extremely numerous [73] and are spatially associated with many factors [74]. More detailed models of Ae. aegypti population dynamics exist [75], but they come at exceedingly high computational expense for the spatial scales of interest here and are subject to numerous uncertainties [76]. Still, different models of Ae. aegypti population dynamics can vary considerably in their response to climatic drivers and interventions [77], suggesting that future refinement of this aspect of the model may be worthwhile. Third, besides climate, there are other important factors influencing geographic heterogeneity in incidence rates that we did not incorporate into our model and that could improve estimates at the department or municipality level. One important factor known to influence both the amount of mosquito habitat and human contact with mosquitoes is the local level of economic development, with poorer areas having higher incidence rates due to higher contact rates with Aedes mosquitoes [78].
Other environmental factors might also affect the local suitability for larval mosquitoes, such as how local infrastructure and development, as well as cultural practices surrounding water storage, influence the amount of mosquito breeding habitat. Fourth, we assumed a fixed reporting rate based on an estimate for chikungunya from Puerto Rico and overall estimates for dengue, although reporting rates are likely to vary among departments or even among municipalities [79].
Simulating CHIKV transmission dynamics from versions of our model with increasing spatial granularity improved the fit of the model to temporal incidence patterns, both at the scales to which the data were fitted and when aggregated at the national level. This improvement derived from the fact that simulations with spatially granular models more appropriately captured spatial heterogeneity in epidemiologically relevant factors, such as mosquito abundance and human demography and movement. This improvement was evident when moving from national to departmental levels and from departmental to municipal levels. Models based on municipal-level spatial heterogeneity closely matched epidemic size for the majority of departments and also estimated the duration of the epidemic better than the single-patch departmental models, particularly with respect to the timing of the start of local epidemics. These models also captured continued low levels of transmission for months following epidemic peaks in many of the departments. Use of models posed at spatial scales more granular than those at which data are available represents a promising approach for the common situation of needing to answer questions about spatial heterogeneity in transmission below the scale at which highly spatially aggregated data are available.
CHIKV: Chikungunya virus
MASE: Mean absolute scaled error
Mackenzie JS, Gubler DJ, Petersen LR. Emerging flaviviruses: the spread and resurgence of Japanese encephalitis, West Nile and dengue viruses. Nat Med. 2004;10:S98–109.
Kyle JL, Harris E. Global spread and persistence of dengue. Annu Rev Microbiol. 2008;62:71–92.
Staples JE, Fischer M. Chikungunya virus in the Americas — what a vectorborne pathogen can do. N Engl J Med. 2014;371:887–9.
Weaver SC. Arrival of chikungunya virus in the new world: prospects for spread and impact on public health. PLoS Negl Trop Dis. 2014;8:e2921.
Khan K, Bogoch I, Brownstein JS, Miniota J, Nicolucci A, Hu W, et al. Assessing the origin of and potential for international spread of chikungunya virus from the Caribbean. PLoS Curr. 2014;6 https://doi.org/10.1371/currents.outbreaks.2134a0a7bf37fd8d388181539fea2da5.
Lessler J, Chaisson LH, Kucirka LM, Bi Q, Grantz K, Salje H, et al. Assessing the global threat from Zika virus. Science. 2016;353:aaf8160.
Bogoch II, Brady OJ, Kraemer MUG, German M, Creatore MI, Kulkarni MA, et al. Anticipating the international spread of Zika virus from Brazil. Lancet. 2016;387:335–6.
Faria NR, Azevedo RdSdS, Kraemer MUG, Souza R, Cunha MS, Hill SC, et al. Zika virus in the Americas: early epidemiological and genetic findings. Science. 2016;352:345–9.
Wearing HJ, Rohani P. Ecological and immunological determinants of dengue epidemics. Proc Natl Acad Sci U S A. 2006;103:11802–7.
Del Valle SY, McMahon BH, Asher J, Hatchett R, Lega JC, Brown HE, et al. Summary results of the 2014-2015 DARPA chikungunya challenge. BMC Infect Dis. 2018;18:245.
Johansson MA, Powers AM, Pesik N, Cohen NJ, Staples JE. Nowcasting the spread of chikungunya virus in the Americas. PLoS One. 2014;9:e104915.
Cauchemez S, Ledrans M, Poletto C, Quenel P, de Valk H, Colizza V, et al. Local and regional spread of chikungunya fever in the Americas. Euro Surveill. 2014;19:20854.
WHO Ebola Response Team, Aylward B, Barboza P, Bawo L, Bertherat E, Bilivogui P, et al. Ebola virus disease in West Africa—the first 9 months of the epidemic and forward projections. N Engl J Med. 2014;371:1481–95.
Zhang Q, Sun K, Chinazzi M, Pastore y Piontti A, Dean NE, Rojas DP, et al. Spread of Zika virus in the Americas. Proc Natl Acad Sci U S A. 2017;114:E4334–43.
Althaus CL. Estimating the reproduction number of Ebola virus (EBOV) during the 2014 outbreak in West Africa. PLoS Curr. 2014;6. https://doi.org/10.1371/currents.outbreaks.91afb5e0f279e7f29e7056095255b288.
Kucharski AJ, Funk S, Eggo RM, Mallet H-P, Edmunds WJ, Nilles EJ. Transmission dynamics of Zika virus in island populations: a modelling analysis of the 2013-14 French Polynesia outbreak. PLoS Negl Trop Dis. 2016;10:e0004726.
Van Kerkhove MD, Bento AI, Mills HL, Ferguson NM, Donnelly CA. A review of epidemiological parameters from Ebola outbreaks to inform early public health decision-making. Sci Data. 2015;2:150019.
Reiner RC, Perkins TA, Barker CM, Niu T, Chaves LF, Ellis AM, et al. A systematic review of mathematical models of mosquito-borne pathogen transmission: 1970–2010. J R Soc Interface. 2013;10:20120921.
Perkins TA, Scott TW, Le Menach A, Smith DL. Heterogeneity, mixing, and the spatial scales of mosquito-borne pathogen transmission. PLoS Comput Biol. 2013;9:e1003327.
Salje H, Lessler J, Paul KK, Azman AS, Rahman MW, Rahman M, et al. How social structures, space, and behaviors shape the spread of infectious diseases using chikungunya as a case study. Proc Natl Acad Sci U S A. 2016;113:13420–5.
Salje H, Cauchemez S, Alera MT, Rodriguez-Barraquer I, Thaisomboonsuk B, Srikiatkhachorn A, et al. Reconstruction of 60 years of chikungunya epidemiology in the Philippines demonstrates episodic and focal transmission. J Infect Dis. 2016;213:604–10.
Simini F, González MC, Maritan A, Barabási A-L. A universal model for mobility and migration patterns. Nature. 2012;484:96–100.
Wesolowski A, Eagle N, Tatem AJ, Smith DL, Noor AM, Snow RW, et al. Quantifying the impact of human mobility on malaria. Science. 2012;338:267–70.
Brockmann D, Helbing D. The hidden geometry of complex, network-driven contagion phenomena. Science. 2013;342:1337–42.
Read JM, Lessler J, Riley S, Wang S, Tan LJ, Kwok KO, et al. Social mixing patterns in rural and urban areas of southern China. Proc R Soc Lond B Biol Sci. 2014;281:20140268.
Lambrechts L, Paaijmans KP, Fansiri T, Carrington LB, Kramer LD, Thomas MB, et al. Impact of daily temperature fluctuations on dengue virus transmission by Aedes aegypti. Proc Natl Acad Sci. 2011;108:7460–5.
Johansson MA, Dominici F, Glass GE. Local and global effects of climate on dengue transmission in Puerto Rico. PLoS Negl Trop Dis. 2009;3:e382.
Murdock CC, Evans MV, McClanahan TD, Miazgowicz KL, Tesla B. Fine-scale variation in microclimate across an urban landscape shapes variation in mosquito population dynamics and the potential of Aedes albopictus to transmit arboviral disease. PLoS Negl Trop Dis. 2017;11:e0005640.
Kraemer MUG, Perkins TA, Cummings DT, Zakar R, Hay SI, Smith DL, et al. Big city, small world: density, contact rates, and transmission of dengue across Pakistan. J R Soc Interface. 2015;12:20150468.
Wesolowski A, Qureshi T, Boni MF, Sundsøy PR, Johansson MA, Rasheed SB, et al. Impact of human mobility on the emergence of dengue epidemics in Pakistan. Proc Natl Acad Sci U S A. 2015;112:11887–92.
Morin CW, Monaghan AJ, Hayden MH, Barrera R, Ernst K. Meteorologically driven simulations of dengue epidemics in San Juan, PR. PLoS Negl Trop Dis. 2015;9:e0004002.
Fischer M, Staples JE. Arboviral Diseases Branch, National Center for Emerging and Zoonotic Infectious Diseases, CDC. Notes from the field: chikungunya virus spreads in the Americas — Caribbean and South America, 2013-2014. MMWR Morb Mortal Wkly Rep. 2014;63:500–1.
Eckhoff PA. A malaria transmission-directed model of mosquito life cycle and ecology. Malar J. 2011;10:303.
Soda KJ, Moore SM, España G, Bloedow J, Raybaud B, Althouse B, et al. DTK-Dengue: A new agent-based model of dengue virus transmission dynamics. bioRxiv. 2018. https://doi.org/10.1101/376533.
Gay N, Rousset D, Huc P, Matheus S, Ledrans M, Rosine J, et al. Seroprevalence of Asian lineage chikungunya virus infection on Saint Martin Island, 7 months after the 2013 emergence. Am J Trop Med Hyg. 2016;94:393–6.
Bloch D, Roth NM, Caraballo EV, Muñoz-Jordan J, Hunsperger E, Rivera A, et al. Use of household cluster investigations to identify factors associated with chikungunya virus infection and frequency of case reporting in Puerto Rico. PLoS Negl Trop Dis. 2016;10:e0005075.
Moro ML, Gagliotti C, Silvi G, Angelini R, Sambri V, Rezza G, et al. Chikungunya virus in north-eastern Italy: a seroprevalence survey. Am J Trop Med Hyg. 2010;82:508–11.
Queyriaux B, Simon F, Grandadam M, Michel R, Tolou H, Boutin J-P. Clinical burden of chikungunya virus infection. Lancet Infect Dis. 2008;8:2–3.
Yoon I-K, Alera MT, Lago CB, Tac-An IA, Villa D, Fernandez S, et al. High rate of subclinical chikungunya virus infection and association of neutralizing antibody with protection in a prospective cohort in the Philippines. PLoS Negl Trop Dis. 2015;9:e0003764.
Kumar NP, Suresh A, Vanamail P, Sabesan S, Krishnamoorthy KG, Mathew J, et al. Chikungunya virus outbreak in Kerala, India, 2007: a seroprevalence study. Mem Inst Oswaldo Cruz. 2011;106:912–6.
Sergon K, Njuguna C, Kalani R, Ofula V, Onyango C, Konongoi LS, et al. Seroprevalence of chikungunya virus (CHIKV) infection on Lamu Island, Kenya, October 2004. Am J Trop Med Hyg. 2008;78:333–7.
Sergon K, Yahaya AA, Brown J, Bedja SA, Mlindasse M, Agata N, et al. Seroprevalence of chikungunya virus infection on Grande Comore Island, Union of the Comoros, 2005. Am J Trop Med Hyg. 2007;76(6):1189–93.
Sissoko D, Moendandze A, Malvy D, Giry C, Ezzedine K, Solet JL, et al. Seroprevalence and risk factors of chikungunya virus infection in Mayotte, Indian Ocean, 2005-2006: a population-based survey. PLoS One. 2008;3:e3066.
Gérardin P, Guernier V, Perrau J, Fianu A, Le Roux K, Grivard P, et al. Estimating chikungunya prevalence in La Réunion Island outbreak by serosurveys: two methods for two critical times of the epidemic. BMC Infect Dis. 2008;8:99.
Manimunda SP, Sugunan AP, Rai SK, Vijayachari P, Shriram AN, Sharma S, et al. Outbreak of chikungunya fever, Dakshina Kannada District, South India, 2008. Am J Trop Med Hyg. 2010;83:751–4.
Ayu SM, Lai LR, Chan YF, Hatim A, Hairi NN, Ayob A, et al. Seroprevalence survey of chikungunya virus in Bagan Panchor, Malaysia. Am J Trop Med Hyg. 2010;83:1245–8.
Nakkhara P, Chongsuvivatwong V, Thammapalo S. Risk factors for symptomatic and asymptomatic chikungunya infection. Trans R Soc Trop Med Hyg. 2013;107:789–96.
Appassakij H, Khuntikij P, Kemapunmanus M, Wutthanarungsan R, Silpapojakul K. Viremic profiles in asymptomatic and symptomatic chikungunya fever: a blood transfusion threat? Transfusion. 2013;53:2567–74.
Nguyet MN, Duong THK, Trung VT, Nguyen THQ, Tran CNB, Long VT, et al. Host and viral features of human dengue cases shape the population of infected and infectious Aedes aegypti mosquitoes. Proc Natl Acad Sci U S A. 2013;110:9072–7.
Chan M, Johansson MA. The incubation periods of dengue viruses. PLoS One. 2012;7:e50972.
Christofferson RC, Chisenhall DM, Wearing HJ, Mores CN. Chikungunya viral fitness measures within the vector and subsequent transmission potential. PLoS One. 2014;9:e110538.
Stanaway JD, Shepard DS, Undurraga EA, Halasa YA, Coffeng LE, Brady OJ, et al. The global burden of dengue: an analysis from the Global Burden of Disease Study 2013. Lancet Infect Dis. 2016;16:712–23.
Harrington LC, Scott TW, Lerdthusnee K, Coleman RC, Costero A, Clark GG, et al. Dispersal of the dengue vector Aedes aegypti within and between rural communities. Am J Trop Med Hyg. 2005;72:209–20.
Siraj AS, Rodriguez-Barraquer I, Barker CM, Tejedor-Garavito N, Harding D, Lorton C, et al. Spatiotemporal incidence of Zika and associated environmental drivers for the 2015-2016 epidemic in Colombia. Sci Data. 2018;5:180073.
Sorichetta A, Bird TJ, Ruktanonchai NW, Zu Erbach-Schoenberg E, Pezzulo C, Tejedor N, et al. Mapping internal connectivity through human migration in malaria endemic countries. Sci Data. 2016;3:160066.
Wesolowski A, Buckee CO, Pindolia DK, Eagle N, Smith DL, Garcia AJ, et al. The use of census migration data to approximate human movement patterns across temporal scales. PLoS One. 2013;8:e52971.
Thompson Hobbs N, Hooten MB. Bayesian models: a statistical primer for ecologists. Princeton: Princeton University Press; 2015.
Hyndman RJ, Koehler AB. Another look at measures of forecast accuracy. Int J Forecast. 2006;22:679–88.
Shutt DP, Manore CA, Pankavich S, Porter AT, Del Valle SY. Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America. Epidemics. 2017;21:63–79.
Chowell G, Viboud C, Simonsen L, Merler S, Vespignani A. Perspectives on model forecasts of the 2014-2015 Ebola epidemic in West Africa: lessons and the way forward. BMC Med. 2017;15:42.
Perkins TA, Metcalf CJE, Grenfell BT, Tatem AJ. Estimating drivers of autochthonous transmission of chikungunya virus in its invasion of the Americas. PLoS Curr. 2015;7 https://doi.org/10.1371/currents.outbreaks.a4c7b6ac10e0420b1788c9767946d1fc.
Escobar LE, Qiao H, Peterson AT. Forecasting chikungunya spread in the Americas via data-driven empirical approaches. Parasit Vectors. 2016;9:112.
Arbia G. The modifiable areal unit problem and the spatial autocorrelation problem: towards a joint approach. Metro. 1986;44:391–407.
Jeffery C, Ozonoff A, Pagano M. The effect of spatial aggregation on performance when mapping a risk of disease. Int J Health Geogr. 2014;13:9.
Gómez-Gardeñes J, Soriano-Paños D, Arenas A. Critical regimes driven by recurrent mobility patterns of reaction–diffusion processes in networks. Nat Phys. 2018;14(4):391–5.
Charu V, Zeger S, Gog J, Bjørnstad ON, Kissler S, Simonsen L, et al. Human mobility and the spatial transmission of influenza in the United States. PLoS Comput Biol. 2017;13:e1005382.
Algebraic geometry code
Algebraic geometry codes, often abbreviated AG codes, are a type of linear code that generalize Reed–Solomon codes. They were first constructed by the Russian mathematician V. D. Goppa in 1982.[1]
History
The name of these codes has evolved since the publication of Goppa's paper describing them. Historically these codes have also been referred to as geometric Goppa codes;[2] however, this is no longer the standard term used in coding theory literature. This is due to the fact that Goppa codes are a distinct class of codes which were also constructed by Goppa in the early 1970s.[3][4][5]
These codes attracted interest in the coding theory community because they have the ability to surpass the Gilbert–Varshamov bound; at the time this was discovered, the Gilbert–Varshamov bound had not been broken in the 30 years since its discovery.[6] This was demonstrated by Tsfasman, Vladut, and Zink in the same year as the code construction was published, in their paper "Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound".[7] The name of this paper may be one source of confusion affecting references to algebraic geometry codes throughout the 1980s and 1990s coding theory literature.
Construction
In this section the construction of algebraic geometry codes is described. The section starts with the ideas behind Reed–Solomon codes, which are used to motivate the construction of algebraic geometry codes.
Reed–Solomon codes
Algebraic geometry codes are a generalization of Reed–Solomon codes. Constructed by Irving Reed and Gustave Solomon in 1960, Reed–Solomon codes use univariate polynomials to form codewords, by evaluating polynomials of sufficiently small degree at the points in a finite field $\mathbb {F} _{q}$.[8]
Formally, Reed–Solomon codes are defined in the following way. Let $\mathbb {F} _{q}=\{\alpha _{1},\dots ,\alpha _{q}\}$. Set positive integers $k\leq n\leq q$. Let
$\mathbb {F} _{q}[x]_{<k}:=\left\{f\in \mathbb {F} _{q}[x]:\deg f<k\right\}$
The Reed–Solomon code $RS(q,n,k)$ is the evaluation code
$RS(q,n,k)=\left\{\left(f(\alpha _{1}),f(\alpha _{2}),\dots ,f(\alpha _{n})\right):f\in \mathbb {F} _{q}[x]_{<k}\right\}\subseteq \mathbb {F} _{q}^{n}.$
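As a concrete illustration (not part of the article), the evaluation construction can be carried out directly in Python over a prime field; the parameters q = 7, n = 7, k = 3 below are hypothetical choices:

```python
from itertools import product

# Sketch of the evaluation construction of RS(q, n, k) over a prime field F_q.
# The parameters q = 7, n = 7, k = 3 are illustrative choices.
def rs_code(q, n, k):
    """All codewords (f(a_1), ..., f(a_n)) for f in F_q[x] with deg f < k."""
    alphas = list(range(q))[:n]                 # n distinct evaluation points in F_q
    words = set()
    for coeffs in product(range(q), repeat=k):  # coefficient vectors of f
        words.add(tuple(sum(c * a**i for i, c in enumerate(coeffs)) % q
                        for a in alphas))
    return words

C = rs_code(7, 7, 3)
# Minimum nonzero weight; RS codes are MDS, so it should equal n - k + 1.
min_wt = min(sum(ci != 0 for ci in w) for w in C if any(w))
```

Since the evaluation map is injective for n ≥ k, the code has q^k distinct codewords, and the minimum weight meets the Singleton bound with equality.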
Codes from algebraic curves
Goppa observed that $\mathbb {F} _{q}$ can be considered as an affine line, with corresponding projective line $\mathbb {P} _{\mathbb {F} _{q}}^{1}$. Then, the polynomials in $\mathbb {F} _{q}[x]_{<k}$ (i.e. the polynomials of degree less than $k$ over $\mathbb {F} _{q}$) can be thought of as functions on $\mathbb {P} _{\mathbb {F} _{q}}^{1}$ whose only pole is at the point at infinity, with pole order less than $k$.[6]
With this idea in mind, Goppa looked toward the Riemann–Roch theorem. The elements of a Riemann–Roch space are exactly those functions with pole order restricted below a given threshold,[9] with the restriction being encoded in the coefficients of a corresponding divisor. Evaluating those functions at the rational points on an algebraic curve $X$ over $\mathbb {F} _{q}$ (that is, the points in $\mathbb {F} _{q}^{2}$ on the curve $X$) gives a code in the same sense as the Reed-Solomon construction.
However, because the parameters of algebraic geometry codes are connected to algebraic function fields, the definitions of the codes are often given in the language of algebraic function fields over finite fields.[10] Nevertheless, it is important to remember the connection to algebraic curves, as this provides a more geometrically intuitive method of thinking about AG codes as extensions of Reed-Solomon codes.[9]
Formally, algebraic geometry codes are defined in the following way.[10] Let $F/\mathbb {F} _{q}$ be an algebraic function field, $D=P_{1}+\dots +P_{n}$ be the sum of $n$ distinct places of $F/\mathbb {F} _{q}$ of degree one, and $G$ be a divisor with disjoint support from $D$. The algebraic geometry code $C_{\mathcal {L}}(D,G)$ associated with divisors $D$ and $G$ is defined as
$C_{\mathcal {L}}(D,G):=\lbrace (f(P_{1}),\dots ,f(P_{n})):f\in {\mathcal {L}}(G)\rbrace \subseteq \mathbb {F} _{q}^{n}.$
More information on these codes may be found in both introductory texts[6] as well as advanced texts on coding theory.[10][11]
Examples
Reed-Solomon codes
One can see that
$RS(q,n,k)={\mathcal {C}}_{\mathcal {L}}(D,(k-1)P_{\infty })$
where $P_{\infty }$ is the point at infinity on the projective line $\mathbb {P} _{\mathbb {F} _{q}}^{1}$ and $D=P_{1}+\dots +P_{q}$ is the sum of the other $\mathbb {F} _{q}$-rational points.
One-point Hermitian codes
The Hermitian curve is given by the equation
$x^{q+1}=y^{q}+y$
considered over the field $\mathbb {F} _{q^{2}}$.[2] This curve is of particular importance because it meets the Hasse–Weil bound with equality, and thus has the maximal number of affine points over $\mathbb {F} _{q^{2}}$.[12] With respect to algebraic geometry codes, this means that Hermitian codes are long relative to the alphabet they are defined over.[13] The Riemann–Roch space of the Hermitian function field is given in the following statement.[2] For the Hermitian function field $\mathbb {F} _{q^{2}}(x,y)$ given by $x^{q+1}=y^{q}+y$ and for $m\in \mathbb {Z} ^{+}$, the Riemann–Roch space ${\mathcal {L}}(mP_{\infty })$ is
${\mathcal {L}}(mP_{\infty })=\left\langle x^{a}y^{b}:0\leq b\leq q-1,aq+b(q+1)\leq m\right\rangle ,$
where $P_{\infty }$ is the point at infinity on ${\mathcal {H}}_{q}(\mathbb {F} _{q^{2}})$.
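The monomial basis above can be enumerated directly. A short Python sketch (the choice q = 3 is illustrative) that also checks the dimension against the Riemann–Roch formula dim = m + 1 − g for m ≥ 2g − 1, where g = q(q−1)/2 is the genus of the Hermitian curve:

```python
# Enumerate the monomial basis {x^a y^b : 0 <= b <= q-1, a*q + b*(q+1) <= m}
# of the Riemann–Roch space L(m * P_inf) stated above.  q = 3 is illustrative.
def hermitian_rr_basis(q, m):
    return [(a, b) for b in range(q) for a in range(m // q + 1)
            if a * q + b * (q + 1) <= m]

q = 3
g = q * (q - 1) // 2                       # genus of the Hermitian curve
# For m >= 2g - 1 the Riemann-Roch theorem gives dim L(m * P_inf) = m + 1 - g.
dims = {m: len(hermitian_rr_basis(q, m)) for m in range(2 * g - 1, 12)}
```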
With that, the one-point Hermitian code can be defined in the following way. Let ${\mathcal {H}}_{q}$ be the Hermitian curve defined over $\mathbb {F} _{q^{2}}$.
Let $P_{\infty }$ be the point at infinity on ${\mathcal {H}}_{q}(\mathbb {F} _{q^{2}})$, and
$D=P_{1}+\cdots +P_{n}$
be a divisor supported by the $n:=q^{3}$ distinct $\mathbb {F} _{q^{2}}$-rational points on ${\mathcal {H}}_{q}$ other than $P_{\infty }$.
The one-point Hermitian code $C(D,mP_{\infty })$ is
$C(D,mP_{\infty }):=\left\lbrace (f(P_{1}),\dots ,f(P_{n})):f\in {\mathcal {L}}(mP_{\infty })\right\rbrace \subseteq \mathbb {F} _{q^{2}}^{n}.$
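A worked instance for the smallest case q = 2 (the curve x^3 = y^2 + y over F_4, with n = q^3 = 8 affine rational points) can be coded directly; the GF(4) bit-pattern representation below is an implementation choice, not from the article:

```python
from itertools import product
from functools import reduce

# One-point Hermitian code for q = 2: the curve x^3 = y^2 + y over F_4.
# F_4 is modeled as bit patterns {0,1,2,3} of polynomials in w with
# w^2 = w + 1; addition in F_4 is XOR.
def f4_mul(a, b):
    p = 0
    for _ in range(2):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0b100:                      # reduce modulo w^2 + w + 1
            a ^= 0b111
    return p

def f4_pow(a, e):
    return reduce(f4_mul, [a] * e, 1)

q, m = 2, 2
# The n = q^3 = 8 affine F_4-rational points of x^(q+1) = y^q + y.
points = [(x, y) for x in range(4) for y in range(4)
          if f4_pow(x, q + 1) == f4_pow(y, q) ^ y]

# Monomial basis of L(m * P_inf): x^a y^b with b <= q-1, a*q + b*(q+1) <= m.
basis = [(a, b) for b in range(q) for a in range(m + 1)
         if a * q + b * (q + 1) <= m]

codewords = set()
for coeffs in product(range(4), repeat=len(basis)):
    # Evaluate the F_4-linear combination of basis monomials at every point.
    word = tuple(
        reduce(lambda s, t: s ^ t,
               (f4_mul(c, f4_mul(f4_pow(x, a), f4_pow(y, b)))
                for c, (a, b) in zip(coeffs, basis)),
               0)
        for (x, y) in points)
    codewords.add(word)
```

For m = 2 the basis is {1, x}, so the code has length 8, dimension 2, and 4^2 = 16 distinct codewords.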
References
1. Goppa, Valerii Denisovich (1982). "Algebraico-geometric codes". Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya. 46 (4): 726–781.
2. Stichtenoth, Henning (1988). "A note on Hermitian codes over GF(q^2)". IEEE Transactions on Information Theory. 34 (5): 1345–1348.
3. Goppa, Valery Denisovich (1970). "A new class of linear error-correcting codes". Probl. Inf. Transm. 6: 300–304.
4. Goppa, Valerii Denisovich (1972). "Codes Constructed on the Base of (L,g)-Codes". Problemy Peredachi Informatsii. 8 (2): 107–109.
5. Berlekamp, Elwyn (1973). "Goppa codes". IEEE Transactions on Information Theory. 19 (5): 590–592.
6. Walker, Judy L. (2000). Codes and Curves. American Mathematical Society. p. 15. ISBN 0-8218-2628-X.
7. Tsfasman, Michael; Vladut, Serge; Zink, Thomas (1982). "Modular curves, Shimura curves, and Goppa codes better than the Varshamov-Gilbert bound". Mathematische Nachrichten.
8. Reed, Irving; Solomon, Gustave (1960). "Polynomial codes over certain finite fields". Journal of the Society for Industrial and Applied Mathematics. 8 (2): 300–304.
9. Hoholdt, Tom; van Lint, Jacobus; Pellikaan, Ruud (1998). "Algebraic geometry codes" (PDF). Handbook of Coding Theory. 1 (Part 1): 871–961.
10. Stichtenoth, Henning (2009). Algebraic Function Fields and Codes (2nd ed.). Springer. pp. 45–65. ISBN 978-3-540-76878-4.
11. van Lint, Jacobus (1999). Introduction to Coding Theory (3rd ed.). Springer. pp. 148–166. ISBN 978-3-642-63653-0.
12. Garcia, Arnoldo; Viana, Paulo (1986). "Weierstrass points on certain non-classical curves". Archiv der Mathematik. 46: 315–322.
13. Tiersma, H.J. (1987). "Remarks on codes from Hermitian curves". IEEE Transactions on Information Theory. 33 (4): 605–609.
\begin{document}
\title{Iterative methods for the inclusion of the inverse matrix}
\begin{abstract} {\fontfamily{cmss}\selectfont \fontseries{msc} \selectfont \fontshape{n}\selectfont \fontsize{9}{9}\selectfont In this paper we present an efficient iterative method of order six for the inclusion of the inverse of a given regular matrix. To provide the upper error bound of the outer matrix for the inverse matrix, we combine point and interval iterations. The new method relies on a suitable matrix identity and a modification of a hyper-power method. This method is also applicable in the case of a full-rank $m\times n$ matrix, producing an interval sequence which converges to the Moore-Penrose inverse. It is shown that the computational efficiency of the proposed method is equal to or higher than that of the methods of hyper-power type.}
\frenchspacing \itemsep=-1pt \begin{description} \item[] {\it AMS Subject Classification}: 15A09, 65G30, 47J25, 03D15, 65H05. \item[] {\it Key words}: Inclusion methods; inverse matrix; hyper-power methods; convergence; computational efficiency. \end{description} \end{abstract}
\renewcommand{\thefootnote}{} \footnote{This work is supported by the Serbian Ministry of Education and Science under the grants 174033 (first author) and 174022 (second author).}
{
\fontseries{msc}
\fontsize{11}{12}\selectfont
\section{\bsfs Introduction} \setcounter{equation}{0}
A number of tasks in Numerical analysis, Graph theory, Geometry, Statistics, Computer sciences, Cryptography (encoding and decoding matrices), Partial differential equations, Physics, Engineering disciplines, Medicine (e.g., digital tomosynthesis), Management and Optimization (Design Structure Matrix) and so on, are modeled in matrix form. The solution of these problems is very often reduced to finding an inverse matrix. There is a vast literature in this area, so we will not consider all matrix numerical methods of iterative nature. Instead, in this paper we concentrate only on that small branch of matrix iterative analysis concerned with the efficient determination of inverse matrices
with upper error bound of the solution using interval arithmetic. The presented study is a two-way bridge between linear algebra and computing.
The paper is divided into four sections and organized as follows. In Section 2 we give some preliminary matrix properties and definitions and a short study of hyper-power matrix iterations. The main goal of this paper is to state an efficient iterative method of order six for the inclusion of the inverse of a given regular matrix, which is the subject of Section 3. This method is constructed by modifying a hyper-power method in such a way that the computational cost is decreased. In order to provide information on the upper error bounds of the approximate interval matrix, interval arithmetic is used.
Computational aspects of the considered interval methods and one numerical example are presented in Section 4. We show that the computational efficiency of the proposed method is equal to or higher than that of the methods of hyper-power type realized in a Horner-scheme fashion.
\section{\bsfs Hyper-power methods}
Applying numerical methods on digital computers, one of the most important tasks is to provide information on the accuracy of the obtained results. The interest in bounding roundoff errors in matrix computations comes from the impossibility of representing the elements of matrices exactly in some cases, since numbers are represented in the computer by strings of bits of fixed, {\it finite} length. For more details see \cite{mur3}, \cite{herz}, \cite{cia}. Such a case also appears in finding inverse matrices, the subject of this paper. To provide the upper error bound of the outer matrix for the inverse matrix, we will combine point and interval iterations. The essential advantage of the presented interval methods consists of capturing all the roundoff errors automatically, making this approach a useful, elegant and powerful tool for bounding the errors in the sought results.
To avoid any confusion, in this paper interval matrices will be denoted by bold capital letters and real matrices (often called point matrices) by calligraphic letters with a dot below the letter. We use bold small letters to denote real intervals.
Let $\d{${{C}}$}=[\mbox{\matbi{c}} _{ij}]$ be a nonsingular $n\times n$ matrix, where $\mbox{\matbi{c}}_{ij}=[\underline{c}_{\,ij},\overline{c}_{ij}],\ \overline{c}_{ij}-\underline{c}_{\,ij}\ge 0,$ are real intervals.
An interval matrix whose all elements are points (real numbers) is called a {\it point matrix}. Basic definitions, operations and properties of interval matrices can be found in detail in \cite[Ch. 10]{AH} and \cite{cia}.
For a given interval matrix $\mbox{\matbi{C}}=[\mbox{\matbi{c}}_{ij}]$
let us define corresponding point matrices, the {\it midpoint matrix} $m(\mbox{\matbi{C}}):=[m(\mbox{\matbi{c}}_{ij})],$ the {\it width matrix} $d(\mbox{\matbi{C}}):=[d(\mbox{\matbi{c}}_{ij})],$ and the {\it absolute value matrix} $|\mbox{\matbi{C}}|:=[|\mbox{\matbi{c}}_{ij}|],$
as follows:
$$m(\mbox{\matbi{c}}_{ij})=\tfrac12(\underline{c}_{\,ij}+\overline{c}_{ij}), \quad d(\mbox{\matbi{c}}_{ij}):=\overline{c}_{\,ij}-\underline{c}_{\,ij}, \quad
|\mbox{\matbi{c}}_{ij}|=\max\{|\overline{c}_{ij}|,|\underline{c}_{\,ij}|\}. $$ If $\mbox{\matbi{C}}=\d{${{C}}$}=[c_{ij}] $ is a point matrix, then it is obvious $$m(\d{${{C}}$})=[c_{ij}],\quad d(\d{${{C}}$})=[0]\ \mbox{\rm (null-matrix)},\quad
|\d{${{C}}$}|=[|c_{ij}|]. $$
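These componentwise definitions translate directly into code. A minimal numpy sketch, representing an interval matrix as a pair of point matrices $(\underline{C},\overline{C})$ (the pair representation and the sample entries are my choices, not prescribed by the text):

```python
import numpy as np

# An interval matrix C = [c_ij] stored as a pair of point matrices (lo, hi)
# with lo[i, j] <= hi[i, j] being the interval endpoints of c_ij.
def midpoint(lo, hi):
    return (lo + hi) / 2                          # m(C)

def width(lo, hi):
    return hi - lo                                # d(C)

def absval(lo, hi):
    return np.maximum(np.abs(lo), np.abs(hi))     # |C|

lo = np.array([[0.9, -0.3], [0.1, 1.0]])
hi = np.array([[1.1, -0.1], [0.3, 1.2]])
```

For a point matrix (lo equal to hi) the width is the null matrix, as stated above.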
We start with the following
matrix identity for an
$n\times n$ matrix $\d{${{Q}}$}$ and the unity matrix $\d{${{I}}$},$
$$
(\d{${{I}}$}-\d{${{Q}}$})(\d{${{I}}$}+\d{${{Q}}$}+\cdots+\d{${{Q}}$}^{r-2})=\d{${{I}}$}-\d{${{Q}}$}^{r-1}.
$$
Hence, setting $\d{${{Q}}$}=\d{${A}$}\d{${H}$},$ where $\d{${H}$}$ is an $n\times n$ matrix, the following identity is obtained:
\begin{equation} \d{${H}$}\sum_{\lambda=0}^{r-2}(\d{${{I}}$}-\d{${A}$}\d{${H}$})^{\lambda}=\d{${A}$}^{-1}-\d{${A}$}^{-1}
(\d{${{I}}$}-\d{${A}$}\d{${H}$})^{r-1}.\label{i1}
\end{equation}
From (\ref{i1}) there follows
\begin{equation} \d{${A}$}^{-1}=\d{${H}$}\sum_{\lambda=0}^{r-2}(\d{${{I}}$}-\d{${A}$}\d{${H}$})^{\lambda}+\d{${A}$}^{-1}(\d{${{I}}$}-\d{${A}$}\d{${H}$})^{r-1}.\label{i2}
\end{equation} This relation will be used for the construction of interval matrix iterations.
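Identity (\ref{i2}) is exact for any matrix $\d{${H}$}$ and can be verified numerically. A numpy sketch (the random test matrices are illustrative):

```python
import numpy as np

# Numerical check of identity (i2):
#   A^{-1} = H * sum_{l=0}^{r-2} (I - A H)^l + A^{-1} (I - A H)^{r-1}.
rng = np.random.default_rng(0)
n, r = 4, 6
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))          # well-conditioned test matrix
H = np.linalg.inv(A) + 0.01 * rng.standard_normal((n, n))  # rough approximation of A^{-1}

I = np.eye(n)
R = I - A @ H
Ainv = np.linalg.inv(A)

S = sum(np.linalg.matrix_power(R, l) for l in range(r - 1))   # l = 0, ..., r-2
rhs = H @ S + Ainv @ np.linalg.matrix_power(R, r - 1)
```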
Let $\mbox{\matbi{X}}_0$ be an $n\times n$ interval matrix such that $\d{${A}$}^{-1}\in \mbox{\matbi{X}}_0,$ and let the matrix $\d{${H}$}$ in (\ref{i2}) be defined by $\d{${H}$}=m(\mbox{\matbi{X}}_0).$ Then we obtain from (\ref{i2}), using the inclusion property,
\begin{equation} \d{${A}$}^{-1}\in \mbox{\matbi{X}}_1:=m(\mbox{\matbi{X}}_0)\sum_{\lambda=0}^{r-2}
\Bigl(\d{${{I}}$}-\d{${A}$} m(\mbox{\matbi{X}}_0)\Bigr)^{\lambda}+
\mbox{\matbi{X}}_0\Bigl(\d{${{I}}$}-\d{${A}$} m(\mbox{\matbi{X}}_0)\Bigr)^{r-1}.\label{i3}
\end{equation}
For simplicity, let us introduce $\d{${{R}}$}_k=\d{${{I}}$}-\d{${A}$} m(\mbox{\matbi{X}}_k).$ Combining (\ref{i2}) and (\ref{i3}), it is easy to prove, by the inclusion property and mathematical induction, that the following is valid for an arbitrary $k\ge 0:$ \begin{equation} \d{${A}$}^{-1}\in \mbox{\matbi{X}}_{k+1}:=m(\mbox{\matbi{X}}_k)\sum_{\lambda=0}^{r-2} \d{${{R}}$}_k^{\lambda}+
\mbox{\matbi{X}}_k\d{${{R}}$}_k^{r-1}.\label{i4}
\end{equation}
In regard to this property, the following iterative
process for finding an inclusion matrix for $\d{${A}$}^{-1}$ can
be stated in a Horner scheme fashion
\begin{equation} \left\{\begin{array}{l}
\mbox{\matbi{Y}}_{k}=m(\mbox{\matbi{X}}_k)\Bigl(\d{${{I}}$}\underbrace{+\d{${{R}}$}_k(\d{${{I}}$}+\d{${{R}}$}_k(\d{${{I}}$}+\cdots+
\d{${{R}}$}_k(\d{${{I}}$}+}_{r-2\ \mbox{\rm times}}\d{${{R}}$}_k)\cdots \Bigr)
+
\mbox{\matbi{X}}_k \d{${{R}}$}_k^{r-1},\\[10pt]
\mbox{\matbi{X}}_{k+1}=\mbox{\matbi{Y}}_k\cap \mbox{\matbi{X}}_k,
\end{array}\right.\quad (k=0,1,\ldots).\label{i5}
\end{equation}
The iterative method (\ref{i5}) was considered in
detail in the book \cite{AH} by Alefeld and Herzberger.
As shown in \cite[Ch. 18]{AH}, the most efficient method from the class (\ref{i5}) of hyper-power methods is obtained for $r=3$ and reads
\begin{equation} \left\{\begin{array}{l}
\mbox{\matbi{Y}}_{k}=m(\mbox{\matbi{X}}_k)+m(\mbox{\matbi{X}}_k)\d{${{R}}$}_k+
\mbox{\matbi{X}}_k \d{${{R}}$}_k^2,\\[10pt]
\mbox{\matbi{X}}_{k+1}=\mbox{\matbi{Y}}_k\cap \mbox{\matbi{X}}_k,
\end{array}\right.\quad (k=0,1,\ldots).\label{i6a}
\end{equation}
The properties of the iterative interval method (\ref{i5}) are
given in the following theorem proved in \cite[Theorem 2, Ch.
18]{AH}, where $\rho(M)$ denotes the spectral radius of a matrix $M.$
\begin{thm} \fontsize{11}{12}\selectfont Let $\d{${A}$}$ be a nonsingular $n\times n$ matrix
and $\mbox{\matbi{X}}_0$ an $n\times n$ interval matrix such that
$\d{${A}$}^{-1}\in \mbox{\matbi{X}}_0.$ Then
\begin{itemize} \itemsep0pt \item[(a)]
each inclusion matrix $\mbox{\matbi{X}}_k,$ calculated by
$(\ref{i5})$, contains $\d{${A}$}^{-1};$
\item[(b)] if $\rho(|\d{${{I}}$}-\d{${A}$}\d{${{X}}$}|)<1$ for every $\d{${{X}}$} \in \mbox{\matbi{X}}_0,$ then the sequence $\{\mbox{\matbi{X}}_k\}_{k\ge 0}$ converges to $\d{${A}$}^{-1};$
\item[(c)] using a matrix norm $\|\cdot\|$ the sequence
$\{d(\mbox{\matbi{X}}_k)\}_{k\ge 0}$ satisfies
$$
\|d(\mbox{\matbi{X}}_{k+1})\|\le \gamma \|d(\mbox{\matbi{X}}_k)\|^r,\quad \gamma\ge 0,
$$
that is, the $R$-order of convergence of the method $(\ref{i5})$ is at least $r.$
\end{itemize} \label{thm:i1} \end{thm}
Using the iterative formula (\ref{i5}) in the Horner form for $r=6,$ we obtain the following iterative method for the inclusion of the inverse matrix:
\begin{equation} \aligned
\d{${{R}}$}_k&=\d{${{I}}$} -\d{${A}$}\odot m(\mbox{\matbi{X}}_k),\\
\d{${{S}}$}_k&=\d{${{R}}$}_k\odot \d{${{R}}$}_k,\\
\d{${{M}}$}_k&=\d{${{I}}$}+\d{${{R}}$}_k\odot(\d{${{I}}$}+\d{${{R}}$}_k\odot(\d{${{I}}$}+\d{${{R}}$}_k\odot
(\d{${{I}}$}+\d{${{R}}$}_k))),\\
\mbox{\matbi{Y}}_k&= m(\mbox{\matbi{X}}_k)\odot \d{${{M}}$}_k+\mbox{\matbi{X}}_k \otimes (\d{${{S}}$}_k\odot \d{${{S}}$}_k\odot \d{${{R}}$}_k),\\
\mbox{\matbi{X}}_{k+1}&=\mbox{\matbi{Y}}_k\cap \mbox{\matbi{X}}_k,
\endaligned
\qquad (k=0,1,\ldots).
\label{i7}
\end{equation}
The method \eqref{i7} is a particular case of the general matrix iteration \eqref{i5}.
According to Theorem \ref{thm:i1}, the method \eqref{i7} has order six
and requires 8 multiplications of point matrices (denoted by $\odot$) and one multiplication of an interval matrix by a point matrix (denoted by $\otimes$).
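For orientation, the point (non-interval) counterpart of the hyper-power step satisfies $\d{${{R}}$}_{k+1}=\d{${{R}}$}_k^{\,r}$ exactly, which is the source of the $R$-order $r.$ A numpy sketch for $r=6$ (the test matrix and starting guess are illustrative choices):

```python
import numpy as np

def hyperpower_step(A, X, r=6):
    """One point hyper-power step: X <- X (I + R + ... + R^{r-1}), R = I - A X."""
    I = np.eye(A.shape[0])
    R = I - A @ X
    M = np.eye(A.shape[0])
    for _ in range(r - 1):                     # Horner: M <- I + R M
        M = I + R @ M
    return X @ M

rng = np.random.default_rng(1)
A = np.eye(3) + 0.2 * rng.standard_normal((3, 3))
X0 = A.T / np.linalg.norm(A) ** 2              # crude starting value with ||I - A X0|| < 1
R0 = np.eye(3) - A @ X0
X1 = hyperpower_step(A, X0, r=6)
R1 = np.eye(3) - A @ X1                        # algebraically equal to R0^6
```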
\section{\bsfs New inclusion method of high efficiency}
In what follows we are going to show that the computational cost of the interval method (\ref{i7})
can be reduced using the identity
\begin{equation}
x^4+x^3+x^2+x+1=x^2(x^2+x+1)+x+1
\label{rel}
\end{equation}
and the corresponding matrix relation. Having in mind (\ref{rel}) we rewrite (\ref{i7}) and
state the following algorithm in interval arithmetic for bounding the inverse matrix:
\begin{equation} \aligned \d{${{R}}$}_k&=\d{${{I}}$}-\d{${A}$}\odot m(\mbox{\matbi{X}}_k),\\ \d{${{S}}$}_k&=\d{${{R}}$}_k\odot \d{${{R}}$}_k,\\ \d{${{T}}$}_k&=\d{${{S}}$}_k\odot \d{${{S}}$}_k\odot \d{${{R}}$}_k,\\ \d{${{M}}$}_k&=\d{${{I}}$}+\d{${{R}}$}_k+\d{${{S}}$}_k\odot (\d{${{I}}$}+\d{${{R}}$}_k+\d{${{S}}$}_k),\\ \mbox{\matbi{Y}}_k&=m(\mbox{\matbi{X}}_k)\odot \d{${{M}}$}_k+\mbox{\matbi{X}}_k\otimes \d{${{T}}$}_k,\\ \mbox{\matbi{X}}_{k+1}&=\mbox{\matbi{Y}}_k\cap \mbox{\matbi{X}}_k, \endaligned
\qquad (k=0,1,\ldots). \label{i8} \end{equation} Compared with the method (\ref{i7}), the iterative scheme (\ref{i8}) requires 6 multiplications of point matrices (thus, two matrix multiplications fewer) and still preserves the order six. The above consideration can be summarized in the following theorem.
\begin{thm} \fontsize{11}{12}\selectfont Let $\d{${A}$}$ be a nonsingular $n\times n$ matrix
and $\mbox{\matbi{X}}_0$ an $n\times n$ interval matrix such that
$\d{${A}$}^{-1}\in \mbox{\matbi{X}}_0.$ Then
\begin{itemize} \itemsep0pt \item[(a)]
each inclusion matrix $\mbox{\matbi{X}}_k,$ calculated by
$(\ref{i8})$, contains $\d{${A}$}^{-1};$
\item[(b)] if $\rho(|\d{${{I}}$}-\d{${A}$} \d{${{X}}$}|)<1$ holds for all $\d{${{X}}$}\in \mbox{\matbi{X}}_0,$ then the sequence $\{\mbox{\matbi{X}}_k\}_{k\ge 0}$
converges toward $\d{${A}$}^{-1}$;
\item[(c)] using a matrix norm $\|\cdot\|$ the sequence
$\{d(\mbox{\matbi{X}}_k)\}_{k\ge 0}$ satisfies
$$
\|d(\mbox{\matbi{X}}_{k+1})\|\le \gamma \|d(\mbox{\matbi{X}}_k)\|^6,\quad \gamma\ge 0,
$$
that is, the $R$-order of convergence of the
method $(\ref{i8})$ is at least $6.$
\end{itemize} \label{thm:i2} \end{thm}
\noindent Theorem \ref{thm:i2} can be proved in the same way as Theorems 1 and 2 in \cite[Ch. 18]{AH}, so we omit the proof.
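The algebraic equivalence of the matrix polynomials used in (\ref{i7}) and (\ref{i8}) -- identity (\ref{rel}) applied to $\d{${{R}}$}_k$ -- is easy to confirm numerically. A small numpy sketch (the random test matrix is illustrative):

```python
import numpy as np

# Check that the Horner form of I + R + R^2 + R^3 + R^4 used in (i7) and the
# factored form I + R + S(I + R + S), S = R^2, used in (i8) coincide.
rng = np.random.default_rng(2)
n = 4
R = 0.3 * rng.standard_normal((n, n))
I = np.eye(n)

M_horner = I + R @ (I + R @ (I + R @ (I + R)))   # three multiplications by R
S = R @ R
M_factored = I + R + S @ (I + R + S)             # one multiplication once S is known
```

Counting the remaining products of each scheme reproduces the totals of 8 and 6 point-matrix multiplications stated above.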
\begin{rem} \it \fontsize{11}{12}\selectfont Zhang, Cai and Wei have proved in \cite[Theorem 3.3]{Interval} that, under the additional condition $(m(\mbox{\matbi{X}}_0)=A^TBA^T$ for some matrix $B\in {\mathbb R}^{m\times m})$, the iterative method \eqref{i5} (and in particular \eqref{i7}) is also convergent in the case of a full-rank $m\times n$ matrix $\d{${A}$}$. In such a case, it converges to the Moore-Penrose inverse $\d{${A}$}^\dagger$ of $\d{${A}$}$. In a similar way, the same can be proved for the method \eqref{i8}.
\end{rem}
Executing iterative interval processes in general, one of the most important but also most difficult tasks is to find a good initial interval (real interval, complex interval, interval matrix, etc.) that contains the sought result. A similar situation appears in bounding the inverse matrix. We present here an efficient method for constructing an initial matrix $\mbox{\matbi{X}}_0$ that contains the inverse matrix $\d{${A}$}^{-1}.$
Let $\d{${{X}}$}\in \mbox{\matbi{X}}_0$ and let us assume that the matrix $\d{${A}$}$ can be represented as
\begin{equation} \d{${A}$}=\d{${{I}}$}-\d{${{Y}}$},\quad \mbox{where a newly introduced matrix $\d{${{Y}}$}$ satisfies}\quad
\|\d{${{Y}}$}\|<1.\label{i9}
\end{equation}
It has been shown in \cite[Ch. 18]{AH} that the inequality
$$
\|\d{${{X}}$}\|\le a:=\frac{1}{1-\|\d{${{Y}}$}\|}
$$
holds.
If we use either the row-sum or the column-sum norm, then we find that
$
-a\le x_{ij}\le a,\ (1\le i,j\le n)
$
holds for all the elements of $\d{${{X}}$}=[x_{ij}].$ For the matrix
$\mbox{\matbi{X}}_0=\bigl[ X_{ij}^{(0)}\bigr]$ with
interval coefficients
\begin{equation}
X_{ij}^{(0)}=\left\{\begin{array}{ll}
[-a,a] & \mbox{\rm for}\ i\ne j\\[2pt]
[-a,2+a] & \mbox{\rm for}\ i=j,\end{array}\right.\label{i10}
\end{equation}
we have $\d{${A}$}^{-1}\in \mbox{\matbi{X}}_0$ and $m(\mbox{\matbi{X}}_0)=\d{${{I}}$}$ (see \cite{AH}).
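The construction (\ref{i9})--(\ref{i10}) is straightforward to automate. A Python sketch using the row-sum norm (the sample matrix is an illustrative choice):

```python
import numpy as np

# Sketch of the construction (i9)-(i10) with the row-sum norm.
def initial_inclusion(A):
    """Return X0 = (lo, hi) with A^{-1} in X0, provided ||I - A|| < 1."""
    n = A.shape[0]
    Y = np.eye(n) - A
    normY = np.linalg.norm(Y, np.inf)            # row-sum norm
    assert normY < 1, "normalize A first, e.g. use A/||A|| or A/||A||^2"
    a = 1 / (1 - normY)
    lo = np.full((n, n), -a)                     # off-diagonal entries [-a, a]
    hi = np.full((n, n), a)
    hi[np.diag_indices(n)] = 2 + a               # diagonal entries [-a, 2 + a]
    return lo, hi, a

A = np.array([[1.0, 0.25], [0.1, 0.9]])
lo, hi, a = initial_inclusion(A)
Ainv = np.linalg.inv(A)
inside = np.all((lo <= Ainv) & (Ainv <= hi))     # A^{-1} is enclosed entrywise
```

Note that the midpoint of this $\mbox{\matbi{X}}_0$ is the identity matrix, in agreement with the text.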
If the condition (\ref{i9}) is not satisfied,
then it is effective to normalize the matrix $\d{${A}$}$ before running the iterative process, for example, by working with the matrices $\d{${A}$}/\|\d{${A}$}\|$ or $\d{${A}$}/\|\d{${A}$}\|^2.$
Having in mind the described procedure for choosing the initial inclusion matrix $\mbox{\matbi{X}}_0,$ when point matrix iterations are applied it is convenient to take $\d{${{X}}$}_0=m(\mbox{\matbi{X}}_0)=\d{${{I}}$}.$ Such a choice has already been applied in stating the iterative interval methods (\ref{i7}) and (\ref{i8}).
\section{\bsfs Computational aspects}
Let us compare the computational efficiency of the hybrid methods (\ref{i7}) and (\ref{i8}). As proved in \cite[Ch. 6]{springer}, the CPU (central processing unit) time necessary for executing an iterative method $(IM)$ can be expressed in the form
\begin{equation}
CPU_{\footnotesize (IM)}=h\log q \cdot \frac{\theta(IM)}
{\log r(IM)}.\label{e1}
\end{equation}
Here $r(IM)$ is the convergence order, $\theta(IM)$ is the computational cost of the iterative method $(IM)$ per iteration, $q$ is the number of significant decimal digits (for example $q=15$ or $16$ for double precision arithmetic) and $h$ is a constant that depends on the hardware characteristics of the employed digital computer. Assuming that the considered methods are implemented on the same computer, according to (\ref{e1}) the comparison of two methods $(M_1)$ and $(M_2)$ is carried out by the {\it efficiency ratio}
\begin{equation}
ER_{\small M_1/ M_2}(n)=\frac{CPU_{\footnotesize (M_1)}}
{CPU_{\footnotesize (M_2)}}=\frac{\log r(M_2)}{\log r(M_1)}
\cdot \frac{\theta(M_1)}{\theta(M_2)}.\label{e2}
\end{equation}
Calculating the computational cost $\theta,$ it is necessary to deal with the number of arithmetic operations per iteration taken with certain {\it weights} depending on the
execution times of operations. We assume that floating-point number representation is used, with a binary fraction of $b$ bits, meaning that we deal with ``{\it precision} $b$'' numbers, giving results with a relative error of approximately $2^{-b}.$ Following the results given in \cite{brent},
the execution time $t_b(A)$ of addition (subtraction) is ${\cal{O}}(b),$ where ${\cal{O}}$ is the Landau symbol. Using Sch\"onhage-Strassen multiplication (see \cite{brent}),
often implemented in multi-precision libraries (in the computer algebra systems {\it Mathematica, Maple, Magma}, for instance), we have $t_b(M)={\cal{O}}\bigl(b\log b\, \log (\log b)\bigr).$ For comparison purposes, we choose the weights $w_{a}$ and $w_m$ proportional to $t_b(A)$ and $t_b(M),$ respectively, for double-precision arithmetic ($b=64$ bits) and quadruple-precision arithmetic ($b=128$ bits).
In particular, assuming that multiplication of two scalar $n\times n$ matrices requires $n^2(n-1)$ additions and $n^3$ multiplications, and adding the combined costs in the iterative formulae (\ref{i7}) and (\ref{i8}), for the hybrid methods (\ref{i7}) and (\ref{i8}) we have $r(\ref{i7})=r(\ref{i8})=6$ and, approximately, $$ \theta(\ref{i7})=(9n^3-3n^2)b+10n^3b\log b \log(\log b),\quad \theta(\ref{i8})=(7n^3-n^2)b+8n^3b\log b \log(\log b). $$ In view of this, by (\ref{e2}) we determine the {\it efficiency ratio}
$$
ER_{(\ref{i7})/(\ref{i8})}(n)=\frac{9-3/n+10\log b \log(\log b)}
{7-1/n+8\log b \log(\log b)}. $$
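For a concrete check, the quoted grouping of $ER(n)$ around 1.25 can be reproduced; natural logarithms are assumed in the cost model (my reading of the formulas above):

```python
import math

# Efficiency ratio ER_{(i7)/(i8)}(n) from the operation counts above.
def efficiency_ratio(n, b):
    L = math.log(b) * math.log(math.log(b))
    return (9 - 3 / n + 10 * L) / (7 - 1 / n + 8 * L)

vals = [efficiency_ratio(n, 64) for n in range(2, 41)]   # double precision, b = 64
```

The values cluster near 1.25 over the whole range of $n,$ in agreement with Figure 1.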
The graph of the function $ER_{(\ref{i7})/(\ref{i8})}(n)$ for $n\in [2,40]$ is shown in Figure 1. From this graph we note that the values of $ER(n)$ are grouped about the value 1.25 for $n$ in a wide range. This means that the new method (\ref{i8}) consumes about 25\% less CPU time than the Horner-fashion method (\ref{i7}).
\centerline{\includegraphics[height=6cm]{grafikalt}}
\centerline{\small Figure 1: The ratio of CPU times for two different precisions of arithmetical processors}
A very similar graph is obtained on many computing machines. For example, for double-precision and quadruple-precision arithmetic (corresponding approximately to $b=64$ and $b=128,$ respectively) on a Pentium M 2.8 GHz processor (Fedora Core 3), the values of $ER(n)$ are very close to 1.25, almost independently of the matrix dimension $n.$ In addition, we find $ER_{(\ref{i5})/(\ref{i8})}(n)>1$ for every $r\ne 3$ and close to 1 for $r=3.$
The convergence behavior of the iterative interval method (\ref{i8}), together with the choice of initial inclusion matrix $\mbox{\matbi{X}}_0,$ will be demonstrated by one simple example. We emphasize that the interval method (\ref{i7}) produces the same inclusion matrix, which is obvious since the corresponding iterative formulae are, actually, identical but arranged in different forms. However,
as mentioned above, the inclusion method (\ref{i8}) has
lower computational cost than (\ref{i7}).
\noindent {\bf Example 1.} We wish to find the inclusion matrix for the inverse of the matrix
$$
\d{${A}$}=\left[\begin{array} {rr}
\frac{9}{10} & \frac{1}{5}\\[6pt] -\frac{3}{10} & \frac{4}{5}
\end{array}\right].
$$ Note that the inverse matrix $\d{${A}$}^{-1}$ is
$$
\d{${A}$}^{-1}=\left[\begin{array} {rr}
\frac{40}{39} & -\frac{10}{39}\\[6pt] \frac{5}{13} & \frac{15}{13}
\end{array}\right]=\left[\begin{array} {rr}
1.0\overline{256410} & -0.\overline{256410} \\[2pt]
0.\overline{384615} & 1.\overline{153846}
\end{array}\right].
$$
An overline over a set of digits indicates that this set of digits repeats periodically.
First we determine
$$
\mathbf{Y}=\mathbf{I}-\mathbf{A}=\left[\begin{array} {rr}
0.1 & -0.2\\ 0.3 & 0.2
\end{array}\right] \ \ \mbox{\rm with} \ \ \|\mathbf{Y}\|_2=0.424264\ \ \mbox{\rm and}\ \ a=\frac{1}{1-\|\mathbf{Y}\|_2}=1.73691.
$$
According to (\ref{i10}) we form the initial inclusion matrix
$$
\mbox{\matbi{X}}_0=\left[\begin{array} {ll}
[-1.73691,3.73691] & [-1.73691,1.73691]\\[2pt]
[-1.73691,1.73691] & [-1.73691, 3.73691]
\end{array}\right].
$$
Note that the widths of the intervals representing the entries
of the initial inclusion matrix $\mbox{\matbi{X}}_0$ are rather large. We applied two iterations of (\ref{i8}) and obtained the following midpoint matrices (approximations to $\mathbf{A}^{-1}$) and width matrices, which give upper error bounds for $\mbox{\matbi{X}}_k.$
\hskip1.5cm$\underline{k=1}$
\begin{eqnarray*}
&&m(\mbox{\matbi{X}}_1)=\left[\begin{array} {rr}
1.025\ldots & -0.256\ldots\\[2pt]
0.384\ldots & 1.153\ldots
\end{array}\right],\quad d(\mbox{\matbi{X}}_1)=\left[\begin{array} {ll}
1.27\times 10^{-2} & 8.68\times 10^{-3}\\[2pt]
1.51\times 10^{-2} & 6.356\times 10^{-3}
\end{array}\right].
\end{eqnarray*}
\hskip1.5cm$\underline{k=2}$ \begin{eqnarray*}
&& m(\mbox{\matbi{X}}_2)=\left[\begin{array} {rr}
1.0256410256410256\ldots & -0.256410256410256\ldots\\[2pt]
0.3846153846153846\ldots & 1.153846153846153\ldots
\end{array}\right], \\ &&
d(\mbox{\matbi{X}}_2)=\left[\begin{array}{ll}
6.33\times 10^{-19} & 4.19\times 10^{-19}\\[2pt]
5.99\times 10^{-19} & 4.54\times 10^{-19}
\end{array}\right].
\end{eqnarray*}
All displayed decimal digits of $m(\mbox{\matbi{X}}_1)$ and $m(\mbox{\matbi{X}}_2)$ are correct.
The third iteration produces the width matrix $d(\mbox{\matbi{X}}_3)$ whose elements are real intervals with widths of order $10^{-99}.$ We have not listed $m(\mbox{\matbi{X}}_3)$ and $d(\mbox{\matbi{X}}_3)$ to save space.
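For readers who wish to reproduce the numbers of Example 1 in point (non-interval) arithmetic, the following sketch runs the classical second-order Newton--Schulz iteration $X_{k+1}=X_k(2I-AX_k)$ on the matrix $A$ above. This is only the underlying midpoint iteration, not the interval method (\ref{i8}) itself; note also that the quantity $\|\mathbf{Y}\|_2=0.424264$ quoted in the example corresponds to the Frobenius norm of $Y$.

```python
import numpy as np

# Matrix of Example 1 and its exact inverse (entries 40/39, -10/39, 5/13, 15/13).
A = np.array([[0.9, 0.2], [-0.3, 0.8]])
A_inv_exact = np.array([[40/39, -10/39], [5/13, 15/13]])

# Y = I - A; the norm 0.424264 quoted in the text is the Frobenius norm of Y.
Y = np.eye(2) - A
norm_Y = np.linalg.norm(Y, 'fro')
a = 1.0 / (1.0 - norm_Y)   # 1.73691..., the half-width used to build X_0

# Second-order Newton-Schulz iteration: X_{k+1} = X_k (2I - A X_k).
# It converges quadratically since ||I - A X_0|| = ||Y|| < 1 for X_0 = I.
X = np.eye(2)
for _ in range(8):
    X = X @ (2.0 * np.eye(2) - A @ X)

print(np.max(np.abs(X - A_inv_exact)))  # error at machine-precision level
```

The interval variants above wrap this kind of iteration in outward-rounded interval arithmetic, which is what yields guaranteed enclosures rather than mere approximations.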
We have also tested the interval method (\ref{i6a}) possessing the highest efficiency among hyper-power methods. Starting with the same initial matrix $\mbox{\matbi{X}}_0$ as above, we obtained the following outcomes:
\hskip1.5cm$\underline{k=1}$
\begin{eqnarray*}
&&m(\mbox{\matbi{X}}_1)=\left[\begin{array} {rr}
1.05 & -0.26\\[2pt]
0.39 & 1.18
\end{array}\right],\quad d(\mbox{\matbi{X}}_1)=\left[\begin{array} {ll}
0.586 & 0.398\\[2pt]
0.666 & 0.318
\end{array}\right].
\end{eqnarray*}
\hskip1.5cm$\underline{k=2}$ \begin{eqnarray*}
&& m(\mbox{\matbi{X}}_2)=\left[\begin{array} {rr}
1.0256\ldots & -0.2564\ldots\\[2pt]
0.3846\ldots & 1.1538\ldots
\end{array}\right],\quad d(\mbox{\matbi{X}}_2)=\left[\begin{array}{ll}
3.60\times 10^{-4} & 2.43\times 10^{-4}\\[2pt]
3.91\times 10^{-4} & 2.12\times 10^{-4}
\end{array}\right].
\end{eqnarray*}
The method (\ref{i8}) attained considerably higher accuracy than (\ref{i6a}) in only two iterations, so its application is justified in this case.
Furthermore, since $
ER_{(\ref{i6a})/(\ref{i8})}(n)
$
is close to 1,
the choice between these two methods depends on the nature of the problem to be solved, on specific requirements, and on the available hardware and software (the precision of the employed computer). For instance, the proposed method (\ref{i8}) is more convenient when high accuracy is required within a few iterations, as in the presented example.
\end{document}
Climate Dynamics
January 2016, Volume 46, Issue 1–2, pp 383–412
Precipitation in the EURO-CORDEX \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations: high resolution, high benefits?
A. F. Prein, A. Gobiet, H. Truhetz, K. Keuler, K. Goergen, C. Teichmann, C. Fox Maule, E. van Meijgaard, M. Déqué, G. Nikulin, R. Vautard, A. Colette, E. Kjellström, D. Jacob
First Online: 25 June 2015
In the framework of the EURO-CORDEX initiative an ensemble of European-wide high-resolution regional climate simulations on a \(0.11^{\circ }\,({\sim}12.5\,\hbox {km})\) grid has been generated. This study investigates whether the fine-gridded regional climate models add value to the simulated mean and extreme daily and sub-daily precipitation compared to their coarser-gridded \(0.44^{\circ }\,({\sim}50\,\hbox {km})\) counterparts. To this end, pairs of fine- and coarse-gridded simulations of eight reanalysis-driven models are compared to fine-gridded observations in the Alps, Germany, Sweden, Norway, France, the Carpathians, and Spain. A clear result is that the \(0.11^{\circ }\) simulations better reproduce mean and extreme precipitation for almost all regions and seasons, even on the scale of the coarser-gridded simulations (50 km). This is primarily caused by the improved representation of orography in the \(0.11^{\circ }\) simulations; accordingly, the largest improvements are found in regions with substantial orographic features. Improvements in reproducing summer precipitation also arise because the fine-gridded simulations capture the larger scales of convection with their resolved-scale dynamics. The \(0.11^{\circ }\) simulations reduce biases in large areas of the investigated regions, improve the representation of spatial precipitation patterns, and improve precipitation distributions for daily and in particular for 3-hourly precipitation sums in Switzerland. When the evaluation is conducted on the fine (12.5 km) grid, the added value of the \(0.11^{\circ }\) models becomes even more obvious.
Keywords: Regional climate modeling; EURO-CORDEX; Precipitation; Added value; High resolution; Extremes
A. F. Prein and A. Gobiet were formerly with the Wegener Center for Climate and Global Change (WEGC), University of Graz, Brandhofgasse 5, 8010 Graz, Austria. K. Goergen was formerly with Département Environnement et Agro-Biotechnologies, Centre de Recherche Public – Gabriel Lippmann, 41 Rue du Brill, 4422 Belvaux, Luxembourg, renamed in January 2015 to Luxembourg Institute of Science and Technology.
The online version of this article (doi: 10.1007/s00382-015-2589-y) contains supplementary material, which is available to authorized users.
The amount, distribution, and intensity of precipitation have major impacts on ecosystems and society: heavy precipitation may lead to large damages caused by floods, debris flows, or landslides, while the absence of precipitation may cause droughts and affects water and hydropower supply. Consequently, precipitation is regarded as one of the most relevant meteorological variables for society, and its regional alteration under global warming is currently one of the most discussed topics in climate change research.
However, simulating precipitation is challenging because of the wide range of processes involved. Three factors affect precipitation (Sawyer 1956): (1) cloud processes and convection, (2) the interaction of the atmospheric flow with the surface, and (3) the large-scale atmospheric circulation. Cloud processes in particular, such as phase transitions, are still not fully understood and are one of the major sources of uncertainty in climate simulations (e.g., Stocker et al. 2013).
Decreasing the horizontal grid spacing in climate models to \(0.11^{\circ }\) can help to improve factors (2) and (3) by better representing surface characteristics (e.g., orography and coastlines) and by more accurately solving the equations of motion. Several studies have investigated the influence of model grid spacing on precipitation. Giorgi and Marinucci (1996) showed that precipitation amount, intensity, and distribution are sensitive to grid spacing. By investigating seasonal mean precipitation in nine regional climate model (RCM) simulations from the ENSEMBLES project with 25 and 50 km grid spacing, Rauscher et al. (2010) found that spatial patterns and temporal evolution of summertime (but not wintertime) precipitation are improved in most 25 km simulations. An improvement at the higher resolution was especially visible in topographically complex regions, which is in line with findings by Chan et al. (2013). Chan et al. (2013) further emphasize the importance of highly resolved observational data sets for capturing regional-scale climate signals.
Major improvements in representing cloud processes and convection (factor 1) can be expected when convection-permitting models, using a grid spacing finer than 4 km, are used (e.g., Weisman et al. 1997; Kendon et al. 2012; Prein et al. 2015). On these grids error-prone convection parameterization schemes can be avoided by resolving deep convection explicitly. Convection-permitting simulations might also alter the projected climate change signals, especially of sub-daily extreme precipitation (Kendon et al. 2014; Mahoney et al. 2012). However, the drawback of this kind of simulation is that it is computationally very demanding. Therefore, transient climate simulations on convection-permitting grids are currently not feasible on continental-scale domains. Chan et al. (2013) investigated the simulated precipitation of a 50, 12, and 1.5 km grid spacing model over the Southern United Kingdom. The 50 km model underestimates mean precipitation over mountainous regions and the simulated precipitation intensity is too weak. Both biases are reduced in the 12 and 1.5 km models. On a daily time scale, they found no evidence that the skill of the 1.5 km model is superior to that of the 12 km model. This is consistent with previous findings (e.g., Prein et al. 2013a; Ban et al. 2014; Fosser et al. 2014), which show that added value of convection-permitting simulations is predominantly found on sub-daily timescales.
Previous European ensemble RCM initiatives defined a target grid spacing of \(0.44^{\circ }\) in case of the PRUDENCE project (Christensen et al. 2007; Jacob et al. 2007) and up to \(0.22^{\circ }\) in case of the ENSEMBLES project (van der Linden and Mitchell 2009). The European branch of the COordinated Regional climate Downscaling EXperiment (CORDEX) called EURO-CORDEX (Jacob et al. 2014) is the first initiative in which multiple RCMs are used to simulate transient climate change with \(0.11^{\circ }\) horizontal grid spacing for an entire continent. In parallel, similar simulations on a \(0.44^{\circ }\) grid are conducted.
In this study we present a comparative evaluation of precipitation from \(0.11^{\circ }\) and \(0.44^{\circ }\) grid spacing EURO-CORDEX simulations by applying scale-sensitive and intensity-dependent statistical methods. The analysis focuses on the entire ensemble in order to achieve robust results with regard to the effect of model resolution, but does not aim for an in-depth analysis of the performance of single models. A similar set of simulations was already used for a standard evaluation (Kotlarski et al. 2014) and an analysis of heat waves (Vautard et al. 2013). Neither study could identify added value in the skill of the high-resolution models in simulating regionally and seasonally averaged quantities. Compared to Kotlarski et al. (2014) we focus solely on the evaluation of precipitation and investigate model performance on daily and local scales, rather than averages over long periods and larger regions. Another major difference is the usage of highly resolved regional precipitation data sets, which have an approximately ten times higher station density than the E-OBS data set used in Kotlarski et al. (2014). This is essential for local-scale analyses (Prein and Gobiet 2015).
Our major research questions are:
Is there improved skill in simulated precipitation if the horizontal grid spacing of climate models is increased from \(0.44{^\circ }\) to \(0.11{^\circ }\)?
On which spatial scales do differences occur?
Are the differences dependent on the intensity of precipitation?
What are the main sources of differences?
To answer these questions, six statistical methods are used. We begin with analyzing biases in simulated seasonal mean and extreme precipitation (Sect. 3.1). Then the location and total area of grid cells where a majority of the 0.11\({^\circ }\) models improve/deteriorate seasonal mean and extreme precipitation biases compared to the 0.44\({^\circ }\) models are evaluated (Sect. 3.3). Further, the ability of the RCMs to simulate seasonal average spatial patterns of precipitation is investigated for different horizontal scales (Sect. 3.4) and for different precipitation intensities (Sect. 3.5). In Sect. 3.6 differences in daily precipitation patterns are analyzed by accounting for spatial displacements and finally, 3 hourly and daily precipitation distributions are compared to observations (Sect. 3.7).
2 Data and methods
2.1 Models
Simulations from eight different models of the EURO-CORDEX ensemble (or model versions in the case of WRF) are analyzed over the 19-year period 1989–2007 (see Table 1). With each model a pair of simulations with \(0.44^{\circ }\) (approximately 50 km) and \(0.11^{\circ }\) (approximately 12.5 km) horizontal grid spacing has been performed. Both simulations of each pair have the same setup except for the grid spacing and the associated time step. Only in the case of REMO was rain advection used in the \(0.11^{\circ }\) simulation but not at \(0.44^{\circ }\).
All models except ARPEGE are RCMs, forced by the European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-Interim) at their lateral boundaries and by sea surface temperature in the interior of the EURO-CORDEX domain. In contrast, ARPEGE is a global climate model (GCM); its temperature, wind speed, and specific humidity are nudged towards ERA-Interim outside the EURO-CORDEX domain, similar to an RCM with a global relaxation zone. Inside the domain no nudging was applied in any of the models. The sea surface temperature used in the simulations is taken from ERA-Interim. An overview of the greenhouse gas and aerosol forcings used can be found in Online Resource 1, Table A1.
Table 1 List of models
Model; institute
Soil spin-up, land use, and vertical levels
ARPEGE-CNRM (Déqué 2010); Météo-France
RS: Morcrette (1990); CS: Bougeault (1985); MS: Ricard and Royer (1993); LSS: Douville et al. (2000); BLS: Ricard and Royer (1993)
SI: Year 1989 is run twice; VL: 31
CCLM-CLMCOM (Böhm et al. 2006; Rockel et al. 2008); BTU
RS: Ritter and Geleyn (1992); CS:Tiedtke (1989); MS: Doms et al. (2011), Baldauf and Schulz (2004); LSS: TERRA-ML Doms et al. (2011); BLS: Louis (1979)
SI: Initialization with climatological soil moisture; LU: GLC2000 (Joint Research Centre 2003); VL: 40
HIRHAM5 (Christensen et al. 1998); DMI
RS: Morcrette et al. (1986), Giorgetta and Wild (1995); CS: Tiedtke (1989); MS: Lohmann and Roeckner (1996); LSS: Hagemann (2002); BLS: Louis (1979)
SI: Initialization with climatological temperatures and full water reservoirs. One year spin-up.; LU: USGS (Hagemann 2002); VL: 31
RACMO22E (van Meijgaard et al. 2012); KNMI
RS: Fouquart and Bonnel (1980), Mlawer et al. (1997); CS: Tiedtke (1989), Nordeng (1994), Neggers et al. (2009); MS: Tiedtke (1993), Tompkins et al. (2007), Neggers (2009); LSS: Van den Hurk et al. (2000), Balsamo et al. (2009); BLS: Lenderink and Holtslag (2004), Siebesma et al. (2007)
SI: Initialized from ERA-Interim on 1979.01.01 00:00; LU: ECOCLIMAP (1 km) (Champeaux et al. 2003; Masson et al. 2003); VL: 40
RCA4 (Samuelsson et al. 2011); SMHI
RS: Savijarvi (1990), Sass et al. (1994); CS: Kain and Fritsch (1990, 1993); MS: Rasch and Kristjánsson (1998); LSS: Samuelsson et al. (2011); BLS: Cuxart et al. (2000)
REMO (Jacob et al. 2012); CSC
RS: Morcrette et al. (1986); CS:Tiedtke (1989), Nordeng (1994), Pfeifer (2006); MS: Lohmann and Roeckner (1996); LSS: Hagemann (2002), Rechid et al. (2009); BLS: Louis (1979)
SI: Soil initialized from ERA-Interim. No Spin-up; LU: USGS (Hagemann 2002); VL: 27
WRF (Skamarock et al. 2008); CRP-GL
RS: CAM 3.0 Collins et al. (2004); CS: Modified Kain (2004); MS: WSM 6-class Hong and Lim (2006); LSS: NOAH Ek et al. (2003); BLS: YSU Hong et al. (2006)
SI: Soil initialized from ERA-Interim. No Spin-up; LU: IGBP-MODIS (30\(''\)); VL: 50
WRF (Skamarock et al. 2008); IPSL and INERIS
RS: RRTMG Iacono et al. (2008); CS: Grell and Devenyi (2002); MS: Hong et al. (2004); LSS: NOAH Ek et al. (2003); BLS: YSU Hong et al. (2006)
SI: Soil initialized from ERA-Interim. No Spin-up; LU: USGS Land Use; VL: 32
RS radiation scheme, CS convection scheme, MS microphysics scheme, LSS land-surface scheme, BLS boundary layer scheme, SI soil initialization, LU land use, VL vertical levels
Table 2 Observational data sets (region; grid spacing and frequency; stations per 1000 \(\hbox {km}^2\), where given)

EURO4M-APGD (Isotta et al. 2013): European Alps and surrounding flatland areas; 5 km, daily
RdisaggH (v1.0) (MeteoSwiss 2010): Switzerland; 1 km, hourly; May 2003–Dec. 2010; \(\sim\)10 stations per 1000 \(\hbox {km}^2\) plus 4 radar stations
REGNIE (DWD 2009): Germany; \(\sim\)3 stations per 1000 \(\hbox {km}^2\)
PTHBV (Johansson 2002)\(^{\mathrm{a}}\): Sweden
KLIMAGRID (Mohr 2009)\(^{\mathrm{a}}\): Norway
Spain011 (Herrera et al. 2012): Spain; 12 km, daily
CARPATCLIM (Szalai et al. 2013): Carpathians
SAFRAN (Quintana-Seguí et al. 2008; Vidal et al. 2010)\(^{\mathrm{b}}\): France; \(\sim\)11 stations per 1000 \(\hbox {km}^2\)

\(^{\mathrm{a}}\)Corrected for observation losses. \(^{\mathrm{b}}\)Regional reanalysis.
The EURO-CORDEX domain covers the entire European Continent and large parts of Northern Africa and therefore includes a wide range of climate zones (Fig. 1a). The boundaries of this domain are given in a rotated coordinate system with the rotated North Pole at \(198.0^{\circ }\) East and \(39.25^{\circ }\) North and the top left corner of the domain at \(331.79^{\circ }\) East and \(21.67^{\circ }\) North. The domain extends 106 grid cells to the East and 103 to the South with a grid spacing of \(0.44^{\circ }\). In general a zone of a few hundred kilometers was added around the EURO-CORDEX domain to account for the relaxation zone and to prevent spurious boundary effects from entering the analysis.
2.2 Observations
Highly resolved observational data sets are an elementary ingredient for the detection of added value in high-resolution (\({\le}0.11^{\circ }\)) models (e.g., Chan et al. 2013; Prein and Gobiet 2015). However, for daily precipitation on the pan-European scale only the E-OBS gridded data set is available, which has a rather coarse grid spacing of \(0.22^{\circ }\), a low station density in some regions, and known deficiencies, especially with regard to extremes, in orographically complex areas, and areas where the station density is low (Haylock et al. 2008; Hofstra et al. 2009, 2010). These shortcomings motivated us to use regional precipitation data sets from several weather services in Europe. A comparison of those regional data sets with E-OBS can be found in Prein and Gobiet (2015).
In total eight gridded data sets are used, which cover Switzerland, the Alps, Germany, France, the Carpathians, Sweden, Norway, and Spain (see Fig. 1a; Table 2). Except for Switzerland and France, the data sets are solely based on station data, provided on a daily basis, and cover the entire simulated period 1989 to 2007. The Swiss data set (RdisaggH) is derived from a combination of surface stations and four weather radars and has an hourly frequency starting on May 1, 2003. The French data set (SAFRAN) is a regional reanalysis in which observations were assimilated. It is originally provided on an hourly basis. The Alpine data set includes areas in Germany and France, which overlap with the observational data sets of these countries. For the analysis we compare the simulated precipitation with single observational data sets (region by region) and do not account for differences between observational data sets in the overlapping areas.
All precipitation data sets are affected by systematic errors of rain gauge measurements. The most severe source of error is the wind field deformation around the gauge and the induced under-catch of precipitation particles. The resulting underestimation depends on the type and intensity of precipitation, the type of gauge, and the wind speed. In case of rain the errors are on average 3 % but can be as large as 20 % (Sevruk and Hamon 1984). Snow measurements are usually affected by much larger errors, which can be up to 80 % for non-shielded gauges (against 40 % for shielded gauges) at wind speeds of 5 m/s and temperatures above −8 °C (Goodison et al. 1997). Additional systematic errors occur when interpolating the point measurements onto a grid. In case of the Alpine EURO4M-APGD data set this leads to an underestimation of high intensities in the range of 10–20 % (smoothing effect) and an overestimation of low intensities (moist extension into dry regions) (Isotta et al. 2013).
These observational errors have to be kept in mind throughout the study. For simplicity, we still use the terms "bias" and "error" when we compare the simulations to observations and use "differences", "improvements", "deteriorations" and so on when the two model grid spacings are compared with each other.
2.3 Evaluation methods
Common evaluation grids To compare the \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations with observations, they have to be available on a common evaluation grid. The most suitable evaluation grid depends on the underlying research question. First of all, the grid spacing of the observational data set defines the finest scale on which comparisons are meaningful. In this study the grid spacings of the observational data sets are in all cases smaller than or equal to that of the fine-gridded simulations (\(0.11^{\circ }\)). Further, one may decide to evaluate on the grid of the coarse-gridded (\(0.44^{\circ }\)) or on the grid of the fine-gridded simulations. The former is the "fairer" option with regard to the \(0.44^{\circ }\) simulations, since it compares only features which are resolvable by all models. The latter option (evaluating on the \(0.11^{\circ }\) grid) penalizes the coarse-gridded simulations, because even a perfect \(0.44^{\circ }\) simulation would feature pattern errors because of missing sub-grid-scale features. However, from an end user's viewpoint this option can provide valuable insights, since precipitation data is frequently used on very small scales, e.g., as a driver for hydrological simulations. Even though the comparison is somewhat "unfair" for the coarse-gridded simulations, it is not trivial for the \(0.11^{\circ }\) models to produce meaningful information on scales smaller than \(0.44^{\circ }\). Therefore, most analyses of this study are conducted on the \(0.44^{\circ }\) grid, but differences to the analyses on the fine grid are discussed and depicted where needed.
Technically, the different data sets are transferred to the evaluation grid by a conservative resampling procedure (Suklitsch et al. 2008). In the first step all grids are artificially refined (if necessary) to a grid spacing that is at least three times finer than that of the evaluation grid, in order to reduce sampling errors. Thereby, all smaller grid cells retain the value of the larger grid cell they originate from. After refining, all grid cells whose centers are inside a grid box of the evaluation grid (\(0.44^{\circ }\) or \(0.11^{\circ }\) regular lon/lat grid) are averaged. This method can be used to transfer finer to coarser grids and vice versa, while spatial averages and patterns are conserved.
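The refine-then-average idea can be sketched as follows. This is a simplified illustration assuming regular, aligned grids and integer refinement/aggregation factors; the actual procedure of Suklitsch et al. (2008) works with grid-cell centers on rotated lon/lat grids.

```python
import numpy as np

def conservative_resample(field, refine, block):
    """Refine each cell into refine x refine copies (each keeping its
    parent's value), then average block x block refined cells onto the
    evaluation grid. Spatial means are conserved when the grids align."""
    # Step 1: artificial refinement -- every fine cell retains the value
    # of the larger grid cell it originates from.
    fine = np.kron(np.asarray(field, dtype=float),
                   np.ones((refine, refine)))
    # Step 2: average all refined cells falling into each evaluation box.
    ny, nx = fine.shape
    ny_c, nx_c = ny // block, nx // block
    fine = fine[:ny_c * block, :nx_c * block]
    return fine.reshape(ny_c, block, nx_c, block).mean(axis=(1, 3))
```

For instance, transferring a field whose cells are half the size of the evaluation boxes could use `refine=3` (to satisfy the at-least-three-times-finer rule) and `block=6`.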
Intensity dependent analysis Climate models usually have intensity-dependent errors in their simulated precipitation (e.g., Themeßl et al. 2011). This is because heavy precipitation is often caused by small-scale processes (e.g., deep convection), while light rain is mostly caused by larger-scale processes (e.g., stratiform precipitation in a warm front) that can be resolved by a \(0.44^{\circ }\) model. Therefore, it can be expected that decreasing the horizontal grid spacing of climate models is especially beneficial for high precipitation intensities.
This is examined by evaluating total and extreme precipitation separately. Thereby, all values above the 97.5th percentile are called extreme and are selected in observations and simulations independently. This means that extremes do not have to match in time. Additionally, analyses are performed for different intensity classes (Sect. 3.5) and for different intensity thresholds (Sect. 3.6).
Scale-dependent spatial correlation analysis Seasonal extreme and mean precipitation patterns are evaluated by using the Pearson product-moment correlation coefficient (Pearson 1895). Information about scale dependence of correlation coefficients is derived by smoothing out smaller-scale precipitation patterns with a square boxcar averaging method. Thereby, the smoothed field \(R_{i,j}\) is calculated from the original field \(A_{i,j}\) as follows:
$$\begin{aligned} R_{i,j}=\frac{1}{w^2}\sum _{k=0}^{w-1}\sum _{l=0}^{w-1} A_{i+k-(w-1)/2,j+l-(w-1)/2}, \quad \begin{array}{ll} i= \frac{w-1}{2},\ldots ,N-\frac{w+1}{2}\\ j=\frac{w-1}{2},\ldots ,M-\frac{w+1}{2} \end{array} \end{aligned}$$
where w is the side length of the square smoothing window and N and M denote the number of elements in rows and columns, respectively. If the smoothing window contains points which are outside the evaluation domain, the nearest edge points are used instead to derive the smoothed result.
In addition to this method, two further methods were tested to derive scale-dependent information. Since the three methods lead to very similar results, only the results of the square boxcar averaging are shown here.
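A direct implementation of the square boxcar average above might look as follows; the nearest-edge-point rule is realized via edge padding, and the window side `w` is assumed odd (a sketch, not the authors' code).

```python
import numpy as np

def boxcar_smooth(A, w):
    """Square boxcar average with window side w (odd). Window points
    outside the domain take the value of the nearest edge point."""
    A = np.asarray(A, dtype=float)
    h = (w - 1) // 2
    # 'edge' padding implements the nearest-edge-point rule of the text.
    P = np.pad(A, h, mode='edge')
    out = np.zeros_like(A)
    for k in range(w):
        for l in range(w):
            out += P[k:k + A.shape[0], l:l + A.shape[1]]
    return out / w**2
```

Increasing `w` progressively removes small-scale structure, so correlating the smoothed simulated and observed fields for a range of `w` yields the scale-dependent correlations discussed below.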
Patterns of daily precipitation fields Even if the patterns of simulated daily precipitation fields are very realistic, evaluation with traditional methods like squared errors or correlation coefficients may indicate very low quality of the simulation. This is due to the chaotic nature of precipitation cells, leading to simulated patterns that do not match the exact location and time of observed precipitation cells, although the spatial and temporal frequencies and averages may be very realistic (double penalty problem, e.g., Prein et al. 2013a).
To avoid this problem we apply the fractions skill score (FSS) method (Roberts and Lean 2008), which is based on the assumption that a useful simulation has a realistic spatial frequency of precipitation. Therefore, the fraction coverage in neighboring grid cells (cells within a square window with side length n centered on a grid point) in the observation and simulation are used to calculate a Fractions Brier Score (FBS):
$$\begin{aligned} \text {FBS}=\frac{1}{N} \sum _N \left( \frac{1}{m}\sum _m I_S -\frac{1}{m}\sum _m I_O \right) ^2 \end{aligned}$$
where m is the number of grid boxes in the neighborhood (\(m=n \cdot n\)), N is the number of neighborhood windows in the domain (number of grid cells), and \(I_O\) (\(I_S\)) is the indicator of whether the observed (simulated) precipitation in a grid box is above a threshold (1 = yes, 0 = no). Finally, the FSS is computed as follows:
$$\begin{aligned} \text {FSS}=1-\frac{\text {FBS}}{\frac{1}{N} \left[ \sum _N \left( \frac{1}{m} \sum _m I_S \right) ^2 + \sum _N \left( \frac{1}{m} \sum _m I_O \right) ^2 \right] }. \end{aligned}$$
Here the FBS is divided by the FBS of the worst possible simulation result, in which there is no overlap between observation and simulation. A perfect simulation has an FSS of 1 while a complete mismatch results in an FSS of 0. The FSS is a function of horizontal scale (side length n of the square window) and precipitation threshold. The statistical value which will be investigated here is the difference (\(0.11^{\circ }\) minus \(0.44^{\circ }\)) between the median FSS of all precipitation days (>1 mm in the observation) within a season.
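The FBS/FSS computation for a single field pair can be sketched as follows. This sketch pads with zeros outside the domain when forming the neighbourhood fractions (a simplifying assumption) and takes the window side `n` to be odd.

```python
import numpy as np

def fss(sim, obs, threshold, n):
    """Fractions skill score (Roberts and Lean 2008) for one field pair;
    n is the side length of the square neighbourhood window (odd)."""
    I_s = (np.asarray(sim) > threshold).astype(float)
    I_o = (np.asarray(obs) > threshold).astype(float)

    def fractions(I):
        # Fraction of cells above the threshold in each n x n neighbourhood
        # (zero padding outside the domain edges).
        h = (n - 1) // 2
        P = np.pad(I, h)
        out = np.zeros_like(I)
        for k in range(n):
            for l in range(n):
                out += P[k:k + I.shape[0], l:l + I.shape[1]]
        return out / n**2

    f_s, f_o = fractions(I_s), fractions(I_o)
    fbs = np.mean((f_s - f_o) ** 2)
    # FBS of the worst possible simulation (no overlap at window scale).
    worst = np.mean(f_s ** 2) + np.mean(f_o ** 2)
    return 1.0 - fbs / worst if worst > 0 else np.nan
```

Evaluating this for a range of window sizes `n` and thresholds, day by day, and taking seasonal medians gives the FSS differences analyzed in Sect. 3.6.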
3.1 Precipitation biases and spatial error variability
First, we investigate the climatological errors of median (Fig. A1 in the Online Resource 1) and extreme (days with values above the 97.5th percentile) precipitation (Fig. 2) compared to observations on the common \(0.44^{\circ }\) evaluation grid. Simulated minus observed mean seasonal precipitation is calculated for total precipitation and for extremes on a grid-point basis. From the resulting biases the median, 25th percentile, and 75th percentile over all grid boxes of an evaluation domain are derived. The difference between the 75th and 25th percentiles (Q75 minus Q25) will hereafter be denoted as the spatial error variability.
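The grid-point statistics just described can be written compactly; the sketch below assumes `sim` and `obs` are seasonal-mean fields on the common evaluation grid (function names are illustrative, not from the paper).

```python
import numpy as np

def bias_summary(sim, obs):
    """Median bias and the spatial error variability, defined in the
    text as Q75 minus Q25 of the grid-point biases over a domain."""
    bias = (np.asarray(sim) - np.asarray(obs)).ravel()
    q25, q50, q75 = np.percentile(bias, [25, 50, 75])
    return {'median_bias': q50, 'spatial_error_variability': q75 - q25}

def extreme_values(precip, q=97.5):
    """Values above the q-th percentile; applied to observations and
    simulations independently, so extremes need not match in time."""
    p = np.asarray(precip)
    return p[p > np.percentile(p, q)]
```

Applying `bias_summary` separately to total precipitation and to the output of `extreme_values` reproduces the quantities plotted in Fig. 2 and Fig. A1.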
The \(0.11^{\circ }\) simulations tend to produce heavier extreme precipitation than their \(0.44^{\circ }\) counterparts (symbols in Fig. 2 are below the diagonal); however, this cannot be generalized. For example, the \(0.11^{\circ }\) median June, July, and August (JJA) extreme precipitation in REMO is lower in all regions (symbol above the diagonal) while in the RCA4 \(0.11^{\circ }\) simulation it is always higher (below the diagonal).
Improvements of the spatial error variability can be predominantly found in mountainous regions like the Alps (panel a and b), Norway (panel i and j), Spain in December, January, and February (DJF) (panel e), France (panel k and l), and the Carpathians in JJA (panel n). These are also the regions where most of the \(0.11^{\circ }\) simulations are improving the median extreme precipitation bias (symbols located in the green area of Fig. 2). Deteriorations of the spatial error variability are found in Sweden during DJF (panel g) and mixed results prevail in the other regions and seasons.
Results for March, April, and May (MAM) and September, October, and November (SON) (not shown) are frequently in between those of DJF and JJA. The main characteristics of biases and spatial error variabilities of mean precipitation (Online Resource Fig. A1) are similar to those of extreme precipitation. This means, models that underestimate extreme precipitation usually also underestimate total precipitation sums. Also differences in the median biases between the two model resolutions are similar to those of extreme precipitation.
Summing up, biases in extreme and mean precipitation averaged over larger regions are not clearly improved in the \(0.11^{\circ }\) simulations. This means that simulations with \(0.44^{\circ }\) grid spacing might be sufficient if regional average precipitation is of interest.
3.2 Precipitation biases versus model resolution differences
In Fig. 3 we show the relation between the seasonal absolute biases in the mean and extreme precipitation of the \(0.44^{\circ }\) simulations and the precipitation differences between the \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations.
In DJF and JJA (Fig. 3 upper/lower panel) the differences between the \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations are typically smaller than the biases in the \(0.44^{\circ }\) simulations (y-axis ratios are smaller than one). For mean precipitation (green) the differences between the \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations are within 20 and 50 %/80 % (DJF/JJA) of the \(0.44^{\circ }\) simulation biases. For extreme precipitation this ratio is higher and typically between 50 and 100 % with outliers up to 190 %. This means, theoretically, the potential for added value is higher for extreme than for mean precipitation.
For mean precipitation the largest ratios are found for the Alps, the Carpathians, France, and Spain, while the lowest appear in Norway and Sweden. For extreme precipitation Germany, Sweden, and the Carpathians have the highest and Norway, Spain, and the Alps the lowest ratios. The reason for this is primarily the magnitude of the absolute biases in the \(0.44^{\circ }\) simulations, because the biases strongly vary between different regions in Europe while the grid spacing differences are more uniform. This can already be seen in Fig. 2 and Fig. A1, where the symbols tend to align along the diagonal and do not scatter much. However, these two figures cannot be directly compared to Fig. 3 because they do not show absolute biases, and therefore positive and negative biases can cancel out by spatial averaging.
Replacing the precipitation of the \(0.44^{\circ }\) simulations with those of the \(0.11^{\circ }\) simulations in the divisors leads to similar results (not shown).
3.3 Analysis of grid cell biases
Here we investigate how well spatial patterns of extreme and mean precipitation are represented in the EURO-CORDEX \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations by detecting regions of consistent improvements or deteriorations in the \(0.11^{\circ }\) simulations. The term consistent improvement/deterioration is used if more than six of the eight \(0.11^{\circ }\) simulations (more than 75 %) show smaller/larger absolute biases at a specific grid cell than their \(0.44^{\circ }\) counterparts.
European Alps In the Alps extreme precipitation patterns are spatially and temporally highly variable (Fig. 4a, e, i and m) with two distinct hot spots around the Tessin and the Julian Alps (the sub-regions are indicated in Fig. 1b). In addition, the Ligurian Alps and the north-eastern Adriatic coast are highly affected by extreme precipitation in SON and DJF.
Extreme precipitation in the Tessin is well simulated in the multi-model-mean (except for DJF where an overestimation is dominant in the entire Western Alps) while extreme precipitation is underestimated in the Julian and Ligurian Alps.
Consistent improvements are found in 30–40 % of the evaluation grid cells, while consistent deteriorations are only found in 1–8 % (Fig. 4, right column).
In DJF (panel a–d) the domain wide average extreme precipitation bias is close to zero in both resolutions (Fig. 2a), but regionally large differences occur. The minimum and maximum values of the ensemble mean bias are larger in the \(0.11^{\circ }\) ensemble (panel b). However, there are large areas where biases are consistently improved (panel d). Added value is particularly visible south- and northward of the Alpine divide, while there are small areas of deteriorations (precipitation overestimation) along the Alpine divide.
In MAM (panel e–h) the domain wide average extreme precipitation bias is close to zero as in winter, and the largest differences between the \(0.44^{\circ }\) and \(0.11^{\circ }\) ensemble occur in the Western Alps. Consistent improvements in the \(0.11^{\circ }\) ensemble can be found along the entire Alpine chain, the Ligurian Alps and Adriatic coast.
In JJA (panel i–l) the domain wide average extreme precipitation is underestimated in both ensembles and most added value can be found in this season. Not only the spatial mean but also the minimum and maximum biases are improved. There are large areas in the south and western part of the Alps where extreme precipitation is consistently improved in the \(0.11^{\circ }\) runs (39 % of the entire area). In the \(0.44^{\circ }\) ensemble too much precipitation is produced along the Alpine divide and too little southward. Both error patterns are nicely corrected in the \(0.11^{\circ }\) simulations.
SON is the season with the highest extreme precipitation in the Alps (panel m–p). Although there is a general underestimation of about 10 %, the basic patterns are well simulated in both the fine- and the coarse-gridded ensemble. Nevertheless, the \(0.11^{\circ }\) simulations show consistent improvements, especially in the mountainous and coastal areas of the domain.
The basic error characteristics for mean precipitation (Online Resource 1 Fig. A2) are similar to those for extremes. The simulations are too dry in SON and JJA southward of the Alps and too wet in the Alps. In JJA the \(0.11^{\circ }\) simulations substantially mitigate the dry bias. In DJF and MAM, too-wet conditions are simulated in and northwards of the Alps. In all seasons mean precipitation is consistently improved in large areas by the \(0.11^{\circ }\) simulations (between 30 and 37 % of the evaluation domain). The location and extent of the improved areas are very similar to those of extreme precipitation.
Germany In Germany the season with the highest extreme precipitation amounts is JJA and the season with the lowest is DJF. There is only one major hot-spot, located at the borders with Austria and the Czech Republic, which is clearly related to topography. A minor hot-spot can be found in the western part of the Central German Uplands. Northern Germany shows a uniform gradient in which extreme precipitation decreases from west to east (except for JJA).
In DJF (panel a–d in Fig. 5), but also in the transition seasons (not shown) this large-scale gradient is too weak in the EURO-CORDEX simulations, which leads to a growing overestimation of extreme precipitation towards the eastern part of Northern Germany. Areas which are consistently improved and deteriorated by the \(0.11^{\circ }\) runs are approximately in balance. One pattern, which is clearly improved, is the overestimation of extreme precipitation in the southern part of Germany.
In JJA the EURO-CORDEX simulations overestimate extreme precipitation in Central Germany. Extremes are underestimated in the southeast and particularly in the hot-spot region. Consistently improved and deteriorated areas are both small and added value is primarily found in the mountainous areas, while in the flat land, the results are often even deteriorated. Also in MAM and SON (not shown) the consistently improved areas are small and no clear advantage of the \(0.11^{\circ }\) simulations can be detected. Generally, added value of fine-grid spacings is tied to mountainous regions in Germany.
For JJA and SON mean precipitation (Online Resource 1 Fig. A3, SON not shown), improved and deteriorated areas in the fine-gridded runs are also small and balance each other. Similar to extremes, mean DJF precipitation (Online Resource 1 Fig. A3) is consistently improved in Southern Germany and additionally also in Eastern Germany. For MAM (Fig. 10c) larger parts of Central Germany and the north-east coast of Germany show a better representation of mean precipitation in the \(0.11^{\circ }\) ensemble.
Spain In Spain a north-south precipitation gradient with high amounts of mean and extreme precipitation in the north is present. Most of the precipitation falls during DJF while JJA is very dry. Hot-spots of extreme precipitation are the Pyrenees, the Cantabrian Mountain Chain, and parts of the Mediterranean Coast during DJF and SON.
In DJF (panel a–d in Fig. 6), the intensity of extreme precipitation is too low in the \(0.44^{\circ }\) simulations. This is improved by the \(0.11^{\circ }\) runs. Consistent added value covers 43 % of the total territory, while deterioration is only found in 4 % of the region.
In JJA (panel e–h), the underestimation in the \(0.44^{\circ }\) simulations is also improved by the \(0.11^{\circ }\) runs. The consistently improved areas cover 30 % of Spain and are mainly located along the coastlines and the Pyrenees.
Similar or even larger improvements can be found in MAM and SON (not shown), where consistent improvements in the fine-gridded models appear in 44 % of the domain with similar patterns as in DJF. For mean precipitation (Online Resource 1 Fig. A4) the \(0.11^{\circ }\) ensemble shows the highest advantages in SON, where 54 % of the domain is improved, followed by MAM (44 %), DJF (38 %), and JJA (20 %). The locations of the improved areas are similar to those of extreme precipitation.
Norway and Sweden The precipitation patterns in Norway and Sweden are dominated by the Scandinavian Mountains, which reach from southern Norway up to the North Cape. The mean and extreme precipitation patterns are quite homogeneous in Sweden, which is located downstream of the coastal mountain range. The band of most extreme precipitation follows the Norwegian coastline and is divided into two hot-spots. One is located in Western Norway and the second in the south of Northern Norway. The seasons with the highest mean and extreme precipitation are SON and DJF while spring has the lowest values.
In DJF (panel a–d in Fig. 7) the \(0.44^{\circ }\) simulations underestimate extreme precipitation in large parts of the domain but especially in the two hot-spot regions. The \(0.11^{\circ }\) runs have smaller biases but still underestimate extreme precipitation in large areas. The most consistent improvements can be found along the Atlantic coast and the Norwegian Mountains (25 % of the total area) but improvements barely occur downstream of the Scandinavian Mountains. Deteriorations are found in 12 % of the area but differences are small.
Smaller biases are found in JJA (panel e–h). Both ensembles still underestimate extreme precipitation in Norway and also most added value is found here. In Sweden no clear benefits are visible.
Simulated extreme precipitation in MAM shows similar shortcomings as in JJA and patterns in SON are comparable to those in DJF (not shown). Mean precipitation is underestimated along the Atlantic coast and overestimated in Sweden in all seasons (Online Resource 1 Fig. A5). Contrary to extreme precipitation, consistent improvements in the \(0.11^{\circ }\) simulations are not restricted to mountainous areas, but are also found in the flat areas of Sweden.
France In France extreme precipitation is heaviest in the south and is located in the Western Alps, the Pyrenees, the Central Massif, and Corsica. The season with the heaviest extremes is SON while during JJA extremes are weakest.
In DJF (panel a–d in Fig. 8) the \(0.44^{\circ }\) models underestimate extreme precipitation in southern France, where heaviest extremes occur, and overestimate them elsewhere. The \(0.11^{\circ }\) simulations can mitigate the dry bias consistently while biases in the rest of France remain the same.
Similar patterns can be seen in JJA (panel e–h) where a pronounced dry bias is apparent in the \(0.44^{\circ }\) simulations. Again the \(0.11^{\circ }\) models are found to reduce this dry bias consistently.
Also in MAM and SON (not shown) the same bias patterns occur in the \(0.44^{\circ }\) simulations and similar improvements can be found in the \(0.11^{\circ }\) runs. This is similar for mean precipitation in DJF and MAM (Online Resource 1 Fig. A6) but in JJA and SON entire France is too dry in the \(0.44^{\circ }\) simulations, which is improved in the \(0.11^{\circ }\) simulations. Consistent improvements are predominantly located in the South and along the Atlantic coast.
Carpathians Compared to the other investigated regions, the Carpathians feature moderate extreme precipitation amounts. The most intense season is JJA where extremes of up to \(40\,\hbox {mm}\,\hbox {d}^{-1}\) occur along the entire Carpathian Mountain chain.
In DJF (panel a–d in Fig. 9) and MAM (not shown) the models are too wet in the entire region except for the Southwest. The bias patterns are similar in both model resolutions except for a shift of precipitation. In the \(0.11^{\circ }\) simulations extreme precipitation tends to fall upstream of the Carpathians, which leads to deteriorations in their south-western foothills and to improvements over the mountains.
In JJA, the \(0.44^{\circ }\) simulations are too dry in the entire region except in the mountains. The \(0.11^{\circ }\) simulations show heavier extremes, which is a consistent improvement in 30 % of the region.
SON (not shown) is characterized by a wet bias in the mountains, which is improved by the \(0.11^{\circ }\) simulations in the Northeast. For mean precipitation (Online Resource 1 Fig. A7) the patterns are similar to those for extremes, but the relative magnitudes of the biases are larger.
Areas with improved mean and extreme precipitation Very often the areas of consistent improvements in mean and extreme precipitation are similar. However, there are some notable exceptions which are worth discussing.
Figure 10 shows differences between the \(0.11^{\circ }\) and \(0.44^{\circ }\) multi-model-means relative to observations for mean (panel a) and extreme (panel b) MAM precipitation in Norway and Sweden. The patterns of extreme precipitation are very similar to JJA (Fig. 7h). Consistent improvements due to the fine-gridded models can predominantly be found in the mountainous region of Norway. However, for mean precipitation large parts of the flat regions in Sweden, located downstream of the Scandinavian Mountains, are also improved. This is because extreme precipitation in Sweden is predominantly caused by south-easterly flow, which advects moist air from the Baltic Sea (Hellström 2005), while mean precipitation is more related to a zonal flow in which a rain-shadowing effect caused by the Scandinavian Mountains is present. Therefore, in the \(0.11^{\circ }\) simulations more precipitation is generated over the Scandinavian Mountains, which leads to less mean precipitation downstream in Sweden and thereby reduces the overall wet bias of the \(0.44^{\circ }\) models.
A similar behavior can be seen in Germany, also during MAM (Fig. 10c, d). Extreme precipitation improves only in hilly and mountainous regions in the \(0.11^{\circ }\) models, but mean precipitation also gets better in flat areas.
In panel e the net-improved areas (improved minus deteriorated areas) of the \(0.11^{\circ }\) simulations are summarized for mean and extreme precipitation. In the Alps the net-improved areas of mean and extreme precipitation are similar (difference \(<\)5 %). In Germany improvements are larger for mean precipitation during MAM (as shown in panel c–d) and for extreme precipitation in SON. Also in Sweden large differences are visible, especially in DJF and MAM (as shown in panel a–b), and the net-improved areas of mean precipitation are always larger than those of extreme precipitation. In Spain, Norway, and the Carpathians improvements are partly larger for means and partly for extremes, with differences of up to 20 % in the Carpathians during SON. Differences in France are small. More generally, these results do not suggest that extreme precipitation is improved more than mean precipitation.
Figure 10e also demonstrates that the \(0.11^{\circ }\) simulations outperform the \(0.44^{\circ }\) runs with regard to extreme and mean precipitation in all regions and seasons (with some exceptions in Germany during JJA and Sweden during DJF and JJA). The largest net-improved area fractions can be found in Spain, followed by the Alps and Norway. In the Alps, the Carpathians, and France the season with the largest net-improved area is JJA, while in the other regions JJA is among the seasons with the smallest net-improved areas.
If we perform the same statistical analysis on a \(0.11^{\circ }\) evaluation grid (Online Resource 1 Fig. A8), most of the features described above stay the same, but two remarkable differences deserve to be highlighted. First, on the \(0.44^{\circ }\) evaluation grid, the area of consistently improved summertime extreme precipitation is rarely larger than that of mean precipitation. In contrast, on the \(0.11^{\circ }\) evaluation grid the improved areas in JJA are larger for extreme precipitation (except in Sweden and the Carpathians). Second, hardly any improved areas are found in Germany on the coarse grid, but clear added value is indicated by the results of the evaluation on the fine grid. Also in other regions, improved areas are larger on the fine evaluation grid.
The reason for this is illustrated using the example of Germany during DJF in Fig. 11. Compared to Fig. 5a–d we can see that more fine-scale structures can be captured by the \(0.11^{\circ }\) models. This results in larger consistently improved areas compared to the analysis on a \(0.44^{\circ }\) evaluation grid and demonstrates that the \(0.11^{\circ }\) simulations produce realistic precipitation patterns beyond the grid spacing of the \(0.44^{\circ }\) models.
Summing up, the \(0.11^{\circ }\) simulations are found to consistently improve extreme and mean precipitation biases on the grid-point scale over large parts of Europe, but especially in mountainous areas. Since the heaviest precipitation is observed in the mountains, these improvements can be valuable for flood-protection or river-runoff studies.
3.4 Scale dependence of spatial correlation coefficients
In contrast to the investigation of biases in Sects. 3.1, 3.2 and 3.3 here we focus on the spatial correlation between simulated and observed precipitation patterns on different spatial scales. Therefore, we calculate the Pearson product-moment correlation coefficient (insensitive to regionally averaged biases) for extreme and mean precipitation fields. Information about the scale dependence is derived by smoothing the fields with the method described in Sect. 2.3.
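The scale-dependent pattern correlation described above can be sketched as follows. This is a minimal illustration with assumed names; a square moving-average filter stands in for the smoothing method of Sect. 2.3, which is not fully specified here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def scale_dependent_correlation(sim, obs, windows):
    """Pearson pattern correlation between simulated and observed
    precipitation fields after smoothing both fields with square
    moving windows of increasing size (in grid cells).

    Because the Pearson coefficient is invariant to adding a constant,
    this metric is insensitive to regionally averaged biases.
    Returns {window_size: correlation}.
    """
    out = {}
    for w in windows:
        s = uniform_filter(sim, size=w, mode="nearest")
        o = uniform_filter(obs, size=w, mode="nearest")
        out[w] = np.corrcoef(s.ravel(), o.ravel())[0, 1]
    return out
```

Plotting the returned values against window size (converted to km) yields curves of the kind discussed for Fig. 12.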
In the Alpine region (Fig. 12a–d), spatial correlation coefficients are improved in the \(0.11^{\circ }\) simulations uniformly across all investigated scales and all seasons except JJA. In JJA there is a constant decrease in improvement until approximately 400 km where half of the \(0.11^{\circ }\) simulations improve and the other half deteriorates the correlation coefficients.
In Germany the majority of \(0.11^{\circ }\) simulations improve the correlation coefficients in MAM and SON (panel f and h). In MAM there is no clear spatial dependence but in SON improvements are increasing on large scales. Deteriorations are found in DJF and especially in JJA. In the latter not a single \(0.11^{\circ }\) simulation was able to improve the correlation coefficients of its \(0.44^{\circ }\) counterpart.
In Spain almost all \(0.11^{\circ }\) simulations feature higher correlation in SON (panel l) and more than 75 % in DJF and MAM (panel i and j). In JJA there is a clear gradient, where more than half of the \(0.11^{\circ }\) ensemble improve the correlation coefficients below 400 km.
Clear improvements can be found in Sweden during DJF and especially during MAM (panel m and n). In the latter the entire \(0.11^{\circ }\) ensemble has higher correlation coefficients. In SON (panel p) both, improvements and deteriorations occur in equal shares while in JJA (panel o) more than 75 % of the \(0.11^{\circ }\) ensemble has smaller correlation coefficients.
In Norway improvements and deteriorations in SON occur in equal shares (panel t). In DJF and MAM (panel q and r) improvements dominantly occur for scales above approximately 200 km. In JJA (panel s) improvements on scales below 450 km are found for more than half of the \(0.11^{\circ }\) simulations while results on larger scales are mostly deteriorating.
No clear scale dependency is found in the Carpathians (panel u–x). More than 75 % of the \(0.11^{\circ }\) simulations have higher correlation coefficients in all season except in MAM.
Improvements in all seasons and at all scales can be found in France (panel y to bb). During MAM and SON nearly all \(0.11^{\circ }\) models have higher correlation coefficients whereas in DJF and JJA the ratio is approximately 75 %.
For mean precipitation (Online Resource 1 Fig. A9) generally less scale dependence is found than for extremes. The spread is smaller, and improvements are more consistent. In nearly all seasons and regions more than 75 % of the \(0.11^{\circ }\) simulations are found to improve the spatial correlation coefficients of their \(0.44^{\circ }\) counterparts. The only exception is JJA in Germany.
Summing up, most \(0.11^{\circ }\) simulations are found to improve spatial correlation coefficients over a wide range of scales. This means that spatial patterns, like the location of precipitation hot-spots or areas with weaker precipitation, are better represented at spatial scales from the meso scale (\({\sim}50\,\hbox {km}\)) to the regional scale (\({\sim}400\,\hbox {km}\)). The typically weak spatial-scale dependency of the pattern correlation coefficients might be related to the spatial extent of the orographic features in the investigated regions, which have a similar size to the spatial scales investigated in Fig. 12. Stronger scale dependencies might be present on synoptic to continental scales.
3.5 Intensity dependence of spatial correlation coefficients
While the spatial-scale dependencies of correlation coefficients were analyzed in Sect. 3.4, here we focus on their intensity dependence. Usually, different synoptic situations lead to different precipitation intensities, and therefore model errors are often intensity-dependent. For this investigation, grid-cell precipitation was binned into 2.5 % classes for values above the 50th percentile. The 0–50 % percentile range, which mostly includes no to weak precipitation values, was additionally binned into one class. Thereafter, the resulting spatial correlation coefficients were calculated for each precipitation class (bin).
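The binning step can be sketched with numpy percentiles. This is an illustrative reading of the procedure (function name and the flattened-field input are assumptions): one class collects everything up to the median, and twenty 2.5 %-wide classes cover the upper half of the distribution.

```python
import numpy as np

def intensity_bins(field):
    """Assign each grid cell to a precipitation-intensity class.

    Class 0 collects all values below the 50th percentile; classes
    1..20 are 2.5 %-wide percentile bands covering 50-100 %.
    `field` is a flattened array of grid-cell precipitation values.
    """
    edges = np.percentile(field, np.arange(50.0, 100.0, 2.5))
    return np.digitize(field, edges)
```

Correlation coefficients between simulated and observed values are then computed separately for the cells of each class.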
In the Alps, Spain, and France (Fig. 13a–d, i–l and y–bb) the \(0.11^{\circ }\) simulations improve the correlation more for high intensities (except for JJA in Spain, and MAM in France). For the other regions improvements are predominantly larger for light precipitation.
In the Alps more than 75 % of the \(0.11^{\circ }\) simulations show improvements in all seasons and for all intensities (except for light rain in DJF). In Germany (panels e–h) light to moderate intensities are improved in all seasons. In JJA most models show only small differences between the two resolutions. In Spain (panels i–l) most fine gridded simulations feature higher correlation coefficients on all scales (except in SON for light precipitation). In JJA there is a strong intensity dependence where largest improvements occur for low intensities. Differences in the correlation coefficients are rather small in Sweden (panels m–p) while in Norway (panels q–t) large improvements for light precipitation are found. In JJA no intensity dependence is visible, while in the other seasons improvements are smaller for higher intensities. In the Carpathians light precipitation is more improved during JJA (panel w), while no clear intensity-dependence is found in the other seasons. In France extremes are more improved than light precipitation in all seasons except in MAM.
If we repeat this analysis on a \(0.11^{\circ }\) evaluation grid, the intensity dependencies remain the same, but the higher correlation coefficients of the \(0.11^{\circ }\) simulations are even more pronounced (Online Resource 1 Fig. A10).
Summing up, spatial correlation coefficients of mean and extreme precipitation are larger in most of the \(0.11^{\circ }\) simulations over a wide range of precipitation intensities. Because precipitation intensity is strongly related to the synoptic situation, this finding indicates that the fine-gridded simulations improve the representation of precipitation patterns for a variety of weather situations.
3.6 Daily spatial precipitation structure
Until now, we have analyzed precipitation in climatological fields (e.g., median, mean, extreme). Here we directly compare observed with simulated precipitation patterns on a day-to-day basis. This can reveal further added value, since in climatological fields daily model errors may cancel out.
Evaluating precipitation patterns on daily timescales can be challenging because of double penalty problems (e.g., Prein et al. 2013a). Here the FSS method is applied, which is able to avoid the double penalty problem by allowing spatial displacements (see Sect. 2.3 or Roberts and Lean (2008) for more details).
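A basic version of the fractions skill score of Roberts and Lean (2008) can be written as below. This is a minimal sketch (function name, zero-padding at the domain edges, and single-day input are assumptions), showing why spatial displacements within a neighborhood are not penalized: only the fractions of exceeding cells, not their exact positions, are compared.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(sim, obs, threshold, scale):
    """Fractions skill score for one day and one threshold.

    Both fields are binarized at `threshold` (e.g., mm/day); the
    fraction of exceeding cells is computed inside square neighborhoods
    of `scale` grid cells; the two fraction fields are then compared.
    FSS = 1 is a perfect match, FSS = 0 means no skill.
    """
    f_sim = uniform_filter((sim >= threshold).astype(float),
                           size=scale, mode="constant")
    f_obs = uniform_filter((obs >= threshold).astype(float),
                           size=scale, mode="constant")
    mse = np.mean((f_sim - f_obs) ** 2)
    mse_ref = np.mean(f_sim ** 2) + np.mean(f_obs ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

Evaluating this over all days and taking the median per threshold and scale yields the kind of statistic compared in Fig. 14.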
The differences in the median FSSs (\(0.11^{\circ }\) minus \(0.44^{\circ }\) simulations, see Fig. 14) are mostly positive, meaning that the \(0.11^{\circ }\) models have a higher skill in simulating daily patterns of precipitation than their \(0.44^{\circ }\) counterparts. Only for moderate precipitation thresholds (1–10 mm/day) and horizontal scales beyond 400 km can some small deteriorations be identified in Germany (panel m–p), in Spain during DJF (panel y), and in France during DJF (panel i). Usually, improvements are seen on small horizontal scales (below 200 km) for thresholds up to 5 mm/day and on all scales for thresholds between 5 and 30 mm/day. The largest improvements are found for moderate to intense daily precipitation sums (10–30 mm/day) and allowed displacements larger than 200 km.
Comparing the different regions, largest improvements are found in the Alps (panels a–d) and in Norway (panels q–t), whereas less improvement is found in Germany (panels m–p) and Sweden (panels u–x). In Germany and Sweden improvements are similar in different seasons. In the Alps and the Carpathians largest improvements are found in JJA and lowest in DJF whereas in Norway the opposite is the case. In Spain the transition seasons show largest improvements. This is in good agreement with findings in Sect. 3.3 (see Fig. 10e).
These results indicate that the \(0.11^{\circ }\) simulations are not only capable of improving climatological average precipitation but also precipitation on a daily basis. This means that the fine gridded simulations yield improved precipitation patterns and intensities on the weather timescale. For studies related to, e.g., hydrology or droughts this is important, since they require a correct representation and sequence of weather conditions.
3.7 Daily and 3-hourly precipitation distributions
In this section the shapes of simulated daily and 3-hourly precipitation distributions (3-hourly data only available for Switzerland) on a grid-point basis (\(0.44^{\circ }\) evaluation grid) are compared to observations. Contrary to the analyses in the previous sections, temporal or spatial mismatches do not affect the results of this analysis, since the distributions depend only on the frequency of precipitation intensities, irrespective of where or when they occur in a season.
Figure 15 shows that the \(0.11^{\circ }\) models tend to have higher extreme precipitation values than the \(0.44^{\circ }\) simulations. This is beneficial in MAM, where the \(0.11^{\circ }\) simulations improve the representation of extreme precipitation in all regions except the Carpathians and the Alps (thick red lines are closer to the diagonal than the thick white lines). In DJF, however, the \(0.11^{\circ }\) models only improve extremes in Norway while they deteriorate their representation elsewhere. The regions with the most consistent improvements across seasons are Germany and Norway. In the Carpathians no improvements are seen (except for SON).
The \(0.11^{\circ }\) simulation spread (red shaded areas) is smaller than the spread of the \(0.44^{\circ }\) simulations (blue contours) during JJA, except for the Carpathians. The spread does not change in DJF while MAM and SON show mixed results.
Extreme precipitation events often have small spatial and temporal extents. Therefore, this evaluation is very sensitive to the underlying temporal and spatial resolution. On a \(0.11^{\circ }\) evaluation grid (Online Resource 1 Fig. A11) the shown improvements are more pronounced. This indicates that the \(0.11^{\circ }\) simulations reproduce extreme events on scales smaller than \(0.44^{\circ }\) more realistically.
To investigate the difference between the observed and simulated precipitation distributions on a sub-daily (3-hourly) scale we have used the RdisaggH data set (Table 2), which provides data for Switzerland within the period May 2003 to December 2007. Figure 16 shows that in Switzerland the distribution of the median of the \(0.11^{\circ }\) models is always closer to the observed distribution than the median of the \(0.44^{\circ }\) models (except for daily DJF and JJA, panel b and f). Additionally, also the simulated spread is smaller in the \(0.11^{\circ }\) ensemble (except for daily and hourly DJF and daily MAM).
In general, improvements in the \(0.11^{\circ }\) simulations are larger for 3-hourly precipitation than for daily values, and larger for high intensities. Especially the maximum values are well represented in all seasons except DJF (3-hourly). Remarkable is the improvement in SON, where daily extremes are overestimated in the \(0.44^{\circ }\) runs; this is corrected in the \(0.11^{\circ }\) simulations. At the same time, 3-hourly precipitation maxima are underestimated by the \(0.44^{\circ }\) models, which is also improved in the \(0.11^{\circ }\) simulations. These improvements can only be achieved when precipitation intensity is increased on short time scales while precipitation duration is decreased.
If the same evaluation in Switzerland is performed on the \(0.11^{\circ }\) evaluation grid (Online Resource 1 Fig. A12), improvements in the \(0.11^{\circ }\) simulations become larger (except for daily DJF and SON).
Investigating the dry-day frequency and moderate precipitation intensities (below \(25\,\hbox {mm}\,\hbox {d}^{-1}\)), which are barely visible in Fig. 15, reveals that for most regions and seasons the simulated dry-day frequency is too low (except for Spain in all seasons and France and the Carpathians during JJA and SON; Fig. A13). The dry-day frequency tends to be equal or lower in the \(0.11^{\circ }\) simulations compared to the \(0.44^{\circ }\) models (except for DJF in Sweden). Moderate precipitation intensities (between \(0.1\,\hbox {mm}\,\hbox {d}^{-1}\) and \(25\,\hbox {mm}\,\hbox {d}^{-1}\)) tend to be slightly more frequent in the high-resolution models (Fig. A13).
In this study mean and extreme (above 97.5 %) precipitation in 16 evaluation experiments from the EURO-CORDEX initiative with horizontal grid spacings of \(0.11^{\circ }\) and \(0.44^{\circ }\) (8 each) are compared to highly resolved observation data sets in 7 European regions (Alps, Germany, France, Sweden, Norway, Spain, and the Carpathians). The main goal was to find out where differences between the fine and the coarse gridded simulations occur and if these differences result in an improved or deteriorated representation of precipitation in the \(0.11^{\circ }\) models.
Our evaluation strategy focused on:
investigating spatial and seasonal median biases and spatial error ranges in the seven investigated sub-regions (Sects. 3.1, 3.2),
assessing spatial distribution of seasonal mean biases and the evaluation of consistent improvements/deteriorations of seasonal mean absolute biases in the \(0.11^{\circ }\) simulations compared to the \(0.44^{\circ }\) models on the grid cell scale (Sect. 3.3),
evaluating spatial pattern correlation coefficients as a function of spatial scales (Sect. 3.4) and precipitation intensities (Sect. 3.5),
analyzing precipitation structures and intensities on a daily basis (Sect. 3.6),
and investigating the simulation of daily and 3-hourly precipitation distributions (Sect. 3.7).
In general, no added value was found in regional and seasonal mean and median precipitation (cf. Figs. 2, 4, 5, 6, 7, 8 and 9). The \(0.11^{\circ }\) simulations tend to increase precipitation by reducing the dry-day frequency and by increasing the frequency and intensity of light, moderate, and especially extreme precipitation (cf. Figs. 15, 16, and Fig. A13). Analyzing precipitation differences on a local (e.g., grid cell) basis (cf. Figs. 4, 5, 6, 7, 8 and 9) reveals that the \(0.11^{\circ }\) simulations produce more precipitation especially in areas that are upstream (regarding the predominant westerly wind direction in Europe) of mountain ranges and simulate less precipitation in downstream areas (precipitation shadowing effect). This effect is best visible during DJF because of the strong synoptic-scale flow. Examples are shown for the Carpathians (Fig. 9d and A7 d), Sweden (Fig. A5 d and Fig. 10a, b), the Alps (Fig. A2 d), and Spain (Fig. 6d and Fig. A4 d). These orographically induced differences tend to consistently reduce the precipitation biases in most of the \(0.11^{\circ }\) models and affected regions. Therefore, the regions with the largest areas of consistently improved biases have topographically complex features (e.g., the Alps, Norway, Spain) or are directly affected by mountain ranges (cf. Fig. 10e), such as Sweden, which is shielded by the Scandinavian Mountains towards the West. The strong influence of mountains on the improved precipitation features in the \(0.11^{\circ }\) simulations is also shown in the decrease of spatial error ranges (predominant blue colors in Fig. 2 and A1 in the Alps, Spain, or Norway) and the larger improvements in the FSS statistics (cf. Fig. 14, Alps and Norway).
Spatial correlation coefficients for different precipitation intensities show that the \(0.11^{\circ }\) simulations are superior in representing precipitation patterns, compared to their \(0.44^{\circ }\) counterparts, for light precipitation in virtually all regions and seasons (see Fig. 13). These improvements get even larger for high precipitation intensities in the Alps, Spain, and France, while they tend to get smaller or stay unaltered in the other regions. The spatial-scale dependence of the correlation coefficients is generally weak (cf. Fig. 12 and Fig. A9). This might be related to the small spatial extent of the regional data sets, which is typically on the order of a few hundred kilometers and therefore below the synoptic scale. The strongest scale dependencies of extreme precipitation occur in mountain regions (Alps, Spain, Norway) during JJA, where improvements in correlation coefficients are limited to scales below approximately 400 km. This is probably related to the predominance of convective storms, which are the major source of extreme precipitation during JJA in these regions.
A clear result from our analysis is that added value in the \(0.11^{\circ }\) simulations is not restricted to extreme precipitation but is partly even larger in mean precipitation statistics on local scales. An example is shown by the improvements of biases in Sweden during MAM (see Fig. 10a, b).
Improvements in the \(0.11^{\circ }\) simulations are more pronounced when evaluations are performed on a \(0.11^{\circ }\) evaluation grid (all data remapped to a common \(0.11^{\circ }\) instead of a \(0.44^{\circ }\) grid; compare e.g., Fig. 11 with Fig. 5a–d or Fig. 13 with Fig. A10). This indicates that the \(0.11^{\circ }\) models produce realistic precipitation patterns on scales beyond the grid spacing of the \(0.44^{\circ }\) simulations.
There are some important differences between the results presented here and the findings of the EURO-CORDEX standard evaluation paper by Kotlarski et al. (2014). Both studies agree that there is no added value in seasonally and regionally averaged mean precipitation (cf. Fig. A1). However, Kotlarski et al. (2014) did not find improvements in the spatial pattern correlation of mean seasonal precipitation, which are shown here (e.g., Fig. A9). Furthermore, Kotlarski et al. (2014) found a general wet bias in most seasons and over most of Europe, which cannot be confirmed by our findings (cf. Fig. A1).
The reasons for these differences are probably the use of different observational data sets. Kotlarski et al. (2014) use the E-OBS gridded data set (Haylock et al. 2008), while we use gridded regional data sets, which have a finer grid spacing, higher observation station densities, and are partly corrected for precipitation under-catch. The differences between E-OBS and the regional data sets, as well as their implications for model evaluation, are shown in Prein and Gobiet (2015). Using the same observational data sets for the European Alps and Spain, Casanueva et al. (2015) show similar improvements in the spatial pattern correlation and similar biases to those shown here.
Kotlarski et al. (2014) did not explicitly address the added value of a finer grid spacing and left this topic for further analysis. However, they stated that they would expect benefits for quantities such as daily precipitation intensities and small-scale spatial climate variability in topographically structured terrain, which is confirmed by the present study.
Our results are consistent with previous studies that addressed the added value of smaller horizontal grid spacings in simulating precipitation. Rauscher et al. (2010) showed improved spatial patterns and temporal evolution of summertime precipitation in the ENSEMBLES simulations by comparing 25 km with 50 km grid spacing simulations. The largest improvements were found in topographically complex regions (Rauscher et al. 2010), which is also confirmed by a study of Chan et al. (2013). The reason why Rauscher et al. (2010) did not find improvements in DJF precipitation might be the coarser grid spacing of their 25 km simulations and the use of a different precipitation data set (E-OBS).
Jacob et al. (2014) state that the biggest differences in the climate change signals between the EURO-CORDEX fine-gridded (\(0.11^{\circ }\)) and coarse-gridded (\(0.44^{\circ }\)) simulations occur in the change patterns for heavy precipitation events, where they find a smoother shift from weak to moderate and high intensities. They relate the more detailed spatial patterns of the \(0.11^{\circ }\) grid spacing simulations to better resolved physical processes such as convection and heavy precipitation, and to the better representation of surface characteristics and their spatial variability, which is supported by our findings.
5.1 Sources for added value
In this subsection we investigate why the \(0.11^{\circ }\) simulations are able to improve the representation of precipitation compared to their \(0.44^{\circ }\) counterparts. To this end, we examine differences in the following three factors, which affect precipitation (Sawyer 1956):
large-scale atmospheric circulation by comparing the simulation of sea level pressure (Fig. 17),
cloud processes and convection by analyzing the convective-to-total precipitation ratio (Fig. 18a–d), and
the interaction of the atmospheric flow with the surface (particularly with the orography) by comparing the variability in the 700 hPa vertical wind speed (Fig. 19e–h).
The simulated differences in sea level pressure are typically below 0.6 hPa (see contours in Fig. 17d, h). Even though areas of consistent improvement are detectable in the \(0.11^{\circ }\) simulations (especially during DJF over the Mediterranean and Eastern Europe), the differences between the \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations are an order of magnitude smaller than the biases in the simulated sea level pressure (Fig. 17b, c and f, g). These differences can partly contribute to the improvements found for DJF but are probably too small to be the major source of added value. For this evaluation all simulations, except those of the REMO model, were used (REMO data was not available).
The effect of changing the grid spacing on cloud processes and convection is estimated by the convective-to-total precipitation ratio in the \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations of the CCLM-CLMCOM, WRF-IPSL-INERIS, and RCA4-SMHI models (data for the other models were not available). Convective precipitation is produced by the deep convection schemes (related to sub-grid-scale convection), while large-scale precipitation is explicitly resolved on the model grid. In DJF (Fig. 18a–c) no major differences are seen over land areas (except for the south-west of the Iberian Peninsula). During JJA the \(0.11^{\circ }\) runs tend to reduce the proportion of convective precipitation in most of the investigated areas (Fig. 18d–f). This is in line with findings by Rauscher et al. (2010), who analyzed the ENSEMBLES RCMs. There is no visible relationship between changes in the convective-to-total precipitation ratio and consistently improved areas (dashed contours). The lower ratio of precipitation generated by the deep convection parameterization schemes of the \(0.11^{\circ }\) models means that more precipitation is explicitly generated by the model dynamics.
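The diagnostic itself is straightforward: for each grid cell, divide the parameterized convective precipitation by the total precipitation, masking dry cells where the ratio is undefined. A minimal sketch, assuming the precipitation fields are available as NumPy arrays (the function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def convective_ratio(conv_precip, total_precip, eps=1e-12):
    """Convective-to-total precipitation ratio per grid cell;
    cells with (near) zero total precipitation are masked as NaN."""
    conv = np.asarray(conv_precip, float)
    tot = np.asarray(total_precip, float)
    safe_tot = np.where(tot > eps, tot, 1.0)  # avoid division by zero
    return np.where(tot > eps, conv / safe_tot, np.nan)
```

Applied to seasonal accumulations of both resolutions, the difference of the two ratio fields gives maps of the kind shown in Fig. 18.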
EURO-CORDEX domain (colored area in a) and orography therein (contour). Colored overlays depict the evaluated regions. The Alpine data set includes areas in Germany and France (dashed regions). b depicts the locations of important sub-regions, which are discussed in the text
Scatter plots showing \(0.11^{\circ }\) (x-axis) against \(0.44^{\circ }\) (y-axis) simulated median extreme precipitation biases for DJF (left column) and JJA (right column) averaged over the Alps, Germany, Spain, and Sweden (top down left) and Norway, France, and the Carpathians (top down right). Symbol colors show differences in the spatial error variability (Q75 \(-\) Q25; \(0.11^{\circ }\) minus \(0.44^{\circ }\)) in percent (relative to \(0.44^{\circ }\)). A \(0.11^{\circ }\) simulation has a smaller (larger) absolute bias if its symbol is located in the green (red) areas of the plot
Absolute differences in precipitation (\(0.11^{\circ }\) minus \(0.44^{\circ }\) simulations) divided by the absolute biases in the \(0.44^{\circ }\) simulations. Results for mean/extreme precipitation are shown in green/red. Upper/lower plots show JJA/DJF
Observed extreme precipitation (mean of all values above the 97.5 percentile) in the Alps (first column). The second (third) column shows the relative biases in the \(0.11^{\circ }\) (\(0.44^{\circ }\)) multi-model-mean. Filled contours in the fourth column show differences between the \(0.11^{\circ }\) minus the \(0.44^{\circ }\) multi-model-mean relative to the observation. Red (blue) shaded areas depict regions where more than 75 % of the \(0.11^{\circ }\) (\(0.44^{\circ }\)) simulations have smaller errors than the corresponding \(0.44^{\circ }\) (\(0.11^{\circ }\)) runs. Below the first three columns the mean, maximum (Max), and minimum (Min) values are displayed, while below the fourth panel the areal coverage of improved (red; IMPRO) and deteriorated (blue; DETER) shaded areas in the \(0.11^{\circ }\) simulations is shown. The thick black contour line shows the 800 m height level in the \(0.11^{\circ }\) orography
Same as in Fig. 4 but for Germany in DJF and JJA
Same as in Fig. 4 but for Spain in DJF and JJA
Same as in Fig. 4 but for Sweden and Norway in DJF and JJA
Same as in Fig. 4 but for France in DJF and JJA
Same as in Fig. 4 but for the Carpathians in DJF and JJA
a–d are similar to the right column of Fig. 4. a and b show results for Norway and Sweden and c and d for Germany in MAM. Statistics for mean precipitation are depicted in panels a and c and for extreme precipitation in panels b and d. e depicts an overview of the net consistently improved areas (improved minus deteriorated areas in the \(0.11^{\circ }\) simulations)
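The extreme-precipitation metric used in the figure captions above (the mean of all values above the 97.5th percentile) can be computed per grid cell from a daily precipitation series. The following is a simplified illustration under that definition, not the authors' evaluation code:

```python
import numpy as np

def extreme_precip(daily, q=97.5):
    """Mean of all daily precipitation values above the q-th percentile,
    computed per grid cell along the time axis (axis 0)."""
    daily = np.asarray(daily, float)
    thresh = np.percentile(daily, q, axis=0)      # per-cell threshold
    above = np.where(daily > thresh, daily, np.nan)
    return np.nanmean(above, axis=0)              # mean of exceedances
```

Comparing this statistic between model output and gridded observations yields the relative biases mapped in the second and third columns of Figs. 4, 5, 6, 7, 8 and 9.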
As a proxy for the interaction between the atmosphere and the orography we investigate the standard deviation of the hourly vertical wind speed at 700 hPa (Fig. 19). Since vertical wind speed is not a standard output variable in the CORDEX framework, and an hourly frequency is beneficial (at lower frequencies upward and downward motions might cancel out), we investigate data from a four-year-long (2006 to 2009) simulation with the CCLM-CLMCOM model. The choice of the 700 hPa level is a compromise between being high enough not to intersect with the orography and low enough to still see a strong influence of the orography on vertical motions. In DJF we see much higher variability in the \(0.11^{\circ }\) simulation (panel a) than in the \(0.44^{\circ }\) run (panel b), especially over mountainous regions. This is most likely related to the better resolved orography and therefore steeper slopes in the \(0.11^{\circ }\) simulation. In Fig. 19c areas with higher vertical wind standard deviation in the \(0.11^{\circ }\) run overlap with, or are surrounded by, consistently improved areas (dashed contours). During JJA the synoptic-scale flow is generally weaker but the stratification of air masses is typically more unstable than in DJF. Again, the vertical wind speed is more variable in the fine-gridded simulation (panel d). In contrast to DJF, the largest variability is not confined to mountainous regions but covers almost all land regions south of 50° North, and in large parts of this area consistent improvements in the \(0.11^{\circ }\) simulations can be found.
It is important to mention that the CCLM is a non-hydrostatic model, which is able to simulate vertical movements due to atmospheric instabilities (buoyancy effects). Non-hydrostatic processes (e.g., deep convection) occur on scales smaller than approximately 10 km (e.g., Kalnay 2003). Such processes start to be resolved in the \(0.11^{\circ }\) run but are unresolved in the \(0.44^{\circ }\) run.
Same as in Fig. 4 but for Germany in DJF evaluated on a \(0.11^{\circ }\) evaluation grid
Differences (\(0.11^{\circ }\) minus \(0.44^{\circ }\) simulations) in spatial correlation coefficients of extreme precipitation as a function of smoothing window size. Alps, Germany, Spain, Sweden, Norway, Carpathians, and France are shown in columns (from left to right) and DJF, MAM, JJA, and SON are depicted in top down order. The thick lines show the median model. Dark (light) shaded areas depict the Q25–Q75 (Q0–Q100) distance. Blue (red) colors indicate higher (lower) correlation coefficients in the \(0.11^{\circ }\) simulations
Same as in Fig. 12 but for different precipitation intensities (x-axis)
Median differences of FSSs between \(0.11^{\circ }\) and \(0.44^{\circ }\) daily precipitation events. Blue (red) colors indicate higher (lower) FSSs in the \(0.11^{\circ }\) simulations. From left to right DJF, MAM, JJA, and SON are displayed, while top down the Alps, Carpathians, France, Germany, Norway, Sweden, and Spain are shown
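The fractions skill score (FSS) compared in Fig. 14 measures how well the fractions of threshold exceedances within a neighbourhood agree between model and observation, so that displaced but nearby precipitation features are still rewarded. A compact sketch following the standard definition of Roberts and Lean (2008); the explicit neighbourhood loop is deliberately simple and not optimized:

```python
import numpy as np

def fractions(binary, n):
    """Fraction of exceedance points in an n x n neighbourhood
    (edges handled by shrinking the window)."""
    ny, nx = binary.shape
    out = np.empty((ny, nx))
    h = n // 2
    for j in range(ny):
        for i in range(nx):
            win = binary[max(0, j - h):j + h + 1, max(0, i - h):i + h + 1]
            out[j, i] = win.mean()
    return out

def fss(model, obs, threshold, n):
    """Fractions skill score: 1 = perfect agreement of exceedance
    fractions at neighbourhood size n, 0 = no skill."""
    fm = fractions(np.asarray(model) >= threshold, n)
    fo = fractions(np.asarray(obs) >= threshold, n)
    mse = np.mean((fm - fo) ** 2)
    mse_ref = np.mean(fm ** 2) + np.mean(fo ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

Evaluating `fss` over a range of thresholds (precipitation intensities) and neighbourhood sizes (allowed displacements) produces matrices of the kind whose \(0.11^{\circ }\) minus \(0.44^{\circ }\) differences are shown in Fig. 14.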
Summing up, we have made it plausible that the major drivers for the added value in the \(0.11^{\circ }\) simulated precipitation are the better resolved model orography and the fact that in the fine-gridded simulations the larger scales of convection are captured by the resolved-scale dynamics, which turns out to be beneficial for model performance. Added value can therefore commonly be found in regions with complex orography (Pyrenees, Alps, Scandinavian Mountains) or in their surroundings (e.g., the rain shadow effect in Sweden, the Po valley). A similar result was found by Beck et al. (2004). They performed regional downscaling over the European Alps with 12 km horizontal grid spacing but with a smoothed model orography representative of a 50 km grid spacing, and found that the improvements in their unsmoothed 12 km simulations (compared to a 50 km grid spacing simulation) can largely be attributed to the strong surface forcing in the Alps. Prein et al. (2013b) also showed that a grid spacing of at most 12 km is necessary to reproduce observed precipitation patterns in the headwaters region of the Colorado River, and Chan et al. (2013) showed comparable results for the southern part of Great Britain.
The results presented in this study strongly suggest that the EURO-CORDEX \(0.11^{\circ }\) hindcast simulations add value to the representation of extreme and mean precipitation compared to their \(0.44^{\circ }\) counterparts by:
consistently (more than 6 out of 8 simulations) reducing seasonal biases on the grid scale in large parts (up to 50 % of the total area) of the investigated regions,
improving the seasonal mean spatial patterns of precipitation especially for high precipitation intensities (above \(\sim\)90 percentile) in the Alps, France, and Spain and low intensities (below \(\sim\)80 percentile) in Germany, Sweden, and Norway,
simulating more realistic daily precipitation patterns (spatial distribution and intensity of precipitation) especially for intensities above 10 mm/day and when displacements beyond 200 km are allowed,
adding skillful information beyond the grid spacing of the \(0.44^{\circ }\) simulations,
improving the representation of daily and especially 3-hourly precipitation distributions in Switzerland.
However, on regional scales (e.g., the Alps, the Carpathians) the added value in precipitation biases tends to cancel out by averaging. Therefore, the added value is most pronounced on local scales below \({\sim}400\,\hbox {km}\).
The primary reason for the detected added value seems to be the improved representation of the orography and the capture of the larger scales of convection by the resolved-scale dynamics during JJA. This can be concluded from the locations where biases are reduced and from the generally larger improvements in mountainous regions (Alps, Spain, and Norway). Improvements are, however, not confined to mountainous areas even though they can be related to the orography (e.g., rain shadow effects).
Daily quantile-quantile plots of precipitation rates in the Alps, Germany, Sweden, Norway, Spain, France, and the Carpathians (from left to right). DJF, MAM, JJA, and SON are shown top down. The thick white lines and the blue shaded areas show the median value and the Q0–Q100 interval of the \(0.44^{\circ }\) ensemble. The thick red line and the red hatched area depict the median value and the Q0–Q100 interval of the \(0.11^{\circ }\) ensemble
Same as in Fig. 15 but for three-hourly (left column) and daily (right column) quantile-quantile plots of precipitation rates in Switzerland in May 2003–December 2007
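The quantile-quantile comparisons shown above amount to pairing the empirical percentiles of the simulated and observed precipitation distributions. A minimal sketch of how such paired quantiles can be computed (illustrative only; the paper's figures additionally summarize ensemble medians and Q0–Q100 ranges):

```python
import numpy as np

def qq_points(model, obs, n_quantiles=99):
    """Paired quantiles of two precipitation samples, one point per
    percentile 1..99, as plotted in a quantile-quantile diagram."""
    qs = np.linspace(1, 99, n_quantiles)
    q_obs = np.percentile(np.asarray(obs, float), qs)
    q_mod = np.percentile(np.asarray(model, float), qs)
    return q_obs, q_mod
```

Points above (below) the 1:1 line then indicate that the model overestimates (underestimates) the corresponding part of the precipitation distribution.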
Same as in Fig. 4 but for mean sea level pressure in DJF and JJA
Seasonal mean of the convective-to-total precipitation ratio in the \(0.11^{\circ }\) and \(0.44^{\circ }\) simulations and their difference (from left to right). Data from the CCLM-CLMCOM, WRF-IPSL-INERIS, and RCA4-SMHI models are used. Shaded regions depict the consistently improved areas from Figs. 4, 5, 6, 7, 8 and 9. Results for DJF/JJA are shown in the first/second row
Same as Fig. 18 but for the standard deviation (STDDEV) of the hourly vertical wind velocity at 700 hPa of a four-year-long CCLM-CLMCOM simulation
The added value is larger when analyses are performed on a \(0.11^{\circ }\) evaluation grid instead of a \(0.44^{\circ }\) grid. This is not a trivial result because it requires that the \(0.11^{\circ }\) simulations generate skillful information beyond the grid spacing of the \(0.44^{\circ }\) simulations. Thereby, improvements in simulated JJA extreme precipitation are especially enhanced because of its small-scale nature (e.g., convective thunderstorms).
The detection of added value in the \(0.11^{\circ }\) simulations strongly depends on the availability and accessibility of fine-gridded and high-quality observational data sets. There is an urgent need for a Europe-wide effort to combine existing national data sets into one single homogeneous data set, which is internally consistent and provides an estimate of uncertainty accounting for interpolation, under-catch, and under-sampling errors.
In conclusion, simulated precipitation from the EURO-CORDEX \(0.11^{\circ }\) models can be of great value for the assessment of climate change impacts because they reduce errors of both mean and extreme precipitation, particularly on small scales. Future investigations are planned to assess whether the \(0.11^{\circ }\) models are also capable of improving precipitation when forced by boundary conditions from GCM simulations. This would allow analyzing how errors induced by the GCM simulations (e.g., biases, misrepresentation of synoptic conditions) propagate into the RCM simulation and affect the RCM precipitation and the added value detected in this paper. It is crucial to analyze whether the RCMs are able to compensate for errors in the lateral boundary conditions from GCMs. Diaconescu et al. (2007) showed that in their RCM simulations errors in the lateral boundary conditions neither increased nor amplified. If large-scale errors were present in the lateral boundary conditions, the representation of small-scale features in their RCM was rather poor; exceptions could be found at locations where strong small-scale surface forcing was present. The EURO-CORDEX initiative provides a perfect framework to deepen this analysis and apply it to a large ensemble of GCM-driven transient regional climate simulations.
The authors would like to thank the coordination and participating institutes of the EURO-CORDEX initiative. This study relied on the availability of high-resolution observation data sets. Therefore, the authors want to thank the German Weather Service (DWD) for making available the REGNIE, the Swedish Meteorological and Hydrological Institute (SMHI) for the PTHBV, the Federal Office of Meteorology and Climatology (MeteoSwiss) for the RdisaggH, the Norwegian Meteorological Institute (met.no) for the KLIMAGRID, Météo-France for the SAFRAN, the University of Cantabria (UNICAN) for the Spain011 data set, and the European Commission—JRC for the CARPATCLIM Database (2013). Part of SMHI's contribution was done in the Swedish Mistra-SWECIA programme funded by Mistra (the Foundation for Strategic Environmental Research). Some EURO-CORDEX simulations and analyses were carried out within the framework of the IMPACT2C FP7 project (Grant FP7-ENV.2011.1.1.6-1). The contribution from CRP-GL (now LIST) was funded by the Luxembourg National Research Fund (FNR) through Grant FNR C09/SR/16 (CLIMPACT). Part of this work was supported by the NHCM-2 project funded by the Austrian Science Fund (FWF) (Project Number P24758-N29). NCAR is funded by the National Science Foundation.
Baldauf M, Schulz JP (2004) Prognostic precipitation in the Lokal–Modell (LM) of DWD. Tech. rep., COSMO Newsletter
Balsamo G, Viterbo P, Beljaars A, van den Hurk BJJM, Hirschi M, Betts A, Scipal K (2009) A revised hydrology for the ECMWF model: verification from field site to terrestrial water storage and impact in the integrated forecast system. J Hydrometeorol 10:623–643. doi: 10.1175/2008JHM1068.1
Ban N, Schmidli J, Schär C (2014) Evaluation of the convection-resolving regional climate modeling approach in decade-long simulations. J Geophys Res Atmos 119(13):7889–7907
Beck A, Ahrens B, Stadlbacher K (2004) Impact of nesting strategies in dynamical downscaling of reanalysis data. Geophys Res Lett 31(19). doi: 10.1029/2004GL020115
Böhm U, Kücken M, Ahrens W, Block A, Hauffe D, Keuler K, Rockel B, Will A (2006) CLM—the climate version of LM: brief description and long-term applications. Tech. rep., COSMO Newsletter
Bougeault P (1985) A simple parameterization of the large-scale effects of cumulus convection. Mon Weather Rev 113:2108–2121
Casanueva A, Kotlarski S, Herrera S, Fernández J, Gutiérrez J, Boberg F, Colette A, Christensen OB, Goergen K, Jacob D, Keuler K, Nikulin G, Teichmann C, Vautard R (2015) Daily precipitation statistics in the EURO-CORDEX RCM ensemble: added value of a high resolution and implication for bias correction. Clim Dyn (submitted)
Champeaux JI, Masson V, Chauvin F (2003) ECOCLIMAP: a global database of land surface parameters at 1 km resolution. Meteorol Appl 12:29–32
Chan SC, Kendon EJ, Fowler HJ, Blenkinsop S, Ferro CAT, Stephenson DB (2013) Does increasing the spatial resolution of a regional climate model improve the simulated daily precipitation? Clim Dyn 41(5–6):1475–1495. doi: 10.1007/s00382-012-1568-9
Christensen OB, Christensen JH, Machenhauer B, Botzet M (1998) Very high-resolution regional climate simulations over Scandinavia—present climate. J Clim 11(12):3204–3229
Christensen JH, Carter TR, Rummukainen M, Amanatidis G (2007) Evaluating the performance and utility of regional climate models: the PRUDENCE project. Clim Change 81:1–6
Collins W, Rasch PJ, Boville BA, McCaa J, Williamson DL, Kiehl JT, Briegleb BP, Bitz C, Lin S, Zhang M, Dai Y (2004) Description of the NCAR Community Atmosphere Model (CAM 3.0). Tech. rep., NCAR technical note, NCAR/TN-464+STR
Cuxart J, Bougeault P, Redelsperger JL (2000) A turbulence scheme allowing for mesoscale and large-eddy simulations. QJR Meteorol Soc 126:1–30
Déqué M (2010) Regional climate simulation with a mosaic of RCMs. Meteorol Z 19(3):259–266
Diaconescu EP, Laprise R, Sushama L (2007) The impact of lateral boundary data errors on the simulated climate of a nested regional climate model. Clim Dyn 28(4):333–350
Doms G, Förstner J, Heise E, Herzog HJ, Mironov D, Raschendorfer M, Reinhardt T, Ritter B, Schrodin R, Schulz JP, Vogel G (2011) A description of the nonhydrostatic regional COSMO-model; part II: physical parameterization. Tech. rep., Deutscher Wetterdienst
Douville H, Planton S, Royer JF, Stephenson DB, Tyteca S, Kergoat L, Lafont S, Betts RA (2000) The importance of vegetation feedbacks in doubled-CO\(_2\) time-slice experiments. J Geophys Res 105:14,841–14,861
DWD (2009) Regionalisierte Niederschlagshöhen (REGNIE)
Ek MB, Mitchell KE, Lin Y, Rogers E, Grunmann P, Koren V, Gayno G, Tarpley JD (2003) Implementation of Noah land surface model advances in the National Centers for Environmental Prediction operational mesoscale Eta model. J Geophys Res 108:8851
Fosser G, Khodayar S, Berg P (2014) Benefit of convection permitting climate model simulations in the representation of convective precipitation. Clim Dyn 44(1–2):1–16
Fouquart Y, Bonnel B (1980) Computations of solar heating of the earth's atmosphere: a new parameterization. Beitr Phys Atmos 53:35–62
Giorgetta M, Wild M (1995) The water vapour continuum and its representation in ECHAM4. Max-Planck-Institut für Meteorologie
Giorgi F, Marinucci MR (1996) An investigation of the sensitivity of simulated precipitation to model resolution and its implications for climate studies. Mon Weather Rev 124:148–166
Goodison BE, Louie PY, Yang D (1997) The WMO solid precipitation measurement intercomparison. World Meteorological Organization-Publications-WMO TD, pp 65–70
Grell GA, Devenyi D (2002) A generalized approach to parameterizing convection combining ensemble and data assimilation techniques. Geophys Res Lett 29. doi: 10.1029/2002GL015311
Hagemann S (2002) An improved land surface parameter dataset for global and regional climate models. Tech. rep., MPI Rep 336:21
Haylock MR, Hofstra N, Klein Tank AMG, Klok EJ, Jones PD, New M (2008) A European daily high-resolution gridded data set of surface temperature and precipitation for 1950–2006. J Geophys Res Atmos 113(D20). doi: 10.1029/2008JD010201
Hellström C (2005) Atmospheric conditions during extreme and non-extreme precipitation events in Sweden. Int J Climatol 25(5):631–648
Herrera S, Gutiérrez JM, Ancell R, Pons MR, Frías MD, Fernández J (2012) Development and analysis of a 50-year high-resolution daily gridded precipitation dataset over Spain (Spain02). Int J Climatol 32(1):74–85
Hofstra N, Haylock M, New M, Jones PD (2009) Testing E-OBS European high-resolution gridded data set of daily precipitation and surface temperature. J Geophys Res Atmos (1984–2012) 114(D21)
Hofstra N, New M, McSweeney C (2010) The influence of interpolation and station network density on the distributions and trends of climate variables in gridded daily data. Clim Dyn 35(5):841–858
Hong SY, Lim JOJ (2006) The WRF single-moment 6-class micro-physics scheme (WSM6). J Korean Meteorol Soc 42:129–151
Hong SY, Dudhia J, Chen SH (2004) A revised approach to microphysical processes for the bulk parameterization of cloud and precipitation. Mon Weather Rev 132:103–120
Hong SY, Noh Y, Dudhia J (2006) A new vertical diffusion package with an explicit treatment of entrainment processes. Mon Weather Rev 134:2318–2341
Isotta FA, Frei C, Weilguni V, Perčec Tadić M, Lassègues P, Rudolf B, Pavan V, Cacciamani C, Antolini G, Ratto SM, et al. (2013) The climate of daily precipitation in the Alps: development and analysis of a high-resolution grid dataset from pan-Alpine rain-gauge data. Int J Climatol 34(5):1657–1675
Jacob D, Barring L, Christensen OB, Christensen JH, Castro MD, Déqué M, Giorgi F, Hagemann S, Hirschi M, Jones R, Kjellström E, Lenderink G, Rockel B, Sanchez E, Schär C, Seneviratne SI, Somot S, Ulden AV, Hurk BVD (2007) An intercomparison of regional climate models for Europe: model performance in present-day climate. Clim Change 81:31–52
Jacob D, Elizalde A, Haensler A, Hagemann S, Kumar P, Podzun R, Rechid D, Remedio AR, Saeed F, Sieck K, Teichmann C, Wilhelm C (2012) Assessing the transferability of the regional climate model REMO to different coordinated regional climate downscaling experiment (CORDEX) regions. Atmosphere 3(1):181–199
Jacob D, Petersen J, Eggert B, Alias A, Christensen OB, Bouwer LM, Braun A, Colette A, Déqué M, Georgievski G et al (2014) EURO-CORDEX: new high-resolution climate change projections for European impact research. Reg Environ Change 14(2):563–578
Johansson B (2002) Estimation of areal precipitation for hydrological modelling in Sweden. Ph.D. thesis A76, Earth Science Centre, Göteborg University
Joint Research Centre (2003) Global land cover 2000 database. European Commission, Joint Research Centre. Tech. rep., Joint Research Centre
Kain JS (2004) The Kain–Fritsch convection parameterization: an update. J Appl Meteorol 43:170–181
Kain JS, Fritsch JM (1990) A one-dimensional entraining/detraining plume model and its application in convective parameterization. J Atmos Sci 47:2784–2802
Kain JS, Fritsch J (1993) Convective parameterization for mesoscale models: the Kain–Fritsch scheme. In: The representation of cumulus convection in numerical models. Meteorol Monogr 24:165–170
Kalnay E (2003) Atmospheric modeling, data assimilation, and predictability. Cambridge University Press, Cambridge
Kendon EJ, Roberts NM, Senior CA, Roberts MJ (2012) Realism of rainfall in a very high-resolution regional climate model. J Clim 25:5791–5806
Kendon EJ, Roberts NM, Fowler HJ, Roberts MJ, Chan SC, Senior CA (2014) Heavier summer downpours with climate change revealed by weather forecast resolution model. Nat Clim Change 4:570–576
Kotlarski S, Keuler K, Christensen OB, Colette A, Déqué M, Gobiet A, Görgen K, Jacob D, Lüthi D, van Meijgaard E, Nikulin G, Schär C, Teichmann C, Vautard R, Warrach-Sagi K, Wulfmeyer V (2014) Regional climate modeling on European scales: a joint standard evaluation of the EURO-CORDEX RCM ensemble. Geosci Model Dev 7(1):217–293. doi: 10.5194/gmdd-7-217-2014
Lacono MJ, Delamere JS, Mlawer EJ, Shephard MW, Clough SA, Collins WD (2008) Radiative forcing by long-lived greenhouse gases: calculations with the AER radiative transfer models. J Geophys Res 113(D13):103. doi: 10.1029/2008JD009944
Lenderink G, Holtslag AAM (2004) An updated length-scale formulation for turbulent mixing in clear and cloudy boundary layers. QJR Meteorol Soc 130:3405–3427. doi: 10.1256/qj.03.117
Lohmann U, Roeckner E (1996) Design and performance of a new cloud microphysics scheme developed for the ECHAM general circulation model. Clim Dyn 12(8):557–572
Louis JF (1979) A parametric model of vertical eddy fluxes in the atmosphere. Bound Layer Meteorol 17:187–202
Mahoney K, Alexander MA, Thompson G, Barsugli JJ, Scott JD (2012) Changes in hail and flood risk in high-resolution simulations over Colorado's mountains. Nat Clim Change 2(2):125–131
Masson V, Champeaux JL, Chauvin F, Mériguet C, Lacaze R (2003) A global database of land surface parameters at 1 km resolution for use in meteorological and climate models. J Clim 16:1261–1282
MeteoSwiss (2010) Hourly precipitation (experimental): RdisaggH
Mlawer EJ, Taubman SJ, Brown PD, Iacono MJ, Clough SA (1997) Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J Geophys Res 102D:16663–16682
Mohr M (2009) Comparison of versions 1.1 and 1.0 of gridded temperature and precipitation data for Norway. Tech. rep., met.no
Morcrette JJ (1990) Impact of changes to the radiation transfer parameterizations plus cloud optical properties in the ECMWF model. Mon Weather Rev 118(4):847–873
Morcrette JJ, Smith L, Fouquart Y (1986) Pressure and temperature dependence of the absorption in longwave radiation parametrizations. Beitr Phys Atmos 59(4):455–469
Neggers RAJ (2009) A dual mass flux framework for boundary layer convection. Part II: clouds. J Atmos Sci 66:1489–1506. doi: 10.1175/2008JAS2636.1
Neggers RAJ, Koehler M, Beljaars ACM (2009) A dual mass flux framework for boundary layer convection. Part I: transport. J Atmos Sci 66:1465–1487. doi: 10.1175/2008JAS2635.1
Nordeng TE (1994) Extended versions of the convection parametrization scheme at ECMWF and their impact upon the mean climate and transient activity of the model in the tropics. Tech. rep., Research Department Technical Memorandum No. 206, ECMWF, Shinfield Park, Reading, Berks, UK
Pearson K (1895) Notes on regression and inheritance in the case of two parents. Proc R Soc Lond 58:240–242
Pfeifer S (2006) Modeling cold cloud processes with the regional climate model REMO. Tech. rep., Report, Max-Planck Institute for Meteorology, Hamburg
Prein AF, Gobiet A (2015) Is it the models fault? Uncertainties in European precipitation datasets. Int J Climatol (submitted)
Prein AF, Gobiet A, Suklitsch M, Truhetz H, Awan NK, Keuler K, Georgievski G (2013a) Added value of convection permitting seasonal simulations. Clim Dyn. doi: 10.1007/s00382-013-1744-6
Prein AF, Holland GJ, Rasmussen RM, Done J, Ikeda K, Clark MP, Liu CH (2013b) Importance of regional climate model grid spacing for the simulation of precipitation extremes. J Clim. doi: 10.1175/JCLI-D-12-00727.1
Prein AF, Langhans W, Fosser G, Ferrone A, Ban N, Goergen K, Keller M, Gutjahr MTO, Feser F, Brisson E, Kollet S, Schmidli J, van Lipzig NPM, Leung RL (2015) A review on convection permitting climate modeling: demonstrations, prospects, and challenges. Rev Geophys. doi: 10.1002/2014RG000475
Quintana-Seguí P, Le Moigne P, Durand Y, Martin E, Habets F, Baillon M, Canellas C, Franchisteguy L, Morel S (2008) Analysis of near-surface atmospheric variables: validation of the SAFRAN analysis over France. J Appl Meteorol Climatol 47(1):92–107
Rasch PJ, Kristjánsson JE (1998) A comparison of the CCM3 model climate using diagnosed and predicted condensate parameterizations. J Clim 11:1587–1614
Rauscher SA, Coppola E, Piani C, Giorgi F (2010) Resolution effects on regional climate model simulations of seasonal precipitation over Europe. Clim Dyn 35(4):685–711. doi: 10.1007/s00382-009-0607-7
Rechid D, Hagemann S, Jacob D (2009) Sensitivity of climate models to seasonal variability of snow-free land surface albedo. Theor Appl Climatol 95:197–221
Ricard JL, Royer JF (1993) A statistical cloud scheme for use in an AGCM. Ann Geophys 11:1095–1115
Ritter B, Geleyn JF (1992) A comprehensive radiation scheme for numerical weather prediction models with potential applications in climate simulations. Mon Weather Rev 120:303–325CrossRefGoogle Scholar
Roberts NM, Lean HW (2008) Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon Weather Rev 136(1):78–97CrossRefGoogle Scholar
Rockel B, Will A, Hense A (2008) Special issue on regional climate modelling with COSMO-CLM (CCLM). Meteorol Z 17:477–485CrossRefGoogle Scholar
Samuelsson P, Jones CG, Willén U, Ullerstig A, Gollvik S, Hansson U, Jansson C, Kjellström E, Nikulin G, Wyser K (2011) The Rossby Centre Regional Climate model RCA3: model description and performance. Tellus 63A:4–23. doi: 10.1111/j.1600-0870.2010.00478.x CrossRefGoogle Scholar
Sass NH, Rontu L, Savijarvi Raisanen P (1994) Hirlam-2 radiation scheme: documentation and tests. Tech. rep., SMHI HIRLAM Tech Rep 16Google Scholar
Savijarvi H (1990) A fast radiation scheme for mesoscale model and short-range forecast models. J Appl Meteorol 29:437–447CrossRefGoogle Scholar
Sawyer JS (1956) The physical and dynamical problems of orographic rain. Weather 11:375–381CrossRefGoogle Scholar
Sevruk B, Hamon WR (1984) International comparison of national precipitation gauges with a reference pit gauge. World Meteorological Organization, WMO/TD No. 38, IOM Report No. 17, 20 ppGoogle Scholar
Siebesma AP, Soares PMM, Teixeira J (2007) A combined eddy-diffusivity mass-flux approach for the convective boundary layer. J Atmos Sci 64:1230–1248. doi: 10.1175/JAS3888.1 CrossRefGoogle Scholar
Skamarock WC, Klemp J, Dudhia J, Gill D, Barker D, Wang W, Powers J (2008) A description of the advanced research wrf version 3. ncar technical note 475. Tech. rep., NCAR Technical Note 475, 113 ppGoogle Scholar
Stocker TF, Dahe Q, Plattner GK (2013) Climate change 2013: the physical science basis. Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change Summary for Policymakers (IPCC, 2013)Google Scholar
Suklitsch M, Gobiet A, Leuprecht A, Frei C (2008) High resolution sensitivity studies with the regional climate model cclm in the alpine region. Meteorol Z 17:467–476CrossRefGoogle Scholar
Szalai S, Auer I, Hiebl J, Milkovich J, Radim T, Stepanek P, Zahradnicek P, Bihari Z, Lakatos M, Szentimrey T, Limanowka D, Kilar P, Cheval S, Deak G, Mihic D, Antolovic I, Mihajlovic V, Nejedlik P, Stastny P, Mikulova K, Nabyvanets I, Skyryk O, Krakovskaya S, Vogt J, Antofie T, Spinoni J (2013) Climate of the greater carpathian region. Final technical report. www.carpatclim-eu.org, European Commission, Joint Research Centre (JRC)
Themeßl JM, Gobiet A, Leuprecht A (2011) Empirical-statistical downscaling and error correction of daily precipitation from regional climate models. Int J Climatol 31(10):1530–1544CrossRefGoogle Scholar
Tiedtke M (1989) A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon Weather Rev 117:1779–1799CrossRefGoogle Scholar
Tiedtke M (1993) Representation of clouds in large-scale models. Mon Weather Rev 121:3040–3061CrossRefGoogle Scholar
Tompkins AM, Gierens K, Radel G (2007) Ice supersaturation in the ECMWF integrated forecast system. QJR Meteorol Soc 133:53–63CrossRefGoogle Scholar
Van den Hurk BJJM, Viterbo P, Beljaars ACM, Betts AK (2000) Offline validation of the era40 surface scheme. Tech. rep., ECMWF Technical report no. 75, ECMWFGoogle Scholar
van der Linden P, Mitchell JFB (2009) Ensembles: climate change and its impacts: summary of research and results from the ensembles project. Tech. rep., Met Office Hadley CentreGoogle Scholar
van Meijgaard E, Van Ulft LH, Lenderink G, de Roode SR, Wipfler L, Boers R, Timmermans RMA (2012) Refinement and application of a regional atmospheric model for climate scenario calculations of Western Europe. Climate changes Spatial Planning publication: KvR 054/12Google Scholar
Vautard R, Gobiet A, Jacob D, Belda M, Colette A, Déqué M, Fernández J, García-Díez M, Goergen K, Güttler I et al (2013) The simulation of European heat waves from an ensemble of regional climate models within the EURO-CORDEX project. Clim Dyn 41(9–10):1–21Google Scholar
Vidal JP, Martin E, Franchistéguy L, Baillon M, Soubeyroux JM (2010) A 50-year high-resolution atmospheric reanalysis over france with the Safran system. Int J Climatol 30(11):1627–1644CrossRefGoogle Scholar
Weisman ML, Skamarock WC, Klemp JB (1997) The resolution dependence of explicitly modeled convective systems. Mon Weather Rev 125(4):527–548CrossRefGoogle Scholar
\begin{document}
\begin{frontmatter}
\title{Free Entanglement Measure of Multiparticle Quantum States} \author{Chang-shui Yu \corauthref{yu}} \author{, He-shan Song} \corauth[yu]{Corresponding author. Email: [email protected]} \address{Department of Physics, Dalian University of Technology, Dalian 116024, China}
\begin{abstract} In this paper, based on the classification of multiparticle states and the original definition of semiseparability, we give a redefinition of the semiseparability and inseparability of multiparticle states. By virtue of the redefinition, the entanglement measure of multiparticle states can be converted, mathematically, into bipartite entanglement measures in arbitrary dimension. A simple expression for the entanglement measure is given. As examples, a general three-particle pure state and an N-particle mixed state are considered. \end{abstract} \begin{keyword}
entanglement measure \sep multiparticle entanglement \sep free entanglement
\PACS 03.67.-a \sep 03.65.-Ta \end{keyword}
\end{frontmatter}
\section{\protect
Introduction}
Entanglement is a valuable resource that has been widely applied to quantum communication and quantum information processing. Quantum teleportation [1], entanglement swapping [2], quantum key distribution [3], quantum error correction and so on all make essential use of quantum entanglement. Therefore, the quantification of entanglement, a central problem in quantum information theory, is a primary goal of this field.
Quantum entanglement has attracted much attention in recent years. Many studies of quantum entanglement have been carried out and, at the same time, many entanglement measures have been proposed [4-9]. However, only the case of bipartite entanglement between two-level systems has been fully resolved [5]; many open questions remain in quantifying bipartite entanglement in arbitrary dimensions and in quantifying multiparticle entanglement. Fortunately, the method for classifying three-particle states [10] and the one presented recently for quantifying bipartite entanglement in arbitrary dimension [11] increase our understanding of multiparticle entanglement.
For any entanglement measure, there must exist a corresponding separability criterion. However, so far there has been no operational criterion for the full separability of multiparticle states, only a definition of semiseparability [12]. Hence it is very difficult to obtain a complete entanglement measure for multiparticle systems.
In this paper, we study the multiparticle free entanglement measure [12] from a new angle. (Strictly, free entanglement here denotes the entanglement of states excluding the fourth class defined in [10]; for convenience, free entanglement is referred to simply as entanglement for multiparticle systems later.) Based on the classification of multiparticle states, we express the semiseparability condition and the full inseparability condition of multiparticle systems in a unified way. By virtue of these conditions, we find a mathematical counterpart of entanglement and convert the multiparticle entanglement measure into bipartite entanglement measures in arbitrary dimension in mathematics. We then give a simple multiparticle entanglement measure based on bipartite quantum entanglement measures. Finally, we give examples demonstrating that our measure works effectively for pure states and mixed states, respectively.
\section{Separability and Inseparability}
We begin with the usual definition of multiparticle entanglement. For $N$
-particle pure state $\left| \Psi ^{ABC\cdots N}\right\rangle $, if it can be written in the form of direct product of all the subsystems, i.e. \begin{equation}
\left| \Psi ^{ABC\cdots N}\right\rangle =\left| \psi ^{A}\right\rangle
\otimes \left| \psi ^{B}\right\rangle \otimes \cdots \otimes \left| \psi ^{N}\right\rangle , \end{equation} then $N$-particle pure state is separable; If $N$-particle mixed state $\rho ^{ABC\cdots N}$ is separable, the state can be written in the following form: \begin{equation}
\rho ^{ABC\cdots N}=\underset{i}{\sum }p_{i}\left| \psi _{i}^{A}\right\rangle \left\langle \psi _{i}^{A}\right| \otimes \cdots
\otimes \left| \psi _{i}^{N}\right\rangle \left\langle \psi _{i}^{N}\right| , \end{equation} where $\underset{i}{\sum }p_{i}=1$, $p_{i}>0$ and $\psi _{i}^{\alpha }$ with $i=0,1,\cdots $, is any normalized state of the subsystem $\alpha $. Hence, if a multiparticle state cannot be written in the above forms, the state is called an entangled state. However, the definition \textit{per se} is not operational, so we have to turn to an operational one.
Three-particle states can be classified according to whether or not they are separable with respect to the different qubits [10]. They fall into five classes according to whether they can be written in one or more of the following forms [10]:
\begin{equation}
\rho =\underset{i}{\sum }\left| \psi _{i}^{1}\right\rangle \left\langle \psi _{i}^{1}\right| \otimes \left| \psi _{i}^{2}\right\rangle \left\langle \psi _{i}^{2}\right| \otimes \left| \psi _{i}^{3}\right\rangle \left\langle \psi _{i}^{3}\right| , \end{equation}
\begin{equation}
\rho =\underset{i}{\sum }\left| \psi _{i}^{1}\right\rangle \left\langle \psi _{i}^{1}\right| \otimes \left| \psi _{i}^{23}\right\rangle \left\langle \psi _{i}^{23}\right| , \end{equation}
\begin{equation}
\rho =\underset{i}{\sum }\left| \psi _{i}^{2}\right\rangle \left\langle \psi _{i}^{2}\right| \otimes \left| \psi _{i}^{13}\right\rangle \left\langle \psi _{i}^{13}\right| , \end{equation}
\begin{equation}
\rho =\underset{i}{\sum }\left| \psi _{i}^{3}\right\rangle \left\langle \psi _{i}^{3}\right| \otimes \left| \psi _{i}^{12}\right\rangle \left\langle \psi _{i}^{12}\right| , \end{equation}
where $\left| \psi ^{1}\right\rangle $, $\left| \psi ^{2}\right\rangle $ and $\left| \psi ^{3}\right\rangle $ are states of systems 1, 2 and 3, respectively, and $\left| \psi ^{12}\right\rangle $, $\left| \psi ^{23}\right\rangle $ and $\left| \psi ^{13}\right\rangle $ are states of two systems. No matter how many classes there are, for convenience one can describe them with three cases: 1) fully separable states, corresponding to (3); 2) incompletely separable states, corresponding to (4), (5) and (6); 3) fully inseparable states, corresponding to none of the above forms.
Considering an $N$-particle pure state, separable or not, one can always expand it in a common basis. If the dimension of the $i$th subsystem is $D_{i}$, then the common basis has dimension $\underset{i}{\Pi }D_{i}$. Hence, an $N$-particle state with fixed subsystem dimensions can always be converted, mathematically, into a single state of much higher dimension. Thus, we can express an $N$-particle pure state $\left| \Psi ^{ABC\cdots N}\right\rangle $ in $s$ dimensions as $\left| \Psi ^{ABC\cdots N}\right\rangle =\underset{i}{\sum }\sqrt{\lambda _{i}}\left| \Psi _{i}^{1}\right\rangle \otimes \left| \Psi _{i}^{2}\right\rangle $ in terms of the generalized Schmidt decomposition [13], where $\left| \Psi _{i}^{1}\right\rangle $ and $\left| \Psi _{i}^{2}\right\rangle $ are defined in $n_{1}$ and $n_{2}$ dimensions, respectively, with $n_{1}\times n_{2}=s$. That is, an $N$-particle pure state $\left| \Psi ^{ABC\cdots N}\right\rangle $ can always be written formally as a bipartite state, which corresponds to a bipartite grouping of the $N$-particle system; $\left| \Psi _{i}^{1}\right\rangle $ and $\left| \Psi _{i}^{2}\right\rangle $ correspond to the two groups, respectively. Analogously, an $N$-particle mixed state $\rho ^{ABC\cdots N}=\underset{i}{\sum }p_{i}\left| \psi _{i}^{A\cdots N}\right\rangle \left\langle \psi _{i}^{A\cdots N}\right| $ can be treated in the same way, since each pure state $\psi _{i}^{A\cdots N}$ can. Therefore, with respect to bipartite groupings, multiparticle ($N$-particle) states can also be classified into three classes, analogous to the three-particle classification, i.e. \begin{equation}
\rho ^{ABC\cdots N}=\underset{i}{\sum }p_{i}\left| \psi _{i}^{A}\right\rangle \left\langle \psi _{i}^{A}\right| \otimes \cdots
\otimes \left| \psi _{i}^{N}\right\rangle \left\langle \psi _{i}^{N}\right| , \end{equation} \begin{equation}
\rho ^{ABC\cdots N}=\underset{i}{\sum }p_{i}\left| \psi _{i}^{\sum_{j}}\right\rangle \left\langle \psi _{i}^{\sum_{j}}\right|
\otimes \left| \psi _{i}^{\sum -\sum_{j}}\right\rangle \left\langle \psi _{i}^{\sum -\sum_{j}}\right| , \end{equation} and \begin{equation}
\rho ^{ABC\cdots N}=\underset{i}{\sum }p_{i}\left| \psi _{i}^{A\cdots N}\right\rangle \left\langle \psi _{i}^{A\cdots N}\right| , \end{equation}
where $\sum_{j}=q$ denotes any $q$ subsystems among $A\cdots N$, $\psi _{i}^{\sum_{j}}$ stands for a common state of the $\sum_{j}$ subsystems, $\left| \psi _{i}^{\sum -\sum_{j}}\right\rangle $ stands for the common state of the remaining subsystems, and $\left| \psi _{i}^{A\cdots N}\right\rangle $ denotes a common fully inseparable state of all $N$ subsystems. Suppose we divide the $N$ subsystems of $\rho ^{ABC\cdots N}$ into two big subsystems $\rho ^{\sum_{j}}$ and $\rho ^{\sum -\sum_{j}}$, one containing a single subsystem, i.e. $\sum_{j}=1$ denotes any one of the $N$ subsystems, and the other containing the remaining $N-1$ subsystems. It is obvious that if \begin{equation} \rho ^{ABC\cdots N}=\underset{i}{\sum }p_{i}\rho _{i}^{\sum_{j}}\otimes \rho _{i}^{\sum -\sum_{j}} \end{equation} holds for every $\sum_{j}=1$ (there exist $C_{N}^{1}=N$ ways to realize such a bipartite grouping), then the whole $N$-particle system is semiseparable, as defined in [12]. Note, however, that if no $\sum_{j}=1$ exists such that (10) holds, we cannot conclude that the $N$-particle system is fully inseparable; consider, e.g., a four-particle pure state $\psi =\psi ^{+}\otimes \psi ^{+},$ where $\psi ^{+}$ is one of the four Bell states. $\rho ^{ABC\cdots N}$ may be incompletely separable. Based on the above discussion, in order to decide whether the $N$-particle system is fully inseparable, we must strengthen the above condition: $\sum_{j}$ cannot denote only one of the $N$ subsystems, but must cover every case of $\sum_{j}=1,2,\cdots ,\left[ \frac{N}{2}\right] $ with $\left[ \frac{N}{2}\right] =\left\{ \begin{array}{cc} N/2, & N\text{ \ is even} \\ (N-1)/2, & N\text{ \ is odd} \end{array} \right. $. However, in order to treat inseparability and semiseparability with the same criterion, we express the condition of semiseparability in terms of the condition of full inseparability, which is equivalent to the original one, and we express the above conditions in a more rigorous way.
\textit{Definition 1.} The $N$-particle system $\rho ^{ABC\cdots N}$, which can be divided into two big subsystems $\rho ^{\sum_{j}}$ and $\rho ^{\sum -\sum_{j}}$ in $\underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{\sum }}C_{N}^{i}$ ways with $C_{N}^{i}=\frac{N!}{(N-i)!i!}$ and $\sum_{j}\in \lbrack 1,\left[ \frac{N}{2}\right] ]$, is called semiseparable iff (10) holds for all $\sum_{j}\in \lbrack 1,\left[ \frac{N}{2}\right] ]$, and fully inseparable iff there exists no $\sum_{j}\in \lbrack 1,\left[ \frac{N}{2}\right] ]$ such that (10) holds.
\section{Multiparticle Free Entanglement Measurement}
According to the above redefinition and analysis, we have mathematically converted the study of multiparticle inseparability into the study of a series of bipartite inseparability problems. In other words, a multiparticle entanglement measure can be obtained from a series of bipartite entanglement measures, one for each bipartite grouping. Note, however, that this does not mean that multiparticle entanglement is equivalent to bipartite entanglement (i.e., convertible into it).
Because the ways of dividing the whole system into two big subsystems (the bipartite groupings) are arbitrary and on an equal footing, the entanglement measure of the whole system is given by \begin{equation} \overline{E}=\underset{j=1}{\overset{\underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{\sum }}C_{N}^{i}}{\sum }}(E_{j}/\underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{\sum }}C_{N}^{i}), \end{equation} where $\overline{E}$ denotes the multiparticle entanglement measure of the given system and $E_{j}$ denotes the bipartite entanglement measure corresponding to the $j$th bipartite grouping.
For multiparticle states, the bipartite states obtained by our bipartite grouping are pure or mixed according to whether the original multiparticle state is pure or mixed. We therefore need an effective bipartite entanglement measure. For bipartite pure states, the partial entropy measure or the concurrence $C(\rho )=\sqrt{2(|\left\langle \psi |\psi \right\rangle |^{2}-Tr\rho _{r}^{2})}$, defined in [14], works well in arbitrary dimension. So $E_{j}=C(\rho _{j})$ or $E_{j}=S_{j}(\Psi ^{12})=-tr\{\rho _{1}\log \rho _{1}\}$, where $\rho _{1}=tr_{2}\{\left| \Psi ^{12}\right\rangle \left\langle \Psi ^{12}\right| \}$ is the reduced density matrix and the subscripts $1$ and $2$ denote the two big subsystems after bipartite grouping. For bipartite mixed states in higher dimension, it is difficult to find a satisfactory operational entanglement measure. Although the ''Concurrence of mixed bipartite quantum states in arbitrary dimensions'' [11], proposed recently, sheds some light on our problem, this measure is complicated and not operational. Here, for completeness, we temporarily employ the concurrence of bipartite mixed states in arbitrary dimension as the bipartite entanglement measure, i.e. $E_{j}=c(\rho _{j})$, with $\rho _{j}$ standing for the bipartite density matrix obtained by the $j$th bipartite grouping. In some cases, in order to give an explicit expression for the entanglement measure of a state, we can also employ the negativity $N(\rho )=\frac{||\rho ^{T_{A}}||-1}{2}$ defined in [15], which corresponds to the absolute value of the sum of the negative eigenvalues of $\rho ^{T_{A}}$ [17]. Of course, a better bipartite entanglement measure would complement our measure. But whichever measure one chooses, one must employ the same measure throughout a given case, and it must work without error in that case.
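As a numerical illustration of the pure-state concurrence used above (a sketch of our own, not part of the original text; Python/NumPy and the function name are assumptions):

```python
import numpy as np

def pure_concurrence(psi, dim1, dim2):
    """C(psi) = sqrt(2 (1 - Tr rho_r^2)) for a normalized bipartite
    pure state |psi>, with the two groups of dimensions dim1 and dim2."""
    m = psi.reshape(dim1, dim2)        # coefficient matrix of |psi>
    rho1 = m @ m.conj().T              # reduced density matrix Tr_2 |psi><psi|
    purity = np.trace(rho1 @ rho1).real
    return np.sqrt(max(2.0 * (1.0 - purity), 0.0))

# Maximally entangled Bell state: C = 1; product state |00>: C = 0
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
prod = np.array([1.0, 0.0, 0.0, 0.0])
print(pure_concurrence(bell, 2, 2))   # → 1.0 (up to rounding)
print(pure_concurrence(prod, 2, 2))   # → 0.0
```

The `max(..., 0.0)` guard only protects the square root against tiny negative rounding errors.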
In our multiparticle entanglement measure, we have to divide the whole N-particle system into two big subsystems by virtue of the above method. \ Equivalently, we can construct a series of permutation operations to realize the bipartite grouping. Consider $\rho ^{ABC\cdots N}=\underset{i}{\sum }
p_{i}\left| \psi _{i}^{AB\cdots N}\right\rangle \left\langle \psi _{i}^{AB\cdots N}\right| $ in $s$ dimension, with subsystem $A$ and subsystem $B$ in $n_{1}$ and $n_{2}$ dimension respectively, we can get that \begin{equation}
\rho ^{B(AC\cdots N)}=\underset{i}{\sum }p_{i}\left| \psi _{i}^{B(AC\cdots N)}\right\rangle \left\langle \psi _{i}^{B(AC\cdots N)}\right| \notag \end{equation} \begin{equation*}
=\underset{i}{\sum }p_{i}(P(n_{1},n_{2})^{T}\otimes 1^{C\cdots N})\left|
\psi _{i}^{AB\cdots N}\right\rangle \times \left\langle \psi _{i}^{AB\cdots N}\right| (P(n_{1},n_{2})\otimes 1^{C\cdots N}) \end{equation*} \begin{equation} =(P(n_{1},n_{2})^{T}\otimes 1^{C\cdots N})\rho ^{A(BC\cdots N)}(P(n_{1},n_{2})\otimes 1^{C\cdots N}), \end{equation} where the bracket in the superscripts denotes a whole subsystem, $ P(n_{1},n_{2})$ is permutation matrix defined as \begin{equation} P(n_{1},n_{2})=\underset{i=1}{\overset{n_{1}}{\sum }}\underset{j=1}{\overset{ n_{2}}{\sum }}E_{ij}\otimes E_{ij}^{T}=\left( \begin{array}{cccc} E_{11}^{T} & E_{12}^{T} & \cdots & E_{1n_{2}}^{T} \\ E_{21}^{T} & E_{22}^{T} & \cdots & E_{2n_{2}}^{T} \\ \vdots & \vdots & \ddots & \vdots \\ E_{n_{1}1}^{T} & E_{n_{1}2}^{T} & \cdots & E_{n_{1}n_{2}}^{T} \end{array} \right) , \end{equation} $E_{ij}$ is a matrix in $n_{1}\times n_{2}$ dimension with subscript $ij$ \ denoting the matrix element $e_{ij\ }=1$ and the rests are zero in the matrix $E_{ij}$. By such a transformation, an $n_{1}\times \left( s/n_{1}\right) $ bipartite $\rho ^{A(BC\cdots N)}$ is transformed to an $ n_{2}\times \left( s/n_{2}\right) $ bipartite $\rho ^{B(AC\cdots N)}$. In terms of dividing the whole system in $\underset{i=1}{\overset{\left[ \frac{N }{2}\right] }{\sum }}C_{N}^{i}$ ways, we have to construct corresponding permutation matrix with the same quality. 
Generally, we first construct a permutation which moves the $j$th particle to the position of the $i$th one and moves the $i$th particle to the position of the $(i+1)$th one as \begin{equation} P^{\prime }(i,j)=\left( \overset{i-1}{\underset{t_{1}=1}{\otimes }} 1^{t_{1}}\right) \otimes \left( \overset{j-1}{\underset{t_{2}=i}{\otimes }} p(\dim (t_{2}),\dim (t_{2}+1))\right) \otimes \left( \overset{last}{\underset {t_{3}=j+1}{\otimes }}1^{t_{3}}\right) , \end{equation} here $1^{\alpha }$ stands for unit matrix with the same dimension to the $ \alpha $th particle, $\dim (i)$ denotes the dimension of the $i$th particle and $\overset{0}{\underset{t_{1}=1}{\otimes }}1^{t_{1}}=1.$ What's more, we require that $\overset{i+1}{\underset{t_{2}=i}{\otimes }}p(\dim (t_{2}),\dim (t_{2}+1))=$ $p(\dim (i),\dim (i+1))\otimes p(\dim (i+1),\dim (i+2))$ and $ \overset{i-1}{\underset{t_{1}=1}{\otimes }}1^{t_{1}}$ and $\overset{last}{ \underset{t_{3}=j+1}{\otimes }}1^{t_{3}}$ are defined analogously. Therefore, for the $k$th grouping, one of the two big subsystems includes $M$ \ particles each of which lies on the $X_{i}$th position, we can construct a unitary transformation by the permutation to realize the grouping as \begin{equation*} U_{k}=\overset{N}{\underset{i=1}{\Pi }}P^{\prime }(i,X_{i}), \end{equation*} analogously, $\overset{j+1}{\underset{i=j}{\Pi }}P^{\prime
}(i,X_{i})=P^{\prime }(j,X_{j})\times P^{\prime }(j+1,X_{j+1})$. Note that the order of the particles within each big subsystem does not influence the separability relation between the two big subsystems; different orders amount to local unitary transformations. Applying each $U_{k}$ to the density matrix $\rho ^{ABC\cdots N}$, we get a density matrix $\rho _{k}=U_{k}^{T}\rho ^{ABC\cdots N}U_{k}$ according to the $k$th grouping. Thus we obtain a set $\rho =\{\rho _{k}|k=1,2,\cdots ,\underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{\sum }}C_{N}^{i}\}$, each element of which corresponds to an $E_{j}$ in (11).
\section{Examples}
As examples, for pure states, consider a general three-particle pure state \begin{equation*}
\left| \Psi ^{ABC}\right\rangle =(C_{1}\left| 0\right\rangle _{A}+C_{2}\left| 1\right\rangle _{A})\left| \phi _{BC}^{+}\right\rangle
+(C_{3}\left| 0\right\rangle _{A}+C_{4}\left| 1\right\rangle _{A})\left| \phi _{BC}^{-}\right\rangle \end{equation*} \begin{equation}
+(C_{5}\left| 0\right\rangle _{A}+C_{6}\left| 1\right\rangle _{A})\left|
\psi _{BC}^{+}\right\rangle +(C_{7}\left| 0\right\rangle _{A}+C_{8}\left|
1\right\rangle _{A})\left| \psi _{BC}^{-}\right\rangle , \end{equation}
with $\overset{8}{\underset{i=1}{\sum }}\left| C_{i}\right| ^{2}=1$ , $
\left| \phi _{BC}^{\pm }\right\rangle =\frac{1}{\sqrt{2}}(\left|
00\right\rangle \pm \left| 11\right\rangle )$ and $\left| \psi _{BC}^{\pm
}\right\rangle =\frac{1}{\sqrt{2}}(\left| 01\right\rangle \pm \left| 10\right\rangle )$ . By our entanglement measure, $\underset{i=1}{\overset{ \left[ \frac{3}{2}\right] }{\sum }}C_{3}^{i}=C_{3}^{1}=3$, and \begin{equation*}
C(\rho _{i})=\sqrt{2(|\left\langle \psi |\psi \right\rangle |^{2}-Tr(\rho _{i})_{r}^{2})}=\sqrt{2(1-Tr(\rho _{i})_{r}^{2})}, \end{equation*} with $i\ $\ denoting $A-BC$, $B-AC$ and $C-AB$ , three different groupings respectively. Hence, we have \begin{equation*} \overline{E}=\frac{1}{3}\underset{i}{\sum }C(\rho ^{i})=\frac{1}{3}[C(\rho ^{A-BC})+C(\rho ^{B-AC})+C(\rho ^{C-AB})], \end{equation*} with \begin{equation*} C(\rho ^{i})=\sqrt{2(1-(M_{i}^{2}+N_{i}^{2}+2P_{i}Q_{i}))}, \end{equation*} where \begin{equation*}
M_{A-BC}=\left| C_{1}\right| ^{2}+\left| C_{3}\right| ^{2}+\left|
C_{5}\right| ^{2}+\left| C_{7}\right| ^{2}, \end{equation*} \begin{equation*}
N_{A-BC}=\left| C_{2}\right| ^{2}+\left| C_{4}\right| ^{2}+\left|
C_{6}\right| ^{2}+\left| C_{8}\right| ^{2}, \end{equation*} \begin{equation*} P_{A-BC}=C_{1}C_{2}^{\ast }+C_{3}C_{4}^{\ast }+C_{5}C_{6}^{\ast }+C_{7}C_{8}^{\ast }, \end{equation*} \begin{equation*}
M_{B-AC}=\frac{1}{2}(\left| C_{1}+C_{3}\right| ^{2}+\left|
C_{2}+C_{4}\right| ^{2}+\left| C_{5}+C_{7}\right| ^{2}+\left|
C_{6}+C_{8}\right| ^{2}), \end{equation*} \begin{equation*}
N_{B-AC}=\frac{1}{2}(\left| C_{1}-C_{3}\right| ^{2}+\left|
C_{2}-C_{4}\right| ^{2}+\left| C_{5}-C_{7}\right| ^{2}+\left|
C_{6}-C_{8}\right| ^{2}), \end{equation*} \begin{eqnarray*} P_{B-AC} &=&\frac{1}{2}((C_{1}+C_{3})(C_{5}-C_{7})^{\ast }+(C_{5}+C_{7})(C_{1}-C_{3})^{\ast } \\ &&+(C_{2}+C_{4})(C_{6}-C_{8})^{\ast }+(C_{6}+C_{8})(C_{2}-C_{4})^{\ast }, \end{eqnarray*} \begin{equation*}
M_{C-AB}=\frac{1}{2}(\left| C_{1}+C_{3}\right| ^{2}+\left|
C_{2}+C_{4}\right| ^{2}+\left| C_{5}-C_{7}\right| ^{2}+\left|
C_{6}-C_{8}\right| ^{2}), \end{equation*} \begin{equation*}
N_{C-AB}=\frac{1}{2}(\left| C_{1}-C_{3}\right| ^{2}+\left|
C_{2}-C_{4}\right| ^{2}+\left| C_{5}+C_{7}\right| ^{2}+\left|
C_{6}+C_{8}\right| ^{2}), \end{equation*} \begin{eqnarray*} P_{C-AB} &=&\frac{1}{2}((C_{1}+C_{3})(C_{5}+C_{7})^{\ast }+(C_{5}-C_{7})(C_{1}-C_{3})^{\ast } \\ &&+(C_{2}+C_{4})(C_{6}+C_{8})^{\ast }+(C_{6}-C_{8})(C_{2}-C_{4})^{\ast }, \end{eqnarray*} \begin{equation*} Q_{_{i}}=P_{i}^{\ast }. \end{equation*} Therefore, we have given out the entanglement measure of all the three-particle pure states. one can evaluate $\overline{E}$ according to the given quantum state. For example, consider $C_{1}=C_{4}=\frac{1}{\sqrt{2}}$ and the rest are zero, then $\overline{E}=1$ by our measure, which suggests $
\left| \Psi ^{ABC}\right\rangle $ is a GHZ state. Substitute $C_{1}$ and $
C_{4}$ to (15), then $\left| \Psi ^{ABC}\right\rangle =\frac{1}{2}(\left|
0\right\rangle +\left| 1\right\rangle )^{A}\left| 00\right\rangle ^{BC}+
\frac{1}{2}(\left| 0\right\rangle -\left| 1\right\rangle )^{A}\left|
11\right\rangle ^{BC}$, which differs from the GHZ state $\left| \Psi ^{^{\prime }ABC}\right\rangle =\frac{1}{\sqrt{2}}\left| 000\right\rangle ^{ABC}+\frac{1}{\sqrt{2}}\left| 111\right\rangle ^{ABC}$ only by a local unitary transformation. This result is consistent with our measure.
For mixed states, consider the state \begin{equation}
\rho =x\left| \psi _{0}^{+}\right\rangle \left\langle \psi _{0}^{+}\right| + \frac{1-x}{2^{N}}1, \end{equation} which is described in [10,16]. We employ the negativity $N(\rho )$ as entanglement measure. By our method, one can find that the $N$-particle state can be divided in $\underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{
\sum }}C_{N}^{i}$ ways, and $N(\rho _{j})=\left| \frac{1-(1+2^{N-1})x}{2^{N}}
\right| $ for $j=1,2,\cdots \underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{\sum }}C_{N}^{i}$ and $x>\frac{1}{(1+2^{N-1})}$ with subscript $j$ \ denoting the bipartite grouping in the $j$th way. Hence one can get \begin{equation} \overline{E}(\rho )=\left\{ \begin{array}{cc}
\left| \frac{1-(1+2^{N-1})x}{2^{N}}\right| , & x>\frac{1}{(1+2^{N-1})} \\ 0, & \text{otherwise} \end{array} \right. . \end{equation} This result is not only consistent with the previous one [10], but also gives an explicit entanglement measure for $x>\frac{1}{(1+2^{N-1})}$.
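As an independent numerical check of the negativity in the smallest case $N=2$ of the mixed state above (a sketch of our own; Python/NumPy and the helper names are assumptions):

```python
import numpy as np

def negativity(rho, dA, dB):
    """N(rho) = (||rho^{T_A}||_1 - 1)/2, computed from the eigenvalues
    of the partial transpose of rho on subsystem A."""
    r = rho.reshape(dA, dB, dA, dB)
    # partial transpose on A: swap the two A indices
    rho_ta = r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)
    return (np.abs(np.linalg.eigvalsh(rho_ta)).sum() - 1.0) / 2.0

# The N = 2 instance: rho = x |phi+><phi+| + (1 - x)/4 * I
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
def state(x):
    return x * np.outer(phi, phi) + (1.0 - x) / 4.0 * np.eye(4)
```

For $x>1/3$ the computed value matches $\left|\frac{1-3x}{4}\right|$, and below that threshold the negativity vanishes, in agreement with the formula above.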
\section{Conclusion}
In conclusion, we have given a redefinition for deciding whether a multiparticle system is semiseparable or fully inseparable. In terms of this redefinition, we mathematically convert the multiparticle entanglement measure into a series of bipartite entanglement measures and then give the multiparticle entanglement measure in the simple form $\overline{E}=\underset{j=1}{\overset{\underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{\sum }}C_{N}^{i}}{\sum }}(E_{j}/\underset{i=1}{\overset{\left[ \frac{N}{2}\right] }{\sum }}C_{N}^{i})$. Finally, we give two examples demonstrating that our measure is reasonable and feasible. Because convenient and operational bipartite entanglement measures exist for pure states, our multiparticle measure works well there, whereas the bipartite measure for mixed states remains rather cumbersome. In some special cases it may be better to employ a special bipartite measure, so long as that measure works without error. Moreover, all our studies of the inseparability and semiseparability of a state are carried out mathematically, and we regard the inseparability measure as an entanglement measure analogous to the bipartite one; but this does not mean that multiparticle entanglement and bipartite entanglement are physically equivalent (convertible into each other).
\end{document}
Secrets of the Mathematical Ninja: Pascal's Triangle
Written by Colin+ in binomial, core 2, ninja maths, probability.
You've seen Pascal's triangle before:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
You get the number in each row by adding its two 'parents' - for instance, each 10 in the row that starts with 1 then 5 comes from adding the 4 and 6 above it. (Somewhat oddly, that's normally called the 5th row, even though it's 6th from the top; the sums get a lot simpler if you call the top row 'row 0'.)
It's named after Blaise Pascal, the 17th-century French philosopher who invented an early mechanical calculator and was one of the first to talk about probability in any depth.
(It comes up in the binomial expansion and in the binomial distribution - you might almost think the two things were somehow related).
So, even though Pascal invented a calculator, it almost certainly didn't have an $^nC_r$ button on it. It's possible, though, to work out rows of Pascal's triangle on the fly. Here's how the mathematical ninja would do it.
Let's say you wanted to find the 5th row. You're going to need to keep two numbers in mind: the timesy number (which starts as 5) and the dividey number (which starts at 1).
The first number in every row is 1, so write that down. Multiply the previous number (1) by the timesy number ($1 \times 5 = 5$) and divide by the dividey number ($5 \div 1 = 5$). That's the second number.
Now, drop the timesy number by 1 to get 4, and increase the dividey number by 1 to get 2, and repeat the process: the previous answer, $5 \times 4 \div 2 = 10$. There's the 3rd number.
Keep going: nudge the timesy number down to 3 and the dividey number up to 3. $10 \times 3 \div 3 = 10$. There's number 4.
If you keep going, you get back to 1 (and then, of course, 0).
It works for any row (try it, and confirm with your calculator if you must).
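If you'd rather let a computer do the nudging, here's the same trick as a short Python sketch (the function name is mine):

```python
def pascal_row(n):
    """Row n of Pascal's triangle via the running timesy/dividey trick."""
    row = [1]                  # every row starts with 1
    timesy, dividey = n, 1     # timesy counts down, dividey counts up
    while timesy > 0:
        # each entry is the previous one, times timesy, divided by dividey
        row.append(row[-1] * timesy // dividey)
        timesy -= 1
        dividey += 1
    return row
```

Calling `pascal_row(5)` gives `[1, 5, 10, 10, 5, 1]`, matching the row above.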
For bonus points, try to figure out the Pascal's Triangle entries with negative numbers - what does the -2nd row look like?
Edited 2015-03-08 and 2016-12-28 to fix LaTeX.
Trigonometric series
In mathematics, a trigonometric series is an infinite series of the form
$A_{0}+\sum _{n=1}^{\infty }\left(A_{n}\cos {nx}+B_{n}\sin {nx}\right),$
where $x$ is the variable and $\{A_{n}\}$ and $\{B_{n}\}$ are coefficients. It is an infinite version of a trigonometric polynomial.
A trigonometric series is called the Fourier series of the integrable function $ f$ if the coefficients have the form:
$A_{n}={\frac {1}{\pi }}\int _{0}^{2\pi }\!f(x)\cos {nx}\,dx$
$B_{n}={\frac {1}{\pi }}\int _{0}^{2\pi }\!f(x)\sin {nx}\,dx$
Examples
Every Fourier series gives an example of a trigonometric series. Let the function $f(x)=x$ on $[-\pi ,\pi ]$ be extended periodically (see sawtooth wave). Then its Fourier coefficients are:
${\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{-\pi }^{\pi }x\cos {nx}\,dx=0,\quad n\geq 0.\\[4pt]B_{n}&={\frac {1}{\pi }}\int _{-\pi }^{\pi }x\sin {nx}\,dx\\[4pt]&=-{\frac {x}{n\pi }}\cos {nx}+{\frac {1}{n^{2}\pi }}\sin {nx}{\Bigg \vert }_{x=-\pi }^{\pi }\\[5mu]&={\frac {2\,(-1)^{n+1}}{n}},\quad n\geq 1.\end{aligned}}$
This gives an example of a trigonometric series:
$2\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}\sin {nx}=2\sin {x}-{\frac {2}{2}}\sin {2x}+{\frac {2}{3}}\sin {3x}-{\frac {2}{4}}\sin {4x}+\cdots $
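The closed form above, $B_n = 2(-1)^{n+1}/n$, can be checked numerically. The following sketch (added here for illustration) approximates the defining integral with a midpoint rule:

```python
import math

def fourier_Bn(n, samples=100_000):
    """Midpoint-rule approximation of B_n = (1/pi) * int_{-pi}^{pi} x sin(nx) dx."""
    a, b = -math.pi, math.pi
    h = (b - a) / samples
    total = 0.0
    for k in range(samples):
        x = a + (k + 0.5) * h  # midpoint of the k-th subinterval
        total += x * math.sin(n * x)
    return total * h / math.pi
```

For $n = 1, 2, 3$ this returns values close to $2$, $-1$ and $2/3$, in agreement with the sawtooth coefficients computed above.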
The converse is false, however: not every trigonometric series is a Fourier series. The series
$\sum _{n=2}^{\infty }{\frac {\sin {nx}}{\log {n}}}={\frac {\sin {2x}}{\log {2}}}+{\frac {\sin {3x}}{\log {3}}}+{\frac {\sin {4x}}{\log {4}}}+\cdots $
is a trigonometric series which converges for all $x$ but is not a Fourier series.[1] Here $B_{n}={\frac {1}{\log(n)}}$ for $n\geq 2$ and all other coefficients are zero.
Uniqueness of trigonometric series
The uniqueness and the zeros of trigonometric series were an active area of research in 19th-century Europe. First, Georg Cantor proved that if a trigonometric series converges on the interval $[0,2\pi ]$ to a function $f(x)$ that is identically zero, or more generally nonzero at no more than finitely many points, then the coefficients of the series are all zero.[2]
Later Cantor proved that even if the set $S$ on which $f$ is nonzero is infinite, but the derived set $S'$ of $S$ is finite, then the coefficients are all zero. In fact, he proved a more general result. Let $S_0 = S$ and let $S_{k+1}$ be the derived set of $S_k$. If there is a finite number $n$ for which $S_n$ is finite, then all the coefficients are zero. Later, Lebesgue proved that if there is a countably infinite ordinal $\alpha $ such that $S_\alpha $ is finite, then the coefficients of the series are all zero. Cantor's work on the uniqueness problem famously led him to invent transfinite ordinal numbers, which appeared as the subscripts $\alpha $ in $S_\alpha $.[3]
Notes
1. Hardy, Godfrey Harold; Rogosinski, Werner Wolfgang (1956) [1st ed. 1944]. Fourier Series (3rd ed.). Cambridge University Press. pp. 4–5.
2. http://www.math.caltech.edu/papers/uniqueness.pdf
3. Cooke, Roger (1993), "Uniqueness of trigonometric series and descriptive set theory, 1870–1985", Archive for History of Exact Sciences, 45 (4): 281–334, doi:10.1007/BF01886630, S2CID 122744778.
See also
• Denjoy–Luzin theorem
\begin{document}
\begin{center} \bf MATRIX COEFFICIENT IDENTIFICATION IN AN ELLIPTIC EQUATION WITH THE CONVEX ENERGY FUNCTIONAL METHOD \end{center}
\centerline {\bf Michael Hinze and Tran Nhan Tam Quyen}
{University of Hamburg, Bundesstrasse 55, 20146 Hamburg, Germany\\ Email: [email protected] and [email protected]}
{\small {\bf Abstract:} In this paper we study the inverse problem of identifying the {\it diffusion matrix} in an elliptic PDE from measurements. The convex energy functional method with Tikhonov regularization is applied to tackle this problem. For the discretization we use the variational discretization concept, where the PDE is discretized with piecewise linear, continuous finite elements. We show the convergence of approximations. Using a suitable source condition, we prove an error bound for discrete solutions. For the numerical solution we propose a gradient-projection algorithm and prove the strong convergence of its iterates to a solution of the identification problem. Finally, we present a numerical experiment which illustrates our theoretical results. }
{\small {\bf Key words and phrases:} Coefficient identification, diffusion matrix, Tikhonov regularization, convex energy function, source condition, convergence rates, finite element method, gradient-projection algorithm, Dirichlet problem, ill-posed problems.}
\section{Introduction} Let $\Omega$ be an open bounded connected domain of $R^d$, $d \le 3$ with boundary $\partial\Omega$. We investigate the problem of identifying the spatially varying diffusion matrix $Q$ in the Dirichlet problem for the elliptic equation \begin{align} -\text{div} (Q\nabla u) &= f \mbox{ in } \Omega, \label{m1*}\\ u &=0 \mbox{ on } {\partial \Omega} \label{qmict3*} \end{align} from the observation $z^\delta$ of the solution $u$ in the domain $\Omega$. Here, the function $f\in L^2(\Omega)$ is given.
In this paper we assume that $z^\delta \in H^1_0(\Omega)$. For related research we refer the reader to \cite{ ChanTai2003,Chavent_Kunisch2002,Cherlenyak,{Haoq},hao_quyen3,Kaltenbacher_Schoberl,kolo,wang_zou}.
Our identification problem can be considered as a generalization of identifying the scalar diffusion coefficient $q$ in the elliptic equation \begin{equation}\label{7/12/12:ct2} -\text{div} (q\nabla u) = f \mbox{ in } \Omega \mbox{~and~} u=0 \mbox{~on~} \partial\Omega. \end{equation} The problem has been studied extensively in the last 30 years or so. The identification results can be found in \cite{Chicone,Know2,Ric,Vainikko-Kunisch}. Error estimates for finite element approximation solutions have been obtained, for example, in \cite{Falk,hao_quyen4,kolo,wang_zou}. A survey of numerical methods for the identification problem can be found in \cite{ChenZou,KeungZou98,Kunisch}.
Compared to the identification of $q$ in (\ref{7/12/12:ct2}), the problem of identifying the matrix $Q$ in (\ref{m1*}) has received less attention. However, there are some contributions treating this problem. Hoffmann and Sprekels in \cite{Hoffmann_Sprekels} proposed a dynamical system approach to reconstruct the matrix $Q$ in equation (\ref{m1*}). In \cite{Rannacher_Vexler} Rannacher and Vexler employed the finite element method and showed error estimates for a matrix identification problem from pointwise measurements of the state variable, provided that the sought matrix is constant and the exact data is smooth enough.
In the present paper we adopt the convex energy functional approach of Kohn and Vogelius in \cite{Kohn_Vogelius1,Kohn_Vogelius2} to the matrix case. In fact, for estimating the matrix $Q$ in (\ref{m1*})--(\ref{qmict3*}) from the observation $z^\delta$ of the solution $u$, we use the {\it non-negative convex functional} (see \S \ref{Auxiliary results}) \begin{align*} \mathcal{J}^\delta(Q) := \int_{\Omega} Q \nabla \big( \mathcal{U}(Q) - z^\delta\big) \cdot \nabla \big(\mathcal{U}(Q) - z^\delta \big) dx \end{align*} together with Tikhonov regularization and consider the {\it strictly convex} minimization problem $$
\min_{Q \in \mathcal{Q}_{ad}} \mathcal{J}^\delta(Q) + \rho \| Q \|^2_{{L^2(\Omega)}^{d\times d}} $$ over the admissible set $\mathcal{Q}_{ad}$ (see \S \ref{D-I problems}), and consider its {\it unique global} solution $Q^{\rho,\delta}$ as reconstruction. Here $\rho >0$ is the regularization parameter and $\mathcal{U}$ the non-linear coefficient-to-solution operator.
For the discretization we use the variational discretization method introduced in \cite{Hinze} and show the convergence of approximations. Under a source condition, which is weaker than that of the existing theories in \cite{Engl_Hanke_Neubauer,EnglKuNe}, we prove an error bound for discrete regularized solutions. Finally, we employ a gradient-projection algorithm for the numerical solution of the regularized problems. The strong convergence of iterates to a solution of the identification problem is ensured without smoothness requirements on the sought matrix. Numerical results illustrate the efficiency of our theoretical findings.
In \cite{Engl_Hanke_Neubauer,EnglKuNe} the authors investigated the convergence of Tikhonov regularized solutions via the standard output least squares method for the general non-linear ill-posed equation in Hilbert spaces. They proved some rates of convergence for this approach under a source condition and the so-called {\it small enough condition} on source elements. In the present paper, by working with a convex energy functional for our concrete identification problem, in the proof of Theorem \ref{nu21***} we are not faced with a smallness condition. Furthermore, our source condition does not require additional smoothness assumptions on the sought matrix and the exact data (see \S \ref{tdht}). We also remark that such a source condition without the smallness condition was proposed in \cite{Haoq,hao_quyen1,hao_quyen2,hao_quyen3} for the {\it scalar} coefficient identification problem in elliptic PDEs, and in some concrete cases the source condition was shown to be satisfied if the sought coefficients belong to certain smooth function spaces.
We mention that in \cite{EnglZou}, by utilizing a modified kind of adjoint, the authors for the inverse heat conduction problem introduced a source condition in the form of a variational identity without the smallness condition on source elements. The advantage of this source condition is that it does not involve the Fr\'echet derivative of the coefficient-to-solution operator. However, the source condition requires some smoothness assumptions on the sought coefficient.
Starting with \cite{HKPS}, the authors in \cite{Grasmair,HohageWerner,WernerHohage2012} have proposed new source conditions in the form of variational inequalities. They proved some convergence rates for Tikhonov-type regularized solutions via the misfit functional method of the discrepancy for the general non-linear ill-posed equation in Banach spaces. The novelty of this theory is that the source conditions do not involve the Fr\'echet derivative of forward operators and so avoid differentiability assumptions. Furthermore, the theory is applied to inverse problems with PDEs (see, for example, \cite{Hohage}).
Recently, by using several sets of observations and a suitable projected source condition motivated by \cite{Flemming-Hofmann} as well as certain smoothness requirements on the sought coefficient and the exact solution, the authors of \cite{Deckelnick} derived an error bound for the finite element solutions of a standard output least squares approach to identify the diffusion matrix in \eqref{m1*}. Due to the non-linearity of the identification problem the method presented in \cite{Deckelnick} solves a non-convex minimization problem. Our approach in the present paper is different. We utilize a convex cost functional and a different source condition without smoothness assumptions. Therefore, the theory in \cite{Deckelnick} and its proof techniques are not directly comparable with our approach. Furthermore, taking advantage of the convexity, we are able to prove that iterates of a gradient-projection algorithm converge to the identified diffusion matrix.
The remaining part of this paper is organized as follows. In Section \ref{definition} and Section \ref{Finite element method} we describe the direct and inverse problems and the finite element method which is applied to the identification problem, respectively. Convergence analysis of the finite element method is presented in Section \ref{Stability}. In Section \ref{tdht} we show convergence rates obtained with this technique. Section \ref{iterative} is devoted to a gradient-projection algorithm. Finally, in Section \ref{Numerical implement} we present a numerical experiment which illustrates our theoretical results.
Throughout the paper we use the standard notion of Sobolev spaces $H^1(\Omega)$, $H^1_0(\Omega)$, $W^{k,p}(\Omega)$, etc from, for example, \cite{tro}. If not stated otherwise we write $\int_\Omega \cdots$ instead of $\int_\Omega \cdots dx$.
\section{Problem setting and preliminaries}\label{definition}
\subsection{Notations}\label{Notation}
Let $\mathcal{S}_d$ denote the set of all symmetric $d \times d$-matrices equipped with the inner product $M \cdot N :=
\mbox{trace} (MN)$ and the norm $$\| M \|_{\mathcal{S}_d} = (M \cdot M)^{1/2} = \left( \sum_{i, j =1}^d m_{ij}^2 \right)^{1/2},$$ where $M = (m_{ij})_{1\le i,j \le d}$. Let $M$ and $N$ be in $\mathcal{S}_d$, then $$M \preceq N $$ if and only if $$ M \xi \cdot \xi \le N \xi \cdot \xi ~ \mbox{for all} ~ \xi \in R^d.$$ We note that if $0 \preceq M \in \mathcal{S}_d$ the root $M^{1/2}$ is well defined.
In $\mathcal{S}_d$ we introduce the convex subset
$$\mathcal{K} := \{M \in \mathcal{S}_d ~|~ \underline{q} I_d \preceq M \preceq \overline{q} I_d\},$$ where $\underline{q}$ and $\overline{q}$ are given positive constants and $I_d$ is the unit $d\times d$-matrix. Furthermore, let $\xi := (\xi_1, \cdot\cdot\cdot, \xi_d)$ and $\eta := (\eta_1, \cdot\cdot\cdot, \eta_d)$ be two arbitrary vectors in $R^d$, we use the notation $$(\xi \otimes \eta)_{1\le i,j\le d} \in \mathcal{S}_d ~ \mbox{with} ~ (\xi \otimes \eta)_{ij} := \frac{1}{2} (\xi_i \eta_j + \xi_j \eta_i) ~ \mbox{for all} ~ i, j = 1, \cdots, d.$$
Finally, in the space ${L^{\infty}(\Omega)}^{d\times d}$ we use the norm
$$\|H\|_{{L^{\infty}(\Omega)}^{d\times d}} :=
\max_{1\le i,j\le d} \|h_{ij}\|_{L^{\infty}(\Omega)},$$ where $H =(h_{ij})_{1\le i,j\le d} \in {L^{\infty}(\Omega)}^{d\times d}$.
\subsection{Direct and inverse problems}\label{D-I problems}
We recall that a function $u$ in $H^1_0(\Omega)$ is said to be a weak solution of the Dirichlet problem (\ref{m1*})--(\ref{qmict3*}) if the identity \begin{align}\label{4/6:m4} \int_\Omega Q\nabla u \cdot \nabla v = \int_\Omega f v \end{align} holds for all $v\in H^1_0(\Omega)$. Assume that the matrix $Q$ belongs to the set \begin{align} \mathcal{Q}_{ad}:= \left\{ Q \in {L^{\infty}(\Omega)}^{d \times d}
~|~ Q(x) \in \mathcal{K} ~ \mbox{a.e. in} ~ \Omega\right\}. \label{5/12/12:ct3} \end{align} Then, by the aid of the Poincar\'e-Friedrichs inequality in $H^1_0(\Omega)$, there exists a positive constant $\kappa$ depending only on $\underline{q}$ and the domain $\Omega$ such that the coercivity condition \begin{equation}\label{coercivity} \int_\Omega Q \nabla u \cdot \nabla u \ge \kappa
\|u\|^2_{H^1(\Omega)} \end{equation} holds for all $u$ in $H^1_0(\Omega)$ and $ Q \in \mathcal{Q}_{ad}$. Hence, by the Lax-Milgram lemma, we conclude that there exists a unique solution $u$ of (\ref{m1*})--(\ref{qmict3*}) satisfying the following estimate \begin{align}
\left\|u\right\|_{H^1(\Omega)}\le \dfrac{1}{\kappa}
\left\|f\right\|_{L^2(\Omega)}.\label{mq5} \end{align}
Therefore, we can define the non-linear coefficient-to-solution operator $$\mathcal{U} : \mathcal{Q}_{ad} \subset {L^{\infty}(\Omega)}^{d \times d} \rightarrow H^1_0(\Omega)$$ which maps the matrix $Q \in \mathcal{Q}_{ad} $ to the unique solution $\mathcal{U}(Q) := u$ of the problem (\ref{m1*})--(\ref{qmict3*}). Then, the inverse problem is stated as follows: $$\mbox{~Given~} \overline{u} := \mathcal{U}(\overline{Q}) \in H^1_0(\Omega), \mbox{~find a matrix~} \overline{Q} \in \mathcal{Q}_{ad} \mbox{~such that~} \eqref{4/6:m4} \mbox{~is satisfied with~} \overline{u} \mbox{~and~} \overline{Q}.$$
\subsection{Tikhonov regularization}\label{Tikhonov regularization} According to our problem setting $\overline{u}$ is the exact solution of (\ref{m1*})--(\ref{qmict3*}), so there exists some $\overline{Q} \in \mathcal{Q}_{ad}$ such that $\overline{u} = \mathcal{U}(\overline{Q})$. We assume that instead of the exact $\overline{u}$ we have only measurements $z^\delta \in H^1_0(\Omega)$ with \begin{align}
\|z^\delta - \overline{u} \|_{H^1(\Omega)} \leq \delta \mbox{~for some~} \delta >0. \label{gradient-obs} \end{align} Our problem is to reconstruct the matrix $\overline{Q}$ from $z^\delta$. For solving this problem we consider the non-negative {\it convex} functional (see \S \ref{Auxiliary results}) \begin{align} \mathcal{J}^\delta(Q) := \int_{\Omega} Q \nabla \big( \mathcal{U}(Q) - z^\delta\big) \cdot \nabla \big(\mathcal{U}(Q) - z^\delta \big). \label{29/6:ct8} \end{align} Furthermore, since the problem is ill-posed, in this paper we shall use Tikhonov regularization to solve it in a stable way. Namely, we consider $$\min_{Q \in \mathcal{Q}_{ad}} \Upsilon^{\rho,\delta}, \eqno \left(\mathcal{P}^{\rho,\delta} \right)$$ where
$$\Upsilon^{\rho,\delta} := \mathcal{J}^\delta(Q) + \rho \| Q \|^2_{{L^2(\Omega)}^{d\times d}}$$ and $\rho>0$ is the regularization parameter.
In the present paper we assume that the gradient-type observation is available. Concerning this assumption, we refer the reader to \cite{hao_quyen3,Cherlenyak,Kaltenbacher_Schoberl,Chavent_Kunisch2002,ChanTai2003,kolo} and the references therein, where discussions about the interpolation of discrete measurements of the solution $\overline{u}$ which results the data $z^\delta$ satisfying \eqref{gradient-obs} are given.
\subsection{Auxiliary results}\label{Auxiliary results}
Now we summarize some properties of the coefficient-to-solution operator. The proofs of the following results are based on standard arguments and are therefore omitted.
\begin{lemma}\label{bd21} The coefficient-to-solution operator $\mathcal{U} : \mathcal{Q}_{ad} \subset {L^{\infty}(\Omega)}^{d\times d} \rightarrow H^1_0(\Omega)$ is infinitely Fr\'echet differentiable on $\mathcal{Q}_{ad}$. For each $Q \in \mathcal{Q}_{ad}$ and $m \ge 1$ the action of the Fr\'echet derivative $\mathcal{U}^{(m)}(Q)$ in direction $(H_1, H_2, \cdot\cdot\cdot, H_m) \in {\big({L^\infty(\Omega)}^{d \times d}\big)}^m$ denoted by $\eta := \mathcal{U}^{(m)}(Q)(H_1, H_2, \cdot\cdot\cdot, H_m)$ is the unique weak solution in $H^1_0(\Omega)$ to the equation \begin{align} \int_\Omega Q \nabla\eta \cdot \nabla v =- \sum\limits_{i=1}^m \int_\Omega H_i\nabla \mathcal{U}^{(m-1)}(Q) \xi_i \cdot \nabla v \label{ct10} \end{align} for all $v\in H^1_0(\Omega)$ with $ \xi_i := (H_1, \cdot\cdot\cdot,H_{i-1}, H_{i+1}, \cdot\cdot\cdot, H_m)$. Furthermore, the following estimate is fulfilled $$
\|\eta\|_{H^1(\Omega)} \le\frac{m d}{\kappa^{m+1}} \|f \|_{L^2(\Omega)}
\prod_{i=1}^{m}\|H_i\|_{{L^{\infty}(\Omega)}^{d\times d}}. $$ \end{lemma}
Now we prove the following useful results.
\begin{lemma} \label{convex} The functional $\mathcal{J}^\delta$ defined by (\ref{29/6:ct8}) is convex on the convex set $\mathcal{Q}_{ad}$. \end{lemma}
\begin{proof} From Lemma \ref{bd21} we have that $\mathcal{J}^\delta$ is infinitely differentiable. A short calculation with $\eta := \mathcal{U}'(Q)H$ gives \begin{align*} {\mathcal{J}^\delta}'' (Q) (H, H) = 2\int_{\Omega} Q \nabla \eta \cdot \nabla \eta \ge 0 \end{align*} for all $Q \in \mathcal{Q}_{ad}$ and $H \in {L^\infty(\Omega)}^{d \times d}$, which proves the lemma. \end{proof}
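As a remark (recorded here as a sketch of a standard computation, for later use), the first derivative of $\mathcal{J}^\delta$ admits a closed form. With $u := \mathcal{U}(Q)$ and $\eta := \mathcal{U}'(Q)H$, testing (\ref{ct10}) with $v = u - z^\delta \in H^1_0(\Omega)$ gives $\int_\Omega Q \nabla \eta \cdot \nabla (u - z^\delta) = - \int_\Omega H \nabla u \cdot \nabla (u - z^\delta)$, and hence \begin{align*} {\mathcal{J}^\delta}'(Q) H &= \int_\Omega H \nabla (u - z^\delta) \cdot \nabla (u - z^\delta) + 2 \int_\Omega Q \nabla \eta \cdot \nabla (u - z^\delta)\\ &= \int_\Omega H \nabla (u - z^\delta) \cdot \nabla (u - z^\delta) - 2 \int_\Omega H \nabla u \cdot \nabla (u - z^\delta)\\ &= \int_\Omega H \nabla z^\delta \cdot \nabla z^\delta - \int_\Omega H \nabla u \cdot \nabla u. \end{align*} This structure reappears in the projection formula of Lemma \ref{Projection} below.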
\begin{lemma}[{\cite{Murat_Tartar,Tartar}}]\label{H-convergent} Let $(Q_n)_n$ be a sequence in $\mathcal{Q}_{ad}$. Then, there exists a subsequence, again denoted $(Q_n)_n$, and an element $Q\in \mathcal{Q}_{ad}$ such that \begin{quote} $\mathcal{U}(Q_n)$ weakly converges to $\mathcal{U}(Q)$ in $H^1_0(\Omega)$ and \\ $Q_n \nabla \mathcal{U}(Q_n)$ weakly converges to $Q\nabla \mathcal{U}(Q)$ in ${L^2(\Omega)}^d$. \end{quote}
The sequence $(Q_n)_n$ is then said to be H-convergent to $Q$. \end{lemma}
The concept of H-convergence generalizes that of G-convergence introduced by Spagnolo in \cite{Spagnolo}. Furthermore, the H-limit of a sequence is unique.
A relationship between the H-convergence and the weak$^*$ convergence in ${L^{\infty}(\Omega)}^{d\times d}$ is given by the following lemma.
\begin{lemma} [{\cite{Murat_Tartar}}]\label{weak-convergent} Let $(Q_n)_n$ be a sequence in $\mathcal{Q}_{ad}$. Assume that $(Q_n)_n$ is H-convergent to $Q$ and $(Q_n)_n$ weak$^*$ converges to $\widehat{Q}$ in ${L^\infty(\Omega)}^{d \times d}$. Then, $Q(x) \preceq \widehat{Q}(x)$ a.e. in $\Omega$ and \begin{align*}
\|Q\|^2_{{L^2(\Omega)}^{d \times d}} \le
\|\widehat{Q}\|^2_{{L^2(\Omega)}^{d \times d}} \le
\liminf_{n} \|Q_n\|^2_{{L^2(\Omega)}^{d \times d}}. \end{align*} \end{lemma}
\begin{theorem}\label{ttnghiem} There exists a unique minimizer $Q^{\rho,\delta}$ of the problem $\left(\mathcal{P}^{\rho,\delta} \right)$, which is called the regularized solution of the identification problem. \end{theorem}
\begin{proof} Let $(Q_n)_n$ be a minimizing sequence of the problem $(\mathcal{P}^{\rho,\delta})$, i.e., $$\lim_{n} \Upsilon^{\rho,\delta}(Q_n) = \inf_{Q\in \mathcal{Q}_{ad}} \Upsilon^{\rho,\delta}(Q).$$ By Lemma \ref{H-convergent} and Lemma \ref{weak-convergent}, it follows that there exists a subsequence which is not relabelled and elements $Q \in \mathcal{Q}_{ad}$ and $\widehat{Q} \in {L^\infty(\Omega)}^{d \times d}$ such that \begin{quote} $(Q_n)_n$ is H-convergent to $Q$, \\ $(Q_n)_n$ weak$^*$ converges to $\widehat{Q}$ in ${L^\infty(\Omega)}^{d \times d}$, \\ $Q(x) \preceq \widehat{Q}(x)$ a.e. in $\Omega$ and \\
$\|Q\|^2_{{L^2(\Omega)}^{d \times d}} \le
\|\widehat{Q}\|^2_{{L^2(\Omega)}^{d \times d}} \le
\liminf_{n} \|Q_n\|^2_{{L^2(\Omega)}^{d \times d}}$. \end{quote} We have that \begin{align*} \mathcal{J}^{\delta}(Q_n) &= \int_\Omega Q_n \nabla \mathcal U(Q_n) \cdot \nabla (\mathcal{U}(Q_n) - z^\delta) - \int_\Omega Q_n \nabla(\mathcal U(Q_n) - z^\delta) \cdot \nabla z^\delta\\ &= \int_\Omega f(\mathcal{U}(Q_n) - z^\delta) - \int_\Omega Q_n \nabla \mathcal U(Q_n) \cdot \nabla z^\delta + \int_\Omega Q_n \nabla z^\delta \cdot \nabla z^\delta. \end{align*} Hence \begin{align*} \lim_n \mathcal{J}^{\delta}(Q_n) &= \int_\Omega f(\mathcal{U}(Q) - z^\delta) - \int_\Omega Q \nabla \mathcal U(Q) \cdot \nabla z^\delta + \int_\Omega \widehat{Q} \nabla z^\delta \cdot \nabla z^\delta\\ &= \mathcal{J}^\delta (Q) + \int_\Omega (\widehat{Q} - Q) \nabla z^\delta \cdot \nabla z^\delta \\ &\ge \mathcal{J}^\delta (Q). \end{align*} We therefore get \begin{align*} \Upsilon^{\rho,\delta}(Q) &\le \lim_{n}
\mathcal{J}^{\delta}(Q_n) + \liminf_{n} \rho\|Q_n\|^2_{{L^2(\Omega)}^{d \times d}} \\
&= \liminf_{n} \left( \mathcal{J}^{\delta}(Q_n) + \rho\|Q_n\|^2_{{L^2(\Omega)}^{d \times d}} \right) \\
&= \inf_{Q\in \mathcal{Q}_{ad}} \mathcal{J}^{\delta}(Q) + \rho\|Q\|^2_{{L^2(\Omega)}^{d \times d}}. \end{align*} Since $\Upsilon^{\rho,\delta}$ is strictly convex, the minimizer is unique. \end{proof}
\section{Discretization}\label{Finite element method}
Let $\left(\mathcal{T}_h\right)_{0<h<1}$ be a family of regular and quasi-uniform triangulations of the domain $\overline{\Omega}$ with the mesh size $h$. For the definition of the discretization space of the state functions let us denote \begin{equation*} \mathcal{V}^1_h := \left\{v_h\in C(\overline\Omega) \cap H^1_0(\Omega)
~|~{v_h}_{|T} \in \mathcal{P}_1(T), ~~\forall T\in \mathcal{T}_h\right\} \end{equation*} with $\mathcal{P}_1$ consisting of all polynomial functions of degree less than or equal to 1. Similar to the continuous case we have the following result. \begin{proposition} Let $Q$ be in $\mathcal{Q}_{ad}$. Then the variational equation \begin{align} \int_\Omega Q\nabla u_h \cdot \nabla v_h = \int_\Omega fv_h, \enskip\forall v_h\in \mathcal{V}^1_h \label{10/4:ct1} \end{align} admits a unique solution $u_h = u_h(Q) \in \mathcal{V}^1_h$. Further, the a priori estimate \begin{align}
\left\|u_h\right\|_{H^1(\Omega)}\le \dfrac{1}{\kappa}
\left\|f\right\|_{L^2(\Omega)}, \label{18/5:ct1} \end{align} is satisfied. \end{proposition}
\begin{definition}\label{discrete_solution} The map $\mathcal{U}_h: \mathcal{Q}_{ad} \rightarrow \mathcal{V}^1_h$ from each $Q \in \mathcal{Q}_{ad}$ to the unique solution $u_h$ of variational equation \eqref{10/4:ct1} is called {\it the discrete coefficient-to-solution operator}. \end{definition} We note that the operator $\mathcal{U}_h$ is Fr\'echet differentiable on the set $\mathcal{Q}_{ad}$. For each $Q \in \mathcal{Q}_{ad}$ and $H \in {L^{\infty}(\Omega)}^{d\times d}$ the Fr\'echet differential $\eta_h := {\mathcal{U}_h}'(Q) H$ is an element of $\mathcal{V}_h^1$ and satisfies the equation \begin{align} \int_\Omega Q\nabla \eta_h \cdot \nabla v_h &= -\int_\Omega H\nabla \mathcal{U}_h(Q) \cdot \nabla v_h \label{ct21***} \end{align} for all $v_h$ in $\mathcal{V}_h^1$.
Before presenting our results we need some facts on data interpolation.
\subsection{Data interpolation}\label{Data interpolation}
It is well known that there is a usual nodal value interpolation operator $$I^1_h : C(\overline{\Omega}) \to \left\{v_h\in C(\overline\Omega)
~|~{v_h}_{|T} \in \mathcal{P}_1(T), ~~\forall T\in \mathcal{T}_h\right\}$$
such that \begin{equation*} I^1_h \left(H^1_0(\Omega) \cap C(\overline{\Omega})\right) \subset \mathcal{V}^1_h. \end{equation*}
Since $H^2(\Omega)$ is continuously embedded in $C(\overline{\Omega})$ as $d\le 3$ (see, for example, \cite{attouch}), the following result is standard in the theory of the finite element method, the proof of which can be found, for example, in \cite{Brenner_Scott,Ciarlet}.
\begin{lemma} \label{FEM*} Let $\psi$ be in $H^1_0(\Omega) \cap H^2(\Omega)$. Then, we have \begin{align*}
\left\|\psi - I^1_h \psi\right\|_{H^k(\Omega)} \le Ch^{m-k}\left\|\psi \right\|_{H^m(\Omega)}, \end{align*} where $0\le k< m \le 2$. \end{lemma}
\subsection{Data mollification}\label{L2-observation}
Since the data $z^\delta$ is not smooth enough, in general we cannot define $I^1_h$ for it when $d \ge 2$. Instead, we use Cl\'ement's interpolation operator $$\Pi_h: L^2(\Omega) \rightarrow \left\{v_h\in C(\overline\Omega)
~|~{v_h}_{|T} \in \mathcal{P}_1(T), ~~\forall T\in \mathcal{T}_h\right\}$$ with \begin{equation*} \Pi_h \left(H^1_0(\Omega)\right) \subset \mathcal {V}^1_h \end{equation*} and satisfying the following convergence properties and estimates \begin{equation}\label{23/10:ct2}
\lim_{h\to 0} \left\| \vartheta - \Pi_h \vartheta
\right\|_{H^k(\Omega)} =0 \enskip \mbox{for all} \enskip k \in \{0, 1\} \end{equation} and \begin{equation}\label{23/5:ct1}
\left\| \vartheta - \Pi_h \vartheta \right\|_{H^k(\Omega)} \le Ch^{m-k} \| \vartheta\|_{H^m(\Omega)} \end{equation} for $0 \le k < m \le 2$ (see \cite{Clement}, and some generalizations of which \cite{Bernardi1,Bernardi2,scott_zhang}).
\subsection{Finite element method}
Using the operator $\Pi_h$ in \S \ref{L2-observation}, we introduce the discrete cost functional \begin{equation}\label{29/6:ct9} \mathcal{J}^\delta_h (Q):= \int_\Omega Q\nabla \big(\mathcal{U}_h(Q)- \Pi_hz^{\delta}\big) \cdot \nabla \big(\mathcal{U}_h(Q)- \Pi_hz^{\delta}\big) \end{equation} with $Q \in \mathcal{Q}_{ad} $.
We note that the cost functional $\mathcal{J}^\delta_h$ contains the interpolation $\Pi_h z^\delta$ of the measurement $z^\delta$. This is different from the approaches in \cite{Deckelnick,Falk,kolo,wang_zou}. However, it is unavoidable for a numerical experiment since in general we cannot define the pointwise values of $z^\delta$ at the nodes of $\mathcal{T}_h$.
The following results are obtained exactly as in the continuous case.
\begin{lemma} \label{J-dis-convex} For each $h>0$ the functional $\mathcal{J}^\delta_h$ defined by (\ref{29/6:ct9}) is convex on the convex set $\mathcal{Q}_{ad}$. \end{lemma}
We adapt a finite element version of Lemma \ref{H-convergent} and Lemma \ref{weak-convergent}.
\begin{lemma}[{\cite{Deckelnick_Hinze_2011}}]\label{Hd-convergent} Let $(\mathcal{T}_{h_n})_n$ be a sequence of triangulations with $\lim_n h_n =0$ and $(Q_n)_n$ be a sequence in $\mathcal{Q}_{ad}$. Then there exists a subsequence which is not relabelled and an element $Q\in \mathcal{Q}_{ad}$ such that \begin{quote} $\mathcal{U}_{h_n}(Q_n)$ weakly converges to $\mathcal{U}(Q)$ in $H^1_0(\Omega)$ and \\ $Q_n \nabla \mathcal{U}_{h_n}(Q_n)$ weakly converges to $Q\nabla \mathcal{U}(Q)$ in ${L^2(\Omega)}^d$. \end{quote}
The sequence $(Q_n)_n$ is then said to be Hd-convergent to $Q$. \end{lemma}
\begin{lemma} [{\cite{Deckelnick_Hinze_2011}}]\label{d-weak-convergent} Let $(Q_n)_n$ be a sequence in $\mathcal{Q}_{ad}$. Assume that $(Q_n)_n$ is Hd-convergent to $Q$ and $(Q_n)_n$ weak$^*$ converges to $\widehat{Q}$ in ${L^\infty(\Omega)}^{d \times d}$. Then, $Q(x) \preceq \widehat{Q}(x)$ a.e. in $\Omega$ and \begin{align*}
\|Q\|^2_{{L^2(\Omega)}^{d \times d}} \le
\|\widehat{Q}\|^2_{{L^2(\Omega)}^{d \times d}} \le
\liminf_{n} \|Q_n\|^2_{{L^2(\Omega)}^{d \times d}}. \end{align*} \end{lemma}
\begin{lemma} \label{solution2} Let $$\Upsilon^{\rho,\delta}_h (Q):=
\mathcal{J}^\delta_h(Q) + \rho \left \| Q
\right \|^2_{{L^2(\Omega)}^{d\times d}}.$$ There exists a unique minimizer $Q^{\rho,\delta}_h$ of the strictly convex minimization problem $$ \min_{Q\in \mathcal{Q}_{ad}} \Upsilon^{\rho,\delta}_h (Q). \eqno \left( \mathcal{P}^{\rho,\delta}_h \right) $$ \end{lemma}
Now we consider the orthogonal projection $P_{\mathcal{K}} : \mathcal{S}_d \to \mathcal{K}$ characterised by $$(A - P_{\mathcal{K}}(A)) \cdot (B - P_{\mathcal{K}}(A)) \le 0$$ for all $A \in \mathcal{S}_d$ and $B \in \mathcal{K}$.
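The set $\mathcal{K}$ is fixed earlier in the paper; assuming for illustration only that it is a spectral box, i.e.\ the set of symmetric matrices whose eigenvalues lie in an interval $[\underline{q}, \overline{q}]$, the Frobenius-norm projection characterised above reduces to clipping eigenvalues. A minimal numerical sketch under that assumption:

```python
import numpy as np

def project_K(A, q_lo, q_hi):
    """Frobenius-norm projection of a symmetric matrix A onto the
    (assumed) spectral box K = {M symmetric : q_lo*I <= M <= q_hi*I}.
    For such a K the projection simply clips the eigenvalues of A."""
    A = 0.5 * (A + A.T)                     # symmetrize defensively
    lam, V = np.linalg.eigh(A)              # A = V diag(lam) V^T
    lam = np.clip(lam, q_lo, q_hi)          # clip the spectrum to [q_lo, q_hi]
    return (V * lam) @ V.T

# Spot-check the characterising inequality (A - P(A)) : (B - P(A)) <= 0
# for a random symmetric A and random matrices B in K.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = 0.5 * (A + A.T)
P = project_K(A, 0.1, 2.0)
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal basis
    B = (Q * rng.uniform(0.1, 2.0, size=3)) @ Q.T      # eigenvalues in [0.1, 2]
    assert np.sum((A - P) * (B - P)) <= 1e-10
```

The loop spot-checks the variational inequality that characterises $P_{\mathcal{K}}$; a matrix already in $\mathcal{K}$ is left unchanged.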
\begin{lemma}\label{Projection} Let $Q^{\rho,\delta}_h \in \mathcal{Q}_{ad}$. Then $Q^{\rho,\delta}_h$ is the unique solution of the problem $\left(\mathcal{P}^{\rho,\delta}_h \right)$ if and only if the equation \begin{align*} Q^{\rho,\delta}_h(x) = P_{\mathcal{K}} \left(\frac{1}{2\rho} \left( \nabla \mathcal{U}_h(Q^{\rho,\delta}_h)(x) \otimes \nabla \mathcal{U}_h(Q^{\rho,\delta}_h)(x) - \nabla \Pi_hz^{\delta} (x) \otimes \nabla \Pi_hz^{\delta} (x) \right)\right) \end{align*} holds for a.e. $x \in \Omega$. \end{lemma} \begin{proof} Since the problem $\left( \mathcal{P}^{\rho,\delta}_h \right)$ is strictly convex, an element $Q^{\rho,\delta}_h \in \mathcal{Q}_{ad}$ is the unique solution of $\left( \mathcal{P}^{\rho,\delta}_h \right)$ if and only if the inequality \begin{align}\label{17/10/14:ct5} {\mathcal{J}^\delta_h}'\big(Q^{\rho,\delta}_h\big) \big(Q - Q^{\rho,\delta}_h\big) + 2\rho \int_\Omega Q^{\rho,\delta}_h \cdot \big(Q - Q^{\rho,\delta}_h\big) \ge 0 \end{align} is satisfied for all $Q \in \mathcal{Q}_{ad}$.
By (\ref{29/6:ct9}) and (\ref{ct21***}), we have that \begin{align} \label{17/10/14:ct6} {\mathcal{J}^\delta_h}'\big(Q^{\rho,\delta}_h\big) \big(Q - Q^{\rho,\delta}_h\big) &= \int_\Omega \big(Q - Q^{\rho,\delta}_h\big)\nabla \big(\mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \Pi_hz^{\delta}\big) \cdot \nabla \big(\mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \Pi_hz^{\delta}\big) \nonumber \\ &~\quad + 2\int_\Omega Q^{\rho,\delta}_h \nabla {\mathcal{U}_h}' \big(Q^{\rho,\delta}_h\big)\big(Q - Q^{\rho,\delta}_h\big) \cdot \nabla \big(\mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \Pi_hz^{\delta}\big) \nonumber\\ &= \int_\Omega \big(Q - Q^{\rho,\delta}_h\big)\nabla \big(\mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \Pi_hz^{\delta}\big) \cdot \nabla \big(\mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \Pi_hz^{\delta}\big) \nonumber\\ &~\quad - 2\int_\Omega \big(Q - Q^{\rho,\delta}_h\big) \nabla \mathcal{U}_h \big(Q^{\rho,\delta}_h\big) \cdot \nabla \big(\mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \Pi_hz^{\delta}\big) \nonumber\\ &= - \int_\Omega \big(Q - Q^{\rho,\delta}_h\big) \left( \nabla \mathcal{U}_h \big(Q^{\rho,\delta}_h\big) \cdot \nabla \mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \nabla \Pi_hz^{\delta} \cdot \nabla \Pi_hz^{\delta}\right) \nonumber\\ &= - \int_\Omega \left( \nabla \mathcal{U}_h(Q^{\rho,\delta}_h) \otimes \nabla \mathcal{U}_h(Q^{\rho,\delta}_h) - \nabla \Pi_hz^{\delta} \otimes \nabla \Pi_hz^{\delta} \right) \cdot \big(Q - Q^{\rho,\delta}_h\big). \end{align} It follows from (\ref{17/10/14:ct5}) and (\ref{17/10/14:ct6}) that \begin{align*} \int_\Omega \left(\frac{1}{2\rho} \left( \nabla \mathcal{U}_h \big(Q^{\rho,\delta}_h\big) \otimes \nabla \mathcal{U}_h \big(Q^{\rho,\delta}_h\big) - \nabla \Pi_hz^{\delta} \otimes \nabla \Pi_hz^{\delta}\right) - Q^{\rho,\delta}_h\right) \cdot \big(Q - Q^{\rho,\delta}_h\big) \le 0 \end{align*} for all $Q \in \mathcal{Q}_{ad}$. 
A localization argument then yields \begin{align*} \left( \frac{1}{2\rho} \left( \nabla \mathcal{U}_h \big(Q^{\rho,\delta}_h\big) (x) \otimes \nabla \mathcal{U}_h \big(Q^{\rho,\delta}_h\big)(x) - \nabla \Pi_hz^{\delta}(x) \otimes \nabla \Pi_hz^{\delta}(x) \right) - Q^{\rho,\delta}_h (x) \right) \cdot \\ \cdot \big(M - Q^{\rho,\delta}_h (x)\big) \le 0 \quad \quad \quad \quad \quad\enskip \end{align*} for all $M \in \mathcal{K}$. The proof is completed. \end{proof}
{\bf Remark.} Since $\mathcal{U}_h \big(Q^{\rho,\delta}_h\big)$ and $\Pi_h z^\delta$ are both in $\mathcal{V}^1_h$, the assertion of Lemma \ref{Projection} shows that the solution of $\left( \mathcal{P}^{\rho,\delta}_h \right)$ is a piecewise constant matrix over $\mathcal{T}_h$, so that it belongs to the set $\mathcal{Q}_{ad} \cap \mathcal{V}_h$, where \begin{align} \label{18/10/14:ct1} \mathcal{V}_h:= \Big\{ M := (m_{ij})_{1\le i,j\le d} \in & {L^{\infty}(\Omega)}^{d \times d}
~ \Big|~ M(x) \in \mathcal{S}_d \mbox{~a.e. in~} \Omega \mbox{~and~} \notag\\
& {m_{ij}}_{|T} = \mbox{const} \mbox{~for all~} i,j \mbox{~with~} 1\le i,j\le d \mbox{~and~} T\in \mathcal{T}_h\Big\}. \end{align} Taking this into account, a discretization of the admissible set $\mathcal{Q}_{ad}$ can be avoided. Furthermore, we note that $\mathcal{Q}_{ad} \cap \mathcal{V}_h$ is a non-empty, convex, bounded and closed set in the ${L^2(\Omega)}^{d \times d}$-norm in the finite dimensional space $\mathcal{V}_h$.
In what follows $C$ is a generic positive constant which is independent of the mesh size $h$ of $\mathcal{T}_h$, the noise level $\delta$ and the regularization parameter $\rho$.
\section{Convergence} \label{Stability}
In this section we analyze the convergence of Tikhonov regularization. To this end, we introduce \begin{align}\label{19-6-15ct1}
\sigma_h(Q) := \| \mathcal{U}(Q) - \mathcal{U}_h(Q) \|_{H^1(\Omega)} \end{align} and \begin{align}\label{19-6-15ct1*}
\gamma_h(\varphi) := \| \varphi - \Pi_h\varphi \|_{H^1(\Omega)}, \end{align} where $Q \in \mathcal{Q}_{ad}$ and $\varphi \in H^1_0(\Omega)$. We note that $$\lim_{h \to 0} \sigma_h(Q) = 0 \mbox{~ and ~} \lim_{h \to 0} \gamma_h(\varphi) = 0.$$
\begin{theorem}\label{odinh1} Let $\left(\mathcal{T}_{h_n}\right)_n$ be a sequence of triangulations with $\lim_n h_n = 0$. Assume that $\big( Q^{\rho, \delta}_{h_n} \big)_n$ is the sequence of unique minimizers of $\big(\mathcal{P}^{\rho,\delta}_{h_n} \big)$. Then $\big( Q^{\rho, \delta}_{h_n} \big)_n$ converges to the unique minimizer $Q^{\rho, \delta}$ of $\big( \mathcal{P}^{\rho,\delta} \big)$ in the ${L^2(\Omega)}^{d\times d}$-norm. \end{theorem}
\begin{proof} For ease of notation we write $$Q_n := Q^{\rho, \delta}_{h_n}.$$ In view of Lemma \ref{Hd-convergent} and Lemma \ref{d-weak-convergent} there exist a subsequence, again denoted by $(Q_n)_n$, and elements $Q^{\rho,\delta} \in \mathcal{Q}_{ad}$, $\widehat{Q} \in {L^\infty(\Omega)}^{d \times d}$ such that $(Q_n)_n$ is Hd-convergent to $Q^{\rho,\delta}$, $(Q_n)_n$ weak$^*$ converges to $\widehat{Q}$ in
${L^\infty(\Omega)}^{d \times d}$, $Q^{\rho,\delta}(x) \preceq \widehat{Q}(x)$ a.e. in $\Omega$ and $\|Q^{\rho,\delta}\|^2_{{L^2(\Omega)}^{d \times d}} \le \|\widehat{Q}\|^2_{{L^2(\Omega)}^{d \times d}} \le \liminf_{n} \|Q_n\|^2_{{L^2(\Omega)}^{d \times d}}$.
First we show that \begin{align}\label{19-6-15ct2} \lim_n \mathcal{J}_{h_n}^{\delta}\big(Q_n\big) \ge \mathcal{J}^\delta\big(Q^{\rho,\delta}\big). \end{align} Indeed, we write \begin{align}\label{19-6-15ct3} \mathcal{J}_{h_n}^{\delta}\big(Q_n\big) &= \int_\Omega Q_n \nabla \mathcal{U}_{h_n}(Q_n) \cdot \nabla \mathcal{U}_{h_n}(Q_n) - 2\int_\Omega Q_n \nabla \mathcal{U}_{h_n}(Q_n)\cdot \nabla \Pi_{h_n}z^{\delta} + \int_\Omega Q_n \nabla \Pi_{h_n}z^{\delta} \cdot \nabla \Pi_{h_n}z^{\delta}. \end{align} We have that \begin{align}\label{22-6-15ct1} \int_\Omega Q_n \nabla \mathcal{U}_{h_n}(Q_n) \cdot \nabla \mathcal{U}_{h_n}(Q_n) &= \int_\Omega f \mathcal{U}_{h_n}(Q_n)\notag\\ & \rightarrow \int_\Omega f \mathcal{U}(Q^{\rho,\delta}) \end{align} and, by \eqref{23/10:ct2}, \begin{align}\label{22-6-15ct2} \int_\Omega Q_n \nabla \mathcal{U}_{h_n}(Q_n)\cdot \nabla \Pi_{h_n}z^{\delta} &= \int_\Omega Q_n \nabla \mathcal{U}_{h_n}(Q_n)\cdot \nabla z^{\delta} + \int_\Omega Q_n \nabla \mathcal{U}_{h_n}(Q_n)\cdot \nabla \big(\Pi_{h_n}z^{\delta} - z^\delta\big)\notag\\ & \rightarrow \int_\Omega Q^{\rho,\delta}\nabla \mathcal{U}(Q^{\rho,\delta})\cdot \nabla z^{\delta} \end{align} and \begin{align}\label{22-6-15ct3} \int_\Omega Q_n \nabla \Pi_{h_n}z^{\delta} \cdot \nabla \Pi_{h_n}z^{\delta} &= \int_\Omega Q_n \nabla z^{\delta} \cdot \nabla z^{\delta} + \int_\Omega Q_n \nabla (\Pi_{h_n}z^{\delta} - z^\delta) \cdot \nabla \Pi_{h_n}z^{\delta} \notag\\ &~\quad + \int_\Omega Q_n \nabla (\Pi_{h_n}z^{\delta} - z^\delta) \cdot \nabla z^{\delta} \notag\\ &\rightarrow \int_\Omega \widehat{Q} \nabla z^{\delta} \cdot \nabla z^{\delta} \notag\\ &= \int_\Omega Q^{\rho,\delta} \nabla z^{\delta} \cdot \nabla z^{\delta} + \int_\Omega (\widehat{Q} - Q^{\rho,\delta})\nabla z^{\delta} \cdot \nabla z^{\delta} \notag\\ &\ge \int_\Omega Q^{\rho,\delta} \nabla z^{\delta} \cdot \nabla z^{\delta}. \end{align} By \eqref{19-6-15ct3}--\eqref{22-6-15ct3}, we arrive at \eqref{19-6-15ct2}. 
Furthermore, in view of \eqref{19-6-15ct1} and \eqref{23/10:ct2}, for all $Q \in \mathcal{Q}_{ad}$ we also get \begin{align*} \lim_n \mathcal{J}_{h_n}^{\delta}(Q) = \mathcal{J}^\delta(Q). \end{align*} Hence it follows that for all $Q \in \mathcal{Q}_{ad}$ \begin{align}\label{22-6-15ct4}
\mathcal{J}^\delta\big( Q^{\rho,\delta} \big) + \rho \big\|
Q^{\rho,\delta} \big\|^2_{{L^2(\Omega)}^{d\times d}} &\le \lim_n \mathcal{J}_{h_n}^{\delta}(Q_n) + \liminf_n \rho \big\| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} \notag\\ &= \liminf_n \left( \mathcal{J}_{h_n}^{\delta}(Q_n)
+ \rho \big\| Q_n\big\|^2_{{L^2(\Omega)}^{d\times d}} \right) \nonumber \\ &\le \limsup_n \left( \mathcal{J}_{h_n}^{\delta}(Q_n)
+ \rho \big\| Q_n
\big\|^2_{{L^2(\Omega)}^{d\times d}} \right) \nonumber \\
& \le \limsup_n \left( \mathcal{J}_{h_n}^{\delta} (Q) + \rho \big\| Q
\big\|^2_{{L^2(\Omega)}^{d\times d}} \right) \nonumber\\
& = \lim_n \left( \mathcal{J}_{h_n}^{\delta} (Q) + \rho \big\| Q
\big\|^2_{{L^2(\Omega)}^{d\times d}} \right) \nonumber\\
&= \mathcal{J}^\delta (Q) + \rho \big\| Q
\big\|^2_{{L^2(\Omega)}^{d\times d}}. \end{align} Thus, $Q^{\rho,\delta}$ is the unique solution of $\big(\mathcal{P}^{\rho,\delta}\big)$. It remains to show that $( Q_n)_n$ converges to $Q^{\rho, \delta}$ in the ${L^2(\Omega)}^{d\times d}$-norm. To this end, we rewrite \begin{align*}
\rho \big\| Q^{\rho,\delta} - Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}}
&= \rho \big\| Q^{\rho,\delta} \big\|^2_{{L^2(\Omega)}^{d\times d}} + \rho \big\| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} -2 \rho \big\langle Q^{\rho,\delta} , Q_n \big\rangle_{{L^2(\Omega)}^{d\times d}} \notag\\
&= \rho \big\| Q^{\rho,\delta} \big\|^2_{{L^2(\Omega)}^{d\times d}} + \left( \mathcal{J}_{h_n}^{\delta}(Q_n) + \rho \big\| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} \right) \notag\\ &~\quad -2 \rho \big\langle Q^{\rho,\delta} , Q_n \big\rangle_{{L^2(\Omega)}^{d\times d}} - \mathcal{J}_{h_n}^{\delta}(Q_n). \end{align*} By \eqref{22-6-15ct4}, we have that $$\limsup_n \left( \mathcal{J}_{h_n}^{\delta}(Q_n)
+ \rho \big\| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} \right) = \mathcal{J}^\delta\big( Q^{\rho,\delta} \big) + \rho \big\|
Q^{\rho,\delta} \big\|^2_{{L^2(\Omega)}^{d\times d}}.$$ Therefore, by \eqref{19-6-15ct2} and the fact that $(Q_n)_n$ weak$^*$ converges to $\widehat{Q}$ in ${L^\infty(\Omega)}^{d \times d}$ with $Q^{\rho,\delta}(x) \preceq \widehat{Q}(x)$ a.e. in $\Omega$, we deduce that \begin{align*}
\rho \lim_n \big\| Q^{\rho,\delta} - Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}}
&\le \rho \big\| Q^{\rho,\delta} \big\|^2_{{L^2(\Omega)}^{d\times d}} + \left( \mathcal{J}^\delta\big( Q^{\rho,\delta} \big) + \rho \big\|
Q^{\rho,\delta} \big\|^2_{{L^2(\Omega)}^{d\times d}} \right) \notag\\ &~\quad -2 \rho \big\langle Q^{\rho,\delta} , \widehat{Q} \big\rangle_{{L^2(\Omega)}^{d\times d}} - \mathcal{J}^\delta\big( Q^{\rho,\delta} \big)\\
&\le 2\rho \big\| Q^{\rho,\delta} \big\|^2_{{L^2(\Omega)}^{d\times d}} -2 \rho \big\langle Q^{\rho,\delta} , Q^{\rho,\delta} \big\rangle_{{L^2(\Omega)}^{d\times d}} \\ &= 0. \end{align*} The proof is completed. \end{proof}
Next we show convergence of the discrete regularized solutions to the identification problem. Before presenting our result we introduce the notion of the minimum norm solution of the identification problem.
\begin{lemma} \label{nx45} The set $$ \mathcal{I}_{\mathcal{Q}_{ad}}(\overline{u}) := \{ Q\in
\mathcal{Q}_{ad}~|~\mathcal{U}(Q)=\overline{u}\} $$ is non-empty, convex, bounded and closed in the ${L^2(\Omega)}^{d\times d}$-norm. Hence there is a unique minimizer $Q^\dag$ of the problem \begin{align*}
\min_{ Q \in \mathcal{I}_{\mathcal{Q}_{ad}}(\overline{u})}\left\| Q\right\|^2_{{L^2(\Omega)}^{d\times d}} \end{align*} which is called the minimum norm solution of the identification problem. \end{lemma}
\begin{theorem}\label{convergence1} Let $\left(\mathcal{T}_{h_n}\right)_n$ be a sequence of triangulations with mesh sizes $\left(h_n\right)_n$. Let $\left(\delta_n\right)_n$ and $\left(\rho_n\right)_n$ be any positive sequences such that $$\rho_n\rightarrow 0, ~\frac{\delta_n^2}{\rho_n} \rightarrow 0, ~\frac{\sigma^2_{h_n}(Q^\dag)}{\rho_n} \rightarrow 0 \mbox{~and~} \frac{\gamma^2_{h_n} \big(\mathcal{U}(Q^\dag)\big)}{\rho_n} \rightarrow 0$$
as $n\rightarrow\infty$. Moreover, assume that $\left(z^{\delta_n}\right)_n $ is a sequence satisfying $\left \| \mathcal{U}(Q^\dag) - z^{\delta_n}
\right \|_{H^1_0(\Omega)} \le \delta_n$ and $\big( Q^{\rho_n, \delta_n}_{h_n} \big)_n$ is the sequence of unique minimizers of $\big( \mathcal{P}^{\rho_n,\delta_n}_{h_n} \big)$. Then $\big(Q^{\rho_n, \delta_n}_{h_n}\big)_n$ converges to $Q^\dag$ in the ${L^2(\Omega)}^{d\times d}$-norm as $n\to \infty$. \end{theorem}
\begin{proof} Denote $$Q_n := Q^{\rho_n, \delta_n}_{h_n}.$$ By the definition of $Q_n$, we get \begin{align*} \mathcal{J}^{\delta_n}_{h_n} \big(Q_n\big) + \rho_n
\big \| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} &\le
\mathcal{J}^{\delta_n}_{h_n} (Q^\dag) + \rho_n \big\| Q^\dag
\big\|^2_{{L^2(\Omega)}^{d\times d}}. \end{align*} We have that \begin{align*} \mathcal{J}^{\delta_n}_{h_n}& (Q^\dag) \\ &= \int_\Omega Q^\dag \nabla \big(\mathcal{U}_{h_n}(Q^\dag)- \Pi_{h_n} z^{\delta_n}\big) \cdot \nabla \big(\mathcal{U}_{h_n}(Q^\dag)- \Pi_{h_n} z^{\delta_n}\big) \\
&\le \overline{q} \big\| \mathcal{U}_{h_n}(Q^\dag)- \Pi_{h_n}z^{\delta_n}\big\|^2_{H^1(\Omega)} \\
&= \overline{q} \big\| \mathcal{U}_{h_n}(Q^\dag) - \mathcal{U}(Q^\dag) + \Pi_{h_n} \big(\mathcal{U}(Q^\dag) - z^{\delta_n}\big) + \mathcal{U}(Q^\dag) - \Pi_{h_n} \mathcal{U}(Q^\dag)\big\|^2_{H^1(\Omega)} \\
&\le 3\overline{q} \left( \big\| \mathcal{U}_{h_n}(Q^\dag) - \mathcal{U}(Q^\dag) \big\|^2_{H^1(\Omega)} + \big\| \Pi_{h_n} \big(\mathcal{U}(Q^\dag) - z^{\delta_n} \big)\big\|^2_{H^1(\Omega)} + \big\| \mathcal{U}(Q^\dag) - \Pi_{h_n} \mathcal{U}(Q^\dag)\big\|^2_{H^1(\Omega)} \right) \\ & \le 3C\overline{q} \left( \sigma^2_{h_n}(Q^\dag) + \delta_n^2 + \gamma^2_{h_n}\big(\mathcal{U}(Q^\dag)\big)\right), \end{align*} where $C$ is the positive constant defined by
$$C := \max \big(1, \| \Pi_{h_n} \|_{\mathcal{L}(H^1(\Omega), H^1(\Omega))} \big).$$ Hence \begin{align} \mathcal{J}^{\delta_n}_{h_n} \big(Q_n\big) + \rho_n
\big \| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} &\le 3C\overline{q} \left( \sigma^2_{h_n}(Q^\dag) + \delta_n^2 + \gamma^2_{h_n}\big(\mathcal{U}(Q^\dag)\big)\right) + \rho_n \big\| Q^\dag
\big\|^2_{{L^2(\Omega)}^{d\times d}}.\label{odinh22} \end{align} This implies \begin{align}
\limsup_n\big \| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} &\le \limsup_n \left( 3C\overline{q} \frac{\sigma^2_{h_n}(Q^\dag) + \delta_n^2 + \gamma^2_{h_n}\big(\mathcal{U}(Q^\dag)\big)}{\rho_n} +
\big\| Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} \right)\notag\\
&= \big\| Q^\dag
\big\|^2_{{L^2(\Omega)}^{d\times d}}.\label{odinh22*} \end{align} By Lemma \ref{Hd-convergent} and Lemma \ref{d-weak-convergent} there exists a subsequence which is not relabelled and elements $\Theta \in \mathcal{Q}_{ad}$, $\widehat{Q} \in {L^\infty(\Omega)}^{d \times d}$ such that $(Q_n)_n$ is Hd-convergent to $\Theta$, $(Q_n)_n$ weak$^*$ converges to $\widehat{Q}$ in ${L^\infty(\Omega)}^{d \times d}$, $\Theta(x) \preceq \widehat{Q}(x)$ a.e. in $\Omega$ and \begin{align}\label{23-6-15ct1}
\|\Theta\|^2_{{L^2(\Omega)}^{d \times d}} \le \|\widehat{Q}\|^2_{{L^2(\Omega)}^{d \times d}} \le \liminf_{n} \|Q_n\|^2_{{L^2(\Omega)}^{d \times d}}. \end{align} Moreover, the proof of Theorem \ref{odinh1} includes an argument which can be used to show that \begin{align*} \lim_n \mathcal{J}_{h_n}^{\delta_n}\big(Q_n\big) \ge \int_{\Omega} \Theta \nabla \big( \mathcal{U}(\Theta) - \mathcal{U}(Q^\dag)\big) \cdot \nabla \big( \mathcal{U}(\Theta) - \mathcal{U}(Q^\dag)\big). \end{align*} Then by (\ref{coercivity}) and (\ref{odinh22}), we arrive at \begin{align*}
\kappa \big\| \mathcal{U}(\Theta) - \mathcal{U}(Q^\dag)
\big\|^2_{H^1(\Omega)} \le \lim_n \mathcal{J}_{h_n}^{\delta_n}\big(Q_n\big) = 0. \end{align*} Therefore, $\Theta \in\mathcal{I}_{\mathcal{Q}_{ad}}(\overline{u}).$ Furthermore, by \eqref{23-6-15ct1} and \eqref{odinh22*} and the uniqueness of the minimum norm solution, we obtain that $\Theta = Q^\dag$. Finally, by \eqref{odinh22*}, the fact that $(Q_n)_n$ weak$^*$ converges to $\widehat{Q}$ in ${L^\infty(\Omega)}^{d \times d}$ and $Q^\dag(x) \preceq \widehat{Q}(x)$ a.e. in $\Omega$, we infer that \begin{align*}
\limsup_n\big \| Q_n - Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} &= \limsup_n \left( \big \| Q_n \big\|^2_{{L^2(\Omega)}^{d\times d}} + \big \| Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} - 2 \big\langle Q_n, Q^\dag \big\rangle_{{L^2(\Omega)}^{d\times d}}\right) \\
&\le 2\big \| Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} - 2 \big\langle \widehat{Q}, Q^\dag \big\rangle_{{L^2(\Omega)}^{d\times d}}\\
&\le 2\big \| Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} - 2 \big\langle Q^\dag, Q^\dag \big\rangle_{{L^2(\Omega)}^{d\times d}} \\ &=0. \end{align*} The proof is completed. \end{proof}
\section{Convergence rates}\label{tdht}
Now we state the result on convergence rates for Tikhonov regularization of our identification problem. Before presenting the result we recall some notions.
Any $\Psi \in L^\infty(\Omega)$ can be considered as an element in ${L^\infty(\Omega)}^*$ by \begin{align} \left\langle\Psi, \psi\right\rangle_{\left({L^\infty(\Omega)}^*, L^\infty(\Omega)\right)} := \int_\Omega \Psi \psi \label{inf1} \end{align} for all $\psi \mbox{~in~} L^\infty(\Omega)$ with
$\| \Psi \|_{{L^{\infty}(\Omega)}^*}\le |\Omega| \| \Psi \|_{L^\infty(\Omega)}$.
According to Lemma \ref{bd21}, for each $Q \in \mathcal{Q}_{ad}$ the mapping $$ \mathcal{U}'(Q): {L^{\infty}(\Omega)}^{d \times d} \rightarrow H^1_0(\Omega) $$ is a continuous operator with the dual $$ {\mathcal{U}'(Q)}^*:{H^1_0(\Omega)}^* \rightarrow {\left({L^{\infty}(\Omega)}^{d \times d}\right)}^*. $$ Let $w^* \in {H^1_0(\Omega)}^*$ be arbitrary but fixed. We consider the Dirichlet problem \begin{align}\label{25-9-15ct1} -\text{div} (Q^\dag\nabla w) = w^* \mbox{ in } \Omega \mbox{~and~} w=0 \mbox{~on~} \partial\Omega \end{align} which has a unique weak solution $w \in H^1_0(\Omega)$. Then for all $H \in {L^{\infty}(\Omega)}^{d \times d}$ we have \begin{align} \label{29-6-15ct1} \big \langle {\mathcal{U}'(Q^\dag)}^* w^*, H \big \rangle_{\big({{L^{\infty}(\Omega)}^{d \times d}}^*, {L^{\infty}(\Omega)}^{d \times d}\big)} &= \langle w^*, \mathcal{U}'(Q^\dag)H \rangle_{\big({H^1_0(\Omega)}^*, H^1_0(\Omega)\big)} \notag\\ &= \int_\Omega Q^\dag \nabla w \cdot \nabla \mathcal{U}'(Q^\dag)H. \end{align}
\begin{theorem}\label{nu21***} Assume that there is a functional $w^* \in {H^1_0 (\Omega)}^*$ such that \begin{align}\label{moi14***} {\mathcal{U}'(Q^\dag)}^*w^* = Q^\dag. \end{align} Then \begin{align} \label{29-6-15ct3}
\dfrac{\kappa}{4}\big\| \mathcal{U}_h(Q_h) - \mathcal{U} (Q^\dag) \big\|^2_{H^1(\Omega)} + \rho\big\|Q_h - Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} =\mathcal{O} \left(\delta^2 + \sigma_h^2(Q^\dag) + \gamma_h^2 \big( \mathcal{U}(Q^\dag) \big) + \gamma^2_h (w) + \rho^2 \right), \end{align} where $Q_h := Q^{\rho,\delta}_h$ is the unique solution of $\big( \mathcal{P}^{\rho,\delta}_h \big)$. \end{theorem}
We remark that in the case $\overline{u}, w \in H^2(\Omega)$, with $w$ from \eqref{25-9-15ct1}, C\'ea's lemma and \eqref{23/5:ct1} yield $\sigma_h(Q^\dag) \le Ch$, $\gamma_h\big( \mathcal{U}(Q^\dag) \big) \le Ch$ and $\gamma_h(w) \le Ch$. Therefore, the convergence rate
$$\dfrac{\kappa}{4}\big\| \mathcal{U}_h(Q_h) - \mathcal{U} (Q^\dag) \big\|^2_{H^1(\Omega)} + \rho\big\|Q_h - Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} =\mathcal{O} \left(\delta^2 + h^2 + \rho^2 \right)$$ is obtained.
By \eqref{ct10}, \eqref{inf1} and \eqref{29-6-15ct1}, the source condition \eqref{moi14***} is satisfied if there exists a functional $w \in H^1_0(\Omega)$ such that for all $H \in {L^{\infty}(\Omega)}^{d \times d}$ the equation $$\int_\Omega H \cdot Q^\dag = - \int_\Omega H \nabla \mathcal{U}(Q^\dag) \cdot \nabla w$$ holds. However, as we can see in \eqref{28-9-15ct1} below, the convergence rate \eqref{29-6-15ct3} is obtained under the weaker condition that there exists a functional $w \in H^1_0(\Omega)$ such that \begin{align}\label{28-9-15ct2}
\int_\Omega (Q^\dag - Q_h) \cdot Q^\dag \le \left| \int_\Omega (Q^\dag - Q_h) \nabla \mathcal{U}(Q^\dag) \cdot \nabla w\right|. \end{align}
\begin{lemma} If there exists a functional $w \in H^1_0(\Omega)$ such that \begin{align}\label{28-9-15ct3} Q^\dag(x) = P_{\mathcal{K}}\left(\nabla \mathcal{U}(Q^\dag) (x) \otimes \nabla w(x)\right) \mbox{~a.e.\ in~} \Omega, \end{align} then the condition \eqref{28-9-15ct2} holds. Thus the convergence rate \eqref{29-6-15ct3} is obtained. \end{lemma}
We note that \eqref{28-9-15ct3} is the projected source condition introduced in \cite{Deckelnick}. However, here we do not require any smoothness of the sought matrix or of the exact data.
\begin{proof} We have \begin{align*} \int_\Omega (Q^\dag - Q_h) \cdot Q^\dag &= - \int_\Omega P_{\mathcal{K}}\left(\nabla \mathcal{U}(Q^\dag) \otimes \nabla w\right) \cdot \left(Q_h - P_{\mathcal{K}}\left(\nabla \mathcal{U}(Q^\dag) \otimes \nabla w\right)\right) \\ &= \int_\Omega \left( \nabla \mathcal{U}(Q^\dag) \otimes \nabla w - P_{\mathcal{K}}\left(\nabla \mathcal{U}(Q^\dag) \otimes \nabla w\right) \right)\cdot \left(Q_h - P_{\mathcal{K}}\left(\nabla \mathcal{U}(Q^\dag) \otimes \nabla w\right)\right)\\ &~\quad - \int_\Omega \nabla \mathcal{U}(Q^\dag) \otimes \nabla w \cdot \left(Q_h - P_{\mathcal{K}}\left(\nabla \mathcal{U}(Q^\dag) \otimes \nabla w\right)\right)\\ &\le - \int_\Omega \nabla \mathcal{U}(Q^\dag) \otimes \nabla w \cdot \left(Q_h - P_{\mathcal{K}}\left(\nabla \mathcal{U}(Q^\dag) \otimes \nabla w\right)\right)\\
&\le \left| \int_\Omega \nabla \mathcal{U}(Q^\dag) \otimes \nabla w \cdot (Q^\dag - Q_h)\right|\\
&= \left| \int_\Omega (Q^\dag - Q_h) \nabla \mathcal{U}(Q^\dag) \cdot \nabla w\right|, \end{align*} which finishes the proof. \end{proof}
To prove Theorem \ref{nu21***} we need the following auxiliary result.
\begin{lemma}\label{auxi2} The estimate $$\mathcal{J}^\delta_h (Q^\dag) \le C\left (\delta^2 + \sigma_h^2(Q^\dag) + \gamma^2_h \big( \mathcal{U}(Q^\dag) \big)\right)$$ holds. \end{lemma}
\begin{proof} The stated inequality follows from an argument already contained in the proof of Theorem \ref{convergence1} and is therefore omitted. \end{proof}
\begin{proof}[Proof of Theorem \ref{nu21***}] Since $Q_h$ is the solution of the problem $\big(\mathcal{P}^{\rho,\delta}_h \big)$, we have that \begin{align}
\mathcal{J}^\delta_h \big( Q_h \big) + \rho \big \|
Q_h \big\|^2_{{L^2(\Omega)}^{d\times d}} &\le \mathcal{J}^\delta_h
\big(Q^\dag\big) + \rho \big\| Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} \nonumber \\
&\le C\left (\delta^2 + \sigma_h^2(Q^\dag) + \gamma^2_h \big( \mathcal{U}(Q^\dag) \big)\right) + \rho \big\| Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}}, \label{27-4-15:ct1} \end{align} by Lemma \ref{auxi2}. Thus, we get \begin{align}
\mathcal{J}_h^\delta \big( Q_h \big) &+ \rho \big\|
Q_h - Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} \nonumber\\ &\le C\left(\delta^2 + \sigma_h^2(Q^\dag) + \gamma_h^2 \big( \mathcal{U}(Q^\dag) \big) \right) \notag \\
&~\quad + \rho\left(\big\| Q^\dag
\big\|^2_{{L^2(\Omega)}^{d\times d}} - \big\| Q_h
\big\|^2_{{L^2(\Omega)}^{d\times d}}
+ \big\| Q_h - Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} \right)\nonumber\\ &= C\left (\delta^2 + \sigma_h^2(Q^\dag) + \gamma^2_h \big( \mathcal{U}(Q^\dag) \big)\right) + 2\rho \big\langle Q^\dag , Q^\dag - Q_h \big\rangle_{{L^2(\Omega)}^{d \times d}}. \label{mqq11} \end{align} By \eqref{inf1}, \eqref{moi14***} and \eqref{29-6-15ct1}, we have with $w$ from \eqref{25-9-15ct1} \begin{align}\label{28-9-15ct1} \big\langle Q^\dag , Q^\dag - Q_h \big\rangle_{{L^2(\Omega)}^{d \times d}} &= \big\langle Q^\dag , Q^\dag - Q_h \big\rangle_{\big({{L^{\infty}(\Omega)}^{d \times d}}^*, {L^{\infty}(\Omega)}^{d \times d}\big)} \notag \\ &= \big \langle {\mathcal{U}'(Q^\dag)}^* w^*, Q^\dag - Q_h \big \rangle_{\big({{L^{\infty}(\Omega)}^{d \times d}}^*, {L^{\infty}(\Omega)}^{d \times d}\big)} \notag\\ &= \langle w^*, \mathcal{U}'(Q^\dag) (Q^\dag - Q_h) \rangle_{\big({H^1_0(\Omega)}^*, H^1_0(\Omega)\big)} \notag\\ &= \int_\Omega Q^\dag \nabla \mathcal{U}'(Q^\dag) (Q^\dag - Q_h) \cdot \nabla w \notag\\ &= - \int_\Omega (Q^\dag - Q_h) \nabla \mathcal{U}(Q^\dag) \cdot \nabla w, \end{align} here we used the equation \eqref{ct10}. Hence by (\ref{4/6:m4}), we get \begin{align*} \big\langle Q^\dag , Q^\dag - Q_h \big\rangle_{{L^2(\Omega)}^{d \times d}} &= \int_\Omega Q_h \nabla \mathcal{U} ( Q^\dag ) \cdot \nabla w - \int_\Omega f w \\ &= \int_\Omega Q_h \nabla \mathcal{U} ( Q^\dag ) \cdot \nabla w - \int_\Omega Q_h \nabla \mathcal{U} ( Q_h ) \cdot \nabla w \\ &= \int_\Omega Q_h \nabla \big(\mathcal{U} ( Q^\dag ) - \mathcal{U} ( Q_h )\big) \cdot \nabla w \\ &= \int_\Omega Q_h \nabla \big(\mathcal{U} (Q^\dag)-\Pi_hz^{\delta} \big) \cdot \nabla w \\ &~\quad + \int_\Omega Q_h \nabla \big(\mathcal{U}_h ( Q_h) - \mathcal{U}(Q_h)\big) \cdot \nabla w\\ &~\quad + \int_\Omega Q_h \nabla \big(\Pi_hz^{\delta} - \mathcal{U}_h (Q_h)\big) \cdot \nabla w \\ &:= S_1 + S_2 + S_3. \end{align*} We have that \begin{align}\label{24-6-15ct1}
\big\| \mathcal{U} (Q^\dag) - \Pi_hz^{\delta} \big\|_{H^1 (\Omega)} &\le
\big\| \Pi_h \big (\mathcal{U} (Q^\dag) - z^{\delta} \big) \big\|_{H^1 (\Omega)} +
\big\| \mathcal{U} (Q^\dag) - \Pi_h \mathcal{U} (Q^\dag) \big\|_{H^1 (\Omega)} \notag\\
&\le \big\| \Pi_h \big\|_{\mathcal {L} (H^1 (\Omega), H^1(\Omega))} \big\|\mathcal{U} (Q^\dag) - z^{\delta} \big\|_{H^1
(\Omega)} + \big\| \mathcal{U} (Q^\dag) - \Pi_h \mathcal{U} (Q^\dag) \big\|_{H^1 (\Omega)} \notag\\
& \le \max \big(1, \| \Pi_h \|_{\mathcal{L}(H^1(\Omega), H^1(\Omega))} \big) \left(\delta + \gamma_h \big( \mathcal{U}(Q^\dag) \big)\right) \notag\\ &= C\left(\delta + \gamma_h \big( \mathcal{U}(Q^\dag) \big)\right). \end{align} Thus we obtain \begin{align*} S_1 &:= \int_\Omega Q_h \nabla \big(\mathcal{U} (Q^\dag)-\Pi_hz^{\delta} \big) \cdot \nabla w \\
&\le C \big \| \mathcal{U} (Q^\dag)-\Pi_hz^{\delta} \big \|_{H^1(\Omega)} \\ &\le C\left(\delta + \gamma_h \big( \mathcal{U}(Q^\dag) \big)\right). \end{align*} We deduce from \eqref{4/6:m4} and \eqref{10/4:ct1} that $$\int_\Omega Q_h \nabla \big(\mathcal{U}_h ( Q_h) - \mathcal{U}(Q_h)\big) \cdot \nabla v_h =0$$ for all $v_h \in \mathcal{V}^1_h.$ Therefore, we obtain \begin{align*} S_2 &:= \int_\Omega Q_h \nabla \big(\mathcal{U}_h ( Q_h) - \mathcal{U}(Q_h)\big) \cdot \nabla w \\ &= \int_\Omega Q_h \nabla \big(\mathcal{U}_h ( Q_h) - \mathcal{U}(Q_h)\big) \cdot \nabla ( w - \Pi_h w) \\
&\le \big\| Q_h \nabla \big(\mathcal{U}_h ( Q_h) - \mathcal{U}(Q_h)\big) \big\|_{L^2(\Omega)} \big\| \nabla ( w - \Pi_h w) \big\|_{L^2(\Omega)} \\
&\le C \big\| w - \Pi_h w\big\|_{H^1(\Omega)} \\ &\le C \gamma_h(w). \end{align*} Since $0 \preceq Q_h (x) \in \mathcal{S}_d$ for a.e. $x \in \Omega$, the root ${Q_h (x)}^{1/2}$ is well defined. Furthermore, \begin{align*} Q_h(x) \nabla \big(\Pi_hz^{\delta}(x) - \mathcal{U}_h(Q_h)(x)\big) &\cdot \nabla w(x) \\ &= {Q_h}^{1/2}(x) \nabla \big(\Pi_hz^{\delta}(x) -\mathcal{U}_h(Q_h)(x)\big) \cdot {Q_h}^{1/2}(x) \nabla w(x). \end{align*} Then applying the Cauchy-Schwarz inequality, we have \begin{align*} S_3 &:= \int_\Omega Q_h \nabla \big(\Pi_hz^{\delta} - \mathcal{U}_h (Q_h)\big) \cdot \nabla w \\ &\le \left(\int_\Omega Q_h \nabla \big( \mathcal{U}_h(Q_h) -\Pi_hz^{\delta} \big)\cdot \nabla \big( \mathcal{U}_h(Q_h) -\Pi_hz^{\delta} \big)\right)^{1/2} \left(\int_\Omega Q_h \nabla w \cdot \nabla w \right)^{1/2}\\ &\le C \left(\int_\Omega Q_h \nabla \big(\mathcal{U}_h ( Q_h )- \Pi_hz^{\delta} \big) \cdot \nabla \big(\mathcal{U}_h ( Q_h )- \Pi_hz^{\delta} \big) \right)^{1/2}. \end{align*} Using Young's inequality, we obtain \begin{align*} S_3 \le C \rho +\frac{1}{4\rho}\int_\Omega Q_h \nabla \big(\mathcal{U}_h ( Q_h )- \Pi_hz^{\delta} \big) \cdot \nabla \big(\mathcal{U}_h (Q_h )- \Pi_hz^{\delta} \big). \end{align*} Therefore, we arrive at \begin{align*} 2\rho \big\langle Q^\dag , Q^\dag - Q_h \big\rangle_{{L^2(\Omega)}^{d \times d}} \le C\left(\delta^2 + \gamma_h^2 \big( \mathcal{U}(Q^\dag) \big) + \gamma^2_h (w) + \rho^2 \right) + \frac{1}{2}\mathcal{J}_h^\delta \big( Q_h \big). \end{align*} Combining this with \eqref{mqq11}, we infer that \begin{align*}
\frac{1}{2}\mathcal{J}_h^\delta \big( Q_h \big) &+ \rho \big\|
Q_h - Q^\dag \big\|^2_{{L^2(\Omega)}^{d\times d}} \le C\left(\delta^2 + \sigma_h^2(Q^\dag) + \gamma_h^2 \big( \mathcal{U}(Q^\dag) \big) + \gamma^2_h (w) + \rho^2 \right). \end{align*} Now we have \begin{align*}
\dfrac{\kappa}{4} \big\|\mathcal{U}_h( Q_h ) - \mathcal{U}(Q^\dag)
\big\|^2_{H^1(\Omega)}
&\le \dfrac{\kappa}{2} \big\|\mathcal{U}_h( Q_h ) - \Pi_hz^{\delta} \big\|^2_{H^1(\Omega)} + \dfrac{\kappa}{2} \big\| \Pi_hz^{\delta} - \mathcal{U}(Q^\dag)\big\|^2_{H^1(\Omega)}\\ &\le \frac{1}{2}\mathcal{J}_h^\delta \big( Q_h \big) +\kappa C^2\left(\delta^2 + \gamma^2_h \big( \mathcal{U}(Q^\dag) \big)\right), \end{align*} by (\ref{coercivity}) and \eqref{24-6-15ct1}. Thus, we arrive at \eqref{29-6-15ct3}, which finishes the proof. \end{proof}
\section{Gradient-projection algorithm} \label{iterative}
For the numerical solution we use the gradient-projection algorithm of \cite{Enyi}. We note that many other efficient solution methods are available; see for example \cite{KeungZoz2000}.
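In outline, a gradient-projection method for $\big(\mathcal{P}^{\rho,\delta}_h\big)$ is the iteration $Q^{k+1} = P_{\mathcal{Q}_{ad} \cap \mathcal{V}_h}\big(Q^k - s\,{\Upsilon^{\rho,\delta}_h}'(Q^k)\big)$, where by \eqref{17/10/14:ct6} the gradient on each triangle is $-\big(\nabla \mathcal{U}_h(Q) \otimes \nabla \mathcal{U}_h(Q) - \nabla \Pi_h z^{\delta} \otimes \nabla \Pi_h z^{\delta}\big) + 2\rho Q$, and a step size $s$ of order $1/L_h$, with $L_h$ the Lipschitz constant from Lemma \ref{Lip} below, guarantees descent. The generic skeleton can be sketched as follows; the FEM-specific pieces (assembly of $\mathcal{U}_h(Q)$ and of the per-triangle gradient) are omitted, so the quadratic toy objective in the usage lines is purely illustrative and not the actual functional.

```python
import numpy as np

def gradient_projection(grad, project, x0, step, n_iter=500):
    """Generic gradient-projection iteration
        x_{k+1} = P(x_k - step * grad(x_k)).
    For a convex objective with L-Lipschitz gradient over a closed convex
    set, a step size step <= 1/L yields convergence to the minimizer."""
    x = project(x0)
    for _ in range(n_iter):
        x = project(x - step * grad(x))
    return x

# Illustrative toy usage: minimize ||x - a||^2 over the box [0, 1]^3
# (the box projection stands in for P onto Q_ad in the matrix case).
a = np.array([-0.5, 0.3, 1.7])
x_star = gradient_projection(grad=lambda x: 2.0 * (x - a),
                             project=lambda x: np.clip(x, 0.0, 1.0),
                             x0=np.zeros(3), step=0.4)
# x_star is the componentwise clipped vector [0.0, 0.3, 1.0]
```

Here the gradient of the toy objective is $2(x-a)$, so $L = 2$ and the step $0.4 \le 1/L$ is admissible.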
We consider the finite dimensional space $\mathcal{V}_h$ defined by (\ref{18/10/14:ct1}). Let $\underline{C}_h$ and $\overline{C}_h$ be positive constants such that \begin{align}\label{18/10/14:ct2}
\underline{C}_h \| H \|_{{L^2(\Omega)}^{d \times d}} \le
\| H \|_{{L^\infty(\Omega)}^{d \times d}} \le
\overline{C}_h \| H \|_{{L^2(\Omega)}^{d \times d}} \end{align} for all $H \in \mathcal{V}_h$.
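In the scalar analogue of \eqref{18/10/14:ct2} the constants can be made explicit for piecewise constants: one may take $\underline{C}_h = |\Omega|^{-1/2}$, while $\overline{C}_h$ behaves like $\big(\min_{T} |T|\big)^{-1/2}$, i.e.\ like $h^{-d/2}$ on quasi-uniform meshes; this is the source of the $h$-dependence of the Lipschitz constants in Lemma \ref{Lip*} and Lemma \ref{Lip} below. A one-dimensional scalar sanity check of the two bounds (an illustration only, not part of the analysis):

```python
import numpy as np

# Piecewise-constant function H on a 1D mesh of Omega = [0, 1]:
# ||H||_{L^2}^2 = sum_T h_T * H_T^2, hence
#   ||H||_inf   <= (min_T h_T)^(-1/2) * ||H||_{L^2}   (inverse estimate)
#   ||H||_{L^2} <= |Omega|^(1/2) * ||H||_inf
rng = np.random.default_rng(1)
h = rng.uniform(0.01, 0.05, size=40)
h = h / h.sum()                        # cell lengths with |Omega| = 1
H = rng.standard_normal(40)            # one constant value per cell

norm_L2 = np.sqrt(np.sum(h * H**2))
norm_inf = np.max(np.abs(H))
C_bar = np.min(h) ** -0.5              # explicit admissible choice of C_bar_h
assert norm_inf <= C_bar * norm_L2 + 1e-12
assert norm_L2 <= np.sqrt(h.sum()) * norm_inf + 1e-12
```

The first inequality degrades as the smallest cell shrinks, matching the $h^{-d/2}$ growth of $\overline{C}_h$.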
The following results are useful.
\begin{lemma} \label{Lip*} The discrete coefficient-to-solution operator $ \mathcal{U}_h $ is Lipschitz continuous on $\mathcal{Q}_{ad} \cap \mathcal{V}_h$ in the ${L^2(\Omega)}^{d \times d}$-norm with a Lipschitz constant \begin{align*} \frac{\overline{C}_h d}{\kappa^2}
\big\|f\big\|_{L^2(\Omega)}. \end{align*} \end{lemma} \begin{proof} For all $M, N \in \mathcal{Q}_{ad} \cap \mathcal{V}_h$ it follows from (\ref{10/4:ct1}) that \begin{align*} \int_\Omega M\nabla \mathcal{U}_h(M) \cdot \nabla v_h &= \int_\Omega fv_h \\ &= \int_\Omega N\nabla \mathcal{U}_h(N) \cdot \nabla v_h \end{align*} for all $v_h\in \mathcal{V}^1_h$. Thus \begin{align*} \int_\Omega M\nabla (\mathcal{U}_h(M) - \mathcal{U}_h(N)) \cdot \nabla v_h = \int_\Omega (N-M)\nabla \mathcal{U}_h(N) \cdot \nabla v_h. \end{align*} Choosing $v_h = \mathcal{U}_h(M) - \mathcal{U}_h(N)$, by (\ref{coercivity}), we have \begin{align*}
\kappa \big\| \mathcal{U}_h(M) - \mathcal{U}_h(N) \big
\|^2_{H^1(\Omega)} & \le d\| M - N \|_{{L^\infty(\Omega)}^{d \times d}}
\| \mathcal{U}_h(N) \|_{H^1(\Omega)}
\| \mathcal{U}_h(M) - \mathcal{U}_h(N) \|_{H^1(\Omega)}. \end{align*} Therefore, from (\ref{18/5:ct1}) and (\ref{18/10/14:ct2}) we arrive at \begin{align} \label{18/10/14:ct3}
\| \mathcal{U}_h(M) - \mathcal{U}_h(N) \|_{H^1(\Omega)} & \le \frac{\overline{C}_h d}{\kappa^2}
\big\|f\big\|_{L^2(\Omega)}
\| M - N \|_{{L^2(\Omega)}^{d \times d}}. \end{align} This finishes the proof. \end{proof}
\begin{lemma} \label{Lip} The objective functional $ \Upsilon^{\rho,\delta}_h $ of $\big(\mathcal{P}^{\rho,\delta}_h \big)$ has the property that the gradient is Lipschitz continuous on $\mathcal{Q}_{ad} \cap \mathcal{V}_h$ in the ${L^2(\Omega)}^{d \times d}$-norm with a Lipschitz constant \begin{align*} L_h := 2 \overline{C}_h d \left( \frac{\overline{C}_h d}{\kappa^3}
\big\|f\big\|_{L^2(\Omega)}^2 + \rho |\Omega|^{1/2} \right). \end{align*} In other words, the estimate
$$\| {\Upsilon^{\rho,\delta}_h}' (M) - {\Upsilon^{\rho,\delta}_h}' (N)
\|_{\mathcal{L}({L^2(\Omega)}^{d \times d}, {R})}
\le L_h \| M - N \|_{{L^2(\Omega)}^{d \times d}}$$ is satisfied for all $M, N \in \mathcal{Q}_{ad} \cap \mathcal{V}_h$. \end{lemma}
\begin{proof} Since all norms on $\mathcal{V}_h$ are equivalent, $\mathcal{U}_h$ is Fr\'echet differentiable on the set $\mathcal{Q}_{ad} \cap \mathcal{V}_h$ in the ${L^\infty(\Omega)}^{d\times d}$-norm and thus in the ${L^2(\Omega)}^{d\times d}$-norm. For all $M, N \in \mathcal{Q}_{ad} \cap \mathcal{V}_h$ and $H \in \mathcal{V}_h$, in view of (\ref{17/10/14:ct6}), we get \begin{align*}
\left| {\Upsilon^{\rho,\delta}_h}' (M)H
- {\Upsilon^{\rho,\delta}_h}' (N)H \right|
&= \Big| \int_{\Omega} H \nabla \mathcal{U}_h(N) \cdot \nabla \mathcal{U}_h(N) - \int_{\Omega} H \nabla \mathcal{U}_h(M) \cdot \nabla \mathcal{U}_h(M)\\ &~\quad + 2\rho \int_\Omega H \cdot M
- 2\rho \int_\Omega H \cdot N\Big|. \end{align*} Thus \begin{align*}
\Big| {\Upsilon^{\rho,\delta}_h}' (M)H
&- {\Upsilon^{\rho,\delta}_h}' (N)H \Big| \\
&= \Big| \int_{\Omega} H \nabla (\mathcal{U}_h(N) - \mathcal{U}_h(M)) \cdot \nabla(\mathcal{U}_h(N) + \mathcal{U}_h(M))
+ 2\rho \int_\Omega H \cdot (M - N) \Big|\\
&\le \left(\int_{\Omega} |H \nabla (\mathcal{U}_h(N) - \mathcal{U}_h(M))|^2\right)^{1/2}
\left(\int_{\Omega} | \nabla (\mathcal{U}_h(N)
+ \mathcal{U}_h(M))|^2\right)^{1/2}\\ &~\quad + 2\rho \left(\int_{\Omega} H \cdot H\right)^{1/2} \left(\int_{\Omega} (M - N) \cdot (M - N)\right)^{1/2}\\
&\le d \| H \|_{{L^\infty(\Omega)}^{d \times d}}
\| \mathcal{U}_h(N) - \mathcal{U}_h(M) \|_{H^1(\Omega)}
\| \mathcal{U}_h(N) + \mathcal{U}_h(M) \|_{H^1(\Omega)}\\
&~\quad + 2\rho d \| H \|_{{L^\infty(\Omega)}^{d \times d}} |\Omega|^{1/2}
\| M - N \|_{{L^2(\Omega)}^{d \times d}}. \end{align*} From the estimates (\ref{18/5:ct1}), (\ref{18/10/14:ct2}) and (\ref{18/10/14:ct3}) we now get \begin{align*}
\Big| {\Upsilon^{\rho,\delta}_h}' (M)H &-
{\Upsilon^{\rho,\delta}_h}' (N)H \Big| \\
&\le d \| H \|_{{L^\infty(\Omega)}^{d \times d}}
\| \mathcal{U}_h(N) - \mathcal{U}_h(M) \|_{H^1(\Omega)}
\left(\| \mathcal{U}_h(N) \|_{H^1(\Omega)} +
\| \mathcal{U}_h(M) \|_{H^1(\Omega)}\right)\\
&~\quad + 2\rho d \| H \|_{{L^\infty(\Omega)}^{d \times d}} |\Omega|^{1/2}
\| M - N \|_{{L^2(\Omega)}^{d \times d}}\\ &\le 2 \overline{C}_h d \left( \frac{\overline{C}_h d}{\kappa^3}
\big\|f\big\|_{L^2(\Omega)}^2 + \rho |\Omega|^{1/2} \right)
\| H \|_{{L^2(\Omega)}^{d \times d}} \| M - N \|_{{L^2(\Omega)}^{d \times d}}. \end{align*} The lemma is proved. \end{proof}
\begin{lemma} [\cite{Enyi}]\label{cite} Let $X$ be a non-empty, closed and convex subset of a Hilbert space $\mathcal{X}$ and $\mathfrak{F}: X \to {R}$ be a convex Fr\'echet differentiable functional with the gradient $\nabla \mathfrak{F}$ being $L$-Lipschitzian. Assume that the problem \begin{align} \label{21/10/2014:ct1} \min_{x \in X} \mathfrak{F}(x) \end{align} is consistent and let $S$ denote its solution set. Let ${(\alpha_m)}_m, {(\beta_m)}_m$ and ${(\gamma_m)}_m$ be real sequences satisfying ${(\alpha_m)}_m \subset (0,1), ~ \overline{{(\beta_m)}_m} \subset (0,1), ~ {(\gamma_m)}_m \subset (0, L/2)$ and the following additional conditions $$\lim_{m} \alpha_m = 0,~ \sum_{m = 1}^{\infty} \alpha_m = \infty ~ \mbox{and} ~ 0< \liminf_{m} \gamma_m \le \limsup_m \gamma_m < L/2.$$ Then, for any given $x^* \in X$, the iterative sequence ${(x_m)}_m$ generated by $x_1 \in X$, \begin{align}\label{25-9-15ct2} x_{m+1} := (1 - \beta_m) x_m + \beta_m P_X \left(x_m - \gamma_m \nabla \mathfrak{F} (x_m)\right) + \alpha_m (x^* - x_m) \end{align} converges strongly to the minimizer $x^\dag = P_S x^*$ of the problem (\ref{21/10/2014:ct1}). \end{lemma}
To identify a stopping criterion for the iteration \eqref{25-9-15ct2} we adopt the following result.
\begin{lemma}\label{stopping criterion} Let $X$ be a non-empty, closed and convex subset of a Hilbert space $\mathcal{X}$ and $\mathfrak{F}: X \to {R}$ be a convex Fr\'echet differentiable functional with the gradient $\nabla \mathfrak{F}$. Assume that the problem \begin{align} \label{25-9-15ct3} \min_{x \in X} \mathfrak{F}(x) \end{align} is consistent. Then $x^\dag$ is a solution to \eqref{25-9-15ct3} if and only if the equation $$x^\dag = P_X \left( x^\dag - \gamma \nabla \mathfrak{F} (x^\dag)\right) $$ holds, where $\gamma$ is an arbitrary positive constant. \end{lemma}
\begin{proof} Since $\mathfrak{F}$ is convex differentiable, we have for all $\gamma>0$ that \begin{align*} x^\dag \mbox{solves \eqref{25-9-15ct3}} &\Leftrightarrow \left\langle \gamma\nabla \mathfrak{F} (x^\dag), x-x^\dag \right\rangle _{\mathcal{X}} \ge 0 ~\quad \mbox{for all~} x \in X\\ &\Leftrightarrow \left\langle \left( x^\dag - \gamma\nabla \mathfrak{F} (x^\dag) \right) - x^\dag, x-x^\dag\right\rangle _{\mathcal{X}} \le 0 ~\quad \mbox{for all~} x \in X \\ & \Leftrightarrow x^\dag = P_X \left( x^\dag - \gamma \nabla \mathfrak{F} (x^\dag)\right), \end{align*} which finishes the proof. \end{proof}
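To make the iteration \eqref{25-9-15ct2} and the fixed-point stopping test of Lemma \ref{stopping criterion} concrete, the following minimal Python sketch runs the three-sequence scheme on a toy problem of our own choosing (a quadratic over a box; the data $c$, $x^*$ and the parameter sequences are illustrative assumptions, not the identification problem itself):

```python
import numpy as np

# Toy instance of iteration (25-9-15ct2): minimize F(x) = 0.5*||x - c||^2
# over the box X = [0,1]^2; grad F(x) = x - c is 1-Lipschitz, and the
# unique minimizer is the projection of c onto X, here (1, 0).
c = np.array([1.5, -0.3])
proj = lambda z: np.clip(z, 0.0, 1.0)            # P_X
grad = lambda z: z - c

x_prior = np.array([0.2, 0.2])                   # prior estimate x^*
x = np.zeros(2)                                  # starting point x_1
for m in range(1, 5001):
    alpha, beta, gamma = 1.0 / (10 * m), 0.5, 0.4   # alpha_m -> 0, sum diverges
    x = (1 - beta) * x + beta * proj(x - gamma * grad(x)) + alpha * (x_prior - x)

# fixed-point residual from the stopping-criterion lemma; small when x
# is close to the minimizer
tol = np.linalg.norm(x - proj(x - gamma * grad(x)))
```

The iterate approaches the minimizer $(1,0)$ and the residual `tol` serves as a computable stopping quantity, mirroring the tolerance used in the numerical tests below.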
Now we state the main result of this section on the strong convergence of iterative solutions to the solution of our identification problem.
\begin{theorem}\label{algorithm} Let $\left(\mathcal{T}_{h_n}\right)_n$ be a sequence of triangulations with mesh sizes $\left(h_n\right)_n$. For any positive sequence $\left(\delta_n\right)_n$, let $\rho_n := \rho\left(\delta_n\right)$ be such that $$\rho_n\rightarrow 0, ~\frac{\delta_n^2}{\rho_n} \rightarrow 0, ~\frac{\sigma^2_{h_n}(Q^\dag)}{\rho_n} \rightarrow 0 \mbox{~and~} \frac{\gamma^2_{h_n}(Q^\dag)}{\rho_n} \rightarrow 0$$ and $\left(z^{\delta_n}\right)_n$ be observations satisfying
$\big\| \overline{u} - z^{\delta_n}
\big\|_{H^1_0(\Omega)} \le \delta_n$.
Moreover, for any fixed $n$ let ${(\alpha^n_m)}_m, {(\beta^n_m)}_m$ and ${(\gamma^n_m)}_m$ be real sequences satisfying \begin{quote} ${(\alpha^n_m)}_m \subset (0,1), ~ \overline{{(\beta^n_m)}_m} \subset (0,1), ~{(\gamma^n_m)}_m \subset (0, L_{h_n}/2)$,
$\lim_{m} \alpha^n_m = 0,~ \sum_{m = 1}^{\infty} \alpha^n_m = \infty$ and
$0< \liminf_m \gamma^n_m \le \limsup_m \gamma^n_m < L_{h_n}/2$ \end{quote} with \begin{align}\label{28-4-15-ct1}
L_{h_n}:= 2 \overline{C}_{h_n} d \left( \frac{\overline{C}_{h_n} d}{\kappa^3} \big\|f\big\|_{L^2(\Omega)}^2 + \rho_n |\Omega|^{1/2}\right). \end{align} Let $Q^*$ be a prior estimate of the sought matrix $Q^\dag$ and let ${(Q^n_m)}_m$ be the sequence of iterates generated by \begin{align}\label{30-4-15:ct1} \begin{split} &Q^n_0 \in \mathcal{Q}_{ad} \cap \mathcal{V}_{h_n}\\ &Q^n_{m} := (1 - \beta^n_{m-1}) Q^n_{m-1} + \alpha^n_{m-1} (Q^* - Q^n_{m-1}) \\ &\phantom{xxx} + \beta^n_{m-1} P_{\mathcal{Q}_{ad} \cap \mathcal{V}_{h_n}} \big(Q^n_{m-1} - \gamma^n_{m-1} \big( \nabla \Pi_{h_n} z^{\delta_n} \otimes \nabla \Pi_{h_n }z^{\delta_n} -\nabla \mathcal{U}_{h_n}(Q^n_{m-1}) \otimes \nabla \mathcal{U}_{h_n}(Q^n_{m-1}) \\ &\phantom{xxx} + 2\rho_n Q^n_{m-1}\big)\big). \end{split} \end{align} Then ${(Q^n_m)}_m$ converges strongly to the unique minimizer $ Q^{\rho_n,\delta_n}_{h_n}$ of $\left( \mathcal{P}^{\rho_n,\delta_n}_{h_n} \right)$,
$$\lim_m \| Q^n_m - Q^{\rho_n,\delta_n}_{h_n}
\|_{{L^2(\Omega)}^{d \times d}} = 0. $$ Furthermore, ${(Q^n_m)}_m^n$ converges strongly to the minimum norm solution $Q^\dag$ of the identification problem,
$$\lim_n \big( \lim_m \| Q^n_m - Q^\dag
\|_{{L^2(\Omega)}^{d \times d}} \big) = 0. $$ \end{theorem}
\begin{proof} Since $$\nabla \Upsilon^{\rho,\delta}_h (Q) = \nabla
\Pi_hz^{\delta} \otimes \nabla \Pi_hz^{\delta} - \nabla \mathcal{U}_h(Q) \otimes \nabla \mathcal{U}_h(Q) + 2\rho Q$$ for all $Q \in \mathcal{Q}_{ad} \cap \mathcal{V}_h$, the conclusion of the theorem follows directly from Theorem \ref{convergence1}, Lemma \ref{Lip} and Lemma \ref{cite}. \end{proof}
\section{Numerical tests}\label{Numerical implement}
In this section we illustrate the theoretical result with two numerical examples. The first one is provided to compare with the numerical results obtained in \cite{Deckelnick}, while the second one aims to illustrate the discontinuous coefficient identification problem.
For this purpose we consider the Dirichlet problem \begin{align} -\text{div} (Q^\dag\nabla \overline{u}) &= f \mbox{ in } \Omega, \label{m1**}\\ \overline{u} &=0 \mbox{ on } {\partial \Omega} \label{qmict3**} \end{align}
with $\Omega = \{ x = (x_1,x_2) \in {R}^2 ~|~ -1 < x_1, x_2 < 1\}$ and \begin{align}\label{4-3-16ct1} \overline{u}(x) &= (1-x^2_1)(1-x^2_2). \end{align} Now we divide the interval $(-1,1)$ into $\ell$ equal segments, so that the domain $\Omega = (-1,1)^2$ is divided into $2\ell^2$ triangles, where the diameter of each triangle is $h_{\ell} = \frac{\sqrt{8}}{\ell}$. In the minimization problem $\left(\mathcal{P}^{\rho,\delta}_h \right)$ we take $h=h_\ell$ and $\rho = \rho_\ell = 0.001h_\ell$. For observations with noise we assume that $$z^{\delta_{\ell}} = \overline{u} + \dfrac{x_1}{\ell} + \dfrac{x_2}{\ell} \mbox{~and~} \Pi_{h_\ell} z^{\delta_{\ell}} = I^1_{h_{\ell}} \left( \overline{u} + \dfrac{x_1}{\ell} + \dfrac{x_2}{\ell}\right) $$ so that
$$\delta_\ell = \left\| z^{\delta_{\ell}} - \overline{u} \right\|_{H^1(\Omega)} = \left\| \dfrac{x_1}{\ell} + \dfrac{x_2}{\ell}\right\|_{H^1(\Omega)} = \sqrt{\frac{32}{3}} \frac{1}{\ell}= \dfrac{2}{\sqrt{3}} h_\ell.$$ The constants $\underline{q}$ and $\overline{q}$ in the definition of the set $\mathcal{K}$ are respectively chosen as 0.05 and 10. We use the gradient-projection algorithm which is described in Theorem \ref{algorithm} for computing the solution of the problem $\left(\mathcal{P}^{\rho,\delta}_h \right)$.
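The closed-form value of $\delta_\ell$ above can be verified with a short symbolic side computation (assuming SymPy is available; this only checks the stated $H^1$-norm, nothing more):

```python
import sympy as sp

x1, x2, ell = sp.symbols('x1 x2 ell', positive=True)
v = x1/ell + x2/ell                      # the noise z^{delta_l} - u_bar

# squared H^1(Omega)-norm on Omega = (-1,1)^2
grad_sq = sp.diff(v, x1)**2 + sp.diff(v, x2)**2
h1_sq = sp.integrate(grad_sq + v**2, (x1, -1, 1), (x2, -1, 1))
delta = sp.sqrt(h1_sq)                   # should equal sqrt(32/3)/ell
```

Since $h_\ell = \sqrt{8}/\ell$, this is exactly $\frac{2}{\sqrt{3}}h_\ell$, as claimed.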
Note that in \eqref{28-4-15-ct1} we have $d=2$ and $\overline{C}_{h_{\ell}} = \frac{\ell}{\sqrt{2}}$. Moreover, for all $Q\in \mathcal{Q}_{ad}$ and $v \in H^1_0(\Omega)$ we can estimate \begin{align*}
\|v\|^2_{H^1(\Omega)} &= \int_\Omega |\nabla v|^2 + \int_\Omega | v|^2\\
&\le \int_\Omega |\nabla v|^2 + \left( \sqrt{\frac{3}{2}} \right)^{(d+2)/2} |\Omega|^{1/d} \int_\Omega | \nabla v|^2\\
&\le \dfrac{1}{\underline{q}} \left( 1 + \left( \sqrt{\frac{3}{2}} \right)^{(d+2)/2} |\Omega|^{1/d}\right) \int_\Omega Q \nabla v \cdot \nabla v. \end{align*} So we can choose \begin{align*}
\kappa = \dfrac{\underline{q}}{1 + \left( \sqrt{\frac{3}{2}} \right)^{(d+2)/2} |\Omega|^{1/d}}. \end{align*} As the initial matrix $Q_0$ in \eqref{30-4-15:ct1} we choose $$Q_0 := \begin{pmatrix} 2&0\\ 0&2 \end{pmatrix}.$$ The prior estimate is chosen with $Q^* := I^1_{h_\ell} Q^\dag$. Furthermore, the sequences ${(\alpha_m)}_m, {(\beta_m)}_m$ and ${(\gamma_m)}_m$ are chosen with $$\alpha_m = \frac{1}{100m}, ~~ \beta_m = \frac{100m\rho_{\ell}}{3m+1}, ~~\mbox{and}~ \gamma_m = \frac{100m\rho_{\ell}}{2m+1}.$$ Let $Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}$ denote the computed numerical matrix with respect to $\ell$ and the iteration \eqref{30-4-15:ct1}. According to Lemma \ref{stopping criterion}, the iteration was stopped if
$$\mbox{Tolerance~}:= \left\|Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} - P_{\mathcal{Q}_{ad} \cap \mathcal{V}_{h_\ell}} \left( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} - \gamma_m \nabla \Upsilon^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) \right) \right\|_{{L^2(\Omega)}^{d\times d}} < 10^{-6}$$ with $$\nabla \Upsilon^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) = \nabla
\Pi_{h_\ell}z^{\delta_\ell} \otimes \nabla \Pi_{h_\ell}z^{\delta_\ell} - \nabla \mathcal{U}_{h_\ell} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) \otimes \nabla \mathcal{U}_{h_\ell} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) + 2\rho_\ell Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}$$ or the number of iterations reached 500.
\subsection{Example 1}\label{eg1}
We now assume that \begin{align*} Q^\dag (x) &= P_{\mathcal{K}}\left( \nabla \overline{u}(x) \otimes \nabla \overline{u}(x)\right). \end{align*}
Let us denote $\eta(x) = 4(x_1^2(1-x_2^2)^2 + x_2^2(1-x_1^2)^2)$ and $P_{[\underline{q}, \overline{q}]} (\eta(x)) = \max\left(\underline{q}, \min(\eta(x), \overline{q})\right)$. A calculation shows $$Q^\dag(x) = \begin{cases} \underline{q}I_2 & \mbox{~if~} \eta(x)=0,\\ \underline{q}I_2 + \frac{P_{[\underline{q}, \overline{q}]} (\eta(x)) - \underline{q}}{\eta(x)} \nabla \overline{u}(x) \otimes \nabla \overline{u}(x) & \mbox{~if~} \eta(x) \neq 0. \end{cases}$$ Then along with $\overline{u}$ given in the equation \eqref{4-3-16ct1} one can compute the right hand side $f$ in the equation \eqref{m1**}.
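The closed-form matrix $Q^\dag(x)$ can be evaluated pointwise from the formula above, for instance as follows (a small sketch assuming NumPy; it is used here only to illustrate the exact coefficient, not the solver):

```python
import numpy as np

q_lo, q_hi = 0.05, 10.0                     # underline{q}, overline{q}

def Q_dagger(x1, x2):
    # grad u_bar for u_bar = (1 - x1^2)(1 - x2^2)
    gu = np.array([-2.0 * x1 * (1 - x2**2), -2.0 * x2 * (1 - x1**2)])
    eta = gu @ gu                           # eta(x) = |grad u_bar(x)|^2
    Q = q_lo * np.eye(2)
    if eta != 0.0:
        # P_{[q_lo, q_hi]}(eta) is the scalar clipping in the formula
        Q += (np.clip(eta, q_lo, q_hi) - q_lo) / eta * np.outer(gu, gu)
    return Q
```

By construction each value is symmetric with eigenvalues in $[\underline{q}, \overline{q}]$, i.e. in the admissible set $\mathcal{K}$.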
The numerical results are summarized in Table \ref{b1}, where we present the refinement level $\ell$, regularization parameter $\rho_\ell$, mesh size $h_\ell$ of the triangulation, noise level $\delta_\ell$, number of iterations, value of tolerances, the final $L^2$ and $L^\infty$-error in the coefficients, and the final $L^2$ and $H^1$-error in the states. Their experimental order of convergence (EOC) is presented in Table \ref{b11}, where $$\mbox{EOC}_\Phi := \dfrac{\ln \Phi(h_1) - \ln \Phi(h_2)}{\ln h_1 - \ln h_2}$$ and $\Phi(h)$ is an error functional with respect to the mesh size $h$.
In Table \ref{b2} we present the numerical result for $\ell = 96$, where the value of tolerances, the final $L^2$ and $L^\infty$-error in the coefficients, and the final $L^2$ and $H^1$-error in the states are displayed after every one hundred iterations. The convergence history given in Table \ref{b1}, Table \ref{b11} and Table \ref{b2} shows that the gradient-projection algorithm performs well for our identification problem.
All figures are presented here corresponding to $\ell = 96$. Figure \ref{h1}, from left to right, shows the graphs of the interpolation $I^1_{h_{\ell}} \overline{u}$, the computed numerical state of the algorithm at the 500$^{\mbox{\tiny th}}$ iteration, and the difference to $I^1_{h_{\ell}}\overline{u}$. We write $$Q^\dag = \begin{pmatrix} q^\dag_{11}&q^\dag_{12}\\ q^\dag_{12}&q^\dag_{22} \end{pmatrix} ~~\mbox{and}~~ Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} = \begin{pmatrix} {q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{11} & {q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{12}\\ {q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{12} & {q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{22} \end{pmatrix}.$$ In Figure \ref{h3}, from left to right, we display $I^1_{h_{\ell}} q^\dag_{11}, I^1_{h_{\ell}}q^\dag_{12}$ and $I^1_{h_{\ell}}q^\dag_{22}$. Figure \ref{h4} shows ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{11}, {q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{12}$ and ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{22}$. In Figure \ref{h5}, from left to right, we display the differences ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{11} - I^1_{h_{\ell}} q^\dag_{11}$, ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{12} - I^1_{h_{\ell}} q^\dag_{12}$ and ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{22} - I^1_{h_{\ell}} q^\dag_{22}$.
For the simplicity of the notation we denote by \begin{align}
& \Gamma := \|Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} - I^1_{h_\ell} Q^\dag\|_{{L^2(\Omega)}^{d\times d}},~ \Delta := \|Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} - I^1_{h_\ell} Q^\dag\|_{{L^\infty(\Omega)}^{d\times d}}, \label{4-3-16ct2}\\
& \Sigma := \|\mathcal{U}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) - I^1_{h_\ell}\overline{u}\|_{L^2(\Omega)} \mbox{~and~} \Xi := \|\mathcal{U}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) - I^1_{h_\ell}\overline{u}\|_{H^1(\Omega)}.\label{4-3-16ct3} \end{align}
\begin{table}[H] \begin{center}
\begin{tabular}{|c|l|l|l|l|l|l|l|l|l|}
\hline \multicolumn{10}{|c|}{ {\bf Convergence history} }\\
\hline $\ell$ &\scriptsize $\rho_\ell$ &\scriptsize $h_\ell$ &\scriptsize $\delta_\ell$ &\scriptsize {\bf Ite.} &\scriptsize {\bf Tol.} &\scriptsize $\Gamma$ &\scriptsize $\Delta$ &\scriptsize $\Sigma$ &\scriptsize $\Xi$ \\ \hline 6 &4.7140e-4 &0.4714 &0.5443& 500& 0.0165 & 1.0481e-3 & 1.0381e-3 & 0.041001 &0.20961 \\ \hline 12 &2.3570e-4 &0.2357 &0.2722& 500& 0.0057 & 1.3471e-4 & 1.0825e-4 & 0.012848 &0.070352 \\ \hline 24 &1.1785e-4 &0.1179 &0.1361& 500& 0.0014 & 1.6826e-5 & 1.2084e-5 & 4.1855e-3 &0.023265 \\ \hline 48 &5.8926e-5 &0.0589 &0.0680& 500& 3.5653e-4 & 2.0962e-6 & 1.4577e-6 & 9.8725e-4 &6.3264e-3 \\ \hline 96 &2.9463e-5 &0.0295 &0.0340& 500& 8.9059e-5 & 2.6177e-7 & 1.8007e-7 & 2.6475e-4 &1.8936e-3 \\ \hline \end{tabular} \caption{Refinement level $\ell$, regularization parameter $\rho_\ell$, mesh size $h_\ell$ of the triangulation, noise level $\delta_\ell$, number of iterations, value of tolerances, errors $\Gamma$, $\Delta$, $\Sigma$ and $\Xi$.} \label{b1} \end{center} \end{table}
\begin{table}[H] \begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline \multicolumn{5}{|c|}{ {\bf Experimental order of convergence} }\\
\hline $\ell$ &\scriptsize {\bf EOC$_\Gamma$} &\scriptsize {\bf EOC$_\Delta$} &\scriptsize {\bf EOC$_\Sigma$} &\scriptsize {\bf EOC$_\Xi$}\\ \hline 6 & --& -- & -- &--\\ \hline 12 & 2.9598 & 3.2615 & 1.6741& 1.5750\\ \hline 24 & 3.0011 & 3.1632 & 1.6181 &1.5964 \\ \hline 48 & 3.0048 & 3.0513 & 2.0839 &1.8787\\ \hline 96 & 3.0014 & 3.0171 & 1.8988 &1.7403 \\ \hline Mean of EOC & 2.9918 & 3.1233 & 1.8187 & 1.6976 \\ \hline \end{tabular} \caption{Experimental order of convergence between finest and coarsest level for $\Gamma$, $\Delta$, $\Sigma$ and $\Xi$.} \label{b11} \end{center} \end{table}
\begin{table}[H] \begin{center}
\begin{tabular}{|c|l|l|l|l|l|}
\hline \multicolumn{6}{|c|}{ {\bf Numerical result for $\ell = 96$}} \\
\hline { \bf Iterations} &\scriptsize {\bf Tolerances} &\scriptsize $\Gamma$ &\scriptsize $\Delta$ &\scriptsize $\Sigma$ &\scriptsize $\Xi$\\ \hline 100 & 0.1875 & 5.0541 & 4.5433 & 4.6756 & 27.2404\\ \hline 200 & 9.8522e-3 & 0.01224 & 0.58589 & 6.6362e-3 & 0.021169\\ \hline 300 & 8.9476e-5 & 5.8712e-7 & 6.0808e-7 & 2.6475e-4 & 1.8936e-3\\ \hline 400 & 8.9386e-5 & 3.0944e-7 & 2.4408e-7 & 2.6475e-4 & 1.8936e-3\\ \hline 500 & 8.9059e-5 & 2.6177e-7 & 1.8007e-7 & 2.6475e-4 & 1.8936e-3\\ \hline
\end{tabular}
\caption{Errors $\Gamma$, $\Delta$, $\Sigma$ and $\Xi$ for $\ell = 96$.} \label{b2} \end{center} \end{table}
\begin{figure}\label{h1}
\end{figure}
\begin{figure}\label{h3}
\end{figure}
\begin{figure}\label{h4}
\end{figure}
\begin{figure}\label{h5}
\end{figure}
\subsection{Example 2}\label{eg2}
We next assume that the entries of the symmetric matrix $Q^\dag \in \mathcal{Q}_{ad}$ are discontinuous, defined as $$q^\dag_{11}(x) = \begin{cases} 3 & \mbox{~if~} x\in\Omega_{11}\\ 1 & \mbox{~if~} x\in\Omega\setminus\Omega_{11} \end{cases},~ q^\dag_{12}(x) = \begin{cases} 1 & \mbox{~if~} x\in\Omega_{12}\\ 0 & \mbox{~if~} x\in\Omega\setminus\Omega_{12} \end{cases} \mbox{~and~} q^\dag_{22}(x) = \begin{cases} 4 & \mbox{~if~} x\in\Omega_{22}\\ 2 & \mbox{~if~} x\in\Omega\setminus\Omega_{22} \end{cases}$$ with \begin{align*}
&\Omega_{11} := \left\{ (x_1, x_2) \in \Omega ~\Big|~ |x_1| \le \frac{1}{2} \mbox{~and~} |x_2| \le \frac{1}{2} \right\},\\
&\Omega_{12} := \left\{ (x_1, x_2) \in \Omega ~\Big|~ |x_1| + |x_2| \le \frac{1}{2} \right\} \mbox{~and~}\\
&\Omega_{22} := \left\{ (x_1, x_2) \in \Omega ~\Big|~ x_1^2 + x_2^2 \le \left( \frac{1}{2}\right)^2 \right\}. \end{align*} Since the entries of the matrix $Q^\dag$ are discontinuous, the right hand side $f$ in the equation \eqref{m1**} is now given in the form of a load vector $$F = KU,$$ where $K=\left( k_{ij}\right)_{1\le i,j\le N_\ell}$ with $$k_{ij} := \int_\Omega Q^\dag \nabla \phi_i \cdot \nabla \phi_j$$ and $\{\phi_1, \cdots, \phi_{N_\ell}\}$ being the basis for the approximating subspace $\mathcal{V}^1_{h_\ell}$, while the vector $U$ contains the nodal values of the function $\overline{u}$.
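For reference, the three discontinuous entries can be evaluated pointwise as follows (a small sketch assuming NumPy; the quadrature used to assemble the load vector $F$ is not reproduced here):

```python
import numpy as np

# indicator-based entries of Q^dagger on the square, the diamond and the disk
def q11(x1, x2): return 3.0 if abs(x1) <= 0.5 and abs(x2) <= 0.5 else 1.0
def q12(x1, x2): return 1.0 if abs(x1) + abs(x2) <= 0.5 else 0.0
def q22(x1, x2): return 4.0 if x1**2 + x2**2 <= 0.25 else 2.0

def Q_dagger(x1, x2):
    return np.array([[q11(x1, x2), q12(x1, x2)],
                     [q12(x1, x2), q22(x1, x2)]])
```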
With the notation on the errors $\Gamma$, $\Delta$, $\Sigma$ and $\Xi$ as in the equations \eqref{4-3-16ct2}-\eqref{4-3-16ct3}, the numerical results of Example \ref{eg2} are summarized in Table \ref{b4}. For clarity we additionally present the $H^1(\Omega)$-semi-norm error
$$\Lambda := \| \nabla \mathcal{U}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) - \nabla I^1_{h_\ell}\overline{u} \|_{L^2(\Omega)}.$$ For simplicity, in Table \ref{b4} we do not restate the regularization parameter $\rho_\ell$, mesh size $h_\ell$ of the triangulation and noise level $\delta_\ell$, since they have been given in Table \ref{b1} of Example \ref{eg1}.
The experimental order of convergence for $\Gamma$, $\Delta$, $\Sigma$, $\Xi$ and $\Lambda$ is presented in Table \ref{b5}.
All figures are presented corresponding to $\ell = 96$. Figure \ref{h6}, from left to right, contains graphs of the entries ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{11}, {q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{12}$, ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{22}$ of the computed numerical matrix ${Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}$ and the computed numerical state $\mathcal{U}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big)$ of the algorithm at the 500$^{\mbox{\tiny th}}$ iteration, while in Figure \ref{h7}, from left to right, we display the differences ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{11} - I^1_{h_{\ell}} q^\dag_{11}$, ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{12} - I^1_{h_{\ell}} q^\dag_{12}$, ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{22} - I^1_{h_{\ell}} q^\dag_{22}$ and $\mathcal{U}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big) - I^1_{h_\ell}\overline{u}$.
\begin{table}[H] \begin{center}
\begin{tabular}{|c|l|l|l|l|l|l|l|}
\hline \multicolumn{8}{|c|}{ {\bf Convergence history} }\\
\hline $\ell$ &\scriptsize {\bf Ite.} &\scriptsize {\bf Tol.} &\scriptsize $\Gamma$ &\scriptsize $\Delta$ &\scriptsize $\Sigma$ &\scriptsize $\Xi$ &\scriptsize $\Lambda$\\ \hline 6& 500& 0.020795 & 9.7270e-4 & 6.4331e-4 & 0.058024 & 0.058024 & 2.1247e-4\\ \hline 12& 500& 0.005585 & 1.3063e-4 & 8.6203e-5 & 0.014679 & 0.014679 & 3.7208e-5\\ \hline 24& 500& 0.001423 & 1.6639e-5 & 1.1138e-5 & 3.6807e-3 & 3.6807e-3 & 5.4592e-6\\ \hline 48& 500& 3.5555e-4 & 2.0900e-6 & 1.4148e-6 & 9.2086e-4 & 9.2086e-4 & 7.2313e-7\\ \hline 96 & 500& 8.9001e-5 & 2.6157e-7 & 1.7825e-7 & 2.3026e-4 & 2.3026e-4 & 9.2168e-8\\ \hline \end{tabular} \caption{Refinement level $\ell$, number of iterations, value of tolerances, errors $\Gamma, \Delta, \Sigma$, $\Xi$ and $\Lambda$.} \label{b4} \end{center} \end{table}
\begin{table}[H] \begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline \multicolumn{5}{|c|}{ {\bf Experimental order of convergence} }\\
\hline $\ell$ &\scriptsize {\bf EOC$_\Gamma$} &\scriptsize {\bf EOC$_\Delta$} &\scriptsize {\bf EOC$_\Sigma$ = EOC$_\Xi$} &\scriptsize {\bf EOC$_\Lambda$}\\ \hline 6 & --& -- & -- &--\\ \hline 12 & 2.8965 & 2.8997 & 1.9829 & 2.5136\\ \hline 24 & 2.9728 & 2.9522 & 1.9957 & 2.7689\\ \hline 48 & 2.9930 & 2.9768 & 1.9989 & 2.9164\\ \hline 96 & 2.9982 & 2.9886 & 1.9997 & 2.9719\\ \hline Mean of EOC & 2.9651 & 2.9543 & 1.9943 & 2.7927\\ \hline \end{tabular} \caption{Experimental order of convergence for $\Gamma$, $\Delta$, $\Sigma$, $\Xi$ and $\Lambda$.} \label{b5} \end{center} \end{table}
\begin{figure}\label{h6}
\end{figure}
\begin{figure}\label{h7}
\end{figure}
Finally, in Figure \ref{h8}, from left to right, we display graphs of ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{11}, {q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{12}$, ${q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}}}_{22}$ and $\mathcal{U}_{h_{\ell}} \big( Q^{\rho_{\ell},\delta_{\ell}}_{h_{\ell}} \big)$ at the 50$^{\mbox{\tiny th}}$ iteration. At this iteration the value of the tolerance is 4.1052, while the errors $\Gamma, \Delta, \Sigma, \Xi$ and $\Lambda$ are 6.9093, 3.9500, 9.9183, 46.2302 and 45.1537, respectively.
\begin{figure}\label{h8}
\end{figure}
We close this section by noting that the proposed method may be extended to the case where the observation $z^\delta$ is only available in a compact subset $\Omega_{\mbox{obs}}$ of the domain $\Omega$, i.e.\ $\Omega_{\mbox{obs}} \Subset \Omega$. We then use a suitable $H^1_0(\Omega)$-extension $\widehat{z}^\delta$ of $z^\delta$ as measurement in our cost functional and consider the following strictly convex minimization problem: $$\min_{Q \in \mathcal{Q}_{ad}} \int_{\Omega} Q \nabla \big( \mathcal{U}(Q)
- \widehat{z}^\delta\big) \cdot \nabla \big(\mathcal{U}(Q) - \widehat{z}^\delta \big) + \rho \| Q \|^2_{{L^2(\Omega)}^{d\times d}} \eqno \left(\widehat{\mathcal{P}}^{\rho,\delta} \right)$$ instead of $\left(\mathcal{P}^{\rho,\delta} \right)$. This problem then attains a unique solution $\widehat{Q}^{\rho,\delta}$, as in the case with full observations.
\section*{Acknowledgments}
We thank the three referees and the associate editor for their valuable comments and suggestions. The author M.\ H.\ was supported by {\it Lothar Collatz Center for Computing in Science}. The author T.\ N.\ T.\ Q.\ was supported by {\it Alexander von Humboldt-Foundation}, Germany.
\end{document}
Find derivatives using implicit differentiation
Use implicit differentiation
It can happen that a variable $y$ is known to be a function of $x$ but we do not have an explicit definition of the function. Instead, we have an equation that is satisfied by $x$ and a function $y$ of $x$. Often, there may be more than one function $y$ that satisfies the equation.
For example, the equation $x^2+y^2=1$ is satisfied by the points $(x,y)$ that belong to the unit circle. In this case, it is possible to rearrange the equation into a form that will serve as a function definition. In particular, $y=\sqrt{1-x^2}$ with the domain $-1\le x\le1$. But note that $y=-\sqrt{1-x^2}$ also satisfies the equation so that, in this case, the equation can be said to define implicitly two different functions.
In many cases, it is difficult or impossible to solve an equation in $x$ and $y$ to give $y$ as a function of $x$ explicitly. Yet, we may be convinced that such a function exists and that it is differentiable.
Implicit differentiation
We can differentiate term-by-term using the product rule and the fact that the derivative of $y$ (which we assume to be an implicitly defined differentiable function of $x$) is just $\frac{\mathrm{d}y}{\mathrm{d}x}$.
The unit circle equation $x^2+y^2=1$ defines two functions, one for the upper semicircle and one for the lower semicircle. These are both smooth curves. We can differentiate the equation implicitly.
Differentiating term-by-term, we have
$2x+2y\frac{\mathrm{d}y}{\mathrm{d}x}=0$. (We treated the $y^2$ term as a function of a function.)
On re-arranging, we find $\frac{\mathrm{d}y}{\mathrm{d}x}=-\frac{x}{y}$, provided $y\ne0$. (The derivative is undefined at the end-points of the semicircles, where the tangents are vertical.)
We could have differentiated the explicit form $y=\sqrt{1-x^2}$. Thus, $\frac{\mathrm{d}y}{\mathrm{d}x}=-\frac{x}{\sqrt{1-x^2}}$ but note that this is the same as $-\frac{x}{y}$ when $y=\sqrt{1-x^2}$ is substituted. A similar thing happens with the other explicit function, $y=-\sqrt{1-x^2}$.
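For readers who want to confirm the circle result with a computer algebra system, here is an optional check using SymPy (not part of the lesson's working):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)              # y as an unknown function of x

# differentiate x^2 + y^2 - 1 = 0 implicitly and solve for dy/dx
circle = x**2 + y**2 - 1
dydx = sp.solve(sp.diff(circle, x), sp.diff(y, x))[0]
# dydx agrees with the -x/y found by hand
```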
Differentiate implicitly $x^2y+2xy^2=3$.
By the product rule, we have $2xy+x^2\frac{\mathrm{d}y}{\mathrm{d}x}+2y^2+4xy\frac{\mathrm{d}y}{\mathrm{d}x}=0$.
Therefore, $\frac{\mathrm{d}y}{\mathrm{d}x}=\frac{-2y(x+y)}{x(x+4y)}$.
In this example, it is possible to discover the explicit functions that satisfy the equation. We assume $y$ can be given as one or more functions of $x$. Using the quadratic formula,
$x^2y+2xy^2=3$
$\therefore\ 2xy^2+x^2y-3=0$ (This is a quadratic in $y$.)
$\therefore\ y=\frac{-x^2\pm\sqrt{x^4+24x}}{4x}$
Thus, the equation is satisfied by two functions. The red and green curves in the diagram below represent the two functions.
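As with the circle, both the implicit derivative and the two explicit solutions can be verified with SymPy (an optional check, not part of the lesson's working):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')(x)

# implicit differentiation of x^2 y + 2 x y^2 = 3
eq = x**2*y + 2*x*y**2 - 3
dydx = sp.solve(sp.diff(eq, x), sp.diff(y, x))[0]
target = -2*y*(x + y) / (x*(x + 4*y))

# the two explicit solutions from the quadratic formula
sols = [(-x**2 + s*sp.sqrt(x**4 + 24*x)) / (4*x) for s in (1, -1)]
```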
Choose and apply a variety of differentiation, integration, and antidifferentiation techniques to functions and relations, using both analytical and numerical methods
Apply differentiation methods in solving problems | CommonCrawl |
\begin{document}
\title{On the order of regular graphs with fixed second largest eigenvalue} \author{Jae Young Yang$^1$\footnote{J.Y. Yang is partially supported by the National Natural Science Foundation of China (No. 11371028).}, Jack H. Koolen$^{2,3}$\footnote{J.H. Koolen is partially supported by the National Natural Science Foundation of China (No. 11471009 and No. 11671376).} \\ \\ \small ${}^1$ School of Mathematical Sciences,\\ \small Anhui University, \\ \small 111 Jiulong Road, Hefei, 230039, Anhui, PR China\\ \small $^2$ School of Mathematical Sciences,\\ \small University of Science and Technology of China, \\ \small 96 Jinzhai Road, Hefei, 230026, Anhui, PR China\\ \small $^3$ Wen-Tsun Wu Key Laboratory of CAS,\\ \small 96 Jinzhai Road, Hefei, 230026, Anhui, PR China
\\ \small {\tt e-mail : [email protected], [email protected]}
} \date{} \maketitle
\begin{abstract}
Let $v(k, \lambda)$ be the maximum number of vertices of a connected $k$-regular graph with second largest eigenvalue at most $\lambda$. The Alon-Boppana Theorem implies that $v(k, \lambda)$ is finite when $k > \frac{\lambda^2 + 4}{4}$. In this paper, we show that for fixed $\lambda \geq1$, there exists a constant $C(\lambda)$ such that $2k+2 \leq v(k, \lambda) \leq 2k + C(\lambda)$ when $k > \frac{\lambda^2 + 4}{4}$.
\end{abstract}
\textbf{Keywords} : smallest eigenvalue, Hoffman graph, Alon-Boppana Theorem, co-edge-regular graph
\textbf{AMS classification} : 05C50, 05C75, 05C62 \section{Introduction}
Let $v(k, \lambda)$ be the maximum order of a connected $k$-regular graph with second largest eigenvalue at most $\lambda$. For $\lambda \geq 2\sqrt{k-1}$, it is known that $v(k, \lambda)$ is infinite from the existence of infinite families of bipartite regular Ramanujan graphs \cite{Ramanujan}. The Alon-Boppana Theorem \cite{Alon, ab1, ab2, ab3, ab4, nilli1, nilli2} states:
\begin{theorem}\label{alon} For any integer $k\geq 3$ and real number $\lambda < 2\sqrt{k-1}$, the number $v(k, \lambda)$ is finite. \end{theorem}
In this paper, we will look at the behavior of $v(k, \lambda)$ when $\lambda$ is fixed and $k$ goes to infinity. Our main theorem is:
\begin{theorem}\label{main} Let $\lambda$ be an integer at least $1$. Then there exists a constant $C_1(\lambda)$ such that $2k+2 \leq v(k, \lambda) \leq 2k + C_1(\lambda)$ holds for all $k > \frac{\lambda ^2 + 4}{4}$. \end{theorem}
For fixed real number $\lambda \geq 1$, define $T(\lambda)$ as
$$T(\lambda) := \limsup_{k \rightarrow \infty} v(k,\lambda) -2k .$$
Because of Theorem \ref{main}, $T(\lambda)$ is well-defined. We will show that $T(\lambda)\geq 2\lambda$ holds for fixed positive integer $\lambda$.
The proof of Theorem \ref{main} is based on the following proposition. In order to state this proposition, we need to introduce the following notion. For a vertex $x$ of a graph $G$, let $\Gamma_i(x)$ be the set of vertices which are at distance $i$ from $x$.
\begin{proposition}\label{tool}
Let $\lambda$ be a real number at least $1$. Then there exists a constant $M(\lambda) \geq \lambda^3$ such that, if $G$ is a graph satisfying
\begin{enumerate}[(i)]
\item every pair of vertices at distance $2$ has at least $M(\lambda)$ common neighbors,
\item the smallest eigenvalue of $G$, $\lambda_{\min}(G),$ satisfies $\lambda_{\min}(G)\geq -\lambda$,
\end{enumerate}
\noindent then $G$ has diameter $2$ and $|\Gamma_2(x) | \leq \lfloor \lambda \rfloor \lfloor\lambda^2 \rfloor$ for all $x \in V(G)$.
\end{proposition}
\begin{remark}
Neumaier \cite{-m} mentioned that Hoffman gave a very large bound on the intersection number $c_2$ of strongly regular graphs. This may imply that Proposition \ref{tool} was already known by Hoffman. However, we could not find it in the literature.
\end{remark}
To prove Proposition \ref{tool}, we use a combinatorial object named Hoffman graphs. The definition and basic properties of Hoffman graphs are given in Section 2. In Section 3, we prove Proposition \ref{tool}. In Section 4, we present some known facts on the number $v(k, \lambda)$, and, in Section 5, we prove Theorem \ref{main} by using Proposition \ref{tool}. In Section 6, we discuss the behavior of the number $T(\lambda)$ for a fixed positive integer $\lambda$. In the last section, we give two more applications of Proposition \ref{tool} for the classes of co-edge regular graphs and amply regular graphs.
\section{Hoffman graphs}
In this section, we introduce the definition and basic properties of Hoffman graphs. Hoffman graphs were defined by Woo and Neumaier \cite{Woo} following an idea of Hoffman \cite{ Hoff1977}. For more details or proofs, see \cite{Jang, KKY, Woo}.
\subsection{Definition and properties of Hoffman graphs}
\begin{definition}
A Hoffman graph $\mathfrak{h}$ is a pair $(H, \ell)$ of a graph $H$ and a labeling map $\ell : V(H) \rightarrow \{{\rm \bf fat,slim}\}$ satisfying two conditions:
\begin{enumerate}[(i)]
\item the vertices with label {\rm \bf fat} are pairwise non-adjacent,
\item every vertex with label {\rm \bf fat} has at least one neighbor with label {\rm \bf slim}.
\end{enumerate}
\end{definition}
The vertices with label {\rm \bf fat} are called {\it fat} vertices, and the set of fat vertices of $\mathfrak{h}$ is denoted by $V_{\rm fat}(\mathfrak{h})$. The vertices with label {\rm \bf slim} are called {\it slim} vertices, and the set of slim vertices is denoted by $V_{\rm slim}(\mathfrak{h})$. Now, we give some definitions.
\begin{definition}
For a Hoffman graph $\mathfrak{h}$, a Hoffman graph $\mathfrak{h}_1 = (H_1, \ell_1)$ is called an {\it induced Hoffman subgraph} of $\mathfrak{h}$ if $H_1$ is an induced subgraph of $H$ and $\ell(x) = \ell_1 (x)$ for all vertices $x$ of $H_1$.
\end{definition}
\begin{definition}
Two Hoffman graphs $\mathfrak{h}=(H, \ell)$ and $\mathfrak{h}'=(H', \ell')$ are called {\it isomorphic} if there exists a graph isomorphism $\psi$ from $H$ to $H'$ such that $\ell(x) = \ell'(\psi(x))$ for all vertices $x$ of $H$.
\end{definition}
\begin{definition}
For a Hoffman graph $\mathfrak{h} = (H, \ell)$, let $A(H)$ be the adjacency matrix of $H$ with the vertices ordered so that the fat vertices come last. Then
$$A(H) = \begin{pmatrix} A_{\rm slim} & C \\ C^T & O \end{pmatrix},$$
\noindent where $A_{\rm slim}$ is the adjacency matrix of the subgraph of $H$ induced by slim vertices and $O$ is the zero matrix.
The real symmetric matrix $S(\mathfrak{h}) = A_{\rm slim} - CC^T$ is called the {\it special matrix} of $\mathfrak{h}$, and the eigenvalues of $\mathfrak{h}$ are the eigenvalues of $S(\mathfrak{h})$.
\end{definition}
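As an illustration (not taken from the paper), the special matrix can be computed directly from this definition. The small Hoffman graph below, a slim path $v_0$-$v_1$-$v_2$ with a single fat vertex attached to $v_0$ and $v_1$, is an arbitrary example chosen for this sketch.

```python
import numpy as np

# Hypothetical Hoffman graph: slim vertices v0 - v1 - v2 (a path), and one
# fat vertex F adjacent to the slim vertices v0 and v1.
A_slim = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]], dtype=float)

# C is the slim-fat incidence matrix (rows: slim vertices, columns: fat vertices).
C = np.array([[1.0],
              [1.0],
              [0.0]])

# Special matrix S(h) = A_slim - C C^T; by definition, its eigenvalues are
# the eigenvalues of the Hoffman graph.
S = A_slim - C @ C.T
eigenvalues = np.linalg.eigvalsh(S)   # returned in ascending order
lambda_min = eigenvalues[0]
```

Note that $CC^T$ records, for each pair of slim vertices, the number of their common fat neighbors, which is what the special matrix subtracts from the slim adjacency matrix.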
For a Hoffman graph $\mathfrak{h}$, we focus on its smallest eigenvalue in this paper. Let $\lambda_{\min}(\mathfrak{h})$ denote the smallest eigenvalue of $\mathfrak{h}$. Now, we discuss some spectral properties of $\lambda_{\min}(\mathfrak{h})$ without proofs.
\begin{lemma}{\rm \cite[Corollary 3.3]{Woo}} If $\mathfrak{h}'$ is an induced Hoffman subgraph of $\mathfrak{h}$, then $\lambda_{\min}(\mathfrak{h}') \geq \lambda_{\min}(\mathfrak{h})$ holds.
\end{lemma}
\begin{theorem}\label{OH}{\rm \cite[Theorem 2.2]{KKY}} Let $\mathfrak{h}$ be a Hoffman graph. For a positive integer $p$, let $G(\mathfrak{h}, p)$ be the graph obtained from $\mathfrak{h}$ by replacing every fat vertex of $\mathfrak{h}$ by a complete graph $K_p$ of $p$ slim vertices, and connecting all vertices of the $K_p$ to all neighbors of the original fat vertex by edges. Then
$$ \lambda_{\min}(G(\mathfrak{h}, p)) \geq \lambda_{\min}(\mathfrak{h}), $$
and
$$ \lim_{p\rightarrow \infty} \lambda_{\min}(G(\mathfrak{h}, p)) = \lambda_{\min}(\mathfrak{h}). $$
\end{theorem}
\subsection{Quasi-clique and associated Hoffman graph}
In this subsection, we introduce two notions, the {\it quasi-clique} and the {\it associated Hoffman graph}. Most of this section is explicitly formulated in \cite{KKY}. Note that the term {\it quasi-clique} in this paper is different from the term quasi-clique in \cite{Woo}.
For the rest of this section, let $\widetilde{K}_{2m}$ be the graph consisting of a complete graph $K_{2m}$ and a vertex which is adjacent to exactly $m$ vertices of the $K_{2m}$. For a positive integer $m$ at least $2$, let $G$ be a graph which does not contain $\widetilde{K}_{2m}$ as an induced subgraph. For a positive integer $n$ at least $(m+1)^2$, let $\mathcal{C}(n)$ be the set of maximal cliques of $G$ with at least $n$ vertices. Define the relation $\equiv_n^m$ on $\mathcal{C}(n)$ by $C_1 \equiv_n^m C_2$ if every vertex $x \in C_1$ has at most $m-1$ non-neighbors in $C_2$ and every vertex $y \in C_2$ has at most $m-1$ non-neighbors in $C_1$ for $C_1, C_2 \in \mathcal{C}(n)$.
\begin{lemma}{\rm \cite[Lemma 3.1]{KKY}} Let $m, n$ be two integers at least $2$ such that $n \geq (m+1)^2$. Then the relation $\equiv_n^m$ on $\mathcal{C}(n)$ is an equivalence relation.
\end{lemma}
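The relation $\equiv_n^m$ can be transcribed directly from its definition. The sketch below (the function name and data layout are ours, not from \cite{KKY}) checks the two defining conditions for a pair of cliques, given the neighbor sets of a graph.

```python
def equiv_nm(nbrs, C1, C2, m):
    """Check whether C1 and C2 are related: every vertex of C1 has at most
    m-1 non-neighbors in C2, and vice versa. `nbrs` maps each vertex to its
    set of neighbors; C1 and C2 are vertex sets (intended: maximal cliques)."""
    def few_non_neighbors(A, B):
        # each x in A may miss at most m-1 vertices of B (x itself not counted)
        return all(len(B - nbrs[x] - {x}) <= m - 1 for x in A)
    return few_non_neighbors(C1, C2) and few_non_neighbors(C2, C1)

# Toy graph: two triangles {0,1,2} and {0,1,3} sharing the edge {0,1}.
nbrs = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1}, 3: {0, 1}}
print(equiv_nm(nbrs, {0, 1, 2}, {0, 1, 3}, m=2))  # → True
```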
For a maximal clique $C \in \mathcal{C}(n)$, let $[C]_n^m$ denote the equivalence class containing $C$. Now, we are ready to define the term {\it quasi-clique}.
\begin{definition}
Let $m, n$ be two integers at least $2$ such that $n \geq (m+1)^2$. For a maximal clique $C \in \mathcal{C}(n)$, we define the quasi-clique $Q[C]_n^m$ with respect to the pair $(m,n)$ of $G$, as the subgraph of $G$ induced on the vertices which have at most $m-1$ non-neighbors in $C$.
\end{definition}
By {\rm \cite[Lemma 3.2]{KKY}} and {\rm \cite[Lemma 3.3]{KKY}}, the quasi-clique $Q[C]_n^m$ is well-defined for $C \in \mathcal{C}(n)$.
Now we introduce associated Hoffman graphs. The definition and proposition that follow provide a result needed to prove Proposition \ref{tool}.
\begin{definition}
Let $m, n$ be two integers at least $2$ such that $n \geq (m+1)^2$. Let $[C_1]_n^m, [C_2]_n^m, \cdots, [C_t]_n^m$ be all the equivalence classes of $G$ under $\equiv_n^m$. The associated Hoffman graph $\mathfrak{g} = \mathfrak{g}(G, m, n)$ is the Hoffman graph with the following properties.
\begin{enumerate}[(i)]
\item $V_{\rm slim}(\mathfrak{g}) = V(G)$, and $V_{\rm fat}(\mathfrak{g}) = \{F_1, \dots, F_t\}$, where $t$ is the number of equivalence classes of $G$ under $\equiv_n^m$,
\item the induced Hoffman subgraph of $\mathfrak{g}$ on $V_{\rm slim}(\mathfrak{g})$ is isomorphic to $G$,
\item the fat vertex $F_i$ is adjacent to all vertices of the quasi-clique $Q[C_i]_n^m$ for $i=1,2,\dots, t$.
\end{enumerate}
\end{definition}
\begin{proposition}\label{asso}{\rm \cite[Proposition 4.1]{KKY}} There exists a positive integer $n = n(m, \phi, \sigma, p) \geq (m+1)^2$ such that for any integer $q \geq n$, and any Hoffman graph $\mathfrak{h}$ with at most $\phi$ fat vertices and at most $\sigma$ slim vertices, the graph $G(\mathfrak{h}, p)$ is an induced subgraph of $G$, provided that the graph $G$ satisfies the following conditions:
\begin{enumerate}[(i)]
\item the graph $G$ does not contain $\widetilde{K}_{2m}$ as an induced subgraph,
\item the associated Hoffman graph $\mathfrak{g} = \mathfrak{g}(G, m, q)$ contains $\mathfrak{h}$ as an induced Hoffman subgraph.
\end{enumerate}
\end{proposition}
\section{Main tool}
In this section, we prove Proposition \ref{tool}, which is the main tool of this paper. Before we prove Proposition \ref{tool}, we first show two lemmas.
Let $H$ be a graph. Define $\mathfrak{q}(H)$ to be the Hoffman graph obtained by attaching one fat vertex to all vertices of $H$. Then $\lambda_{\min}(\mathfrak{q}(H)) = -\lambda_{\max}(\overline{H})$, where $\lambda_{\max}(\overline{H})$ is the largest eigenvalue of the complement $\overline{H}$ of $H$. The Perron-Frobenius Theorem implies the following lemma.
\begin{lemma}\label{min}
Let $H$ be a graph with an isolated vertex $x$. If $\lambda_{\min}(\mathfrak{q}(H)) \geq -\lambda$ for some real number $\lambda \geq 1$, then $H$ has at most $\lfloor \lambda^2 \rfloor + 1$ vertices.
\end{lemma}
\begin{proof}
Let $n$ be the number of vertices of $H$. Since $x$ is an isolated vertex of $H$, $x$ is adjacent to all other vertices of $H$ in the complement $\overline{H}$ of $H$. By the Perron-Frobenius Theorem, we have
$$ \lambda_{\max}(\overline{H}) \geq \lambda_{\max}(K_{1,n-1}) = \sqrt{n-1}. $$
Since $\lambda_{\min}(\mathfrak{q}(H)) = -\lambda_{\max}(\overline{H}) \geq -\lambda$, it follows that $\sqrt{n-1} \leq \lambda$, and hence $n \leq \lfloor \lambda^2 \rfloor + 1$ because $n$ is an integer. This shows the lemma. \end{proof}
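The identity $\lambda_{\max}(K_{1,n-1}) = \sqrt{n-1}$ used in the proof is easy to confirm numerically; the sketch below (illustrative only) builds the adjacency matrix of the star with NumPy.

```python
import numpy as np

def star_lambda_max(n):
    """Largest adjacency eigenvalue of the star K_{1,n-1} on n vertices."""
    A = np.zeros((n, n))
    A[0, 1:] = 1  # vertex 0 is the center, adjacent to the n-1 leaves
    A[1:, 0] = 1
    return np.linalg.eigvalsh(A)[-1]  # eigvalsh returns ascending order

for n in (2, 5, 10, 26):
    assert abs(star_lambda_max(n) - np.sqrt(n - 1)) < 1e-9
```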
\begin{lemma}\label{min2} Let $\lambda$ be a real number at least $1$. Then there exist minimum positive integers $t'(\lambda)$ and $m'(\lambda)$ such that both $\lambda_{\min}(K_{2,t'(\lambda)}) < -\lambda$ and $\lambda_{\min}(\widetilde{K}_{2m'(\lambda)}) < -\lambda$ hold.
\end{lemma}
\begin{proof} Since $\lambda_{\min}(K_{2,t}) = -\sqrt{2t}$ and $\lambda_{\min}(\widetilde{K}_{2m})$ is the smallest eigenvalue of the matrix
$$ \begin{pmatrix} m-1 & m & 0 \\ m & m-1 & 1\\ 0 & m & 0 \end{pmatrix},$$ it is easily checked that
$$ \lim_{t \rightarrow \infty} \lambda_{\min}(K_{2,t}) = \lim_{m \rightarrow \infty} \lambda_{\min}(\widetilde{K}_{2m}) = -\infty. $$ This shows the existence of $t'(\lambda)$ and $m'(\lambda)$. \end{proof}
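Both facts used in this proof can also be checked numerically. The sketch below (parameter values are illustrative) computes $\lambda_{\min}(K_{2,t})$ and $\lambda_{\min}(\widetilde{K}_{2m})$ directly from adjacency matrices and confirms that both decrease without bound.

```python
import numpy as np

def lmin_K2t(t):
    """Smallest eigenvalue of the complete bipartite graph K_{2,t}."""
    A = np.zeros((2 + t, 2 + t))
    A[:2, 2:] = 1
    A[2:, :2] = 1
    return np.linalg.eigvalsh(A)[0]

def lmin_Ktilde(m):
    """Smallest eigenvalue of the graph consisting of a complete graph K_{2m}
    plus one extra vertex adjacent to exactly m of its vertices."""
    n = 2 * m + 1
    A = np.ones((n, n)) - np.eye(n)   # start from K_{2m+1}
    A[-1, :] = A[:, -1] = 0           # detach the last vertex ...
    A[-1, :m] = A[:m, -1] = 1         # ... and reattach it to m vertices
    return np.linalg.eigvalsh(A)[0]

assert abs(lmin_K2t(8) + np.sqrt(2 * 8)) < 1e-9        # lambda_min = -sqrt(2t)
assert lmin_Ktilde(6) < lmin_Ktilde(3) < lmin_Ktilde(2) < -1  # decreasing in m
```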
\noindent{\bf Proof of Proposition \ref{tool}.} First, we consider the Hoffman graph $\mathfrak{h}^{(\lfloor \lambda +1 \rfloor)}$ with $\lfloor \lambda +1 \rfloor$ fat vertices adjacent to one slim vertex. Then $\lambda_{\min}(\mathfrak{h}^{(\lfloor \lambda +1 \rfloor)}) = -\lfloor \lambda +1 \rfloor < -\lambda$, so there exists a positive integer $p_{0}$ such that $\lambda_{\min}(G(\mathfrak{h}^{(\lfloor \lambda +1 \rfloor)}, p_{0})) < -\lambda$ by Theorem \ref{OH}.
Next, let $\{H_1, \cdots, H_r\}$ be the set of pairwise non-isomorphic graphs on $\lfloor \lambda^2 \rfloor + 2$ vertices with an isolated vertex. By Lemma \ref{min}, $\lambda_{\min}(\mathfrak{q}(H_i)) < -\lambda$ holds for all $i = 1, \dots, r$. For each $i=1,\dots, r$, there exists a positive integer $p_i$ such that $\lambda_{\min}(G(\mathfrak{q}(H_i), p_i)) < -\lambda$ by Theorem \ref{OH}. Set $p' = \max_i p_i$.
For the two integers $t' = t'(\lambda)$ and $m' = m'(\lambda)$ of Lemma \ref{min2}, let $n' = n(m', \lfloor \lambda +1 \rfloor, \lfloor \lambda^2 \rfloor +2, p')$, where $n(m', \lfloor \lambda +1 \rfloor, \lfloor \lambda^2 \rfloor +2, p')$ is the integer in Proposition \ref{asso}. Since $\lambda_{\min}(G) \geq -\lambda$, while each of the graphs $G(\mathfrak{h}^{(\lfloor \lambda +1 \rfloor)}, p')$ and $G(\mathfrak{q}(H_i), p')$ has smallest eigenvalue less than $-\lambda$, none of them is an induced subgraph of $G$; moreover, $G$ does not contain $\widetilde{K}_{2m'}$ as an induced subgraph because $\lambda_{\min}(\widetilde{K}_{2m'}) < -\lambda$. Hence, by Proposition \ref{asso}, the associated Hoffman graph $\mathfrak{g}(G, m', n')$ does not contain any of the Hoffman graphs in the set $\{\mathfrak{h}^{(\lfloor \lambda +1 \rfloor)}\} \cup \{\mathfrak{q}(H_i) \mid i =1, \ldots, r\}$ as an induced Hoffman subgraph. This implies that the following two conditions hold:
\begin{enumerate}[(i)]
\item for each slim vertex $x$ of $\mathfrak{g}(G, m', n')$ and each fat neighbor $f_x$ of $x$, the number of vertices which are adjacent to $f_x$ and non-adjacent to $x$ is at most $\lfloor \lambda^2 \rfloor$,
\item every vertex $x$ of $\mathfrak{g}(G, m', n')$ has at most $\lfloor \lambda \rfloor$ fat neighbors.
\end{enumerate}
Now we show that any two distinct non-adjacent vertices $x$ and $y$ of $G$ at distance $2$ have a common fat neighbor in $\mathfrak{g}(G, m', n')$. Let $M(\lambda)$ be the number $\max\{R(n', t'), \lfloor \lambda^3 +1\rfloor\}$, where $R(n', t')$ denotes the Ramsey number. Recall that the Ramsey number $R(s,t)$ is the minimal positive integer $n$ such that any graph of order $n$ contains a clique of order $s$ or a coclique of order $t$. Since two vertices $x, y$ at distance $2$ have at least $M(\lambda) \geq R(n', t')$ common neighbors, their common neighborhood contains a clique of size $n'$ or a coclique of size $t'$. The latter is impossible: together with $x$ and $y$, such a coclique would give an induced $K_{2,t'}$ in $G$, whose smallest eigenvalue is less than $-\lambda$ by Lemma \ref{min2}. Hence the common neighborhood contains a clique of size $n'$, and there exists a fat vertex which is adjacent to both $x$ and $y$ in $\mathfrak{g}(G, m', n')$. From (i) and (ii), we conclude that $|\Gamma_2(x)| \leq \lfloor \lambda \rfloor \lfloor\lambda^2 \rfloor$ for all $x \in V(G)$.
Assume that there exists a vertex $y \in \Gamma_3(x)$ for some $x$. Then there exists a vertex $z$ such that $z \in \Gamma_1(x)$ and $z \in \Gamma_2(y)$. The common neighborhood of $y$ and $z$ has size at least $M(\lambda)$ and is contained in $\Gamma_2(x)$, so $|\Gamma_2(x)| \geq M(\lambda) \geq \lfloor \lambda^3 +1 \rfloor > \lfloor \lambda \rfloor \lfloor \lambda^2 \rfloor$, which is impossible. Hence, $G$ has diameter $2$. \QEDB
\section{Some known facts on the number $v(k, \lambda)$}
In this section we give some known facts on the number $v(k, \lambda)$. We start with the case $\lambda < 0$. If a connected graph $G$ is not complete, then $G$ contains $K_{1,2}$ as an induced subgraph, so, by interlacing, $G$ has second largest eigenvalue at least $0$. This implies that a graph with negative second largest eigenvalue is complete. Thus, $v(k, \lambda) = k+1$ for $\lambda < 0$, and the unique graph attaining equality is the complete graph $K_{k+1}$.
For $\lambda = 0$, a regular graph with non-positive second largest eigenvalue is a complete multipartite graph \cite[Corollary 3.5.4]{drg}. Among $k$-regular complete multipartite graphs, the complete bipartite graph $K_{k,k}$ maximizes the number of vertices. Hence $v(k, 0) = 2k$ and the unique graph attaining equality is the complete bipartite graph $K_{k,k}$.
For $\lambda = 1$, let $G$ be a regular graph with second largest eigenvalue at most $1$. Then the complement of $G$ is a regular graph with smallest eigenvalue at least $-2$. Since such regular graphs are classified in \cite{seidel}, all the values of $v(k, 1)$ can be found \cite[Theorem 3.2]{Nozaki}. In particular, $v(k, 1) = 2k+2$ when $k \geq 11$, with equality attained by the complement of the line graph of $K_{2,k+1}$. Note that $2k+2 \leq v(k, 1) \leq 2k+6$ for all $k$.
For other values of $\lambda > 1$, Cioab\u{a} et al. \cite{Nozaki} found several values of $v(k, \lambda)$ by using a linear programming method. Using the method of Cioab\u{a} et al., it can be shown that $v(k, \lambda) \leq (\lambda + 2)k +\lambda^3 +\lambda^2 - \lambda$ if $k$ is large enough. Theorem \ref{main} improves this result significantly.
\section{Proof of the main theorem} Now, we are ready to prove Theorem \ref{main}.
\noindent {\bf Proof of Theorem \ref{main}.} Let $G$ be a $k$-regular graph with second largest eigenvalue at most $\lambda$ and $v(k, \lambda)$ vertices. Since $v(k, 1) \geq 2k+2$ and $\lambda \geq 1$, we have $v(k, \lambda) \geq 2k+2$. We only need to show Theorem \ref{main} for large enough $k$, so we may assume that $k > \lambda(\lambda+1)(\lambda+2)$. Now, we consider the complement $\overline{G}$ of $G$. Then $\overline{G}$ is an $l$-regular graph with smallest eigenvalue at least $-1-\lambda$ and $v(k, \lambda)$ vertices, where $l = v(k, \lambda) - k - 1 \geq k+1$.
Suppose that $l \geq k + C_1(\lambda)$, where $C_1(\lambda) = M(\lambda+1) -1$ and $M(\lambda+1)$ is the constant of Proposition \ref{tool}. Let $x$ be a vertex of $\overline{G}$. The set of non-neighbors of $x$ has size $k$, and, since $\overline{G}$ is $l$-regular and $l \geq k + C_1(\lambda)$, every non-neighbor of $x$ has at least $M(\lambda+1)$ neighbors in the neighborhood of $x$. This implies that the set of non-neighbors of $x$ is exactly $\Gamma_2(x)$. By Proposition \ref{tool}, $\overline{G}$ has diameter $2$ and $|\Gamma_2(x)| \leq (\lambda +1)(\lambda^2 +2\lambda)$. This contradicts the assumption $k > \lambda(\lambda+1)(\lambda+2) = (\lambda +1)(\lambda^2 +2\lambda)$. Hence, $l \leq k + C_1(\lambda) -1$ and $v(k, \lambda) = 1 + k + l \leq 2k + C_1(\lambda)$. \QEDB
\section{The behavior of $T(\lambda)$} Recall that, for a fixed real number $\lambda \geq 1$, $T(\lambda)$ is defined as
$$T(\lambda) := \limsup_{k \rightarrow \infty} \left( v(k,\lambda) -2k \right).$$ Now we give a result on $T(\lambda)$.
The complement of the line graph of $K_{2,a+1}$, denoted by $\overline{L(K_{2,a+1})}$, is an $a$-regular graph which has $2a+2$ vertices and spectrum $\{[a]^1, [1]^{a}, [-1]^{a}, [-a]^1\}$. We consider the coclique extension of this graph.
\begin{definition}
For an integer $q > 1$, the $q$-coclique extension $\tilde{G}_q$ of a graph $G$ is the graph obtained from $G$ by replacing each vertex $x \in V(G)$ by a coclique $\tilde{X}$ with $q$ vertices, such that $\tilde{x} \in \tilde{X}$ and $\tilde{y} \in \tilde{Y}$ are adjacent in $\tilde{G}_q$ if and only if $x$ and $y$ are adjacent in $G$.
\end{definition}
If $\tilde{G}_q$ is the $q$-coclique extension of $G$, then $\tilde{G}_q$ has adjacency matrix $A\otimes J_q$, where $J_q$ is the all one matrix of size $q$ and $\otimes$ denotes the Kronecker product. This shows that, if a graph $G$ has spectrum $$\{[\lambda_0]^{m_0}, [\lambda_1]^{m_1}, \dots, [\lambda_n]^{m_n}\},$$
\noindent then the $q$-coclique extension $\tilde{G}_q$ of $G$ has spectrum
$$\{[q\lambda_0]^{m_0}, [q\lambda_1]^{m_1}, \dots, [q\lambda_n]^{m_n}, [0]^{(q-1)(m_0+m_1+\dots+m_n)}\}.$$
Hence the $q$-coclique extension of $\overline{L(K_{2,a+1})}$ is a $qa$-regular graph with order $2qa + 2q$ and spectrum
$$ \{[qa]^1, [q]^a, [-q]^a, [-qa]^1, [0]^{(q-1)(2a+2)}\}.$$
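The spectral computation above can be checked numerically. The sketch below (the parameter values $a=4$ and $q=3$ are illustrative) constructs $\overline{L(K_{2,a+1})}$, forms the $q$-coclique extension via the Kronecker product $A \otimes J_q$, and inspects both spectra.

```python
import numpy as np
from itertools import product

def complement_line_graph_K2(a):
    """Complement of the line graph of K_{2,a+1}. The edges of K_{2,a+1} are
    the pairs (i, j) with i in {0,1} and j in {0,...,a}; two edges are
    adjacent in the line graph iff they share an endpoint, hence adjacent in
    the complement iff they are disjoint."""
    edges = list(product(range(2), range(a + 1)))
    n = len(edges)                          # n = 2(a+1) = 2a+2 vertices
    A = np.zeros((n, n))
    for u in range(n):
        for v in range(u + 1, n):
            (i1, j1), (i2, j2) = edges[u], edges[v]
            if i1 != i2 and j1 != j2:       # disjoint edges of K_{2,a+1}
                A[u, v] = A[v, u] = 1
    return A

a, q = 4, 3
A = complement_line_graph_K2(a)
spec = np.round(np.linalg.eigvalsh(A), 6)     # expect {-a, (-1)^a, 1^a, a}
Aq = np.kron(A, np.ones((q, q)))              # adjacency of the q-coclique extension
spec_q = np.round(np.linalg.eigvalsh(Aq), 6)  # expect q*spec plus (q-1)(2a+2) zeros
```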
This implies that the $\lambda$-coclique extension of $\overline{L(K_{2,a+1})}$ has second largest eigenvalue $\lambda$, and that $v(k, \lambda) \geq 2k+2\lambda$ when $k$ is a multiple of $\lambda$. Hence we have:
\begin{lemma}\label{T}
Let $\lambda$ be a positive integer. Then $T(\lambda) \geq 2\lambda$. \end{lemma}
Moreover, we have a conjecture on $T(\lambda)$ as follows:
\begin{conjecture}
Let $\lambda$ be a positive integer. Then $T(\lambda) = 2\lambda$.
\end{conjecture}
For $\lambda = 1$, this conjecture is true as $T(1) = 2$.
\section{Applications}
In this section, we introduce two applications of Proposition \ref{tool}. We first consider co-edge-regular graphs with parameters $(v, k, c_2)$, which are $k$-regular graphs with $v$ vertices such that every pair of non-adjacent vertices has exactly $c_2$ common neighbors. By applying Proposition \ref{tool} and Theorem \ref{alon}, we obtain the following theorem.
\begin{theorem}\label{co-edge} Let $\lambda \geq 1$ be a real number. Let $G$ be a connected co-edge regular graph with parameters $(v, k, c_2)$. Then there exists a real number $C_2(\lambda)$ (only depending on $\lambda$) such that, if $G$ has smallest eigenvalue at least $-\lambda $, then $c_2 > C_2(\lambda)$ implies that $v-k-1 \leq \frac{(\lambda -1)^2}{4}+1$ holds.
\end{theorem}
\begin{proof} Let $\ell = v-k-1$. Applying Proposition \ref{tool} with $C(\lambda) = M(\lambda)-1$, we obtain that either $c_2 \leq C(\lambda)$ or $G$ has diameter $2$, in which case $|\Gamma_2(x)| = \ell$ for all $x \in V(G)$ and hence $\ell = v-k-1 \leq \lfloor \lambda \rfloor \lfloor\lambda^2 \rfloor$. Suppose $\ell > \frac{(\lambda -1)^2}{4}+1$. The complement of $G$ is $\ell$-regular and has second largest eigenvalue at most $\lambda -1$, and therefore has at most $v(\ell, \lambda -1)$ vertices. As $v(\ell, \lambda -1)$ is a finite number by Theorem \ref{alon}, the theorem follows if we take $C_2 (\lambda) =\max \{ \max\{ v(\ell, \lambda -1) -\ell-1 \mid \frac{(\lambda -1)^2}{4}+1< \ell\leq \lfloor \lambda \rfloor \lfloor\lambda^2 \rfloor\}, C(\lambda)\}$. \end{proof}
An edge-regular graph with parameters $(v, k, a_1)$ is a $k$-regular graph with $v$ vertices such that any two adjacent vertices have exactly $a_1$ common neighbors. Note that the complement of a co-edge regular graph is edge-regular.
\begin{remark}
(i) Let $\ell$ be an integer at least 3 and let $\lambda := 2\sqrt{\ell-1}$. Take the infinite family of the bipartite $\ell$-regular Ramanujan graphs, as constructed in \cite{Ramanujan}. The graphs in this family are clearly edge-regular with $a_1=0$ and have second largest eigenvalue at most $2\sqrt{\ell-1}$. Let $\Gamma$ be a graph in this family with $v$ vertices. Then the complement of $\Gamma$ is co-edge-regular with parameters $(v, v-\ell-1, v-2\ell)$ and has smallest eigenvalue at least $-1-2\sqrt{\ell-1}$.
This example shows that the upper bound for $v-k-1$ in Theorem \ref{co-edge} cannot be improved.\\ (ii) For $\lambda =2$, we find $C_2(2) = 8$, by \cite[Theorem 3.12.4(iv)]{drg}. \end{remark}
An amply regular graph with parameters $(v, k, a_1, c_2)$ is a $k$-regular graph with $v$ vertices such that any two adjacent vertices have exactly $a_1$ common neighbors and any two vertices at distance $2$ have $c_2$ common neighbors. We call an amply regular graph with diameter $2$ strongly regular. Neumaier \cite{-m} proved the following theorem which is called the $\mu$-bound for strongly regular graphs.
\begin{theorem}\label{srg}{\rm \cite[Theorem 3.1]{-m}} Let $G$ be a coconnected strongly regular graph with parameters $(v, k, a_1, c_2)$ and integral smallest eigenvalue $-\lambda\leq -2$. Then $$c_2 \leq \lambda^3 (2\lambda -3).$$
\end{theorem} The condition $-\lambda\leq -2$ implies that $G$ is not a union of cliques of the same size. Since the only strongly regular graphs which are not coconnected are the complete multipartite graphs, we obtain the following theorem.
\begin{theorem}
Let $G$ be an amply regular graph with parameters $(v, k, a_1, c_2)$. Let $\lambda \geq 2$ be an integer. Then there exists a real number $C_3(\lambda)$ such that if $G$ has smallest eigenvalue at least $-\lambda$, then $c_2 \leq C_3(\lambda)$ or $G$ is a complete multipartite graph.
\end{theorem}
\begin{proof}
Let $C_3(\lambda) = \max\{M(\lambda)-1,\lambda^3 (2\lambda -3)\}$. If $c_2 > C_3(\lambda)$, then $G$ has diameter $2$ by Proposition \ref{tool}. By Theorem \ref{srg}, $G$ is not coconnected. Hence, $G$ is a complete multipartite graph.
\end{proof}
\end{document} | arXiv |
Simulation of the Thermal and Aerodynamic Behavior of an Established Screenhouse under Warm Tropical Climate Conditions: A Numerical Approach
Edwin Villagran* | Roberto Ramirez | Andrea Rodriguez | Rommel Leon Pacheco | Jorge Jaramillo
Centro de Investigación Tibaitata, Corporación Colombiana de Investigación Agropecuaria - AGROSAVIA, Mosquera - Cundinamarca 250040, Colombia
Estación Experimental Enrique Jiménez Núñez, Instituto Nacional de Innovación y Transferencia en Tecnología Agropecuaria de Costa Rica – INTA., Cañas – Guanacaste 50601, Costa Rica
Centro de Investigación Caribia, Corporación Colombiana de Investigación Agropecuaria - AGROSAVIA, Sevilla – Magdalena 478020, Colombia
Centro de Investigación La Selva, Corporación Colombiana de Investigación Agropecuaria - AGROSAVIA, Rionegro - Antioquia 054040, Colombia
[email protected]
In tropical countries, agriculture protected with passive and low-cost structures is one of the main alternatives for intensifying agricultural production in a sustainable manner. This type of greenhouse performs adequately in cold climate conditions, while its use in hot climate conditions presents disadvantages due to the generation of a microclimate unsuitable for the growth and development of certain species. This has generated an important interest in the use of screenhouses (SH) for horticultural and fruit production, and there are currently many studies on microclimate behavior in SH; however, these experiments were developed for climatic conditions in other latitudes. In this research, a study was developed using 3D numerical simulation with computational fluid dynamics (CFD), with the aim of evaluating the thermal and aerodynamic behavior of an SH under two specific configurations (under rain conditions (RC) and under dry conditions (DC)). The CFD model was validated with experimental temperature data collected inside the SH. The results showed that: i) the CFD model has an acceptable capacity to predict the behavior of temperature and air flows; ii) simulations can be performed under daytime and nighttime environmental conditions; and iii) the RC configuration positively affected the nighttime thermal behavior, limiting the occurrence of the thermal inversion phenomenon, while under daytime conditions RC reduced the velocity of the air flows, generating higher thermal gradients compared to DC.
computational fluid dynamics, temperature, screenhouse microclimate, wind speed
Screen houses (SH) are a technological option offered by protected agriculture as an intermediate alternative between open field and greenhouse cultivation. With the implementation of these structures, the aim is to transform the land use from extensive to intensive or promote agricultural production in alternative and sustainable systems in order to generate the supply necessary to meet the demand for high-quality food throughout the year [1]. This type of structure is built on metal columns and support cables where a roof and side walls are installed, generally made of porous screens that are insect proof or shaded [2].
The adoption of this type of technology has generated a great boom since the end of the 90s and is currently a relevant component of farming systems undercover, which has gradually extended from the countries of the Mediterranean coast to regions in other latitudes, mainly with temperate or warm climates [3] and for different cultivation types and methods [4]. Commercially there is a great variety of screens that differ in material types, color and porosity. These characteristics affect their optical and aerodynamic properties; therefore, they have been strongly studied and modified seeking to improve the microclimatic conditions generated inside the SH [5, 6].
According to the manufacturing material of the porous screen used and its properties, various agricultural benefits are sought, such as: (i) shading in regions where solar radiation is excessive, reaching supra-optimal values [7]; (ii) reducing the vulnerability of crops to damage by weather events such as hail and wind gusts [8, 9]; (iii) limiting night-time cooling through the reduction of radiative energy losses [10]; (iv) excluding insects and vectors that transmit viruses, allowing significant reductions in the application of pesticides [6, 11]; and (v) increasing the efficient use of water, extending the growth period of the plants and delaying the ripening process of some horticultural products [12, 13]. In addition to the benefits mentioned above, this type of structure has become popular and widespread among farmers because it can potentially maximize the benefit of crops with a low-cost technological contribution compared to conventional greenhouses [9].
The knowledge of the microclimate in SH, as in plastic greenhouses, is essential to achieve adequate crop management [6]. The effects of different types of screens on the microclimate of plants have been studied since the beginning of the century [1, 11, 14, 15]. The use of screens mainly influences the radiation exchange and the airflow dynamics, reducing air speed and modifying its turbulence characteristics [16], thus affecting ventilation rates and the exchanges of heat, mass and gases between the plants and their surrounding atmosphere. This usually translates into high values of variables such as temperature and humidity, which can cause physiological disorders and environments conducive to the appearance of fungal diseases that affect the final crop yield [17].
The studies dedicated to the measurement, modeling and simulation of the microclimate distribution in conventional greenhouses have been extensive in the last three decades, obtaining results that have allowed to describe the distribution of temperature, humidity, CO2 concentration and the characteristics of airflow patterns, and develop management strategies to optimize the behavior of these variables [18-20]. On the other hand, the studies related in this field with SH are still scarce, although there are significant advances as summarized in the study developed by Tanny et al. [6]. Currently, there is a need to generate relevant information that allows researchers and farmers of horticultural products to obtain a deep understanding of the patterns and characteristics of the airflow in order to obtain a better design and positioning of screen houses [21] or study the aerodynamic effect of different types of screens on physical and biological processes in these systems [9].
One of the most used tools since the beginning of the century to characterize the microclimate inside greenhouses and its interaction with the plant has been computational fluid dynamics (CFD). This tool models and simulates fluid flow and the transfer of heat, mass and momentum, and has enabled great advances in the design and optimization of agricultural structures [22, 23]. The study of the microclimate in screenhouses can be approached through CFD numerical simulations, considering the roof material as a porous medium, which allows evaluating a great variety of structures, screens and climatic environments in a relatively short period of time. Bartzanas et al. [1] developed a two-dimensional CFD study to assess the effect of a screen on radiation distribution, finding that the optical and spectral properties directly affect the distribution of solar radiation, and that the degree of porosity of the screen reduces air velocity, affecting the thermal behavior inside the screenhouse. Other relevant studies using 3D CFD modeling evaluated the behavior of air flows and the temperature in screenhouses used for tomato cultivation, reporting that these parameters are strongly affected by the degree of porosity of the screen [24]. However, these works have not been developed for the warm climate conditions of the Central American Caribbean region.
According to the above, the objective of this work was to determine through 3D CFD simulation the thermal and airflow patterns behavior of an insect proof screen house established in Guanacaste - Costa Rica. With the purpose of evaluating two configurations of the productive system used in different times of the year.
2.1 Experimental site and climatic conditions
The study area is in the coastal area of the canton of Abangares, province of Guanacaste in northwest Costa Rica (10º11' N, 85º10' W at an altitude of 10 m a.s.l.). This region has a warm tropical climate with a dry season, and according to the Köppen-Geiger climate classification, the area has an Aw climate [25]. The average multi-year average temperature is 27.7℃, with maximum and minimum averages of 36.9 and 21.1℃ (Figure 1a). The annual rainfall reaches a value of 1669.7 mm, distributed during the months of May to November (Figure 1a). The average wind speed oscillates in the year between 0.2 and 1.4 ms-1 (Figure 1b), with predominant directions between SE-SSE.
2.2 Description of the screenhouse
The development of the experimental study was carried out in a flat roof SH with a covered floor area of 1,496 m2, where the longitudinal section was in an east-west direction (E-W). The geometric characteristics of the structure were the following: width (X = 34 m), length (Z = 44 m) and height (Y = 5 m) (Figure 2a). The side walls and roof were covered with a porous insect-proof screen (Dimensions thread 16.1x10.2 and porosity ε = 0.33). Inside the screenhouse, small semicircular tunnels of 2.2 m of height and 1.2 m of width were built located along the longitudinal axis of the SH and on top of the cultivation beds, these tunnels were covered with polyethylene to be used during the rainy season, in order to avoid or reduce to the maximum the wetting of the foliage (Figure 2b).
Figure 1. Meteorological characteristics for the canton of Abangares, province of Guanacaste in northwest Costa Rica
Figure 2. Dimensions and interior detail of the screenhouse
2.3 Fundamental equations and physical models
The models described in this section represent the physical principles governing the problem under study. They were selected from those reported in the literature for similar problems, having shown appropriate computational performance and numerical results that match real behavior. The governing flow equations, presented in Eq. (1), take the form of diffusion-convection equations for three conservation laws, comprising the mass, momentum, and energy transport equations of a compressible fluid in a three-dimensional (3D) field at steady state.
$\nabla \cdot(\rho \phi \vec{v})=\nabla \cdot(\Gamma \nabla \phi)+S_{\phi}$ (1)
where ρ is the density of the fluid (kg m-3), ∇ is the nabla operator, ϕ represents the concentration of the transported quantity in dimensionless form (for the momentum, mass, and energy conservation equations), $\vec{v}$ is the velocity vector (m s-1), Γ is the diffusion coefficient (m2 s-1), and $S_{\phi}$ represents the source term [26].
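To make the discretized form of Eq. (1) concrete, the following minimal sketch solves a steady one-dimensional convection-diffusion equation for a scalar ϕ with constant ρu and Γ, using central differencing and a tridiagonal (Thomas) solve. It is an illustration of the transport-equation form only, not the solver configuration used in this study (which is ANSYS Fluent); all parameter values are illustrative.

```python
def solve_convection_diffusion_1d(n=41, length=1.0, rho=1.0, u=0.5,
                                  gamma=0.1, phi_l=0.0, phi_r=1.0):
    """Steady 1D convection-diffusion d(rho*u*phi)/dx = d(gamma*dphi/dx)/dx,
    central differencing on a uniform grid, solved with the Thomas algorithm.
    All inputs are illustrative values, not those of the screenhouse model."""
    dx = length / (n - 1)
    F = rho * u            # convective flux per unit area
    D = gamma / dx         # diffusive conductance
    aW, aE = D + F / 2.0, D - F / 2.0   # central-differencing coefficients
    aP = aW + aE
    m = n - 2                           # number of interior unknowns
    a = [-aW] * m                       # sub-diagonal
    b = [aP] * m                        # main diagonal
    c = [-aE] * m                       # super-diagonal
    d = [0.0] * m
    d[0] += aW * phi_l                  # fold Dirichlet boundaries into RHS
    d[-1] += aE * phi_r
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    phi = [0.0] * m
    phi[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return [phi_l] + phi + [phi_r]

phi = solve_convection_diffusion_1d()
```

With these values the cell Peclet number is small (F dx/Γ = 0.125), so the central scheme is stable and the profile rises monotonically from the left to the right boundary value.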
The turbulent nature of the airflow was simulated using the standard k-ε turbulence model, which is widely used and validated in greenhouse studies, where it has shown adequate fit and accuracy at a low computational cost [27, 28]. Because wind speeds are low in some areas inside the screenhouse, buoyancy effects driven by changes in air density are present [29, 30]; these effects were modeled using the Boussinesq approximation, given by Eq. (2) and Eq. (3).
$\left(\rho-\rho_{0}\right) g=-\rho_{0} \beta\left(\mathrm{T}-\mathrm{T}_{0}\right) \mathrm{g}$ (2)
$\beta=-\left(\frac{1}{\rho}\right)\left(\frac{\partial \rho}{\partial \mathrm{T}}\right)_{\mathrm{p}}=\frac{1}{\rho} \frac{\mathrm{p}}{\mathrm{RT}^{2}}=\frac{1}{\mathrm{T}}$ (3)
where g is the gravitational constant (m s-2); β is the volumetric thermal expansion coefficient (K-1); $\rho_{0}$ is the reference density (kg m-3); R is the gas constant (J K-1 mol-1); p is the pressure (Pa); and $\mathrm{T}_{0}$ is the reference temperature (℃).
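The Boussinesq buoyancy term of Eq. (2), with β ≈ 1/T₀ for an ideal gas from Eq. (3), can be sketched as follows. The reference values T₀ = 300 K and ρ₀ = 1.225 kg m⁻³ are illustrative assumptions, not the settings of the study's model:

```python
def buoyancy_source(T, T0=300.0, rho0=1.225, g=9.81):
    """Boussinesq buoyancy body force per unit volume, Eq. (2):
    (rho - rho0) g  ~  -rho0 * beta * (T - T0) * g,
    with beta = 1/T0 for an ideal gas, Eq. (3).
    T, T0 in K; rho0 in kg m^-3; g in m s^-2. Illustrative values only."""
    beta = 1.0 / T0   # volumetric thermal expansion coefficient (K^-1)
    return -rho0 * beta * (T - T0) * g

# An air parcel 5 K warmer than the reference experiences a negative
# (upward-driving) deviation of the gravitational body force:
f = buoyancy_source(305.0)
```

This is the linearization that lets the solver treat density as constant everywhere except in the gravity term, which is what makes free-convection cells inside the screenhouse computable at low cost.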
Likewise, the energy equation and a radiation model were considered, namely the discrete ordinates (DO) model with angular discretization. The DO model has been widely used in studies of greenhouses [31-34] and screenhouses [1]. By means of Eq. (4), this model computes the radiative and convective exchanges between the roof, floor, and walls of a structure, which in the case of greenhouses are treated as semi-transparent media. It also makes it possible to analyze the climate under night conditions, simulating the radiative loss from the greenhouse floor to the outside environment. For this purpose, the sky is treated as a black body with an equivalent temperature (Tc) for the two predominant scenarios of cloudy, humid nights and clear, humid nights [35-37].
$\nabla \cdot\left(I_{\lambda}(\vec{r}, \vec{s}) \vec{s}\right)+\left(a_{\lambda}+\sigma_{s}\right) I_{\lambda}(\vec{r}, \vec{s})=a_{\lambda} n^{2} \frac{\sigma T^{4}}{\pi}+\frac{\sigma_{s}}{4 \pi} \int_{0}^{4 \pi} I_{\lambda}\left(\vec{r}, \vec{s}^{\,\prime}\right) \Phi\left(\vec{s} \cdot \vec{s}^{\,\prime}\right) d \Omega^{\prime}$ (4)
where $I_{\lambda}$ is the radiation intensity at wavelength λ; $\vec{r}$ and $\vec{s}$ are the position and direction vectors, respectively; $\vec{s}^{\,\prime}$ is the scattering direction vector; $\sigma_{s}$ and $a_{\lambda}$ are the scattering and spectral absorption coefficients; n is the refractive index; ∇ is the divergence operator; σ is the Stefan-Boltzmann constant (5.669×10−8 W m−2 K−4); and Φ, T, and Ω are the phase function, the local temperature (K), and the solid angle, respectively.
The presence of insect screens was modeled using equations derived from the flow of a free and forced fluid through porous materials, taking into account their main characteristics of porosity and permeability [38, 39]. These equations can be derived using Eq. (5), which represents the Forchheimer equation.
$\frac{\partial p}{\partial x}=\frac{\mu}{K} u+\rho \frac{C_{f}}{\sqrt{K}} u|u|$ (5)
where u is the air velocity (m s-1); μ is the dynamic viscosity of the fluid (kg m-1 s-1); K is the permeability of the medium (m2); Cf is the inertial (non-linear momentum-loss) factor; ρ is the air density (kg m-3); and ∂x is the thickness of the porous material (m). The inertial factor Cf and the permeability K of the screen have been evaluated in several experimental wind-tunnel studies, whose numerical results were fitted to equations correlated with the porosity (ε) of the screen. For the insect-proof porous screens commonly used in protected agriculture, the aerodynamic parameters are obtained from Eq. (6) and Eq. (7), the mathematical expressions that best fit the wind-tunnel data [39-42].
$C_{f}=0.00342 \varepsilon^{-2.5917}$ (6)
$K=2 \times 10^{-7} \varepsilon^{3.5331}$ (7)
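Equations (5)-(7) can be combined into a short sketch that evaluates the screen's aerodynamic parameters from its porosity and the resulting pressure gradient across it. The air properties μ and ρ are assumed typical sea-level values, not figures taken from the paper; note that for the screen used in this study (ε = 0.33), Eq. (7) yields K ≈ 3.98 × 10⁻⁹ m², the porous-jump permeability listed in Table 1:

```python
import math

def screen_parameters(porosity):
    """Empirical wind-tunnel fits for insect-proof screens, Eqs. (6)-(7)."""
    Cf = 0.00342 * porosity ** -2.5917   # inertial factor (dimensionless)
    K = 2e-7 * porosity ** 3.5331        # permeability (m^2)
    return Cf, K

def forchheimer_dpdx(u, porosity, mu=1.8e-5, rho=1.225):
    """Pressure gradient (Pa m^-1) across the screen, Eq. (5).
    mu and rho are assumed typical air values (illustrative)."""
    Cf, K = screen_parameters(porosity)
    return (mu / K) * u + rho * (Cf / math.sqrt(K)) * u * abs(u)

# Screen used in this study: epsilon = 0.33
Cf, K = screen_parameters(0.33)
```

The quadratic term dominates at the higher approach velocities, which is why the pressure drop, and hence the momentum loss inside the SH, grows faster than linearly with outside wind speed.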
Figure 3. Meshing of the computer domain
2.4 Computational domain and generation of the mesh
The construction of the computational domain, the meshing, and the evaluation of mesh quality followed existing guidelines on good CFD simulation practice, which establish the minimum criteria for these three parameters, directly related to the accuracy of the results and the computational effort required. The ANSYS ICEM CFD 18.2 preprocessing software was used to generate a large computational domain composed of the screenhouse (Figure 3b) and its surroundings, in order to guarantee an appropriate definition of the atmospheric boundary layer and avoid generating forced flows with unrealistic velocities and behavior [43]. The dimensions of the computational domain were 184, 75, and 194 m along the X, Y, and Z axes, respectively (Figure 3a); this size was determined following the recommendations of numerical studies of the wind environment around buildings [44]. The domain was divided into an unstructured mesh of hexahedral elements totaling 7,787,701 discretized volumes. This number of elements was chosen after verifying the independence of the numerical solutions for airflow and temperature on a total of 7 meshes of different sizes, ranging from 1,345,123 to 12,123,456 elements; the independence test followed the procedure reported and used successfully by Villagran et al. [34]. Two mesh quality parameters were evaluated: the cell-to-cell size variation, for which 92.3% of the cells fell within the high-quality range (0.9-1.0), and the orthogonality criterion, for which the minimum value obtained was 0.92; both results lie within the adequate quality range [45, 46].
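The mesh-independence check described above can be sketched as follows. The monitored values below are hypothetical, invented purely for illustration; the study monitored airflow and temperature across its 7 meshes:

```python
def relative_changes(monitored):
    """Relative change of a monitored quantity (e.g. mean air temperature)
    between successive mesh refinements, ordered coarse -> fine."""
    return [abs(b - a) / abs(a) for a, b in zip(monitored, monitored[1:])]

# Hypothetical mean temperatures (degrees C) for 7 meshes of increasing size
temps = [38.9, 38.3, 37.9, 37.7, 37.65, 37.62, 37.61]
changes = relative_changes(temps)

# The solution is taken as mesh-independent once further refinement changes
# the monitored quantity by less than a chosen tolerance (here 0.1%)
independent = [c < 0.001 for c in changes]
```

Once the change between the two finest meshes falls below the tolerance, the coarser of the two can be adopted, which is how an intermediate mesh (7,787,701 elements) can be justified over the finest one tested.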
2.5 Boundary conditions and convergence criteria
The ANSYS FLUENT 18.2 processing software was used to perform the simulations under the conditions set out in Table 1. The simulations were run on a computer with an Intel® Xeon W-2155 processor with twenty cores at 3.30 GHz and 128 GB of RAM, under a Windows 10 64-bit operating system. The semi-implicit solution method for the pressure-velocity equation (SIMPLE) was applied to solve the flow field of the simulated fluid. The convergence criterion of the model was set to 10-6 for all the equations considered [47]. With this equipment and these simulation criteria, chosen to balance accuracy and computational effort, the average simulation time was 53 hours for approximately 7,900 iterations.
The upper boundary of the domain and the surfaces parallel to the flow were assigned symmetry boundary conditions so as not to generate frictional losses in the airflow in contact with these surfaces. The simulations considered the atmospheric characteristics of the air and the physical and optical properties of the materials within the computational domain, which are summarized in Table 1. A no-slip wall boundary condition was imposed at the lower boundary and at the walls of the greenhouse, while at the left boundary the mean wind speed was imposed as a logarithmic inlet profile [48]. The profile was linked to the main CFD module through a user-defined function, using Eq. (8).
$v(y)=\frac{v^{*}}{K} \ln \left(\frac{y+y_{o}}{y_{o}}\right)$ (8)
where $y_{o}$ is the surface roughness, set in this case to 0.03 m according to the characteristics of the local terrain; v* is the friction velocity; v(y) is the mean wind speed at height y above ground level; and K is the von Karman constant, with a value of 0.42. The leeward boundary was treated as a pressure-outlet boundary condition. The model does not include a crop, since a solution independent of plant type and size was sought. The remaining boundary conditions imposed on the computational domain, and the physical and optical properties of the materials, taken from works such as those of Flores-Velazquez et al. [42] and Villagran et al. [49], are summarized in Table 1.
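The logarithmic inlet profile of Eq. (8) can be sketched directly. The helper `friction_velocity`, which inverts Eq. (8) to recover v* from a reference speed measured at 2 m height (the measurement height in Table 1), is an assumption added for illustration, as the paper does not state how v* was obtained:

```python
import math

def wind_profile(y, v_star, y0=0.03, kappa=0.42):
    """Logarithmic inlet wind profile, Eq. (8): v(y) = (v*/K) ln((y+y0)/y0).
    y0 = 0.03 m and K = 0.42 are the values used in the study."""
    return (v_star / kappa) * math.log((y + y0) / y0)

def friction_velocity(v_ref, y_ref=2.0, y0=0.03, kappa=0.42):
    """Invert Eq. (8) to get v* from a reference speed at height y_ref
    (hypothetical helper, not described in the paper)."""
    return kappa * v_ref / math.log((y_ref + y0) / y0)

# 1 m s^-1 measured at 2 m height -> profile at several heights
v_star = friction_velocity(1.0)
profile = [wind_profile(y, v_star) for y in (0.5, 1.0, 2.0, 5.0)]
```

The profile is zero at the ground and grows logarithmically with height, which is what prevents the unrealistically uniform inflow that a constant-velocity inlet would impose on the windward screen.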
Table 1. Settings of the computational fluid dynamics (CFD) model simulations and boundary conditions

Boundary conditions:
- Entry domain: velocity-inlet logarithmic profile (air velocity at 2 m height) and atmospheric pressure.
- Output domain: pressure outlet (zero pressure and same turbulence condition).
- Treatment of porous medium: screen porous jump, permeability (α) = 3.98 × 10-9 and drag coefficient (C2) = 19,185.
- Ground: constant condition; Boussinesq hypothesis activated in the buoyancy effect of the turbulence model.

Physical and optical properties of the materials used: density (ρ, kg m-3); thermal conductivity (k, W m-1 K-1); specific heat (Cp, J K-1 kg-1); coefficient of thermal expansion (K-1); absorptivity; scattering coefficient; emissivity.
Table 2. Initial boundary conditions for simulated configurations. Diurnal period: wind speed (m s-1), wind direction (°), air temperature (°C), and solar radiation (W m-2) for the dry configuration (DC) and the rain configuration (RC). Nocturnal period: the same variables plus Tc* (°C).
* Equivalent temperature of the sky.
Table 3. Initial boundary conditions to validate simulation
2.6 Measurements and experimental procedure
During the experimental phase, between July 01 and July 10, 2018, climatic variables were recorded every ten minutes inside and outside the SH in order to obtain data for the validation of the CFD model. Outside, a conventional I-Metos weather station (Pessl Instruments GmbH, Weiz, Austria) was used, located 50 m from the screenhouse and equipped with sensors for temperature (range: -30℃ to 99℃, accuracy: ±0.1℃), relative humidity (range: 10% to 95%, accuracy: ±1%), global solar radiation (range: 0 W m-2 to 2,000 W m-2, accuracy: ±2%), wind speed (range: 0 m s-1 to 70 m s-1, accuracy: ±5%), wind direction (range: 0° to 360°, resolution: 2°, accuracy: ±7°), and precipitation (range: 6.5 cm per measurement period, resolution: 0.01 cm, accuracy: ±0.1%). The indoor air temperature of the screenhouse was recorded by nine HOBO® Pro RH-Temp H08-032-08 data loggers (Onset Computer Corp., Pocasset, USA), which measure temperature in the range −20℃ to 70℃ with an accuracy of ±0.3℃. These sensors were located at a height of Y = 1.8 m above ground level along the center line of the screenhouse at X = 17 m, distributed uniformly along the longitudinal axis Z = 40 m; additionally, each device was covered with a capsule acting as a protective shield against direct solar radiation.
2.7 Simulated scenarios
The validated CFD numerical model was used as a simulation tool to determine the thermal and aerodynamic behavior of the screenhouse, evaluating two specific configurations, one with rain (RC) and one dry (DC), under diurnal and nocturnal climate conditions, with the initial conditions listed in Table 2.
2.8 Validation of the model developed
The CFD model was validated by comparing the temperature data obtained experimentally in the SH with those obtained by numerical simulation for two specific conditions; the initial boundary conditions were taken as the average values of the climatic variables over the experimental period at a specific hour for day and night, respectively (Table 3). Validation is a necessary phase in order to verify the results of the numerical model and to establish that they are independent of parameters such as the quality and size of the mesh [50].
Another way to evaluate the performance and accuracy of numerical models is through goodness-of-fit criteria comparing measured and simulated data. In this case, the mean absolute error (MAE) was calculated with Eq. (9), the mean square error (MSE) with Eq. (10), and the mean absolute percentage error (MAPE) with Eq. (11).
$M A E=\frac{1}{n} \sum_{i=1}^{n}\left|X_{m i}-X_{s i}\right|$ (9)

$M S E=\frac{1}{n} \sum_{i=1}^{n}\left|X_{m i}-X_{s i}\right|^{2}$ (10)

$M A P E=\frac{1}{n} \sum_{i=1}^{n} \frac{\left|X_{m i}-X_{s i}\right|}{\left|X_{m i}\right|}$ (11)
where $X_{mi}$ is the measured value, $X_{si}$ the simulated value, and n the number of data points compared. Once the values of the goodness-of-fit criteria are verified to be close to 0, the model is considered validated and can be used to run the CFD simulations for the scenarios considered in this investigation.
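The three criteria of Eqs. (9)-(11) can be sketched in a few lines. The sample data below are hypothetical temperatures, not the study's measurements; MAPE is multiplied by 100 here because the paper reports it as a percentage:

```python
def goodness_of_fit(measured, simulated):
    """MAE, MSE and MAPE per Eqs. (9)-(11); MAPE returned in percent."""
    n = len(measured)
    abs_err = [abs(m - s) for m, s in zip(measured, simulated)]
    mae = sum(abs_err) / n
    mse = sum(e ** 2 for e in abs_err) / n
    mape = 100.0 * sum(e / abs(m) for e, m in zip(abs_err, measured)) / n
    return mae, mse, mape

# Hypothetical measured vs. simulated temperatures (degrees C)
measured = [36.9, 37.4, 38.0, 37.1]
simulated = [37.2, 37.3, 37.6, 37.6]
mae, mse, mape = goodness_of_fit(measured, simulated)
```

Because MSE squares the residuals, it penalizes the occasional large deviation more than MAE does, which is why reporting both gives a fuller picture of model fit.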
3.1 Validation of the CFD model
The fit and performance of the CFD model were tested through a quantitative analysis. For the diurnal period, the absolute differences between simulated and measured values ranged between 0.25℃ and 1.08℃; for the nocturnal period, they ranged between 0.17℃ and 0.83℃. Figure 4 shows the trends of the simulated and measured data under daytime and night-time climatic conditions; the qualitative and quantitative behavior of the two data sets is similar, from which it can already be inferred that the CFD model makes adequate temperature predictions for the SH studied.
On the other hand, for the goodness-of-fit criteria used to evaluate the numerical model, values of 0.70℃ and 0.55℃ were obtained for the MAE and MSE, respectively, and an MAPE value of 1.46% for the daytime condition, while for the night-time simulation conditions, values of 0.54℃ and 0.32℃ were obtained for the MAE and MSE, respectively, and an MAPE value of 1.32%. These values obtained for the temperature are in the same order of magnitude as those found by Ali et al. [51].
These results allow us to conclude that the CFD numerical model has adequate capacity to predict the temperature behavior within the SH. Although no experimental measurement of airflow patterns was performed, the indoor thermal behavior is known to depend on the airflow patterns; therefore, this model can be used as a tool to perform aerodynamic and thermal analysis within the SH structure.
3.2 Daytime period
3.2.1 Air flows
Figure 5 shows the behavior of the wind speed inside the SH for the RC and DC scenarios. For DC, an airflow with an average speed of 0.21 m s-1 and maximum and minimum values of 0.49 m s-1 and 0.06 m s-1, respectively, is observed (Figure 5a). The flow pattern shows higher air velocity in the roof area of the SH along the central length of the structure, directed towards the leeward wall; this behavior has been described in previous studies [11, 42, 52]. For the DC scenario, two converging flow zones can be observed between the ground and the roof area: the zone between the windward wall and X = 12 m, with a low-speed flow opposite in direction to the outside airflow, and the zone between X = 12 m and the leeward wall, with higher velocities, where the flow follows the direction of the external flow in the upper half of the SH and reverses direction in the lower half. Likewise, the interaction region between the windward wall and the roof of the SH shows vectors of low intensity and speed, caused by the momentum loss imposed on the airflow by the insect-proof porous screen (Figure 5a). For the RC case, the air moves in a single convective cell, clearly differentiated from DC: the flow follows the direction of the external airflow, with an average velocity of 0.24 m s-1, in the area above the small plastic tunnels, and reverses direction in the area below them, with an average speed of 0.36 m s-1 (Figure 5b).
Figure 4. Comparison of simulated and measured temperature data
Figure 5. Simulated air velocity field inside the screenhouse (m s–1). (a) The configuration of DC, and (b) The configuration RC for the diurnal period
Figure 6. Normalized air velocity (Vint/Vext) inside the screenhouse during the diurnal period for the DC and RC configurations
In order to compare the airflow velocities inside the structure for RC and DC, the normalized wind speed (VN), the ratio of interior to exterior air velocity, was calculated at heights of Y = 1 m and Y = 2 m above ground level. Figure 6 shows the VN curves for RC and DC at each of the evaluated heights along the width of the SH. For RC-1m, the airflow was reduced by between 56% and 99.6% with respect to the external air; the zone of lowest velocities appears over the 5 m adjacent to the leeward and windward side walls, whereas the highest velocities occur between 8 m and 30 m across the width of the SH. This flow reduction is driven by the strong pressure drops generated when the external air passes through the screen [52]. In the case of RC-2m, an even greater reduction of the airflow is observed, influenced mainly by the plastic covering the tunnels; here the air velocities are between 80% and 98.8% below the external airspeed, and the behavior is more homogeneous along the width of the SH.
For DC-1m and DC-2m, smaller flow reductions are observed compared with the RC scenario: between 26% and 88% relative to the external wind speed, values that coincide with previous studies by Flores-Velazquez et al. [53]. The behavior of DC-1m and DC-2m is very similar: the greatest flow reductions occur between the windward wall and the zone 10 m adjacent to it; the lowest reduction rates occur between 10 m and 29 m of the width of the SH; and between 29 m and the leeward wall the air reduction indexes increase again (Figure 6). From this it can be deduced that the presence of plastic tunnels inside the SH generates greater reductions of the airflow in RC, and spatial flow behavior that differs from that of DC.
3.2.2 Thermal behavior
Figure 7 shows the spatial behavior of the temperature inside the SH at a height of 2 m above ground level. For the DC scenario, an average temperature of 37.6 ± 0.2℃ was obtained, with maximum and minimum values of 38.3℃ and 36.9℃, respectively. Qualitatively, the zones of highest temperature occurred near the windward side wall, precisely where the lowest airflow velocities are present; conversely, the zones of lowest temperature were found near the front and leeward walls, in a small area of the windward wall, and over the region of greatest airflow (Figure 7a). The vertical distribution of the temperature in this case showed a behavior directly related to the air movements, as shown by Teitel et al. [2]: an area with values higher than 38℃ appears precisely in the interaction region of the two convective airflow cells generated in DC, extending from the ground to the SH cover (Figure 7b).
In the RC scenario, the mean temperature was 35.5 ± 0.4℃, with maximum and minimum values of 36.5℃ and 34.1℃, respectively. The spatial distribution within the interior volume showed three zones of higher temperature adjacent to the windward wall, expanding heterogeneously across the width of the SH; the low-temperature zones were located near the leeward side wall and extended over part of the central area of the SH (Figure 7c). The vertical temperature profile shows high-temperature zones just above the plastic tunnels, and another zone with similar values between the windward wall and the first plastic tunnel, an area with low air speeds and little air exchange (Figure 7d).
Figure 7. Simulated temperature profiles (°C) inside the screenhouse. (a) Top view of the DC configuration at 2 m of height, (b) Front view of the DC configuration, (c) Top view of the RC configuration at 2 m of height, and (d) Front view of the RC configuration for the diurnal period
Figure 8. Thermal gradient profile width of the screenhouse for DC and RC configurations during the diurnal period
The thermal gradient (∆T), the difference between the air temperature inside and outside the SH, was calculated for both DC and RC. Figure 8 shows ∆T at heights of 1 m and 2 m above ground level. In general, the average ∆T for RC exceeds that of the DC scenario by 0.7℃ at 1 m and 1.1℃ at 2 m. Additionally, ∆T in RC-2m shows greater variability between nearby points, with values between 1.7℃ and 2.5℃, which can be directly related to the presence of the plastic tunnels (Figure 8).
3.3 Night period
The distribution patterns of the airflow for RC and DC during the night period are presented in Figure 9. In the case of DC, two air movement cells can be observed. One moves clockwise from the central area of the SH towards the leeward wall, with average speeds of 0.11 m s-1 and zones of higher speed adjacent to the roof and floor of the structure. The other, adjacent to the windward wall, moves clockwise in the upper part of the SH with average velocities of 0.07 m s-1, and is complemented by a counter-clockwise movement in the lower part of the structure, with average speeds of 0.10 m s-1 (Figure 9a).
This behavior differs from that observed by Montero et al. [36] in greenhouses with impermeable plastic walls, and may be influenced by air leakage through the porous material of both the front and side walls of the SH. For RC, the airflow behavior is clearly differentiated into two zones. In the upper part of the screenhouse, just above the plastic tunnels, slight upward-downward currents move from the windward side wall to the leeward side wall, with an average velocity of approximately 0.13 m s-1. The other flow moves through the area below the plastic tunnels with two main characteristics: a displacement opposite to the outside airflow, and a higher air velocity, with average values of approximately 0.23 m s-1; this higher speed may be driven by free convection from a warm zone of lower air density (Figure 9b).
Figure 10 shows the normalized wind speed (VN) at the two heights evaluated for RC and DC. The average reduction of the indoor air velocity with respect to the outside air was 68% for DC-1m and 81% for DC-2m. In this scenario the air moves in the direction of the outside airflow, except for DC-2m in the zone between X = 4 m and X = 8 m across the width of the SH. In the RC scenario, the air moves in the direction opposite to the external airflow at both evaluated heights; the VN values show speed reductions of between 31% and 88%. The most homogeneous velocity pattern was obtained for RC-1m, unlike RC-2m, where high- and low-velocity vectors appear at relatively close points, clearly influenced by the presence of the plastic tunnels (Figure 10).
Figure 9. Simulated air velocity field inside the screenhouse (m s–1). (a) DC configuration; and (b) RC configuration during the nocturnal period
Figure 10. Normalized air velocity (Vint/Vext) inside the screenhouse for the DC and RC configurations during the nocturnal period
The spatial distribution of the temperature inside the SH for the night period can be seen in Figure 11. For the DC scenario, the average temperature was 20.4 ± 0.2℃, with a homogeneous spatial behavior inside the structure and minimum and maximum values of 20.2℃ and 21.3℃, respectively. The zones of highest temperature were located near the front and side walls of the structure, with a small cell towards its center, while the low-temperature zones lay in the central area of the SH between X = 7 m and X = 14 m across the width of the structure (Figure 11a). The vertical distribution of the temperature for DC shows that the soil in the central part of the structure is the warmest zone, with values close to 21.5℃ over approximately 20% of the evaluated volume; low-temperature areas with mean values of 20.2℃ appear in the central zone adjacent to the warmer area (Figure 11b).
Figure 11. Simulated temperature profiles (°C) inside the screenhouse. (a) Top view of the DC configuration at 2 m of height, (b) Front view of the DC configuration, (c) Top view of the RC configuration at 2 m of height; and (d) Front view of the RC configuration during the nocturnal period
Figure 12. Thermal gradient profile width of the screenhouse for the DC and RC configurations during the nocturnal period
The behavior for RC is presented in Figures 11c and 11d. The average temperature in this case was 23.9 ± 0.4℃. Under these conditions the spatial distribution of temperature was heterogeneous, with two clearly differentiated zones: one located between the central area of the screenhouse and the rear facade wall, with average temperatures of 24.3℃, and a low-temperature zone extending from the middle of the SH to the front wall, with values of 23.4℃ (Figure 11c). The vertical distribution of the temperature shows two higher-temperature zones with average values of 24.2℃, the first located between the soil and the cultivation beds, and the second in the lower part of the plastic tunnels. The zones of lowest temperature, with average values of 23.3℃, were located near the side walls and the roof of the screenhouse (Figure 11d).
Figure 12 shows the ∆T calculated for RC and DC at heights of 1 m and 2 m above ground level. One of the main differences observed is the sign of ΔT in each case. In the RC scenario, ∆T is positive at both evaluated heights, with an average value of 0.7℃, values as low as 0.05℃ over the areas surrounding the windward and leeward side walls, and values of up to 1.3℃ in the central area of the SH. It can therefore be inferred that the presence of the plastic tunnels influences the thermal behavior; for RC-2m, greater variability between nearby points was found (Figure 12).
The opposite occurred in the DC scenario, where the structure enters thermal inversion, a phenomenon characterized by indoor air temperatures lower than the outside air temperature. Numerically, this is confirmed by the ∆T values generated under this condition, with an average of -0.5℃; these values are within the range reported in previous studies by Teitel et al. [21]. The maximum ΔT was 0.3℃, in the central zone of the SH, and the minimum was -0.9℃, located at DC-1m in an area over the central zone of the SH extending towards the lateral leeward wall (Figure 12). The thermal inversion phenomenon is caused by radiative cooling in the thermal infrared, poor ventilation, and climatic conditions of low humidity and clear skies [36].
The main results of this research are as follows:
(1) CFD 3D simulation proved to be an optimal, valid and accurate tool to determine the microclimatic behavior of a screenhouse during the day and night period under the climatic conditions of the study region.
(2) The presence of small tunnels inside the structure (RC) has a negative effect on the speed and distribution of the airflow patterns, which translates into thermal conditions with ΔT values up to 1.1℃ higher than in the scenario where the tunnels are not used (DC).
(3) During the night period, the presence of the small tunnels (RC) improves the microclimate of the screenhouse by limiting the thermal inversion phenomenon characteristic of the DC scenario.
(4) Any modification of the cultivation system under screenhouse structures generates both positive and negative effects on the microclimate; therefore, such modifications should not be made on the basis of the farmers' criteria alone. For future studies, building on the validated CFD model produced by this investigation, it is recommended to include other variables of interest, such as different commercial insect-proof screens with defined aerodynamic properties, evaluations at shorter time scales allowing the simulation of different day and night meteorological conditions, and evaluations with a crop or with other geometric configurations of the screenhouse; on the experimental side, it is recommended to validate the flow patterns through sonic anemometry.
The authors wish to thank Corporación Colombiana de Investigación Agropecuaria (AGROSAVIA) and Instituto Nacional de Innovación y Transferencia en Tecnología Agropecuaria de Costa Rica (INTA) for their technical and administrative support in this study. The research was funded by The Regional Fund of Agricultural Research and Technological Development (FONTAGRO) as part of the project "Innovations for horticulture in protected environments in tropical zones: an option for sustainable intensification of family farming in the context of climate change in LAC".
Nomenclature

SH	screenhouse
RC	rain configuration
DC	dry configuration
Xmi	observed temperature data (°C)
Xsi	simulated temperature data (°C)
g	gravitational acceleration (m s-2)
k	thermal conductivity (W m-1 K-1)
VN	normalized wind speed
MAE	mean absolute error (°C)
MSE	mean square error (°C)
MAPE	mean absolute percentage error (%)
SIMPLE	semi-implicit solution method for the pressure-velocity equation
Sϕ	source term
Tc*	equivalent temperature of the sky (°C)
	components of speed (m s-1)
UDF	user-defined function
	air speed (m s-1)
yo	roughness length (m)

Greek symbols

Γϕ	diffusion coefficient
∆T	thermal gradient (°C)
β	thermal expansion coefficient (K-1)
ε	turbulent kinetic energy dissipation rate (m2 s-3)
μ	dynamic viscosity (kg m-1 s-1)
μt	turbulent viscosity (kg m-1 s-1)
ρ0	density (kg m-3)
ϕ	concentration of the transported quantity in dimensionless form
[1] Bartzanas, T., Katsoulas, N., Kittas, C. (2012). Solar radiation distribution in screenhouses: A CFD approach. In VII International Symposium on Light in Horticultural Systems, 956: 449-456. https://doi.org/10.17660/ActaHortic.2012.956.52
[2] Teitel, M., Liang, H., Tanny, J., Garcia-Teruel, M., Levi, A., Ibanez, P.F., Alon, H. (2017). Effect of roof height on microclimate and plant characteristics in an insect-proof screenhouse with impermeable sidewalls. Biosystems Engineering, 162: 11-19. https://doi.org/10.1016/j.biosystemseng.2017.07.001
[3] Tanny, J., Cohen, S. (2003). The effect of a small shade net on the properties of wind and selected boundary layer parameters above and within a citrus orchard. Biosystems Engineering, 84(1): 57-67. https://doi.org/10.1016/S1537-5110(02)00233-7
[4] Shahak, Y., Gal, E., Offir, Y., Ben-Yakir, D. (2008, October). Photoselective shade netting integrated with greenhouse technologies for improved performance of vegetable and ornamental crops. In International Workshop on Greenhouse Environmental Control and Crop Production in Semi-Arid Regions, 797: 75-80. https://doi.org/10.17660/ActaHortic.2008.797.8
[5] Manja, K., Aoun, M. (2019). The use of nets for tree fruit crops and their impact on the production: A review. Scientia Horticulturae, 246: 110-122. https://doi.org/10.1016/J.SCIENTA.2018.10.050
[6] Tanny, J. (2013). Microclimate and evapotranspiration of crops covered by agricultural screens: A review. Biosystems Engineering, 114(1): 26-43. https://doi.org/10.1016/j.biosystemseng.2012.10.008
[7] Möller, M., Cohen, S., Pirkner, M., Israeli, Y., Tanny, J. (2010). Transmission of short-wave radiation by agricultural screens. Biosystems Engineering, 107(4): 317-327. https://doi.org/10.1016/j.biosystemseng.2010.09.005
[8] Ilić, Z.S., Milenković, L., Šunić, L., Fallik, E. (2015). Effect of coloured shade‐nets on plant leaf parameters and tomato fruit quality. Journal of the Science of Food and Agriculture, 95(13): 2660-2667. https://doi.org/10.1002/jsfa.7000
[9] Mahmood, A., Hu, Y., Tanny, J., Asante, E.A. (2018). Effects of shading and insect-proof screens on crop microclimate and production: A review of recent advances. Scientia Horticulturae, 241: 241-251. https://doi.org/10.1016/j.scienta.2018.06.078
[10] Teitel, M., Peiper, U.M., Zvieli, Y. (1996). Shading screens for frost protection. Agricultural and Forest Meteorology, 81(3-4): 273-286. https://doi.org/10.1016/0168-1923(95)02321-6
[11] Tanny, J., Pirkner, M., Teitel, M., Cohen, S., Shahak, Y., Shapira, O., Israeli, Y. (2014). The effect of screen texture on air flow and radiation transmittance: laboratory and field experiments. Acta horticulturae, (1015): 45-51.
[12] Tanny, J., Haijun, L., Cohen, S. (2006). Airflow characteristics, energy balance and eddy covariance measurements in a banana screenhouse. Agricultural and Forest Meteorology, 139(1-2): 105-118. https://doi.org/10.1016/j.agrformet.2006.06.004
[13] Pirkner, M., Tanny, J., Shapira, O., Teitel, M., Cohen, S., Shahak, Y., Israeli, Y. (2014). The effect of screen type on crop micro-climate, reference evapotranspiration and yield of a screenhouse banana plantation. Scientia Horticulturae, 180: 32-39. https://doi.org/10.1016/j.scienta.2014.09.050
[14] Cohen, S., Raveh, E., Li, Y., Grava, A., Goldschmidt, E.E. (2005). Physiological responses of leaves, tree growth and fruit yield of grapefruit trees under reflective shade screens. Scientia Horticulturae, 107(1): 25-35. https://doi.org/10.1016/j.scienta.2005.06.004
[15] Desmarais, G., Ratti, C., Raghavan, G.S.V. (1999). Heat transfer modelling of screenhouses. Solar Energy, 65(5): 271-284. https://doi.org/10.1016/S0038-092X(99)00002-X
[16] Siqueira, M.B., Katul, G.G., Tanny, J. (2012). The effect of the screen on the mass, momentum, and energy exchange rates of a uniform crop situated in an extensive screenhouse. Boundary-layer Meteorology, 142(3): 339-363. https://doi.org/10.1007/s10546-011-9682-5
[17] Meneses, J.F., Baptista, F.J., Bailey, B.J. (2007). Comparison of humidity conditions in unheated tomato greenhouses with different natural ventilation management and implications for climate and Botrytis cinerea control. Acta Horticulturae, 801(801): 1013-1020. https://doi.org/10.17660/ActaHortic.2008.801.120
[18] Teitel, M., Garcia-Teruel, M., Ibanez, P.F., Tanny, J., Laufer, S., Levi, A., Antler, A. (2015). Airflow characteristics and patterns in screenhouses covered with fine-mesh screens with either roof or roof and side ventilation. Biosystems Engineering, 131: 1-14. https://doi.org/10.1016/j.biosystemseng.2014.12.010
[19] Villagran, E., Bojaca, C.R. (2019). CFD simulation of the increase of the roof ventilation area in a traditional colombian greenhouse: Effect on air flow patterns and thermal behavior. International Journal of Heat and Technology, 37(3): 881-892. https://doi.org/10.18280/ijht.370326
[20] Mesmoudi, K., Meguellati, K., Bournet, P.E. (2017). Thermal analysis of greenhouses installed under semi arid climate. International Journal of Heat and Technology, 35(3): 474-486. https://doi.org/10.18280/ijht.350304
[21] Teitel, M., Garcia-Teruel, M., Alon, H., Gantz, S., Tanny, J., Esquira, I., Soger M., Levi, A., Schwartz, A., Antler, A. (2014). The effect of screenhouse height on air temperature. Acta Horticulturae, 1037: 517-523. https://doi.org/10.17660/ActaHortic.2014.1037.64
[22] Norton, T., Sun, D.W., Grant, J., Fallon, R., Dodd, V. (2007). Applications of computational fluid dynamics (CFD) in the modelling and design of ventilation systems in the agricultural industry: A review. Bioresource Technology, 98(12): 2386-2414. https://doi.org/10.1016/j.biortech.2006.11.025
[23] Bournet, P.E., Boulard, T. (2010). Effect of ventilator configuration on the distributed climate of greenhouses: A review of experimental and CFD studies. Computers and Electronics in Agriculture, 74(2): 195-217. https://doi.org/10.1016/j.compag.2010.08.007
[24] Flores-Velazquez, J., Ojeda, W., Villarreal-Guerrero, F., Rojano, A. (2015). Effect of crops on natural ventilation in a screenhouse evaluated by CFD simulations. In International Symposium on New Technologies and Management for Greenhouses-GreenSys2015, 1170: 95-102. https://doi.org/10.17660/ActaHortic.2017.1170.10
[25] Peel, M.C., Finlayson, B.L., McMahon, T.A. (2007). Updated world map of the Köppen-Geiger climate classification. Hydrology and Earth System Sciences Discussions, 4(2): 439-473. https://doi.org/10.5194/hess-11-1633-2007.
[26] Piscia, D., Montero, J.I., Bailey, B., Muñoz, P., Oliva, A. (2013). A new optimisation methodology used to study the effect of cover properties on night-time greenhouse climate. Biosystems Engineering, 116(2): 130-143. https://doi.org/10.1016/J.BIOSYSTEMSENG.2013.07.005
[27] Drori, U., Dubovsky, V., Ziskind, G. (2005). Experimental verification of induced ventilation. Journal of Environmental Engineering, 131(5): 820-826. https://doi.org/10.1061/(ASCE)0733-9372
[28] Villagrán, E.A., Bojacá, C.R. (2019). Effects of surrounding objects on the thermal performance of passively ventilated greenhouses. Journal of Agricultural Engineering, 50(1): 20-27. https://doi.org/10.4081/jae.2019.856
[29] Villagrán, E.A., Bojacá, C.R. (2019). Determination of the thermal behavior of a Colombian hanging greenhouse applying CFD simulation. Revista Ciencias Técnicas Agropecuarias, 28(3).
[30] Villagrán, E.A., Bojacá, C.R. (2019). Simulacion del microclima en un invernadero usado para la producción de rosas bajo condiciones de clima intertropicaL. Chilean Journal of Agricultural & Animal Sciences, 35(2): 137-150. https://doi.org/10.4067/s0719-38902019005000308
[31] Baxevanou, C., Fidaros, D., Bartzanas, T., Kittas, C. (2018). Yearly numerical evaluation of greenhouse cover materials. Computers and Electronics in Agriculture, 149: 54-70. https://doi.org/10.1016/j.compag.2017.12.006
[32] Nebbali, R., Roy, J.C., Boulard, T. (2012). Dynamic simulation of the distributed radiative and convective climate within a cropped greenhouse. Renewable Energy, 43: 111-129. https://doi.org/10.1016/J.RENENE.2011.12.003
[33] Yu, Y., Xu, X., Hao, W. (2018). Study on the wall optimization of solar greenhouse based on temperature field experiment and CFD simulation. International Journal of Heat and Technology, 36: 847-854. https://doi.org/10.18280/ijht.360310
[34] Villagrán, E.A., Romero, E.J.B., Bojacá, C.R. (2019). Transient CFD analysis of the natural ventilation of three types of greenhouses used for agricultural production in a tropical mountain climate. Biosystems Engineering, 188: 288-304. https://doi.org/10.1016/j.biosystemseng.2019.10.026
[35] Iglesias, N., Montero, J.I., Muñoz, P., Antón, A. (2009). Estudio del clima nocturno y el empleo de doble cubierta de techo como alternativa pasiva para aumentar la temperatura nocturna de los invernaderos utilizando un modelo basado en la Mecánica de Fluidos Computacional (CFD). Horticultura Argentina, 28: 18-23.
[36] Camacho, J.I.M., Muñoz, P., Guerrero, M.S., Cortés, E. M., Piscia, D. (2013). Shading screens for the improvement of the night time climate of unheated greenhouses. Spanish Journal of Agricultural Research, 1: 32-46. https://doi.org/10.5424/sjar/2013111-411-11
[37] Villagrán, E.A., Bojacá, C.R. (2019). Numerical evaluation of passive strategies for nocturnal climate optimization in a greenhouse designed for rose production (Rosa spp.). Ornamental Horticulture, 25(4): 351-364. https://doi.org/10.1590/2447-536X.V25I4.2087
[38] Campen, J.B. (2004). Greenhouse design applying CFD for Indonesian conditions. In International Conference on Sustainable Greenhouse Systems-Greensys2004, 691: 419-424. https://doi.org/10.17660/ActaHortic.2005.691.50
[39] Valera, D.L., Álvarez, A.J., Molina, F.D. (2006). Aerodynamic analysis of several insect-proof screens used in greenhouses. Spanish Journal of Agricultural Research, 4(4): 273-279. https://doi.org/10.5424/sjar/2006044-204
[40] Miguel, A.F., Van de Braak, N.J., Bot, G.P.A. (1997). Analysis of the airflow characteristics of greenhouse screening materials. Journal of Agricultural Engineering Research, 67(2): 105-112. https://doi.org/10.1006/jaer.1997.0157
[41] Teitel, M. (2007). The effect of screened openings on greenhouse microclimate. Agricultural and Forest Meteorology, 143(3-4): 159-175. https://doi.org/10.1016/j.agrformet.2007.01.005
[42] Flores-Velazquez, J., Montero, J.I. (2008). Computational fluid dynamics (CFD) study of large scale screenhouses. In International Workshop on Greenhouse Environmental Control and Crop Production in Semi-Arid Regions, 797: 117-122. https://doi.org/10.17660/ActaHortic.2008.797.14
[43] Bournet, P.E., Khaoua, S.O., Boulard, T. (2007). Numerical prediction of the effect of vent arrangements on the ventilation and energy transfer in a multi-span glasshouse using a bi-band radiation model. Biosystems Engineering, 98(2): 224-234. https://doi.org/10.1016/j.biosystemseng.2007.06.007
[44] Tominaga, Y., Mochida, A., Yoshie, R., Kataoka, H., Nozu, T., Yoshikawa, M., Shirasawa, T. (2008). AIJ guidelines for practical applications of CFD to pedestrian wind environment around buildings. Journal of Wind Engineering and Industrial Aerodynamics, 96(10-11): 1749-1761. https://doi.org/10.1016/j.jweia.2008.02.058
[45] ANSYS Fluent, V.18.0. Ansys Fluent Tutorial Guide. http://users.abo.fi/rzevenho/ansys%20fluent%2018%20tutorial%20guide.pdf, accessed on Nov. 21, 2019.
[46] Zhang, X., Wang, H., Zou, Z., Wang, S. (2016). CFD and weighted entropy based simulation and optimisation of Chinese Solar Greenhouse temperature distribution. Biosystems Engineering, 142: 12-26. https://doi.org/10.1016/j.biosystemseng.2015.11.006
[48] Richards, P.J., Hoxey, R.P. (1993). Appropriate boundary conditions for computational wind engineering models using the k-ϵ turbulence model. Journal of Wind Engineering and Industrial Aerodynamics, 46: 145-153. https://doi.org/10.1016/B978-0-444-81688-7.50018-8
[49] Villagrán, E.A., Bojacá, C. (2019). Study of natural ventilation in a Gothic multi-tunnel greenhouse designed to produce rose (Rosa spp.) in the high-Andean tropic. Ornamental Horticulture, 25(2): 133-143. https://doi.org/10.14295/oh.v25i2.2013
[50] Ramponi, R., Blocken, B. (2012). CFD simulation of cross-ventilation for a generic isolated building: impact of computational parameters. Building and Environment, 53: 34-48. https://doi.org/10.1016/j.buildenv.2012.01.004
[51] Ali, H.B., Bournet, P.E., Cannavo, P., Chantoiseau, E. (2018). Development of a CFD crop submodel for simulating microclimate and transpiration of ornamental plants grown in a greenhouse under water restriction. Computers and Electronics in Agriculture, 149: 26-40. https://doi.org/10.1016/J.COMPAG.2017.06.021
[52] Tanny, J., Teitel, M., Barak, M., Esquira, Y., Amir, R. (2007). The effect of height on screenhouse microclimate. In International Symposium on High Technology for Greenhouse System Management: Greensys2007, 801: 107-114. https://doi.org/10.17660/ActaHortic.2008.801.6
[53] Flores-Velazquez, J., Villarreal Guerrero, F., Lopez, I.L., Montero, J.I., Piscia, D. (2012, July). 3-Dimensional thermal analysis of a screenhouse with plane and multispan roof by using computational fluid dynamics (CFD). In Ist International Symposium on CFD Applications in Agriculture, 1008: 151-158. https://doi.org/10.17660/ActaHortic.2013.1008.19
If $A$ and $B$ are independent events, and $A$ and $C$ are independent events, how do I show that $A$ and $B\cup C$ are independent?
Let $A$ and $B$ be independent events, and let $A$ and $C$ be independent events. How do I show that $A$ and $B\cup C$ are independent events as well?
According to the definition of independent events, $A$ and $B\cup C$ are independent if and only if $$P(A\cap (B\cup C)) = P(A)P(B\cup C).$$
Since $A$ and $B$ and $A$ and $C$ are independent, I know that $$P(A\cap B) = P(A)P(B) \quad\text{and}\quad P(A\cap C)=P(A)P(C).$$
However, I have no idea how to solve this. I attempted to apply the probability rules I know but got nowhere.
probability self-study independence
jenn
$\begingroup$ Please add the [self-study] tag & read its wiki. $\endgroup$
– gung - Reinstate Monica
$\begingroup$ I find it a bit disappointing that people just did the problem here. Regardless of whether the "self-study" tag is there, we all know what it's like to be told an answer and what it's like to be led to one. The latter is almost always more meaningful. $\endgroup$
– jlimahaverford
$\begingroup$ I upvoted you; now I am wondering whether there is something missing from both my solution and jtobin's solution, since both of us assume that A, B and C are mutually independent, which might not be correct. $\endgroup$
– Deep North
$\begingroup$ Hmmm. That's a good point. I'm gonna actually work this out myself. $\endgroup$
$\begingroup$ What is especially disappointing is that this question has received three incorrect answers, though two may yet be modified. Consider two independent tosses of a fair coin, and let $B= \{HT,HH\}$ and $C=\{HT,TT\}$ be the events that the first and second tosses resulted in Heads and Tails respectively, and $A=\{HT,TH\}$ the event that exactly one toss resulted in Heads. Thus, $P(A)=P(B)=P(C)=\frac 12$, $P(A\cap B)=P(A\cap C)=\frac 14$, so that $A,B$ are independent as are $A,C$. But $P(B\cup C)=\frac 34$, $P(A\cap(B\cup C))=\frac 14 \neq P(A)P(B\cup C)$, that is, $A$ and $B\cup C$ are dependent. $\endgroup$
– Dilip Sarwate
You cannot show this result because it does not hold for all $A, B, C$ enjoying these properties. Consider the following counter-example.
Consider two independent tosses of a fair coin. Let $B=\{HT,HH\}$ and $C=\{HT,TT\}$ be the events that the first and second tosses resulted in Heads and Tails respectively. Let $A=\{HT,TH\}$ be the event that exactly one toss resulted in Heads.
Then, $P(A)=P(B)=P(C) = \frac 12$ while $P(A\cap B) = P(A\cap C) = \frac 14$ and so $A$ and $B$ are independent events as are $A$ and $C$ independent events. Indeed, $B$ and $C$ are also independent events (that is, $A$, $B$, and $C$ are pairwise independent events). However, $$P(A) = \frac 12 ~ \text{and}~ P(B\cup C)=\frac 34 ~ \text{while}~ P(A\cap(B\cup C)) =\frac 14 \neq P(A)P(B\cup C)$$ and so $A$ and $B\cup C$ are dependent events.
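This arithmetic is easy to verify by brute-force enumeration over the four equally likely outcomes:

```python
from fractions import Fraction

outcomes = {"HH", "HT", "TH", "TT"}          # two fair coin tosses
def p(event):
    return Fraction(len(event & outcomes), len(outcomes))

A = {"HT", "TH"}   # exactly one toss is Heads
B = {"HT", "HH"}   # first toss is Heads
C = {"HT", "TT"}   # second toss is Tails

assert p(A & B) == p(A) * p(B)               # A, B independent
assert p(A & C) == p(A) * p(C)               # A, C independent
assert p(B & C) == p(B) * p(C)               # even B, C independent
print(p(A & (B | C)), p(A) * p(B | C))       # 1/4 vs 3/8, so dependent
```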
Putting away our counter-example, let us consider what conditions are needed to make $A$ and $B\cup C$ independent events. The other answers have already done the work for us. We have that \begin{align} P(A\cap (B\cup C)) &= P((A\cap B) \cup (A\cap C))\\ &= P(A\cap B) + P(A\cap C) - P(((A\cap B) \cap (A\cap C))\\ &= P(A)P(B) + P(A)P(C) - P(A\cap B \cap C)\\ &= P(A)\left(P(B) + P(C) - P(B\cap C)\right) + \left(P(A)P(B\cap C) - P(A\cap B \cap C)\right)\\ &= P(A)P(B\cup C) + \left[P(A)P(B\cap C) - P(A\cap B \cap C)\right] \end{align} and so $P(A\cap (B\cup C))$ equals $P(A)P(B \cup C)$ (as is needed to prove that $A$ and $B\cup C$ are independent events) exactly when $P(A)P(B\cap C)$ equals $P(A\cap B \cap C) = P(A\cap (B\cap C))$, that is when $A$ and $B\cap C$ are independent events.
$A$ and $B\cup C$ are independent events whenever $A$ and $B\cap C$ are independent events.
Notice that whether $B$ and $C$ are independent or not is not relevant to the issue at hand: in the counter-example above, $B$ and $C$ were independent events and yet $A = \{HT, TH\}$ and $B\cap C = \{HT\}$ were not independent events. Of course, as noted by Deep North, if $A$, $B$, and $C$ are mutually independent events (which requires not just independence of $B$ and $C$ but also for $P(A\cap B \cap C) = P(A)P(B)P(C)$ to hold), then $A$ and $B\cap C$ are indeed independent events. Mutual independence of $A$, $B$ and $C$ is a sufficient condition.
Indeed, if $A$ and $B\cap C$ are independent events, then, together with the hypothesis that $A$ and $B$ are independent, as are $A$ and $C$ independent events, we can show that $A$ is independent of all $4$ of the events $B\cap C, B\cap C^c, B^c\cap C, B^c\cap C^c$, that is, of all $16$ events in the $\sigma$-algebra generated by $B$ and $C$; one of these events is $B\cup C$.
Dilip Sarwate
$\begingroup$ I would add that a trivial way to make the framed condition hold is $B$ and $C$ disjoint, since then $P(B\cap C)=0$. $\endgroup$
– Miguel
$\begingroup$ @Miguel Yes, that is another sufficient condition for $A$ and $B\cup C$ to be independent events, just like mutual independence of $A,B,C$ is a sufficient condition as my answer says. My answer is about what is the necessary condition for $A$ and $B\cup C$ to be independent events. $\endgroup$
1) Is there some way you know to rewrite the event $A \cap (B\cup C)$? Intuitively, we know how $A,B$ and $A,C$ interact, but we don't know how $B,C$ interact. So $(B\cup C)$ is getting in our way.
2) Is there some way you know of rewriting $P(X\cup Y)$?
Even if you don't immediately get the answer, please edit your answer with the answers to these questions and we'll go from there.
Please check me on this. I believe I have a counterexample.
Rolling a die to get X.
A: X < 4
B: X in {1, 4}
C: X in {1, 5}
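Enumerating the six equally likely die outcomes confirms that this works as a counterexample:

```python
from fractions import Fraction

outcomes = set(range(1, 7))                  # one roll of a fair die
def p(event):
    return Fraction(len(event & outcomes), len(outcomes))

A = {1, 2, 3}      # X < 4
B = {1, 4}
C = {1, 5}

assert p(A & B) == p(A) * p(B)               # 1/6 = 1/2 * 1/3
assert p(A & C) == p(A) * p(C)               # 1/6 = 1/2 * 1/3
print(p(A & (B | C)), p(A) * p(B | C))       # 1/6 vs 1/4, so dependent
```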
jlimahaverford
$\begingroup$ I would go by this answer! Try to work it out yourself! you do not gain too much by just seeing the answer! $\endgroup$
– Gumeo
As per Dilip Sarwate's comment, these events are demonstrably not independent.
The typical way I would try to prove independence proceeds like this:
\begin{align*} P(A, B \cup C) & = P(\{A, B\} \cup \{A, C\}) & \text{distributive property} \\ & = P(A, B) + P(A, C) - P(A,B,C) & \text{sum rule} \end{align*}
and here you'd like to factor $P(A)$ out of the expression in order to establish the property $P(A, B \cup C) = P(A)P(B \cup C)$, which would be sufficient to prove independence. However if you try to do that here, you get stuck:
$$ P(A, B) + P(A, C) - P(A,B,C) = P(A) \{ P(B) + P(C) - P(B,C \, | \, A) \} $$
Note that the braced expression is almost $P(B) + P(C) - P(B,C)$, which would get you to your goal. But you have no information that allows you to reduce $P(B,C \, | \, A)$ any further.
Note that in my original answer I had sloppily asserted that $P(B, C \, | \, A) = P(A)P(B, C)$ and thus erroneously claimed that the result asked to be proved was true; it's easy to mess up!
But given that it proves to be difficult to demonstrate independence in this way, a good next step is to look for a counterexample, i.e. something that falsifies the claim of independence. Dilip Sarwate's comment on the OP includes exactly such an example.
jtobin
$\begingroup$ Why is $P(A,B,C)$ on the second line equal to $P(A)P(B,C)$ on the third line? It is not given that $A$ is independent of $B\cap C$, just of $B$, and of $C$ separately. $\endgroup$
$\begingroup$ So, after your edit, is it just the derivation that is sloppy but the result claimed is itself correct, that is, $A$ is indeed independent of $B\cup C$ as the OP is tasked with proving? Or is it that the derivation does not prove the claim that $A$ is independent of $B\cup C$? $\endgroup$
$\begingroup$ @DilipSarwate My derivation does not prove the claim; my edit also changed the erroneous $=$ assertion to $\neq$ in an attempt to make this clear. I'll edit the answer again to be more explicit. $\endgroup$
– jtobin
$\begingroup$ OK, +1 for fixing your answer. $\endgroup$
$P[A \cap(B \cup C)]=P[(A \cap B) \cup (A \cap C)]=P(A \cap B)+P(A \cap C)-P[( A \cap B)\cap (A \cap C)]=P(A)*P(B)+P(A)*P(C)-P(A \cap B \cap C)$
$P(A)*P(B \cup C)=P(A)[P(B)+P(C)-P(B \cap C)]=P(A)*P(B)+P(A)*P(C)-P(A)*P( B \cap C)$
Now, we need to show $P(A \cap B \cap C)=P(A)*P( B \cap C)$
If $A, B,C$ are mutually independent,the results are obvious.
However, the conditions that $A$ and $B$ are independent and that $A$ and $C$ are independent do not guarantee the independence of $B$ and $C$.
Therefore, the OP may need to reexamine the condition of the question.
Deep North
$\begingroup$ In your second long equation, you got a $-P(A)P(B\cap C)$ term when you multiplied out that middle expression. But you wrote $-P(A\cap B \cap C)$ instead, that is, you equated $P(A)P(B\cap C)$ and $P(A\cap B \cap C)$, in effect assuming that $A$ and $B\cap C$ are independent. Why is that? $\endgroup$
$\begingroup$ Thanks, it is an assumed independent which may not be correct. $\endgroup$
$P\{A\cap(B\cup C)\}=P\{(A\cap B)\cup(A\cap C)\}=P(A\cap B)+P(A\cap C)-P(A\cap B\cap C)=P(A)P(B)+P(A)P(C)-P(A)P(B\cap C)$ [assuming $A,B,C$ are mutually independent] $=P(A)[P(B)+P(C)-P(B\cap C)]=P(A)P(B\cup C)$. Hence, under that assumption, $A$ and $B\cup C$ are independent.
Srishti Mondal
Dual-nozzle microfluidic droplet generator

Ji Wook Choi, Jong Min Lee, Tae Hyun Kim, Jang Ho Ha, Christian D. Ahrberg & Bong Geun Chung

Nano Convergence, volume 5, Article number: 12 (2018)
Droplet-generating microfluidics has become an important technique for a variety of applications ranging from single-cell analysis to nanoparticle synthesis. Although there are a large number of methods for generating and experimenting with droplets on microfluidic devices, dispensing droplets from these devices is a challenge due to aggregation and merging of droplets at the device interface. Here, we present a microfluidic dual-nozzle device for the generation and dispensing of uniformly sized droplets. The first nozzle of the microfluidic device is used for the generation of the droplets, while the second nozzle accelerates the droplets and increases the spacing between them, allowing for facile dispensing of droplets. Computational fluid dynamic simulations were conducted to optimize the design parameters of the microfluidic device.
The development of microfluidics and micro total analysis systems (µTAS) [1] has led to a paradigm shift in many research areas. Microfluidics allows the precise handling of small volumes of liquids, while maintaining a high control over mass and thermal transport, as well as fast response times at low cost and automation [2]. Two options exist for the operation of a microfluidic device, either continuous or segmented flow. While in continuous flow only one phase is used [3, 4], segmented flow breaks up the flow using two or more different phases [5]. Despite the higher complexity, the segmented flows possess a number of advantages over continuous flows. Typically, droplets provide faster mass and thermal transfer, while preventing boundary effects, such as axial dispersion. Furthermore, they provide small, reproducible volumes, can be manipulated independently, and serve as individual units for reactions [6]. Due to their high homogeneity and fast mass transfer, they are commonly used for the controlled synthesis of nanoparticles [7, 8]. Other applications can be found in the creation of artificial cells [9], the analysis of single cells [10], or in digital polymerase chain reaction [11]. For all of these applications, the generation of stable and monodispersed droplets is necessary.
In microfluidics, droplets can be made following either an active or a passive method. In active methods, droplets are generated by applying an external force. This can be done by applying either a direct or an alternating current. In systems consisting of one conducting and one insulating phase, charges accumulate on the interface due to electrochemical reactions. The resulting electrical field force results in the formation of droplets [12]. Alternatively, a force can be created through thermal expansion of one of the two phases, as can be done by localized laser irradiation [13, 14]. Lastly, droplets can be generated by active methods utilizing active valves or pneumatically actuated membranes [15, 16]. In passive methods, pressure-driven flows of the dispersed and continuous phases meet at a microchannel junction. The characteristics of the junction determine the interface deformation and the formation of droplets. One, infrequently used, option is to arrange both streams in coaxial microchannels. The dispersed phase is introduced in the central channel, while the continuous phase flows through outer channels [17, 18]. Similarly, flow-focusing geometries use a central flow of the dispersed phase and outer flows of the continuous phase. In contrast to coaxial microchannels, the flows pass a contraction region after which the central flow breaks up into droplets [19, 20]. The most popular method for passive droplet generation is the cross-flow method. Here, the flow of the continuous phase is partially blocked by a flow of the dispersed phase coming from a secondary channel. Through this, a shear gradient develops, the dispersed phase elongates, and eventually breaks into droplets [21,22,23]. Some applications, such as the filling of nanowells [24] or the production of micro-lenses [25], require the dispensing of the generated droplets.
Previously, this has been obtained by generating droplets using a pinched-flow channel, followed by injecting the droplets into a stream of a carrier gas for analysis by inductively coupled plasma mass spectrometry (ICP-MS) [26]. Other groups have achieved dispensing either by precise timing control of the dispensing process [27], or by the use of an active piezoelectric droplet generator [28]. Through the use of an active droplet generation method, the issue of droplet aggregation and merging can be prevented. However, this reduces the throughput of the microfluidic device and adds complexity to the system.
Here, we show a novel method of droplet dispensing using a dual-nozzle microfluidic setup. While the first nozzle is used for the generation of droplets, the second nozzle is used to accelerate the generated droplets and to increase the spacing between them. Through this, droplets can be dispensed at a high frequency without the issue of aggregation and merging at the device outlet. A computational fluid dynamic (CFD) model was created before the experiments to optimize the design of the microfluidic devices.
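The spacing effect of the second nozzle can be estimated with a simple plug-flow idealization (a rough sketch, not the CFD model used in this work): the droplet passage frequency is set by the first nozzle, the mean velocity in a channel of fixed cross-section scales with the total flow rate, and the centre-to-centre spacing is velocity divided by frequency. The flow rates and initial spacing in the example below are hypothetical numbers.

```python
def spaced_out(spacing_before_um, q_upstream_ul_min, q_spacer_ul_min):
    """Droplet spacing after the spacer nozzle, under a plug-flow idealization.

    The frequency f is conserved, the velocity u scales with the total flow
    rate Q, and spacing = u / f, so the spacing scales by
    (Q_upstream + Q_spacer) / Q_upstream.
    """
    return spacing_before_um * (q_upstream_ul_min + q_spacer_ul_min) / q_upstream_ul_min

# Hypothetical example: droplets 300 um apart leaving the first nozzle at a
# total upstream flow of 20 uL/min; injecting 20 uL/min more water downstream
# doubles both the droplet velocity and the spacing between droplets.
print(spaced_out(300.0, 20.0, 20.0))  # 600.0
```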
Computational model of the microfluidic device
Prior to experiments, the performance of the dual-nozzle droplet-generating microfluidic device was simulated using a CFD model. For this purpose, the two-phase flow with level set function of COMSOL (5.1, COMSOL Inc., USA) was used, as previously suggested [29]. The governing equations for the simulation are the Navier–Stokes equation and the continuity equation for the conservation of momentum and mass:
$$\uprho\frac{{\partial {\mathbf{v}}}}{\partial t} +\uprho\left( {{\mathbf{v}} \cdot \nabla } \right){\mathbf{v}} = \nabla \cdot \left[ { - p{\mathbf{I}} + \mu \left( {\nabla {\mathbf{v}} + \left( {\nabla {\mathbf{v}}} \right)^{T} } \right)} \right] + {\mathbf{F}}_{{{\mathbf{st}}}}$$
$$\nabla \cdot {\mathbf{v}} = 0$$
where $\mathbf{v}$, $p$, and $\mathbf{F}_{st}$ are the velocity vector, the pressure, and the surface tension force, respectively. The density and the dynamic viscosity are denoted by $\rho$ and $\mu$, respectively. The position of the phase interface can be tracked using the level set function as a transport equation:
$$\frac{\partial \phi }{\partial t} + {\mathbf{v}} \cdot \nabla \phi = \gamma \nabla \cdot \left( { - \phi \left( {1 - \phi } \right)\frac{\nabla \phi }{{\left| {\nabla \phi } \right|}} + \varepsilon \nabla \phi } \right)$$
where ϕ is the level set function, and γ and ɛ are numerical stabilization parameters. The following equations were used for the Multiphysics coupling of density and viscosity:
$$\uprho =\uprho_{1} + \left( {\uprho_{2} -\uprho_{1} } \right)\phi$$
$$\upmu =\upmu_{1} + \left( {\upmu_{2} -\upmu_{1} } \right)\phi$$
For the simulations, a density of $\rho_1$ = 800 kg/m³ and a dynamic viscosity of $\mu_1$ = 0.01 Pa·s were used for the oil phase. For water, the values were $\rho_2$ = 1000 kg/m³ and $\mu_2$ = 0.001 Pa·s, respectively. Furthermore, all fluids were assumed to be incompressible, homogeneous Newtonian fluids. A model of the microfluidic droplet dispensing device was constructed based on the AutoCAD drawing used for device fabrication. The walls were defined as wetted boundaries with a contact angle of 120° for the water phase, and a zero-pressure boundary condition was set at the outlet of the microfluidic device.
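The linear mixing rules for ρ and μ above translate directly into code. The sketch below simply evaluates those two formulas with the phase properties just listed (phase 1: oil, phase 2: water); it is an illustration, not the COMSOL implementation.

```python
RHO_OIL, MU_OIL = 800.0, 0.01        # phase 1: kg/m^3, Pa*s
RHO_WATER, MU_WATER = 1000.0, 0.001  # phase 2: kg/m^3, Pa*s

def mixture_properties(phi):
    """Density and viscosity interpolated linearly in the level set
    variable phi (phi = 0 in oil, phi = 1 in water)."""
    rho = RHO_OIL + (RHO_WATER - RHO_OIL) * phi
    mu = MU_OIL + (MU_WATER - MU_OIL) * phi
    return rho, mu

print(mixture_properties(0.0))  # pure oil phase
print(mixture_properties(1.0))  # pure water phase
print(mixture_properties(0.5))  # midway across the diffuse interface
```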
Fabrication of the dual-nozzle microfluidic device
A microfluidic dual-nozzle device consisting of two inlets for each nozzle was designed using AutoCAD (Autodesk, USA) and printed onto photomasks. All inlet channels were designed with a width of 70 µm, with the exception of the water inlet in the first nozzle, which had a width of 100 µm. The design from the masks was transferred to silicon wafers (Wangxing Silicon-Peak Electronics, China) using a standard soft-lithography process as shown previously [30]. Briefly, silicon wafers were cleaned using a wafer washing system and afterwards dried for 5 min at 200 °C on a hotplate. 5 mL of SU-8 50 photoresist (Microchem Corp., USA) was spin-coated onto the silicon wafers at 3000 rpm for 60 s, resulting in a 40 µm photoresist layer. The spin-coated wafer was soft-baked at 65 °C for 5 min and afterwards further heat-treated at 95 °C for 15 min on a hotplate to evaporate the solvent. After UV exposure for 10 s at an intensity of 20 mW/cm2, the wafers were baked at 65 °C for 1 min, followed by heat treatment at 95 °C for 4 min on a hotplate. The silicon masters were developed using SU-8 developer (Microchem Corp., USA) and dried with air. Poly(dimethylsiloxane) (PDMS, Dow Corning, USA) was poured onto the silicon wafers. After curing in an oven at 80 °C, the PDMS was peeled off from the silicon wafer and subsequently bonded onto glass slides using oxygen plasma.
Droplet dispensing experiments
Syringe pumps (PHD 2000, Harvard Apparatus, USA) were connected to the four inlets of the microfluidic device using tygon tubing (Sigma Aldrich, USA) to conduct droplet dispensing experiments. For experiments, de-ionized water (DI water) was used as the continuous phase and mineral oil (M5904, Sigma Aldrich, USA) as the dispersed phase. For experiments, all flow rates were systematically varied between 10 and 50 µL/min in increments of 10 µL/min, in accordance with the values previously used for numerical simulations. Images of the resulting droplets were captured using an inverted microscope (Olympus IX73, Japan) and were also analyzed using Image J (National Institute of Health, USA) regarding their droplet diameter and the distance between droplets.
Fabrication of dual-nozzle microfluidic device
The dual-nozzle microfluidic device consists of three water inlets and one oil inlet combined into two nozzles (Fig. 1). The first nozzle, which forms a Y-arrangement (Fig. 1b), was used for the generation of mineral oil droplets. In the second nozzle area, the distance between the formed mineral oil droplets could be adjusted through the injection of further water (Fig. 1c). Overall, the microfluidic device has an area of less than 2.5 cm2, making it easy to integrate into various applications (Fig. 1f).
Design and fabrication of the microfluidic dual-nozzle device. Schematic of the dual-nozzle device (a), magnified schematic of the first (b), and second nozzle (c). Microscope images of the fabricated first (d), and second nozzle (e). For illustration purposes, the channels are filled with fluorescein solution. Scale bars are 200 µm. Photograph of the fabricated PDMS device, for illustration purposes the channels are filled with red dye (f)
Prior to experiments, CFD simulations were carried out to optimize the device design for droplet generation and dispensing applications. A first concern was the generation of back flow in the device, especially through back pressure generated by the second nozzle. Hence, two different designs for the first nozzle were simulated: a first design with straight connections between the first nozzle and the corresponding inlets, and a second design with zigzag channels between the inlets and the first nozzle (Fig. 2). The simulations predict that through the introduction of the zigzag channel, the pressure drop between the device inlet and the first nozzle increases by a factor of 6 (Fig. 2a, b), making the device more robust to back pressure. While the Hagen–Poiseuille law predicts only an increase by a factor of two from the increased channel length, the sharp corners of the channel cause a higher robustness to back pressure [31, 32]. Although the addition of the zigzag channel increases the size of the device, the increase in size is smaller than would be required by just increasing the length of the channels. Next, the generation of droplets in the first nozzle was simulated, as well as how the droplet volume can be controlled by the water and oil flow rates (Fig. 2c, d). When the water flow rate was increased and the oil flow rate remained constant, the frequency of oil droplet generation increased. This caused a decrease of the diameter of the generated oil droplets from 160 µm at a water flow rate of 10 µL/min to 90 µm at 50 µL/min (Fig. 2c). In contrast, when the oil flow rate is increased and the water flow rate kept constant, the simulations predict an increase in the oil droplet size (Fig. 2d).
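The pressure-drop argument can be illustrated with a back-of-the-envelope calculation. This is a sketch under assumptions of ours, not the paper's CFD model: the shallow rectangular-channel approximation ΔP = 12μLQ/(wh³(1 − 0.63 h/w)) of the Hagen–Poiseuille law, the 70 µm × 40 µm cross-section of the device, and an arbitrary 5 mm channel length.

```python
# Sketch (assumption: shallow rectangular-channel approximation of the
# Hagen-Poiseuille law, dp = 12*mu*L*Q / (w*h^3*(1 - 0.63*h/w)), valid
# when h <= w). It illustrates why merely lengthening a straight channel
# only scales the laminar pressure drop linearly with L.

def pressure_drop(mu, L, Q, w, h):
    """Laminar pressure drop (Pa) over a rectangular channel."""
    if h > w:
        w, h = h, w  # the approximation assumes h is the smaller side
    return 12.0 * mu * L * Q / (w * h**3 * (1.0 - 0.63 * h / w))

MU_WATER = 0.001        # Pa s
W, H = 70e-6, 40e-6     # channel cross-section (m), as in the device
Q = 10e-9 / 60.0        # 10 uL/min converted to m^3/s

dp1 = pressure_drop(MU_WATER, 5e-3, Q, W, H)   # 5 mm straight channel
dp2 = pressure_drop(MU_WATER, 10e-3, Q, W, H)  # doubled length
# dp2 / dp1 == 2: channel length alone cannot reproduce the 6-fold
# increase attributed to the zigzag geometry.
```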
CFD simulation of the first nozzle of the microfluidic device. Table comparing the pressure drop from the inlet to the first nozzle for a straight and zigzag channel (a). Snapshot of pressure distribution in the first nozzle for the cases in which a zigzag channel or a straight channel is used (b). Simulation of droplet diameter for cases in which the water flow rate (c), or oil flow rate (d) is varied and the other flowrate remains constant
Droplet generation in a dual-nozzle microfluidic device
The droplet dispensing device had an integrated second nozzle to adjust the distance between the droplets generated in the first nozzle. Furthermore, this second nozzle should prevent droplet aggregation and merging. As for the first nozzle, the behavior of the second nozzle was simulated using CFD models prior to experiments (Fig. 3). Through the injection of additional water, the distance between the individual droplets could be increased due to the velocity difference before and after the second nozzle (Fig. 3a, d). However, the increased flow rate at the second nozzle causes a back pressure influencing the first nozzle. As a result, the simulation predicts a decrease in the size of droplets generated by the first nozzle (Fig. 3b). The simulation also predicts an almost linear relationship between the distance of the droplets and the flow rate of the additional water injection (Fig. 3c). By adjusting the droplet distances, the agglomeration and merging of droplets at the outlet of the microfluidic device could be prevented and the dispensing of droplets achieved in simulations. After the simulation of the droplet dispenser, experiments were conducted using the previously fabricated PDMS device (Fig. 4). For the experiments, water and oil flow rates equivalent to those in the simulations were used. As predicted by the simulations, a decrease in droplet diameter was observed when the water flow rate of the first nozzle was increased (Fig. 4c). At the same time, the frequency of droplet generation increased due to the constant oil flow rate. Furthermore, an increase of droplet diameter with increasing oil flow rate at a constant water flow rate was observed, analogous to the simulation predictions (Fig. 4d). While the CFD model captured the general trends of droplet generation well, it overestimated the droplet volumes by 25%. The cause for this can be found in the assumptions made in the construction of the model.
Firstly, the channel walls were only characterized in their wetting behavior for water and not for mineral oil. Secondly, the level set method is based on reinitialization techniques which greatly affect accuracy and efficiency. Combined with the known mass-loss problems of the method, this leads to the observed deviations [33]. The deviation could be removed by introducing an experimentally determined correction factor into the model. However, if the model is used for design purposes, this might not be required. Lastly, the performance of the second nozzle was tested and compared to simulation results. As predicted by the simulations, a linear relationship between the water injection rate at the second nozzle and the distance between the droplets was found. By increasing the distance between the droplets, aggregation and merging could be prevented (Fig. 4e). While many of the droplets aggregated and merged at the outlet of the microfluidic device when only a single nozzle was used, the dual-nozzle microfluidic device effectively prevented this problem, allowing the dispensing of droplets as they were generated in the microfluidic device (Additional file 1: Figure S1).
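The near-linear spacing trend can be rationalized with a simple frequency-conservation estimate. This is our assumption rather than the paper's CFD model: droplets leave the first nozzle at a fixed rate, so their downstream spacing is the mean outlet velocity divided by that rate, and the velocity scales with the total flow rate; all numerical values below are illustrative.

```python
# Sketch (frequency-conservation estimate, not the paper's model):
# droplets are generated at a fixed rate f at the first nozzle, so the
# centre-to-centre distance downstream is v / f, where v is the mean
# velocity in the outlet channel. Injecting extra water Q_inj at the
# second nozzle raises v in proportion to the total flow rate, which
# reproduces a linear distance-vs-injection-rate trend.

A = 70e-6 * 40e-6  # outlet cross-section (m^2), illustrative value

def droplet_spacing(q_oil, q_water, q_inj, freq):
    """Centre-to-centre droplet distance (m) after the second nozzle."""
    q_total = q_oil + q_water + q_inj   # m^3/s
    v = q_total / A                     # mean velocity in the outlet
    return v / freq

UL_MIN = 1e-9 / 60.0  # 1 uL/min expressed in m^3/s
base = droplet_spacing(10 * UL_MIN, 20 * UL_MIN, 0.0, freq=50.0)
s1 = droplet_spacing(10 * UL_MIN, 20 * UL_MIN, 10 * UL_MIN, freq=50.0)
s2 = droplet_spacing(10 * UL_MIN, 20 * UL_MIN, 20 * UL_MIN, freq=50.0)
# Equal injection increments give equal spacing increments (linearity).
```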
Simulation of the second nozzle of the microfluidic device. Snapshot of the fluid velocity distribution around the second nozzle of the microfluidic device (a). Graphs of diameter of droplets generated at second nozzle against water flowrate (b). Graph of simulated distance of droplets after second nozzle as a function of water flow rate (c). Table of flow velocities for a single and dual-nozzle microfluidic device (d)
Droplet generation and spacing using dual-nozzle microfluidic device. Microscope image of droplet generation at first nozzle (a) and control of droplet distance at second nozzle (b). Scale bars are 200 µm. Graphs of diameter of droplets generated at first nozzle against water flowrate (c), and oil flowrate (d), and bar diagram of dispensed droplet diameter at device outlet using a single nozzle and the dual-nozzle setup (e)
Here, we have shown a microfluidic device for the generation and dispensing of droplets. The microfluidic device consists of two separate nozzles. While the first nozzle is used for the generation of droplets, the distance between the individual droplets can be adjusted using the second nozzle. Using this method, the agglomeration and merging of droplets at the microfluidic device outlet can be prevented and dispensing of homogeneous droplets can be achieved. The microfluidic device could be a valuable tool for a wide range of applications. Owing to its small size, it might be particularly interesting for point-of-care applications.
A. Manz, N. Graber, H.M. Widmer, Sens. Actuators B Chem. 1, 244 (1990)
N.-T. Nguyen, M. Hejazian, C. Ooi, N. Kashaninejad, Micromachines 8, 186 (2017)
T.H. Kim, J.M. Lee, B.H. Chung, B.G. Chung, Nano Converg. 2, 12 (2015)
P.R. Makgwane, S.S. Ray, J. Nanosci. Nanotechnol. 14, 1338 (2014)
C. Sensen, H.S. Mohtashim, J. Micromech. Microeng. 27, 083001 (2017)
L. Shang, Y. Cheng, Y. Zhao, Chem. Rev. 117, 7964 (2017)
J. Wang et al., Micromachines 8, 22 (2017)
T.W. Phillips, I.G. Lignos, R.M. Maceiczyk, A.J. deMello, J.C. deMello, Lab Chip 14, 3172 (2014)
C. Martino, A.J. deMello, Interface Focus 6, 20160011 (2016)
Y. Fu, C. Li, S. Lu, W. Zhou, F. Tang, X.S. Xie, Y. Huang, Proc. Natl. Acad. Sci. USA. 112, 11923 (2015)
C.D. Ahrberg, A. Manz, B.G. Chung, Lab Chip 16, 3866 (2016)
D.R. Link, E. Grasland-Mongrain, A. Duri, F. Sarrazin, Z. Cheng, G. Cristobal, M. Marquez, D.A. Weitz, Angew. Chem. Int. Ed. 45, 2556 (2006)
S.-Y. Park, T.-H. Wu, Y. Chen, M.A. Teitell, P.-Y. Chiou, Lab Chip 11, 1010 (2011)
N.T. Nguyen, T.H. Ting, Y.F. Yap, T.N. Wong, J.C.K. Chai, W.L. Ong, J. Zhou, S.H. Tan, L. Yobas, Appl. Phys. Lett. 91, 084102 (2007)
C.T. Chen, G.B. Lee, J. Microelectromech. Syst. 15, 1492 (2006)
X. Jie, A. Daniel, J. Micromech. Microeng. 18, 065020 (2008)
C. Cramer, P. Fischer, E.J. Windhab, Chem. Eng. Sci. 59, 3045 (2004)
A.S. Utada, A. Fernandez-Nieves, H.A. Stone, D.A. Weitz, Phys. Rev. Lett. 99, 094502 (2007)
J.Y. Kim, S.I. Chang, A.J. deMello, D. O'Hare, Nano Converg. 1, 3 (2014)
S.L. Anna, H.C. Mayer, Phys. Fluids 18, 121512 (2006)
P. Guillot, A. Colin, Phys. Rev. E 72, 066301 (2005)
Y. Ding, X. Casadevall i Solvas, A. deMello, Analyst 140, 414 (2015)
M. De Menech, P. Garstecki, F. Jousse, H.A. Stone, J. Fluid Mech. 595, 141 (2008)
J. Wang, Y. Zhou, H. Qiu, H. Huang, C. Sun, J. Xi, Y. Huang, Lab Chip 9, 1831 (2009)
S. Rongrong, C. Lingqian, L. Lei, J. Micromech. Microeng. 25, 115012 (2015)
P.E. Verboket, O. Borovinskaya, N. Meyer, D. Günther, P.S. Dittrich, Anal. Chem. 86, 6012 (2014)
A. Kasukurti, C.D. Eggleton, S.A. Desai, D.I. Disharoon, D.W.M. Marr, Lab Chip 14, 4673 (2014)
M.J. Ahamed, S.I. Gubarenko, R. Ben-Mrad, P. Sullivan, J. Microelectromech. Syst. 19, 110 (2010)
S. Osher, J.A. Sethian, J. Comput. Phys. 79, 12 (1988)
C.D. Ahrberg, J.M. Lee, B.G. Chung, Sci. Rep. 8, 2438 (2018)
S.P. Sutera, R. Skalak, Ann. Rev. Fluid Mech. 25, 1 (1993)
C.M. Sewatkar, S. Dindorkar, S. Jadhao, CFD analyses and validation of multiphase flow in micro-fluidic system (Springer, Shanghai, 2007), pp. 647–649
H. Hua, J. Shin, J. Kim, J. Fluids Eng. 136, 021301 (2013)
JWC and JML fabricated and analyzed the dual-nozzle microfluidic droplet generator. THK and JHH performed the CFD simulations and analyzed the droplets. CDA and BGC discussed the experimental data and wrote the paper. All authors read and approved the final manuscript.
The authors have no data to share since all data are shown in the submitted manuscript.
This work was supported by the National Research Foundation (NRF) of Korea grant funded by the Ministry of Science and ICT (MSIT) (Grant Numbers 2016M3A7B4910652, 2016R1A6A1A03012845, 2017H1D3A1A02013996).
Ji Wook Choi and Jong Min Lee contributed equally to this work
Department of Mechanical Engineering, Sogang University, Seoul, 04107, Republic of Korea
Ji Wook Choi, Jong Min Lee, Tae Hyun Kim, Jang Ho Ha & Bong Geun Chung
Research Center, Sogang University, Seoul, 04107, Republic of Korea
Christian D. Ahrberg
Correspondence to Bong Geun Chung.
Additional file
Microscope images of droplet dispensing from the microfluidic droplet dispenser for the case of a single nozzle (A) and dual-nozzle microfluidic device (B). Scale bars are 200µm. The droplets aggregate once they leave the microfluidic device when a single nozzle is used (Bottom section of both images show the dispensed droplets), while no aggregation or merging can be observed in the case of the dual-nozzle microfluidic device.
Choi, J.W., Lee, J.M., Kim, T.H. et al. Dual-nozzle microfluidic droplet generator. Nano Convergence 5, 12 (2018). https://doi.org/10.1186/s40580-018-0145-2
Dual-nozzle
Microfluidic device
\begin{document}
\large \begin{center} {\bf\Large Twisted cohomology pairings of knots I; diagrammatic computation} \end{center} \vskip 1.5pc
\begin{center}{\Large Takefumi Nosaka}\end{center}\vskip 1pc\begin{abstract}\baselineskip=12pt \noindent We provide a diagrammatic method for computing the bilinear form defined as the pairing of the (relative) cup products, with arbitrary local coefficients, against any integral homology 2-class of any link in the 3-sphere. As a corollary, we construct bilinear forms on the twisted Alexander modules of links.
\end{abstract}
\begin{center} \normalsize \baselineskip=11pt {\bf Keywords} \\ \ \ \ Cup product, Bilinear form, knot, twisted Alexander polynomial,
group homology, quandle \ \ \ \end{center}
\baselineskip=13pt
\tableofcontents
\large \baselineskip=16pt
\section{Introduction}
The cup products and pairings of connected compact $C^{\infty}$-manifolds $Y$ have a long history and encode powerful topological information. As is classically known from algebraic surgery theory, if $Y$ is simply connected and closed with $ \mathrm{dim}(Y)\geq 6$, then the homeomorphism type of $Y$ is almost characterized by cup products and some characteristic classes. Furthermore, there are also some studies of non-simply connected cases, although such cases involve many obstructions and difficulties, as in the $s$-bordism theorem, $\mathbb{L}$-theory and Blanchfield duality in high dimensional topology (see \cite{Bla,CS,Mil,Hil}).
Meanwhile, in low dimensional topology, it is important to analyse quantitatively the fundamental group $\pi_1(Y)$ (cf. the geometrization conjecture). That being said, as in the interaction in \cite{CS,COT}, it is sensible to ask how applicable the study of cup products in high dimensional topology is to that in low dimensions.
This paper focuses on twisted pairings arising from any group homomorphism $\pi_1(Y) \rightarrow G $, which are constructed in simple and general situations as follows: Take a relative homology $n$-class $\mu \in H_{n} ( Y ,\partial Y ;{\Bbb Z} )$, and a right $G$-module $M$ and a $G$-invariant multilinear function $ \psi : M^n \rightarrow A $ for some ring $A$. Then, we can easily define the composite map \begin{equation}\label{kiso} H^1( Y,\partial Y ; M )^{ \otimes n} \xrightarrow{\ \ \smile \ \ } H^{n}( Y ,\partial Y ; M^{\otimes n } ) \xrightarrow{ \ \ \langle \bullet , \ \mu \rangle \ \ } M^{\otimes n } \xrightarrow{\ \ \langle \psi, \ \bullet \rangle \ \ }A . \end{equation} Here $M$ is regarded as the local coefficient of $Y$ via $f$, and the first map $\smile$ is the cup product, and the second (resp. third) is defined by the pairing with $ \mu$ (resp. $\psi$).
However, in general, this multilinear form suffers from a critical difficulty: the relative cup product in $H^*( Y, \partial Y ; M ) $ is hard to compute directly from the definitions. Actually, even if $Y$ is a surface with orientation 2-class $\mu $, the bilinear 2-form \eqref{kiso} is complicated and includes an important example: precisely, if $G$ is a semisimple Lie group with Killing form $\psi $ and Lie algebra $\mathfrak{g}=M$, the 2-form \eqref{kiso} yields a symplectic structure on the flat moduli space $\mathop{\mathrm{Hom}}\nolimits (\pi_1(Y), G)/\! \!/G$ away from singular points, which is universally summarized as the Goldman Lie algebra \cite{G}.
Furthermore, concerning 3-manifolds $Y$, similar difficulties appear in ``the twisted Alexander modules $H_1(Y ; M)$" \cite{Wada,Lin}; Precisely, whereas the study has provided some topological applications (see \cite{FV,Hil}), few papers have addressed linear forms on $H_1(Y ; M)$. In addition,
in analysing some pairings of 3-dimensional links, boundary conditions cause elaborate difficulties in relative (co)homology; see, e.g., \cite{Bla,BE,COT}, \cite[Chapters 4--8]{Hil}.
\
In the series starting from this paper, we address 3-dimensional case where $Y_L$ is the 3-manifold which is obtained from the 3-sphere by removing an open tubular neighborhood of a link $L$, i.e., $Y_L =S^3 \setminus \nu L.$ Notice the relative homology groups $$ H_3 ( Y_L ,\partial Y_L;{\Bbb Z} ) \cong {\Bbb Z}, \ \ \ \ \ \ H_2 (Y_L ,\partial Y_L ;{\Bbb Z} ) \cong {\Bbb Z}^{\# \pi_0( \partial Y_L)},$$ which are generated by the fundamental 3-class $[Y_L, \partial Y_L]$ and by some Seifert surfaces in $S^3 \setminus \nu L$, respectively.
We should emphasize that it is not easy to directly describe the 3-class and Seifert surfaces. This point often appears as a difficulty in many studies (see \cite{COT,Hil,Mil,T}), e.g., for the $A$-polynomial, the Milnor link-invariants, and the Chern-Simons invariant \cite{Zic}.
Nevertheless, this paper focuses on the bilinear case $ n=2$, and we succeed in giving a method for computing the twisted pairings \eqref{kiso} with respect to every representation $f: \pi_1(S^3 \setminus \nu L) \rightarrow G$ of every link group (Theorem \ref{mainthm2}). Namely, the twisted pairing \eqref{kiso} turns out to be computable from a link diagram alone, without describing any Seifert surface. Actually, we can calculate the bilinear forms with respect to some representations (e.g., see Sections \ref{Lexa1}--\ref{rei.8} for the trefoil and figure eight knots), and observe some interesting phenomena. Moreover, the subsequent paper \cite{Nos5} will show that the setting \eqref{kiso} recovers three classical pairings: the Blanchfield pairing, twisted cup products of infinite cyclic covers, and the Casson-Gordon local signature; hence, the main theorem enables us to compute these classical pairings. The third paper \cite{Nos6} will deal with the trilinear case of \eqref{kiso}, i.e., $n=3$. Furthermore, the computation for the $(m, m)$-torus link in Propositions \ref{aa11c}--\ref{aa1133c} will be used in the study of 4-dimensional Lefschetz fibrations; see \cite{Nos4} for the details. In summary, our viewpoint sheds some concrete light on the relative cup product, which is not normally considered, with applications including some classical topology.
Finally, we roughly explain the relation to relative cohomology from a diagrammatic viewpoint.
The key is the diagrammatic link-invariant obtained from ``quandle cocycles" \cite{CJKLS,CKS,IIJO}, where a quandle is a certain algebraic system.
In fact, our formulation of the bilinear forms generalizes the invariants associated with a certain class of quandles.
Theorem \ref{mainthm2} implies that these link-invariants exactly coincide with the bilinear maps \eqref{kiso}. In particular, our result gives a topological interpretation of some quandle cocycle invariants, and stresses the topological utility of quandle theory.
Moreover, we emphasize that this discussion in the link case (under a weak condition on $f$) explicitly gives a homomorphism $\mathcal{L}$ from the homology $ H_1 (Y_L ;M) $ to the cohomology $ H^1 (Y_L , \partial Y_L ;M) $. As in \cite{FV,Hil,Lin}, the former $ H_1 (Y_L ;M) $ is defined from Fox derivatives, and does not obviously carry a bilinear form. However, we show a commutative diagram which relates the Fox derivative to the quandle condition (Lemma \ref{busan}), and obtain the map $\mathcal{L}$. The condition on $f$ is compatible with linear representations $ \pi_1(S^3 \setminus L) \rightarrow GL_n(\widetilde{R})$ of link groups, where $\widetilde{R}$ is a Noetherian UFD and the representation is twisted by a homomorphism factoring through the abelianization of $ \pi_1 (Y_L)$; the associated $ H_1 (Y_L ;M)$ is called the twisted Alexander module, and has been studied in \cite{FV,Hil,Wada,Lin}. In conclusion, by composing \eqref{kiso} with $\mathcal{L}$, we succeed in introducing bilinear forms on the twisted Alexander modules $ H_1 (Y ;M) $ of a link.
\
This paper is organized as follows. Section 2 formulates the twisted pairing by means of the quandle cocycle invariants, and states the main theorems. Section 3 describes some computation. In application, Section 4 introduces bilinear forms on twisted Alexander dual modules.
Section 5 proves the theorems, after reviewing the relative group cohomologies.
\
\noindent
{\bf Notation.} Every link $L$ is smoothly embedded in the 3-sphere $S^3$ with orientation. We write $Y_L$ for the 3-manifold obtained from $S^3$ by removing an open neighborhood of $L$. Further, we denote by $ \pi_L $ the fundamental group $\pi_1(Y_L)$, and by $\# L$ the number of link components, i.e., $\# L =|\pi_0( \partial Y_L)|.$ Furthermore, we fix a group homomorphism $f:\pi_L \rightarrow G$, and by $A$ we mean an abelian group.
\section{Results; diagrammatic formulations of the bilinear forms}\label{ss1} Our purpose in this section is to state the main results in \S \ref{sss3}. For this purpose, \S \ref{sss2} starts by reviewing quandles, and formulates some link-invariants of bilinear forms.
\subsection{Preliminary; formulations of the bilinear maps}\label{sss2} We will need some knowledge of quandles before proceeding. A {\it quandle} \cite{Joy} is a set, $X$, with a binary operation ${ \lhd} : X \times X \rightarrow X$ such that \begin{enumerate}[(I)] \item The identity $a{ \lhd} a=a $ holds for any $a \in X. $ \item The map $ (\bullet { \lhd} a ): \ X \rightarrow X$ defined by $x \mapsto x { \lhd} a $ is bijective for any $a \in X$. \item The identity $(a{ \lhd} b){ \lhd} c=(a{ \lhd} c){ \lhd} (b{ \lhd} c)$ holds for any $a,b,c \in X. $ \end{enumerate} \noindent
For example, every group $ G$ is made into a quandle with the operation $g \lhd h= h^{-1}gh \in G$.
Moreover, let us explain a broad class of quandles on which this paper focuses.
Take a right $G $-module $M$, that is, a right module of the group ring ${\Bbb Z}[G ]$.
Let $X= M \times G$, and define a quandle operation on $X$ by \begin{equation}\label{kihon} \lhd: (M \times G) \times (M \times G)\longrightarrow M \times G, \ \ \ \ \ \ (a,g,b,h) \longmapsto (\ (a-b)\cdot h +b, \ h^{-1}gh \ ). \end{equation} This quandle was first introduced in \cite[Lemma 2.2]{IIJO}.
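As a quick sanity check, the quandle axioms (I)--(III) for the operation \eqref{kihon} can be verified by brute force in a small instance. The following Python sketch is illustrative only (our choice, not from the paper): $M = \mathbb{Z}/5$ with $G = (\mathbb{Z}/5)^{\times}$ acting by multiplication, so that conjugation in $G$ is trivial.

```python
# Sketch: brute-force check of the quandle axioms (I)-(III) for the
# operation (a,g) <| (b,h) = ((a-b).h + b, h^{-1}gh) on X = M x G, in the
# toy case M = Z/5 and G = (Z/5)^* acting by multiplication mod 5.

P = 5
M = range(P)
G = [g for g in range(1, P)]  # the unit group (Z/5)^*, which is abelian

def op(x, y):
    (a, g), (b, h) = x, y
    # G is abelian here, so the conjugate h^{-1} g h is just g.
    return (((a - b) * h + b) % P, g)

X = [(a, g) for a in M for g in G]

assert all(op(x, x) == x for x in X)            # (I) idempotency
for y in X:                                     # (II) (. <| y) is a bijection
    assert len({op(x, y) for x in X}) == len(X)
for x in X:                                     # (III) self-distributivity
    for y in X:
        for z in X:
            assert op(op(x, y), z) == op(op(x, z), op(y, z))
```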
Next, let us recall $X$-colorings, where $X$ is a quandle. Let $D$ be an oriented link diagram of a link $L\subset S^3.$ An $X$-{\it coloring} of $D$ is a map $\mathcal{C}: \{ \mbox{arcs of $D$} \} \to X$ such that $\mathcal{C}(\alpha_{\tau}) \lhd \mathcal{C}(\beta_{\tau}) = \mathcal{C}(\gamma_{\tau})$ at each crossing $\tau$ of $D$, as illustrated in the figure below. Let $\mathrm{Col}_X(D) $ denote the set of all $X$-colorings of $D$. For example, for a group $X=G$ with the conjugacy operation, the Wirtinger presentation implies that the set $ \mathrm{Col}_X(D) $ is bijective to the set of group homomorphisms $\pi_L \rightarrow G$. Namely \begin{equation}\label{kihon22} \mathrm{Col}_G(D) \longleftrightarrow \mathrm{Hom}_{\rm gr}(\pi_L , G ). \end{equation} \vskip -0.17pc
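To illustrate the notion of colorings, the following Python sketch enumerates the $X$-colorings of the standard trefoil diagram, whose three arcs $a, b, c$ satisfy the crossing relations $a \lhd b = c$, $b \lhd c = a$, $c \lhd a = b$. The quandle $X = \mathbb{Z}/5 \times (\mathbb{Z}/5)^{\times}$ (with the unit group acting by multiplication) and the choice of diagram are illustrative assumptions, not data from the paper; since $t^2 - t + 1$ has no root modulo 5, only the trivial diagonal colorings occur in this example.

```python
# Sketch: X-colorings of the standard trefoil diagram for the quandle
# X = M x G of \eqref{kihon}, with M = Z/5 and G = (Z/5)^* acting by
# multiplication (an illustrative toy example).

P = 5

def op(x, y):
    (a, g), (b, h) = x, y
    return (((a - b) * h + b) % P, g)  # G abelian, so h^{-1}gh = g

X = [(a, g) for a in range(P) for g in range(1, P)]

# Arcs a, b, c with crossing relations a<|b = c, b<|c = a, c<|a = b.
colorings = [(a, b, c) for a in X for b in X for c in X
             if op(a, b) == c and op(b, c) == a and op(c, a) == b]

# Every constant assignment (x, x, x) is a trivial coloring; here the
# 20 trivial colorings are the only ones, since t^2 - t + 1 (the
# Alexander polynomial of the trefoil) has no root mod 5.
assert all((x, x, x) in colorings for x in X)
```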
We will explain the subset \eqref{kihon2294} and a decomposition \eqref{skew24592} below. By assumption, via \eqref{kihon22}, we can regard the homomorphism $f$ as a $G$-coloring of a link-diagram $D$.
Take the quandle $X= M \times G$ in \eqref{kihon} and the projection $p_G: X \rightarrow G$. Then, we define the set of lifts of $f$ as follows:
\begin{equation}\label{kihon2294} \mathrm{Col}_X(D_{f}):= \{ \ \mathcal{C} \in \mathrm{Col}_X(D) \ | \ p_G \circ \mathcal{C} =f \ \}. \end{equation}
It is worth noticing that the set $\mathrm{Col}_X(D) $ is regarded as a subset of the product $X^{\# \{ \textrm{arcs of }D \}}$. Hence, the subset $\mathrm{Col}_X(D_{f})$ is made into an abelian subgroup of $M^{\# \{ \textrm{arcs of }D \}}$ under the linear operation \eqref{kihon}. Further, we can easily see that the diagonal subset $M_{\rm diag} \subset M^{\# \{ \textrm{arcs of }D \}}$ is contained in $\mathrm{Col}_X(D_{f}) $ as a direct summand. Denoting the complementary summand by $ \mathrm{Col}^{\rm red}_X(D_{f}) $, we have a direct decomposition \begin{equation}\label{skew24592}\mathrm{Col}_X(D_{f}) \cong \mathrm{Col}^{\rm red}_X(D_{f}) \oplus M .\end{equation}
Furthermore, one introduces a bilinear form on the ${\Bbb Z}$-module $ \mathrm{Col}_X(D_{f})$ as follows (Definition \ref{deals3}). Taking another $G$-module $M'$, let $\psi : M \times M' \rightarrow A$ be a bilinear map over ${\Bbb Z}$. Moreover, we assume that this $\psi$ is $G$-invariant. Namely,
$$ \psi(x \cdot g, y \cdot g)= \psi(x , y ) \ \ \ \ \ \ \ \mathrm{for \ any} \ x \in M, \ y\in M' \mathrm{ \ and \ } g \in G.$$
Considering the associated quandle $X'= M' \times G$, define the map $ \varphi_{\psi} : X \times X' \rightarrow A$ by setting \begin{equation}\label{skew245} \varphi_{\psi} \bigl( (y_1,g_1),\ (y_2,g_2) \bigr) = \psi \bigl( y_1 ,\ y_2 \cdot(1-g_2^{-1} ) \bigr), \end{equation} which was first introduced in \cite[Corollary 4.7]{Nos2}. Furthermore, recall from (\ref{kihon2294}) the set $ \mathrm{Col}_{Z}^{\rm red}(D_{f})$ associated with $Z=X$ or $Z=X'$. Inspired by this, we define
\begin{defn}\label{deals3} Let $X$ and $X'$ be as above, and let $D = K_1 \cup \cdots \cup K_{\# L}$ be a link diagram, where $K_1, \dots, K_{\# L}$ are the link components. For $1 \leq \ell \leq \# L$, we define a map $\mathcal{Q}_{\psi,\ell }$ by $$ \mathrm{Col}_{X}^{\rm red}(D_{f}) \times \mathrm{Col}_{X'}^{\rm red} ( D_{f}) \longrightarrow A; \ \ \ ( \mathcal{C}, \mathcal{C}' ) \longmapsto \sum_{\tau} \epsilon_{\tau} \psi \bigl( x_{\tau}-y_{\tau} , \ y_{\tau} ' \cdot (1 - h_{\tau}^{-1}) \bigr), $$ where $\tau $ runs over all the crossings whose under-arc is from the component $K_\ell$, and $\epsilon_{\tau} \in \{ \pm 1\}$ is the sign of $\tau$ according to the figure below.
Furthermore, the symbols $(x_{\tau}^{\bullet}, y_{\tau})\in X $ and $ (y_{\tau}^{\bullet} , \ h_{\tau})\in X' $ are the colors around the crossing $\tau.$ \end{defn}
\subsection{Statements of the main theorems}\label{sss3} As mentioned in the introduction, we will show (Theorem \ref{mainthm2}) that the twisted cohomology pairing \eqref{kiso} is described by the bilinear maps above, and state some corollaries. The proofs of the theorems appear in \S \ref{Lbo33222}.
Next, in view of this theorem, we will reformulate the bilinear form $ \mathcal{Q}_{\psi} $ defined in \S \ref{sss2}. As mentioned in the introduction, recall the isomorphism $ H_{2}(Y_L ,\partial Y_L;{\Bbb Z} ) \cong {\Bbb Z}^{\# L}$ with a basis $ \mu_1, \dots, \mu_{ \# L}$ corresponding to the longitudes (or Seifert surfaces) in $S^3 \setminus L.$
\begin{thm}\label{mainthm2}Let $Y_L$ be a link complement in $S^3$ as in \S 1.
Regard the $G$-modules $M$ and $M'$ as local systems on $Y_L$ via $f: \pi_1(Y_L) \rightarrow G$.
Then, there are isomorphisms
\begin{equation}\label{g21gg33} \mathrm{Col}_{X } (D_{f}) \cong H^1(Y_L , \ \partial Y_L ;M ) \oplus M, \ \ \ \ \ \mathrm{Col}_{X }^{\rm red} (D_{f}) \cong H^1(Y_L , \ \partial Y_L ;M ) \end{equation} such that
the bilinear form $ \mathcal{Q}_{\psi, \ell}$ on $ \mathrm{Col}_{X^{(')}}^{\rm red} (D_{f})$ is equal to the following composite (cf. \eqref{kiso}): $$ H^1( Y_L ,\partial Y_L ; M )\otimes H^1( Y_L ,\partial Y_L ; M' ) \xrightarrow{\ \ \smile \ \ } H^2( Y_L ,\partial Y_L ; M\otimes M' ) \xrightarrow{ \ \langle \bullet , \mu_\ell \rangle \ } M \otimes M' \xrightarrow{ \ \ \langle \psi, \bullet \rangle \ \ }A . $$
\end{thm} As a concluding remark, we should emphasize again that the cohomology pairing of links can be computed from a link diagram alone, without describing longitudes (or Seifert surfaces) in $S^3 \setminus L$. Moreover, as seen in Definition \ref{deals3}, the pairing can be computed in a straightforward way (see \S \ref{Lb162r3} for the examples).
In addition, the bilinear form in Definition \ref{deals3} generalizes the quandle cocycle invariants \cite{CJKLS,IIJO} with respect to quandles of the form $X= X'= M \times G $.
The link invariants \cite{CJKLS,CKS}, constructed from
a quandle $X$ and a map $\Psi: X^2 \rightarrow A $ which satisfies ``the quandle cocycle condition", were defined to be a certain map $\mathcal{I}_{\Psi}: \mathrm{Col}_X(D )\rightarrow A $.
Then, we can see that the map $ \varphi_{\psi} $ in \eqref{skew245} is a quandle 2-cocycle, and verify the equality $\mathcal{I}_{\varphi_{\psi} } = \mathcal{Q}_{\psi } \circ \bigtriangleup $ by construction, where $\bigtriangleup : \mathrm{Col}_X(D )\rightarrow \mathrm{Col}_X(D )^2$ is the diagonal map. To sum up, as a result of Theorem \ref{mainthm2}, we have succeeded in giving a complete topological interpretation of the quandle cocycle invariants.
\
Next, we mention two properties which are used in the papers \cite{Nos5,Nos4}. We then discuss the non-degeneracy or duality of $ \mathcal{Q}_{\psi, \ell} $. First, we should mention the connecting map $\delta^*: H^{0} ( \partial Y_L ; M ) \rightarrow H^1( Y_L ,\partial Y_L; M)$. Actually, if $M=M'$ and if $\mathbf{x} \in \mathrm{Im}(\delta^*) $, then the two vanishings $ \mathcal{Q}_{\psi, \ell}( \mathbf{x},\mathbf{y})= \mathcal{Q}_{\psi, \ell}( \mathbf{y},\mathbf{x})=0$ hold for any $ \mathbf{y} \in \mathrm{Col}_{X } (D_{f}) $ (cf. Theorem \ref{mainthm14} later).
\begin{cor}[See \S \ref{yy43} for the proof.]\label{mainthm1}Let $Y_L$ be a link complement in $S^3$ as in \S 1.
For each link component $\ell$, fix a meridian $\mathfrak{m}_{\ell} \in \pi_1(Y_L) $. If the maps $ \mathrm{id}_M-f( \mathfrak{m }_{\ell } ): M \rightarrow M $ are isomorphisms for any $\ell \leq \#L$, then the inclusion $(Y_L, \emptyset) \rightarrow (Y_L, \partial Y_L )$ induces the isomorphisms
$ H^1(Y_L ,\partial Y_L ;M) \cong H^1(Y_L ;M)$ and $ \mathrm{Im}(\delta^*) \cong 0$.
In particular,
the decomposition in \eqref{g21gg33} is written as $\mathrm{Col}_{X }^{\rm red} (D_f ) \cong H^1(Y_L ;M ) .$ \end{cor}
On the other hand, the invariance with respect to conjugacy is immediately shown: \begin{cor}\label{lem373d} Let $\psi$ be a $G$-invariant bilinear map as above, and let $f$ and $f' $ be two homomorphisms $\pi_L \rightarrow G$. If there is $g \in G$ such that $f(\mathfrak{m})= g^{-1}f'(\mathfrak{m})g \in G$ for any meridian $\mathfrak{m} \in \pi_L$,
then the resulting bilinear maps $\mathcal{Q}_{\psi ,\ell }$ and $\mathcal{Q}'_{\psi ,\ell } $ are equivalent.
\end{cor}
Finally, we give a special corollary of Theorem \ref{mainthm2},
when $G $ is the free abelian group ${\Bbb Z}^{\# L}$ and $f: \pi_L \rightarrow {\Bbb Z}^{\# L}$ is the canonical abelianization. Writing $t_1, \dots, t_{\# L }$ for generators of $ {\Bbb Z}^{\# L} $,
we can consider the $G$-module $M$ to be a module over the Laurent polynomial ring $ {\Bbb Z}[t_1^{\pm 1}, \dots, t_{\# L }^{\pm 1}] $.
Then, Theorem \ref{mainthm2} immediately yields a topological interpretation of the set of colorings. \begin{cor}\label{le359} Let $L$ be a link, and let $f$ be the abelianization $\mathrm{Ab} : \pi_L \rightarrow G= {\Bbb Z}^{\# L} $. Take a $ {\Bbb Z}[t_1^{\pm 1}, \dots, t_{\# L }^{\pm 1}] $-module $M$. Then, we have a $ {\Bbb Z}[t_1^{\pm 1}, \dots, t_{\# L }^{\pm 1}] $-module isomorphism $$ \mathrm{Col}_X(D_{f}) \cong H^1( Y_L, \partial Y_L ;M ) \oplus M. $$ \end{cor} \begin{rem} Let us compare Theorem \ref{mainthm2} with previous papers, and mention some advances. Concerning the set $\mathrm{Col}_X(D_{f})$, many papers have dealt only with the case $G={\Bbb Z}$ (which is commonly called ``the Alexander quandle $X$"; see \cite{CJKLS}). However, as seen in \cite[\S 12]{Joy} or \cite{CDP} and the references therein, while some papers discussed a connection to Alexander polynomials in the knot case, few papers analysed a relation between ${\rm Col}_X(D)$ and Alexander polynomials (or modules) when $\# L >1$. Corollary \ref{le359} shows conclusively that the set $ {\rm Col}_X(D)$ is interpreted not only in terms of usual (group) homology, but in terms of relative homology.
\end{rem}
\section{Bilinear forms on the twisted Alexander modules of links}\label{LFsu32b12r2223}
The purpose of this section is to define bilinear forms on the twisted Alexander (dual) modules (Definition \ref{twisted}). Following most papers on the twisted polynomial (see \cite{FV,Wada,Lin}), we denote by $R$ a commutative
Noetherian unique factorization domain (henceforth UFD), equipped with an involution $\bar{}: R \rightarrow R$.
\subsection{Preliminaries}\label{ss131} For this purpose, we start by briefly reviewing the twisted Alexander module associated with two group homomorphisms $$f_{\rm pre} : \pi_L \rightarrow GL_n(R), \ \ \ \mathrm{and } \ \ \ \rho :\pi_L \rightarrow {\Bbb Z}^m$$ for some $m \in \mathbb{N}$.
Identifying the group ring $R [{\Bbb Z}^{m}]$ of ${\Bbb Z}^{m} $ with the Laurent polynomial ring $R [t_1^{\pm 1}, \dots, t_{m}^{\pm 1}]$, the map $\rho$ extends to a representation $ \pi_L \rightarrow \mathrm{End}_R( R [t_1^{\pm 1}, \dots, t_{m}^{\pm 1}]) $.
Hence, tensoring this $ \rho$ with $ f_{\rm pre}$, we have a representation $$ \rho\otimes f_{\rm pre} : \pi_L \longrightarrow GL_n ( R [t_1^{\pm 1}, \dots, t_{m }^{\pm 1}]) .$$
Thus, the associated first homology $H_1(Y_L ;R[t_1^{\pm 1}, \dots, t_{m }^{\pm 1}]^n ) $ is commonly called {\it the twisted Alexander module} associated with $f_{\rm pre} $; see the survey \cite{FV} on twisted Alexander polynomials.
This Alexander module can be described via the Fox derivative, as follows.
Take a diagram $D$ with $\alpha_D = \beta_D$, where
$\alpha_D $ (resp. $\beta_D$) is the number of arcs (resp. crossings).
Let us denote this $\alpha_D$ by $\alpha$ for short, and consider the Wirtinger presentation
$\langle x_1 , \ldots , x_{\alpha} | r_1 , \ldots , r_{\alpha} \rangle $ of $\pi_L$. Let $F_m$ be the free group of rank $m $.
Here, recall that there uniquely exists, for each $x_j $, a Fox derivative $ \frac{\partial \ \ }{\partial x_j} : F_\alpha \rightarrow R [{\Bbb Z}^{m}][F_\alpha ]$ with the following two properties: $$ \frac{\partial x_i}{\partial x_j} = \delta_{i,j}, \ \ \ \ \ \ \frac{\partial (uv)}{\partial x_j} = \frac{\partial u }{\partial x_j}v +\frac{\partial v}{\partial x_j},$$ for all $u, v \in F_\alpha$.
Then, as is known (see, e.g., the exercises in \cite[\S II.5]{Bro}), we can describe a partial free resolution of the trivial module $R [{\Bbb Z}^{m}]$ over $R [{\Bbb Z}^{m}][\pi_L ]$ as \begin{equation}\label{skddew22} (R [{\Bbb Z}^{m}][\pi_L ])^{\alpha }\xrightarrow{ \ \partial_2 \ } (R [{\Bbb Z}^{m}] [\pi_L])^{\alpha } \xrightarrow{ \ \partial_1 \ } R [{\Bbb Z}^{m}] [\pi_L] \stackrel{\epsilon}{\longrightarrow} R [{\Bbb Z}^{m}] \longrightarrow 0 \ \ \ \ \ \ \ ( \mathrm{exact}) \end{equation} such that the matrix of $ \partial_2$ is the $(\alpha \times \alpha)$-Jacobian matrix $ ( [\frac{\partial r_i}{\partial x_j}] )$,
and the latter $ \partial_1 $ is defined by $ \partial_1 (\gamma)=1 - \gamma $.
Accordingly, after tensoring with an $R [{\Bbb Z}^{m}]$-module $M$,
the common quotient $\mathop{\mathrm{Ker}}\nolimits (\mathrm{id}_M \otimes \partial_1)/\mathop{\mathrm{Im}}\nolimits (\mathrm{id}_M \otimes \partial_2 )$ is isomorphic to the first group homology $H_1 (Y_L ;M)$ with local coefficients.
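As a routine illustration of the above convention, take the trefoil relator $r = x_1 x_2 x_1 (x_2 x_1 x_2 )^{-1}$, which reappears in \S \ref{Lexa1}. The product rule yields $\frac{\partial (v^{-1})}{\partial x_j} = - \frac{\partial v }{\partial x_j}\, v^{-1}$, and hence $$ \frac{\partial r}{\partial x_1} = \bigl( 1 - x_2 + x_2 x_1 \bigr)\, (x_2 x_1 x_2 )^{-1} ; $$ under the abelianization $x_i \mapsto t$, this becomes $(1 - t + t^2)\, t^{-3}$, that is, the Alexander polynomial of the trefoil knot up to units.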
Next, we will set up a localized ring \eqref{aaa} below, and review the twisted Alexander polynomial \cite{Wada,Lin}.
For this purpose, assume the non-vanishing condition \begin{equation}\label{aaaaa} \mathrm{det}(\mathrm{id} - \rho\otimes f_{\rm pre} (\mathfrak{m} )) \neq 0 \in R[{\Bbb Z}^m]\end{equation} for every meridian $\mathfrak{m} \in \pi_L$: a typical example is the case $\rho( \mathfrak{m}) \neq 0 $ in ${\Bbb Z}^m$ for every meridian $\mathfrak{m} $, such as the abelianization $\pi_L \rightarrow {\Bbb Z}^{\# L}$.
Then, the assumption enables us to define the ring $A_{(\partial f)}$ obtained by inverting the determinants.
Precisely, we set
\begin{equation}\label{aaa}A_{(\partial f)} :=R [t_1^{\pm 1}, \dots, t_{m}^{\pm 1}, \ \prod_{ \ell \leq \# L} \mathrm{det}(\mathrm{id}- \rho\otimes f_{\rm pre} (\mathfrak{m}_{\ell} ))^{-1} (\mathrm{id}- \overline{\rho\otimes f_{\rm pre} (\mathfrak{m}_{\ell} )} )^{-1} ] . \end{equation} We remark that $A_{(\partial f)}$ is also a Noetherian UFD, and that $A_{(\partial f)}$ has the involution $ \bar{ \ } :A_{(\partial f)} \rightarrow A_{(\partial f)}$ defined by $\bar{ t_i} =t_i^{-1}$. This localization \eqref{aaa} can be interpreted as a generalization of the ``localized Blanchfield pairing" (see \cite[\S 2.6]{Hil}).
Then, {\it the twisted Alexander polynomial}, $\Delta_{f}$, is defined from the $n(\alpha -1) \times n(\alpha -1)$ Jacobian of the Fox derivatives \eqref{skddew22}, subject to \eqref{aaaaa}: $$\Delta_{f}:= \mathrm{det} \Bigl(([\frac{\partial r_i}{\partial x_j}] )\otimes \mathrm{id}_{A_{(\partial f)}^n })_{1 \leq i,j \leq \alpha -1}\Bigr) / \mathrm{det}(\mathrm{id} - \rho\otimes f_{\rm pre} (x_{\alpha} )) \in A_{(\partial f)}. $$ It is shown in \cite{Wada} that this value is independent, up to units, of the choice of the arc $x_{\alpha}$.
In addition, we mention a close relation to the colorings set.
Recall that the subset $\mathrm{Col}_X(D_{f})$ is a submodule of the product $M^{\alpha_D}$ according to the linear operation \eqref{kihon}. More precisely, $\mathrm{Col}_X(D_{f})$ can be regarded as the kernel of the homomorphism \begin{equation}\label{fukuoka} \Gamma_{X,D} : M^{\alpha_D} \longrightarrow M^{\beta_D} \end{equation} obtained from \eqref{kihon}. Furthermore, let us examine the cokernel $\mathop{\mathrm{Coker}}\nolimits (\Gamma_{ X,D }) $:
\begin{lem}\label{busan} For any link $L$, choose a diagram $D$ with $\alpha_D =\beta_D $. Consider the quandle $ \overline{X} $ of the form $ M \times G$, where $M$ is the free module $ (A_{(\partial f)})^n $ and $G$ is $GL_n(A_{(\partial f)} ) $.
Then, the cokernel has the following isomorphism $$\mathop{\mathrm{Coker}}\nolimits (\Gamma_{ \overline{X},D }) \cong H_1(Y_L ; (A_{(\partial f)} )^n) \oplus( A_{(\partial f)}) ^n.$$ Here the second summand $(A_{(\partial f)})^n$ corresponds to the diagonal subset $A_{\rm diag}$ of $ (A_{(\partial f)})^{n \beta_D } $.
\end{lem} \begin{proof} From the definition of the ring $A_{(\partial f)} $ in \eqref{aaa}, every $\mathrm{id}- \rho\otimes f_{\rm pre} (\gamma_i) $ is invertible on $M$; hence the map $\mathrm{id}_M \otimes \partial_1$ is a (diagonally) splitting surjection, and consequently admits a decomposition $$\mathrm{Coker} ( \mathrm{id}_M \otimes \partial_2 : M^{\alpha_D }\longrightarrow M^{\beta_D } ) \cong H_1 (\pi_L ;M) \oplus M .$$
Here, regarding a crossing $\tau$ illustrated as in Figure \ref{koutenpn}, let us
set up the bijection $\kappa_{\tau } : M \rightarrow M$ which takes $ m$ to $ m -m \cdot \rho\otimes f_{\rm pre} (\alpha_{\tau })$, and $\kappa_{\tau }' : M\rightarrow M$ which sends $ m$ to $ m -m \cdot \rho\otimes f_{\rm pre}(\gamma_{\tau }) $. Then, by the direct products with respect to crossings $\tau $, we have the diagram $${\normalsize \xymatrix{ 0 \ar[r] & \mathrm{Col}_{\overline{X}}(D_f) \ar[r] &M^{\alpha_D } \ar[rr]^{\Gamma_{X,D }}\ar[d]_{\prod_{\tau }\kappa_{\tau } } & & M^{\alpha_D } \ar[r] \ar[d]^{\prod_{\tau } \kappa_{\tau }'}& \mathrm{Coker}(\Gamma_{X,D} ) \ar[r] & 0 & (\mathrm{exact}) \\ & & M^{\alpha_D } \ar[rr]^{ \mathrm{id}_M \otimes \partial_2 } & & M^{\alpha_D } \ar[r] & H_1 (Y_L ;M) \oplus M \ar[r] & 0 & (\mathrm{exact}). }} $$
A careful examination of the definitions of $\kappa_{\tau }^{(')}$, $ \partial_2 $, and $ \Gamma_{\overline{X},D} $ shows that the diagram is commutative. Hence, the vertical maps give the desired decomposition $ \mathrm{Coker}(\Gamma_{\overline{X},D} ) \cong H_1 (Y_L ;M) \oplus M $. \end{proof}
Finally, we briefly set up an extension of a bilinear form.
For this, suppose
a bilinear function $ \psi_{\rm pre} : R^n \times R^n \rightarrow R$ satisfying the $f_{\rm pre}$-invariance $$ \psi_{\rm pre} (x,y) = \psi_{\rm pre} ( x \cdot f_{\rm pre}( \mathfrak{m}),\ y \cdot f_{\rm pre}( \mathfrak{m}) ) $$ for any $x,y \in R^n$, and any meridian $ \mathfrak{m} \in \pi_L $.
For an ideal $ \mathcal{I} \subset A_{(\partial f)}$, we let $ \overline{\mathcal{I}} \subset A_{(\partial f)}$ be the ideal consisting of $x \in A_{(\partial f)} $ with $\bar{x} \in \mathcal{I}.$
Then, we can define the bilinear function \[\psi : ( R^n \otimes_R A_{(\partial f)}/ \overline{\mathcal{I}} ) \times (R^n \otimes_R A_{(\partial f)}/ \mathcal{I}) \longrightarrow A_{(\partial f)}/ \mathcal{I}\] by setting \begin{equation}\label{aaaaaa} \psi ( x \otimes a_1 , y \otimes a_2) = \psi_{\rm pre} (x,y) \otimes \overline{a_1} a_2,\end{equation} for $x,y \in R^n $ and $a_1, \ a_2 \in A_{(\partial f)} $. This $\psi $ is $\pi_L $-invariant and sesquilinear over $R[{\Bbb Z}^{ m }] $.
\subsection{Definition}\label{ss33131}
Inspired by Lemma \ref{busan}, we will introduce a map from the twisted Alexander module $ H_1 (Y_L ;M)$ to a certain relative cohomology. After that, by composing with the bilinear form $\mathcal{Q}_{\psi}$, we define a bilinear form on the module $ H_1 (Y_L ;M)$.
For this, consider the principal ideal $\mathcal{I}$ generated by $\Delta_f$, and define the representation
\begin{equation}\label{aaabb} f_{\mathcal{I}}: \pi_L \longrightarrow GL_n(A_{(\partial f)} / \mathcal{I}) ,\end{equation}
by passage to the quotient by $ \mathcal{I}.$ Then, it is sensible to set up the local coefficients $M=(A_{(\partial f)})^n $, $M_{\Delta}:= (A_{(\partial f)}/ \mathcal{I} )^n $ and $M_{\overline{\Delta}}:= (A_{(\partial f)}/ \overline{\mathcal{I}} )^n $ acted on by \eqref{aaabb}.
Here, the reason why we need the ideal $ \mathcal{I}$ is as follows. In many cases, the twisted Alexander modules are torsion $A_{(\partial f)} $-modules annihilated by $ \Delta_f$; see, e.g., \cite{FV,Wada}. Therefore, to obtain a non-trivial linear form from such modules, the coefficient ring should be the quotient $A_{(\partial f)}/( \Delta_f)$.
Next, we will explain Definition \ref{twisted} after introducing two homomorphisms $\mathrm{Adj }$ and $ \mathcal{L}$. Considering the decomposition $(A_{(\partial f)}) ^{n \alpha_D }=(A_{(\partial f)}) ^{n ( \alpha_D -1) } \oplus M_{\rm diag}$, we take the restriction $$ \mathrm{res} (\Gamma_{ \overline{X} ,D}) : (A_{(\partial f)}) ^{n ( \alpha_D -1)} \rightarrow ( A_{(\partial f)}) ^{n ( \alpha_D -1)} $$ of \eqref{fukuoka}.
Then, it follows from Theorem \ref{mainthm2} and Lemma \ref{busan} above that the adjugate matrix
of $\mathrm{res} (\Gamma_{ \overline{X} ,D} ) $ subject to $(\Delta_f) $ yields a well-defined homomorphism
\begin{equation}\label{ppo}\mathrm{Adj } : H_1(Y_L ; ( A_{(\partial f)}) ^n) \longrightarrow \mathrm{Col}_X^{\rm red}(D_f) \cong H^1(Y_L, \partial Y_L ; M_{\Delta}), \end{equation} where $X$ is $ (M_{\Delta})^m\times G$ as the quotient of $\overline{X}.$
Furthermore, notice that the localization $ R[ {\Bbb Z}^{m } ] \hookrightarrow A_{(\partial f)}$ gives rise to the homomorphism \[ \mathcal{L} : H_1(Y_L ; R[ {\Bbb Z}^{m} ] ^n) \longrightarrow H_1(Y_L ; (A_{(\partial f)})^n).\]
\begin{defn}\label{twisted}
Let $R$ be a Noetherian UFD, and $A_{(\partial f)}$ and $ G $ be as above. Take $\mathcal{I}= (\Delta_f) $, and $M_{\Delta} = ( A_{(\partial f)} /\mathcal{I})^n .$ Let $\psi :M_{\overline{\Delta}} \times M_{\Delta} \rightarrow A_{(\partial f)}/\mathcal{I} $ be the bilinear form obtained from $\psi_{\rm pre}$, as in \eqref{aaaaaa}.
Then, we define the bilinear map from the twisted Alexander module
associated with $ (f_{\rm pre}$, $\psi_{\rm pre})$ to be the following composite
\[ H_1(Y_L ; R[ {\Bbb Z}^{m } ]^n)^{ \otimes 2} \xrightarrow{ \ \mathrm{Adj }^{\otimes 2}\circ \mathcal{L} ^{\otimes 2} \ }
H^1(Y_L, \partial Y_L ; M_{\overline{\Delta}}) \otimes H^1(Y_L , \partial Y_L; M_{\Delta}) \xrightarrow{ \ \mathcal{Q}_{\psi } \ } A_{(\partial f)}/\mathcal{I} .\]
\end{defn}
We emphasize that, by definition and Theorem \ref{mainthm2}, the pairing is not hard to compute from $ \mathcal{Q}_{\psi } $. For instance, as in Example \ref{extra2}, the computation of $ \mathcal{Q}_{\psi } $ shows that the twisted pairing equals $ 2 x \bar{x}'$.
\
Finally, we end this section by mentioning a duality. In general, such a duality does not always hold; we should consider a restricted situation: let $m=1$, let $R$ be a field $\mathbb{F} $ of characteristic 0, and let $ \mathcal{I}$ be the principal ideal $(\Delta_f )$. Then, we can easily show the following lemma in linear algebra. \begin{lem}\label{mainthm1224} Assume that $\Delta_f$ is non-zero and that, for a meridian $\mathfrak{m}\in \pi_L $, the non-zero polynomial $\mathrm{det}( t^{-1} \cdot \mathrm{id}_{\mathop{\mathbb{F}}\nolimits^n} - f_{\rm pre} (\mathfrak{m} ))$ is relatively prime to the polynomial $\Delta_{f}$. Then the adjugate matrix $\mathrm{Adj }$ in \eqref{ppo} is an $\mathop{\mathbb{F}}\nolimits[t^{\pm 1}]$-isomorphism. \end{lem} As seen in Examples \ref{extra22} and \ref{extra2}, this pairing is often degenerate and can even be zero, while the classical Blanchfield pairing is non-singular. However, the subsequent paper \cite{Nos5} will show a duality theorem on the twisted pairings, under the following assumptions: \begin{thm}[{A corollary of \cite[Theorem 2.4]{Nos5}}]\label{mainthm14} Let $m=1$, let $R$ be a field of characteristic 0, and let $ \mathcal{I}$ be the principal ideal $(\Delta_f )$. Further, assume that $\Delta_f \neq 0$ and that $\psi_{\rm pre}$ is nondegenerate. If, for a meridian $\mathfrak{m} \in \pi_L $, the non-zero polynomial $\mathrm{det}( \mathrm{id}_{\mathop{\mathbb{F}}\nolimits^n} - t \cdot f_{\rm pre} (\mathfrak{m} ))$ is relatively prime to $\Delta_{f}$ in $ \mathbb{F}[t] $, then the twisted pairing in Definition \ref{twisted} is non-degenerate. \end{thm} Here, recall from the known fact of Milnor \cite{Mil2} that all the (skew-)hermitian nondegenerate bilinear forms with isometry $t$ are completely characterised. In conclusion, if $ \psi$ is (skew-)hermitian, we can obtain computable information from the twisted pairing.
\section{Examples as diagrammatic computations}\label{Lb162r3} As applications of Theorem \ref{mainthm2} to the twisted pairings, we will compute the bilinear forms $ \mathcal{Q}_{\psi}$ associated with some homomorphisms $f : \pi_L \rightarrow G$, where $L$ is one of the trefoil knot, the figure eight knot and the $(m,m)$-torus link $T_{m,m}$. The reader may skip this section.
\subsection{The trefoil knot }\label{Lexa1}
\begin{figure}
\caption{ The trefoil knot, the figure eight knot and the $T_{m,m}$-torus link with labeled arcs.}
\label{ftf}
\end{figure}
As a simple example, we will focus on the trefoil knot $K $. Let $D$ be the diagram of $K$ illustrated in Figure \ref{ftf}.
Note the Wirtinger presentation $\pi_L \cong \langle \alpha_1, \alpha_2 \ | \ \alpha_1 \alpha_2 \alpha_1 =\alpha_2 \alpha_1 \alpha_2 \rangle . $ Then we can easily see that a correspondence $ \mathcal{C}: \{ \alpha_1, \alpha_2, \alpha_3\} \rightarrow X$ with $\mathcal{C}(\alpha_i )=(x_i ,z_i) \in M \times G $ is an $X$-coloring $ \mathcal{C}$ over $f: \pi_L \rightarrow G$,
if and only if it satisfies the four equations \begin{equation}\label{eweq22} z_i = f(\alpha_i ), \ \ \ \ \ z_1 z_2 z_1 =z_2 z_1 z_2, \notag \end{equation} \begin{equation}\label{eq22} x_3 = x_1 \cdot z_2 +x_2 \cdot (1-z_2 ), \end{equation} \begin{equation}\label{eq221} (x_1 -x_2) \cdot (1-z_1 +z_2z_1)=(x_1 -x_2) \cdot (1-z_2 +z_1z_2)=0 . \end{equation} In particular, Theorem \ref{mainthm2} concerning $\mathrm{Col}_X^{\rm red}(D_f)$ yields the isomorphism $$ H^1(Y_K,\partial Y_K;M ) \cong
\bigl\{ \ x \in M \ \bigr| \ x \cdot (1-z_1 +z_2z_1)=x\cdot (1-z_2 +z_1z_2)=0 \ \bigr\}. $$ Further, given a $G$-invariant linear form $\psi$, the bilinear form $\mathcal{Q}_{\psi} (\mathcal{C}, \ \mathcal{C}' ) $ is expressed as $$ \psi \bigl(x_1 -x_2, x_2'(1 -z_2^{-1}) \bigr)+ \psi \bigl(x_2 -x_3, x_3'(1 -z_3^{-1}) \bigr)+\psi \bigl(x_3 -x_1, x_1'(1 -z_1^{-1}) \bigr) \in A ,$$ by definition.
Furthermore, by \eqref{eq22}, the set $ \mathrm{Col}_{X } (D_{f}) $ is generated by the two elements $x_1, x_2$;
accordingly, it can be seen from \eqref{eq221} that the form $\mathcal{Q}_{\psi}$ reduces to \begin{equation}\label{eq2211}\mathcal{Q}_{\psi} \bigl( (x_1, x_2),( x_1', x_2' ) \bigr)= \psi (x_1 -x_2 , \ (x_1' -x_2')\cdot (z_1 - z_1^{-1 }) ) \in A, \end{equation}
where $(x_1^{(')}, x_2^{(')}) \in {\rm Col}_{X^{(')}} (D_{f})\subset (M^{(')})^2 $. It is worth noting that, if $\psi$ is symmetric, $ \mathcal{Q}_{\psi}$ is zero. Hence, we should discuss non-symmetric $ \psi$'s.
From the above expressions, we will deal with three concrete representations: \begin{exa}[cf. Blanchfield pairing]\label{extra1}
Let $f: \pi_L \rightarrow G={\Bbb Z}= \langle t^{\pm 1 } \rangle$ be the abelianization. Then, the equation \eqref{eq221} becomes $(x_1 -x_2)(t^2 -t+1)=0.$ Hence, $$ {\rm Col}_X(D_f) \cong M \oplus \mathrm{Ann}(t^2 -t+1 ).$$ We note that $t^2 -t+1$ is equal to the Alexander polynomial $\Delta_K$ of $K$. For any elements $x$ and $ x'$ in the annihilator submodule, the formula \eqref{eq2211} implies $$ \mathcal{Q}_{\psi} ( x,\ x' )= \psi ( x , \ x' \cdot (2t - 1 ) ).$$
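The coefficient $2t-1$ above is a routine check: on the annihilator submodule we have $t^2 -t+1 = 0$, so that $$ t \, (1-t) = t - t^2 = 1, \ \ \ \ \ \mathrm{whence} \ \ \ \ \ z_1 - z_1^{-1} = t - t^{-1} = t - (1-t) = 2t-1 . $$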
In order to discuss non-trivial cases, for instance, we let $M $ and $ A$ be the PID ${\Bbb Z} [t ]/(t^2 -t+1) $ and set $ \psi (y, z)= \bar{y} z $. Then, the bilinear form $(1-t)\mathcal{Q}_{\psi}$ simplifies to $(1+t) \bar{x} x' $, which is the $(1+t)$-multiple of the Blanchfield pairing, $\bar{x} x' $, as predicted in \cite[Theorem 2.1]{Nos5}.
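The last identity is a short computation modulo $t^2 -t+1$: since $t^2 \equiv t -1$, we have $$ (1-t)(2t -1) = 3t - 2t^2 - 1 \equiv 3t - 2(t -1) - 1 = 1 + t \pmod{t^2 -t+1} . $$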
\end{exa}
\begin{exa}\label{extra22} Consider $f: \pi_L \rightarrow GL_1(\mathbb{F}[t^{\pm 1 }])= (\mathbb{F}[t^{\pm 1 }])^{\times }$ that sends $\alpha_i$ to $2t $.
Then, the equation \eqref{eq221} becomes $ (x_1 -x_2)(1-2 t+4 t^2)=0$; hence we should consider $M=A= \mathbb{F}[t]/(1-2 t+4 t^2 ) $, which is not reciprocal. Furthermore, we can easily see that there is a non-trivial $A$-linear form $ \psi: M^2 \rightarrow A$ if and only if $\mathrm{Char}( \mathbb{F}) =3$. Further, while the associated form $\psi$ in characteristic 3 is non-degenerate, \eqref{eq2211} implies that the bilinear form $$\mathcal{Q}_{\psi}(x_1 -x_2 ,x_1' -x_2')=(1-t)(\bar{x_1} -\bar{x_2})(x_1' -x_2')$$ is degenerate. In particular, we have $ (1-t)\mathcal{Q}_{\psi} =0$, which shows that $\mathcal{Q}_{\psi} $ is not always non-degenerate (cf. Theorem \ref{mainthm14}).
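The role of characteristic 3 can be seen directly: the involution sends the defining polynomial to $$ \overline{1-2 t+4 t^2} = 1 - 2t^{-1} + 4t^{-2} = t^{-2}\, (t^2 - 2t + 4 ) , $$ and $4t^2 -2t+1$ coincides with $t^2 -2t+4$ up to units exactly when $4 = 1$ in $\mathbb{F}$, i.e., in characteristic 3, where both reduce to $t^2 +t+1$.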
\end{exa}
\begin{exa}[$SL_2$-representations]\label{extra2} Let $R_{\rm pre}$ be ${\Bbb Z}[s^{\pm 1}, t^{\pm 1}]$ with $\bar{t}=t^{-1}$ and $\bar{s} =s$.
As considered in the twisted Alexander polynomials, we will focus on a representation $ f_{\rm pre}: \pi_L \rightarrow SL_2(R_{\rm pre} )$ defined by
$$ f_{\rm pre} (\alpha_1 ) = t \cdot \left( \begin{array}{cc} s & 1 \\ 0 &s^{-1} \end{array} \right), \ \ \ \ \ f_{\rm pre}(\alpha_2 ) = t \cdot \left( \begin{array}{cc} s & 0 \\ 1- s^2 - s^{-2} & s^{-1} \end{array} \right). $$ Here we recall the known fact that the twisted Alexander module is $ {\Bbb Z}[s^{\pm 1}, t^{\pm 1}]/ (t^2 +1)$. So, following \S \ref{LFsu32b12r2223}, we shall define $R$ to be ${\Bbb Z}[s^{\pm 1}, t^{\pm 1}]/ (t^2 +1) $ and $M=R^2$, and consider the quotient representation $ f: \pi_L \rightarrow GL_2 ( R)$. Then, as a solution of \eqref{eq2211}, it can be seen that $ {\rm Col}_X^{\rm red}(D_f )\cong R$ with a basis in $ {\rm Col}_X^{\rm red}(D_f ) \subset M^2 \cong R^2 \oplus R^2 $ represented as
$$ \vec{x}= \bigl((0,0), ( ( 1 - s^{-1} t +st )x , \ s t x ) \bigr) , $$ for some $x \in R^{\times }$. Further, we will compute the form $ \mathcal{Q}_{\psi}$, where the $SL_2(R )$-invariant bilinear form $\psi : (R^2)^{\oplus 2} \rightarrow R$ is the determinant that sends $ ((a,b),(c,d))$ to $\bar{a}d-\bar{b}c$. By \eqref{eq2211}, we have \begin{align} \lefteqn{ } \mathcal{Q}_{\psi} \bigl( \vec{x} , \vec{x'} \bigr) &=\mathop{\mathrm{det}}\nolimits \Bigl( \bar{ \vec{x}}, \ \vec{x'} \bigl( t \left( \begin{array}{cc} s & 1 \\ 0 &s^{-1} \end{array} \right)-t^{-1 }\left( \begin{array}{cc} s^{-1} & -1 \\ 0 &s \end{array} \right) \bigr) \Bigr) \notag \\
&= (s+s^{-1})t \left| \begin{array}{cc} ( 1- s^{-1} t^{-1} +s t^{-1} ) \bar{x} & ( 1- s^{-1} t +s t )x' \\ st^{-1} \bar{x} & s t x ' \end{array}
\right| = 2 (s^2 +1 ) \bar{x} x'. \notag \end{align} In summary, the point is that the (degenerate) value $\mathcal{Q}_{\psi } ( \vec{x} , \vec{x'} )$ depends on $s$, while the twisted module $ {\Bbb Z}[s^{\pm 1}, t^{\pm 1}]/ (t^2 +1)$ does not.
Furthermore, we comment on the non-degeneracy from the viewpoint of Theorem \ref{mainthm14}. Following from \eqref{aaa}, we shall set up the localized ring $ A= {\Bbb Z}[s^{\pm 1}, t^{\pm 1}][(t-s)(t-s^{-1})^{-1}]/ \mathcal{I}$ with ideal $ \mathcal{I}=(t^2+1)$ and take the resulting representation $ f_{\rm \mathcal{I}}: \pi_L \rightarrow SL_2(A )$.
Then, since we can replace $ x$ by $(t-s)^{-1}x$, the form $\mathcal{Q}_{\psi} ( \vec{x} , \vec{x'} )$ becomes $2 \bar{x} x' $. Hence, it is non-degenerate, as indicated in Theorem \ref{mainthm14}.
\end{exa}
\subsection{The figure eight knot}\label{rei.8} Next, we will compute some $\mathcal{Q}_{\psi}$'s of the figure eight knot. However, the computation can be done in a similar way to the previous subsection. Thus, we only outline the computation.
Let $D$ be the diagram with arcs as illustrated in Figure \ref{ftf}.
Similarly, we can see that a correspondence $ \mathcal{C}: \{ \alpha_1, \alpha_2, \alpha_3, \alpha_4\} \rightarrow X$ with $\mathcal{C}(\alpha_i )=(x_i ,z_i) \in M \times G $ is an $X$-coloring $ \mathcal{C}$ over $f: \pi_L \rightarrow G$,
if and only if it satisfies the following equations: \begin{equation}\label{eq225dd} z_i = f(\alpha_i ), \ \ \ \ \ z_2^{-1} z_1 z_2= z_1^{-1} z_2^{-1}z_1 z_2 z_1^{-1} z_2 z_1 \in G , \end{equation} \begin{equation}\label{eq225} x_3 = (x_1 -x_2 )\cdot z_2 +x_2 , \ \ \ \ \ \ x_4 = (x_2 -x_1 )\cdot z_1 +x_1, \end{equation} \begin{equation}\label{eq2215} (x_1 -x_2) \cdot (z_1 + z_2 - 1 )=(x_1 -x_2) \cdot (1-z_2^{-1} ) z_1 z_2 =(x_1 -x_2) \cdot (1-z_1^{-1} ) z_2 z_1 \in M. \end{equation} Accordingly, it follows from \eqref{eq225} that the set $ \mathrm{Col}_{X } (D_{f}) $ is generated by $x_1, x_2$;
Given a $G$-invariant bilinear form $\psi$, it can be seen that the bilinear form $\mathcal{Q}_{\psi}$ is expressed as \begin{equation}\label{eq2211522}\mathcal{Q}_{\psi} \bigl( (x_1, x_2),( x_1', x_2' ) \bigr)= \psi \bigl( x_1 -x_2 , \ (x_1' -x_2') \cdot (1-z_1^{-1} - z_2^{-1} + z_1 z_2^{-1}+ z_2 z_1^{-1})\bigr) \in A, \end{equation}
where $(x_1^{(')}, x_2^{(')}) \in {\rm Col}_{X^{(')}} (D_{f})\subset (M^{(')})^2 $. We will examine
concrete representations.
\begin{exa}[Elliptic representations]\label{extra222}
Let us set up the situation. Fix a field $\mathbb{F} $ of characteristic $0.$
Then, we will employ the elliptic representation $f: \pi_1(S^3 \setminus K) \rightarrow SL_2 (\mathop{\mathbb{F}}\nolimits [t^{\pm 1}] ) $ such that $$f(\alpha_1)= t \cdot \left( \begin{array}{cc} s & 1 \\ 0 & s^{-1} \end{array} \right), \ \ \ \ \ f(\alpha_2)= t \cdot \left( \begin{array}{cc} s & 0 \\ u +1 & s^{-1} \end{array} \right), $$
for some $s,u \in \mathbb{F}^{\times}$ with $\bar{s}=s$ and $ \bar{u }=u$. We can easily check from \eqref{eq225dd} that $s$ and $u$ must satisfy $ P_{s, u }=0$, where $ P_{s, u } := s^2+s^{-2} + u+ u^{-1} -1. $
To state only simple results, we now assume that $u$ is a root of the quadratic $ P_{s, u } $ in $u$ (if $u \not{\!\! \in}\ \! \mathbb{F}$, we replace $\mathbb{F}$ by the field extension defined by $ P_{s, u } $). In addition, we will consider two cases.
\
\noindent (i) Assume $s+s^{-1} \neq \pm 1$.
Let us consider the canonical action of $ SL_2(\mathop{\mathbb{F}}\nolimits )$ on $\mathop{\mathbb{F}}\nolimits^2$. Then, following \cite{Lin,Wada}, the twisted Alexander polynomial $\Delta_f $ associated with $f$ turns out to be $t^2- 2(s + s^{-1})t+1 $.
Then,
similar to Example \ref{extra2}, let us define the ring $A$ as $ \mathbb{F}[t]/ ( \Delta_f ) $, and define $M=M'$ as $A^2$ with action.
Then, with the help of a computer to solve \eqref{eq2211},
we can verify $ {\rm Col}_X^{\rm red}(D_f )\cong A$ with a basis in $ {\rm Col}_X^{\rm red}(D_f ) \subset M^2 \cong A^2 \oplus A^2 $ represented as
$$ \vec{x}= \bigl((0,0), ( x , \ \frac{ s^2 - 2 s t + t^2 + s t u }{s- t-s^2t } x ) \bigr) ,$$ for some $x \in A^{\times }$. Further, we will compute the form $ \mathcal{Q}_{\psi}$, where $\psi : (A^2)^{\oplus 2} \rightarrow A$ is the determinant.
By \eqref{eq2211522}, one can check $$ \mathcal{Q}_{\psi} ( \vec{x} , \vec{x'} ) = 2 (1 + s^2) (1 - s + s^2) (1 + s + s^2) \bar{x} x . $$
Hence, if $ s^2+1 \neq 0$, this $\mathcal{Q}_{\psi}$ is non-degenerate (cf. the boundary condition in Theorem \ref{mainthm14}).
As an example,
consider the case $\mathop{\mathbb{F}}\nolimits = \mathbb{C} $ and $(s,u)=(1, (1 +\sqrt{-3})/2 )$. In other words, $f$ is exactly the holonomy representation arising from the hyperbolic structure of $S^3 \setminus K$; then $ \mathcal{Q}_{\psi}$ is expressed as $ 12 \bar{x} x$.
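Indeed, specializing the coefficient above at $s=1$ gives $$ 2 (1 + s^2) (1 - s + s^2) (1 + s + s^2) \big|_{s=1} = 2 \cdot 2 \cdot 1 \cdot 3 = 12 . $$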
(ii) On the other hand, we consider the remaining case $s+s^{-1} = \pm 1$. Then, the associated $H_1(\pi_K; \mathop{\mathbb{F}}\nolimits [t^{\pm 1}] ^2)$ is annihilated by $t \pm 1$. Hence, let us define the ring $A$ as $ \mathbb{F}[t]/ ( t \pm 1 ) $, and define $M$ as $A^2$ with action. Then we can see $ {\rm Col}_X(D_f ) = M^2 \cong A^2 \oplus A^2 $ with basis $ (a,b,c,d)\in A^4$. Moreover, we can read off from \eqref{eq2211522} that $$ \mathcal{Q}_{\psi} \bigl((a,b,c,d) , (a',b',c',d') \bigr) = \bar{a} a' + \bar{b} b'. $$ \end{exa} In addition, inspired by \cite{G}, we discuss $ \mathcal{Q}_{\psi} $ associated with adjoint representations.
However, as seen in Example \ref{extra222}, we should carefully analyze singular points in the space of representations $f: \pi_K \rightarrow G$. Thus, we shall focus on generic points such as \eqref{coleqeeee}. \begin{exa}[Adjoint representations]\label{extra2221113}
Let $G$ and $f$ be as above. Consider the Lie algebra $ \mathfrak{g}= \{ B \in \mathrm{Mat}_2(\mathop{\mathbb{F}}\nolimits )\ | \ \mathrm{Tr}B =0\ \} $ with the adjoint action of $SL_2 (\mathop{\mathbb{F}}\nolimits) $, and set $M=M':= ( \mathfrak{g }[t^{\pm 1}]/ \mathcal{I})^{2} $ for some ideal $ \mathcal{I} \subset \mathop{\mathbb{F}}\nolimits [t^{\pm 1 }]$. Take the Killing form $\psi: \mathfrak{g}^2 \rightarrow \mathop{\mathbb{F}}\nolimits $ which takes $(X,Y )$ to $\mathrm{Tr}(\bar{X}Y) $. To state only the simplest result \eqref{coleqeeee222}, let us impose a genericity assumption of the form \begin{equation}\label{coleqeeee} (u-1)(u+u^{- 1}- 1 )(2u+2u^{- 1}- 1 )(2u+2u^{- 1}- 5 )(u^3 -u^2 -2 u -1 ) \neq 0 . \end{equation}
Then, we can easily compute the twisted Alexander polynomial as $$ \Delta_f = t^2- (2s^2 + 1 +2s^{-2})t+1 =t^2 +( 2u+2u^{-1}-3 )t+1 .$$
Similarly define the ideal $\mathcal{I}$ to be $ (\Delta_f) $.
Then, as a solution of \eqref{eq2211}, we can show
$ {\rm Col}_X^{\rm red}(D_f )\cong A \ $ with a basis $\vec{x}$ in $\mathfrak{g} \otimes A $:
$$ \left( \begin{array}{cc} ( 1-t)(u s^2 + s^4 + s^6 + t + s^2 t + u s^4 t) & s(1 - 3 t^2 + 4 u t^2 + t^4 ) \\ N_{s,t,u } & ( t-1 )(u s^2 + s^4 + s^6 + t + s^2 t + u s^4 t) \end{array} \right),$$ where the left bottom element $ N_{s,t,u}$ is given by the formula $$ s^2 - 2 s^4 + (4 s^2 -2) t - 4 s^2 t^2 + (2 + 5 s^2 - 6 u^2 s^2) t^3 - 2 t^4 + u (s^2 t^4 - 4( 1 + 3 s^2 ) t^3+ 5 s^2 t^2+ 4 (1 - s^2) t -s^2 ).$$ Though the basis is complicated,
the anti-hermitian 2-form $ \mathcal{Q}_{\psi}$ in \eqref{eq2211522} can be reduced to
\begin{equation}\label{coleqeeee222} \mathcal{Q}_{\psi}(\vec{x} , \vec{x'} )= 2 (t - t^{-1}) (u+u^{-1}-1 ) (1-u)(u^3 -u^2 -2 u -1 ) x \bar{x}'. \end{equation} In contrast to the previous examples, this $ \mathcal{Q}_{\psi}$ is parameterized only by $u$, not by the trace $s+s^{-1}$ of $f$.
\end{exa}
\subsection{The $(m,m)$-torus link $T_{m,m}$}\label{Lb12r32} As an example of computation, we will calculate the bilinear form $\mathcal{Q}_{\psi}$ for the $(m,m)$-torus link, following Definition \ref{deals3}.
These calculations will be useful in the paper \cite{Nos4}, which suggests invariants of ``Hurwitz equivalence classes".
Let $L$ be the $(m,m)$-torus link $T_{m,m}$ with $m \geq 2$, and let $\alpha_1, \dots, \alpha_m$ be the arcs depicted in Figure \ref{ftf}. Furthermore, let us identify $\alpha_{i+m}$ with $\alpha_i$, with period $m$. By the Wirtinger presentation, we have a presentation of $ \pi_L$ as
$$\langle \ a_1, \dots, a_m \ | \ a_1 \cdots a_m= a_m a_1 a_2 \cdots a_{m-1}= a_{m-1}a_m a_1 \cdots a_{m-2}= \cdots = a_2 \cdots a_m a_1 \ \rangle . $$ In particular, we have a projection $ \mathcal{P}: \pi_L \rightarrow F_{m-1 }$ to the free group of rank $m-1$ subject to $ a_1 \cdots a_m=1$.
Given a homomorphism $f:\pi_L \rightarrow G$ with $f(\alpha_i) \in Z $, let us discuss $X$-colorings $ \mathcal{C}$ over $f$. Then, concerning the relation on the $\ell $-th link component, such a coloring satisfies the equation \begin{equation}\label{coleq} \bigl( \cdots (\mathcal{C}(\alpha_\ell ) \lhd \mathcal{C}(\alpha_{\ell+1})) \lhd \cdots \bigr)\lhd \mathcal{C}(\alpha_{\ell+m-1}) = \mathcal{C}( \alpha_\ell) ,\ \ \ \ \ \ \ {\rm for \ any \ }1 \leq \ell \leq m . \end{equation}
With notation $ \mathcal{C} (\alpha_i):= (x_i,z_i) \in X$,
this equation \eqref{coleq} reduces to a system of linear equations \begin{equation}\label{aac} ( x_{\ell-1} -x_{\ell} ) + \sum_{ \ell \leq j \leq \ell+m-2 }( x_j -x_{j+1} ) \cdot z_{j+1} z_{j+2} \cdots z_{m +\ell } = 0 \in M , \ \ \ \ \ \mathrm{for \ any \ } 1 \leq \ell \leq m . \end{equation}
Conversely, we can easily verify that, if a map $\mathcal{C}: \{ \mbox{arcs of $D$} \} \to X$ satisfies the equation \eqref{aac}, then $\mathcal{C} $ is an $ X$-coloring. Denoting the left side in \eqref{aac} by $ \Gamma_{f,k}(\vec{x})$, consider a homomorphism $$ \Gamma_{f}: M^m \longrightarrow M^m ; \ \ \ (x_1, \dots, x_m) \longmapsto (\Gamma_{f,1}(\vec{x}) , \dots, \Gamma_{f,m}(\vec{x}) ).$$ To conclude, the set $\mathrm{Col}_X(D_f ) $ coincides with the kernel of $\Gamma_{f} $.
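For instance, in the smallest case $m=2$, that is, for the Hopf link $T_{2,2}$, the system \eqref{aac} consists of the two equations $$ (x_2 - x_1) + (x_1 - x_2)\cdot z_2 z_1 = 0, \ \ \ \ \ (x_1 - x_2) + (x_2 - x_1)\cdot z_1 z_2 = 0 , $$ so that $\mathrm{Ker}(\Gamma_{f})$ consists of the pairs $(x_1, x_2) \in M^2$ with $(x_1 - x_2)\cdot (1- z_2 z_1) = (x_1 - x_2)\cdot (1- z_1 z_2) = 0$; in particular, it contains the diagonal submodule.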
Next, we precisely formulate the resulting bilinear form in Definition \ref{deals3}. \begin{prop}\label{aa11c} Let $f : \pi_1(S^3 \setminus T_{m,m})\rightarrow G$ be as above. Let $\psi: M \otimes M' \rightarrow A $ be a $G$-invariant bilinear function. For any $ \ell \in {\Bbb Z}_{>0}$ with $ 1 \leq \ell \leq m $, the bilinear form $\mathcal{Q}_{\psi,\ell}: \mathrm{Ker}(\Gamma_{f} )\otimes \mathrm{Ker}(\Gamma_{f} ')\ \rightarrow A$ takes $ (x_1, \dots, x_m) \otimes(y_1', \dots, y_m') $ to \begin{equation}\label{bbbdd} \sum_{k=1}^{m-1 } \psi \bigl( \sum_{j=1}^{k } (x_{j+\ell -1}-x_{j +\ell })\cdot z_{j+\ell } z_{j+\ell +1} \cdots z_{ k+\ell -1} ,\ y_{k+ \ell }' \cdot (1 - z_{k+ \ell }^{-1}) \bigr) \in A . \end{equation} \end{prop} \noindent The formulae follow from the definitions by a direct calculation.
Finally, under an assumption, we give a 2-dimensional interpretation of the bilinear form. \begin{prop}\label{aa1133c} Let $W $ be the compact surface obtained from $S^2$ by removing $m$ open disks. Consider the action of $\pi_1(W ) $ on $M$ induced from that of $\pi_L$ via the above projection $\mathcal{P}: \pi_L \rightarrow F_{m-1}$.
With the notation above, we assume that the product $ z_1 \cdots z_m$ acts as the identity on $M$ and on $M'$.
Then the diagonal map $ M \rightarrow \mathop{\mathrm{Ker}}\nolimits ( \Gamma_{\mathbf{z}}) $ is a splitting injection, and the cokernel is the relative cohomology $H^1(W, \partial W ; M )$. Further, the bilinear form $\mathcal{Q}_{\psi,\ell} $ coincides with the composite: $$ H^1( W,\partial W; M ) \otimes H^1( W,\partial W; M ') \xrightarrow{\ \ \smile \ \ } H^2( W ,\partial W ; M\otimes M' ) \xrightarrow{ \ \langle \psi, \bullet \rangle \circ \langle \bullet , \mu_\ell \rangle \ }A . $$
Furthermore, elements of the image $\mathrm{Im}(\delta^* ) $ and $M_{\rm tri} $ explained in Corollary \ref{mainthm1} are represented by $ (x_1, \dots, x_m)$ and $ (x, \dots, x) \in M^n$, respectively. Here $x\in M$ and $x_i \in M(1-z_i)$.
\end{prop} We will give the proof at the end of \S \ref{yy4324}. Furthermore, as in the previous section, for a concrete representation $ \pi_L \rightarrow G$, we can explicitly compute the bilinear forms. We refer the reader to \cite{Nos4} for concrete computations based on Proposition \ref{aa1133c}.
\section{Proof of Theorem \ref{mainthm2}}\label{Lbo33222} We will work out the proof of Theorem \ref{mainthm2} in \S \ref{yy43} and \S \ref{yy4324}. While the statements were described in terms of ordinary cohomology, the proofs will be given via group cohomology.
In \S \ref{KI11}, we first review relative group (co)homology.
\subsection{Preliminaries: Review of relative group homology}\label{KI11} Relative group homology is a useful tool, e.g., in algebraic $K$-theory, in the study of secondary characteristic classes, and in stability problems for group homology.
As in \cite{BE,T,Zic}, the relative homology is defined from a projective resolution.
However, we will spell out the relative group (co)homology in {\it non-}homogeneous terms, as follows.
This subsection reviews the definition and properties. Throughout this subsection, we fix a group $ \Gamma $ and a homomorphism $f : \Gamma \rightarrow G$. Then, $\Gamma$ acts on the right $G$-module $M$ via $f$.
Let $ C_{n}^{\rm gr }(\Gamma;M ) $ be $M \otimes_{{\Bbb Z}} {\Bbb Z}[\Gamma^n] $. Define the boundary map $\partial_n( a \otimes (g_1, \dots , g_n)) \in C_{n-1}^{\mathrm{gr}}(\Gamma;M)$ by the formula \[ (a g_1) \otimes ( g_2, \dots ,g_{n}) +\!\!\sum_{1 \leq i \leq n-1}\!\! (-1)^i a \otimes ( g_1, \dots ,g_{i-1}, g_{i} g_{i+1}, g_{i+2},\dots , g_n)+(-1)^{n} a \otimes ( g_1, \dots , g_{n-1}) .\] Moreover, we fix subgroups $ K_j $ and the inclusions $\iota_j: K_j \hookrightarrow \Gamma $, where the index $ j$ runs over $ 1 \leq j \leq m $ (possibly $K_s = K_t $ even if $s \neq t$). Then we can define the complex of the mapping cone of the $ \iota_j$'s. More precisely, let us set up the module defined to be
$$ C_n( \Gamma, K_\mathcal{J} ; M ):= C_{n}^{\mathrm{gr}}(\Gamma;M) \oplus \bigl( \bigoplus_{j\in \mathcal{J} } C_{n-1}^{\mathrm{gr}}(K_j ;M) \bigr) $$ and define the differential map on $C_*( \Gamma, K_\mathcal{J}; M ) $ by the formula $$ \partial_n^{\rm rel}( a, b_1, \dots, b_m):= \bigl(\sum_{j \in \mathcal{J} } \iota_j(b_j) -\partial_n(a), \partial_{n-1}^{ }(b_1), \dots, \partial_{n-1}^{ }(b_m)\bigr) \in C_{n-1} ( \Gamma, K_\mathcal{J} ; M ). $$ Since $\partial_{n-1}^{\rm rel} \circ \partial_n^{\rm rel}=0$, we can define the relative group homology $H_n( \Gamma, K_{\mathcal{J}}; M ) $. \begin{rem}\label{clAl220} It is shown in \cite[Propositions in \S 1]{T} that, for any $g \in \Gamma$, the relative homology $ H_*( \Gamma, K_\mathcal{J}; M ) $ is invariant under replacing all the subgroups $K_j$ by $g^{-1} K_j g$. \end{rem}
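Since the vanishing of the square of the differential is the crux here, the following small sketch checks $\partial_1\circ\partial_2 = 0$ numerically for the right-module bar differential (the convention in which the first face carries the action $a\cdot g_1$), taking $\Gamma = S_3$ acting on $M = {\Bbb Z}^3$ by coordinate permutation; this particular group and action are chosen purely for illustration.

```python
from itertools import permutations

# Gamma = S_3 as permutation tuples; (g*h)(i) = g[h[i]]
S3 = list(permutations(range(3)))
mul = lambda g, h: tuple(g[h[i]] for i in range(3))

# right action of S_3 on M = Z^3 by coordinate permutation: (a.g)[i] = a[g[i]],
# so that (a.g).h = a.(gh)
act = lambda a, g: tuple(a[g[i]] for i in range(3))

def d2(a, g, h):
    # bar differential on a (x) (g, h):
    #   (a.g) (x) (h)  -  a (x) (gh)  +  a (x) (g)
    return [(act(a, g), h), (tuple(-x for x in a), mul(g, h)), (a, g)]

def d1(a, g):
    # bar differential on a (x) (g):  a.g - a  in M
    return tuple(act(a, g)[i] - a[i] for i in range(3))

a = (1, 2, 3)
for g in S3:
    for h in S3:
        total = [0, 0, 0]
        for coef, k in d2(a, g, h):
            for i, v in enumerate(d1(coef, k)):
                total[i] += v
        assert total == [0, 0, 0]
print("d o d = 0 on all of S3 x S3")
```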
Dually, we will discuss the relative cohomology. Let us set the cochain group of the form $$C^n( \Gamma, K_\mathcal{J}; M ):= \mathrm{Map} ( \Gamma^n , M ) \oplus \bigl( \bigoplus_{j \in \mathcal{J} } \mathrm{Map }((K_{j})^{n-1} , M ) \bigr) . $$ Furthermore, for $(h,k_1, \dots, k_m ) \in C^n( \Gamma, K_\mathcal{J}; M ) $, let us define $ \partial^n(h,k_1, \dots, k_m )$ in $ C^{n+1}( \Gamma, K_\mathcal{J}; M )$ by the formula
$$ \partial^n \bigl(h,k_1, \dots, k_m \bigr)( a, b_1, \dots, b_m)= \bigl( h( \partial_{n+1} (a)), \ h (b_1) -k_1(\partial_n(b_1)), \dots,h(b_m) -k_m (\partial_n(b_m))\bigr),$$ where $( a, b_1, \dots, b_m) \in \Gamma^{n+1} \times( K_1)^{n} \times \cdots \times (K_m)^{n} $. Then, we have a complex $ (C^*( \Gamma, K_\mathcal{J}; M ), \partial^*)$, and can define the cohomology.
As the simplest example, we now observe the submodule consisting of 1-cocycles.
Let $ \mathop{\mathrm{Hom}}\nolimits_f (\Gamma , M \rtimes G )$ be the set of group homomorphisms $\Gamma \rightarrow M \rtimes G $ over the homomorphism $f$. Here the semidirect product $M \rtimes G $ is defined by $$ (a, g) \star (a',g'):=( a \cdot g' + a', \ gg'), \ \ \ \ \mathrm{for} \ \ a,a' \in M, \ \ \ g,g' \in G. $$
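As a quick sanity check of the product law and of the identification below, the sketch here verifies on a toy example (taking $G = \{\pm 1\}$ acting on $M = {\Bbb Z}$ by sign, an assumption made purely for illustration) that $\star$ is associative and that 1-cocycles correspond to homomorphisms over $f$.

```python
# M semidirect G with M = Z and G = {+1, -1} acting on M by sign (toy example)
star = lambda p, q: (p[0]*q[1] + q[0], p[1]*q[1])   # (a,g)*(a',g') = (a.g' + a', gg')

G = [1, -1]
M = range(-3, 4)

# associativity of the product
for a in M:
    for b in M:
        for c in M:
            for g in G:
                for h in G:
                    for k in G:
                        p, q, r = (a, g), (b, h), (c, k)
                        assert star(star(p, q), r) == star(p, star(q, r))

# a 1-cocycle h (i.e. h(gg') = h(g).g' + h(g')) yields a homomorphism g -> (h(g), g)
t = 5
h = {1: 0, -1: t}          # the general 1-cocycle on Z/2 with the sign action
phi = lambda g: (h[g], g)
for g in G:
    for gp in G:
        assert star(phi(g), phi(gp)) == phi(g*gp)
print("star is associative; 1-cocycles give homomorphisms over f")
```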
Then, as is well-known (see \cite[\S IV. 2]{Bro}), if $K_\mathcal{J} $ is the empty set, the set $ \mathop{\mathrm{Hom}}\nolimits_f (\Gamma , M \rtimes G )$ is identified with the set of group 1-cocycles of $\Gamma $ as follows: $$ Z^1( \Gamma; M ) \cong \mathop{\mathrm{Hom}}\nolimits_f (\Gamma , M \rtimes G ); \ \ \ \ \ \ h \longmapsto (\gamma \mapsto ( h(\gamma), f(\gamma))) .$$ Further, concerning the relative cohomology, we can easily characterize the first cohomology directly from the definition, as follows: \begin{lem}\label{clAl1} The submodule of 1-cocycles, $Z^1( \Gamma, K_\mathcal{J}; M )$, is identified with the following:
$$ \{ \ (\widetilde{f} , y_1, \dots, y_m ) \in \mathop{\mathrm{Hom}}\nolimits_f ( \Gamma, M \rtimes G) \oplus M^m \ \ | \ \ \widetilde{f} (h_j) = ( y_j - y_j \cdot h_j, \ f( h_j) ) \ \ \mathrm{for \ any } \ h_j \in K_j \ \} . $$
Moreover, the image of $\partial^0$, i.e., $B^1( \Gamma, K_\mathcal{J}; M )$, is equal to the subset $\{ ( \tilde{f}_a, a, \dots, a)\}_{a \in M}. $ Here, for $a \in M,$ the map $ \tilde{f}_a : \Gamma \rightarrow M \rtimes G $ is defined to send $\gamma $ to $( a - a \cdot \gamma, \ f (\gamma ))$. In particular, if $ K_\mathcal{J} $ is not empty, $B^1( \Gamma, K_\mathcal{J}; M )$ is a direct summand of $ Z^1( \Gamma, K_\mathcal{J}; M )$. \end{lem}
Finally, we will formulate explicitly the cup product on $ C^p( \Gamma, K_\mathcal{J} ; M )$ and the Kronecker product. When $ K_\mathcal{J} $ is the empty set, we define the product of $u \in C^p( \Gamma; M )$ and $v \in C^{q}( \Gamma; M' )$ to be the element $u \smile v \in C^{p+q} ( \Gamma; M \otimes M')$ given by
$$ ( u \smile v) ( g_1 ,\dots, g_{p+q}):= (-1)^{pq} \bigl( u (g_1 ,\dots, g_{p} ) g_{p+1} \cdots g_{p+q} \bigr) \otimes v (g_{p+1} ,\dots, g_{p+q} ) .$$
Further, if $ K_\mathcal{J} $ is not empty, for two elements $(f,k_1, \dots, k_m )\in C^p( \Gamma, K_\mathcal{J} ; M )$ and $(f',k'_1, \dots, k'_m )\in C^{q}( \Gamma, K_\mathcal{J} ; M' )$, let us define {\it the cup product} by the formula $$ ( f \smile f', \ k_1\smile f', \dots, \ k_m\smile f') \in C^{p+q}( \Gamma, K_\mathcal{J} ; M \otimes M'). $$
We can easily see that this formula descends to a bilinear map, by passage to cohomology, $$ \smile : H^p( \Gamma, K_\mathcal{J} ; M ) \otimes H^{q}( \Gamma, K_\mathcal{J} ; M' )\longrightarrow H^{p+q}( \Gamma, K_\mathcal{J} ; M \otimes M').$$
Then the graded commutativity holds: for any $u \in H^{p} ( \Gamma, K_\mathcal{J} ; M )$ and $v \in H^{q}( \Gamma, K_\mathcal{J} ; M' )$, we have $u \smile v =(-1)^{pq } \tau (v \smile u) $, where $ \tau: M \otimes M' \rightarrow M' \otimes M$ is the canonical isomorphism. Furthermore, for $( a, b_1, \dots, b_m) \in \Gamma^{n} \times K_1^{n-1} \times \cdots \times K_m^{n-1} $, consider the evaluation defined by $$ \langle (f,k_1, \dots, k_m ) , ( a, b_1, \dots, b_m) \rangle:= f(a)+k_1(b_1)+ \cdots +k_m(b_m)\in M.$$ Then it can be seen that this formula induces a pairing $ \langle , \rangle : H^n( \Gamma, K_\mathcal{J} ; M )\otimes H_n( \Gamma, K_\mathcal{J} ; {\Bbb Z} )\rightarrow H_0(\Gamma ; M) . $ Here we can replace $ H_0(\Gamma ; M) $ by the coinvariant $M_\Gamma= M/ \{ ( a -a \cdot g)\} _{a \in M, g \in \Gamma}$. \begin{rem}\label{clAl221} We will give a topological description of the above definitions without proofs (for the proofs, see \cite{BE} or \cite[\S 3]{Zic}). Consider the Eilenberg-MacLane spaces of $ \Gamma$ and of $K_j$, and the map $ (\iota_j)_* : K(K_j,1 ) \rightarrow K( \Gamma,1 ) $ induced by the inclusions. Then the relative homology $H_n( \Gamma, K_\mathcal{J} ; M ) $ is isomorphic to the homology of the mapping cone of $ \sqcup_j K(K_j,1 )\rightarrow K(\Gamma,1 )$ with local coefficients. Further, the cup product $\smile$ and the Kronecker product $ \langle , \rangle$ above coincide with those on the usual singular (co)homology groups (up to signs \footnote{See \cite[\S\S 1-2 ]{BE} for details.}). In particular, we mention the knot case $\# L =1$.
Since the complementary space $Y_L = S^3 \setminus L$ is an Eilenberg-MacLane space, we have an isomorphism $ H^*( \pi_1 (Y_L ), \pi_1( \partial Y_L) ;M) \cong H^*( Y_L, \partial Y_L ; M) $.
More generally, we comment on the case $ \# L \geq 1$. We let $ \Gamma$ be $\pi_1(Y_L)$ and let $ K_\ell ( \cong {\Bbb Z}^2 ) $ be the abelian subgroup of $\pi_1(Y_L) $ arising from the $\ell $-th boundary. Denote the family $\mathcal{K}:=\{K_\ell \}_{\ell \leq \# L}$ by $\partial \pi_1 (Y_L)$, and consider the inclusion pair $$\iota_Y : \bigl( Y_L ,\ \partial Y_L \bigr) \rightarrow \bigl( K(\pi_1 (Y_L),1 ) , \ K(\partial \pi_1 (Y_L),1 ) \bigr) $$ obtained by attaching cells to kill the higher homotopy groups.
Then, we have a commutative diagram: $${\normalsize \xymatrix{ H^1( \pi_1 (Y_L), \partial \pi_1(Y_L) ; M)^{\otimes n} \ar[r]^{\smile}\ar[d]_{\cong }^{\iota_Y^*} & H^n( \pi_1 (Y_L), \partial \pi_1(Y_L) ; M^n ) \ar[rr]^{\ \ \ \ \ \ \ \ \ \ \langle \bullet, (\iota_Y )_*(\mu) \rangle } \ar[d]^{\iota_Y^* }& & \ \ (M^{\otimes n})_{\pi_1(Y_L)} \ar@{=}[d] \ \ & \\ H^1( Y_L ,\partial Y_L; M)^{\otimes n} \ar[r]^{\smile} & H^n( Y_L ,\partial Y_L; M^n) \ar[rr]^{\ \ \ \ \ \ \ \ (-1)^n\langle \bullet, \mu \rangle } & & \ \ (M^{\otimes n})_{\pi_1(Y_L)}. }} $$ Here, the left $\iota_Y^*$ is an isomorphism by the definition of $ \iota_Y $. In conclusion, by this diagram, in order to prove Theorem \ref{mainthm2} for the bottom arrows, it suffices to work only with the group (co)homology in the top row; this is what we do in the following subsections. \end{rem}
\subsection{Proof of the isomorphism \eqref{g21gg33}}\label{yy43} This subsection gives the proof of the isomorphism \eqref{g21gg33} in Theorem \ref{mainthm2} and of Corollary \ref{mainthm1}. For this, we fix terminology throughout this section: let $D$ be a diagram of a link $L$, and let $\Gamma$ be $ \pi_L$.
In addition, we fix an arc $\gamma_{\ell}$ on each link component $\ell$ of $L$, and consider the circular path $\mathcal{P}_{\ell}$ starting from $\gamma_{\ell} $ (see Figure \ref{ezu}). Further, for $j\geq 2$, we denote by $\alpha_{\ell, j}$ the $j$-th arc on $\mathcal{P}_{\ell} $, and by $\beta_{\ell, j}$ the arc that separates the arcs $\alpha_{\ell, j-1 } $ and $\alpha_{\ell, j}$. Considering the meridian $\mathfrak{m}_{\ell,j}^{ \epsilon_j} \in \pi_1( S^3 \setminus L )$ associated with the arc $ \beta_{\ell, j} $,
we here define the longitude $ \mathfrak{l}_{\ell}$ to be \begin{equation}\label{189} \mathfrak{l}_{\ell} := \mathfrak{m}_{\ell,1}^{ \epsilon_1} \mathfrak{m}_{\ell,2}^{ \epsilon_2} \cdots \mathfrak{m}_{\ell, N_\ell }^{ \epsilon_{N_\ell}} \in \pi_1(S^3 \setminus L), \end{equation} where $\epsilon_i \in \{ \pm 1\} $ is the sign of the crossing between $ \alpha_{\ell,i } $ and $\beta_{\ell,i }$ with $i>1,$ and $ \epsilon_1 =+1$.
Considering the subgroup $\partial_{\ell} \pi_L \cong {\Bbb Z}^2 $ generated by the meridian-longitude pair $ (\mathfrak{m}_{\ell} , \mathfrak{l}_{\ell} )$, the union $ \partial_{1} \pi_L \sqcup \cdots \sqcup \partial_{\# L} \pi_L$ coincides with the family $ \partial \pi_L $ mentioned in Remark \ref{clAl221}.
\begin{figure}
\caption{ The longitude $\mathfrak{l}_{\ell}$ and the arcs $\alpha$'s and $\beta$'s in the diagram $D$. Here $\gamma_{\ell}= \alpha_{\ell ,1 }= \beta_{\ell, 1 }. $}
\label{ezu}
\end{figure}
\begin{proof}[Proof of the isomorphisms \eqref{g21gg33}] First, we will construct a map in \eqref{11222}. Given a $G$-module $M$,
set up a map \begin{equation}\label{g21gg} \kappa : M \times G \longrightarrow M \rtimes G; \ \ \ \ (m,g) \longmapsto (m - m \cdot g, \ g ). \end{equation} Further, for an $X$-coloring $\mathcal{C}$ over $f$, consider the map $ \tilde{f}_{\mathcal{C}} : \{ {\rm arcs \ of \ }D \} \rightarrow M \rtimes G $ which takes $\gamma$ to $\kappa \bigl( \mathcal{C} (\gamma) \bigr) $. Then we can verify from the Wirtinger presentation that this $ \tilde{f}_{\mathcal{C}}$ defines a group homomorphism $\pi_L \rightarrow M \rtimes G$ over $f$. Hence, we obtain a map \begin{equation}\label{11222} \Omega: \mathrm{Col}_X(D_{f}) \longrightarrow \mathop{\mathrm{Hom}}\nolimits (\pi_L , M \rtimes G) \times (X^{\# L}) ; \ \ \ \mathcal{C} \longmapsto ( \tilde{f}_{\mathcal{C}}, \ \mathcal{C}( \gamma_1), \dots, \mathcal{C}( \gamma_{\# L} ) ). \end{equation}
We claim that such an $ \tilde{f}_{\mathcal{C}}$ uniquely admits $a_{\ell} \in M$ satisfying the two identities
\begin{equation}\label{1v24} \ \widetilde{f}(\mathfrak{m}_{\ell } ) =(a_{\ell} - a_{\ell} \cdot f(\mathfrak{m}_{\ell } ), \ f(\mathfrak{m}_{\ell } )) \ \ \ \ \ \ \ \ \widetilde{f}(\mathfrak{l}_{\ell } ) =(a_{\ell} - a_{\ell} \cdot f(\mathfrak{l}_{\ell } ), \ f(\mathfrak{l}_{\ell } )) \in M \rtimes G \end{equation} with respect to $1 \leq \ell \leq \# L.$ Fix the notation $\mathcal{C}(\beta_{\ell,j})=(y_j, z_j)\in M \times G $. The first identity is obvious from the definition of $\kappa$ with $a_{\ell}=y_1$. We shall show the second: by the coloring condition between $\alpha_{\ell, N_{\ell}}$ and $\beta_{\ell, N_{\ell}}$, we have the two equations \begin{equation}\label{deq221} z_1 z_2^{\epsilon_2} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}} =z_2^{\epsilon_2} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}} z_1 , \ \ \ \ \ y_1 z_2 ^{\epsilon_{2}} \cdots z_{N_{\ell}} ^{\epsilon_{N_{\ell}}} + \sum_{k=2}^{N_{\ell}-1 } y_k (1-z_{k+1}^{\epsilon_{k+1}} )\cdot z_{k+2 }^{\epsilon_{k+2}} z_{k+3 }^{\epsilon_{k+3}} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}} = y_1 . \end{equation} By \eqref{g21gg} and \eqref{189}, $\widetilde{f}(\mathfrak{l}_{\ell } ) $ is expressed as $\sum_{k=1}^{N_{\ell} } y_k (1-z_{k+1}^{\epsilon_{k+1}} )\cdot z_{k+2 }^{\epsilon_{k+2}} z_{k+3 }^{\epsilon_{k+3}} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}} $. Hence, carefully comparing it with \eqref{deq221} gives the second identity in \eqref{1v24}.
Since \eqref{1v24} coincides exactly with the 1-cocycle condition in Lemma \ref{clAl1}, the map $\Omega$ is reduced to $\mathrm{Col}_X(D_{f}) \rightarrow Z^1( \pi_L, \partial \pi_L;M)$. We will construct the inverse mapping as follows. For this, notice the equality $$ \kappa (a \cdot h +b, h^{-1}gh) = (b,h)^{-1} \cdot \kappa (a,g) \cdot (b,h) \in M \rtimes G $$ from the definitions, and notice that any meridian $ \mathfrak{m}_{\ell, j} $ in $\pi_L $ is conjugate to the $ \mathfrak{m}_{\ell_j }$ on the $\ell_j$-th component for some $\ell_j$: in other words, we can choose $ h_j \in \pi_L $ with $\mathfrak{m}_{\ell, j} = h_j^{-1}\mathfrak{m}_{\ell_j} h_j$. To summarize, given an $\tilde{f}$ in $ Z^1( \pi_L, \partial \pi_L;M)$, we define a map $\mathcal{C}_{\tilde{f}} : \{ \mathrm{arcs \ of \ }D \} \rightarrow X$ by $ \mathcal{C}_{\tilde{f}} (\mathfrak{m}_{\ell, j} ) = ( a_{\ell} \cdot h_j +b_j , f (\mathfrak{m}_{\ell, j} ) )$, where $b_j \in M$ is defined by $\tilde{f}( \mathfrak{m}_{\ell, j} )=(b_j, f (\mathfrak{m}_{\ell, j} ))$. Then, we can easily see that $ \mathcal{C}_{\tilde{f}}$ is an $X$-coloring, and this construction gives the desired inverse mapping.
To summarize, we have an isomorphism $\mathrm{Col}_X(D_{f}) \cong Z^1( \pi_L, \partial \pi_L;M)= H^1( \pi_L, \partial \pi_L;M) \oplus M$. Furthermore, Lemma \ref{clAl1} again says that the summand $ X_{\rm diag} \cap \mathrm{Col}_X(D_{f}) \cong M$ is exactly $B^1(\pi_L, \partial \pi_L;M )$. Hence, by the definition of $\mathrm{Col}_{X }^{\rm red} (D_{f}) $ in \eqref{skew24592}, the map $\Omega$ yields the desired isomorphism $\mathrm{Col}_{X }^{\rm red} (D_{f}) \cong H^1(Y_L , \ \partial Y_L ;M )$.
\end{proof}
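The conjugation identity for $\kappa$ used in the proof above can be sanity-checked numerically. The sketch below assumes the sign convention $\kappa(m,g) = (m - m\cdot g,\ g)$, the one compatible with the coboundaries $\tilde{f}_a$ of Lemma \ref{clAl1} (with the opposite sign convention, the identity holds after replacing $b$ by $-b$), and takes $M = {\Bbb Z}^2$ with a few elements of $GL_2({\Bbb Z})$ acting by right multiplication; these test matrices are purely illustrative.

```python
def vmat(a, g):   # right action of a 2x2 matrix g on a row vector a
    return tuple(sum(a[i]*g[i][j] for i in range(2)) for j in range(2))

def mmul(g, h):
    return tuple(tuple(sum(g[i][k]*h[k][j] for k in range(2)) for j in range(2)) for i in range(2))

def inv2(g):      # inverse in GL_2(Z), where det g = +1 or -1
    d = g[0][0]*g[1][1] - g[0][1]*g[1][0]
    return ((g[1][1]*d, -g[0][1]*d), (-g[1][0]*d, g[0][0]*d))

def star(p, q):   # (a,g)*(b,h) = (a.h + b, gh)
    (a, g), (b, h) = p, q
    return (tuple(x + y for x, y in zip(vmat(a, h), b)), mmul(g, h))

def sinv(p):      # (a,g)^{-1} = (-a.g^{-1}, g^{-1})
    a, g = p
    gi = inv2(g)
    return (tuple(-x for x in vmat(a, gi)), gi)

def kappa(m, g):  # kappa(m,g) = (m - m.g, g)
    return (tuple(x - y for x, y in zip(m, vmat(m, g))), g)

mats = (((0, 1), (1, 0)), ((1, 1), (0, 1)), ((0, -1), (1, 0)), ((1, 0), (1, 1)))
vecs = ((1, 2), (0, 1), (-3, 5))

# kappa(a.h + b, h^{-1} g h) = (b,h)^{-1} * kappa(a,g) * (b,h)
for g in mats:
    for h in mats:
        for a in vecs:
            for b in vecs:
                lhs = kappa(tuple(x + y for x, y in zip(vmat(a, h), b)),
                            mmul(mmul(inv2(h), g), h))
                rhs = star(star(sinv((b, h)), kappa(a, g)), (b, h))
                assert lhs == rhs
print("conjugation identity holds on all samples")
```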
\begin{proof}[Proof of Corollary \ref{mainthm1}] We will show the required isomorphism $ H^1(\pi_L , \partial \pi_L ; M) \cong H^1(\pi_L ; M)$ from the assumption that $\mathrm{id} - f (\mathfrak{m}_{\ell}): M \rightarrow M$ is bijective.
For this, it is enough to construct an inverse of the projection $Z^1(\pi_L , \partial \pi_L ; M) \rightarrow Z^1(\pi_L ; M) $. Let $\tilde{f}:\pi_L \rightarrow M \rtimes G$ be any homomorphism over $f$, regarded as an element of $Z^1(\pi_L ; M) $.
Choose some $b_{\ell}$ and $ c_{\ell} \in M$ with $$\widetilde{f}(\mathfrak{m}_{\ell })= ( b_{\ell}, \ f(\mathfrak{m}_{\ell })) , \ \ \ \ \ \ \ \ \widetilde{f}(\mathfrak{l}_{\ell } )= ( c_{\ell}, \ f(\mathfrak{l}_{\ell })) \in M \rtimes G .$$ Since the pair $ ( \mathfrak{m }_{\ell },\mathfrak{l}_{\ell }) $ commutes in $\pi_L$, we have $ \widetilde{f}(\mathfrak{m}_{\ell }) \widetilde{f}(\mathfrak{l}_{\ell } )=\widetilde{f}(\mathfrak{l}_{\ell }) \widetilde{f}(\mathfrak{m}_{\ell } )$, which reduces to $$ \bigl( \ c_{\ell}- c_{\ell}f(\mathfrak{m}_{\ell } ) - b_{\ell}+ b_{\ell}f(\mathfrak{l}_{\ell } ) , \ \ f(\mathfrak{l}_{\ell })^{-1} f(\mathfrak{m}_{\ell } )^{-1}f(\mathfrak{l}_{\ell }) f(\mathfrak{m}_{\ell } ) \ \bigr)= (0, 1_G) \in M \rtimes G. $$ Setting $a_{\ell}= b_{\ell} (\mathrm{id} - f(\mathfrak{m}_{\ell }) )^{-1} $, which is possible by assumption, the reduced equality implies $ c_{\ell}= a_{\ell} (\mathrm{id} - f(\mathfrak{l}_{\ell }) )$. Hence, the correspondence $ \widetilde{f} \mapsto (\widetilde{f}, a_1, \dots, a_{\# L})$ gives rise to the desired inverse mapping.
Incidentally, the vanishing of $\mathrm{Im}(\delta^*)$ follows from $H ^1( \partial \pi_L ; M)=0$. \end{proof}
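The linear-algebra step in the proof above (deriving $c_\ell = a_\ell(\mathrm{id} - f(\mathfrak{l}_\ell))$ from the commutation constraint) can be illustrated numerically. The sketch below uses hypothetical commuting matrices in the roles of $f(\mathfrak{m}_\ell)$ and $f(\mathfrak{l}_\ell)$; the particular matrices and the vector $b$ are assumptions made for the test.

```python
from fractions import Fraction as F

def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def vmat(v, A):
    return [sum(v[i]*A[i][j] for i in range(2)) for j in range(2)]
def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
def minv(A):
    d = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[A[1][1]/d, -A[0][1]/d], [-A[1][0]/d, A[0][0]/d]]

I2 = [[F(1), F(0)], [F(0), F(1)]]
R  = [[F(0), F(-1)], [F(1), F(0)]]   # plays the role of f(m): id - R is invertible
L  = [[F(2), F(-3)], [F(3), F(2)]]   # plays the role of f(l): L = 2*id + 3R commutes with R
assert mmul(R, L) == mmul(L, R)

b = [F(4), F(-7)]                    # first component of the image of the meridian
a = vmat(b, minv(msub(I2, R)))       # a = b (id - f(m))^{-1}
c = vmat(a, msub(I2, L))             # claimed value c = a (id - f(l))

# c satisfies the commutation constraint  c (id - f(m)) = b (id - f(l))
assert vmat(c, msub(I2, R)) == vmat(b, msub(I2, L))
print("c = a(id - f(l)) solves the constraint; c =", c)
```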
\subsection{Proofs of Theorem \ref{mainthm2} and Proposition \ref{aa1133c}}\label{yy4324} We now turn to proving Theorem \ref{mainthm2} and Proposition \ref{aa1133c}. The proof is outlined as concrete computations of the bilinear form $\mathcal{Q}_{\psi, \ell}$ and of the cup product, in turn. The key point is the explicit description of the 2-cycle $\mu_{\ell }$ in Lemmas \ref{2gs} and \ref{2g2s22}.
To carry out this outline, we first compute $\mathcal{Q}_{\psi, \ell}$.
Recall the arc $\beta_{\ell,j}$ explained in Figure \ref{ezu}. For two $X$-colorings $ \mathcal{C}$ and $ \mathcal{C}'$,
we further employ the notation $\mathcal{C}(\beta_{\ell,j})=(y_j, z_j)\in M \times G $ and $\mathcal{C}'(\beta_{\ell,j})=(y_j', z_j) \in M' \times G$. Then one can easily verify that the value $\mathcal{Q}_{\psi,\ell } (\mathcal{C}, \mathcal{C}' )$ is, from the definition, given by \begin{equation}\label{deqqg} \sum_{k=1}^{N_{\ell}-1 } \psi \bigl( y_1 z_2^{\epsilon_{2}} \cdots z_k^{\epsilon_{k}} - y_{k+1} + \sum_{j=2}^{k} y_{j} (1-z_{j}^{\epsilon_{j}} ) z_{j+1 }^{\epsilon_{j+1}} z_{j+2 }^{\epsilon_{j+2}} \cdots z_{ k }^{\epsilon_{k}} ,\ y_{k + 1 }' \cdot (1 -z_{k +1}^{-\epsilon_{k+1}}) \bigr) , \end{equation} where the second sum is understood to be zero for $k=1$ (cf. \eqref{bbbdd}, which is the case where all $\epsilon_j=1$ and $\ell=1$).
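The agreement between \eqref{deqqg} and \eqref{bbbdd} rests on the telescoping identity $\sum_{j=1}^{k}(y_j - y_{j+1})z_{j+1}\cdots z_k = y_1 z_2\cdots z_k - y_{k+1} + \sum_{j=2}^{k} y_j(1-z_j)z_{j+1}\cdots z_k$, which can be checked in the free noncommutative ring; the sketch below (a sanity check, not part of the argument) does so for $k=5$.

```python
# noncommutative polynomials over Z, encoded as {word-tuple: coefficient}
def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
        if r[w] == 0:
            del r[w]
    return r

def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = w1 + w2
            r[w] = r.get(w, 0) + c1*c2
            if r[w] == 0:
                del r[w]
    return r

def neg(p):
    return {w: -c for w, c in p.items()}

one = {(): 1}
sym = lambda s: {(s,): 1}

k = 5
y = [sym("y%d" % j) for j in range(1, k + 2)]   # y[j-1] stands for y_j
z = [sym("z%d" % j) for j in range(1, k + 2)]   # z[j-1] stands for z_j

def word(fs):
    p = one
    for f in fs:
        p = mul(p, f)
    return p

# telescoped form:  sum_{j=1}^{k} (y_j - y_{j+1}) z_{j+1} ... z_k
lhs = {}
for j in range(1, k + 1):
    lhs = add(lhs, mul(add(y[j-1], neg(y[j])), word(z[j:k])))

# expanded form, the k-th first argument of (deqqg) with all epsilon_j = 1:
#   y_1 z_2...z_k - y_{k+1} + sum_{j=2}^{k} y_j (1 - z_j) z_{j+1}...z_k
rhs = add(mul(y[0], word(z[1:k])), neg(y[k]))
for j in range(2, k + 1):
    rhs = add(rhs, mul(mul(y[j-1], add(one, neg(z[j-1]))), word(z[j:k])))

assert lhs == rhs
print("telescoping identity verified for k =", k)
```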
On the other hand, let us compute the cup products (Lemmas \ref{2gs} and \ref{2g2s}). To this end, we now introduce a 2-cycle. Consider the abelian subgroup $ \langle \mathfrak{m}_{\ell, j} \rangle \cong {\Bbb Z} $ generated by the meridian $\mathfrak{m}_{\ell, j}$ with respect to the arc $\beta_{\ell, j}$. Then, we write $ \mathfrak{M}_{\ell}$ for the disjoint union $ \langle \mathfrak{m}_{\ell, 1} \rangle \sqcup \langle \mathfrak{m}_{\ell, 2} \rangle \sqcup \cdots \sqcup \langle \mathfrak{m}_{\ell, N_{\ell}} \rangle $, and $ \mathfrak{M}_{\rm arc }$ for the whole union $ \sqcup_{\gamma } \langle \mathfrak{m}_{\gamma} \rangle$ running over all arcs $\gamma $.
Let us define an element $ \hat{\mu}_\ell^{\rm pre} $ in the relative complex $ C_2(\pi_L , \partial \pi_L \sqcup \mathfrak{M}_{\ell}; {\Bbb Z})$ with trivial coefficients to be $$ ((1 , 1), \mathfrak{l}_{\ell}) +\sum_{k=1}^{N_{\ell}-1} ( (\mathfrak{m}_{\ell, 1}^{\epsilon_1} \cdots \mathfrak{m}_{\ell, k}^{\epsilon_k}, \mathfrak{m}_{\ell, k+1}^{\epsilon_{k+1}}) , 1) - \sum_{k=1}^{N_{\ell}} ( (1,1) , \mathfrak{m}_{\ell, k}^{\epsilon_{k}}) .$$ Here, the last term has only the non-trivial $ (k +\# L+1)$-th component $\mathfrak{m}_{\ell, k}^{\epsilon_k }$.
Then we can easily see that $\hat{\mu}_\ell^{\rm pre }$ is a 2-cycle. Moreover, it is easy to verify the following lemma: \begin{lem}\label{2gs} Take the inclusion pair $\iota_Y$ in Remark \ref{clAl221}, and the relative composite map $$r_Y:\bigl( K(\pi_1 (Y),1 ) , \ K(\partial \pi_1 (Y),1 ) \bigr) \longrightarrow \bigl( K(\pi_1 (Y),1 ) , \ K(\partial \pi_1 (Y) \sqcup \mathfrak{M}_{\ell},1 ) \bigr)$$ induced from the inclusion pair $ (\pi_1 (Y) , \ \partial \pi_1 (Y) )\rightarrow (\pi_1 (Y) , \ \partial \pi_1 (Y) \sqcup \mathfrak{M}_{\ell})$. Consider the $\ell$-th 2-cycle $ \mu_{\ell} \in H_2( Y_L ,\partial Y_L;{\Bbb Z})\cong {\Bbb Z}^{\# L} $ as before. Then $(r_Y \circ \iota_Y )_*(\mu_{\ell})= \hat{\mu}^{\rm pre }_{\ell}. $ \end{lem} \begin{rem}\label{2gs22} In some cases with $\# L >1$, the homology class $\hat{\mu}^{\rm pre }_{\ell}$ vanishes. For example, if $L$ is the Hopf link,
$Y_L$ is homotopy equivalent to one of the boundary tori $S^1 \times S^1$. Hence, we can easily verify that, for any local system $M$, the second homology $ H_2(\pi_1 (Y),\ \partial \pi_1 (Y) ;M)$ vanishes.
In comparison with Proposition \ref{aa11c}, we claim that every bilinear form $\mathcal{Q}_{\psi}$ of the Hopf link $L$ is trivial. Actually, for the diagram $D$ with two arcs $\alpha_1$ and $ \alpha_2$, the formula \eqref{aac} becomes $ (x_1-x_2) (1-z_1)= (x_1-x_2)(1 -z_2 ) =0$, and the formula \eqref{bbbdd} for $\mathcal{Q}_{\psi}$ reduces to $ \psi(x_1-x_2,\ x_2 (1-z_\ell^{-1}))= 0$ by the $G$-invariance.
\end{rem}
Next, for $\ell \leq \# L,$ we will set up a homomorphism between the sets of 1-cocycles $$ \zeta_{\ell}: Z^1(\pi_L , \partial \pi_L ;M) \longrightarrow Z^1(\pi_L , \partial \pi_L \sqcup \mathfrak{M}_\ell;M) $$ as follows. Recall the notation $b_j \in M $ from the proof in \S \ref{yy43}. By Lemma \ref{clAl1}, every element of $Z^1(\pi_L , \partial \pi_L ;M) $ can be represented by a homomorphism $ \widetilde{f}$ together with $(a_1, \dots, a_{\#L}) \in M^{\#L}$ satisfying $\widetilde{f}( \mathfrak{m}_{\ell,1})= \kappa (a_\ell , f(\mathfrak{m}_{\ell,1})) .$ Hence, the correspondence $$ ( \widetilde{f}, a_1, \dots, a_{\# L}) \longmapsto (\widetilde{f}, a_1, \dots, a_{\# L}, b_1, \dots, b_{N_{\ell}} ) $$ yields the desired homomorphism $\zeta_{\ell} $. In addition, by iterating the process, we can similarly obtain a homomorphism $\zeta: Z^1(\pi_L , \partial \pi_L ;M) \rightarrow Z^1(\pi_L , \partial \pi_L \sqcup \mathfrak{M}_{\rm arc} ;M)$.
We will use these $\zeta$ and $\zeta_{\ell}$ to recover the bilinear form $\mathcal{Q}_{\psi}$ from some cup product: \begin{lem}\label{2g2s} For any two colorings $\mathcal{C}$ and $\mathcal{C}' $, consider the cup product of the form $$ \mathcal{K}_{\mathcal{C}, \mathcal{C}'}:= \bigl( \zeta_{\ell} \circ \Omega(\mathcal{C}) \bigr) \smile \bigl( \zeta'_{\ell} \circ \Omega'(\mathcal{C}')\bigr) \in Z^2 ( \pi_L , \partial \pi_L \sqcup \mathfrak{M}_{\ell};M \otimes M'). $$ Then, the pairing $ \psi ( \langle \mathcal{K}_{\mathcal{C}, \mathcal{C}'},\ \hat{\mu}^{\rm pre }_{\ell} \rangle) $ is equal to the value $\mathcal{Q}_{\psi}(\mathcal{C}, \mathcal{C}') . $ \end{lem} \begin{proof} By Lemma \ref{clAl1}, the composite $ \zeta_{\ell}^{(')} \circ \Omega^{(')}(\mathcal{C}^{(')})$ forms $ (\widetilde{f}^{(')}, a_1^{(')}, \dots, a_{\# L}^{(')}, b_1^{(')}, \dots, b_{N_{\ell}}^{(')} ) $. Then
the cup product $ \mathcal{K}_{\mathcal{C}, \mathcal{C}'}$ is, by definition, formulated as $$ ( \widetilde{f} \smile \widetilde{f}' , \ a_1 \otimes \widetilde{f}', \dots, a_{\# L}\otimes \widetilde{f}', \ b_1\otimes \widetilde{f}', \dots, \ b_{N_{\ell}} \otimes \widetilde{f}' ) . $$
Write $ \hat{\mu}_{(1)}, \ \hat{\mu}_{(2)}$ and $ \ \hat{\mu}_{(3)} $ for the first, second and third term in $\hat{\mu}^{\rm pre }_{\ell} $, respectively.
We will compute the pairings $ \langle \mathcal{K}_{\mathcal{C}, \mathcal{C}'},\ \hat{\mu}_{(i)} \rangle$. Note from the definitions that the third term $\langle \mathcal{K}_{\mathcal{C}, \mathcal{C}'},\ \hat{\mu}_{(3)} \rangle $ is $- \sum_{k=1}^{N_{\ell} } b_{k}\otimes b_{k}'(1-z_{k}^{-\epsilon_{k}} ) $. Next, the first one $\langle \mathcal{K}_{\mathcal{C}, \mathcal{C}'},\ \hat{\mu}_{(1)} \rangle = b_1 \otimes \tilde{ f} (\mathfrak{l}_{\ell })$ can be rewritten as $$\bigl( b_1 z_1^{\epsilon_1} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}}\bigr)\otimes \bigl( \sum_{k=1}^{N_{\ell}} b_{k}' (1-z_{k}^{\epsilon_{k}} ) z_{k+1 }^{\epsilon_{k+1}} z_{k+2 }^{\epsilon_{k+2}} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}} \bigr) = - \sum_{k=1}^{N_{\ell}} \bigl( b_1 z_1^{\epsilon_1} \cdots z_{k -1 }^{\epsilon_{k-1 }}\bigr)\otimes \bigl( b_{k }' (1-z_{k}^{ -\epsilon_{k}} ) \bigr).$$
Finally, we now compute the second term as \[\psi \langle \mathcal{K}_{\mathcal{C}, \mathcal{C}'},\ \hat{\mu}_{(2)} \rangle = \sum_{k=1}^{N_{\ell}-1 } \psi \bigl( \sum_{j=1 }^k b_j(1-z_{j}^{\epsilon_j}) z_{j+1}^{\epsilon_{j+1}} \cdots z_{k+1}^{\epsilon_{k+1}} ,\ b_{k + 1 }' \cdot (1 -z_{k +1}^{\epsilon_{k+1}}) \bigr) \] \[ = \sum_{k=1}^{N_{\ell}-1 } \psi \bigl( -b_1 z_1^{\epsilon_1} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}}+ b_1 z_2^{\epsilon_2} \cdots z_{N_{\ell}}^{\epsilon_{N_{\ell}}} + \sum_{j=2}^k b_j(1-z_{j}^{\epsilon_j}) z_{j+1} ^{\epsilon_{j+1}} \cdots z_{k}^{\epsilon_{k}} ,\ b_{k + 1 }' \cdot (1 -z_{k +1}^{- \epsilon_{k+1}}) \bigr) .\] Here, notice that this first term equals $\psi( b_1 \otimes \tilde{ f} (\mathfrak{l}_{\ell }) +
( b_1 , b_{1}' -b'_{1} z_{1}^{-\epsilon_{1}}))$.
To summarize, comparing the sum $ \langle \mathcal{K}_{\mathcal{C}, \mathcal{C}'},\ \hat{\mu}_{(1)}+ \hat{\mu}_{(2)}+ \hat{\mu}_{(3)} \rangle $ with the formula \eqref{deqqg} of $\mathcal{Q}_{\psi}(\mathcal{C}, \mathcal{C}') $ immediately yields the desired equality.
\end{proof} As the next step, let us reduce the 2-cycle $\hat{\mu}_{\ell}^{\rm pre}$ to a 2-cycle in $ C_2(\pi_L , \partial \pi_L ;{\Bbb Z}) $:
\begin{lem}\label{2g2s22} Take two arcs $\alpha$ and $ \gamma$ from the same link component, and consider the associated meridians $ \mathfrak{m}_{\alpha}$ and $ \mathfrak{m}_{ \gamma} \in \pi_L$. Then there exists $ \nu^{\rm pre}_{\alpha, \gamma } \in C_2(\pi_L;{\Bbb Z} ) $ such that the 2-chain $ \nu_{\alpha, \gamma} \in C_2(\pi_L , \partial \pi_L \sqcup \mathfrak{M}_{\rm arc};{\Bbb Z} ) $ of the form \begin{equation}\label{aa6} (\nu_{\alpha, \gamma }^{\rm pre}, 1) -( (1,1), \mathfrak{m}_{\alpha} )- ( (1,1), \mathfrak{m}_{ \gamma} ) \end{equation} is a 2-cycle and that the following pairing is zero: \begin{equation}\label{010101} \langle \bigl( \zeta \circ \Omega (\mathcal{C}) \bigr) \smile \bigl( \zeta' \circ \Omega' (\mathcal{C}')\bigr) , \ \nu_{\alpha, \gamma } \rangle \in (M \otimes M')_{\pi_L }. \end{equation} \end{lem} \begin{proof} Without loss of generality, we may assume that $\alpha$ is next to $ \beta $ and that the arc $\beta $ separates $\alpha$ from $ \gamma$ (see Figure \ref{koutenpn}). Let us denote by $\epsilon$ the sign of the crossing, and
define $\nu^{\rm pre}_{\alpha, \gamma }$ to be $$ (\mathfrak{m}_{\alpha}, \mathfrak{m}_{\beta}^{\epsilon}) + (\mathfrak{m}_{\beta}^{-\epsilon} , \mathfrak{m}_{\alpha} \mathfrak{m}_{\beta}^{\epsilon})-(\mathfrak{m}_{\beta}^{\epsilon},\mathfrak{m}_{\beta}^{-\epsilon} )-(1,1) \in C_2(\pi_L;{\Bbb Z} ). $$ Since $ \partial_2(\nu^{\rm pre}_{\alpha, \gamma})= (\mathfrak{m}_{\alpha})+( \mathfrak{m}_{\beta}^{-\epsilon}\mathfrak{m}_{\alpha} \mathfrak{m}_{\beta}^{\epsilon})= (\mathfrak{m}_{\alpha})+(\mathfrak{m}_{ \gamma}) $, the 2-chain $ \nu_{\alpha, \gamma }$ in \eqref{aa6} is a 2-cycle. In addition, after a computation, we can verify that the pairing \eqref{010101} is zero.
\end{proof} Using the above preparation, we now prove Theorem \ref{mainthm2} and Proposition \ref{aa1133c}. \begin{proof}[Proof of Theorem \ref{mainthm2}] To start, we will formulate explicitly the 2-cycle $\hat{\mu}_\ell $ in $C_2(\pi_L, \partial \pi_L;{\Bbb Z} ). $ Since the arc $\beta_i $ lies on the same link component as $\gamma_{\ell_i}$ for some $\ell_i$, we take the 2-cycle $ \nu_{\ell_i, \beta_i } $ obtained in Lemma \ref{2g2s22}.
Put a 2-cycle $d_i$ of the form $$ \bigl( ( \mathfrak{m}_{\ell_i}, \mathfrak{m}_{\ell_i}^{-1}) , 1) - ( (1,1), \mathfrak{m}_{\ell_i}) - ((1,1), \mathfrak{m}_{\ell_i} ), $$ where the second (resp. third) term has only a non-trivial element in the $( i+1) $-th (resp. $\beta_i $-th) component. Using the 2-cycle $\hat{\mu}_\ell^{\rm pre}$, let us set $ \hat{\mu}_\ell := \hat{\mu}_\ell^{\rm pre} + \sum_{i=2}^{N_{\ell}} (\nu_{\ell_i, \beta_i } -d_i) $. By construction, $ \hat{\mu}_\ell$ is also a 2-cycle and is presented by some elements of $(\pi_L,\partial \pi_L) $; consequently, it lies in $ C_2(\pi_L,\partial \pi_L).$ Furthermore, from the definitions of $ \zeta_{\ell }$ and $\zeta$, we have the equality $$ \langle \bigl( \zeta \circ \Omega(\mathcal{C}) \bigr) \smile \bigl( \zeta' \circ \Omega' (\mathcal{C}')\bigr) , \ \hat{\mu}_\ell\rangle = \langle \bigl( \zeta_{\ell } \circ \Omega (\mathcal{C}) \bigr)\smile \bigl( \zeta_{\ell }' \circ \Omega' (\mathcal{C}') \bigr), \ \hat{\mu}^{\rm pre}_\ell \rangle \in (M \otimes M')_{\pi_L }. $$ Notice from Lemma \ref{2g2s} that the pairing with $\psi$ is equal to $\mathcal{Q}_{\psi}(\mathcal{C}, \mathcal{C}') $. Hence, the proof is completed. \end{proof}
\begin{proof}[Proof of Proposition \ref{aa1133c}] Since $W$ is $S^2$ with $m$ open disks removed, $W$ and $\partial W$ are Eilenberg-MacLane spaces, and we have the isomorphisms $\pi_L = \pi_1( S^3 \setminus T_{m,m}) \cong {\Bbb Z} \times \pi_1(W ) \cong {\Bbb Z} \times F_{m-1}. $ Here the summand ${\Bbb Z}$ is generated by $a_1 \cdots a_m \in \pi_1( S^3 \setminus T_{m,m})$. Hence, it follows from the assumption $z_1 \cdots z_m = {\mathrm{id}}_M$ and Lemma \ref{clAl1} that the projection $ \mathcal{P}: \pi_1( S^3 \setminus T_{m,m}) \rightarrow \pi_1(W ) $ induces an isomorphism $ \mathcal{P}^*: H^1( W,\partial W ; M ) \rightarrow H^1(\pi_L ,\partial \pi_L ; M )$. Therefore, the required claims immediately follow from Theorem \ref{mainthm2}, which completes the proof. \end{proof} \subsection*{Acknowledgments} The author sincerely expresses his gratitude to
Akio Kawauchi for many useful discussions on the classical Blanchfield pairing. He also thanks Takahiro Kitayama and Masahico Saito for valuable comments.
The work is partially supported by JSPS KAKENHI Grant Number 00646903.
\vskip 1pc
\normalsize
Faculty of Mathematics, Kyushu University, 744, Motooka, Nishi-ku, Fukuoka, 819-0395, Japan
\
E-mail address: {\tt [email protected]}
\end{document} | arXiv |
Hoang Tuy, Phan Thien Thach, Parametric approach to a class of nonconvex global optimization problems. Optimization 19 (1988), 3 - 11.
Nguyen Xuan Tan, Bifurcation from characteristic values for equations concerning Fredholm mappings with applications to partial differential equations II. Application. Math. Nachr. 139 (1988), 7 - 25.
Nguyen Xuan Tan, Bifurcation from characteristic values for equations concerning Fredholm mappings with applications to partial differential equations I. Theory. Math. Nachr. 137 (1988), 175 - 196.
Nguyen Xuan Tan, An analytical study of bifurcation problems for equations involving Fredholm mappings. Proc. of the Royal Soc. of Edinburgh 110 (1988), 199 - 225.
Do Hong Tan, A note on multivalued affine mappings. Studia Univ. Babes-Bolyai Math. 33 (1988), No 4, 55 - 59.
Bui The Tam, T. Tuc, Decomposition for concave programming. Tạp chí Khoa học Tính toán và Điều khiển 4 (1988), 1 - 7 (in Vietnamese).
Pham Huu Sach, Differentiability of set-valued maps in Banch spaces. Math. Nachr. 139 (1988), 215 - 235.
Pham Huu Sach, Calmness, regularity and support priciple. Optimization 19 (1988), 13 - 27.
Hoang Xuan Phu, Optimal control of a hydroelectric power plant with unregulated spilling water. Systems Control Lett. 10 (1988), 131 - 139.
Hoang Xuan Phu, Regulare aufgaben der optimalen steuerung mit linearen zustandsrestriktionen. Z. Anal. Anwendungen 7 (1988), 431 - 440.
Hoang Xuan Phu, Investigation of some inventory problems with linear replenishment cost by the method of region analysis, In: Optimal Control Theory and Economic Analysis, Edited by G. Feichtinger, North-Holland, Amsterdam, Holland, 1988, 195 - 221.
Hoang Xuan Phu, Linear optimal control problem of a system with circuit-free graph structure. Int. J. Control 48 (1988), 1867 - 1882.
Hoang Xuan Phu, Solution of some high-dimensional linear optimal control problems by the method of region analysis. Int. J. Control 47 (1988), 493 - 518.
Vu Quoc Phong, Ju.Y. Ljubich, Asymptotic stability of linear differential equations on Banach spaces. Studia Math. 88 (1988), 37 - 42.
Vu Quoc Phong, Operateurs et representations de Markov presque-periodiques de semigroupes dans les espaces $L^p$. C. R. Acad. Sci. Paris, Série I 307 (1988), 775 - 778.
Vu Quoc Phong, The Perron-Frobenius theory for almost periodic representations in Lp. Teor. Funktsii Funkstional. Anal. i Prilozhen. 49 (1988), 35 - 42 (in Russian).
Vu Quoc Phong, Dissipative almost periodic actions of semigroups. Ukrain. Mat. Zh. 40 (1988), No 1, 110 - 113 (in Russian).
Vu Ngoc Phat, Controllability of linear time-dependent systems with a phase constraint. Avtomatika i Telemekhanika, USSR 8 (1988), 51 - 59. English translation: Automat Remote Control 49 (1988), 998 - 1004.
Vu Ngoc Phat, Controllability of nonlinear discrete-time systems without differentiability assumption. Optimization 19 (1988), 133 - 142.
Vu Ngoc Phat, Approximate controllability of nonlinear discrete-time systems in Banach spaces. Acta Math. Vietnam. 13 (1988), 81 - 88.
Nguyen Van Ngoc, On the solvability of dual integral equations involving fourier transform. Acta Math. Vietnam. 13 (1988), 21 - 30.
Ha Tien Ngoan, A family of solutions for the problems of plane flow. Acta Math. Vietnam. 13 (1988), 97 - 104.
Do Van Luu, Regularity and sufficient optimality conditions for some classes of mathematical programming problems. Acta Math. Vietnam. 13 (1988), 87 - 95.
Do Van Luu, Optimality conditions for discrete minimax problems in infinite - dimensional spaces. Tạp chí Toán học 16 (1988), 15 - 22 (in Vietnamese).
Ngo Van Luoc, Differential boundary value problems for systems of elliptic equations of first order. Dr. Sc. Thesis, Institute of Mathematics, Tbilisi, 1988, 230 p. (in Russian).
Dinh Quang Luu, Decomposition and limits for martingales-like sequences in Banach spaces. Acta Math. Vietnam. 13 (1988), 73 - 78.
Dinh Quang Luu, Summability and amarts of finite order in Fréchet spaces. Acta Math. Hungar. 51 (1988), 71 - 77.
Dinh Quang Luu, The Banach lattice property of L1-amarts. Tạp chí Toán học 16 (1988), 30 - 33.
Le Ngoc Lang and Ngo Van Luoc, On the existence and uniqueness of solutions for a class of evolution equations. Acta Math. Vietnam. 13 (1988), 15 - 22.
Tran Gia Lich, Some mathematical aspects of the calculation of unsteady flow and water pollution on river or open channel system. In: Proc. of the 4th National Conference on Mechanics, Hanoi, 1 (1988), 77 - 83 (in Vietnamese).
Tran Gia Lich, Nguyen Cong Dieu, Mathematical model of vertical two-dimensional density stratified flow. In: Proc. of The 4th National Conference on Machaniscs, Hanoi, 1 (1988), 34 - 38 (in Vietnamese).
Ha Huy Khoai, Sur le théorème de Morera p-adique. Univ. Paris 7, Groupe d'Etude d'Analyse Ultramétrique, 15-ème année, 1987-1988, 29 - 34.
Ha Huy Khoai, Sur la théorie de Nevanlinna p-adique. Univ. Paris 7, Groupe d'Etude d'Analyse Ultramétrique, 15-ème année, 1987-1988, 35 - 39.
Phan Huy Khai, The method of pursuit in linear discrete games with many players II. Acta Math. Vietnam. 13 (1988), 105 - 116 (in Russian).
Dinh Van Huynh, P. F. Smith, Characterizing rings by their modules, Proc. 31st Semester "Classical Algebraic structure", (1988), Banach Center, Warsaw.
Dinh Van Huynh, A note on rings with chain conditions. Acta Math. Hungar. 51 (1988), 65 - 70.
Dinh Van Huynh, Nguyen Viet Dung, A characterization of artinian rings. Glasgow Math. J. 30 (1988), 67 - 73.
Dinh Van Huynh, Phan Dan, On rings with restricted minimum condition. Arch. Math. 51 (1988), 313 - 326.
Dinh Van Huynh, Nguyen Viet Dung, On the cardinality of ideals in artinian rings. Arch. Math. 51 (1988), 213 - 216.
Le Tuan Hoa, On Segre products of affine semigroup rings. Nagoya Math. J. 110 (1988), 113 - 128.
Truong Xuan Duc Ha, Banach spaces of d.c. functions and quasidifferentiable functions. Acta Math. Vietnam. 3 (1988), 55 - 70.
Nguyen Viet Dung, On linearly compact rings. Arch. Math. (Basel) 51 (1988), 327 - 331.
Hoang Dinh Dung, Integral representation of the solution of some hyperbolic systems with degenerate coefficients and their applications. Acta Math. Vietnam. 13 (1988), 153 - 162.
Hoang Dinh Dung, Inverse formulas for the integral representation of some P-analytic functions and their application. Diff. Urav. 24 (1988), 324 - 335.
Do Ngoc Diep, Multidimensional quantization IV. The generic representations. Acta Math. Vietnam. 13 (1988), 67 - 72.
Bui Khoi Dam, The dual space of the martigale Hardy spaces $\mathcall H_{\phi}$with general Young function ${\phi}$. Anal. Math. 14 (1988), 287 - 294.
Nguyen Huu Duc, Nguyen Tien Dai, Stability of a regular geometric interaction between holonomic components. Univ. Iagellonicae Acta Math. Fasciculus XXVII (1988), 325 - 336.
Nguyen Dinh Cong, Stochastic stability test for the highest Lyapunov exponent. Mat. Zametki 43 (1988), 82 - 97; English transl.: Math. Notes 43 (1988), 49 - 57.
Nguyen Dinh Cong, On the stochastic stability of the Lyapunov exponents of equations of arbitrary order. Mat. Sb. 132 (174)(1987), 225 - 243; English transl.: Math. USSR Sb. 60 (1988), 217 - 235.
Nguyen Minh Chuong, On the parabolic pseudodifferential operators of variable order in Sobolev spaces with weighted norms. Acta Mathematica Vietnamica, 13 (1988), 5 - 14.
Nguyen Minh Chuong, Le Quang Trung, On a nonelliptic problem for pseudodifferential operators of variable order. Tap chi Toan hoc 16 (1988), 1 - 5 (in Vietnamese).
Nguyen Minh Chuong, Le Quang Trung, Limit equations for degenerate nonlinear elliptic equations in weighted Sobolev-Orlicz spaces. Uspekhi Matematicheskikh Nauk, 43 (1988), 181 - 182 (in Russian).
Nguyen Minh Chuong, Le Quang Trung, Degenerate elliptic nonlinear differential equations of infinite order in weighted Sobolev - Orlicz spaces. Differentsialnye Uravneniya, 24 (1988), No 3, 535 - 537 (in Russian).
Nguyen Van Chau, On controllability of linear systems and pursuit problem without discrimination of object in linear games. Ph. D. Thesis, Institute of Mathematics, Hanoi, Vietnam, 1988 (in Vietnamese).
Ha Huy Bang, Certain imbedding theorems for the spaces of infinite order of periodic functions. Mat. Zametki 43 (4)(1988), 509 - 517. English transl.: Math. Notes 43 (1988), 293 - 298. | CommonCrawl |
\begin{document}
\title[]{Semi-Markov models and
motion in heterogeneous media}
\address{$1$: Dipartimento di Scienze Statistiche, Sapienza - Universit\`a di Roma}
\address{$2$: Dipartimento di Matematica e applicazioni ``Renato Caccioppoli" - Universit\`a degli studi di Napoli ``Federico II"}
\keywords{Semi-Markov processes, anomalous diffusion, continuous time random walks, Volterra equations, fractional derivatives, subordinators}
\date{\today}
\subjclass[2010]{60K15, 60K40, 60G22}
\author{Costantino Ricciuti$^1$} \author{Bruno Toaldo$^2$}
\begin{abstract} In this paper we study continuous time random walks (CTRWs) such that the holding time in each state has a distribution depending on the state itself. For such processes, we provide integro-differential (backward and forward) equations of Volterra type, exhibiting a position-dependent convolution kernel. Particular attention is devoted to the case where the holding times have a power-law decaying density, whose exponent depends on the state itself, which leads to variable order fractional equations. A suitable limit yields a variable order fractional heat equation, which models anomalous diffusions in heterogeneous media. \end{abstract}
\maketitle \tableofcontents
\section{Introduction} We here consider continuous time random walks (CTRWs) on countable state spaces. It is assumed that every time the walker jumps, the future trajectory becomes independent of its past, namely the next position and the next jump time depend only on the current position; furthermore, at a generic time instant, the future behavior is assumed to also depend on the time already spent in the current position. Such a process is said to be semi-Markovian. If the waiting times between jumps follow an exponential distribution, then, due to the lack of memory property, the random walk is a Markov process.
It is well known that suitable (Markovian) random walks are good approximations of Brownian motion. In the last decades it has been noticed that CTRWs whose waiting times have a distribution with a power-law decay play a central role in statistical physics, because they are good approximations of anomalous diffusion processes, where the mean square displacement grows as $\overline{x^2} \sim t^{\alpha}$, $\alpha \in (0,1)$, and therefore slower than for a standard Brownian motion (for a complete overview on this matter consult \cite{Metzler} and references therein). In these models each site exercises a trapping effect which, in some sense, delays the time with respect to a corresponding Markov process.
It turns out that these facts can be framed in a nice probabilistic setting: to construct a large class of CTRWs, it is sufficient to replace the deterministic time $t$ of a Markov process by an independent inverse stable subordinator (on this point see, for example, the instructive discussion in \cite{Meerschaert1}). It is well known (see for example \cite[page 365]{Kolokoltsov} and \cite{Meerschaert3}) that the transition probabilities of the corresponding CTRW satisfy both the fractional backward and forward equations. Such equations are obtained from the Kolmogorov backward and forward equations by replacing the time derivative with the fractional one, which introduces a memory effect through a convolution integral with a slowly decaying power-law kernel.
A suitable scaling then leads to anomalous diffusion processes, whose p.d.f. solves the Fokker-Planck equation (see, e.g., \cite{Metzler}) \begin{align} \frac{\partial}{\partial t} p(x,y,t)= k \mathcal{D}_t^{1-\alpha}\frac{\partial^2}{\partial x^2} p(x,y,t). \end{align} It has been empirically confirmed (see \cite{Metzler}) that these models are particularly effective in a number of applications, e.g., for modeling diffusion in percolative and porous systems, charge carrier transport in amorphous semiconductors, nuclear magnetic resonance, motion on fractal geometries, dynamics of a bead in a polymeric network, protein conformational dynamics and many others.
We finally stress that many aspects of the theory hold unchanged if the distribution of the holding times is arbitrary, not necessarily with a power-law decay, provided that it satisfies some mild assumptions (see, for example, the discussion in \cite[Section 4]{Meerschaert1}). In this case the random time process is given by the inverse of a generic subordinator and the corresponding backward equations have the form of a Volterra integro-differential equation \begin{align} \frac{d}{dt} \int_0^t p(x,y,s) \, k(t-s) \, ds \, - \, k(t) p(x,y,0) \, = \, \sum_{z} g_{x,z} p(z,y,t), \label{voltintro} \end{align} where $g_{x,y}:=(G)_{x,y}$ and $G$ is the Markovian generator (see \cite{Zhen Qing Chen, Meerschaert3, toaldo, toaldodo} for the general theory and \cite{dovetalip, dovetal} for some particular cases). The fractional case is more familiar in statistical physics because it is widely used in applications.
Up to now, we have only considered the simplest forms of fractional kinetic equations, where the fractional index $\alpha$ is constant. On the other hand, it is clear that further theoretical investigations are required for the description of more complicated (and more realistic) random processes, where the particle moves in an inhomogeneous environment; as we will discuss in the paper, it turns out that this leads to equations of multi-fractional type. Equations with a time-fractional derivative whose order depends on space have been studied in \cite{Orsingher2}, where the authors considered a CTRW, say $X(t)$, $t \geq 0$, such that the function $ f(x,t)=\mathds{E}\{u(X(t))\mid X(0)=x \}$, for a suitable test function $u$, solves the fractional backward equation \begin{align} \mathcal{D}_t^{\alpha (x)}f(x,t)=Gf(x,t) \label{intro1} \end{align} where $\mathcal{D}_t^{\alpha (x)}$ denotes the $\alpha$-fractional derivative in the sense of Caputo-Dzherbashyan, i.e., for $\alpha \in (0,1)$, \begin{align} \mathcal{D}_t^{\alpha } u(t) \, : = \, \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t u(s) \, (t-s)^{-\alpha} \, ds \, - \, \frac{t^{-\alpha}u(0)}{\Gamma(1-\alpha)}, \label{defcaputo} \end{align} for any function $u$ such that the above integral is differentiable. In such a model, the trapping effect is not exercised with the same intensity at all sites. Indeed, when the particle reaches the state $x$, it is trapped for a time interval with density
$\psi (t)\sim t^{-1-\alpha (x)}$ before jumping to another point. Thus the time delay is stronger when the particle is located at points with small values of $\alpha$. This leads to the fundamental fact regarding the time-change relation $X(t)=M(L(t))$: the time process $L$ and the Markov process $M$ are not independent. Such a construction is far from trivial, since $L$ is the right-continuous inverse of a non-decreasing additive process, also called a time-inhomogeneous subordinator (for basic information consult \cite{Sato} and \cite{Orsingher1}).
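As a quick sanity check on the definition \eqref{defcaputo}: for differentiable $u$ the Caputo derivative equals $\frac{1}{\Gamma(1-\alpha)}\int_0^t u'(s)\,(t-s)^{-\alpha}\,ds$, and for $u(t)=t$ the known closed form is $t^{1-\alpha}/\Gamma(2-\alpha)$. The following Python sketch (illustrative only, with arbitrary values $\alpha=0.6$, $t=2$) verifies this numerically.

```python
from math import gamma
from scipy.integrate import quad

def caputo(u_prime, t, alpha):
    """Caputo derivative of order alpha in (0,1), via the equivalent form
    (1/Gamma(1-alpha)) * int_0^t u'(s) (t-s)^(-alpha) ds,
    valid for differentiable u."""
    val, _ = quad(lambda s: u_prime(s) * (t - s) ** (-alpha), 0, t)
    return val / gamma(1 - alpha)

alpha, t = 0.6, 2.0
numeric = caputo(lambda s: 1.0, t, alpha)    # u(t) = t, so u'(s) = 1
exact = t ** (1 - alpha) / gamma(2 - alpha)  # known closed form for D^alpha t
print(numeric, exact)
```

The integrand has an integrable endpoint singularity at $s=t$, which adaptive quadrature handles to good accuracy.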
In the case of a countable state space $\mathcal{S}$, we here present the derivation of the backward equation \begin{align} \mathcal{D}_t^{\alpha (x)}p(x,y,t)= \sum _z g_{x,z}p(z,y,t). \end{align} We further introduce the forward equation \begin{align} \frac{d}{dt} p(x,y,t)= \sum _zg_{z,y}\, ^R \mathcal{D}_t^{1-\alpha (z)} p(x,z,t) \label{forwgiusta} \end{align} where $^R\mathcal{D}_t^{1-\alpha (x)}$ denotes the fractional derivative in the sense of Riemann-Liouville, i.e., for $\beta \in (0,1),$ \begin{align} ^R\mathcal{D}_t^\beta u(t) : = \frac{1}{\Gamma(1-\beta)} \frac{d}{dt} \int_0^t u(s) \, (t-s)^{-\beta} ds \label{defriemann} \end{align} for any function $u$ such that the above operator is well defined. Further we explain why eq. \eqref{forwgiusta} is a true forward equation in the classical sense of Kolmogorov.
Therefore, this paper also creates a further bridge between the theory of semi-Markov processes and models of motion in heterogeneous media (a different theory concerning motions at finite velocity is discussed in \cite{koro}): on the one hand, there is the theory of semi-Markov processes, on the other hand, there are recent works concerning the fractional diffusion equation with multifractional index: \begin{align} \frac{\partial}{\partial t}p(x,y,t)=\frac{1}{2} \frac{\partial ^2}{\partial x^2}\l k(x) \mathcal{D}_t^{1-\alpha (x)}p(x,y,t) \r. \end{align} Such an equation has been derived in \cite{Gorenflo}, but the related theory is still at an early stage, especially with regard to physical and phenomenological aspects. However, the use of a multi-fractional index $\alpha (x)$ is more realistic in the description of physical phenomena. Indeed, it takes into account the possibility of heterogeneous media, or, more simply, it considers homogeneous media where some impurities are scattered.
Finally, in the same spirit as \eqref{voltintro} we show that the previous models can be generalized by letting the (random) trapping effects have an arbitrary density, subject to some mild assumptions. These models then yield integro-differential equations of Volterra type, with a position-dependent convolution kernel, i.e., \begin{align} \frac{d}{dt} p(x,y,t)= \sum _zg_{z,y}\, \frac{d}{dt} \int_0^t p(x,z,s) \, k(t-s,z) \, ds. \end{align}
The plan of the paper is the following. In Sections \ref{2}, \ref{3}, \ref{4} and \ref{5}, we consider the case of power-law holding times, which is the most familiar case in statistical physics. In particular, in Section 2 we review (in our notation) some known facts on CTRWs in a homogeneous environment, where the fractional index $\alpha$ is assumed to be constant in space. Sections 3 and 4 concern CTRWs in a heterogeneous environment, where the fractional index is assumed to be space-dependent. Section 5 deals with the derivation of the multifractional diffusion equation. In Section 6 many results are extended to the case where the holding times follow more general distributions.
\section{Semi-Markov models for motion in homogeneous media} \label{2} Before moving to heterogeneous media we collect some results from the literature concerning classical models which will be used in the subsequent parts. As we stated in the introduction, the most popular model in statistical physics concerns holding times in each site having a density $\psi(t) \sim Ct^{-\alpha-1}$, $C>0$, $\alpha \in (0,1)$, with a power-law decay. So, for example, $\psi (t) = -(d/dt) E_{\alpha}(-\lambda t^{\alpha})$ (compare with \cite[eq. (26)]{Scalas2006Lecture}), and this is related to fractional processes. Hence we focus attention on this case to present the results concerning this theory.
First, in order to introduce the notation that we will use hereafter, we recall some basic facts regarding the classical theory of stepped Markov processes. Let us consider a continuous time Markov process $M$ with discrete state space $\mathcal{S}$ \begin{align} \label{definizione processo markov} M(t)=X_n \qquad V_n \leq t< V_{n+1}\qquad \text{ where } V_0=0 \qquad V_n= \sum _{k=0}^{n-1} E_k \end{align} where $X_n$ is a homogeneous discrete-time Markov chain on $\mathcal{S}$ with transition probabilities \begin{align}
h_{i,j}=P(X_{n+1}=j|X_n=i), \qquad \forall n \in \mathbb{N} \qquad i,j \in \mathcal{S}, \end{align} and the sojourn times are such that \begin{align}
P(E_n>t|X_n=i)=e^{-\lambda_i t} \qquad \forall n\in \mathbb{N}, \qquad t\geq 0. \end{align} Let \begin{align}
p_{i,j}(t)= P(M(t)=j|M(0)=i) \end{align} be the transition probabilities. The Markovian generator of $M$ is the matrix with elements \begin{align} g_{i,j}= \lambda _i (h_{i,j}-\delta _{i,j}) \end{align} where $\delta _{i,j}$ denotes the Kronecker symbol. Then the infinitesimal transition probabilities have the form \begin{align} p_{i,j}(dt)=\begin{cases} g_{i,j} dt = \lambda _i h_{i,j}dt, &i\neq j,\\ 1+g_{i,i} dt= 1-\lambda _i dt +\lambda _i h_{i,i}dt, \qquad & i=j. \end{cases} \end{align} It is enough for our models to consider the case in which, a.s., \begin{align} \zeta := \sup_n V_n = \sum_n E_n = \infty, \end{align} so that the processes here are non explosive and hence we shall not consider what happens to a process after explosion. Under all these assumptions the functions $p_{i,j}(t)$, with $i,j \in S$ solve the Kolmogorov backward equations (e.g. \cite[Sec. 2.8]{norris}) \begin{align} \label{backward markoviana} \frac{d}{dt} p_{i,j}(t)= \sum _k g_{i,k}p_{k,j}(t), \qquad p_{i,j}(0)= \delta _{i,j}, \end{align} as well as the Kolmogorov forward equations \begin{align} \label{forward markoviana} \frac{d}{dt} p_{i,j}(t)= \sum _k p_{i,k}(t) g_{k,j}, \qquad p_{i,j}(0)= \delta _{i,j}, \end{align} which can be written in compact matrix notation as \begin{align} \frac{d}{dt}P(t)= GP(t)=P(t)G, \qquad P(0)=I. \end{align}
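The Kolmogorov equations \eqref{backward markoviana} and \eqref{forward markoviana} can be checked numerically on a hypothetical two-state chain (rates $\lambda_0=1$, $\lambda_1=2$, with $h_{0,1}=h_{1,0}=1$, so the chain always jumps to the other state), using the matrix exponential $P(t)=e^{tG}$. The following sketch is purely illustrative and not part of the formal development.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state generator: lambda_0 = 1, lambda_1 = 2 and
# h_{0,1} = h_{1,0} = 1, so g_{i,j} = lambda_i (h_{i,j} - delta_{i,j}).
G = np.array([[-1.0, 1.0],
              [2.0, -2.0]])

t, h = 0.7, 1e-6
P = expm(t * G)  # P(t) = e^{tG}, with P(0) = I
dP = (expm((t + h) * G) - expm((t - h) * G)) / (2 * h)  # central difference in t

rows_ok = np.allclose(P.sum(axis=1), 1.0)        # rows of P(t) are stochastic
backward_ok = np.allclose(dP, G @ P, atol=1e-4)  # d/dt P(t) = G P(t)
forward_ok = np.allclose(dP, P @ G, atol=1e-4)   # d/dt P(t) = P(t) G
print(rows_ok, backward_ok, forward_ok)
```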
We now consider a CTRW constructed in the same way of $M$, except for the distribution of the waiting times, which are no longer exponentially distributed. These processes are said to be semi-Markov processes in the sense of Gihman and Skorohod \cite[Chapter 3]{gihman}. Hence let $X(t)$ be \begin{align} X(t)=X_n ,\qquad T_n \leq t< T_{n+1},\qquad \text{where } T_0=0 \qquad T_n= \sum _{k=0}^{n-1} J_k,\label{processo principale} \end{align} where $X_n$ is a homogeneous discrete time Markov chain on $\mathcal{S}$ with transition probabilities \begin{align}
h_{i,j}=P(X_{n+1}=j|X_n=i) ,\qquad \forall n \in \mathbb{N}, \qquad i,j \in \mathcal{S}, \end{align} and the sojourn times are such that \begin{align}
P(J_n>t|X_n=i)=\overline{F}_i(t), \qquad \forall n\in \mathbb{N} ,\qquad t\geq 0, \end{align} where $F_i(t) = 1-\overline{F}_i(t)$ is an arbitrary c.d.f. We will devote particular attention to the case \begin{align}
P(J_n>t|X_n=i)=E_\alpha(-\lambda _i t^{\alpha }), \qquad \forall n\in \mathbb{N} ,\qquad t\geq 0, \end{align} for $\lambda_i>0$, where \begin{align*} E_\alpha(x):= \sum_{k=0}^\infty \frac{x^k}{\Gamma (1+\alpha k)} \end{align*} is the Mittag-Leffler function. In this case (e.g. \cite[eq. (26)]{Scalas2006Lecture}) \begin{align} E_{\alpha}(- \lambda t^{\alpha }) \sim C_\lambda \frac{t^{-\alpha}}{\Gamma(1-\alpha)}, \qquad C_\lambda >0, \end{align} and the corresponding equations are fractional. The characterizing property of semi-Markov processes is the following: by defining \begin{align*} \gamma(s) = s-\sup \ll w \leq s : X(w) \neq X(s) \rr ,
\end{align*} the sojourn time of $X$ in the current position, the couple $(X(t), \gamma (t))$ is a (strict) Markov process \cite[Chapter 3, Section 3, Lemma 2]{gihman}. This is to say that, when conditioning on the trajectory up to time $s$, future events depend not only on the current position $X(s)$, as it is for Markov processes, but also on the quantity $\gamma(s)$. Let \begin{align} \label{bbbb}
p_{i,j}(t)&:=P(X(t)=j|X(0)=i, \gamma(0) =0) \notag \\& =P(X(t+\tau)=j|X(\tau)=i, \gamma(\tau) =0) \end{align} be the transition probabilities (the second equality follows by time-homogeneity). We know from \cite[page 20]{koro} that the transition probabilities solve the renewal equation \begin{align} \label{Markov renewal equation} p_{i,j}(t)= P \{ J_i>t\}\delta _{i,j}+ \int _0^t \sum _l h_{i,l}\, p_{l,j}(t-s)\mathpzc{f}_i(ds) \end{align} where here $\mathpzc{f}_i(t)$ denotes a smooth density of $F_i(t)$. Note that (\ref{Markov renewal equation}), which provides a system of integral equations for the transition probabilities \eqref{bbbb}, comes from a very classical conditioning argument: fixing the time of the first jump $J_0$ and using the Markov property of the semi-Markov process at the jump times yields (see \cite[page 19]{koro} for some details) \begin{align} P \l X(t) = j \mid X(0) = i, \gamma(0)=0 \r \, = \, &P \l X(t) = j, J_0 >t \mid X(0) = i , \gamma(0)=0 \r \notag \\ &+ P \l X(t) = j , J_0 \leq t \mid X(0) = i, \gamma(0)=0 \r. \end{align} A similar approach on semi-Markov processes, with an interesting discussion on exactly solvable models, can be found in \cite{Scalas}.
The process \eqref{processo principale} is known to have a deep connection to fractional calculus. Indeed, the following result holds. \begin{prop} \label{teofrachom}
The transition functions $p_{i,j}(t)$, $i,j \in \mathcal{S}$, defined in \eqref{bbbb} solve the following system of backward equations \begin{align} \label{aaaa} \mathcal{D}_t^{\alpha } \, p_{i,j}(t)&= \sum _k g_{i,k}p_{k,j}(t) , \qquad p_{i,j}(0)= \delta _{i,j}, \end{align} as well as the system of ``forward" equations \begin{align} \label{zzz} \mathcal{D}_t^{\alpha } \, p_{i,j}(t)&= \sum _k p_{i,k}(t) g_{k,j}, \qquad p_{i,j}(0)= \delta _{i,j} . \end{align} \end{prop} \begin{proof} By the convolution theorem we can compute the Laplace transform in \eqref{aaaa} and we obtain \begin{align} s^{\alpha}\widetilde{p}_{i,j}(s)-\frac{s^{\alpha}}{s}p_{i,j}(0) \, = \, \sum _k g_{i,k}\widetilde{p}_{k,j}(s). \label{cccc} \end{align} Instead, by applying the Laplace transform to (\ref{Markov renewal equation}) we have \begin{align} \label{secondac} \widetilde{p}_{i,j}(s)= \frac{s^{\alpha} }{s(\lambda _i+s^{\alpha} )}\delta _{i,j} +\sum _l h_{i,l}\, \widetilde{p}_{l,j}(s)\frac{\lambda _i}{\lambda _i+ s^{\alpha}}. \end{align} By setting $g_{i,j}=\lambda _i (h_{i,j}-\delta _{i,j})$, it is easy to show that \eqref{secondac} reduces to \eqref{cccc}, and using the uniqueness theorem for Laplace transforms, eq. \eqref{aaaa} is proved. Now if we apply again the Laplace transform to \eqref{zzz} we get that \begin{align} s^{\alpha}\widetilde{p}_{i,j}(s)- \frac{s^{\alpha }}{s}p_{i,j}(0)= \sum _k g_{k,j}\widetilde{p}_{i,k}(s), \label{ccccC} \end{align} and thus the solutions of \eqref{aaaa} and \eqref{zzz} coincide. Indeed they can be obtained by solving either the system \eqref{cccc} or \eqref{ccccC} in the variables $\tilde{p}_{i,j}(s)$, which, in compact operator form, reads \begin{align} \label{soluz 1} \tilde{P}(s)= s^{\alpha -1}(s^{\alpha}I-G)^{-1} \end{align} where $I$ is the identity matrix, and this concludes the proof. \end{proof} \begin{os} We observe that \eqref{aaaa} derives directly from the renewal equation, which has a clear backward meaning. Instead the reason why we call eq.
\eqref{zzz} ``forward equation" is that it is formally obtained by introducing the fractional Caputo derivative in the Kolmogorov forward equation \eqref{forward markoviana}; the fact that it has a clear probabilistic interpretation has never been proved. A clear probabilistic meaning to \eqref{zzz} will be derived later in Section \ref{4} from the general form of the forward equation of semi-Markov processes we will present. Concerning forward equations of semi-Markov processes, see also the discussion in \cite{fellersemi}. \end{os}
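The closed form \eqref{soluz 1} can also be verified symbolically: for an illustrative two-state generator with $h_{0,1}=h_{1,0}=1$, the matrix $\tilde{P}(s)=s^{\alpha-1}(s^{\alpha}I-G)^{-1}$ satisfies both transformed systems \eqref{cccc} and \eqref{ccccC}. A SymPy sketch (not part of the paper itself):

```python
import sympy as sp

s, alpha = sp.symbols('s alpha', positive=True)
lam0, lam1 = sp.symbols('lambda0 lambda1', positive=True)

# Illustrative two-state generator with h_{0,1} = h_{1,0} = 1
G = sp.Matrix([[-lam0, lam0],
               [lam1, -lam1]])
I = sp.eye(2)

# Candidate solution in Laplace space: P~(s) = s^(alpha-1) (s^alpha I - G)^(-1)
Ptil = s**(alpha - 1) * (s**alpha * I - G).inv()

backward = s**alpha * Ptil - s**(alpha - 1) * I - G * Ptil  # transformed backward system
forward = s**alpha * Ptil - s**(alpha - 1) * I - Ptil * G   # transformed forward system
ok_b = all(sp.simplify(e) == 0 for e in backward)
ok_f = all(sp.simplify(e) == 0 for e in forward)
print(ok_b, ok_f)
```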
It is well known that \eqref{processo principale} can be equivalently constructed by replacing the time $t$ in \eqref{definizione processo markov} with the right-continuous inverse of an independent $\alpha$-stable subordinator. For the sake of clarity we here report a sketched proof of this fact, which essentially follows \cite[Theorem 2.2]{Meerschaert1}. Let $H$ and $L$ respectively denote the $\alpha$-stable subordinator and its inverse, i.e., \begin{align} L(t) \, := \, \inf \ll s \geq 0 : H(s) >t \rr. \end{align} In order to prove that \eqref{processo principale} is the same process as $M(L(t))$ it is sufficient to prove that $M(L(t))$ has the same Mittag-Leffler intertimes as \eqref{processo principale}. This is clear since to construct a semi-Markov process in the sense of Gihman and Skorohod, as in Section \ref{2}, it is sufficient to have an embedded chain $X_n$ and a sequence of independent r.v.'s representing the holding times. Here $M(t)$ and \eqref{processo principale} have the same embedded chain and thus it remains only to show that they have the same waiting times. Since \begin{align} M(t)=X_n \qquad V_n \leq t< V_{n+1} \end{align} we have \begin{align} M(L (t))=X_n \qquad V_n \leq L(t) < V_{n+1} \end{align} which is equivalent to (by \cite[Lemma 2.1]{Meerschaert1}) \begin{align} M(L (t))=X_n \qquad H(V_n-) \leq t < H(V_{n+1}-). \label{limsin} \end{align} Further, since by \cite[Lemma 2.3.2]{applebaum} we have $H(V_n-)=H(V_n)$ a.s., we can rewrite \eqref{limsin} as \begin{align} M(L (t))=X_n \qquad H(V_n) \leq t < H(V_{n+1}). \end{align} Thus the jump times $\tau_n$ of $M(L(t))$ are such that $\tau_n \stackrel{d}{=} H(V_n)$ and since $H$ has stationary increments, the holding times of $M(L(t))$ become, for any $n$ \begin{align}
\tau_{n+1}-\tau_n= H (V_{n+1})-H (V_n) \stackrel{d}{=} H (E_n),
\end{align} where we used that $V_{n+1}-V_n$ are exponential r.v.'s $E_n$. By a standard conditioning argument we have, under $P \l \cdot \mid X_n=x \r$, \begin{align} \mathds{E}e^{-\eta H(E_n)}= \frac{\lambda _x}{\lambda _x +\eta ^{\alpha}}. \end{align} Now since \cite[eq. (3.4)]{meerbounded} \begin{align} \int_0^\infty e^{-\eta t} E_{\alpha}(-\lambda_x t^{\alpha})dt \, = \, \eta^{\alpha-1} \frac{1}{\lambda_x + \eta^{\alpha}} \end{align} we have, \begin{align} -\int_0^\infty e^{-\eta t} \frac{d}{dt} E_{\alpha}(-\lambda_x t^{\alpha})dt \, = \, \frac{\lambda_x}{\lambda_x+\eta^{\alpha}} \label{lapldensmittag} \end{align} and this shows that the holding times have the same distribution.
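The Laplace transform formula \cite[eq. (3.4)]{meerbounded} used above can be checked numerically in the special case $\alpha=1/2$, where the Mittag-Leffler function has the closed form $E_{1/2}(-x)=e^{x^2}\mathrm{erfc}(x)$, $x\geq 0$: one verifies $\int_0^\infty e^{-\eta t}E_{1/2}(-\lambda\sqrt{t})\,dt=\eta^{-1/2}/(\lambda+\eta^{1/2})$. A sketch with illustrative values $\lambda=1$, $\eta=2$:

```python
from math import sqrt, exp
from scipy.integrate import quad
from scipy.special import erfcx  # erfcx(x) = exp(x^2) * erfc(x)

alpha, lam, eta = 0.5, 1.0, 2.0  # illustrative parameters

# For alpha = 1/2 the Mittag-Leffler function has the closed form
# E_{1/2}(-x) = exp(x^2) erfc(x) = erfcx(x) for x >= 0.
def mittag_leffler_half(t):
    return erfcx(lam * sqrt(t))  # E_{1/2}(-lam * t^{1/2})

lhs, _ = quad(lambda t: exp(-eta * t) * mittag_leffler_half(t), 0, float('inf'))
rhs = eta ** (alpha - 1) / (lam + eta ** alpha)
print(lhs, rhs)
```

Using the scaled complementary error function `erfcx` avoids the overflow of $e^{x^2}$ for large arguments.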
\begin{ex}[The fractional Poisson process] One of the most popular CTRWs with heavy-tailed waiting times is the so-called fractional Poisson process, corresponding to the case where $\lambda_i= \lambda$, $h_{i,i+1}=1$ and $X(0)=0$ a.s. It has been studied by a number of authors (see for example \cite{Beghin2, LaskinS, MainardiS, Meerschaert1, RepinS}).
Its transition probabilities \eqref{bbbb}
solve the system of fractional Kolmogorov ``forward" equations \begin{align} \mathcal{D}_t ^\alpha p_{i,j}(t)& = -\lambda p_{i,j}(t)+\lambda p_{i,j-1}(t) \qquad j\geq i \qquad p_{i,j}(0)=\delta _{i,j} \end{align} as well as the system of Kolmogorov fractional backward equations \begin{align} \mathcal{D}_t^\alpha p_{i,j} (t)&= -\lambda p_{i,j}(t)+\lambda p_{i+1,j}(t) \qquad j\geq i \qquad p_{i,j}(0)= \delta _{i,j} \end{align} and it is easy to check directly that their common explicit solution in Laplace space is \begin{align} \widetilde{p}_{i,j}(s)= \frac{\lambda ^{j-i}s^{\alpha -1}}{(\lambda +s^{\alpha})^{j-i+1}}\label{soluzione poissson classico}. \end{align} However, the equation often reported in the literature (e.g. \cite{Beghin}) is \begin{align} \mathcal{D}_t^\alpha p_k (t)&= -\lambda p_k(t)+\lambda p_{k-1}(t) \qquad k\geq 0 \label{equazione di poisson frazionaria} \\ p_k(0)&= \delta _{k,0} \notag \end{align} which is the ``forward" equation corresponding to the special case $i=0$. In \cite{Meerschaert1}, the authors proved that the fractional Poisson process can be constructed as a standard Poisson process with the time variable replaced by an inverse stable subordinator. \end{ex}
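The Laplace-space solution \eqref{soluzione poissson classico} is easy to verify symbolically: writing $k=j-i\geq 0$, the function $\tilde{p}_k(s)=\lambda^k s^{\alpha-1}/(\lambda+s^{\alpha})^{k+1}$ satisfies the transformed forward recursion $s^{\alpha}\tilde{p}_k - s^{\alpha-1}\delta_{k,0} = -\lambda\tilde{p}_k + \lambda\tilde{p}_{k-1}$, with $\tilde{p}_{-1}:=0$. A SymPy sketch checking the first few values of $k$:

```python
import sympy as sp

s, lam, alpha = sp.symbols('s lambda alpha', positive=True)

def p_tilde(k):
    # Laplace transform of p_{i,j}(t) with k = j - i >= 0, zero otherwise
    if k < 0:
        return sp.Integer(0)
    return lam**k * s**(alpha - 1) / (lam + s**alpha)**(k + 1)

# Transformed "forward" recursion:
# s^alpha p~_k - s^(alpha-1) delta_{k,0} = -lam p~_k + lam p~_{k-1}
checks = []
for k in range(4):
    delta = 1 if k == 0 else 0
    lhs = s**alpha * p_tilde(k) - s**(alpha - 1) * delta
    rhs = -lam * p_tilde(k) + lam * p_tilde(k - 1)
    checks.append(sp.simplify(lhs - rhs) == 0)
print(checks)
```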
\section{Semi-Markov models for motion in heterogeneous media} \label{3} We now show how the tools used for modeling homogeneous media can be adapted to include heterogeneity, in the sense that the trapping effect exerted at different sites depends on the current position. To be consistent with the literature introduced in Section \ref{2}, we first focus on the case in which the holding time at the position $x$ has a density $\psi(t) \sim t^{-\alpha(x)-1}$. How this can be generalized to different decay patterns will be shown later. Hence we now consider a CTRW defined exactly as in \eqref{processo principale}, except for the distribution of the waiting times, which here has a state-dependent fractional order: \begin{align} X(t)=X_n, \qquad T_n \leq t < T_{n+1}, \qquad \text{where } T_0 =0 \qquad T_n= \sum _{k=0}^{n-1} J_k \notag \\
P(J_n>t|X_n=i)= \overline{F}_i(t)=E_{\alpha _i}(-\lambda _i t^{\alpha _i}) \qquad \alpha _i \in (0,1). \label{processo da studiare} \end{align} Using again \cite[eq. (26)]{Scalas2006Lecture}, for a constant $C>0$ depending on $\alpha_i$ and $\lambda_i$ we have, as $t \to \infty$, \begin{align} -\frac{d}{dt} E_{\alpha _i}(-\lambda _i t^{\alpha _i}) \, \sim \, C t^{-\alpha_i-1} \end{align} and thus this is a model of a motion performed in a medium where the trapping effect does not have the same intensity at all sites.
Before moving to the equations, it is useful to show that also in this situation the semi-Markov process $X(t)$ can be interpreted as the time-change of a Markov process. However, this is far from trivial and requires some analysis, which is carried out in the following section.
\subsection{The time-change by a dependent time process} In order to have an interpretation of \eqref{processo da studiare} as a time-changed process, we need the notion of multistable subordinator (see for example \cite{molcha} and \cite{Orsingher1}). Strictly speaking, a multistable subordinator is a generalization of a stable subordinator, in the sense that the stability index is a function of time $\alpha= \alpha (t) \in (0,1)$. The intensity of jumps is described by a time-dependent L\'evy measure \begin{align} \nu (dx,t)= \frac{\alpha (t) x^{-\alpha (t)-1}dx}{\Gamma (1-\alpha (t))} \qquad x>0. \label{levmulti} \end{align} A multistable subordinator $\sigma(t)$, $t \geq 0$ is an additive process in the sense of \cite{Sato}, i.e., it is right-continuous and has independent but non stationary increments. Hence all the finite dimensional distributions are completely determined by the distribution of the increments which can be obtained from \eqref{levmulti}. Therefore (see \cite[Section 2]{Orsingher1} for details on this point) \begin{align} \mathds{E}e^{-\eta (\sigma(t)-\sigma(s))}= e^{-\int _s ^t \eta ^{\alpha (\tau)}d\tau}, \qquad 0\leq s \leq t. \label{laplincr} \end{align} Multistable subordinators are particular cases of a larger class of processes, known as non-homogeneous subordinators, which were introduced in \cite{Orsingher1}.
\begin{defin} \label{defpiec} A multistable subordinator $\sigma(t)$, $t \geq 0$, is said to be piecewise stable if there exists a sequence $\alpha _j \in (0,1)$ and a sequence $t_j\geq 0$ such that
the stability index can be written as \begin{align} \alpha (t)= \alpha _j \qquad t_{j} \leq t< t_{j+1} \end{align} and thus the time-dependent L\'evy measure has the form \begin{align} \label{misura di levy multistabile a tratti} \nu (dx,t) = \frac{\alpha _j x^{-\alpha _j-1}}{\Gamma (1-\alpha _j)}dx , \qquad t_j \leq t < t_{j+1}. \end{align} \end{defin} Note that to each $\alpha _j \in (0,1)$ there corresponds a stable subordinator $H_{\alpha _j}(t)$ with index $\alpha _j$ in such a way that $\sigma$ is defined as \begin{align} \label{incrementi multistabile a tratti} \sigma (t)=\sigma (t_j)+ H_{\alpha _j} (t-t_j) \qquad \forall t\in [t_j, t_{j+1}). \end{align} The following theorem shows that \eqref{processo da studiare} is given by a Markov process time-changed by the inverse of a piecewise stable subordinator. The major novelty consists in the fact that the original process and the random time are not independent as in the classical case. This reflects the fact that the intensity of the trapping effect is not spatially homogeneous, i.e., the time delay depends on the current position.
\begin{te} \label{tetimechfrac} Let $M$ be a Markov process defined as in (\ref{definizione processo markov}). Moreover, let $\sigma^M(t)$ be a multistable (piecewise stable) subordinator dependent on $M$ whose L\'evy measure is given, conditionally on $V_1 =v_1,V_2=v_2, \cdots$ and $X_1=x_1, X_2=x_2, \cdots$ by \begin{align} \nu ^M(dx,t) = \frac{\alpha _{x_j} x^{-\alpha _{x_j}-1}}{\Gamma (1-\alpha _{x_j})}dx, \qquad v_{j} \leq t< v_{j+1}. \end{align} Let $L^M(t)$ be the right-continuous process \begin{align} L^M(t) : = \, \inf \ll s \geq 0 : \sigma^M (s) > t \rr. \end{align} Then the time-changed process $M(L^M(t))$ is the same process as \eqref{processo da studiare}. \end{te} \begin{proof} The proof follows the lines of the discussion at the end of Section \ref{2}. To prove that \eqref{processo da studiare} coincides with $M(L^M(t))$ it is sufficient to prove that $M(L^M(t))$ has the same Mittag-Leffler intertimes as \eqref{processo da studiare}, since $M(t)$ and $X(t)$ have the same embedded chain. Let $V_n$, $n\geq 1$, be the jump times of $M$. Since \begin{align} M(t)=X_n \qquad V_n \leq t< V_{n+1} \label{this} \end{align} we have \begin{align} M(L^M (t))=X_n \qquad V_n \leq L^M(t) < V_{n+1}. \end{align} Now by \cite[Theorem 2.2]{Orsingher1} we know that $\sigma^M(t)$ is strictly increasing and then we can apply \cite[Lemma 2.1]{Meerschaert1} to say that \eqref{this} is equivalent to \begin{align} M(L^M (t))=X_n \qquad \sigma^M(V_n-) \leq t < \sigma^M(V_{n+1}-). \label{equiv} \end{align} Now use \cite[Theorem 2.1]{Orsingher1} to say that, a.s., $\sigma^M(t) = \sigma^M (t-)$ and thus to say that \eqref{equiv} is equivalent to \begin{align} M(L^M (t))=X_n \qquad \sigma^M(V_n) \leq t < \sigma^M(V_{n+1}). \end{align} Thus the jump times $\tau_n$ of $M(L^M(t))$ are such that $\tau_n \stackrel{d}{=}\sigma ^M(V_n)$ and, by \eqref{incrementi multistabile a tratti}, the holding times are such that, under $P \l \cdot \mid X_n = x \r$, \begin{align}
\tau_{n+1}-\tau_n= \sigma^M (V_{n+1})-\sigma^M (V_n) \stackrel{d}{=} H_{\alpha _x} (E_n).
\end{align} By a standard conditioning argument we have \begin{align} \mathds{E}e^{-\eta H_{\alpha _x}(E_n)}= \frac{\lambda _x}{\lambda _x +\eta ^{\alpha _x}}. \end{align} This fact together with formula \eqref{lapldensmittag} concludes the proof. \end{proof}
\subsection{Variable order backward equations} We here derive the backward equation for the semi-Markov process \eqref{processo da studiare} in this new heterogeneous framework and we show that this equation becomes fractional of order $\alpha(i)$, where $i$ is the state from which the transition starts. \begin{te} \label{teoback} The transition functions of \eqref{processo da studiare} $p_{i,j}(t)$, $i,j \in \mathcal{S}$, solve the following system of backward equations \begin{align} \label{aa} \mathcal{D}_t^{\alpha _i} \, p_{i,j}(t)&= \sum _k g_{i,k}p_{k,j}(t) , \qquad p_{i,j}(0)= \delta _{i,j} . \end{align} \end{te} \begin{proof} We can perform a Laplace transform computation similar to the one in the proof of Proposition \ref{teofrachom}. By applying the Laplace transform to \eqref{aa} we obtain \begin{align} s^{\alpha _i}\widetilde{p}_{i,j}(s)-s^{\alpha _i-1}p_{i,j}(0)= \sum _k g_{i,k}\widetilde{p}_{k,j}(s). \label{ccc} \end{align} Instead, by applying the Laplace transform to (\ref{Markov renewal equation}) we have \begin{align} \label{seconda} \widetilde{p}_{i,j}(s)= \frac{s^{\alpha _i} }{s(\lambda _i+s^{\alpha _i} )}\delta _{i,j} +\sum _l h_{i,l}\, \widetilde{p}_{l,j}(s)\frac{\lambda _i}{\lambda _i+ s^{\alpha _i}}. \end{align} By setting $g_{i,j}=\lambda _i (h_{i,j}-\delta _{i,j})$, it is easy to show that \eqref{seconda} can be rewritten as \eqref{ccc}, and by the uniqueness theorem for the Laplace transform the desired result is immediate. \end{proof} The explicit form of the transition probabilities is easily obtained in Laplace space. By applying the Laplace transform, the system of fractional equations \eqref{aa} reduces to the system of linear equations \eqref{ccc} in the variables $\widetilde{p}_{i,j}(s)$. 
In compact matrix form, \eqref{ccc} can be written as \begin{align} \Lambda \widetilde{P}(s)- s ^{-1}\Lambda I= G \widetilde{P}(s) \end{align} where $(\widetilde{P}(s))_{i,j}= \widetilde{p}_{i,j}(s)$, $I$ is the identity matrix, while \begin{align} \Lambda = \text{diag} (s^{\alpha_1}, s^{\alpha_2}, \dots, s^{\alpha_n}, \dots). \end{align} Thus the solution in matrix form is written as \begin{align} \widetilde{P}(s)= \frac{1}{s}(\Lambda -G)^{-1} \Lambda I. \end{align}
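For a concrete instance, the matrix solution can be checked symbolically on a two-state chain: since the rows of $P(t)$ sum to $1$, the rows of $\widetilde{P}(s)$ must sum to $1/s$. A minimal SymPy sketch (the two-state generator below is our choice):

```python
import sympy as sp

s, lam1, lam2, a1, a2 = sp.symbols('s lambda1 lambda2 alpha1 alpha2', positive=True)

# two-state chain that jumps deterministically to the other state: h_{1,2} = h_{2,1} = 1
G = sp.Matrix([[-lam1, lam1], [lam2, -lam2]])
Lam = sp.diag(s**a1, s**a2)

# solution of the backward system in Laplace space
P = (Lam - G).inv() * Lam / s

# rows of P(t) sum to 1, hence rows of the Laplace transform sum to 1/s
for i in range(2):
    assert sp.simplify(P[i, 0] + P[i, 1] - 1/s) == 0
```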
\section{The forward equations of semi-Markov processes in heterogeneous media} \label{4} In the spirit of what happens in the homogeneous case, one may be tempted to look for the forward equation by replacing the ordinary time-derivative in \eqref{forward markoviana} with a variable-order Caputo derivative $\mathcal{D}_t^{\alpha (\cdot)}$, where $(\cdot)$ denotes the final state $j$. However, such an attempt is unsuccessful, since it can be shown that the solution in Laplace space does not coincide with the one of \eqref{aa}. We discuss here an example from the literature in which it is shown that this approach fails.
\begin{ex}[The state dependent fractional Poisson process] \label{expoi1} In the pioneering work \cite{Garra}, the authors studied a generalization of the fractional Poisson process in which the waiting times are independent but not identically distributed. For a given sequence $J_n$, $n\geq 0$, of independent r.v.'s with distribution \begin{align} \label{Mittag leffler distribution state dependent} P(J_k \geq t)= E_{\alpha _k}(-\lambda t^{\alpha _k}), \qquad \alpha _k \in (0,1), \end{align} they defined the state dependent fractional Poisson process as \begin{align} \mathcal{N}(t)=n \qquad T_n \leq t < T_{n+1} \label{definizione state dependent} \end{align} where $T_n= \sum _{k=0}^{n-1} J_k$, $T_0=0$. Further they proved that the state probabilities $p_{k}(t):= P(\mathcal{N}(t)=k \mid \mathcal{N}(0) = 0)$ are such that \begin{align} \label{probabilita di stato state dependent} \widetilde{p}_k(s)= \int _0 ^{\infty}e^{-st} p_k(t)dt= \frac{\lambda ^{k}s^{\alpha_k-1}}{\prod_{i=0}^k (s^{\alpha _i}+\lambda)}. \end{align} The authors noticed that apparently $\mathcal{N}(t)$ is not governed by fractional differential equations, since the state probabilities corresponding to \eqref{probabilita di stato state dependent} do not solve the fractional ``forward" equation with variable order derivative \begin{align} \mathcal{D}_t^{\alpha_k} p_k(t)=-\lambda p_k(t)+\lambda p_{k-1}(t). \label{44} \end{align} Moreover, the construction of $\mathcal{N}(t)$ as a time-changed process was not clear.
An application of our results to this particular situation also sheds light on these problems. In particular, from Theorem \ref{teoback} it follows that the transition probabilities, which have explicit Laplace transform \begin{align} \label{aaa} \tilde{p}_{i,j}(s)= \int _0^\infty e^{-st} p_{i,j}(t)dt= \frac{s^{\alpha _j -1}\lambda ^{j-i}}{\prod _{k=i}^j (\lambda +s^{\alpha _k})}, \end{align} are indeed related to fractional calculus: they solve the following system of fractional Kolmogorov backward equations \begin{align} \label{equazione backward Poisson state-dependent} \mathcal{D}_t^{\alpha _i} p_{i,j}(t)&=-\lambda p_{i,j}(t)+\lambda p_{i+1,j}(t) \qquad j\geq i\\ p_{i,j}(0)&= \delta _{i,j}. \notag \end{align} Moreover, by Theorem \ref{tetimechfrac} it follows that to obtain \eqref{definizione state dependent}, a standard Poisson process must be composed with a dependent multistable subordinator. In what follows we determine the structure of the forward equations for semi-Markov processes defined as in \eqref{processo da studiare}. The state-dependent fractional Poisson process is a particular case, so a fractional forward equation for this process can indeed be written down, and it does not coincide with \eqref{44}.
We observe that in this framework the authors of \cite{Beghin3} also studied the time-change of a Poisson process by means of the inverse of a multistable subordinator. However, in that case the multistable subordinator is assumed to be independent of the Poisson process. Hence the resulting process is a random walk with independent but non identically distributed sojourn times, whose probability law is a time-inhomogeneous generalization of the Mittag-Leffler distribution, which does not coincide with \eqref{Mittag leffler distribution state dependent}. Hence, the process in \cite{Beghin3} is not a model for motions in heterogeneous media, but rather in an environment whose physical conditions change over time. \end{ex}
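One can verify directly that the transform \eqref{aaa} solves \eqref{equazione backward Poisson state-dependent} in Laplace space, i.e., $s^{\alpha_i}\widetilde{p}_{i,j}(s) - s^{\alpha_i-1}\delta_{i,j} = -\lambda \widetilde{p}_{i,j}(s) + \lambda \widetilde{p}_{i+1,j}(s)$. Below is a small symbolic sketch for the first few states (the SymPy setup is ours):

```python
import sympy as sp

s, lam = sp.symbols('s lambda', positive=True)
a = sp.symbols('alpha0:4', positive=True)

def p(i, j):
    # Laplace transform (aaa) of the state-dependent fractional Poisson process;
    # zero below the diagonal
    if j < i:
        return sp.S(0)
    return s**(a[j] - 1) * lam**(j - i) / sp.Mul(*[lam + s**a[k] for k in range(i, j + 1)])

# backward equation in Laplace space:
# s^{alpha_i} p_{i,j} - s^{alpha_i - 1} delta_{i,j} = -lam p_{i,j} + lam p_{i+1,j}
for i in range(3):
    for j in range(i, 4):
        delta = 1 if i == j else 0
        lhs = s**a[i] * p(i, j) - s**(a[i] - 1) * delta
        rhs = -lam * p(i, j) + lam * p(i + 1, j)
        assert sp.simplify(lhs - rhs) == 0
```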
We now derive the system of forward (or Fokker-Planck) equations governing the process \eqref{processo da studiare}. Since the random walk takes place in a heterogeneous medium, an adequate kinetic description of the process requires variable order fractional operators. The proof of the following theorem is based on the quantities \begin{align*} J_i^+(t)dt& =P((X_{t+dt}=i) \cap (X_t\neq i)) \notag \\ J_i^-(t)dt& = P((X_{t+dt}\neq i) \cap (X_t= i)) \notag \end{align*} where $J_i^+$ and $J_i^-$ represent the gain and the loss fluxes for the state $i$. To deal with $J_i^+$ and $J_i^-$ it is convenient to assume that when the process jumps (i.e., when $t=T_n$ for some $n$) it cannot jump to its current position. Hence we will assume in the following theorem that $h_{i,i}=0$. However, this is not strictly necessary and, by adapting the notation, the assumption can be removed. \begin{os} \label{remrendens} The Markov property is a consequence of the lack of memory property of the exponential distribution, which roughly states that when the Markov process is at $i$ the probability of having a jump in an infinitesimal interval of time $dt$ is $\lambda_idt$, so the rate $\lambda_i$ is constant in time. Since the lack of memory property fails for other distributions, the rate here must vary with time, i.e., it is $\lambda_i u_i(t) dt$. It turns out that in this case the function $u_i(t)$ is given by \begin{align} u_i(t) \, = \, \frac{t^{\alpha_i-1}}{\Gamma(\alpha_i)}, \qquad \alpha_i \in (0,1). \end{align} This fact will be proved in the general situation in Section \ref{6}. We remark that the probability that in an infinitesimal interval there is more than one jump is $o(dt)$. 
This is because by construction we know that $X(t)$ is the same process as $M(t^\prime)$ with $t = \sigma^M(t^\prime)$: since $\sigma^M(t^\prime)$ is, a.s., strictly increasing on any finite interval of time and continuous, within an infinitesimal time interval $dt$ the process $X(t)$ also performs at most one jump. \end{os} \begin{te} \label{teforward} Let $X$ be the process in \eqref{processo da studiare}. Assume that $h_{i,i}=0$. Then the transition probabilities \begin{align*}
p_{l,i}(t)\, = \, P(X(t)=i|X(0)=l, \gamma (0)=0) \qquad l,i\in \mathcal{S} \end{align*} solve the following system of fractional forward equations \begin{align} \frac{d}{dt}p_{l,i}(t)= \sum _k g_{k,i}\, ^R\mathcal{D}_t^{1-\alpha _k} p_{l,k}(t).\label{F} \end{align} \end{te} \begin{proof}
The fact that the probability that the process performs more than one jump in an infinitesimal interval is $o(dt)$ is discussed in Remark \ref{remrendens}. Let $J_i^+(t)dt$ be the probability of reaching the state $i$ during the time interval $[t,t+dt)$, i.e., \begin{align*} J_i^+(t)dt& =P((X_{t+dt}=i) \cap (X_t\neq i)) \end{align*} and let $J_i^-(t)dt$ be the probability of leaving the state $i$ during the time interval $[t,t+dt)$, i.e., \begin{align*} J_i^-(t)dt& = P((X_{t+dt}\neq i) \cap (X_t= i)). \end{align*} Then we have under $P:=P \l \cdot \mid X(0)=l, \gamma (0) = 0 \r$ \begin{align} &P(\{ X_{t+dt}=i\}) \notag \\ &= P(\{ X_{t+dt}=i\} \cap \{X_t=i\}) + P(\{ X_{t+dt}=i\} \cap \{X_t\neq i\})\notag \\ &= P(\{X_t=i\})-P(\{X_{t+dt } \neq i \} \cap \{X_t=i\})+P(\{ X_{t+dt}=i\} \cap \{X_t\neq i\}) \label{forw} \end{align} which, in our notation, reads \begin{align*} p_{l,i}(t+dt)= p_{l,i}(t)-J_i^-(t)dt+J_i^+(t)dt \end{align*} or, equivalently, \begin{align} \frac{d}{dt}p_{l,i}(t)= J^+_i(t)-J^-_i(t), \label{A} \qquad t\geq 0. \end{align} By the total probability law, the ingoing flux can be computed as \begin{align*} J^+_i(t)= \sum _{k\neq i} J^-_k(t)h_{k,i} \end{align*} where $(h_{k,i})$ is the transition matrix of the embedded chain. Then we obtain the following balance equation (expressing the conservation of probability mass) \begin{align} \frac{d}{dt}p_{l,i}(t)&= \sum _{k\neq i} J^-_k(t)h_{k,i}-J^-_i(t) \notag \end{align} which can also be written as \begin{align} &\frac{d}{dt}p_{l,i}(t)= \sum _k J^-_k(t) (h_{k,i}-\delta _{k,i}). \label{C} \end{align} The main goal is to compute the outgoing flux $J_i^-(t)$. It can be viewed as the sum of two contributions \begin{align*} J^-_i(t)dt= A_i^1(t)dt+A_i^2(t)dt \end{align*} where $A_i^1(t)dt$ is the probability to be initially in the state \textit{i} and to remain there for a time exactly equal to $t$ and $A_i^2(t)dt$ is the probability to reach the state \textit{i} at time $t'<t$ and to remain there for a time $t-t'$. 
Thus \begin{align*} J^-_i(t)= f_i(t)p_{l,i}(0)+\int_0^t f_i(t-t')J^+_i(t')dt' \end{align*} where $p_{l,i}(0)= \delta _{l,i}$ and $f_i(t)$ is the probability density of the holding time in $i$. We can eliminate $J^+_i(t)$ since by \eqref{A} we have $J^+_i(t)= \frac{d}{dt}p_{l,i}(t)+J^-_i(t)$. We thus obtain \begin{align} J^-_i(t)= f_i(t)p_{l,i}(0)+\int_0^t f_i(t-t')\bigl(\frac{d}{dt'}p_{l,i}(t')+J^-_i(t') \bigr )dt' \label{B} \end{align} which is an integral equation in $J^-_i(t)$. By applying the Laplace transform to (\ref{B}) we obtain \begin{align*} \tilde{J}^-_i(s)= \tilde{f}_i(s)p_{l,i}(0)+ \tilde{f}_i(s) \bigl (s\, \tilde{p}_{l,i}(s)-p_{l,i}(0) + \tilde{J}^-_i(s) \bigr) \end{align*} which gives \begin{align}\label{E} \tilde{J}^-_i(s)=\frac{s \tilde{f}_i(s)}{1-\tilde{f}_i(s)}\tilde{p}_{l,i}(s). \end{align} By assuming that the holding times follow a Mittag-Leffler distribution we have by \eqref{lapldensmittag} that \begin{align*} \tilde{f}_i(s)= \frac{\lambda_i}{\lambda_i+s^{\alpha_i}} \end{align*} and thus formula \eqref{E} becomes \begin{align*} \tilde{J}^-_i(s)= \lambda_is^{1-\alpha _i}\, \tilde{p}_{l,i}(s). \end{align*} Recalling the definition of the Riemann-Liouville derivative \eqref{defriemann}, we have \begin{align} J^-_i(t)= \lambda_i \, ^R\mathcal{D}_t^{1-\alpha _i}p_{l,i}(t) \label{416} \end{align} and \eqref{C} reduces to \begin{align} \frac{d}{dt}p_{l,i}(t)&= \sum_k \lambda _k \, ^R\mathcal{D}_t ^{1-\alpha _k}p_{l,k}(t)(h_{k,i}-\delta _{k,i})\notag \\ &= \sum _k \, ^R\mathcal{D}_t^{1-\alpha _ k}p_{l,k}(t)g_{k,i} \end{align} and the proof is complete. \end{proof} \begin{os} We remark that the reason why such an equation is called forward is that the operator on the right side acts on the ``forward'' variable $i$ but leaves the backward variable $l$ unchanged. Indeed, the equation is derived by conditioning on the event of the last jump (reaching the final state $i$) that may have occurred in a narrow interval near $t$. 
This last aspect is particularly clear by looking at \eqref{forw}. \end{os}
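The key Laplace-space step of the proof, namely passing from \eqref{E} to \eqref{416}, can be reproduced symbolically. A minimal SymPy sketch ($\tilde p$ is kept as an abstract symbol):

```python
import sympy as sp

s, lam, alpha, ptilde = sp.symbols('s lambda alpha ptilde', positive=True)

f = lam / (lam + s**alpha)            # Mittag-Leffler holding-time density, eq. (lapldensmittag)
Jminus = s * f / (1 - f) * ptilde     # outgoing flux, formula (E)

# formula (E) reduces to lam * s^{1-alpha} * ptilde, i.e. a Riemann-Liouville
# derivative of order 1-alpha in the time domain
assert sp.simplify(Jminus - lam * s**(1 - alpha) * ptilde) == 0
```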
\begin{os} If $X$ is a Markov process, then the sojourn times are exponentially distributed, i.e. \begin{align*} \tilde{f}_i(s)= \frac{\lambda_i}{\lambda_i+s}. \end{align*} Then (\ref{E}) reduces to \begin{align*} \tilde{J}^-_i(s)= \lambda _i \, \tilde{p}_{l,i}(s) \end{align*} namely \begin{align} J^-_i(t)= \lambda _i p_{l,i}(t).\label{z} \end{align} Then the balance equation (\ref{C}) reduces to the forward Kolmogorov equation \begin{align*} \frac{d}{dt}p_{l,i}(t)&= \sum _k J^-_k(t) (h_{k,i}-\delta _{k,i})\\ &= \sum _k \lambda _k(h_{k,i}-\delta _{k,i})p_{l,k}(t)\\ &= \sum _k p_{l,k}(t) g_{k,i}. \end{align*} From the physical point of view, the dynamics of the Markovian case and that of the CTRW with Mittag-Leffler waiting times differ widely. Indeed, in the Markov case, the outgoing flux $J_i^-(t)$ from the state $i$ at time $t$ is proportional to the concentration of particles at the present time $t$ (see \eqref{z}). Instead, in the CTRW with power law waiting times, the outgoing flux depends on the particle concentration at past times, according to a suitable weight kernel (see \eqref{416}). \end{os}
\begin{ex}[The state dependent fractional Poisson process (continued)] To conclude the discussion of Example \ref{expoi1} we remark here that the forward equation for the state-dependent fractional Poisson process can be written down by using Theorem \ref{teforward}. We have indeed that the probabilities $p_{l,i}(t) := P \l \mathcal{N}(t) = i \mid \mathcal{N}(0) = l \r$ satisfy \begin{align} \frac{d}{dt}p_{l,i}(t) \, = \, \lambda \l \, ^R\mathcal{D}_t^{1-\alpha_{i-1}} p_{l,i-1}(t) - \, ^R\mathcal{D}_t^{1-\alpha_{i}} p_{l,i}(t) \r. \end{align} \end{ex}
\section{Convergence to the variable order fractional diffusion} \label{5} Suppose that the state space $\mathcal{S}$ is embedded in $\mathbb{R}$. Hence our processes can be viewed as processes on $\mathbb{R}$, whose distribution is supported on $\mathcal{S}$. So in this section we consider a suitable scaling limit of a semi-Markov process and we show that the one-time distribution converges to the solution of the forward heat equation on $\mathbb{R}$ with fractional variable order \begin{align} \frac{\partial}{\partial t}p(x,y,t)= \frac{1}{2} \frac{\partial^2}{\partial y^2}\l \, ^R\mathcal{D}_t^{1-\alpha (y)} p(x,y,t)\r \label{G}. \end{align} This equation was first derived in \cite{Gorenflo} precisely in the study of anomalous diffusion in heterogeneous media. Hence our method provides a semi-Markov framework for this equation. The homogeneous case is represented by the time-fractional diffusion equation \begin{align} \mathcal{D}_t^{\alpha }p(x,y,t)= \frac{1}{2} \frac{\partial^2}{\partial x^2} p(x,y,t) \label{classicheat} \end{align} which is well known in the literature and already has a probabilistic interpretation (see Remark \ref{remctrw} below for some details). This equation is related to anomalous diffusion (non-Fickian diffusion); see, for example, \cite{hairer} for a recent application.
Let us assume that the process defined in \eqref{processo da studiare} is a symmetric CTRW with Mittag-Leffler waiting times and with transition probabilities \begin{align} h_{i,j} = \begin{cases} \frac{1}{2}, \qquad & j = i-1, i+1, \notag \\ 0, & j \neq i-1, i+1, \end{cases} \end{align}
and $\lambda_i=\lambda$. Since $i,j$ are labels for points on the real line, we can safely assume that the distance in $\mathbb{R}$ between two neighboring points of $\mathcal{S}$ is constant and equal to $\epsilon$, i.e., $|i-j |=\epsilon$ for $j=i-1,i+1$, where $|\cdot|$ is the Euclidean distance in $\mathbb{R}$. Hence, viewing the process in $\mathbb{R}$, the walker performs jumps of size $\epsilon$. Then define \begin{align} \mathbb{R}^2 \ni (x,y) \mapsto p(x,y, t) \, = \, \begin{cases} p_{x,y}(t), \qquad & (x, y) \in \mathcal{S} \times \mathcal{S}, \notag \\ 0, & \text{otherwise}. \end{cases} \end{align} The forward equation \eqref{F} reduces to \begin{align} \frac{d}{dt}p_{l,i}(t)= \frac{1}{2}\lambda \left[ \, ^R \mathcal{D}_t^{1-\alpha _{i-1}} p_{l,i-1}(t)+ \, ^R \mathcal{D}_t^{1-\alpha _{i+1}} p_{l,i+1}(t)-2 \,^R \mathcal{D}_t^{1-\alpha _{i}} p_{l,i}(t)\right]. \label{forwdiff} \end{align} By considering now the auxiliary function \begin{align} u(x,y,t)= \, ^ R \mathcal{D}_t^{1-\alpha (y)} p(x,y,t) \end{align} we can rewrite \eqref{forwdiff} as \begin{align} \frac{\partial}{\partial t}p(x,y,t)=\frac{1}{2}\lambda \l u(x,y-\epsilon,t)+u(x,y+\epsilon,t)-2u(x,y,t) \r. \end{align} By setting $\lambda = 1/\epsilon^2$ and letting $\epsilon \to 0$, the second derivative $\frac{\partial^2}{\partial y^2}u(x,y,t)$ arises. We thus obtain \begin{align} \frac{\partial}{\partial t}p(x,y,t)= \frac{1}{2} \frac{\partial^2}{\partial y^2}\l \, ^R\mathcal{D}_t^{1-\alpha (y)} p(x,y,t) \r. \end{align} Note that the same scaling limit of a symmetric CTRW on a $d$-dimensional lattice leads to an analogous equation exhibiting the Laplace operator in place of the second order derivative.
Equation \eqref{G} can be obtained phenomenologically by combining the continuity equation \begin{align} \frac{\partial }{\partial t } p(x,y,t)= - \frac{\partial }{\partial y} q(x,y,t) \end{align} with an ad-hoc fractional Fick's law for the flux $q(x,y,t)$: \begin{align} q(x,y,t)=- \, \frac{\partial}{\partial y} \, ^R\mathcal{D}_t^{1-\alpha (y)} p(x,y,t). \end{align} The fractional derivative in this expression provides a weighted average of the density gradient over the prior history, with an averaging kernel that depends on the position $y$.
In terms of probability theory, the picture is completed by the backward heat equation with fractional variable order \begin{align} \mathcal{D}_t^{\alpha (x)} p(x,y,t)= \frac{1}{2} \frac{\partial^2}{\partial x^2}p(x,y,t). \label{H} \end{align} Such an equation was derived in \cite{Orsingher2}, where the authors studied the convergence of the resolvent of semi-Markov evolution operators. Heuristically, eq. \eqref{H} can be obtained from the backward equation \eqref{aa} adapted to the case of a symmetric random walk with Mittag-Leffler waiting times: \begin{align} \mathcal{D}_t^{\alpha _i} p_{i,j}(t)= \frac{1}{2}\lambda \l p_{i+1,j}(t)+p_{i-1,j}(t)-2p_{i,j}(t)\r. \end{align} Indeed, passing to a lattice of size $\epsilon$ we have \begin{align} \mathcal{D}_t^{\alpha (x)}p(x,y,t) = \frac{1}{2}\lambda \bigl ( p(x+\epsilon,y,t)+p(x-\epsilon,y,t)-2p(x,y,t) \bigr ).\end{align} By assuming $\lambda= 1/ \epsilon ^2$, the limit $\epsilon \to 0$ gives the desired result.
\begin{os} In our derivation of the fractional heat equations, the diffusion coefficient is put equal to 1. However, the equations reported in the literature (e.g. \cite{Fetodov, Gorenflo}) usually exhibit a space-dependent diffusion coefficient. To obtain this from a CTRW scheme, it is sufficient to assume a space dependent intensity $\lambda$ such that, in the limit of small $\epsilon$, it is of order $1/\epsilon ^2$, that is $\lambda (x)= \widetilde{\lambda}(x)/ \epsilon ^2$. Then, it is natural to define the diffusion coefficient as \begin{align} k(x)= \widetilde{\lambda}(x) \end{align} and to repeat the same scaling limit argument in order to obtain both the forward equation \begin{align} \frac{\partial}{\partial t}p(x,y,t) \, = \, \frac{1}{2} \frac{\partial^2}{\partial y^2}\l k(y) ^R\mathcal{D}_t^{1-\alpha (y)} p(x,y,t)\r \end{align} and the backward equation \begin{align} \mathcal{D}_t^{\alpha (x)}p(x,y,t)= \frac{1}{2} k(x) \frac{\partial ^2}{\partial x^2} p(x,y,t). \end{align} \end{os}
\begin{os} The mean square displacement of a subdiffusion in a homogeneous medium grows more slowly than that of Brownian motion, i.e., $ \overline{x^2}(t) \sim t^{{\alpha}}$, where $\alpha \in (0,1)$. Such a process can be represented as a Brownian motion delayed by an independent inverse stable subordinator. For a study of its long time asymptotic properties, consult \cite{Shilling}. Concerning subdiffusion in heterogeneous media described by our equation, the picture is much more complicated and some unexpected phenomena arise. For example, in \cite{Fetodov}, the authors find that in the long time limit the CTRW process is localized at the lattice point where $\alpha (x)$ has its minimum, a phenomenon called ``anomalous aggregation''. This suggests that the process does not enjoy the same ergodic properties reported in \cite{Shilling} for subdiffusion in homogeneous media. \end{os}
\begin{os} The transition probability $p(x,y,t)$ is the fundamental solution to the partial differential equation (\ref{H}), and thus it is interesting to consider some well-posedness issues. The most recent result in this direction can be found in \cite{Kian} where the authors consider the following Cauchy problem \begin{align} \begin{cases} \rho(x) \mathcal{D}_t^{\alpha (x)}u(x,t)- \Delta u(x,t)= f(x,t) \qquad x\in \Omega , t \in (0,T)\\ u(x,0)=u_0(x)\\ u(x,t)= 0 \qquad x \in \partial \Omega , t\in (0,T) \end{cases} \end{align} and prove that, under suitable assumptions on the source term $f$ and the initial datum $u_0$, there exists a unique weak solution $u(x,t)$ in the sense of \cite[Thm 2.3]{Kian}. \end{os}
\begin{os} \label{remctrw} There are several results on CTRW limit processes which can be applied in this situation by making some further assumptions \cite{marcincoupled, meertri, Meerschaert2, meerstra, strakahenry}. We discuss here an example. Consider the pair process $\l X_n, T_n \r$ and introduce a scale parameter $c$, so that the process is $\l X_n^c, T_n^c \r$. By making assumptions on the weak convergence of probability measures of the process $\l X_{[u/c]}^c, T_{[u/c]}^c \r \to \l A(t), D(t) \r$ as $c \to 0$ (e.g. as in \cite[Theorem 3.6]{strakahenry}) one has that \begin{align} X^c(t) \, \to \, X^0 (t):= A(E(t-)) \end{align} where $E(t)$ is the hitting-time of $D$. In our situation the processes are not independent and hence ``coupled'' in the CTRW language. Of course, when $A$ is a Brownian motion and $D(t)$ is an independent $\alpha$-stable subordinator we are in the equivalent homogeneous situation of this section: the one-time distribution of $A(E(t))$ indeed solves eq. \eqref{classicheat} in which $\alpha$ is constant \cite{fracCauchy}. Here we can conjecture that in order to obtain a process governed by \eqref{G} we must assume that $A$ is still a Brownian motion and that $D(t)$ is a multistable subordinator $\sigma(t)$ obtained as a limit case of the piecewise stable subordinator of Definition \ref{defpiec}. So we argue that $D(t)$ must be a multistable subordinator (dependent on $A(t)$) whose L\'evy measure is the limit of the L\'evy measure of a piecewise stable subordinator, i.e., conditionally on a Brownian path $A(t, \omega)$ \begin{align} d\nu(ds, t)/ds \, = \, \int_\mathbb{R} \frac{\alpha(x)s^{-\alpha(x)-1}}{\Gamma(1-\alpha(x))} \, \mathds{1}_{\ll A(t, \omega) = x \rr} dx. \label{multilim} \end{align} Then one can define $E(t)$ as the hitting-time of $\sigma(t)$. This requires further investigation. \end{os}
\section{Arbitrary holding times and integro-differential Volterra equations} \label{6} The construction of Theorem \ref{tetimechfrac} is based on the notion of multistable subordinator. In \cite{Orsingher1} the authors introduced the more general class of inhomogeneous subordinators, i.e., non-decreasing processes with independent and non-stationary increments. By using these, it is possible to define a new type of CTRW, which is constructed in the same way as \eqref{processo da studiare}, except for the distributions of the waiting times, which are no longer Mittag-Leffler.
Indeed, for any $i \in \mathcal{S}$, consider a L\'evy measure $\nu (dx, i)$ which defines a homogeneous subordinator $\sigma ^i$ such that \begin{align*} \mathbb{E} e^{-s\sigma ^i (t)}= e^{-t f(s,i)} \end{align*} where \begin{align} f(s,i) \, = \, \int_0^\infty \l 1-e^{-sw} \r \nu(dw,i) \end{align} is the Laplace exponent of $\sigma ^i$. Let $L^i(t) = \inf \ll \tau : \sigma^i (\tau)>t \rr$ be the right-continuous hitting time of $\sigma ^i$. For any $i \in \mathcal{S}$ we assume $\nu ((0, \infty), i)= \infty$, in such a way that $\sigma ^i$ is a.s. strictly increasing, $L^i$ has a.e. continuous sample paths, and, for any $t>0$, $\sigma ^i(t)$ and $L^i(t)$ are absolutely continuous random variables. We are now ready to define the following CTRW: \begin{align} X(t)= X_n \qquad T_n \leq t< T_{n+1} ,\label{ultimo processo} \end{align} where $T_n= \sum _{k=0}^{n-1}J_k$, $T_0=0$, and \begin{align} P \l J_n>t \mid X_n=i \r \, = \, \overline{F}_i(t)= \mathbb{E}e^{-\lambda_i L^i(t)}. \label{63} \end{align} The generalization of Theorem \ref{tetimechfrac} is immediate. Let $M$ be a Markov process defined as in \eqref{definizione processo markov}. Moreover, let $\sigma^M(t)$ be an inhomogeneous subordinator dependent on $M$ whose L\'evy measure, conditionally on $V_1 =v_1,V_2=v_2, \cdots$ and $X_1=x_1, X_2=x_2, \cdots$, is given by \begin{align} \nu ^M(dx,t) = \nu (dx,x_j), \qquad v_{j} \leq t< v_{j+1}. \end{align} Let $L^M(t)$ be the right-continuous inverse of $\sigma^M(t)$. Then the time-changed process $M(L^M(t))$ is the same process as \eqref{ultimo processo}. To prove this, the key point is the fact that \begin{align} \int_0^\infty e^{-s t} P \l J_n > t \mid X_n = i \r dt \, = \, \frac{f(s, i)}{s} \frac{1}{\lambda_i+f(s,i)}, \end{align} namely, conditionally on $X_n=i$, $J_n$ has a density $\psi _i$ with Laplace transform \begin{align}
\int_0^\infty e^{-s \tau} \psi _i(\tau) d\tau\, = \, \frac{\lambda_i}{\lambda_i + f(s,i)},
\label{322} \end{align}
which coincides with $\mathbb{E}(e^{-s \sigma ^i (E_n)}\mid X_n=i)$.
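As a quick consistency check, the fractional case of Theorem \ref{tetimechfrac} is recovered from \eqref{322} by a standard computation (a sketch; the stable L\'evy measure below is the one appearing later in the expression for $\bar{\nu}(t,i)$):

```latex
% With the alpha_i-stable Levy measure, the Laplace exponent is a pure power,
\begin{align*}
\nu(dw,i) \,=\, \frac{\alpha_i w^{-\alpha_i-1}}{\Gamma(1-\alpha_i)}\, dw
\quad \Longrightarrow \quad
f(s,i) \,=\, \int_0^\infty \left(1-e^{-sw}\right)
\frac{\alpha_i w^{-\alpha_i-1}}{\Gamma(1-\alpha_i)}\, dw \,=\, s^{\alpha_i},
\end{align*}
% so that \eqref{322} reads
\begin{align*}
\int_0^\infty e^{-s\tau}\, \psi_i(\tau)\, d\tau
\,=\, \frac{\lambda_i}{\lambda_i+s^{\alpha_i}},
\end{align*}
% which is the Laplace transform of the Mittag-Leffler waiting-time density
% \psi_i(\tau) = -\frac{d}{d\tau} E_{\alpha_i}(-\lambda_i \tau^{\alpha_i}).
```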
\subsection{Integro-differential Volterra equations with position-dependent kernel} To obtain a backward equation, we resort again to \eqref{Markov renewal equation}; applying the Laplace transform to both sides yields \begin{align} \widetilde{p}_{i,j}(s)= \frac{f(s,i) }{s(\lambda _i+f(s,i) )}\delta _{i,j} +\sum _l h_{i,l}\, \widetilde{p}_{l,j}(s)\frac{\lambda _i}{\lambda _i+ f(s,i)}, \end{align} which can be rearranged as \begin{align} f(s,i)\widetilde{p}_{i,j}(s) -s^{-1}f(s,i) \delta_{i,j} \, = \, \sum _k g_{i,k}\widetilde{p}_{k,j}(s) \label{laplgenerale} \end{align} where again $g_{i,j}=\lambda _i (h_{i,j}-\delta _{i,j})$. Inverting the Laplace transform in \eqref{laplgenerale}, however, does not yield a time-fractional equation. Indeed, by using \cite[Lemma 2.5 and Proposition 2.7]{toaldo} we get the inverse Laplace transform \begin{align} \frac{d}{dt} \int_0^t p_{i,j}(t^\prime) \, \bar{\nu}(t-t^\prime,i) \, dt^\prime \, - \delta_{i,j} \bar{\nu}(t,i) \, = \, \sum_k g_{i,k}p_{k,j}(t) \label{oltgen} \end{align} where $\bar{\nu}(t,i):=\nu((t, \infty),i)$, provided that the integral function is differentiable. It is clear that in the situation of Theorem \ref{tetimechfrac} one has \begin{align} \bar{\nu}(t,i) \, = \, \int_t^\infty \frac{\alpha_i w^{-\alpha_i-1}}{\Gamma (1-\alpha_i)} dw \, = \, \frac{t^{-\alpha_i}}{\Gamma(1-\alpha_i)} \end{align} and the operator on the left-hand side of \eqref{oltgen} becomes a fractional Caputo derivative.
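To make the last remark explicit, here is a sketch in the Laplace domain, assuming the natural initial condition $p_{i,j}(0)=\delta_{i,j}$: with $f(s,i)=s^{\alpha_i}$, the left-hand side of \eqref{laplgenerale} is precisely the Laplace transform of the Caputo derivative.

```latex
% With f(s,i) = s^{\alpha_i} the left-hand side of \eqref{laplgenerale} becomes
\begin{align*}
s^{\alpha_i}\,\widetilde{p}_{i,j}(s) - s^{\alpha_i-1}\, p_{i,j}(0),
\qquad p_{i,j}(0)=\delta_{i,j},
\end{align*}
% which is the well-known Laplace transform of the Caputo fractional
% derivative, so that \eqref{oltgen} reduces to the time-fractional equation
\begin{align*}
\frac{\partial^{\alpha_i}}{\partial t^{\alpha_i}}\, p_{i,j}(t)
\,=\, \sum_k g_{i,k}\, p_{k,j}(t).
\end{align*}
```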
We now also derive a forward equation. Let \begin{align} N^*(t)= \max \{n: T_n\leq t \} \end{align} be the number of renewals for the process \eqref{ultimo processo} up to time $t$. Of course, conditionally on $X_1=x_1$, $X_2=x_2, \cdots,$ we have that $N^*$ is a birth process (with rates $\lambda _{x_1}$, $\lambda _{x_2}, \cdots$) time-changed by the dependent time process $L^{M}$. Our attention focuses on the quantity, computed conditionally on $\ll X(t)=i \rr$, \begin{align} \lim _{\Delta t \to 0} \frac{\mathbb{E} [N^*(t+\Delta t) ]- \mathbb{E}[N^*(t)]}{\Delta t}, \label{renewal density} \end{align} which we call the renewal density (in the spirit of \cite[page 26]{Cox}): it specifies the mean number of renewals to be expected in a narrow interval near $t$, conditionally on the current position. Since we condition on $X(t)=i$, \eqref{renewal density} obviously depends on $i$, and $N^*(t+dt)-N^*(t)$ behaves like $N(L^i(t+dt))-N(L^i(t))$, where $N$ is a standard birth process. Thus the limit \eqref{renewal density} can be computed as \begin{align} m_i(t)= \, & \, \frac{d}{dt}\mathbb{E}\left[N^*(t)\right]= \frac{d}{dt}\mathbb{E}N(L^i(t))= \frac{d}{dt}\lambda _i \mathbb{E}L^i(t)\notag \\ = \, & \lambda _i \frac{d}{dt} \int _0^\infty P(L^i(t)>w)dw\, = \, \lambda _i \frac{d}{dt}\int _0^\infty P(\sigma ^i (w)<t)dw. \label{renfun} \end{align} The function \begin{align} t \mapsto u^i(t):=\frac{d}{dt}\int _0^\infty P(\sigma ^i (w)<t)dw \label{rendens} \end{align} on the right-hand side of \eqref{renfun} is said to be, in the language of potential theory (e.g. \cite{pottheory}), the potential density of the subordinator $\sigma^i$, and it is such that (e.g. \cite[Section 1.3]{bertoins}) \begin{align} \int_0^\infty e^{-st} u^i(t) dt\, = \, \frac{1}{ f(s, i)} \end{align} provided that the derivative in \eqref{rendens} exists a.e. Heuristically, $u^i(t)$ represents the mean of the total amount of time spent by the subordinator $\sigma ^i$ in the state $dt$.
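As a minimal numerical sanity check of the relation $\int_0^\infty e^{-st} u^i(t)\,dt = 1/f(s,i)$, one can take the $\alpha$-stable case $f(s,i)=s^{\alpha_i}$, for which the potential density is $u^i(t)=t^{\alpha_i-1}/\Gamma(\alpha_i)$ (the same power law appearing below in the renewal density $m_k$); the substitution $t=x^{1/\alpha}$ removes the integrable singularity at $t=0$:

```python
import math

def laplace_of_potential_density(s, alpha, upper=40.0, n=200_000):
    """Numerically compute int_0^inf e^{-s t} t^{alpha-1}/Gamma(alpha) dt.

    The substitution t = x^{1/alpha} turns the integrand into
    e^{-s x^{1/alpha}} / (alpha * Gamma(alpha)), which is smooth at x = 0,
    so a plain trapezoidal rule on [0, upper] suffices.
    """
    h = upper / n
    total = 0.5 * (1.0 + math.exp(-s * upper ** (1.0 / alpha)))
    for k in range(1, n):
        x = k * h
        total += math.exp(-s * x ** (1.0 / alpha))
    return h * total / (alpha * math.gamma(alpha))

# For an alpha-stable subordinator f(s) = s^alpha, so the transform of the
# potential density should equal s^{-alpha} = 1/f(s).
for alpha, s in [(0.4, 1.5), (0.6, 2.0), (0.9, 0.7)]:
    numeric = laplace_of_potential_density(s, alpha)
    exact = s ** (-alpha)
    print(alpha, s, numeric, exact)
```

The agreement is to several decimal places for moderate values of $s$ and $\alpha$.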
To obtain a forward equation we can follow the same lines as Section \ref{4} up to formula \eqref{E}. Then, by using \eqref{322}, the outgoing flux has Laplace transform \begin{align*} \tilde{J}_i^-(s)= \lambda _i \frac{s}{f(s,i)}\tilde{p}_{l,i}(s) \end{align*} and thus the convolution theorem gives \begin{align} J^-_i(t)&= \frac{d}{dt} \int_0^t p_{l,i}(\tau) \, m_i(t-\tau) d\tau \\& = \lambda_i \frac{d}{dt} \int_0^t p_{l,i}(\tau) \, u^i(t-\tau) d\tau. \label{zz} \end{align} Finally, \eqref{C} reduces to \begin{align} \label{forward finale} \frac{d}{dt}p_{l,i}(t)= \sum _k g_{k,i} \frac{d}{dt} \int_0^t p_{l,k}(\tau) \, u^k(t-\tau) d\tau , \end{align} which is the forward equation for our process. It is straightforward to prove that in the fractional case the renewal density relative to the state $k$ reads \begin{align*} m_k(t)=\lambda_k \frac{t^{\alpha_k-1}}{\Gamma (\alpha_k)} \end{align*} and the operator on the right-hand side of \eqref{forward finale} reduces to the Riemann--Liouville derivative $^R\mathcal{D}^{1-\alpha_k}$. With the above discussion we have proved the following result. \begin{te} Let $X(t)$ be a process as in \eqref{ultimo processo} with holding times $\overline{F}_i(t)$ given by \eqref{63}. Further assume that $m_i(t)$ exists for any $i$. Then the probabilities $p_{i,j}(t)$ satisfy the backward equation \eqref{oltgen} as well as the forward equation \eqref{forward finale}. \end{te}
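For completeness, a one-line sketch of the reduction in the fractional case, using $u^k(t)=t^{\alpha_k-1}/\Gamma(\alpha_k)$ and the definition of the Riemann--Liouville operators:

```latex
% The convolution operator in \eqref{forward finale} is the derivative of a
% Riemann-Liouville fractional integral of order \alpha_k,
\begin{align*}
\frac{d}{dt}\int_0^t p_{l,k}(\tau)\,
\frac{(t-\tau)^{\alpha_k-1}}{\Gamma(\alpha_k)}\, d\tau
\,=\, \frac{d}{dt}\, I^{\alpha_k} p_{l,k}(t)
\,=\, {}^{R}\mathcal{D}^{1-\alpha_k}\, p_{l,k}(t),
\end{align*}
% where I^{\alpha_k} denotes the Riemann-Liouville fractional integral.
```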
\end{document}
Università del Salento
Per una mitologia critica. La favola della realtà e la realtà della favola
Andrea Tagliapietra
Dataset published via University of Salento
Myth has long been regarded as something that is not capable of grasping the traditional notion of truth. Hans Blumenberg defined this pre-comprehension, which denies every legitimacy to the mythical apparatus, the "absolutism of reality". Because of this historical and theoretical prejudice, the Truth seems to have forgotten its proper history: it has implicitly denied the narration of its own history, backing out of the practice of tale and fable. The Truth, in short, denies of...
https://doi.org/10.1285/i22840753v1n1p25
La metafora dei Proci. Esperienza del limite ed etica della jouissance
In classical mythology and in Greek tragedy, there are some metaphorical concepts which, all along, represent and propose again the essential nucleus of human sensitivity. Among these, the concept of hybris probably deserves a place of crucial modernity and preeminence. Hybris is arrogance, the feeling of excess, of prevarication, of pride, the crossing of the limits of human action. As for Homeric mythology, among the most representative personalities in terms of arrogance, we can find...
L'industria culturale di Adorno e Horkheimer: una proposta di rilettura
Alberto Abruzzese
The essay by Adorno and Horkheimer about The Culture Industry (in the volume Dialectic of Enlightenment) represents for Alberto Abruzzese the starting point of a reasoning on the intellectuals' role, the crisis of humanistic and academic knowledge and the new "screen and network" society. The author uses The Culture Industry as a text on the western civilization's sunset and at the same time on the metamorphosis of mass cultural production. Abruzzese refers to those scholars...
https://doi.org/10.1285/i22840753v1n1p133
Effetti di senso della comunicazione ambientale in rete
Filippo Maria De Matteis
Which type of environmental communication do the media talk about? How many Natures are being told through the network of texts of environmental communication? Which is the influence of the new media on environmental communication at an expressive and narrative level? The aim of this paper is to answer these questions investigating some distinctive texts. The new possibilities to create meaning through environmental messages will be explored in the light of the evolution of media....
Zur Semantik der ethnischen Schimpfnamen
Maria Laura Tenchini
This article aims to advance a theory of the semantic status of racial epithets able to account for the relationship between the denotation and connotation of these terms. The more common approaches to the topic are considered and some of their drawbacks are illustrated. Then, a solution based on the speech-act theory is offered. It is claimed that a speaker, when using a racial epithet, performs two different speech-acts, one of which is always an...
https://doi.org/10.1285/i22390359v10p125
Control affine systems on solvable three-dimensional Lie groups, II
Rory Biggs & Claudiu C. Remsing
We seek to classify the full-rank left-invariant control affine systems evolving on solvable three-dimensional Lie groups. In this paper we consider only the cases corresponding to the solvable Lie algebras of types III, VI, and VII in the Bianchi-Behr classification.
https://doi.org/10.1285/i15900932v33n2p19
On the Dooley-Rice contraction of the principal series
Benjamin Cahen
In [\textsc{A. H. Dooley and J. W. Rice}: \textit{On contractions of semisimple Lie groups}, Trans. Am. Math. Soc., \textbf{289} (1985), 185--202], Dooley and Rice introduced a contraction of the principal series representations of a non-compact semi-simple Lie group to the unitary irreducible representations of its Cartan motion group. We study here this contraction by using non-compact realizations of these representations.
Complete independence of an axiom system for central translations
Jesse Alama
A recently proposed axiom system for André's central translation structures is improved upon. First, one of its axioms turns out to be dependent (derivable from the other axioms). Without this axiom, the axiom system is indeed independent. Second, whereas most of the original independence models were infinite, finite independence models are available. Moreover, the independence proof for one of the axioms employed proof-theoretic techniques rather than independence models; for this axiom, too, a finite independence...
https://doi.org/10.1285/i15900932v33n2p133
Maximal sector of analyticity for $C_{0}$-semigroups generated by elliptic operators with separation property in $L^{p}$
Giorgio Metafune, Noboru Okazawa, Motohiro Sobajima & Tomomi Yokota
Analytic continuation of the $C_{0}$-semigroup $\{e^{-zA}\}$ on $L^{p}(\mathbb{R}^{N})$ generated by the second order elliptic operator $- A$ is investigated, where $A$ is formally defined by the differential expression $ Au = -{\rm div}(a{\nabla}u) + (F\cdot{\nabla})u + Vu$ and the lower order coefficients have singularities at infinity or at the origin.
Récit de titane à grandir. L'historique et l'ontologique dans l'appréciation du surréalisme. Une critique d'un article récent
Andrea D'Urso
Abstract. By analysing the concrete example of an article published in the journal "Synergies Canada", n. 3 (2011), which is representative of a bigger range affecting the historiography of Surrealism, this paper aims to give a contribution to the demystification of ontological (and therefore ideological) approaches reducing the accuracy and the seriousness of "scientific" appraisals of Surrealism, mainly when they prefer to follow less the facts and the documents than the metaphysical chattering, or even...
https://doi.org/10.1285/i22390359v10p19
Un episodio del pensamiento francés de la autonomía. Benjamin Constant y las idées très ingénieuses
Francisco Gelman Constantin
This paper attempts a consideration of the significance of Benjamin Constant's aesthetic thoughts within the context of the history of the concept of "aesthetic autonomy". With that in mind it reevaluates his transformation of the German tradition, by way of its binding to the social theory corresponding to the cultural and historical analysis of early French Romanticism. Taking the coinage of the expression "art for art's sake" as a starting point, this paper reconstructs the...
El an�lisis l�xico en funci�n de la actividad de traducci�n
Simone Greco
The purpose of this study is to demonstrate that lexical analysis of a text is a necessary phase in the translation process, fundamental in guaranteeing the efficacy of the language at the diaphasic, diastratic, diatopic and diamesic levels, even across inter-linguistic lines. Neither the decoding of the original message nor its rewriting in the new code can be done without individuating idiomatic and evidently cultural aspects, constantly subordinated to the potential combinations of words involved. Lexical analysis...
Jesús G. Maestro, Calipso eclipsada. El teatro de Miguel de Cervantes más allá del Siglo de Oro, Verbum, Madrid, 2013
Antonio Boccardo
Raymond W. Gibbs, Jr. and Herbert L. Colston, Interpreting Figurative Meaning, CUP, Cambridge, 2012
Donatella Resta
La distribuzione dei farmaci tra esigenze di competitività e tutela della salute: un'analisi comparata
Carlo Ciardo
The sector of distribution of drugs, with reference to the territorial and demographic criteria for the opening of new pharmacies, was recently amended by the Italian legislator and is also the subject of numerous rulings by the administrative courts. The pharmaceutical distribution is differently regulated among the EU countries. For this reason, it is necessary to do a comparative examination among the laws of some important European countries, Britain, France and Germany, thanks to which...
https://doi.org/10.1285/i22808949a2n2p235
Nixon e la crisi di Cienfuegos: Cuba, autunno 1970
Immacolata Petio
Compared to the missile crisis of 1962 faced by President Kennedy, the "crisis" derived from the construction of a Soviet base for nuclear submarines in Cienfuegos Bay, on the southern coast of Cuba, is far less known. It dates back to the late summer-autumn of 1970, as the leaders of the United States and the Soviet Union were moving tentatively toward what became the détente of the 1970s. There was no public clash or crisis,...
I paradossi della democrazia protetta
Antonio Cardigliano
The "defensive democracy" may be defined as the set of rules of a democratic system that forbids and punishes the political behavior of certain movements, or illiberal parties, considered to threaten the constitution. The "defensive democracy" institutional model was analyzed first theoretically and philosophically, then technically and legally, through the comparison of four different legal systems belonging to Turkey, Spain, Germany and Italy. Theoretically, "defensive democracy" may be historically rooted in Hobbes' doctrine, but its more complete...
Intervista a Kerry Kennedy
Francesca Salvatore
On the existence of orthonormal geodesic bases for Lie algebras
Grant Cairns, Nguyen Thanh Tung Le, Anthony Nielsen & Yuri Nikolayevsky
We show that every unimodular Lie algebra, of dimension at most 4, equipped with an inner product, possesses an orthonormal basis comprised of geodesic elements. On the other hand, we give an example of a solvable unimodular Lie algebra of dimension 5 that has no orthonormal geodesic basis, for any inner product.
The Paradox of the Female Participation in Fundamentalist Movements
Luca Ozzano
Throughout the world, religiously-oriented conservative political movements are well known for their defence of "traditional models" in terms of both family conception and gender roles. Therefore, one should expect to find a limited social and political mobilization of women within them, as well as in right-wing religiously conservative parties. However, many significant movements have built strong female branches in which militants usually perform roles apparently contradicting the religious conservative ideologies the movements support. This paper...
https://doi.org/10.1285/i20356609v7i1p14
The Christian Support Networks for Immigrants in Palermo
Marie Bassi
Based on a fieldwork conducted in Sicily, this paper analyses how, when faced with the emergence of immigration, Christian organisations in Palermo become involved with the migration issue, notably thanks to the pioneering commitment of certain clerics. It draws attention to the heterogeneous nature of the Christian sphere, the internal secularisation of the religious organisations working with migrants, and the transformations of the church-related associative sector from a volunteering to professional expertise model. In sum,...
Religious World-Denying and Trajectories of Activism in the Field of Strongly-Religious Corporative Actors
Olga Michel
This comparative case study links together the scholarly discourses on religiously-motivated world-rejecting and religious activism. It provides empirical evidence for differentiation between four ideal-typed patterns of religious activism as they relate to different trajectories and spheres of religious world-denying in strongly-religious movements: Pattern I (world conquerors) targets inner-worldly sphere of activism in the particular state, using the full scope of political tools to promote its religiously-fueled ideology and theocracy. Pattern II (world transformers) creatively combines...
The Political Influence of Islam in Belgium
Sergio Castano Riano
Belgium is one of the Western European countries in which Muslims have gained the most relevance. The immigration process initiated in the 1960s brought thousands of people from Turkey and from the Arab world to the major cities of the country. As a consequence of their cultural and religious particularities, Muslims progressively claimed their own space in public life. Then, in the analysis of Muslim political participation in Belgium, two stages must be differentiated: The first...
https://doi.org/10.1285/i20356609v7i1p133
Tra vecchi Olimpi e post-moderni paradisi: il marketing come nuova mitologia?
Silvia Gravili
This paper aims to discuss whether or not marketing can be considered a post-modern mythology. For this reason, the main characteristics of myth are examined, also from a sociological point of view, as well as the conceptual evolution of "marketing", from a managerial and historical perspective. As a result, marketing is proposed as a hermeneutic instrument, useful for a deeper understanding of our times, beyond any ethical or psychological judgement.
Mitologie 2.0: Digital Platforms and umbrella terms
Fabio Ciracì
In this paper, the use of the term "digital platform" is discussed in the context of Web 2.0. This attitude entails the risk of a conceptual misunderstanding incident to the so-called "umbrella terms", for example the term "wiki". This paper tries to deconstruct the net-neutrality myth through the analysis of the concept "digital platform", highlighting the ideological danger. Summary: 1. Words eating words 2. The web as a digital platform 3. Platforms and social media:... | CommonCrawl |
June 2012, 32(6): 2165-2185. doi: 10.3934/dcds.2012.32.2165
Collapsing behaviour of a singular diffusion equation
Kin Ming Hui
Institute of Mathematics, Academia sinica, Taiwan
Received April 2011; Revised August 2011; Published February 2012
Let $0\le u_0(x)\in L^1(\mathbb{R}^2)\cap L^{\infty}(\mathbb{R}^2)$ be such that $u_0(x) =u_0(|x|)$ for all $|x|\ge r_1$ and is monotone decreasing for all $|x|\ge r_1$ for some constant $r_1>0$, and $\mbox{ess}\inf_{B_{r_1}(0)}u_0\ge\mbox{ess} \sup_{\mathbb{R}^2\setminus B_{r_2}(0)}u_0$ for some constant $r_2>r_1$. Then under some mild decay conditions at infinity on the initial value $u_0$ we will extend the result of P. Daskalopoulos, M. A. del Pino and N. Sesum [4], [6], and prove the collapsing behaviour of the maximal solution of the equation $u_t=\Delta\log u$ in $\mathbb{R}^2\times (0,T)$, $u(x,0)=u_0(x)$ in $\mathbb{R}^2$, near its extinction time $T=\int_{\mathbb{R}^2}u_0\,dx/4\pi$ by a simplified method without using the Hamilton-Yau Harnack inequality.
Keywords: maximal solution, singular diffusion equation, collapsing behaviour.
Mathematics Subject Classification: Primary: 35B40; Secondary: 35K57, 35K6.
Citation: Kin Ming Hui. Collapsing behaviour of a singular diffusion equation. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2165-2185. doi: 10.3934/dcds.2012.32.2165
D. G. Aronson and L. A. Caffarelli, The initial trace of a solution of the porous medium equation, Transactions A. M. S., 280 (1983), 351.
P. Daskalopoulos and R. Hamilton, Geometric estimates for the logarithmic fast diffusion equation, Comm. Anal. Geom., 12 (2004), 143.
P. Daskalopoulos and M. A. del Pino, On a singular diffusion equation, Comm. Anal. Geom., 3 (1995), 523.
P. Daskalopoulos and M. A. del Pino, Type II collapsing of maximal solutions to the Ricci flow in $\mathbb{R}^2$, Ann. Inst. H. Poincaré Anal. Non Linéaire, 24 (2007), 851.
P. Daskalopoulos and N. Sesum, Eternal solutions to the Ricci flow on $\mathbb{R}^2$, Int. Math. Res. Not., 2006 (8361).
P. Daskalopoulos and N. Sesum, Type II extinction profile of maximal solutions to the Ricci flow equation, J. Geom. Anal., 20 (2010), 565. doi: 10.1007/s12220-010-9128-1.
J. R. Esteban, A. Rodríguez and J. L. Vazquez, The fast diffusion equation with logarithmic nonlinearity and the evolution of conformal metrics in the plane, Advances in Differential Equations, 1 (1996), 21.
J. R. Esteban, A. Rodriguez and J. L. Vazquez, The maximal solution of the logarithmic fast diffusion equation in two space dimensions, Advances in Differential Equations, 2 (1997), 867.
P. G. de Gennes, Wetting: Statics and dynamics, Rev. Modern Phys., 57 (1985), 827. doi: 10.1103/RevModPhys.57.827.
R. Hamilton and S. T. Yau, The Harnack estimate for the Ricci flow on a surface-revisited, Asian J. Math., 1 (1997), 418.
S. Y. Hsu, Large time behaviour of solutions of the Ricci flow equation on $R^2$, Pacific J. Math., 197 (2001), 25. doi: 10.2140/pjm.2001.197.25.
S. Y. Hsu, Asymptotic profile of a singular diffusion equation as $t\to\infty$, Nonlinear Analysis, 48 (2002), 781. doi: 10.1016/S0362-546X(00)00214-5.
S. Y. Hsu, Asymptotic behaviour of solutions of the equation $u_t=\Delta\log u$ near the extinction time, Advances in Differential Equations, 8 (2003), 161.
S. Y. Hsu, Behaviour of solutions of a singular diffusion equation near the extinction time, Nonlinear Analysis, 56 (2004), 63. doi: 10.1016/j.na.2003.07.018.
K. M. Hui, Existence of solutions of the equation $u_t=\Delta\log u$, Nonlinear Analysis, 37 (1999), 875. doi: 10.1016/S0362-546X(98)00081-9.
K. M. Hui, Singular limit of solutions of the equation $u_t=\Delta (u^m/m)$ as $m\to 0$, Pacific J. Math., 187 (1999), 297. doi: 10.2140/pjm.1999.187.297.
J. R. King, Self-similar behaviour for the equation of fast nonlinear diffusion, Phil. Trans. Royal Soc. London Series A, 343 (1993), 337. doi: 10.1098/rsta.1993.0052.
O. A. Ladyzenskaya, V. A. Solonnikov and N. N. Uraltceva, "Linear and Quasilinear Equations of Parabolic Type," Transl. Math. Mono., 1968.
J. L. Vazquez, Nonexistence of solutions for nonlinear heat equations of fast-diffusion type, J. Math. Pures Appl. (9), 71 (1992), 503.
L. F. Wu, A new result for the porous medium equation derived from the Ricci flow, Bull. Amer. Math. Soc. (N.S.), 28 (1993), 90.
L. F. Wu, The Ricci flow on complete $R^2$, Comm. Anal. Geom., 1 (1993), 439.
\begin{document}
\draft
\title{Unknown Quantum States: \\ The Quantum de Finetti Representation}
\author{Carlton M. Caves,$^1$ Christopher A. Fuchs,$^2$ and R\"udiger Schack$^3$
}
\address{$^1$Department of Physics and Astronomy, University of New Mexico, \\ Albuquerque, New Mexico 87131--1156, USA \\ $^2$Computing Science Research Center, Bell Labs, Lucent Technologies, \\ Room 2C-420, 600--700 Mountain Avenue, Murray Hill, New Jersey 07974, USA \\ $^3$Department of Mathematics, Royal Holloway, University of London, \\ Egham, Surrey TW20$\;$0EX, UK}
\date{17 March 2001}
\maketitle
\begin{abstract} We present an elementary proof of the {\it quantum de Finetti representation theorem}, a quantum analogue of de Finetti's classical theorem on exchangeable probability assignments. This contrasts with the original proof of Hudson and Moody [Z.\ Wahrschein.\ verw.\ Geb.\ {\bf 33}, 343 (1976)], which relies on advanced mathematics and does not share the same potential for generalization. The classical de Finetti theorem provides an operational definition of the concept of an unknown probability in Bayesian probability theory, where probabilities are taken to be degrees of belief instead of objective states of nature. The quantum de Finetti theorem, in a closely analogous fashion, deals with exchangeable density-operator assignments and provides an operational definition of the concept of an ``unknown quantum state'' in quantum-state tomography. This result is especially important for information-based interpretations of quantum mechanics, where quantum states, like probabilities, are taken to be states of knowledge rather than states of nature. We further demonstrate that the theorem fails for real Hilbert spaces and discuss the significance of this point. \end{abstract}
\section{Introduction} \label{sec-intro}
What is a quantum state? Since the earliest days of quantum theory, the predominant answer has been that the quantum state is a representation of the observer's knowledge of a system~\cite{Bohr1928}. In and of itself, the quantum state has no objective reality~\cite{Fuchs2000}. The authors hold this information-based view quite firmly~\cite{Caves1996,Caves1997}. Despite its association with the founders of quantum theory, however, holding this view does not require a concomitant belief that there is nothing left to learn in quantum foundations. It is quite the opposite in fact: Only by pursuing a promising, but incomplete program can one hope to learn something of lasting value. Challenges to the information-based view arise regularly, and dealing with these challenges builds an understanding and a problem-solving agility that reading and rereading the founders can never engender~\cite{Faye1994}. With each challenge successfully resolved, one walks away with a deeper sense of the physical content of quantum theory and a growing confidence for tackling questions of its interpretation and applicability. Questions as fundamental and distinct as ``Will a nonlinear extension of quantum mechanics be needed to quantize gravity?''~\cite{tHooft1999,Jozsa1998} and ``Which physical resources actually make quantum computation efficient?''~\cite{Schack1999,Ambainis2000} start to feel tractable (and even connected) from this perspective.
In this paper, we tackle an understanding-building exercise very much in the spirit of these remarks. It is motivated by an apparent conundrum arising from our own specialization in physics, quantum information theory. The issue is that of the {\it unknown\/} quantum state.
There is hardly a paper in the field of quantum information that does not make use of the idea of an ``unknown quantum state.'' Unknown quantum states are teleported~\cite{Bennett1993,Experiments1998}, protected with quantum error correcting codes~\cite{Shor1995,Steane1996}, and used to check for quantum eavesdropping~\cite{Bennett1984,CryptoExperiments}. The list of uses, already long, grows longer each day. Yet what can the term ``unknown quantum state'' mean? In an information-based interpretation of quantum mechanics, the term is an oxymoron: If quantum states, by their very definition, are states of knowledge and not states of nature~\cite{Hartle1968}, then the state is {\it known\/} by someone---at the very least, by the describer himself.
This message is the main point of our paper. Faced with a procedure that uses the idea of an unknown quantum state in its description, a consistent information-based interpretation of quantum mechanics offers only two alternatives: \begin{itemize} \item The owner of the unknown state---a further decision-making agent or observer---must be explicitly identified. In this case, the unknown state is merely a stand-in for the unknown {\it state of knowledge\/} of an essential player who went unrecognized in the original formulation. \item If there is clearly no further decision-making agent or observer on the scene, then a way must be found to re\"express the procedure with the term ``unknown state'' banished from the formulation. In this case, the end-product of the effort is a single quantum state used for describing the entire procedure---namely, the state that captures the describer's state of knowledge. \end{itemize}
Of course, those inclined to an objectivist interpretation of quantum mechanics~\cite{Goldstein98}---that is, an interpretation where quantum states are more like states of nature than states of knowledge---might be tempted to believe that the scarcity of existing analyses of this kind is a hint that quantum states do indeed have some sort of objective status. Why would such currency be made of the unknown-state concept were it not absolutely necessary? As a rejoinder, we advise caution to the objectivist: Tempting though it is to grant objective status to all the mathematical objects in a physical theory, there is much to be gained by a careful delineation of the subjective and objective parts. A case in point is provided by E.~T. Jaynes' \cite{Jaynes1957a,Jaynes1957b,Jaynes1983} insistence that entropy is a subjective quantity, a measure of ignorance about a physical system. One of the many fruits of this point of view can be found in the definitive solution \cite{Bennett1983} to the long-standing Maxwell demon problem \cite{Leff1990}, where it was realized that the information collected by a demon and used by it to extract work from heat has a thermodynamic cost at least as large as the work extracted \cite{Landauer1961}.
\begin{center} \begin{figure}
\caption{What can the term ``unknown state'' mean if quantum states are taken exclusively to be states of knowledge rather than states of nature? When we say that a system has an unknown state, must we always imagine a further observer whose state of knowledge is symbolized by some
$|\psi\rangle$, and it is the identity of the symbol that we are ignorant of?}
\end{figure} \end{center}
The example analyzed in detail in this paper provides another case. Along the way, it brings to light a new and distinct point about why quantum mechanics makes use of complex Hilbert spaces rather than real or quaternionic ones~\cite{Stueckelberg1960,Adler1995,Araki1980,Wootters1990}. Furthermore, the method we use to prove our main theorem employs a novel measurement technique that might be of use in the laboratory.
We analyze in depth a particular use of unknown states, which comes from the measurement technique known as {\it quantum-state tomography\/}~\cite{Vogel1989b,Smithey1993,Leonhardt1995}. The usual description of tomography is this. A device of some sort, say a nonlinear optical medium driven by a laser, repeatedly prepares many instances of a quantum system, say many temporally distinct modes of the electromagnetic field, in a fixed quantum state $\rho$, pure or mixed. An experimentalist who wishes to characterize the operation of the device or to calibrate it for future use might be able to perform measurements on the systems it prepares even if he cannot get at the device itself. This can be useful if the experimenter has some prior knowledge of the device's operation that can be translated into a probability distribution over states. Then learning about the state will also be learning about the device. Most importantly, though, this description of tomography assumes that the precise state $\rho$ is unknown. The goal of the experimenter is to perform enough measurements, and enough kinds of measurements (on a large enough sample), to estimate the identity of $\rho$.
This is clearly an example where there is no further player on whom to pin the unknown state as a state of knowledge. Any attempt to find a player for the pin is entirely artificial: Where would the player be placed? On the inside of the device the tomographer is trying to characterize \cite{BerkeleyRhyme}? The only available course for an information-based interpretation of quantum-state tomography is the second strategy listed above---to banish completely the idea of the unknown state from the formulation of tomography.
\begin{center} \begin{figure}
\caption{To make sense of quantum tomography, must we go to the extreme of imagining a ``man in the box'' who has a better description of the systems than we do? How contrived our usage would be if that were so!}
\end{figure} \end{center}
To do this, we take a cue from the field of Bayesian probability theory~\cite{Kyburg1980,JaynesPosthumous,Bernardo1994}, prompted by the realization that Bayesian probability is to probability theory in general what an information-based interpretation is to quantum mechanics~\cite{Caves1996,Schack1997}. In Bayesian theory, probabilities are not objective states of nature, but rather are taken explicitly to be measures of credible belief, reflecting one's state of knowledge. The overarching Bayesian theme is to identify the conditions under which a set of decision-making agents can come to a common belief or probability assignment for a random variable even though their initial beliefs differ~\cite{Bernardo1994}. Following that theme is the key to understanding tomography from the informational point of view.
The offending classical concept is an ``unknown probability,'' an oxymoron for the same reason as an unknown quantum state. The procedure analogous to quantum-state tomography is the estimation of an unknown probability from the results of repeated trials on ``identically prepared systems,'' all of which are said to be described by the same, but unknown probability. The way to eliminate unknown probabilities from the discussion, introduced by Bruno de Finetti in the early 1930s \cite{DeFinetti1990,DeFinettiCollected}, is to focus on the equivalence of repeated trials, which means the systems are indistinguishable as far as probabilistic predictions are concerned and thus that a probability assignment for multiple trials should be symmetric under permutation of the systems. With his {\it classical representation theorem}, de Finetti \cite{DeFinetti1990} showed that a multi-trial probability assignment that is permutation-symmetric for an arbitrarily large number of trials---de Finetti called such multi-trial probabilities {\it exchangeable\/}---is equivalent to a probability for the ``unknown probabilities.'' Thus the unsatisfactory concept of an unknown probability vanishes from the description in favor of the fundamental idea of assigning an exchangeable probability distribution to multiple trials.
This cue in hand, it is easy to see how to reword the description of quantum-state tomography to meet our goals. What is relevant is simply a judgment on the part of the experimenter---notice the essential subjective character of this ``judgment''---that there is no distinction between the systems the device is preparing. In operational terms, this is the judgment that {\it all the systems are and will be the same as far as observational predictions are concerned}. At first glance this statement might seem to be contentless, but the important point is this: To make this statement, one need never use the notion of an unknown state---a completely operational description is good enough. Putting it into technical terms, the statement is that if the experimenter judges a collection of $N$ of the device's outputs to have an overall quantum state $\rho^{(N)}$, he will also judge any permutation of those outputs to have the same quantum state $\rho^{(N)}$. Moreover, he will do this no matter how large the number $N$ is. This, complemented only by the consistency condition that for any $N$ the state $\rho^{(N)}$ be derivable from $\rho^{(N+1)}$, makes for the complete story.
The words ``quantum state'' appear in this formulation, just as in the original formulation of tomography, but there is no longer any mention of {\it unknown\/} quantum states. The state $\rho^{(N)}$ is known by the experimenter (if no one else), for it represents his state of knowledge. More importantly, the experimenter is in a position to make an unambiguous statement about the structure of the whole sequence of states $\rho^{(N)}$: Each of the states $\rho^{(N)}$ has a kind of permutation invariance over its factors. The content of the {\it quantum de Finetti representation theorem}~\cite{Hudson1976,Hudson1981}---a new proof of which is the main technical result of this paper---is that a sequence of states $\rho^{(N)}$ can have these properties, which are said to make it an {\it exchangeable\/} sequence, if and only if each term in it can also be written in the form \begin{equation} \rho^{(N)}=\int P(\rho)\, \rho^{\otimes N}\, d\rho\;, \label{Jeremy} \end{equation} where \begin{equation} \rho^{\otimes N}= \underbrace{\rho\otimes\rho\otimes\cdots\otimes\rho}_{ \matrix{\mbox{$N$-fold tensor}\cr\mbox{product}}} \end{equation} and $P(\rho)$ is a fixed probability distribution over the density operators.
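In concrete terms, the two defining properties of an exchangeable sequence---permutation invariance and consistency under partial trace---can be checked numerically for a state of the de Finetti form. In the minimal sketch below, the two component states and their mixing weights are arbitrary illustrative choices, not anything taken from the text:

```python
import numpy as np

# Two arbitrary qubit density matrices and mixing weights (illustrative only).
rho_a = np.array([[0.9, 0.1],
                  [0.1, 0.1]])
rho_b = 0.5 * np.eye(2)                      # maximally mixed state
weights = (0.3, 0.7)

# De Finetti form for N = 2: rho^(2) = sum_i P_i rho_i (x) rho_i
rho2 = sum(w * np.kron(r, r) for w, r in zip(weights, (rho_a, rho_b)))

# Exchangeability: rho^(2) is invariant under swapping the two systems.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
assert np.allclose(SWAP @ rho2 @ SWAP, rho2)

# Consistency: the partial trace over the second system gives rho^(1).
rho1 = sum(w * r for w, r in zip(weights, (rho_a, rho_b)))
reduced = rho2.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(reduced, rho1)
```

The swap invariance and the derivability of $\rho^{(1)}$ from $\rho^{(2)}$ are exactly the two properties that the theorem asserts to be equivalent to the integral representation.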
The interpretive import of this theorem is paramount. It alone gives a mandate to the term unknown state in the usual description of tomography. It says that the experimenter can act {\it as if\/} his state of knowledge $\rho^{(N)}$ comes about because he knows there is a ``man in the box,'' hidden from view, repeatedly preparing the same state $\rho$. He does not know which such state, and the best he can say about the unknown state is captured in the probability distribution $P(\rho)$.
The quantum de Finetti theorem furthermore makes a connection to the overarching theme of Bayesianism stressed above. It guarantees for two independent observers---as long as they have a rather minimal agreement in their initial beliefs---that the outcomes of a sufficiently informative set of measurements will force a convergence in their state assignments for the remaining systems~\cite{Schack2000}. This ``minimal'' agreement is characterized by a judgment on the part of both parties that the sequence of systems is exchangeable, as described above, and a promise that the observers are not absolutely inflexible in their opinions. Quantitatively, the latter means that though $P(\rho)$ might be arbitrarily close to zero, it can never vanish.
This coming to agreement works because an exchangeable density operator sequence can be updated to reflect information gathered from measurements by a quantum version of Bayes's rule for updating probabilities. Specifically, if measurements on $K$ systems yield results $D_K$, then the state of additional systems is constructed as in Eq.~(\ref{Jeremy}), but using an updated probability on density operators given by \begin{equation}
P(\rho|D_K)={P(D_K|\rho)P(\rho)\over P(D_K)}\;. \label{QBayes} \end{equation}
Here $P(D_K|\rho)$ is the probability to obtain the measurement results $D_K$, given the state $\rho^{\otimes K}$ for the $K$
measured systems, and $P(D_K)=\int P(D_K|\rho)\,P(\rho)\,d\rho$ is the unconditional probability for the measurement results. Equation~(\ref{QBayes}) is a kind of {\it quantum Bayes rule} \cite{Schack2000}. For a sufficiently informative set of measurements, as $K$ becomes large, the updated probability
$P(\rho|D_K)$ becomes highly peaked on a particular state $\rho_{D_K}$ dictated by the measurement results, regardless of the prior probability $P(\rho)$, as long as $P(\rho)$ is nonzero in a neighborhood of $\rho_{D_K}$. Suppose the two observers have different initial beliefs, encapsulated in different priors $P_i(\rho)$, $i=1,2$. The measurement results force them to a common state of knowledge in which any number $N$ of additional systems are assigned the product state $\rho_{D_K}^{\otimes N}$, i.e., \begin{equation}
\int P_i(\rho|D_K)\,\rho^{\otimes N}\,d\rho \quad{\longrightarrow}\quad \rho_{D_K}^{\otimes N}\;, \label{HannibalLecter} \end{equation} independent of $i$, for $K$ sufficiently large.
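The coming to agreement can be sketched numerically in a discretized setting, where the candidate states are reduced to a grid of ``spin-up'' probabilities $q$ for repeated $\sigma_3$ measurements. The grid, the two priors, and the data counts below are illustrative assumptions, not anything prescribed by the theorem:

```python
import numpy as np

# Discretized stand-in for P(rho): candidate qubit states labeled by the
# probability q of "spin up" in a sigma_3 measurement.  The grid, the two
# priors, and the data are illustrative assumptions only.
q_grid = np.linspace(0.05, 0.95, 19)
prior_1 = np.full(q_grid.size, 1.0 / q_grid.size)    # flat prior
prior_2 = q_grid / q_grid.sum()                      # different, nowhere-zero prior

# Data D_K: suppose K = 1000 measurements gave 750 "up" outcomes.
K, n_up = 1000, 750
log_like = n_up * np.log(q_grid) + (K - n_up) * np.log(1.0 - q_grid)
likelihood = np.exp(log_like - log_like.max())       # P(D_K | rho), rescaled

def posterior(prior):
    # Quantum Bayes rule: P(rho | D_K) is proportional to P(D_K | rho) P(rho).
    post = likelihood * prior
    return post / post.sum()

post_1, post_2 = posterior(prior_1), posterior(prior_2)

# Both posteriors peak at the same state, q = 0.75, despite different priors.
assert abs(q_grid[np.argmax(post_1)] - 0.75) < 1e-9
assert abs(q_grid[np.argmax(post_2)] - 0.75) < 1e-9
```

Because both posteriors concentrate on the same grid point, either observer would assign approximately the same product state to any further systems, as in Eq.~(\ref{HannibalLecter}).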
This shifts the perspective on the purpose of quantum-state tomography: It is not about uncovering some ``unknown state of nature,'' but rather about the various observers' coming to agreement over future probabilistic predictions~\cite{Fuchs2000b}. In this connection, it is interesting to note that the quantum de Finetti theorem and the conclusions just drawn from it work only within the framework of complex vector-space quantum mechanics. For quantum mechanics based on real and quaternionic Hilbert spaces~\cite{Stueckelberg1960,Adler1995}, the connection between exchangeable density operators and unknown quantum states does not hold.
The plan of the remainder of the paper is as follows. In Sec.~\ref{sec-classical} we discuss the classical de Finetti representation theorem~\cite{DeFinetti1990,Heath1976} in the context of Bayesian probability theory. It was our familiarity with the classical theorem~\cite{Galavotti1989,Jeffrey1997} that motivated our reconsideration of quantum-state tomography. In Sec.~\ref{sec-quantum} we introduce the information-based formulation of tomography in terms of exchangeable multi-system density operators, accompanied by a critical discussion of objectivist formulations of tomography, and we state the quantum de Finetti representation theorem. Section~\ref{sec-proof} presents an elementary proof of the quantum de Finetti theorem. There, also, we introduce a novel measurement technique for tomography based upon generalized quantum measurements. Finally, in Sec.~\ref{sec-outlook} we return to the issue of number fields in quantum mechanics and mention possible extensions of the main theorem.
\section{The Classical de Finetti Theorem} \label{sec-classical}
As a preliminary to the quantum problem, we turn our attention to classical probability theory. In doing so we follow a maxim of the late E.~T. Jaynes~\cite{Jaynes1986b}: \begin{quote} We think it unlikely that the role of probability in quantum theory will be understood until it is generally understood in classical theory \ldots. Indeed, our [seventy-five-year-old] bemusement over the notion of state reduction in [quantum theory] need not surprise us when we note that today, in all applications of probability theory, basically the same controversy rages over whether our probabilities represent real situations, or only incomplete human knowledge. \end{quote}
As Jaynes makes clear, the tension between the objectivist and informational points of view is not new with quantum mechanics. It arises already in classical probability theory in the form of the war between ``objective'' and ``subjective'' interpretations~\cite{Daston1994}. According to the subjective or Bayesian interpretation, probabilities are measures of credible belief, reflecting an agent's potential states of knowledge. On the other hand, the objective interpretations---in all their varied forms, from frequency interpretations to propensity interpretations---attempt to view probabilities as real states of affairs or ``states of nature.'' Following our discussion in Sec.~\ref{sec-intro}, it will come as no surprise to the reader that the authors wholeheartedly adopt the Bayesian approach. For us, the ultimate reason is simply our own experience with this question, part of which is an appreciation that objective interpretations inevitably run into insurmountable difficulties. We will not dwell upon these difficulties here; instead, the reader can find a sampling of criticisms in Refs.~\cite{Jaynes1983,Kyburg1980,JaynesPosthumous,Bernardo1994, Savage1972}.
We will note briefly, however, that the game of roulette provides an illuminating example. In the European version of the game, the possible outcomes are the numbers $0,1,\ldots,36$. For a player without any privileged information, all 37 outcomes have the same probability $p=1/37$. But suppose that shortly after the ball is launched by the croupier, another player obtains information about the ball's position and velocity relative to the wheel. Using the information obtained, this other player can make more accurate predictions than the first~\cite{NewtonianCasino}. His probability is peaked around some group of numbers. The probabilities are thus different for two players with different states of knowledge.
Whose probability is the true probability? From the Bayesian viewpoint, this question is meaningless: There is no such thing as a true probability. All probability assignments are subjective assignments based specifically upon one's prior information.
For sufficiently precise data---including precise initial data on positions and velocities and probably also including other details such as surface properties of the wheel---Newtonian mechanics assures us that the outcome can be predicted with certainty. This is an important point: The determinism of classical physics provides a strong reason for adopting the subjectivist view of probabilities~\cite{Giere1973}. If the conditions of a trial are exactly specified, the outcomes are predictable with certainty, and all probabilities are 0 or 1. In a deterministic theory, all probabilities strictly greater than 0 and less than 1 arise as a consequence of incomplete information and depend upon their assigner's state of knowledge.
Of course, we should keep in mind that our ultimate goal is to consider the status of quantum states and, by way of them, quantum probabilities. One can ask, ``Does this not change the flavor of these considerations?'' Since quantum mechanics is avowedly {\it not\/} a theory of one's ignorance of a set of hidden variables~\cite{BellBook,GoldsteinBook}, how can the probabilities be subjective? In Sec.~\ref{sec-quantum} we argue that despite the intrinsic indeterminism of quantum mechanics, the essence of the point above carries over to the quantum setting intact. Furthermore, there are specifically quantum-motivated arguments for a Bayesian interpretation of quantum probabilities.
For the present, though, let us consider in some detail the general problem of a repeated experiment---spinning a roulette wheel $N$ times is an example. As discussed briefly in Sec.~\ref{sec-intro}, this allows us to make a conceptual connection to quantum-state tomography. Here the individual trials are described by discrete random variables $x_n\in\{1,2,\ldots,k\}$, $n=1,\ldots,N$; that is to say, there are $N$ random variables, each of which can assume $k$ discrete values. In an objectivist theory, such an experiment has a standard formulation in which the probability in the multi-trial hypothesis space is given by an independent, identically distributed (i.i.d.)\ distribution \begin{equation} p(x_1,x_2,\ldots,x_N)\,=\,p_{x_1} p_{x_2} \cdots p_{x_N}\, =\, p_1^{n_{\scriptscriptstyle 1}} p_2^{n_{\scriptscriptstyle 2}}\cdots p_k^{n_{\scriptscriptstyle k}}\;. \label{eq-iid} \end{equation} The number $p_j$ ($j=1,\ldots,k$) describes the objective, ``true'' probability that the result of a single experiment will be $j$ ($j=1,\ldots,k$). The variable $n_j$, on the other hand, is the number of times outcome $j$ is listed in the vector $(x_1,x_2,\ldots,x_N)$. This simple description---for the objectivist---only describes the situation from a kind of ``God's eye'' point of view. To the experimentalist, the ``true'' probabilities $p_1,\ldots,p_k$ will very often be {\it unknown\/} at the outset. Thus, his burden is to estimate the unknown probabilities by a statistical analysis of the experiment's outcomes.
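As a small sanity check, the following sketch (with an arbitrary single-trial distribution, chosen here for illustration) confirms that the i.i.d. probability depends only on the counts $n_j$ and not on the order in which the outcomes occur:

```python
from collections import Counter
from math import prod

# An arbitrary "true" single-trial distribution over k = 3 outcomes.
p = {1: 0.2, 2: 0.5, 3: 0.3}

def iid_prob(xs):
    # p(x_1, ..., x_N) = p_{x_1} ... p_{x_N} = product over j of p_j^{n_j}
    counts = Counter(xs)
    return prod(p[j] ** n for j, n in counts.items())

# The probability depends only on the counts n_j, so any reordering of
# the trial sequence is assigned the same probability.
assert abs(iid_prob([1, 2, 2, 3]) - iid_prob([3, 2, 1, 2])) < 1e-15
assert abs(iid_prob([1, 2, 2, 3]) - 0.2 * 0.5**2 * 0.3) < 1e-15
```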
In the Bayesian approach, it does not make sense to talk about estimating a true probability. Instead, a Bayesian assigns a prior probability distribution $p(x_1,x_2,\ldots,x_N)$ on the multi-trial hypothesis space, which is generally not an i.i.d., and then uses Bayes's theorem to update the distribution in the light of measurement results. A common criticism from the objectivist camp is that the choice of distribution $p(x_1,x_2,\ldots,x_N)$ with which to start the process seems overly arbitrary to them. On what can it be grounded, they would ask? From the Bayesian viewpoint, the subjectivity of the prior is a strength rather than a weakness, because assigning a prior amounts to laying bare the necessarily subjective assumptions behind {\it any\/} probabilistic argument, be it Bayesian or objectivist. Choosing a prior among all possible distributions on the multi-trial hypothesis space is, however, a daunting task. As we will now see, the de Finetti representation theorem makes this task tractable.
It is very often the case that one or more features of a problem stand out so clearly that there is no question about how to incorporate them into an initial assignment. In the present case, the key feature is contained in the assumption that an arbitrary number of repeated trials are equivalent. This means that one has no reason to believe there will be a difference between one trial and the next. In this case, the prior distribution is judged to have the sort of permutation symmetry discussed briefly in Sec.~\ref{sec-intro}, which de Finetti \cite{DeFinettiCollected} called {\it exchangeability}. The rigorous definition of exchangeability proceeds in two stages.
A probability distribution $p(x_1,x_2,\ldots,x_N)$ is said to be {\it symmetric\/} (or finitely exchangeable) if it is invariant under permutations of its arguments, i.e., if \begin{equation} p\bigl(x_{\pi(1)},x_{\pi(2)},\ldots,x_{\pi(N)}\bigr) = p(x_1,x_2,\ldots,x_N) \end{equation} for any permutation $\pi$ of the set $\{1,\ldots,N\}$. The distribution $p(x_1,x_2,\ldots,x_N)$ is called {\it exchangeable\/} (or infinitely exchangeable) if it is symmetric and if for any integer $M>0$, there is a symmetric distribution $p_{N+M}(x_1,x_2,\ldots,x_{N+M})$ such that \begin{equation} p(x_1,x_2,\ldots,x_N)\; = \sum_{x_{N+1},\ldots,x_{N+M}} p_{N+M}(x_1,\ldots,x_N,x_{N+1},\ldots,x_{N+M}) \;. \label{eq-marginal} \end{equation} This last statement means the distribution $p$ can be extended to a symmetric distribution of arbitrarily many random variables. Expressed informally, an exchangeable distribution can be thought of as arising from an infinite sequence of random variables whose order is irrelevant.
We now come to the main statement of this section: if a probability distribution $p(x_1,x_2,\ldots,x_N)$ is exchangeable, then it can be written uniquely in the form \begin{equation} p(x_1,x_2,\ldots,x_N)=\int_{{\cal S}_k} P(\vec{p})\,p_{x_1} p_{x_2} \cdots p_{x_N}\,d\vec{p} = \int_{{\cal S}_k} P(\vec{p})\, p_1^{n_{\scriptscriptstyle 1}} p_2^{n_{\scriptscriptstyle 2}}\cdots p_k^{n_{\scriptscriptstyle k}} \, d\vec{p}\;, \label{eq-repr} \end{equation} where $\vec{p}=(p_1,p_2,\ldots,p_k)$, and the integral is taken over the probability simplex \begin{equation} {\cal S}_k=\left\{\vec{p}\mbox{ : }\; p_j\ge0\mbox{ for all } j\mbox{ and } \sum_{j=1}^k p_j=1\right\}. \end{equation} Furthermore, the function $P(\vec{p})\ge0$ is required to be a probability density function on the simplex: \begin{equation} \int_{{\cal S}_k} P(\vec{p})\,d\vec{p}=1\;. \end{equation} Equation~(\ref{eq-repr}) comprises the classical de Finetti representation theorem for discrete random variables. For completeness and because it deserves to be more widely familiar in the physics community, we give a simple proof (due to Heath and Sudderth \cite{Heath1976}) of the representation theorem for the binary random-variable case in an Appendix.
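In the binary case the representation can be made concrete with a Beta mixing density $P(\vec{p})$, for which the integral has a closed form; the parameters below are arbitrary illustrative choices. The sketch checks both ingredients of exchangeability, permutation symmetry and extendibility to an additional trial:

```python
from math import exp, lgamma
from itertools import permutations

# Beta(a, b) mixing density P(p); the parameters are arbitrary choices.
a, b = 2.0, 3.0

def beta_fn(x, y):
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

def p_seq(xs):
    # p(x_1,...,x_N) = integral of Beta(p; a, b) p^{n_1} (1 - p)^{n_0} dp
    #                = B(a + n_1, b + n_0) / B(a, b)   (closed form)
    n1 = sum(xs)
    n0 = len(xs) - n1
    return beta_fn(a + n1, b + n0) / beta_fn(a, b)

xs = (1, 0, 0, 1, 0)
# Symmetry: every permutation of the trials has the same probability.
assert all(abs(p_seq(perm) - p_seq(xs)) < 1e-12 for perm in permutations(xs))
# Extendibility: marginalizing a sixth trial recovers the five-trial value.
assert abs(p_seq(xs) - (p_seq(xs + (0,)) + p_seq(xs + (1,)))) < 1e-12
```

The marginal-consistency check is the Beta-function identity $B(x,y+1)+B(x+1,y)=B(x,y)$ in disguise; any mixing density would do, since both properties follow directly from the de Finetti form.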
Let us reiterate the importance of this result for the present considerations. It says that an agent, making solely the judgment of exchangeability for a sequence of random variables $x_j$, can proceed {\it as if\/} his state of knowledge had instead come about through ignorance of an {\it unknown}, but objectively existent set of probabilities $\vec{p}$. His precise ignorance of $\vec{p}$ is captured by the ``probability on probabilities'' $P(\vec{p})$. This is in direct analogy to what we desire of a solution to the problem of the unknown quantum state in quantum-state tomography.
As a final note before finally addressing the quantum problem in Sec.~\ref{sec-quantum}, we point out that both conditions in the definition of exchangeability are crucial for the proof of the de Finetti theorem. In particular, there are probability distributions $p(x_1,x_2,\ldots,x_N)$ that are symmetric, but not exchangeable. A simple example is the distribution $p(x_1,x_2)$ of two binary random variables $x_1,x_2\in\{0,1\}$, \begin{eqnarray} && p(0,0) = p(1,1) = 0\;, \label{HocusPocus} \\ && p(0,1) = p(1,0) = \frac{1}{2} \;. \label{Hiroshima} \end{eqnarray} One can easily check that $p(x_1,x_2)$ cannot be written as the marginal of a symmetric distribution of three variables, as in Eq.~(\ref{eq-marginal}). Therefore it can have no representation along the lines of Eq.~(\ref{eq-repr}). (For an extended discussion of this, see Ref.~\cite{Jaynes1986}.) Indeed, Eqs.~(\ref{HocusPocus}) and (\ref{Hiroshima}) characterize a perfect ``anticorrelation'' of the two variables, in contrast to the positive correlation implied by distributions of de Finetti form. The content of this point is that both conditions in the definition of exchangeability (symmetry under interchange and infinite extendibility) are required to ensure, in colloquial terms, ``that the future will appear much as the past'' \cite{vonPlato1989}, rather than, say, the opposite of the past.
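The claimed non-extendability can be confirmed by brute force. A symmetric three-variable distribution assigns a common weight $q_k$ to each string with $k$ ones, so it suffices to search nonnegative weights whose two-variable marginal matches the anticorrelated distribution of Eqs.~(\ref{HocusPocus}) and (\ref{Hiroshima}); the sketch below finds none:

```python
from itertools import product

# The perfectly anticorrelated two-variable distribution of the text.
p = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 0.5, (1, 0): 0.5}

# A symmetric three-variable extension gives a common weight q[k] to each
# string with k ones (multiplicities 1, 3, 3, 1); its marginal satisfies
# p(0,0) = q0 + q1,  p(1,1) = q2 + q3,  p(0,1) = p(1,0) = q1 + q2.
step = 0.05
grid = [i * step for i in range(int(round(1 / step)) + 1)]
found = False
for q0, q1, q2 in product(grid, repeat=3):
    q3 = 1.0 - (q0 + 3 * q1 + 3 * q2)       # normalization fixes q3
    if q3 < -1e-9:
        continue
    if (abs(q0 + q1 - p[0, 0]) < 1e-9 and
            abs(q2 + q3 - p[1, 1]) < 1e-9 and
            abs(q1 + q2 - p[0, 1]) < 1e-9):
        found = True

assert not found
```

The grid search is only a concrete confirmation of an argument that holds exactly: nonnegativity forces $q_0=q_1=q_2=q_3=0$, which is inconsistent with $p(0,1)=1/2$.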
\section{The quantum de Finetti representation} \label{sec-quantum}
Let us now return to the problem of quantum-state tomography described in Sec.~\ref{sec-intro}. In the objectivist formulation of the problem, a device repeatedly prepares copies of a system in the same quantum state $\rho$. This is generally a mixed-state density operator on a Hilbert space ${\cal H}_d$ of $d$ dimensions. We call the totality of such density operators ${\cal D}_d$. The joint quantum state of the $N$ systems prepared by the device is then given by \begin{equation} \rho^{\otimes N}=\rho\otimes\rho\otimes\cdots\otimes\rho \;, \end{equation} the $N$-fold tensor product of $\rho$ with itself. This, of course, is a very restricted example of a density operator on the tensor-product Hilbert space ${\cal H}_d^{\otimes N}\equiv {\cal H}_d\otimes\cdots\otimes{\cal H}_d$. The experimenter, who performs quantum-state tomography, tries to determine $\rho$ as precisely as possible. Depending upon the version of the argument, $\rho$ is interpreted as the ``true'' state of each of the systems or as a description of the ``true'' preparation procedure.
We have already articulated our dissatisfaction with this way of stating the problem, but we give here a further sense of why both interpretations above are untenable. Let us deal first with the version where $\rho$ is regarded as the true, objective state of each of the systems. In this discussion it is useful to consider separately the cases of mixed and pure states $\rho$. The arguments against regarding mixed states as objective properties of a quantum system are essentially the same as those against regarding probabilities as objective. In analogy to the roulette example given in the previous section, we can say that, whenever an observer assigns a mixed state to a physical system, one can think of another observer who assigns a different state based on privileged information.
The quantum argument becomes yet more compelling if the apparently nonlocal nature of quantum states is taken into consideration. Consider two parties, $A$ and $B$, who are far apart in space, say several light years apart. Each party possesses a spin-$1\over 2$ particle. Initially the joint state of the two particles is the maximally entangled pure state
${1\over\sqrt2}(|0\rangle|0\rangle+|1\rangle|1\rangle)$. Consequently, $A$ assigns the totally mixed state
${1\over2}(|0\rangle\langle0|+|1\rangle\langle1|)$ to her own particle. Now $B$ makes a measurement on his particle, finds the result 0, and assigns to $A$'s particle the pure state
$|0\rangle$. Is this now the ``true,'' objective state of $A$'s particle? At what precise time does the objective state of $A$'s particle change from totally mixed to pure? If the answer is ``simultaneously with $B$'s measurement,'' then what frame of reference should be used to determine simultaneity? These questions and potential paradoxes are avoided if states are interpreted as states of knowledge. In our example, $A$ and $B$ have different states of knowledge and therefore assign different states. For a detailed analysis of this example, see Ref.~\cite{Peres-9906a}; for an experimental investigation see Ref.~\cite{Scarani2000}.
If one admits that mixed states cannot be objective properties, because another observer, possessing privileged information, can know which pure state underlies the mixed state, then it becomes very tempting to regard the pure states as giving the ``true'' state of a system. Probabilities that come from pure states would then be regarded as objective, and the probabilities for pure states within an ensemble decomposition of a mixed state would be regarded as subjective, expressing our ignorance of which pure state is the ``true'' state of the system. An immediate and, in our view, irremediable problem with this idea is that a mixed state has infinitely many ensemble decompositions into pure states \cite{Jaynes1957b,Schrodinger1936,Hughston1993}, so the distinction between subjective and objective becomes hopelessly blurred.
This problem can be made concrete by the example of a spin-${1\over2}$ particle. Any pure state of the particle can be written in terms of the Pauli matrices, \begin{equation} \label{eq-pauli} \sigma_1={\mat0110}\;,\qquad \sigma_2={\mat0{-i}i0}\;,\qquad \sigma_3={\mat100{-1}}\;, \end{equation}
as \begin{equation} \label{eq-poincare}
|\vec n\rangle\langle\vec n|={1\over2}(I+{\vec n}\cdot\bbox{\sigma}) ={1\over2}(I+n_1\sigma_1+n_2\sigma_2+n_3\sigma_3)\;, \end{equation} where the unit vector ${\vec n}=n_1\vec e_1+n_2\vec e_2+n_3\vec e_3$ labels the pure state, and $I$ denotes the unit operator. An arbitrary state $\rho$, mixed or pure, of the particle can be expressed as \begin{equation} \rho={1\over2}(I+\vec S\cdot\bbox{\sigma}) \;, \label{eq-rhoqubit} \end{equation}
where $0\le|\vec S|\le1$. This representation of the states of a spin-$1\over2$ particle is called the {\it Bloch-sphere representation.} If $|\vec S|<1$, there is an infinite number of ways in which $\vec S$ can be written in the form $\vec S=\sum_j p_j{\vec n}_j$, $|\vec n_j|=1$, with the numbers $p_j$ comprising a probability distribution, and hence an infinite number of ensemble decompositions of $\rho$: \begin{equation} \rho = \sum_jp_j{1\over2}(I+{\vec n}_j\cdot\bbox{\sigma})
=\sum_j p_j|\vec n_j\rangle\langle\vec n_j|\;. \label{eq-decomp} \end{equation}
Suppose for specificity that the particle's state is a mixed state with $\vec S={1\over2}\,\vec e_3$. Writing $\vec S={3\over4}\vec e_3+{1\over4}(-\vec e_3)$ gives the eigendecomposition, \begin{equation} \rho=
{3\over4}|\vec e_3\rangle\langle\vec e_3|
+{1\over4}|\mathord{-}\vec e_3\rangle\langle\mathord{-}\vec e_3|\;, \end{equation} where we are to regard the probabilities $3/4$ and $1/4$ as subjective expressions of ignorance about which eigenstate is the ``true'' state of the particle. Writing $\vec S={1\over2}\vec n_++{1\over2}\vec n_-$, where $\vec n_{\pm}={1\over2}\vec e_3\pm{\sqrt3\over2}\vec e_1$, gives another ensemble decomposition, \begin{equation} \rho=
{1\over2}|\vec n_+\rangle\langle\vec n_+|
+{1\over2}|\vec n_-\rangle\langle\vec n_-|\;, \label{Eleanor} \end{equation}
where we are now to regard the two probabilities of $1/2$ as expressing ignorance of whether the ``true'' state is $|\vec n_+\rangle$ or $|\vec n_-\rangle$.
The problem becomes acute when we ask for the probability that a measurement of the $z$ component of spin yields spin up; this probability is given by $\langle\vec e_3|\rho|\vec e_3\rangle={1\over2}(1+{1\over2}\langle\vec e_3|\sigma_3|\vec e_3\rangle)=3/4$. The eigendecomposition gets this probability by the route \begin{equation}
\langle\vec e_3|\rho|\vec e_3\rangle= {3\over4}
\underbrace{|\langle\vec e_3|\vec e_3\rangle|^2}_ {\displaystyle{1}} +{1\over4}
\underbrace{|\langle\vec e_3|\mathord{-}\vec e_3\rangle|^2}_ {\displaystyle{0}}\;. \end{equation} Here the ``objective'' quantum probabilities, calculated from the eigenstates, report that the particle definitely has spin up or definitely has spin down; the overall probability of $3/4$ comes from mixing these objective probabilities with the subjective probabilities for the eigenstates. The decomposition~(\ref{Eleanor}) gets the same overall probability by a different route, \begin{equation}
\langle\vec e_3|\rho|\vec e_3\rangle= {1\over2}
\underbrace{|\langle\vec e_3|\vec n_+\rangle|^2}_ {\displaystyle{3/4}} +{1\over2}
\underbrace{|\langle\vec e_3|\vec n_-\rangle|^2}_ {\displaystyle{3/4}} \;. \end{equation} Now the quantum probabilities tell us that the ``objective'' probability for the particle to have spin up is $3/4$. This simple example illustrates the folly of trying to have two kinds of probabilities in quantum mechanics. The lesson is that if a density operator is even partially a reflection of one's state of knowledge, the multiplicity of ensemble decomposition means that a pure state must also be a state of knowledge.
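This bookkeeping is easy to confirm numerically. The following Python/numpy sketch (our own illustration; the variable names are not from the text) builds $\rho$ for $\vec S={1\over2}\vec e_3$, checks that both decompositions reproduce it, and recovers the spin-up probability $3/4$:

```python
import numpy as np

# Pauli matrices; e3 is the z axis in the Bloch-sphere picture
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_state(n):
    """Pure-state projector |n><n| = (I + n.sigma)/2 for a unit vector n."""
    return 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)

e3 = np.array([0, 0, 1.0])
rho = 0.5 * (I2 + 0.5 * sz)                      # S = (1/2) e3

# Eigendecomposition: 3/4 spin-up plus 1/4 spin-down along z
rho_eig = 0.75 * bloch_state(e3) + 0.25 * bloch_state(-e3)

# Alternative decomposition with n_pm = (1/2) e3 +/- (sqrt(3)/2) e1
n_plus = np.array([np.sqrt(3) / 2, 0, 0.5])
n_minus = np.array([-np.sqrt(3) / 2, 0, 0.5])
rho_alt = 0.5 * bloch_state(n_plus) + 0.5 * bloch_state(n_minus)

# Both decompositions give the same density operator ...
assert np.allclose(rho_eig, rho) and np.allclose(rho_alt, rho)

# ... and the same spin-up probability <e3|rho|e3> = 3/4
up = np.array([1, 0], dtype=complex)
p_up = np.real(up.conj() @ rho @ up)
```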
Return now to the second version of the objectivist formulation of tomography, in which the experimenter is said to be using quantum-state tomography to determine an unknown preparation procedure. Imagine that the tomographic reconstruction results in the mixed state $\rho$, rather than a pure state, as in fact all actual laboratory procedures do. Now there is a serious problem, because a mixed state does not correspond to a well-defined procedure, but is itself a probabilistic mixture of well-defined procedures, i.e., pure states. The experimenter is thus trying to determine an unknown procedure that has no unique decomposition into well-defined procedures. He therefore cannot be said to be determining an unknown procedure at all. This problem does not arise in an information-based interpretation, according to which all quantum states, pure or mixed, are states of knowledge. In analogy to the classical case, the quantum de Finetti representation provides an operational definition for the idea of an unknown quantum state in this case.
Let us therefore turn to the information-based formulation of the quantum-state tomography problem. Before the tomographic measurements, the Bayesian experimenter assigns a prior quantum state to the joint system composed of the $N$ systems, reflecting his prior state of knowledge. Just as in the classical case, this is a daunting task unless the assumption of exchangeability is justified.
The definition of the quantum version of exchangeability is closely analogous to the classical definition. Again, the definition proceeds in two stages. First, a joint state $\rho^{(N)}$ of $N$ systems is said to be {\it symmetric\/} (or finitely exchangeable) if it is invariant under any permutation of the systems. To see what this means formally, first write out $\rho^{(N)}$ with respect to any orthonormal tensor-product basis on ${\cal H}_d^{\otimes N}$, say
$|i_1\rangle|i_2\rangle\cdots|i_N\rangle$, where $i_k\in\{1,2,\ldots,d\}$ for all $k\,$. The joint state takes the form \begin{equation} \rho^{(N)}=\sum_{i_1,\ldots,i_N;j_1,\ldots,j_N} R^{(N)}_{i_1,\ldots,i_N;j_1,\ldots,j_N}\,
|i_1\rangle\cdots|i_N\rangle \langle j_1| \cdots\langle j_N|\;, \end{equation} where $R^{(N)}_{i_1,\ldots,i_N;j_1,\ldots,j_N}$ is the density matrix in this representation. What we demand is that for any permutation $\pi$ of the set $\{1,\ldots,N\}$, \begin{eqnarray} \rho^{(N)}&=&\sum_{i_1,\ldots,i_N;j_1,\ldots,j_N} R^{(N)}_{i_1,\ldots,i_N;j_1,\ldots,j_N}\,
|i_{\pi^{-1}(1)}\rangle\cdots|i_{\pi^{-1}(N)}\rangle
\langle j_{\pi^{-1}(1)}|\cdots\langle j_{\pi^{-1}(N)}|\nonumber\\ &=&\sum_{i_1,\ldots,i_N;j_1,\ldots,j_N} R^{(N)}_{i_{\pi(1)},\ldots,i_{\pi(N)};j_{\pi(1)},\ldots,j_{\pi(N)}}\,
|i_1\rangle\cdots|i_N\rangle \langle j_1| \cdots\langle j_N| \;, \end{eqnarray} which is equivalent to \begin{equation} R^{(N)}_{i_{\pi(1)},\ldots,i_{\pi(N)};j_{\pi(1)},\ldots,j_{\pi(N)}} =R^{(N)}_{i_1,\ldots,i_N;j_1,\ldots,j_N}\;. \end{equation}
The state $\rho^{(N)}$ is said to be {\it exchangeable\/} (or infinitely exchangeable) if it is symmetric and if, for any $M>0$, there is a symmetric state $\rho^{(N+M)}$ of $N+M$ systems such that the marginal density operator for $N$ systems is $\rho^{(N)}$, i.e., \begin{equation} \rho^{(N)} = {\rm tr}_M\,\rho^{(N+M)} \;, \label{HoundDog} \end{equation} where the trace is taken over the additional $M$ systems. In explicit basis-dependent notation, this requirement is \begin{equation} \rho^{(N)}= \!\!\sum_{i_1,\ldots,i_N;j_1,\ldots,j_N} \!\!\left(\,\sum_{i_{N+1},\ldots,i_{N+M}}\!\! R^{(N+M)}_{i_1,\ldots,i_N,i_{N+1},\ldots,i_{N+M}; j_1,\ldots,j_N,i_{N+1},\ldots,i_{N+M}}\right)\!
|i_1\rangle\cdots|i_N\rangle \langle j_1| \cdots\langle j_N|\;. \end{equation} In analogy to the classical case, an exchangeable density operator can be thought of informally as the description of a subsystem of an infinite sequence of systems whose order is irrelevant.
The precise statement of the quantum de Finetti representation theorem~\cite{Hudson1976,Stormer1969} is that any exchangeable state of $N$ systems can be written uniquely in the form \begin{equation} \rho^{(N)}=\int_{{\cal D}_d} P(\rho)\, \rho^{\otimes N}\, d\rho\;. \label{eq-qdefinetti} \end{equation} Here $P(\rho)\ge0$ is normalized by \begin{equation} \int_{{\cal D}_d} P(\rho)\,d\rho=1\;, \end{equation} with $d\rho$ being a suitable measure on density operator space ${\cal D}_d$ [e.g., one could choose the standard flat measure $d\rho=S^2dS\,d\Omega$ in the parametrization~(\ref{eq-rhoqubit}) for a spin-$1\over 2$ particle]. The upshot of the theorem, as already advertised, is that it makes it possible to think of an exchangeable quantum-state assignment {\it as if\/} it were a probabilistic mixture characterized by a probability density $P(\rho)$ for the product states $\rho^{\otimes N}$.
Just as in the classical case, both components of the definition of exchangeability are crucial for arriving at the representation theorem of Eq.~(\ref{eq-qdefinetti}). The reason now, however, is much more interesting than it was previously. In the classical case, extendibility was used solely to exclude anticorrelated probability distributions. Here extendibility is necessary to exclude the possibility of Bell inequality violations for measurements on the separate systems. This is because the assumption of symmetry alone for an $N$-party quantum system does not exclude the possibility of quantum entanglement, and all states that can be written as a mixture of product states---of which Eq.~(\ref{eq-qdefinetti}) is an example---have no entanglement~\cite{Bennett1996}. A very simple counterexample is the Greenberger-Horne-Zeilinger state of three spin-$1\over2$ particles~\cite{Mermin1990}, \begin{equation}
|\mbox{GHZ}\rangle=\frac{1}{\sqrt{2}}\Big(|0\rangle|0\rangle|0\rangle+
|1\rangle|1\rangle|1\rangle\Big)\;, \end{equation} which is symmetric, but is not extendible to a symmetric state on four systems. This follows because the only states of four particles that marginalize to a three-particle pure state, like the GHZ state, are product states of the form
$|\mbox{GHZ}\rangle\langle\mbox{GHZ}|\otimes\rho$, where $\rho$ is the state of the fourth particle; such states clearly cannot be symmetric. These considerations show that in order for the proposed theorem to be valid, it must be the case that as $M$ increases in Eq.~(\ref{HoundDog}), the possibilities for entanglement in the separate systems compensatingly decrease~\cite{Koashi2000}.
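The symmetry of the GHZ state is simple to verify directly. A small numpy check (an illustration of ours) confirms that the three-qubit projector is pure and invariant under all six permutations of the systems:

```python
import numpy as np
from itertools import permutations

# |GHZ> = (|000> + |111>)/sqrt(2) on three qubits
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho_ghz = np.outer(ghz, ghz.conj())

def permute_qubits(rho, perm, n=3):
    """Conjugate an n-qubit density matrix by the system-permutation operator."""
    t = rho.reshape((2,) * (2 * n))
    axes = list(perm) + [n + p for p in perm]
    return t.transpose(axes).reshape(2 ** n, 2 ** n)

# Symmetric: invariant under every permutation of the three systems
for perm in permutations(range(3)):
    assert np.allclose(permute_qubits(rho_ghz, perm), rho_ghz)

# Pure state: tr(rho^2) = 1
assert np.isclose(np.trace(rho_ghz @ rho_ghz).real, 1.0)
```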
\section{Proof of the quantum de Finetti theorem} \label{sec-proof}
To prove the quantum version of the de Finetti theorem, we rely on the classical theorem as much as possible. We start from an exchangeable density operator $\rho^{(N)}$ defined on $N$ copies of a system. We bring the classical theorem to our aid by imagining a sequence of identical quantum measurements on the separate systems and considering the outcome probabilities they would produce. Because $\rho^{(N)}$ is assumed exchangeable, such identical measurements give rise to an exchangeable probability distribution for the outcomes. The trick is to recover enough information from the exchangeable statistics of these measurements to characterize the exchangeable density operators.
With this in mind, the proof is expedited by making use of the theory of generalized quantum measurements or positive operator-valued measures (POVMs)~\cite{Peres1993a,Kraus1983}. We give a brief introduction to that theory. The common textbook notion of a measurement---that is, a von Neumann measurement---is that any laboratory procedure counting as an observation can be identified with a Hermitian operator $O$ on the Hilbert space ${\cal H}_d$ of the system. Depending upon the presentation, the measurement outcomes are identified either with the eigenvalues $\mu_i$ or with a complete set of normalized eigenvectors
$|i\rangle$ for $O$. When the quantum state is $\rho$, the probabilities for the various outcomes are computed from the eigenprojectors $\Pi_i=|i\rangle\langle i|$ via the standard Born rule, \begin{equation}
p_i={\rm tr}\big(\rho\Pi_i\big) = \langle i|\rho|i\rangle\;. \end{equation} This rule gives a consistent probability assignment because the eigenprojectors $\Pi_i$ are positive-semidefinite operators, which makes the $p_i$ nonnegative, and because the projectors form a resolution of the identity operator $I$, \begin{equation} \sum_{i=1}^d \Pi_i = I\;, \end{equation} which guarantees that $\sum_i p_i=1$.
POVMs generalize the textbook notion of measurement by distilling the essential properties that make the Born rule work. The generalized notion of measurement is this: {\it any\/} set ${\cal E}=\{E_\alpha\}$ of positive-semidefinite operators on ${\cal H}_d$ that forms a resolution of the identity, i.e., that satisfies \begin{equation}
\langle\psi|E_\alpha|\psi\rangle\ge0\,,\quad\mbox{for all
$|\psi\rangle\in{\cal H}_d$} \label{Hank} \end{equation} and \begin{equation} \sum_\alpha E_\alpha = I\;, \label{Hannibal} \end{equation} corresponds to at least one laboratory procedure counting as a measurement. The outcomes of the measurement are identified with the indices $\alpha$, and the probabilities of those outcomes are computed according to the generalized Born rule, \begin{equation} p_\alpha={\rm tr}\big(\rho E_\alpha\big) \;. \end{equation} The set ${\cal E}$ is called a POVM, and the operators $E_\alpha$ are called POVM elements. Unlike von Neumann measurements, there is no limitation on the number of values $\alpha$ can take, the operators $E_\alpha$ need not be rank-1, and there is no requirement that the $E_\alpha$ be mutually orthogonal. This definition has important content because the older notion of measurement is simply too restrictive: there are laboratory procedures that clearly should be called ``measurements,'' but that cannot be expressed in terms of the von Neumann measurement process alone.
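As a concrete illustration of these conditions (our example, not one from the text), consider a three-outcome ``trine'' POVM on a qubit: its elements are non-orthogonal and outnumber the Hilbert-space dimension, which is impossible for a von Neumann measurement, yet they satisfy Eqs.~(\ref{Hank}) and (\ref{Hannibal}):

```python
import numpy as np

# Trine POVM on a qubit: E_k = (2/3)|psi_k><psi_k| with three real
# states 60 degrees apart in Hilbert space
kets = [np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)]) for k in range(3)]
povm = [(2 / 3) * np.outer(v, v) for v in kets]

# POVM conditions: positive-semidefinite elements resolving the identity
for E in povm:
    assert np.linalg.eigvalsh(E).min() >= -1e-12
assert np.allclose(sum(povm), np.eye(2))

# Generalized Born rule p_k = tr(rho E_k) for a sample mixed state
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
p = np.array([np.trace(rho @ E).real for E in povm])
assert np.isclose(p.sum(), 1.0) and np.all(p >= 0)
```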
One might wonder whether the existence of POVMs contradicts everything taught about standard measurements in the traditional graduate textbooks~\cite{QuantumClassics1} and the well-known classics~\cite{QuantumClassics2}. Fortunately it does not. The reason is that any POVM can be represented formally as a standard measurement on an ancillary system that has interacted in the past with the system of main interest. Thus, in a certain sense, von Neumann measurements capture everything that can be said about quantum measurements \cite{Kraus1983}. A way to think about this is that by learning something about the ancillary system through a standard measurement, one in turn learns something about the system of real interest. Indirect though this might seem, it can be a very powerful technique, sometimes revealing information that could not have been revealed otherwise~\cite{Holevo1973}.
For instance, by considering POVMs, one can consider measurements with an outcome cardinality that exceeds the dimensionality of the Hilbert space. What this means is that whereas the statistics of a von Neumann measurement can only reveal information about the $d$ diagonal elements of a density operator $\rho$, through the probabilities ${\rm tr}\big(\rho\Pi_i\big)$, the statistics of a POVM generally can reveal things about the off-diagonal elements, too. It is precisely this property that we take advantage of in our proof of the quantum de Finetti theorem.
Our problem hinges on finding a special kind of POVM, one for which any set of outcome probabilities specifies a unique operator. This boils down to a problem in pure linear algebra. The space of operators on ${\cal H}_d$ is itself a linear vector space of dimension $d^{\,2}$. The quantity ${\rm tr}(A^\dagger B)$ serves as an inner product on that space. If the POVM elements $E_\alpha$ span the space of operators---there must be at least $d^{\,2}$ POVM elements in the set---the measurement probabilities $p_\alpha={\rm tr}\big(\rho E_\alpha\big)$---now thought of as {\it projections\/} in the directions $E_\alpha$---are sufficient to specify a unique operator $\rho$. Two distinct density operators $\rho$ and $\sigma$ must give rise to different measurement statistics. Such measurements, which might be called {\it informationally complete}, have been studied for some time~\cite{Prugovecki1977}.
For our proof we need a slightly refined notion---that of a {\it minimal\/} informationally complete measurement. If an informationally complete POVM has more than $d^{\,2}$ operators $E_\alpha$, these operators form an overcomplete set. This means that given a set of outcome probabilities $p_\alpha$, there is generally {\it no\/} operator $A$ that generates them according to $p_\alpha={\rm tr}\big(AE_\alpha\bigr)$. Our proof requires the existence of such an operator, so we need a POVM that has precisely $d^{\,2}$ linearly independent POVM elements $E_\alpha$. Such a POVM has the minimal number of POVM elements to be informationally complete. Given a set of outcome probabilities $p_\alpha$, there is a unique operator $A$ such that $p_\alpha={\rm tr}\big(AE_\alpha\bigr)$, even though, as we discuss below, $A$ is not guaranteed to be a density operator.
Do minimal informationally complete POVMs exist? The answer is yes. We give here a simple way to produce one, though there are surely more elegant ways with greater symmetry. Start with a complete orthonormal basis $|e_j\rangle$ on ${\cal H}_d$, and let
$\Gamma_{jk}=|e_j\rangle\langle e_k|$. It is easy to check that the following $d^{\,2}$ rank-1 projectors $\Pi_\alpha$ form a linearly independent set. \begin{enumerate} \item For $\alpha=1,\ldots,d$, let \begin{equation} \Pi_\alpha \equiv \Gamma_{jj}\,, \end{equation} where $j$, too, runs over the values $1,\ldots,d$.
\item For $\alpha=d+1,\ldots,\frac{1}{2}d(d+1)$, let \begin{equation} \Pi_\alpha \equiv \Gamma^{(1)}_{jk} =
\frac{1}{2}\Big(|e_j\rangle+|e_k\rangle\Big)
\Big(\langle e_j|+\langle e_k|\Big) = \frac{1}{2}(\Gamma_{jj}+\Gamma_{kk}+\Gamma_{jk}+\Gamma_{kj})\;, \end{equation} where $j<k$.
\item Finally, for $\alpha= \frac{1}{2}d(d+1) + 1, \ldots,d^{\,2}$, let \begin{equation} \Pi_\alpha \equiv \Gamma^{(2)}_{jk}
= \frac{1}{2}\Big(|e_j\rangle+i|e_k\rangle\Big)
\Big(\langle e_j|-i\langle e_k |\Big) =\frac{1}{2}(\Gamma_{jj}+\Gamma_{kk}-i\Gamma_{jk}+i\Gamma_{kj})\;, \end{equation} where again $j<k$. \end{enumerate} All that remains is to transform these (positive-semidefinite) linearly independent operators $\Pi_\alpha$ into a proper POVM. This can be done by considering the positive semidefinite operator $G$ defined by \begin{equation} G=\sum_{\alpha=1}^{d^2}\Pi_\alpha\;. \label{Herbert} \end{equation}
It is straightforward to show that $\langle\psi|G|\psi\rangle>0$
for all $|\psi\rangle\ne0$, thus establishing that $G$ is positive definite (i.e., Hermitian with positive eigenvalues) and hence invertible. Applying the (invertible) linear transformation $X\rightarrow\, G^{-1/2}XG^{-1/2}$ to Eq.~(\ref{Herbert}), we find a valid decomposition of the identity, \begin{equation} I=\sum_{\alpha=1}^{d^2}G^{-1/2}\Pi_\alpha G^{-1/2}\;. \end{equation} The operators \begin{equation} E_\alpha=G^{-1/2}\Pi_\alpha G^{-1/2} \end{equation} satisfy the conditions of a POVM, Eqs.~(\ref{Hank}) and (\ref{Hannibal}), and moreover, they retain the rank and linear independence of the original $\Pi_\alpha$.
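The whole construction can be checked numerically. The sketch below (ours; written for $d=3$, though it runs for any $d$) builds the projectors $\Pi_\alpha$, forms $G$ and $G^{-1/2}$ by eigendecomposition, and verifies that the resulting $E_\alpha$ resolve the identity and remain linearly independent:

```python
import numpy as np

d = 3
basis = np.eye(d)

# The d^2 linearly independent rank-1 projectors Pi_alpha from the text
projs = [np.outer(basis[j], basis[j]).astype(complex) for j in range(d)]
for j in range(d):
    for k in range(j + 1, d):
        v = (basis[j] + basis[k]).astype(complex) / np.sqrt(2)
        projs.append(np.outer(v, v.conj()))
for j in range(d):
    for k in range(j + 1, d):
        v = (basis[j] + 1j * basis[k]) / np.sqrt(2)
        projs.append(np.outer(v, v.conj()))
assert len(projs) == d ** 2

# G = sum_alpha Pi_alpha is positive definite; form G^{-1/2} by
# eigendecomposition and conjugate each projector into a POVM element
G = sum(projs)
w, V = np.linalg.eigh(G)
assert w.min() > 0
G_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T

povm = [G_inv_sqrt @ P @ G_inv_sqrt for P in projs]
assert np.allclose(sum(povm), np.eye(d))          # resolution of the identity

# Minimal informational completeness: d^2 linearly independent elements
M = np.array([E.flatten() for E in povm])
assert np.linalg.matrix_rank(M) == d ** 2
```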
With this generalized measurement (or any other one like it), we can return to the main line of proof. Recall that we assumed our state of knowledge to be captured by an exchangeable density operator $\rho^{(N)}$. Consequently, repeated application of the (imagined) measurement $\cal E$ must give rise to an exchangeable probability distribution over the $N$ random variables $\alpha_n\in\{1,2,\ldots,d^{\,2}\}$, $n=1,\ldots,N$. We now analyze these probabilities.
Quantum mechanically, it is valid to think of the $N$ repeated measurements of $\cal E$ as a single measurement on the Hilbert space ${\cal H}_d^{\otimes N}\equiv {\cal H}_d\otimes\cdots\otimes{\cal H}_d$. This measurement, which we denote ${\cal E}^{\otimes N}$, consists of $d^{\,2N}$ POVM elements of the form $E_{\alpha_1}\otimes\cdots\otimes E_{\alpha_N}$. The probability of any particular outcome sequence of length $N$, namely $\bbox{\alpha}\equiv(\alpha_1,\ldots,\alpha_N)$, is given by the standard quantum rule, \begin{equation} p^{(N)}(\bbox{\alpha})={\rm tr}\big(\,\rho^{(N)}\,E_{\alpha_1}\otimes\cdots\otimes E_{\alpha_N}\big)\;. \label{Humphrey} \end{equation} Because the distribution $p^{(N)}(\bbox{\alpha})$ is exchangeable, we have by the classical de Finetti theorem [see Eq.~(\ref{eq-repr})] that there exists a unique probability density $P(\vec{p})$ on ${\cal S}_{d^2}$ such that \begin{equation} p^{(N)}(\bbox{\alpha})= \int_{{\cal S}_{d^2}} P(\vec{p})\, p_{\alpha_1} p_{\alpha_2}\cdots p_{\alpha_N}\,d\vec{p}\;. \label{Helmut} \end{equation}
It should now begin to be apparent why we chose to imagine a measurement $\cal E$ consisting of precisely $d^{\,2}$ linearly independent elements. This allows us to assert the existence of a {\it unique\/} operator $A_{\vec{p}}$ on ${\cal H}_d$ corresponding to each point $\vec{p}$ in the domain of the integral. The ultimate goal here is to turn Eqs.~(\ref{Humphrey}) and (\ref{Helmut}) into a single operator equation.
With that in mind, let us define $A_{\vec{p}}$ as the unique operator satisfying the following $d^{\,2}$ linear equations: \begin{equation} {\rm tr}\big(A_{\vec{p}}E_\alpha\big)= p_\alpha\;,\quad\quad\alpha=1,\ldots,d^{\,2}\;. \label{Hamish} \end{equation} Inserting this definition into Eq.~(\ref{Helmut}) and manipulating it according to the algebraic rules of tensor products---namely $(A\otimes B)(C\otimes D)=AC\otimes BD$ and ${\rm tr}(A\otimes B)=({\rm tr}A)({\rm tr}B)$---we see that \begin{eqnarray} p^{(N)}(\bbox{\alpha}) &=& \int_{{\cal S}_{d^2}} P(\vec{p})\,{\rm tr}\big(A_{\vec{p}}E_{\alpha_1}\big) \cdots {\rm tr}\big(A_{\vec{p}}E_{\alpha_N}\big)\,d\vec{p} \nonumber\\ &=& \int_{{\cal S}_{d^2}} P(\vec{p})\,{\rm tr}\big(A_{\vec{p}}E_{\alpha_1}\otimes \cdots\otimes A_{\vec{p}}E_{\alpha_N}\big)\,d\vec{p} \nonumber\\ &=& \int_{{\cal S}_{d^2}} P(\vec{p})\,{\rm tr}\big[A_{\vec{p}}^{\otimes N} \, (E_{\alpha_1}\otimes\cdots\otimes E_{\alpha_N})\big]\,d\vec{p}\;. \end{eqnarray} If we further use the linearity of the trace, we can write the same expression as \begin{equation} p^{(N)}(\bbox{\alpha})={\rm tr}\!\left[\left(\int_{{\cal S}_{d^2}} P(\vec{p})\,A_{\vec{p}}^{\otimes N} \,\,d\vec{p}\right)E_{\alpha_1}\otimes\cdots\otimes E_{\alpha_N}\right]. \label{Hugo} \end{equation}
The identity between Eqs.~(\ref{Humphrey}) and (\ref{Hugo}) must hold for all sequences $\bbox{\alpha}$. It follows that \begin{equation} \rho^{(N)}=\int_{{\cal S}_{d^2}} P(\vec{p})\,A_{\vec{p}}^{\otimes N} \,\,d\vec{p}\;. \label{Howard} \end{equation} This is because the operators $E_{\alpha_1}\otimes\cdots\otimes E_{\alpha_N}$ form a complete basis for the vector space of operators on ${\cal H}_d^{\otimes N}$.
Equation~(\ref{Howard}) already looks very much like our sought after goal, but we are not there quite yet. At this stage one has no right to assert that the $A_{\vec{p}}$ are density operators. Indeed they generally are not: the integral~(\ref{Helmut}) ranges over some points $\vec{p}$ in ${\cal S}_{d^{\,2}}$ that cannot be generated by applying the measurement $\cal E$ to {\it any\/}
quantum state. Hence some of the $A_{\vec{p}}$ in the integral representation are ostensibly nonphysical. An example might be helpful. Consider any four spin-$1\over2$ pure states $|\vec n_\alpha\rangle$ on ${\cal H}_2$ for which the vectors $\vec n_\alpha$ in the Bloch-sphere representation~(\ref{eq-poincare}) are the vertices of a regular tetrahedron. One can check that the elements $E_\alpha=\frac{1}{2}|\vec n_\alpha\rangle\langle\vec n_\alpha|$ comprise a minimal informationally complete POVM. For this POVM, because of the factor $\frac{1}{2}$ in front of each projector, it is always the case that $p_\alpha={\rm tr}(\rho E_\alpha)\le\frac{1}{2}$. Therefore, this measurement simply cannot generate a probability distribution like $\vec{p}=\big(\frac{3}{4},\frac{1}{8},\frac{1}{16},\frac{1}{16}\big)$, which is nevertheless in the domain of the integral in Eq.~(\ref{Helmut}).
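The tetrahedron example can be verified directly (a numerical sketch of ours): the four elements $E_\alpha={1\over4}(I+\vec n_\alpha\cdot\bbox{\sigma})$ resolve the identity, are informationally complete, and never assign any outcome a probability above $1/2$:

```python
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# Bloch vectors at the vertices of a regular tetrahedron (they sum to zero)
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

# E_a = (1/2)|n_a><n_a| = (1/4)(I + n_a . sigma)
povm = [0.25 * (I2 + sum(n[i] * sig[i] for i in range(3))) for n in verts]
assert np.allclose(sum(povm), I2)                        # resolves the identity
M = np.array([E.flatten() for E in povm])
assert np.linalg.matrix_rank(M) == 4                     # minimal IC

# Every quantum state gives p_a = (1 + S.n_a)/4 <= 1/2
rng = np.random.default_rng(0)
for _ in range(200):
    S = rng.normal(size=3)
    S /= max(1.0, np.linalg.norm(S))                     # keep |S| <= 1
    rho = 0.5 * (I2 + sum(S[i] * sig[i] for i in range(3)))
    p = [np.trace(rho @ E).real for E in povm]
    assert np.isclose(sum(p), 1.0) and max(p) <= 0.5 + 1e-12
```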
The solution to this conundrum is provided by the overall requirement that $\rho^{(N)}$ be a valid density operator. This requirement places a significantly more stringent constraint on the distribution $P(\vec{p})$ than was the case in the classical representation theorem. In particular, it must be the case that $P(\vec{p})$ vanishes whenever the corresponding $A_{\vec{p}}$ is not a proper density operator. Let us move toward showing that.
We first need to delineate two properties of the operators $A_{\vec{p}}$. One is that they are Hermitian. The argument is simply \begin{equation} {\rm tr}\big(E_\alpha A_{\vec{p}}^\dagger\big) = {\rm tr}\!\left[\big(A_{\vec{p}}E_\alpha\big)^\dagger\right] = \big[{\rm tr}\big(A_{\vec{p}}E_\alpha\big)\big]^* = {\rm tr}\big(A_{\vec{p}}E_\alpha\big)\;, \end{equation} where the last step follows from Eq.~(\ref{Hamish}). Because the $E_\alpha$ are a complete set of linearly independent operators, it follows that $A_{\vec{p}}^\dagger=A_{\vec{p}}$. The second property tells us something about the eigenvalues of $A_{\vec{p}}$: \begin{equation} 1=\sum_\alpha p_\alpha={\rm tr}\!\left(A_{\vec{p}}\sum_\alpha E_\alpha\right)={\rm tr}A_{\vec{p}}\;. \label{HepPlease} \end{equation} In other words the (real) eigenvalues of $A_{\vec{p}}$ must sum to unity.
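Both properties can be seen concretely by solving Eq.~(\ref{Hamish}) numerically. Using the tetrahedron POVM described earlier as the measurement (our choice of illustration; the probability vectors are invented), any probability vector determines a Hermitian, unit-trace operator, while the unattainable distribution $\big({3\over4},{1\over8},{1\over16},{1\over16}\big)$ determines one with a negative eigenvalue:

```python
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [0.25 * (I2 + sum(n[i] * sig[i] for i in range(3))) for n in verts]

# tr(A E) = vec(E^T) . vec(A), so stacking the rows vec(E_a^T) gives an
# invertible 4 x 4 linear system for vec(A_p)
M = np.array([E.T.flatten() for E in povm])

def A_from_probs(p):
    """The unique operator satisfying tr(A E_a) = p_a for all a."""
    return np.linalg.solve(M, np.asarray(p, dtype=complex)).reshape(2, 2)

A = A_from_probs([0.45, 0.25, 0.2, 0.1])
assert np.allclose(A, A.conj().T)             # Hermitian
assert np.isclose(np.trace(A).real, 1.0)      # eigenvalues sum to unity

# The distribution (3/4, 1/8, 1/16, 1/16) is not achievable by this POVM:
# the operator it determines has a negative eigenvalue
A_bad = A_from_probs([3 / 4, 1 / 8, 1 / 16, 1 / 16])
assert np.linalg.eigvalsh(A_bad).min() < 0
```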
We now show that these two facts go together to imply that if there are any nonphysical $A_{\vec{p}}$ with positive weight $P(\vec{p})$ in Eq.~(\ref{Howard}), then one can find a measurement for which $\rho^{(N)}$ produces illegal
``probabilities'' for sufficiently large $N$. For instance, take a particular $A_{\vec{q}}$ in Eq.~(\ref{Howard}) that has at least one negative eigenvalue $-\lambda<0$. Let $|\psi\rangle$ be a normalized eigenvector corresponding to that eigenvalue and consider the binary-valued POVM consisting of the elements
$\widetilde{\Pi}=|\psi\rangle\langle\psi|$ and $\Pi=I-\widetilde{\Pi}$. Since ${\rm tr}\big(A_{\vec{q}}\widetilde{\Pi}\big)=-\lambda<0$, it is true by Eq.~(\ref{HepPlease}) that ${\rm tr}\big(A_{\vec{q}}\Pi\big)=1+\lambda >1$. Consider repeating this measurement over and over. In particular, let us tabulate the probability of getting outcome $\Pi$ for every single trial to the exclusion of all other outcomes.
The gist of the contradiction is most easily seen by {\it imagining\/} that Eq.~(\ref{Howard}) is really a discrete sum: \begin{equation} \rho^{(N)}= P(\vec{q})\,A_{\vec{q}}^{\otimes N}+\sum_{\vec{p}\ne\vec{q}}P(\vec{p})\,A_{\vec{p}}^{\otimes N}\;. \end{equation} The probability of $N$ occurrences of the outcome $\Pi$ is thus \begin{eqnarray} {\rm tr}\big(\rho^{(N)}\Pi^{\otimes N}\big) &=& P(\vec{q})\,{\rm tr}(A_{\vec{q}}^{\otimes N}\Pi^{\otimes N}) +\sum_{\vec{p}\ne\vec{q}}P(\vec{p})\,{\rm tr}(A_{\vec{p}}^{\otimes N}\Pi^{\otimes N}) \nonumber\\ &=& P(\vec{q})\,[{\rm tr}(A_{\vec{q}}\Pi)]^N +\sum_{\vec{p}\ne\vec{q}}P(\vec{p})\,[{\rm tr}(A_{\vec{p}}\Pi)]^N \nonumber\\ &=& P(\vec{q})(1+\lambda)^N +\sum_{\vec{p}\ne\vec{q}}P(\vec{p})\,[{\rm tr}(A_{\vec{p}}\Pi)]^N\;. \label{Hanna} \end{eqnarray} There are no assurances in general that the rightmost term in Eq.~(\ref{Hanna}) is nonnegative, but if $N$ is an even number it must be. It follows that if $P(\vec{q})>0$, for sufficiently large {\it even\/} $N$, \begin{equation} {\rm tr}\big(\rho^{(N)}\Pi^{\otimes N}\big)>1\;, \label{BigBoy} \end{equation} contradicting the assumption that it should always be a probability.
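A toy numerical version of this argument (our own numbers: $\lambda=0.2$, weight $P(\vec q)=0.1$, and a maximally mixed state for the physical part) shows the ``probability'' exceeding unity once $N$ is large enough:

```python
import numpy as np

lam = 0.2
A_q = np.diag([1 + lam, -lam])        # Hermitian, unit trace, eigenvalue -lam < 0
rho_p = 0.5 * np.eye(2)               # a genuine density operator for the rest

# Pi = I - |psi><psi|, with |psi> the negative-eigenvalue eigenvector e_2,
# so that tr(A_q Pi) = 1 + lam > 1
Pi = np.diag([1.0, 0.0])
assert np.isclose(np.trace(A_q @ Pi), 1 + lam)

# Probability of N straight Pi outcomes, as in Eq. (Hanna):
# P(q)[tr(A_q Pi)]^N + (1 - P(q))[tr(rho_p Pi)]^N
def prob_all_Pi(N):
    return 0.1 * np.trace(A_q @ Pi) ** N + 0.9 * np.trace(rho_p @ Pi) ** N

assert prob_all_Pi(2) < 1.0           # innocuous for small N ...
assert prob_all_Pi(40) > 1.0          # ... but exceeds 1, so no probability
```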
All we need to do now is transcribe the argument leading to Eq.~(\ref{BigBoy}) to the general integral case of Eq.~(\ref{Howard}). Note that by Eq.~(\ref{Hamish}), the quantity ${\rm tr}\big(A_{\vec{p}}\Pi\big)$ is a (linear) continuous function of the parameter $\vec{p}$. Therefore, for any
$\epsilon>0$, there exists a $\delta>0$ such that $\big|{\rm tr}\big(A_{\vec{p}}\Pi\big)- {\rm tr}\big(A_{\vec{q}}\Pi\big)\big|\le\epsilon$ whenever
$|\vec{p}-\vec{q}|\le\delta$, i.e., whenever $\vec{p}$ is contained within an open ball $B_\delta(\vec{q})$ centered at $\vec{q}$. Choose $\epsilon<\lambda$, and define $\overline{B}_\delta$ to be the intersection of $B_\delta(\vec{q})$ with the probability simplex. For $\vec{p}\in \overline{B}_\delta$, it follows that \begin{equation} {\rm tr}\big(A_{\vec{p}}\Pi\big)\ge 1+\lambda-\epsilon>1\;. \end{equation} If we consider an $N$ that is even, $\big[{\rm tr}\big(A_{\vec{p}}\Pi\big)\big]^N$ is nonnegative in all of ${\cal S}_{d^2}$, and we have that the probability of the outcome $\Pi^{\otimes N}$ satisfies \begin{eqnarray} {\rm tr}\big(\rho^{(N)}\Pi^{\otimes N}\big) &=& \int_{{\cal S}_{d^2}} P(\vec{p})\,\big[{\rm tr}\big(A_{\vec{p}}\Pi\big)\big]^N \,d\vec{p}\; \nonumber\\ &=& \int_{{\cal S}_{d^2}-\overline{B}_\delta} P(\vec{p})\,\big[{\rm tr}\big(A_{\vec{p}}\Pi\big)\big]^N \,d\vec{p}
\; +\, \int_{\overline{B}_\delta} P(\vec{p})\,\big[{\rm tr}\big(A_{\vec{p}}\Pi\big)\big]^N \,d\vec{p} \nonumber\\ &\ge& \int_{\overline{B}_\delta} P(\vec{p})\,\big[{\rm tr}\big(A_{\vec{p}}\Pi\big)\big]^N \,d\vec{p} \nonumber\\ &\ge& (1+\lambda-\epsilon)^N \int_{\overline{B}_\delta} P(\vec{p})\,d\vec{p}\;. \label{Homer} \end{eqnarray} Unless \begin{equation} \int_{\overline{B}_\delta} P(\vec{p})\,d\vec{p}=0\;, \end{equation} the lower bound (\ref{Homer}) for the probability of the outcome $\Pi^{\otimes N}$ becomes arbitrarily large as $N\rightarrow\infty$. Thus we conclude that the requirement that $\rho^{(N)}$ be a proper density operator constrains $P(\vec{p})$ to vanish almost everywhere in $\overline{B}_\delta$ and, consequently, to vanish almost everywhere that $A_{\vec{p}}$ is not a physical state.
Using Eq.~(\ref{Hamish}), we can trivially transform the integral representation (\ref{Howard}) to one directly over the convex set of density operators ${\cal D}_d$ and be left with the following statement. Under the sole assumption that the density operator $\rho^{(N)}$ is exchangeable, there exists a unique probability density $P(\rho)$ such that \begin{equation} \rho^{(N)}=\int_{{\cal D}_d} P(\rho)\, \rho^{\otimes N}\, d\rho\;. \end{equation} This concludes the proof of the quantum de Finetti representation theorem.
\section{Outlook} \label{sec-outlook}
Since the analysis in the previous sections concerned {\it only\/} the case of quantum-state tomography, we certainly have not written the last word on unknown quantum states in the sense advocated in Sec.~\ref{sec-intro}. There are clearly other examples that need separate analyses. For instance, the use of unknown states in quantum teleportation~\cite{Bennett1993}---where a {\it single\/} realization of an unknown state is ``teleported'' with the aid of previously distributed quantum entanglement and a classical side channel---has not been touched upon. The quantum de Finetti theorem, therefore, is not the end of the road for detailing implications of an information-based interpretation of quantum mechanics. What is important, we believe, is that taking the time to think carefully about the referents of various states in a problem can lead to insights into the structure of quantum mechanics that cannot be found by other means.
For instance, one might ask, ``Was this theorem not inevitable?'' After all, is it not already well established that quantum theory is, in some sense, just a noncommutative generalization of probability theory? Should not all the main theorems in classical probability theory carry over to the quantum case~\cite{CommentAccardi}? One can be skeptical in this way, of course, but then one will miss a large part of the point. There are any number of noncommutative generalizations to probability theory that one can concoct~\cite{Hiai2000}. The deeper issue is, what is it in the natural world that forces quantum theory to the particular noncommutative structure it actually has~\cite{Wheeler2000}? It is not a foregone conclusion, for instance, that every theory has a de Finetti representation theorem within it.
Some insight in this regard can be gained by considering very simple modifications of quantum theory. To give a concrete example, let us take the case of real-Hilbert-space quantum mechanics. This theory is the same as ordinary quantum mechanics in all aspects {\it except\/} that the Hilbert spaces are defined over the field of real numbers rather than the complex numbers. It turns out that this is a case where the quantum de Finetti theorem fails. Let us start to explain why by first describing how the particular proof technique used above loses validity in the new context.
In order to specify uniquely a Hermitian operator $\rho^{(N)}$ in going from Eq.~(\ref{Hugo}) to (\ref{Howard}), the proof made central use of the fact that a complete basis $\{E_{1},\ldots,E_{d^{\,2}}\}$ for the vector space of operators on ${\cal H}_d$ can be used to generate a complete basis for the operators on ${\cal H}_d^{\otimes N}$---one need only take the $d^{\,2N}$ operators of the form $E_{\alpha_1}\!\otimes\cdots\otimes E_{\alpha_N}$, $1\le\alpha_j\le d^{\,2}$. (All we actually needed was that a basis for the real vector space of Hermitian operators on ${\cal H}_d$ can be used to generate a basis for the real vector space of Hermitian operators on ${\cal H}_d^{\otimes N}$, but since the vector space of all operators is the complexification of the real vector space of Hermitian operators, this seemingly weaker requirement is, in fact, no different.) This technique works because the dimension of the space of $d^N\!\times d^N$ matrices is $(d^{\,2})^N$, the $N$th power of the dimension of the space of $d\times d$ matrices.
This technique does not carry over to real Hilbert spaces. In a real Hilbert space, states and POVM elements are represented by real symmetric matrices. The dimension of the vector space of real symmetric matrices acting on a $d$-dimensional real Hilbert space is ${1\over2}d(d+1)$, this then being the number of elements in a minimal informationally complete POVM. The task in going from Eq.~(\ref{Hugo}) to (\ref{Howard}) would be to specify the real matrix $\rho^{(N)}$. When $N\ge2$, however, the dimension of the space of $d^N\!\times d^N$ real symmetric matrices is strictly greater than the $N$th power of the dimension of the space of $d\times d$ real symmetric matrices, i.e., \begin{equation} {1\over2}d^{N}(d^N+1)>\left({1\over2}d(d+1)\right)^{\! N}\;. \end{equation} Hence, specifying Eq.~(\ref{Hugo}) for all outcome sequences $\bbox{\alpha}=(\alpha_1,\ldots,\alpha_N)$ is not sufficient to specify a single operator $\rho^{(N)}$. This line of reasoning indicates that the particular {\it proof\/} of the quantum de Finetti theorem presented in Sec.~\ref{sec-proof} fails for real Hilbert spaces, but it does not establish that the theorem itself fails. The main point of this discussion is that it draws attention to the crucial difference between real-Hilbert-space and complex-Hilbert-space quantum mechanics---a fact emphasized previously by Araki~\cite{Araki1980} and Wootters~\cite{Wootters1990}.
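The dimension count is elementary to confirm (a quick check of ours):

```python
def sym_dim(n):
    """Dimension of the space of n x n real symmetric matrices."""
    return n * (n + 1) // 2

# Smallest case: d = 2, N = 2 gives 10 > 9
assert sym_dim(2 ** 2) == 10 and sym_dim(2) ** 2 == 9

# The gap holds for every d >= 2 and N >= 2
for d in range(2, 7):
    for N in range(2, 5):
        assert sym_dim(d ** N) > sym_dim(d) ** N
```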
To show that the theorem fails, we need a counterexample. One such example is provided by the $N$-system state \begin{equation} \rho^{(N)}={1\over2}\,\rho_+^{\otimes N} + {1\over2}\,\rho_-^{\otimes N} \;, \label{eq-real} \end{equation} where \begin{equation} \rho_+={1\over2}(I+\sigma_2)\qquad\mbox{and} \qquad\rho_-={1\over2}(I-\sigma_2)\;, \end{equation} and where $\sigma_2$ was defined in Eq.~(\ref{eq-pauli}). In complex-Hilbert-space quantum mechanics, this is clearly a valid density operator: It corresponds to an equally weighted mixture of $N$ spin-up particles and $N$ spin-down particles in the $y$ direction. The state $\rho^{(N)}$ is clearly exchangeable, and the decomposition in Eq.~(\ref{eq-real}) is unique according to the quantum de Finetti theorem.
Now consider $\rho^{(N)}$ as an operator in real-Hilbert-space quantum mechanics. Despite the apparent use of the imaginary number $i$ in the $\sigma_2$ operator, $\rho^{(N)}$ remains a valid quantum state. This is because, upon expanding the right-hand side of Eq.~(\ref{eq-real}), all the terms with an odd number of $\sigma_2$ operators cancel. Yet, even though it is an exchangeable density operator, it cannot be written in the de Finetti form of Eq.~(\ref{eq-qdefinetti}) using only real symmetric operators. This follows because Eq.~(\ref{eq-real}), the unique de Finetti form, contains $\sigma_2$, which is an antisymmetric operator and cannot be written in terms of symmetric operators. Hence the de Finetti representation theorem does not hold in real-Hilbert-space quantum mechanics.
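The cancellation of the odd-$\sigma_2$ terms can also be checked numerically. The sketch below (an illustrative aside using plain Python lists rather than any particular linear-algebra library) builds $\rho^{(N)}$ for $N=2,3$ and confirms that every matrix entry is real:

```python
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    m = len(B)
    size = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(size)] for i in range(size)]

def kron_power(A, N):
    out = A
    for _ in range(N - 1):
        out = kron(out, A)
    return out

I2 = [[1, 0], [0, 1]]
s2 = [[0, -1j], [1j, 0]]                      # sigma_2 (Pauli y)
rho_p = [[(I2[i][j] + s2[i][j]) / 2 for j in range(2)] for i in range(2)]
rho_m = [[(I2[i][j] - s2[i][j]) / 2 for j in range(2)] for i in range(2)]

for N in (2, 3):
    P, M = kron_power(rho_p, N), kron_power(rho_m, N)
    rho = [[(P[i][j] + M[i][j]) / 2 for j in range(2 ** N)]
           for i in range(2 ** N)]
    # all entries are real: the odd-sigma_2 terms have cancelled
    assert all(abs(x.imag) < 1e-12 for row in rho for x in row)
```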
Similar considerations show that in quaternionic quantum mechanics (a theory again precisely the same as ordinary quantum mechanics except that it uses Hilbert spaces over the quaternionic field~\cite{Adler1995}), the connection between exchangeable density operators and decompositions of the de Finetti form~(\ref{eq-qdefinetti}) breaks down. The failure mode is, however, even more disturbing than for real Hilbert spaces. In quaternionic quantum mechanics, most operators of the de Finetti form~(\ref{eq-qdefinetti}) do not correspond to valid quaternionic quantum states, even though the states $\rho$ in the integral are valid quaternionic states. The reason is that tensor products of quaternionic Hermitian operators are not necessarily Hermitian.
In classical probability theory, exchangeability characterizes those situations where the only data relevant for updating a probability distribution are frequency data, i.e., the numbers $n_j$ in Eq.~(\ref{eq-repr}) which tell how often the result $j$ occurred. The quantum de Finetti representation shows that the same is true in quantum mechanics: Frequency data (with respect to a sufficiently robust measurement) are sufficient for updating an exchangeable state to the point where nothing more can be learned from sequential measurements; that is, one obtains a convergence of the form~(\ref{HannibalLecter}), so that ultimately any further measurements on the individual systems are statistically independent. That there is no quantum de Finetti theorem in real Hilbert space means that there are fundamental differences between real and complex Hilbert spaces with respect to learning from measurement results. The ultimate reason for this is that in ordinary, complex-Hilbert-space quantum mechanics, exchangeability implies separability, i.e., the absence of entanglement. This follows directly from the quantum de Finetti theorem, because states of the form Eq.~(\ref{eq-qdefinetti}) are not entangled. This implication does not carry over to real Hilbert spaces. By the same reasoning used to show that the de Finetti theorem itself fails, the state in Eq.~(\ref{eq-real}) cannot be written as {\it any\/} mixture of real product states. Interpreted as a state in real Hilbert space, the state in Eq.~(\ref{eq-real}) is thus not separable, but entangled~\cite{Caves2000}. In a real Hilbert space, exchangeable states can be entangled and local measurements cannot reveal that.
Beyond these conceptual points, we also believe that the technical methods exhibited here might be of interest in the practical arena. Recently there has been a large literature on which classes of measurements have various advantages for tomographic purposes~\cite{QuorumLump,QuorumOld}. To our knowledge, the present work is the only one to consider tomographic reconstruction based upon minimal informationally complete POVMs. One can imagine several advantages to this approach, owing to the fact that such POVMs with rank-one elements are automatically extreme points in the convex set of all measurements~\cite{Fujiwara1998}.
Furthermore, the classical de Finetti theorem is only the tip of an iceberg with respect to general questions in statistics to do with exchangeability and various generalizations of the concept~\cite{Aldous1985}. One should expect no less of quantum exchangeability studies. In particular here, we are thinking of things like the question of representation theorems for finitely exchangeable distributions~\cite{Jaynes1986,Diaconis1977}. Just as our method for proving the quantum de Finetti theorem was able to rely heavily on the classical theorem, so one might expect similar benefits from the classical results in the case of quantum finite exchangeability---although there will certainly be new aspects to the quantum case due to the possibility of entanglement in finitely exchangeable states. A practical application of such representation theorems could be their potential to contribute to the solution of some outstanding problems in constructing security proofs for various quantum key distribution schemes~\cite{Gottesman2000}.
In general, our effort in the present paper forms part of a larger program to promote a consistent information-based interpretation of quantum mechanics and to delineate its consequences. We find it encouraging that the fruits of this effort may not be restricted solely to an improved understanding of quantum mechanics, but also possess the potential to contribute to practical applications.
\acknowledgments
We thank Ben Schumacher for discussions. Most of this work was carried out through the hospitality of the Benasque Center for Physics, Benasque, Spain during their program on Progress in Quantum Computing, Cryptography and Communication, 5--25 July 1998, and the hospitality of the Isaac Newton Institute for Mathematical Sciences, Cambridge, England during their Workshop on Complexity, Computation and the Physics of Information, June--July 1999. CMC was supported in part by Office of Naval Research Grant No.~N00014-93-1-0116.
\appendix
\section{Proof of the Classical de Finetti Theorem}
In this Appendix we reprise the admirably simple proof of the classical de Finetti representation theorem given by Heath and Sudderth \cite{Heath1976} for the case of binary variables.
Suppose we have an exchangeable probability assignment for $M$ binary random variables, $x_1,x_2,\ldots,x_M$, taking on the values 0 and 1. Let $p(n,N)$, $N\le M$, be the probability for $n$ 1s in $N$ trials. Exchangeability guarantees that \begin{equation} p(n,N)= {N\choose n} p(x_1=1,\ldots,x_n=1,x_{n+1}=0,\ldots,x_N=0)\;. \end{equation} We can condition the probability on the right on the occurrence of $m$ 1s in all $M$ trials: \begin{equation} p(n,N)={N\choose n} \sum_{m=0}^M p(x_1=1,\ldots,x_n=1,x_{n+1}=0,\ldots,x_N=0\mid m,M) p(m,M)\;. \end{equation} Given $m$ 1s in $M$ trials, exchangeability guarantees that the $\displaystyle{{M\choose m}}$ sequences are equally likely. Thus the situation is identical to drawing without replacement from an urn of $M$ balls of which $m$ are labeled 1, and we have that \begin{eqnarray} && p(x_1=1,\ldots,x_n=1,x_{n+1}=0,\ldots,x_N=0\mid m,M) \nonumber \\ &&\hphantom{p(x_1=1,}{}= {m\over M} {m-1\over M-1} \cdots {m-(n-1)\over M-(n-1)} {M-m\over M-n} {M-m-1\over M-n-1} \cdots {M-m-(N-n-1)\over M-(N-1)} \nonumber \\ &&\hphantom{p(x_1=1,}{}= {(m)_n(M-m)_{N-n}\over(M)_N} \;, \end{eqnarray} where \begin{equation} (r)_q\equiv\prod_{j=0}^{q-1}(r-j)=r(r-1)\cdots(r-q+1)={r!\over(r-q)!}\;. \end{equation} The result is that \begin{equation} p(n,N)={N\choose n} \sum_{m=0}^M {(m)_n(M-m)_{N-n}\over(M)_N} p(m,M)\;. \end{equation}
What remains is to take the limit $M\rightarrow\infty$, which we can do because of the extendibility property of exchangeable probabilities. We can write $p(n,N)$ as an integral \begin{equation} p(n,N)={N\choose n} \int_0^1 {(zM)_n\bigl((1-z)M\bigr)_{N-n}\over(M)_N} P_M(z)\,dz\;, \end{equation} where \begin{equation} P_M(z)= \sum_{m=0}^M p(zM,M)\delta (z-m/M) \end{equation} is a distribution concentrated at the $M$-trial frequencies $m/M$. In the limit $M\rightarrow\infty$, $P_M(z)$ converges to a continuous distribution $P_\infty(z)$, and the other terms in the integrand go to $z^n(1-z)^{N-n}$, giving \begin{equation} p(n,N)={N\choose n} \int_0^1 z^n(1-z)^{N-n} P_\infty(z)\,dz \;. \end{equation} We have demonstrated the classical de Finetti representation theorem for binary variables: If $p(n,N)$ is part of an infinite exchangeable sequence, then it has a de Finetti representation in terms of a ``probability on probabilities'' $P_\infty(z)$. The proof can readily be extended to nonbinary variables.
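The finite-$M$ identity above holds exactly for any exchangeable assignment, and is easy to test numerically. The following sketch (an illustrative aside; the mixture biases and weights are arbitrary choices, not taken from the paper) builds an exchangeable distribution as a mixture of two i.i.d. coins and checks the urn formula for $p(n,N)$ against the direct computation:

```python
from math import comb

def falling(r, q):
    """(r)_q = r (r-1) ... (r-q+1)"""
    out = 1
    for j in range(q):
        out *= r - j
    return out

def p_mix(n, N, mixture):
    """p(n, N) for a mixture of i.i.d. coins with biases z and weights w."""
    return sum(w * comb(N, n) * z ** n * (1 - z) ** (N - n)
               for z, w in mixture)

mixture = [(0.2, 0.5), (0.7, 0.5)]      # arbitrary exchangeable example
M, N = 40, 5
lhs = [p_mix(n, N, mixture) for n in range(N + 1)]
rhs = [comb(N, n)
       * sum(falling(m, n) * falling(M - m, N - n) / falling(M, N)
             * p_mix(m, M, mixture) for m in range(M + 1))
       for n in range(N + 1)]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```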
\end{document}
Sum of Reciprocals of Squares Alternating in Sign/Proof 1
\(\displaystyle \dfrac {\pi^2} {12} = \sum_{n \mathop = 1}^\infty \dfrac {\left({-1}\right)^{n + 1} } {n^2} = \frac 1 {1^2} - \frac 1 {2^2} + \frac 1 {3^2} - \frac 1 {4^2} + \cdots\)
Let $f \left({x}\right)$ be the real function defined on $\left({0 \,.\,.\, 2 \pi}\right)$ as:
$f \left({x}\right) = \begin{cases} \left({x - \pi}\right)^2 & : 0 < x \le \pi \\ \pi^2 & : \pi < x < 2 \pi \end{cases}$
From Fourier Series: Square of x minus pi, Square of pi, its Fourier series can be expressed as:
$(1): \quad f \left({x}\right) \sim \displaystyle \frac {2 \pi^2} 3 + \sum_{n \mathop = 1}^\infty \left({\frac {2 \cos n x} {n^2} + \left({\frac {\left({-1}\right)^n \pi} n + \frac {2 \left({\left({-1}\right)^n - 1}\right)} {\pi n^3} }\right) \sin n x}\right)$
We have that:
\(\displaystyle f \left({\pi - 0}\right) = \left({\pi - \pi}\right)^2 = 0\)
\(\displaystyle f \left({\pi + 0}\right) = \pi^2\)
where $f \left({\pi - 0}\right)$ and $f \left({\pi + 0}\right)$ denote the limit from the left and limit from the right respectively of $f \left({\pi}\right)$.
It is apparent that $f \left({x}\right)$ satisfies the Dirichlet conditions:
$(\mathrm D 1): \quad f$ is bounded on $\left({0 \,.\,.\, 2 \pi}\right)$
$(\mathrm D 2): \quad f$ has a finite number of local maxima and local minima.
$(\mathrm D 3): \quad f$ has $1$ point of discontinuity, which is finite.
Hence from Fourier's Theorem:
\(\displaystyle f \left({\pi}\right) = \frac {f \left({\pi - 0}\right) + f \left({\pi + 0}\right)} 2 = \frac {0 + \pi^2} 2 = \frac {\pi^2} 2\)
Thus setting $x = \pi$ in $(1)$:
\(\displaystyle f \left({\pi}\right) = \frac {2 \pi^2} 3 + \sum_{n \mathop = 1}^\infty \left({\frac {2 \cos n \pi} {n^2} + \left({\frac {\left({-1}\right)^n \pi} n + \frac {2 \left({\left({-1}\right)^n - 1}\right)} {\pi n^3} }\right) \sin n \pi}\right)\)
\(\displaystyle \leadsto \ \ \frac {\pi^2} 2 = \frac {2 \pi^2} 3 + 2 \sum_{n \mathop = 1}^\infty \frac {\cos n \pi} {n^2}\) (Sine of Multiple of Pi)
\(\displaystyle \leadsto \ \ \frac {\pi^2} 4 = \frac {\pi^2} 3 + \sum_{n \mathop = 1}^\infty \frac {\left({-1}\right)^n} {n^2}\) (Cosine of Multiple of Pi and simplification)
\(\displaystyle \leadsto \ \ -\frac {\pi^2} {12} = \sum_{n \mathop = 1}^\infty \frac {\left({-1}\right)^n} {n^2}\)
\(\displaystyle \leadsto \ \ \frac {\pi^2} {12} = \sum_{n \mathop = 1}^\infty \frac {\left({-1}\right)^{n + 1} } {n^2}\) (changing sign and subsuming into powers of $-1$)
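As a numerical sanity check (an aside, not part of the proof), the partial sums of the alternating series do converge to $\pi^2 / 12$, with truncation error bounded by the first omitted term:

```python
from math import pi

terms = 10000
s = sum((-1) ** (n + 1) / n ** 2 for n in range(1, terms + 1))
# alternating series test: the error is at most the next term, 1/(terms+1)^2
assert abs(s - pi ** 2 / 12) < 1 / (terms + 1) ** 2 + 1e-12
```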
1961: I.N. Sneddon: Fourier Series: Chapter One: $\S 2$. Fourier Series: Example $1$
This page was last modified on 20 March 2018, at 17:03.
Metamath
Metamath is a formal language and an associated computer program (a proof checker) for archiving, verifying, and studying mathematical proofs.[2] Several databases of proved theorems have been developed using Metamath covering standard results in logic, set theory, number theory, algebra, topology and analysis, among others.[3]
For the study of mathematics using mathematical methods, see Metamathematics.
• Developer(s): Norman Megill
• Initial release: 0.07 (June 2005)
• Stable release: 0.198[1] (7 August 2021)
• Repository: github.com/metamath/metamath-exe
• Written in: ANSI C
• Operating system: Linux, Windows, macOS
• Type: Computer-assisted proof checking
• License: GNU General Public License (Creative Commons Public Domain Dedication for databases)
• Website: metamath.org
As of February 2022, the set of proved theorems using Metamath is one of the largest bodies of formalized mathematics, containing in particular proofs of 74[4] of the 100 theorems of the "Formalizing 100 Theorems" challenge,[5] making it fourth after HOL Light, Isabelle, and Coq, but before Mizar, ProofPower, Lean, Nqthm, ACL2, and Nuprl. There are at least 19 proof verifiers for databases that use the Metamath format.[6]
This project is the first one of its kind that allows for interactive browsing of its formalized theorems database in the form of an ordinary website.[7]
Metamath language
The Metamath language is a metalanguage, suitable for developing a wide variety of formal systems. The Metamath language has no specific logic embedded in it. Instead, it can simply be regarded as a way to prove that inference rules (asserted as axioms or proven later) can be applied. The largest database of proved theorems follows conventional ZFC set theory and classical logic, but other databases exist and others can be created.
The Metamath language design is focused on simplicity; the language employed to state the definitions, axioms, inference rules and theorems is composed of only a handful of keywords, and all the proofs are checked using one simple algorithm based on the substitution of variables (with optional provisos for what variables must remain distinct after a substitution is made).[8]
Language basics
The set of symbols that can be used for constructing formulas is declared using $c (constant symbols) and $v (variable symbols) statements; for example:
$( Declare the constant symbols we will use $)
$c 0 + = -> ( ) term wff |- $.
$( Declare the metavariables we will use $)
$v t r s P Q $.
The grammar for formulas is specified using a combination of $f (floating (variable-type) hypotheses) and $a (axiomatic assertion) statements; for example:
$( Specify properties of the metavariables $)
tt $f term t $.
tr $f term r $.
ts $f term s $.
wp $f wff P $.
wq $f wff Q $.
$( Define "wff" (part 1) $)
weq $a wff t = r $.
$( Define "wff" (part 2) $)
wim $a wff ( P -> Q ) $.
Axioms and rules of inference are specified with $a statements along with ${ and $} for block scoping and optional $e (essential hypotheses) statements; for example:
$( State axiom a1 $)
a1 $a |- ( t = r -> ( t = s -> r = s ) ) $.
$( State axiom a2 $)
a2 $a |- ( t + 0 ) = t $.
${
min $e |- P $.
maj $e |- ( P -> Q ) $.
$( Define the modus ponens inference rule $)
mp $a |- Q $.
$}
Using one construct, $a statements, to capture syntactic rules, axiom schemas, and rules of inference is intended to provide a level of flexibility similar to higher order logical frameworks without a dependency on a complex type system.
Proofs
Theorems (and derived rules of inference) are written with $p statements; for example:
$( Prove a theorem $)
th1 $p |- t = t $=
$( Here is its proof: $)
tt tze tpl tt weq tt tt weq tt a2 tt tze tpl
tt weq tt tze tpl tt weq tt tt weq wim tt a2
tt tze tpl tt tt a1 mp mp
$.
Note the inclusion of the proof in the $p statement. It abbreviates the following detailed proof:
tt $f term t
tze $a term 0
1,2 tpl $a term ( t + 0 )
3,1 weq $a wff ( t + 0 ) = t
1,1 weq $a wff t = t
1 a2 $a |- ( t + 0 ) = t
1,2 tpl $a term ( t + 0 )
7,1 weq $a wff ( t + 0 ) = t
1,2 tpl $a term ( t + 0 )
9,1 weq $a wff ( t + 0 ) = t
1,1 weq $a wff t = t
10,11 wim $a wff ( ( t + 0 ) = t -> t = t )
1 a2 $a |- ( t + 0 ) = t
1,2 tpl $a term ( t + 0 )
14,1,1 a1 $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) )
8,12,13,15 mp $a |- ( ( t + 0 ) = t -> t = t )
4,5,6,16 mp $a |- t = t
The "essential" form of the proof elides syntactic details, leaving a more conventional presentation:
a2 $a |- ( t + 0 ) = t
a2 $a |- ( t + 0 ) = t
a1 $a |- ( ( t + 0 ) = t -> ( ( t + 0 ) = t -> t = t ) )
2,3 mp $a |- ( ( t + 0 ) = t -> t = t )
1,4 mp $a |- t = t
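The verification just traced is entirely mechanical, and a toy checker fits in a few dozen lines of Python. The sketch below is a hypothetical illustration, not mmverify.py or the Metamath program; it assumes the declarations tze ($a term 0) and tpl ($a term ( t + r )) implied by the detailed listing above. It replays the proof of th1 as a stack machine: each label pops its mandatory hypotheses, derives a substitution from the floating ones, checks the essential ones, and pushes the substituted conclusion.

```python
# each assertion: (mandatory hypotheses in declaration order, conclusion);
# "f" marks a floating ($f) hypothesis, "e" an essential ($e) one.
AXIOMS = {
    "tt":  ([], "term t"),
    "tze": ([], "term 0"),
    "tpl": ([("f", "term t"), ("f", "term r")], "term ( t + r )"),
    "weq": ([("f", "term t"), ("f", "term r")], "wff t = r"),
    "wim": ([("f", "wff P"), ("f", "wff Q")], "wff ( P -> Q )"),
    "a1":  ([("f", "term t"), ("f", "term r"), ("f", "term s")],
            "|- ( t = r -> ( t = s -> r = s ) )"),
    "a2":  ([("f", "term t")], "|- ( t + 0 ) = t"),
    "mp":  ([("f", "wff P"), ("f", "wff Q"),
             ("e", "|- P"), ("e", "|- ( P -> Q )")], "|- Q"),
}

def substitute(tokens, subst):
    out = []
    for tok in tokens:
        out.extend(subst.get(tok, [tok]))   # variables expand, constants pass
    return out

def verify(proof):
    stack = []
    for label in proof.split():
        hyps, conclusion = AXIOMS[label]
        cut = len(stack) - len(hyps)
        args, stack = stack[cut:], stack[:cut]
        subst = {}
        for (kind, hyp), arg in zip(hyps, args):
            toks = hyp.split()
            if kind == "f":         # floating: bind variable to expression
                assert arg[0] == toks[0], "type mismatch"
                subst[toks[1]] = arg[1:]
            else:                   # essential: must match after substitution
                assert substitute(toks, subst) == arg, "hypothesis mismatch"
        stack.append(substitute(conclusion.split(), subst))
    return stack

proof_th1 = ("tt tze tpl tt weq tt tt weq tt a2 tt tze tpl "
             "tt weq tt tze tpl tt weq tt tt weq wim tt a2 "
             "tt tze tpl tt tt a1 mp mp")
assert verify(proof_th1) == [["|-", "t", "=", "t"]]
```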
Substitution
All Metamath proof steps use a single substitution rule, which is just the simple replacement of a variable with an expression and not the proper substitution described in works on predicate calculus. Proper substitution, in Metamath databases that support it, is a derived construct instead of one built into the Metamath language itself.
The substitution rule makes no assumption about the logic system in use and only requires that the substitutions of variables are correctly done.
Here is a detailed example of how this algorithm works, using steps 1 and 2 of the theorem 2p2e4 in the Metamath Proof Explorer (set.mm). Let us explain how Metamath uses its substitution algorithm to check that step 2 is the logical consequence of step 1 when the theorem opreq2i is applied. Step 2 states that ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). It is the conclusion of the theorem opreq2i. The theorem opreq2i states that if A = B, then ( C F A ) = ( C F B ). This theorem would never appear under this cryptic form in a textbook, but its literate formulation is banal: when two quantities are equal, one can replace one by the other in an operation. To check the proof, Metamath attempts to unify ( C F A ) = ( C F B ) with ( 2 + 2 ) = ( 2 + ( 1 + 1 ) ). There is only one way to do so: unifying C with 2, F with +, A with 2 and B with ( 1 + 1 ). So now Metamath uses the premise of opreq2i. This premise states that A = B. As a consequence of its previous computation, Metamath knows that A should be substituted by 2 and B by ( 1 + 1 ). The premise A = B becomes 2 = ( 1 + 1 ), and thus step 1 is generated. In its turn, step 1 is unified with df-2. df-2 is the definition of the number 2 and states that 2 = ( 1 + 1 ). Here the unification is simply a matter of constants and is straightforward (there are no variables to substitute). So the verification is finished, and these two steps of the proof of 2p2e4 are correct.
When Metamath unifies ( 2 + 2 ) with B, it has to check that the syntactical rules are respected. In fact, B has the type class, so Metamath has to check that ( 2 + 2 ) is also of type class.
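The variable replacement at the heart of this check is simple to state in code. The fragment below (a hypothetical sketch, not the Metamath program itself) applies the substitution found for opreq2i and reproduces steps 2 and 1 of the example:

```python
def substitute(tokens, subst):
    """Replace each variable token by its assigned string of tokens."""
    out = []
    for tok in tokens:
        out.extend(subst.get(tok, [tok]))
    return out

# substitution found by unifying ( C F A ) = ( C F B ) with step 2
subst = {"C": ["2"], "F": ["+"], "A": ["2"], "B": "( 1 + 1 )".split()}

step2 = substitute("( C F A ) = ( C F B )".split(), subst)
assert " ".join(step2) == "( 2 + 2 ) = ( 2 + ( 1 + 1 ) )"

# applying the same substitution to the premise A = B yields step 1
step1 = substitute("A = B".split(), subst)
assert " ".join(step1) == "2 = ( 1 + 1 )"
```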
Metamath proof checker
The Metamath program is the original program created to manipulate databases written using the Metamath language. It has a text (command line) interface and is written in C. It can read a Metamath database into memory, verify the proofs of a database, modify the database (in particular by adding proofs), and write them back out to storage.
It has a prove command that enables users to enter a proof, along with mechanisms to search for existing proofs.
The Metamath program can convert statements to HTML or TeX notation; for example, it can output the modus ponens axiom from set.mm as:
$\vdash \varphi \quad \&\quad \vdash (\varphi \rightarrow \psi )\quad \Rightarrow \quad \vdash \psi $
Many other programs can process Metamath databases, in particular, there are at least 19 proof verifiers for databases that use the Metamath format.[9]
Metamath databases
The Metamath website hosts several databases that store theorems derived from various axiomatic systems. Most databases (.mm files) have an associated interface, called an "Explorer", which allows one to navigate the statements and proofs interactively on the website, in a user-friendly way. Most databases use a Hilbert system of formal deduction though this is not a requirement.
Metamath Proof Explorer
(Figure: a proof from the Metamath Proof Explorer)
• Type of site: Online encyclopedia
• Headquarters: USA
• Owner: Norman Megill
• Created by: Norman Megill
• URL: us.metamath.org/mpeuni/mmset.html
• Commercial: No
• Registration: No
The Metamath Proof Explorer (recorded in set.mm) is the main and by far the largest database, with over 23,000 proofs in its main part as of July 2019. It is based on classical first-order logic and ZFC set theory (with the addition of Tarski-Grothendieck set theory when needed, for example in category theory). The database has been maintained since its first proofs, which are dated August 1993. The database contains developments, among other fields, of set theory (ordinals and cardinals, recursion, equivalents of the axiom of choice, the continuum hypothesis...), the construction of the real and complex number systems, order theory, graph theory, abstract algebra, linear algebra, general topology, real and complex analysis, Hilbert spaces, number theory, and elementary geometry. This database was first created by Norman Megill, but as of 2019-10-04 there have been 48 contributors (including Norman Megill).[10]
The Metamath Proof Explorer references many text books that can be used in conjunction with Metamath.[11] Thus, people interested in studying mathematics can use Metamath in connection with these books and verify that the proved assertions match the literature.
Intuitionistic Logic Explorer
This database develops mathematics from a constructive point of view, starting with the axioms of intuitionistic logic and continuing with axiom systems of constructive set theory.
New Foundations Explorer
This database develops mathematics from Quine's New Foundations set theory.
Higher-Order Logic Explorer
This database starts with higher-order logic and derives equivalents to axioms of first-order logic and of ZFC set theory.
Databases without explorers
The Metamath website hosts a few other databases which are not associated with explorers but are nonetheless noteworthy. The database peano.mm written by Robert Solovay formalizes Peano arithmetic. The database nat.mm[12] formalizes natural deduction. The database miu.mm formalizes the MU puzzle based on the formal system MIU presented in Gödel, Escher, Bach.
Older explorers
The Metamath website also hosts a few older databases which are not maintained anymore, such as the "Hilbert Space Explorer", which presents theorems pertaining to Hilbert space theory which have now been merged into the Metamath Proof Explorer, and the "Quantum Logic Explorer", which develops quantum logic starting with the theory of orthomodular lattices.
Natural deduction
Because Metamath has a very generic concept of what a proof is (namely a tree of formulas connected by inference rules) and no specific logic is embedded in the software, Metamath can be used with systems of logic as different as Hilbert-style logics, sequent-based logics, or even lambda calculus.
However, Metamath provides no direct support for natural deduction systems. As noted earlier, the database nat.mm formalizes natural deduction. The Metamath Proof Explorer (with its database set.mm) instead uses a set of conventions that allow the use of natural deduction approaches within a Hilbert-style logic.
Other works connected to Metamath
Proof checkers
Using the design ideas implemented in Metamath, Raph Levien has implemented a very small proof checker, mmverify.py, in only 500 lines of Python code.
Ghilbert is a similar though more elaborate language based on mmverify.py.[13] Levien would like to implement a system where several people can collaborate, and his work emphasizes modularity and connections between small theories.
Using Levien's seminal work, many other implementations of the Metamath design principles have been created for a broad variety of languages. Juha Arpiainen has implemented his own proof checker in Common Lisp, called Bourbaki,[14] and Marnix Klooster has coded a proof checker in Haskell, called Hmm.[15]
Although they all use the overall Metamath approach to formal system checker coding, they also implement new concepts of their own.
Editors
Mel O'Cat designed a system called Mmj2, which provides a graphic user interface for proof entry.[16] The initial aim of Mel O'Cat was to allow the user to enter proofs by simply typing the formulas and letting Mmj2 find the appropriate inference rules to connect them. In Metamath, on the contrary, you may only enter theorem names; you may not enter the formulas directly. Mmj2 also allows a proof to be entered forward or backward (Metamath only allows entering proofs backward). Moreover, Mmj2 has a real grammar parser (unlike Metamath). This technical difference brings more comfort to the user. In particular, Metamath sometimes hesitates between several formulas it analyzes (most of them being meaningless) and asks the user to choose; in Mmj2 this limitation no longer exists.
There is also a project by William Hale to add a graphical user interface to Metamath, called Mmide.[17] Paul Chapman, in his turn, is working on a new proof browser, which has highlighting that allows you to see the referenced theorem before and after the substitution is made.
Milpgame is a proof assistant and checker (it shows a message only when something has gone wrong) with a graphic user interface for the Metamath language (set.mm), written by Filip Cernatescu. It is an open-source (MIT License) Java application (cross-platform: Windows, Linux, Mac OS). The user can enter the demonstration (proof) in two modes: forward and backward relative to the statement to prove. Milpgame checks whether a statement is well formed (it has a syntactic verifier). It can save unfinished proofs without the use of the dummylink theorem. The demonstration is shown as a tree, and the statements are shown using HTML definitions (defined in the typesetting chapter). Milpgame is distributed as a Java .jar (JRE version 6 update 24, written in the NetBeans IDE).
See also
• Automated proof checking
• Proof assistant
References
1. "Release 0.198". 8 August 2021. Retrieved 27 July 2022.
2. Megill, Norman; Wheeler, David A. (2019-06-02). Metamath: A Computer Language for Mathematical Proofs (Second ed.). Morrisville, North Carolina, US: Lulu Press. p. 248. ISBN 978-0-359-70223-7.
3. Megill, Norman. "What is Metamath?". Metamath Home Page.
4. Metamath 100.
5. "Formalizing 100 Theorems".
6. Megill, Norman. "Known Metamath proof verifiers". Retrieved 8 October 2022.
7. TOC of Theorem List - Metamath Proof Explorer
8. Megill,Norman. "How Proofs Work". Metamath Proof Explorer Home Page.
9. Megill, Norman. "Known Metamath proof verifiers". Retrieved 8 October 2022.
10. Wheeler, David A. "Metamath set.mm contributions viewed with Gource through 2019-10-04". YouTube. Archived from the original on 2021-12-19.
11. Megill, Norman. "Reading suggestions". Metamath.
12. Liné, Frédéric. "Natural deduction based Metamath system". Archived from the original on 2012-12-28.
13. Levien,Raph. "Ghilbert".
14. Arpiainen, Juha. "Presentation of Bourbaki". Archived from the original on 2012-12-28.
15. Klooster,Marnix. "Presentation of Hmm". Archived from the original on 2012-04-02.
16. O'Cat,Mel. "Presentation of mmj2". Archived from the original on December 19, 2013.
17. Hale, William. "Presentation of mmide". Archived from the original on 2012-12-28.
External links
• Metamath: official website.
• What do mathematicians think of Metamath: opinions on Metamath.
The Journal of Korean Physical Therapy
2287-156X(eISSN)
The Korean Society of Physical Therapy (대한물리치료학회)
Comparison of Motor Skill Acquisition according to Types of Sensory-Stimuli Cue in Serial Reaction Time Task
Kwon, Yong Hyun (Department of Physical Therapy, Yeungnam University College) ;
Lee, Myoung Hee (Department of Physical Therapy, College of Science, Kyungsung University)
Received : 2014.05.19
Accepted : 2014.06.13
Purpose: The purpose of this study was to investigate whether the type of sensory-stimulus cue (visual, auditory, or visuoauditory) affects sequential motor learning in healthy adults, using a serial reaction time task. Methods: Twenty-four healthy subjects participated in this study and were randomly allocated into three groups: a visual-stimulus (VS) group, an auditory-stimulus (AS) group, and a visuoauditory-stimulus (VAS) group. In the SRT task, eight Arabic numerals were adopted as presented stimuli, composed of three different presentation modules: visual, auditory, and visuoauditory stimuli. In the experiment, all subjects performed a total of 3 sessions with the relevant stimulus module, with a pause of 10 minutes, for training and pre-/post-tests. At the pre- and post-tests, reaction time and accuracy were calculated. Results: In reaction time, significant differences were found for the between-subjects effect, the within-subjects effect, and the group × repetition interaction. In accuracy, no significant differences were observed for the between-group effect or the group × repetition interaction; however, a significant main effect of within-subjects was observed. In addition, a significant difference in the change between pre- and post-test among the three groups was found only for reaction time. Conclusion: This study suggests that short-term sequential motor training on a single day induced behavioral modification in terms of speed and accuracy of motor response. In addition, we found that motor training using a visual-stimulus cue showed a better effect on motor skill acquisition compared to auditory and visuoauditory-stimulus cues.
Motor sequential learning;
Sensory-stimuli cues;
Serial reaction time task
Wulf G, Horger M, Shea CH. Benefits of blocked over serial feedback on complex motor skill learning. J Mot Behav. 1999;31(1):95-103. https://doi.org/10.1080/00222899909601895
Akamatsu T, Fukuyama H, Kawamata T. The effects of visual, auditory, and mixed cues on choice reaction in parkinson's disease. J Neurol Sci. 2008;269(1-2):118-25. https://doi.org/10.1016/j.jns.2008.01.002
Camachon C, Jacobs DM, Huet M et al. The role of concurrent feedback in learning to walk through sliding doors. Ecological psychology. Ecological Psychology. 2007;19(4):367-82. https://doi.org/10.1080/10407410701557869
Huet M, Camachon C, Fernandez L et al. Self-controlled concurrent feedback and the education of attention towards perceptual invariants. Hum Mov Sci. 2009;28(4):450-67. https://doi.org/10.1016/j.humov.2008.12.004
Wulf G, Shea CH. Principles derived from the study of simple skills do not generalize to complex skill learning. Psychon Bull Rev. 2002;9(2):185-211. https://doi.org/10.3758/BF03196276
Suteerawattananon M, Morris GS, Etnyre BR et al. Effects of visual and auditory cues on gait in individuals with parkinson's disease. J Neurol Sci. 2004;219(1-2):63-9. https://doi.org/10.1016/j.jns.2003.12.007
Leonard CT. The neuroscience of human movement. Mosby; 1998. | CommonCrawl |
Characteristic polynomial
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis (that is, the characteristic polynomial does not depend on the choice of a basis). The characteristic equation, also known as the determinantal equation,[1][2][3] is the equation obtained by equating the characteristic polynomial to zero.
In spectral graph theory, the characteristic polynomial of a graph is the characteristic polynomial of its adjacency matrix.[4]
Motivation
In linear algebra, eigenvalues and eigenvectors play a fundamental role, since, given a linear transformation, an eigenvector is a vector whose direction is not changed by the transformation, and the corresponding eigenvalue is the measure of the resulting change of magnitude of the vector.
More precisely, if the transformation is represented by a square matrix $A,$ then an eigenvector $\mathbf {v} $ and the corresponding eigenvalue $\lambda $ must satisfy the equation
$A\mathbf {v} =\lambda \mathbf {v} ,$
or, equivalently,
$(\lambda I-A)\mathbf {v} =0$
where $I$ is the identity matrix, and $\mathbf {v} \neq \mathbf {0} $ (although the zero vector satisfies this equation for every $\lambda ,$ it is not considered an eigenvector).
It follows that the matrix $(\lambda I-A)$ must be singular, and its determinant
$\det(\lambda I-A)=0$
must be zero.
In other words, the eigenvalues of A are the roots of
$\det(xI-A),$
which is a monic polynomial in x of degree n if A is an n×n matrix. This polynomial is the characteristic polynomial of A.
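As a quick numerical sanity check (a sketch assuming NumPy is available; `np.poly` returns the coefficients of det(xI − A), highest degree first), the roots of the characteristic polynomial coincide with the eigenvalues computed directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Coefficients of det(xI - A), highest degree first: here t^2 - 5t + 5
coeffs = np.poly(A)

# Roots of the characteristic polynomial vs. eigenvalues computed directly
roots = np.sort(np.roots(coeffs))
eigs = np.sort(np.linalg.eigvals(A))
```

For this symmetric matrix both computations agree to machine precision.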
Formal definition
Consider an $n\times n$ matrix $A.$ The characteristic polynomial of $A,$ denoted by $p_{A}(t),$ is the polynomial defined by[5]
$p_{A}(t)=\det(tI-A)$
where $I$ denotes the $n\times n$ identity matrix.
Some authors define the characteristic polynomial to be $\det(A-tI).$ That polynomial differs from the one defined here by a sign $(-1)^{n},$ so it makes no difference for properties like having as roots the eigenvalues of $A$; however the definition above always gives a monic polynomial, whereas the alternative definition is monic only when $n$ is even.
Examples
To compute the characteristic polynomial of the matrix
$A={\begin{pmatrix}2&1\\-1&0\end{pmatrix}}.$
the determinant of the following is computed:
$tI-A={\begin{pmatrix}t-2&-1\\1&t-0\end{pmatrix}}$
and found to be $(t-2)t-(-1)\cdot 1=t^{2}-2t+1,$ the characteristic polynomial of $A.$
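The hand computation above can be reproduced numerically (a sketch assuming NumPy; `np.poly` returns the coefficients of det(tI − A) from highest to lowest degree):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [-1.0, 0.0]])

# Coefficients of t^2 - 2t + 1, highest degree first (approximately [1, -2, 1])
coeffs = np.poly(A)

# Consistent with the general 2x2 formula t^2 - tr(A) t + det(A)
tr = np.trace(A)
det = np.linalg.det(A)
```

Here tr(A) = 2 and det(A) = 1, matching the coefficients −2 and +1.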
Another example uses hyperbolic functions of a hyperbolic angle φ. For the matrix take
$A={\begin{pmatrix}\cosh(\varphi )&\sinh(\varphi )\\\sinh(\varphi )&\cosh(\varphi )\end{pmatrix}}.$
Its characteristic polynomial is
$\det(tI-A)=(t-\cosh(\varphi ))^{2}-\sinh ^{2}(\varphi )=t^{2}-2t\ \cosh(\varphi )+1=(t-e^{\varphi })(t-e^{-\varphi }).$
Properties
The characteristic polynomial $p_{A}(t)$ of an $n\times n$ matrix is monic (its leading coefficient is $1$) and its degree is $n.$ The most important fact about the characteristic polynomial was already mentioned in the motivational paragraph: the eigenvalues of $A$ are precisely the roots of $p_{A}(t)$ (this also holds for the minimal polynomial of $A,$ but its degree may be less than $n$). All coefficients of the characteristic polynomial are polynomial expressions in the entries of the matrix. In particular, its constant coefficient (the coefficient of $t^{0}$) is $\det(-A)=(-1)^{n}\det(A),$ the coefficient of $t^{n}$ is one, and the coefficient of $t^{n-1}$ is tr(−A) = −tr(A), where tr(A) is the trace of $A.$ (The signs given here correspond to the formal definition given in the previous section;[6] for the alternative definition these would instead be $\det(A)$ and (−1)n – 1 tr(A) respectively.[7])
For a $2\times 2$ matrix $A,$ the characteristic polynomial is thus given by
$t^{2}-\operatorname {tr} (A)t+\det(A).$
Using the language of exterior algebra, the characteristic polynomial of an $n\times n$ matrix $A$ may be expressed as
$p_{A}(t)=\sum _{k=0}^{n}t^{n-k}(-1)^{k}\operatorname {tr} \left(\textstyle \bigwedge ^{k}A\right)$
where $ \operatorname {tr} \left(\bigwedge ^{k}A\right)$ is the trace of the $k$th exterior power of $A,$ which has dimension $ {\binom {n}{k}}.$ This trace may be computed as the sum of all principal minors of $A$ of size $k.$ The recursive Faddeev–LeVerrier algorithm computes these coefficients more efficiently.
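The Faddeev–LeVerrier recursion can be sketched in a few lines (plain NumPy; the function name below is our own):

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(tI - A) via the
    Faddeev-LeVerrier recursion:
        M_k = A M_{k-1} + c_{n-k+1} I,   c_{n-k} = -tr(A M_k) / k."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    coeffs = [1.0]
    for k in range(1, n + 1):
        M = A @ M + coeffs[-1] * np.eye(n)
        coeffs.append(-np.trace(A @ M) / k)
    return coeffs

# Example from above: det(tI - A) = t^2 - 2t + 1
print(faddeev_leverrier(np.array([[2.0, 1.0], [-1.0, 0.0]])))  # [1.0, -2.0, 1.0]
```

Unlike eigenvalue-based methods, this recursion uses only matrix products and traces, so it works verbatim over exact arithmetic as well.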
When the characteristic of the field of the coefficients is $0,$ each such trace may alternatively be computed as a single determinant, that of the $k\times k$ matrix,
$\operatorname {tr} \left(\textstyle \bigwedge ^{k}A\right)={\frac {1}{k!}}{\begin{vmatrix}\operatorname {tr} A&k-1&0&\cdots &0\\\operatorname {tr} A^{2}&\operatorname {tr} A&k-2&\cdots &0\\\vdots &\vdots &&\ddots &\vdots \\\operatorname {tr} A^{k-1}&\operatorname {tr} A^{k-2}&&\cdots &1\\\operatorname {tr} A^{k}&\operatorname {tr} A^{k-1}&&\cdots &\operatorname {tr} A\end{vmatrix}}~.$
The Cayley–Hamilton theorem states that replacing $t$ by $A$ in the characteristic polynomial (interpreting the resulting powers as matrix powers, and the constant term $c$ as $c$ times the identity matrix) yields the zero matrix. Informally speaking, every matrix satisfies its own characteristic equation. This statement is equivalent to saying that the minimal polynomial of $A$ divides the characteristic polynomial of $A.$
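The Cayley–Hamilton statement can be checked numerically by evaluating the characteristic polynomial with matrix powers via Horner's scheme (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
coeffs = np.poly(A)  # coefficients of p_A(t), highest degree first

# Horner evaluation of p_A(A), interpreting powers as matrix powers
# and the constant term c as c * I.
p_of_A = np.zeros_like(A)
for c in coeffs:
    p_of_A = p_of_A @ A + c * np.eye(2)

print(np.allclose(p_of_A, 0))  # True: A satisfies its own characteristic equation
```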
Two similar matrices have the same characteristic polynomial. The converse however is not true in general: two matrices with the same characteristic polynomial need not be similar.
The matrix $A$ and its transpose have the same characteristic polynomial. $A$ is similar to a triangular matrix if and only if its characteristic polynomial can be completely factored into linear factors over the field $K$ of its entries (the same is true with the minimal polynomial instead of the characteristic polynomial). In this case $A$ is similar to a matrix in Jordan normal form.
Characteristic polynomial of a product of two matrices
If $A$ and $B$ are two square $n\times n$ matrices then the characteristic polynomials of $AB$ and $BA$ coincide:
$p_{AB}(t)=p_{BA}(t).\,$
When $A$ is non-singular this result follows from the fact that $AB$ and $BA$ are similar:
$BA=A^{-1}(AB)A.$
For the case where both $A$ and $B$ are singular, the desired identity is an equality between polynomials in $t$ and the coefficients of the matrices. Thus, to prove this equality, it suffices to prove that it is verified on a non-empty open subset (for the usual topology, or, more generally, for the Zariski topology) of the space of all the coefficients. As the non-singular matrices form such an open subset of the space of all matrices, this proves the result.
More generally, if $A$ is a matrix of order $m\times n$ and $B$ is a matrix of order $n\times m,$ then $AB$ is an $m\times m$ matrix and $BA$ is an $n\times n$ matrix, and one has
$p_{BA}(t)=t^{n-m}p_{AB}(t).\,$
To prove this, one may suppose $n>m,$ by exchanging, if needed, $A$ and $B.$ Then, by bordering $A$ on the bottom by $n-m$ rows of zeros, and $B$ on the right by $n-m$ columns of zeros, one gets two $n\times n$ matrices $A^{\prime }$ and $B^{\prime }$ such that $B^{\prime }A^{\prime }=BA$ and $A^{\prime }B^{\prime }$ is equal to $AB$ bordered by $n-m$ rows and columns of zeros. The result follows from the case of square matrices, by comparing the characteristic polynomials of $A^{\prime }B^{\prime }$ and $AB.$
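The rectangular case is easy to check numerically (a NumPy sketch; the matrices below are arbitrary examples): the coefficient vector of $p_{BA}$ is that of $p_{AB}$ padded with $n-m$ trailing zeros, one per factor of $t.$

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])      # m x n = 2 x 3
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])           # n x m = 3 x 2

p_AB = np.poly(A @ B)   # degree m = 2
p_BA = np.poly(B @ A)   # degree n = 3

# p_BA(t) = t^(n-m) p_AB(t): same coefficients, with n-m trailing zeros
print(np.allclose(p_BA, np.append(p_AB, [0.0])))  # True
```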
Characteristic polynomial of Ak
If $\lambda $ is an eigenvalue of a square matrix $A$ with eigenvector $\mathbf {v} ,$ then $\lambda ^{k}$ is an eigenvalue of $A^{k}$ because
$A^{k}{\textbf {v}}=A^{k-1}A{\textbf {v}}=\lambda A^{k-1}{\textbf {v}}=\dots =\lambda ^{k}{\textbf {v}}.$
The multiplicities can be shown to agree as well, and this generalizes to any polynomial in place of $x^{k}$:[8]
Theorem — Let $A$ be a square $n\times n$ matrix and let $f(t)$ be a polynomial. If the characteristic polynomial of $A$ has a factorization
$p_{A}(t)=(t-\lambda _{1})(t-\lambda _{2})\cdots (t-\lambda _{n})$
then the characteristic polynomial of the matrix $f(A)$ is given by
$p_{f(A)}(t)=(t-f(\lambda _{1}))(t-f(\lambda _{2}))\cdots (t-f(\lambda _{n})).$
That is, the algebraic multiplicity of $\lambda $ in $f(A)$ equals the sum of algebraic multiplicities of $\lambda '$ in $A$ over $\lambda '$ such that $f(\lambda ')=\lambda .$ In particular, $\operatorname {tr} (f(A))=\textstyle \sum _{i=1}^{n}f(\lambda _{i})$ and $\operatorname {det} (f(A))=\textstyle \prod _{i=1}^{n}f(\lambda _{i}).$ Here a polynomial $f(t)=t^{3}+1,$ for example, is evaluated on a matrix $A$ simply as $f(A)=A^{3}+I.$
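A small numerical illustration of this spectral mapping (a sketch assuming NumPy; the matrix is an arbitrary symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                         # eigenvalues 1 and 3

f_of_A = np.linalg.matrix_power(A, 3) + np.eye(2)  # f(t) = t^3 + 1

lam = np.sort(np.linalg.eigvals(A))                # [1, 3]
lam_f = np.sort(np.linalg.eigvals(f_of_A))         # [2, 28]

print(np.allclose(lam_f, lam**3 + 1))              # True
```

The trace and determinant identities follow as well: tr(f(A)) = 2 + 28 = 30 and det(f(A)) = 2 · 28 = 56.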
The theorem applies to matrices and polynomials over any field or commutative ring.[9] However, the assumption that $p_{A}(t)$ has a factorization into linear factors is not always true, unless the matrix is over an algebraically closed field such as the complex numbers.
Proof
This proof only applies to matrices and polynomials over complex numbers (or any algebraically closed field). In that case, the characteristic polynomial of any square matrix can be always factorized as
$p_{A}(t)=\left(t-\lambda _{1}\right)\left(t-\lambda _{2}\right)\cdots \left(t-\lambda _{n}\right)$
where $\lambda _{1},\lambda _{2},\ldots ,\lambda _{n}$ are the eigenvalues of $A,$ possibly repeated. Moreover, the Jordan decomposition theorem guarantees that any square matrix $A$ can be decomposed as $A=S^{-1}US,$ where $S$ is an invertible matrix and $U$ is upper triangular with $\lambda _{1},\ldots ,\lambda _{n}$ on the diagonal (with each eigenvalue repeated according to its algebraic multiplicity). (The Jordan normal form has stronger properties, but these are sufficient; alternatively the Schur decomposition can be used, which is less popular but somewhat easier to prove).
Let $ f(t)=\sum _{i}\alpha _{i}t^{i}.$ Then
$f(A)=\textstyle \sum \alpha _{i}(S^{-1}US)^{i}=\textstyle \sum \alpha _{i}S^{-1}USS^{-1}US\cdots S^{-1}US=\textstyle \sum \alpha _{i}S^{-1}U^{i}S=S^{-1}(\textstyle \sum \alpha _{i}U^{i})S=S^{-1}f(U)S.$
For an upper triangular matrix $U$ with diagonal $\lambda _{1},\dots ,\lambda _{n},$ the matrix $U^{i}$ is upper triangular with diagonal $\lambda _{1}^{i},\dots ,\lambda _{n}^{i},$ and hence $f(U)$ is upper triangular with diagonal $f\left(\lambda _{1}\right),\dots ,f\left(\lambda _{n}\right).$ Therefore, the eigenvalues of $f(U)$ are $f(\lambda _{1}),\dots ,f(\lambda _{n}).$ Since $f(A)=S^{-1}f(U)S$ is similar to $f(U),$ it has the same eigenvalues, with the same algebraic multiplicities.
Secular function and secular equation
Secular function
The term secular function has been used for what is now called characteristic polynomial (in some literature the term secular function is still used). The term comes from the fact that the characteristic polynomial was used to calculate secular perturbations (on a time scale of a century, that is, slow compared to annual motion) of planetary orbits, according to Lagrange's theory of oscillations.
Secular equation
Secular equation may have several meanings.
• In linear algebra it is sometimes used in place of characteristic equation.
• In astronomy it is the algebraic or numerical expression of the magnitude of the inequalities in a planet's motion that remain after the inequalities of a short period have been allowed for.[10]
• In molecular orbital calculations relating to the energy of the electron and its wave function it is also used instead of the characteristic equation.
For general associative algebras
The above definition of the characteristic polynomial of a matrix $A\in M_{n}(F)$ with entries in a field $F$ generalizes without any changes to the case when $F$ is just a commutative ring. Garibaldi (2004) defines the characteristic polynomial for elements of an arbitrary finite-dimensional (associative, but not necessarily commutative) algebra over a field $F$ and proves the standard properties of the characteristic polynomial in this generality.
See also
• Characteristic equation (disambiguation)
• Invariants of tensors
• Companion matrix
• Faddeev–LeVerrier algorithm
• Cayley–Hamilton theorem
• Samuelson–Berkowitz algorithm
References
1. Guillemin, Ernst (1953). Introductory Circuit Theory. Wiley. pp. 366, 541. ISBN 0471330663.
2. Forsythe, George E.; Motzkin, Theodore (January 1952). "An Extension of Gauss' Transformation for Improving the Condition of Systems of Linear Equations" (PDF). Mathematics of Computation. 6 (37): 18–34. doi:10.1090/S0025-5718-1952-0048162-0. Retrieved 3 October 2020.
3. Frank, Evelyn (1946). "On the zeros of polynomials with complex coefficients". Bulletin of the American Mathematical Society. 52 (2): 144–157. doi:10.1090/S0002-9904-1946-08526-2.
4. "Characteristic Polynomial of a Graph – Wolfram MathWorld". Retrieved August 26, 2011.
5. Steven Roman (1992). Advanced linear algebra (2 ed.). Springer. p. 137. ISBN 3540978372.
6. Proposition 28 in these lecture notes
7. Theorem 4 in these lecture notes
8. Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. pp. 108–109, Section 2.4.2. ISBN 978-0-521-54823-6.
9. Lang, Serge (1993). Algebra. New York: Springer. p.567, Theorem 3.10. ISBN 978-1-4613-0041-0. OCLC 852792828.
10. "secular equation". Retrieved January 21, 2010.
• T.S. Blyth & E.F. Robertson (1998) Basic Linear Algebra, p 149, Springer ISBN 3-540-76122-5 .
• John B. Fraleigh & Raymond A. Beauregard (1990) Linear Algebra 2nd edition, p 246, Addison-Wesley ISBN 0-201-11949-8 .
• Garibaldi, Skip (2004), "The characteristic polynomial and determinant are not ad hoc constructions", American Mathematical Monthly, 111 (9): 761–778, arXiv:math/0203276, doi:10.2307/4145188, JSTOR 4145188, MR 2104048
• Werner Greub (1974) Linear Algebra 4th edition, pp 120–5, Springer, ISBN 0-387-90110-8 .
• Paul C. Shields (1980) Elementary Linear Algebra 3rd edition, p 274, Worth Publishers ISBN 0-87901-121-1 .
• Gilbert Strang (1988) Linear Algebra and Its Applications 3rd edition, p 246, Brooks/Cole ISBN 0-15-551005-3 .
Band (order theory)
In mathematics, specifically in order theory and functional analysis, a band in a vector lattice $X$ is a subspace $M$ of $X$ that is solid and such that for all $S\subseteq M$ such that $x=\sup S$ exists in $X,$ we have $x\in M.$[1] The smallest band containing a subset $S$ of $X$ is called the band generated by $S$ in $X.$[1] A band generated by a singleton set is called a principal band.
Examples
For any subset $S$ of a vector lattice $X,$ the set $S^{\perp }$ of all elements of $X$ disjoint from $S$ is a band in $X.$[1]
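A concrete finite-dimensional illustration: in $\mathbb{R}^{3}$ with the pointwise order, two vectors are lattice-disjoint exactly when their supports are disjoint, so $S^{\perp }$ consists of the vectors supported off the supports of the elements of $S.$ (A small NumPy sketch; the function name and tolerance are our own choices.)

```python
import numpy as np

def lattice_disjoint(x, y, tol=1e-12):
    """x and y are disjoint in the vector lattice R^n (pointwise order)
    iff inf(|x|, |y|) = 0, i.e. their supports are disjoint."""
    return bool(np.all(np.minimum(np.abs(x), np.abs(y)) <= tol))

s = np.array([1.0, 0.0, 2.0])
# Vectors supported only on the middle coordinate lie in {s}-perp,
# which is a band in R^3.
print(lattice_disjoint(s, np.array([0.0, 5.0, 0.0])))  # True
print(lattice_disjoint(s, np.array([0.0, 1.0, 1.0])))  # False
```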
If ${\mathcal {L}}^{p}(\mu )$ ($1\leq p\leq \infty $) is the usual space of real valued functions used to define Lp spaces $L^{p},$ then ${\mathcal {L}}^{p}(\mu )$ is countably order complete (that is, each countable subset that is bounded above has a supremum) but in general is not order complete. If $N$ is the vector subspace of all $\mu $-null functions then $N$ is a solid subset of ${\mathcal {L}}^{p}(\mu )$ that is not a band.[1]
Properties
The intersection of an arbitrary family of bands in a vector lattice $X$ is a band in $X.$[2]
See also
• Solid set
• Locally convex vector lattice
• Vector lattice – Partially ordered vector space, ordered as a lattice
References
1. Narici & Beckenstein 2011, pp. 204–214.
2. Schaefer & Wolff 1999, pp. 204–214.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
\begin{document}
\begin{center} \fontsize{13pt}{10pt}\selectfont
\textsc{\textbf{GENERALIZATIONS OF GRADED PRIME IDEALS OVER GRADED NEAR RINGS}}
\end{center}
\begin{center}
\fontsize{12pt}{10pt}\selectfont
\textsc{{\footnotesize Malik Bataineh*, Tamem Al-shorman and Eman Al-Kilany }} \end{center}
\begin{abstract}
This paper considers graded near-rings over a monoid G as a generalization of graded rings over groups, introduces graded weakly prime ideals and graded almost prime ideals as generalizations of graded prime ideals over graded near-rings, and explores their properties. \end{abstract}
\section{INTRODUCTION} Throughout this article, G will be an abelian group with identity e and R be a commutative ring with nonzero unity element 1. R is called a G-graded ring if $ R= \bigoplus\limits_{g \in G} R_g$ with the property $R_gR_h\subseteq R_{gh}$ for all $g,h \in G$, where $R_g$ is an additive subgroup of R for all $g\in G$. The elements of $R_g$ are called homogeneous of degree g. If $x\in R$, then $x$ can be written uniquely as $\sum\limits_{g\in G} x_g$, where $x_g$ is the component of $x$ in $R_g$. The set of all homogeneous elements of R is $h(R)= \bigcup\limits_{g\in G} R_g$. Let P be an ideal of a G-graded ring R. Then P is called a graded ideal if $P=\bigoplus\limits_{g\in G}P_g$, i.e., for $x\in P$ with $x=\sum\limits_{g\in G} x_g$, we have $x_g \in P_g$ for all $g\in G$. An ideal of a G-graded ring is not necessarily a graded ideal (see \cite{abu2019graded}). The concept of graded prime ideals and its generalizations play an indispensable role in commutative G-graded rings.
Near-rings are generalizations of rings in which addition is not necessarily abelian and only one distributive law holds. They arise in a natural way in the study of mappings on groups: the set M(G) of all maps of a group (G; +) into itself endowed with point-wise addition and composition of functions is a near-ring. For general background on the theory of near-rings, the monographs written by Pilz \cite{pilz2011near} and Meldrum \cite{meldrum1985near} should be consulted. A near-ring ($N$, +, $\times$) is a set $N$ with two binary operations + and $\times$ that satisfy the following axioms: \\ (1) ($N$, +) is a group. \\ (2) ($N$, $\times$) is a semigroup (a set together with an associative binary operation). \\ (3) $\times$ is right distributive over + (i.e. $(a+b)\times y = ay + by$).
The graded rings were introduced by Yoshida in \cite{yoshida1955homogeneous}. Also, graded near-rings were introduced and studied by Dumitru, Nastasescu and Toader in \cite{dumitru2016graded}. Let G be a multiplicative monoid (an algebraic structure with a single associative binary operation) with identity. A near-ring $N$ is called a G-graded near-ring if there exists a family of additive normal subgroups $\{N_\sigma\}$ of $N$ satisfying: \\ (1) $N = \bigoplus\limits_{\sigma \in G} N_\sigma$. \\ (2) $N_\sigma N_\tau \subseteq N_{\sigma \tau}$ for all $\sigma, \tau \in G$. \\ A graded ideal P of a G-graded ring R is said to be a graded prime ideal of R if $ab \in P$, where $a, b \in h(R)$, implies $a \in P$ or $b \in P$. Graded prime ideals have been generalized to graded weakly prime ideals and graded almost prime ideals. In \cite{atani2006graded}, a graded ideal P of R is said to be a graded weakly prime ideal of R if $0\neq ab \in P$, where $a, b \in h(R)$, implies $a\in P$ or $b \in P$. We say that a graded ideal P of R is a graded almost prime ideal of R if $ab \in P-[P^2 \cap R]$, where $a, b \in h(R)$, implies $a\in P$ or $b \in P$ (see \cite{jaber2008almost}).
Bataineh, Al-Shorman and Al-Kilany in \cite{bataineh2022graded} defined the concept of graded prime ideals over graded near-rings. A graded ideal P of a graded near-ring $N$ is said to be a graded prime ideal of $N$ if whenever $IJ \subseteq P$, then either $I \subseteq P$ or $J \subseteq P$, for any graded ideals $I$ and $J$ in $N$. In Section Two, we introduce the concept of graded weakly prime ideals in graded near-rings. We say that P is a graded weakly prime ideal of $N$ if whenever $\{0\} \neq IJ \subseteq P$, then either $I \subseteq P$ or $J \subseteq P$, for any graded ideals $I$ and $J$ in $N$. In Section Three, we introduce the concept of graded almost prime ideals in graded near-rings. We say that P is a graded almost prime ideal of $N$ if whenever $IJ\subseteq P$ and $IJ \not\subseteq (P^2 \cap N)$, then either $I \subseteq P$ or $J \subseteq P$, for any graded ideals $I$ and $J$ in $N$.
\section{ GRADED WEAKLY PRIME IDEALS OVER GRADED NEAR RINGS}
In this section, we introduce the concept of graded weakly prime ideals over graded near-rings and study their basic properties.
\begin{definition}\label{def1} Let G be a multiplicative monoid with identity element and N be a G-graded near-ring. A graded ideal P of N is called a graded weakly prime ideal of N if whenever $\{0\} \neq IJ \subseteq P$, then either $I \subseteq P$ or $J \subseteq P$, for any graded ideals I and J in N. \end{definition}
\begin{example}\label{ex1} Consider the ring ($\mathbf{Z}_{12}$, +, $\times$), which is a near-ring, and let $G=\{0,1\}$ be the monoid under (+) defined by 0+0=0, 0+1=1, 1+0=1 and 1+1=1. \\ Let N be the G-graded near-ring defined by $N_0= \mathbf{Z}_{12}$ and $N_1 =\{0\}$. Note that the graded ideals $P_1= \{0\}$, $P_2=\{0,2,4,6,8,10\}$ and $P_3=\{0,3,6,9\}$ are graded weakly prime ideals of N. \end{example}
\begin{remark}\label{rem1} Every graded prime ideal over a graded near-ring is a graded weakly prime ideal. However, the converse is not true: in Example \ref{ex1}, $P_1$ is a graded weakly prime ideal of N but not a graded prime ideal of N. \end{remark}
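To make Remark \ref{rem1} concrete, one possible choice of witnesses in Example \ref{ex1} is the pair of graded ideals $I=\{0,4,8\}$ and $J=\{0,3,6,9\}$: every product $ij$ with $i \in I$ and $j \in J$ is a multiple of $12$, so $IJ=\{0\}\subseteq P_1$ although $I\not\subseteq P_1$ and $J\not\subseteq P_1$; hence $P_1$ is not a graded prime ideal of N. In contrast, the hypothesis $\{0\}\neq IJ\subseteq P_1$ of Definition \ref{def1} is never satisfied for $P_1=\{0\}$, so $P_1$ is vacuously a graded weakly prime ideal of N.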
The following theorem and corollary state that graded weakly prime ideals of N are graded prime ideals of N when certain conditions are met.
\begin{theorem}\label{thm1} Let N be a G-graded near-ring and P be a graded weakly prime ideal of N. If P is not a graded prime ideal of N, then $P^2 \cap N = \{0\}$. \\ \\ \textbf{Proof.} Suppose that $P^2 \cap N \neq \{0\}$. We show that P is then a graded prime ideal of N. Let I and J be graded ideals of N such that $IJ \subseteq P$. If $IJ \neq \{0\}$, then $I \subseteq P$ or $J\subseteq P$ since P is a graded weakly prime ideal of N. So, it may be assumed that $IJ=\{0\}$. Since $P^2 \cap N \neq \{0\}$, there exist $p,q \in P$ such that $<p><q> \neq \{0\}$, and so $(I+<p>)(J+<q>) \neq \{0\} $. Suppose that $(I+<p>)(J+<q>) \not\subseteq P$. Then there exist $i \in I$, $j\in J$, $p_0\in <p>$ and $q_0 \in <q>$ such that $(i+p_0)(j+q_0) \not\in P$, which implies that $i(j+q_0) \not\in P$; but $i(j+q_0)=i(j+q_0) - ij \in P$ since $IJ = \{0\}$, a contradiction. Thus, $\{0\} \neq (I+<p>)(J+<q>) \subseteq P $, which implies that $I+<p> \subseteq P$ or $J+<q> \subseteq P$, and hence $I \subseteq P$ or $J \subseteq P$. \end{theorem}
\begin{corollary}\label{coro1} Let N be a G-graded near-ring and let P be a graded ideal of N such that $P^2 \cap N \neq \{0\}$. Then P is a graded prime ideal of N if and only if P is a graded weakly prime ideal of N. \\ \\ \textbf{Proof.} Let P be a graded ideal of N such that $P^2 \cap N \neq \{0\}$. By Theorem \ref{thm1}, if P is a graded weakly prime ideal of N, then P is a graded prime ideal of N. Also, by Remark \ref{rem1}, if P is a graded prime ideal of N, then P is a graded weakly prime ideal of N. \end{corollary}
\begin{remark}\label{rem2} A graded ideal P of N with $P^2 \cap N = \{0\}$ need not be a graded weakly prime ideal of N. Let N be the graded near-ring defined in Example \ref{ex1} and let $P=\{0,6\}$. Note that $P^2 \cap N =\{0\}$, but P is not a graded weakly prime ideal of N. \end{remark}
The next proposition gives an interesting case where graded weakly prime ideals lead to graded prime ideals in a graded near-ring.
\begin{proposition}\label{prop1} Let N be a G-graded near-ring and P be a graded ideal of N. If P is a graded weakly prime ideal of N and $(\{0\} : P ) \subseteq P$, then P is a graded prime ideal of N. \\ \\ \textbf{Proof.} Suppose that P is not a graded prime ideal of N. Then there exist graded ideals I and J of N with $I \not\subseteq P$ and $J \not\subseteq P$ satisfying $IJ \subseteq P$. If $IJ \neq \{0\}$, then $I \subseteq P$ or $J \subseteq P$ since P is graded weakly prime, a contradiction. So, it may be assumed that $IJ = \{0\}$. Note that $I(J+P) \subseteq P$, since for $i\in I$, $j\in J$ and $p\in P$ we have $i(j+p) = [i(j+p)-ij] + ij \in P$. If $I(J+P) \neq \{0\}$, then either $I \subseteq P$ or $J+P \subseteq P$, and hence $J \subseteq P$; this is a contradiction. Otherwise, $I(J+P) = \{0\}$, so $IP = \{0\}$, which implies $I \subseteq (\{0\} :P) \subseteq P$, again a contradiction. \end{proposition}
\begin{theorem}\label{thm2} Let N be a G-graded near-ring and P be a graded weakly prime ideal of N. If $IJ = \{0\}$ with $I \not\subseteq P$ and $J \not\subseteq P$, where I and J are two graded ideals of N, then $IP = PJ$. \\ \\ \textbf{Proof.} Suppose that there exist $p \in P$ and $i \in I$ such that $ip \neq 0$. Then $\{0\} \neq I(J+<p>) \subseteq P$. But $I \not\subseteq P$ and $J+<p> \not\subseteq P$, which contradicts P being a graded weakly prime ideal of N. Hence $IP = \{0\}$; a similar argument gives $PJ = \{0\}$, and therefore $IP = PJ$. \end{theorem}
\begin{lemma}\label{lem1} Let N be a G-graded near-ring. If P, I and J are graded ideals of N such that $P = I \cup J$, then P equals I or J. \\ \\ \textbf{Proof.} Suppose P equals neither I nor J. Then there exist $x \in P$ such that $x\in I$ but $x\not\in J$, and $y \in P$ such that $y\in J$ but $y\not\in I$. Since P is a graded ideal of N, $x-y\in P$, which implies $x-y \in I$ or $x-y \in J$. If $x-y\in I$, then $y \in I$ since I is a graded ideal of N, a contradiction. If $x-y\in J$, then $x\in J$ since J is a graded ideal of N, a contradiction. Therefore, P equals either I or J. \end{lemma}
\begin{proposition}\label{prop2} Let N be a G-graded near-ring and P be a graded ideal of N. Then the following are equivalent: \\ (1) For $x,y \ and\ z \in N$ with $0\neq x(<y>+<z>) \subseteq P$, $x\in P$ or $y \ and\ z \in P$. \\ (2) For $x\in N$ but $x \not \in P$ we have $(P:<x>+<y>) = P \cup (0:<x>+<y>)$ for any $y \in N$. \\ (3) For $x\in N$ but $x \not\in P$ we have $(P:<x>+<y>) = P$ or $(P:<x>+<y>) = (0:<x>+<y>)$ for any $y\in N$. \\ (4) P is a graded weakly prime ideal of N. \\ \\ \textbf{Proof.} $(1) \Rightarrow (2)$: Let $t \in N$ with $t \in (P :< x > + < y >)$ for any $y$ and $x$ in N with $x \not\in P$. Then $t(< x > + < y >) \subseteq P$. If $t(< x > + < y >) = 0$, then $t \in (0 :< x > + < y >)$. Otherwise $0 \neq t(< x > + < y >) \subseteq P$, and thus $t \in P$ by hypothesis. \\ $(2) \Rightarrow (3)$: It follows directly from Lemma \ref{lem1}. \\ $(3) \Rightarrow (4)$: Let I and J be graded ideals of N such that $IJ \subseteq P$. Suppose that $I \not\subseteq P$ and $J \not\subseteq P$. Then there exists $j \in J$ with $j \not\in P$. Now, it is claimed that $IJ = \{0\}$. Let $j_1 \in J$. Then $I(<j>+<j_1>) \subseteq P$, which implies $I \subseteq (P:<j>+<j_1>)$. Then by assumption, $I(<j>+<j_1>) =0 $, which gives $Ij_1 = \{0\}$. Thus $IJ = \{0\}$ and hence P is a graded weakly prime ideal of N. \\ $(4) \Rightarrow (1)$: If $0 \neq x(<y>+<z>) \subseteq P$, then $\{0\} \neq <x>(<y>+<z>) \subseteq P$. Since P is a graded weakly prime ideal of N, either $<x> \subseteq P$ or $<y>+<z> \subseteq P$. Hence $x\in P$ or $y \ and\ z \in P$. \end{proposition}
\begin{theorem}\label{thm3} Let N be a G-graded near-ring and P be a graded ideal of N. Then the following are equivalent: \\ (1) P is a graded weakly prime ideal of N. \\ (2) For any graded ideals I and J in N with $P \subset I$ and $P \subset J$, either $IJ=\{0\}$ or $IJ \not\subseteq P$. \\ (3) For any graded ideals I and J in N with $I \not\subseteq P$ and $J \not\subseteq P$, either $IJ=\{0\}$ or $IJ \not\subseteq P$. \\ \\ \textbf{Proof.} $(1) \Rightarrow (2)$: Let I and J be two graded ideals of N with $P \subset I$, $P \subset J$ and $IJ\neq \{0\}$. Suppose $IJ \subseteq P$. Since P is a graded weakly prime ideal of N and $\{0\} \neq IJ \subseteq P$, either $I \subseteq P$ or $J \subseteq P$, contradicting $P \subset I$ and $P \subset J$. Hence $IJ \not\subseteq P$. \\ $(2) \Rightarrow (3)$: Let I and J be graded ideals of N with $I \not\subseteq P$ and $J \not\subseteq P$. Then there exist $i_1 \in I$ and $j_1 \in J$ such that $i_1 \not \in P$ and $j_1 \not\in P$. Suppose that $<i><j> \neq \{0\}$ for some $i \in I$ and $j \in J$. Then $(P+<i>+<i_1>)(P+<j>+<j_1>) \neq \{0\}$, while $P \subset (P+<i>+<i_1>) $ and $ P \subset (P+<j>+<j_1>)$. By hypothesis, $(P+<i>+<i_1>)(P+<j>+<j_1>) \not\subseteq P$. So, $<i>(P+<j>+<j_1>) +<i_1>(P+<j>+<j_1>) \not\subseteq P$. Hence there exist $i' \in <i>$, $i_1'\in <i_1>$, $j', j'' \in <j>$, $j_1', j_1'' \in <j_1>$ and $p_1, p_2 \in P$ such that $i'(p_1 + j' + j_1') + i_1'(p_2 + j'' + j_1'') \not\in P$. Thus $i'(p_1 + j' + j_1') - i'(j' + j_1') + i'(j' + j_1') + i_1'(p_2 + j'' + j_1'') - i_1'(j'' + j_1'') + i_1'(j'' + j_1'') \not\in P$. But $i'(p_1 + j' + j_1') - i'(j' + j_1') \in P$ and $i_1'(p_2 + j'' + j_1'') - i_1'(j'' + j_1'') \in P$, which implies $i'(j' + j_1') + i_1'(j'' + j_1'') \not\in P$, so at least one of $i'(j' + j_1')$ and $i_1'(j'' + j_1'')$ does not belong to P. Therefore, $IJ \not\subseteq P$. \\ $(3) \Rightarrow (1) $: Follows directly from the definition of graded weakly prime ideals of N. \end{theorem}
\begin{proposition}\label{prop3} Let N be a G-graded near-ring, A be a totally ordered set and $(P_a)_{a\in A}$ be a family of graded weakly prime ideals of N with $P_a \subseteq P_b$ for any $a,b \in A$ with $a\leq b$. Then $P = \bigcap\limits_{a\in A} P_a$ is a graded weakly prime ideal of N. \\ \\ \textbf{Proof.} Let I and J be two graded ideals of N with $\{0\} \neq IJ \subseteq P$; then $IJ \subseteq P_a$ for all $a \in A$. If $I \subseteq P_a$ for all $a \in A$, then $I \subseteq P$ and we are done. Otherwise, there exists $a \in A$ such that $I \not\subseteq P_a$; since $P_a$ is a graded weakly prime ideal of N, $J \subseteq P_a$, and hence $J \subseteq P_b$ for all $b \geq a$. If there existed $c < a$ with $J \not\subseteq P_c$, then $I \subseteq P_c \subseteq P_a$, a contradiction. Hence $J \subseteq P_a$ for every $a \in A$, and therefore $J \subseteq P$. \end{proposition}
\begin{proposition}\label{prop4} Let N be a G-graded near-ring and P be an intersection of graded weakly prime ideals of N. Then for any graded ideal I of N satisfying $\{0\} \neq I^2 \subseteq P$ we have $I \subseteq P$. \\ \\ \textbf{Proof.} Let $(P_a)$ be a family of graded weakly prime ideals of N with $P = \bigcap_a P_a$, and let I be a graded ideal of N such that $\{0\} \neq I^2 \subseteq P$. Then $\{0\} \neq I^2 \subseteq P_a$ for each $a$; since each $P_a$ is a graded weakly prime ideal of N, $I \subseteq P_a$. Therefore, $I \subseteq P$. \end{proposition}
The next example and Theorem \ref{thm4} show that the pre-image of a graded weakly prime ideal under a surjective homomorphism need not be a graded weakly prime ideal, while the image of a graded weakly prime ideal of N which contains the kernel of a surjective homomorphism is a graded weakly prime ideal.
\begin{example}\label{ex2} Let G be the multiplicative monoid defined in Example \ref{ex1}, and let $N = \mathbf{Z}_8$ and $M=\mathbf{Z}_4$ be two G-graded near-rings, where $N_0 = \mathbf{Z}_8$, $N_1 = \{0\}$, $M_0 = \mathbf{Z}_4$ and $M_1 = \{0\}$. Consider the surjective homomorphism $\phi : N \rightarrow M$ given by reduction modulo 4. Then $\{0\}$ is a graded weakly prime ideal of M, although $\phi^{-1}(\{0\}) = \{0, 4\}$ is not a graded weakly prime ideal of N. \end{example}
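As a quick computational sanity check of this example (our own verification, not part of the paper's argument), one can enumerate the ideals of $\mathbf{Z}_n$, viewed as the trivially graded near-ring above. The helper names below are our own, and the ideal product is read as the ideal generated by the element-wise products, which is one possible interpretation of the notation in the text:

```python
# Sanity check for Example 2 (our interpretation; not the paper's argument).
from functools import reduce
from math import gcd

def ideals(n):
    # Every ideal of Z_n is dZ_n for a divisor d of n.
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def prod_ideal(I, J, n):
    # Ideal generated by the element-wise products i*j (mod n):
    # in Z_n this is gZ_n, where g = gcd(n, all products).
    g = reduce(gcd, ((i * j) % n for i in I for j in J), n)
    return frozenset(range(0, n, g))

def is_weakly_prime(P, n):
    # P is weakly prime iff {0} != IJ <= P implies I <= P or J <= P.
    zero = frozenset({0})
    return all(I <= P or J <= P
               for I in ideals(n) for J in ideals(n)
               if (IJ := prod_ideal(I, J, n)) != zero and IJ <= P)

print(is_weakly_prime(frozenset({0}), 4))     # {0} in M = Z_4: True
print(is_weakly_prime(frozenset({0, 4}), 8))  # phi^{-1}({0}) in N = Z_8: False
```

Both lines agree with the example: the zero ideal of $\mathbf{Z}_4$ is (vacuously) graded weakly prime, while its pre-image $\{0,4\}$ fails, witnessed by $I = J = 2\mathbf{Z}_8$ with $IJ = \{0,4\} \neq \{0\}$.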
\begin{lemma}\label{lem2} Let N and M be two G-graded near-rings and $\phi$ be a surjective homomorphism from N onto M. For any two graded ideals I and J of M, if $IJ \neq \{0\}$, then $\phi^{-1}(I) \phi^{-1}(J) \neq \{0\}$. \\ \\ \textbf{Proof.} Let I and J be two graded ideals of M such that $IJ \neq \{0\}$, and suppose that $\phi^{-1}(I) \phi^{-1}(J) = \{0\}$. Since $\phi$ is surjective, $\phi(\phi^{-1}(I)) = I$ and $\phi(\phi^{-1}(J)) = J$, so $IJ = \phi(\phi^{-1}(I)\phi^{-1}(J)) = \phi(\{0\}) = \{0\}$, since the image of zero is zero for any homomorphism. This contradicts $IJ \neq \{0\}$. Hence $\phi^{-1}(I) \phi^{-1}(J) \neq \{0\}$. \end{lemma}
\begin{theorem}\label{thm4} Let N and M be two G-graded near-rings and $\phi$ be a surjective homomorphism from N onto M. Then the image of a graded weakly prime ideal of N which contains the kernel of $\phi$ is a graded weakly prime ideal of M. \\ \\ \textbf{Proof.} Suppose that $\{0\} \neq IJ \subseteq \phi (P)$, where I and J are graded ideals of M and P is a graded weakly prime ideal of N containing $Ker(\phi)$. By Lemma \ref{lem2}, $\phi^{-1}(I) \phi^{-1}(J) \neq \{0\}$. Moreover, $\phi^{-1}(I) \phi^{-1}(J) \subseteq \phi^{-1}(\phi(P)) = P + Ker(\phi) = P$, since $Ker(\phi) \subseteq P$. Since P is a graded weakly prime ideal of N and $\{0\} \neq \phi^{-1}(I) \phi^{-1}(J) \subseteq P$, we get $\phi^{-1}(I) \subseteq P$ or $\phi^{-1}(J) \subseteq P$. Therefore, $I \subseteq \phi(P)$ or $J \subseteq \phi(P)$. Hence $\phi(P)$ is a graded weakly prime ideal of M. \end{theorem}
The next example and Theorem \ref{thm5} show that if $I \subseteq P$ and $\pi : N \rightarrow \Bar{N} := N/I$ is the canonical epimorphism, then $\pi(P)$ is a graded weakly prime ideal of $\Bar{N}$ whenever P is a graded weakly prime ideal of N, while P need not be a graded weakly prime ideal of N when $\pi(P)$ is a graded weakly prime ideal of $\Bar{N}$.
\begin{example}\label{ex3} Let $N = \mathbf{Z}_{18}$ be a G-graded near-ring where $N_0 = \mathbf{Z}_{18}$ and $N_1 = \{0\}$. Consider the canonical epimorphism $\pi : N \rightarrow \Bar{N} := N/I$, where $\pi(x)=\Bar{x}$ and $I = \{0, 9\}$. It is easy to check that $\pi(\{0, 9\}) = \{\Bar{0}\}$ is a graded weakly prime ideal of $\Bar{N}$. However, $P = \{0, 9\}$ is not a graded weakly prime ideal of N. \end{example}
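A brute-force check of this example, in the same spirit as the check after Example \ref{ex2} (again our own verification under our reading of the ideal product, not part of the paper): the quotient $\mathbf{Z}_{18}/\{0,9\}$ is identified with $\mathbf{Z}_9$.

```python
# Verifying Example 3 by enumeration; Z_18/{0,9} is identified with Z_9.
from functools import reduce
from math import gcd

def ideals(n):
    # Every ideal of Z_n is dZ_n for a divisor d of n.
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def prod_ideal(I, J, n):
    # Ideal generated by all element-wise products i*j (mod n).
    g = reduce(gcd, ((i * j) % n for i in I for j in J), n)
    return frozenset(range(0, n, g))

def is_weakly_prime(P, n):
    zero = frozenset({0})
    return all(I <= P or J <= P
               for I in ideals(n) for J in ideals(n)
               if (IJ := prod_ideal(I, J, n)) != zero and IJ <= P)

print(is_weakly_prime(frozenset({0, 9}), 18))  # P = {0,9} in Z_18: False
print(is_weakly_prime(frozenset({0}), 9))      # pi(P) in Z_18/I ~ Z_9: True
```

The failure in $\mathbf{Z}_{18}$ is witnessed by $I = J = 3\mathbf{Z}_{18}$, whose product ideal is $\{0,9\} \neq \{0\}$ and is contained in P, while $3\mathbf{Z}_{18} \not\subseteq P$.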
\begin{theorem}\label{thm5} Let N be a G-graded near-ring and P, I be graded ideals of N with $I \subseteq P$. Consider the canonical epimorphism $\pi : N \rightarrow \Bar{N} := N/I$. If P is a graded weakly prime ideal of N, then $\pi(P)$ is a graded weakly prime ideal of $\Bar{N}$. \\ \\ \textbf{Proof.} Let $\pi(J)$ and $\pi(K)$ be graded ideals of $\Bar{N}$, where J and K are graded ideals of N containing I, with $\{\Bar{0}\} \neq \pi(J)\pi(K)=\pi(JK) \subseteq \pi(P)$. Since $\pi(J)\pi(K) \neq \{\Bar{0}\}$, Lemma \ref{lem2} gives $\pi^{-1}(\pi(J)) \pi^{-1}(\pi(K)) \neq \{0\}$, and $\pi^{-1}(\pi(J)) \pi^{-1}(\pi(K)) = JK \subseteq \pi^{-1}(\pi(P))= P+I = P$. Since P is a graded weakly prime ideal of N, $J \subseteq P$ or $K \subseteq P$, and therefore $\pi(J) \subseteq \pi(P)$ or $\pi(K) \subseteq \pi(P)$. Thus $\pi(P)$ is a graded weakly prime ideal of $\Bar{N}$. \end{theorem}
Note that, by the definition of graded weakly prime ideals, for any graded ideal I of N with $I^2 \subseteq P$, where P is a graded weakly prime ideal of N, if $I \not\subseteq P$ then $I^2 = \{0\}$. Are there other special cases that guarantee $I^2 =\{0\}$? Theorem \ref{thm6} gives one such case; before stating it, the following lemma is presented.
\begin{lemma}\label{lem3} Let P be a graded weakly prime ideal of N and let $\Bar{I}$ and $\Bar{J}$ be graded ideals of N/P with $\Bar{I}\Bar{J} =\{\Bar{0}\}$ and $\Bar{J}$ nonzero. Then either $I \subseteq P$ or $PJ = \{0\}$. \\ \\ \textbf{Proof.} Suppose that $I \not\subseteq P$, and let $p \in P$. Then $(<p>+I)\not\subseteq P$ and $(<p>+I)J \subseteq P$. Since $\Bar{J}$ is nonzero, $J \not\subseteq P$; as P is a graded weakly prime ideal of N, this forces $(<p>+I)J = \{0\}$. Thus $<p>J =\{0\}$ for every $p \in P$, and hence $PJ = \{0\}$. \end{lemma}
\begin{theorem}\label{thm6} Let N be a G-graded near-ring and P be a graded weakly prime ideal of N with $P^2 = \{0\}$. If I is a graded ideal of N and $I^2 \subseteq P$, then $I^2= \{0\}$. \\ \\ \textbf{Proof.} Let $x, y \in I$; then $<x><y> \subseteq I^2 \subseteq P$. We claim that $<x><y> = \{0\}$. Suppose not; then, since P is a graded weakly prime ideal of N, $x\in P$ or $y \in P$. If both $x, y \in P$, then $<x><y> \subseteq P^2 =\{0\}$. So assume exactly one of x and y belongs to P, say $x \in P$. Since $<y><y> \subseteq I^2 \subseteq P$, Lemma \ref{lem3} gives $<x><y> \subseteq P<y> = \{0\}$. In all cases $<x><y> = \{0\}$, and hence $I^2 = \{0\}$. \end{theorem}
Recall that if $N$ and $M$ are G-graded near-rings, then $N \times M$ is a G-graded near-ring.
\begin{theorem}\label{thm7} Let N and M be G-graded near-rings and P be a graded ideal of N. Then P is a graded weakly prime ideal of N if and only if $P \times M$ is a graded weakly prime ideal of $N\times M$. \\ \\ \textbf{Proof.} ($\Rightarrow$) Let P be a graded weakly prime ideal of N and let $I \times M$, $J\times M$ be graded ideals of $N\times M$ such that $\{0\} \neq (I\times M)(J \times M) \subseteq P \times M$. Then $\{0\} \neq (I\times M)(J \times M) = (IJ \times MM) \subseteq P \times M$, so $\{0\} \neq IJ \subseteq P$. Since P is a graded weakly prime ideal of N, $I \subseteq P$ or $J \subseteq P$. Therefore, $I \times M \subseteq P \times M$ or $J \times M \subseteq P \times M$. Thus $P \times M$ is a graded weakly prime ideal of $N \times M$. \\ ($\Leftarrow$) Suppose that $P\times M$ is a graded weakly prime ideal of $N\times M$ and let I, J be graded ideals of N such that $\{0\} \neq IJ \subseteq P$. Then $\{0\} \neq (I\times M)(J \times M) \subseteq P \times M$. By assumption, $I \times M \subseteq P \times M$ or $J \times M \subseteq P \times M$, so $I \subseteq P$ or $J \subseteq P $. Thus P is a graded weakly prime ideal of N. \end{theorem}
\begin{corollary}\label{coro2} Let N and M be two G-graded near-rings. If every graded ideal of N and of M is a product of graded weakly prime ideals, then every graded ideal of $N \times M$ is a product of graded weakly prime ideals. \\ \\ \textbf{Proof.} Let I be a graded ideal of N and J be a graded ideal of M such that $I = I_1 ... I_n$ and $J = J_1 ... J_m$, where the $I_i$ and $J_j$ are graded weakly prime ideals of N and M, respectively. If the graded ideal is of the form $I \times M$, then $I \times M = (I_1 ... I_n) \times M $ can be written as $(I_1 \times M)...(I_n \times M)$, which by Theorem \ref{thm7} is a product of graded weakly prime ideals. Similarly, if the graded ideal is of the form $N \times J$, then it is a product of graded weakly prime ideals. If the graded ideal is of the form $I \times J$, then it can be written as $(I_1 ... I_n) \times (J_1 ... J_m) = ((I_1 ... I_n) \times M)(N \times (J_1 ... J_m)) = (I_1 \times M)...(I_n \times M)(N \times J_1)...(N \times J_m)$, which is a product of graded weakly prime ideals. \end{corollary}
\begin{theorem}\label{thm8} Let $N$ and $M$ be two $G$-graded near-rings. Then a graded ideal $P$ of $N \times M$ is graded weakly prime if and only if it has one of the following two forms: \\ (i) $I \times M$, where $I$ is a graded weakly prime ideal of $N$. \\ (ii) $N \times J$, where $J$ is a graded prime ideal of $M$. \\ \\ \textbf{Proof.} Let $P$ be a graded ideal of $N \times M$. Then $P$ has one of the following three forms: (i) $I \times M$, where $I$ is a graded ideal of $N$; (ii) $N \times J$, where $J$ is a proper graded ideal of $M$; or (iii) $I \times J$, where $I\neq N $ and $ J \neq M$. If $P$ is of the form $I \times M$ or of the form $N \times J$, then by Theorem \ref{thm7}, $P$ is a graded weakly prime ideal of $N \times M$ if and only if $I$ (respectively $J$) is a graded weakly prime ideal of $N$ (respectively $M$). Now let $P = I \times J$ be a graded weakly prime ideal of $N \times M$ with $I \neq N $ and $ J \neq M $. Suppose $ x \in I $. Then $< x > \times \{0\} \subseteq P$. This implies that either $< x > \times M \subseteq P $ or $N \times \{0\} \subseteq P$. If $< x > \times M \subseteq P $, then $M = J$, and if $N \times \{0\} \subseteq P$, then $N = I$; either way, this is a contradiction. Hence $I\times J$ cannot be a graded weakly prime ideal of $N \times M$ when both $I$ and $J$ are proper graded ideals. \end{theorem}
\begin{theorem}\label{thm9} Let $N$ and $M$ be two G-graded near-rings. Then $P= \{0\} \times \{0\}$ is a graded weakly prime ideal of $N \times M$. \\ \\ \textbf{Proof.} Let I and J be graded ideals of $N \times M$ with $IJ \subseteq P = \{0\} \times \{0\}$. Then $IJ = \{0\}$, so the condition $\{0\} \neq IJ \subseteq P$ is never satisfied. Therefore, P is (vacuously) a graded weakly prime ideal of $N \times M$. \end{theorem}
\begin{proposition}\label{prop5} Let N be a G-graded near-ring and P, I be two graded ideals of N. If P and I are graded weakly prime ideals of N, then $P \cup I$ is a graded weakly prime ideal of N. \\ \\ \textbf{Proof.} Let J and K be two graded ideals of N such that $\{0\} \neq JK \subseteq P \cup I$. Since JK is a graded ideal contained in $P \cup I$, an argument as in the proof of Lemma \ref{lem1} shows that $JK \subseteq P$ or $JK \subseteq I$. Since $JK \neq \{0\}$, if $JK \subseteq P$, then $J \subseteq P$ or $K \subseteq P$, because P is a graded weakly prime ideal of N; hence $J \subseteq P \cup I$ or $K \subseteq P \cup I$. If $JK \subseteq I$, then $J \subseteq I$ or $K \subseteq I$, because I is a graded weakly prime ideal of N; thus $J \subseteq P \cup I$ or $K \subseteq P\cup I$. Therefore, $P\cup I$ is a graded weakly prime ideal of N. \end{proposition}
\section{GRADED ALMOST PRIME IDEALS OVER GRADED NEAR-RINGS}
In this section, we introduce the concept of graded almost prime ideals over graded near-rings and study their basic properties.
\begin{definition}\label{def2} Let G be a multiplicative monoid with identity element and N be a G-graded near-ring. A graded ideal P of N is called a graded almost prime ideal of N if whenever $IJ\subseteq P$ and $IJ \not\subseteq (P^2 \cap N)$, then either $I \subseteq P$ or $J \subseteq P$, for any graded ideals I and J of N. \end{definition}
\begin{example}\label{ex4} Consider the G-graded near-ring defined in Example \ref{ex1}. Note that $P_4= \{0,4,8\}$ is a graded almost prime ideal of N but not a graded weakly prime ideal of N. However, $P_5 = \{0,6\}$ is neither a graded weakly prime ideal nor a graded almost prime ideal of N. \end{example}
In the previous section, it was observed that every graded prime ideal is a graded weakly prime ideal, but the converse is not true: for example, $P_1$ in Example \ref{ex1} is not a graded prime ideal of N but is a graded weakly prime ideal of N. Also, by Example \ref{ex4}, a graded almost prime ideal of N need not be a graded weakly prime ideal of N. Now, the question is: is every graded weakly prime ideal a graded almost prime ideal? The next theorem answers this question.
\begin{theorem}\label{thm10} Let N be a G-graded near-ring and P be a graded ideal of N. If P is a graded weakly prime ideal of N, then P is a graded almost prime ideal of N. \\ \\ \textbf{Proof.} Let P be a graded weakly prime ideal of N and I, J be two graded ideals of N such that $IJ\subseteq P$ and $IJ \not\subseteq (P^2 \cap N)$. If P is a graded prime ideal of N, then P is a graded almost prime ideal of N. Otherwise, $(P^2 \cap N) = \{0\}$ by Theorem \ref{thm1}, and $IJ \neq \{0\}$ since $IJ \not\subseteq (P^2 \cap N) = \{0\}$. Since P is a graded weakly prime ideal of N, either $I\subseteq P$ or $J \subseteq P$, which means that P is a graded almost prime ideal of N. \end{theorem}
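This implication can also be cross-checked numerically on the trivially graded near-rings $\mathbf{Z}_n$ (our own verification, with the ideal product and $(P^2 \cap N)$ read as the ideal generated by the corresponding element-wise products; the helper names are ours):

```python
# Cross-checking Theorem 10 (weakly prime => almost prime) over several Z_n.
from functools import reduce
from math import gcd

def ideals(n):
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def prod_ideal(I, J, n):
    # Ideal generated by the element-wise products i*j (mod n).
    g = reduce(gcd, ((i * j) % n for i in I for j in J), n)
    return frozenset(range(0, n, g))

def is_weakly_prime(P, n):
    zero = frozenset({0})
    return all(I <= P or J <= P
               for I in ideals(n) for J in ideals(n)
               if (IJ := prod_ideal(I, J, n)) != zero and IJ <= P)

def is_almost_prime(P, n):
    P2 = prod_ideal(P, P, n)
    return all(I <= P or J <= P
               for I in ideals(n) for J in ideals(n)
               if (IJ := prod_ideal(I, J, n)) <= P and not IJ <= P2)

# Every weakly prime ideal found should also be almost prime.
ok = all(is_almost_prime(P, n)
         for n in (8, 12, 16, 18) for P in ideals(n) if is_weakly_prime(P, n))
print(ok)  # True
```

The check succeeds because $\{0\} \subseteq (P^2 \cap N)$ always holds, so $IJ \not\subseteq (P^2 \cap N)$ forces $IJ \neq \{0\}$, exactly as in the proof above.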
\begin{proposition}\label{prop6} Let N be a G-graded near-ring and P be a graded ideal of N. If P is a graded almost prime ideal of N and $((P^2 \cap N):P) \subseteq P$, then P is a graded prime ideal of N. \\ \\ \textbf{Proof.} Suppose that P is not a graded prime ideal of N. Then there exist graded ideals I and J of N with $I \not\subseteq P$, $J \not\subseteq P$ and $IJ \subseteq P$. If $IJ \not\subseteq (P^2 \cap N)$, then, since P is a graded almost prime ideal of N, $I \subseteq P$ or $J \subseteq P$, a contradiction. So assume $IJ \subseteq (P^2 \cap N)$, and consider $I(J+P) \subseteq P$. If $I(J+P) \not\subseteq (P^2 \cap N)$, then $I \subseteq P$ or $J+P \subseteq P$, i.e. $J \subseteq P$, again a contradiction. Otherwise, $I(J+P) \subseteq (P^2 \cap N)$, so $IP \subseteq (P^2 \cap N)$, which implies $I \subseteq ((P^2 \cap N):P) \subseteq P$, a contradiction. Thus P is a graded prime ideal of N. \end{proposition}
Next, some equivalent conditions are given for a graded ideal to be a graded almost prime ideal in a G-graded near-ring.
\begin{theorem}\label{thm11} Let N be a G-graded near-ring and P be a graded ideal of N. Then the following are equivalent: \\ (1) For $x, y$ and $z \in N$ with $ x(<y>+<z>) \subseteq P$ and $x(<y>+<z>) \not \subseteq (P^2\cap N)$, either $x\in P$ or $y, z \in P$. \\ (2) For $x\in N$ with $x \not \in P$, $(P:<x>+<y>) = P \cup ( (P^2 \cap N):<x>+<y>)$ for any $y \in N$. \\ (3) For $x\in N$ with $x \not\in P$, we have $(P:<x>+<y>) = P$ or $(P:<x>+<y>) = ((P^2\cap N):<x>+<y>)$ for any $y\in N$. \\ (4) P is a graded almost prime ideal of N. \\ \\ \textbf{Proof.} $(1) \Rightarrow (2)$: Let $t \in (P :< x > + < y >)$ for some $y \in N$ and $x \in N$ with $x \not\in P$. Then $t(< x > + < y >) \subseteq P$. If $t(< x > + < y >) \subseteq (P^2\cap N)$, then $t \in ((P^2 \cap N) :< x > + < y >)$. Otherwise, $ t(< x > + < y >) \not\subseteq (P^2\cap N)$, so $t \in P$ by hypothesis. \\ $(2) \Rightarrow (3)$: This follows directly from Lemma \ref{lem1}. \\ $(3) \Rightarrow (4)$: Let I and J be graded ideals of N such that $IJ \subseteq P$, and suppose that $I \not\subseteq P$ and $J \not\subseteq P$. Then there exists $j \in J$ with $j \not\in P$. We claim that $IJ \subseteq (P^2 \cap N)$. Let $j_1 \in J$; then $I(<j>+<j_1>) \subseteq P$, which implies $I \subseteq (P:<j>+<j_1>)$. Since $I \not\subseteq P$, assumption (3) forces $(P:<j>+<j_1>) = ((P^2\cap N):<j>+<j_1>)$, so $I(<j>+<j_1>) \subseteq (P^2\cap N)$, which gives $Ij_1 \subseteq (P^2 \cap N) $. Thus $IJ \subseteq (P^2\cap N)$, and hence P is a graded almost prime ideal of N. \\ $(4) \Rightarrow (1)$: If $ x(<y>+<z>) \subseteq P$ and $ x(<y>+<z>) \not\subseteq (P^2\cap N)$, then $<x>(<y>+<z>) \subseteq P$ and $<x>(<y>+<z>) \not\subseteq (P^2\cap N)$. Since P is a graded almost prime ideal of N, either $<x> \subseteq P$ or $<y>+<z> \subseteq P$. Hence $x\in P$ or $y, z \in P$. \end{theorem}
\begin{theorem}\label{thm12} Let N be a G-graded near-ring and P be a graded ideal of N. Then the following are equivalent: \\ (1) P is a graded almost prime ideal of N. \\ (2) For any graded ideals I and J in N with $P \subset I$ and $P \subset J$, either $IJ \subseteq (P^2\cap N)$ or $IJ \not\subseteq P$. \\ (3) For any graded ideals I and J in N with $I \not\subseteq P$ and $J \not\subseteq P$, either $IJ \subseteq (P^2\cap N)$ or $IJ \not\subseteq P$. \\ \\ \textbf{Proof.} $(1) \Rightarrow (2)$: Let I and J be two graded ideals of N with $P \subset I$, $P \subset J$ and $IJ\not\subseteq (P^2\cap N)$. Suppose $IJ \subseteq P$. Since P is a graded almost prime ideal of N, either $I \subseteq P$ or $J \subseteq P$, contradicting $P \subset I$ and $P \subset J$. Hence $IJ \not\subseteq P$. \\ $(2) \Rightarrow (3)$: Let I and J be graded ideals of N with $I \not\subseteq P$ and $J \not\subseteq P$. Then there exist $i_1 \in I$ and $j_1 \in J$ such that $i_1 \not \in P$ and $j_1 \not\in P$. Suppose that $<i><j> \not\subseteq (P^2\cap N)$ for some $i \in I$ and $j \in J$. Then $(P+<i>+<i_1>)(P+<j>+<j_1>) \not\subseteq (P^2 \cap N)$, while $P \subset (P+<i>+<i_1>) $ and $ P \subset (P+<j>+<j_1>)$. By hypothesis, $(P+<i>+<i_1>)(P+<j>+<j_1>) \not\subseteq P$. So, $<i>(P+<j>+<j_1>) +<i_1>(P+<j>+<j_1>) \not\subseteq P$. Hence there exist $i' \in <i>$, $i_1'\in <i_1>$, $j', j'' \in <j>$, $j_1', j_1'' \in <j_1>$ and $p_1, p_2 \in P$ such that $i'(p_1 + j' + j_1') + i_1'(p_2 + j'' + j_1'') \not\in P$. Thus $i'(p_1 + j' + j_1') - i'(j' + j_1') + i'(j' + j_1') + i_1'(p_2 + j'' + j_1'') - i_1'(j'' + j_1'') + i_1'(j'' + j_1'') \not\in P$. But $i'(p_1 + j' + j_1') - i'(j' + j_1') \in P$ and $i_1'(p_2 + j'' + j_1'') - i_1'(j'' + j_1'') \in P$, which implies $i'(j' + j_1') + i_1'(j'' + j_1'') \not\in P$, so at least one of $i'(j' + j_1')$ and $i_1'(j'' + j_1'')$ does not belong to P. Therefore, $IJ \not\subseteq P$. \\ $(3) \Rightarrow (1) $: Follows directly from the definition of graded almost prime ideals of N. \end{theorem}
\begin{proposition}\label{prop7} Let N be a G-graded near-ring, A be a totally ordered set and $(P_a)_{a\in A}$ be a family of graded almost prime ideals of N with $P_a \subseteq P_b$ for any $a,b \in A$ with $a\leq b$. Then $P = \bigcap\limits_{a\in A} P_a$ is a graded almost prime ideal of N. \\ \\ \textbf{Proof.} Let I and J be two graded ideals of N with $IJ \subseteq P$ but $IJ \not\subseteq (P^2\cap N)$; then $IJ \subseteq P_a$ for all $a \in A$. If $I \subseteq P_a$ for all $a \in A$, then $I \subseteq P$ and we are done. Otherwise, there exists $a \in A$ such that $I \not\subseteq P_a$, so $J \subseteq P_a$, and hence $J \subseteq P_b$ for all $b \geq a$. If there existed $c < a$ with $J \not\subseteq P_c$, then $I \subseteq P_c \subseteq P_a$, a contradiction. Hence $J \subseteq P_a$ for every $a \in A$, and therefore $J \subseteq P$. \end{proposition}
\begin{proposition}\label{prop8} Let N be a G-graded near-ring and P be an intersection of graded almost prime ideals of N. Then for any graded ideal I of N satisfying $I^2 \subseteq P$ but $I^2 \not\subseteq (P^2\cap N)$ we have $I \subseteq P$. \\ \\ \textbf{Proof.} Let $(P_a)$ be a family of graded almost prime ideals of N with $P = \bigcap_a P_a$, and let I be a graded ideal of N such that $ I^2 \subseteq P$ but $I^2 \not\subseteq (P^2\cap N)$. Then $I^2 \subseteq P_a$ for each $a$; since each $P_a$ is a graded almost prime ideal of N, $I \subseteq P_a$. Therefore, $I \subseteq P$. \end{proposition}
\begin{lemma}\label{lem4} Let N and M be two G-graded near-rings and $\phi$ be a surjective homomorphism from N onto M. For any graded ideals P, I and J of M, if $IJ \not\subseteq P$, then $\phi^{-1}(I) \phi^{-1}(J) \not\subseteq \phi^{-1}(P)$. \\ \\ \textbf{Proof.} First, if $I \not\subseteq P$, then $\phi^{-1}(I) \not\subseteq \phi^{-1}(P)$: otherwise, by the surjectivity of $\phi$, $I = \phi(\phi^{-1}(I)) \subseteq \phi(\phi^{-1}(P)) \subseteq P$, a contradiction. Hence, if $IJ \not\subseteq P$, then $\phi^{-1}(I) \phi^{-1}(J) = \phi^{-1}(IJ) \not \subseteq \phi^{-1}(P)$. \end{lemma}
\begin{theorem}\label{thm13} Let N and M be two G-graded near-rings and $\phi$ be a surjective homomorphism from N onto M. Then the image of a graded almost prime ideal of N which contains the kernel of $\phi$ is a graded almost prime ideal of M. \\ \\ \textbf{Proof.} Suppose that $IJ \subseteq \phi(P)$ and $IJ \not \subseteq ((\phi(P))^2 \cap M)$, where I and J are graded ideals of M and P is a graded almost prime ideal of N containing $Ker(\phi)$. By Lemma \ref{lem4}, $\phi^{-1}(I) \phi^{-1}(J) \not \subseteq (P^2\cap N)$. Moreover, $\phi^{-1}(I) \phi^{-1}(J) \subseteq P+Ker(\phi) = P$. Since P is a graded almost prime ideal of N, $\phi^{-1}(I) \subseteq P$ or $ \phi^{-1}(J) \subseteq P$. Therefore, $I \subseteq \phi(P)$ or $J \subseteq \phi(P)$. Hence $\phi(P)$ is a graded almost prime ideal of M. \end{theorem}
\begin{theorem}\label{thm14} Let N be a G-graded near-ring and P, I be graded ideals of N with $I \subseteq P$. Consider the canonical epimorphism $\pi : N \rightarrow \Bar{N} := N/I$. If P is a graded almost prime ideal of N, then $\pi(P)$ is a graded almost prime ideal of $\Bar{N}$. \\ \\ \textbf{Proof.} Let P be a graded almost prime ideal of N and $\pi(J), \pi(K)$ be graded ideals of $\Bar{N}$ with $\pi(J)\pi(K)= \pi(JK) \subseteq \pi(P)$ and $\pi(J)\pi(K)= \pi(JK) \not\subseteq ((\pi(P))^2 \cap \Bar{N})$, where J and K are graded ideals of N containing I. By Lemma \ref{lem4}, $JK \not \subseteq (P^2 \cap N)$. Moreover, $\pi^{-1}(\pi(J)) \pi^{-1}(\pi(K)) = JK \subseteq \pi^{-1}(\pi(P))= P+I = P$. Since P is a graded almost prime ideal of N, $J \subseteq P$ or $K \subseteq P$, and hence $\pi(J) \subseteq \pi(P)$ or $\pi(K) \subseteq \pi(P)$. Thus $\pi(P)$ is a graded almost prime ideal of $\Bar{N}$. \end{theorem}
\begin{lemma}\label{lem5} Let P be a graded almost prime ideal of N and let $\Bar{I}$ and $\Bar{J}$ be graded ideals of N/P with $\Bar{I}\Bar{J} =\{\Bar{0}\}$ and $\Bar{J}$ nonzero. Then either $I \subseteq P$ or $PJ \subseteq (P^2 \cap N)$. \\ \\ \textbf{Proof.} Suppose that $I \not\subseteq P$, and let $p \in P$. Then $(<p>+I)\not\subseteq P$ and $(<p>+I)J \subseteq P$. Since $\Bar{J}$ is nonzero, $J \not\subseteq P$; as P is a graded almost prime ideal of N, this forces $(<p>+I)J \subseteq (P^2\cap N)$. Thus $<p>J \subseteq (P^2 \cap N)$ for every $p \in P$, and hence $PJ \subseteq (P^2 \cap N)$. \end{lemma}
\begin{theorem}\label{thm15} Let N be a G-graded near-ring and P be a graded almost prime ideal of N. If I is a graded ideal of N and $I^2 \subseteq P$, then $I^2 \subseteq (P^2\cap N)$. \\ \\ \textbf{Proof.} Let $x, y \in I$; then $<x><y> \subseteq I^2 \subseteq P$. We claim that $<x><y> \subseteq (P^2\cap N)$. Suppose not; then, since P is a graded almost prime ideal of N, $x\in P$ or $y \in P$. If both $x, y \in P$, then $<x><y> \subseteq (P^2 \cap N)$. So assume exactly one of x and y belongs to P, say $x \in P$. Since $<y><y> \subseteq I^2 \subseteq P$, Lemma \ref{lem5} gives $<x><y> \subseteq P<y> \subseteq (P^2\cap N)$. In all cases $<x><y> \subseteq (P^2 \cap N)$, which implies $I^2 \subseteq (P^2 \cap N)$. \end{theorem}
The previous theorem is important for G-graded near-rings N that have a unique maximal ideal M, when M is a graded ideal satisfying $MM = (M^2 \cap N)$. For example, consider the G-graded near-ring $N= (\mathbf{Z}_{16}, + , \times)$ with $G$ defined as in Example \ref{ex1}, where $N_0 = \mathbf{Z}_{16}$ and $N_1 = \{0\}$. Then $N$ has the unique maximal ideal $M=\{0, 2, 4, 6, 8, 10, 12, 14\}$, and $M$ satisfies the property $MM = (M^2 \cap N)$. The importance of such G-graded near-rings is explained in the following theorem.
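A computational illustration of this $\mathbf{Z}_{16}$ setting (our own check; we read both the ideal product $MM$ and $(P^2 \cap N)$ as the ideal generated by the corresponding element-wise products, which is one possible interpretation of the notation, and the helper names are ours):

```python
# Checking the Z_16 example numerically under our reading of the notation.
from functools import reduce
from math import gcd

def ideals(n):
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def prod_ideal(I, J, n):
    # Ideal generated by the element-wise products i*j (mod n).
    g = reduce(gcd, ((i * j) % n for i in I for j in J), n)
    return frozenset(range(0, n, g))

def is_almost_prime(P, n):
    # IJ <= P and IJ not contained in (P^2 ∩ N) implies I <= P or J <= P.
    P2 = prod_ideal(P, P, n)
    return all(I <= P or J <= P
               for I in ideals(n) for J in ideals(n)
               if (IJ := prod_ideal(I, J, n)) <= P and not IJ <= P2)

M = frozenset(range(0, 16, 2))
print(prod_ideal(M, M, 16) == frozenset({0, 4, 8, 12}))  # MM = (M^2 ∩ N): True
print(is_almost_prime(M, 16))                            # P = M: True
print(is_almost_prime(frozenset({0, 4, 8, 12}), 16))     # P = 4Z_16: False
```

This is consistent with the next theorem: among the graded ideals containing $M^2 \cap N = \{0,4,8,12\}$, the ideal $M$ itself satisfies $(P^2 \cap N) = (M^2 \cap N)$ and is almost prime, while $4\mathbf{Z}_{16}$ has $(P^2 \cap N) = \{0\} \neq (M^2 \cap N)$ and is not.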
\begin{theorem}\label{thm16} Let N be a G-graded near-ring. If M is the unique maximal ideal of N with $MM = (M^2 \cap N)$, then for any graded ideal P of N with $M^2\cap N \subseteq P$, P is a graded almost prime ideal of N if and only if $(M^2 \cap N) = (P^2 \cap N)$. \\ \\ \textbf{Proof.} ($\Rightarrow$) Let P be a graded almost prime ideal of N. Since $M^2 \cap N = MM \subseteq P$, Theorem \ref{thm15} gives $M^2 \cap N = MM \subseteq (P^2\cap N)$. Also, $(P^2 \cap N) \subseteq (M^2 \cap N)$, since M is the unique maximal ideal of N and $P \subseteq M$. Hence $(P^2 \cap N) = (M^2 \cap N)$. \\ ($\Leftarrow$) Suppose $(M^2 \cap N) = (P^2 \cap N)$; we claim that P is a graded almost prime ideal of N. Let I and J be two graded ideals of N with $IJ \subseteq P$. Since M is the unique maximal ideal of N, $I \subseteq M$ and $J \subseteq M$. Therefore, $IJ \subseteq MM = (M^2 \cap N) = (P^2 \cap N)$, so the condition $IJ \not\subseteq (P^2 \cap N)$ never holds. Hence P is a graded almost prime ideal of N. \end{theorem}
\begin{theorem}\label{thm17} Let N and M be G-graded near-rings and P be a graded ideal of N. Then P is a graded almost prime ideal of N if and only if $P \times M$ is a graded almost prime ideal of $N\times M$. \\ \\ \textbf{Proof.} ($\Rightarrow$) Let P be a graded almost prime ideal of N and let $I \times M$, $J\times M$ be graded ideals of $N\times M$ such that $(I\times M)(J \times M) \subseteq P \times M$ and $(I\times M)(J \times M) \not\subseteq ((P \times M)^2 \cap (N\times M))$. Then $(I\times M)(J \times M) = (IJ \times MM) \subseteq P \times M$ and $(IJ \times MM)\not\subseteq ((P^2 \cap N)\times M)$. So $IJ \subseteq P$ and $IJ \not \subseteq (P^2\cap N)$. Since P is a graded almost prime ideal of N, $I \subseteq P$ or $J \subseteq P$. Therefore, $I \times M \subseteq P \times M$ or $J \times M \subseteq P \times M$. Thus $P \times M$ is a graded almost prime ideal of $N \times M$. \\ ($\Leftarrow$) Suppose that $P\times M$ is a graded almost prime ideal of $N\times M$ and let I, J be graded ideals of N such that $IJ \subseteq P$ and $IJ \not\subseteq (P^2\cap N)$. Then $(I\times M)(J \times M) \subseteq P \times M$ and $(I\times M)(J \times M) \not\subseteq ((P \times M)^2 \cap (N\times M))$. By assumption, $I \times M \subseteq P \times M$ or $J \times M \subseteq P \times M$, so $I \subseteq P$ or $J \subseteq P $. Thus P is a graded almost prime ideal of N. \end{theorem}
\begin{corollary}\label{coro3} Let N and M be two G-graded near-rings. If every graded ideal of N and of M is a product of graded almost prime ideals, then every graded ideal of $N \times M$ is a product of graded almost prime ideals. \\ \\ \textbf{Proof.} Let I be a graded ideal of N and J be a graded ideal of M such that $I = I_1 ... I_n$ and $J = J_1 ... J_m$, where the $I_i$ and $J_j$ are graded almost prime ideals of N and M, respectively. If the graded ideal is of the form $I \times M$, then $I \times M = (I_1 ... I_n) \times M $ can be written as $(I_1 \times M)...(I_n \times M)$, which by Theorem \ref{thm17} is a product of graded almost prime ideals. Similarly, if the graded ideal is of the form $N \times J$, then it is a product of graded almost prime ideals. If the graded ideal is of the form $I \times J$, then it can be written as $(I_1 ... I_n) \times (J_1 ... J_m) = ((I_1 ... I_n) \times M)(N \times (J_1 ... J_m)) = (I_1 \times M)...(I_n \times M)(N \times J_1)...(N \times J_m)$, which is a product of graded almost prime ideals. \end{corollary}
\end{document} | arXiv |
On a Kirchhoff wave model with nonlocal nonlinear damping
EECT Home
Moving and oblique observations of beams and plates
June 2020, 9(2): 469-486. doi: 10.3934/eect.2020020
Robust attractors for a Kirchhoff-Boussinesq type equation
Zhijian Yang 1,, , Na Feng 2, and Yanan Li 1,
School of Mathematics and Statistics, Zhengzhou University, No.100, Science Road, Zhengzhou 450001, China
College of Science, Zhongyuan University of Technology, No.41, Zhongyuan Road, Zhengzhou 450007, China
* Corresponding author: Zhijian Yang
Received October 2018 Revised January 2019 Published June 2020 Early access December 2019
Fund Project: The authors are supported by NNSF of China (Grant No. 11671367)
The paper studies the existence of the pullback attractors and robust pullback exponential attractors for a Kirchhoff-Boussinesq type equation: $ u_{tt}-\Delta u_{t}+\Delta^{2} u = div\Big\{\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}}\Big\}+\Delta g(u)+f(x,t) $. We show that when the growth exponent $ p $ of the nonlinearity $ g(u) $ is up to the critical range: $ 1\leq p\leq p^*\equiv\frac{N+2}{(N-2)^{+}} $, (ⅰ) the IBVP of the equation is well-posed, and its solution has additionally global regularity when $ t>\tau $; (ⅱ) the related dynamical process $ \{U_f(t,\tau)\} $ has a pullback attractor; (ⅲ) in particular, when $ 1\leq p< p^* $, the process $ \{U_f(t,\tau)\} $ has a family of pullback exponential attractors, which is stable with respect to the perturbation $ f\in \Sigma $ (the sign space).
Keywords: Kirchhoff-Boussinesq type equation, well-posedness, pullback attractor, pullback exponential attractor, stability.
Mathematics Subject Classification: Primary: 35B40, 37L15, 37B55; Secondary: 35B41, 35B33, 35B65.
Citation: Zhijian Yang, Na Feng, Yanan Li. Robust attractors for a Kirchhoff-Boussinesq type equation. Evolution Equations & Control Theory, 2020, 9 (2) : 469-486. doi: 10.3934/eect.2020020
Abstract: Particle capture by a slowly varying one-dimensional periodic potential is studied by the method of averaging. For large time intervals $t\sim 1/\alpha$ (where $\alpha$ is the small parameter characterizing the rate of change of the potential), including the point of intersection of the separatrix, the solution is constructed up to the first correction terms relative to the leading term. The increment $\Delta I$ of the action over a complete evolution interval is also calculated in the leading order in $\alpha$.
Self-maps on flat manifolds with infinitely many periods
Existence of nontrivial solutions to Polyharmonic equations with subcritical and critical exponential growth
June 2012, 32(6): 2207-2221. doi: 10.3934/dcds.2012.32.2207
On dynamical behavior of viscous Cahn-Hilliard equation
Desheng Li 1, and Xuewei Ju 1,
Department of Mathematics, School of Science, Tianjin University, Tianjin, 300072, China, China
Received February 2011 Revised October 2011 Published February 2012
In this paper, we consider the initial and Dirichlet boundary value problem of the viscous Cahn-Hilliard equation with a general nonlinearity $f$, that is $$ d((1-\alpha)u-\alpha\Delta u)+(\Delta^2u-\Delta f(u))dt= 0, $$where $\alpha\in[0,1]$. Firstly, we establish the existence and continuity results on weak solutions and attractors to this problem. Secondly, we show the $\alpha$-uniform attractiveness of the attractors $A_\alpha$.
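As a purely illustrative numerical sketch (not from the paper), the viscous Cahn-Hilliard equation can be discretized on a 1-D periodic domain with the model nonlinearity $f(u) = u^3 - u$ (an assumption; the paper treats a general $f$) by a semi-implicit Fourier scheme. In Fourier mode $k$ the equation reads $m(k)\,\hat u_t + k^4 \hat u + k^2 \widehat{f(u)} = 0$ with $m(k) = (1-\alpha) + \alpha k^2$, and treating the biharmonic term implicitly gives the update below. The $k=0$ mode is exactly preserved, reflecting conservation of the spatial mean.

```python
import numpy as np

def vch_step(u, dt, alpha, k):
    """One semi-implicit step of the 1-D viscous Cahn-Hilliard equation.

    (1-alpha) u_t - alpha (u_t)_xx + u_xxxx - (f(u))_xx = 0,  f(u) = u^3 - u.
    Biharmonic term implicit, nonlinearity explicit, in Fourier space.
    """
    m = (1.0 - alpha) + alpha * k**2       # symbol of (1-alpha) - alpha*Laplacian
    fu_hat = np.fft.fft(u**3 - u)
    u_hat = np.fft.fft(u)
    u_hat = (m * u_hat - dt * k**2 * fu_hat) / (m + dt * k**4)
    return np.real(np.fft.ifft(u_hat))

n = 64
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers on [0, 2*pi)
rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal(n)
mean0 = u.mean()
for _ in range(200):
    u = vch_step(u, dt=1e-3, alpha=0.5, k=k)
```

The semi-implicit splitting is the standard device for fourth-order gradient flows: the stiff linear term $\Delta^2 u$ is inverted mode-by-mode, so the step avoids the $dt \sim dx^4$ restriction of a fully explicit scheme.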
Keywords: attractors, $\alpha$-uniform dissipativeness, global solutions, $\alpha$-uniform attractiveness of attractors.
Mathematics Subject Classification: 35Q99, 35B40, 35B4.
Citation: Desheng Li, Xuewei Ju. On dynamical behavior of viscous Cahn-Hilliard equation. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2207-2221. doi: 10.3934/dcds.2012.32.2207
\begin{definition}[Definition:Derivative/Complex Function/Point]
Let $D\subseteq \C$ be an open set.
Let $f : D \to \C$ be a complex function.
Let $z_0 \in D$ be a point in $D$.
Let $f$ be complex-differentiable at the point $z_0$.
That is, suppose the limit $\ds \lim_{h \mathop \to 0} \ \frac {\map f {z_0 + h} - \map f {z_0} } h$ exists.
Then this limit is called the '''derivative of $f$ at the point $z_0$'''.
It can be denoted $f' \left({z_0}\right)$.
Category:Definitions/Complex Analysis
\end{definition} | ProofWiki |
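The limit in the definition can be probed numerically: for a complex-differentiable function the difference quotient must approach the same value no matter from which direction $h \to 0$ in the complex plane. A short illustrative check (not part of the definition), using $f(z) = z^2$ with known derivative $2 z_0$:

```python
import cmath

def difference_quotient(f, z0, h):
    """The quotient (f(z0 + h) - f(z0)) / h from the definition above."""
    return (f(z0 + h) - f(z0)) / h

f = lambda z: z * z
z0 = 1 + 2j
# Shrink h toward 0 along several directions in the complex plane.
for direction in (1, 1j, cmath.exp(1j * 0.7)):
    h = 1e-6 * direction
    dq = difference_quotient(f, z0, h)
    print(dq)  # all approach f'(z0) = 2 * z0 = 2 + 4j
```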
2018, 13: 115-145. doi: 10.3934/jmd.2018014
The mapping class group of a shift of finite type
Mike Boyle 1, and Sompong Chuysurichay 2,
Department of Mathematics, University of Maryland, College Park, MD 20742-4015, USA
Algebra and Applications Research Unit, Department of Mathematics and Statistics, Prince of Songkla University, Songkhla, Thailand 90110
Dedicated to Roy Adler, in memory of his insight, humor, and kindness
Received April 27, 2017; Revised August 18, 2017; Published December 2018
Let $(X_A,\sigma_A)$ be a nontrivial irreducible shift of finite type (SFT), with $\mathscr{M}_A$ denoting its mapping class group: the group of flow equivalences of its mapping torus $\mathsf{S} X_A$ (i.e., self-homeomorphisms of $\mathsf{S} X_A$ which respect the direction of the suspension flow), modulo the subgroup of flow equivalences of $\mathsf{S} X_A$ isotopic to the identity. We develop and apply machinery (flow codes, cohomology constraints) and provide context for the study of $\mathscr{M}_A$, and prove results including the following. $\mathscr{M}_A$ acts faithfully and $n$-transitively (for every $n$ in $\mathbb{N}$) by permutations on the set of circles of $\mathsf{S} X_A$. The center of $\mathscr{M}_A$ is trivial. The outer automorphism group of $\mathscr{M}_A$ is nontrivial. In many cases, $\operatorname{Aut}(\sigma_A)$ admits a nonspatial automorphism. For every SFT $(X_B,\sigma_B)$ flow equivalent to $(X_A,\sigma_A)$, $\mathscr{M}_A$ contains embedded copies of $\operatorname{Aut}(\sigma_B)/\langle \sigma_B \rangle$, induced by return maps to invariant cross sections; but elements of $\mathscr{M}_A$ not arising from flow equivalences with invariant cross sections are abundant. $\mathscr{M}_A$ is countable and has solvable word problem. $\mathscr{M}_A$ is not residually finite. Conjugacy classes of many (possibly all) involutions in $\mathscr{M}_A$ can be classified by the $G$-flow equivalence classes of associated $G$-SFTs, for $G = \mathbb{Z}/2\mathbb{Z}$. There are many open questions.
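For readers less familiar with the objects involved: an SFT $(X_A,\sigma_A)$ is determined by a square nonnegative integer matrix $A$, and a standard fact of symbolic dynamics (not a result of this paper) is that the number of points fixed by $\sigma_A^n$ equals $\operatorname{tr}(A^n)$. A small sketch for the golden mean shift:

```python
def mat_mul(A, B):
    """Multiply two square integer matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def periodic_point_counts(A, n_max):
    """tr(A^n) for n = 1..n_max: the number of points fixed by sigma_A^n."""
    counts, P = [], A
    for _ in range(n_max):
        counts.append(sum(P[i][i] for i in range(len(P))))
        P = mat_mul(P, A)
    return counts

# Golden mean shift: binary sequences with no two consecutive 1s.
A = [[1, 1],
     [1, 0]]
print(periodic_point_counts(A, 6))  # Lucas numbers: [1, 3, 4, 7, 11, 18]
```

These counts are exactly the data packaged by the Artin-Mazur zeta function of the shift, one of the elementary conjugacy invariants underlying the finer flow-equivalence invariants studied in the paper.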
Keywords: Shift of finite type, flow equivalence, mapping class group, automorphism group.
Mathematics Subject Classification: Primary: 37B10; Secondary: 20F10, 20F38.
Citation: Mike Boyle, Sompong Chuysurichay. The mapping class group of a shift of finite type. Journal of Modern Dynamics, 2018, 13: 115-145. doi: 10.3934/jmd.2018014
Hongyan Guo. Automorphism group and twisted modules of the twisted Heisenberg-Virasoro vertex operator algebra. Electronic Research Archive, , () : -. doi: 10.3934/era.2021008
Qiao Liu. Local rigidity of certain solvable group actions on tori. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 553-567. doi: 10.3934/dcds.2020269
Kien Trung Nguyen, Vo Nguyen Minh Hieu, Van Huy Pham. Inverse group 1-median problem on trees. Journal of Industrial & Management Optimization, 2021, 17 (1) : 221-232. doi: 10.3934/jimo.2019108
Meihua Dong, Keonhee Lee, Carlos Morales. Gromov-Hausdorff stability for group actions. Discrete & Continuous Dynamical Systems - A, 2021, 41 (3) : 1347-1357. doi: 10.3934/dcds.2020320
Ivan Bailera, Joaquim Borges, Josep Rifà. On Hadamard full propelinear codes with associated group $ C_{2t}\times C_2 $. Advances in Mathematics of Communications, 2021, 15 (1) : 35-54. doi: 10.3934/amc.2020041
Shudi Yang, Xiangli Kong, Xueying Shi. Complete weight enumerators of a class of linear codes over finite fields. Advances in Mathematics of Communications, 2021, 15 (1) : 99-112. doi: 10.3934/amc.2020045
Li Cai, Fubao Zhang. The Brezis-Nirenberg type double critical problem for a class of Schrödinger-Poisson equations. Electronic Research Archive, , () : -. doi: 10.3934/era.2020125
Imam Wijaya, Hirofumi Notsu. Stability estimates and a Lagrange-Galerkin scheme for a Navier-Stokes type model of flow in non-homogeneous porous media. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1197-1212. doi: 10.3934/dcdss.2020234
Manuel del Pino, Monica Musso, Juncheng Wei, Yifu Zhou. Type Ⅱ finite time blow-up for the energy critical heat equation in $ \mathbb{R}^4 $. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3327-3355. doi: 10.3934/dcds.2020052
Álvaro Castañeda, Pablo González, Gonzalo Robledo. Topological Equivalence of nonautonomous difference equations with a family of dichotomies on the half line. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020278
Nicola Pace, Angelo Sonnino. On the existence of PD-sets: Algorithms arising from automorphism groups of codes. Advances in Mathematics of Communications, 2021, 15 (2) : 267-277. doi: 10.3934/amc.2020065
Kengo Matsumoto. $ C^* $-algebras associated with asymptotic equivalence relations defined by hyperbolic toral automorphisms. Electronic Research Archive, , () : -. doi: 10.3934/era.2021006
Shuxing Chen, Jianzhong Min, Yongqian Zhang. Weak shock solution in supersonic flow past a wedge. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 115-132. doi: 10.3934/dcds.2009.23.115
Shuang Liu, Yuan Lou. A functional approach towards eigenvalue problems associated with incompressible flow. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3715-3736. doi: 10.3934/dcds.2020028
Pablo D. Carrasco, Túlio Vales. A symmetric Random Walk defined by the time-one map of a geodesic flow. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020390
Joan Carles Tatjer, Arturo Vieiro. Dynamics of the QR-flow for upper Hessenberg real matrices. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1359-1403. doi: 10.3934/dcdsb.2020166
Petr Pauš, Shigetoshi Yazaki. Segmentation of color images using mean curvature flow and parametric curves. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1123-1132. doi: 10.3934/dcdss.2020389
PDF downloads (115)
Mike Boyle Sompong Chuysurichay | CommonCrawl |
\begin{document}
\title[Hurewicz fibrations, almost submetries and critical points]{Hurewicz fibrations, almost submetries and critical points of smooth maps}
\author{Sergio Cacciatori}
\address{Universit\`a dell'Insubria - Dipartimento di Scienza e Alta Tecnologia\endgraf Via Valleggio 11, I-22100 Como, Italy and
\endgraf INFN, Sezione di Milano, via Celoria 16, I-20133 Milano, Italy.} \email{[email protected]}
\author{Stefano Pigola} \address{Universit\`a dell'Insubria - Dipartimento di Scienza e Alta Tecnologia\endgraf Via Valleggio 11, I-22100 Como, Italy} \email{[email protected]} \begin{abstract} We prove that the existence of a Hurewicz fibration between certain spaces with the homotopy type of a CW-complex implies some topological restrictions on their universal coverings. This result is used to deduce differentiable and metric properties of maps between compact Riemannian manifolds under curvature restrictions. \end{abstract}
\date{\today}
\subjclass[2010]{55R05} \keywords{Hurewicz fibration, almost submetry, critical point}
\maketitle
\section{Introduction and main results} The main purpose of the paper is to prove that the existence of a Hurewicz fibration between certain spaces with the homotopy type of a CW-complex implies certain topological restrictions on their universal coverings. For the sake of brevity, we shall say that the topological space $Z$ is $\mathbb{K}$-acyclic if its reduced singular homology with coefficients in the field $\mathbb{K}$ satisfies $\tilde H_n(Z;\mathbb{K}) = 0$, for every $n \geq 0$.
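\begin{remark} \rm{ By way of illustration, every contractible space is $\mathbb{K}$-acyclic for any field $\mathbb{K}$, since its reduced singular homology vanishes in every degree; thus $\mathbb{R}^{n}$ and, more generally, any Cartan-Hadamard manifold is $\mathbb{K}$-acyclic. The converse fails: for instance, the complement of a point in the Poincar\'e homology sphere is $\mathbb{K}$-acyclic for every field $\mathbb{K}$ but, having non-trivial fundamental group, it is not contractible. } \end{remark}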
\begin{theorem} \label{th-general} Let $X$ be a connected, locally path connected and semi-locally simply connected, separable metric space with finite covering dimension $\dim X <+\infty$ and with the homotopy type of a CW-complex. Let $Y$ be a connected, locally path connected and semi-locally simply connected space with the homotopy type of a finite-dimensional CW-complex. Assume that there exists a Hurewicz fibration $\pi: X \to Y$. \begin{itemize} \item [(a)] If at least one fibre $F$ is locally contractible and $X$ is aspherical then the universal covering space $Y'$ of $Y$ is $\mathbb{K}$-acyclic, for any field $\mathbb{K}$. \item [(b)] Let $\widetilde {\Omega_{{y}} Y}$ denote the connected component of the loop space of $Y$ containing the constant loop $c_{y}\equiv y$. If $X$ is aspherical and the fibre $F=\pi^{-1}(y)$ is a finite dimensional CW-complex, then $H_{k} (\widetilde {\Omega_{y} Y}, \mathbb{K}) = 0 $ for every $k > \dim X$ and for any field $\mathbb{K}$. \end{itemize} \end{theorem} A variant of Theorem \ref{th-general} of special interest can be obtained in case $X$ and $Y$ are genuine CW-complexes. Recall that a finite dimensional CW-complex is a locally contractible (hence locally path connected and semi-locally simply connected), paracompact, normal space of finite covering dimension, \cite{FrPi-book, Miy-Tohoku}.
\begin{theorem}\label{corollary-cw} Let $X$ and $Y$ be connected, finite dimensional, CW-complexes. Assume that there exists a Hurewicz fibration $\pi: X \to Y$ with at least one locally contractible fibre. If $X$ is aspherical then the universal covering space $Y'$ of $Y$ is $\mathbb{K}$-acyclic, for any field $\mathbb{K}$. \end{theorem}
Our main motivation to investigate topological properties of Hurewicz fibrations is to get information on maps between compact Riemannian manifolds under curvature restrictions. Indeed, recall that smooth manifolds are finite-dimensional, separable metric spaces and also finite-dimensional CW-complexes. By way of example, we point out the following consequences of Theorem \ref{th-general} or Theorem \ref{corollary-cw}. First, we consider critical points of smooth maps. Recall that a point $p \in M$ is critical for the $C^{1}$-map $f: M \to N$ if $f$ is not submersive at $p$. \begin{corollary}\label{corollary-criticalpoints} Let $f: M \to N$ be a smooth map between compact Riemannian manifolds $(M,g)$ and $(N,h)$ of dimensions $m , n \geq 2$. Assume also that $\operatorname{Sect}_M \leq 0$ and $\operatorname{Ric}_N \geq K$ for some constant $K>0$. Then $f$ must have a critical point. \end{corollary} \begin{proof} We can assume $m \geq n$ for, otherwise, the result is trivial. By the Bonnet-Myers theorem, the universal covering space $N'$ of $N$ is compact. Since $N'$ is simply connected, hence orientable, $H_n(N';\mathbb{R}) \not=0$. On the other hand, by the Cartan-Hadamard theorem $M'$ is diffeomorphic to $\mathbb{R}^m$, hence $M$ is aspherical. Now, by contradiction, assume that $f:M \to N$ is a smooth map without critical points. Then, the Ehresmann fibration theorem implies that $f$ is a locally trivial bundle. Since $N$ is (para)compact, $f$ is also a Hurewicz fibration with fibre a smooth $(m-n)$-dimensional manifold. This contradicts Theorem \ref{th-general} or Theorem \ref{corollary-cw}. \end{proof}
\begin{remark} \rm{ Existence of critical points for any smooth map $f:\times_1^m \mathbb{S}^1 \to \mathbb{SO}(3)$ was observed by D. Gottlieb, \cite{Go-Robot}, in relation with critical configurations of multi-linked robot arms. } \end{remark}
\begin{remark}\label{rem-curv} \rm{It is clear from the proof that the role of the curvature of $M$ is just to guarantee that the covering space $M'$ is contractible. Obviously, we do not need $M'$ to be diffeomorphic to $\mathbb{R}^m$ and therefore the Corollary applies e.g. when $M$ is a compact quotient (if any) of a Whitehead-like manifold. Similarly, the Ricci curvature condition on the target serves to guarantee that the universal covering manifold $N'$ is compact. Thus, we can take e.g. $N$ to be any quotient of the product $\mathbb{S}^{n_{1}}\times \mathbb{S}^{n_{2}}$ with $n_{1},n_{2}\geq 2$. Summarizing, the following differential topological result holds true.} \end{remark}
\begin{corollary}
Let $f : M \to N$ be a smooth map between compact differentiable manifolds of dimensions $m , n \geq 2$. If $M$ is aspherical and the universal covering space of $N$ is compact then, necessarily, $f$ has a critical point. \end{corollary}
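\begin{remark} \rm{ As a concrete instance of the previous Corollary, every smooth map $f: T^{m} \to \mathbb{S}^{n}$ from the $m$-dimensional torus into the sphere, with $m, n \geq 2$, must have a critical point: indeed, $T^{m}$ is aspherical, its universal covering being $\mathbb{R}^{m}$, while $\mathbb{S}^{n}$ is simply connected, hence it coincides with its own (compact) universal covering. } \end{remark}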
The next application is an estimate of the $\epsilon$-constant for $e^{\epsilon}$-LcL maps under curvature restrictions. \begin{definition} A continuous map $f : X \to Y$ between metric spaces is called an $e^{\epsilon}$-Lipschitz and co-Lipschitz map ($e^{\epsilon}$-LcL for short), if for any $p \in X$, and any $r > 0$, the metric balls of $X$ and $Y$ satisfy \[ B^{Y}_{e^{-\epsilon}r}(f(p)) \subseteq f(B^{X}_r(p)) \subseteq B^{Y}_{e^{\epsilon}r}(f(p)). \] A $1$-LcL map is a submetry in the usual sense of V. Berestovskii. \end{definition} Fibration properties of $e^{\epsilon}$-LcL maps have been investigated in \cite{RoXu-advances}. More recent results are contained in \cite{Xu}.
\begin{corollary}\label{corollary-LCL} Let $M$ and $N$ be compact Riemannian manifolds such that $\operatorname{Sect}_M \leq 0$ and $\operatorname{Ric}_N \geq K$ for some constant $K >0$. If $f : M \to N$ is an $e^{\epsilon}$-LcL map then $\epsilon > \ln (1.02368)$. \end{corollary} \begin{proof} By \cite[Theorem A]{Xu} any $(1.02368)$-LcL proper map from a complete Riemannian manifold with curvature bounded from below into any Riemannian manifold is a Hurewicz fibration. Moreover, all the fibres are locally contractible. On the other hand, as we have already observed above, the curvature assumptions on $M$ and $N$ imply that $M$ is aspherical and $N'$ is not $\mathbb{R}$-acyclic. Hence, if $\epsilon \leq \ln(1.02368)$, the map $f$ would be a Hurewicz fibration with locally contractible fibres, contradicting Theorem \ref{th-general} or Theorem \ref{corollary-cw}. \end{proof}
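\begin{remark} \rm{ Numerically, $\ln(1.02368) \approx 0.0234$. In particular, under the curvature assumptions of Corollary \ref{corollary-LCL} there exists no submetry $f: M \to N$ (the case $\epsilon = 0$), nor any $e^{\epsilon}$-LcL map whose metric distortion $e^{\epsilon}$ does not exceed $1.02368$. } \end{remark}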
\begin{remark} \rm{ The previous result could be stated in the more general setting of Alexandrov spaces. Namely, we can choose $M$ to be a compact Alexandrov-space of finite dimension, with the homotopy type of a CW complex, and curvature $-C \leq \mathrm{Curv}_M \leq 0$. Indeed, \cite[Theorem A]{Xu} is already stated in this metric setting and, in order to obtain that the universal covering space $M'$ is contractible we can apply the metric version of the Cartan-Hadamard theorem by S. Alexander and R. L. Bishop. See e.g. \cite[Theorem II.4.1 and Lemma II.4.5]{BH-book}. } \end{remark}
\begin{remark} \rm{ Concerning the curvature assumptions what we said in Remark \ref{rem-curv} applies also to Corollary \ref{corollary-LCL}. In fact, we have the following surprisingly general result. } \end{remark}
\begin{corollary} Let $M$ be a compact smooth manifold with contractible universal covering $M'$ and let $N$ be a compact smooth manifold with compact universal covering $N'$. Then, for any fixed Riemannian metrics $g$ on $M$ and $h$ on $N$ and for every $e^{\epsilon}$-LcL map $f:(M,g) \to (N,h)$ it holds $\epsilon > \ln(1.02368)$. \end{corollary}
The paper is organized as follows: in Section 2 we collect some preliminary facts concerning Hurewicz fibrations, loop spaces and the cohomological dimension of a space in connection with the covering space theory. Section 3 is devoted to the proofs of Theorems \ref{th-general} and \ref{corollary-cw}.
\section{Preliminary results} In this section we lift a Hurewicz fibration to the universal covering spaces without changing the assumption of the main theorem and, as a consequence, we deduce some information on the singular homology of the lifted fibre.
\subsection{Lifting Hurewicz fibrations} In the next result we collect some properties of our interest that a generic covering space inherits from its base space. \begin{proposition}\label{prop-lift} Let $P:E' \to E$ be a covering projection between path connected spaces. Then, each of the following properties lifts from the base space $E$ to the covering space $E'$. \begin{enumerate} \item[(a)] The space is a CW-complex. \item[(b)] The space has the homotopy type of a CW-complex. \item[(c)] The space is regular (and $T_1$) and has finite small inductive dimension. \item[(d)] The space is locally path connected and $\mathrm{II}$-countable. \item[(e)] The space is locally path connected, separable and metrizable. \end{enumerate} Moreover, if $E'$ is simply connected, i.e. $P:E' \to E$ is the universal covering of $E$, then (b) can be replaced by \begin{enumerate}
\item [(b')] The space is Hausdorff, locally path connected and has the homotopy type of a finite dimensional CW-complex. \end{enumerate} \end{proposition}
\begin{remark} \rm{ It will be clear from the proof of (d) that $E'$ is $\mathrm{II}$-countable provided that $E$ has the homotopy type of a $\mathrm{II}$-countable, CW-complex $Y$ with a countable $1$-skeleton $Y^1$. Indeed, the injection $i:Y^1 \to Y$ induces a surjective homomorphism between fundamental groups $i_{\ast} : \pi_1(Y^1, \ast) \to \pi_1(Y,\ast) \simeq \pi_1(E,\ast)$, \cite[Proposition 1.26]{Ha-book} or \cite[Corollary 2.4.7]{FrPi-book}. } \end{remark}
\begin{proof}
(a) Indeed, the $n^{\mathrm{th}}$-skeletons $(E')^n$ of $E'$ and $E^n$ of $E$ are related by $P^{-1}(E^n) = (E')^n$ and $P|_{(E')^{n}}:(E')^{n} \to E^{n}$ is a covering projection; \cite[Proposition 2.3.9]{FrPi-book}.
(b) Assume that $E$ has the homotopy type of a CW-complex. Since the covering projection $P$ is a Hurewicz fibration, \cite[Theorem II.2.3]{Sp}, and each fibre $P^{-1}(e)$ is a discrete space, hence a CW-complex, the conclusion follows by applying \cite[Theorem 5.4.2]{FrPi-book}.
(c) Assume that $E$ is regular, i.e., it is a $T_1$ space with the shrinking property of arbitrarily small open neighborhoods. Then $E'$ is also $T_1$ because for any $e' \in E'$, the fibre $P^{-1}(P(e'))$ is a discrete and closed subspace of $E'$. The shrinking property of $E'$ follows easily from the fact that $P$ is a local homeomorphism. Finally, let us show that the regular space $E'$ has small inductive dimension $\mathrm{ind}(E') \leq n <+\infty$ provided $\mathrm{ind}(E) \leq n <+\infty$. To this end, fix $e' \in E'$ and an open neighborhood $U'$ of $e'$. We have to show that there exists an open set $e' \in V' \subset U'$ such that $\mathrm{ind}(\operatorname{Fr}_{E'} V') \leq n-1$. Here, $\operatorname{Fr}_{X}Y$ denotes the topological boundary of $Y$ as a subspace of $X$. Without loss of generality, we can assume that $P|_{U'} :U' \to U=P(U')$ is a homeomorphism. Since $\mathrm{ind}(U) \leq \mathrm{ind}(E)$, \cite[1.1.2]{En-dim}, by the topological invariance of the inductive dimension we have $\mathrm{ind}(U') \leq n$. By regularity, we can choose an open neighborhood $e' \in U'_0 \subset \overline{U'_0} \subset U'$. It follows from the definition of inductive dimension of $U'$ that there exists an open set (in $U'$ hence in $E'$) $e' \in V' \subset U'_0$ such that $\mathrm{ind}(\operatorname{Fr}_{U'} V') \leq n-1$. To conclude, observe that $\operatorname{Fr}_{U'} {V'} = \operatorname{Fr}_{E'} {V'}$.
(d) Assume that $E$ is locally path connected and $\mathrm{II}$-countable. Since $P$ is a local homeomorphism then $E'$ is locally path-connected. Moreover, since the topology of $E$ has a countable basis made up by open sets evenly covered by $P$, to prove that $E'$ is $\mathrm{II}$-countable it is enough to verify that the fibre of the covering projection is a countable set. On the other hand, the fibre $P^{-1}(e)$ is in one-to-one correspondence with the co-set $\pi_1(E,e)/P_{\ast}\pi_1(E',e')$ of the fundamental group $\pi_1(E,e)$ of $(E,e)$. To conclude, we recall that $\pi_1(E,e)$ is countable because the topology of the path connected space $E$ has a countable basis $\mathcal{U}$ such that, for any $U_1, U_2 \in \mathcal{U}$, the connected components of $U_1 \cap U_2$ are open (hence countable); see e.g. the proof of \cite[Theorem 8.11]{Lee-top}.
(e) We already know that $E'$ is a ($T_1$-)regular, $\mathrm{II}$-countable (hence separable) topological space. Therefore, we can apply the Urysohn metrization theorem.
(b') The Hausdorff property lifts from the base space $E$ to the covering space $E'$: two points on the same fibre are separated by the pre-images of an evenly covered open set of $E$, whereas two points on different fibres are separated by the pre-images of disjoint open sets in $E$. Moreover $E'$ is locally path connected as already observed.\\ Assume that $P:E' \to E$ is the universal covering projection and that the Hausdorff space $E$ has the homotopy type of a CW-complex $Y$ with $\dim Y <+\infty$. Recall that $Y$ is Hausdorff (actually normal) and that the (Lebesgue) covering dimension $\dim Y$ coincides with the dimension of $Y$ as a CW-complex; \cite[Propositions 1.2.1, 1.5.14]{FrPi-book}. Let $Q: Y' \to Y$ be the universal covering of $Y$. We know from (a) that $Y'$ is a CW-complex of dimension $\dim Y' = \dim Y$. We shall show that $E'$ has the same homotopy type as $Y'$, that is: \begin{lemma}\label{lemma_equivcov} Let $P_X:X' \to X$ and $P_Y:Y' \to Y$ be universal covering projections between Hausdorff spaces. If $X$ is homotopy equivalent to $Y$ then $X'$ is homotopy equivalent to $Y'$. \end{lemma} The proof relies on the following standard fact. Recall that a covering projection $P:X' \to X$ between path connected and locally path connected spaces is said to be normal if the image $P_{\ast}(\pi_1(X',x'))$ of the fundamental group of $X'$ is normal in the fundamental group $\pi_1(X,P(x'))$ of $X$. This condition is trivially satisfied if $X'$ is simply connected. \begin{lemma}\label{lemma_covtransf} Let $P:X' \to X$ be a normal covering between connected, locally path connected and Hausdorff spaces. Let $f:X\to X$ be a continuous map. Let $f',g':X' \to X'$ be continuous liftings of $f$, that is $P \circ f' = f\circ P = P \circ g'$. Then, there exists a covering transformation $\tau \in \mathrm{Deck}(P)$ such that $\tau \circ f' = g'$. \end{lemma} We are now in a position to give the proof of Lemma \ref{lemma_equivcov}. 
In all that follows, given a homotopy $H:X \times I \to Y$, we shall use the notation $H_{t}:= H(\cdot,t):X \to Y$. \begin{proof}[Proof of Lemma \ref{lemma_equivcov}] Let $f:X \to Y$ be a homotopy equivalence. This means that there exist continuous maps $g,h:Y \to X$ such that $g \circ f \simeq \mathbf{1}_X$ via a homotopy $H:X\times I \to X$ and $f \circ h \simeq \mathbf{1}_Y$ via a homotopy $K: Y \times I \to Y$. Choose any liftings \begin{itemize} \item $f':X' \to Y'$ of $(f\circ P_X):X' \to Y$ with respect to the covering projection $P_Y$ \item $g',h':Y' \to X'$ of $(g\circ P_Y), (h\circ P_Y) : Y' \to X$ with respect to the covering projection $P_X$ \item $H':X'\times I \to X'$ of $H \circ (P_X \times \mathbf{1}_I):X' \times I \to X$ with respect to $P_X$ \item $K':Y' \times I \to Y'$ of $K \circ (P_Y \times \mathbf{1}_I):Y' \times I \to Y$ with respect to $P_Y$. \end{itemize} This is possible, without any restriction on the maps, because each of the domain spaces is simply connected.\\ Now, since \[ P_X \circ H'_{0} = P_X \circ (g' \circ f') \] and since the universal covering is normal, by Lemma \ref{lemma_covtransf} there exists a covering transformation $\tau_1\in \mathrm{Deck}(P_X)$ such that \[ \tau_1 \circ H'_{0} = g'\circ f'. \] Similarly, since \[ P_X \circ H'_{1} = P_X \circ \mathbf{1}_{X'} \] we find $\tau_2 \in \mathrm{Deck}(P_X)$ such that \[ \tau_2 \circ H'_{1} = \mathbf{1}_{X'}. \] Let $H''=\tau_2 \circ H'$ and $g'' = \tau_2\circ \tau_1^{-1}\circ g'$. Then, it follows from the above equations that the homotopy $H''$ realizes the equivalence \begin{equation}\label{equivcov1} g'' \circ f' \simeq \mathbf{1}_{X'}. \end{equation} Arguing in a similar way, we obtain the existence of $\xi_1,\xi_2 \in \mathrm{Deck}(P_Y)$ such that the homotopy $K''=\xi_2 \circ K'$ realizes the equivalence \[ (\xi_2 \circ \xi_1^{-1} \circ f') \circ h' \simeq \mathbf{1}_{Y'}. 
\] Define $h'' = h' \circ (\xi_1 \circ \xi_2 ^{-1})$ and the new homotopy \[ K''' =( \xi_1 \circ \xi_2^{-1}) \circ K'' \circ (\xi_1 \circ \xi_2 ^{-1} \times \mathbf{1}_I). \] Then, it is straightforward to verify that $K'''$ realizes the equivalence \begin{equation}\label{equivcov2} f' \circ h'' \simeq \mathbf{1}_{Y'}. \end{equation} From \eqref{equivcov1} and \eqref{equivcov2} we conclude that $f'$ is a homotopy equivalence between $X'$ and $Y'$. \end{proof} The proof of Proposition \ref{prop-lift} is completed. \end{proof}
We now verify that a Hurewicz fibration always lifts to a Hurewicz fibration between universal covering spaces. According to the previous Lemma, the lifted fibration enjoys some good properties of the base fibration. \begin{proposition}\label{prop-lift-hur} Let $\pi:E \to B$ be a Hurewicz fibration between connected, locally path connected and semi-locally simply connected spaces with the homotopy type of a CW-complex. Let $P_E: E' \to E$ and $P_B:B' \to B$ denote the universal covering maps of $E$ and $B$ respectively. Then:
\begin{enumerate} \item [(a)] There exists a Hurewicz fibration $\pi':E' \to B'$ such that $P_B \circ \pi' = \pi \circ P_E$. \item [(b)] $E'$ and the fibres $F'$ have the homotopy type of a CW-complex. Moreover, if $E$ is a separable metric space then also $E'$ and $F'$ are separable metric spaces. \item [(c)] Let $F_{0} = \pi^{-1}(b_0)$ and $F'_{0}=(\pi')^{-1}(b'_0)$ with $P_B(b'_0) = b_0$. If $E'$ is contractible, then $F'_{0}$ is exactly one of the connected components of $P_E^{-1}(F_{0})$. In particular:
\begin{enumerate}
\item [(c.1)] If $F_{0}$ is locally contractible, so is $F'_{0}$.
\item [(c.2)] If $F_{0}$ is a finite dimensional CW-complex then the same holds for $F'_{0}$.
\end{enumerate} \item [(d)] If $E$ is a separable metric space with finite covering dimension then $E'$ and $F'$ have the same properties. \end{enumerate} \end{proposition}
\begin{proof} (a) First, we observe that the composition of Hurewicz fibrations is still a Hurewicz fibration and this, in particular, applies to $\pi'' := \pi \circ P_E :E' \to B$. The verification is straightforward from the definition. Since $E'$ is simply connected, $\pi''$ lifts to a continuous map $\pi' : E' \to B'$ such that $$P_B \circ \pi' = \pi''.$$ We show that $\pi'$ is a Hurewicz fibration. To this end, let $H:X \times I \to B'$ be a given homotopy, and assume that $\widetilde {H_{0}}: X \to E'$ is a continuous lifting of $H_{0} : X \to B$ with respect to $\pi'$, i.e., \[ \pi' \circ \widetilde{H_{0}} = H_{0}. \] Without loss of generality, we can assume that $X$ is connected. If not, we argue on each component. Consider the homotopy $$H'' := P_B \circ H: X \times I \to B'$$ and observe that $\widetilde{H_{0}}$ is a continuous lifting of $H''_{0}:X \to B$ with respect to $\pi''$. Indeed, \[ \pi'' \circ \widetilde{H_{0}} = P_B \circ (\pi' \circ \widetilde{H_{0}}) = P_B \circ H_{0} = H''_{0}. \] Since $\pi''$ is a Hurewicz fibration, there exists a continuous lifting \[ \tilde H : X\times I \to E' \]
of $H''$ with respect to $\pi''$ such that
\[
\tilde H(x,0) = \widetilde{H_{0}}(x)\text{, }\forall x\in X.
\]
Note that, in particular, \[ H'' = \pi'' \circ \tilde H = P_B \circ (\pi' \circ \tilde H), \] namely, $(\pi' \circ \tilde H) : X \times I \to B'$ is the unique continuous lifting of $H''$ with respect to the covering projection $P_B$ satisfying \[ (\pi' \circ \tilde H)(x,0) = (\pi' \circ \widetilde{H_{0}})(x) = H_{0}(x). \] Since, by definition of $H''$, the map $H:X\times I \to B'$ has the same property we must conclude \[ \pi' \circ \tilde H = H. \] Summarizing, we have proved that $\tilde H$ is a continuous lifting of $H$ with respect to $\pi'$ and satisfies $\tilde H (x,0) = \widetilde{H_{0}}(x)$.\\
(b) By Proposition \ref{prop-lift} (b), both $E'$ and $B'$ have the homotopy type of a CW-complex. Then, by \cite[Proposition 5.4.1]{FrPi-book}, $F'$ has the homotopy type of a CW-complex. Moreover, by (e) of Proposition \ref{prop-lift}, $E'$ is separable and metrizable. The same holds for the subspace $F'$ of $E'$.\\
(c) We shall see in Proposition \ref{prop-contrfibre} below that $F'_{0}$ is path connected. Note that $F'_{0}\subseteq P_E^{-1}(F_{0})$ and $\pi'(P_E^{-1}(F_{0})) \subseteq P_B^{-1}(b_0)$. Since $P_B^{-1}(b_0)$ is a discrete space, the continuous map $\pi'|_{P_{E}^{-1}(F_{0})}$ is constant on the connected components of $P_E^{-1}(F_{0})$. Let $C$ be a component of $P_E^{-1}(F_{0})$ such that $C \cap F'_{0} \not= \emptyset$. Clearly $C \supseteq F'_{0}$ because $F'_{0}$ is connected also as a subspace of $F_{0}$. Since $\pi'(F'_{0})=b'_0$ then, necessarily, we have $\pi'(C) = b'_0$. Thus $C= F'_{0}$.\\ (c.1) Now, assume that $F_{0}$ is locally contractible. Since \[
P_{F_{0}} = P_E|_{P_E^{-1}(F_{0})}: P_E^{-1}(F_{0}) \to F_{0} \] is a covering projection, hence a local homeomorphism, we have that also $P_E^{-1}(F_{0})$ is locally contractible. In particular, this space is locally path connected. It follows that the connected component $C=F'_{0}$ of $P_E^{-1}(F_{0})$ is an open subset and, therefore, $F'_{0}$ inherits the local contractibility of $P_{E}^{-1}(F_{0})$. \\
(c.2) Finally, suppose that $F_{0}$ is a finite-dimensional CW-complex. Then, using again that $P_{F_{0}}$ is a covering projection, by Proposition \ref{prop-lift} (a) we have that $P_{E}^{-1}(F_{0})$ is a finite dimensional CW-complex. To conclude, recall that $F'_{0}$ is a connected component of $P_{E}^{-1}(F_{0})$, hence a CW-subcomplex.\\
(d) Recall that for a separable, metrizable space the small inductive dimension coincides with the (Lebesgue) covering dimension, \cite{En-dim}. Therefore, we can apply Proposition \ref{prop-lift} (c) and conclude that, if $\dim E <+\infty$ then $\dim E', \dim F'<+\infty$. \end{proof}
\subsection{The loop space of the base space} We now consider the path fibration $\varepsilon:\mathcal{P}_{x_{0}}(X) \rightarrow X$ where \[ \mathcal{P}_{x_{0}}(X) =\left\{ \gamma:[0,1]\rightarrow X:\gamma\text{ continuous and }\gamma\left( 0\right) =x_{0}\right\} \] is endowed with the compact-open topology and $\varepsilon$ is the end-point map: \[ \varepsilon\left( \gamma\right) =\gamma\left( 1\right). \] It is a Hurewicz fibration and the fibre $F=\varepsilon^{-1}(x_0)$ is the loop space of $X$ based at $x_{0}$: \[ \varepsilon^{-1}\left( x_{0}\right) =\Omega_{x_{0}}X. \] Sometimes, when there is no danger of confusion, the base point is omitted from the notation.
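\begin{remark} \rm{ By way of example, for $X=\mathbb{S}^{1}$ the loop space $\Omega_{x_{0}}\mathbb{S}^{1}$ has the homotopy type of the discrete space $\mathbb{Z}$: its path components are indexed by the winding number and each of them, in particular the component $\widetilde{\Omega_{x_{0}}\mathbb{S}^{1}}$ of the constant loop, is contractible. } \end{remark}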
\begin{proposition} \label{prop-contrfibre} Let $\pi:E \to B$ be a Hurewicz fibration with fibre $F$ and path connected spaces $E$ and $B$. Assume that $E$, $B$ (hence $F$) all have the homotopy type of a CW-complex. \begin{enumerate} \item[(a)] If $E$ is contractible and $B$ is simply connected, then $F=\pi^{-1}(b)$ is path connected and it is homotopy equivalent to the loop space $\Omega_b B$, for any $b\in B$. \item[(b)] Assume that $B$ admits the universal covering projection $P_B:B' \to B$. Having fixed $b'_{0}\in B'$, let $P_B(b'_{0}) =b_{0}\in B$ and denote by $\widetilde{\Omega_{b_{0}} B}$ the connected component of $\Omega_{b_{0}}B$ containing the constant loop $c_{b_{0}}$. Then, there exists a homotopy equivalence $\theta:\Omega_{b'_{0}}B'\rightarrow\widetilde{\Omega_{b_{0}}B}$. \end{enumerate} \end{proposition}
\begin{proof} (a) The loop space $\Omega_{b}B$ of the simply connected space $B$ is path connected. Indeed, a relative homotopy $H : I \times I \to B$ between the constant loop $c_{b} \in \Omega_{b}B$ and any other loop $\gamma \in \Omega_{b}B$ gives rise to the continuous path $\Gamma: I \to \Omega_{b }B$, $\Gamma(s) = H(\ast,s)$, connecting $c_{b}$ to $\gamma$. Moreover, by \cite[Corollary 3]{Mi-TAMS}, $\Omega_b B$ has the homotopy type of a CW-complex.
Now, since $B$ is path connected, the path space $\mathcal{P}_{b}(B)$ is contractible, \cite[Lemma 3 on p. 75]{Sp}. It follows from \cite[Proposition 4.66]{Ha-book} that there is a weak homotopy equivalence $h:F\rightarrow\Omega_{b} B$. In particular, since $h_{\ast}: \pi_{0}(F) \to \pi_{0}(\Omega_{b}B)$ is bijective, $F$ must be connected. Summarizing, $h$ is a weak homotopy equivalence between connected spaces with the homotopy type of a CW-complex. Therefore, we can apply the classical theorem by J. H. C. Whitehead and conclude that $h$ is a genuine homotopy equivalence, \cite[Theorem 4.5]{Ha-book}.
\noindent (b) Consider the following diagram of fibrations\\
\hspace{4.2cm} \xymatrix{ B' \ar@{->}[d]_{P_B} \ar@{<-}[r]^{\varepsilon^{\prime}} & \mathcal{P}_{b'_{0}}(B^{\prime}) \\ B \ar@{<-}[r]_{\varepsilon} &\mathcal{P}_{b_{0}}(B)}\\[3mm]
Since the path space $\mathcal{P}_{b'_{0}}(B')$ is contractible to the constant path $c_{b_{0}^{\prime}}$, the continuous map \[ P_B^{\prime}=P_B\circ\varepsilon^{\prime}:\mathcal{P}_{b'_{0}}(B')\rightarrow B \] is homotopic to the constant map $f_{b_{0}} \equiv b_{0}:\mathcal{P}_{b'_{0}}(B')\rightarrow B$. Let $H:\mathcal{P}_{b'_{0}}(B')\times\lbrack0,1]\rightarrow B$ be a homotopy between $f_{b_{0}}$ and $P_B^{\prime}$ such that $H\left( \cdot,0\right) =f_{b_{0}}$ and consider the constant map $f_{b_{0}}^{\prime}\equiv c_{b_{0}}:\mathcal{P}_{b'_{0}}(B')\rightarrow \mathcal{P}_{b_{0}}(B) $. Clearly $f_{b_{0} }^{\prime}$ is a lifting of $f_{b_{0}}$ with respect to $\varepsilon$. Then, by the homotopy lifting property of the fibration $\varepsilon:\mathcal{P}_{b_{0}}( B) \rightarrow B$, there exists a lifting $H^{\prime }:\mathcal{P}_{b'_{0}}(B')\times\lbrack0,1]\rightarrow \mathcal{P}_{b_{0}}(B)$ of $H$ such that $H^{\prime}\left( \cdot,0\right) =f_{b_{0}}^{\prime}$. Define \[ \Theta=H^{\prime}\left( \cdot,1\right) :\mathcal{P}_{b'_{0}}(B')\rightarrow \mathcal{P}_{b_{0}}(B) . \] Since, by construction, $\varepsilon\circ H^{\prime}=H$, we have that $\varepsilon\circ\Theta=P_B^{\prime}$ and the following diagram commutes:
\hspace{4.2cm} \xymatrix{ B' \ar@{->}[d]_{P_B} \ar@{<-}[r]^{\varepsilon^{\prime}} & \mathcal{P}_{b'_{0}}(B^{\prime}) \ar@{->}[d]^\Theta \ar@{->}[ld]^{P'_B}\\ B \ar@{<-}[r]_{\varepsilon} &\mathcal{P}_{b_{0}}(B)}\\[3mm]
Now, define $\theta:= \Theta|_{\Omega_{b'_{0}}B'}:\Omega_{b_{0}^{\prime}}B^{\prime}\rightarrow \mathcal{P}_{b_{0}}(B)$. We claim that, actually, \[ \theta:\Omega_{b_{0}^{\prime}}B^{\prime}\rightarrow\widetilde{\Omega_{b_{0}} B} \subseteq \Omega_{b_{0}}B. \] Indeed, for any $\gamma^{\prime}\in\Omega_{b_{0}^{\prime}}B^{\prime}$ we have \[ \varepsilon\circ\Theta\circ\gamma^{\prime}=P_B^{\prime}\circ\gamma^{\prime }=P_B\circ\varepsilon^{\prime}\circ\gamma^{\prime}=P_B\left( b_{0}^{\prime }\right) =b_{0}, \] proving that the end-point of $\Theta\circ\gamma^{\prime}$ is $b_{0}$. On the other hand, by definition, $\Theta$ takes values in $\mathcal{P}_{b_{0}}(B) $, thus $\Theta\circ\gamma^{\prime}$ is a path issuing from $b_{0}$. Summarizing, $\Theta\circ\gamma^{\prime}\in\Omega_{b_{0}}B$, i.e., $\theta(\Omega_{b_{0}^{\prime}}B^{\prime})\subseteq\Omega_{b_{0}}B$. Actually, since $B^{\prime}$ is simply connected, $\Omega_{b_{0}^{\prime}}B^{\prime}$ is path connected and $\theta(\varepsilon_{b_{0}^{\prime}})=\varepsilon _{b_{0}}$. It follows by the continuity of $\theta$ that $\theta(\Omega _{b_{0}^{\prime}}B^{\prime})\subseteq\widetilde{\Omega_{b_{0}}B}$, as claimed.
We have thus obtained the following commutative diagram
\begin{eqnarray}
\xymatrix{ B' \ar@{->}[d]_{P_B} \ar@{<-}[r]^{\varepsilon^{\prime}} & \mathcal{P}_{b'_{0}}(B^{\prime}) \ar@{->}[d]^\Theta &\Omega_{b_{0}^{\prime}}B^{\prime} \ar@{_{(}->}[l]_{\quad\ i'} \ar@{->}[d]^\theta \\ B \ar@{<-}[r]_{\varepsilon} &\mathcal{P}_{b_{0}}(B) \ar@{->}[l] \ar@{<-^{)}}[r]_{\quad\ i} & \Omega_{b_{0}}B}\label{diagram_pathcov} \end{eqnarray} where $i:\Omega_{b_{0}}B\hookrightarrow \mathcal{P}_{b_{0}}(B)$ and $i^{\prime}:\Omega_{b_{0}^{\prime}}B^{\prime}\hookrightarrow \mathcal{P}_{b'_{0}}(B^{\prime })$ are the inclusion maps and each row is a fibration. Consider the associated homotopy sequences with $j\geq1$ and any $\gamma' \in \Omega_{b'_{0}}B'$, \cite[Theorem 4.41]{Ha-book}:\\ \xymatrix{
1\ar@{<->}[d]|{\shortparallel} & & & 1\ar@{<->}[d]|{\shortparallel}\\
\pi_{j}(\mathcal{P}B^{\prime},\gamma') \ar@{<-}[r] \ar@{<->}[d]|{\shortparallel} & \pi_{j}(\Omega B',\gamma')\ar[d]^{\theta_*} \ar@{<-}[r]^{\ \Delta'_{j+1}} & \pi_{j+1}(B',b'_{0}) \ar[d]^{(P_B)_*} \ar@{<-}[r] & \pi_{j+1}(\mathcal{P}B^{\prime},\gamma') \ar@{<->}[d]|{\shortparallel}\\
\pi_{j}(\mathcal{P}B, \Theta(\gamma')) \ar@{<-}[r] \ar@{<->}[d]|{\shortparallel}& \pi_{j}(\Omega B, \theta(\gamma')) \ar@{<-}[r]_{\quad \Delta_{j+1}}
\ar@{<->}[d]|{\shortparallel}&
\pi_{j+1}(B,b_{0}) \ar@{<-}[r] & \pi_{j+1}(\mathcal{P}B',\Theta(\gamma')) \ar@{<->}[d]|{\shortparallel} \\ 1 & \pi_{j}(\widetilde{\Omega B}, \theta(\gamma')) & & 1 }\\[0.5cm]
Each row is exact. Moreover, $\pi_{j}\left( \mathcal{P}B^{\prime}\right) =1=\pi_{j}\left( \mathcal{P}B\right) $ because these spaces are contractible. It follows that both $\Delta_{j+1}^{\prime}$ and $\Delta_{j+1}$ are isomorphisms. On the other hand, from the homotopy exact sequence of the covering $P_B:B^{\prime}\rightarrow B$ we see that $(P_B)_{\ast}:\pi_{j+1}\left( B^{\prime},b_{0}^{\prime}\right) \rightarrow \pi_{j+1}\left( B,b_{0}\right) $ is an isomorphism for every $j\geq1$. Therefore, we conclude that \[ \theta_{\ast}=\Delta_{j+1}\circ (P_B)_{\ast}\circ (\Delta'_{j+1})^{-1}: \pi_{j}(\Omega B',\gamma') \to \pi_{j}(\widetilde{\Omega B},\theta(\gamma')) \] is an isomorphism for every $j\geq1$, being a composition of isomorphisms. Finally, since both $\widetilde{\Omega B}$ and $\Omega B^{\prime}$ are path connected, $\theta_{\ast}$ is in fact an \textquotedblleft isomorphism\textquotedblright\ also for $j=0$. In conclusion, $\theta$ is a weak homotopy equivalence between $\widetilde{\Omega B}$ and $\Omega B^{\prime}$. Since these spaces have the homotopy type of a $CW$-complex, by the theorem of Whitehead, $\theta$ is a genuine homotopy equivalence. \end{proof}
\subsection{Singular homology and dimension} It is well known that the covering dimension dominates the \v{C}ech cohomological dimension of a paracompact space. By the abstract de Rham isomorphism and by duality we therefore deduce the following
\begin{proposition}\label{prop-cech} Let $X$ be a topological space with the homotopy type of a locally contractible, paracompact space $Y$ of finite covering dimension $\dim Y <+\infty$. Then, for any field $\mathbb{K}$, the singular homology of $X$ with coefficients in $\mathbb{K}$ satisfies \begin{equation}\label{homologyfibre} H_k(X;\mathbb{K}) = 0 \text{, } \forall k > \dim Y. \end{equation} \end{proposition}
\begin{proof} Indeed, by the homotopy invariance of the singular homology, \[ H_{k}(X;\mathbb{K}) \simeq H_{k}(Y;\mathbb{K})\text{ for every }k. \] Let $\check{H}^k(Y;\mathcal{K})$ denote the \v{C}ech cohomology with coefficients in the constant sheaf $\mathcal{K}$ generated by $\mathbb{K}$. Since $Y$ is locally contractible and paracompact we have (\cite[Theorem 5.10.1]{Go-topalg}, \cite[Theorem 5.25]{Wa}) \[ \check{H}^k(Y;\mathcal{K}) \simeq \check{H}^k(Y;\mathbb{K}) \simeq {H}^k(Y;\mathbb{K}) \simeq H^{k}(Y;\mathcal{K}), \] and, moreover, $\check{H}^k(Y;\mathcal{K})=0$ for every $k > \dim Y$, \cite[Section 5.12]{Go-topalg}. Therefore, $H^k(Y;\mathbb{K}) =0 $ for every $k>\dim Y$. Since we are taking coefficients in a field, by the universal coefficient theorem we have $H^k(Y;\mathbb{K}) \simeq \mathrm{Hom}(H_k(Y;\mathbb{K});\mathbb{K})$, \cite[Theorem 53.5]{Mu-elements}, and this implies that $H_{k}(X;\mathbb{K}) \simeq H_{k}(Y;\mathbb{K}) = 0$ for every $k > \dim Y$. \end{proof}
\section{Proof of the main Theorems} Let us begin with the \begin{proof}[Proof of Theorem \ref{th-general}]
(a) Let the fibre $F_{0}=\pi^{-1}(y_{0})$ be locally contractible and assume that the total space $X$ of the Hurewicz fibration $\pi:X \to Y$ is aspherical. Since $X$, hence its universal covering space $X'$, has the homotopy type of a CW-complex, this is equivalent to saying that $X'$ is contractible. We have to show that the universal covering space $Y'$ of the base space $Y$ is $\mathbb{K}$-acyclic for every field $\mathbb{K}$. By contradiction, suppose that this is not the case. Since $Y'$ is simply connected, this means that: \begin{equation}\label{nonacyclic} \text{there exist }m\geq 2 \text{ and a field }\mathbb{K} \text{ such that }H_m(Y';\mathbb{K}) \not= 0. \end{equation} Now, according to Proposition \ref{prop-lift-hur}, let us consider the lifted Hurewicz fibration $\pi':X' \to Y'$. Then, $X'$ and $F'_{0}=(\pi')^{-1}(y'_{0})$, with $P_{Y}(y_{0}')=y_{0}$, are separable metric spaces with the homotopy type of a CW-complex and of finite covering dimension $\dim X' , \dim F'_{0}<+\infty$. Moreover, by (b') of Proposition \ref{prop-lift}, $Y'$ has the homotopy type of a finite dimensional CW-complex and by (c.1) of Proposition \ref{prop-lift-hur}, $F'_{0}$ is locally contractible.
Using \eqref{nonacyclic} together with Proposition \ref{prop-cech} applied to $Y'$, we deduce that there exists an integer $n \geq m$ such that $H_n(Y';\mathbb{K}) \not= 0$ and $H_k(Y';\mathbb{K}) = 0$, for every $k>n$. Therefore the following result by J.P. Serre, \cite[Proposition 11, p. 484]{Se-annals}, can be applied to the simply connected space $Y'$: \begin{theorem}\label{th_Serre} Let $Z$ be a simply connected space and assume that there exists an integer $n \geq 2$ such that the singular homology of $Z$ with coefficients in a field $\mathbb{K}$ satisfies $H_k(Z;\mathbb{K})=0$ for every $k > n$ and $H_n(Z;\mathbb{K}) \not=0$. Then, for every $i \geq 0$ there exists $0<j<n$ such that $H_{i+j}(\Omega Z;\mathbb{K}) \not=0$. \end{theorem} In view of this result, the singular homology of the loop space $\Omega_{y'_{0}} Y'$ satisfies \[ H_{k}(\Omega_{y'_{0}} Y';\mathbb{K}) \not=0\text{, for infinitely many }k>0. \] On the other hand, since we are assuming that $X'$ is contractible, it follows from Proposition \ref{prop-contrfibre} that $F'_{0}$ has the homotopy type of $\Omega_{y'_{0}} Y'$. Whence, using again Proposition \ref{prop-cech} with $X=F'_{0}$, we conclude \[ H_k(\Omega_{y'_{0}} Y';\mathbb{K}) \simeq H_k(F'_{0};\mathbb{K}) =0 \text{, }\forall k\gg 1. \] This is a contradiction.\\
(b) Assume that $X$ is aspherical (i.e. $X'$ is contractible) and that $F_{0} = \pi^{-1}(y_{0})$ is a finite dimensional CW-complex. Combining Proposition \ref{prop-lift-hur} (c.2) with Proposition \ref{prop-contrfibre} we have that $\widetilde{\Omega_{y_{0}} Y}$ has the same homotopy type as the finite dimensional CW-complex $F'_{0}=(\pi')^{-1}(y'_{0})$, with $P_{Y}(y'_{0})=y_{0}$. It follows from Proposition \ref{prop-cech} and from the homotopy invariance of the singular homology that \[ H_k(\widetilde{\Omega_{y_{0}} Y};\mathbb{K}) \simeq H_k(\Omega_{y'_{0}} Y';\mathbb{K}) \simeq H_k(F'_{0};\mathbb{K}) =0 \text{, }\forall k \geq \dim X. \]
This completes the proof. \end{proof}
As a corollary of the above arguments, we obtain the
\begin{proof}[Proof of Theorem \ref{corollary-cw}] Assume that $X$ is aspherical. By Proposition \ref{prop-contrfibre} (a) and Proposition \ref{prop-lift-hur} (c.1), $\Omega_{y'_{0}} Y'$ is homotopy equivalent to the locally contractible space $F'_{0}=(\pi')^{-1}(y'_{0})$, with $P_{Y}(y'_{0}) = y_{0}$. Moreover, $F'_{0}$ is paracompact because it is a closed subspace of the CW-complex $X'$, which is paracompact; \cite{Miy-Tohoku}, \cite[Proposition 1.3.5]{FrPi-book}. Finally, since the covering dimension of the CW-complex $X'$ is exactly its CW-dimension, \cite[Proposition 1.5.14]{FrPi-book}, and since $F'_{0}$ is a closed subspace of $X'$ we have $\dim F'_{0} \leq \dim X' < +\infty$, \cite{En-dim}. To conclude, we now use Proposition \ref{prop-cech} as in the proof of Theorem \ref{th-general}. \end{proof}
\end{document}
\begin{document}
\title[Almost disjoint subgroups]{Almost disjoint pure subgroups of the Baer-Specker group} \author{Oren Kolman} \address{King's College London\\ Strand\\ London WC2R 2LS, UK} \thanks{We thank the referee for constructive comments.} \email{[email protected]}
\author{Saharon Shelah} \thanks{This research was partially supported by the German-Israel Foundation for Science; publication number 683.} \address{Institute of Mathematics\\ Hebrew University\\ Jerusalem, Israel} \email{[email protected]}
\begin{abstract} We prove in ZFC that the Baer-Specker group ${\bf Z}^\omega$ has $2^{\aleph_1}$ non-free pure subgroups of cardinality $\aleph_1$ which are almost disjoint: no non-free group is embeddable in both members of any pair. \end{abstract} \thispagestyle{empty}
\maketitle
In this short paper we prove the following result.
\begin{theorem} \label{1}There exists a family ${\bf G}=\{G_\alpha:\alpha<2^{\aleph_1}\}$ of non-isomorphic non-free pure subgroups of the Baer-Specker group ${\bf Z}^\omega$ such that:\hfil\break (1.1)\quad each $G_\alpha$ has cardinality $\aleph_1$;\hfil\break (1.2)\quad if $\alpha<\beta$, then $G_\alpha$ and $G_\beta$ are almost disjoint: if $H$ is isomorphic to subgroups of $G_\alpha$ and $G_\beta$, then $H$ is free. In particular, $G_\alpha\cap G_\beta$ is free. \end{theorem} Recall that the Baer-Specker group ${\bf Z}^\omega$ is the abelian group of functions from the natural numbers into the integers (see \cite{1} and \cite{18}). It contains the canonical pure free subgroup ${\bf Z}_\omega=\oplus_{n<\omega}{\bf Z}$. The group ${\bf Z}^\omega$ is not $\kappa$-free for any cardinal $\kappa>\aleph_1$, but it is $\aleph_1$-free, so the groups $G_\alpha$ in Theorem \ref{1} are almost free.
Theorem \ref{1} answers a question of the first author, and has its place in the line of recent research dealing with the lattice structure of the pure subgroups of ${\bf Z}^\omega$ (see \cite{2}, \cite{3}, and \cite{5}--\cite{8}). For example, Irwin asked whether there is a subgroup of ${\bf Z}^\omega$ with uncountable dual but no free summands of infinite rank. This problem was resolved recently by Corner and Goebel \cite{5} who proved the following stronger fact.
\begin{theorem}\cite{5}
The Baer-Specker group ${\bf Z}^\omega$ contains a pure subgroup $G$ whose endomorphism ring splits as End$(G)={\bf Z}\oplus$Fin$(G)$, with $|G^*|=2^{\aleph_0}$, where $ {\bf Z}$ is the scalar multiplication by integers and Fin$(G)$ is the ideal of all endomorphisms of $G$ of finite rank. \end{theorem} Quotient-equivalent and almost disjoint abelian groups have been studied by Eklof, Mekler and Shelah in \cite{9}--\cite{11}, who showed that under various set-theoretic hypotheses, there exist families of maximal possible size of almost free abelian groups which are pairwise almost disjoint. Following \cite{11}, we say that two groups $A$ and $B$ are almost disjoint if whenever $H$ is embeddable as a subgroup in both $A$ and $B$, then $H$ is free. Clearly if $A$ and $B$ are non-free and almost disjoint, then they are non-isomorphic in a very strong way. On the other hand, the intersection of two almost disjoint groups of size $\aleph_1$ need not necessarily be countable, so group-theoretic almost disjointness differs from its set-theoretic homonym. Theorem \ref{1} establishes in ZFC that the Baer-Specker group contains large families of almost disjoint almost free non-free pure uncountable subgroups.
Our group and set-theoretic notation is standard and can be found in \cite{10} and \cite{14}. For example, ${}^{\omega_1>}2$ is the set of partial functions from $\omega_1$ into $\{0,1\}$ whose domains are at most countable; ${}^{\omega_1}2$ is the set of all functions from $\omega_1$ into $\{0,1\}$; for a regular cardinal $\chi,\ H(\chi)$ is the family of all sets of hereditary cardinality less than $\chi$.
For a set $A\subseteq H(\chi)$ for $\chi$ large enough, we write dcl$_{\big(H( \chi),\in,<\big)}[A]$ for the Skolem closure (Skolem hull) of $A$ in the structure $\big(H(\chi),\in,<\big)$, where $<$ is a well-ordering of $H(\chi)$ (for details, see \cite{16}, 400-402, or \cite{15}, 165-170).
In proving Theorem \ref{1}, we shall appeal to the well-known Engelking-Kar\l owicz theorem from set-theoretic topology:
\begin{theorem} \cite{13}
If $|Y|=\mu=\mu^{<\sigma}<\lambda=|X|\leq2^\mu$, then there are functions $h_ \alpha:X\rightarrow Y$ for $\alpha<\mu$ such that for every partial function $f$ from $X$ to $Y$ of cardinality less than $\sigma$, for some $\alpha<\mu, \ f\subseteq h_\alpha$. \end{theorem} A self-contained short proof can be found in \cite{17}, 422-423. We shall need just the case when $\mu=\sigma=\aleph_0$, and $\lambda=2^\mu$. Since it may be less familiar to algebraists, for convenience we deduce the fact to which we appeal later on (although it also appears as Corollary 3.17 in \cite{4}).
\begin{lemma} \label{2}There exists a family $\{f_\eta:\eta\in^{\omega_1>}\!2\}$ such that $f_\eta:\omega \rightarrow{\bf Z}$, and whenever $\eta_1,\dots,\eta_k$ are distinct and $a_1, \dots,a_k\in{\bf Z}$, then\hfil\break $\{i<\omega:(\forall\ l\leq k)(f_{\eta_l}(i)=a_l)\}$ is infinite. \end{lemma}
\begin{proof} Take $\mu=\sigma=\aleph_0,\ \lambda=2^\mu,\ X={}^{\omega_1>}2$ and $Y={\bf Z}$
in the Engelking-Kar\l owicz theorem. Since $|^{\omega_1>}2|=2^{\aleph_0}$ and $|{\bf Z}|=\aleph_0$, we know that there exist functions $h_n:{}^{\omega_1>}2\rightarrow{\bf Z}$ for $n<\omega$ such that for every partial function $f$ from ${}^{\omega_1>}2$ to ${\bf Z}$ whose domain is finite, there is some $m<\omega$ such that $f\subseteq h_m$. Let $\{g_i:i<\omega\}$ be an enumeration with infinitely many repetitions of each $h_n$ for $n<\omega$.
For each $\eta\in^{\omega_1>}2$, define $f_\eta:\omega\rightarrow{\bf Z}$ by $f_\eta(i)=g_i(\eta)$. The family $\{f_\eta:\eta\in^{\omega_1>}2\}$ is as required: for if $\eta_1,\dots,\eta_k$ are distinct and $a_1,\dots,a_k\in{\bf Z}$ are given, then $f=\{(\eta_1,a_1),\dots,(\eta_k,a_k)\}$ is a finite partial function, so there is some $m$ such that $f\subseteq h_m$; since $h_m$ occurs infinitely often in the enumeration $\{g_i:i<\omega\}$, it follows that $\{i<\omega:(\forall\ l\leq k)(f_{\eta_l}(i)=a_l)\}$ is infinite. \end{proof}
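The combinatorial content of this construction can be illustrated by a finite toy model. Everything in the following Python sketch is an illustrative choice of ours, not part of the argument: six labels stand in for the indices $\eta$, the values $\{-2,\dots,2\}$ stand in for ${\bf Z}$, partial functions are extended to total ones by the value $0$, and cycling through the (finite) enumeration simulates the infinitely repeated enumeration $\{g_i\}$.

```python
from itertools import combinations, product

LABELS = range(6)      # finite stand-ins for the indices eta
VALUES = range(-2, 3)  # finite stand-in for the target ring Z

# Enumerate all partial functions with domain of size <= 3, each one
# playing the role of an h_n once extended by 0 off its domain.
partials = []
for r in range(1, 4):
    for dom in combinations(LABELS, r):
        for vals in product(VALUES, repeat=r):
            partials.append(dict(zip(dom, vals)))

def g(i, eta):
    """g_i(eta): cycle through the extensions, so each recurs 'infinitely often'."""
    return partials[i % len(partials)].get(eta, 0)

def f_eta(eta, i):
    return g(i, eta)

def hit_count(pairs, N):
    """#{i < N : f_{eta_l}(i) = a_l for all prescribed pairs (eta_l, a_l)}."""
    return sum(1 for i in range(N)
               if all(f_eta(eta, i) == a for eta, a in pairs))

period = len(partials)
pairs = [(0, 1), (3, -2), (5, 2)]  # three distinct labels, three targets
# The prescribed pattern recurs (here: exactly once) in every period,
# which is the finite shadow of "the set of good i is infinite".
assert hit_count(pairs, 4 * period) >= 4
```

The count grows linearly with the number of periods inspected, mirroring the infinitude claimed in the lemma.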
A well-known algebraic fact will also be useful:
\begin{lemma} \label{3}Let $C$ be a closed unbounded subset of the regular uncountable cardinal $\kappa$. Suppose that $H$ is an abelian group of cardinality $\kappa$, and $\langle H_\alpha: \alpha<\kappa\rangle$ is a $\kappa$-filtration of $H$ (a continuous increasing chain of subgroups
$H_\alpha,\ |H_\alpha|<\kappa$, whose union is $H$). Let $S=\{\alpha\in C:H/H_\alpha$ is not $\kappa$-free$\}$. Then $H$ is free if and only if $S$ is non-stationary in $\kappa$. \end{lemma}
\begin{proof} Well-known: see Proposition IV.1.7 in \cite{10}. \end{proof}
We refer the reader to \cite{14} for the definitions of the characteristic $\chi(g)$ and the type $\tau(g)$ of an element $g$ in a group.
\vskip 10pt \noindent Now we prove Theorem \ref{1}.
\begin{proof} Let ${\bf P}$ be the set of prime numbers, and let $\{P_\eta:\eta\in^{\omega_ 1>}2\}$ be a family of almost disjoint (infinite) subsets of $\bf P$:
$\eta\ne\nu\in^{\omega_1>}2\ \Rightarrow\ |P_\eta\cap P_\nu|<\aleph_0$. By Lemma \ref{2}, there exists $\{f_\eta:\eta\in^{\omega_1>}2\}$ such that $f_\eta:\omega\rightarrow{\bf Z}$, and if $\eta_1,\dots,\eta_k$ are distinct and $a_1,\dots,a_k\in{\bf Z}$, then $\{i<\omega:(\forall\ l\leq k)(f_{\eta_l}(i)=a_l)\}$ is infinite.
Define functions $x_\eta$ and $x_{\eta,j}$ in ${\bf Z}^\omega$ as follows. Let $x_\eta=\langle\pi_{\eta,i}\cdot f_\eta(i):i<\omega\rangle$ where $\pi_{\eta,i}=\Pi\{p\in P_\eta:p<i\}$, and let $x_{\eta,j}=\langle\pi^j_{\eta,i}\cdot f_\eta(i):i<\omega\rangle$ where $\pi^j_{\eta,i} =\Pi\{p\in P_\eta:j\leq p<i\}$ (=0 if $i\leq j$). Note that $x_\eta=x_{\eta,0}$.
For $\eta\in^{\omega_1}2$, let $G_\eta$ be the subgroup of ${\bf Z}^\omega$
generated by ${\bf Z}_\omega\cup\{x_{\eta|\alpha,j}:\alpha<\omega_1,0\leq j<\omega\}$.
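To see the divisibility structure of these generators at work (it drives Claims 1 and 2 below), here is a small numerical sketch; the two prime supports, the function $f$, and the ranges are illustrative choices of ours, not data from the proof.

```python
from math import prod

def pi_j(P, j, i):
    """pi^j_{eta,i} = product of p in P_eta with j <= p < i (0 if i <= j)."""
    if i <= j:
        return 0
    return prod(p for p in P if j <= p < i)

def x(P, f, j, N):
    """The sequence x_{eta,j} = <pi^j_{eta,i} * f_eta(i) : i < N>."""
    return [pi_j(P, j, i) * f(i) for i in range(N)]

# Two "almost disjoint" prime supports (illustrative stand-ins for P_eta).
P_xi   = [2, 5, 11, 17]
P_zeta = [3, 7, 13, 19]

f = lambda i: 1 + (i % 3)  # any Z-valued stand-in for f_eta works here
x_xi = x(P_xi, f, j=0, N=25)

# For i beyond a prime p of the support, p divides x_{xi,0}(i), while the
# product part is never divisible by a prime outside the support.
for i in range(12, 25):
    assert x_xi[i] % 11 == 0
    assert pi_j(P_xi, 0, i) % 3 != 0
```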
We show that the family ${\bf G}=\{G_\eta:\eta\in^{\omega_1}2\}$ satisfies the conclusions of Theorem \ref{1}. \vskip 10pt \noindent{\bf Claim 1:} $G_\eta$ is pure in ${\bf Z}^\omega$. \vskip 10pt
\noindent{\bf Proof of Claim 1:} Suppose that $rx=g$ for some $x\in{\bf Z}^\omega,\ r\in{\bf N}$, and $g\in G_\eta$. Say $g=y+n_1x_{\eta|\alpha_1,j_1}+\dots+n_mx_{\eta|\alpha _m,j_m},\ n_l\ne0$, with $y\in{\bf Z}_\omega$. Without loss of generality (adding more elements from ${\bf Z}_\omega$ to the RHS if necessary), $(\forall\ l\leq m)(j_l=j)$ for some $j<\omega,\ j>r,\ y(i)=0\ (\forall\ i>j)$, and $x(i)=0\ (\forall i\leq j)$. Relabelling (if necessary), we may assume that $\alpha_1<\dots<\alpha _m<\omega_1$, and because $x_{\eta|\alpha_l,j}(i)=0$ if $i\leq j$, we may write $$
rx=ry^*+n_1x_{\eta|\alpha_1,j}+\dots+n_mx_{\eta|\alpha_m,j},\quad{\rm for\ some} \ y^*\in{\bf Z}_\omega. $$
Fix $k\in\{1,\dots,m\}$. Since $\eta|\alpha_1,\dots,\eta|\alpha_m$ are distinct $(\alpha_1<\dots<\alpha_m)$, letting $a_l=\delta_{kl}$ (Kronecker delta), we know that the set $N_k=\{i<\omega: f_{\eta|\alpha_l}(i)=0\ (\forall\ l\ne k)\text{ and } f_{\eta|\alpha_k}(i)=1\}$ is infinite. For large enough $i$ in this set \big(e.g. $i>{\rm max}_{1\leq l
\leq m}\ [{\rm min}(P_{\eta|\alpha_l}\backslash\{0,\dots,j\})]\big),\ x_{\eta|
\alpha_l,j}(i)$ is zero if and only if $l\ne k$. So for infinitely many $i<\omega$, for $l\ne k,\ x_{\eta|\alpha_l,j}(i)=0$, and $x_{\eta|\alpha_k,j}(i)\ne0$.
Unfix $k$. For each $k\leq m$, for infinitely many $i\in(j,\omega)\cap N_k,\ rx(i)=n_k x_{\eta|\alpha_k,j}(i)=n_k\Pi\{p\in P_{\eta|\alpha_k}:j\leq p<i\}$. Since $r<j$, we must have $rs_k=
n_k$ for some $s_k$ in $\bf Z$, and therefore $x=y^*+s_1x_{\eta|\alpha_1,j}+\dots
+s_mx_{\eta|\alpha_m,j}\in G_\eta\ (G_\eta$ is torsion-free). Hence $G_\eta$ is pure in ${\bf Z}^\omega$, which establishes Claim 1. \vskip 10pt \noindent{\bf Claim 2:} $G_\eta$ has cardinality $\aleph_1$, so (1.1) holds. \vskip10pt \noindent{\bf Proof of Claim 2:} If $\xi\ne\zeta\in^{\omega_1>}2$, then for some $j<\omega,\ P_ \xi\cap P_\zeta\subseteq j$. Pick $p,q>j$ with $p\in P_\xi$ and $q\in P_\zeta$; so the set $B=\{i<\omega:f_\xi(i)=p$ and $f_\zeta(i)=q\}$ is infinite, and if $i\in B$ is bigger than max$\{j,p,q\}$, then $x_{\xi,j}(i)\ne x_{\zeta,j}(i)$, since $x_{\xi,j}(i)$ is non-zero and divisible by $p^2$ but by no prime in $P_ \zeta$, and $x_{\zeta,j}(i)$ is non-zero and divisible by $q^2$ but by no prime in $P_\xi$. It follows that $G_\eta$ has cardinality $\aleph_1$. After this observation, a second's reflection on the element types of $G_{\eta}$ and $G_{\nu}$ (for $\eta\ne\nu$) should convince the reader that the groups are neither isomorphic nor free. \vskip 10pt \noindent{\bf Claim 3:} (1.2) holds: if $\eta_1\ne\eta_2\in^{\omega_1}2$, then $G_{\eta_1}$ and $G_{\eta_2}$ are almost disjoint. \vskip 10pt \noindent{\bf Proof of Claim 3:} Suppose (towards a contradiction) that for some $\eta_1\ne\eta _2\in^{\omega_1}2$, for some non-free abelian group $H$, there exist isomorphisms $\varphi_l:H\rightarrow$ range$(\varphi_l)\leq G_{\eta_l},\ l=1,2$. Since $G_{\eta_l}$ is $\aleph_1$-free, $H$ must have cardinality $\aleph_1$. Let $\langle H_i:i<\omega_1\rangle$ be an $\omega_1$-filtration of $H$. Without loss of generality, we may assume that each $H_i$ is pure in $H$, so that $H/H_i$ is torsion-free.
Let $G_{\eta,i}=\langle{\bf Z}_\omega\cup\{x_{\eta|\beta,j}:j<\omega,\ \beta<i\} \rangle$ for $i<\omega_1$ and $\eta\in\{\eta_1,\eta_2\}$.
Note that $\langle G_{\eta,i}:i<\omega_1\rangle$ is an $\omega_1$-filtration of $G_\eta$, since it is increasing and continuous with union $G_\eta$, and each $G_{\eta,i}$ is countable. For large enough $\chi$, the set $C$ defined by $\{\delta<\omega_1:{\rm dcl}_{\big(H(\chi),\in,<\big)} [\delta\cup\{G_{\eta_1},G_{\eta_2},\{x_\nu,f_\nu:\nu\in^{\omega_1>}2\},\eta_1, \eta_2,\varphi_1,\varphi_2,\{H_i:i<\omega_1\}\}]\cap\omega_1=\delta\}$ is a club of $\omega_1$ (well-known, or see \cite{16}, 401). Note that if $\delta\in C$, then $\varphi_l$ maps $H_\delta$ into $G_{\eta_l,\delta}$. Since $H$ is not free, it follows by Lemma \ref{3} that $S=\{\delta\in C:H/H_\delta$ is not $\aleph_1$-free$\}$ is stationary. By Pontryagin's Criterion, for each $\delta\in S,\ H/H_\delta$ has a non-free (torsion-free) subgroup $K_\delta/H_\delta$ of finite rank $n_\delta+1$ such that every subgroup of $K_\delta/H_\delta$ of rank less than $n_\delta+1$ is free. Let $H_\delta^{\ +}/H_\delta$ be a pure subgroup of $K_\delta/H_\delta$ of rank $n_\delta$. Then $H_\delta^{\ +}/H_\delta$ is free with basis $y_0+H_\delta,\dots,y_{n_\delta-1}+H_\delta$ say. So $K_\delta/H_\delta^{\ +}\simeq(K_\delta/H_\delta)/(H_\delta^{\ +}/H_\delta)$ is a torsion-free rank-1 group which is not free, and hence there is a non-zero element $y_{n_\delta}+H_\delta^{\ +}$ which is divisible in $K_\delta/H_\delta^{\ +}$ by infinitely many natural numbers. Call this set of natural numbers $A$.
For $l=1,2$, for large enough $j_l(*)<\omega$, and $\beta^l_{\ 0}<\dots<\beta^l _{\ k_l}<\omega_1,\ \varphi_l(y_m)$ is an element of the subgroup of $G_{\eta_l}$
generated by $G_{\eta_l,\delta}\cup\{x_{\eta_l|\beta^l_{\ 0},j_l(*)},\dots,x_{
\eta_l|\beta^l_{\ k_l},j_l(*)}\}$ for all $m\leq n_{\delta}$.
Taking large enough $\delta\in S$, we may assume that min$\{\alpha:\eta_1|\alpha
\ne\eta_2|\alpha\}<\beta^l_{\ 0},\ l=1,2$. Since $\delta\in C$, we can show the following claims:\hfil\break $(*)_1$: The set $A$ does not contain infinitely many powers of one prime.\hfil\break
$(*)_2$: The set $Q=({\bf P}\cap A)\subseteq P_{\eta_l|\beta^l_{\ 0}}\cup\dots\cup P _{\eta_l|\beta^l_{\ k_l}}$.
Now $(*)_1$ is true because non-zero sums of elements in\hfil\break
$G_{\eta_l,\delta}\cup\{x_{\eta_l|\beta^l_{\ 0},j_l(*)},\dots,x_{\eta_l|\beta^l_{\ k_l},j_l(*)}\}$ are divisible by at most finitely many powers of any given prime (by the definition of the elements
$x_{\eta_l|\beta,j})$. Note that $\chi(y_{n_\delta}+H_\delta^{\ +})=\cup_{\{y\in y_{n_\delta}+H_\delta^{\ +}\}}\chi(y)\leq \cup_{\{y\in y_{n_\delta}+H_\delta^{\ +}\}}\chi(\varphi_l(y))$, where the characteristics are taken relative to $K_\delta/H_\delta^{\ +}$, $K_\delta$ and\hfil\break
$G_{\eta_l,\delta}\cup\{x_{\eta_l|\beta^l_{\ 0},j_l(*)},\dots,x_{\eta_l|\beta^l_{\ k_l},j_l(*)}\}$ respectively. Hence $(*)_1$ holds. By $(*)_1$, since $A$ is infinite, the set $Q={\bf P}\cap A$ is infinite.
Also, the same characteristic inequality implies that $Q\subseteq P_{\eta_l|\beta^l_{\ 0}}\cup\dots\cup P_{\eta_l|\beta^l_{\ k _l}}$. So $(*)_2$ is true. Hence, $Q\subseteq\cap_{l=1,2}(\cup_{k\leq k_l}P_{\eta_l|\beta^l_{\ k}})$ which is finite (since the family $\{P_\eta:\eta\in^{\omega_1>}2\}$ is almost disjoint). This is a contradiction, and so Claim 3 follows, completing the proof of Theorem \ref{1}. \end{proof}
\begin{corollary} Every non-slender $\aleph_1$-free abelian group $G$ has a family $\{G_\alpha:\alpha<2^{\aleph_1}\}$ of non-free subgroups such that:\hfil\break 1. each $G_\alpha$ is almost free of cardinality $\aleph_1$;\hfil\break 2. if $\alpha<\beta$, then $G_\alpha$ and $G_\beta$ are almost disjoint. \end{corollary} \begin{proof} By Nunke's characterisation of slender groups (see Corollary IX.2.5 in \cite{10} for example), $G$ must contain a copy of the Baer-Specker group. \end{proof} \vskip10pt \noindent{\bf Remark}: For the same reason, the corollary is true for any non-slender cotorsion-free abelian group.
\end{document}
\begin{document}
\centerline{ \bf On $k$-free numbers over Beatty sequences }
\centerline{Wei Zhang}
\centerline{ School of Mathematics and Statistics, Henan University, Kaifeng 475004, Henan, China} \centerline{[email protected]}
\textbf{Abstract} In this paper, we consider $k$-free numbers over Beatty sequences. New results are given. In particular, for a fixed irrational number $\alpha>1$ of finite type $\tau<\infty$ and any constant $\varepsilon>0$, we show that \begin{align*} \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{k}}} 1- \frac{x}{ \zeta(k)} \ll x^{k/(2k-1)+\varepsilon} +x^{1-1/(\tau+1)+\varepsilon}, \end{align*} where $\mathcal{Q}_{k}$ is the set of positive $k$-free integers and the implied constant depends only on $\alpha,$ $\varepsilon,$ $k$ and $\beta.$ This improves previous results. The main new ingredient is the use of double exponential sums of the type \[ \sum_{1\leq h\leq H}\sum_{\substack{1\leq n\leq x\\ n\in\mathcal{Q}_{k}}}e(\vartheta hn). \]
\textbf{Keywords} $k$-free numbers, exponential sums, Beatty sequence
\textbf{2000 Mathematics Subject Classification} 11L07, 11B83
\numberwithin{equation}{section}
\section{\bf{Introduction}} In this paper, we are interested in $k$-free integers over Beatty sequences. The so-called Beatty sequence of integers is defined by $ \mathcal{B}_{\alpha,\beta}:=\{[\alpha n+\beta]\}_{n=1}^{\infty}, $ where $\alpha$ and $\beta$ are fixed real numbers and $[x]$ denotes the greatest integer not larger than $x$. The analytic properties of such sequences have been studied by many experts; for example, one can refer to \cite{ABS,BS1,BY} and the references therein. A number $q$ is called a $k$-free integer if and only if $
m^{k}|q\Longrightarrow m=1. $ For sufficiently large $x\geq1,$ it is well known that \[ \sum_{n\in\mathcal{Q}_{k}}n^{-s}= \frac{\zeta(s)}{\zeta(ks)},\ \ \ \Re s>1 \] and \begin{align}\label{kf} \sum_{n\leq x,\ n\in\mathcal{Q}_{k}}1= \frac{x}{\zeta(k)}+O(x^{1/k}), \end{align} where $\mathcal{Q}_{k}$ is the set of positive $k$-free integers. In this paper, we are interested in the sum \[ \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{k}}} 1.\] In fact, this problem has been considered by many experts. For example, in 2008, G\"{u}lo\u{g}lu and Nevans \cite{GC} proved that \[ \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{2}}} 1=\frac{x}{ \zeta(2)} +O\left(\frac{x\log \log x}{\log x}\right),\] where $\alpha>1$ is an
irrational number of finite type.
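As a quick numerical sanity check of the classical count (\ref{kf}) (purely illustrative, not part of any proof), the following Python sketch sieves the $k$-free integers up to $x$ and compares with the main term $x/\zeta(k)$; the cutoff $x=10^{5}$ and the use of the exact value $\zeta(2)=\pi^{2}/6$ are our own choices.

```python
from math import pi

def kfree_count(x, k):
    """Count the k-free n <= x by marking every multiple of m**k, m >= 2."""
    free = [True] * (x + 1)
    m = 2
    while m ** k <= x:
        for q in range(m ** k, x + 1, m ** k):
            free[q] = False
        m += 1
    return sum(free[1:])

x = 10 ** 5
# For k = 2 the main term is 6*x/pi^2, and the error term in (kf)
# is O(x^{1/k}) = O(sqrt(x)) here.
assert abs(kfree_count(x, 2) - 6 * x / pi ** 2) < x ** 0.5
```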
Let us now recall the notion of the type of $\alpha.$ For an irrational number $\alpha,$ we define its type $\tau$ by the relation \[ \tau:=\sup\left\{\theta\in\mathbb{R}:\ \liminf_{\substack{q\rightarrow \infty\\ q\in\mathbb{Z}^{+}}}q^{\theta}\parallel \alpha q\parallel=0\right\}. \] Let $\psi$ be a non-decreasing positive function defined on the positive integers. The irrational number $\alpha$ is said to be of type $<\psi$ if $q\parallel q\alpha\parallel\geq 1/\psi(q)$ holds for every positive integer $q.$ If $\psi$ may be taken to be a constant function, then $\alpha$ is said to be of constant type. The relation with the first definition is that $\alpha$ is of finite type $\tau$ if and only if for every $\varepsilon>0$ there is a constant $c(\tau,\varepsilon,\alpha)>0$ such that $q\parallel q\alpha\parallel\geq c(\tau,\varepsilon,\alpha)q^{1-\tau-\varepsilon}$ for every positive integer $q.$
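For a concrete numerical illustration of the finite-type condition (with $\alpha=\sqrt{2}$ and the range $q\leq 2\cdot10^{4}$ as our own choices): $\sqrt{2}$ is a quadratic irrational, hence badly approximable of type $\tau=1$, so $q\parallel q\alpha\parallel$ should stay bounded away from $0$.

```python
from math import sqrt

def dist_to_Z(t):
    """||t||: the distance from t to the nearest integer."""
    return abs(t - round(t))

alpha = sqrt(2)  # quadratic irrational, hence of type tau = 1
worst = min(q * dist_to_Z(q * alpha) for q in range(1, 20001))
# Consistent with q*||q*alpha|| >= c * q^{1 - tau - eps} with tau = 1:
# the quantity never approaches 0 (the minimum here is attained at q = 2).
assert 0.3 < worst < 0.36
```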
Recently, in \cite{Go,Di}, it is proved that \[ \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{2}}} 1=\frac{x}{ \zeta(2)} +O\left(Ax^{5/6}(\log x)^{5}\right),\] where $ A=\max\{\tau(m), 1\leq m \leq x^{2}\}, $ and $\alpha>1$ is fixed irrational algebraic number. More recently, Kim, Srichan and Mavecha \cite{KSM} improved the above result by showing that \[ \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{k}}} 1=\frac{x}{ \zeta(k)} +O\left(x^{(k+1)/2k}(\log x)^{3}\right).\]
For some much more general arithmetic functions, estimates for sums of this type are also obtained in \cite{ABS,TZ}. However, those estimates cannot be applied to an individual $\alpha.$ In this paper, we prove the following. \begin{Theorem}\label{th2} Let $\alpha>1$ be a fixed irrational number of finite type $\tau<\infty$. Then for any constant $\varepsilon>0$, we have \begin{align*} \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{k}}} 1- \alpha^{-1}\sum_{\substack{1\leq n\leq [\alpha x+\beta]\\ n\in\mathcal{Q}_{k}}}1 \ll x^{k/(2k-1)+\varepsilon}+x^{1-1/(\tau+1)+\varepsilon}, \end{align*} where the implied constant depends only on $\alpha,$ $\varepsilon,$ $k$ and $\beta.$ \end{Theorem}
Then, by (\ref{kf}) and the above theorem, we obtain the following. \begin{Corollary} Let $\alpha>1$ be a fixed irrational number of finite type $\tau<\infty$. Then for any constant $\varepsilon>0$, we have \begin{align*} \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{k}}} 1- \frac{x}{ \zeta(k)} \ll x^{k/(2k-1)+\varepsilon}+x^{1-1/(\tau+1)+\varepsilon}, \end{align*} where the implied constant depends only on $\alpha,$ $\varepsilon,$ $k$ and $\beta.$ \end{Corollary} In fact, our result relies heavily on the following estimate for a double sum. \begin{Theorem}\label{IK} Suppose that for some positive integers $a,q,h,$ $q\leq x,$ $h\leq H\ll x,$ $(a,q)=1$ and \begin{align}\label{DAT}
\left|\vartheta-\frac{a}{q}\right|\leq \frac{1}{q^{2}}, \end{align} then for sufficiently large $x$ and any $\varepsilon>0,$ we have \[ \sum_{1\leq h\leq H}\sum_{\substack{1\leq n\leq x\\ n\in\mathcal{Q}_{k}}}e(\vartheta hn)\ll \left(Hx^{k/(2k-1)}+ q+ Hx/q\right)x^{\varepsilon}, \] where the implied constant may depend on $k$ and $\varepsilon.$ \end{Theorem} \begin{Remark} One can also compare this result with the results of Br\"{u}dern-Perelli \cite{BP} and Tolev \cite{To}. By using the argument of \cite{BP,To}, one may get some better results for some special cases. \end{Remark} \section{Proof of Theorem \ref{th2}} We will start the proof by introducing some necessary lemmas.
\begin{Lemma}\label{z2} Let $\alpha>1$ be of finite type $\tau<\infty$ and let $K$ be sufficiently large. For an integer $w\geq1,$ there exist $a,q\in\mathbb{N}$ with $(a,q)=1$ and $q$ satisfying $K^{1/\tau-\varepsilon}w^{-1}<q\leq K$ such that \[
\left|\alpha w-\frac{a}{q}\right|\leq \frac{1}{qK}. \] \end{Lemma} \begin{proof} By Dirichlet approximation theorem, there is a rational number $a/q$ with $(a,q)=1$ and $q\leq K$ such that $
\left|\alpha w-a/q\right|<1/qK. $ Hence $ \parallel qw\alpha\parallel\leq 1/K. $ Since $\alpha$ is of type $\tau<\infty,$ for sufficiently large $K$ we have $ \parallel qw\alpha\parallel\geq (qw)^{-\tau-\varepsilon}. $ Combining the two bounds gives $ 1/K\geq (qw)^{-\tau-\varepsilon}, $ that is, $ qw\geq K^{1/(\tau+\varepsilon)}, $ and hence, after relabeling $\varepsilon,$ $ q\geq K^{1/\tau-\varepsilon}w^{-1}. $ \end{proof}
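The Dirichlet approximation step in the proof is easy to reproduce numerically. The sketch below (illustrative only, not part of the proof) scans denominators $q\leq K$ for a fraction $a/q$ with $|\alpha-a/q|\leq 1/(qK)$; the Dirichlet approximation theorem guarantees that the scan succeeds.

```python
from math import sqrt

def dirichlet_approx(alpha, K):
    """Return (a, q) with 1 <= q <= K and |alpha - a/q| <= 1/(q*K).

    Dirichlet's theorem guarantees such a pair exists; this
    illustrative O(K) scan simply tries each denominator.
    (The returned fraction need not be reduced.)"""
    for q in range(1, K + 1):
        a = round(alpha * q)
        if abs(alpha * q - a) <= 1.0 / K:   # i.e. |alpha - a/q| <= 1/(qK)
            return a, q
    raise AssertionError("contradicts Dirichlet's theorem")

a, q = dirichlet_approx(sqrt(2), 1000)
print(a, q)   # a convergent of sqrt(2), e.g. (577, 408)
```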
In order to prove the theorem, we need the notion of discrepancy. Suppose that we are given a sequence $u_{m},$ $m=1,2,\cdots, M$ of points of $\mathbb{R}/\mathbb{Z}.$ Then the discrepancy $D(M)$ of the sequence is \begin{align}\label{di} D(M)=\sup_{\mathcal{I}\subseteq[0,1)}
\left|\frac{\mathcal{V}(\mathcal{I},M)}{M}
-|\mathcal{I}|\right|, \end{align} where the supremum is taken over all subintervals $\mathcal{I}=(c, d)$ of the interval $[0, 1),$ $\mathcal{V} (\mathcal{I}, M)$ is the number of positive integers $m\leq M$ such that $u_{m}\in\mathcal{I}$, and
$|\mathcal{I}| = d-c$ is the length of $\mathcal{I}.$
Let $D_{\alpha,\beta}(M)$ denote the discrepancy of the sequence $\{\alpha m+\beta\},$ $m=1,2,\cdots$, $M$, where $\{x\}=x-[x].$ The following lemma is from \cite{BS1}. \begin{Lemma}\label{lere} Let $\alpha>1.$ An integer $m$ has the form $m=[\alpha n+\beta]$ for some integer $n$ if and only if \[ 0<\{\alpha^{-1}(m-\beta+1)\}\leq \alpha^{-1}. \] The value of $n$ is determined uniquely by $m.$ \end{Lemma}
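Lemma \ref{lere} can be checked numerically. The sketch below (with the illustrative parameters $\alpha=\sqrt{2}$, $\beta=0.3$) compares the fractional-part criterion against a brute-force enumeration of the values $[\alpha n+\beta]$.

```python
from math import floor, sqrt

alpha, beta = sqrt(2), 0.3
inv = 1.0 / alpha

def in_sequence(m):
    # criterion of the lemma: m = [alpha*n + beta] for some n
    # iff 0 < {alpha^{-1}(m - beta + 1)} <= alpha^{-1}
    frac = (inv * (m - beta + 1)) % 1.0
    return 0.0 < frac <= inv

# brute force: all values [alpha*n + beta] for n = 1..2000
brute = {floor(alpha * n + beta) for n in range(1, 2001)}
assert all(in_sequence(m) == (m in brute) for m in range(1, 1000))
print("criterion of the lemma verified for all m < 1000")
```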
\begin{Lemma}[See Theorem 3.2 of Chapter 2 in \cite{KN}]\label{let} Let $\alpha$ be a fixed irrational number of type $\tau<\infty.$ Then, for all $\beta\in\mathbb{R},$ we have \[ D_{\alpha,\beta}(M)\leq M^{-1/\tau+o(1)} ,\ \ (M\rightarrow\infty), \] where the function implied by $o(1)$ depends only on $\alpha.$ \end{Lemma}
\begin{Lemma}[see page 32 of \cite{Vi}]\label{levi} For any $\Delta\in\mathbb{R}$ such that $0<\Delta<1/8$ and $\Delta\leq \frac{1}{2}\min\{\gamma,1-\gamma\},$ there exists a periodic function $\Psi_{\Delta}(x)$ of period 1 satisfying the following properties: \begin{itemize} \item $0\leq\Psi_{\Delta}(x)\leq1$ for all $x\in\mathbb{R};$
\item $\Psi_{\Delta}(x)=\Psi(x)$ if $\Delta\leq x\leq\gamma-\Delta$ or $\gamma+\Delta\leq x\leq 1-\Delta;$
\item $\Psi_{\Delta}(x)$ can be represented as a Fourier series \[ \Psi_{\Delta}(x)=\gamma+\sum_{j=1}^{\infty}\left( g_{j}e(jx)+h_{j}e(-jx)\right), \] \end{itemize} where
\begin{align*}
\Psi(x)=
\begin{cases}
1\ \ &\textup{if}\ 0<x\leq \gamma,\\
0\ \ &\textup{if}\ \gamma<x\leq 1, \end{cases}
\end{align*}
and the coefficients $g_{j}$ and $h_{j}$ satisfy the upper bound \[
\max\{|g_{j}|, |h_{j}|\}\ll \min\{j^{-1},j^{-2}\Delta^{-1}\},\ \ (j\geq1). \] \end{Lemma}
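For the unsmoothed indicator $\Psi$ itself one already has $g_{j}=(1-e(-j\gamma))/(2\pi i j)$, so $|g_{j}|=|\sin(\pi j\gamma)|/(\pi j)\leq 1/(\pi j)$; the smoothing parameter $\Delta$ is what provides the additional $j^{-2}\Delta^{-1}$ decay. A quick numerical check of the first bound (illustrative choice $\gamma=1/\sqrt{2}$):

```python
import cmath, math

gamma = 1 / math.sqrt(2)

def g(j):
    # j-th Fourier coefficient of the 0/1 indicator of (0, gamma]:
    # g_j = (1 - e(-j*gamma)) / (2*pi*i*j), hence |g_j| = |sin(pi*j*gamma)| / (pi*j)
    return (1 - cmath.exp(-2 * math.pi * 1j * j * gamma)) / (2 * math.pi * 1j * j)

assert all(abs(g(j)) <= 1 / (math.pi * j) + 1e-12 for j in range(1, 201))
print("|g_j| <= 1/(pi*j) verified for j <= 200")
```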
Suppose that $\alpha>1.$ Then $\alpha$ and $\gamma=\alpha^{-1}$ are of the same type, that is, $\tau(\alpha)=\tau(\gamma)$ (see page 133 in \cite{BY}). Let $\delta=\alpha^{-1}(1-\beta)$ and $M=[\alpha x+\beta].$ Then by Lemma \ref{lere}, we have \begin{align}\label{8} \begin{split} \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in \mathcal{Q}_{k}}}1&=\sum_{\substack{1\leq m\leq M \\ 0<\{\gamma m+\delta\}\leq \gamma\\ m\in \mathcal{Q}_{k}}}1+O(1)\\ &=\sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}\Psi(\gamma m+\delta)+O(1), \end{split} \end{align} where $\Psi(x)$ is the periodic function with period one for which
\begin{align*}
\Psi(x)=
\begin{cases}
1\ \ &\textup{if}\ 0<x\leq \gamma,\\
0\ \ &\textup{if}\ \gamma<x\leq 1. \end{cases}
\end{align*} By a classical result of Vinogradov (see Lemma \ref{levi}), for any $\Delta$ such that $ 0<\Delta<1/8\ \textup{and} \ \Delta\leq \min\{\gamma,1-\gamma\}/2, $ there is a real-valued function $\Psi_{\Delta}(x)$ satisfying the conditions of Lemma \ref{levi}. Hence, by (\ref{8}), we obtain \begin{align}\label{7} \begin{split} \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in \mathcal{Q}_{k}}}1 &=\sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}\Psi(\gamma m+\delta)+O(1)\\ &=\sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}\Psi_{\Delta}(\gamma m+\delta)+O(1+V(I,M)M^{\varepsilon}), \end{split} \end{align} where $V(I,M)$ denotes the number of positive integers $m\leq M$ such that \[ \{\gamma m+\delta\}\in I=[0,\Delta)\cup (\gamma-\Delta,\gamma+\Delta) \cup(1-\Delta,1). \]
Since $|I|\ll \Delta,$ it follows from the definition (\ref{di}) and Lemma \ref{let} that \begin{align}\label{5} V(I,M)\ll \Delta x+x^{1-1/\tau+\varepsilon}, \end{align} where the implied constant depends only on $\alpha.$
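The quality of such discrepancy bounds is easy to observe numerically. The sketch below computes the star discrepancy $D^{*}(M)$ (which bounds $D(M)$ up to a factor of 2) of the sequence $\{\gamma m+\delta\}$ for the illustrative choice $\gamma=1/\sqrt{2}$, $\delta=1/4$; since $1/\sqrt{2}$ has bounded partial quotients, $D^{*}(M)$ decays essentially like $M^{-1}\log M$.

```python
import math

def star_discrepancy(points):
    """Star discrepancy D*(M) of a finite point set in [0,1);
    D(M) <= 2 D*(M), so it tracks the discrepancy (\ref{di})."""
    xs = sorted(points)
    M = len(xs)
    return max(max((i + 1) / M - x, x - i / M) for i, x in enumerate(xs))

gamma, delta = 1 / math.sqrt(2), 0.25
for M in (100, 1000, 10000):
    pts = [(gamma * m + delta) % 1.0 for m in range(1, M + 1)]
    print(M, star_discrepancy(pts))   # decays roughly like (log M)/M
```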
By the Fourier expansion of $\Psi_{\Delta}(\gamma m+\delta)$ (Lemma \ref{levi}) and changing the order of summation, we have \begin{align}\label{6} \begin{split} \sum_{\substack{1\leq m\leq M\\m\in \mathcal{Q}_{k}}}& \Psi_{\Delta}(\gamma m+\delta)\\ &=\gamma\sum_{\substack{m\leq M\\ m\in \mathcal{Q}_{k}}}1+\sum_{j=1}^{\infty}g_{j}e(\delta j)\sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}e(\gamma jm)\\ &+\sum_{j=1}^{\infty}h_{j}e(-\delta j)\sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}e(\gamma jm). \end{split} \end{align} By Theorem \ref{IK}, Lemma \ref{z2} and Lemma \ref{levi}, we have \begin{align}\label{1} \begin{split} \sum_{1\leq j\leq x^{(4k-4)/(2k-1)+\varepsilon}}g_{j}e(\delta j)\sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}e(\gamma jm) \ll x^{k/(2k-1)+\varepsilon} +x^{1-1/(\tau+1)+\varepsilon}, \end{split} \end{align} where we have also used the fact that $\alpha$ and $\alpha^{-1}$ are of the same finite type.
Similarly, we have \begin{align}\label{2} \begin{split} \sum_{1\leq j\leq x^{(4k-4)/(2k-1)+\varepsilon}}h_{j}e(-\delta j)\sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}e(\gamma jm) \ll x^{k/(2k-1)+\varepsilon} +x^{1-1/(\tau+1)+\varepsilon}. \end{split} \end{align} On the other hand, the trivial bound \[ \sum_{\substack{1\leq m\leq M\\ m\in \mathcal{Q}_{k}}}e(\gamma jm)\ll x \] together with the coefficient bound of Lemma \ref{levi} implies that \begin{align}\label{3} \begin{split} \sum_{j\geq x^{(4k-4)/(2k-1)+\varepsilon}}g_{j}e(\delta j)\sum_{\substack{1\leq m\leq M\\ m\in\mathcal{Q}_{k}}}e(\gamma jm)&\ll x^{1+\varepsilon}\sum_{j\geq x^{(4k-4)/(2k-1)+\varepsilon}}j^{-2}\Delta^{-1}\\&\ll x^{k/(2k-1)+\varepsilon} \end{split} \end{align} and \begin{align}\label{4} \begin{split} \sum_{j\geq x^{(4k-4)/(2k-1)+\varepsilon}}h_{j}e(-\delta j)\sum_{\substack{1\leq m\leq M\\ m\in\mathcal{Q}_{k}}}e(\gamma jm)&\ll x^{1+\varepsilon}\sum_{j\geq x^{(4k-4)/(2k-1)+\varepsilon}}j^{-2}\Delta^{-1}\\&\ll x^{k/(2k-1)+\varepsilon}, \end{split} \end{align} where $\Delta=x^{-(k-1)/(2k-1)+\varepsilon}.$ Inserting the bounds (\ref{1})-(\ref{4}) into (\ref{6}), we have \begin{align*} \sum_{\substack{1\leq n\leq x\\ [\alpha n+\beta]\in\mathcal{Q}_{k}}} 1-\alpha^{-1} \sum_{\substack{1\leq n\leq [\alpha x+\beta]\\ n\in\mathcal{Q}_{k}}}1 \ll x^{k/(2k-1)+\varepsilon}+x^{1-1/(\tau+1)+\varepsilon}, \end{align*} where the implied constant depends on $\alpha,$ $k,$ $\beta$ and $\varepsilon.$ Substituting these bounds and (\ref{5}) into (\ref{7}), with the above choice $\Delta=x^{-(k-1)/(2k-1)+\varepsilon},$ we complete the proof of Theorem \ref{th2}.
\section{Proof of Theorem \ref{IK}}
By the Dirichlet hyperbola method, we have \begin{align*} \sum_{1\leq h\leq H}\sum_{\substack{1\leq n\leq x\\ n\in \mathcal{Q}_{k}}}e(\vartheta hn)&= \sum_{1\leq h\leq H}\sum_{1\leq m^{k}\leq y}\mu(m)\sum_{1\leq l\leq x/m^{k}} e(\vartheta m^{k}lh) \\&+ \sum_{1\leq h\leq H}\sum_{1\leq l\leq x/y} \sum_{1\leq m^{k}\leq x/l}\mu(m)e(\vartheta m^{k}lh)\\ &-\sum_{1\leq h\leq H}\sum_{1\leq m^{k}\leq y}\mu(m)\sum_{1\leq l\leq x/y} e(\vartheta m^{k}lh), \end{align*} where $y$ is a parameter to be chosen later. By the well-known estimate \[
\left|\sum_{1\leq n\leq x}e(n\vartheta)\right|\leq \min\left(x,\frac{1}{2||\vartheta||}\right), \]
we have \begin{align*} \sum_{1\leq h\leq H}\sum_{1\leq m^{k}\leq y}\mu(m)\sum_{1\leq l\leq x/m^{k}} e(\vartheta m^{k}lh) &\ll \sum_{1\leq h\leq H}\sum_{1\leq m^{k}\leq y}
\min\left(x/m^{k},\frac{1}{2||\vartheta hm^{k}||}\right)\\ &\ll \sum_{1\leq h\leq H}\sum_{1\leq m \leq y}
\min\left(x/m ,\frac{1}{2||\vartheta hm ||}\right)\\ &\ll
(Hy)^\varepsilon\sum_{1\leq n \leq Hy}
\min\left(Hx/n ,\frac{1}{2||\vartheta n ||}\right)\\ &\ll
(Hy)^\varepsilon \left(Hy +Hx/q+q\right), \end{align*} where we have used the following lemma. \begin{Lemma}[See Section 13.5 in \cite{IK}]\label{MS} Suppose that \[
\left|\theta-a/q\right|\leq q^{-2}, \] $a,q\in\mathbb{N}$ and $(a,q)=1,$ then we have \[ \sum_{1\leq n \leq M}\min\left\{\frac{x}{n},\frac{1}{2\parallel n\theta\parallel}\right\}\ll \left(M+q+xq^{-1}\right)\log 2qx, \] where $\parallel u\theta\parallel$ denotes the distance of $u$ from the nearest integer. \end{Lemma}
For the second sum, we use the exponential-sum estimate for the M\"{o}bius function, which gives \[ \sum_{1\leq h\leq H}\sum_{1\leq l\leq x/y} \sum_{1\leq m^{k}\leq x/l}\mu(m)e(\vartheta m^{k}lh)\ll Hx/y^{1-1/k}(\log x)^{-A}, \] where $A$ is any positive constant. For the third sum we need the following lemma. \begin{Lemma}[See Section 13.5 in \cite{IK}]\label{MS0} Suppose that \[
\left|\theta-a/q\right|\leq q^{-2}, \] $a,q\in\mathbb{N}$ and $(a,q)=1,$ then we have \[ \sum_{1\leq n \leq M}\min\left\{x,\frac{1}{2\parallel n\theta\parallel}\right\}\ll \left(M+x+Mx/q+q\right)\log 2qx, \] where $\parallel u\theta\parallel$ denotes the distance of $u$ from the nearest integer. \end{Lemma}
By Lemma \ref{MS0}, we have \begin{align*} \sum_{1\leq h\leq H}\sum_{1\leq m^{k}\leq y}\mu(m)\sum_{1\leq l\leq x/y} e(\vartheta m^{k}lh) &\ll \sum_{1\leq h\leq H}\sum_{1\leq m^{k}\leq y}
\min\left(x/y,\frac{1}{2||\vartheta hm^{k}||}\right)\\ &\ll \sum_{1\leq h\leq H}\sum_{1\leq m \leq y}
\min\left(x/y ,\frac{1}{2||\vartheta hm ||}\right)\\ &\ll
(Hy)^\varepsilon\sum_{1\leq n \leq Hy}
\min\left(x/y ,\frac{1}{2||\vartheta n ||}\right)\\ &\ll
(Hy)^\varepsilon \left(Hy +x/y+Hx/q+q\right). \end{align*}
Choosing $y=x^{k/(2k-1)}$ completes the proof of Theorem \ref{IK}.
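The choice $y=x^{k/(2k-1)}$ balances the exponents of the terms $Hy$ and $Hx/y^{1-1/k}$: equating them amounts to $y^{2-1/k}=x$. A quick exact-arithmetic check:

```python
from fractions import Fraction

for k in range(2, 10):
    e = Fraction(k, 2 * k - 1)                 # y = x^e
    # exponent of H*y is e; exponent of H*x / y^{1-1/k} is 1 - (1 - 1/k)*e
    other = 1 - (1 - Fraction(1, k)) * e
    assert e == other
print("y = x^{k/(2k-1)} balances both terms for k = 2..9")
```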
$\mathbf{Acknowledgements}$ I am deeply grateful to the referee(s) for carefully reading the manuscript and making useful suggestions.
\address{Wei Zhang\\ School of Mathematics\\
Henan University\\
Kaifeng 475004, Henan, China} \email{[email protected]}
\end{document}
\begin{definition}[Definition:Bipartite Graph]
A '''bipartite graph''' is a graph $G = \struct {V, E}$ where:
:$V$ is partitioned into two sets $A$ and $B$ such that:
:each edge is incident to a vertex in $A$ and a vertex in $B$.
That is:
:no two vertices in $A$ are adjacent
and
:no two vertices in $B$ are adjacent.
It is a common practice when illustrating such a graph to draw the vertices of $A$ in one colour, and those of $B$ in another.
$G$ can be denoted $G = \struct {A \mid B, E}$ to emphasise that $A$ and $B$ partition $V$.
\end{definition}
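A standard way to test the defining property (no two adjacent vertices within $A$ or within $B$) is a breadth-first $2$-coloring. A minimal sketch (the graph is given as an adjacency mapping; the function name is for illustration only):

```python
from collections import deque

def bipartition(adj):
    """Return (A, B) if the graph is bipartite, else None.
    adj: dict mapping each vertex to an iterable of neighbours."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in colour:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return None          # odd cycle: not bipartite
    A = {v for v, c in colour.items() if c == 0}
    B = {v for v, c in colour.items() if c == 1}
    return A, B

# a 4-cycle is bipartite; a triangle is not
c4 = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
c3 = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(bipartition(c4))   # e.g. ({1, 3}, {2, 4})
print(bipartition(c3))   # None
```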
Joint rate control and power allocation for low-latency reliable D2D-based relay network
Yahui Wang1,
Yanhua He1,
Chen Xu1,
Zhenyu Zhou ORCID: orcid.org/0000-0002-3344-44631,
Shahid Mumtaz2,
Jonathan Rodriguez2,3 &
Haris Pervaiz4
EURASIP Journal on Wireless Communications and Networking volume 2019, Article number: 111 (2019)
The Correction to this article has been published in EURASIP Journal on Wireless Communications and Networking 2019 2019:235
Emerging 5G applications impose stringent requirements on network latency and reliability. In this work, we propose a low-latency reliable device-to-device (D2D) relay network framework to improve the cell coverage and user satisfaction. Particularly, we develop a cross-layer low-complexity resource allocation algorithm, which jointly optimizes the rate control and power allocation from a long-term perspective. The long-term optimization problem is transformed into a series of short-term subproblems by using Lyapunov optimization, and the objective function is separated into two independent subproblems related to rate control in the network layer and power allocation in the physical layer. Next, the Karush-Kuhn-Tucker (KKT) conditions and the alternating direction method of multipliers (ADMM) algorithm are employed to solve the rate control subproblem and power allocation subproblem, respectively. Finally, simulation results demonstrate that the proposed algorithm can reach 99.9% of the optimal satisfaction of D2D pairs with lower average network delay compared to the baseline algorithm. Furthermore, the convergence time of the ADMM-based power allocation algorithm is only about 1.7% of that by using the CVX toolbox.
With the explosive growth of mobile applications, it is predicted that approximately 50 billion devices will interconnect to the network by 2020 [1, 2]. Cell-edge devices are likely to experience poor quality of service (QoS) and quality of experience (QoE) due to the long distance and time-varying channel states between devices and the base station (BS). Device-to-device (D2D)-based relay communication, as a key technology of 5G, can improve data transmission and network coverage by assisting users with inferior channel conditions via multi-hop transmissions. Specifically, the transmitters (TXs) of D2D pairs can act as relay nodes to reduce the transmission distance and improve the channel conditions. The required data of D2D receivers (RXs) is transmitted from the BS to the nearby TXs, where it is stored in the queue buffers of the TXs before being transmitted to the RXs. Compared to conventional multi-hop relay networks, the D2D-based relay network can be deployed underlaying conventional cellular networks, which enables centralized resource management and coordination.
However, despite the advantages described above, the widespread deployment of D2D-based relay network still faces some challenges.
Firstly, a cross-layer resource allocation scheme that guarantees reliable network operation while satisfying the low-latency requirements of applications is still lacking. Since both the arrival rate of required data at TXs and the channel conditions among D2D pairs vary over time, a joint optimization of the rate control in the network layer and the transmission rate in the physical layer is required. Traditional schemes, which only consider the optimization of the physical-layer transmission rate while ignoring the arrival rate in the network layer, result in an imbalance between the arrival rate and the transmission rate. This imbalance causes queue backlog and packet drops at TXs due to the limited queue buffering capability of TXs, which leads to intolerable latency and network unreliability.
Second, an effective online resource allocation scheme that optimizes network performance from a long-term perspective is also lacking. Conventional short-term optimization cannot satisfy long-term objectives and constraints, which leads to severe performance degradation since resources are allocated based only on instantaneous states and constraints. In addition, it is difficult to obtain accurate future information in practical applications due to the causality constraint.
Last but not least, the computational complexity of traditional resource allocation algorithms increases dramatically as the number of D2D pairs grows. The reason is that numerous optimization variables in the network are coupled with each other, e.g., through the sum-rate and sum-power constraints, which leads to prohibitive computational complexity. Moreover, the optimization problem with coupled constraints is solved on a per-time-slot basis, which further increases the computational burden.
In this paper, to solve the abovementioned challenges, we propose a cross-layer online joint resource allocation algorithm to optimize the long-term satisfaction of D2D pairs while maintaining network reliability and reducing transmission delay. The main contributions of this work are summarized as follows:
We transform the long-term joint rate control and power allocation problem into a series of short-term optimization problems by using Lyapunov optimization [3]. At each time slot, the cross-layer joint optimization problem can be decomposed into two separate subproblems and solved independently. The proposed scheme guarantees a \(\left [\mathcal {O}(\frac {1}{V}),\mathcal {O}(V)\right ]\) tradeoff between queue stability and D2D pair satisfaction.
The power allocation problem has high computational complexity due to the coupling of optimization variables among different D2D pairs. To provide a tractable solution, we develop an alternating direction method of multipliers (ADMM)-based low-complexity power allocation scheme, which decomposes the large problem into a series of smaller subproblems and coordinates the solutions of these subproblems to find the solution of the original problem.
Simulation results demonstrate that the proposed joint optimization scheme can converge quickly and approximate the optimal solution. Moreover, we analyze the tradeoff between satisfaction of D2D pairs and network delay, which proves that significant performance can be improved by the proposed algorithm.
The remaining parts of this paper are organized as follows. Section 2 describes the related works. Section 3.1 introduces the power allocation model, the queue backlog model and satisfaction model. The problem formulation is presented in Section 3.2. The online joint rate control and power allocation algorithm based on Lyapunov optimization and ADMM and the performance analysis of the proposed algorithm are described in Sections 3.3 and 3.4, respectively. Simulation results are presented in Section 4. The conclusion and future works are summarized in Section 5.
Due to the advantages of D2D-based relay networks such as enlarging cell coverage, reducing network delay, and enhancing network reliability, they have attracted widespread attention in both academia and industry. A multi-dimensional optimization algorithm was proposed to solve a content distribution problem in multi-hop D2D relay networks [4], which can effectively reduce the average delay in the network. In [5], Zhou et al. studied D2D communication underlaying cellular networks and proposed a joint channel selection and power allocation optimization algorithm to improve the energy efficiency subject to various QoS constraints. In [6], Dang et al. proposed a full-duplex based D2D multi-hop communication framework, where the data forwarded between D2D transmitters and receivers is assisted by multiple relays. However, these works mainly focus on the optimization of short-term network performance, e.g., instantaneous network capacity, energy efficiency, and transmission latency, while ignoring the optimization of time-average performance.
Lyapunov optimization is a powerful methodology for studying long-term optimization problems, which is able to transform a long-term objective function into a series of short-term subproblems and to transform long-term constraints into queue stability constraints. It has been applied in various application scenarios such as D2D networks [7], edge computing [8], and OFDMA-based cellular networks [9]. In [3], Sheng et al. proposed a resource allocation algorithm to maximize the energy efficiency of D2D communication underlaying cellular networks subject to time-average and network stability constraints by combining fractional programming and Lyapunov optimization. In [10], Guo et al. proposed a cross-layer joint rate control and resource allocation scheme, which can maximize the time-average user satisfaction based on Lyapunov optimization. In [11], Peng et al. considered the energy efficiency optimization problem in multimedia HCRANs subject to individual fronthaul capacity as well as multiple interference constraints, and proposed a queue-aware online resource allocation algorithm based on Lyapunov optimization. However, when optimizing the performance of the overall network, the computational complexity increases dramatically with the number of devices due to the coupling of optimization variables and constraints across devices.
ADMM algorithm can solve some specific convex optimization problems with a much lower complexity, because both the primal and dual variables are updated in an alternative direction to increase the convergence speed [12]. It employs a decomposition-coordination procedure, in which the global optimization problem is firstly decomposed into numerous small subproblems, and then the solutions to these subproblems are calculated, updated, and coordinated to find a solution to the global problem. ADMM has been widely adopted in addressing large-scale optimization problems in various application scenarios. In [13], Li et al. decoupled the power constraint and objective function by employing ADMM and proposed a robust design of transceiver multi-cell distributed antenna network with numerous remote radio heads. In [14], Ling et al. proposed a weighted ADMM algorithm to solve the consensus optimization problem in decentralized networks, which is able to minimize the communication cost of optimization. In [15], Chen et al. combined the ADMM algorithm with the convex-concave procedure to reduce the complexity and improve system performance for large-scale multi-group multicast beamforming problems.
Different from the abovementioned works, we propose a long-term cross-layer joint optimization of rate control and power allocation scheme for D2D relay networks by combining Lyapunov optimization and ADMM. Various constraints of network reliability, transmission delay, and power consumption have been taken into consideration. The difference between [10] and our work is that instead of solving the large-scale network optimization problem directly, we develop a low-complexity power allocation algorithm based on ADMM.
The proposed algorithm is not limited to D2D relay networks. It can be extended to solve similar joint rate control and power allocation problems in other application scenarios such as task offloading [8] and energy harvesting [16].
Theoretical method
System model
In the traditional cellular network, the QoS of some cell-edge devices cannot be well satisfied due to the long distance and time-varying channel gain between the devices and BS. Thus, D2D relay networks can be utilized to enhance cell coverage via a two-hop communication manner. As shown in Fig. 1, we consider a D2D-based relay network in a single cell, which consists of one BS and M D2D pairs. In this work, we assume that the relay selection has been finished, which has been studied in many papers [17, 18] so that it is left out of consideration in this work. That is, there exists a one-to-one mapping between D2D TXs and D2D RXs. The BS operates in a time-slotted manner and collects the queue state information (QSI) and channel state information (CSI) at each time slot [19]. The set of time slots is defined as \(\mathcal {T}=\{0,\cdots, t, \cdots, T-1\}\). Taking the D2D pair m as an example, the data requested by the RX is firstly transmitted from the BS to the TX. The TX maintains a queue temporarily to store the arrival data, which is then delivered to the RX. At each time slot, assume that Am(t) Mbits of data arrive at the TX of D2D pair m from the BS, which is assumed to be independently and identically distributed (i.i.d.) over time slots with the maximum arrival rate Am,max. In addition, assume that Dm(t) Mbits of data depart from the TX of D2D pair m in each time slot, which is related to the channel state and transmission power between the D2D pairs.
The D2D-based relay network
Power allocation model
D2D-based relay communication can be divided into in-band communication (also known as LTE direct) and out-band communication. In-band D2D communication can be further classified into the categories of underlay and overlay D2D communication [20]. In the circumstances of in-band overlay D2D-based relay network, the D2D communication occupies the licensed spectrum owned by the cellular operators. The cellular operators are able to employ complex interference mitigation techniques to provide higher satisfaction for D2D pairs compared to the use of unlicensed spectrum [21]. Hence, we assume that the transmission data at TXs is transmitted to the RXs through a series of orthogonal channels in the LTE direct system [22], which means there is no interference among D2D pairs. The transmission rate of D2D pair m is expressed as:
$$\begin{array}{*{20}l} D_{m}(t)= B_{m}(t)\log_{2} \left(1+\frac{p_{m}(t) {h^{2}_{m}}(t)}{{\sigma^{2}_{0}}}\right), \end{array} $$
where Bm(t) represents the channel bandwidth allocated to the D2D pair m, pm(t) is the transmission power of D2D pair m, hm(t) is the channel gain of D2D pair m, and \({\sigma _{0}^{2}}\) is the power of the additive white Gaussian noise. Without loss of generality, assume that hm(t) is i.i.d. over time slots and takes values in a finite state space. Moreover, hm(t) remains constant within one time slot but varies across different time slots.
In order to reduce the power consumption of the network, the long-term time-average power consumption for arbitrary D2D pair m is defined as:
$$\begin{array}{*{20}l} 0 \le \underset{T \to \infty}{\lim} \frac{1}{T}\sum_{t =0}^{T - 1} p_{m}(t) \le P_{m,ave}, \end{array} $$
and instantaneous transmission power for each time slot t is defined as:
$$\begin{array}{*{20}l} 0 \le \sum_{m=1}^{M}p_{m}(t)\le P_{max}. \end{array} $$
where Pm,ave and Pmax are the time-average and instantaneous power consumption constraints, respectively.
Queue backlog model and satisfaction model
Due to the fact that the requested data cannot be transmitted instantaneously to the RX, it has to be stored at the queue of the TX temporarily. The queue backlogs at the TXs are denoted as Q(t)\(\buildrel \Delta \over = \{Q_{1}(t), \cdots, Q_{m}(t), \cdots, Q_{M}(t) \}\) at each time slot t, which are determined by the arrival rate and the transmission rate. Hence, the queue dynamics consist of an arrival process and a departure process. The data arrival process of the queue is determined by the rate control policy, which affects the amount of data that enters the queue and the satisfaction of D2D pairs. The data departure process of the queue is determined by the power allocation policy, which affects the amount of data that leaves the queue as well as the network latency and stability. Thus the queue Qm(t) at the TX of D2D pair m evolves in accordance with the following expression:
$$\begin{array}{*{20}l} Q_{m}(t+1) &= \max \{Q_{m}(t)-D_{m}(t), 0\} +A_{m}(t). \end{array} $$
There exists no data overflow if the long-term average transmission data of the queue is larger than or equal to the long-term average arrival data of the queue. Thus, the queue Qm(t) is mean rate stable [23] if
$$\begin{array}{*{20}l} \lim \limits_{T \to \infty} \frac{\mathbb{E}\left\{{|Q_{m}(T)|} \right\}}{T}=0. \end{array} $$
Equation (5) implies that in a stable network the data is transmitted within finite delay; network stability is guaranteed as long as the queue lengths remain finite.
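The stability condition (5) can be illustrated with a short simulation of the queue recursion (4): when the average service rate exceeds the average arrival rate, Q(T)/T tends to 0, whereas otherwise the backlog grows linearly. All parameter values below are purely illustrative.

```python
import random

def simulate_queue(arr_mean, srv_mean, T=100000, seed=1):
    """Iterate Q(t+1) = max(Q(t) - D(t), 0) + A(t) and return Q(T)/T."""
    rng = random.Random(seed)
    Q = 0.0
    for _ in range(T):
        A = rng.uniform(0, 2 * arr_mean)   # arrivals, mean arr_mean
        D = rng.uniform(0, 2 * srv_mean)   # service,  mean srv_mean
        Q = max(Q - D, 0.0) + A
    return Q / T

print(simulate_queue(1.0, 1.2))   # mean-rate stable: ratio near 0
print(simulate_queue(1.2, 1.0))   # unstable: ratio near the drift 0.2
```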
In addition, we define the satisfaction of D2D pair m as a nondecreasing concave function [10, 24]:
$$\begin{array}{*{20}l} S_{m} (A_{m}(t))=\gamma_{m} \log_{2} (A_{m}(t)), \end{array} $$
where γm is a predefined parameter related to the service of RX in D2D pair m. The logarithmic function indicates that the marginal increment of the satisfaction declines gradually with Am(t).
Problem formulation
The objective of this paper is to maximize the long-term time-average satisfaction of D2D pairs. The optimization problem can be formulated as follows:
$$\begin{array}{*{20}l} &\mathbf{P1}: \max_{\{A_{m}(t)\},\{p_{m}(t)\}} {\underset{T \to \infty}{\lim} \frac{1}{T}\sum_{t =0}^{T - 1} \mathbb{E} {\left \{ \sum_{m=1}^{M} S_{m} (A_{m}(t))\right \}}},\\ &\text{s.t.}\ C_{1}:A_{m}(t) \le A_{m,max}, \forall m,t, \\ &C_{2}: p_{m}(t)\ge 0, \forall m,t,\\ &C_{3}:\sum_{m=1}^{M}p_{m}(t)\le P_{max}, \forall t, \\ &C_{4}: \underset{T \to \infty}{\lim} \frac{1}{T}\sum_{t =0}^{T-1} p_{m}(t) \le P_{m,ave}, \forall m,\\ &C_{5}: Q_{m}(t) \text{ is mean rate stable}, \forall m, t. \end{array} $$
where C1 represents that the arrival rate cannot exceed the maximum tolerance rate of TX. C2∼C4 are the non-negative transmission power, instantaneous transmission power, and time-average transmission power constraints, respectively. C5 denotes the queue stability constraint in Eq. (5).
Next, we propose a cross-layer online optimization algorithm to solve P1 based on the Lyapunov optimization algorithm and ADMM algorithm.
Joint rate control and power allocation optimization
In this section, we firstly transform the long-term time-average optimization objective into a series of online subproblems by using the Lyapunov optimization. Then, we describe the detailed procedures of cross-layer joint optimization problem of rate control and power allocation.
Problem transformation
Note that the original problem P1 contains long-term constraints. To handle the long-term time-average power consumption constraint, C4 can be transformed into a queue stability constraint by employing a virtual queue [25]. The virtual queue is defined as Z(t)\(\buildrel \Delta \over = \{Z_{1}(t), \cdots, Z_{m}(t), \cdots, Z_{M}(t) \}\), and Zm(t) evolves as follows:
$$\begin{array}{*{20}l} Z_{m}(t+1) = \max \{Z_{m}(t)-P_{m,ave},0\}+p_{m}(t), \end{array} $$
It is worth noting that the queue Zm(t) holds no actual data; it is introduced only to enforce constraint C4.
Theorem 1
If virtual queue Zm(t) is mean rate stable, then C4 holds automatically.
The detailed proof can be found in [10]. □
According to Theorem 1, P1 can be rewritten as:
$$\begin{array}{*{20}l} &\mathbf{P2}: \max_{\{A_{m}(t)\},\{p_{m}(t)\}} {\underset{T \to \infty}{\lim}\frac{1}{T}\sum_{t=0}^{T - 1} \mathbb{E} {\left \{\sum_{m=1}^{M} S_{m} (A_{m}(t))\right \}}},\\ &\text{s.t.}\ C_{1} \sim C_{3}, \\ &C_{6}: Q_{m}(t), Z_{m}(t)\text{ are mean rate stable}, \forall m, t. \end{array} $$
Lyapunov optimization
Let Θ(t) =[Q(t), Z(t)] be the concatenated vector of queue length in the network. Define the Lyapunov function as a measure of total queue length at each time slot t:
$$\begin{array}{*{20}l} L(\Theta(t))=\frac{1}{2}\sum_{m=1}^{M}\left \{{Q_{m}^{2}}(t)+{Z^{2}_{m}}(t)\right \}, \end{array} $$
At each time slot, the conditional Lyapunov drift is expressed as:
$$\begin{array}{*{20}l} \Delta (\Theta(t)) & \buildrel \Delta \over =\mathbb{E} \{ L(\Theta(t+1)) - L(\Theta(t))|\Theta(t) \}, \end{array} $$
Minimizing the Lyapunov drift at each time slot keeps the queue lengths bounded. According to Little's Theorem [26], the average delay is proportional to the average queue length, which is expressed as:
$$\begin{array}{*{20}l} D_{net}=\frac{\underset{T \to \infty}{\lim} \frac{1}{T}\sum_{t =0}^{T - 1}\mathbb{E}{\{Q_{m} (t)\}}} {\underset{T \to \infty}{\lim} \frac{1}{T}\sum_{t =0}^{T - 1}\mathbb{E}{\{A_{m}(t)\}}}, \end{array} $$
where Dnet is the time-average delay, which can be adjusted by minimizing Lyapunov drift.
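The relation (12) can be checked against a direct per-packet delay measurement in a toy slotted FIFO queue (Bernoulli arrivals and services with illustrative rates 0.3 and 0.5); the ratio of average queue length to average arrival rate closely matches the measured average delay.

```python
import random
from collections import deque

rng = random.Random(7)
T = 200000
q = deque()                    # arrival slot of each queued packet
delays, qlen_sum, arrivals = [], 0, 0
for t in range(T):
    if rng.random() < 0.3:         # Bernoulli arrival, rate 0.3
        q.append(t)
        arrivals += 1
    if q and rng.random() < 0.5:   # Bernoulli service, rate 0.5
        delays.append(t - q.popleft())
    qlen_sum += len(q)

little = (qlen_sum / T) / (arrivals / T)   # avg queue length / avg arrival rate
direct = sum(delays) / len(delays)         # measured average delay
print(little, direct)
```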
To minimize the network delay and maximize the long-term time-average satisfaction of all D2D pairs, the drift-minus-reward term is defined as:
$$\begin{array}{*{20}l} & \Delta (\Theta(t))-V\mathbb{E}\left \{\sum_{m=1}^{M} S_{m} (A_{m}(t))|\Theta(t) \right\}, \end{array} $$
where V is a non-negative control parameter that is chosen to affect the relative performance of the network delay and satisfaction of D2D pairs, i.e., the tradeoff between "network delay minimization" and "satisfaction maximization of D2D pairs".
At each time slot, under any possible Θ(t) with given V≥0, the drift-minus-reward term is upper bounded by:
$$\begin{array}{*{20}l} & \Delta (\Theta(t))-V\mathbb{E}\left \{\sum_{m=1}^{M} S_{m} (A_{m}(t))|\Theta(t) \right \} \\ & \le C+ \sum_{m=1}^{M} \mathbb{E} \left \{Q_{m}(t)A_{m}(t)-{VS}_{m} (A_{m}(t))|\Theta(t) \right \} \\ & + \sum_{m=1}^{M} \mathbb{E} \left \{Z_{m}(t)(p_{m}(t)-P_{m,ave})|\Theta(t) \right \} \\ & - \sum_{m=1}^{M} \mathbb{E} \left \{Q_{m}(t)D_{m}(t)|\Theta(t) \right \}, \end{array} $$
where C is a positive constant which satisfies:
$$\begin{array}{*{20}l} C &\ge \frac{1}{2}\sum_{m=1}^{M}\mathbb{E} \left\{{A_{m}^{2}}(t)+{D_{m}^{2}}(t)|\Theta(t) \right\} \\ &+\frac{1}{2}\sum_{m=1}^{M} \mathbb{E}\left\{{p_{m}^{2}}(t)+(P_{m,ave})^{2}|\Theta(t)\right\}. \end{array} $$
The detailed proof can be found in Appendix 1. □
According to the principle of Lyapunov optimization, P2 can be transformed into optimizing the drift-minus-reward bound subject to the constraints C1∼C3. The second term on the right-hand side (RHS) of (14) involves only the rate control variables {Am(t)}, while the third and fourth terms of the RHS in (14) involve only the power allocation variables {pm(t)}. Therefore, P2 can be decoupled into two independent subproblems: a rate control subproblem and a power allocation subproblem.
Rate control
The rate control subproblem can be formulated as:
$$\begin{array}{*{20}l} &\mathbf{P3}: \min_{\{A_{m}(t)\}} \sum_{m=1}^{M} \{Q_{m}(t)A_{m}(t)-{VS}_{m} (A_{m}(t))\},\\ &\text{s.t.}\ C_{1}, \end{array} $$
The second-order derivative of the objective of P3 with respect to {Am(t)} is greater than zero, which indicates that it is convex and can be solved via the KKT conditions [27]. The Lagrangian associated with Qm(t)Am(t)−VSm(Am(t)) is expressed as:
$$ \begin{aligned} &L(A_{m}(t),\lambda)\\ &=Q_{m}(t)A_{m}(t)-{VS}_{m} (A_{m}(t))+\lambda(A_{m}(t)-A_{m,max}), \end{aligned} $$
where λ is the Lagrange multiplier.
The first-order condition of (17) with respect to Am(t) is expressed as:
$$\begin{array}{*{20}l} \frac{\partial L}{\partial A_{m}(t)}=Q_{m}(t)-\frac{V\gamma_{m}}{A_{m}(t)\ln2}+\lambda=0, \end{array} $$
Considering the primal constraint Am(t)≤Am,max, dual constraint λ≥0 and complementary slackness constraint λ(Am(t)−Am,max)=0, the optimal rate is given by:
$$\begin{array}{*{20}l} A^{*}_{m}(t)=\min \left \{\frac{\gamma_{m} V}{Q_{m}(t)\ln2},A_{m,max} \right\}. \end{array} $$
Note that the optimal rate is inversely proportional to the queue length Qm(t). Thus the online algorithm can adjust the arrival rate based on the queue length.
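The closed-form rate in (19) is a one-line computation per D2D pair. The sketch below (parameter names follow the text; the zero-queue fallback is an assumption for the degenerate case Qm(t)=0, which the formula does not cover) shows how the admitted rate shrinks as the backlog grows:

```python
import math

def optimal_rate(Q_m: float, gamma_m: float, V: float, A_max: float) -> float:
    """KKT solution of P3: A* = min{gamma_m * V / (Q_m * ln 2), A_max}."""
    if Q_m <= 0.0:                 # empty queue: admit at the maximum rate (assumed convention)
        return A_max
    return min(gamma_m * V / (Q_m * math.log(2)), A_max)

# Small backlog: the unconstrained rate exceeds A_max, so the cap binds.
print(optimal_rate(Q_m=1.0, gamma_m=1.0, V=10.0, A_max=5.0))    # → 5.0
# Large backlog: the rate decreases inversely with Q_m(t).
print(optimal_rate(Q_m=50.0, gamma_m=1.0, V=10.0, A_max=5.0))
```

Raising the control parameter V shifts the cap-binding region outward, i.e., it favors satisfaction over delay, consistent with the tradeoff discussion above.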
ADMM-based power allocation algorithm
The power allocation subproblem is expressed as:
$$\begin{array}{*{20}l} &\mathbf{P4}: \min_{\{p_{m}(t)\}} \left\{\sum_{m=1}^{M} Z_{m}(t)\left(p_{m}(t)-P_{m,ave}\right)\right.\\ & \left.- Q_{m}(t)D_{m}(t){\vphantom{\sum_{m=1}^{M} Z_{m}(t)(p_{m}(t)-P_{m,ave})}}\right\},\\ &\text{s.t.}\ C_{2}, C_{3}. \end{array} $$
It can be proved that the objective of P4 is convex with respect to pm(t) by computing the corresponding second-order derivative. However, solving it with a generic optimization toolbox is time-consuming due to the coupled power variables and the dynamically increasing number of D2D pairs. Hence, we develop a low-complexity ADMM-based power allocation algorithm to solve P4.
The ADMM algorithm is a simple but powerful tool for distributed convex optimization, and has been successfully applied in many areas, e.g., statistical learning, time-series analysis, and scheduling [28]. In general, these application domains are characterized by large-scale problems, high-dimensional data processing, and distributed collection of large-scale data in stochastic processes [29]. The basic procedure of the ADMM algorithm is to alternately update the primal and dual variables in an iterative manner [12]. Its decomposition-coordination procedure makes it possible to find the optimal solution with low computational complexity.
In order to obtain the optimal solution, we rewrite the power variables into p1={p1,p2,⋯,pn} and p2={pn+1,pn+2,⋯,pM}. Then let x=p1 and z=p2. Therefore, we can transform P4 into the following ADMM format [30]:
$$\begin{array}{*{20}l} &\mathbf{P5}: \min_{\mathbf{x},\mathbf{z}} f(\mathbf{x})+ g(\mathbf{z}) \\ &\text{s.t.} \ C_{7}: \mathbf{Jx}+\mathbf{K}\mathbf{z}\le c, \end{array} $$
where x∈Rn×1,z∈R(M−n)×1,J∈R1×n,K∈R1×(M−n), and c=Pmax. J and K are unit vectors. f(x) and g(z) satisfy:
$$\begin{array}{*{20}l} f(\mathbf{x})&=\sum_{m=1}^{n}f(x_{m})=\sum_{m=1}^{n} \{Z_{m}(t)(p_{m}(t)-P_{m,ave}) \\ & -Q_{m}(t)\log_{2} \left(1+\frac{p_{m}(t) {h^{2}_{m}}(t)}{{\sigma^{2}_{0}}}\right)\} \end{array} $$
$$\begin{array}{*{20}l} g(\mathbf{z})&=\sum_{m=n+1}^{M} g(z_{m})=\sum_{m=n+1}^{M} \{Z_{m}(t)(p_{m}(t)\\&-P_{m,ave})-Q_{m}(t)\log_{2} \left(1+\frac{p_{m}(t) {h^{2}_{m}}(t)}{{\sigma^{2}_{0}}}\right)\} \end{array} $$
There exist two basic forms of the ADMM algorithm, such as the unscaled form and the scaled form [16]. For the sake of simplicity, the scaled ADMM algorithm is employed in this paper. The augmented Lagrangian of P5 is expressed as:
$$\begin{array}{*{20}l} &L_{\rho}(\mathbf{x},\mathbf{z}, \boldsymbol{\beta}) =f(\mathbf{x})+ g(\mathbf{z}) \\ & + \frac{\boldsymbol{\rho}}{2} \parallel \mathbf{Jx}+\mathbf{K}\mathbf{z}-c+ \frac{1}{\boldsymbol{\rho}}\boldsymbol{\beta} {\parallel^{2}_{2}}-\frac{\boldsymbol{\rho}}{2} \parallel \frac{1}{\boldsymbol{\rho}}\boldsymbol{\beta} {\parallel^{2}_{2}}. \end{array} $$
where ρ>0 represents the penalty parameter, which is related to the convergence speed of the ADMM algorithm. β is the vector form of the Lagrange multipliers.
The ADMM algorithm in scaled form consists of the following iterations with respect to the primal variables and Lagrange multipliers:
$$\begin{array}{*{20}l} \mathbf{x}[i+1]& :=\arg \min_{\mathbf{x}} \left\{{\vphantom{\frac{1}{\boldsymbol{\rho}}}}f(\mathbf{x})+\frac{\boldsymbol{\rho}}{2} \parallel \mathbf{Jx}+\mathbf{K}\mathbf{z}[i]-c \right.\\ &\left.+\frac{1}{\boldsymbol{\rho}}\boldsymbol{\beta}[i] {\parallel^{2}_{2}} \right\}, \end{array} $$
$$\begin{array}{*{20}l} \mathbf{z}[i+1]& :=\arg \min_{\mathbf{z}} \left\{{\vphantom{\frac{1}{\boldsymbol{\rho}}}}g(\mathbf{z})+\frac{\boldsymbol{\rho}}{2} \parallel \mathbf{J}\mathbf{x}[i+1]+\mathbf{K}\mathbf{z}-c\right. \\ &\left.+\frac{1}{\boldsymbol{\rho}}\boldsymbol{\beta}[i] {\parallel^{2}_{2}} \right\}, \end{array} $$
$$\begin{array}{*{20}l} \boldsymbol{\beta}[i+1]:=\boldsymbol{\beta}[i]+\boldsymbol{\rho}(\mathbf{J}\mathbf{x}[i+1]+\mathbf{K}\mathbf{z}[i+1]-c). \end{array} $$
where i denotes the index of iteration.
Next, based on the analysis of optimality conditions [31], the primal residual is expressed as:
$$\begin{array}{*{20}l} \mathbf{r}[i+1]=\mathbf{J}\mathbf{x}[i+1]+\mathbf{K}\mathbf{z}[i+1]-c, \end{array} $$
and the dual residual is expressed as:
$$\begin{array}{*{20}l} \mathbf{s}[i+1]=\boldsymbol{\rho}\mathbf{J}^{T}\mathbf{K}\left(\mathbf{z}[i+1]-\mathbf{z}[i]\right). \end{array} $$
Therefore, the reasonable termination criteria satisfies:
$$\begin{array}{*{20}l} \parallel \mathbf{r}[i] \parallel_{2}\le \epsilon^{pri} \quad \text{and} \quad \parallel \mathbf{s}[i] \parallel_{2}\le \epsilon^{dual}, \end{array} $$
where εpri>0 and εdual>0 denote feasibility tolerances with respect to primal conditions and dual conditions. Consequently, the ADMM-based power allocation algorithm is summarized in Algorithm 1.
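The iteration pattern of (24)–(26) and the residual-based stopping rule can be illustrated on a toy problem. The sketch below runs scaled ADMM on min (x−1)² + (z−3)² subject to x − z = 0 (whose optimum is x = z = 2); it is not the paper's power allocation problem P5, only an illustration of the alternating updates and of stopping when both residual criteria hold.

```python
# Scaled ADMM on a toy problem: min (x-1)^2 + (z-3)^2  s.t.  x - z = 0.
# Optimum: x = z = 2. Illustrates the update pattern and residual-based stopping.
rho = 1.0
eps_pri, eps_dual = 1e-8, 1e-8
x = z = u = 0.0                               # u is the scaled dual variable beta/rho

for i in range(1000):
    x = (2 * 1 + rho * (z - u)) / (2 + rho)   # x-update: argmin f(x) + (rho/2)(x - z + u)^2
    z_old = z
    z = (2 * 3 + rho * (x + u)) / (2 + rho)   # z-update: argmin g(z) + (rho/2)(x - z + u)^2
    u += x - z                                # scaled dual update
    r = abs(x - z)                            # primal residual ||r[i]||_2
    s = abs(rho * (z - z_old))                # dual residual ||s[i]||_2
    if r <= eps_pri and s <= eps_dual:        # stop only when BOTH criteria hold
        break

print(round(x, 6), round(z, 6))
```

As in the paper, the loop terminates only when the primal and dual residuals are simultaneously below their tolerances; larger ρ accelerates primal feasibility at the cost of slower dual convergence.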
In this subsection, we analyze the performance of Lyapunov optimization algorithm and ADMM algorithm, respectively.
Performance of Lyapunov optimization algorithm
Since all physical quantities are bounded in a practical network, we assume that the arrival rate, transmission rate, power consumption, and satisfaction of D2D pairs are all bounded, i.e., \(\mathbb {E}\{A_{m}(t)\}\le \theta, \mathbb {E}\{D_{m}(t)\}\le \theta \), \(\mathbb {E}\{p_{m}(t)\}\le \theta \), \(S_{min} \le \mathbb {E}\{S_{m} (A_{m}(t))\}\le S_{max}\), where θ, Smin and Smax are finite non-negative constants.
Assume that there is at least one feasible solution to problem P1 which satisfies constraints C1∼C5 and the bounds mentioned above. For arbitrarily small positive real numbers ε and ζ, the following expressions hold [32]:
$$\begin{array}{*{20}l} \mathbb{E}\{ A^{*}_{m}(t)-D^{*}_{m}(t)|\Theta(t)\}=\mathbb{E}\{ A^{*}_{m}(t)-D^{*}_{m}(t)\} \le -\varepsilon \end{array} $$
$$\begin{array}{*{20}l} \mathbb{E}\{ p^{*}_{m}(t)-P_{m,ave}|\Theta(t)\}=\mathbb{E}\{ p^{*}_{m}(t)-{P_{m,ave}}\} \le -\zeta \end{array} $$
$$\begin{array}{*{20}l} \mathbb{E}\{ S^{*}_{m} (A^{*}_{m}(t))|\Theta(t)\}=\mathbb{E}\{S^{*}_{m} (A^{*}_{m}(t))\} = S_{opt} \end{array} $$
where \(A^{*}_{m}(t), D^{*}_{m}(t), p^{*}_{m}(t)\), and \(S^{*}_{m} (A^{*}_{m}(t))\) are the corresponding resulting values, and Sopt denotes the theoretical optimal value.
Suppose that the problem P1 is feasible, hm(t) is i.i.d. across time slots, and that \(\mathbb {E} \{{L(\Theta (0))}\}<\infty \). For arbitrary V≥0, the following properties of the proposed algorithm hold:
Qm(t) and Zm(t) are mean rate stable, which guarantee the constraint C6.
The time-average satisfaction of D2D pairs satisfies:
$$\begin{array}{*{20}l} \underset{T \to \infty}{\lim} \frac{1}{T}\sum_{t =0}^{T-1} \mathbb{E}\{S_{m} (A_{m}(t))\} \ge S_{opt} - \frac{C}{V} \end{array} $$
The time-average queue length satisfies:
$$\begin{array}{*{20}l} \underset{T \to \infty}{\lim} \frac{1}{T}\sum_{t =0}^{T-1} \sum_{m=1}^{M} \mathbb{E}\{Q_{m}(t)\} \le \frac{C+V(S_{max}-S_{opt})}{\varepsilon} \end{array} $$
Based on the above analysis, we conclude that the proposed rate control and power allocation algorithm can satisfy the queue stability constraint and achieve the trade-off between network delay and satisfaction of D2D pairs by adjusting parameter V.
Convergence of ADMM algorithm
The objective function of P5 is closed, proper, and convex, and the Lagrangian L0(x,z,β) has a saddle point. Thus, the iterations satisfy the following convergence properties.
The residual convergence, objective convergence and dual variable convergence are expressed as follows:
Residual convergence: The primal and dual residuals converge to 0 as i→∞, which implies that the iterations approach feasibility.
Objective convergence: The objective function of P5 eventually converges to the optimal value under the stopping criterion as i→∞.
Dual variable convergence: The dual variable β[i+1] converges to dual optimal value as i→∞.
In this section, we verify the system performance of the proposed algorithm through simulation results. Assume that there are M=4 D2D pairs for data transmission in each time slot, and the corresponding number of subchannels is set as 4. Detailed parameters are summarized in Table 1 [10, 34, 35].
Table 1 Parameter table
Figure 2 shows the queue length of the data queue Qm(t) and the virtual power queue Zm(t) versus time slots, respectively. It can be observed that both the data queue and the virtual queue are bounded after a period of time, which guarantees the stability of network. The phenomenon can be well explained by the first property in Theorem 3.
a, b Queue stability versus time slots
Figure 3 shows the transmission rate and arrival rate versus time slots, respectively. It can be observed that both the transmission rate and arrival rate are stable, which guarantees the long-term time-averaged power constraint. In addition, the fluctuation of arrival rate in Fig. 3a is smaller than that of transmission rate in Fig. 3b. The reason is that the arrival rate is only related to the stable queues Q(t) and Z(t), while the transmission rate is not only related to the stable queues Q(t) and Z(t), but also related to the transmission power and channel gain, which is varying across time slots. Therefore, the transmission rate remains stable over a larger range of values compared with the arrival rate.
a, b Rate stability versus time slots
Figure 4 shows the satisfaction of D2D pairs and the average network delay versus the control parameter V, respectively. It can be observed that both the satisfaction of D2D pairs and the average network delay increase as V increases, which is consistent with the second and third properties of Theorem 3. Furthermore, the snapshot-based algorithm [10], which maximizes the instantaneous satisfaction of all D2D pairs, is adopted as the baseline for comparison. The baseline algorithm only considers the short-term service demands of users while ignoring the long-term power constraint and stability of the network. It can be observed that the proposed algorithm attains approximately 99.9% of the optimal satisfaction of D2D pairs with a lower average network delay. The reason is that the proposed algorithm simultaneously optimizes the arrival rate in the network layer and the power allocation in the physical layer from a global long-term perspective.

Figure 5 shows the residual convergence of the ADMM algorithm versus the number of iterations. The stopping criterion thresholds εpri and εdual are represented by the dotted lines in Fig. 5a and b, respectively. It can be observed that the stopping criterion is satisfied after 57 iterations in Fig. 5a and 4 iterations in Fig. 5b. The iterations stop if and only if both the primal and dual residual conditions are satisfied simultaneously, i.e., after 57 iterations, which is consistent with the first property of Theorem 4.
a, b Satisfaction of D2D pairs and network delay versus control parameter V
a, b The residual convergence versus the number of iterations
Figure 6 shows the optimal convergence of the ADMM algorithm versus the number of iterations. It shows the convergence of the objective function, which demonstrates that the proposed algorithm obtains the optimal solution of the objective function, i.e., 8.75. The reason can be explained by the second property of Theorem 4. In addition, the dual variables converge after multiple iterations due to their derivation from the primal variables, which is consistent with the third property of Theorem 4.
The objective convergence versus the number of iterations
Table 2 compares the CVX toolbox approach with the ADMM-based power allocation algorithm. On one hand, it can be observed that the ADMM-based power allocation algorithm converges in 0.021236 seconds, which is only about 1.7% of the convergence time of the CVX toolbox approach. Due to its fast convergence, the ADMM-based power allocation algorithm obtains the optimal solution within the error tolerance at each time slot with lower computational complexity. On the other hand, 4 TXs share the total power of 0.8 W at each time slot, which verifies the feasibility of the proposed power allocation algorithm. The table shows that the power allocation varies with the channel states and the service demands of different TXs.
Table 2 Comparison of algorithm complexity
In this paper, we propose a D2D-based relay framework to improve the reliability and reduce the delay of the network. Based on Lyapunov optimization, the cross-layer joint optimization problem is separated into two independent subproblems, rate control and power allocation, which are solved by using the KKT conditions and the ADMM algorithm, respectively. The tradeoff between network delay minimization and satisfaction maximization of D2D pairs with \([\mathcal {O}(\frac {1}{V}),\mathcal {O}(V)]\) can be obtained by the proposed algorithm, which has been verified by the simulation results.
In the future work, we will improve the performance of the network while considering device mobility from a long-term perspective. Furthermore, we will study how to combine Lyapunov optimization and ADMM algorithm with big data or machine learning to deal with high complexity problems.
Appendix 1: Proof of Theorem 2
According to the inequality $(\max\{Q-b,0\}+A)^{2}\le Q^{2}+b^{2}+A^{2}+2Q(A-b)$, where Q, b and A are non-negative real numbers, we obtain the following expression:
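This elementary bound, which drives the whole drift estimate, can be sanity-checked numerically. The sketch below (illustrative only; the sampling range is an arbitrary assumption) verifies the inequality on random non-negative triples:

```python
import random

random.seed(1)
for _ in range(10_000):
    Q = random.uniform(0.0, 10.0)
    b = random.uniform(0.0, 10.0)
    A = random.uniform(0.0, 10.0)
    lhs = (max(Q - b, 0.0) + A) ** 2
    rhs = Q**2 + b**2 + A**2 + 2 * Q * (A - b)
    # (max{Q-b,0}+A)^2 <= Q^2 + b^2 + A^2 + 2Q(A-b) for Q, b, A >= 0
    assert lhs <= rhs + 1e-9
print("inequality holds on all samples")
```

The analytical reason is the case split used in the proof: if Q ≥ b the difference rhs − lhs equals 2bA ≥ 0, and if Q < b it equals (Q−b)² + 2QA ≥ 0.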
$$ {\begin{aligned} \Delta (\Theta(t)) & \buildrel \Delta \over =\mathbb{E} \{ L(\Theta(t+1)) - L(\Theta(t))|\Theta(t) \}\\ &\le C+\sum_{m=1}^{M} \mathbb{E} \left \{Q_{m}(t)[A_{m}(t)-D_{m}(t)]|\Theta(t) \right \}\\ & + \sum_{m=1}^{M} \mathbb{E} \{Z_{m}(t)(p_{m}(t)-P_{m,ave})|\Theta(t) \}, \end{aligned}} $$
$$ {\begin{aligned} C &\ge \frac{1}{2}\sum_{m=1}^{M}\mathbb{E} \{{A_{m}^{2}}(t)+{D_{m}^{2}}(t)|\Theta(t) \}\\ &+\frac{1}{2}\sum_{m=1}^{M} \mathbb{E}\left\{{p_{m}^{2}}(t)+(P_{m,ave})^{2}|\Theta(t)\right\}. \end{aligned}} $$
Adding and subtracting \(V\mathbb {E}\left \{ \sum _{m=1}^{M} S_{m} (A_{m}(t))|\Theta (t) \right \}\) on both sides of Eq. (36) and merging like terms proves (14).
In order to minimize the RHS of (14) with constraints C1∼C3, we obtain the following expression:
$$ {\begin{aligned} & \Delta (\Theta(t))-V\mathbb{E}\left \{\sum_{m=1}^{M} S_{m} (A_{m}(t))|\Theta(t) \right \}\\ &\le C+\sum_{m=1}^{M} \mathbb{E} \left \{Q_{m}(t)[A^{*}_{m}(t)-D^{*}_{m}(t)]|\Theta(t) \right \}\\ & + \sum_{m=1}^{M} \mathbb{E} \{Z_{m}(t)(p^{*}_{m}(t)-{P_{m,ave}})|\Theta(t) \},\\ & -V\mathbb{E}\left \{ \sum_{m=1}^{M} S^{*}_{m} (A^{*}_{m}(t))|\Theta(t) \right \}, \end{aligned}} $$
Plugging (31), (32) and (33) into (38), and taking ζ→0, the following expression holds:
$$ {\begin{aligned} \Delta (\Theta(t))&-V\mathbb{E}\left \{ \sum_{m=1}^{M} S_{m} (A_{m}(t))|\Theta(t) \right \}\\ \le C&-V\mathbb{E}\{ S_{opt}|\Theta(t)\}-\varepsilon \sum_{m=1}^{M} \mathbb{E} \{Q_{m}(t)|\Theta(t)\}, \end{aligned}} $$
By applying the rules of iterated expectation and telescoping sums, we obtain:
$$\begin{array}{*{20}l} & \mathbb{E} \{{L(\Theta(t))}\}-\mathbb{E} \{{L(\Theta(0))}\}-V\sum_{t=0}^{T-1}\mathbb{E}\left\{S_{m} (A_{m}(t))\right\}\\ & \le T(C-{VS}_{opt})-\varepsilon \sum_{t=0}^{T-1} \sum_{m=1}^{M} \mathbb{E} \{Q_{m}(t)\}, \end{array} $$
Since \(\mathbb {E} \{{L(\Theta (0))}\}=0, \mathbb {E} \{{L(\Theta (t))}\}\ge 0\) and Qm(t)≥0, rearrange terms and we obtain:
$$\begin{array}{*{20}l} V\sum_{t=0}^{T-1}\mathbb{E}\left\{S_{m} (A_{m}(t))\right\}\ge {TVS}_{opt}-TC, \end{array} $$
Dividing by VT at both sides and taking T→∞, (34) can be proved.
Similarly, by rearranging terms, (40) can be rewritten as:
$$\begin{array}{*{20}l} \varepsilon \sum_{t=0}^{T-1} \sum_{m=1}^{M} \mathbb{E} \{Q_{m}(t)\} &\le T(C-{VS}_{opt})\\ & +V\sum_{t=0}^{T-1}\mathbb{E}\left\{S_{m} (A_{m}(t))\right\} \\ & \le T(C-{VS}_{opt})+{TVS}_{max}, \end{array} $$
Dividing by εT at both sides and taking T→∞, (35) can be proved.
According to the definition of Lyapunov function, taking expectation and rearranging terms yield:
$$\begin{array}{*{20}l} \sum_{m=1}^{M} \mathbb{E} \left\{{Q^{2}_{m}}(t)\right\}=2\mathbb{E} \{L(\Theta(t))\}-\sum_{m=1}^{M} \mathbb{E} \left\{{Z^{2}_{m}}(t)\right\}, \end{array} $$
We obtain the following expression by plugging (43) into (40) and rearranging terms:
$$\begin{array}{*{20}l} \sum_{m=1}^{M} \mathbb{E} \left\{{Q^{2}_{m}}(t)\right\} \le 2TV\left(S_{max}-S_{opt}\right)+TC, \end{array} $$
According to inequality \(\mathbb {E}^{2}\{|Q_{m}(t)|\} \le \mathbb {E}\left \{{Q^{2}_{m}}(t)\right \}\), we obtain:
$$\begin{array}{*{20}l} \sum_{m=1}^{M}\mathbb{E}\{|Q_{m}(t)|\} &\le \sqrt{\sum_{m=1}^{M}\mathbb{E}\left\{{Q^{2}_{m}}(t)\right\}} \\ & \le \sqrt{2TV\left(S_{max}-S_{opt}\right)+TC}, \end{array} $$
Dividing by T and taking T→∞, we can obtain:
$$\begin{array}{*{20}l} \underset{T \to \infty}{\lim} \frac{\sum_{m=1}^{M}\mathbb{E}\{|Q_{m}(T)|\}}{T}=0. \end{array} $$
By interchanging the order of expectation and limit, this proves the stability of the queues. Queue Qm(t) is mean rate stable according to (5); thus C5 is satisfied. A similar proof applies to Zm(t) with slight modifications.
Please note that in the original article [1] an incorrect address has been provided for affiliation '2'.
ADMM:
Alternating direction method of multipliers
CSI:
Channel state information
D2D:
Device-to-device
i.i.d.:
Independent and identically distributed
KKT:
Karush-Kuhn-Tucker
QoE:
Quality of experience
QoS:
Quality of service
QSI:
Queue state information
RHS:
Right-hand side
RX:
D2D receiver
TX:
D2D transmitter
Z. Zhou, H. Liao, B. Gu, K. M. S. Huq, S. Mumtaz, J. Rodriguez, Robust mobile crowd sensing: when deep learning meets edge computing. IEEE Netw.32(4), 54–60 (2018).
Z. Zhou, H. Yu, C. Xu, Y. Zhang, S. Mumtaz, J. Rodriguez, Dependable content distribution in D2D-based cooperative vehicular networks: a big data-integrated coalition game approach. IEEE Trans. Ind. Informat.19(3), 953–964 (2018).
M. Sheng, Y. Li, X. Wang, J. Li, Y. Shi, Energy efficiency and delay tradeoff in device-to-device communications underlaying cellular networks. IEEE J. Sel. Areas Commun.34(1), 92–106 (2016).
C. Xu, J. Feng, Z. Zhou, J. Wu, C. Perera, Cross-layer optimization for cooperative content distribution in multihop device-to-device networks. IEEE Internet Things J.99:, 1–1 (2017).
Z. Zhou, M. Dong, K. Ota, G. Wang, L. T. Yang, Energy-efficient matching for resource allocation in D2D enabled cellular networks. IEEE Internet Things J.3(3), 428–438 (2016).
S. Dang, G. Chen, J. P. Coon, Multicarrier relay selection for full-duplex relay-assisted OFDM D2D systems. IEEE Trans. Veh. Technol.67(8), 7204–7218 (2018).
L. Pu, X. Chen, J. Xu, X. Fu, D2D fogging: An energy-efficient and incentive-aware task offloading framework via network-assisted D2D collaboration. IEEE J. Sel. Areas Commun.34(12), 3887–3901 (2016).
L. Chen, S. Zhou, J. Xu, Computation peer offloading for energy-constrained mobile edge computing in small-cell networks. IEEE/ACM Trans. Netw.26(4), 1619–1632 (2018).
Y. Guo, Q. Yang, J. Liu, K. S. Kwak, Cross-layer rate control and resource allocation in spectrum-sharing OFDMA small-cell networks with delay constraints. IEEE Trans. Veh. Technol.66(5), 4133–4147 (2017).
Y. Guo, Q. Yang, K. S. Kwak, Quality-oriented rate control and resource allocation in time-varying OFDMA networks. IEEE Trans. Veh. Technol.66(3), 2324–2338 (2017).
M. Peng, Y. Yu, H. Xiang, H. V. Poor, Energy-efficient resource allocation optimization for multimedia heterogeneous cloud radio access networks. IEEE Trans. Multimedia. 18(5), 879–892 (2016).
C. Liang, F. R. Yu, H. Yao, Z. Han, Virtual resource allocation in information-centric wireless networks with virtualization. IEEE Trans. Veh. Technol.65(12), 9902–9914 (2016).
N. Li, Z. Fei, C. Xing, D. Zhu, M. Lei, Robust low-complexity MMSE precoding algorithm for cloud radio access networks. IEEE Commun. Lett.18(5), 773–776 (2014).
Q. Ling, Y. Liu, W. Shi, Z. Tian, Weighted ADMM for fast decentralized network optimization. IEEE Trans. Signal Process.64(22), 5930–5942 (2016).
E. Chen, M. Tao, ADMM-based fast algorithm for multi-group multicast beamforming in large-scale wireless systems. IEEE Trans. Commun.65(6), 2685–2698 (2016).
G. Zhang, Y. Chen, Z. Shen, L. Wang, Distributed energy management for multi-user mobile-edge computing systems with energy harvesting devices and QoS constraints. IEEE Internet Things J., 1–1 (2018).
Z. Zhou, F. Xiong, C. Xu, Y. He, S. Mumtaz, Energy-efficient vehicular heterogeneous networks for green cities. IEEE Trans. Ind. Informat.14(4), 1522–1531 (2018).
C. Xu, J. Feng, B. Huang, Z. Zhou, S. Mumtaz, J. Rodriguez, Joint relay selection and resource allocation for energy-efficient D2D cooperative communications using matching theory. Appl. Sci.7(5), 491–515 (2017).
J. Liu, Q. Yang, S. G, Congestion avoidance and load balancing in content placement and request redirection for mobile CDN. IEEE/ACM Trans. Netw.26(2), 851–863 (2018).
A. Asadi, Q. Wang, V. Mancuso, A survey on device-to-device communication in cellular networks. Commun. Surveys Tuts.16(4), 1801–1819 (2014).
H. Kim, J. Na, E. Cho, in Proceedings of the International Conference on Information Networking 2014. Resource allocation policy to avoid interference between cellular and D2D links in mobile networks (IEEE, Phuket, 2014), pp. 588–591.
P. Mach, Z. Becvar, T. Vanek, In-band device-to-device communication in OFDMA cellular networks: A survey and challenges. Commun. Surveys Tuts.17(4), 1885–1922 (2015).
M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems (Morgan and Claypool, San Rafael, 2010).
M. Chen, S. S. M. Ponec, J. Li, P. A. Chou, Utility maximization in peer-to-peer systems with applications to video conferencing. IEEE/ACM Trans. Netw.20(6), 1681–1694 (2012).
W. Bao, H. Chen, Y. Li, B. Vucetic, Joint rate control and power allocation for non-orthogonal multiple access systems. IEEE J. Sel. Areas Commun.35(12), 2798–2811 (2017).
D. Bertsekas, R. Gallager, Data Networks (Prentice Hall, NJ, USA, 1987).
L. Duan, T. Kubo, K. Sugiyama, J. Huang, T. Hasegawa, J. Walrand, Motivating smartphone collaboration in data acquisition and distributed computing. IEEE Trans. Mobile Comput.13(10), 2320–2333 (2014).
C. Lu, J. Feng, S. Yan, Z. Lin, A unified alternating direction method of multipliers by majorization minimization. IEEE Trans. Pattern Anal. Mach. Intell.40(3), 527–541 (2018).
Y. Li, G. Shi, W. Yin, L. Liu, Z. Han, A distributed ADMM approach with decomposition-coordination for mobile data offloading. IEEE Trans. Veh. Technol.67(3), 2514–2530 (2018).
Y. Wang, L. Wu, S. Wang, A fully-decentralized consensus-based ADMM approach for DC-OPF with demand response. IEEE Trans. Smart Grid. 8(6), 2637–2647 (2017).
G. Chen, Q. Yang, An ADMM-based distributed algorithm for economic dispatch in islanded microgrids. IEEE Trans. Ind. Informat.14(9), 3892–3903 (2018).
L. Georgiadis, M. J. Neely, L. Tassiulas, Resource allocation and cross-layer control in wireless networks. J. Found. Trends Netw.1(1), 1–144 (2006).
S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn.3(1), 1–122 (2011).
Z. Zhou, J. Feng, B. Gu, B. Ai, S. Mumtaz, J. Rodriguez, M. Guizani, When mobile crowd sensing meets UAV: Energy-efficient task assignment and route planning. IEEE Trans. Commun.66(11), 5526–5538 (2018).
Z. Zhou, K. Ota, M. Dong, C. Xu, Energy-efficient matching for resource allocation in D2D enabled cellular networks. IEEE Trans. Veh. Technol.66(6), 5256–5268 (2017).
Thanks to Professor Liangrui Tang of North China Electric Power University for his guidance of this research.
This work was partially supported by the National Natural Science Foundation of China (NSFC) under grant number 61601181; the Fundamental Research Funds for the Central Universities under grant number 2017MS001.
The State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources, School of Electrical and Electronic Engineering, North China Electric Power University, Beijing, China
Yahui Wang
, Yanhua He
, Chen Xu
& Zhenyu Zhou
The Institute of Telecommunications, Aveiro, 3810-193, Portugal
Shahid Mumtaz
& Jonathan Rodriguez
The University of South Wales, Pontypridd, CF37 1DL, UK
Jonathan Rodriguez
SCC, Lancaster University, Pontypridd, UK
Haris Pervaiz
YW wrote the manuscript and made a part of simulations. YH gave some suggestions and made a part of simulations. ZZ proposed the idea and revised this paper. All authors took an active role in the writing process of the document, and read and approved the final manuscript.
Correspondence to Zhenyu Zhou.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Wang, Y., He, Y., Xu, C. et al. Joint rate control and power allocation for low-latency reliable D2D-based relay network. J Wireless Com Network 2019, 111 (2019) doi:10.1186/s13638-019-1418-0
D2D-based relay network
Power allocation
ADMM algorithm
Low-latency and reliability
Ultra-Reliable-and-Available Low-Latency Communications for 5G/B5G-enabled IoT | CommonCrawl |
\begin{document}
\title[Blow--up for the wave equation with...] {Blow--up for the wave equation with nonlinear source and boundary damping terms} \author{Alessio Fiscella} \author{Enzo Vitillaro} \address[E.~Vitillaro]
{Dipartimento di Matematica ed Informatica, Universit\`a di Perugia\\
Via Vanvitelli,1 06123 Perugia ITALY} \email{[email protected]} \address[A.~Fiscella]{Dipartimento di Matematica ``Federigo Enriques'', Universit\`a di Milano\\ Via Cesare Saldini, 50 20133 Milano ITALY} \email{[email protected]} \date{\today}
\begin{abstract} The paper deals with blow--up for the solutions of an evolution problem consisting of a semilinear wave equation posed in a bounded $C^{1,1}$ open subset of $\mathbb R^n$, supplied with a Neumann boundary condition involving a nonlinear dissipation.
The typical problem studied is
$$\begin{cases} u_{tt}-\Delta u=|u|^{p-2}u \qquad &\text{in $(0,\infty)\times\Omega$,}\\
u=0\qquad &\text{on $(0,\infty)\times\Gamma_0$,}\\ \partial_\nu u
=-\alpha(x)\left(|u_t|^{m-2}u_t+\beta |u_t|^{\mu-2}u_t\right) \qquad &\text{on $(0,\infty)\times\Gamma_1$,}\\
u(0,x)=u_0(x),\qquad u_t(0,x)=u_1(x)&\text{in $\Omega$,} \end{cases}$$ where $\partial\Omega=\Gamma_0\cup\Gamma_1$, $\Gamma_0\cap \Gamma_1=\emptyset$, $\sigma(\Gamma_0)>0$, $2<p\le 2(n-1)/(n-2)$ (when $n\ge 3$), $m>1$, $\alpha\in L^\infty(\Gamma_1)$, $\alpha\ge 0$, $\beta\ge 0$. The initial data are posed in the energy space. The aim of the paper is to improve previous blow--up results concerning the problem. \end{abstract}
\maketitle
\section{Introduction} We deal with the evolution problem consisting of a semilinear wave equation posed in a bounded subset of $\mathbb R^n$, supplied with a Neumann boundary condition involving a nonlinear dissipation. More precisely, we consider the initial--and--boundary value problem \begin{equation}\label{P} \begin{cases} u_{tt}-\Delta u=f(x,u) \qquad &\text{in $(0,\infty)\times\Omega$,}\\
u=0\qquad &\text{on $(0,\infty)\times\Gamma_0$,}\\ \partial_\nu u=-Q(x,u_t) \qquad &\text{on $(0,\infty)\times\Gamma_1$,}\\
u(0,x)=u_0(x),\qquad u_t(0,x)=u_1(x)&
\text{in $\Omega$,} \end{cases}\end{equation} where $u=u(t,x)$, $t\ge 0$, $x\in\Omega$, $\Delta$ denotes the Laplacian operator with respect to the $x$ variable. We assume that $\Omega$ is a bounded and $C^{1,1}$ open subset of $\mathbb R^n$ ($n\ge 1$), $\partial\Omega=\Gamma_{0}\cup\Gamma_{1}$, $\Gamma_{0}\cap\Gamma_{1}=\emptyset$ with $\Gamma_0$ and $\Gamma_1$ being measurable with respect to the natural (Lebesgue) measure on the manifold $\Gamma=\partial\Omega$, in the sequel denoted by $\sigma$, and $\sigma(\Gamma_0)>0$. These properties of $\Omega$,
$\Gamma_0$ and $\Gamma_1$ are assumed, without further comments, throughout the paper. The initial data are in the energy space, that is $u_0\in H^1(\Omega)$ and $u_1\in L^2(\Omega)$, with the compatibility condition ${u_0}_{|\Gamma_0}=0$ (in the trace sense).
Moreover $Q$ represents a nonlinear boundary damping and, roughly,
$Q(x,v)\simeq \alpha(x)(|v|^{m-2}v+\beta |v|^{\mu-2}v)$, $1<\mu\le m$, $\beta\ge 0$, $\alpha\in L^\infty(\Gamma_1)$, $\alpha\ge 0$. When $\beta>0$ and $\mu=2$ the term $Q$ describes a realistic dissipation rate, linear for small $v$ and superlinear for large $v$ (see for example \cite{levicivita}), possibly depending on the space variable, while when $\beta=0$ and $\alpha=1$ it is a pure--power model nonlinearity. Finally $f$ is a nonlinear source and roughly
$f(x,u)\simeq |u|^{p-2}u$, $2<p\le 2^*$, where as usual $2^*$ denotes the Sobolev critical exponent $2n/(n-2)$ when $n\ge 3$, $2^*=\infty$ when $n=1,2$.
The presence of the boundary damping in \eqref{P} plays a critical role in the context of boundary control. See for example \cite{chen2}, \cite{chen4}, \cite{chen3}, \cite{chen1}, \cite{lagnese2}, \cite{lasiecka1}, \cite{lastat}, \cite{lasieckatriggiani} and \cite{zuazua}. For this reason, and for their clear physical meaning, problems like \eqref{P} are the subject of a wide literature. In addition to the already quoted papers see also \cite{CDCL}, \cite{CDCM}, \cite{cavalcantisoriano}, \cite{chueshovellerlasiecka}, \cite{CLT}, \cite{gerbi}, \cite{Ha}, \cite{lasieckatriggiani2}, \cite{lasieckatriggiani3}, \cite{phy}, \cite{tataru} and \cite{stable}.
The analysis of problems like \eqref{P} is related to the treatment of quasilinear wave equations with Neumann boundary conditions involving source terms. See \cite{bociulasiecka2}, \cite{bociulasiecka1}, \cite{BRT}, \cite{bociulasiecakproc}, \cite{PH} and \cite{global}.
In order to clearly describe the specific subject of this paper we consider problem \eqref{P} when $f$ and $Q$ are exactly the model nonlinearities, that is when problem \eqref{P} reduces to \begin{equation}\label{2}
\begin{cases} u_{tt}-\Delta u=|u|^{p-2}u \qquad &\text{in $(0,\infty)\times\Omega$,}\\
u=0\qquad &\text{on $(0,\infty)\times\Gamma_0$,}\\
\partial_\nu u=-\alpha(x) (|u_t|^{m-2}u_t+\beta |u_t|^{\mu-2}u_t) \qquad &\text{on $(0,\infty)\times\Gamma_1$,}\\
u(0,x)=u_0(x),\qquad u_t(0,x)=u_1(x)&
\text{in $\Omega$,} \end{cases}\end{equation} with $1<\mu\le m$, $\beta\ge 0$, $\alpha\in L^\infty(\Gamma_1)$, $\alpha\ge 0$ and $2<p\le 2^*$.
Local existence and uniqueness for weak solutions of problem \eqref{2} when $2<p\le 1+2^*/2$ was first proved in
\cite[Theorem~4]{stable}, see Theorem~\ref{localexistencetheorem}, p.~\pageref{localexistencetheorem}. In the literature one often refers to this parameter range as the subcritical/critical one, since the Nemitskii operator $u\mapsto |u|^{p-2}u$ is locally Lipschitz from $H^1(\Omega)$ to $L^2(\Omega)$. In this case the nonlinear semigroup theory is directly available.
The quoted result was subsequently extended to more general nonlinearities $Q$ and $f$, of non--algebraic type, in \cite{CDCL} and \cite{CDCM}. Moreover, at least when $\alpha$ is constant, Hadamard well--posedness for problem \eqref{2} follows from the results in \cite{bociulasiecka1}, dealing with more general versions of problem \eqref{P} possibly involving internal nonlinear damping and boundary source terms. In this connection it is worth observing that, when no internal damping is present in the equation, the well--posedness result in \cite{bociulasiecka1} only applies to the subcritical/critical range $2<p\le 1+2^*/2$, due to \cite[Assumption 1.1]{bociulasiecka1}. Moreover, when $u_0$ and $u_1$ are small (in the energy space) the solutions of \eqref{2} are global in time.
On the other hand blow--up results for problem \eqref{2} are much less frequent in the literature. In the particular case $\Gamma_1=\emptyset$ (the same arguments work also when $\alpha\equiv 0$) it is well--known that, for suitably chosen data, local solutions of problem \eqref{2}, when they exist, blow--up in finite time. See for example \cite{ball3}, \cite{glassey}, \cite{jorgens}, \cite{keller}, \cite{klp}, \cite{levine4}, \cite{levine2} and \cite{tsutsumi}. We also refer to the related papers \cite{levpayne1} and \cite{levpayne2}, dealing with boundary source terms. In \cite{paynesattinger} the authors introduced the so--called ``potential well theory'' for the semilinear wave equation with Dirichlet boundary condition, and in particular blow--up for positive initial energy was proved. We would also like to mention the paper \cite{georgiev}, dealing with the equation
$u_{tt}-\Delta u+|u_t|^{m-2}u_t=|u|^{p-2}u$ in $[0,\infty)\times\Omega$ with homogeneous Dirichlet boundary conditions, when $2<p\le 1+2^*/2$ and $m>1$, which was the first contribution to address the competition between nonlinear damping and source terms. In particular it was proved there that solutions may blow--up in finite time (depending on initial data) if and only if $m<p$. The result was subsequently generalized to positive initial energy and abstract evolution equations in several papers. See for example \cite{levserr}, \cite{ps:private} and \cite{blowup}.
When $\Gamma_1\not=\emptyset$ and $m=2$ the problem of global nonexistence for solutions of \eqref{2} was studied in \cite{rendiconti} using the classical concavity method of H. Levine, which is no longer available for nonlinear damping terms. The first blow--up result for problem \eqref{2} in the general case $m>1$ (and $2<p\le 1+2^*/2$) is contained in the already quoted paper
\cite{stable}. To recall it we need to introduce some basic notation. We denote by $\|\cdot\|_p$ the norm in $L^p(\Omega)$ as well as the norm in $[L^p(\Omega)]^n$. We also introduce the Hilbert space
$$H^1_{\Gamma_0}(\Omega)=\{u\in H^1(\Omega): u_{|\Gamma_0}=0\}$$
(where $u_{|\Gamma_0}$ is intended in the trace sense), equipped with the norm $\|\nabla u\|_2$, which is equivalent, by a Poincar\'e type inequality (see \cite{ziemer}), to the standard one. We also introduce the functionals \begin{equation}\label{J}
J(u)=\frac 12 \|\nabla u\|_2^2-\frac 1p\|u\|_p^p\qquad\text{and}\quad K(u)=\|\nabla u\|_2^2-\|u\|_p^p \end{equation}
for $u\in H^1_{\Gamma_0}(\Omega)$. The energy associated with initial data $u_0\in H^1_{\Gamma_0}(\Omega)$ and $u_1\in L^2(\Omega)$ is denoted by $E(u_0,u_1):=\frac 12 \|u_1\|_2^2+J(u_0)$. Moreover we set \begin{equation}\label{d} d =\underset{u\in H_{\Gamma_0}^{1}(\Omega)\setminus\{0\}}{\inf}\sup\limits_{\lambda >0}J(\lambda u). \end{equation} It is well--known that $d>0$. See Section~\ref{section4}, where Lemma~\ref{NUOVO} makes this property clear, and also Remark~\ref{variational}, where a variational characterization of $d$ is recalled. Finally we introduce the ``bad part of the potential well'' (we owe this suggestive name to \cite{BRTCOMP}) \begin{equation}\label{Wu} W_u:=\{(u_0,u_1)\in H^1_{\Gamma_0}(\Omega)\times L^2(\Omega): K(u_0)\le 0 \quad\text{and} \quad E(u_0,u_1)<d\}. \end{equation} Trivially if $E(u_0,u_1)<0$ then $(u_0,u_1)\in W_u$ since $p>2$. The situation is clearly described by Figure~\ref{fig1} below.
In particular \cite[Theorem~7]{stable} asserts that solutions blow--up in finite time if $(u_0,u_1)\in W_u$ and the further condition \begin{equation}\label{3} m<m_0(p):=\frac {2(n+1)p-4(n-1)}{n(p-2)+4} \end{equation} holds. It is worth mentioning that $m_0(p)>2$ when $p>2$, so the case $1<m\le 2$ is fully covered, but when $m>2$ condition \eqref{3} is rather restrictive. See Figure~\ref{fig2} below.
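Indeed, the claim $m_0(p)>2$ for $p>2$ is checked by the elementary identity
$$m_0(p)-2=\frac {2(n+1)p-4(n-1)-2[n(p-2)+4]}{n(p-2)+4}=\frac {2(p-2)}{n(p-2)+4}>0.$$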
\begin{figure}
\caption{The sets of pairs $(p,m)$ considered in \cite{stable} and in the present paper, in the two cases $n=1,2$ and $n\ge 3$. The figures are drawn for $n=2$ and $n=3$, in different scales, due to the unboundedness of the sets considered in the first case.}
\label{fig2}
\end{figure}
The blow--up problem is also considered in \cite{BRT} and \cite{CDCL}. These papers deal with a modified version of \eqref{2}, where internal damping and boundary source terms are also present. Assumption \eqref{3} is absent there, since the combination of internal and boundary sources is more effective in producing blow--up.
As to problem \eqref{2} without boundary sources we mention the paper \cite{gerbi} where exponential growth, but not blow--up, for solutions of \eqref{2} is proved when $m<p$. A generalized version of assumption \eqref{3} also appears in the recent paper \cite{autuoripucci}, dealing with much more general Kirchhoff systems and a larger class of initial data.
Assumption \eqref{3} was first dropped in \cite{bociulasieckaApplmat}, where blow--up for a modified version of problem \eqref{2} is proved when $m<1+p/2$ and $E(u_0,u_1)<0$. Even if the blow--up result in the quoted paper is stated in the presence of an internal damping, one easily sees that the arguments in the proof apply as well to problem \eqref{2}. Clearly assumption $m<1+p/2$ is more general than \eqref{3}, since $m_0(p)<1+p/2$ for $p>2$ (see Figure~\ref{fig2} again). The improvement in the assumption was obtained by using interpolation estimates in the full scale of Besov spaces instead of the Hilbert scale used in \cite{stable}.
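The asserted comparison between the two conditions also reduces to an elementary identity: for $p>2$
$$1+\frac p2-m_0(p)=\frac {\left(1+\frac p2\right)[n(p-2)+4]-2(n+1)p+4(n-1)}{n(p-2)+4}=\frac {n(p-2)^2}{2[n(p-2)+4]}>0.$$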
\begin{figure}
\caption{The sets of initial data considered by \cite{bociulasieckaApplmat}, having negative initial energy, and those considered only in the present paper.}
\label{fig1}
\end{figure}
Subsequently assumption \eqref{3} was also dropped in the recent paper \cite{FLZ}, dealing with the one--dimensional case $n=1$, when $\beta=0$ and $\alpha\equiv 1$. Blow--up for problem \eqref{2} is proved there when $E(u_0,u_1)<0$ and \renewcommand{(\roman{enumi})}{(\roman{enumi})} \begin{enumerate} \item either $m<1+p/2$,
\item or $m\ge 1+p/2$ and $|\Omega|$ is sufficiently large. \end{enumerate} The arguments used by the authors in the two cases are different. Consequently in dimension one the line $m=p$ is not the threshold between global existence and blow--up for suitable data. A natural conjecture is then that the same phenomenon occurs in higher space dimension $n$, even if the one--dimensional case is sometimes different from the higher--dimensional one (see for example the papers \cite{vazvit2} and \cite{vazvit} where a similar situation occurs for well--posedness, and the related paper \cite{vazvitHLB}). Unfortunately the arguments used to handle the case $m\ge 1+p/2$ cannot be adapted to $n\ge 2$.
The aim of this paper is to show that the technique in \cite{stable} can be adapted to cover at least the case $m<1+p/2$. In this way we extend the blow--up result from \cite{bociulasieckaApplmat} to positive initial energy. Instead of using interpolation theory we adapt a more elementary estimate, used in \cite{FLZ} when $n=1$, to the case $n\ge 1$.
Our main result concerning problem \eqref{2} is the following one.
\begin{thm}\label{theorem1} Let $\alpha\in L^\infty(\Gamma_1)$, $\alpha\ge 0$, $\beta\ge 0$, $$2<p\le 1+2^*/2,\qquad 1<m<1+p/2$$ and $(u_0,u_1)\in W_u$. Then the weak solution $u$ of problem \eqref{2}
blows--up in finite time, that is, there is $T_{max}<\infty$ such that $\|u(t)\|_p\to \infty$ (and so also $\|u(t)\|_\infty\to \infty$
and $\|\nabla u(t)\|_2\to \infty$) as $t\to T_{max}^-$. \end{thm}
\begin{rem} The meaning of weak solutions will be made precise in the sequel. Moreover, it will be clear (after the proof) that the parameter range $2<p\le 1+2^*/2$ in Theorem~\ref{theorem1} can be extended to $2<p\le 2^*$, but when $1+2^*/2<p\le 2^*$ we merely obtain global nonexistence of weak solutions, since a local existence theorem is missing. \end{rem}
The paper is organized as follows. In Section~\ref{section2} we recall (from \cite{stable}) our main assumptions, local existence and potential--well theories for problem \eqref{P}, with some additional remarks. Section~\ref{section3} is devoted to stating and proving our main result, that is Theorem~\ref{theorem 4}, on problem \eqref{P}. In Section~\ref{section4} we show that, when applying Theorem~\ref{theorem 4} to problem~\eqref{2}, we obtain Theorem~\ref{theorem1}.
\section{Preliminaries}\label{section2}
\noindent In this section we recall some material from \cite{stable}, referring to the quoted paper for most of the proofs. We start by recalling the assumptions on $Q$ and $f$ needed for local existence. \numera \item \label{Q1} $Q$ is a Carath\'eodory real function in $\Gamma_1\times\mathbb R$, and there are $\alpha\in L^1(\Gamma_1)$, $\alpha \ge 0$ \begin{footnote}{the integrability of $\alpha$ on $\Gamma_1$, although not explicitly assumed in \cite[Theorem~4]{stable}, was tacitly used there.}\end{footnote},
and an exponent $m>1$ such that, if $m\ge 2$,
$$\left(Q(x,v)-Q(x,w)\right)(v-w)\ge \alpha(x)|v-w|^m$$ for all $x\in\Gamma_1$, $v,w\in\mathbb R$, while, if $1<m<2$, $$\left(Q(x,v)-Q(x,w)\right)(v-w)\ge \alpha(x)
\left||v|^{m-2}v-|w|^{m-2}w\right|^{m'}$$ for all $x\in\Gamma_1$, $v,w\in\mathbb R$, where $1/m+1/m'=1$;
\item \label{Q2} there are $1<\mu\le m$ and $c_1>0$ such that
$$|Q(x,v)|\le c_1\alpha(x)\left(|v|^{\mu-1}+|v|^{m-1}\right)$$ for all $x\in\Gamma_1$, $v\in\mathbb R$.
\finenumera 2
\begin{rem}\label{remnew1} The model nonlinearity \begin{equation} \label{Q0}
Q_0(x,v)=\alpha(x)\left(|v|^{\mu-2}v+|v|^{m-2}v\right),\qquad 1<\mu\le m,\quad \alpha\ge 0,\quad \alpha\in L^1(\Gamma_1), \end{equation} satisfies \assref{Q1} and \assref{Q2}. Indeed, while \assref{Q2} is trivially verified, assumption \assref{Q1} holds, when $m\ge2$, up to multiplying $\alpha$ by an inessential positive constant, due to the elementary inequality \begin{equation}\label{elementary2}
(|v|^{m-2}v-|w|^{m-2}w)(v-w)\ge \text{\upshape Const.} |v-w|^m,\qquad v,w\in\mathbb R.
\begin{footnote}{which is a consequence of the boundedness of the real function $(|t-1|^{m-2}(t-1))/(|t|^{m-2}t-1)$ when $m\ge 2$. }\end{footnote} \end{equation} When $1<m<2$ we get \assref{Q1} by applying \eqref{elementary2} to
$m'>2$, $|v|^{m-2}v$ and $|w|^{m-2}w$. \end{rem}
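To spell out the last step: setting $V=|v|^{m-2}v$ and $W=|w|^{m-2}w$, since $(m-1)(m'-1)=1$ one has $|V|^{m'-2}V=v$ and $|W|^{m'-2}W=w$, so that \eqref{elementary2} with exponent $m'$ yields
$$\left(|v|^{m-2}v-|w|^{m-2}w\right)(v-w)=\left(|V|^{m'-2}V-|W|^{m'-2}W\right)(V-W)\ge \text{\upshape Const.}\,|V-W|^{m'}=\text{\upshape Const.}\left||v|^{m-2}v-|w|^{m-2}w\right|^{m'},$$
which is exactly \assref{Q1} for $1<m<2$, up to an inessential constant.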
We note, for future use, some consequences of \assref{Q1}--\assref{Q2}. First of all it follows that \begin{equation}\label{low}
Q(x,v)v\ge \alpha (x)|v|^m \end{equation} for all $x\in\Gamma_1$, $v\in\mathbb R$. Moreover $Q(x,\cdot)$ is increasing for all $x\in\Gamma_1$, and $Q(\cdot,0)\equiv 0$. Then, after setting \begin{equation}\label{PHI} \Phi(x,u)=\int_0^uQ(x,s)\,ds, \end{equation} we obtain \begin{equation}\label{philow}
\Phi(x,v)\ge \frac {\alpha(x)}m |v|^m\qquad\text{for all $x\in \Gamma_1$, $v\in\mathbb R$}. \end{equation}
We now introduce some notation. When $1<q\le \infty$ we denote by $L^q(\Gamma,\alpha)$ the $L^q$ space on $\Gamma$ associated to the measure $\mu_\alpha$ defined by $\mu_\alpha(A)=\int_A\alpha(x)\,d\sigma $ for any measurable subset $A$ of $\Gamma$, while $L^q(\Gamma)$ denotes the standard $L^q$ space, that is $L^q(\Gamma)=L^q(\Gamma,1)$. The analogous convention will be adopted on $\Gamma_1$ and in $(0,T)\times\Gamma_1$ for $T>0$ (in the latter case the measure $\mu_\alpha$ being replaced by $dt\times\mu_\alpha$). Moreover we shall write for simplicity \begin{alignat*}2
&\|\cdot\|_{q,\Gamma,\alpha}:=\|\cdot\|_{L^q(\Gamma,\alpha)},\qquad
&&\|\cdot\|_{q,\Gamma}:=\|\cdot\|_{L^q(\Gamma)},\\
&\|\cdot\|_{q,\Gamma_1,\alpha}:=\|\cdot\|_{L^q(\Gamma_1,\alpha)},\qquad
&&\|\cdot\|_{q,\Gamma_1}:=\|\cdot\|_{L^q(\Gamma_1)}. \end{alignat*} Our assumption concerning $f$ is the following one: \numeraf \item \label{F1} $f$ is a Carath\'eodory real function in $\Omega\times\mathbb R$, $f(x,0)=0$ and there are $p>2$ and $c_2>0$ such that
$$|f(x,u)-f(x,v)|\le c_2 |u-v|(1+|u|^{p-2}+|v|^{p-2})$$ for all $x\in\Omega$, $u,v\in\mathbb R$.
\finenumeraf 1
\begin{rem}\label{remnew2} The model nonlinearity \begin{equation}\label{f0}
f_0(x,u)=a|u|^{q-2}u+b|u|^{p-2}u,\qquad 2\le q<p,\qquad a,b\in\mathbb R, \end{equation} satisfies \fassref{F1}, due to the elementary inequality
$$\big||u|^{s-2}u-|v|^{s-2}v\big|\le \text{\upshape Const.} |u-v|(1+|u|^{s-2}+|v|^{s-2}),\qquad u,v\in\mathbb R,$$ which holds for $s\ge 2$. \end{rem} We make precise the definition of weak solution used (somewhat implicitly) in \cite{stable}. \begin{definition}\label{def1} When \assref{Q1},\assref{Q2}, \fassref{F1} hold and $2<p\le 2^*$ we say that $u$ is a weak solution of problem \eqref{P} in $[0,T]$, $T>0$, if \renewcommand{(\roman{enumi})}{(\alph{enumi})} \begin{enumerate} \item $u\in C([0,T];H^{1}_{\Gamma_{0}}(\Omega))\cap C^1([0,T];L^{2}(\Omega));$
\item the spatial trace of $u$ on $(0,T)\times\Gamma$ (which exists by the trace theorem) has a distributional time derivative on $(0,T)\times\Gamma_{1}$, belonging to \mbox{$L^{m}((0,T)\times\Gamma_{1},\alpha)$};
\item for all $\varphi\in C([0,T];H^{1}_{\Gamma_{0}}(\Omega))\cap C^{1}([0,T];L^{2}(\Omega))\cap L^{m}((0,T)\times\Gamma_1,\alpha )$ and for almost all $t\in [0,T]$ the distribution identity \begin{equation}\label{ovo}
\int_\Omega u_t\varphi\Big|_0^t=\int_0^t\int_\Omega u_t\varphi_t-\nabla u\nabla\varphi +\int_0^t\int_{\Omega} f(\cdot,u)\varphi-\int_0^t\int_{\Gamma_1} Q(\cdot,u_t)\varphi \end{equation}
holds true; \item $u(0)=u_{0}$ and $u_{t}(0)=u_{1}$. \end{enumerate} We say that $u$ is a weak solution of problem \eqref{P} in $[0,T)$ if $u$ is a weak solution in $[0,T']$ for all $T'\in (0,T)$. Finally we say that a weak solution $u$ in $[0,T)$ is maximal if $u$ cannot be seen as a restriction of a weak solution in $[0,T')$, $T<T'$. \end{definition}
\begin{rem} The term $\int_0^t\int_{\Omega} f(\cdot,u)\varphi$ in \eqref{ovo} makes sense by \fassref{F1}, the continuity of Nemitskii operators and the Sobolev embedding theorem. To recognize that the last term on the right--hand side of \eqref{ovo} makes sense requires some attention. At first we note that, by (b), we have $\alpha^{1/m}u_t\in L^m((0,T)\times\Gamma_1)$ and then
$\alpha^{1/m'}|u_t|^{m-1}\in L^{m'}((0,T)\times\Gamma_1)$. Since $\varphi \in L^{m}((0,T)\times\Gamma_1,\alpha )$ we have $\alpha^{1/m}\varphi \in L^{m}((0,T)\times\Gamma_1)$. Consequently
$\alpha |u_t|^{m-1}\varphi\in L^1((0,T)\times\Gamma_1)$. Now, since $\mu_\alpha(\Gamma_1)<\infty$ and $\mu\le m$, we have $L^m((0,T)\times\Gamma_1,\alpha)\subset L^{\mu}((0,T)\times\Gamma_1,\alpha)$, hence we can repeat previous arguments with $\mu$ instead of $m$ to show that $\alpha
|u_t|^{\mu-1}\varphi\in L^1((0,T)\times\Gamma_1)$. Consequently, by \assref{Q2} we get $Q(\cdot,u_t)\varphi\in L^1((0,T)\times\Gamma_1)$. \end{rem}
\begin{rem} \label{shift} We point out, for the sake of clarity, the following facts. Since the equation and boundary conditions in problem \eqref{P} are autonomous, the choice of the initial time as zero is purely conventional. Consequently, for any $a\in\mathbb R$, we shall speak of weak solutions in $[a,a+T]$, $T>0$, of the problem \begin{equation}\label{Pa} \begin{cases} u_{tt}-\Delta u=f(x,u) \qquad &\text{in $(a,\infty)\times\Omega$,}\\
u=0\qquad &\text{on $(a,\infty)\times\Gamma_0$,}\\ \partial_\nu u=-Q(x,u_t) \qquad &\text{on $(a,\infty)\times\Gamma_1$,}\\
u(a,x)=u_0(x),\qquad u_t(a,x)=u_1(x)&
\text{in $\Omega$,} \end{cases}\end{equation} when (a--d) in Definition~\ref{def1} hold true with $0$ and $T$ respectively replaced by $a$ and $a+T$. Moreover \renewcommand{(\roman{enumi})}{\roman{enumi})} \begin{enumerate}
\item the function $u$ is a weak solution of \eqref{P} in $[0,T]$ if and only if the time shifted function $\tau_a u$ defined by \begin{equation}\label{ua}
(\tau_a u)(t):=u(t-a) \end{equation}
is a weak solution of \eqref{Pa} in $[a,a+T]$;
\item let $b\in \mathbb R$, $0<T_1<T_2$, $u_1$ be a weak solution in $[b,b+T_1]$ of problem \eqref{Pa} with $a=b$ and $u_2$ be a weak solution in $[b+T_1,b+T_2]$ of problem \eqref{Pa} with $a=b+T_1$. Define $u$ in $[b,b+T_2]$ by $u(t)=u_1(t)$ for $t\in [b,b+T_1]$ and $u(t)=u_2(t)$ for $t\in (b+T_1,b+T_2]$. Then $u$ is a weak solution of \eqref{Pa} with $a=b$ in $[b,b+T_2]$ if and only if $u_1(b+T_1)=u_2(b+T_1)$ and $(u_1)_t(b+T_1)=(u_2)_t(b+T_1)$. \end{enumerate} \end{rem}
We now recall \cite[Theorem~4]{stable}.
\begin{thm}\label{localexistencetheorem} Suppose that \assref{Q1}--\assref{Q2} and \fassref{F1} hold, that $2<p\le 1+2^*/2$, and $u_0\in H^1_{\Gamma_0}(\Omega)$, $u_1\in L^2(\Omega)$. Then there is $T>0$ and a unique weak solution of \eqref{P} in $[0,T]$. Moreover $u$ satisfies the energy identity \begin{equation}\label{EI} E(t)-E(s)=-\int_s^t\int_{\Gamma_1}Q(\cdot,u_t)u_t \end{equation} for $0\le s\le t$, where \begin{align}
E(t)=E(u(t),u_t(t))=&\frac 12 \|u_t(t)\|_2^2+\frac 12\|\nabla u(t)\|_2^2- \int_\Omega F(\cdot,u(t)),\label{eff}\\ \intertext{and} F(x,s)=&\int_0^s f(x,\tau)\,d\tau\qquad\text{for $x\in\Omega$, $s\in\mathbb R$.}\label{F} \end{align} \end{thm}
\begin{rem} Actually Theorem~\ref{localexistencetheorem} was stated in \cite{stable} for regular (i.e. $C^1$) domains, but one immediately sees that $\Omega$ can also be disconnected (even if this case is not of particular interest). \end{rem}
As a consequence of the arguments used in the proof of Theorem~\ref{localexistencetheorem} we obtain the following continuation principle, which was used in the quoted paper without an explicit proof. For the sake of clarity we give its proof here.
\begin{thm}\label{continuation} Suppose that \assref{Q1}--\assref{Q2} and \fassref{F1} hold, that $2<p\le 1+2^*/2$, and $u_0\in H^1_{\Gamma_0}(\Omega)$, $u_1\in L^2(\Omega)$. Then \eqref{P} has a unique weak maximal solution $u$ in $[0,T_{max})$. Moreover the following alternative holds: \renewcommand{(\roman{enumi})}{(\roman{enumi})} \begin{enumerate}
\item either $T_{max}=\infty$;
\item or $T_{max} < \infty$ and $\lim\limits_{t \rightarrow T^{-}_{max}}
\|u(t)\|_{H^{1}_{\Gamma_{0}}(\Omega)}+\|u_t(t)\|_2 = \infty$. \end{enumerate} \end{thm} \begin{proof} By the arguments in the proof of Theorem~\ref{localexistencetheorem} it easily follows that the assured existence time $T$ depends on the initial data $u_0$ and $u_1$ as a decreasing function of
$\|u_0\|_{H^1_{\Gamma_0}(\Omega)}^2+\|u_1\|_2^2$, which is in the sequel denoted by
$$T^*=T^*(\|u_0\|_{H^1_{\Gamma_0}(\Omega)}^2+\|u_1\|_2^2).$$ From this remark the statement follows in a standard way. More precisely we first construct the unique maximal solution $u$ as follows. We set $\cal{U}$ to be the set of all weak solutions of \eqref{P} in right--open intervals $[0,T')$, $T'>0$.
Then we claim that for any pair $u$, $v$ of elements of $\cal{U}$, weak solutions respectively in $[0,T_u)$ and $[0,T_v)$, $u=v$ in the intersection $[0,T)$ of their domains. To prove our claim we set \begin{equation}\label{t0} t_0:=\sup\{t\in [0,T): u(s)=v(s)\quad\text{for all}\quad s\in [0,t)\}, \end{equation} so $t_0\le T$. Now we suppose by contradiction that $t_0<T$. Since $$u,v\in C([0,t_0];H^{1}_{\Gamma_{0}}(\Omega))\cap C^1([0,t_0];L^{2}(\Omega))$$ we easily get that $u(t_0)=v(t_0):=v_0$ and $u_t(t_0)=v_t(t_0):=v_1$. Now since $u,v$ are weak solutions (see Remark~\ref{shift}) of \eqref{Pa} with $a=t_0$ and initial data $v_0$, $v_1$, we see that $\tau_{-t_0}u$ and $\tau_{-t_0}v$ (defined in \eqref{ua}) are both weak solutions in $[0,T-t_0)$ of \eqref{P} with initial data $v_0$ and $v_1$. Hence, by the uniqueness assertion in Theorem~\ref{localexistencetheorem} we get that $\tau_{-t_0}u=\tau_{-t_0}v$ in $[0,T'']$,
$T''=T^*(\|v_0\|_{H^1_{\Gamma_0}(\Omega)}^2+\|v_1\|_2^2)>0$. Consequently $u=v$ in $[0,t_0+T'']$, contradicting \eqref{t0}. Hence $t_0=T$ proving our claim. To construct the maximal weak solution we define $u$ to coincide with any element of $\cal{U}$ in the union of the domains.
We now have to prove the alternative in the statement. We suppose, by contradiction, that \begin{equation}\label{alternative} T_{\max} < \infty \quad
\mbox{and } \liminf_{t\rightarrow T^{-}_{\max}}\left(\|u(t)\|_{H^{1}_{\Gamma_{0}}(\Omega)}+\|u_t(t)\|_2\right) < \infty. \end{equation}
Then there is a sequence $t_{n}\rightarrow T^{-}_{\max}$ such that
$\left\|u(t_n)\right\|_{H^{1}_{\Gamma_{0}}(\Omega)}$ and
$\|u_t(t_n)\|_2$ are bounded, so $M:=\sup\limits_n
\left(\|u(t_n)\|_{H^1_{\Gamma_0}(\Omega)}^2+\|u_t(t_n)\|_2^2\right)<\infty$. By Theorem~\ref{localexistencetheorem} and the monotonicity of $T^*$ asserted above, for each $n\in\mathbb N$ the problem \eqref{P} with initial data $u(t_n)$ and $u_t(t_n)$ has a unique weak solution $v_n$ in $[0,T_1]$, $T_1=T^*(M)$. Hence, for each $n\in\mathbb N$, $w_n=\tau_{t_n}v_n$ is a weak solution of \eqref{Pa} in $[t_n,t_n+T_1]$ with $a=t_n$ and initial data $u(t_n)$ and $u_t(t_n)$. It follows (see Remark \ref{shift}) that $u$ can be extended to a weak solution of \eqref{P} in $[0,t_n+T_1]$, contradicting the maximality of $u$ for $n$ large enough. \end{proof} We now recall from \cite{stable} the additional assumption on $f$ needed to set up the potential well theory. \numeraf \item \label{F2} There is $c_3>0$ such that
$$F(x,u)\le \frac {c_3}p|u|^p$$ for all $x\in\Omega$ and $u\in\mathbb R$, where $F$ is the primitive of $f$ defined in \eqref{F}. \finenumeraf 1
\begin{rem}\label{remnew3} It is clear, recalling Remark~\ref{remnew2}, that $f_0$ given in \eqref{f0} satisfies \fassref{F1} and \fassref{F2} when $2\le q<p$, $a\le 0$ and $b\in \mathbb R$. \end{rem}
We set, when $2<p\le 2^*$, \begin{equation} K_0=\sup_{u\in H^1_{\Gamma_0}(\Omega),\,\,u\not=0} \frac
{\int_\Omega F(\cdot,u)}{\|\nabla u\|_2^p}. \label{K0}\end{equation} By \fassref{F1} and \fassref{F2}, we have $0\le K_0\le p^{-1}c_3B_1^p$, where $B_1$ is the optimal constant of the Sobolev embedding $H^1_{\Gamma_0}(\Omega)\hookrightarrow L^p(\Omega)$, i.e. \begin{equation}\label{Poincare} B_1=\sup\limits_{u\in H^1_{\Gamma_0}(\Omega),\,\,u\not=0} \dfrac
{\|u\|_p}{\|\nabla u\|_2}. \end{equation}
We denote
\begin{footnote}{this is the correct form of $\lambda_1$, which is the unique positive maximum point of the function $\lambda^2/2-K_0\lambda^p$,
incorrectly typeset in \cite{stable}} \end{footnote} \begin{gather}\label{defE1} \lambda_1=(1/pK_0)^{1/(p-2)}, \qquad E_1=\left(\frac 12-\frac 1p\right)\lambda_1^2, \\
\intertext{when $K_0>0$, while $\lambda_1=E_1=+\infty$ when $K_0=0$, and} \label{defW}W=\{(u_0,u_1)\in H^1_{\Gamma_0}(\Omega)\times L^2(\Omega): E(u_0,u_1)<E_1\quad \text{and}\quad \|\nabla u_0\|_2>\lambda_1\} \intertext{where, in accordance with \eqref{eff},}
E(u_0,u_1):=\frac 12 \|u_1\|_2^2+\frac 12\|\nabla u_0\|_2^2- \int_\Omega F(\cdot,u_0). \end{gather} Clearly when $K_0=0$ then $W=\emptyset$, so what follows is of interest only when $K_0>0$. On the other hand when $K_0=0$ all weak solutions are global (see \cite[p. 389]{stable}). We recall the following result (\cite[Lemma~2, (ii)]{stable}).
\begin{lem}\label{lemma1}
Suppose that the assumptions of Theorem~\ref{localexistencetheorem}, together with \fassref{F2}, hold true. Let $u$ be the maximal solution of \eqref{P}. Assume moreover that $(u_0,u_1)\in W$. Then there is $\lambda_2>\lambda_1$ such that $\|\nabla u(t)\|_2\ge
\lambda_2$ and $\|u(t)\|_p\ge (pK_0/c_3)^{1/p} \lambda_2$ for all $t\in [0,T_{\text{max}})$. \end{lem}
Our final assumptions are the following ones. \numera \item There is $c_4>0$ such that
$$Q(x,v)v\ge c_4\alpha(x) \left(|v|^\mu+|v|^m\right),\qquad 1<\mu\le m,$$\label{Q3} for all $x\in\Gamma_1$, $v\in \mathbb R$; \finenumera 1 \numeraf \item there is $\varepsilon_0>0$ such that for all $\varepsilon\in (0,\varepsilon_0]$ there exists $c_5=c_5(\varepsilon)>0$ such that
$$f(x,u)u-(p-\varepsilon)F(x,u)\ge c_5 |u|^p$$ for all $x\in\Omega$, $u\in\mathbb R$. \label{F3} \finenumeraf 1
\begin{rem}\label{finalremark} Clearly $Q_0$ given in \eqref{Q0} satisfies, besides \assref{Q1}--\assref{Q2}\begin{footnote}{as noted in Remark~\ref{remnew1}}\end{footnote}, also \assref{Q3} with $c_4=1$. Moreover \assref{Q3} immediately follows from \eqref{low} when $m=\mu$, while it is not a consequence of \assref{Q1}--\assref{Q2} when $\mu<m$. Next $f_0$ given in \eqref{f0} satisfies, besides \fassref{F1}--\fassref{F2}\begin{footnote}{see Remark~\ref{remnew3}}\end{footnote}, also \fassref{F3} when $a\le 0$ and $b>0$, with $\varepsilon_0=p-q>0$ and $c_5(\varepsilon)=b\varepsilon/p$. Note that \fassref{F3} implies the standard growth condition \begin{equation}\label{quadr} f(x,u)u\ge p F(x,u)\qquad\text{for all $x\in\Omega$, $u\in \mathbb R$.} \end{equation} Finally it is worth observing that \fassref{F1}--\fassref{F2} and \eqref{quadr} cannot be responsible for a blow--up phenomenon, since
$f\equiv 0$ satisfies them and blow--up does not occur in this case. \end{rem}
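For the model nonlinearity $f_0$ in \eqref{f0} with $a\le 0$ and $b>0$ the verification of \fassref{F3} is a direct computation: denoting by $F_0$ the primitive of $f_0$ as in \eqref{F}, for every $\varepsilon\in (0,\varepsilon_0]$ with $\varepsilon_0=p-q$ one has
$$f_0(x,u)u-(p-\varepsilon)F_0(x,u)=\frac {a(q-p+\varepsilon)}q|u|^q+\frac {b\varepsilon}p|u|^p\ge \frac {b\varepsilon}p|u|^p,$$
since $a\le 0$ and $q-p+\varepsilon\le 0$, giving $c_5(\varepsilon)=b\varepsilon/p$ as claimed in Remark~\ref{finalremark}.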
\section{Main result}\label{section3}
\noindent This section is devoted to state and prove our main result. We start with a key estimate. \begin{lem}\label{lemma2} Let $1<m\le 1+p/2$ and $2<p\le 2^*$. Then there is a positive constant $C_1=C_1(m,p,\Omega, \Gamma_0)$ such that \begin{equation}\label{mainestimate}
\|u\|_{m,\Gamma_1}^m\le C_1 \|u\|_p^{m-1}\|\nabla
u\|_2\qquad \text{for all $u\in H^1_{\Gamma_0}(\Omega)$}. \end{equation} \end{lem}
\begin{proof} We first consider the auxiliary non--homogeneous Neumann problem \begin{equation}\label{phi} \begin{cases}-\Delta w+w=0\qquad &\text{in $\Omega$}\\ \partial_\nu w=1\qquad &\text{on $\Gamma$.} \end{cases} \end{equation} By the Riesz--Fr\'echet theorem problem \eqref{phi} has a unique weak solution, i.e. $w\in H^1(\Omega)$ such that \begin{equation}\label{weakneumann}
\int_\Omega \nabla w\nabla \phi+\int_\Omega w\phi=\int_\Gamma
\phi\qquad\text{for all $\phi\in H^1(\Omega)$.} \end{equation} Moreover, since $\Omega$ is bounded and $C^{1,1}$, by the Agmon--Douglis--Nirenberg regularity estimate (here used in the form stated in \cite[Theorem~2.4.2.7, p. 126]{grisvard}), we have $w\in W^{2,q}(\Omega)$ for all $q>1$. It follows, by Morrey's Theorem (\cite[Corollary~9.15, p. 285]{brezis2}), that $w\in C^1(\overline{\Omega})$.
Now let $u\in H^1(\Omega)$. We claim that $|u|^m\in W^{1,1}(\Omega)$. Since $m\le 2^*$, by the Sobolev embedding theorem we have $|u|^m\in L^1(\Omega)$. Moreover, by using the chain rule for Sobolev functions (see \cite[Theorem~2.2]{mm}), we get that $|u|^m$
possesses a weak gradient $\nabla (|u|^m)=m|u|^{m-2}u\,\nabla u$. Since $m\le 1+2^*/2$, using the Sobolev embedding theorem again, we have $|u|^{m-2}u\in L^2(\Omega)$, hence by H\"{o}lder's inequality we get that $\nabla (|u|^m)\in [L^1(\Omega)]^n$ and
$${\|\nabla (|u|^m)\|}_1\le m\left(\int_\Omega
|u|^{2(m-1)}\right)^{1/2}{\|\nabla u\|}_2.$$ Since $2(m-1)\le p$ and $\Omega$ is bounded it follows \begin{equation}\label{Iroman}
{\|\nabla (|u|^m)\|}_1\le m|\Omega|^{\frac 12 -\frac
{m-1}p}\|u\|_p^{m-1}\|\nabla u\|_2, \end{equation}
where $|\Omega|$ denotes the Lebesgue measure of $\Omega$. Our claim is then proved. Consequently (see \cite[Corollary~9.8~p. 277]{brezis2}) there is a sequence $(\phi_n)_n$ in
$C^\infty_c(\mathbb R^n)$ such that ${\phi_n}_{|\Omega}\to |u|^m$ in $W^{1,1}(\Omega)$. By the trace theorem it follows that
${\phi_n}_{|\Gamma}\to {|u|^m}_{|\Gamma}$ in $L^1(\Gamma)$. Since in particular $\phi_n\in H^1(\Omega)$ then \eqref{weakneumann} holds with $\phi=\phi_n$ for $n\in\mathbb N$.
Since $w,|\nabla w|\in L^\infty(\Omega)$ we can pass to the limit as $n\to\infty$ and get \begin{equation}\label{IIroman}
\int_\Omega \nabla w \nabla(|u|^m)+\int_\Omega w |u|^m=\int_\Gamma
|u|^m. \end{equation} Combining \eqref{Iroman} and \eqref{IIroman} we have
$$\|u\|_{m,\Gamma}^m\le \|w\|_\infty \|u\|_m^m+m\|\nabla w\|_\infty |\Omega|^{\frac 12 -\frac
{m-1}p}\|u\|_p^{m-1}\|\nabla u\|_2$$ for all $u\in H^1(\Omega)$. Since $m\le p\le 2^*$ and $\Omega$ is bounded, we consequently get by using H\"{o}lder's inequality again
$$\|u\|_{m,\Gamma}^m\le \left(\|w\|_\infty |\Omega|^{1 -\frac mp} \|u\|_p+m\|\nabla w\|_\infty |\Omega|^{\frac 12 -\frac
{m-1}p}\|\nabla u\|_2\right)\|u\|_p^{m-1}.$$ By restricting now to $u\in H^1_{\Gamma_0}(\Omega)$ we use the Poincar\'e type inequality recalled above to get \eqref{mainestimate}, where $C_1$ is given by
$$C_1=\|w\|_\infty |\Omega|^{1 -\frac mp} B_1+m\|\nabla w\|_\infty |\Omega|^{\frac 12 -\frac {m-1}p},$$ where $B_1$ is the positive constant defined in \eqref{Poincare}. Since $w$ depends only on $\Omega$, the proof is complete. \end{proof}
We can finally state our main result.
\begin{thm}\label{theorem 4} Suppose that \assref{Q1}--\assref{Q3} and \fassref{F1}--\fassref{F3} hold, that $\alpha\in L^\infty(\Gamma_1)$, $$2<p\le 1+2^*/2,\qquad 1<m<1+p/2,$$ and $(u_0,u_1)\in W$. Then for any solution of \eqref{P} we have
$T_{max} < \infty$ and $\|u(t)\|_p\to \infty$ (so also
$\|u(t)\|_\infty\to \infty$ and $\|\nabla u(t)\|_2\to \infty$) as $t\to T_{max}^-$. \end{thm}
\begin{proof} The proof is a variant of the proof of \cite[Theorem~7]{stable}, where we use Lemma~\ref{lemma2} instead of the estimate \cite[(50)]{stable}. Nevertheless, since the proof of \cite[Theorem~7]{stable} was itself a variant of the proof of \cite[Theorem~2]{blowup}, we give in the sequel, for the sake of clarity, a self--contained proof.
We first claim that our statement reduces to prove that problem \eqref{P} cannot have global weak solutions, i.e. weak solutions in the whole of $[0,\infty)$. Indeed, once this fact is proved, then we must have, by Theorem~\ref{continuation}, that $T_{\text{max}}<\infty$ and \begin{equation}\label{new1}
\|u(t)\|_{H^{1}_{\Gamma_{0}}(\Omega)}+\|u_t(t)\|_2 \to \infty\qquad\text{as $t\to T_{max}^-$}. \end{equation} Hence, to prove our claim, we only have to show that
$\|u(t)\|_p\to \infty$ as $t\to T_{max}^-$. We first note that, by \eqref{low} and \eqref{EI}, the energy function $E$ (defined in \eqref{eff}) is decreasing. Hence, by \eqref{eff}, \begin{equation}\label{new2}
\frac 12 \|\nabla u(t)\|_2^2 +\frac 12
\|u_t(t)\|_2^2-\int_\Omega F(x, u(t))\le E_0 \end{equation} for $t\in [0,T_{\text{max}})$, where $E_0:=E(u_0,u_1)$. Hence, by \fassref{F2}, we have \begin{equation}\label{new3}
\frac 12 \|\nabla u(t)\|_2^2 +\frac 12
\|u_t(t)\|_2^2-\frac {c_3}p\|u(t)\|_p^p\le E_0 \end{equation}
for $t\in [0,T_{\text{max}})$. Consequently, by \eqref{new1}, we get that $\|u(t)\|_p\to \infty$ as well, thus concluding the proof of our claim.
We now have to prove that problem \eqref{P} cannot have global solutions. We suppose by contradiction that $T_{\text{max}}=\infty$. We fix $E_2\in (E_0,E_1)$ and we set \begin{equation}\label{calH} \cal{H}(t)=\cal{H}(u(t),u_t(t))=E_2-E(u(t),u_t(t)). \end{equation} Since, as noted before, $E$ is decreasing, the function $\cal{H}$ is increasing and $\cal{H}(t)\ge \cal{H}_0:=\cal{H}(0)=E_2-E_0>0$. In the sequel of the proof we shall omit, for simplicity, the explicit dependence on time of $u$ and $u_t$ from the notation. By Lemma~\ref{lemma1} we have
$$\cal{H}(t)\le E_2-\frac 12 \|\nabla u\|_2^2+ \int_\Omega F(\cdot,u)\le E_1-\frac 12\lambda_1^2+ \int_\Omega F(\cdot,u)$$ and then, by \eqref{defE1} and \fassref{F3}, \begin{equation}\label{Hlow}
\cal{H}(t)\le \int_\Omega F(\cdot,u)\le \dfrac {c_3}p\|u\|_p^p. \end{equation}
We now introduce, as in \cite{georgiev} and \cite{levserr}, the main auxiliary function which shows the blow--up properties of $u$, i.e. \begin{equation}\label{Z}
\cal{Z}(t)=\cal{H}^{1-\eta}(t)+\xi\int_\Omega u_t u, \end{equation}
where $\xi>0$ and $\eta\in (0,1)$ are constants to be fixed later. In order to estimate the derivative of $\cal{Z}$ it is convenient to estimate \begin{equation}\label{I}I_1:=\frac d{dt}\int_\Omega u_tu. \end{equation} Using Definition~\ref{def1} we can take $\varphi=u$ in \eqref{ovo} and get \begin{equation}\label{II}
I_1=\|u_t\|_2^2-\|\nabla u\|_2^2+\int_\Omega f(\cdot,u)u-\int_{\Gamma_1}Q(\cdot,u_t)u \end{equation} almost everywhere in $(0,\infty)$. Now we claim that there are positive constants $c_6$ and $c_7$, depending on $p$ and $K_0$, such that \begin{equation}\label{III}
I_1\ge 2\|u_t\|_2^2+c_6\|u\|_p^p+c_7\|\nabla u\|_2^2+2\cal{H}(t)-\int_{\Gamma_1}Q(\cdot,u_t)u \end{equation} in $[0,\infty)$. Using \eqref{eff} and \eqref{calH} we can write, for any $\varepsilon>0$, the identity \eqref{II} in the form \begin{multline}\label{IIbis}
I_1=\tfrac12 (p+2-\varepsilon)\|u_t\|_2^2+\tfrac 12(p-2-\varepsilon)\|\nabla u\|_2^2\\+\int_\Omega [f(\cdot,u)u-(p-\varepsilon)F(\cdot,u)]+(p-\varepsilon)\cal{H}(t)-(p-\varepsilon)E_2-\int_{\Gamma_1}Q(\cdot,u_t)u. \end{multline} Using \fassref{F3} for $0<\varepsilon<\min\{\varepsilon_0,p-2\}$ we consequently get \begin{align*}
I_1\ge &2\|u_t\|_2^2+\int_\Omega [f(\cdot,u)u -(p-\varepsilon)F(\cdot,u)] +\tfrac 12(p-\varepsilon-2)\|\nabla u\|_2^2
-(p-\varepsilon)E_2\\ & \hskip 6.6truecm +(p-\varepsilon)\cal{H}(t) -\int_{\Gamma_1}Q(\cdot,u_t)u\\
\ge &2\|u_t\|_2^2+c_5(\varepsilon)\|u\|_p^p+\tfrac 12(p-\varepsilon-2)\|\nabla u\|_2^2 -(p-\varepsilon)E_2+2\cal{H}(t)-\int_{\Gamma_1}Q(\cdot,u_t)u. \end{align*} By Lemma~\ref{lemma1} \begin{gather*}
\tfrac 12 (p-\varepsilon-2)\|\nabla u\|_2^2-(p-\varepsilon)E_2
\ge c_7(\varepsilon)\|\nabla u\|_2^2+c_8(\varepsilon),\\ \intertext{where} c_7(\varepsilon)=\tfrac 12 (p-\varepsilon-2)\left(1-\lambda_1^2/\lambda_2^2\right)\qquad\text{and}\quad c_8(\varepsilon)=\tfrac 12(p-\varepsilon-2)\lambda_1^2-(p-\varepsilon)E_2. \end{gather*} Clearly $c_7(\varepsilon)>0$ and, as $\varepsilon\to 0^+$, $$c_8(\varepsilon)\to\frac 12(p-2)\lambda_1^2-pE_2>\tfrac 12(p-2)\lambda_1^2-pE_1=0,$$ so also $c_8(\varepsilon)>0$ for $\varepsilon$ sufficiently small. Fixing a sufficiently small $\varepsilon=\overline{\varepsilon}$ and setting $c_6=c_5(\overline{\varepsilon})$, $c_7=c_7(\overline{\varepsilon})$ we conclude the proof of \eqref{III}.
Now, in order to estimate $I_1$, we estimate the last term in \eqref{III}. Using \assref{Q2}, the H\"{o}lder inequality (with respect to $\mu_\alpha$), and the assumption $\alpha\in L^\infty(\Gamma_1)$ we obtain
$$I_2:=\left|\int_{\Gamma_1}Q(\cdot,u_t)u\right|\le c_1
\|\alpha\|_{\infty,\Gamma_1}\left(\|u_t\|_{\mu,\Gamma_1,\alpha}^{\mu-1}\|u\|_{\mu,\Gamma_1}+
\|u_t\|_{m,\Gamma_1,\alpha}^{m-1}\|u\|_{m,\Gamma_1} \right).$$ Since $\mu\le m$, applying the H\"{o}lder inequality again we get \begin{equation}\label{IIIbis}
I_2\le C_2 \left(\|u_t\|_{\mu,\Gamma_1,\alpha}^{\mu-1}
+\|u_t\|_{m,\Gamma_1,\alpha}^{m-1}\right)\|u\|_{m,\Gamma_1} \end{equation} with
$C_2=C_2\left(\mu,m,c_1,\|\alpha\|_{\infty,\Gamma_1},\sigma(\Gamma_1)\right)>0$. By Lemma~\ref{lemma2} we consequently get \begin{equation}\label{IV}
I_2\le C_3 \left(\|u_t\|_{\mu,\Gamma_1,\alpha}^{\mu-1}
+\|u_t\|_{m,\Gamma_1,\alpha}^{m-1}\right)\|u\|_p^{1-1/m}\|\nabla u\|_2^{1/m} \end{equation} where
$C_3=C_3(\mu,m,p,c_1,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0)>0$. Let us denote $$I_3:=\|u_t\|_{\mu,\Gamma_1,\alpha}^{\mu-1}
\|u\|_p^{1-1/m}\|\nabla u\|_2^{1/m}\qquad\text{and}\quad I_4:=\|u_t\|_{m,\Gamma_1,\alpha}^{m-1} \|u\|_p^{1-1/m}\|\nabla u\|_2^{1/m}.$$ It is convenient to write \begin{equation}\label{V}
I_3=\|u_t\|_{\mu,\Gamma_1,\alpha}^{\mu-1}\|\nabla u\|_2^{1/m}
\|u\|_p^{p\left(\frac 1\mu-\frac 1{2m}\right)} \|u\|_p^{1-\frac 1m-p\left(\frac 1\mu-\frac 1{2m}\right)}. \end{equation} We now apply, for any $\delta>0$, the weighted Young inequality to the first three multiplicands in the right hand side of \eqref{V}, with exponents $p_1=\mu'$, $p_2=2m$ and $p_3=2m\mu/(2m-\mu)$, so that $\frac 1{p_1}+\frac 1{p_2}+\frac 1{p_3}=1$ (note that trivially $p_1,p_2>1$ while $p_3>1$ as $\frac 1{p_3}=\frac 1\mu-\frac 1{2m}\in (0,1)$ since $m\ge\mu>1$). Thus we get the estimate \begin{equation}\label{VI}
I_3\le \left(\delta ^{\frac 1{1-\mu}}\|u_t\|_{\mu,\Gamma_1,\alpha}^\mu+\delta \|\nabla u\|_2^2+\delta \|u\|_p^p\right)\|u\|_p^{1-\frac 1m-p\left(\frac 1\mu-\frac 1{2m}\right)} \end{equation} and, by particularizing it to the subcase $m=\mu$, also the estimate \begin{equation}\label{VIbis}
I_4\le \left(\delta ^{\frac 1{1-m}}\|u_t\|_{m,\Gamma_1,\alpha}^m+\delta \|\nabla u\|_2^2+\delta
\|u\|_p^p\right)\|u\|_p^{1-\frac 1m-\frac p{2m}}. \end{equation}
Moreover, by Lemma~\ref{lemma1} we have $\|u\|_p\ge [c_3(pK_0)^{\frac 2{p-2}}]^{-1/p}$. Hence, since $\mu\le m$ implies $1-\frac 1m-p\left(\frac 1\mu-\frac 1{2m}\right)\le 1-\frac 1m-\frac p{2m}$, we also have \begin{equation}\label{upp}
\|u\|_p^{1-\frac 1m-p\left(\frac 1\mu-\frac 1{2m}\right)}\le
[c_3(pK_0)^{\frac 2{p-2}}]^{\frac 1\mu-\frac 1m} \|u\|_p^{1-\frac 1m-\frac p{2m}}. \end{equation} By combining \eqref{IV} and \eqref{VI}--\eqref{upp} we get \begin{equation}\label{VII} I_2\le C_4\left[S(\delta)\left(
\|u_t\|_{\mu,\Gamma_1,\alpha}^\mu+\|u_t\|_{m,\Gamma_1,\alpha}^m
\right) +\delta \|\nabla u\|_2^2 +\delta
\|u\|_p^p\right]\|u\|_p^{1-\frac 1m-\frac p{2m}} \end{equation} where $S(\delta)=\left(\delta ^{\frac 1{1-\mu}}+\delta ^{\frac 1{1-m}}\right)$ and
$C_4=C_4(\mu,m,p,c_1,c_3,K_0,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0)>0$. Now we set $\overline{\eta}=-\frac 1p \left(1-\frac 1m-\frac p{2m} \right)$. Since $m<1+p/2$, we have $\overline{\eta}>0$. Moreover $\overline{\eta}=\frac 1{2m}-\frac{m-1}{pm}<\frac 1{2m}<1$. By combining \eqref{VII} and \eqref{Hlow} we get \begin{equation}\label{VIII} I_2\le C_5\left[S(\delta)\left(
\|u_t\|_{\mu,\Gamma_1,\alpha}^\mu+\|u_t\|_{m,\Gamma_1,\alpha}^m
\right) +\delta \|\nabla u\|_2^2 +\delta
\|u\|_p^p\right]{\cal{H}}^{-\overline{\eta}}(t) \end{equation} where
$C_5=C_5(\mu,m,p,c_1,c_3,K_0,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0)>0$. Since, by \eqref{EI} and \assref{Q3} we have $$\cal{H}'(t)\ge c_4\left(
\|u_t\|_{\mu,\Gamma_1,\alpha}^\mu+\|u_t\|_{m,\Gamma_1,\alpha}^m\right)$$ and $\cal{H}(t)\ge {\cal H}_0$, by \eqref{VIII} we get, for any $\eta\in (0,\overline{\eta})$, \begin{equation}\label{IX} I_2\le C_6\left[S(\delta){\cal H}'(t){\cal{H}}(t)^{-\eta} +\delta
\|\nabla u\|_2^2 +\delta \|u\|_p^p\right] \end{equation} where
$C_6=C_6(\mu,m,p,c_1,c_3,K_0,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0, {\cal H}_0)>0$. By combining \eqref{III} and \eqref{IX} we have the desired estimate of $I_1$, i.e. \begin{equation}\label{X}
I_1\ge 2\|u_t\|_2^2+(c_6-\delta C_6)\|u\|_p^p+(c_7-\delta C_6)\|\nabla
u\|_2^2+2{\cal{H}}(t)-S(\delta){\cal H}'(t){\cal{H}}^{-\eta}(t).
\end{equation} By making the choice $\delta=\text{min}\{c_6,c_7\}/(2C_6)$ from \eqref{X} we get \begin{equation}\label{XI}
I_1\ge 2\|u_t\|_2^2+\frac{c_6}2\|u\|_p^p+\frac{c_7}2\|\nabla
u\|_2^2+2{\cal{H}}(t)-C_7{\cal H}'(t){\cal{H}}^{-\eta}(t)
\end{equation} where
$C_7=C_7(\mu,m,p,c_1,c_3,K_0,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0, {\cal H}_0)>0$.
By combining \eqref{Z} and \eqref{XI} we get, for any $\eta\in (0,\overline{\eta})$,
$${\cal Z}'(t)\ge (1-\eta-C_7\xi){\cal H}^{-\eta}(t){\cal H}'(t)+2\xi{\cal{H}}(t)+2\xi\|u_t\|_2^2+\frac{\xi c_6}2\|u\|_p^p+\frac{\xi c_7}2\|\nabla
u\|_2^2.$$ We now fix $\eta=\text{min}\left\{\frac{\overline{\eta}}4,\frac{p-2}{4p}\right\}\in(0,1)$ and we restrict to $0<\xi\le(1-\eta)/{C_7}$. Hence, since $\cal{H}'\ge 0$, from the previous estimate it follows that \begin{equation}\label{XII}
{\cal Z}'(t)\ge \xi c_8\left(\|u_t\|_2^2+\|\nabla u\|_2^2+\|u\|_p^p+{\cal H}(t)\right) \end{equation} where $c_8=c_8(p,K_0)>0$. Next, since ${\cal Z}(0)={\cal H}_0^{1-\eta}+\xi \int_\Omega u_0 u_1$, by fixing
$\xi=\xi_0=\xi_0(\mu,m,p,c_1,c_3,K_0,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0,u_0,u_1)>0$ sufficiently small we have ${\cal Z}(0)>0$, hence ${\cal Z}(t)\ge {\cal Z}(0)>0$ by \eqref{XII}. Now we denote $r=1/(1-\eta)$ and $\overline{r}=1/(1-\overline{\eta})$. Since $0<\eta<\overline{\eta}<1$ we have $1<r<\overline{r}$. Now using the Cauchy--Schwarz inequality as well as the elementary inequality $(A+B)^r\le 2^{r-1}(A^r+B^r)$ for $A,B\ge 0$, we have from \eqref{Z}
$${\cal Z}^r(t)\le \left({\cal H}^{1-\eta}(t)+\xi_0 \left|\int_\Omega u_tu\right| \,\right)^r
\le 2^{r-1}\left({\cal H}(t)+\xi_0^r\|u_t\|_2^r \|u\|_2^r\right).$$ We now set $q=2/r=2(1-\eta)$. Since $\eta<\frac 12-\frac 1p<\frac 12$ it follows that $q>1$. We can then apply Young's inequality with exponents $q$ and $q'=\frac{1-\eta}{\frac 12-\eta}$ to get
$${\cal Z}^r(t)\le 2^{r-1}\left({\cal H}(t)+\xi_0^2\|u_t\|_2^2+ \|u\|_2^{\frac 1{\frac 12-\eta}}\right).$$ Now, since ${\frac 1{\frac 12-\eta}}<p$ a further application of Young's inequality yields
$$\|u\|_2^{\frac 1{\frac 12-\eta}}\le 1+\|u\|_2^p$$ and then, as $\Omega$ is bounded and $\cal{H}(t)\ge \cal{H}_0$, by H\"{o}lder inequality we get \begin{equation}\label{XIII}
{\cal Z}^r(t)\le C_8 \left({\cal H}(t)+\|u_t\|_2^2+ \|u\|_p^p\right) \end{equation} where
$C_8=C_8(\mu,m,p,c_1,c_3,K_0,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0,u_0,u_1)>0$. By combining \eqref{XII} and \eqref{XIII}, as $r>1$, we get $${\cal Z}'(t)\ge C_9 {\cal Z}^r(t)\qquad\text{for all $t\in [0,\infty)$}$$ where
$C_9=C_9(\mu,m,p,c_1,c_3,K_0,\|\alpha\|_{\infty,\Gamma_1},\Omega,\Gamma_0,u_0,u_1)>0$. Since $r>1$ this final estimate gives the desired contradiction. \end{proof}
\section{Proof of Theorem~\ref{theorem1}}\label{section4}
This section is devoted to showing that Theorem~\ref{theorem1} is a simple corollary of Theorem~\ref{theorem 4}. We first need to show that, for problem~\eqref{2}, $E_1$ and $W$, as defined in \eqref{defE1}--\eqref{defW}, are nothing but $d$ and $W_u$ (introduced in \eqref{d}--\eqref{Wu}). The proof is an adaptation of the proof of \cite[Lemma~4.1]{fisvit}.
\begin{lem} \label{NUOVO}
Suppose $f(x,u)=|u|^{p-2}u$, $2<p\leq 2^*$, $\sigma(\Gamma_{0})>0$. Then $E_{1}=d$ and $W=W_u$. \end{lem}
\begin{proof}
When $f(x,u)=|u|^{p-2}u$ we have $K_0=\frac 1p B_1^p$, hence \begin{equation}\label{gho1} \lambda_1=B_1^{\frac {-p}{p-2}}\quad\text{and}\quad E_1=\left(\frac 12-\frac 1p\right) B_1^{-2p/(p-2)}. \end{equation} An easy calculation shows that for any $u\in H^{1}_{\Gamma_{0}}(\Omega)\setminus\{0\}$ we have
$$\max\limits_{\lambda > 0}J(\lambda u)=J(\lambda(u)u)=\left(\frac{1}{2}-\frac{1}{p}\right)\left(\frac{\left\|\nabla u\right\|_{2}}{\left\|u\right\|_{p}}
\right)^{2p/(p-2)}\!\!\!\!\!\!\!\!\text{where}\quad
\lambda(u)=\frac{\left\|\nabla
u\right\|^{2/(p-2)}_{2}}{\left\|u\right\|^{p/(p-2)}_{p}}.$$ Hence, by \eqref{Poincare}, $d=\left(\frac 12-\frac 1p\right) B_1^{-2p/(p-2)}$. Combining this with \eqref{gho1} we have $d=E_1$.
In order to show that $W=W_u$ we first prove that $W\subseteq W_u$. Let $(u_0,u_1)\in W$ and suppose, by contradiction, that $K(u_{0})>
0$. Hence $\left\|u_{0}\right\|^{p}_{p}<\left\|\nabla u_{0}\right\|^{2}_{2}$ by \eqref{J}. Moreover, $J(u_{0})\le E(u_0,u_1) < d =E_{1}$ and $\left\|\nabla u_{0}\right\|_{2}> \lambda_{1}$. Then it follows that
\[E_{1}>E(u_0,u_1)\ge J(u_0)>\left(\frac{1}{2}-\frac{1}{p}\right)\left\|\nabla u_{0}\right\|^{2}_{2} >\left(\frac{1}{2}-\frac{1}{p}\right)\lambda^{2}_{1}, \] which contradicts \eqref{defE1}.
To prove that $W_u\subseteq W$, we take $(u_0,u_1)\in W_{u}$. We note that, by \eqref{Poincare}, we have $J(v)\geq h(\|\nabla v\|_2)$ for all $v\in H^1_{\Gamma_0}(\Omega)$, where $h$ is defined by $h(\lambda)=\frac 12 \lambda^2-\frac 1p B_1^p\lambda^p$ for
$\lambda\geq 0$. One easily verifies that $h(\lambda_1)=E_1$. Then, since $J(u_0)\le E(u_0,u_1)<E_1$, we have $\|\nabla u_0\|_2\not=\lambda_1$. Moreover, since $K(u_0)\le 0$, by \eqref{Poincare} we have
$$\|\nabla u_0\|_2^2\le \|u_0\|_p^p\le B_1^p\|\nabla u_0\|_2^p$$ and consequently $\|\nabla u_0\|_2\ge B_1^{-p/(p-2)}=\lambda_1$. Since $\|\nabla u_0\|_2\not=\lambda_1$, it follows that $\|\nabla u_0\|_2> B_1^{-p/(p-2)}=\lambda_1$, concluding the proof. \end{proof}
\begin{rem}\label{variational} When $f(x,u)=|u|^{p-2}u$, $d$ is also equal to the Mountain Pass level associated to the elliptic problem
$$\begin{cases} -\Delta u=|u|^{p-2}u \qquad &\text{in $\Omega$,}\\
u=0\qquad &\text{on $\Gamma_0$,}\\
\partial_\nu u=0 \qquad &\text{on $\Gamma_1$,} \end{cases}$$ that is $d=\inf\limits_{\gamma\in \Lambda}\sup\limits_{t\in [0,1]}J(\gamma(t))$, where $$\Lambda=\{\gamma \in C([0,1];H^1_{\Gamma_0}(\Omega)): \gamma(0)=0,\quad J(\gamma(1))<0\}.$$ The proof of this remark was given in \cite[Final Remarks]{poroso}. \end{rem}
We can now give the \begin{proof} [\bf Proof of Theorem~\ref{theorem1}] By Remark~\ref{finalremark} the nonlinearities involved in problem~\eqref{2} satisfy assumptions \assref{Q1}--\assref{Q3} and \fassref{F1}--\fassref{F3}, so we can apply Theorem~\ref{theorem 4}. Due to Lemma~\ref{NUOVO} we get exactly Theorem~\ref{theorem1}. \end{proof}
\def$'${$'$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\end{document}
If Rosa's age is divided by 2, 3, 4, or 6, the remainder is 1. If her age is divided by 7, the remainder is 0. She is less than 75 years old. How many years old is Rosa?
Since her age divided by 7 results in a remainder of 0, her age must be a multiple of 7. If her age is $n$, we notice that $n-1$ must be a multiple of 2, 3, 4, and 6. The least common multiple of those numbers is 12, so $n-1$ must be a multiple of 12. The multiples of 12 less than 75 are 12, 24, 36, 48, and 60. Adding 1 results in 13, 25, 37, 49, and 61, where 49 is the only multiple of 7. So Rosa is $\boxed{49}$ years old.
OR
We look for a multiple of 7 that is not divisible by 2, 3, 4, or 6. First we list all odd multiples of 7 less than 75, which are 7, 21, 35, 49, and 63. Since 21 and 63 are multiples of 3, we're left with 7, 35, and 49 as possibilities. Only $\boxed{49}$ has a remainder of 1 when divided by 2, 3, 4, or 6.
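As a quick sanity check (this script is our own illustration, not part of the solution), a brute-force search over all ages below 75 confirms the answer:

```python
# Brute force: ages below 75 leaving remainder 1 when divided by
# 2, 3, 4, 6, and remainder 0 when divided by 7.
candidates = [n for n in range(1, 75)
              if all(n % d == 1 for d in (2, 3, 4, 6)) and n % 7 == 0]
print(candidates)  # [49]
```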
\begin{document}
\newcommand {\ignore} [1] {}
\def \aa {\alpha} \def \gg {\gamma} \def \ee {\varepsilon} \def \el {\ell} \def \ss {\sigma} \def \dd {\delta} \def \Om {\Omega}
\def \PP {{\cal P}} \def \QQ {{\cal Q}} \def \DD {{\cal D}} \def \NN {{\cal N}} \def \AA {{\cal A}} \def \MM {{\cal M}} \def \II {{\cal I}} \def \TT {{\cal T}} \def \RR {{\cal R}}
\newcommand{\RN}[1]{
\textup{\expandafter{\romannumeral#1}} }
\pagenumbering{arabic}
\makeatletter \renewcommand*{\@fnsymbol}[1]{\ensuremath{\ifcase#1\or \dagger\or \ddagger\or \ddagger\or
\mathsection\or \mathparagraph\or \|\or **\or \dagger\dagger
\or \ddagger\ddagger \else\@ctrerr\fi}} \makeatother
\title{Local Guarantees in Graph Cuts and Clustering}
\author{Moses Charikar\inst{1} \fnmsep \thanks{Supported by NSF grants CCF-1617577, CCF-1302518 and a Simons Investigator Award}, Neha Gupta\inst{1} \fnmsep $^\dagger$ \and Roy Schwartz\inst{2} \fnmsep \thanks{Supported by ISF grant 1336/16} }
\institute{Stanford University, Stanford CA 94305, USA,\\ \email{\{moses,nehagupta\}@cs.stanford.edu}, \and Technion, Haifa, 3200003, Israel,\\ \email{[email protected]} }
\maketitle
\begin{abstract} {{\textsf{Correlation Clustering}}} is an elegant model that captures fundamental graph cut problems such as {{\textsf{Min $s-t$ Cut}}}, {{\textsf{Multiway Cut}}}, and {{\textsf{Multicut}}}, extensively studied in combinatorial optimization.
Here, we are given a graph with edges labeled $+$ or $-$ and the goal is to produce a clustering that agrees with the labels as much as possible: $+$ edges within clusters and $-$ edges across clusters.
The classical approach towards {{\textsf{Correlation Clustering}}} (and other graph cut problems) is to optimize a global objective.
We depart from this and study local objectives: minimizing the maximum number of disagreements for edges incident on a single node, and the analogous max min agreements objective. This naturally gives rise to a family of basic min-max graph cut problems. A prototypical representative is {{\textsf{Min Max $s-t$ Cut}}}: find an $s-t$ cut minimizing the largest number of cut edges incident on any node.
We present the following results: $(1)$ an $O(\sqrt{n})$-approximation for the problem of minimizing the maximum total weight of disagreement edges incident on any node (thus providing the first known approximation for the above family of min-max graph cut problems), $(2)$ a remarkably simple $7$-approximation for minimizing local disagreements in complete graphs (improving upon the previous best known approximation of $48$), and $(3)$ a $\nicefrac[]{1}{(2+\varepsilon)}$-approximation for maximizing the minimum total weight of agreement edges incident on any node, hence improving upon the $\nicefrac[]{1}{(4+\varepsilon)}$-approximation that follows from the study of approximate pure Nash equilibria in cut and party affiliation games.
\noindent {\bf Keywords:} Approximation Algorithms, Graph Cuts, Correlation Clustering, Linear Programming \end{abstract}
\thispagestyle{empty}
\section{Introduction}\label{sec:Introduction} Graph cuts are extensively studied in combinatorial optimization, including fundamental problems such as {{\textsf{Min $s-t$ Cut}}}, {{\textsf{Multiway Cut}}}, and {{\textsf{Multicut}}}. Typically, given an undirected graph $G=(V,E)$ equipped with non-negative edge weights $c:E\rightarrow \mathcal{R}_+$, the goal is to find a {\em constrained} partition $\mathcal{S}=\left\{ S_1,\ldots, S_{\ell}\right\}$ of $V$ minimizing the total weight of edges crossing between different clusters of $\mathcal{S}$. For example, in {{\textsf{Min $s-t$ Cut}}}, $\mathcal{S}$ has two clusters, one containing $s$ and the other containing $t$.
Similarly, in {{\textsf{Multiway Cut}}}, $\mathcal{S}$ consists of $k$ clusters each containing exactly one of $k$ given special vertices $t_1,\ldots,t_k$. In {{\textsf{Multicut}}}, the clusters of $\mathcal{S}$ must separate $k$ given pairs of special vertices $\left\{ s_i,t_i\right\} _{i=1}^k$.
The elegant model of {{\textsf{Correlation Clustering}}} captures all of the above fundamental graph cut problems, and was first introduced by Bansal {\em et al.} \cite{bansal2004correlation} more than a decade ago. In {{\textsf{Correlation Clustering}}}, we are given an undirected graph $G=(V,E)$ equipped with non-negative edge weights $c:E\rightarrow \mathcal{R} _+$. Additionally, $E$ is partitioned into $E^+$ and $E^-$, where edges in $E^+$ ($E^-$) are considered to be labeled as $+$ ($-$).
The goal is to find a partition of $V$ into an {\em arbitrary} number of clusters $\mathcal{S}=\left\{ S_1,\ldots,S_{\ell}\right\}$ that agrees with the edges' labeling as much as possible: the endpoints of $+$ edges are supposed to be placed in the same cluster and endpoints of $-$ edges in different clusters.
Typically, the objective is to find a clustering that minimizes the total weight of misclassified edges. This models, {\it e.g.}, {{\textsf{Min $s-t$ Cut}}}, since one can label all edges in $G$ with $+$, and add $(s,t)$ to $E$ with a label of $-$ and set its weight to $c_{s,t}=\infty$ ({{\textsf{Multiway Cut}}} and {{\textsf{Multicut}}} are modeled in a similar manner).
{{\textsf{Correlation Clustering}}} has been studied extensively for more than a decade \cite{ailon2012improved,ailon2008aggregating,charikar2003clustering,chawla2015near,demaine2006correlation,Wirth10}. In addition to the simplicity and elegance of the model, its study is also motivated by a wide range of practical applications: image segmentation \cite{Wirth10}, clustering gene expression patterns \cite{amit2004bicluster,ben1999clustering}, cross-lingual link detection \cite{van2007correlation}, and the aggregation of inconsistent information \cite{filkov2004integrating}, to name a few (refer to the survey \cite{Wirth10} and the references therein for additional details).
Departing from the classical global objective approach towards {{\textsf{Correlation Clustering}}}, we consider a broader class of objectives that allow us to bound the number of misclassified edges incident on any node (or alternatively edges classified correctly). We refer to this class as {{\textsf{Correlation Clustering}}} with {\em local guarantees}. First introduced by Puleo and Milenkovic \cite{puleo2016correlation}, {{\textsf{Correlation Clustering}}} with local guarantees naturally arises in settings such as community detection without antagonists, {\it i.e.}, objects that are inconsistent with large parts of their community, and has found applications in diverse areas, {\it e.g.}, recommender systems, bioinformatics, and social sciences \cite{cheng2000biclustering,kriegel2009clustering,puleo2016correlation,symeonidis2006nearest}.
\noindent {\bf{Local Minimization of Disagreements and Graph Cuts:}} A prototypical example when considering minimization of disagreements with local guarantees is the {{\textsf{Min Max Disagreements}}} problem, whose goal is to find a clustering that minimizes the maximum total weight of misclassified edges incident on any node.
Formally, given a partition $\mathcal{S}=\left\{S_1,\ldots,S_{\ell}\right\}$ of $V$,
for $u\in S_i$, define: $$\text{disagree}_{\mathcal{S}}(u)\triangleq \sum _{v\notin S_i:(u,v)\in E^+}c_{u,v} + \sum _{v\in S_i:(u,v)\in E^-}c_{u,v}~.$$
The objective of {{\textsf{Min Max Disagreements}}} is: $\min _{\mathcal{S}}\max _{u\in V}\left\{ \text{disagree}_{\mathcal{S}}(u)\right\}$.
This is NP-hard even on complete unweighted graphs and approximations are known for only a few special cases \cite{puleo2016correlation}. No approximation is known for general graphs.
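To make the objective concrete, $\text{disagree}_{\mathcal{S}}(u)$ can be computed directly from the definition; the sketch below runs it on a toy labeled instance of our own (not an example from the literature):

```python
# Toy illustration of disagree_S(u): the weight of misclassified edges
# incident on u, i.e. + edges leaving u's cluster and - edges inside it.
def disagree(u, cluster, edges):
    total = 0.0
    for (a, b), (sign, w) in edges.items():
        if u not in (a, b):
            continue
        same = cluster[a] == cluster[b]
        if (sign == '+' and not same) or (sign == '-' and same):
            total += w
    return total

edges = {(1, 2): ('+', 1.0), (2, 3): ('-', 2.0), (1, 3): ('+', 1.0)}
cluster = {1: 0, 2: 0, 3: 0}  # a single cluster containing all nodes
print(max(disagree(u, cluster, edges) for u in (1, 2, 3)))  # 2.0
```

With all three nodes in one cluster only the single $-$ edge is misclassified, so this clustering has min-max value $2.0$, attained at both of its endpoints.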
Just as minimization of total disagreements in {{\textsf{Correlation Clustering}}} models fundamental graph cut problems, {{\textsf{Min Max Disagreements}}} gives rise to a variety of basic min-max graph cut problems.
A natural problem here is {{\textsf{Min Max $s-t$ Cut}}}: Its input is identical to that of {{\textsf{Min $s-t$ Cut}}}, however its objective is to find an $s-t$ cut $(S,\overline{S})$ minimizing the total weight of cut edges incident on any node: $ \min _{S\subseteq V:s\in S, t\notin S}\max _{u\in V}\{\sum _{v:(u,v)\in \delta(S)}c_{u,v}\}$.\footnote{$\delta(S)$ denotes the collection of edges crossing the cut $(S,\overline{S})$.}
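On tiny instances the objective above can be evaluated by exhaustive search, which makes it easy to experiment with; the following sketch (our own toy code, with an illustrative helper name) enumerates every $s-t$ cut and returns the min-max value:

```python
# Illustrative brute force for Min Max s-t Cut on a toy weighted graph:
# enumerate all S with s in S and t not in S, and return the cut value
# minimizing the largest total cut-edge weight incident on any node.
from itertools import combinations

def min_max_st_cut(nodes, edges, s, t):
    rest = [v for v in nodes if v not in (s, t)]
    best = float('inf')
    for r in range(len(rest) + 1):
        for side in combinations(rest, r):
            S = set(side) | {s}
            load = {v: 0.0 for v in nodes}
            for (a, b), w in edges.items():
                if (a in S) != (b in S):  # edge crosses the cut
                    load[a] += w
                    load[b] += w
            best = min(best, max(load.values()))
    return best

nodes = ['s', 'a', 't']
edges = {('s', 'a'): 1.0, ('a', 't'): 1.0, ('s', 't'): 1.0}
print(min_max_st_cut(nodes, edges, 's', 't'))  # 2.0
```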
Despite the fact that {{\textsf{Min Max $s-t$ Cut}}} is a natural graph cut problem, no approximation is known for it.
{{\textsf{Min Max Disagreements}}} also gives rise to {{\textsf{Min Max Multiway Cut}}} and {{\textsf{Min Max Multicut}}}, defined similarly; no approximation is known for these. One of our goals is to highlight this family of min-max graph cut problems which we believe deserve further study. Other graph cut problems were studied from the min-max perspective, {\it e.g.}, \cite{bansal2014min,svitkina2004min}. However, the goal there is to find a constrained partition that minimizes the total weight of cut edges incident on any {\em cluster} (as opposed to incident on any {\em node}).
{{\textsf{Min Max Disagreements}}} is a special case of the more general {{\textsf{Min Local Disagreements}}} problem. Given a clustering $\mathcal{S}$, consider the vector of all disagreement values $\text{disagree}_{\mathcal{S}}(V)\in \mathcal{R}_+^V$, where $(\text{disagree}_{\mathcal{S}}(V)) _u = \text{disagree}_{\mathcal{S}}(u) $ $\forall u\in V$. The objective of {{\textsf{Min Local Disagreements}}} is to find a partition $\mathcal{S}$ that minimizes $f( \text{disagree}_{\mathcal{S}}(V))$ for a given function $f$. For example, if $f$ is the $\max$ function {{\textsf{Min Local Disagreements}}} reduces to {{\textsf{Min Max Disagreements}}}, and if $f$ is the summation function {{\textsf{Min Local Disagreements}}} reduces to the classic objective of minimizing total disagreements.
\noindent {\bf{Local Maximization of Agreements:}} Another natural objective of {{\textsf{Correlation Clustering}}} is that of maximizing the total weight of edges correctly classified \cite{bansal2004correlation,swamy2004correlation}.
A prototypical example for local guarantees is {{\textsf{Max Min Agreements}}}, {\it i.e.}, finding a clustering that maximizes the minimum total weight of correctly classified edges incident on any node.
Formally, given a partition $\mathcal{S}=\{S_1,\ldots,S_{\ell}\}$ of $V$,
for $u \in S_i$, define: $$\text{agree}_{\mathcal{S}}(u)\triangleq \sum _{v\in S_i:(u,v)\in E^+}c_{u,v} + \sum _{v\notin S_i:(u,v)\in E^-}c_{u,v}~.$$
The objective of {{\textsf{Max Min Agreements}}} is: $ \max _{\mathcal{S}}\min _{u\in V}\{ \text{agree}_{\mathcal{S}}(u)\}$.
This is a special case of the more general {{\textsf{Max Local Agreements}}} problem. Given a clustering $\mathcal{S}$, consider the vector of all agreement values $\text{agree}_{\mathcal{S}}(V)\in \mathcal{R}_+^V$, where $( \text{agree}_{\mathcal{S}}(V))_u=\text{agree}_{\mathcal{S}}(u)$ $\forall u\in V$. The objective of {{\textsf{Max Local Agreements}}} is to find a partition $\mathcal{S}$ that maximizes $g( \text{agree}_{\mathcal{S}}(V))$ for a given function $g$. For example, if $g$ is the $\min$ function {{\textsf{Max Local Agreements}}} reduces to {{\textsf{Max Min Agreements}}}, and if $g$ is the summation function {{\textsf{Max Local Agreements}}} reduces to the classic objective of maximizing total agreements.
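As with the min-max objective, {{\textsf{Max Min Agreements}}} can be brute-forced on toy instances by enumerating all partitions of $V$; the sketch below (our own illustration, not code from the paper) does exactly that:

```python
# Illustrative brute force for Max Min Agreements: enumerate all set
# partitions of a small vertex set and keep the one maximizing the
# minimum agreement weight at any node.
def partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):       # add `first` to an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]           # or open a new block for it

def agree(u, cluster, edges):
    total = 0.0
    for (a, b), (sign, w) in edges.items():
        if u in (a, b):
            same = cluster[a] == cluster[b]
            if (sign == '+' and same) or (sign == '-' and not same):
                total += w
    return total

edges = {(1, 2): ('+', 1.0), (2, 3): ('-', 1.0), (1, 3): ('-', 1.0)}
best = 0.0
for part in partitions([1, 2, 3]):
    cluster = {v: i for i, block in enumerate(part) for v in block}
    best = max(best, min(agree(u, cluster, edges) for u in (1, 2, 3)))
print(best)  # 2.0
```

Here the clustering $\{1,2\},\{3\}$ classifies every edge correctly, so the optimum is $2.0$.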
{{\textsf{Max Local Agreements}}} is closely related to the computation of local optima for {{\textsf{Max Cut}}}, and the computation of pure Nash equilibria in cut and party affiliation games \cite{balcan2009improved,bhalgat2010approximating,christodoulou2006convergence,fabrikant2004complexity,schaffer1991simple} (a well studied special class of potential games \cite{monderer1996potential}). In the setting of party affiliation games, each node of $G$ is a player that can choose one of two sides of a cut. The player's payoff is the total weight of edges incident on it that are classified correctly. It is well known that such games admit a pure Nash equilibrium via the {\em best response dynamics} (also known as {\em Nash dynamics}), and that each such pure Nash equilibrium is a $\left(\nicefrac[]{1}{2}\right)$-approximation for {{\textsf{Max Local Agreements}}}.
Unfortunately, in general the computation of a pure Nash equilibrium in cut and party affiliation games is PLS-complete \cite{johnson1988easy}, and thus it is widely believed that no polynomial time algorithm exists for this task. Nonetheless, one can apply the algorithm of Bhalgat {\em et al.} \cite{bhalgat2010approximating} for finding an approximate pure Nash equilibrium and obtain a $\nicefrac[]{1}{(4+\varepsilon)}$-approximation for {{\textsf{Max Local Agreements}}} (for any constant $\varepsilon > 0$). This approximation is also the best known for the special case of {{\textsf{Max Min Agreements}}}.
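The best response dynamics mentioned above is easy to sketch (toy code of our own, sequential updates, $+/-$ labeled edges); since every improving flip strictly increases the total agreement weight, the loop must terminate, and it stops exactly at a pure Nash equilibrium:

```python
# Sketch of sequential best response in a party affiliation game:
# each node takes the side of the cut maximizing the weight of its
# correctly classified incident edges; termination follows from the
# strictly increasing total-agreement potential.
def best_response(nodes, edges):
    side = {v: 0 for v in nodes}
    def payoff(u):
        return sum(w for (a, b), (sign, w) in edges.items()
                   if u in (a, b)
                   and (sign == '+') == (side[a] == side[b]))
    improved = True
    while improved:
        improved = False
        for u in nodes:
            old = payoff(u)
            side[u] ^= 1          # tentatively flip u
            if payoff(u) <= old:
                side[u] ^= 1      # revert: no strict improvement
            else:
                improved = True
    return side

edges = {(1, 2): ('-', 1.0), (2, 3): ('-', 1.0), (1, 3): ('+', 1.0)}
side = best_response([1, 2, 3], edges)
print(side[1] == side[3] != side[2])  # True
```

On this triangle the dynamics separates node $2$ from nodes $1$ and $3$, which classifies all three edges correctly.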
\noindent {\bf{Our Results:}} Focusing first on {{\textsf{Min Max Disagreements}}} on general graphs, we prove that both the natural LP and SDP relaxations admit a large integrality gap of $\nicefrac[]{n}{2}$. Nonetheless, we present an $O(\sqrt{n})$-approximation for {{\textsf{Min Max Disagreements}}}, bypassing the above integrality gaps.
\begin{theorem}\label{thrm:IntegralityGapMMD} The natural LP and SDP relaxations for {{\textsf{Min Max Disagreements}}} have an integrality gap of $\nicefrac[]{n}{2}$. \end{theorem}
\begin{theorem}\label{thrm:sqrtMMD} {{\textsf{Min Max Disagreements}}} admits an $O(\sqrt{n})$-approximation for general weighted graphs. \end{theorem}
\noindent Since {{\textsf{Min Max $s-t$ Cut}}}, along with {{\textsf{Min Max Multiway Cut}}} and {{\textsf{Min Max Multicut}}}, are a special case of {{\textsf{Min Max Disagreements}}}, Theorem \ref{thrm:sqrtMMD} applies to them as well, thus providing the first known approximation for this family of cut problems.\footnote{Theorem \ref{thrm:IntegralityGapMMD} can be easily adapted to apply also for {{\textsf{Min Max $s-t$ Cut}}}, {{\textsf{Min Max Multiway Cut}}}, and {{\textsf{Min Max Multicut}}}, resulting in a gap of $\nicefrac[]{(n-1)}{2}$.}
When considering the more general {{\textsf{Min Local Disagreements}}} problem, we present a remarkably simple approach that achieves an improved approximation of $7$ for both complete graphs and complete bipartite graphs (where disagreements are measured w.r.t one side only). This improves upon and simplifies \cite{puleo2016correlation} who presented an approximation of $48$ for the former and $10$ for the latter.
\begin{theorem}\label{thrm:7ApproxCliqueMLD} {{\textsf{Min Local Disagreements}}} admits a $7$-approximation for complete graphs.
\end{theorem} \begin{theorem}\label{thrm:7ApproxBipartiteMLD} {{\textsf{Min Local Disagreements}}} admits a $7$-approximation for complete bipartite graphs where disagreements are measured w.r.t. one side of the graph.
\end{theorem}
Focusing on local maximization of agreements, we present a $\nicefrac[]{1}{(2+\varepsilon)}$-approximation for {{\textsf{Max Min Agreements}}} without any assumption on the edge weights. This improves upon the previously best known $\nicefrac[]{1}{(4+\varepsilon)}$-approximation that follows from the computation of approximate pure Nash equilibria in party affiliation games \cite{bhalgat2010approximating}. As before, we show that both the natural LP and SDP relaxations for {{\textsf{Max Min Agreements}}} have a large integrality gap of $\frac{n}{2(n-1)}$. \begin{theorem}\label{thrm:ApproxMMA} For any $\varepsilon > 0$, {{\textsf{Max Min Agreements}}} admits a $\nicefrac[]{1}{(2+\varepsilon)}$-approximation for general weighted graphs, where the running time of the algorithm is $poly(n,\nicefrac[]{1}{\varepsilon})$.
\end{theorem} \begin{theorem}\label{thrm:IntegralityGapMMA} The natural LP and SDP relaxations for {{\textsf{Max Min Agreements}}} have an integrality gap of $\frac{n}{2(n-1)}$. \end{theorem}
\begin{table}[h] \caption{Results for {{\textsf{Correlation Clustering}}} with local guarantees.\label{tab:Results}} \begin{center} {\footnotesize {
\begin{tabu}{ |[1pt]c|[1pt]c|[1pt]c|[1pt]c| } \hline
\multirow{2}{*}{\bf Problem} & \multirow{2}{*}{\bf Input Graph} & \multicolumn{2}{ c| }{\bf Approximation} \\ \cline{3-4} & & {\bf This Work} & {\bf Previous Work} \\ \hline \multirow{2}{*}{{\textsf{Min Local Disagreements}}} & complete & $7$ & $48$ \cite{puleo2016correlation} \\ \cline{2-4} & complete bipartite (one sided) & $7$ & $10$ \cite{puleo2016correlation} \\ \hline {{\textsf{Min Max Disagreements}}} & general weighted & $O(\sqrt{n})$ & $-$ \\ \hline {\hspace{-15pt}{\textsf{Min Max $s-t$ Cut}}} & \multirow{3}{*}{general weighted} & \multirow{3}{*}{$O(\sqrt{n})$} & \multirow{3}{*}{$-$}\\ {{\textsf{Min Max Multiway Cut}}} & & &\\ {\hspace{-20pt}{\textsf{Min Max Multicut}}} & & &\\ \hline {{\textsf{Max Min Agreements}}} & general weighted & $\nicefrac[]{1}{(2+\varepsilon)}$ & $\nicefrac[]{1}{(4+\varepsilon)}$ \cite{bhalgat2010approximating} \\ \hline
\end{tabu} } }
\end{center} \end{table} \noindent Our main algorithmic results are summarized in Table \ref{tab:Results}.
\noindent {\bf{Approach and Techniques:}} The non-linear nature of {{\textsf{Correlation Clustering}}} with local guarantees makes problems in this family much harder to approximate than {{\textsf{Correlation Clustering}}} with classic global objectives.
Firstly, LP and SDP relaxations are not always useful when considering local objectives. For example, the natural LP relaxation for the global objective of minimizing total disagreements on general graphs has a bounded integrality gap of $O(\log{n})$ \cite{charikar2003clustering,demaine2006correlation,garg1993approximate}. However, we prove that for its local counterpart, {\it i.e.}, {{\textsf{Min Max Disagreements}}}, both the natural LP and SDP relaxations have a huge integrality gap of $\nicefrac[]{n}{2}$ (Theorem \ref{thrm:IntegralityGapMMD}). To overcome this, our algorithm for {{\textsf{Min Max Disagreements}}} on general weighted graphs uses a {\em combination} of the LP lower bound and a combinatorial bound. Even though each of these bounds on its own is weak, we prove that their combination suffices to obtain an approximation of $O(\sqrt{n})$, thus bypassing the huge integrality gap of $\nicefrac[]{n}{2}$.
Secondly, randomization is inherently difficult to use for local guarantees, while many of the algorithms for minimizing total disagreements, {\it e.g.}, \cite{ailon2012improved,ailon2008aggregating,chawla2015near}, as well as maximizing total agreements, {\it e.g.}, \cite{swamy2004correlation}, are all randomized in nature. The reason is that a bound on the expected weight of misclassified edges incident on any node does not translate to a bound on the maximum of this quantity over all nodes (similarly the expected weight of correctly classified edges incident on any node does not translate to a bound on the minimum of this quantity over all nodes). To overcome this difficulty, all the algorithms we present are deterministic, {\it e.g.}, for {{\textsf{Min Local Disagreements}}} we propose a new remarkably simple method of clustering that greedily chooses a center node $s^*$ and cuts a sphere of a fixed and predefined radius around $s^*$, and for {{\textsf{Max Min Agreements}}} we present a new {\em non-oblivious} local search algorithm that runs on a graph with modified edge weights and circumvents the need to compute approximate pure Nash equilibria in party affiliation games.
\noindent {\bf{Paper Organization:}} Section \ref{sec:MinLocalDisagree} contains the improved approximations for {{\textsf{Min Max Disagreements}}} on general weighted graphs and for {{\textsf{Min Local Disagreements}}} on complete and complete bipartite graphs (Theorems \ref{thrm:sqrtMMD}, \ref{thrm:7ApproxCliqueMLD}, and \ref{thrm:7ApproxBipartiteMLD}), along with the integrality gaps of the natural LP and SDP relaxations (Theorem \ref{thrm:IntegralityGapMMD}).
Section \ref{sec:MaxMinAgree} contains the improved approximation for {{\textsf{Max Min Agreements}}} as well as the integrality gaps of the natural LP and SDP relaxations (Theorems \ref{thrm:ApproxMMA} and \ref{thrm:IntegralityGapMMA}).
\section{Preliminaries} We state the conditions required from both $f$ and $g$ in the definitions of {{\textsf{Min Local Disagreements}}} and {{\textsf{Max Local Agreements}}}, respectively. $f$ is required to satisfy the following three conditions: $(1)$ for any ${\mathbf{x}}, {\mathbf{y}}\in \mathcal{R}_+^V$ if ${\mathbf{x}} \leq {\mathbf{y}}$ then $f({\mathbf{x}})\leq f({\mathbf{y}})$ (monotonicity), $(2)$ $f(\alpha {\mathbf{x}})\leq \alpha f({\mathbf{x}})$ for any $\alpha \geq 0$ and ${\mathbf{x}}\in \mathcal{R}_+^V$ (scaling), and $(3)$ $f$ is convex. Whereas, $g$ is required to satisfy the following two conditions: $(1)$ for any ${\mathbf{x}}, {\mathbf{y}}\in \mathcal{R}_+^V$ if ${\mathbf{x}} \leq {\mathbf{y}}$ then $g({\mathbf{x}})\leq g({\mathbf{y}})$ (monotonicity), and $(2)$ $g(\alpha {\mathbf{x}})\geq \alpha g({\mathbf{x}})$ for any $\alpha \geq 0$ and ${\mathbf{x}}\in \mathcal{R}_+^V$ (reverse scaling). Note that $g$ is not required to be concave.
\section{Local Minimization of Disagreements and Graph Cuts}\label{sec:MinLocalDisagree}
We consider the natural convex programming relaxation for {{\textsf{Min Local Disagreements}}}. The relaxation imposes a metric $d$ on the vertices of the graph. For each node $u\in V$ we have a variable $D(u)$ denoting the total fractional {\em disagreement} of edges incident on $u$. Additionally, we denote by ${\mathbf{D}}\in \mathcal{R}_+^V$ the vector of all $D(u)$ variables. Note that the relaxation is solvable in polynomial time since $f$ is convex.\footnote{The convexity of $f$ is used only to show that relaxation (\ref{Relaxation:Disagreements}) can be solved, and it is not required in the rounding process.}
\begin{align} \min ~~~ & f\left( {\mathbf{D}}\right) & \label{Relaxation:Disagreements}\\ & \sum _{v:(u,v)\in E^+}c_{u,v} d\left( u,v\right) + \sum _{v:(u,v)\in E^-} c_{u,v} \left( 1-d\left( u,v\right) \right) = D(u) & \forall u\in V \nonumber \\ & d(u,v) + d(v,w) \geq d(u,w) & \forall u,v,w\in V \nonumber \\ & D(u)\geq 0, ~0\leq d(u,v) \leq 1 & \forall u,v\in V \nonumber \end{align}
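To make the constraints of (\ref{Relaxation:Disagreements}) concrete for $f=\max$, the sketch below (our own helper, not from the paper) evaluates the per-node fractional disagreement $D(u)$ of a given metric $d$ and checks the triangle inequalities. It is applied to the fractional solution on the cycle instance used in the proof of Theorem \ref{thrm:IntegralityGapMMD}, where every $D(u)$ equals $\nicefrac[]{2}{n}$ while any clustering has cost at least $1$.

```python
from itertools import permutations

def fractional_disagreement(V, plus, minus, c, d):
    """D(u) = sum_{(u,v) in E^+} c_e d(u,v) + sum_{(u,v) in E^-} c_e (1 - d(u,v)),
    as in relaxation (1). d is a dict over ordered pairs with d[u, v] = d[v, u]."""
    # triangle inequality constraints of the relaxation
    assert all(d[u, v] + d[v, w] >= d[u, w] - 1e-12
               for u, v, w in permutations(V, 3))
    D = {u: 0.0 for u in V}
    for (u, v) in plus:
        D[u] += c[u, v] * d[u, v]
        D[v] += c[u, v] * d[u, v]
    for (u, v) in minus:
        D[u] += c[u, v] * (1 - d[u, v])
        D[v] += c[u, v] * (1 - d[u, v])
    return D

# The n-cycle gap instance: unit weights, + edges of length 1/n and one - edge.
# The induced shortest-path metric is simply d(v_u, v_v) = |u - v| / n.
n = 8
V = list(range(n))
plus = [(i, i + 1) for i in range(n - 1)]
minus = [(n - 1, 0)]
c = {(u, v): 1.0 for (u, v) in plus + minus}
d = {(u, v): abs(u - v) / n for u in V for v in V}
D = fractional_disagreement(V, plus, minus, c, d)
```

Here every $D(u)$ comes out to exactly $\nicefrac[]{2}{n}$, matching the computation in Appendix \ref{app:GapMMD}.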
\noindent For the special case of {{\textsf{Min Max Disagreements}}}, {\it i.e.}, when $f$ is the $\max$ function, (\ref{Relaxation:Disagreements}) can be written as an LP. The proof of Theorem \ref{thrm:IntegralityGapMMD}, which states that even for this special case both the natural LP and the natural SDP have a large integrality gap of $\nicefrac[]{n}{2}$, appears in Appendix \ref{app:GapMMD}. We note that Theorem \ref{thrm:IntegralityGapMMD} also applies to {{\textsf{Min Max $s-t$ Cut}}}, a further special case of {{\textsf{Min Max Disagreements}}}.
\subsection{Min Max Disagreements on General Weighted Graphs}
Our algorithm for {{\textsf{Min Max Disagreements}}} on general weighted graphs cannot rely solely on the lower bound of the LP relaxation, since it admits an integrality gap of $\nicefrac[]{n}{2}$ (Theorem \ref{thrm:IntegralityGapMMD}). Thus, a different lower bound must be used. Let {${c_{\text{max}}}$} be the maximum weight of an edge that is misclassified in some optimal solution $\mathcal{S}^*$. Clearly, {${c_{\text{max}}}$} also serves as a lower bound on the value of an optimal solution. Hence, we can combine these two lower bounds and choose $\max \left\{ \max_{u\in V}\left\{ D(u)\right\},{c_{\text{max}}}\right\}$ to be the lower bound we use. Note that we can assume w.l.o.g. that {${c_{\text{max}}}$} is known to the algorithm, as one can simply execute the algorithm for every possible value of {${c_{\text{max}}}$} and return the best solution.
Our algorithm consists of two main phases. In the first we compute the LP metric $d$ but require additional constraints that ensure no {\em heavy} edge, {\it i.e.}, an edge $e$ having $ c_e > {c_{\text{max}}}$, is (fractionally) misclassified by $d$. In the second phase, we perform a careful {\em layered clustering} of an auxiliary graph consisting of all $+$ edges whose length in the metric $d$ is short. At the heart of the analysis lies a distinction between $+$ edges whose length in the metric $d$ is short and all other edges. The contribution of the former is bounded using the combinatorial lower bound, {\it i.e.}, ${c_{\text{max}}}$, whereas the contribution of the latter is bounded using the LP. Our algorithm also ensures that in the final clustering no heavy edge is misclassified. Let us now elaborate on the two phases, before providing an exact description of the algorithm (Algorithm \ref{alg:MMD_General}).
\noindent {\bf{Phase $1$ (constrained metric computation):}} Denote by, $$E^+_{\text{heavy}}\triangleq \{ e\in E^+:c_e>{c_{\text{max}}}\} ~~~\text{ and }~~~ E^-_{\text{heavy}}\triangleq \{ e\in E^-:c_e>{c_{\text{max}}}\}$$ the collection of all heavy $+$ and $-$ edges, respectively. We solve the LP relaxation (\ref{Relaxation:Disagreements}) (recall that $f$ is the $\max$ function) while adding the following additional constraints that ensure $d$ does not (fractionally) misclassify heavy edges: \begin{align} d(u,v) = 0 & ~~~~~~~~~~~\forall e=(u,v)\in E^+_{\text{heavy}} \label{extraConst1} \\ d(u,v) = 1 & ~~~~~~~~~~~\forall e=(u,v)\in E^-_{\text{heavy}} \label{extraConst2} \end{align} If no feasible solution exists then our current guess for ${c_{\text{max}}}$ is incorrect.
\noindent {\bf{Phase $2$ (layered clustering):}} Denote the collections of $+$ and $-$ edges which are {\em almost} classified correctly by $d$ as $E^+_{\text{bad}}\triangleq \left\{ e=(u,v)\in E^+:d(u,v)<\nicefrac[]{1}{\sqrt{n}}\right\}$ and $E^-_{\text{bad}}\triangleq \left\{ e=(u,v)\in E^-:d(u,v)>1-\nicefrac[]{1}{\sqrt{n}}\right\}$, respectively. Intuitively, any edge $ e\notin E^+_{\text{bad}}\cup E^-_{\text{bad}}$ can use its length $d$ to pay for its contribution to the cost, regardless of what the output is. This is not the case with edges in $E^+_{\text{bad}} $ and $E^-_{\text{bad}} $, therefore all such edges are considered {\em bad}. Additionally, denote by $E^+_0\triangleq \left\{ e=(u,v)\in E^+:d(u,v)=0\right\}$ the collection of $+$ edges for which $d$ assigns a length of $0$.\footnote{Note that $ E^+_{\text{heavy}}\subseteq E^+_0\subseteq E^+_{\text{bad}}$ and $E^-_{\text{heavy}}\subseteq E^-_{\text{bad}}$.}
We design the algorithm so it ensures that no mistakes are made for edges in $ E^+_0$ and $E^-_{\text{bad}} $. However, the algorithm might make mistakes for edges in $ E^+_{\text{bad}}$, thus a careful analysis is required. To this end we consider the auxiliary graph consisting of all edges in $E^+_{\text{bad}}$, {\it i.e.}, $G^+_{\text{bad}}\triangleq \left( V,E^+_{\text{bad}}\right)$, and equip it with the distance function $\text{dist}_{\ell}$ defined as the shortest path metric with respect to the length function $\ell:E^+_{\text{bad}}\rightarrow\left\{ 0,1\right\}$: \begin{align} \ell (e) \triangleq \begin{cases}0 & e\in E^+_0\\ 1 & e\in E^+_{\text{bad}}\setminus E^+_0\end{cases}\nonumber \end{align} Assume $E^-_{\text{bad}} $ contains $k$ edges and denote the endpoints of the $i$\textsuperscript{th} edge by $s_i$ and $t_i$. The algorithm partitions every connected component $X$ of $G^+_{\text{bad}}$ into clusters as follows: as long as $X$ contains $s_i$ and $t_i$ for some $i$, we examine the layers $\text{dist}_{\ell}(s_i,\cdot)$ defines and perform a carefully chosen level cut. This {\em layered clustering} suffices as we can prove that our choice of a level cut ensures $(1)$ no mistakes are made for edges in $ E^+_0$ and $E^-_{\text{bad}} $, and $(2)$ the {\em number} of misclassified edges from $E^+_{\text{bad}}\setminus E^+_0$ incident on any node is at most $O(\sqrt{n})$. This ends the description of the second phase.
\begin{algorithm} \caption{Layered Clustering $(G=(V,E),{c_{\text{max}}})$}\label{alg:MMD_General} \begin{algorithmic}[1] \STATE $\mathcal{C}\leftarrow \emptyset$. \STATE let $d$ be a solution to LP (\ref{Relaxation:Disagreements}) with the additional constraints (\ref{extraConst1}) and (\ref{extraConst2}) \FOR {every connected component $X$ in $G^+_{\text{bad}}$} \WHILE {$X$ contains $\left\{ s_i,t_i\right\}$ for some $i$} \STATE $ r_i\leftarrow \text{dist}_{\ell}(s_i,t_i)$ and $L^i_j\leftarrow \left\{ u:\text{dist}_{\ell}(s_i,u)=j\right\}$ for every $j=0,1,\ldots, r_i$.
\STATE choose $j^*\leq\nicefrac[]{(\sqrt{n}-1)}{2}$ s.t. $| L^i_{j^*}|,| L^i_{j^*+1}|,| L^i_{j^*+2}|\leq 16\sqrt{n}$. \STATE $S\leftarrow \cup _{j=0}^{j^*}L^i_j$. \STATE $X\leftarrow X\setminus S$ and $\mathcal{C}\leftarrow \mathcal{C}\cup \{ S\} $. \ENDWHILE \STATE $\mathcal{C}\leftarrow \mathcal{C}\cup \left\{ X\right\}$. \ENDFOR \STATE Output $\mathcal{C}$. \end{algorithmic} \end{algorithm}
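Since $\ell$ only takes values in $\{0,1\}$, the layers $L^i_j$ used on line 5 of Algorithm \ref{alg:MMD_General} can be computed in linear time by a standard 0-1 BFS. A minimal sketch (helper names are ours, not from the paper):

```python
from collections import deque, defaultdict

def dist_ell(source, adj):
    """0-1 BFS on G^+_bad. adj[u] = list of (v, l) with l = ell(e) in {0, 1};
    returns {u: dist_ell(source, u)} for every u reachable from source."""
    dist = {source: 0}
    dq = deque([source])
    while dq:
        u = dq.popleft()
        for v, l in adj[u]:
            d = dist[u] + l
            if d < dist.get(v, float("inf")):
                dist[v] = d
                # a length-0 edge keeps v in the current layer: front of deque
                dq.appendleft(v) if l == 0 else dq.append(v)
    return dist

def layers(dist):
    """Group nodes into the distance classes L_j = {u : dist_ell(s, u) = j}."""
    L = defaultdict(set)
    for u, d in dist.items():
        L[d].add(u)
    return L
```

`dist_ell(s_i, adj)` followed by `layers` yields the layers $L^i_0, L^i_1, \ldots$ from which the algorithm selects the level cut $j^*$.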
Refer to Algorithm \ref{alg:MMD_General} for a precise description of the algorithm. The following lemma states that the distance between any $\{ s_i,t_i\}$ pair with respect to the metric $\text{dist}_{\ell}$ is large; its proof appears in Appendix \ref{app:LongPath}. \begin{lemma}\label{lem:LongPath} For every $i=1,\ldots,k$, $\text{dist}_{\ell}(s_i,t_i)>\sqrt{n}-1$. \end{lemma}
\noindent The following lemma states that only a few layers can be too large; its proof appears in Appendix \ref{app:BadLayers}. It implies Corollary \ref{cor:ChoosingLayer}, whose proof appears in Appendix \ref{app:ChoosingLayer}. \begin{lemma}\label{lem:BadLayers}
For every $i=1,\ldots,k$, the number of layers $L^i_j$ for which $|L^i_j|> 16\sqrt{n}$ is at most $\nicefrac[]{\sqrt{n}}{16}$. \end{lemma}
\begin{corollary}\label{cor:ChoosingLayer} Algorithm \ref{alg:MMD_General} can always find $j^*$ as required. \end{corollary}
\noindent Lemma \ref{lem:CorrectEdges} proves that no mistakes are made for edges in $E^+_0 $ and $ E^-_{\text{bad}}$, whereas Lemma \ref{lem:Short+Edges} bounds the {\em number} of misclassified edges from $E^+_{\text{bad}}\setminus E^+_0 $ incident on any node. Their proofs appear in Appendices \ref{app:CorrectEdges} and \ref{app:Short+Edges}. \begin{lemma}\label{lem:CorrectEdges} Algorithm \ref{alg:MMD_General} never misclassifies edges in $E^+_0 $ and $ E^-_{\text{bad}}$. \end{lemma}
\begin{lemma}\label{lem:Short+Edges}
Let $u\in V$ and $S$ be the cluster in $\mathcal{C}$ Algorithm \ref{alg:MMD_General} assigned $u$ to. Then, $\left|\left\{ e\in E^+_{\text{bad}}\setminus E^+_0:e=(u,v),v\notin S\right\}\right|\leq 48\sqrt{n}$.
\end{lemma}
\noindent We are now ready to prove the main result, Theorem \ref{thrm:sqrtMMD}. \begin{proof}[of Theorem \ref{thrm:sqrtMMD}] We prove that Algorithm \ref{alg:MMD_General} achieves an approximation of $49\sqrt{n}$. The proof considers edges according to their type: $(1)$ $E^+_0$ and $E^-_{\text{bad}}$ edges, $(2)$ $E^+_{\text{bad}}\setminus E^+_0$ edges, and $(3)$ all other edges. It is worth noting that the contribution of edges of type $(2)$ is bounded using the combinatorial lower bound, {\it i.e.}, ${c_{\text{max}}}$, whereas the contribution of edges of type $(3)$ is bounded using the LP, {\it i.e.}, $D(u)$ for every node $u\in V$ (as defined by the relaxation (\ref{Relaxation:Disagreements})).
First, consider edges of type $(1)$. Lemma \ref{lem:CorrectEdges} implies Algorithm \ref{alg:MMD_General} does not make any mistakes with respect to these edges, thus their contribution to the value of the output $\mathcal{C}$ is always $0$. Second, consider edges of type $(2)$. Lemma \ref{lem:Short+Edges} implies that every node $u$ has at most $48\sqrt{n}$ edges of type $(2)$ incident on it that are classified incorrectly. Additionally, the weight of every edge of type $(2)$ is at most ${c_{\text{max}}}$ since $ E^+_{\text{heavy}}\subseteq E^+_0$ and edges of type $(2)$ do not contain any edge of $E^+_0$. Thus, we can conclude that for every node $u$ the total weight of edges of type $(2)$ that touch $u$ and are misclassified is at most $48\sqrt{n}\cdot {c_{\text{max}}}$.
Finally, consider edges of type $(3)$. Fix an arbitrary node $u$ and let $D(u)$ be the fractional disagreement value the LP assigned to $u$ (see (\ref{Relaxation:Disagreements})). Edge $e$ of type $(3)$ is either an edge $e\in E^+$ whose $d$ length is at least $\nicefrac[]{1}{\sqrt{n}}$, or an edge $e\in E^-$ whose $d$ length is at most $1-\nicefrac[]{1}{\sqrt{n}}$. Hence, in any case the fractional contribution of such an edge $e$ to $D(u)$ is at least $\nicefrac[]{c_e}{\sqrt{n}}$. Therefore, regardless of what the output is, the total weight of misclassified edges of type $(3)$ incident on $u$ is at most $\sqrt{n}\cdot D(u)$.
Summing over all types of edges, we can conclude that the total weight of misclassified edges incident on $u$ in $\mathcal{C}$ (the output of Algorithm \ref{alg:MMD_General}) is at most $48\sqrt{n}{c_{\text{max}}} + \sqrt{n}\cdot D(u)$. Since both ${c_{\text{max}}}$ and $D(u)$ are lower bounds on the value of an optimal solution, the proof is concluded.
$\square$
\end{proof}
\subsection{Min Local Disagreements on Complete Graphs}
We consider a simple deterministic greedy clustering algorithm for complete graphs that iteratively partitions the graph.
In every step it does the following: $(1)$ greedily chooses a center node $s^*$ that has many nodes {\em close} to it, and $(2)$ removes from the graph a sphere around $s^*$ which constitutes a new cluster. The greedy choice of $s^*$ is similar to that of \cite{puleo2016correlation}. However, our algorithm departs from the approach of \cite{puleo2016correlation}, as it {\em always} cuts a large sphere around $s^*$. The algorithm of \cite{puleo2016correlation}, on the other hand, outputs either a singleton cluster containing $s^*$ or some other large sphere around $s^*$ (the average distance within the large sphere determines which of the two options is chosen), thus mimicking the approach of \cite{charikar2003clustering}. Surprisingly, restricting the algorithm's choice enables us not only to obtain a simpler algorithm, but also to improve upon the approximation guarantee from $48$ to $7$.
Algorithm \ref{alg:7ApproxClique} receives as input the metric $d$ as computed by the relaxation (\ref{Relaxation:Disagreements}), whereas the variables $D(u)$ are required only for the analysis. Additionally, we denote by $\text{Ball}_S(u,r)\triangleq \left\{ v\in S:d(u,v)<r\right\}$ the sphere of radius $r$ around $u$ in subgraph $S$.
\begin{algorithm} \caption{Greedy Clustering $( \{ d(u,v)\} _{u,v\in V})$}\label{alg:7ApproxClique} \begin{algorithmic}[1] \STATE $S\leftarrow V$ and $\mathcal{C}\leftarrow \emptyset$. \WHILE {$S\neq \emptyset$}
\STATE $s^*\leftarrow \text{argmax}\left\{ \left| \text{Ball}_S(s,\nicefrac[]{1}{7})\right|:s\in S\right\}$. \STATE $\mathcal{C} ~\leftarrow \mathcal{C} \cup \left\{ \text{Ball}_S(s^*,\nicefrac[]{3}{7})\right\}$. \STATE $S~\leftarrow S\setminus \text{Ball}_S(s^*,\nicefrac[]{3}{7})$. \ENDWHILE \STATE Output $\mathcal{C}$. \end{algorithmic} \end{algorithm} \noindent The following lemma summarizes the guarantee achieved by Algorithm \ref{alg:7ApproxClique} (its proof appears in Appendix \ref{app:7ApproxClique}, which also contains an overview of our charging scheme). \begin{lemma}\label{lem:7ApproxClique} Assuming the input is a complete graph, Algorithm \ref{alg:7ApproxClique} guarantees that $\text{disagree}_{\mathcal{C}}(u) \leq 7D(u)$ for every $u\in V$. \end{lemma}
\begin{proof}[of Theorem \ref{thrm:7ApproxCliqueMLD}] Apply Algorithm \ref{alg:7ApproxClique} to the solution of the relaxation (\ref{Relaxation:Disagreements}). Lemma \ref{lem:7ApproxClique} guarantees that for every node $u\in V$ we have that $ \text{disagree}_{\mathcal{C}}(u) \leq 7D(u)$, {\it i.e.}, $ \text{disagree}_{\mathcal{C}}(V)\leq 7{\mathbf{D}}$. The value of the output of the algorithm is $f\left( \text{disagree}_{\mathcal{C}}(V)\right)$ and one can bound it as follows: $$ f\left( \text{disagree}_{\mathcal{C}}(V)\right)\stackrel{(1)}{\leq} f\left( 7{\mathbf{D}}\right) \stackrel{(2)}{\leq} 7f\left( {\mathbf{D}}\right)~.$$ Inequality $(1)$ follows from the monotonicity of $f$, whereas inequality $(2)$ follows from the scaling property of $f$. This concludes the proof since $f\left( {\mathbf{D}}\right) $ is a lower bound on the value of any optimal solution.
$\square$ \end{proof}
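For concreteness, here is a compact runnable rendering of Algorithm \ref{alg:7ApproxClique} (the data layout and names are our own; $d$ is the metric computed by relaxation (\ref{Relaxation:Disagreements})):

```python
def greedy_clustering(V, d):
    """d[frozenset({u, v})] is the LP metric between distinct nodes u, v."""
    def ball(S, u, r):
        # Ball_S(u, r) = {v in S : d(u, v) < r}; u itself is always included
        return {v for v in S if (0.0 if v == u else d[frozenset((u, v))]) < r}
    S, C = set(V), []
    while S:
        # greedily pick the center with the most nodes close to it ...
        s_star = max(S, key=lambda s: len(ball(S, s, 1 / 7)))
        # ... and always cut the 3/7-sphere around it as a new cluster
        cluster = ball(S, s_star, 3 / 7)
        C.append(cluster)
        S -= cluster
    return C
```

Each iteration removes at least $s^*$ itself, so the loop terminates after at most $|V|$ rounds.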
\subsection{Min Local Disagreements on Complete Bipartite Graphs}
Our algorithm for {{\textsf{Min Local Disagreements}}} on complete bipartite graphs (with one sided disagreements) is a natural extension of Algorithm \ref{alg:7ApproxClique}. Similarly to the complete graph case, we are able to present a remarkably simple algorithm achieving an improved approximation of $7$. The description of the algorithm and the proof of Theorem \ref{thrm:7ApproxBipartiteMLD} appear in Appendix \ref{app:7Bipartite}.
\section{Local Maximization of Agreements}\label{sec:MaxMinAgree}
As previously mentioned, {{\textsf{Max Local Agreements}}} is closely related to the computation of local optima for {{\textsf{Max Cut}}} and pure Nash equilibria in cut and party affiliation games, both of which are PLS-complete problems. We focus on the special case of {{\textsf{Max Min Agreements}}}.
The natural local search algorithm for {{\textsf{Max Min Agreements}}} can be defined similarly to that of {{\textsf{Max Cut}}}: it maintains a single cut $S\subseteq V$; a node $u$ moves to the other side of the cut if the move increases the total weight of correctly classified edges incident on $u$.
This algorithm terminates in a local optimum that is a $\left(\nicefrac[]{1}{2}\right)$-approximation for {{\textsf{Max Min Agreements}}}. Unfortunately, it is known that such a local search algorithm can take exponential time,
even for {{\textsf{Max Cut}}}.
When considering {{\textsf{Max Cut}}}, this can be remedied by altering the local search step as follows: a node $u$ moves to the other side of the cut $S$ if the move increases the total weight of edges crossing $S$
by a multiplicative factor of at least $(1+\varepsilon)$ (for some $\varepsilon > 0$). This approach {\em fails} for the computation of (approximate) pure Nash equilibria in party affiliation games, as well as for {{\textsf{Max Min Agreements}}}. The reason is that both of these problems have {\em local} requirements from nodes, as opposed to the {\em global} objective of {{\textsf{Max Cut}}}. Thus, not surprisingly, the current best known $\nicefrac[]{1}{(4+\varepsilon)}$-approximation for {{\textsf{Max Min Agreements}}} follows from \cite{bhalgat2010approximating} who present the state of the art algorithm for finding approximate pure Nash equilibria in party affiliation games.
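For intuition, the $(1+\varepsilon)$-improvement rule for {{\textsf{Max Cut}}} discussed above can be sketched as follows; this is the classic polynomial-time variant, not our non-oblivious algorithm, and the names are our own:

```python
def cut_weight(side, w):
    """Total weight of edges crossing the cut; w[(u, v)] >= 0 for u < v."""
    return sum(c for (u, v), c in w.items() if side[u] != side[v])

def eps_local_search(n, w, eps=0.1):
    """A move is accepted only if it multiplies the cut weight by >= 1 + eps,
    so the number of accepted moves is O(log_{1+eps}(W / w_min))."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        cur = cut_weight(side, w)
        for u in range(n):
            side[u] ^= 1                      # tentatively flip u
            new = cut_weight(side, w)
            if new > cur and new >= (1 + eps) * cur:
                improved = True               # accept the move and rescan
                break
            side[u] ^= 1                      # otherwise undo the flip
    return side
```

As the text explains, this global-progress argument breaks down for the local per-node requirements of {{\textsf{Max Min Agreements}}}, which is why a different (non-oblivious) local search is needed.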
We propose a direct approach for approximating {{\textsf{Max Min Agreements}}} that circumvents the need to compute approximate pure Nash equilibria in party affiliation games. We improve upon the $\nicefrac[]{1}{(4+\varepsilon)}$-approximation by considering a {\em non-oblivious} local search that is executed with altered edge weights. We are able to change the edges' weights in such a way that: $(1)$ any local optimum is a $\nicefrac[]{1}{(2+\varepsilon)}$-approximation, and $(2)$ the local search performs at most $O(\nicefrac[]{n}{\varepsilon})$ iterations. The proof of Theorem \ref{thrm:ApproxMMA} appears in Appendix \ref{app:ApproxMMA}, along with some intuition for our non-oblivious local search algorithm. Additionally, we prove that the natural LP and SDP relaxations for {{\textsf{Max Min Agreements}}} on general graphs admit an integrality gap of $\frac{n}{2(n-1)}$ (Theorem \ref{thrm:IntegralityGapMMA}). This appears in Appendix \ref{app:IntegralityGapMMA}.
\appendix
\section{Proof of Theorem \ref{thrm:IntegralityGapMMD}}\label{app:GapMMD}
\begin{proof}[of Theorem \ref{thrm:IntegralityGapMMD}] Let $G$ be the unweighted cycle on $n$ vertices, where all edges are labeled $+$ and one edge is labeled $-$. Specifically, denote the vertices of $G$ by $\left\{ v_1,v_2,\ldots,v_n\right\}$ where there is an edge $(v_i,v_{i+1})\in E^+$ for every $i=1,\ldots,n-1$ and additionally the edge $(v_n,v_1)\in E^-$.
First, we prove that the value of any integral solution is at least $1$. A clustering that includes $V$ as a single cluster has value of $1$, as both $v_1$ and $v_n$ have exactly one misclassified edge touching them. Moreover, one can easily verify that any clustering into two or more clusters has a value of at least $1$. Thus, any integral solution for the above instance has value of at least $1$.
For simplicity of presentation let us re-state here the LP relaxation (\ref{Relaxation:Disagreements}) of {{\textsf{Min Max Disagreements}}}: \begin{align*} \min ~~~ & \max _{u\in V}\left\{ D(u)\right\} & \\ & \sum _{v:(u,v)\in E^+}c_{u,v}d\left( u,v\right) + \sum _{v:(u,v)\in E^-} c_{u,v}\left( 1-d\left( u,v\right) \right) = D(u) & \forall u\in V \\ & d(u,v) + d(v,w) \geq d(u,w) & \forall u,v,w\in V \\ & D(u)\geq 0, ~0\leq d(u,v) \leq 1 & \forall u,v\in V \end{align*} Let us construct a fractional solution. Assign a length of $\nicefrac[]{1}{n}$ for every $+$ edge and a length of $1-\nicefrac[]{1}{n}$ for the single $-$ edge, and let $d$ be the shortest path metric in $G$ induced by these lengths. Obviously, the triangle inequality is satisfied and one can verify that $d(u,v)\leq 1$ for all $u,v\in V$. Consider a vertex $v_i$ that does not touch the $-$ edge, {\it i.e.}, $i=2,\ldots,n-1$. Such a $v_i$ has two + edges touching it both having a length of $\nicefrac[]{1}{n}$, hence $D(v_i)=\nicefrac[]{2}{n}$. Focusing on $v_1$ and $v_n$, each has one $+$ edge whose length is $\nicefrac[]{1}{n}$ and one $-$ edge whose length is $1-\nicefrac[]{1}{n}$ touching it. Hence, $D(v_1)=D(v_n)=\nicefrac[]{2}{n}$. Therefore, the above instance has an integrality gap of $\nicefrac[]{n}{2}$.
Now, consider the natural semi-definite relaxation for {{\textsf{Min Max Disagreements}}}, where each vertex $u$ corresponds to a unit vector ${\mathbf{y}} _u$. Intuitively, if $S_1,\ldots, S_{\ell}$ is an integral clustering, then all vertices in cluster $S_j$ are assigned to the standard $j$\textsuperscript{th} unit vector, {\it i.e.}, ${\mathbf{e}} _j$. Hence, the natural semi-definite relaxation requires that all vectors lie in the same orthant, {\it i.e.}, for every $u$ and $v$: ${\mathbf{y}} _u \cdot {\mathbf{y}} _v\geq 0$, and that $\{ {\mathbf{y}} _u\} _{u\in V}$ satisfy the $\ell _2^2$ triangle inequality. Therefore, the natural semi-definite relaxation is: \begin{align*} \min ~~~ & \max _{u\in V}\left\{ D(u)\right\} & \\ & \sum _{v:(u,v)\in E^+}c_{u,v}\left( 1-{\mathbf{y}} _u \cdot {\mathbf{y}} _v\right) + \sum _{v:(u,v)\in E^-} c_{u,v}\left({\mathbf{y}} _u \cdot {\mathbf{y}} _v\right) = D(u) & \forall u\in V \\
& || {\mathbf{y}} _u - {\mathbf{y}} _v||_2^2 + || {\mathbf{y}} _v - {\mathbf{y}} _w||_2^2 \geq || {\mathbf{y}} _u - {\mathbf{y}} _w||_2^2 & \forall u,v,w\in V \\ & {\mathbf{y}} _u \cdot {\mathbf{y}} _u = 1 & \forall u\in V \\ & {\mathbf{y}} _u \cdot {\mathbf{y}} _v \geq 0 & \forall u,v\in V \end{align*}
In order to construct a fractional solution, it will be helpful to consider $Y\in \mathcal{R}^{V\times V}$, the positive semi-definite matrix of all inner products of $\left\{ {\mathbf{y}} _{v_i}\right\} _{i=1}^n$, {\it i.e.}, $Y_{v_i,v_j}={\mathbf{y}} _{v_i}\cdot {\mathbf{y}} _{v_j}$. Intuitively, we consider a collection of integral solutions and construct the corresponding $Y$ matrix for each one. At the end, our fractional solution will be the average of all these $Y$ matrices.
Consider the following $n-1$ integral solutions, each having only two clusters, where the first cluster consists of $\left\{ v_1,\ldots,v_i\right\}$ and the second consists of $\left\{ v_{i+1},\ldots,v_n\right\}$ (here $i=1,\ldots,n-1$). Fixing $i$ and using the above translation of an integral solution to a feasible solution for the semi-definite relaxation, we assign each $v_j$, where $j=1,\ldots,i$, to ${\mathbf{e}} _1$ and each $v_j$, where $j=i+1,\ldots,n$, to ${\mathbf{e}} _2$. Let $Y^i$ be the resulting (positive semi-definite) inner product matrix. Finally, consider one more integral solution that consists of a single cluster containing all of $V$. In this case, the above translation yields that all $v_i$ vectors are assigned to ${\mathbf{e}} _1$. Denote by $Y^n$ the resulting (positive semi-definite) inner product matrix. Clearly, each of the $Y^1,\ldots,Y^n$ defines a feasible solution for the above natural semi-definite relaxation.
Our fractional solution is given by the average of all the above inner product matrices: $\overline{Y}\triangleq \frac{1}{n}\sum _{i=1}^n Y^i$. Obviously, $\overline{Y}$ defines a feasible solution for the above natural semi-definite relaxation. Note that ${\mathbf{y}} _{v_1} \cdot {\mathbf{y}} _{v_n} = \frac{n-1}{n}\cdot 0 + \frac{1}{n}\cdot 1 = \frac{1}{n}$ and that ${\mathbf{y}} _{v_i} \cdot {\mathbf{y}} _{v_{i+1}} =\frac{n-1}{n}\cdot 1 + \frac{1}{n}\cdot 0 = \frac{n-1}{n}$, for every $i=1,\ldots,n-1$. Therefore, we can conclude that: \begin{align*} D(v_i) & = 2\left( 1-\frac{n-1}{n}\right)=\frac{2}{n} &\forall i=2,\ldots,n-1 \\ D(v_1) & =D(v_n) =\left( 1-\frac{n-1}{n}\right) + \frac{1}{n}=\frac{2}{n} & \end{align*}
This demonstrates that the above instance also has an integrality gap of $\nicefrac[]{n}{2}$ for the natural semi-definite relaxation.
$\square$ \end{proof}
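The averaging argument above is easy to check numerically. The sketch below (our own code) rebuilds $\overline{Y}$ entrywise for a small $n$: $\overline{Y}_{a,b}$ is the fraction of the $n$ integral solutions that place $a$ and $b$ in the same cluster, and every $D(v_i)$ comes out to $\nicefrac[]{2}{n}$.

```python
n = 8

def same_cluster(a, b, i):
    """Solution i (for i = 1..n-1) has clusters {v_1..v_i} and {v_{i+1}..v_n};
    i = n is the single-cluster solution. Vertices are 0-indexed here."""
    return i == n or (a < i) == (b < i)

# \bar{Y}_{a,b} = fraction of the n integral solutions placing a and b together
Ybar = [[sum(same_cluster(a, b, i) for i in range(1, n + 1)) / n
         for b in range(n)] for a in range(n)]

D = [0.0] * n
for j in range(n - 1):            # + edges (v_j, v_{j+1}), unit weight
    D[j] += 1 - Ybar[j][j + 1]
    D[j + 1] += 1 - Ybar[j][j + 1]
D[0] += Ybar[0][n - 1]            # the single - edge (v_n, v_1)
D[n - 1] += Ybar[0][n - 1]
```

Since any integral solution has cost at least $1$, this exhibits the claimed gap of $\nicefrac[]{n}{2}$.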
\section{Proof of Lemma \ref{lem:LongPath}}\label{app:LongPath} \begin{proof}[of Lemma \ref{lem:LongPath}] If $s_i$ and $t_i$ are not in the same connected component of $G^+_{\text{bad}}$ then $\text{dist}_{\ell}(s_i,t_i)=\infty$. Otherwise, let $P$ be a path connecting $s_i$ and $t_i$ in $G^+_{\text{bad}}$. Note that $\sum _{e=(u,v)\in P}d(u,v)\geq d(s_i,t_i)>1-\nicefrac[]{1}{\sqrt{n}}$, where the first inequality follows from the triangle inequality for $d$ and the second inequality from the fact that $(s_i,t_i)\in E^-_{\text{bad}}$, {\it i.e.}, $d(s_i,t_i)>1-\nicefrac[]{1}{\sqrt{n}}$.
Let us now lower bound the number of edges in $P$ that belong to $ E^+_{\text{bad}}\setminus E^+_0$, {\it i.e.}, edges $e$ for which $\ell(e)=1$. Examine $\sum _{e=(u,v)\in P}d(u,v)$ and note that every edge $e=(u,v)\in E^+_0$ has a $d$-length of $0$. Hence, we can remove those edges from the sum and conclude that: $ \sum _{e=(u,v)\in P\setminus E^+_0}d(u,v)>1-\nicefrac[]{1}{\sqrt{n}}$. Recall that $G^+_{\text{bad}}$ contains only edges from $E^+_{\text{bad}}$, thus every $e=(u,v)\in P\setminus E^+_0$ satisfies: $d(u,v)<\nicefrac[]{1}{\sqrt{n}}$. This implies that $P\setminus E^+_0$ must contain more than $\frac{1-\nicefrac[]{1}{\sqrt{n}}}{\nicefrac[]{1}{\sqrt{n}}}=\sqrt{n}-1$ edges, {\it i.e.}, $\text{dist}_{\ell}(s_i,t_i)>\sqrt{n}-1$, concluding the proof.
$\square$ \end{proof}
\section{Proof of Lemma \ref{lem:BadLayers}}\label{app:BadLayers} \begin{proof}[of Lemma \ref{lem:BadLayers}]
Let $x$ be the number of layers $L^i_j$ for which $|L^i_j|> 16\sqrt{n}$. Since the total number of vertices in all layers, for a fixed $i$, cannot exceed $n$, we can conclude that $x\leq \nicefrac[]{n}{(16\sqrt{n})}=\nicefrac[]{\sqrt{n}}{16}$.
$\square$ \end{proof}
\section{Proof of Corollary \ref{cor:ChoosingLayer}}\label{app:ChoosingLayer} \begin{proof}[of Corollary \ref{cor:ChoosingLayer}] We will prove that for any connected component $X$ of $G^+_{\text{bad}}$, such that both $\left\{ s_i,t_i\right\}$ belong to $X$, there are $3$ consecutive layers as required by Algorithm \ref{alg:MMD_General}.\footnote{This implies that Algorithm \ref{alg:MMD_General} can always find $j^*$ as required since any connected component $X$ can only shrink as the algorithm progresses.} Lemma \ref{lem:BadLayers} implies that there are at most $\nicefrac[]{\sqrt{n}}{16}$ layers whose size is more than $ 16\sqrt{n}$. Therefore, the number of layers among $L^i_0,\ldots,L^i_{\nicefrac[]{(\sqrt{n}-1)}{2}}$ whose size is at most $16\sqrt{n}$ is at least: $ \nicefrac[]{(\sqrt{n}-1)}{2} - \nicefrac[]{\sqrt{n}}{16}$. The latter is at least $\nicefrac[]{3}{4}\cdot\nicefrac[]{(\sqrt{n}-1)}{2} $ (as long as $n\geq 4$). Indeed, the at most $\nicefrac[]{\sqrt{n}}{16}$ large layers split the small layers into at most $\nicefrac[]{\sqrt{n}}{16}+1$ runs of consecutive small layers; if every such run contained at most $2$ layers, there would be at most $\nicefrac[]{\sqrt{n}}{8}+2$ small layers, contradicting the above bound for sufficiently large $n$. Thus, there must be at least $3$ consecutive layers among $L^i_0,\ldots,L^i_{\nicefrac[]{(\sqrt{n}-1)}{2}}$, each having a size of at most $16\sqrt{n}$.
$\square$ \end{proof}
\section{Proof of Lemma \ref{lem:CorrectEdges}}\label{app:CorrectEdges} \begin{proof}[of Lemma \ref{lem:CorrectEdges}] Let us start by focusing on edges in $E^+_0$. Since $E^+_0\subseteq E^+_{\text{bad}}$, all edges of $ E^+_0$ are present in $G^+_{\text{bad}}$ by definition, and in particular both endpoints of every $e\in E^+_0$ are contained in the same connected component $X$ of $G^+_{\text{bad}}$. Furthermore, for any $i$ such that both $\left\{ s_i,t_i\right\}$ are contained in $X$, both endpoints of $e$ must also be contained in the same layer $L^i_j$ (for some $j$). This follows from the fact that $\ell(e)=0$ for all edges $ e\in E^+_0$ and the definition of all layers $L^i_0,\ldots,L^i_{r_i}$. Thus, both endpoints of every edge $e\in E^+_0$ are always in the same cluster $S$ in the output $ \mathcal{C}$, {\it i.e.}, such an edge $e$ is never misclassified.
Let us now focus on edges in $E^-_{\text{bad}} $, and recall that our notation implies that $E^-_{\text{bad}}=\left\{ (s_i,t_i)\right\}_{i=1}^k$. We prove that no cluster $S\in \mathcal{C}$ can contain both $s_i$ and $t_i$. Note that each cluster $S$ is in fact a sphere of radius at most $\nicefrac[]{(\sqrt{n}-1)}{2}$ with respect to the metric $\text{dist}_{\ell}$. Lemma \ref{lem:LongPath} states that $\text{dist}_{\ell}(s_i,t_i)>\sqrt{n}-1$, hence the triangle inequality for $\text{dist}_{\ell}$ implies that $s_i$ and $t_i$ cannot both be contained in the same cluster $S$. Therefore, each $(s_i,t_i)$ edge is never misclassified.
$\square$ \end{proof}
\section{Proof of Lemma \ref{lem:Short+Edges}}\label{app:Short+Edges} \begin{proof}[of Lemma \ref{lem:Short+Edges}] Fix $u$, $S$ the cluster Algorithm \ref{alg:MMD_General} assigned $u$ to, and $X$ the connected component of $G^+_{\text{bad}}$ defining $S$. Consider the first iteration in which an edge $e=(u,v)\in E^+_{\text{bad}}\setminus E^+_0$ touching $u$ is misclassified by the algorithm. Let $i$ correspond to the pair $\left\{ s_i,t_i\right\}$ considered in that iteration, and let $j^*$ be the index by which Algorithm \ref{alg:MMD_General} defined the cluster in the same iteration.
If the above occurs in an iteration where $S$ itself is added to $\mathcal{C}$, then it must be the case that $u\in L^i_{j^*}$ and $v\in L^i_{j^*+1}$. Additionally, no other edges in $ E^+_{\text{bad}}\setminus E^+_0$ touching $u$ can be misclassified in subsequent iterations. Therefore, in this case the total number of edges in $E^+_{\text{bad}}\setminus E^+_0$ touching $u$ that are misclassified can be upper bounded by $|L^i_{j^*+1}|\leq 16\sqrt{n}$.
Otherwise, the first iteration in which an edge $e=(u,v)\in E^+_{\text{bad}}\setminus E^+_0$ touching $u$ is misclassified by the algorithm is not the iteration in which $S$ itself is added to $\mathcal{C}$. Thus, since the algorithm cuts between layers $L^i_{j^*}$ and $L^i_{j^*+1}$, it must be the case that $u\in L^i_{j^*+1}$. Since edges in $G^+_{\text{bad}}$ can connect only vertices in the same or adjacent layers, the total degree of $u$ in $G^+_{\text{bad}}$ is at most $|L^i_{j^*}|+|L^i_{j^*+1}|+|L^i_{j^*+2}|-1$. From the choice of $j^*$ the latter can be upper bounded by $48\sqrt{n}$.
$\square$ \end{proof}
\section{Proof of Lemma \ref{lem:7ApproxClique}}\label{app:7ApproxClique} Let us start with some intuition as to why $s^*$ is chosen greedily.
One of the goals of the analysis is to bound the contribution of $+$ edges crossing the boundary of the sphere around $s^*$. Since those edges might have an extremely small fractional contribution w.r.t. the metric, {\it i.e.}, their $d$ length is very short, we must charge their cost to other edges. The fact that there are many vertices close to $s^*$, along with the fact that the graph is complete, implies that there are many other edges present within the sphere, or crossing its boundary, that we can charge to.
\noindent {\bf{Charging Scheme Overview:}} Fix an arbitrary node $u\in V$. In order to bound the number of misclassified edges incident on $u$, our analysis tracks two quantities of interest. The first is the total number of edges incident on $u$ that are classified incorrectly by the algorithm.
Recall that this quantity is denoted by $\text{disagree}_{\mathcal{C}}(u)$, and we refer to it as $u$'s {\em cost}. The second is the total fractional disagreement of node $u$ as given by the relaxation, {\it i.e.}, $D(u)$. We refer to $D(u)$ as $u$'s {\em budget}. Since we consider unweighted complete graphs, $D(u)$ reduces to: $ D(u) = \sum _{v:(u,v)\in E^+}d\left( u,v\right) + \sum _{v:(u,v)\in E^-} \left( 1-d\left( u,v\right) \right)$.
Note that both $u$'s cost and budget are fixed. However, it will be conceptually helpful to view these quantities as changing as the algorithm progresses. Initially: $(1)$ $u$'s cost is $0$ as no edge has been classified yet, {\it i.e.}, $ \text{disagree}_{\mathcal{C}}(u)=0$ once Algorithm \ref{alg:7ApproxClique} starts, and $(2)$ $u$'s budget is full, {\it i.e.}, $D(u) = \sum _{v:(u,v)\in E^+}d\left( u,v\right) + \sum _{v:(u,v)\in E^-} \left( 1-d\left( u,v\right) \right)$ once Algorithm \ref{alg:7ApproxClique} starts. In every iteration $u$'s cost increases by the number of newly misclassified edges incident on $u$, and $u$'s budget decreases by the total fractional contribution of all newly classified edges incident on $u$ (whether correct or not). Our analysis bounds the ratio of these two changes in each iteration of Algorithm \ref{alg:7ApproxClique}.
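The two tracked quantities can be made concrete. The sketch below is hypothetical code (not the paper's implementation); it assumes a simple encoding where `edges` maps a vertex pair to its sign and `d` is the symmetric metric returned by the relaxation, stored as a dict.

```python
def budget(u, edges, d):
    """u's budget D(u): total fractional disagreement under the metric d."""
    tot = 0.0
    for (a, b), sgn in edges.items():
        if u not in (a, b):
            continue
        v = b if a == u else a
        # '+' edges contribute d(u, v); '-' edges contribute 1 - d(u, v).
        tot += d[(u, v)] if sgn == '+' else 1 - d[(u, v)]
    return tot

def cost(u, edges, clusters):
    """u's cost disagree_C(u): edges incident on u that C misclassifies."""
    where = {v: i for i, S in enumerate(clusters) for v in S}
    bad = 0
    for (a, b), sgn in edges.items():
        if u not in (a, b):
            continue
        v = b if a == u else a
        same = where[u] == where[v]
        # a '+' edge is correct iff endpoints share a cluster; '-' iff not
        bad += (sgn == '+') != same
    return bad
```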
\begin{figure}
\caption{Case $1$: $u\notin \text{Ball}_S(s^*,\nicefrac[]{3}{7})$}
\label{fig:Case1}
\caption{Case $2$: $u\in \text{Ball}_S(s^*,\nicefrac[]{3}{7})$}
\label{fig:Case2}
\caption{Cases of Lemmas \ref{lem:7ApproxClique} and \ref{lem:7ApproxBipartite} analysis.}
\label{fig:Cases}
\end{figure}
\begin{proof}[of Lemma \ref{lem:7ApproxClique}] Fix a vertex $u$ and an arbitrary iteration. Consider two cases depending on whether $u$ belongs to the cluster formed in the chosen iteration. It is important to note that once $u$ is assigned to a cluster that is added to $\mathcal{C}$, its value, {\it i.e.}, $\text{disagree}_{\mathcal{C}}(u)$, does not change in subsequent iterations and remains fixed until the algorithm terminates.
\noindent {\bf{Case $1$: $u\notin \text{Ball}_S(s^*,\nicefrac[]{3}{7})$:}} Note that the only edges incident on $u$ that are classified incorrectly in the current iteration, are edges $(u,v)\in E^+$ for some node $v\in \text{Ball}_S(s^*,\nicefrac[]{3}{7})$.
Let us define the following disjoint collections of vertices: $A\triangleq \text{Ball}_S(s^*,\nicefrac[]{1}{7})$, $B\triangleq \text{Ball}_S(s^*,\nicefrac[]{3}{7})\cap \text{Ball}_S(u,\nicefrac[]{1}{7})$, and $C\triangleq \text{Ball}_S(s^*,\nicefrac[]{3}{7})\setminus \left( A \cup B\right)$. Refer to Figure \ref{fig:Case1} for a drawing of $A$, $B$ and $C$. Thus, the erroneously classified edges are $(u,v_A)\in E^+$ where $v_A\in A$, $(u,v_B)\in E^+$ where $v_B\in B$, and $(u,v_C)\in E^+$ where $v_C\in C$.
First, let us focus on edges $(u,v_C)\in E^+$. Each edge $(u,v_C)$ increases $u$'s cost by $1$. We charge this increase to the fractional contribution of $(u,v_C)$ to $u$'s budget, which equals $d(u,v_C)$. Since $v_C\notin \text{Ball}_S(u,\nicefrac[]{1}{7})$ it must be the case that $d(u,v_C)\geq \nicefrac[]{1}{7}$. Therefore, each $(u,v_C)$ edge incurs a multiplicative loss of at most $7$.
Let us focus now on edges $(u,v_B)\in E^+$ and $(u,v_A)$ simultaneously.
Since $s^*$ was chosen greedily, {\it i.e.}, it maximizes the number of nodes within distance less than $\nicefrac[]{1}{7}$ from it, we can conclude that $|B| \leq |A|$. Thus, each node in $B$ can be assigned to a {\em distinct} node in $A$. Fix $v_B\in B$ and let $v_A\in A$ be the node assigned to it.
\begin{enumerate} \item If $(u,v_A)\in E^+$ then the {\em joint} contribution of $(u,v_B)$ and $(u,v_A)$ to $u$'s cost is $2$. We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $d(u,v_A)$. The triangle inequality implies that $d(u,v_A)\geq d(u,s^*) - d(s^*,v_A)\geq \nicefrac[]{3}{7} - \nicefrac[]{1}{7}= \nicefrac[]{2}{7}$. Hence, both $(u,v_B)$ and $(u,v_A)$ incur a multiplicative loss of at most $\nicefrac[]{2}{\left(\nicefrac[]{2}{7}\right)}=7$.
\item If $(u,v_A)\in E^-$ then $(u,v_A)$ does not increase $u$'s cost, and therefore the increase in $u$'s cost is caused solely by $(u,v_B)$ and it equals $1$. We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $1-d(u,v_A)$. The triangle inequality implies that $d(u,v_A)\leq d(u,v_B) + d(v_B,s^*)+ d(s^*,v_A)\leq \nicefrac[]{1}{7}+\nicefrac[]{3}{7}+\nicefrac[]{1}{7} = \nicefrac[]{5}{7}$. Hence, $(u,v_B)$ incurs a multiplicative loss of at most $\frac{1}{1-\nicefrac[]{5}{7}}=\nicefrac[]{7}{2}$.
\item If there are any remaining nodes $v_A\in A$ to which no node in $B$ was assigned, and $(u,v_A)\in E^+$, then we charge the cost of $1$ that $(u,v_A)$ adds to $u$'s cost to the fractional contribution of $(u,v_A)$ to $u$'s budget, which equals $d(u,v_A)$. The triangle inequality implies that $d(u,v_A)\geq d(u,s^*) - d(s^*,v_A)\geq \nicefrac[]{3}{7} - \nicefrac[]{1}{7}= \nicefrac[]{2}{7}$. Hence, such an edge $(u,v_A)$ incurs a multiplicative loss of at most $\nicefrac[]{1}{\left(\nicefrac[]{2}{7}\right)}=\nicefrac[]{7}{2}$. \end{enumerate}
\noindent Thus, we can conclude that for the first case in which $u\notin \text{Ball}_S(s^*,\nicefrac[]{3}{7})$ we lose a factor of at most $7$.
\noindent {\bf Case $2$: $u\in \text{Ball}_S(s^*,\nicefrac[]{3}{7})$:} Note that the only edges incident on $u$ that are classified incorrectly in the current iteration, are edges $(u,v)\in E^+$ for some $v\notin \text{Ball}_S(s^*,\nicefrac[]{3}{7})$ and edges $(u,v)\in E^-$ for some $v\in \text{Ball}_S(s^*,\nicefrac[]{3}{7})$.
Let us define the following disjoint collections of vertices: $A\triangleq \text{Ball}_S(s^*,\nicefrac[]{1}{7})$, $B\triangleq \text{Ball}_S(u,\nicefrac[]{1}{7})\setminus \text{Ball}_S(s^*,\nicefrac[]{3}{7})$, $C\triangleq \text{Ball}_S(s^*,\nicefrac[]{3}{7})\setminus \left( A \cup \text{Ball}_S(u,\nicefrac[]{1}{7})\right)$, $D\triangleq \text{Ball}_S(u,\nicefrac[]{1}{7})\cap \text{Ball}_S(s^*,\nicefrac[]{3}{7})$, and $F\triangleq S \setminus (A\cup B \cup C \cup D)$. Refer to Figure \ref{fig:Case2} for a drawing of $A$, $B$, $C$, $D$, and $F$. Thus, the erroneously classified edges are $(u,v_A)\in E^-$ where $v_A\in A$, $(u,v_B)\in E^+$ where $v_B\in B$, $(u,v_C)\in E^-$ where $v_C\in C$, $(u,v_D)\in E^-$ where $v_D\in D$, and $(u,v_F)\in E^+$ where $v_F\in F$.
Let us focus now on edges $(u,v_B)\in E^+$ and $(u,v_A)$ simultaneously.
Since $s^*$ was chosen greedily, {\it i.e.}, it maximizes the number of nodes within distance less than $\nicefrac[]{1}{7}$ from it, we can conclude that $|B| \leq |A|$. Thus, each node in $B$ is assigned to a {\em distinct} node in $A$. Fix $v_B\in B$ and let $v_A\in A$ be the node assigned to it. \begin{enumerate} \item If $(u,v_A)\in E^+$ then $(u,v_A)$ does not increase $u$'s cost, and therefore the increase in $u$'s cost is caused solely by $(u,v_B)$ and it equals $1$. We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $d(u,v_A)$. The triangle inequality implies that $d(v_A,u)\geq d(s^*,v_B) - d(s^*,v_A) - d(u,v_B)\geq \nicefrac[]{3}{7}-\nicefrac[]{1}{7}-\nicefrac[]{1}{7} = \nicefrac[]{1}{7}$. Hence, $(u,v_B)$ incurs a multiplicative loss of at most $\nicefrac[]{1}{\left(\nicefrac[]{1}{7}\right)}=7$.
\item If $(u,v_A)\in E^-$ then the {\em joint} contribution of $(u,v_B)$ and $(u,v_A)$ to $u$'s cost is $2$. We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $1-d(u,v_A)$. The triangle inequality implies that $d(u,v_A)\leq d(u,s^*) + d(s^*,v_A)\leq \nicefrac[]{3}{7} + \nicefrac[]{1}{7}= \nicefrac[]{4}{7}$. Hence, both $(u,v_B)$ and $(u,v_A)$ incur a multiplicative loss of at most $\nicefrac[]{2}{(1-\nicefrac[]{4}{7})}=\nicefrac[]{14}{3}$.
\item If there are any remaining nodes $v_A\in A$ to which no node in $B$ was assigned, and $(u,v_A)\in E^-$, we charge the cost of $1$ that $(u,v_A)$ adds to $u$'s cost to the fractional contribution of $(u,v_A)$ to $u$'s budget, which equals $1-d(u,v_A)$. As before, the triangle inequality implies that $d(u,v_A)\leq d(u,s^*) + d(s^*,v_A)\leq \nicefrac[]{3}{7} + \nicefrac[]{1}{7}= \nicefrac[]{4}{7}$. Hence, such an edge $(u,v_A)$ incurs a multiplicative loss of at most $\nicefrac[]{1}{(1-\nicefrac[]{4}{7})}=\nicefrac[]{7}{3}$. \end{enumerate}
Let us now focus on edges $(u,v_C)\in E^-$ and $(u,v_D)\in E^-$. For simplicity, let us denote such an edge by $(u,v)$ where $v\in C\cup D$. Each such edge increases $u$'s cost by $1$. We charge this increase to the fractional contribution of the same edge to $u$'s budget, which equals $1-d(u,v)$. Since both $u,v\in \text{Ball}_S(s^*,\nicefrac[]{3}{7})$ the triangle inequality implies that $d(u,v)\leq d(u,s^*) + d(s^*,v)\leq \nicefrac[]{6}{7}$. Therefore, each $(u,v)\in E^-$, where $ v\in C\cup D$, incurs a multiplicative loss of at most $\nicefrac[]{1}{(1-\nicefrac[]{6}{7})}=7$.
Finally, consider edges $(u,v_F)\in E^+$. Each such edge increases $u$'s cost by $1$. We charge this increase to the fractional contribution of the same edge to $u$'s budget, which equals $d(u,v_F)$. Since $d(u,v_F) \geq \nicefrac[]{1}{7}$, each such edge incurs a multiplicative loss of at most $\nicefrac[]{1}{\left(\nicefrac[]{1}{7}\right)}=7$. This concludes the proof as we have shown that for every vertex $u$ and every iteration, the increase in $u$'s cost during the iteration is at most $7$ times the decrease in $u$'s budget during the same iteration.
$\square$ \end{proof}
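The multiplicative losses derived in the two cases of the proof can be collected and verified mechanically; the following small sketch (not part of the paper) checks that each one is bounded by $7$.

```python
from fractions import Fraction as F

# Every multiplicative loss derived in Cases 1 and 2 of the proof.
losses = [
    F(1) / F(1, 7),        # (u, v_C) edges: d(u, v_C) >= 1/7
    F(2) / F(2, 7),        # Case 1, item 1: joint cost 2 vs. d >= 2/7
    F(1) / (1 - F(5, 7)),  # Case 1, item 2: cost 1 vs. 1 - d >= 2/7
    F(1) / F(2, 7),        # Case 1, item 3: cost 1 vs. d >= 2/7
    F(1) / F(1, 7),        # Case 2, item 1: cost 1 vs. d >= 1/7
    F(2) / (1 - F(4, 7)),  # Case 2, item 2: joint cost 2 vs. 1 - d >= 3/7
    F(1) / (1 - F(4, 7)),  # Case 2, item 3: cost 1 vs. 1 - d >= 3/7
    F(1) / (1 - F(6, 7)),  # (u, v) edges with v in C or D: 1 - d >= 1/7
]
assert max(losses) == 7
```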
\section{Proof of Theorem \ref{thrm:7ApproxBipartiteMLD}}\label{app:7Bipartite}
Let $G=(V,E)$ be an unweighted complete bipartite graph. Let $V_1$ and $V_2$ be the two sides of the graph $G$. Our algorithm will ensure a $7$-approximation factor for mistakes on all vertices in $V_1$ but does not give any guarantee for vertices in $V_2$. The algorithm is a slight modification of Algorithm \ref{alg:7ApproxClique} presented earlier.
We consider the following simple deterministic greedy clustering algorithm. Algorithm \ref{alg:7ApproxBipartite} receives as input the metric $d$ (as computed by the relaxation (\ref{Relaxation:Disagreements})), whereas the variables $D(u)$ are required only for the analysis. In every step, the algorithm greedily chooses a vertex $s^* \in V_1$ that has many vertices in $V_2$ {\em close} to it with respect to the metric $d$. Then, $s^*$ just cuts a {\em large} sphere around it to form a new cluster. \begin{algorithm} \caption{Greedy Clustering $\left( \left\{ d(u,v)\right\} _{u,v\in V}\right)$}\label{alg:7ApproxBipartite} \begin{algorithmic}[1] \STATE $S\leftarrow V$ and $\mathcal{C}\leftarrow \emptyset$. \WHILE {$S\cap V_1\neq \emptyset$}
\STATE $s^*\leftarrow \text{argmax}\left\{ \left| \text{Ball}_{V_2}(s,\nicefrac[]{1}{7})\right|:s\in S\cap V_1\right\}$. \STATE $\mathcal{C} ~\leftarrow \mathcal{C} \cup \left\{ \text{Ball}_S(s^*,\nicefrac[]{3}{7})\right\}$. \STATE $S~\leftarrow S\setminus \text{Ball}_S(s^*,\nicefrac[]{3}{7})$. \ENDWHILE \WHILE {$S\neq \emptyset$} \STATE $s^*\leftarrow \text{an arbitrary vertex in } S$. \STATE $\mathcal{C} ~\leftarrow \mathcal{C} \cup \left\{ s^* \right\}$. \STATE $S~\leftarrow S\setminus \left\{ s^*\right\}$. \ENDWHILE \STATE Output $\mathcal{C}$. \end{algorithmic} \end{algorithm}
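The ball-cutting procedure can be sketched in a few lines of Python. The code below is hypothetical (not the paper's implementation); it assumes `d` is given as a symmetric dict with `d[(u, u)] = 0`, and takes the balls with a non-strict inequality.

```python
def greedy_clustering(V1, V2, d):
    """Sketch of the bipartite greedy ball-cutting; d[(u, v)] is the metric
    computed by the relaxation (symmetric, with d[(u, u)] = 0)."""
    def ball(s, r, S):
        return {v for v in S if d[(s, v)] <= r}

    S, C = set(V1) | set(V2), []
    while S & set(V1):
        # greedily pick the V1-vertex with the most V2-vertices close to it
        s_star = max(S & set(V1), key=lambda s: len(ball(s, 1 / 7, set(V2))))
        cluster = ball(s_star, 3 / 7, S)  # cut a large sphere around s*
        C.append(cluster)
        S -= cluster
    C.extend({v} for v in S)  # remaining vertices become singleton clusters
    return C
```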
The following lemma summarizes the guarantee achieved by Algorithm \ref{alg:7ApproxBipartite}. \begin{lemma}\label{lem:7ApproxBipartite} Assuming the input is an unweighted complete bipartite graph, Algorithm \ref{alg:7ApproxBipartite} guarantees that $\text{disagree}_{\mathcal{C}}(u) \leq 7D(u)$ for any $u\in V_1$. \end{lemma}
\noindent {\bf{Charging Scheme Overview:}} Fix an arbitrary vertex $u\in V_1$. As before, we track two quantities: $u$'s cost and $u$'s budget. Our analysis bounds the ratio of the change in these two quantities in each iteration of Algorithm \ref{alg:7ApproxBipartite}.
\begin{proof}[of Lemma \ref{lem:7ApproxBipartite}] Fix a vertex $u \in V_1$ and an arbitrary iteration. We consider two cases depending on whether $u$ belongs to the cluster formed in the chosen iteration. It is important to note that once $u$ is assigned to a cluster that is added to $\mathcal{C}$, its value, {\it i.e.}, $\text{disagree}_{\mathcal{C}}(u)$, does not change and remains fixed until the algorithm terminates.
\noindent {\bf Case $1$: $u\notin \text{Ball}_S(s^*,\nicefrac[]{3}{7}), u \in V_1$:} Note that the only edges incident on $u$ that are classified incorrectly in the current iteration, are edges $(u,v)\in E^+$ for some $v\in \text{Ball}_S(s^*,\nicefrac[]{3}{7}) \cap V_2$. Define the following disjoint collections of vertices: $A\triangleq \text{Ball}_S(s^*,\nicefrac[]{1}{7}) \cap V_2$, $B\triangleq \text{Ball}_S(s^*,\nicefrac[]{3}{7})\cap \text{Ball}_S(u,\nicefrac[]{1}{7}) \cap V_2$, and $C\triangleq (\text{Ball}_S(s^*,\nicefrac[]{3}{7}) \cap V_2)\setminus \left( A \cup B\right)$. Refer to Figure \ref{fig:Case1} for a drawing of $A$, $B$ and $C$. Thus, the erroneously classified edges are $(u,v_A)\in E^+$ where $v_A\in A$, $(u,v_B)\in E^+$ where $v_B\in B$, and $(u,v_C)\in E^+$ where $v_C\in C$. Note that whenever there is an edge $(u,v)$ where $u \in V_1$, $v$ must belong to $V_2$ since G is a bipartite graph.
Let us focus on edges $(u,v_C)\in E^+$. Each edge $(u,v_C)$ increases $u$'s cost by $1$. We charge this increase to the fractional contribution of $(u,v_C)$ to $u$'s budget, which equals $d(u,v_C)$. Since $v_C\notin \text{Ball}_S(u,\nicefrac[]{1}{7})$ it must be the case that $d(u,v_C)\geq \nicefrac[]{1}{7}$. Therefore, each $(u,v_C)$ edge incurs a multiplicative loss of at most $7$.
Let us focus now on edges $(u,v_A)$ and $(u,v_B) \in E^+$ simultaneously.
Since $s^*$ was chosen greedily, {\it i.e.}, it maximizes the number of nodes $\in V_2$ within distance of at most $\nicefrac[]{1}{7}$ from it, we can conclude that $|B| \leq |A|$. Thus, each node in $B$ can be assigned to a {\em distinct} node in $A$. Fix $v_B\in B$ and let $v_A\in A$ be the node assigned to it. \begin{enumerate} \item If $(u,v_A)\in E^+$ then the {\em joint} contribution of $(u,v_B)$ and $(u,v_A)$ to $u$'s cost is $2$. We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $d(u,v_A)$. The triangle inequality implies that $d(u,v_A)\geq d(u,s^*) - d(s^*,v_A)\geq \nicefrac[]{3}{7} - \nicefrac[]{1}{7}= \nicefrac[]{2}{7}$. Hence, both $(u,v_B)$ and $(u,v_A)$ incur a multiplicative loss of at most $\nicefrac[]{2}{\left(\nicefrac[]{2}{7}\right)}=7$. \item If $(u,v_A)\in E^-$ then $(u,v_A)$ does not increase $u$'s cost, and therefore the increase in $u$'s cost is caused solely by $(u,v_B)$ and it equals $1$.
We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $1-d(u,v_A)$.
The triangle inequality implies that $d(u,v_A)\leq d(u,v_B) + d(v_B,s^*)+ d(s^*,v_A)\leq \nicefrac[]{1}{7}+\nicefrac[]{3}{7}+\nicefrac[]{1}{7} = \nicefrac[]{5}{7}$.
Hence, $(u,v_B)$ incurs a multiplicative loss of at most $\frac{1}{1-\nicefrac[]{5}{7}}=\nicefrac[]{7}{2}$. \item If there are any remaining nodes $v_A\in A$ to which no node in $B$ was assigned, and $(u,v_A)\in E^+$, we charge the cost of $1$ that $(u,v_A)$ adds to $u$'s cost to the fractional contribution of $(u,v_A)$ to $u$'s budget, which equals $d(u,v_A)$.
The triangle inequality implies that $d(u,v_A)\geq d(u,s^*) - d(s^*,v_A)\geq \nicefrac[]{3}{7} - \nicefrac[]{1}{7}= \nicefrac[]{2}{7}$.
Hence, such an edge $(u,v_A)$ incurs a multiplicative loss of at most $\nicefrac[]{1}{\left(\nicefrac[]{2}{7}\right)}=\nicefrac[]{7}{2}$. \end{enumerate} We can conclude that in the first case, in which $u\notin \text{Ball}_S(s^*,\nicefrac[]{3}{7})$, we lose a factor of at most $7$.
\noindent {\bf Case $2$: $u\in \text{Ball}_S(s^*,\nicefrac[]{3}{7}) \cap V_1$: } Note that the only edges incident on $u$ that are classified incorrectly in the current iteration, are edges $(u,v)\in E^+$ for some $v\notin \text{Ball}_S(s^*,\nicefrac[]{3}{7}), v \in V_2$ and edges $(u,v)\in E^-$ for some $v\in \text{Ball}_S(s^*,\nicefrac[]{3}{7}) \cap V_2$. Define the following disjoint collections of vertices: $A\triangleq \text{Ball}_S(s^*,\nicefrac[]{1}{7}) \cap V_2$, $B\triangleq (\text{Ball}_S(u,\nicefrac[]{1}{7}) \cap V_2)\setminus \text{Ball}_S(s^*,\nicefrac[]{3}{7})$, $C\triangleq (\text{Ball}_S(s^*,\nicefrac[]{3}{7}) \cap V_2)\setminus \left( A \cup \text{Ball}_S(u,\nicefrac[]{1}{7})\right)$, $D\triangleq \text{Ball}_S(u,\nicefrac[]{1}{7})\cap \text{Ball}_S(s^*,\nicefrac[]{3}{7}) \cap V_2$ and $F\triangleq (V_2 \cap S) \setminus (A\cup B \cup C \cup D)$. Refer to Figure \ref{fig:Case2} for a drawing of $A$, $B$, $C$, $D$ and $F$. Thus, the erroneously classified edges are $(u,v_A)\in E^-$ where $v_A\in A$, $(u,v_B)\in E^+$ where $v_B\in B$, $(u,v_C)\in E^-$ where $v_C\in C$, $(u,v_D)\in E^-$ where $v_D\in D$ and $(u,v_F)\in E^+$ where $v_F\in F$.
Let us focus now on edges $(u,v_A)$ and $(u,v_B) \in E^+$ simultaneously.
Since $s^*$ was chosen greedily, {\it i.e.}, it maximizes the number of nodes $\in V_2$ within distance of at most $\nicefrac[]{1}{7}$ from it, we can conclude that $|B| \leq |A|$. Thus, each node in $B$ can be assigned to a {\em distinct} node in $A$. Fix $v_B\in B$ and let $v_A\in A$ be the node assigned to it. \begin{enumerate} \item If $(u,v_A)\in E^-$ then the {\em joint} contribution of $(u,v_B)$ and $(u,v_A)$ to $u$'s cost is $2$. We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $1-d(u,v_A)$. The triangle inequality implies that $d(u,v_A)\leq d(u,s^*) + d(s^*,v_A)\leq \nicefrac[]{3}{7} + \nicefrac[]{1}{7}= \nicefrac[]{4}{7}$. Hence, both $(u,v_B)$ and $(u,v_A)$ incur a multiplicative loss of at most $\nicefrac[]{2}{\left(1-\nicefrac[]{4}{7}\right)}=\nicefrac[]{14}{3}$. \item If $(u,v_A)\in E^+$ then $(u,v_A)$ does not increase $u$'s cost, and therefore the increase in $u$'s cost is caused solely by $(u,v_B)$ and it equals $1$.
We charge this cost to the fractional contribution of $(u,v_A)$ alone to $u$'s budget, which equals $d(u,v_A)$.
The triangle inequality implies that $d(u,v_A)\geq d(u,s^*) - d(v_A,s^*) \geq d(v_B,s^*) - d(v_B,u) - d(v_A,s^*)\geq \nicefrac[]{3}{7}-\nicefrac[]{1}{7}-\nicefrac[]{1}{7} = \nicefrac[]{1}{7}$.
Hence, $(u,v_B)$ incurs a multiplicative loss of at most $\nicefrac[]{1}{\left(\nicefrac[]{1}{7}\right)}=7$. \item If there are any remaining nodes $v_A\in A$ to which no node in $B$ was assigned, and $(u,v_A)\in E^-$, then $u$'s cost increases by $1$. We charge this cost to the fractional contribution of $(u,v_A)$ to $u$'s budget, which equals $1-d(u,v_A)$.
The triangle inequality implies that $d(u,v_A)\leq d(u,s^*) + d(s^*,v_A)\leq \nicefrac[]{3}{7} + \nicefrac[]{1}{7}= \nicefrac[]{4}{7}$.
Hence, such an edge $(u,v_A)$ incurs a multiplicative loss of at most $\nicefrac[]{1}{\left(1-\nicefrac[]{4}{7}\right)}=\nicefrac[]{7}{3}$. \end{enumerate}
Let us focus on edges $(u,v_C)\in E^-$, and $(u,v_D)\in E^-$. For simplicity, let us denote such an edge by $(u,v)$ where $v\in C\cup D$. Each such edge increases $u$'s cost by $1$. We charge this increase to the fractional contribution of the same edge to $u$'s budget, which equals $1-d(u,v)$. Since both $u,v\in \text{Ball}_S(s^*,\nicefrac[]{3}{7})$ it must be the case from the triangle inequality that $d(u,v)\leq d(u,s^*) + d(s^*,v)\leq \nicefrac[]{6}{7}$. Therefore, each $(u,v)\in E^-$, where $ v\in C\cup D$, incurs a multiplicative loss of at most $\nicefrac[]{1}{(1-\nicefrac[]{6}{7})}=7$.
Now, let us consider edges $(u,v_F)\in E^+$. Each such edge increases $u$'s cost by $1$. We charge this increase to the fractional contribution of the same edge to $u$'s budget, which equals $d(u,v_F)$. Since $d(u,v_F) \geq \nicefrac[]{1}{7}$, each such edge incurs a multiplicative loss of at most $\nicefrac[]{1}{\nicefrac[]{1}{7}}=7$. This concludes the proof as we have shown that for every vertex $u \in V_1$ and every iteration, the increase in $u$'s cost during the iteration is at most $7$ times the decrease in $u$'s budget during the same iteration.
$\square$ \end{proof}
\noindent We are now ready to prove Theorem \ref{thrm:7ApproxBipartiteMLD}.
\begin{proof}[of Theorem \ref{thrm:7ApproxBipartiteMLD}] Apply Algorithm \ref{alg:7ApproxBipartite} to the solution of the relaxation (\ref{Relaxation:Disagreements}). Lemma \ref{lem:7ApproxBipartite} guarantees that for every node $u\in V_1$ we have that $ \text{disagree}_{\mathcal{C}}(u) \leq 7D(u)$, {\it i.e.}, $ \text{disagree}_{\mathcal{C}}(V_1)\leq 7{\mathbf{D}}$.\footnote{For simplicity we denote here by ${\mathbf{D}}$ the vector of $D(u)$ variables for vertices in $ V_1$.} The value of the output of the algorithm is $f\left( \text{disagree}_{\mathcal{C}}(V_1)\right)$ and one can bound it as follows: $$ f\left( \text{disagree}_{\mathcal{C}}(V_1)\right)\stackrel{(1)}{\leq} f\left( 7{\mathbf{D}}\right) \stackrel{(2)}{\leq} 7f\left( {\mathbf{D}}\right)~.$$ Inequality $(1)$ follows from the monotonicity of $f$, whereas inequality $(2)$ follows from the scaling property of $f$. This concludes the proof since $f\left( {\mathbf{D}}\right) $ is a lower bound on the value of any optimal solution.
$\square$ \end{proof}
\section{Proof of Theorem \ref{thrm:ApproxMMA}}\label{app:ApproxMMA}
For completeness and intuition, we start with an exposition on the simple local search algorithm for {{\textsf{Max Local Agreements}}}. Let us denote by $c(u)$ the total weight of edges incident on $u$, and by ${\mathbf{c}}\in \mathcal{R}^V$ the vector of all $\{c(u)\}_{u\in V}$. Additionally, for any cut $S\subseteq V$ we denote by $\mathcal{C}_S=\{ S,\bar{S}\}$ the clustering $S$ defines. The simple local search algorithm starts with an arbitrary cut $S$, and repeatedly moves vertices from one side to the other until no additional improvement can be made. Specifically, if $\text{agree}_{\mathcal{C}_S}(u) < \nicefrac[]{c(u)}{2} $ then node $u$ is moved to the other side of the cut.
When the algorithm terminates it must be that for every node $u$: $\text{agree}_{\mathcal{C}_S}(u) \geq \nicefrac[]{c(u)}{2}$. The latter implies that $\mathcal{C}_S$ is a $\nicefrac[]{1}{2}$-approximation for {{\textsf{Max Local Agreements}}} since: $$ g(\text{agree}_{\mathcal{C}_S}(V))\stackrel{(1)}{\geq} g(\nicefrac[]{{\mathbf{c}}}{2}) \stackrel{(2)}{\geq} \nicefrac[]{g({\mathbf{c}})}{2}~.$$ Inequality $(1)$ follows from the monotonicity of $g$, whereas inequality $(2)$ follows from the reverse scaling property of $g$. Note that $g({\mathbf{c}})$ upper bounds the value of any optimal solution to {{\textsf{Max Local Agreements}}}.
One can track the progress of the algorithm by considering the potential $\Phi_{\mathcal{C}_S} \triangleq \sum _{u\in V}\text{agree}_{\mathcal{C}_S}(u)$. For every node $u$ and cut $S$ note that: $(1)$ $\text{agree}_{\mathcal{C}_S}(u) + \text{disagree}_{\mathcal{C}_S}(u) = c(u)$, and $(2)$ if $u$ is moved to the other side of the cut then the values of $ \text{agree}_{\mathcal{C}_S}(u)$ and $\text{disagree}_{\mathcal{C}_S}(u)$ are swapped. Thus, the potential $ \Phi_{\mathcal{C}_S}$ must strictly increase in every iteration, implying that the algorithm always terminates. Unfortunately, it is well known that in general the local search algorithm might terminate after an exponential number of iterations.
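The simple local search can be sketched as follows. This is hypothetical code (a sketch, not the paper's implementation), assuming `edges` maps a vertex pair to a `(weight, sign)` tuple; on integer weights the potential argument above guarantees termination, though possibly after exponentially many moves.

```python
def agree(u, S, edges):
    """Weight of edges incident on u that the cut S classifies correctly."""
    tot = 0
    for (a, b), (w, sgn) in edges.items():
        if u in (a, b):
            v = b if a == u else a
            same = (u in S) == (v in S)
            # '+' edges want same side, '-' edges want opposite sides
            if (sgn == '+') == same:
                tot += w
    return tot

def c(u, edges):
    """Total weight of edges incident on u."""
    return sum(w for (a, b), (w, _) in edges.items() if u in (a, b))

def local_search(V, edges):
    """Oblivious local search: repeatedly move any vertex agreeing on less
    than half its incident weight; the potential strictly increases."""
    S = set()  # arbitrary starting cut
    improved = True
    while improved:
        improved = False
        for u in V:
            if agree(u, S, edges) < c(u, edges) / 2:
                S ^= {u}  # move u to the other side of the cut
                improved = True
    return S
```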
For {{\textsf{Max Min Agreements}}} we are able to show that a {\em non-oblivious} local search succeeds in finding a $\nicefrac[]{1}{(2+\varepsilon)}$-approximation in polynomial time, thus matching the existential guarantee of the simple local search algorithm. Our main idea is to alter the edge weights in such a way that in the new resulting instance the ratio of $\max _{S\subseteq V}\left\{ \Phi_{\mathcal{C}_S}\right\}$ to the value of the optimal solution for {{\textsf{Max Min Agreements}}} is polynomially bounded. Once this is guaranteed, the local search algorithm can be altered so it always terminates in polynomial time. We are now ready to prove Theorem \ref{thrm:ApproxMMA}.
\begin{proof}[of Theorem \ref{thrm:ApproxMMA}] First we describe the process of creating the new edge weights. Let $c^*\triangleq \min _{u\in V}\left\{ c(u)\right\}$ be the minimum total weight of edges incident on any node. Clearly, $c^*$ serves as an upper bound on the value of any optimal solution for {{\textsf{Max Min Agreements}}}.
Denote by $T\triangleq \left\{ u\in V:c(u)=c^*\right\}$ the collection of all nodes whose total weight of edges incident on each of them is $c^*$. We are going to describe a process that only decreases edge weights, and we prove that once it terminates all the following are true: $(1)$ $c^*$ does not decrease, $(2)$ $T$ still contains exactly the nodes whose total weight of edges incident on each of them is $c^*$, and $(3)$ $E(\overline{T})=\emptyset$, {\it i.e.}, there is no edge $(u,v)$ such that both $u,v\notin T$.
The process is defined as follows. While there is an edge $(u,v)\in E(\overline{T})$, {\it i.e.}, both $u,v\notin T$, whose weight is $c_{u,v}>0$, decrease its weight until the first of the following happens: $c_{u,v}$ reaches $0$ (in which case we remove the edge), $c(u)$ reaches $c^*$ (in which case we add $u$ to $T$ and stop decreasing the weight of the edge), or $c(v)$ reaches $c^*$ (in which case we add $v$ to $T$ and stop decreasing the weight of the edge). Clearly this process terminates in polynomial time, and $(1)$, $(2)$, and $(3)$ above are all satisfied (see Figure \ref{fig:NoEdgesT}).
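The process above can be sketched directly. The code below is a hypothetical sketch (not the paper's implementation), assuming `w` maps vertex pairs to nonnegative edge weights; each edge needs to be reduced at most once, since one reduction already triggers one of the three stopping conditions.

```python
def normalize(V, w):
    """Decrease weights of edges with both endpoints outside T until no such
    edge with positive weight remains; c* and T's property are preserved."""
    def c(u):
        return sum(wt for (a, b), wt in w.items() if u in (a, b))

    cstar = min(c(u) for u in V)
    T = {u for u in V if c(u) == cstar}
    for (u, v) in list(w):
        if w[(u, v)] > 0 and u not in T and v not in T:
            # stop at the first of: weight hits 0, c(u) hits c*, c(v) hits c*
            delta = min(w[(u, v)], c(u) - cstar, c(v) - cstar)
            w[(u, v)] -= delta
            if c(u) == cstar:
                T.add(u)
            if c(v) == cstar:
                T.add(v)
    return w, T, cstar
```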
\begin{figure}
\caption{No edges in $E(\overline{T})$ after creating new weights.}
\label{fig:NoEdgesT}
\end{figure}
We now execute the following modified local search algorithm on the new graph $G$ and edge weight function. Its full description is given by Algorithm \ref{alg:MMALocalSearch}. \begin{algorithm} \caption{Non-Oblivious Local Search ($G=(V,E),c^*,\varepsilon$)}\label{alg:MMALocalSearch} \begin{algorithmic}[1] \STATE $i\leftarrow 0$ and choose an arbitrary $S_0\subseteq V$. \WHILE {$\exists u \in V \text{ such that } \text{agree}_{\mathcal{C}_{S_i}}(u) < (\nicefrac[]{1}{2}-\varepsilon)c^*$} \STATE $\text{move }u \text{ to the other side of the cut } S_i$ and denote the resulting cut by $S_{i+1}$.
\STATE $i\leftarrow i + 1$. \ENDWHILE \STATE output $ \mathcal{C}_{S_i}$. \end{algorithmic} \end{algorithm}
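For concreteness, Algorithm \ref{alg:MMALocalSearch} can be sketched as follows (a minimal Python rendering under our own representation assumptions: edges carry a weight and a $\pm 1$ label, and agreements are recomputed from scratch rather than maintained incrementally):

```python
# Minimal sketch of the non-oblivious local search.  edges maps
# frozenset({u, v}) -> (weight, label), with label +1 for a "+" edge
# (agrees when u, v are on the same side) and -1 for a "-" edge.
def local_search(nodes, edges, c_star, eps):
    side = {u: 0 for u in nodes}          # arbitrary initial cut S_0

    def agree(u):
        total = 0.0
        for e, (w, lab) in edges.items():
            if u not in e:
                continue
            v = next(x for x in e if x != u)
            same = side[u] == side[v]
            if (lab == 1 and same) or (lab == -1 and not same):
                total += w
        return total

    threshold = (0.5 - eps) * c_star
    while True:
        mover = next((u for u in nodes if agree(u) < threshold), None)
        if mover is None:
            return side                   # every node meets the threshold
        side[mover] = 1 - side[mover]     # move u across the cut
```

By the potential argument below, the loop performs at most $\nicefrac[]{n}{(2\varepsilon)}$ moves on the reweighted instance.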
Clearly, once Algorithm \ref{alg:MMALocalSearch} terminates: $\text{agree}_{\mathcal{C}_{S_i}}(u)\geq (\nicefrac[]{1}{2}-\varepsilon)c^*$ for every node $ u\in V$. Thus, the output is a $\nicefrac[]{1}{(2+\varepsilon')}$-approximation for {{\textsf{Max Min Agreements}}}, where $\varepsilon'=\nicefrac[]{4\varepsilon}{(1-2\varepsilon)}$. All that remains is to prove that Algorithm \ref{alg:MMALocalSearch} terminates after a polynomial number of iterations.
For any $S\subseteq V$ define the potential $\Phi_{\mathcal{C}_S} \triangleq \sum _{u\in V}\text{agree}_{\mathcal{C}_S}(u)$ as before. Note that: \begin{align} \max _{S\subseteq V}\left\{ \Phi_{\mathcal{C}_S}\right\} \stackrel{(1)}{\leq} 2\sum _{e\in E}c(e) \stackrel{(2)}{\leq} 2\sum _{u\in T}c(u) \stackrel{(3)}{\leq} 2n c^*~.\label{PotentialBound} \end{align}
Inequality $(1)$ follows from the observation that $\text{agree}_{\mathcal{C}_S}(u)\leq c(u)$ for every $u\in V$, and thus the total potential can never exceed twice the total weight of edges in the graph. Inequality $(2)$ follows from the fact that $E(\overline{T})=\emptyset$, whereas inequality $(3)$ follows from the definition of $T$ and the fact that $ |T|\leq n$. Therefore, we can conclude that the potential $\Phi_{\mathcal{C}_S}$ is upper bounded by $2nc^*$.
Now we claim that in every iteration of Algorithm \ref{alg:MMALocalSearch} the potential $ \Phi_{\mathcal{C}_S}$ must increase by at least $4\varepsilon c^*$. Fix an iteration $i$ and let $u$ be the node that was moved in this iteration. Note that: \begin{align} \Phi _{\mathcal{C}_{S_{i+1}}} - \Phi _{\mathcal{C}_{S_{i}}} & \stackrel{(4)}{=} 2\left( \text{agree}_{\mathcal{C}_{S_{i+1}}}(u) - \text{agree}_{\mathcal{C}_{S_{i}}}(u)\right) \nonumber\\ & \stackrel{(5)}{=} 2\left( c(u) - 2 \cdot \text{agree}_{\mathcal{C}_{S_{i}}}(u)\right) \nonumber\\ & \stackrel{(6)}{\geq} 4\varepsilon c^*~.\label{PotentialGain} \end{align} Equality $(4)$ follows from the definition of the potential. Since it is always the case that $\text{agree}_{\mathcal{C}_{S_{i}}}(u)+\text{disagree}_{\mathcal{C}_{S_{i}}}(u) =c(u)$, and the values of $\text{agree}_{\mathcal{C}_{S_{i}}}(u)$ and $\text{disagree}_{\mathcal{C}_{S_{i}}}(u)$ are swapped once $u$ is moved, {\it i.e.}, $ \text{agree}_{\mathcal{C}_{S_{i+1}}}(u)=\text{disagree}_{\mathcal{C}_{S_{i}}}(u)$, we can conclude that equality $(5)$ is true. Inequality $(6)$ holds since $c(u)\geq c^*$ and the reason $u$ was moved in iteration $i$ is that $\text{agree}_{\mathcal{C}_{S_{i}}}(u)<\left( \nicefrac[]{1}{2}-\varepsilon\right) c^*$. Combining (\ref{PotentialBound}) and (\ref{PotentialGain}) proves that Algorithm \ref{alg:MMALocalSearch} terminates after at most $\nicefrac[]{n}{(2\varepsilon)}$ iterations.
$\square$
\end{proof}
\section{Proof of Theorem \ref{thrm:IntegralityGapMMA}}\label{app:IntegralityGapMMA} We now prove that the same integrality gap example used in proving Theorem \ref{thrm:IntegralityGapMMD} also applies to Theorem \ref{thrm:IntegralityGapMMA}.
\begin{proof}[of Theorem \ref{thrm:IntegralityGapMMA}] Let $G$ be the unweighted cycle on $n$ vertices, where all edges are labeled $+$ and one edge is labeled $-$. Specifically, denote the vertices of $G$ by $\left\{ v_1,v_2,\ldots,v_n\right\}$ where there is an edge $(v_i,v_{i+1})\in E^+$ for every $i=1,\ldots,n-1$ and additionally the edge $(v_n,v_1)\in E^-$.
First, we prove that the value of any integral solution is at most $1$. The clustering that places all of $V$ in a single cluster has value $1$, as both $v_1$ and $v_n$ have exactly one correctly classified edge touching them, {\it i.e.}, one agreement each. Moreover, one can easily verify that any clustering into two or more clusters also has value at most $1$. Thus, any integral solution for the above instance has value at most $1$.
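The claim that no clustering achieves value greater than $1$ can be checked by brute force for small $n$ (our own verification script, not part of the proof):

```python
from itertools import product

# Brute-force check that on the n-cycle with unit weights, "+" edges
# (v_i, v_{i+1}) and a single "-" edge (v_n, v_1), every clustering has
# min-agreement value at most 1.  Vertices are 0-indexed here.
def best_integral_value(n):
    edges = [((i, i + 1), 1) for i in range(n - 1)] + [((n - 1, 0), -1)]
    best = 0
    # Enumerate clusterings as label assignments; n labels suffice.
    for labels in product(range(n), repeat=n):
        agree = [0] * n
        for (u, v), lab in edges:
            same = labels[u] == labels[v]
            if (lab == 1 and same) or (lab == -1 and not same):
                agree[u] += 1
                agree[v] += 1
        best = max(best, min(agree))
    return best
```

Satisfying every edge would force all vertices into one cluster (via the $+$ edges), which violates the $-$ edge, so the minimum agreement can never reach $2$.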
Consider the natural linear programming relaxation for {{\textsf{Max Min Agreements}}}: \begin{align*} \max ~~~ & \min _{u\in V}\left\{ A(u)\right\} & \\ & \sum _{v:(u,v)\in E^+}c_{u,v}(1-d\left( u,v\right)) + \sum _{v:(u,v)\in E^-} c_{u,v}d\left( u,v \right) = A(u) & \forall u\in V \\ & d(u,v) + d(v,w) \geq d(u,w) & \forall u,v,w\in V \\ & A(u)\geq 0, ~0\leq d(u,v) \leq 1 & \forall u,v\in V \end{align*} Let us construct a fractional solution. Assign a length of $\nicefrac[]{1}{n}$ to every $+$ edge and a length of $1-\nicefrac[]{1}{n}$ to the single $-$ edge, and let $d$ be the shortest path metric in $G$ induced by these lengths. Obviously, the triangle inequality is satisfied and one can verify that $d(u,v)\leq 1$ for all $u,v\in V$. Consider a vertex $v_i$ that does not touch the $-$ edge, {\it i.e.}, $i=2,\ldots,n-1$. Such a $v_i$ has two $+$ edges touching it, both of length $\nicefrac[]{1}{n}$, hence $A(v_i)=2-\nicefrac[]{2}{n}$. Focusing on $v_1$ and $v_n$, each is touched by one $+$ edge of length $\nicefrac[]{1}{n}$ and by the single $-$ edge of length $1-\nicefrac[]{1}{n}$. Hence, $A(v_1)=A(v_n)=2-\nicefrac[]{2}{n}$. Therefore, since the fractional solution has value $2-\nicefrac[]{2}{n}$ while any integral solution has value at most $1$, the above instance has an integrality gap of $\nicefrac[]{n}{(2(n-1))}$.
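The arithmetic behind $A(u)=2-\nicefrac[]{2}{n}$ can be verified numerically (again our own script; edge lengths as assigned above):

```python
# Evaluate A(v_i) for the fractional LP solution on the n-cycle:
# every "+" edge has length 1/n, the single "-" edge (v_n, v_1) has
# length 1 - 1/n, and all edge weights are 1.
def lp_values(n):
    plus_len = 1.0 / n
    minus_len = 1.0 - 1.0 / n
    A = {}
    for i in range(1, n + 1):
        a = 0.0
        for j in (i - 1, i + 1):          # the two cycle neighbours of v_i
            j = (j - 1) % n + 1           # wrap around the cycle
            if {i, j} == {1, n}:
                a += minus_len            # "-" edge contributes c * d(u,v)
            else:
                a += 1.0 - plus_len       # "+" edge contributes c * (1 - d(u,v))
        A[i] = a
    return A
```

Every vertex, including $v_1$ and $v_n$, evaluates to exactly $2-\nicefrac[]{2}{n}$.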
Now, consider the natural semi-definite relaxation for {{\textsf{Max Min Agreements}}}, where each vertex $u$ corresponds to a unit vector ${\mathbf{y}} _u$. Intuitively, if $S_1,\ldots, S_{\ell}$ is an integral clustering, then all vertices in cluster $S_j$ are assigned to the standard $j$\textsuperscript{th} unit vector, {\it i.e.}, ${\mathbf{e}} _j$. Hence, the natural semi-definite relaxation requires that all vectors lie in the same orthant, {\it i.e.}, for every $u$ and $v$: ${\mathbf{y}} _u \cdot {\mathbf{y}} _v\geq 0$, and that $\left\{ {\mathbf{y}} _u\right\} _{u\in V}$ satisfy the $\ell _2^2$ triangle inequality. \begin{align*} \max ~~~ & \min _{u\in V}\left\{ A(u)\right\} & \\ & \sum _{v:(u,v)\in E^+}c_{u,v}\left( {\mathbf{y}} _u \cdot {\mathbf{y}} _v\right) + \sum _{v:(u,v)\in E^-} c_{u,v}\left(1-{\mathbf{y}} _u \cdot {\mathbf{y}} _v\right) = A(u) & \forall u\in V \\
& || {\mathbf{y}} _u - {\mathbf{y}} _v||_2^2 + || {\mathbf{y}} _v - {\mathbf{y}} _w||_2^2 \geq || {\mathbf{y}} _u - {\mathbf{y}} _w||_2^2 & \forall u,v,w\in V \\ & {\mathbf{y}} _u \cdot {\mathbf{y}} _u = 1 & \forall u\in V \\ & {\mathbf{y}} _u \cdot {\mathbf{y}} _v \geq 0 & \forall u,v\in V \end{align*}
In order to construct a fractional solution, it will be helpful to consider the positive semi-definite matrix $Y\in \mathbb{R}^{V\times V}$ of all inner products of $\left\{ {\mathbf{y}} _{v_i}\right\} _{i=1}^n$, {\it i.e.}, $Y_{v_i,v_j}={\mathbf{y}} _{v_i}\cdot {\mathbf{y}} _{v_j}$. Intuitively, we consider a collection of integral solutions and construct the corresponding $Y$ matrix for each. At the end, our fractional solution will be the average of all these $Y$ matrices.
Consider the following $n-1$ integral solutions, each having only two clusters, where the first cluster consists of $\left\{ v_1,\ldots,v_i\right\}$ and the second contains $\left\{ v_{i+1},\ldots,v_n\right\}$ (here $i=1,\ldots,n-1$). Fixing $i$ and using the above translation of an integral solution to a feasible solution for the semi-definite relaxation, we assign each $v_j$, where $j=1,\ldots,i$, to ${\mathbf{e}} _1$ and each $v_j$, where $j=i+1,\ldots,n$, to ${\mathbf{e}} _2$. Let $Y^i$ be the resulting (positive semi-definite) inner product matrix. Additionally, consider one more integral solution that consists of a single cluster containing all of $V$. In this case, the above translation assigns every $v_i$ to ${\mathbf{e}} _1$. Denote by $Y^n$ the resulting (positive semi-definite) inner product matrix. Clearly, each of $Y^1,\ldots,Y^n$ defines a feasible solution for the above natural semi-definite relaxation.
Our fractional solution is given by the average of all the above inner product matrices: $\overline{Y}\triangleq \frac{1}{n}\sum _{i=1}^n Y^i$. Obviously, $\overline{Y}$ defines a feasible solution for the above natural semi-definite relaxation. Note that ${\mathbf{y}} _{v_1} \cdot {\mathbf{y}} _{v_n} = \frac{n-1}{n}\cdot 0 + \frac{1}{n}\cdot 1 = \frac{1}{n}$ and that ${\mathbf{y}} _{v_i} \cdot {\mathbf{y}} _{v_{i+1}}=\frac{n-1}{n}\cdot 1 + \frac{1}{n}\cdot 0 = \frac{n-1}{n}$, for every $i=1,\ldots,n-1$. Therefore, we can conclude that: \begin{align*} A(v_i) & = 2\left(\frac{n-1}{n}\right)=2-\frac{2}{n} &\forall i=2,\ldots,n-1 \\ A(v_1) & =A(v_n) =1-\frac{1}{n}+\left(\frac{n-1}{n}\right)=2-\frac{2}{n} & \end{align*}
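The averaged matrix $\overline{Y}$ and the stated inner products can be verified numerically (our own NumPy sketch; cluster $j$ is mapped to the unit vector ${\mathbf{e}}_j$ as above):

```python
import numpy as np

# Build Y-bar = (1/n) * sum_i Y^i for the n-cycle instance: n-1 two-cluster
# splits {v_1..v_i} / {v_{i+1}..v_n}, plus the single all-in-one clustering.
def averaged_gram(n):
    mats = []
    for i in range(1, n):
        x = np.zeros((n, 2))
        x[:i, 0] = 1.0                    # first cluster -> e_1
        x[i:, 1] = 1.0                    # second cluster -> e_2
        mats.append(x @ x.T)              # Gram matrix of the assignment
    mats.append(np.ones((n, n)))          # single cluster: all vectors e_1
    return sum(mats) / n
```

Each $Y^i$ is positive semi-definite, so their average is as well, and its entries match the values $\nicefrac[]{1}{n}$ and $\nicefrac[]{(n-1)}{n}$ computed above.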
This demonstrates that the above instance also has an integrality gap of $\nicefrac[]{n}{(2(n-1))}$ for the natural semi-definite relaxation.
$\square$ \end{proof}
\end{document}
A Level Playing Field? Empirical Evidence That Ethnic Minority Analysts Face Unequal Access to Corporate Managers
Flam, Rachel W.,Green, Jeremiah,Lee, Joshua A.,Sharp, Nathan Y.
Given the lack of diversity among senior executives of U.S. public companies, we investigate whether ethnic minority analysts face unique barriers to management access. We find managers are less likely to select minority analysts to participate in the Q&A session of public earnings conference calls, and minority analysts selected to participate receive lower levels of prioritization and engagement than non-minority analysts. Minority analysts' access to management does not improve over time or with companies recognized for workplace diversity. The consequences of unequal treatment extend beyond conference calls, as investors are less likely to vote for minorities as Institutional Investor All-Stars.
A Note on the Impossibility of Correctly Calibrating the Current Exposure Method for Large OTC Derivatives Portfolios
Murphy, David
The capital charges for counterparty credit risk form an important part of the Basel Capital Accords. The Basel Committee permits firms to use a variety of methods to calculate regulatory capital on this risk class, including a simple approach â€" the constant exposure method or CEM â€" and a more sophisticated models-based approach known as EPE (for 'expected positive exposure').Counterparty credit risk capital models estimate the potential future exposure ('PFE') of a portfolio of derivatives with a counterparty based on whatever margining scheme applies. The CEM approximates this PFE using a constant percentage of notional, with the portfolio capital charge being the sum of the percentages which apply to each instrument. The CEM therefore recognizes no diversification benefit. In contrast, EPE approaches model the entire future of the net portfolio and thus provide much more accurate estimates for portfolios with more than a handful of instruments. The inaccuracy of the CEM is hardly surprising as it was intended only for smaller portfolios and less sophisticated firms.More recently the Basel Committee has proposed that the CEM be used as a method for determining the adequacy of financial resources available to an OTC derivatives central counterparty ('CCP'). Since cleared portfolios are very large and very well-hedged, it might be imagined that the CEM is not well suited to this task. This paper confirms that suspicion. In particular we show that the use of the CEM to estimate the riskiness of CCP default fund contributions leads to a significant overstatement of risk. Further, we show that the CEM cannot be simply recalibrated to provide a more risk sensitive approach. Thus an approach which provides more accurate estimates for typical CCPs is to be preferred.
Adversarial Robustness of Deep Convolutional Candlestick Learner
Jun-Hao Chen,Samuel Yen-Chi Chen,Yun-Cheng Tsai,Chih-Shiang Shur
Deep learning (DL) has been applied extensively in a wide range of fields. However, it has been shown that DL models are susceptible to a certain kinds of perturbations called \emph{adversarial attacks}. To fully unlock the power of DL in critical fields such as financial trading, it is necessary to address such issues. In this paper, we present a method of constructing perturbed examples and use these examples to boost the robustness of the model. Our algorithm increases the stability of DL models for candlestick classification with respect to perturbations in the input data.
An Impulse-Regime Switching Game Model of Vertical Competition
René Aïd,Luciano Campi,Liangchen Li,Mike Ludkovski
We study a new kind of non-zero-sum stochastic differential game with mixed impulse/switching controls, motivated by strategic competition in commodity markets. A representative upstream firm produces a commodity that is used by a representative downstream firm to produce a final consumption good. Both firms can influence the price of the commodity. By shutting down or increasing generation capacities, the upstream firm influences the price with impulses. By switching (or not) to a substitute, the downstream firm influences the drift of the commodity price process. We study the resulting impulse--regime switching game between the two firms, focusing on explicit threshold-type equilibria. Remarkably, this class of games naturally gives rise to multiple Nash equilibria, which we obtain via a verification based approach. We exhibit three types of equilibria depending on the ultimate number of switches by the downstream firm (zero, one or an infinite number of switches). We illustrate the diversification effect provided by vertical integration in the specific case of the crude oil market. Our analysis shows that the diversification gains strongly depend on the pass-through from the crude price to the gasoline price.
Auditors and the Principal-Principal Agency Conflict in Family-Controlled Firms
Ben Ali, Chiraz,Boubaker, Sabri,Magnan, Michel
This paper examines whether multiple large shareholders (MLS) affect audit fees in firms where the largest controlling shareholder (LCS) is a family. Results show that there is a negative relationship between audit fees and the presence, number, and voting power of MLS. This is consistent with the view that auditors consider MLS as playing a monitoring role over the LCS, mitigating the potential for expropriation by the LCS. Therefore, our evidence suggests that auditors reduce their audit risk assessment and audit effort and ultimately audit fees in family-controlled firms with MLS.
COVID-19 Pandemic and Global Financial Market Interlinkages: A Dynamic Temporal Network Analysis
Chakrabarti, Prasenjit,Jawed, Mohammad Shameem,Sarkhel, Manish
This study uses network theory to investigate the change in the dynamics of the financial markets of G20 countries, in the aftermath of COVID-19. The sheer scale, scope, and nature of the disruptions brought by the pandemic makes it an unprecedented global event. We find a major change in the structure of market linkages, departing from their pre-crisis behavior, both advanced and emerging markets form a tightly coupled close community after the disease outbreak. Chinese market shows a divergence by distancing itself from the rest of the cohort. This has significant implications on the geographical portfolio diversification strategies and benefits
COVID-19 Pandemic and Stock Market Response: A Culture Effect
Fernandez-Perez, Adrian,Gilbert, Aaron B.,Indriawan, Ivan,Nguyen, Nhut (Nick) Hoang
National culture has been shown to impact the way investors, firm managers, and markets in their entirety respond to different situations and events. The psychology literature, however, notes that in terms of crisis, old behaviors and norms can get replaced by new norms as groups adjust to the new situation. To date, no one has looked at the impact of culture on market responses to disasters. This paper is the first to address the effect of national culture on stock market responses to a global health disaster. We find larger declines and greater volatilities for stock markets in countries with higher uncertainty avoidance, lower individualism, and greater experience with disease-causing pathogens during the first three weeks after the confirmation of the first COVID-19 case within a country. Our results are robust after controlling for a number of variables, including investor fear, cumulative infected cases, the stringency of government response policies, the 2003 SARS experience, the level of democracy, political corruption, and trade openness.
COVID-19: Guaranteed Loans and Zombie Firms
Zoller-Rydzek, Benedikt,Keller, Florian
Based on the ZHAW Managers Survey (7-13 April 2020) we evaluate firm reactions towards the COVID-19 crisis. We find that the Swiss economic lockdown measures successfully froze the economy, i.e., firms show very little pro-active reactions towards the crisis, but drastically decrease their business activities. The firms in the survey report that the decline in foreign demand is the single most important reasons for their deteriorating business situation. The only significant pro-active reactions to mitigate the crisis are increased digitalization efforts. These efforts are expected to have a long-lasting impact on firms' performance due to a selection effect, i.e., firms with more positive experience of digitialization will maintain their higher levels of digitalization even after the crisis. In general we find that firms that faced a more difficult business situation before the crisis are affected more severely during the crisis. Moreover, we investigate the impact of the Swiss federal loan program (Bundeshilfe) on the business activities of Swiss firms. Specifically, we focus on the take up of firms and its interaction with the perceived business situation before and during the COVID-19 crisis. To this end, we develop a stylized theoretical model of financially constrained heterogeneous firms. We find that policy makers face a trade-off between immediate higher unemployment rates and long-term higher public spending. The former arises from a combination of a too strong economic impact of the COVID-19 lockdown and too low levels of loans provided by the government to financially distressed firms. Nevertheless, providing (too) high levels of loans to firms might create zombie firms that are going to default on their debt in the future leading to an increase in public spending.
Changes in Ownership Structure; Conversions of Mutual Savings and Loans to Stock Charter
Masulis, Ronald W.
This study analyzes both the causes and effects of mutual S&L conversions to corporate charter. Changes in technology and government policies have substantially increased S&L competition, riskbearing, and potential scale and scope economies. Evidence indicates that these changes have decreased the relative operating advantages of mutual S&Ls, encouraging conversions to stock charter. The S&L's financial and operating characteristics, which affect the success of the conversion effort, are also explored.
Coordinated Transaction Scheduling in Multi-Area Electricity Markets: Equilibrium and Learning
Mariola Ndrio,Subhonmesh Bose,Ye Guo,Lang Tong
Tie-line scheduling in multi-area power systems in the US largely proceeds through a market-based mechanism called Coordinated Transaction Scheduling (CTS). We analyze this market mechanism through a game-theoretic lens. Our analysis characterizes the effects of market liquidity, market participants' forecasts about inter-area price spreads, transaction fees and interaction of CTS markets with financial transmission rights. Using real data, we empirically verify that CTS bidders can employ simple learning algorithms to discover Nash equilibria that support the conclusions drawn from the equilibrium analysis.
Corporate Social Responsibility and Foreign Institutional Investor Heterogeneity
Roy, Partha P.,Rao, Sandeep,Marshall, Andrew P.,Thapa, Chandra
We investigate the nexus between corporate social responsibility (CSR) and ownership of foreign institutional investors (FII). Using a quasi-natural experiment setup of mandated CSR regulation in India, the aggregated examination shows that firms complying with mandated CSR activities (CSR firms) attract more FII ownership (FIO) compared to firms which do not comply. However, relative to all other legal origins, FII from civil law origin countries seem to invest more in CSR firms. Evidence also suggests that independent and long term FII tend to be more drawn towards CSR firms, relative to all other types of FII. Results further indicate that host firms spending more on educational projects as part of their CSR engagement seem to attract higher FIO. Finally, firms which attract greater FIO, as a result of complying with mandated CSR activities, appear to attain higher market valuations.
Denise: Deep Learning based Robust PCA for Positive Semidefinite Matrices
Calypso Herrera,Florian Krach,Anastasis Kratsios,Pierre Ruyssen,Josef Teichmann
The robust PCA of high-dimensional matrices plays an essential role when isolating key explanatory features. The currently available methods for performing such a low-rank plus sparse decomposition are matrix specific, meaning, the algorithm must re-run each time a new matrix should be decomposed. Since these algorithms are computationally expensive, it is preferable to learn and store a function that instantaneously performs this decomposition when evaluated. Therefore, we introduce Denise, a deep learning-based algorithm for robust PCA of symmetric positive semidefinite matrices, which learns precisely such a function. Theoretical guarantees that Denise's architecture can approximate the decomposition function, to arbitrary precision and with arbitrarily high probability, are obtained. The training scheme is also shown to convergence to a stationary point of the robust PCA's loss-function. We train Denise on a randomly generated dataset, and evaluate the performance of the DNN on synthetic and real-world covariance matrices. Denise achieves comparable results to several state-of-the-art algorithms in terms of decomposition quality, but as only one evaluation of the learned DNN is needed, Denise outperforms all existing algorithms in terms of computation time.
Design Choices in Central Clearing: Issues Facing Small Advanced Economies
Murphy, David,Budding, Edwin
For some contracts traded between some institutions, central clearing is becoming mandatory. Regulatory incentives are also being altered to encourage the use of CCPs where reasonably possible, and to ensure that where central clearing is not appropriate capital is held against the risks that arise. In this paper, we review some of the issues involved in deciding which transactions should be centrally cleared, where CCPs should be located, and how they should be designed, managed, and regulated. As derivatives reform progresses, the soundness of the central counterparties becomes more important to the soundness of the financial system, so these questions are important.
Distance to Headquarter and Real Estate Equity Performance
Milcheva, Stanimira,Yildirim, Yildiray,Zhu, Bing
We study the effect of geographic portfolio diversification of real estate firms on their investment performance before and after the global financial crisis (GFC). In addition to previously used dispersion metrics, we also account for the distance of the properties to the corporate headquarters. We document a notable shift in the non-market performance of real estate companies after the crisis. Pre-GFC, we do not find a difference in non-market performance across equities based on geographic diversification. Post-GFC, equities with high geographic dispersion significantly outperform the market, while firms with concentrated property holdings do not deliver a significant alpha. Increased real estate equity market sophistication and strong institutional presence can explain why this effect is only observed for dispersed small firms, those invested outside gateway metro areas, or companies with low institutional ownership.
Do Analysts Mind the GAAP? Evidence From the Tax Cuts and Jobs Act of 2017
Chen, Novia (Xi),Koester, Allison
This study examines the quality of analysts' GAAP-based earnings forecasts. Ideally, addressing this question requires events that have an ex ante estimable earnings impact, and affect GAAP earnings but not street earnings. The deferred tax adjustment as a result of a 2017 tax law change meets these criteria. Focusing on the fourth quarter of 2017 (2017Q4), we find that analysts' GAAP earnings forecasts and revisions fail to incorporate the vast majority of the deferred tax adjustment. We explore two potential explanations for this finding â€" task-specific complexity and lack of GAAP earnings forecasting effort. We find evidence consistent with the latter. Our final analyses consider two implications of our findings. First, despite analysts underreacting to the deferred tax adjustment, investors promptly impound the adjustment into stock prices at the legislative enactment date, indicating that analysts' GAAP earnings forecasts are not a good proxy for investor expectations of GAAP earnings during our sample period. Second, analysts who best incorporate the adjustment into their 2017Q4 GAAP earnings forecasts issue more accurate GAAP earnings forecasts for subsequent quarters, indicating that our inferences extend beyond a single quarter and account. Collectively, these findings have implications for research that relies on analysts' GAAP earnings forecasts to be of reasonable quality.
Do Private Household Transfers to the Elderly Respond to Public Pension Benefits? Evidence from Rural China
Plamen Nikolov,Alan Adelman
Ageing populations in developing countries have spurred the introduction of public pension programs to preserve the standard of living for the elderly. The often-overlooked mechanism of intergenerational transfers, however, can dampen these intended policy effects as adult children who make income contributions to their parents could adjust their behavior to changes in their parents' income. Exploiting a unique policy intervention in China, we examine using a difference-in-difference-in-differences (DDD) approach how a new pension program impacts inter vivos transfers. We show that pension benefits lower the propensity of receiving transfers from adult children in the context of a large middle-income country and we also estimate a small crowd-out effect. Taken together, these estimates fit the pattern of previous research in high-income countries, although our estimates of the crowd-out effect are significantly smaller than previous studies in both high-income and middle-income countries.
Double Deep Q-Learning for Optimal Execution
Brian Ning,Franco Ho Ting Lin,Sebastian Jaimungal
Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous time stochastic control to solve them. Here, we instead take a model free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected Neural Network trained using Experience Replay and Double DQN with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most stocks using the measures of (i) mean and median out-performance, (ii) probability of out-performance, and (iii) gain-loss ratios.
Dual Space Arguments Using Polynomial Roots in the Complex Plane: A Novel Approach to Deriving Key Statistical Results
Crack, Timothy Falcon,Osborne, Michael,Crack, Malcolm,Osborne, Mark
We present a canonical orthogonal decomposition of sample variance and its applications. Surprisingly, our decomposition arises naturally from a novel dual space argument using polynomial roots in the complex plane. Linking these two seemingly disparate literatures yields a new pathway to the derivation of key statistical results under standard assumptions. These results include the chi-squared distribution of the scaled sample variance, the loss of one degree of freedom (relative to sample size) in the sample variance, the distribution of Snedecor's F-test of differences in dispersion, the independence of the sample mean and sample variance, and the distribution of the one-sample Student-t test of the mean. We suggest several promising directions for future research using our dual space method.
Duality for optimal consumption under no unbounded profit with bounded risk
Michael Monoyios
We give a definitive treatment of duality for optimal consumption over the infinite horizon, in a semimartingale incomplete market satisfying no unbounded profit with bounded risk (NUPBR). Rather than base the dual domain on (local) martingale deflators, we use a class of supermartingale deflators such that deflated wealth plus cumulative deflated consumption is a supermartingale for all admissible consumption plans. This yields a strong duality, because the enlarged dual domain of processes dominated by deflators is naturally closed, without invoking its closure. In this way we automatically reach the bipolar of the set of deflators. We complete this picture by proving that the set of processes dominated by local martingale deflators is dense in our dual domain, confirming that we have identified the natural dual space. In addition to the optimal consumption and deflator, we characterise the optimal wealth process. At the optimum, deflated wealth is a supermartingale and a potential, while deflated wealth plus cumulative deflated consumption is a uniformly integrable martingale. This is the natural generalisation of the corresponding feature in the terminal wealth problem, where deflated wealth at the optimum is a uniformly integrable martingale. We use no constructions involving equivalent local martingale measures. This is natural, given that such measures typically do not exist over the infinite horizon and that we are working under NUPBR, which does not require their existence. The structure of the duality proof reveals an interesting feature compared with the terminal wealth problem. There, the dual domain is $L^{1}$-bounded, but here the primal domain has this property, and hence many steps in the duality proof show a marked reversal of roles for the primal and dual domains, compared with the proofs of Kramkov and Schachermayer.
Dynamic Clearing and Contagion in Financial Networks
Tathagata Banerjee,Alex Bernstein,Zachary Feinstein
In this paper we will consider a generalized extension of the Eisenberg-Noe model of financial contagion to allow for time dynamics of the interbank liabilities. Emphasis will be placed on the construction, existence, and uniqueness of the continuous-time framework and its formulation as a differential equation driven by the operating cash flows. Finally, the financial implications of time dynamics will be considered. The focus will be on how the dynamic clearing solutions differ from those of the static Eisenberg-Noe model.
Dynamic Horizon Specific Network Risk
Jozef Barunik,Michael Ellington
This paper examines the pricing of dynamic horizon specific network risk in the cross-section of stock returns. We suggest how to track such dynamic network connections on a daily basis using time-varying parameter vector auto-regressions. Empirically, we characterize the short-term and long-term risks from a large-scale dynamic network on all S&P500 constituents' return volatilities. Consistent with theory, we show that stocks with high sensitivities to dynamic network risk earn lower returns. A two-standard deviation increase in long-term (short-term) network risk loadings associate with a 14.73% (12.96%) drop in annualized expected returns.
Earnings Beta
Ellahie, Atif
The literature on 'cash flow' or 'earnings' beta is theoretically well-motivated in its use of fundamentals, instead of returns, to measure systematic risk. However, empirical measures of earnings beta based on either log-linearizing the return equation or log-linearizing the clean-surplus accounting identity are often difficult to construct. I construct simple earnings betas based on various measures of realized and expected earnings, and find that an earnings beta based on price-scaled expectations shocks performs consistently well in explaining the cross-section of returns over 1981–2017. I also examine the relation between different measures of beta and several firm characteristics that are either theoretically connected to systematic risk or are empirically associated with returns, and find evidence in support of the construct validity of an earnings beta based on price-scaled expectations shocks. Overall, the findings suggest that this easy-to-construct earnings beta can be suitable for future researchers requiring a measure of systematic risk.
Economics of carbon-dioxide abatement under an exogenous constraint on cumulative emissions
Ashwin K Seshadri
The fossil-fuel induced contribution to further warming over the 21st century will be determined largely by integrated CO2 emissions over time rather than the precise timing of the emissions, with a relation of near-proportionality between global warming and cumulative CO2 emissions. This paper examines optimal abatement pathways under an exogenous constraint on cumulative emissions. Least cost abatement pathways have the carbon tax rising at the risk-free interest rate, but if endogenous learning or climate damage costs are included in the analysis, the carbon tax grows more slowly. The inclusion of damage costs in the optimization leads to a higher initial carbon tax, whereas the effect of learning depends on whether it appears as an additive or multiplicative contribution to the marginal cost curve. Multiplicative models are common in the literature and lead to delayed abatement and a smaller initial tax. The required initial carbon tax increases with the cumulative abatement goal and is higher for lower interest rates. Delaying the start of abatement is costly owing to the increasing marginal abatement cost. Lower interest rates lead to higher relative costs of delaying abatement because these induce higher abatement rates early on. The fraction of business-as-usual (BAU) emissions avoided in optimal pathways increases for low interest rates and rapid growth of the abatement cost curve, which allows a lower threshold global warming goal to become attainable without overshoot in temperature. Each year of delay in starting abatement raises this threshold by an increasing amount, because the abatement rate increases exponentially with time.
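A minimal sketch of the least-cost tax logic described above, under an assumed linear marginal abatement cost MC(a) = c·a; the cost model, parameter values, and the `initial_tax` helper are illustrative, not the paper's:

```python
import numpy as np

# Toy illustration (not the paper's model): with linear marginal abatement cost
# MC(a) = c * a and a carbon tax rising at the interest rate,
# tau(t) = tau0 * exp(r * t), abatement along the path is a(t) = tau(t) / c.
# A fixed cumulative-abatement goal A over [0, T] then pins down the initial tax:
#   A = tau0 * (exp(r*T) - 1) / (r * c)  =>  tau0 = A * c * r / (exp(r*T) - 1).
def initial_tax(A, c, r, T):
    return A * c * r / np.expm1(r * T)

A, c, T = 100.0, 2.0, 50.0
# Lower interest rates require a higher initial tax, as the abstract states.
assert initial_tax(A, c, r=0.02, T=T) > initial_tax(A, c, r=0.05, T=T)
```

The assertion reflects the monotonicity of r/(exp(rT) - 1) in r, which is what makes the initial tax higher at lower discount rates.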
Endowment Performance and the Demise of the Multi-Asset-Class Model
Ennis, Richard
Endowment funds large and small underperform passive investment. Moreover, an analysis of the performance of 41 of the largest individual endowments over the 11 years ended June 30, 2019, reveals that none outperformed with statistical significance, while one in four underperformed with statistical significance. The multi-asset-class approach to institutional investing has failed to deliver diversification benefits and has had an adverse effect on endowment performance. Given prevailing diversification patterns and costs of 1 to 2% of assets, it is likely that the great majority of endowment funds will underperform in the years ahead.
Equal Risk Pricing of Derivatives with Deep Hedging
Alexandre Carbonneau,Frédéric Godin
This article presents a deep reinforcement learning approach to price and hedge financial derivatives. This approach extends the work of Guo and Zhu (2017) who recently introduced the equal risk pricing framework, where the price of a contingent claim is determined by equating the optimally hedged residual risk exposure associated respectively with the long and short positions in the derivative. Modifications to the latter scheme are considered to circumvent theoretical pitfalls associated with the original approach. Derivative prices obtained through this modified approach are shown to be arbitrage-free. The current paper also presents a general and tractable implementation for the equal risk pricing framework inspired by the deep hedging algorithm of Buehler et al. (2019). An $\epsilon$-completeness measure allowing for the quantification of the residual hedging risk associated with a derivative is also proposed. The latter measure generalizes the one presented in Bertsimas et al. (2001) based on the quadratic penalty. Monte Carlo simulations are performed under a large variety of market dynamics to demonstrate the practicability of our approach, to perform benchmarking with respect to traditional methods and to conduct sensitivity analyses.
Estimating Full Lipschitz Constants of Deep Neural Networks
Calypso Herrera,Florian Krach,Josef Teichmann
We estimate the Lipschitz constants of the gradient of a deep neural network and the network itself with respect to the full set of parameters. We first develop estimates for a deep feed-forward densely connected network and then, in a more general framework, for all neural networks that can be represented as solutions of controlled ordinary differential equations, where time appears as continuous depth. These estimates can be used to set the step size of stochastic gradient descent methods, which is illustrated for one example method.
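As a rough illustration of this kind of estimate, the sketch below checks the classical spectral-norm product bound for the input-Lipschitz constant of a toy ReLU network; this generic bound is not the paper's sharper, parameter-wise estimate, and the weights are random:

```python
import numpy as np

# Generic upper bound: for a feed-forward network with 1-Lipschitz activations
# such as ReLU, the Lipschitz constant with respect to the *input* is at most
# the product of the layers' spectral norms.
rng = np.random.default_rng(1)
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]  # random toy weights
bound = np.prod([np.linalg.norm(W, 2) for W in Ws])      # spectral-norm product

def f(x):
    h = np.maximum(Ws[0] @ x, 0.0)  # hidden ReLU layer
    return Ws[1] @ h

# Empirical difference quotients never exceed the bound.
for _ in range(200):
    x, z = rng.normal(size=4), rng.normal(size=4)
    assert np.linalg.norm(f(x) - f(z)) <= bound * np.linalg.norm(x - z) + 1e-9
```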
Explicit option valuation in the exponential NIG model
Jean-Philippe Aguilar
We provide closed-form pricing formulas for a wide variety of path-independent options, in the exponential L\'evy model driven by the Normal inverse Gaussian process. The results are obtained in both the symmetric and asymmetric model, and take the form of simple and quickly convergent series, under some condition involving the log-forward moneyness and the maturity of instruments. Proofs are based on a factorized representation in the Mellin space for the price of an arbitrary path-independent payoff, and on tools from complex analysis. The validity of the results is assessed thanks to several comparisons with standard numerical methods (Fourier-related inversion, Monte-Carlo simulations) for realistic sets of parameters. Precise bounds for the convergence speed and the truncation error are also provided.
Fast calibration of two-factor models for energy option pricing
Emanuele Fabbiani,Andrea Marziali,Giuseppe De Nicolao
Energy companies need efficient procedures to perform market calibration of stochastic models for commodities. If the Black framework is chosen for option pricing, the bottleneck of market calibration is the computation of the variance of the asset. For energy commodities, it is common to adopt multi-factor linear models, whose variance obeys a matrix Lyapunov differential equation. In this paper, both analytical and numerical methods to derive the variance through a Lyapunov equation are discussed and compared in terms of computational efficiency. The Lyapunov approach illustrated herein is more straightforward than ad-hoc derivations found in the quantitative finance literature and can be readily extended by a practitioner to higher-dimensional models. A practical case study is presented, where the variance of a two-factor mean-reverting model is embedded into the Black formulae and the model parameters are then calibrated against listed options. In particular, the analytical and numerical methods are compared, showing that the former makes the calibration 14 times faster. A Python implementation of the proposed procedures is available as open-source software on GitHub.
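A minimal numerical sketch of the Lyapunov approach for a two-factor mean-reverting model; the parameters are made up and this is not the paper's calibration code:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-factor mean-reverting model (parameters made up):
#   dX_i = -k_i X_i dt + sigma_i dW_i, with instantaneous correlation rho.
# The covariance Sigma(t) of X solves the matrix Lyapunov ODE
#   dSigma/dt = A Sigma + Sigma A' + Q,  with A = -diag(k1, k2).
k1, k2, s1, s2, rho = 1.5, 0.2, 0.3, 0.1, 0.4
A = -np.diag([k1, k2])
Q = np.array([[s1**2, rho * s1 * s2],
              [rho * s1 * s2, s2**2]])

def rhs(t, y):
    S = y.reshape(2, 2)
    return (A @ S + S @ A.T + Q).ravel()

T = 2.0  # maturity in years
sol = solve_ivp(rhs, (0.0, T), np.zeros(4), rtol=1e-10, atol=1e-12)
Sigma_num = sol.y[:, -1].reshape(2, 2)

# Closed form is available here because A is diagonal:
#   Sigma_ij(T) = Q_ij (1 - exp(-(k_i + k_j) T)) / (k_i + k_j)
ks = np.array([k1, k2])
ksum = ks[:, None] + ks[None, :]
Sigma_ana = Q * (1.0 - np.exp(-ksum * T)) / ksum
assert np.allclose(Sigma_num, Sigma_ana, atol=1e-6)
```

The total variance entering the Black formula would be obtained by summing the entries of `Sigma_ana` (for the sum of the two factors); the numerical route generalises directly to non-diagonal drift matrices.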
Foreign Exchange and the Capital Market Dynamics: New Evidence from Non-linear Autoregressive Distributed Lag Model
Omoregie, Osaretin Kayode
The purpose of this study was to investigate and analyze the relationship between foreign exchange and capital market dynamics in Nigeria from January 1999 to February 2018. The study deployed the non-linear ARDL model to study the dynamics of the exchange rate and the capital market in Nigeria. The research outcome revealed that a rise (fall) in the all-share index is related to real exchange rate depreciation (appreciation), while real exchange rate depreciation (appreciation) is associated with an increase (decrease) in the all-share index. Besides, the research outcome also showed the presence of time-specific long-run, bi-directional, and unidirectional causality, with stronger interrelation after the Global Financial Crisis. The study recommends that, to properly hedge and diversify portfolios against potential risk in these two markets, market players need to understand the dynamics between them.
From Free Markets to Fed Markets: How Unconventional Monetary Policy Distorts Equity Markets
Putniņš, Tālis J.
In response to the COVID-19 pandemic, the US Federal Reserve almost doubled its balance sheet by adding $3 trillion of assets in the space of three months constituting the most aggressive unconventional monetary policy on record. We show that these actions had a substantial effect on stock markets, accounting for one-third of the rebound in markets since March 2020 (increasing returns by 11-13%) and contributing to the apparent disconnect between stock prices and the real economy. Using dynamic time-series models, we characterize the strong bi-directional symbiotic relation between the Fed's balance sheet and stock markets.
Generating Realistic Stock Market Order Streams
Junyi Li,Xitong Wang,Yaoyang Lin,Arunesh Sinha,Michael P. Wellman
We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks (GANs). Our Stock-GAN model employs a conditional Wasserstein GAN to capture history dependence of orders. The generator design includes specially crafted aspects including components that approximate the market's auction mechanism, augmenting the order history with order-book constructions to improve the generation task. We perform an ablation study to verify the usefulness of aspects of our network structure. We provide a mathematical characterization of distribution learned by the generator. We also propose statistics to measure the quality of generated orders. We test our approach with synthetic and actual market data, compare to many baseline generative models, and find the generated data to be close to real data.
Hedging-Induced Correlation in Illiquid Markets
Brøgger, Søren Bundgaard
I develop a model with two assets in which the hedging activity of derivatives dealers, interacting with market illiquidity, distorts the covariance structure of the market. I apply the model to hedging of counterparty risk, and find strong support for the model's key predictions. Using evidence from Japan, I show that hedging of counterparty risk associated with currency swap portfolios drives a strong, non-fundamental correlation between credit and currency markets. The effects are economically significant. For example, I estimate that counterparty risk hedging associated with SoftBank's FX swap portfolio accounts for 25% of the weekly volatility of SoftBank CDS returns.
Integrated ridesharing services with chance-constrained dynamic pricing and demand learning
Tai-Yu Ma,Sylvain Klein
The design of integrated mobility-on-demand services requires jointly considering the interactions between traveler choice behavior and operators' operation policies to design a financially sustainable pricing scheme. However, most existing studies focus on the supply side perspective, disregarding the impact of customer choice behavior in the presence of co-existing transport networks. We propose a modeling framework for dynamic integrated mobility-on-demand service operation policy evaluation with two service options: door-to-door rideshare and rideshare with transit transfer. A new constrained dynamic pricing model is proposed to maximize operator profit, taking into account the correlated structure of different modes of transport. User willingness to pay is considered as a stochastic constraint, resulting in a more realistic ticket price setting while maximizing operator profit. Unlike most studies, which assume that travel demand is known, we propose a demand learning process to calibrate customer demand over time based on customers' historical purchase data. We evaluate the proposed methodology through simulations under different scenarios on a test network by considering the interactions of supply and demand in a multimodal market. Different scenarios in terms of customer arrival intensity, vehicle capacity, and the variance of user willingness to pay are tested. Results suggest that the proposed chance-constrained assortment price optimization model allows increasing operator profit while keeping the proposed ticket prices acceptable.
Investor-Driven Governance Standards and Firm Value
Ertimur, Yonca,Patrick, Paige Harrington
We operationalize a corporate governance framework developed and promoted by a diverse, influential group of institutional investors. We find positive associations between consistency with the proposed framework and firm value for smaller firms in recent years, and negative associations for S&P 500 firms in recent years. We detect some evidence of improved monitoring outcomes for firms whose governance provisions are more consistent with provisions eventually included in the framework. However, we do not find that measures of consistency with the framework are more strongly associated with firm outcomes than the much simpler entrenchment index is.
Is Gold a Hedge or Safe Haven Asset during COVID-19 Crisis?
Akhtaruzzaman, Md,Boubaker, Sabri,Lucey, Brian M.,Sensoy, Ahmet
The COVID-19 pandemic has shaken the global financial markets. Our study examines the role of gold as a safe haven asset during the different phases of this COVID-19 crisis by utilizing an intraday dataset. The empirical findings show that dynamic conditional correlations (DCCs) between intraday gold and international equity returns (S&P500, Euro Stoxx 50, Nikkei 225, and China FTSE A50 indices) are negative during Phase I (December 31, 2019 to March 16, 2020) of the COVID-19 pandemic, indicating that gold is a safe haven asset for these stock markets. However, gold has lost its property as a safe haven asset for these markets during Phase II (March 17 to April 24, 2020). The optimal weights of gold in the portfolios of S&P500, Euro Stoxx 50, Nikkei 225 and WTI crude oil have significantly increased during Phase II, suggesting that investors have increased the optimal weights of gold as 'flight-to-safety assets' during the crisis period. The results also show that hedging costs have significantly increased during Phase II. The hedging effectiveness (HE) index shows that the hedge is effective for portfolios containing gold and major financial assets. Our results are robust to alternative specifications of the DCC-GARCH model.
Litigating Dividends or Not? The Case of Derivative Lawsuits
Ni, Xiaoran,Zhang, Huilin
Recent anecdotal evidence suggests that high litigation risk may induce firms to cut dividends. By comparison, litigation can be an effective governance tool for shareholders to force firms to distribute cash. Therefore, it is unclear how litigation risk affects dividend payouts on average. To address this issue, we exploit the staggered adoption of universal demand (UD) laws across various U.S. states as quasi-exogenous shocks. We find that firms increase dividend payouts significantly after UD laws raise the hurdle of filing derivative lawsuits. In particular, the adoption of UD laws discourages the omission of cash dividends while encouraging the initiation of share repurchases. The main effect is more pronounced for firms facing higher litigation risk, firms that are more financially distressed, and firms operating in more competitive product markets. Our overall findings suggest that excessive threats of derivative lawsuits may dampen financial flexibility and deter the distribution of cash to shareholders.
Machine Learning SABR Model of Stochastic Volatility With Lookup Table
Lokvancic, Mahir
We present an embarrassingly simple method for supervised learning of the SABR model's European option price function based on a lookup table, or rote machine learning. The performance in the time domain is comparable to the analytic approximations generally used in the financial industry. However, unlike the approximation schemes based on asymptotic methods (universally deemed fastest), the methodology admits arbitrary calculation precision to the true pricing function without detrimental impact on time performance, apart from memory access latency. The idea is plainly applicable to any function approximation or supervised learning domain with low dimension.
Maintaining Confidence
This paper proposes the solvency/liquidity spiral as a failure mode affecting large financial institutions in the recent crisis. The essential feature of this mode is that a combination of funding liquidity risk and investor doubts over the solvency of an institution can lead to its failure. We analyse the failures of Lehman Brothers and RBS in detail, and find considerable support for the spiral model of distress. Our model suggests that a key determinant of the financial stability of many large banks is the confidence of the funding markets. This has consequences for the design of financial regulation, suggesting that capital requirements, liquidity rules, and disclosure should be explicitly constructed so as not just to mitigate solvency risk and liquidity risk, but also to be seen to do so even in stressed conditions.
Optimal Equilibria for Multi-dimensional Time-inconsistent Stopping Problems
Yu-Jui Huang,Zhenhua Wang
We study an optimal stopping problem under non-exponential discounting, where the state process is a multi-dimensional continuous strong Markov process. The discount function is taken to be log sub-additive, capturing decreasing impatience in behavioral economics. On strength of probabilistic potential theory, we establish the existence of an optimal equilibrium among a sufficiently large collection of equilibria, consisting of finely closed equilibria satisfying a boundary condition. This generalizes the existence of optimal equilibria for one-dimensional stopping problems in prior literature.
Past production constrains current energy demands: persistent scaling in global energy consumption and implications for climate change mitigation
Timothy J. Garrett,Matheus R. Grasselli,Stephen Keen
Climate change has become intertwined with the global economy. Here, we describe the importance of inertia to continued growth in energy consumption. Drawing from thermodynamic arguments, and using 38 years of available statistics between 1980 and 2017, we find a persistent time-independent scaling between the historical time integral $W$ of world inflation-adjusted economic production $Y$, or $W\left(t\right) = \int_0^t Y\left(t'\right)dt'$, and current rates of world primary energy consumption $\mathcal E$, such that $\lambda = \mathcal{E}/W = 5.9\pm0.1$ Gigawatts per trillion 2010 US dollars. This empirical result implies that population expansion is a symptom rather than a cause of the current exponential rise in $\mathcal E$ and carbon dioxide emissions $C$, and that it is past innovation of economic production efficiency $Y/\mathcal{E}$ that has been the primary driver of growth, at predicted rates that agree well with data. Options for stabilizing $C$ are then limited to rapid decarbonization of $\mathcal E$ through sustained implementation of over one Gigawatt of renewable or nuclear power capacity per day. Alternatively, assuming continued reliance on fossil fuels, civilization could shift to a steady-state economy that devotes economic production exclusively to maintenance rather than expansion. If this were instituted immediately, continual energy consumption would still be required, so atmospheric carbon dioxide concentrations would not balance natural sinks until concentrations exceeded 500 ppmv, and double pre-industrial levels if the steady-state was attained by 2030.
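The claimed scaling can be illustrated with synthetic data; only the paper's estimate $\lambda \approx 5.9$ GW per trillion 2010 US$ is reused below, and all series are made up. If $\mathcal{E} = \lambda W$ with $dW/dt = Y$, the growth rate of energy consumption equals $Y/W$, with no separate population term:

```python
import numpy as np

# Synthetic illustration of the scaling lambda = E / W (all series invented).
years = np.arange(1980, 2018)
Y = 50.0 * 1.03 ** (years - 1980)   # world production, trillion $ / yr (toy)
W = 1000.0 + np.cumsum(Y)           # cumulative production, incl. pre-1980 stock
lam = 5.9                           # GW per trillion 2010 US$ (paper's estimate)
E = lam * W                         # implied primary energy consumption, GW

growth_E = np.diff(E) / E[:-1]      # discrete growth rate of E
pred = Y[:-1] / W[:-1]              # predicted rate: d ln E / dt = Y / W
assert np.allclose(growth_E, pred, rtol=0.05)
```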
Prediction defaults for networked-guarantee loans
Dawei Cheng,Zhibin Niu,Yi Tu,Liqing Zhang
Networked-guarantee loans are a source of systemic-risk concern for the government and banks in China. Predicting the default of enterprise loans is a typical extremely imbalanced prediction problem, and the guarantee network makes it more difficult to solve. Since a guaranteed loan is a debt obligation promise, if one enterprise in the guarantee network falls into a financial crisis, the debt risk may spread like a virus across the guarantee network and may even lead to a systemic financial crisis. In this paper, we propose an imbalanced network risk diffusion model to forecast enterprise default risk in the near future. A positive weighted k-nearest neighbors (p-wkNN) algorithm is developed for the stand-alone case, when there is no default contagion; a data-driven default diffusion model is then integrated to further improve prediction accuracy. We perform an empirical study on a real-world three-year loan record from a major commercial bank. The results show that our proposed method outperforms conventional credit risk methods in terms of AUC. In summary, our quantitative risk evaluation model shows promising prediction performance on real-world data and could be useful to both regulators and stakeholders.
Predictive Regression with p-Lags and Order-q Autoregressive Predictors
Jayetileke, Harshanie L.,Wang, You-Gan,Zhu, Min
This paper considers predictive regressions, where y_t is predicted by all p lags of x, here with x being autoregressive of order q, PR(p,q). The literature considers model properties in the cases where p=q. We demonstrate that the current augmented regression method can still reduce the bias in predictive coefficients, but its efficiency depends on correctly specifying both p and q. We propose an estimation framework for the predictive regression, PR(p,q), with a data-driven auto-selection of p and q to achieve the best bias reduction in predictive coefficients. The corresponding hypothesis testing procedure is also derived. The efficiency of the proposed method is demonstrated with simulations. Empirical applications to equity premium prediction illustrate the substantial difference between the estimates of our method and those obtained by the common predictive regressions with p=q.
Reducing The Wealth Gap Through Fintech 'Advances' in Consumer Banking and Lending
Foohey, Pamela,Martin, Nathalie
Research shows that Black, Latinx, and other minorities pay more for credit and banking services, and that wealth accumulation differs starkly between their households and white households. The link between debt inequality and the wealth gap, however, remains less thoroughly explored, particularly in light of new credit products and debt-like banking services, such as early wage access and other fintech innovations. These innovations both hold the promise of reducing racial and ethnic disparities in lending and bring concerns that they may be exploited in ways that perpetuate inequality. They also come at a time when policy makers are considering how to help communities of color rebuild their wealth, presenting an opportunity to critique policy proposals. This Article leverages that opportunity by synthesizing research about the long-term costs of debt inequality on communities of color, adding an in-depth analysis of several new advances in banking and lending, and proposing several key principles for reducing debt inequality as an input to the wealth gap.
Response to Welch (2020): Real Estate Collateral Does Affect Corporate Investment
Chaney, Thomas,Sraer, David Alexandre,Thesmar, David
This short note is a response to Welch (2020), who claims that our results in Chaney, Sraer and Thesmar (2012) are not robust. We show that none of his findings invalidate our results. Welch makes three major points. First, he correctly points out that our baseline specification uses a common scaling factor (lagged capital stock) for our dependent (investment) and independent (real estate collateral) variables, creating a mechanical correlation between left- and right-hand side variables. We show in this note that, while this point is formally correct, our results are robust to controlling for or removing entirely this mechanical correlation. Second, Welch correctly stresses that real estate prices are serially correlated, so that identification of a real estate collateral channel is potentially complex. We show in this note that our results are robust to controlling for the serial correlation in real estate prices. Third, Welch correctly worries about the fact that real estate prices are driven not just by local shocks (MSA or State), but also by common shocks (national). We show in this note that our results are robust to controlling for common national real estate shocks. In other words, while we recognize that Welch raises several important points, we argue that none of those results invalidate the baseline findings in Chaney, Sraer and Thesmar (2012). Yet, some of these objections suggest interesting leads for further analysis on corporate investment. We describe these leads in the note, hoping that they will inspire future research.
Scoring Functions for Multivariate Distributions and Level Sets
Xiaochun Meng,James W. Taylor,Souhaib Ben Taieb,Siran Li
Interest in predicting multivariate probability distributions is growing due to the increasing availability of rich datasets and computational developments. Scoring functions enable the comparison of forecast accuracy, and can potentially be used for estimation. A scoring function for multivariate distributions that has gained some popularity is the energy score. This is a generalization of the continuous ranked probability score (CRPS), which is widely used for univariate distributions. A little-known, alternative generalization is the multivariate CRPS (MCRPS). We propose a theoretical framework for scoring functions for multivariate distributions, which encompasses the energy score and MCRPS, as well as the quadratic score, which has also received little attention. We demonstrate how this framework can be used to generate new scores. For univariate distributions, it is well-established that the CRPS can be expressed as the integral over a quantile score. We show that, in a similar way, scoring functions for multivariate distributions can be "disintegrated" to obtain scoring functions for level sets. Using this, we present scoring functions for different types of level set, including those for densities and cumulative distributions. To compute the scoring functions, we propose a simple numerical algorithm. We illustrate our proposals using simulated and stock returns data.
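A sketch of the energy score mentioned above, estimated by Monte Carlo from predictive samples; this is the generic textbook form, not the paper's implementation, and the data are simulated:

```python
import numpy as np

def energy_score(samples, obs):
    """Monte Carlo estimate of the energy score for a predictive distribution
    represented by `samples` (n x d) at a realised observation `obs` (d,):
    ES = E||X - y|| - 0.5 E||X - X'||.  Lower is better."""
    samples = np.asarray(samples, dtype=float)
    obs = np.asarray(obs, dtype=float)
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=-1))
    return term1 - term2

rng = np.random.default_rng(0)
y = np.zeros(2)
good = rng.normal(0.0, 1.0, size=(500, 2))  # forecast centred on the truth
bad = good + 3.0                            # same spread, badly biased
assert energy_score(good, y) < energy_score(bad, y)
```

Because the bias leaves the spread term unchanged, the biased forecast is penalised purely through the distance-to-observation term.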
Sovereign Default Risk and Credit Supply: Evidence from the Euro Area
Olli Palmén
Did sovereign default risk affect macroeconomic activity through firms' access to credit during the European sovereign debt crisis? We investigate this question by estimating a structural panel vector autoregressive model for Italy, Spain, Portugal, and Ireland, where the sovereign risk shock is identified using sign restrictions. The results suggest that a decline in the creditworthiness of the sovereign contributed to a fall in private lending and economic activity in several euro-area countries by reducing the value of banks' assets and crowding out private lending.
Tail probabilities of random linear functions of regularly varying random vectors
Bikramjit Das,Vicky Fasen-Hartmann,Claudia Klüppelberg
We provide a new extension of Breiman's Theorem on computing tail probabilities of a product of random variables to a multivariate setting. In particular, we give a complete characterization of regular variation on cones in $[0,\infty)^d$ under random linear transformations. This allows us to compute probabilities of a variety of tail events, which classical multivariate regularly varying models would report to be asymptotically negligible. We illustrate our findings with applications to risk assessment in financial systems and reinsurance markets under a bipartite network structure.
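A univariate toy check of the classical Breiman theorem that this work extends (the multivariate, cone-based results are not reproduced here; the distributions are chosen only for convenience):

```python
import numpy as np

# Breiman: for X regularly varying with tail P(X > x) = x^{-alpha} and an
# independent Y with enough moments, P(XY > x) ~ E[Y^alpha] * P(X > x).
# Here alpha = 2 and Y ~ U(0,1), so E[Y^2] = 1/3.
rng = np.random.default_rng(0)
n, x0 = 10**6, 10.0
X = rng.pareto(2.0, size=n) + 1.0   # classical Pareto: P(X > x) = x^{-2}, x >= 1
Y = rng.uniform(0.0, 1.0, size=n)
ratio = np.mean(X * Y > x0) / x0**-2
assert abs(ratio - 1/3) < 0.05      # Monte Carlo estimate of E[Y^2]
```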
The Effects of Access to Credit on Productivity Among Microenterprises: Separating Technological Changes from Changes in Technical Efficiency
Nusrat Abedin Jimi,Plamen Nikolov,Mohammad Abdul Malek,Subal Kumbhakar
Improving productivity among farm microenterprises is important, especially in low-income countries where market imperfections are pervasive and resources are scarce. Relaxing credit constraints can increase the productivity of farmers. Using a field experiment involving microenterprises in Bangladesh, we estimate the impact of access to credit on the overall productivity of rice farmers, and disentangle the total effect into technological change (frontier shift) and technical efficiency changes. We find that relative to the baseline rice output per decimal, access to credit results in, on average, approximately a 14 percent increase in yield, holding all other inputs constant. After decomposing the total effect into the frontier shift and efficiency improvement, we find that, on average, around 11 percent of the increase in output comes from changes in technology, or frontier shift, while the remaining 3 percent is attributed to improvements in technical efficiency. The efficiency gain is higher for modern hybrid rice varieties, and almost zero for traditional rice varieties. Within the treatment group, the effect is greater among pure tenant and mixed-tenant farm households compared with farmers that only cultivate their own land.
The Importance of Cognitive Domains and the Returns to Schooling in South Africa: Evidence from Two Labor Surveys
Plamen Nikolov,Nusrat Jimi
Numerous studies have considered the important role of cognition in estimating the returns to schooling. How cognitive abilities affect schooling may have important policy implications, especially in developing countries during periods of increasing educational attainment. Using two longitudinal labor surveys that collect direct proxy measures of cognitive skills, we study the importance of specific cognitive domains for the returns to schooling in two samples. We instrument for schooling levels and we find that each additional year of schooling leads to an increase in earnings by approximately 18-20 percent. The estimated effect sizes, based on the two-stage least squares estimates, are above the corresponding ordinary least squares estimates. Furthermore, we estimate and demonstrate the importance of specific cognitive domains in the classical Mincer equation. We find that executive functioning skills (i.e., memory and orientation) are important drivers of earnings in the rural sample, whereas higher-order cognitive skills (i.e., numeracy) are more important for determining earnings in the urban sample. Although numeracy is tested in both samples, it is only a statistically significant predictor of earnings in the urban sample.
The Importance of Compound Risk in the Nexus of COVID-19, Climate Change and Finance
Monasterolo, Irene,Billio, Monica,Battiston, Stefano
Current approaches to manage the COVID-19 pandemic have a narrow focus on public health and on the short-term economic and financial repercussions. This prevents us from looking at how pandemic risk interacts with sustainable and inclusive development goals in the next decade. To fill this gap, we analyse how risk can compound in the nexus of non-linear interactions among pandemic, climate change and finance. We show that neglecting compound risk can lead to a massive underestimation of losses, which can be amplified by financial complexity, as well as to policies that impose unnecessary trade-offs among the economic recovery, health and climate objectives. To address these challenges, we propose an interdisciplinary research agenda to inform effective policies and improve the resilience of our socio-economic systems.
The Rise of Finance Companies and FinTech Lenders in Small Business Lending
Gopal, Manasa,Schnabl, Philipp
We document that finance companies and FinTech lenders increased lending to small businesses after the 2008 financial crisis. We show that most of the increase substituted for a reduction in lending by banks. In counties where banks had a larger market share before the crisis, finance companies and FinTech lenders increased their lending more. By 2016, the increase in finance company and FinTech lending almost perfectly offset the decrease in bank lending. We control for firms' credit demand by examining lending by different lenders to the same firm, by comparing firms within the same narrow industry, and by comparing firms pledging the same collateral. Consistent with the substitution of bank lending with finance company and FinTech lending, we find limited long-term effects on employment, wages, new business creation, and business expansion. Our results show that finance companies and FinTech lenders are major suppliers of credit to small businesses and played an important role in the recovery from the 2008 financial crisis.
The Theory of Insurance and Gambling
Nyman, John A.
This paper suggests that insurance represents a quid pro quo transaction across states of the world and is purchased because consumers desire to transfer income to a state where it is more valued. Preferences for certainty have little to do with the demand for insurance, but uncertainty itself plays a large role because it operates mechanically to make the payout a multiple of the premium. It also suggests that casino and other forms of institutional gambling represent a similar quid pro quo transaction across states of the world and that consumers gamble to transfer income to a state where it is less costly to obtain. Again, preferences for uncertainty do not motivate gambling, but uncertainty does allow for the augmentation of the payout compared to the wager. These motivations do not conflict with the empirical evidence supporting prospect theory and can accommodate the insurance-purchasing gambler. Both the demand and supply sides are included in the definitions of insurance and gambling presented herein.
The Unprecedented Fall in U.S. Revolving Credit
Raveendranathan, Gajendran,Stefanidis, Georgios
Revolving credit in the U.S. declined drastically in the last decade after several years of upward trending growth. We show that the Ability to Pay provision of the Credit CARD Act of 2009, which places restrictions on credit card limits, accounts for this decline. Extending a model of revolving credit to analyze this policy, we account for changes in credit statistics by income and age. Although the goal was consumer protection, the policy has led to welfare losses. Even consumers with time-inconsistent preferences who could benefit from tighter credit constraints are worse off. An alternative policy considered by policymakers - an interest rate cap - improves welfare.
Theoretical Guarantees for Learning Conditional Expectation using Controlled ODE-RNN
Continuous stochastic processes are widely used to model time series that exhibit random behaviour. Predictions of the stochastic process can be computed by the conditional expectation given the current information. To this end, we introduce the controlled ODE-RNN that provides a data-driven approach to learn the conditional expectation of a stochastic process. Our approach extends the ODE-RNN framework, which models the latent state of a recurrent neural network (RNN) between two observations with a neural ordinary differential equation (neural ODE). We show that controlled ODEs provide a general framework which can in particular describe the ODE-RNN, combining in a single equation the continuous neural ODE part with the jumps introduced by the RNN. We demonstrate the predictive capabilities of this model by proving that, under some regularity assumptions, the output process converges to the conditional expectation process.
Vertical vs. Horizontal Policy in a Capabilities Model of Economic Development
Alje van Dam,Koen Frenken
Against the background of renewed interest in vertical support policies targeting specific industries or technologies, we investigate the effects of vertical vs. horizontal policies in a combinatorial model of economic development. In the framework we propose, an economy develops by acquiring new capabilities allowing for the production of an ever greater variety of products with an increasing complexity. Innovation policy can aim to expand the number of capabilities (vertical policy) or the ability to combine capabilities (horizontal policy). The model shows that for low-income countries, the two policies are complementary. For high-income countries that are specialised in the most complex products, focusing on horizontal policy only yields the highest returns. We reflect on the model results in the light of the contemporary debate on vertical policy.
What Factors Drive Individual Misperceptions of the Returns to Schooling in Tanzania? Some Lessons for Education Policy
Evidence on educational returns and the factors that determine the demand for schooling in developing countries is extremely scarce. Building on previous studies that show individuals underestimating the returns to schooling, we use two surveys from Tanzania to estimate both the actual and perceived schooling returns and subsequently examine what factors drive individual misperceptions regarding actual returns. Using ordinary least squares and instrumental variable methods, we find that each additional year of schooling in Tanzania increases earnings, on average, by 9 to 11 percent. We find that on average individuals underestimate returns to schooling by 74 to 79 percent and three factors are associated with these misperceptions: income, asset poverty and educational attainment. Shedding light on what factors relate to individual beliefs about educational returns can inform policy on how to structure effective interventions in order to correct individual misperceptions.
Where Is the Risk in Risk Factors? Evidence from the Vietnam War to the COVID-19 Pandemic.
Geertsema, Paul,Lu, Helen
During the COVID-19 pandemic (Jan 2020 - Mar 2020) all of the Fama and French (2018) factors except momentum lost money. Negative payoffs in a bad state would appear to justify the positive premia generated by these risk factors. But this is atypical: historically, the value, profitability, investment and momentum factors are all more profitable in bear markets. The five non-market factors exhibit their own bull and bear market phases, but these do not correlate with the economic cycle. Factor profitability in bear markets arises primarily from the short side. Biased expectations corrected around earnings announcements offer only a partial explanation.
Why is Dollar Debt Cheaper? Evidence from Peru
Gutiérrez, Bryan,Ivashina, Victoria,Salomao, Juliana
In emerging markets, a significant share of corporate loans are denominated in dollars. Using novel data that enables us to see currency and the cost of credit, in addition to several other transaction-level characteristics, we re-examine the reasons behind dollar credit popularity. We find that a dollar-denominated loan has an interest rate that is 2% lower per year than a loan in Peruvian Soles. Expectations of exchange rate movements do not explain this difference. We show that this interest rate differential for lending rates is closely matched by the differential in the deposit market. Our results suggest that the preference for dollar loans is rooted in the local household preference for dollar savings and a banking sector that is closely matching its foreign assets and liabilities. We find that borrower competitive pressure increases the pass-through of this differential.
Skilful precipitation nowcasting using deep generative models of radar
Suman Ravuri,
Karel Lenc,
Matthew Willson,
Dmitry Kangin,
Remi Lam,
Piotr Mirowski,
Megan Fitzsimons,
Maria Athanassiadou,
Sheleem Kashem,
Sam Madge,
Rachel Prudden,
Amol Mandhane,
Aidan Clark,
Andrew Brock,
Karen Simonyan,
Raia Hadsell,
Niall Robinson,
Ellen Clancy,
Alberto Arribas &
Shakir Mohamed
Nature volume 597, pages 672–677 (2021)
Precipitation nowcasting, the high-resolution forecasting of precipitation up to two hours ahead, supports the real-world socioeconomic needs of many sectors reliant on weather-dependent decision-making1,2. State-of-the-art operational nowcasting methods typically advect precipitation fields with radar-based wind estimates, and struggle to capture important non-linear events such as convective initiations3,4. Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints5,6. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor performance on rarer medium-to-heavy rain events. Here we present a deep generative model for the probabilistic nowcasting of precipitation from radar that addresses these challenges. Using statistical, economic and cognitive measures, we show that our method provides improved forecast quality, forecast consistency and forecast value. Our model produces realistic and spatiotemporally consistent predictions over regions up to 1,536 km × 1,280 km and with lead times from 5–90 min ahead. Using a systematic evaluation by more than 50 expert meteorologists, we show that our generative model ranked first for its accuracy and usefulness in 89% of cases against two competitive methods. When verified quantitatively, these nowcasts are skillful without resorting to blurring. We show that generative nowcasting can provide probabilistic predictions that improve forecast value and support operational utility, and at resolutions and lead times where alternative methods struggle.
The high-resolution forecasting of rainfall and hydrometeors zero to two hours into the future, known as precipitation nowcasting, is crucial for weather-dependent decision-making. Nowcasting informs the operations of a wide variety of sectors, including emergency services, energy management, retail, flood early-warning systems, air traffic control and marine services1,2. For nowcasting to be useful in these applications the forecast must provide accurate predictions across multiple spatial and temporal scales, account for uncertainty and be verified probabilistically, and perform well on heavier precipitation events that are rarer, but more critically affect human life and economy.
Ensemble numerical weather prediction (NWP) systems, which simulate coupled physical equations of the atmosphere to generate multiple realistic precipitation forecasts, are natural candidates for nowcasting as one can derive probabilistic forecasts and uncertainty estimates from the ensemble of future predictions7. For precipitation at zero to two hours lead time, NWPs tend to provide poor forecasts as this is less than the time needed for model spin-up and due to difficulties in non-Gaussian data assimilation8,9,10. As a result, alternative methods that make predictions using composite radar observations have been used; radar data is now available (in the UK) every five minutes and at 1 km × 1 km grid resolution11. Established probabilistic nowcasting methods, such as STEPS and PySTEPS3,4, follow the NWP approach of using ensembles to account for uncertainty, but model precipitation following the advection equation with a radar source term. In these models, motion fields are estimated by optical flow, smoothness penalties are used to approximate an advection forecast, and stochastic perturbations are added to the motion field and intensity model3,4,12. These stochastic simulations allow for ensemble nowcasts from which both probabilistic and deterministic forecasts can be derived and are applicable and consistent at multiple spatial scales, from the kilometre scale to the size of a catchment area13.
Approaches based on deep learning have been developed that move beyond reliance on the advection equation5,6,14,15,16,17,18,19. By training these models on large corpora of radar observations rather than relying on in-built physical assumptions, deep learning methods aim to better model traditionally difficult non-linear precipitation phenomena, such as convective initiation and heavy precipitation. This class of methods directly predicts precipitation rates at each grid location, and models have been developed for both deterministic and probabilistic forecasts. As a result of their direct optimization and fewer inductive biases, the forecast quality of deep learning methods—as measured by per-grid-cell metrics such as critical success index (CSI)20 at low precipitation levels (less than 2 mm h−1)—has greatly improved.
As a number of authors have noted5,6, forecasts issued by current deep learning systems express uncertainty at increasing lead times with blurrier precipitation fields, and may not include small-scale weather patterns that are important for improving forecast value. Furthermore, the focus in existing approaches on location-specific predictions, rather than probabilistic predictions of entire precipitation fields, limits their operational utility and usefulness, being unable to provide simultaneously consistent predictions across multiple spatial and temporal aggregations. The ability to make skilful probabilistic predictions is also known to provide greater economic and decision-making value than deterministic forecasts21,22.
Here we demonstrate improvements in the skill of probabilistic precipitation nowcasting that improves their value. To create these more skilful predictions, we develop an observations-driven approach for probabilistic nowcasting using deep generative models (DGMs). DGMs are statistical models that learn probability distributions of data and allow for easy generation of samples from their learned distributions. As generative models are fundamentally probabilistic, they have the ability to simulate many samples from the conditional distribution of future radar given historical radar, generating a collection of forecasts similar to ensemble methods. The ability of DGMs to both learn from observational data as well as represent uncertainty across multiple spatial and temporal scales makes them a powerful method for developing new types of operationally useful nowcasting. These models can predict smaller-scale weather phenomena that are inherently difficult to predict due to underlying stochasticity, which is a critical issue for nowcasting research. DGMs predict the location of precipitation as accurately as systems tuned to this task while preserving spatiotemporal properties useful for decision-making. Importantly, they are judged by professional meteorologists as substantially more accurate and useful than PySTEPS or other deep learning systems.
Generative models of radar
Our nowcasting algorithm is a conditional generative model that predicts N future radar fields given M past, or contextual, radar fields, using radar-based estimates of surface precipitation XT at a given time point T. Our model includes latent random vectors Z and parameters θ, described by
$$P(\mathbf{X}_{M+1:M+N} \mid \mathbf{X}_{1:M}) = \int P(\mathbf{X}_{M+1:M+N} \mid \mathbf{Z}, \mathbf{X}_{1:M}, \boldsymbol{\theta})\, P(\mathbf{Z} \mid \mathbf{X}_{1:M})\, \mathrm{d}\mathbf{Z}.$$
The integration over latent variables ensures that the model makes predictions that are spatially dependent. Learning is framed in the algorithmic framework of a conditional generative adversarial network (GAN)23,24,25, specialized for the precipitation prediction problem. Four consecutive radar observations (the previous 20 min) are used as context for a generator (Fig. 1a) that allows sampling of multiple realizations of future precipitation, each realization being 18 frames (90 min).
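The ensemble-generation idea, drawing an independent latent vector Z for each realization and pushing it through the conditional generator, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: `toy_generator`, the latent shape and the frame sizes are placeholders standing in for the real convolutional generator.

```python
import numpy as np

def sample_nowcasts(generator, context, n_samples, latent_shape, rng):
    """Draw an ensemble of nowcasts from a conditional generative model.

    `generator` maps (M context frames, latent Z) -> N future frames.
    Each ensemble member uses an independent latent draw, mirroring
    P(X_{M+1:M+N} | X_{1:M}) = integral of P(X | Z, context) P(Z) dZ.
    """
    ensemble = []
    for _ in range(n_samples):
        z = rng.standard_normal(latent_shape)  # latent random vector Z
        ensemble.append(generator(context, z))
    return np.stack(ensemble)  # shape (n_samples, N, H, W)

# Toy stand-in "generator": persistence of the last frame plus latent noise.
def toy_generator(context, z):
    last = context[-1]
    return np.stack([last + 0.1 * z for _ in range(18)])  # 18 frames = 90 min

rng = np.random.default_rng(0)
context = np.zeros((4, 8, 8))  # M = 4 past radar fields (20 min of context)
ens = sample_nowcasts(toy_generator, context, 20, (8, 8), rng)
print(ens.shape)  # (20, 18, 8, 8)
```

Because each member is a full spatiotemporal field rather than a per-pixel distribution, statistics such as area-maximum rain rates can be read directly off the ensemble.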
Fig. 1: Model overview and case study of performance on a challenging precipitation event starting on 24 June 2019 at 16:15 UK, showing convective cells over eastern Scotland.
DGMR is better able to predict the spatial coverage and convection compared to other methods over a longer time period, while not over-estimating the intensities, and is significantly preferred by meteorologists (93% first choice, n = 56, P < 10−4). a, Schematic of the model architecture showing the generator with spatial latent vectors Z. b, Geographic context for the predictions. c, A single prediction at T + 30, T + 60 and T + 90 min lead time for different models. Critical success index (CSI) at thresholds 2 mm h−1 and 8 mm h−1 and continuous ranked probability score (CRPS) for an ensemble of four samples shown in the bottom left corner. For axial attention we show the mode prediction. Images are 256 km × 256 km. Maps produced with Cartopy and SRTM elevation data46.
Learning is driven by two loss functions and a regularization term, which guide parameter adjustment by comparing real radar observations to those generated by the model. The first loss is defined by a spatial discriminator, which is a convolutional neural network that aims to distinguish individual observed radar fields from generated fields, ensuring spatial consistency and discouraging blurry predictions. The second loss is defined by a temporal discriminator, which is a three-dimensional (3D) convolutional neural network that aims to distinguish observed and generated radar sequences, imposes temporal consistency and penalizes jumpy predictions. These two discriminators share similar architectures to existing work in video generation26. When used alone, these losses lead to accuracy on par with Eulerian persistence. To improve accuracy, we introduce a regularization term that penalizes deviations at the grid cell resolution between the real radar sequences and the model predictive mean (computed with multiple samples). This third term is important for the model to produce location-accurate predictions and improve performance. In the Supplementary Information, we show an ablation study supporting the necessity of each loss term. Finally, we introduce a fully convolutional latent module for the generator, allowing for predictions over precipitation fields larger than the size used at training time, while maintaining spatiotemporal consistency. We refer to this DGM of rainfall as DGMR in the text.
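The structure of this objective, two adversarial terms plus a grid-cell regularizer on the predictive mean, can be written down compactly. The sketch below is schematic and not the paper's code: the discriminator scores are taken as given, the L1 form of the regularizer and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def generator_loss(d_spatial_scores, d_temporal_scores, samples, target, lam=20.0):
    """Schematic generator objective for GAN-based nowcasting.

    d_spatial_scores, d_temporal_scores: discriminator outputs on generated
        fields/sequences (higher = judged more "real"), which the generator
        tries to maximize.
    samples: (S, T, H, W) ensemble of generated radar sequences.
    target:  (T, H, W) observed radar sequence.
    The regularizer penalizes per-grid-cell deviation between the observation
    and the predictive mean over the S samples, encouraging location accuracy.
    """
    adv = -np.mean(d_spatial_scores) - np.mean(d_temporal_scores)
    pred_mean = samples.mean(axis=0)            # predictive mean over samples
    reg = np.mean(np.abs(pred_mean - target))   # grid-cell regularization
    return adv + lam * reg
```

Note that the regularizer acts on the sample mean, not on individual samples, so single realizations remain sharp while the ensemble is pulled toward the observation.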
The model is trained on a large corpus of precipitation events, which are 256 × 256 crops extracted from the radar stream, of length 110 min (22 frames). An importance-sampling scheme is used to create a dataset more representative of heavy precipitation (Methods). Throughout, all models are trained on radar observations for the UK for years 2016–2018 and evaluated on a test set from 2019. Analysis using a weekly train–test split of the data, as well as data of the USA, is reported in Extended Data Figs. 1–9 and the Supplementary Information. Once trained, this model allows fast full-resolution nowcasts to be produced, with a single prediction (using an NVIDIA V100 GPU) needing just over a second to generate.
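The importance-sampling idea, biasing the training distribution toward crops containing heavy precipitation, can be sketched as follows. The weighting scheme here (probability proportional to total crop rainfall plus a small floor) is a plausible simplification, not the paper's exact acceptance scheme.

```python
import numpy as np

def sample_crops(crop_rain_totals, n, rng, floor=1e-2):
    """Importance-sample training crop indices toward heavy precipitation.

    Crops are drawn with probability proportional to their total rainfall,
    plus a small floor so that dry crops are still occasionally seen.
    """
    w = np.asarray(crop_rain_totals, dtype=float) + floor
    p = w / w.sum()
    return rng.choice(len(w), size=n, replace=True, p=p)

rng = np.random.default_rng(1)
idx = sample_crops([0.0, 1.0, 10.0], 10_000, rng)
# The heavy-rain crop (index 2) dominates the resulting training sample.
```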
Intercomparison case study
We use a single case study to compare the nowcasting performance of the generative method DGMR to three strong baselines: PySTEPS, a widely used precipitation nowcasting system based on ensembles, considered to be state-of-the-art3,4,13; UNet, a popular deep learning method for nowcasting15; and an axial attention model, a radar-only implementation of MetNet19. For a meteorologically challenging event, Figs. 1b, c and 4b shows the ground truth and predicted precipitation fields at T + 30, T + 60 and T + 90 min, quantitative scores on different verification metrics, and comparisons of expert meteorologist preferences among the competing methods. Two other cases are included in Extended Data Figs. 2 and 3.
The event in Fig. 1 shows convective cells in eastern Scotland with intense showers over land. Maintaining such cells is difficult and a traditional method such as PySTEPS overestimates the rainfall intensity over time, which is not observed in reality and does not sufficiently cover the spatial extent of the rainfall. The UNet and axial attention models roughly predict the location of rain, but owing to aggressive blurring, over-predict areas of rain, miss intensity and fail to capture any small-scale structure. By comparison, DGMR preserves a good spatial envelope, represents the convection and maintains heavy rainfall in the early prediction, although with less accurate rates at T + 90 min and at the edge of the radar than at previous time steps. When expert meteorologists judged these predictions against ground truth observations, they significantly preferred the generative nowcasts, with 93% of meteorologists choosing it as their first choice (Fig. 4b).
The figures also include two common verification scores. These predictions are judged as significantly different by experts, but the scores do not provide this insight. This study highlights a limitation of using existing popular metrics to evaluate forecasts: while standard metrics implicitly assume that models, such as NWPs and advection-based systems, preserve the physical plausibility of forecasts, deep learning systems may outperform on certain metrics by failing to satisfy other needed characteristics of useful predictions.
Forecast skill evaluation
We verify the performance of competing methods using a suite of metrics as is standard practice, as no single verification score can capture all desired properties of a forecast. We report the CSI27 to measure location accuracy of the forecast at various rain rates. We report the radially averaged power spectral density (PSD)28,29 to compare the precipitation variability of nowcasts to that of the radar observations. We report the continuous ranked probability score (CRPS)30 to determine how well the probabilistic forecast aligns with the ground truth. For CRPS, we show pooled versions, which are scores on neighbourhood aggregations that show whether a prediction is consistent across spatial scales. Details of these metrics, and results on other standard metrics, can be found in Extended Data Figs. 1–9 and the Supplementary Information. We report results here using data from the UK, and results consistent with these showing generalization of the method on data from the USA in Extended Data Figs. 1–9.
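Two of these scores are simple enough to state in code. The sketch below gives minimal NumPy versions of the per-grid-cell CSI and the empirical ensemble CRPS; these are textbook definitions, not the paper's verification pipeline, and omit details such as masking of missing radar cells.

```python
import numpy as np

def csi(pred, obs, thr):
    """Critical success index at rain-rate threshold `thr` (mm/h):
    hits / (hits + misses + false alarms)."""
    p, o = pred >= thr, obs >= thr
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble at one grid cell:
    E|X - y| - 0.5 E|X - X'|, with X, X' drawn from the ensemble."""
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - obs))
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))
    return term1 - term2

print(csi(np.array([3, 0, 5]), np.array([4, 0, 0]), thr=2))  # 1 hit, 1 false alarm -> 0.5
```

The pooled CRPS variants referred to below apply the same formula after aggregating (averaging or taking maxima over) neighbourhoods of grid cells.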
Figure 2a shows that all three deep learning systems produce forecasts that are significantly more location-accurate than the PySTEPS baseline when compared using CSI. Using paired permutation tests with alternating weeks as independent units to assess statistical significance, we find that DGMR has significant skill compared to PySTEPS for all precipitation thresholds (n = 26, P < 10−4) (Methods).
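A paired permutation test of this kind, with weekly scores as the independent units, amounts to random sign flips of the paired differences. The sketch below is a generic implementation of that idea, not the paper's exact test code.

```python
import numpy as np

def paired_permutation_pvalue(a, b, n_perm=10_000, seed=0):
    """Two-sided paired permutation test via random sign flips.

    a, b: per-unit scores for two models (e.g. CSI per alternating week).
    Under the null hypothesis of no difference, the sign of each paired
    difference is exchangeable, so the null distribution is built by
    randomly flipping signs of the differences.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    obs = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    null = np.abs((signs * d).mean(axis=1))
    # Add-one correction so the p-value is never exactly zero.
    return (1 + np.sum(null >= obs)) / (n_perm + 1)
```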
Fig. 2: Deterministic verification scores for the UK in 2019.
a, CSI across 20 samples for precipitation thresholds at 1 mm h−1 (left), 4 mm h−1 (middle) and 8 mm h−1 (right). We also report results for the axial attention mode prediction. UNet generates a single deterministic prediction. b, Radially averaged power spectral density for full-frame 2019 predictions for all models at T + 30 min (left) and T + 90 min (middle and right). At T + 90 min, UNet (middle) has an effective resolution of 32 km; both axial attention (right) sample and mode predictions have an effective resolution of 16 km.
Source data.
The PSD in Fig. 2b shows that both DGMR and PySTEPS match the observations in their spectral characteristics, but the axial attention and UNet models produce forecasts with medium- and small-scale precipitation variability that decreases with increasing lead time. As they produce blurred predictions, the effective resolution of the axial attention and UNet nowcasts is far less than the 1 km × 1 km resolution of the data. At T + 90 min, the effective resolution for UNet is 32 km and for axial attention is 16 km, reducing the value of these nowcasts for meteorologists.
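The radially averaged PSD used here can be computed by binning the 2-D FFT power by integer radial wavenumber; blurred forecasts then show up as reduced power at high wavenumbers. The following is a minimal sketch of that computation, not the paper's evaluation code (which additionally converts wavenumber to an effective spatial resolution in km).

```python
import numpy as np

def radially_averaged_psd(field):
    """Radially averaged power spectral density of a 2-D field.

    The power of the shifted 2-D FFT is averaged within integer radial
    wavenumber bins around the DC component.
    """
    f = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(f) ** 2
    h, w = field.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)  # radial wavenumber bin
    totals = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return totals / np.maximum(counts, 1)
```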
For probabilistic verification, Fig. 3a, b shows the CRPS of the average and maximum precipitation rate aggregated over regions of increasing size31. When measured at the grid-resolution level, DGMR, PySTEPS and axial attention perform similarly; we also show an axial attention model with improved performance obtained by rescaling its output probabilities32 (denoted 'axial attention temp. opt.'). As the spatial aggregation is increased, DGMR and PySTEPS provide consistently strong performance, with DGMR performing better on maximum precipitation. The axial attention model is significantly poorer for larger aggregations and underperforms all other methods at scale four and above. Using alternating weeks as independent units, paired permutation tests show that the performance differences between DGMR and the axial attention temp. opt. are significant (n = 26, P < 10−3).
Fig. 3: Probabilistic verification scores for the UK in 2019.
Graphs show CRPS scores at the grid resolution (left), 4-km aggregations (middle) and 16-km aggregations (right). a, Pooled CRPS using the average rain rate. b, Pooled CRPS using the maximum rain rate.
NWP and PySTEPS methods include post-processing that is used by default in their evaluation to improve reliability. We show a simple post-processing method for DGMR in Figs. 2 and 3 (denoted 'recal') (Methods), which further improves its skill scores over the uncalibrated approach. Post-processing improves the reliability diagrams and rank histogram to be as or more skilful than the baseline methods (Extended Data Fig. 4). We also show evaluation on other metrics, performance on a data split over weeks rather than years, and evaluation recapitulating the inability of NWPs to make predictions at nowcasting timescales (Extended Data Figs. 4–6). We show results on a US dataset in Extended Data Figs. 7–9.
Together, these results show that the generative approach verifies competitively compared to alternatives: it outperforms (on CSI) the incumbent STEPS nowcasting approach, provides probabilistic forecasts that are more location accurate, and preserves the statistical properties of precipitation across spatial and temporal scales without blurring, whereas the other deep learning methods gain their accuracy at the expense of these properties.
Forecast value evaluation
We use both economic and cognitive analyses to show that the improved skill of DGMR results in improved decision-making value.
We report the relative economic value of the ensemble prediction to quantitatively evaluate the benefit of probabilistic predictions using a simple and widely used decision-analytic model22; see the Supplementary Information for a description. Figure 4a shows that DGMR provides the highest economic value relative to the baseline methods (has highest peak and greater area under the curve). We use 20 member ensembles and show three accumulation levels used for weather warnings by Met Éireann (the Irish Meteorological service uses warnings defined directly in mm h−1; https://www.met.ie/weather-warnings). This analysis shows the ability of the generative ensemble to capture uncertainty, and we show the improvement with samples in Extended Data Figs. 4 and 9, and postage stamp plots to visualize the ensemble variability in Supplementary Data 1–3.
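In the standard cost-loss formulation of relative economic value (Richardson's decision-analytic model), a user pays cost C to protect against an event and suffers loss L if an unprotected event occurs. The sketch below implements that textbook formula; it is a generic illustration, not the paper's evaluation code.

```python
def relative_economic_value(hit_rate, false_alarm_rate, base_rate, cost_loss):
    """Relative economic value of a forecast in the cost-loss model:
    V = (E_climate - E_forecast) / (E_climate - E_perfect).

    cost_loss: the user's cost/loss ratio r = C/L.
    base_rate: climatological event frequency s.
    Expenses are expressed per unit loss L.
    """
    r, s = cost_loss, base_rate
    e_clim = min(r, s)       # always/never protect, whichever is cheaper
    e_perfect = s * r        # protect exactly when the event occurs
    # Protect on forecasts: false alarms cost r, hits cost r, misses cost 1.
    e_fcst = false_alarm_rate * (1 - s) * r + hit_rate * s * r + (1 - hit_rate) * s
    return (e_clim - e_fcst) / (e_clim - e_perfect)
```

A perfect forecast attains V = 1 and a forecast no better than climatology attains V = 0; plotting V against the cost/loss ratio yields curves like those in Fig. 4a.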
Fig. 4: DGMR provides greater decision-making value when assessed using both economic and cognitive analyses.
a, Relative economic value analysis across 20 samples for three 90-min rainfall accumulations, using 4-km aggregations. UNet generates a single deterministic prediction. b, Meteorologist preferences for the case study in Fig. 1. c, Meteorologist rankings for medium rain (5 mm h−1) cases. d, Meteorologist rankings for heavy rain (10 mm h−1) cases. Horizontal bars show the percentage of meteorologists who chose each method as their first choice. Whisker lines show the Clopper–Pearson 95% confidence interval. Meteorologists significantly preferred DGMR to alternatives (n = 56, P < 10−4).
Importantly, we ground this economic evaluation by directly assessing decision-making value using the judgments of expert meteorologists working in the 24/7 operational centre at the Met Office (the UK's national meteorology service). We conducted a two-phase experimental study to assess expert judgements of value, involving a panel of 56 experts. In phase 1, all meteorologists were asked to provide a ranked preference assessment on a set of nowcasts with the instruction that 'preference is based on [their] opinion of accuracy and value'. Each meteorologist assessed a unique set of nowcasts, which, at the population level, allows for uncertainty characteristics and meteorologist idiosyncrasies to be averaged out in reporting the statistical effect. We randomly selected 20% of meteorologists to participate in a phase 2 retrospective recall interview33.
Operational meteorologists seek utility in forecasts for critical events, safety and planning guidance. Therefore, to make meaningful statements of operational usefulness, our evaluation assessed nowcasts for high-intensity events, specifically medium rain (rates above 5 mm h−1) and heavy rain (rates above 10 mm h−1). Meteorologists were asked to rank their preferences on a sample of 20 unique nowcasts (from a corpus of 2,126 events, being all high-intensity events in 2019). Data were presented in the form shown in Fig. 1b, c, showing clearly the initial context at T + 0 min, the ground truth at T + 30 min, T + 60 min, and T + 90 min, and nowcasts from PySTEPS, axial attention and DGMR. The identity of the methods in each panel was anonymized and their order randomized. See the Methods for further details of the protocol and of the ethics approval for human subjects research.
The generative nowcasting approach was significantly preferred by meteorologists when asked to make judgments of accuracy and value of the nowcast, being their most preferred 89% (95% confidence interval (CI) [0.86, 0.92]) of the time for the 5 mm h−1 nowcasts (Fig. 4c; P < 10−4), and 90% (95% CI [0.87, 0.92]) for the 10 mm h−1 nowcasts (Fig. 4d, P < 10−4). We compute the P value assessing the binary decision whether meteorologists chose DGMR as their first choice using a permutation test with 10,000 resamplings. We indicate the Clopper–Pearson CI. This significant meteorologist preference is important as it is strong evidence that generative nowcasting can provide meteorologists with physical insight not provided by alternative methods, and provides a grounded verification of the economic value analysis in Fig. 4a.
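The Clopper-Pearson interval quoted above is the exact binomial confidence interval for a proportion of first-choice votes. A dependency-free sketch, inverting the binomial tail probabilities by bisection, is given below; libraries such as SciPy provide the same interval directly, and this is only an illustration.

```python
import math

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial
    proportion k/n, found by bisecting the binomial tail probabilities."""
    def binom_cdf(x, p):
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(x + 1))

    def bisect(f, lo, hi):
        # f is decreasing in p on [lo, hi]; find its root.
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Lower bound: smallest p with P(X >= k | p) = alpha/2.
    lower = 0.0 if k == 0 else bisect(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, p)), 0.0, 1.0)
    # Upper bound: largest p with P(X <= k | p) = alpha/2.
    upper = 1.0 if k == n else bisect(lambda p: binom_cdf(k, p) - alpha / 2, 0.0, 1.0)
    return lower, upper

lo, hi = clopper_pearson(50, 56)  # e.g. 50 of 56 meteorologists as first choice
```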
Meteorologists were not swayed by the visual realism of the predictions, and their responses in the subsequent structured interviews showed that they approached this task by making deliberate judgements of accuracy, location, extent, motion and rainfall intensity, and reasonable trade-offs between these factors (Supplementary Information, section C.6). In the phase 2 interviews, PySTEPS was described as "being too developmental which would be misleading", that is, as having many "positional errors" and "much higher intensity compared with reality". The axial attention model was described as "too bland", that is, as being "blocky" and "unrealistic", but had "good spatial extent". Meteorologists described DGMR as having the "best envelope", "representing the risk best", as having "much higher detail compared to what [expert meteorologists] are used to at the moment", and as capturing "both the size of convection cells and intensity the best". In the cases where meteorologists chose PySTEPS or the axial attention as their first choice, they pointed out that DGMR showed decay in the intensity for heavy rainfall at T + 90 min and had difficulty predicting isolated showers, which are important future improvements for the method. See the Supplementary Information for further reports from this phase of the meteorologist assessment.
Skilful nowcasting is a long-standing problem of importance for much of weather-dependent decision-making. Our approach using deep generative models directly tackles this important problem, improves on existing solutions and provides the insight needed for real-world decision-makers. We showed—using statistical, economic and cognitive measures—that our approach to generative nowcasting provides improved forecast quality, forecast consistency and forecast value, providing fast and accurate short-term predictions at lead times where existing methods struggle.
Yet, there remain challenges for our approach to probabilistic nowcasting. As the meteorologist assessment demonstrated, our generative method provides skilful predictions compared to other solutions, but the prediction of heavy precipitation at long lead times remains difficult for all approaches. Critically, our work reveals that standard verification metrics and expert judgments are not mutually indicative of value, highlighting the need for newer quantitative measurements that are better aligned with operational utility when evaluating models with few inductive biases and high capacity. Whereas existing practice focuses on quantitative improvements without concern for operational utility, we hope this work will serve as a foundation for new data, code and verification methods, as well as for the greater integration of machine learning and environmental science in forecasting larger sets of environmental variables, that make it possible to provide both competitive verification and operational utility.
We provide additional details of the data, models and evaluation here, with references to extended data that add to the results provided in the main text.
A dataset of radar for the UK was used for all the experiments in the main text. Additional quantitative results on a US dataset are available in Supplementary Information section A.
UK dataset
To train and evaluate nowcasting models over the UK, we use a collection of radar composites from the Met Office RadarNet4 network. This network comprises more than 15 operational, proprietary C-band dual polarization radars covering 99% of the UK (see figure 1 in ref. 34). We refer to ref. 11 for details about how radar reflectivity is post-processed to obtain the two-dimensional radar composite field, which includes orographic enhancement and mean field adjustment using rain gauges. Each grid cell in the 1,536 × 1,280 composite represents the surface-level precipitation rate (in mm h−1) over a 1 km × 1 km region in the OSGB36 coordinate system. If a precipitation rate is missing (for example, because the location is not covered by any radar, or if a radar is out of order), the corresponding grid cell is assigned a negative value which is used to mask the grid cell at training and evaluation time. The radar composites are quantized in increments of 1/32 mm h−1.
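The masking and quantization conventions above can be sketched as follows; this is a minimal illustration (the function name is ours), assuming only what is stated: negative cells mark missing data and rates are stored in 1/32 mm h−1 increments.

```python
# Illustrative preprocessing of a 2D radar composite: negative cells mark
# missing data and are masked out; rates snap to the 1/32 mm/h grid.
import numpy as np

def preprocess_composite(field):
    """Return (rates, mask) for a 2D radar composite in mm/h."""
    mask = field >= 0.0                    # negative cells = not observed
    rates = np.where(mask, field, 0.0)     # zero-fill masked cells
    rates = np.round(rates * 32.0) / 32.0  # quantize to 1/32 mm/h increments
    return rates, mask

field = np.array([[0.5, -1.0], [2.03, 0.03]])
rates, mask = preprocess_composite(field)
```

In training and evaluation, the mask would be carried alongside the rates so that missing cells do not contribute to losses or metrics.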
We use radar collected every five minutes between 1 January 2016 and 31 December 2019. We use the following data splits for model development. Fields from the first day of each month from 2016 to 2018 are assigned to the validation set. All other days from 2016 to 2018 are assigned to the training set. Finally, data from 2019 are used for the test set, preventing data leakage and testing for out of distribution generalization. For further experiments testing in-distribution performance using a different data split, see Supplementary Information section C.
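The date-based split described above is simple enough to state as code; this is a sketch with an illustrative function name, not the authors' implementation.

```python
# Assign a calendar date to the train/validation/test split described above:
# first day of each month in 2016-2018 -> validation, remaining 2016-2018
# days -> training, all of 2019 -> test.
import datetime

def assign_split(date):
    if date.year == 2019:
        return "test"
    if 2016 <= date.year <= 2018:
        return "validation" if date.day == 1 else "train"
    raise ValueError("date outside the 2016-2019 study period")

assign_split(datetime.date(2017, 5, 1))   # validation
assign_split(datetime.date(2018, 7, 14))  # train
assign_split(datetime.date(2019, 2, 2))   # test
```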
Training set preparation
Most radar composites contain little to no rain. Supplementary Table 2 shows that approximately 89% of grid cells contain no rain in the UK. Medium to heavy precipitation (using rain rate above 4 mm h−1) comprises fewer than 0.4% of grid cells in the dataset. To account for this imbalanced distribution, the dataset is rebalanced to include more data with heavier precipitation radar observations, which allows the models to learn useful precipitation predictions.
Each example in the dataset is a sequence of 24 radar observations of size 1,536 × 1,280, representing two continuous hours of data. The maximum rain rate is capped at 128 mm h−1, and sequences that are missing one or more radar observations are removed. 256 × 256 crops are extracted and an importance sampling scheme is used to reduce the number of examples containing little precipitation. We describe this importance sampling and the parameters used in Supplementary Information section A.1. After subsampling and removing entirely masked examples, the number of examples in the training set is roughly 1.5 million.
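The per-example preparation can be sketched as below. The importance-sampling acceptance step is elided (its parameters are in Supplementary Information section A.1), and the crop location is drawn uniformly here purely for illustration.

```python
# Sketch of per-example preparation: drop sequences with missing frames,
# cap rain rates at 128 mm/h, and extract a 256x256 crop.
import numpy as np

def prepare_sequence(frames, crop=256, cap=128.0, rng=None):
    """frames: (24, H, W) rain rates; missing cells are negative."""
    if (frames < 0).any():       # a missing observation disqualifies the example
        return None
    rng = rng or np.random.default_rng()
    frames = np.minimum(frames, cap)
    _, h, w = frames.shape
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    return frames[:, y:y + crop, x:x + crop]

seq = np.full((24, 300, 300), 1.0)
out = prepare_sequence(seq)      # shape (24, 256, 256)
```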
Model details and baselines
Here, we describe the proposed method and the three baselines to which we compare performance. When applicable, we describe both the architectures of the models and the training methods. There is a wealth of prior work, and we survey them as additional background in Supplementary Information section E.
DGMR
A high-level description of the model was given in the main text and in Fig. 1a, and we provide some insight into the design decisions here.
The nowcasting model is a generator that is trained using two discriminators and an additional regularization term. Extended Data Fig. 1 shows a detailed schematic of the generative model and the discriminators. More precise descriptions of these architectures are given in Supplementary Information section B and correspond to the code; pseudocode is also available in the Supplementary Information.
The generator in Fig. 1a comprises the conditioning stack, which processes the past four radar fields used as context. Making effective use of such context is typically a challenge for conditional generative models, and this stack structure allows information from the context data to be used at multiple resolutions; it is also used in other competitive video GAN models, for example, in ref. 26. This stack produces a context representation that is used as an input to the sampler. A latent conditioning stack takes samples from a N(0, 1) Gaussian distribution and reshapes them into a second latent representation. The sampler is a recurrent network formed with convolutional gated recurrent units (GRUs) that uses the context and latent representations as inputs. The sampler makes predictions of 18 future radar fields (the next 90 min). This architecture is memory efficient and has had success in other forecasting applications. We also made comparisons with longer context using the past 6 or 8 frames, but this did not result in appreciable improvements.
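To make the sampler's recurrence concrete, here is a minimal sketch of a single convolutional GRU update. For brevity the "convolutions" are 1×1 (plain per-pixel linear maps); the actual model uses spatial kernels and learned weights, so this only illustrates the gating structure.

```python
# Minimal convolutional GRU step: update and reset gates modulate how much of
# the hidden state is replaced by a candidate computed from (state, input).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_gru_step(h, x, Wz, Wr, Wh):
    """h, x: (C, H, W) hidden state and input; W*: (C, 2C) 1x1 'conv' weights."""
    hx = np.concatenate([h, x], axis=0)                   # (2C, H, W)
    z = sigmoid(np.einsum('co,ohw->chw', Wz, hx))         # update gate
    r = sigmoid(np.einsum('co,ohw->chw', Wr, hx))         # reset gate
    rhx = np.concatenate([r * h, x], axis=0)
    h_tilde = np.tanh(np.einsum('co,ohw->chw', Wh, rhx))  # candidate state
    return (1 - z) * h + z * h_tilde

C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
h = rng.normal(size=(C, H, W))
x = rng.normal(size=(C, H, W))
Wz, Wr, Wh = [rng.normal(scale=0.1, size=(C, 2 * C)) for _ in range(3)]
h_next = conv_gru_step(h, x, Wz, Wr, Wh)
```

Unrolling this step 18 times, with the latent representation feeding the initial state and the context representation feeding the inputs, gives the shape of the sampler's prediction loop.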
Two discriminators in Fig. 1b are used to allow for adversarial learning in space and time. The spatial and temporal discriminators share the same structure, except that the temporal discriminator uses 3D convolutions to account for the time dimension. Only 8 of the 18 lead times are used in the spatial discriminator, and a random 128 × 128 crop is used for the temporal discriminator. These choices allow the models to fit within memory. We include a spatial attention block in the latent conditioning stack because it makes the model more robust across different types of regions and events, and provides an implicit regularization that prevents overfitting, particularly for the US dataset.
Both the generator and discriminators use spectrally normalized convolutions throughout, similar to ref. 35, since this is widely established to improve optimization. During model development, we initially found that including a batch normalization layer (without variance scaling) prior to the linear layer of the two discriminators improved training stability. The results presented use batch normalization, but we later were able to obtain nearly identical quantitative and qualitative results without it.
The generator is trained with losses from the two discriminators and a grid cell regularization term (denoted $\mathcal{L}_{\mathrm{R}}(\theta)$). The spatial discriminator $D_\phi$ has parameters $\phi$, the temporal discriminator $T_\psi$ has parameters $\psi$, and the generator $G_\theta$ has parameters $\theta$. We indicate the concatenation of two fields using the notation $\{X; G\}$. The generator objective that is maximized is
$$\mathcal{L}_{\mathrm{G}}(\theta)=\mathbb{E}_{X_{1:M+N}}\left[\mathbb{E}_{Z}\left[D(G_{\theta}(Z;X_{1:M}))+T(\{X_{1:M};G_{\theta}(Z;X_{1:M})\})\right]-\lambda\mathcal{L}_{\mathrm{R}}(\theta)\right];$$
$$\mathcal{L}_{\mathrm{R}}(\theta)=\frac{1}{HWN}\left\|\left(\mathbb{E}_{Z}[G_{\theta}(Z;X_{1:M})]-X_{M+1:M+N}\right)\odot w(X_{M+1:M+N})\right\|_{1}.$$
We use Monte Carlo estimates for the expectations over the latent $Z$ in equations (2) and (3). These are calculated using six samples per input $X_{1:M}$, which comprises $M = 4$ radar observations. The grid cell regularizer ensures that the mean prediction remains close to the ground truth, and is averaged across all grid cells along the height $H$, width $W$ and lead-time $N$ axes. It is weighted towards heavier rainfall targets using the function $w(y) = \max(y + 1, 24)$, which operates element-wise on input vectors and is clipped at 24 for robustness to spuriously large values in the radar. The GAN spatial discriminator loss $\mathcal{L}_{\mathrm{D}}(\phi)$ and temporal discriminator loss $\mathcal{L}_{\mathrm{T}}(\psi)$ are minimized with respect to parameters $\phi$ and $\psi$, respectively; $\mathrm{ReLU}(x) = \max(0, x)$. The discriminator losses use a hinge loss formulation26:
$$\mathcal{L}_{\mathrm{D}}(\phi)=\mathbb{E}_{X_{1:M+N},Z}\left[\mathrm{ReLU}(1-D_{\phi}(X_{M+1:M+N}))+\mathrm{ReLU}(1+D_{\phi}(G(Z;X_{1:M})))\right];$$
$$\mathcal{L}_{\mathrm{T}}(\psi)=\mathbb{E}_{X_{1:M+N},Z}\left[\mathrm{ReLU}(1-T_{\psi}(X_{1:M+N}))+\mathrm{ReLU}(1+T_{\psi}(\{X_{1:M};G(Z;X_{1:M})\}))\right].$$
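Numerically, the hinge discriminator losses and the grid-cell regularizer defined above can be sketched as follows, with discriminator scores and generator outputs stubbed by plain arrays. The weight here is implemented as min(y + 1, 24), matching the description that the weight "is clipped at 24"; whether the published formula's max is a typo is left to the reader.

```python
# Hinge losses for the discriminators and the weighted-L1 grid cell
# regularizer, evaluated on stub arrays.
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def disc_hinge_loss(score_real, score_fake):
    # Push real scores above +1 and generated scores below -1.
    return np.mean(relu(1.0 - score_real) + relu(1.0 + score_fake))

def grid_cell_reg(pred_mean, target, cap=24.0):
    # Weighted L1 between the Monte Carlo mean prediction and the target,
    # averaged over lead time, height and width (the 1/HWN factor).
    w = np.minimum(target + 1.0, cap)   # weight clipped at 24
    return np.mean(np.abs(pred_mean - target) * w)

pred_mean = np.zeros((18, 64, 64))
target = np.ones((18, 64, 64))
reg = grid_cell_reg(pred_mean, target)   # |0-1| * min(1+1, 24) = 2.0
```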
During evaluation, the generator architecture is the same, but unless otherwise noted, full radar observations of size 1,536 × 1,280, and latent variables with height and width 1/32 of the radar observation size (48 × 40 × 8 of independent draws from a normal distribution), are used as inputs to the conditioning stack and latent conditioning stack, respectively. In particular, the latent conditioning stack allows for spatiotemporally consistent predictions for much larger regions than those on which the generator is trained.
For operational purposes and decision-making, the most important aspect of a probabilistic prediction is its resolution36. Specific applications will impose different requirements on reliability, which can often be addressed by post-processing and calibration. We develop one possible post-processing approach to improve the reliability of the generative nowcasts. At prediction time, the latent variables are sampled from a Gaussian distribution with standard deviation 2 (rather than 1), relying on empirical insights on maintaining resolution while increasing sample diversity in generative models24,37. In addition, for each realization we apply a stochastic perturbation to the input radar by multiplying the entire input radar field by a single constant drawn from a unit-mean gamma distribution G(α = 5, β = 5). Extended Data Figures 4 (UK) and 9 (US) show how the post-processing improves the reliability diagram and rank histogram compared with the uncorrected approach.
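The two post-processing steps can be sketched directly; the function name and the latent shape ordering are illustrative (the text gives 48 × 40 × 8 without fixing an axis order).

```python
# Post-processing sketch: widen the latent distribution to sigma = 2, and
# apply one multiplicative unit-mean gamma perturbation per realization.
import numpy as np

def sample_inputs(radar_context, latent_shape=(8, 48, 40), rng=None):
    rng = rng or np.random.default_rng()
    z = rng.normal(loc=0.0, scale=2.0, size=latent_shape)  # widened latents
    # Gamma(shape=5, scale=1/5) has mean 5 * (1/5) = 1.
    scale = rng.gamma(shape=5.0, scale=1.0 / 5.0)
    return radar_context * scale, z

ctx, z = sample_inputs(np.ones((4, 256, 256)), rng=np.random.default_rng(1))
```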
The model is trained for 5 × 105 generator steps, with two discriminator steps per generator step. The learning rate for the generator is 5 × 10−5, and 2 × 10−4 for the discriminator, using the Adam optimizer38 with β1 = 0.0 and β2 = 0.999. The scaling parameter for the grid cell regularization is set to λ = 20, as this produced the best continuous ranked probability score results on the validation set. We train on 16 tensor processing unit cores (https://cloud.google.com/tpu) for one week on random crops of the training dataset of size 256 × 256 measurements using a batch size of 16 per training step. The Supplementary Information contains additional comparisons showing the contributions of the different loss components to overall performance. We evaluated sampling speed on both CPU (10-core AMD EPYC) and GPU (NVIDIA V100) hardware, generating ten samples and reporting the median time per sample: 25.7 s on CPU and 1.3 s on GPU.
UNet baseline
We use a UNet encoder–decoder model as a strong baseline, similar to how it was used in related studies5,15, but we make architectural and loss-function changes that improve its performance at longer lead times and heavier precipitation. First, we replace all convolutional layers with residual blocks, as the latter provide a small but consistent improvement across all prediction thresholds. Second, rather than predicting only a single output and using autoregressive sampling during evaluation, the model predicts all frames in a single forward pass. This somewhat mitigates the excessive blurring found in ref. 5 and improves results on quantitative evaluation. Our architecture consists of six residual blocks, where each block doubles the number of channels of the latent representation and is followed by spatial down-sampling by a factor of two. The representation with the highest resolution has 32 channels, increasing up to 1,024 channels.
Similar to ref. 6, we use a loss weighted by precipitation intensity. Rather than weighting by precipitation bins, however, we reweight the loss directly by the precipitation to improve results on thresholds outside of the bins specified by ref. 6. Additionally, we truncate the maximum weight at 24 mm h−1, as errors in the observed reflectivity lead to larger errors in the precipitation values. We also found that including a mean squared error loss made predictions more sensitive to radar artefacts; as a result, the model is trained only with the precipitation-weighted mean absolute error loss.
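A minimal sketch of such a loss is below; the exact weighting form (for example, whether an offset is added before truncation) is an assumption here.

```python
# Precipitation-weighted mean absolute error: errors at heavier observed
# rain rates count more, with the weight truncated at 24 mm/h.
import numpy as np

def weighted_mae(pred, target, cap=24.0):
    w = np.minimum(target, cap)
    return np.mean(w * np.abs(pred - target))
```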
The model is trained with batch size eight for 1 × 106 steps, with learning rate 2 × 10−4 with weight decay, using the Adam optimizer with default exponential rates. We select a model using early stopping on the average area under the precision–recall curve on the validation set. The UNet baselines are trained with 4 frames of size 256 × 256 as context.
Axial attention baseline
As a second strong deep learning-based baseline, we adapt the MetNet model19, which is a combination of a convolutional long short-term memory (LSTM) encoder17 and an axial attention decoder39, for radar-only nowcasting. MetNet was demonstrated to achieve strong results on short-term (up to 8 h) low precipitation forecasting using radar and satellite data of the continental USA, making per-grid-cell probabilistic predictions and factorizing spatial dependencies using alternating layers of axial attention.
We modified the axial attention encoder–decoder model to use radar observations only, as well as to cover the spatial and temporal extent of data in this study. We rescaled the targets of the model to improve its performance on forecasts of heavy precipitation events. After evaluation on both UK and US data, we observed that additional satellite or topographical data as well as the spatiotemporal embeddings did not provide statistically significant CSI improvement. An extended description of the model and its adaptations is provided in Supplementary Information section D.
The only prediction method described in ref. 19 is the per-grid-cell distributional mode, and this is considered the default method for comparison. To ensure the strongest baseline model, we also evaluated other prediction approaches. We assessed using independent samples from the per-grid-cell marginal distributions, but this was not better than using the mode when assessed quantitatively and qualitatively. We also combined the marginal distributions with a Gaussian process copula, in order to incorporate spatiotemporal correlation similar to the stochastically perturbed parametrization tendencies (SPPT) scheme of ref. 40. We used kernels and correlation scales chosen to minimize spatiotemporally pooled CRPS metrics. The best-performing combination was the product of a Gaussian kernel with a 25 km spatial correlation scale and an AR(1) kernel with a 60 min temporal correlation scale. Results, however, were not highly sensitive to these choices. All settings resulted in samples that were not physically plausible, owing to the stationary and unconditional correlation structure. These samples were also not favoured by external experts. Hence, we use the mode prediction throughout.
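The copula construction can be sketched in one dimension as follows. This is an illustration under stated assumptions only: the marginal quantile function is stubbed with an Exp(1) distribution, whereas the real experiment would use the axial attention model's per-cell marginals.

```python
# Gaussian copula with a separable kernel: Gaussian in space (25 km scale)
# times AR(1) in time (60 min scale). Correlated Gaussians are mapped to
# uniforms, then pushed through the marginal quantile function.
import numpy as np
from math import erf

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def copula_sample(coords_km, times_min, quantile_fn, rng):
    """coords_km, times_min: (n,) locations and lead times of the n cells."""
    d = coords_km[:, None] - coords_km[None, :]
    dt = np.abs(times_min[:, None] - times_min[None, :])
    K = np.exp(-0.5 * (d / 25.0) ** 2) * np.exp(-dt / 60.0)  # product kernel
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(K)))
    g = L @ rng.normal(size=len(K))           # correlated Gaussian field
    u = np.array([normal_cdf(v) for v in g])  # correlated uniforms
    return quantile_fn(u)                     # push through the marginals

rng = np.random.default_rng(1)
coords = np.array([0.0, 5.0, 50.0])
times = np.array([0.0, 30.0, 60.0])
sample = copula_sample(coords, times, lambda u: -np.log1p(-u), rng)  # Exp(1) marginals
```

Because the correlation structure is stationary and independent of the forecast, samples drawn this way cannot adapt to the actual precipitation pattern, which is consistent with the implausibility noted above.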
PySTEPS baseline
We use the PySTEPS implementation from ref. 4 with the default configuration available at https://github.com/pySTEPS/pysteps. Refs 3,4 provide more details of this approach. In our evaluation, unlike the other models, which use inputs of size 256 × 256, PySTEPS is given the advantage of inputs of size 512 × 512, which was found to improve its performance. PySTEPS includes post-processing using probability matching to recalibrate its predictions, and this is used in all results.
We evaluate our model and baselines using commonly used quantitative verification measures, as well as qualitatively using a cognitive assessment task with expert meteorologists. Unless otherwise noted, models are trained on years 2016–2018 and evaluated on 2019 (that is, a yearly split).
Expert meteorologist study
The expert meteorologist study described is a two-phase protocol consisting of a ranked comparison task followed by a retrospective recall interview. The study was submitted for ethical assessment to an independent ethics committee and received favourable review. Key elements of the protocol involved consent forms that clearly explained the task and time commitment, clear messaging on the ability to withdraw from the study at any point, and that the study was not an assessment of the meteorologist's skills and would not affect their employment and role in any way. Meteorologists were not paid for participation, since involvement in these types of studies is considered part of the broader role of the meteorologist. The study was anonymized, and only the study lead had access to the assignment of experimental IDs. The study was restricted to meteorologists in guidance-related roles, that is, meteorologists whose role is to interpret weather forecasts, synthesize forecasts and provide interpretations, warnings and watches. Fifty-six meteorologists agreed to participate in the study.
Phase 1 of the study, the rating assessment, involved each meteorologist receiving a unique form as part of their experimental evaluation. The axial attention mode prediction is used in the assessment, and this was selected as the most appropriate prediction during the pilot assessment of the protocol by the chief meteorologist. The phase 1 evaluation comprised an initial practice phase of three judgments for meteorologists to understand how to use the form and assign ratings, followed by an experimental phase that involved 20 trials that were different for every meteorologist, and a final case study phase in which all meteorologists rated the same three scenarios (the scenarios in Fig. 1a, and Extended Data Figs. 2 and 3); these three events were chosen by the chief meteorologist—who is independent of the research team and also did not take part in the study—as difficult events that would expose challenges for the nowcasting approaches we compare. Ten meteorologists participated in the subsequent retrospective recall interview. This interview involved an in-person interview in which experts were asked to explain the reasoning for their assigned rating and what aspects informed their decision-making. These interviews all used the same script for consistency, and these sessions were recorded with audio only. Once all the audio was transcribed, the recordings were deleted.
The 20 trials of the experimental phase were split into two parts, each containing ten trials. The first ten trials comprised medium rain events (rainfall greater than 5 mm h−1) and the second ten comprised heavy rain events (rainfall greater than 10 mm h−1). 141 days from 2019 were chosen by the chief meteorologist as having medium-to-heavy precipitation events. From these dates, radar fields were chosen algorithmically according to the following procedure. First, we excluded from the crop selection procedure the 192 km margin on each side of the radar field. Then, the crop over a 256 km region containing the maximum fraction of grid cells above the given threshold (5 or 10 mm h−1) was selected from the radar image. If no precipitation in the frame exceeded the given threshold, the crop with the maximum average intensity was selected. We use predictions without post-processing in the study. Each meteorologist assessed a unique set of predictions, which allows us to average over the uncertainty in predictions and individual preferences to show a statistical effect.
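The crop selection procedure above can be sketched as follows; the stride and the tie-breaking by mean intensity are our illustrative choices, and the example uses a toy field with small margins for brevity.

```python
# Pick the crop (inside the margins) with the largest fraction of cells above
# the threshold; when no cell exceeds it, this reduces to the crop with the
# maximum average intensity (the mean acts as a tie-breaker on zero fractions).
import numpy as np

def select_crop(field, threshold, margin=192, size=256, stride=64):
    h, w = field.shape
    inner = field[margin:h - margin, margin:w - margin]
    best, best_frac, best_mean = None, -1.0, -1.0
    H, W = inner.shape
    for y in range(0, H - size + 1, stride):
        for x in range(0, W - size + 1, stride):
            crop = inner[y:y + size, x:x + size]
            frac, mean = np.mean(crop > threshold), np.mean(crop)
            if (frac, mean) > (best_frac, best_mean):
                best = (y + margin, x + margin)
                best_frac, best_mean = frac, mean
    return best   # top-left corner of the selected crop in the full field

field = np.zeros((12, 12))
field[4:8, 4:8] = 5.0
select_crop(field, 1.0, margin=2, size=4, stride=2)   # (4, 4)
```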
Extended Data Figure 2 shows a high-intensity precipitation front with decay and Extended Data Fig. 3 shows a cyclonic circulation event (low-pressure area), both of which are difficult for current deep learning models to predict. These two cases were also assessed by all expert meteorologists as part of the evaluative study, and in both cases, meteorologists significantly preferred the generative approach (n = 56, P < 10−4) to competing methods. For the high-intensity precipitation front in Extended Data Fig. 2, meteorologists ranked the generative approach first in 73% of cases. Meteorologists reported that DGMR has "decent accuracy with both the shape and intensity of the feature … but loses most of the signal for embedded convection by T + 90". PySTEPS was described as "too extensive with convective cells and lacks the organisation seen in the observations", and the axial attention model as "highlighting the worst areas" but "looks wrong".
For the cyclonic circulation in Extended Data Fig. 3, meteorologists ranked the generative approach first in 73% of cases. Meteorologists reported that it was difficult to judge this case between DGMR and PySTEPS. When making their judgment, they chose DGMR since it has "best fit and rates overall". DGMR "captures the extent of precipitation overall [in the] area, though slightly overdoes rain coverage between bands", whereas PySTEPS "looks less spatially accurate as time goes on". The axial attention model "highlights the area of heaviest rain although its structure is unrealistic and too binary". We provide additional quotes in Supplementary Information section C.6.
Quantitative evaluation
We evaluate all models using established metrics20: CSI, CRPS, Pearson correlation coefficient, the relative economic value22,41,42, and radially averaged PSD. These are described in Supplementary Information section F.
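As a concrete example of these measures, the CSI at a given rain-rate threshold is the ratio of hits to the sum of hits, misses and false alarms; a minimal sketch:

```python
# Critical success index at a rain-rate threshold:
# CSI = hits / (hits + misses + false alarms).
import numpy as np

def csi(forecast, observed, threshold):
    f, o = forecast >= threshold, observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

forecast = np.array([0.0, 2.0, 6.0, 6.0])
observed = np.array([0.0, 6.0, 6.0, 0.0])
csi(forecast, observed, 4.0)   # 1 hit, 1 miss, 1 false alarm -> 1/3
```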
To make evaluation computationally feasible, for all metrics except PSD, we evaluate the models on a subsampled test set, consisting of 512 × 512 crops drawn from the full radar images. We use an importance sampling scheme (described in Supplementary Information section A.1) to ensure that this subsampling does not unduly compromise the statistical efficiency of our estimators of the evaluation metrics. The subsampling reduces the size of the test set to 66,851 examples, and Supplementary Information section C.3 shows that CSI results do not differ when evaluated on the dataset with or without subsampling. All models except PySTEPS are given the centre 256 × 256 crop as input. PySTEPS is given the entire 512 × 512 crop as input, as this improves its performance. The predictions are evaluated on the centre 64 × 64 grid cells, ensuring that models are not unfairly penalized by boundary effects. Our statistical significance tests use every other week of data in the test set (leaving n = 26 weeks) as independent units. We test the null hypothesis that performance metrics are equal for the two models, against the two-sided alternative, using a paired permutation test43 with 106 permutations.
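A paired permutation test of this kind can be sketched as follows: under the null, the sign of each per-week score difference is exchangeable, so random sign flips give the reference distribution for the mean difference.

```python
# Two-sided paired permutation test on per-unit score differences between
# two models, using random sign flips of the paired differences.
import numpy as np

def paired_permutation_test(scores_a, scores_b, n_perm=10_000, rng=None):
    rng = rng or np.random.default_rng(0)
    diffs = np.asarray(scores_a) - np.asarray(scores_b)
    observed = abs(diffs.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    perm_means = np.abs((signs * diffs).mean(axis=1))
    # Include the observed statistic so the P value is strictly positive.
    return (1 + np.sum(perm_means >= observed)) / (n_perm + 1)

# Illustrative call on two lists of weekly scores (data here is made up).
p = paired_permutation_test([0.45, 0.48, 0.44], [0.41, 0.46, 0.43])
```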
Extended Data Figure 4 shows additional probabilistic metrics that measure the calibration of the evaluated methods. This figure shows a comparison of the relative economic value of the probabilistic methods, showing DGMR providing the best value. We also show how the uncertainty captured by the ensemble increases as the number of samples used is increased from 1 to 20.
Extended Data Figure 5 compares the performance to that of an NWP, using the UKV deterministic forecast44, showing that NWPs are not competitive in this regime. See Supplementary Information section C.2 for further details of the NWP evaluation.
To verify other generalization characteristics of our approach, as an alternative to the yearly data split (training on 2016–2018 and testing on 2019), we also use a weekly split: the training, validation and test sets comprise Thursday to Monday, Tuesday, and Wednesday, respectively. The sizes of the training and test datasets are 1.48 million and 36,106 examples, respectively. Extended Data Figure 6 shows the same competitive verification performance of DGMR in this generalization test.
To further assess the generalization of our method, we evaluate on a second dataset from the USA using the multi-radar multi-sensor (MRMS) dataset, which consists of radar composites for years 2017–201945. We use two years for training and one year for testing, and even with this more limited data source, our model still shows competitive performance relative to the other baselines. Extended Data Figs 7–9 compare all methods on all the metrics we have described, showing both the generalization and skilful performance on this second dataset. The Supplementary Information contains additional comparisons of performance with different initializations and of different loss function components.
Processed radar data for the UK yearly data split is released under a creative commons licence. A smaller dataset for exploratory analysis is freely available, and the full dataset (around 1 TB) is also available; for details, see github.com/deepmind/deepmind-research/tree/master/nowcasting. The associated datasets contain public sector information licensed by the Met Office under the UK Open Government Licence 3.0. For the raw data, other licences, and alternative time periods, the data from the UK can be obtained with appropriate agreements from the Met Office; see https://www.metoffice.gov.uk/research/weather/observations-research/radar-products or contact the Met Office Data Provisioning Team using [email protected]. The multi-radar multi-sensor (MRMS) dataset is available with appropriate agreements from NOAA; see https://www.nssl.noaa.gov/projects/mrms/ or contact the MRMS data teams using [email protected]. Source data are provided with this paper.
We rely on several open-source code frameworks including Iris (scitools-iris.readthedocs.io), Cartopy (scitools.org.uk/cartopy), TensorFlow (www.tensorflow.org), and Colab (colab.sandbox.google.com). We have also used open-source tools for PySTEPS (pysteps.github.io), and for Axial Attention (github.com/google-research/google-research/tree/master/axial). The pseudocode for the generative algorithm can be found in the file pseudocode.py in the Supplementary Information. All the neural architecture details and hyperparameters are described in Methods and Supplement. Alongside this model pseudocode, we have also released a pretrained version of the generative model available at github.com/deepmind/deepmind-research/tree/master/nowcasting.
Wilson, J. W., Feng, Y., Chen, M. & Roberts, R. D. Nowcasting challenges during the Beijing Olympics: Successes, failures, and implications for future nowcasting systems. Weather Forecast. 25, 1691–1714 (2010).
Schmid, F., Wang, Y. & Harou, A. Guidelines for Nowcasting Techniques vol. 1198 (World Meteorological Organization, 2017).
Bowler, N. E., Pierce, C. E. & Seed, A. W. STEPS: a probabilistic precipitation forecasting scheme which merges an extrapolation nowcast with downscaled NWP. Quart. J. Roy. Meteorol. Soc. 132, 2127–2155 (2006).
Pulkkinen, S. et al. PySTEPS: an open-source Python library for probabilistic precipitation nowcasting (v1. 0). Geosci. Mod. Dev. 12, 4185–4219 (2019).
Ayzel, G., Scheffer, T. & Heistermann, M. Rainnet v1.0: a convolutional neural network for radar-based precipitation nowcasting. Geosci. Mod. Dev. 13, 2631–2644 (2020).
Shi, X. et al. Deep learning for precipitation nowcasting: a benchmark and a new model. In Advances in Neural Information Processing Systems vol. 30, 5617–5627 (NeurIPS, 2017).
Toth, Z. & Kalnay, E. Ensemble forecasting at NCEP and the breeding method. Mon. Weather Rev. 125, 3297–3319 (1997).
Pierce, C., Seed, A., Ballard, S., Simonin, D. & Li, Z. In Doppler Radar Observations: Weather Radar, Wind Profiler, Ionospheric Radar, and Other Advanced Applications (eds Bech, J. & Chau, J. L.) 97–142 (IntechOpen, 2012).
Sun, J. Convective-scale assimilation of radar data: progress and challenges. Quart. J. Roy. Meteorol. Soc. 131, 3439–3463 (2005).
Buehner, M. & Jacques, D. Non-Gaussian deterministic assimilation of radar-derived precipitation accumulations. Mon. Weather Rev. 148, 783–808 (2020).
Harrison, D. et al. The evolution of the Met Office radar data quality control and product generation system: Radarnet. In AMS Conference on Radar Meteorology 14–18 (AMS, 2015).
Germann, U. & Zawadzki, I. Scale dependence of the predictability of precipitation from continental radar images. Part II: probability forecasts. J. Appl. Meteorol. 43, 74–89 (2004).
Imhoff, R., Brauer, C., Overeem, A., Weerts, A. & Uijlenhoet, R. Spatial and temporal evaluation of radar rainfall nowcasting techniques on 1,533 events. Water Resour. Res. 56, e2019WR026723 (2020).
Lebedev, V. et al. Precipitation nowcasting with satellite imagery. In International Conference on Knowledge Discovery & Data Mining 2680–2688 (ACM, 2019).
Agrawal, S. et al. Machine learning for precipitation nowcasting from radar images. Preprint at https://arxiv.org/abs/1912.12132 (2019).
Trebing, K., Stańczyk, T. & Mehrkanoon, S. SmaAt-UNet: precipitation nowcasting using a small attention-UNet architecture. Pattern Recog. Lett. 145, 178–186 (2021).
Xingjian, S. et al. Convolutional LSTM network: a machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems vol. 28, 802–810 (NeurIPS, 2015).
Foresti, L., Sideris, I. V., Nerini, D., Beusch, L. & Germann, U. Using a 10-year radar archive for nowcasting precipitation growth and decay: a probabilistic machine learning approach. Weather Forecast. 34, 1547–1569 (2019).
Sønderby, C. K. et al. MetNet: a neural weather model for precipitation forecasting. Preprint at https://arxiv.org/abs/2003.12140 (2020).
Jolliffe, I. T. & Stephenson, D. B. Forecast Verification: A Practitioner's Guide in Atmospheric Science (John Wiley & Sons, 2012).
Palmer, T. & Räisänen, J. Quantifying the risk of extreme seasonal precipitation events in a changing climate. Nature 415, 512–514 (2002).
Richardson, D. S. Skill and relative economic value of the ECMWF ensemble prediction system. Quart. J. Roy. Meteorol. Soc. 126, 649–667 (2000).
Goodfellow, I. et al. Generative adversarial nets. In Advances in Neural Information Processing Systems vol. 27, 2672–2680 (NeurIPS, 2014).
Brock, A., Donahue, J. & Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations (ICLR, 2019).
Mirza, M. & Osindero, S. Conditional generative adversarial nets. Preprint at https://arxiv.org/abs/1411.1784 (2014).
Clark, A., Donahue, J. & Simonyan, K. Adversarial video generation on complex datasets. Preprint at https://arxiv.org/abs/1907.06571 (2019).
Schaefer, J. T. The critical success index as an indicator of warning skill. Weather Forecast. 5, 570–575 (1990).
Harris, D., Foufoula-Georgiou, E., Droegemeier, K. K. & Levit, J. J. Multiscale statistical properties of a high-resolution precipitation forecast. J. Hydrometeorol. 2, 406–418 (2001).
Sinclair, S. & Pegram, G. Empirical mode decomposition in 2-D space and time: a tool for space-time rainfall analysis and nowcasting. Hydrol. Earth Sys. Sci. 9, 127–137 (2005).
Gneiting, T. & Raftery, A. E. Strictly proper scoring rules, prediction, and estimation. J. Am. Stat. Assoc. 102, 359–378 (2007).
Gilleland, E., Ahijevych, D., Brown, B. G., Casati, B. & Ebert, E. E. Intercomparison of spatial forecast verification methods. Weather Forecast. 24, 1416–1430 (2009).
Guo, C., Pleiss, G., Sun, Y. & Weinberger, K. Q. On calibration of modern neural networks. In International Conference on Machine Learning vol. 34, 1321–1330 (PMLR, 2017).
Crandall, B. W. & Hoffman, R. R. In The Oxford Handbook of Cognitive Engineering (ed. Lee, J. D.) 229–239 (Oxford Univ. Press, 2013).
Fairman, J., Schultz, D., Kirshbaum, D., Gray, S. & Barrett, A. Climatology of size, shape, and intensity of precipitation features over Great Britain and Ireland. J. Hydrometeorol. 18, 1595–1615 (2017).
Zhang, H., Goodfellow, I., Metaxas, D. & Odena, A. Self-attention generative adversarial networks. In International Conference on Machine Learning vol. 36, 7354–7363 (PMLR, 2019).
Atger, F. The skill of ensemble prediction systems. Mon. Weather Rev. 127, 1941–1953 (1999).
Ravuri, S. & Vinyals, O. Classification accuracy score for conditional generative models. In Advances in Neural Information Processing Systems vol. 32 (NeurIPS, 2019).
Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR, 2015).
Ho, J., Kalchbrenner, N., Weissenborn, D. & Salimans, T. Axial attention in multidimensional transformers. Preprint at https://arxiv.org/abs/1912.12180 (2019).
Palmer, T. et al. Stochastic Parametrization and Model Uncertainty (ECMWF, 2009).
Roberts, N. M. & Lean, H. W. Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Weather Rev. 136, 78–97 (2008).
Schwartz, C. S. et al. Toward improved convection-allowing ensembles: Model physics sensitivities and optimizing probabilistic guidance with small ensemble membership. Weather Forecast. 25, 263–280 (2010).
Edgington, E. & Onghena, P. Randomization Tests (CRC, 2007).
Bush, M. et al. The first Met Office unified model–JULES regional atmosphere and land configuration, RAL1. Geosci. Mod. Dev. 13, 1999–2029 (2020).
Smith, T. M. et al. Multi-radar multi-sensor (MRMS) severe weather and aviation products: Initial operating capabilities. Bull. Amer. Meteorol. Soc. 97, 1617–1630 (2016).
Jarvis, A., Reuter, H., Nelson, A. & Guevara, E. Hole-Filled Seamless SRTM Data V4 (International Centre for Tropical Agriculture, 2008).
We acknowledge the expertise and contributions of the anonymous expert meteorologists that are central to the findings of this study. We thank our DeepMind colleagues, including A. Muldal, A. Pierce, D. Hassabis, D. Smith, E. White, J. Donahue, K. McKee, K. Kavukcuoglu, L. Bennet, L. Deason, M. Grimes, O. Vinyals, P. Luc and R. Ahamed; A. Banki-Horvath and L. Chumakova; and our colleagues from the Met Office, including C. Bartholomew, D. Suri, K. Norman, S. Adams, P. Davies and T. McCaie.
These authors contributed equally: Suman Ravuri, Karel Lenc, Matthew Willson
DeepMind, London, UK
Suman Ravuri, Karel Lenc, Matthew Willson, Remi Lam, Piotr Mirowski, Sheleem Kashem, Amol Mandhane, Aidan Clark, Andrew Brock, Karen Simonyan, Raia Hadsell, Ellen Clancy & Shakir Mohamed
Met Office, Exeter, UK
Dmitry Kangin, Megan Fitzsimons, Maria Athanassiadou, Sam Madge, Rachel Prudden, Niall Robinson & Alberto Arribas
University of Exeter, Exeter, UK
Dmitry Kangin, Rachel Prudden & Niall Robinson
University of Reading, Reading, UK
Alberto Arribas
S.M., A.A., E.C., S.R., N.R., K.S. and R.H. managed the research. S. Madge, M.A., D.K. and K.L. collected and prepared the raw data. R.L., M.W., S.K., K.L., A.M., D.K., R.P. and M.A. created data sets and pipelines. S.R., K.L., P.M., M.W., A.M., R.L., D.K., R.P., A.C. and A.B. wrote the software and conducted experiments. K.L., S.R., M.W., R.L., D.K. and S.K. produced the figures and plots. M.F., S.M., D.K., A.A., E.C. and S.R. established and ran the meteorologist evaluation. S.M., S.R., A.A., N.R., K.L., D.K., R.P., E.C., P.M., R.L., M.W. and M.A. wrote the paper. E.C., A.A., N.R., S.R. and S.M. managed licensing and legal agreements.
Correspondence to Shakir Mohamed.
S.R., K.L., M.W., R.L., P.M., S.K., A.M., A.C., A.B., K.S., R.H., E.C. and S.M. are employees of DeepMind, a subsidiary of Alphabet Inc., and own Alphabet stock. D.K., M.F., R.P., M.A., S.M., S.A., N.R. and A.A. were employees of the UK Met Office during the entirety of this research. A.A. contributed to this research while at the Met Office and is now at Microsoft. Provisional patent 63/150,509 was filed covering the generative algorithm described in this paper, listing the authors S.R., K.L., M.J.W., R.L. and P.M. as inventors. The authors declare no other competing interests related to the manuscript.
Peer review information Nature thanks Imme Ebert-Uphoff and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Extended data figures and tables
Extended Data Fig. 1 Detailed architecture of DGMR.
a, Generator architecture. S2D is the space-to-depth operation; D2S is depth-to-space. b, Temporal discriminator architecture (top left), spatial discriminator (middle left), and latent conditioning stack (bottom left) of the generator. On the right are architectures for the G block (top), D and 3D block (middle), and L block (right). For all panels, (↑) or (↓) indicates spatial up-sampling or down-sampling, respectively.
Extended Data Fig. 2 Case study of performance on a challenging precipitation event starting on 2019-07-24 at 03:15 UK, showing two separate banded structures of intense rainfall in the north-east and south-west over northern England. DGMR is better able to predict the spatial coverage and convection compared to other methods over a longer time period, while not over-estimating the intensities, and is significantly preferred by meteorologists (73% first choice, N = 56, p < 2 × 10⁻⁴).
a, Geographic context for the predictions. b, A single prediction at T + 30, T + 60, and T + 90 min lead time for different models. CSI at thresholds 2 mm h⁻¹ and 8 mm h⁻¹ and CRPS for an ensemble of four samples are shown in the bottom left corner. For axial attention we show the mode prediction and a single sample. Images are 256 km × 256 km. c, Expert meteorologist preference for the visualized prediction (axial attention uses the mode prediction; we report the percentage of meteorologists for their first-choice rating as well as the Clopper–Pearson 95% confidence interval). Maps produced with Cartopy and SRTM elevation data46.
Extended Data Fig. 3 Case study of performance on a challenging precipitation event starting on 2019-07-30 at 15:15 UK, showing a slow-moving pattern of precipitation around a low-pressure area that results in the cyclonic banded structures over England. DGMR captures the overall extent of precipitation over the area, though it slightly overdoes rain coverage between bands, and is significantly preferred by meteorologists (73% first choice, N = 56, p < 2 × 10⁻⁴).
a, Geographic context for the predictions. b, A single prediction at T + 30, T + 60, and T + 90 min lead time for different models. CSI at thresholds 2 mm h⁻¹ and 8 mm h⁻¹ and CRPS for an ensemble of four samples are shown in the bottom left corner. For axial attention we show the mode prediction and a single sample. Images are 256 km × 256 km. c, Expert meteorologist preference for the visualized prediction (axial attention uses the mode prediction; we report the percentage of meteorologists for their first-choice rating as well as the Clopper–Pearson 95% confidence interval). Maps produced with Cartopy and SRTM elevation data46.
Extended Data Fig. 4 Further verification scores for the UK in 2019.
a, Comparison of relative economic value across 20 samples of different models for different rain accumulations. UNet generates a single deterministic prediction. b, Effect of larger ensemble in increasing economic value. c, Pearson correlation coefficient of various models at grid-resolution (left), rain rates averaged over a 4 km aggregation (middle) and averaged over a 16 km aggregation (right). d, Reliability diagrams and sharpness plots for two precipitation thresholds for T + 60 min predictions. e, Rank histogram at T + 60 min.
Extended Data Fig. 5 Verification scores for the UK by yearly splits aligned with NWP initialization times.
a, CSI across 20 samples of different models across precipitation thresholds 1 mm h⁻¹ (left), 4 mm h⁻¹, 8 mm h⁻¹ (right). UNet generates a single deterministic prediction. b, CRPS of various models for original predictions (left), average rain rate over a 4 km aggregation (middle), and average rain rate over a 16 km aggregation (right). c, Radially averaged power spectral density for full-frame 2019 predictions for different models. Please note that these results are computed with significantly fewer examples from the UK yearly dataset due to the NWP lead-time alignment.
Extended Data Fig. 6 Verification scores for the UK (weekly split).
a, CSI for 1 mm h⁻¹ (left), 4 mm h⁻¹, 8 mm h⁻¹ (right) precipitation thresholds. b, Radially averaged power spectral density for full-frame predictions at T + 30 (left), T + 60 (middle), and T + 90 min (right). c, CRPS at grid-scale (left), rain rates averaged over a 4 km aggregation (middle), rain rates averaged over a 16 km aggregation (right). d, CRPS at grid scale (left), maximum rain rate over a 4 km aggregation (middle), and maximum rain rate over a 16 km aggregation (right). e, Relative economic value analysis using 20 samples for three 90 min rainfall accumulations, using 4 km aggregation. UNet generates a single deterministic prediction.
Extended Data Fig. 7 Verification scores for the United States in 2019.
a, CSI for 1 mm h⁻¹ (left), 4 mm h⁻¹, 8 mm h⁻¹ (right) precipitation thresholds. b, CRPS at grid-resolution (left), CRPS for rain rates averaged over a 4 km × 4 km area (middle), CRPS for rain rates averaged over a 16 km × 16 km area (right). c, CRPS at grid-resolution (left), maximum rain rate over a 4 km × 4 km area (middle), and maximum rain rate over a 16 km × 16 km area (right). d, Relative economic value analysis across 20 samples of different models for three 90 min rainfall accumulations, using 4 km aggregation. UNet generates a single deterministic prediction.
Extended Data Fig. 8 Radially averaged power spectral density for the United States in 2019.
a, Map of United States with three 1,536 × 1,536 regions: Pacific Northwest (left), Midwest (middle), Northeast (right). b, Radially averaged power spectral density for Pacific Northwest region for different models at T + 30 (left), T + 60 (middle), and T + 90 min (right). c, Radially averaged power spectral density for Midwest region for different models at T + 30 (left), T + 60 (middle) and T + 90 min (right). d, Radially averaged power spectral density for Northeast region for different models at T + 30 (left), T + 60 (middle), and T + 90 min (right). Map produced with Cartopy.
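The radially averaged power spectral density used in these panels can be computed generically as follows. This is a sketch in numpy, not the authors' code; the function name is ours and a square field with even side length is assumed:

```python
import numpy as np

def radially_averaged_psd(field):
    """Radially average the 2-D power spectrum of a square field."""
    n = field.shape[0]
    # 2-D power spectrum, shifted so the zero frequency sits in the centre
    psd2d = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    # integer radial wavenumber of every pixel, measured from the centre
    y, x = np.indices(psd2d.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    # mean power in each integer-wavenumber bin
    total = np.bincount(r.ravel(), weights=psd2d.ravel())
    count = np.bincount(r.ravel())
    return total / count

rng = np.random.default_rng(0)
spectrum = radially_averaged_psd(rng.standard_normal((64, 64)))
print(spectrum.shape)  # one value per integer radius bin
```

For white noise the resulting curve is roughly flat; for real radar fields it decays with wavenumber, which is what the panels compare across models.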
Extended Data Fig. 9 Further verification scores for the United States in 2019.
a, Comparison of relative economic value for 20 samples for different rain accumulations. UNet generates a single deterministic prediction. b, Effect of larger ensemble in increasing economic value. c, Pearson correlation coefficient of various models at grid-resolution (left), rain rates averaged over a 4 km aggregation (middle), and rain rates averaged over a 16 km aggregation (right). d, Reliability diagrams and sharpness plots for two precipitation thresholds for T + 60 min predictions. e, Rank histogram at T + 60 min.
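The CSI and CRPS values reported throughout these captions follow standard definitions; below is a minimal numpy sketch (function names are ours; the CRPS is the usual empirical-ensemble estimator E|X − y| − ½E|X − X′|):

```python
import numpy as np

def csi(forecast, observed, threshold):
    """Critical success index: hits / (hits + misses + false alarms)."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    return hits / (hits + misses + false_alarms)

def crps_ensemble(members, observed):
    """Empirical CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members X."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - observed))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

obs = np.array([0.0, 3.0, 9.0, 1.0])
fcst = np.array([0.5, 2.5, 8.5, 0.2])
print(csi(fcst, obs, 2.0))             # 2 hits, 0 misses, 0 false alarms -> 1.0
print(crps_ensemble([1.0, 3.0], 2.0))  # 1.0 - 0.5 = 0.5
```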
The Supplementary Information contains six sections: section A provides more details about the datasets used; section B gives more details of the generative model architecture; section C provides additional experiments mentioned in the methods; section D gives a more detailed description of the re-implemented baselines; section E provides context of the related work in nowcasting research; section F describes the precise definitions of the metrics used and their variants.
Source data
Source Data Fig. 2
Ravuri, S., Lenc, K., Willson, M. et al. Skilful precipitation nowcasting using deep generative models of radar. Nature 597, 672–677 (2021). https://doi.org/10.1038/s41586-021-03854-z
Received: 17 February 2021
Issue Date: 30 September 2021
Nature ISSN 1476-4687 (online) ISSN 0028-0836 (print)
Gender disparities in the education gradient in self-reported health across birth cohorts in China
Bowen Zhu & Yiwan Ye
Variation in the relationship between education and health has been studied intensively over the past few decades. Although gender disparities and cohort variations in the educational effect on health have been examined using samples from the U.S. and Europe, research on China is limited. Given the specific social changes in China, our study analyzes the gender and cohort patterns in the education-health gradient.
Latent growth-curve modeling was used to analyze gender and cohort variations in the education gradient in self-rated health among Chinese respondents. The study employed longitudinal and nationally representative data from the Chinese Family Panel Studies for the years 2010 to 2016. Each cohort is specified according to a distinct period of social change in China. We then used the latent growth-curve model to illustrate gender and cohort differences in the age-graded education and health trajectories.
Although Chinese men reported better health than women in general, women's self-reported health was 1.6 percentage points higher than men's for each additional year of schooling (P < 0.001). The latent growth curve model showed that women's extra education benefits persisted over time. Compared to people born in "Old China" (1908–1938), the education gradient in self-rated health did not change for cohorts born before 1955 or after 1983, but the education-health gap changed significantly in the 1956–1960 (O.R. = 1.038, P < 0.05), 1967–1976 (O.R. = 1.058, P < 0.001), and 1977–1983 (O.R. = 1.063, P < 0.001) cohorts. There was a gender difference in the cohort variations in the education-health gradient. For women, the education effect in the 1956–1960 (O.R. = 1.063, P < 0.05), 1967–1976 (O.R. = 1.088, P < 0.001) and 1977–1983 (O.R. = 1.102, P < 0.001) cohorts was significantly higher than that in the 1908–1938 cohort. In contrast, the education-health gradient remained the same across all cohorts for men.
Our study suggests that the education-health gradient varies across cohorts for women, but the size of the education effect remains consistent across cohorts for men. The findings support the resource-substitution hypothesis rather than the rising-importance hypothesis in China. We discuss the potential influences of China's unique social transformation and educational expansion.
Over the past few decades, the relationship between education and health, or the education-health gradient, has been studied intensely. Education was found to affect health directly and indirectly through increasing income and improving job outcomes [1, 2], enabling a healthy lifestyle and strong social support [3], supporting a sense of control in life, enhancing confidence in problem-solving, and strengthening the ability to cope with stress [4, 5]. Although education's positive effects on health are well studied, it is also important to analyze which population subgroups benefit more from it and which do not, as well as whether the education-health relationship changes across birth cohorts.
The gender gap in health attracts considerable attention from researchers [6,7,8,9,10,11,12]. Their studies show that men tend to report better health experiences and outcomes than women, despite women's higher life expectancy. Social factors, i.e., underlying social advantage or disadvantage, rather than biological factors are identified as primary explanations for the gender difference in health [13, 14]. Despite advances in gender equality over time, women remain socially and economically disadvantaged in comparison with men, and there are still substantial limits on women's access to health-related resources [15, 16]. Compared with men, women face restricted opportunities for paid employment, higher wages, fulfilling work, and authority in the workplace [10]. According to the 'resource substitution hypothesis' proposed by Ross and Mirowsky [9, 10], resource substitution exists when having multiple resources makes outcomes less dependent on the presence of any specific resource, which implies that education's influence on health is greater for persons with fewer alternative resources than for the more advantaged. Women's disadvantaged status means that they generally have fewer resources than men; according to the resource substitution hypothesis, women therefore depend more heavily on education to improve their health. The present paper examines whether the resource substitution hypothesis is supported in the Chinese context and whose health benefits more from education.
Social transformation also shapes the relationship between education and health. Throughout the twentieth century, significant social changes occurred in many countries around the world, especially in developing countries like China. Because dramatic social changes shape cohort differences, both the 'rising importance hypothesis' [17] and the 'diminishing health returns hypothesis' [18,19,20] have been tested. The rising importance hypothesis suggests that the positive education effect on health increases across birth cohorts, whereas the diminishing health returns hypothesis suggests that the education effect diminishes across birth cohorts. Subsequent findings on gender differences and cohort variations reveal that both gender- and cohort-specific social contexts shape the education gradient in health. However, these findings are based on the Western democratic context, using data from the U.S. and Europe [17,18,19,20,21,22]. Empirical work on gender and cohort effects in the education-health gradient is scarce in the Chinese context, especially after the educational expansion there. Our study expands this knowledge by examining whether similar gender and cohort patterns in the education-health gradient occur in an Eastern communist state.
Compared to the U.S. and Europe, the association between education and health-related resources was more complicated in China due to its drastic socio-political transformations, such as the civil war (1946–1950), the Great Famine (1959–1961), the Cultural Revolution (1966–1976), and the Economic Reform (1979–1989). In the U.S., the positive relationship between education and income has intensified over the years [23], and the growing income difference due to education further increased the health gaps in recent cohorts [21]. In Europe, participation in higher education increased from the 1960s onward; however, the massive growth in tertiary education was not accompanied by an equivalent growth in the labor market [18, 19]. Unlike their Western capitalist counterparts, China prior to the marketization reform in 1979 was a collective economy in which the government assigned salaries and occupations to individuals, making educational attainment irrelevant for job acquisition [24]. During the Reform and Opening-Up period, it was not uncommon for people without higher education to earn high wages, which explains the weakness of the education-income relationship. However, from 1992 to 2004 the wage returns to education rose steadily, then stagnated, and ultimately declined from 2004 to 2009 due to the educational expansion [25].
Not only was the gendered relationship between education and health-related resources in China idiosyncratic, but so were the cohort variations in education. From 1966 to 1976, the Cultural Revolution had a devastating influence on education, especially higher education, as demonstrated by the shutdown of the college entrance examination system during that period [26]. According to the Chinese Educational Statistical Report, there were only 0.85 million college graduates in China in 1999, but four years later the number of graduates rose to 1.88 million; by 2017, there were around 7.36 million college graduates in China [27]. The likelihood of female high school students getting into college had been reported to be similar to that of male high school students [28]. However, the significant educational advancement in China resulted in a market devaluation of educational credentials, and the influx of college credentials also made the labor market more competitive [29]. The gender gap in education was also affected. It was not until the implementation of the higher-education expansion policy in 1999 that the opportunity to receive a college education increased and the gender gap in educational attainment narrowed [30]. However, the gender gap in receiving formal education, such as junior high school, persists [28]. Additionally, urban female residents received more benefits from the education expansion, and gender inequality in education even increased in rural areas [31]. These social upheavals and policy changes in China occurred at different time points, and therefore their influences may vary across birth cohorts who came of age in different historical periods. This study aimed to examine the education-health gradient from both gender and cohort perspectives in the Chinese context.
Data from the Chinese Family Panel Studies (CFPS) are used to answer the following research questions: (1) Is the education benefit in health larger for men or for women, i.e., is the 'resource substitution hypothesis' supported in China? (2) Is there any inter-cohort variation in the association between education and health, i.e., is the 'rising importance hypothesis' or the 'diminishing health returns hypothesis' supported in China? (3) Is there any gender difference in the inter-cohort variations in the education-health gradient?
Our analytic strategy includes two steps. First, we present descriptive statistics of our analytic sample to illustrate the characteristics of our main variables of interest, namely self-rated health, years of education, gender, and other key covariates. Second, we use the latent growth curve model to test the aforementioned hypotheses. This model allows us to examine the statistical significance of the education effect on health across gender and cohort groups while controlling for covariates, within-individual change, and other unaccounted random errors.
We used four waves of Chinese Family Panel Studies data (2010, 2012, 2014, and 2016) from a nationally representative survey of adults aged 18 years and older. The surveys were administered by the Institute of Social Science Survey at Peking University and were designed to study historical change in society, economy, population, education, and health in China. The survey used computer-assisted personal interviewing to collect panel data at the individual, household, and community levels. Respondent selection was guided by implicit stratification and multi-stage probability-proportional-to-size (PPS) sampling. The unit in the first sampling stage was a county, the unit in the second stage was a neighborhood committee, and the unit in the final stage was a family household. In the first two sampling stages, official administrative division data were used to select the counties and neighborhoods, and the households were sampled using the cyclic isometric method with random starting points. The panel design provides an opportunity for cohort analysis of social and economic change over time. We identified the sample of adults aged 22 years or older in 2010 as the baseline cohort; the working sample consists of 23,706 individuals in 2010, 26,094 in 2012, 25,724 in 2014, and 25,084 in 2016. In total, there were 27,580 unique respondents and 100,608 observations between 2010 and 2016. The percentage of individuals surveyed four to six times is 65.89%, and the percentage surveyed only three times is 34.11%.
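The two sampling devices mentioned above, PPS selection of primary units and cyclic isometric (systematic) selection of households, can be sketched in a few lines. All sizes and counts below are invented for illustration and are not CFPS figures:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stage 1 (sketch): draw counties with probability proportional to size (PPS).
# County population sizes here are invented.
sizes = np.array([120_000, 45_000, 300_000, 80_000, 55_000])
counties = rng.choice(len(sizes), size=2, replace=False, p=sizes / sizes.sum())

# Final stage (sketch): systematic ("cyclic isometric") sampling of households
# with a random starting point and a fixed interval k.
n_households, n_sample = 1000, 25
k = n_households // n_sample
start = rng.integers(k)
households = start + k * np.arange(n_sample)

print(len(counties), len(households))  # 2 25
```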
The self-reported health variable measures respondents' subjective assessment of their health. Respondents were asked, 'How good is your health in general?' The Likert scale in 2010 includes the response options 'very bad,' 'bad,' 'a little bad,' 'fair,' and 'good.' We coded the first four of these items as '0' and 'good' health as '1.' The Likert scale from 2012 to 2016 includes the response options 'bad,' 'fair,' 'little good,' 'good,' and 'very good.' The first two items are coded as '0,' and the last three are coded as '1.' The self-reported health measure is regarded as valid and reliable, as it encompasses the subjective experience of fatal and nonfatal diseases and the general feeling of well-being [3, 32]. Self-reported health is highly correlated with objective measures of health, such as mortality, morbidity, or diagnosis from a clinical exam. It is a salient predictor of morbidity and mortality [33], and an even stronger predictor of physical health, mortality, and chronic diseases [14].
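The wave-specific recoding of the two Likert scales described above can be sketched as follows (a plain-Python illustration; the variable names are ours):

```python
# Response options for the 2010 wave and the 2012-2016 waves, in order.
scale_2010 = ['very bad', 'bad', 'a little bad', 'fair', 'good']
scale_2012 = ['bad', 'fair', 'little good', 'good', 'very good']

# 2010 wave: only 'good' counts as good health (1); everything else is 0.
map_2010 = {ans: int(ans == 'good') for ans in scale_2010}
# 2012-2016 waves: the top three answers count as good health.
map_2012 = {ans: int(ans in scale_2012[2:]) for ans in scale_2012}

responses = ['fair', 'good', 'bad', 'a little bad']
recoded = [map_2010[r] for r in responses]
print(recoded)  # [0, 1, 0, 0]
```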
Education was measured by asking, 'What is the highest degree you have completed?' Respondents could select one of the following answer categories: 'not received education,' 'primary school,' 'junior high school/professional high school,' 'senior high school,' 'junior college,' 'college,' and 'undergraduate.' Individuals who were still attending school were asked which year of schooling they were attending at the time of the survey. The answers ranged from 0 to 22 years. Given that adults aged 24 years or younger may not have completed their educational careers by the time they were surveyed, we selected respondents aged 22 years or older in an attempt to avoid assessing the effects of education on health prematurely [21, 34].
The cohort variable was constructed by asking, 'Which year were you born?' Based on the birth year, we constructed eight birth cohorts defined by historical periods of social change in China during respondents' formative years, beginning at age 10. The cohorts include the Children of Old China, who were born before 1939; the Children of New China (1939–1946); the 'Lost' Generation (1947–1955); the Children of Early Cultural Revolution (1956–1960); the Children of Late Cultural Revolution (1961–1966); the Children of Economic Reform (1967–1976); the Children of Early Opening-Ups (1977–1983); and the Children of Late Opening-Ups (1984–1994) [35].
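A minimal sketch of this birth-year-to-cohort mapping (the function name is ours; the cutoffs follow the list above):

```python
def birth_cohort(year):
    """Map a birth year to one of the eight historically defined cohorts."""
    bins = [
        (1938, 'Children of Old China'),
        (1946, 'Children of New China'),
        (1955, "'Lost' Generation"),
        (1960, 'Children of Early Cultural Revolution'),
        (1966, 'Children of Late Cultural Revolution'),
        (1976, 'Children of Economic Reform'),
        (1983, 'Children of Early Opening-Ups'),
        (1994, 'Children of Late Opening-Ups'),
    ]
    for upper, label in bins:
        if year <= upper:
            return label
    raise ValueError(f'birth year {year} falls outside the sampled cohorts')

print(birth_cohort(1958))  # Children of Early Cultural Revolution
print(birth_cohort(1980))  # Children of Early Opening-Ups
```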
Additionally, employment status, family income, and frequency of physical exercise are important factors affecting health [36, 37], so we used them as control variables. For employment status, we used the 'not employed' answer as the reference category. We used average annual family income to measure economic background; the family income variable consists of operating income, wages, transfer income, property income, and other income. The frequency of physical exercise was measured by asking, 'How often did you exercise in the last month?' The answer options ranged over 'never,' 'one time a month,' 'two or three times a month,' 'two or three times a week,' and 'almost every day,' coded 0 to 4, respectively. Table 1 shows the descriptive statistics of the variables.
Table 1 Descriptive Statistics of Variables in the Analytic Sample
In this study, we estimate how the education effect changes within individuals (over survey waves) and between individuals (between men and women, and across birth cohorts). We used a latent growth curve model (LGM) rather than a regular hierarchical model because a multilevel model cannot map the age sequence of self-rated health for short-term panel data; a hierarchical model treats survey waves as period effects instead of continuous age-graded trajectories [38]. Since the CFPS is a longitudinal dataset, the latent growth curve model is appropriate for studying both between-person and within-person changes. In our case, the LGM estimates cohort variations in health across age groups as well as within-individual age trajectories over the four survey waves [39, 40].
Like other structural equation models, the latent growth curve model has two main components: the fixed effects (i.e., the coefficients) and the random variation, which estimates the amount of health variation that is unexplained. For the LGM, the fixed component is further divided into two parts. The first part is the intercept component, which estimates between-individual differences in health across cohorts, gender, and age groups. The second part is the slope or growth-rate component, which estimates within-individual health trajectories over time, i.e., across survey waves. In practice, the LGM treats individual respondents as groups and survey waves as the age-sequence vectors. The LGM first conducts a series of regressions for all 27,580 individuals, where each individual has a regression line based on at least three age-sequence observations, and each individual regression line has two parameters: an intercept and a slope. The LGM then uses those intercepts and slopes to estimate the parameters for the entire sample.
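The two-step intuition, fitting one intercept-and-slope line per individual and then pooling the individual parameters, can be illustrated on simulated continuous data. The actual model is a logistic multilevel model estimated in HLM, so this numpy sketch only conveys the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 5 individuals observed at waves t = 0, 2, 4, 6 (years since 2010),
# each with their own intercept, a common slope of 0.3, and a little noise.
t = np.array([0, 2, 4, 6])
true_slope = 0.3
y = rng.normal(size=(5, 1)) + true_slope * t + rng.normal(scale=0.1, size=(5, 4))

# Step 1: one regression line (intercept + slope) per individual.
fits = [np.polyfit(t, yi, deg=1) for yi in y]   # each fit is (slope, intercept)
slopes = np.array([f[0] for f in fits])
intercepts = np.array([f[1] for f in fits])

# Step 2: pool the individual parameters into sample-level estimates.
print(round(slopes.mean(), 2), round(intercepts.mean(), 2))
```

With little noise, the pooled slope estimate recovers the common growth rate of 0.3.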
Since self-reported health is a dichotomous variable in our study, the functional form of our linear growth curve model is a logistic regression, which estimates the probability of being in good health, adjusting for within-person differences at the slope level (level 1) and for between-person differences at the intercept level (level 2). This model estimates the gender differences in both cohort variations and age trajectories in the association between education and health over the life course. We formulate a series of linear growth curve models using the HLM 7.1 software. The full model (Model 4), which controls for all interaction terms and covariates, is described as follows:
The level-1 model characterizes within-individual change across survey waves, controlling for a series of level-1 variables.
$$ \log\left[\varphi_{t,i}/\left(1-\varphi_{t,i}\right)\right]=\eta_{t,i}=\pi_{0,i}+\pi_{1,i}\left(T_{t,i}\right) $$
where φt,i is the probability that individual i reports good health at time t (i.e., survey wave), π0,i is the intercept component, and π1,i is the linear growth rate or slope component. Tt,i denotes the difference between the current survey year and the reference survey year (i.e., 2010).
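As a numerical illustration of the level-1 equation (all values below are invented), inverting the logit turns the linear predictor ηt,i into a probability:

```python
import math

def prob_good_health(pi0, pi1, t):
    """Level-1 model: eta = pi0 + pi1 * t, then invert the logit link."""
    eta = pi0 + pi1 * t
    return 1.0 / (1.0 + math.exp(-eta))

# e.g. an individual with baseline log-odds 0.4 that decline by 0.05 per year,
# evaluated six years after the 2010 reference wave
print(round(prob_good_health(0.4, -0.05, 6), 3))  # 0.525
```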
The level-2 model estimates the between-individual change in health with age and assesses whether there are gender and cohort patterns in the age trajectory of the association between education and health. Level-2 consists of an intercept equation that measures fixed effects for all individuals and a slope equation (i.e., linear growth rate) that measures changes in the fixed effects over time.
The intercept (π_{0,i}) equation is expressed as follows:
$$ \pi_{0,i}=\beta_{0,0}+\beta_{0,1}\mathit{Age}_i+\beta_{0,2}\mathit{Age\_Sq}_i+\beta_{0,3}\mathit{Male}_i+\beta_{0,4}\mathit{Edu}_i+\beta_{0,5}\left(\mathit{Male}_i\ast \mathit{Edu}_i\right)+\beta_{0,j}\sum_{j=6}^{12}\mathit{Cohort}_i+\beta_{0,k}\sum_{k=13}^{19}\left(\mathit{Cohort}_i\ast \mathit{Edu}_i\right)+\beta_{0,20}\mathit{Income}_i+\beta_{0,21}\mathit{Employed}_i+\beta_{0,22}\mathit{Exercise}_i+\gamma_{0i} $$
where β_{0,0} denotes the overall probability of reporting good health across all individuals and survey waves. β_{0,1} to β_{0,19} are the fixed coefficients, including the main effects of age, age squared, gender, education, and cohort, and the interaction effects between gender and education and between cohort and education. β_{0,20} to β_{0,22} denote the coefficients for the level-2 covariates: family income, employment status, and frequency of physical exercise. γ_{0i} denotes the variance component of the fixed intercept equation.
The linear growth rate (π_{1,i}) equation is expressed as follows:
$$ \pi_{1,i}=\beta_{1,0}+\beta_{1,1}\mathit{Age}_i+\beta_{1,2}\mathit{Male}_i+\beta_{1,3}\mathit{Edu}_i+\beta_{1,4}\left(\mathit{Male}_i\ast \mathit{Edu}_i\right)+\beta_{1,j}\sum_{j=5}^{11}\mathit{Cohort}_i+\beta_{1,k}\sum_{k=12}^{18}\left(\mathit{Cohort}_i\ast \mathit{Edu}_i\right)+\beta_{1,19}\mathit{Income}_i+\beta_{1,20}\mathit{Employed}_i+\beta_{1,21}\mathit{Exercise}_i $$
The slope component includes all of the corresponding variables from the intercept component, except for Age_Sq_i, because we assume the rate of change in the age effect is the same across all survey waves. We also do not control for a random effect in the slope equation, because we lack the statistical power and we assume the slope coefficients are fixed for all respondents. Lastly, we examine the full model separately for each gender, allowing us to compare the significance and general direction of the association between the female-only model (Model 6) and the male-only model (Model 7).
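Substituting the level-2 equations into the level-1 equation yields each respondent's predicted trajectory. The sketch below composes the two levels; the covariate list and every coefficient value are made up for illustration and do not come from the fitted models.

```python
import math

def linear_combination(coefs, values):
    """A level-2 equation: sum of coefficient * covariate pairs."""
    return sum(b * x for b, x in zip(coefs, values))

# Hypothetical covariates for one respondent (not CFPS estimates).
intercept_covariates = [1.0, 45.0, 45.0**2, 1.0, 12.0]  # const, age, age^2, male, edu
intercept_coefs = [0.30, 0.010, -0.0001, 0.05, 0.02]
slope_covariates = [1.0, 45.0, 1.0, 12.0]               # const, age, male, edu
slope_coefs = [-0.05, 0.0005, 0.01, 0.001]

pi0 = linear_combination(intercept_coefs, intercept_covariates)
pi1 = linear_combination(slope_coefs, slope_covariates)

# Level-1: probability of good health at waves T = 0, 2, 4.
for t in (0, 2, 4):
    eta = pi0 + pi1 * t
    print(t, round(1.0 / (1.0 + math.exp(-eta)), 3))
```

The cohort dummies, interaction terms, and the random intercept γ_{0i} are omitted here only to keep the sketch short; they enter the linear combinations in exactly the same way.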
Gender difference: results for the resource substitution hypothesis
Table 2 shows the descriptive statistics for all the variables in the analytic models. As expected, men on average report better health status, higher educational attainment, higher income, and higher physical exercise rates than their female counterparts. The sample size for each cohort by gender is shown in Table 3.
Table 2 Descriptive Statistics of Variables in the Analysis by Gender
Table 3 Sample Size for Each Cohort by Gender (Observations)
In Table 4, we present the results from a series of age vector models. Based on the odds ratio for males in Model 2, men reported better health than women. This model tests the resource substitution hypothesis, which supposes that there is a substantial educational difference in the slope of gender (Model 1). For male respondents, a one-year increase in education yields a change in log odds of 0.07, or an odds ratio of 1.073. Thus, a female with one additional year of education is on average 1.012 times more likely than a male with one additional year of education to report being healthy in the survey, holding all else constant. In other words, the association between education and health is weaker among men than among women.
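The odds ratios in Table 4 are obtained from the log-odds coefficients by exponentiation; a quick arithmetic check of the 0.07 coefficient quoted above (variable names are ours):

```python
import math

log_odds_change = 0.07            # coefficient for one extra year of education
odds_ratio = math.exp(log_odds_change)
print(round(odds_ratio, 3))       # prints 1.073
```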
Table 4 Coefficients and Odds Ratios of Linear Growth Curve Models for Education, Gender and Cohort Effects on Self-Rated Health
Cohort variations in education and health: results for the rising importance hypothesis
In order to understand the cohort variations in self-reported health, we added the cohort variables in Model 3. The odds ratios for each cohort in Model 3 show that the younger cohorts report better health than the oldest cohort, but only the health of the Late Cultural Revolution (1961–1966) cohort is significantly different from that of the oldest cohort.
To test the rising importance hypothesis, which supposes that the association between education and health becomes stronger over time for a given population, we included interaction effects between education and cohort in Model 4. We found that the association between education and health did not change significantly for cohorts before 1955, and the association became stronger for younger cohorts. For the 1956–1960 cohort, a one-unit increase in education increased the odds of reporting good health by 3.8% (P < 0.05). The odds ratio for the 1908–1938 cohort was 1.058 (P < 0.001). The odds ratio of a one-unit education gain for cohort 1956–1960 over the odds ratio of a one-unit education gain for the 1908–1938 cohort was 1.063 (P < 0.001). In other words, respondents from cohort 1956–1960 with one additional year of education were on average 1.038 times more likely than respondents from cohort 1908–1938 with one additional year of education to report healthy in the survey holding all else constant. With one additional year of education, respondents from the 1967–1976 cohort and the 1977–1983 cohort were on average 1.058 times and 1.063 times respectively more likely to report as healthy compared with respondents from the 1908–1938 cohort. We also noted that the rising trend disappeared in the youngest cohort.
Gender difference in education and health across cohorts
We also established full models separately for men and women. Model 4 showed that the odds ratios of the interaction between education and cohort were significantly greater than 1 among women for the 1956–1960, 1967–1976 and 1977–1983 cohorts. The results in Model 5 suggested that education's positive effects on health increased for the 1956–1960, 1967–1976 and 1977–1983 cohorts. Female respondents from the 1956–1960, 1967–1976 and 1977–1983 cohorts with one additional year of education were on average 1.063 times, 1.088 times, and 1.102 times more likely than respondents from the 1908–1938 cohort (P < 0.05) to report better health, respectively. The pattern in the relationship between education and health for the female subsample was the same as that of the full sample. As with the rising importance hypothesis, the rising trend also disappeared in the youngest cohort. In Model 6, education was not positively related to good health for men (P > 0.1). The interaction effects between education and cohort were not statistically significant. In other words, the education effects were the same across all eight cohorts among men, but there was a gender difference in education and health across cohorts. The gender difference became statistically significant in the slope model, but it was not statistically significant in the intercept model.
This study tests the gender disparity in the education-health relationship across cohorts in China and assesses the resource substitution hypothesis and rising importance hypothesis in the Chinese context. Results reveal that the education effect on health is stronger among women than among men, which means the resource substitution hypothesis is supported in the Chinese context, and the result is consistent with related U.S. studies. However, compared to previous findings about the rising importance hypothesis from the United States, there are two notable differences in the Chinese context. First, the effect of education on health has not increased from the oldest cohort to the youngest cohort, and the gaps in health remained stable for some cohorts. Second, for the rising importance hypothesis, there is a gender difference in the educational effects on health across cohorts in the Chinese context.
We think potential explanations for the U.S.-China differences lie in the role of sociocultural and policy change. There are two crucial reasons that can explain the rising importance phenomenon in the U.S. The first is that the relationship between education and income has intensified over the years [23], and a similar intensification has occurred in the relationship between education and health-related behaviors [8]. Another reason is that educational expansion resulted in increased adverse selection of lower-educated individuals, which expanded the inequality in health-related resources between highly educated and uneducated individuals [41]. In China, however, the association between education and health-related resources may be one of the reasons for the complex trend across cohorts. Eating dinner and drinking wine with friends, colleagues, and superiors is a crucial way to maintain a social network and to obtain resources in Chinese society, but it can have a negative effect on health outcomes [42].
Before the Reform and Opening-Up, individuals had limited access to food and clothing under the socialist economy. After decades of Opening-Up, more people began to enjoy abundant material prosperity [43]. Individuals with higher education and more purchasing power were more likely to participate in social gatherings, indulge in unhealthy diets, and consume more alcohol [40]. Furthermore, these behaviors are empirically found to be more pronounced among men than women [42], which could possibly be supported by a strong positive relationship between education and drinking in the male sample in our study.
As for the cohort patterns, we argued that the drastic educational landscape changes in China affect the relationship between education and health-related resources. The rapid increase in social and economic benefits from education for the 1967–1976 cohort may be due to the recovery of the college entrance exam in 1977. Although there were less than 1.4 million students who graduated from college prior to 2003, credentials became more critical for finding jobs after the Opening-Up and Reform periods in 1978. Members of the 1967–1976 and 1977–1983 cohorts who graduated from college could acquire well-paying jobs compared with cohort members who did not receive a college degree. However, with educational expansion starting in 1999 in China, the number of people receiving a college education had been increasing rapidly. According to the Chinese Educational Statistics Report (2018), the number of people receiving a college education rose from 1.08 million in 1998 to 7.62 million in 2017 [28]. Compared with older cohorts, the children of Later Opening-Ups (1984–1994) could receive an education more easily, but the valuation of the same academic degree had decreased. Educational expansion resulted in the phenomenon of over-education, which lessens the income benefits from educational attainment due to the mass of individuals who have similar credentials vying for similar, limited opportunities. Members of the youngest cohort faced more competition in the labor market than the members of the Early Opening-Ups, for whom the adverse effects from educational expansion had not caught up. Educational expansion weakens the role of education in improving access to both socioeconomic and health-related resources [30, 44]. The rate of returns for education even declined in 2009 [25]. As a result, the rising trend disappeared for the 1984–1994 cohort.
The gender difference in education and health across cohorts can be attributed to women experiencing greater difficulty with gaining access to educational and health-related resources. Results show that the links between education and health are stronger among women than among men in China, which support the 'resource substitution hypothesis' from the perspective of cohorts. Women have been in a socially disadvantaged position for decades [45], so they have fewer resources to rely on. On the contrary, men have more resources presently and historically, so education is less important for men than for women. Another possible reason for this gender disparity is females' disadvantages in the labor market. Due to governmental deregulation of the market, enterprises begin to employ more men than women, and discrimination against women in the labor market has increased since the "reform and open-up" [46, 47]. Moreover, education is more important for women in securing jobs than for men. Hence, education is more important for women than for men across cohorts.
The strengths of this research are revealing the gender difference in the education-health pattern, investigating the gender difference in the education-health patterns across cohorts, especially for the recent cohorts, and identifying differences between the U.S. and China. However, the research also has limitations. The observation period spans only 6 years. Thus, the age overlaps between cohorts – the points at which actual cohort effects can be identified – are small. Hence, an age vector graph can be useful to identify any age-trajectory differences in cohort effects.
Although self-reported health is a valid measure of health status [32], it is susceptible to effects of individual characteristics and cultural contexts. A study found that people with higher education and income tend to report their health optimistically in China [48]. Given its vulnerability to individual and social influences, the association between education and self-reported health may be receiving too much attention in the literature. The vignette method could be incorporated into future studies to provide more qualitative information if related items are included in the design of data collection instruments. The relationship between education and health is complex and potentially reciprocal: people with higher education can improve their health, but people may also drop out of high school due to severe health problems. The primary goal of our study is to illustrate the demographic and cohort patterns in the education-health association. As such, we cannot establish a causal relationship between education and health; however, different methods and data may allow for causal inferences in future studies.
In conclusion, the results show that the education-health relationship is stronger among women than among men, and there is a gender difference in the education-health patterns across cohorts. This pattern suggests that broad contextual factors such as gender and cohort can significantly shape education-health patterns in China. Our findings show that the gender difference in the association between education and health is significant, and China's unique history of educational and health development and cohort-specific formative experiences may have also influenced these education-health patterns. Future studies may consider different theoretical frameworks, such as social transformation theory, to explain the gender disparity in educational benefits across cohorts.
The datasets generated and /or analyzed during the current study are available in the Chinese Family Panel Survey, http://www.isss.pku.edu.cn/cfps/index.htm.
CFPS: Chinese Family Panel Survey
LGM: Linear Growth Curve Modeling
Andersen RM. Revisiting the behavioral model and access to medical care: does it matter? J Health Soc Behav. 1995;36:1–10.
Landerman LR, Burns BJ, Swartz MS, Wagner HR, George LK. The relationship between insurance coverage and psychiatric disorder in predicting use of mental health services. Am J Psychiatry. 1994;151:1785.
Ross CE, Wu CL. The links between education and health. Am Sociol Rev. 1995;60:719–45.
Lin N, Ensel WM. Life stress and health: stressors and resources. Am Sociol Rev. 1989;54:382–99.
Ross CE, Mirowsky J. Why education is the key to socioeconomic differentials in health. Handb Med Sociol. 2010;6:33–51.
Chen F, Yang Y, Liu G. Social change and socioeconomic disparities in health over the life course in China: a cohort analysis. Am Sociol Rev. 2010;75:126–50.
Johnson RJ, Wolinsky FD. Gender, race, and health: the structure of health status among older adults. Gerontologist. 1994;34:24–35.
Leopold L, Leopold T. Education and health across lives and cohorts: a study of cumulative (dis) advantage and its rising importance in Germany. J Health Soc Behav. 2018;59:94–112.
Ross CE, Mirowsky J. Sex differences in the effect of education on depression: resource multiplication or resource substitution? Soc Sci Med. 2006;63:1400–13.
Ross CE, Mirowsky J. Gender and the health benefits of education. Sociol Q. 2010;51:1–19.
Williams DR, Collins C. US socioeconomic and racial differences in health: patterns and explanations. Annu Rev Sociol. 1995;21:349–86.
Zheng L, Zheng X. The cohort variations of education related health gradients in China: analysis based on growth curve model. Popul Econ. 2018;227:69–79.
Braveman P. Health disparities and health equity: concepts and measurement. Annu Rev Public Health. 2006;27:167–94.
Schoenfeld DE, Malmrose LC, Blazer DG, Gold DT, Seeman TE. Self-rated health and mortality in the high-functioning elderly–a closer look at healthy individuals: MacArthur field study of successful aging. J Gerontol. 1994;49:M109–15.
Rose SJ, Hartmann HI. Still a man's labor market: the long-term earnings gap: Institute for Women's policy research; 2004.
Ross CE, Bird CE. Sex stratification and health lifestyle: consequences for men's and women's perceived health. J Health Soc Behav. 1994;35:161–78.
Mirowsky J, Ross CE. Education and self-rated health: cumulative advantage and its rising importance. Res Aging. 2008;30:93–122.
Bracke P, Pattyn E, von dem Knesebeck O. Overeducation and depressive symptoms: diminishing mental health returns to education. Sociol Health Illn. 2013;35:1242–59.
Bracke P, Van De Straat V, Missinne S. Education, mental health, and education-labor market misfit. J Health Soc Behav. 2014;55:442–59.
Delaruelle K, Buffel V, Bracke P. Educational expansion and the education gradient in health: a hierarchical age-period-cohort analysis. Soc Sci Med. 2015;145:79–88.
Goesling B. The rising significance of education for health? Soc Forces. 2007;85:1621–44.
Lynch SM. Cohort and life-course patterns in the relationship between education and health: a hierarchical approach. Demography. 2003;40:309–31.
Hout M. Social and economic returns to college education in the United States. Annu Rev Sociol. 2012;38:379–400.
Davis-Friedmann D. Intergenerational inequalities and the Chinese revolution: the importance of age-specific inequalities for the creation and maintenance of social strata within a state-socialist society. Modern China. 1985;11:177–201.
Ding X, Suhong Y, Ha W. Trends in the Mincerian rates of return to education in urban China: 1989–2009. Front Educ China. 2013;8:378–97.
Deng Z, Treiman DJ. The impact of the cultural revolution on trends in educational attainment in the People's Republic of China. Am J Sociol. 1997;103:391–428.
Ministry of Education of the People's Republic of China. Educational statistics yearbook of China. Beijing: China Statistics Press; 2018.
Li C. The changing trend of educational inequality in China (1940-2010): reexamining the urban-rural gap on educational opportunity. Soc Stud. 2014;2:65–89.
Mok KH, Wu AM. Higher education, changing labor market and social mobility in the era of massification in China. J Educ Work. 2016;29:77–97.
Zhang Z, Chen Q. College expansion and gender equalization in higher education: an Emprical study based on 2008 Chinese general social survey. Soc Stud. 2013;2:173–96.
Li C. Expansion of higher education and inequality in opportunity of education: a study on effect of 'Kuozhao' policy on equalization of educational attainment. Soc Stud. 2010;3:82–113.
Mossey JM, Shapiro E. Self-rated health: a predictor of mortality among the elderly. Am J Public Health. 1982;72:800–8.
Ferraro KF, Farmer MM, Wybraniec JA. Health trajectories: long-term dynamics among black and white adults. J Health Soc Behav. 1997;38:38–54.
Willson AE, Shuey KM, Elder J, Glen H. Cumulative advantage processes as mechanisms of inequality in life course health. Am J Sociol. 2007;112:1886–924.
Shu X, Zhu Y. Uneven transitions: period-and cohort-related changes in gender attitudes in China, 1995–2007. Soc Sci Res. 2012;41:1100–15.
Ecob R, Smith GD. Income and health: what is the nature of the relationship? Soc Sci Med. 1999;48:693.
Umberson D, Crosnoe R, Reczek C. Social relationships and health behavior across the life course. Annu Rev Sociol. 2010;36:139–57.
Curran PJ, Obeidat K, Losardo D. Twelve frequently asked questions about growth curve modeling. J Cognit Dev. 2010;11:121–36.
Raudenbush SW, Bryk AS. Hierarchical Linear Models: Applications and Data Analysis Methods. 2nd Ed. SAGE Publications, Inc; 2001;181–179. https://us.sagepub.com/en-us/nam/hierarchical-linear-models/book9230.
Mirowsky J, Kim J. Graphing age trajectories: vector graphs, synthetic and virtual cohort projections, and cross-sectional profiles of depression. Sociol Methods Res. 2007;35:497–541.
Haas SA. Health selection and the process of social stratification: the effect of childhood health on socioeconomic attainment. J Health Soc Behav. 2006;47:339–54.
Chen Y, Bian Y. Analyzing the corrosive and differential roles of social eating in political trust: the side effects of Guanxi capital. Chin J Sociol. 2015;35:92–120.
Lu B. A new stage of the nutrition transition in China. Nutr Transit. 2002;5:169–74.
Knight J, Deng Q, Li S. China's expansion of higher education: the labour market consequences of a supply shock. China Econ Rev. 2016;43:127–41.
Shu X. Market transition and gender segregation in urban China. Soc Sci Q. 2005;86:1299–323.
Zhang Y, Hannum E, Wang M. Gender-based employment and income differences in urban China: considering the contributions of marriage and parenthood. Soc Forces. 2008;86:1529–60.
Shu X, Bian Y. Market transition and gender gap in earnings in urban China. Soc Forces. 2003;86:1107–45.
Qi Y. Reliability and validity of self-rated general health. Chin J Sociol. 2014;34:196–215.
We would like to thank the anonymous reviewers for their helpful comments and suggestions on the manuscript.
Funding for this research was provided by the China Scholarship Council to the first author. The research is also supported by Small Research Grant from the Department of Sociology at UC Davis to the second author. We want to thank the Institute of Social Science Survey of Peking University for sharing the data.
School of Public Administration, Hunan Normal University, Lushan Road 36, Changsha, 410081, Hunan, China
Bowen Zhu
Department of Sociology, University of California, Davis, 286 Social Science & Humanities Building, Davis, 95616, USA
Yiwan Ye
ZBW provided the background information of the study; documented, analyzed and interpreted the data; discussed the results of the study; and was the major contributor in writing the manuscript. YYW analyzed, interpreted the data and provided English editing. Both authors read and approved the final manuscript.
Correspondence to Bowen Zhu.
Both authors declare that they have no competing interests.
Zhu, B., Ye, Y. Gender disparities in the education gradient in self-reported health across birth cohorts in China. BMC Public Health 20, 375 (2020). https://doi.org/10.1186/s12889-020-08520-z
Received: 15 October 2019
Education and health gradient
Cohort effect
Latent growth-curve model | CommonCrawl |
Fixed Point Theory and Applications
Multivariate fixed point theorems for contractions and nonexpansive mappings with applications
Yongfu Su1,
Adrian Petruşel2 and
Jen-Chih Yao3
Fixed Point Theory and Applications 2016, 2016:9
© Su et al. 2016
Accepted: 21 December 2015
The first purpose of this paper is to prove an existence and uniqueness result for the multivariate fixed point of a contraction type mapping in complete metric spaces. The proof is based on the new idea of introducing a convenient metric space and an appropriate mapping. This method leads to the changing of the non-self-mapping setting to the self-mapping one. Then the main result of the paper will be applied to an initial-value problem related to a class of differential equations of first order. The second aim of this paper is to prove strong and weak convergence theorems for the multivariate fixed point of an N-variable nonexpansive mapping. The results of this paper improve several important works published recently in the literature.
contraction mapping principle
complete metric spaces
multivariate fixed point
multiply metric function
multivariate mapping
strong and weak convergence
Banach's contraction principle is one of the most powerful tools in applied nonlinear analysis. Weak contractions (also called ϕ-contractions) are generalizations of Banach contraction mappings which have been studied by several authors. Let T be a self-map of a metric space \((X, d)\) and \(\phi: [0,+\infty)\rightarrow[0,+\infty)\) be a function. We say that T is a ϕ-contraction if
$$d(Tx,Ty)\leq\phi\bigl(d(x,y)\bigr), \quad \forall x,y\in X. $$
In 1968, Browder [1] proved that if ϕ is non-decreasing and right continuous and \((X,d)\) is complete, then T has a unique fixed point \(x^{*}\) and \(\lim_{n\rightarrow\infty}T^{n}x_{0}=x^{*}\) for any given \(x_{0} \in X\). Subsequently, this result was extended in 1969 by Boyd and Wong [2] by weakening the hypothesis on ϕ, in the sense that it is sufficient to assume that ϕ is right upper semi-continuous (not necessarily monotone). For a comprehensive study of the relations between several such contraction type conditions, see [3–6].
On the other hand, in 2015, Su and Yao [7] proved the following generalized contraction mapping principle.
Theorem SY
Let \((X,d)\) be a complete metric space. Let \(T:X\rightarrow X\) be a mapping such that
$$ \psi\bigl(d(Tx,Ty)\bigr)\leq\phi\bigl(d(x,y)\bigr), \quad\forall x, y \in X, $$
where \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) are two functions satisfying the conditions:
$$\begin{aligned}& (1) \quad \psi(a)\leq\phi(b) \quad\Rightarrow\quad a \leq b; \\& (2)\quad \textstyle\begin{cases} \psi(a_{n})\leq\phi(b_{n}) \\ a_{n}\rightarrow\varepsilon,\qquad b_{n}\rightarrow\varepsilon \end{cases}\displaystyle \quad\Rightarrow\quad\varepsilon=0. \end{aligned}$$
Then T has a unique fixed point and, for any given \(x_{0} \in X\), the iterative sequence \(T^{n}x_{0}\) converges to this fixed point.
In particular, the study of the fixed points for weak contractions and generalized contractions was extended to partially ordered metric spaces in [8–18]. Among them, some results involve altering distance functions. Such functions were introduced by Khan et al. in [19], where some fixed point theorems are presented.
The first purpose of this paper is to prove an existence and uniqueness result for the multivariate fixed point of contraction type mappings in complete metric spaces. The proof is based on the new idea of introducing a convenient metric space and an appropriate mapping. This ingenious method leads to the changing of the non-self-mapping setting to the self-mapping one. Then the main result of the paper will be applied to an initial-value problem for a class of differential equations of first order. The second aim of this paper is to prove strong and weak convergence theorems for the multivariate fixed point of N-variable nonexpansive mappings. The results of this paper improve several important results recently published in the literature.
2 Contraction principle for multivariate mappings
We will start with some concepts and results which are useful in our approach.
Definition 2.1
A multiply metric function \(\triangle(a_{1},a_{2},\ldots,a_{N})\) is a continuous N variable non-negative real function with the domain
$$\bigl\{ (a_{1},a_{2},\ldots,a_{N})\in R^{N}: a_{i}\geq0, i\in\{1,2,3, \ldots,N\} \bigr\} $$
which satisfies the following conditions:
\(\triangle(a_{1},a_{2},\ldots,a_{N})\) is non-decreasing for each variable \(a_{i}\), \(i\in\{1,2,3, \ldots,N \}\);
\(\triangle(a_{1}+b_{1},a_{2}+b_{2},\ldots,a_{N}+b_{N})\leq \triangle(a_{1},a_{2},\ldots,a_{N})+\triangle(b_{1},b_{2},\ldots,b_{N})\);
\(\triangle(a,a,\ldots,a)=a\);
\(\triangle(a_{1},a_{2},\ldots,a_{N})\rightarrow0 \Leftrightarrow a_{i}\rightarrow0\), \(i\in\{1,2,3,\ldots, N \}\), for all \(a_{i},b_{i}, a \in\mathbb{R}\), \(i\in\{1,2,3, \ldots,N \}\), where \(\mathbb{R}\) denotes the set of all real numbers.
The following are some basic examples of multiply metric functions.
Example 2.2
(1) \(\triangle_{1}(a_{1},a_{2},\ldots,a_{N})=\frac{1}{N} \sum_{i=1}^{N}a_{i}\).
(2) \(\triangle_{2}(a_{1},a_{2},\ldots,a_{N})=\frac{1}{h} \sum_{i=1}^{N}q_{i} a_{i}\), where \(q_{i}\in[0,1)\), \(i\in\{1,\ldots, N \}\), and \(0< h:= \sum_{i=1}^{N}q_{i}<1\).
(3) \(\triangle_{3}(a_{1},a_{2},\ldots,a_{N})=\sqrt{\frac{1}{N} \sum_{i=1}^{N}a_{i}^{2}}\).
(4) \(\triangle_{4}(a_{1},a_{2},\ldots,a_{N})=\max \{a_{1},a_{2},\ldots,a_{N}\}\).
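As a quick numerical sanity check (ours, not part of the paper), three of these examples can be implemented and tested against axiom (3), △(a,…,a) = a, and a sample instance of subadditivity (axiom (2)); the helper names are our own.

```python
import math

def tri1(a):  # arithmetic mean
    return sum(a) / len(a)

def tri3(a):  # quadratic mean
    return math.sqrt(sum(x * x for x in a) / len(a))

def tri4(a):  # maximum
    return max(a)

# Axiom (3): evaluating at a constant tuple (a, a, ..., a) returns a.
for tri in (tri1, tri3, tri4):
    assert abs(tri([2.5] * 6) - 2.5) < 1e-12

# Axiom (2) (subadditivity) for tri4 on a sample pair of tuples.
a, b = [1.0, 4.0, 2.0], [3.0, 0.5, 5.0]
assert tri4([x + y for x, y in zip(a, b)]) <= tri4(a) + tri4(b)
print("axioms hold on samples")
```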
An important concept is now presented.
Let \((X,d)\) be a metric space and \(T: X^{N}\rightarrow X\) be an N variable mapping. An element \(p\in X\) is called a multivariate fixed point (or a fixed point of order N; see [20]) of T if
$$p=T(p,p,\ldots,p). $$
We now prove the following theorem, which generalizes the Banach contraction principle.
Theorem 2.6
Let \((X,d)\) be a complete metric space, \(T: X^{N}\rightarrow X\) be an N variable mapping that satisfies the following condition:
$$d(Tx,Ty)\leq h \triangle\bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N}) \bigr), \quad\forall x,y \in X^{N}, $$
where △ is a multiply metric function,
$$x=(x_{1},x_{2}, \ldots, x_{N}) \in X^{N},\qquad y=(y_{1},y_{2}, \ldots, y_{N}) \in X^{N}, $$
and \(h \in(0,1)\) is a constant.
Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\subset X^{N}\) defined by
$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &p_{3}=(Tp_{2},Tp_{2},\ldots,Tp_{2}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$
converges, in the multiply metric △, to \((p,p,\ldots ,p)\in X^{N}\) and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).
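To illustrate the iteration in Theorem 2.6 numerically (an example of ours, not from the paper), take X = ℝ with the usual metric and T(x_1,…,x_N) = (1/(2N))∑x_i + c. Since |T x − T y| ≤ (1/2)·△_1(|x_1−y_1|,…,|x_N−y_N|), this T satisfies the theorem's condition with △ = △_1 and h = 1/2, and its multivariate fixed point solves p = p/2 + c, i.e. p = 2c.

```python
def T(x, c=1.0):
    """N-variable contraction on the reals: half the average plus a constant."""
    return sum(x) / (2 * len(x)) + c

# Iterate p_{n+1} = (T p_n, ..., T p_n) from an arbitrary starting tuple.
N = 4
p = (10.0, -3.0, 7.0, 0.0)
for _ in range(60):
    t = T(p)
    p = (t,) * N

# The multivariate fixed point satisfies p = T(p, ..., p), i.e. p = 2c = 2.
print(round(p[0], 6))  # prints 2.0
```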
We define a two variable function D on \(X^{N}\) by the following relation:
$$D\bigl((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\bigr)=\triangle \bigl(d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N}) \bigr) $$
for all \((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\in X^{N}\). Next we show that D is a metric on \(X^{N}\). The following two conditions are obvious:
\(D((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N}))=0 \Leftrightarrow (x_{1},x_{2},\ldots,x_{N})= (y_{1},y_{2},\ldots,y_{N})\);
\(D((y_{1},y_{2},\ldots,y_{N}), (x_{1},x_{2},\ldots,x_{N}))=D((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N}))\), for all \((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\in X^{N}\).
Next we prove the triangular inequality. For all
$$(x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2}, \ldots,y_{N}), (z_{1},z_{2},\ldots,z_{N}) \in X^{N}, $$
from the definition of △, we have
$$\begin{aligned} &D\bigl((x_{1},x_{2},\ldots,x_{N}), (y_{1},y_{2},\ldots,y_{N})\bigr) \\ &\quad=\triangle\bigl(d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N}) \bigr) \\ &\quad\leq\triangle\bigl(d(x_{1},z_{1})+d(z_{1},y_{1}),d(x_{2},z_{2})+d(z_{2},y_{2}), \ldots,d(x_{N},z_{N})+d(z_{N},y_{N}) \bigr) \\ &\quad\leq\triangle\bigl(d(x_{1},z_{1}),d(x_{2},z_{2}), \ldots,d(x_{N},z_{N}) \bigr) +\triangle\bigl(d(z_{1},y_{1}),d(z_{2},y_{2}), \ldots,d(z_{N},y_{N}) \bigr) \\ &\quad=D\bigl((x_{1},x_{2},\ldots,x_{N}), (z_{1},z_{2},\ldots,z_{N})\bigr)+D \bigl((z_{1},z_{2},\ldots,z_{N}), (y_{1},y_{2},\ldots,y_{N})\bigr). \end{aligned}$$
Next we prove that \((X^{N},D)\) is a complete metric space. Let \(\{p_{n}\}\subset X^{N}\) be a Cauchy sequence. Then we have
$$\lim_{n,m\rightarrow\infty} D(p_{n},p_{m})=\lim _{n,m\rightarrow\infty}\triangle \bigl(d(x_{1,n},x_{1,m}),d(x_{2,n},x_{2,m}), \ldots,d(x_{N,n},x_{N,m})\bigr)=0, $$
where
$$p_{n}=(x_{1,n},x_{2,n},x_{3,n}, \ldots,x_{N,n}),\qquad p_{m}=(x_{1,m},x_{2,m},x_{3,m}, \ldots,x_{N,m}). $$
By the properties of the multiply metric function △, this yields
$$\lim_{n,m\rightarrow\infty}d(x_{i,n},x_{i,m})=0, $$
for all \(i\in\{1,2,3,\ldots,N \}\). Hence each \(\{x_{i,n}\} \) (\(i\in\{1,2,3, \ldots,N \}\)) is a Cauchy sequence. Since \((X,d)\) is a complete metric space, there exist \(x_{1},x_{2},x_{3},\ldots,x_{N} \in X\) such that \(\lim_{n\rightarrow\infty}d(x_{i,n}, x_{i})=0\) for all \(i\in\{1,2,3,\ldots,N \}\). Therefore
$$\lim_{n\rightarrow\infty}D(p_{n},x)=0, $$
where
$$x=(x_{1},x_{2},x_{3},\ldots,x_{N})\in X^{N}, $$
which implies that \((X^{N},D)\) is a complete metric space.
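As a concrete illustration (our own sketch, not part of the paper's argument): taking \(X=\mathbb{R}\) with \(d(x,y)=|x-y|\) and △ the coordinate-wise maximum, the construction above produces the usual sup-product metric on \(\mathbb{R}^{N}\). The snippet below builds D from d and △ and spot-checks symmetry and the triangle inequality on random points.

```python
import random

def tri_max(*a):
    # a concrete multiply metric function: the coordinate-wise maximum
    return max(a)

def D(x, y, d=lambda s, t: abs(s - t), tri=tri_max):
    # the product metric on X^N built from d and the multiply metric function
    return tri(*(d(xi, yi) for xi, yi in zip(x, y)))

random.seed(0)
N = 4
for _ in range(1000):
    x, y, z = ([random.uniform(-5, 5) for _ in range(N)] for _ in range(3))
    assert D(x, y) == D(y, x)                       # symmetry
    assert D(x, y) <= D(x, z) + D(z, y) + 1e-12     # triangle inequality
```

The same check works for any △ satisfying the multiply metric axioms; only `tri_max` above is our assumed choice.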
We define a mapping \(T^{*}:X^{N}\rightarrow X^{N}\) by the following relation:
$$T^{*}(x_{1},x_{2},\ldots,x_{N})= \bigl(T(x_{1},x_{2},\ldots,x_{N}),T(x_{1},x_{2}, \ldots,x_{N}),\ldots,T(x_{1},x_{2}, \ldots,x_{N})\bigr), $$
for all \((x_{1},x_{2},\ldots,x_{N})\in X^{N}\). Next we prove that \(T^{*}\) is a contraction mapping from \((X^{N},D)\) into itself. Observe that, for any
$$x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2}, \ldots,y_{N})\in X^{N}, $$
$$\begin{aligned} D\bigl(T^{*}x,T^{*}y\bigr)&=\triangle\bigl(d(Tx,Ty),d(Tx,Ty),\ldots,d(Tx,Ty)\bigr) \\ &=d(Tx,Ty) \\ &\leq h \triangle\bigl(d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N})\bigr) \\ &=h D(x,y). \end{aligned}$$
By the Banach contraction mapping principle, there exists a unique element \(u \in X^{N}\) such that \(u=T^{*}u=(Tu,Tu,\ldots,Tu)\) and, for any \(u_{0}=(x_{1},x_{2},\ldots,x_{N})\in X^{N}\), the iterative sequence \(u_{n+1}=T^{*}u_{n}\) converges to u. That is,
$$\begin{aligned} &u_{1}=(Tu_{0},Tu_{0},\ldots,Tu_{0}), \\ &u_{2}=(Tu_{1},Tu_{1},\ldots,Tu_{1}), \\ &u_{3}=(Tu_{2},Tu_{2},\ldots,Tu_{2}), \\ &\cdots \\ & u_{n+1}=(Tu_{n},Tu_{n},\ldots,Tu_{n}), \\ &\cdots \end{aligned}$$
converges to \(u\in X^{N}\). By the structure of \(\{u_{n}\}\), we know that there exists a unique element \(p\in X\) such that \(u=(p,p,\ldots, p)\) and hence the iterative sequence \(\{Tu_{n}\}\) converges to \(p \in X\). By
$$T^{*}u=u=(p,p,\ldots,p),\qquad Tu=T(p,p,\ldots,p), \qquad T^{*}u=(Tu,Tu,\ldots,Tu), $$
we obtain \(p=T(p,p,\ldots,p)\), that is, p is the unique multivariate fixed point of T. This completes the proof. □
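To see the iteration of the proof in action, here is a small numerical sketch (a toy example of our own): take \(X=\mathbb{R}\), \(d(x,y)=|x-y|\), \(N=2\), \(\triangle=\max\), and \(T(x_{1},x_{2})=(x_{1}+x_{2})/4+1\). Then \(|Tx-Ty|\leq\frac{1}{2}\max(|x_{1}-y_{1}|,|x_{2}-y_{2}|)\), so T is a multivariate contraction with \(h=1/2\), and its unique multivariate fixed point solves \(p=T(p,p)=p/2+1\), i.e. \(p=2\).

```python
def T(x1, x2):
    # 2-variable contraction: |Tx - Ty| <= (1/2) * max(|x1 - y1|, |x2 - y2|)
    return (x1 + x2) / 4.0 + 1.0

def multivariate_fixed_point(T, p0, iterations=60):
    # the scheme from the proof: p_{n+1} = (T p_n, T p_n)
    p = p0
    for _ in range(iterations):
        t = T(*p)
        p = (t, t)
    return p[0]

p = multivariate_fixed_point(T, (10.0, -7.0))
# the unique multivariate fixed point solves p = T(p, p) = p/2 + 1, i.e. p = 2
assert abs(p - 2.0) < 1e-9
```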
Notice that, taking \(N=1\) and \(\triangle(a)=a\) in Theorem 2.6, we obtain Banach's contraction principle.
Some other consequences of the above general result are the following corollaries.
Corollary 2.7
Let \((X,d)\) be a complete metric space, \(T: X^{N}\rightarrow X\) be an N-variable mapping satisfying the following condition:
$$d(Tx,Ty)\leq\frac{h}{N}\sum_{i=1}^{N}d(x_{i},y_{i}), \quad 0< h< 1, $$
$$x=(x_{1},x_{2}, \ldots, x_{N}) \in X^{N}, \qquad y=(y_{1},y_{2}, \ldots, y_{N}) \in X^{N}. $$
Then T has a unique multivariate fixed point \(p\in X\) and, for any \(p_{0} \in X^{N}\), the iterative sequence \(\{p_{n}\}\) defined as in Theorem 2.6 converges, in the multiply metric, to \((p,p,\ldots,p)\in X^{N}\), and the iterative sequence \(\{Tp_{n}\}\subset X\) converges, with respect to d, to \(p \in X\).
Notice that the above corollary is related to the well-known fixed point theorem of Prešić (see [21]).
Prešić's theorem
Let \((X,d)\) be a complete metric space, N be a given natural number, and \(T:X^{N}\to X\) be an operator, such that, for all \(x_{1}, \ldots, x_{N}, x_{N+1}\in X\), we have
$$d\bigl(T(x_{1},x_{2}, \ldots, x_{N}),T(x_{2}, \ldots, x_{N}, x_{N+1})\bigr)\le q_{1} d(x_{1},x_{2})+ \cdots+q_{N} d(x_{N},x_{N+1}), $$
where \(q_{1}, \ldots, q_{N}\in\mathbb{R}_{+}\) with \(q_{1}+ \cdots+ q_{N}<1\).
Then there exists a unique multivariate fixed point \(p\in X\) and p is the limit of the sequence \((x_{n})\) given by
$$x_{n+N}:=T(x_{n}, \ldots, x_{n+N-1}), \quad\textit{for } n\ge1, $$
independently of the initial N values.
Choosing \(\Delta:=\Delta_{2}\), \(h:= \sum_{i=1}^{N}q_{i}\), and \(x=(x_{1},x_{2},\ldots,x_{N}), y=(x_{2},x_{3},\ldots ,x_{N+1})\in X^{N}\), the contraction condition given in Theorem 2.6 leads to Prešić's contraction type condition.
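A small numerical sketch of the Prešić scheme (a toy example of our own): \(T(x_{1},x_{2})=(x_{1}+x_{2})/3+1\) satisfies Prešić's condition with \(q_{1}=q_{2}=1/3\), \(q_{1}+q_{2}=2/3<1\), and its multivariate fixed point is \(p=3\).

```python
def T(x1, x2):
    # Presic-type operator with q1 = q2 = 1/3, so q1 + q2 = 2/3 < 1
    return (x1 + x2) / 3.0 + 1.0

def presic_iteration(T, x0, x1, steps=200):
    # x_{n+2} = T(x_n, x_{n+1}), independently of the two initial values
    xs = [x0, x1]
    for _ in range(steps):
        xs.append(T(xs[-2], xs[-1]))
    return xs[-1]

# the multivariate fixed point solves p = T(p, p) = 2p/3 + 1, i.e. p = 3
assert abs(presic_iteration(T, -50.0, 40.0) - 3.0) < 1e-9
```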
Let \((X,d)\) be a complete metric space and \(T: X^{N}\rightarrow X\) be an N-variable mapping which satisfies the following condition:
$$d(Tx,Ty)\leq h \sqrt{\frac{1}{N}\sum _{i=1}^{N}d(x_{i},y_{i})^{2}}, \quad 0< h< 1, $$
$$d(Tx,Ty)\leq h \max \bigl\{ d(x_{1},y_{1}),d(x_{2},y_{2}), \ldots,d(x_{N},y_{N})\bigr\} ,\quad 0< h< 1, $$
$$\begin{aligned} &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned}$$
Notice also here that the above corollary is related to a multivariate fixed point theorem of Ćirić and Prešić (see [22]), which reads as follows.
Ćirić-Prešić's theorem
$$d\bigl(T(x_{1},x_{2}, \ldots, x_{N}),T(x_{2}, \ldots, x_{N}, x_{N+1})\bigr)\le h \max\bigl\{ d(x_{1},x_{2}), d(x_{2},x_{3}), \ldots, d(x_{N},x_{N+1}) \bigr\} , $$
where \(0< h<1\).
Then there exists a multivariate fixed point \(p\in X\) and p is the limit of the sequence \((x_{n})\) given by
$$x_{n+N}:=T(x_{n}, \ldots, x_{n+N-1}), \quad\textit{for } n\ge1, $$
independently of the initial N values.
If, in addition, we suppose that on the diagonal \(\operatorname{Diag}\subset X^{N}\) we have
$$d\bigl(T(u,\ldots, u), T(v,\ldots, v)\bigr)< d(u,v), \quad\textit{for all } u,v\in X \textit{ with } u\ne v, $$
then the multivariate fixed point is unique.
Choosing \(\Delta:=\Delta_{4}\), \(h\in(0,1)\), and \(x=(x_{1},x_{2},\ldots,x_{N}), y=(x_{2},x_{3},\ldots ,x_{N+1})\in X^{N}\), the contraction condition given in Theorem 2.6 leads to the above Ćirić-Prešić's contraction type condition.
It is worth mentioning that the above results are connected with a very interesting multivariate fixed point principle proved by Tasković in [23]. More precisely, Tasković's result is as follows.
Tasković's theorem
Let \((X,d)\) be a complete metric space, N be a given natural number, \(f:\mathbb{R}^{N}\to\mathbb{R}\) be a continuous, increasing, and semi-homogeneous function (in the sense that \(f(\lambda a_{1}, \ldots, \lambda a_{N})\le\lambda f(a_{1}, \ldots, a_{N})\), for any \(\lambda, a_{1}, \ldots, a_{N}\in\mathbb{R}\)) and let \(T:X^{N}\to X\) be an operator, such that, for all \(x=(x_{1}, \ldots , x_{N}), y=(y_{1}, \ldots, y_{N})\in X^{N}\), we have
$$d\bigl(T(x),T(y)\bigr)\le\bigl|f\bigl(a_{1} d(x_{1},y_{1}), \ldots, a_{N} d(x_{N},y_{N})\bigr)\bigr|, $$
where \(a_{1}, \ldots, a_{N}\in\mathbb{R}_{+}\) with \(|f(a_{1}, \ldots, a_{N})|<1\).
Notice here that \(\triangle(a_{1},\ldots, a_{n}):=f(a_{1},\ldots, a_{n})\) satisfies part of the axioms of the multiply metric. More connections with the above-mentioned results will be given in a forthcoming paper.
The following result is another multivariate fixed point theorem for a class of generalized contraction mappings related to the SY theorem. Its proof can be obtained from Theorem SY in the same way as in the proof of Theorem 2.6.
Theorem 2.10
$$\psi\bigl( d(Tx,Ty)\bigr)\leq\phi\bigl( \triangle\bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N}) \bigr)\bigr), $$
and \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) are two functions satisfying the conditions:
$$\begin{aligned}& (1)\quad \psi(a)\leq\phi(b) \quad\Rightarrow\quad a \leq b; \\& (2)\quad \textstyle\begin{cases} \psi(a_{n})\leq\phi(b_{n}) \\ a_{n}\rightarrow\varepsilon,\qquad b_{n}\rightarrow\varepsilon \end{cases}\displaystyle \quad\Rightarrow\quad\varepsilon=0. \end{aligned}$$
In [7], Su and Yao also gave some examples of functions \(\psi(t)\), \(\phi(t)\). Here we recall some of them.
Example 2.11
([7])
The following functions satisfy the conditions (1) and (2) of Theorem 2.10.
$$(\mathrm{a}) \quad \textstyle\begin{cases} \psi_{1}(t)=t, \\ \phi_{1}(t)=\alpha t, \end{cases} $$
where \(0<\alpha<1\) is a constant.
$$\begin{aligned}& (\mathrm{b})\quad \textstyle\begin{cases} \psi_{2}(t)=t^{2}, \\ \phi_{2}(t)=\ln(t^{2}+1), \end{cases}\displaystyle \\& (\mathrm{c})\quad \textstyle\begin{cases} \psi_{3}(t)=t, \\ \phi_{3}(t)= \textstyle\begin{cases} t^{2}, & 0\leq t\leq\frac{1}{2},\\ t-\frac{3}{8},& \frac{1}{2}< t< +\infty, \end{cases}\displaystyle \end{cases}\displaystyle \\& (\mathrm{d}) \quad \textstyle\begin{cases} \psi_{4}(t)= \textstyle\begin{cases} t, & 0\leq t\leq1,\\ t-\frac{1}{2}, & 1< t< +\infty, \end{cases}\displaystyle \\ \phi_{4}(t)= \textstyle\begin{cases} \frac{t}{2}, & 0\leq t\leq1,\\ t-\frac{4}{5}, & 1< t< +\infty, \end{cases}\displaystyle \end{cases}\displaystyle \\& (\mathrm{e}) \quad \textstyle\begin{cases} \psi_{5}(t)= \textstyle\begin{cases} t, & 0\leq t\leq1,\\ \alpha t^{2}, & 1\leq t< +\infty, \end{cases}\displaystyle \\ \phi_{5}(t)= \textstyle\begin{cases} t^{2}, & 0\leq t< 1,\\ \beta t, & 1< t< +\infty, \end{cases}\displaystyle \end{cases}\displaystyle \end{aligned}$$
where \(0<\beta< \alpha\) are constants.
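Pair (b) can be checked directly: since \(e^{s}>1+s\) for \(s>0\), we have \(s>\ln(1+s)\), and hence \(t^{2}>\ln(t^{2}+1)\) for every \(t\neq0\), with equality only at \(t=0\). A quick numerical sanity check (our own sketch):

```python
import math

def psi2(t):
    # pair (b): psi_2(t) = t^2
    return t * t

def phi2(t):
    # pair (b): phi_2(t) = ln(t^2 + 1)
    return math.log(t * t + 1.0)

# psi2(t) > phi2(t) for every t != 0, since e^s > 1 + s gives s > ln(1 + s)
samples = [10.0 ** k for k in range(-3, 7)]
assert all(psi2(t) > phi2(t) for t in samples)
assert psi2(0.0) == phi2(0.0) == 0.0
```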
For example, if we choose \(\psi_{5}(t)\), \(\phi_{5}(t)\) in Theorem 2.10, then we can get the following result.
Let \((X,d)\) be a complete metric space. Let \(T: X^{N}\rightarrow X\) be an N-variable mapping such that
$$\begin{aligned}& 0\leq d(Tx,Ty)< 1 \quad\Rightarrow\quad d(Tx,Ty)\leq\bigl(\triangle \bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N})\bigr)\bigr)^{2}, \\& d(Tx,Ty)\geq1 \quad\Rightarrow\quad\alpha\bigl(d(Tx,Ty)\bigr)^{2}\leq \beta\triangle \bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N})\bigr), \end{aligned}$$
for any \(x=(x_{1},x_{2},x_{3}, \ldots,x_{N}), y=(y_{1},y_{2},y_{3}, \ldots,y_{N}) \in X^{N}\).
Using the following notions it is easy to prove another consequence of our main results.
Remark 2.13
Let \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) be two functions satisfying the conditions:
(i) \(\psi(0)=\phi(0)\);
(ii) \(\psi(t)>\phi(t)\), \(\forall t>0\);
(iii) ψ is lower semi-continuous and ϕ is upper semi-continuous.
Then \(\psi(t)\), \(\phi(t)\) satisfy the above-mentioned conditions (1) and (2).
Corollary 2.14
Let \((X,d)\) be a complete metric space. Let \(T: X^{N}\rightarrow X\) be an N-variable mapping such that, for any \(x=(x_{1},x_{2},x_{3}, \ldots,x_{N}), y=(y_{1},y_{2},y_{3}, \ldots,y_{N}) \in X^{N}\), we have
$$\psi\bigl(d(Tx,Ty)\bigr)\leq\phi\bigl(\triangle\bigl(d(x_{1},y_{1}), d(x_{2},y_{2}), \ldots, d(x_{N},y_{N}) \bigr)\bigr), $$
where \(\psi, \phi: [0, +\infty) \rightarrow[0, +\infty)\) are two functions with the conditions (i), (ii), and (iii).
$$\begin{aligned}[b] &p_{1}=(Tp_{0},Tp_{0},\ldots,Tp_{0}), \\ &p_{2}=(Tp_{1},Tp_{1},\ldots,Tp_{1}), \\ &\cdots \\ & p_{n+1}=(Tp_{n},Tp_{n},\ldots,Tp_{n}), \\ &\cdots \end{aligned} $$
3 An application to an initial-value problem related to a first order differential equation
We will give now an application of the above results to an initial-value problem related to a first order differential equation of the following form:
$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx}{dt}=f(x(t),x(t),\ldots, x(t),t), \quad t\in I:=[t_{0}-\delta, t_{0}+\delta],\\ x(t_{0})=x^{0} \quad (x^{0}\in\mathbb{R}), \end{array}\displaystyle \right . $$
where \(t_{0},\delta>0\) are given real numbers and \(f:\mathbb{R}^{N}\times I\to\mathbb{R}\) is a continuous \((N+1)\)-variable function satisfying the following Lipschitz type condition:
$$\bigl|f(x_{1},x_{2},\ldots, x_{N}, t)-f(y_{1},y_{2}, \ldots, y_{N}, t)\bigr|\leq k(t) \sum_{i=1}^{N} |x_{i}-y_{i}|, $$
with \(k\in L^{1}(I,\mathbb{R}_{+})\).
For this purpose, we will consider first the following integral equation:
$$x(t)= \int_{t_{0}}^{t}f\bigl(x(\tau),x(\tau),\ldots,x(\tau), \tau\bigr)\,d\tau+g(t), \quad t \in[t_{0}-\delta,t_{0}+ \delta], $$
where \(g\in C(I)\) is a given function and f is as before.
Let \(X:=C[t_{0}-\delta,t_{0}+\delta]\), the linear space of continuous real functions defined on the closed interval \(I:=[t_{0}-\delta,t_{0}+\delta]\), where \(t_{0}, \delta>0\) are real numbers. It is well known that \(C[t_{0}-\delta,t_{0}+\delta]\) is a complete metric space with respect to the Chebyshev metric
$$d(x,y):=\max_{t_{0}-\delta\leq t \leq t_{0}+\delta}\bigl|x(t)-y(t)\bigr|, $$
for \(x,y\in X\).
We can also introduce on X a Bielecki type metric (which is known to be Lipschitz (strongly) equivalent to d), by the relation
$$d_{B}(x,y):=\max_{t_{0}-\delta\leq t \leq t_{0}+\delta}\bigl|x(t)-y(t)\bigr|e^{-LK(t)}, $$
where \(K(t):=\int^{t}_{t_{0}}k(s)\,ds\) and L is a constant greater than N.
Let \(T: X \times X\times\cdots\times X \rightarrow X\) with \(X^{N}\ni x=(x_{1},\ldots, x_{N})\longmapsto Tx\) be an N-variable mapping defined by
$$Tx(t):= \int_{t_{0}}^{t}f\bigl(x_{1}(\tau), x_{2}(\tau ),\ldots,x_{N}(\tau),\tau\bigr)\,d\tau+g(t), $$
for all \(x_{1}, x_{2},\ldots, x_{N} \in X\), where \(g\in X\) and \(f(x_{1},x_{2},\ldots, x_{N},t)\) is a continuous \((N+1)\)-variable function satisfying the Lipschitz type condition stated above.
For any \(x=(x_{1}, x_{2},\ldots,x_{N}), y=(y_{1}, y_{2},\ldots,y_{N})\in X^{N}\), and \(t\in I\) we have
$$\begin{aligned} \bigl|Tx(t)-Ty(t)\bigr|&\le\biggl| \int_{t_{0}}^{t}\bigl|f\bigl(x(\tau),\tau\bigr)-f\bigl(y( \tau),\tau \bigr)\bigr|\,d\tau\biggr| \\ & \le\biggl| \int_{t_{0}}^{t}\sum_{i=1}^{N}k( \tau)\bigl|x_{i}(\tau)-y_{i}(\tau)\bigr|\,d\tau\biggr| \\ &= \Biggl| \int_{t_{0}}^{t}\sum_{i=1}^{N} k(\tau) \bigl|x_{i}(\tau)-y_{i}(\tau)\bigr|e^{-LK(\tau)} e^{LK(\tau)}\,d\tau\Biggr| \\ &\le\Biggl| \int_{t_{0}}^{t}\sum_{i=1}^{N} \max_{\tau\in I}\bigl[\bigl|x_{i}(\tau)-y_{i}( \tau)\bigr|e^{-LK(\tau)}\bigr] k(\tau) e^{LK(\tau)}\,d\tau\Biggr| \\ &= N \biggl| \int_{t_{0}}^{t} \Biggl(\frac{1}{N}\sum _{i=1}^{N} d_{B}(x_{i},y_{i}) \Biggr) k(\tau) e^{LK(\tau)}\,d\tau\biggr| \\ &= N \triangle_{1} \bigl(d_{B}(x_{1},y_{1}), \ldots, d_{B}(x_{N},y_{N})\bigr) \biggl| \int_{t_{0}}^{t} k(\tau) e^{LK(\tau)}\,d\tau\biggr| \\ &\le\frac{N}{L}\cdot\triangle_{1} \bigl(d_{B}(x_{1},y_{1}), \ldots, d_{B}(x_{N},y_{N})\bigr) e^{LK(t)}. \end{aligned}$$
This implies
$$\bigl|Tx(t)-Ty(t)\bigr| e^{-LK(t)}\le\frac{N}{L}\cdot\triangle_{1} \bigl(d_{B}(x_{1},y_{1}),\ldots, d_{B}(x_{N},y_{N})\bigr), \quad\mbox{for all } t \in I. $$
Hence we get
$$d_{B}(Tx,Ty)\le\frac{N}{L}\cdot\triangle_{1} \bigl(d_{B}(x_{1},y_{1}),\ldots, d_{B}(x_{N},y_{N})\bigr), \quad\mbox{for all } x,y \in X^{N}. $$
Since \(h:= \frac{N}{L}<1\), we conclude, by using Theorem 2.6, that the N-variable mapping T has a unique multivariate fixed point \(x^{*}\in X= C[t_{0}-\delta,t_{0}+\delta]\), i.e., such that
$$x^{*}(t)= \int_{t_{0}}^{t}f\bigl(x^{*}(\tau),x^{*}(\tau),\ldots,x^{*}( \tau ),\tau\bigr)\,d\tau+g(t), \quad t\in I, $$
and, for any \(x_{0}\in X\), the iterative sequence \(\{x_{n}(t)\}\) defined by
$$\begin{aligned} &x_{1}(t)= \int_{t_{0}}^{t}f\bigl(x_{0}(\tau),x_{0}(\tau),\ldots,x_{0}(\tau),\tau\bigr)\,d\tau+g(t), \\ &x_{2}(t)= \int_{t_{0}}^{t}f\bigl(x_{1}(\tau),x_{1}(\tau),\ldots,x_{1}(\tau),\tau\bigr)\,d\tau+g(t), \\ & \cdots \\ &x_{n+1}(t)= \int_{t_{0}}^{t}f\bigl(x_{n}(\tau),x_{n}(\tau),\ldots,x_{n}(\tau),\tau\bigr)\,d\tau+g(t), \end{aligned}$$
converges to \(x^{*}\in X=C[t_{0}-\delta,t_{0}+\delta]\). The function \(x^{*}=x^{*}(t)\) is the unique solution of the integral equation
$$x(t)= \int_{t_{0}}^{t}f\bigl(x(\tau),x(\tau),\ldots,x(\tau), \tau\bigr)\,d\tau+g(t), \quad t \in[t_{0}-\delta,t_{0}+ \delta]. $$
In particular, if \(g(t):=x^{0}\) (where \(x^{0}\in\mathbb{R}\)) is a constant function, it is well known that the above integral equation is equivalent to the initial-value problem associated to a first order differential equation of the form
$$\frac{dx(t)}{dt}=f\bigl(x(t),x(t),\ldots,x(t),t\bigr), \qquad x(t_{0})=x^{0}. $$
Thus, by our approach an existence and uniqueness result for the initial-value problem follows.
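The Picard-type scheme above is easy to try numerically. The sketch below is our own illustration; the data f, \(t_{0}\), δ, and \(x^{0}\) are assumptions chosen for the test. We take \(N=2\), \(f(x_{1},x_{2},t)=(x_{1}+x_{2})/2\), and \(x(0)=1\), so the problem reduces to \(x'=x\) with exact solution \(e^{t}\); the integral is discretized by the trapezoid rule.

```python
import math

def f(x1, x2, t):
    # assumed right-hand side: f(x1, x2, t) = (x1 + x2)/2, so x' = f(x, x, t) is x' = x
    return 0.5 * (x1 + x2)

def picard(t0=0.0, delta=1.0, x0=1.0, n_grid=2000, n_iter=40):
    # x_{n+1}(t) = x0 + int_{t0}^{t} f(x_n(tau), x_n(tau), tau) dtau,
    # with the integral approximated by the trapezoid rule on a uniform grid
    h = delta / n_grid
    ts = [t0 + i * h for i in range(n_grid + 1)]
    x = [x0] * (n_grid + 1)              # x_0(t) := x0, the constant initial guess
    for _ in range(n_iter):
        g = [f(xi, xi, ti) for xi, ti in zip(x, ts)]
        new, acc = [x0], 0.0
        for i in range(n_grid):
            acc += 0.5 * h * (g[i] + g[i + 1])
            new.append(x0 + acc)
        x = new
    return ts, x

ts, x = picard()
# the exact solution of x' = x, x(0) = 1 is e^t
assert abs(x[-1] - math.e) < 1e-3
```

The error here has two sources: the Picard iteration itself, which converges factorially fast on a bounded interval, and the trapezoid rule, whose error is \(O(h^{2})\).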
4 N-variable nonexpansive mappings in normed spaces
We first introduce the concept of an N-variable nonexpansive mapping.
Let \((X,\|\cdot\|)\) be a normed space. Then an N-variable mapping \(T: X^{N}\rightarrow X\) is said to be nonexpansive if
$$\|Tx-Ty\|\leq\triangle\bigl(\|x_{1}-y_{1}\|, \|x_{2}-y_{2}\|, \ldots,\|x_{N}-y_{N}\|\bigr), $$
for all \(x=(x_{1},x_{2},x_{3},\ldots,x_{N}), y=(y_{1},y_{2},y_{3},\ldots,y_{N})\in X^{N}\), where △ is a multiply metric function.
Some useful results are the following.
Lemma 4.2
Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\). We consider on the Cartesian product space \(X^{N}=X\times X\times\cdots\times X\) the following functional:
$$\langle x, y\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N} \langle x_{i},y_{i}\rangle,\quad \forall x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2}, \ldots,y_{N}) \in X^{N}. $$
Then \((X^{N}, \langle\cdot,\cdot\rangle^{*})\) is a Hilbert space.
It is easy to prove that \(X^{N}\) is a linear space with the following linear operations:
$$\begin{aligned}& (x_{1},x_{2},\ldots,x_{N})+(y_{1},y_{2}, \ldots ,y_{N})=(x_{1}+y_{1},x_{2}+y_{2}, \ldots,x_{N}+y_{N}), \\& \lambda(x_{1},x_{2},\ldots,x_{N})=(\lambda x_{1},\lambda x_{2},\ldots,\lambda x_{N}), \end{aligned}$$
for all \(x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2},\ldots,y_{N}) \in X^{N}\), and \(\lambda\in (-\infty,+\infty)\). Next we prove that \((X^{N}, \langle \cdot,\cdot\rangle^{*})\) is an inner product space. It is easy to see that the following relations hold:
\(\langle x,x\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N}\langle x_{i},x_{i}\rangle\geq0\) and \(\langle x,x\rangle^{*}=0 \Leftrightarrow x=0\), \(\forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}\);
\(\langle x,y\rangle^{*}=\langle y,x\rangle^{*}\), \(\forall x,y \in X^{N}\);
\(\langle\lambda x,y\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N}\langle \lambda x_{i},y_{i}\rangle= \lambda\frac{1}{N} \sum_{i=1}^{N}\langle x_{i},y_{i}\rangle=\lambda\langle x,y\rangle^{*} \), \(\forall x,y \in X^{N}\);
\(\langle x+y,z\rangle^{*}=\langle x,z\rangle^{*}+\langle y,z\rangle^{*}\), \(\forall x,y,z \in X^{N}\).
Hence \((X^{N}, \langle\cdot,\cdot\rangle^{*})\) is an inner product space.
The inner product \(\langle x,y \rangle^{*}\) generates the following norm:
$$\|x\|^{*}= \sqrt{\langle x,x\rangle^{*}}=\sqrt{\frac{1}{N}\sum _{i=1}^{N}\|x_{i} \|^{2}}, \quad \forall x=(x_{1},x_{2}, \ldots,x_{N}) \in X^{N}, $$
where \(\|x_{i}\|= \sqrt{\langle x_{i},x_{i}\rangle}\), \(\forall x_{i} \in X\), \(i=1,2,3,\ldots, N\). Since X is complete, we know that \((X^{N}, \|\cdot\|^{*})\) is also complete. So \((X^{N}, \|\cdot\|^{*})\) is a Hilbert space. □
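The product inner product of Lemma 4.2 is straightforward to check numerically. The sketch below (our own) takes \(X=\mathbb{R}^{2}\) with the standard dot product and \(N=3\), and verifies symmetry and the parallelogram law for the induced norm \(\|\cdot\|^{*}\).

```python
import math, random

def inner(u, v):
    # the standard inner product on X = R^2
    return u[0] * v[0] + u[1] * v[1]

def inner_star(x, y):
    # <x, y>* = (1/N) * sum_i <x_i, y_i> on the product space X^N
    return sum(inner(xi, yi) for xi, yi in zip(x, y)) / len(x)

def norm_star(x):
    # the induced norm ||x||* = sqrt((1/N) sum_i ||x_i||^2)
    return math.sqrt(inner_star(x, x))

random.seed(1)
rv = lambda: (random.uniform(-1, 1), random.uniform(-1, 1))
N = 3
x = [rv() for _ in range(N)]
y = [rv() for _ in range(N)]
assert abs(inner_star(x, y) - inner_star(y, x)) < 1e-12   # symmetry
# parallelogram law: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
s = [(a[0] + b[0], a[1] + b[1]) for a, b in zip(x, y)]
d = [(a[0] - b[0], a[1] - b[1]) for a, b in zip(x, y)]
assert abs(norm_star(s) ** 2 + norm_star(d) ** 2
           - 2 * norm_star(x) ** 2 - 2 * norm_star(y) ** 2) < 1e-9
```

The parallelogram law is exactly the property that characterizes norms induced by an inner product, which is why it is the natural thing to test here.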
Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and let
$$\langle x, y\rangle^{*}=\frac{1}{N}\sum_{i=1}^{N} \langle x_{i},y_{i}\rangle, \quad \forall x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2}, \ldots,y_{N}) \in X^{N} $$
be the inner product on the Cartesian product space \(X^{N}\). Then the following conclusions hold:
\((X^{N})^{*}=X^{*}\times X^{*}\times\cdots\times X^{*}\);
\(f \in(X^{N})^{*}\) if and only if there exist \(f_{i} \in X^{*}\), \(i\in\{1,2,3, \ldots,N \}\) such that
$$f(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x_{i}), \quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}. $$
(Here \((X^{N})^{*}\) and \(X^{*}\) denote the conjugate spaces of \(X^{N}\) and X, respectively.)
By Lemma 4.2, we obtain the conclusion (1). Next we prove the conclusion (2). Assume that \(f \in(X^{N})^{*}\). By Riesz's theorem and by Lemma 4.2, there exists an element \(y=(y_{1},y_{2},\ldots,y_{N}) \in X^{N}\) such that
$$f(x)=\langle x, y \rangle^{*}=\frac{1}{N}\sum_{i=1}^{N} \langle x_{i},y_{i}\rangle,\quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}. $$
Therefore there exist \(f_{i} \in X^{*}\), \(i\in\{1,2,3, \ldots,N \}\) such that
$$f(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x_{i}), \quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}, $$
where \(f_{i}(u):=\langle u,y_{i}\rangle\) for all \(u\in X\), \(i\in\{1,2,\ldots,N \}\).
Assume there exist \(f_{i} \in X^{*}\), \(i\in\{1,2,3, \ldots,N \}\) such that
$$f(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x_{i}), \quad \forall x=(x_{1},x_{2},\ldots,x_{N}) \in X^{N}. $$
It is easy to see that \(f \in(X^{N})^{*}\). This completes the proof. □
Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\|\cdot\|\). Consider on \(X^{N}\) the norm
$$\|x\|^{*}=\sqrt{\frac{1}{N} \sum_{i=1}^{N} \|x_{i}\|^{2}},\quad \forall x=(x_{1},x_{2}, \ldots,x_{N}) \in X^{N}. $$
Let \(T:X^{N}\rightarrow X\) be an N-variable nonexpansive mapping such that the multivariate fixed point set \(F(T)\) is nonempty. Then, for any given \(x_{0}=(x_{1}^{0},x_{2}^{0},\ldots,x_{N}^{0}) \in X^{N}\), the iterative sequences
$$ x_{i}^{n+1}=\alpha_{n}u_{i}+(1- \alpha_{n})T\bigl(x_{1}^{n},x_{2}^{n}, \ldots,x_{N}^{n}\bigr),\quad i=1,2,3,\ldots,N , $$
converge strongly to a multivariate fixed point p of T, where \(u=(u_{1},u_{2},\ldots,u_{N}) \in X^{N}\) is a fixed element and the sequence \(\{\alpha_{n}\} \subset[0,1]\) satisfies the conditions (C1), (C2), and (C3) below:
(C1):
\(\lim_{n\rightarrow\infty}\alpha_{n}=0\);
(C2):
\(\sum_{n=1}^{\infty}\alpha_{n}=+\infty\);
(C3):
\(\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha _{n}|<+\infty\).
We define a mapping \(T^{*}:X^{N}\rightarrow X^{N}\), \(x\mapsto T^{*}(x)\) by the following relation:
$$T^{*}(x_{1},x_{2},\ldots,x_{N}):= \bigl(T(x_{1},x_{2},\ldots,x_{N}),T(x_{1},x_{2}, \ldots,x_{N}),\ldots,T(x_{1},x_{2}, \ldots,x_{N})\bigr), $$
for all \((x_{1},x_{2},\ldots,x_{N})\in X^{N}\). Next we prove that \(T^{*}\) is a nonexpansive mapping from \((X^{N}, \|\cdot\|^{*})\) into itself. Observe that, for any \(x=(x_{1},x_{2},\ldots,x_{N}), y=(y_{1},y_{2},\ldots,y_{N})\in X^{N}\),
$$\begin{aligned} \bigl\| T^{*}x-T^{*}y\bigr\| ^{*}&=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|Tx-Ty\|^{2}}=\|Tx-Ty\| \\ &\leq\triangle\bigl(\|x_{1}-y_{1}\|,\|x_{2}-y_{2}\|,\ldots,\|x_{N}-y_{N}\|\bigr) \\ &\leq\sqrt{\frac{1}{N}\sum_{i=1}^{N}\|x_{i}-y_{i}\|^{2}}=\|x-y\|^{*}, \end{aligned}$$
where the first inequality is the nonexpansiveness of T and the last one holds whenever △ is dominated by the quadratic mean (for instance, for \(\triangle(a_{1},\ldots,a_{N})=\sqrt{\frac{1}{N}\sum_{i=1}^{N}a_{i}^{2}}\)).
Hence \(T^{*}\) is a nonexpansive mapping from \((X^{N},\|\cdot\|^{*})\) into itself. For any \(p \in F(T)=\{x \in X: x=T(x,x,\ldots,x)\}\), we have
$$T^{*}(p,p,\ldots,p)=\bigl(T(p,p,\ldots,p),T(p,p,\ldots,p), \ldots,T(p,p,\ldots,p) \bigr)=(p,p,\ldots,p), $$
hence \(p^{*}=(p,p,\ldots,p) \in X^{N}\) is a fixed point of \(T^{*}\). Therefore, the mapping \(T^{*}: X^{N}\rightarrow X^{N}\) is a nonexpansive mapping with a nonempty fixed point set
$$F\bigl(T^{*}\bigr)=\bigl\{ (p,p,\ldots,p)\in X^{N}: p \in F(T)\bigr\} . $$
By using the result of Wittmann [24], we know that, for any given \(x_{0} \in X^{N}\), Halpern's iterative sequence
$$ x_{n+1}=\alpha_{n}u+(1-\alpha_{n})T^{*}x_{n} $$
converges in the norm \(\|\cdot\|^{*}\) to a fixed point \(p^{*}=(p,p,\ldots,p)\) of \(T^{*}\), where \(u=(u_{1},u_{2},\ldots, u_{N}) \in X^{N}\). Let
$$x_{n}=\bigl(x_{1}^{n},x_{2}^{n}, \ldots,x_{N}^{n}\bigr), \quad n=0, 1,2,3, \ldots. $$
Then the iterative scheme (4.2) can be rewritten as (4.1). From \(x_{n} \rightarrow p^{*}\) (in the norm \(\|\cdot\|^{*}\)), we have \(x_{i}^{n}\rightarrow p\) in norm \(\|\cdot\|\) for all \(i=1,2,3,\ldots,N\). This completes the proof. □
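A toy numerical illustration of the Halpern-type scheme (4.1) (our own example, not from the paper): take \(X=\mathbb{R}\), \(N=2\), and \(T(x_{1},x_{2})=|x_{1}+x_{2}|/2\), which is nonexpansive with \(F(T)=[0,+\infty)\); with \(\alpha_{n}=1/(n+1)\), which satisfies (C1), (C2), and (C3), and anchor \(u=(-3,-3)\), the iterates approach 0, the fixed point of T closest to the anchor.

```python
def T(x1, x2):
    # 2-variable nonexpansive map on the reals; its multivariate fixed points
    # satisfy p = T(p, p) = |p|, i.e. F(T) = [0, +infinity)
    return abs(x1 + x2) / 2.0

def halpern(T, u, x0, steps=200000):
    # scheme (4.1): x_i^{n+1} = alpha_n * u_i + (1 - alpha_n) * T(x^n),
    # with alpha_n = 1/(n + 1)
    x = list(x0)
    for n in range(1, steps + 1):
        a = 1.0 / (n + 1)
        t = T(*x)
        x = [a * ui + (1 - a) * t for ui in u]
    return x

x = halpern(T, u=(-3.0, -3.0), x0=(-3.0, -3.0))
# converges (slowly, at rate about 1/n) to 0, the fixed point of T nearest u
assert all(abs(xi) < 1e-3 for xi in x)
```

The slow \(1/n\) rate is typical of Halpern-type schemes and is the price paid for strong convergence to the fixed point nearest the anchor u.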
If the condition (C3) is replaced by the condition (C4) of [25] or the condition (C5) of [26], then Theorem 4.4 still holds.
The construction of fixed points of nonexpansive mappings via Mann's algorithm has been extensively investigated in the literature (see, e.g., [27] and the references therein). Related work can also be found in [28–45]. Starting from an arbitrary \(x_{0} \in C\), Mann's algorithm generates a sequence according to the following recursive procedure:
$$ x_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \quad n\geq0, $$
where \(\{\alpha_{n}\}\) is a real control sequence in the interval \((0, 1)\).
If T is a nonexpansive mapping with at least one fixed point and if the control sequence \(\{\alpha_{n}\}\) is chosen so that \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=+\infty\), then the sequence \(\{x_{n}\}\) generated by Mann's algorithm (4.3) converges weakly, in a uniformly convex Banach space with a Fréchet differentiable norm (see [27]), to a fixed point of T.
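A toy numerical illustration of Mann's algorithm (our own example, with \(N=1\)): take T to be the rotation of the plane by 90°, an isometry (hence nonexpansive) whose only fixed point is the origin; the constant choice \(\alpha_{n}=1/2\) satisfies \(\sum_{n}\alpha_{n}(1-\alpha_{n})=+\infty\), and the averaged iterates converge to 0 (here even strongly, since the averaged map \((I+T)/2\) is a strict contraction on \(\mathbb{R}^{2}\)).

```python
import math

def T(p):
    # rotation of the plane by 90 degrees: an isometry (hence nonexpansive)
    # whose unique fixed point is the origin
    x, y = p
    return (-y, x)

def mann(T, p0, alpha=0.5, steps=200):
    # Mann's algorithm: x_{n+1} = alpha * x_n + (1 - alpha) * T(x_n)
    p = p0
    for _ in range(steps):
        q = T(p)
        p = (alpha * p[0] + (1 - alpha) * q[0],
             alpha * p[1] + (1 - alpha) * q[1])
    return p

p = mann(T, (4.0, -3.0))
# the averaged map (I + T)/2 scales lengths by cos(pi/4) < 1, so p -> (0, 0)
assert math.hypot(p[0], p[1]) < 1e-9
```

Note that the unaveraged iterates \(T^{n}x_{0}\) would just rotate forever without converging; the averaging is what produces convergence.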
Next we prove a weak convergence theorem for an N-variable nonexpansive mapping in Hilbert spaces.
Let X be a Hilbert space with the inner product \(\langle\cdot,\cdot\rangle\) and the norm \(\|\cdot\|\). Consider on the Cartesian product space \(X^{N}\) the norm
$$\|x\|^{*}=\sqrt{\frac{1}{N}\sum_{i=1}^{N} \|x_{i}\|^{2}}, \quad \forall x=(x_{1},x_{2}, \ldots,x_{N}) \in X^{N}. $$
Let \(T:X^{N}\rightarrow X\) be an N-variable nonexpansive mapping such that the multivariate fixed point set \(F(T)\) is nonempty. Consider, for any given \(x_{0}=(x_{1}^{0},x_{2}^{0},\ldots,x_{N}^{0}) \in X^{N}\), the following iterative sequences:
$$ x_{i}^{n+1}=\alpha_{n}x_{i}^{n}+(1- \alpha_{n})T\bigl(x_{1}^{n},x_{2}^{n}, \ldots,x_{N}^{n}\bigr),\quad i=1,2,3,\ldots,N, $$
where the sequence \(\{\alpha_{n}\} \subset[0,1]\) satisfies the condition \(\sum_{n=0}^{\infty}\alpha_{n}(1-\alpha_{n})=+\infty\).
Then the sequence \(\{x_{i}^{n}\}\) converges weakly to a multivariate fixed point p of T.
$$T^{*}(x_{1},x_{2},\ldots,x_{N}):= \bigl(T(x_{1},x_{2},\ldots,x_{N}),T(x_{1},x_{2}, \ldots,x_{N}),\ldots,T(x_{1},x_{2}, \ldots,x_{N})\bigr). $$
By Theorem 4.5 we know that \(T^{*}: X^{N}\rightarrow X^{N}\) is a nonexpansive mapping with a nonempty fixed point set
By Reich's result [27], for any given \(x_{0} \in X^{N}\), Mann's iterative sequence
$$ x_{n+1}=\alpha_{n} x_{n}+(1-\alpha_{n})T^{*}x_{n}, \quad n\geq0, $$
converges weakly to a fixed point \(p^{*}=(p,p,\ldots,p) \in F(T^{*})\), where \(p \in F(T)\). Since \(X^{N}\) is a Hilbert space, for any \(y=(y_{1},y_{2},\ldots,y_{N}) \in X^{N}\), we have
$$\bigl\langle x_{n}-p^{*}, y\bigr\rangle ^{*}=\frac{1}{N}\sum _{i=1}^{N}\bigl\langle x^{n}_{i}-p,y_{i} \bigr\rangle \rightarrow0, \quad\mbox{as } n\rightarrow\infty. $$
Therefore, for any \(i\in\{1,2,3,\ldots,N \}\), choosing \(y=(0,\ldots , 0, y_{i},0,\ldots,0)\), we get
$$\bigl\langle x^{n}_{i}-p, y_{i}\bigr\rangle \rightarrow0 \quad\mbox{as } n\rightarrow \infty. $$
Hence \(\langle x^{n}_{i}, y_{i}\rangle\rightarrow\langle p, y_{i}\rangle\) as \(n\rightarrow\infty\), for any \(i\in\{1,2,3,\ldots,N \}\). This shows that the iterative sequences \(\{x^{n}_{i}\}\), \(i\in\{1,2,3,\ldots,N \}\), defined by (4.4) converge weakly to a multivariate fixed point p of T. This completes the proof. □
The method presented above can be successfully applied to several other iterative schemes in order to prove weak and strong convergence theorems for the multivariate fixed points of N-variable nonexpansive type mappings.
For this work, the second author benefits from the financial support of a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0094. The third author was partially supported by the Grant MOST 103-2923-E-039-001-MY3.
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Department of Mathematics, Tianjin Polytechnic University, Tianjin, 300387, China
Department of Mathematics, Babeş-Bolyai University, Cluj-Napoca, 400084, Romania
Center for General Education, China Medical University, Taichung, 40402, Taiwan
Browder, FE: On the convergence of successive approximations for nonlinear functional equations. Nederl. Akad. Wetensch. Proc. Ser. A 71 = Indag. Math. 30, 27-35 (1968)
Boyd, DW, Wong, JSW: On nonlinear contractions. Proc. Am. Math. Soc. 20, 458-464 (1969)
Jachymski, J: Equivalence of some contractivity properties over metrical structures. Proc. Am. Math. Soc. 125, 2327-2335 (1997)
Jachymski, J, Jóźwik, I: Nonlinear contractive conditions: a comparison and related problems. In: Fixed Point Theory and Its Applications. Banach Center Publ., vol. 77, pp. 123-146. Polish Acad. Sci., Warsaw (2007)
Geraghty, M: On contractive mappings. Proc. Am. Math. Soc. 40, 604-608 (1973)
Samet, B, Vetro, C, Vetro, P: Fixed point theorems for α-ψ-contractive type mappings. Nonlinear Anal. 75, 2154-2165 (2012)
Su, Y, Yao, J-C: Further generalized contraction mapping principle and best proximity theorem in metric spaces. Fixed Point Theory Appl. 2015, 120 (2015)
Kir, M, Kiziltunc, H: The concept of weak \((\psi,\alpha,\beta)\) contractions in partially ordered metric spaces. J. Nonlinear Sci. Appl. 8(6), 1141-1149 (2015)
Asgari, MS, Badehian, Z: Fixed point theorems for α-β-ψ-contractive mappings in partially ordered sets. J. Nonlinear Sci. Appl. 8(5), 518-528 (2015)
Amini-Harandi, A, Emami, H: A fixed point theorem for contraction type maps in partially ordered metric spaces and application to ordinary differential equations. Nonlinear Anal. 72, 2238-2242 (2010)
Harjani, J, Sadarangani, K: Fixed point theorems for weakly contractive mappings in partially ordered sets. Nonlinear Anal. 71, 3403-3410 (2009)
Gnana Bhaskar, T, Lakshmikantham, V: Fixed point theorems in partially ordered metric spaces and applications. Nonlinear Anal. 65, 1379-1393 (2006)
Lakshmikantham, V, Ćirić, L: Coupled fixed point theorems for nonlinear contractions in partially ordered metric spaces. Nonlinear Anal. 70, 4341-4349 (2009)
Nieto, JJ, Rodríguez-López, R: Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations. Order 22, 223-239 (2005)
Nieto, JJ, Rodríguez-López, R: Existence and uniqueness of fixed point in partially ordered sets and applications to ordinary differential equations. Acta Math. Sin. 23, 2205-2212 (2007)
O'Regan, D, Petruşel, A: Fixed point theorems for generalized contractions in ordered metric spaces. J. Math. Anal. Appl. 341, 1241-1252 (2008)
Harjani, J, Sadarangani, K: Generalized contractions in partially ordered metric spaces and applications to ordinary differential equations. Nonlinear Anal. 72, 1188-1197 (2010)
Yan, FF, Su, Y, Feng, Q: A new contraction mapping principle in partially ordered metric spaces and applications to ordinary differential equations. Fixed Point Theory Appl. 2012, 152 (2012)
Khan, MS, Swaleh, M, Sessa, S: Fixed point theorems by altering distances between the points. Bull. Aust. Math. Soc. 30(1), 1-9 (1984)
Lee, H, Kim, S: Multivariate coupled fixed point theorems on ordered partial metric spaces. J. Korean Math. Soc. 51(6), 1189-1207 (2014)
Prešić, SB: Sur une classe d'inéquations aux différences finies et sur la convergence de certaines suites. Publ. Inst. Math. (Belgr.) 5, 75-78 (1965)
Ćirić, LB, Prešić, SB: On Prešić type generalization of the Banach contraction principle. Acta Math. Univ. Comen. 76(2), 143-147 (2007)
Tasković, MR: Monotonic mappings on ordered sets, a class of inequalities with finite differences and fixed points. Publ. Inst. Math. (Belgr.) 17, 163-172 (1974)
Wittmann, R: Approximation of fixed points of nonexpansive mappings. Arch. Math. 58, 486-491 (1992)
Lions, PL: Approximation de points fixes de contractions. C. R. Acad. Sci. Paris Sér. A-B 284, 1357-1359 (1977)
Xu, HK: Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 65, 109-113 (2002)
Reich, S: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67, 274-276 (1979)
Browder, FE, Petryshyn, WV: Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 20, 197-228 (1967)
Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)
Kim, TH, Xu, HK: Strong convergence of modified Mann iterations. Nonlinear Anal. 61, 51-60 (2005)
Kim, TH, Xu, HK: Strong convergence of modified Mann iterations for asymptotically nonexpansive mappings and semigroups. Nonlinear Anal. 64, 1140-1152 (2006)
Lions, PL: Approximation de points fixes de contractions. C. R. Acad. Sci. Paris Sér. A-B 284, 1357-1359 (1977)
Martinez-Yanes, C, Xu, HK: Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 64, 2400-2411 (2006)
Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)
O'Hara, JG, Pillay, P, Xu, HK: Iterative approaches to finding nearest common fixed points of nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 54, 1417-1426 (2003)
O'Hara, JG, Pillay, P, Xu, HK: Iterative approaches to convex feasibility problems in Banach spaces. Nonlinear Anal. 64, 2022-2042 (2006)
Reich, S: Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 75, 287-292 (1980)
Scherzer, O: Convergence criteria of iterative methods based on Landweber iteration for solving nonlinear problems. J. Math. Anal. Appl. 194, 911-933 (1995)
Shioji, N, Takahashi, W: Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces. Proc. Am. Math. Soc. 125, 3641-3645 (1997)
Tan, KK, Xu, HK: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178(2), 301-308 (1993)
Tan, KK, Xu, HK: Fixed point iteration processes for asymptotically nonexpansive mappings. Proc. Am. Math. Soc. 122, 733-739 (1994)
Wittmann, R: Approximation of fixed points of nonexpansive mappings. Arch. Math. 58, 486-491 (1992)
Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
Xu, HK: Remarks on an iterative method for nonexpansive mappings. Commun. Appl. Nonlinear Anal. 10(1), 67-75 (2003) MATHGoogle Scholar
Xu, HK: Strong convergence of an iterative method for nonexpansive mappings and accretive operators. J. Math. Anal. Appl. 314, 631-643 (2006) MATHMathSciNetView ArticleGoogle Scholar
Recent Progress in Fixed Point Theory and Applications (2015) | CommonCrawl |
\begin{document}
\title{$\varphi$-contractibility and $\varphi$-Connes amenability coincide with some older notions}
\setcounter{section}{0} \begin{abstract} It is shown that various definitions of $\varphi$-Connes amenability as introduced independently in \cite{Gh-Ja, mah, Sh-Am}, are just rediscovering existing notions and presenting them in different ways. It is also proved that even $\varphi$-contractibility as defined in \cite{Sangani}, is equivalent to an older and simpler concept. \end{abstract}
\section{Introduction} The fruitful notion of amenability was introduced by B. E. Johnson in \cite{Joh}. A generalization of amenability depending on homomorphisms was introduced and studied by E. Kaniuth, A. T. Lau and J. Pym \cite{Lau}, and independently by M. S. Monfared \cite{Sangani1}. For a Banach algebra $\mathfrak{A}$, we write $\Delta(\mathfrak{A})$ for the set of all homomorphisms from $\mathfrak{A}$ onto $ \mathbb{C}$. Let $\varphi \in \Delta(\mathfrak{A})$. An element $m \in \mathfrak{A}^{**}$ is called a \textit{right} [\textit{left}] $\varphi$-\textit{mean} if $ m(\varphi)= 1$ and $ m( f \cdot a) = \varphi(a) m(f)$ [$ m( a \cdot f) = \varphi(a) m(f)$] for $a \in \mathfrak{A}$ and $f \in \mathfrak{A}^*$. A Banach algebra is \textit{right} [\textit{left}] $\varphi$-\textit{amenable} if it has a right [left] $\varphi$-mean \cite{Lau, Sangani1}. We call $\mathfrak{A}$ $\varphi$-\textit{amenable} if it is both left and right $\varphi$-amenable.
Later in \cite{Sangani}, the authors introduced the concept of $\varphi$-contractibility. Let $\mathfrak{A}$ be a Banach algebra and $E$ be a Banach $\mathfrak{A}$-bimodule. A continuous linear operator $D:\mathfrak{A} \longrightarrow E$ is a \textit{derivation} if it satisfies $ D(ab) = D(a)\cdot b + a \cdot D(b) $ for all $a,b \in \mathfrak{A}$. Given $x \in E$, the \textit{inner} derivation $ad_x : \mathfrak{A} \longrightarrow E$ is defined by $ad_x(a) = a\cdot x - x\cdot a$. Let $ \varphi \in \Delta (\mathfrak{A}) $. We write $ \mathbb{M}_\varphi$ $ [ _\varphi \mathbb{M} ]$ for the set of all Banach $\mathfrak{A}$-bimodules $E$ such that the right [left] module action of $\mathfrak{A}$ on $E$ is given by $ x \cdot a := \varphi(a) x$ $[ a \cdot x := \varphi(a) x]$ for $a \in \mathfrak{A}$, $x \in E$. Precisely, $\mathfrak{A}$ is \textit{right} [\textit{left}] $\varphi$-\textit{contractible} if for each Banach $\mathfrak{A}$-bimodule $E \in \mathbb{M}_\varphi$ $ [E \in$ $_\varphi \mathbb{M}]$, every derivation $D:\mathfrak{A} \longrightarrow E$ is inner. We say $\mathfrak{A}$ is $\varphi$-contractible if it is both left and right $\varphi$-contractible.
Recently, motivated by the above notions, several authors have independently defined and studied the concept of $\varphi$-Connes amenability, where $\varphi$ is a $w^*$-continuous homomorphism on a dual Banach algebra \cite{Gh-Ja, mah, Sh-Am}. We note that the preprint \cite{mah} has been publicly available for over three years.
In this brief note, we examine $\varphi$-contractibility and $\varphi$-Connes amenability in detail. Although considerable effort has gone into studying these concepts, we shall see that, for all three versions of $\varphi$-Connes amenability introduced in \cite{Gh-Ja, mah, Sh-Am}, \textit{none} of them is new: each coincides with both $\varphi$-amenability and $\varphi$-contractibility. Next, we shall prove that the concept of $\varphi$-contractibility is also equivalent to an existing notion. On closer inspection, saying that a Banach algebra $ \mathfrak{A}$ is $\varphi$-contractible is equivalent to saying that the one-dimensional Banach $ \mathfrak{A}$-bimodule $\mathbb{C}_\varphi$ is projective. Although this concept goes back to Helemskii's work in the 1970s (see his book \cite{Hel}, or alternatively a paper of White \cite{Wh}), most of the authors who have studied $\varphi$-contractibility seem unaware of this fact.
\section{$\varphi$-Connes amenability }
Let $\mathfrak{A}$ be a Banach algebra. A Banach $\mathfrak{A}$-bimodule $E$ is \textit{dual} if there is a closed submodule $E_*$ of $E^*$ such that $E = (E_*)^*$. We call $E_*$ the \textit{predual} of $E$. A Banach algebra $\mathfrak{A}$ is \textit{dual} if it is dual as a Banach $\mathfrak{A}$-bimodule. We write $\mathfrak{A} = (\mathfrak{A}_*)^*$ if we wish to stress that $\mathfrak{A}$ is a dual Banach algebra with predual $\mathfrak{A}_*$.
We start with the definition of $\varphi$-Connes amenability in the sense of \cite{Gh-Ja}.
Let $ \mathfrak{A}=( \mathfrak{A}_*)^*$ be a dual Banach algebra, and let $\varphi \in \Delta (\mathfrak{A}) \cap \mathfrak{A}_*$. A dual Banach $\mathfrak{A}$-bimodule $E \in$ $_\varphi \mathbb{M} $ is \textit{normal} if the module action $ a \longmapsto x \cdot a$ of $\mathfrak{A}$ on $E$ is $w^*$-continuous. A dual Banach algebra $ \mathfrak{A}=( \mathfrak{A}_*)^*$ is \textit{left} $\varphi$-\textit{Connes amenable} if for every normal dual Banach $ \mathfrak{A}$-bimodule $E \in$ $_\varphi \mathbb{M} $, every $w^*$-continuous derivation $ D : \mathfrak{A} \longrightarrow E$ is inner. Although they consider just left $\varphi$-Connes amenable Banach algebras, there are similar definitions for right $\varphi$-Connes amenable and $\varphi$-Connes amenable Banach algebras. The authors show that (left) $\varphi$-Connes amenability of $ \mathfrak{A}$ is equivalent to the existence of a (left) $\varphi$-mean \cite[Theorem 2.3]{Gh-Ja}.
\begin{theo} \label{2.1} Suppose that $\mathfrak{A}=( \mathfrak{A}_*)^*$ is a dual Banach algebra and $\varphi \in \Delta (\mathfrak{A}) \cap \mathfrak{A}_*$. Then the following statements are equivalent:
$(i)$ $\mathfrak{A}$ is $\varphi$-Connes amenable in the sense of \cite{Gh-Ja};
$(ii)$ $\mathfrak{A}$ is $\varphi$-contractible;
$(iii)$ $\mathfrak{A}$ is $\varphi$-amenable. \end{theo} {\bf Proof.} The implications $(ii) \Longrightarrow (iii)$ and $(iii) \Longrightarrow (i)$ are immediate.
$(i) \Longrightarrow (ii)$ Take a $\varphi$-mean $ m \in \mathfrak{A}^{**}$. Consider the $\mathfrak{A}$-bimodule inclusion map $ \imath : \mathfrak{A}_* \longrightarrow \mathfrak{A}^*$. Taking adjoints, we obtain a $w^*$-$w^*$-continuous $\mathfrak{A}$-bimodule map $ \xi :\mathfrak{A}^{**} \longrightarrow \mathfrak{A}$. Now put $u = \xi(m) \in \mathfrak{A}$. It is easily checked that $\varphi(u)=1$ and $ua = a u = \varphi(a) u$, for all $a \in \mathfrak{A}$. Therefore by Theorem \ref{3.1} below, $\mathfrak{A}$ is $\varphi$-contractible.
$(ii) \Longrightarrow (i)$ Again by Theorem \ref{3.1}, there is an element $ u \in \mathfrak{A}$ satisfying $\varphi(u)=1$ and $ua = au = \varphi(a) u$, for all $a \in \mathfrak{A}$. It is readily seen that $u$ is a $\varphi$-mean and hence $\mathfrak{A}$ is $\varphi$-Connes amenable. \qed
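The two claims left to the reader in the implication $(i) \Longrightarrow (ii)$ can be written out explicitly. With the standard dual-module conventions $\langle a \cdot m , f \rangle = \langle m , f \cdot a \rangle$ for $m \in \mathfrak{A}^{**}$, $f \in \mathfrak{A}^*$ (a notational assumption on our part), and using that $\xi = \imath^*$ is an $\mathfrak{A}$-bimodule map, one computes for $u = \xi(m)$:

```latex
\varphi(u) = \langle \xi(m) , \varphi \rangle = \langle m , \imath(\varphi) \rangle = m(\varphi) = 1 ,
\qquad
a u = a \cdot \xi(m) = \xi( a \cdot m ) = \xi\big( \varphi(a)\, m \big) = \varphi(a)\, u ,
```

since $\langle a \cdot m , f \rangle = m( f \cdot a ) = \varphi(a) m(f)$ for every $f \in \mathfrak{A}^*$ by the right $\varphi$-mean property; the identity $u a = \varphi(a) u$ follows in the same way from the left $\varphi$-mean property.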
Now, we consider the definition of $\varphi$-Connes amenability in the sense of \cite{Sh-Am}. Let $ \mathfrak{A}=( \mathfrak{A}_*)^*$ be a dual Banach algebra, and let $\varphi $ be a non-zero $w^*$-continuous multiplicative linear functional on $ \mathfrak{A}$. The authors in \cite[Definition 2.1]{Sh-Am} say that $ \mathfrak{A}$ is (\textit{left}) $\varphi$-\textit{Connes amenable} if there exists $m \in \mathfrak{A}$ such that $ m(\varphi) = 1$ and $a m = \varphi(a) m$, for every $a \in \mathfrak{A}$. Then by Theorem \ref{3.1}, left $\varphi$-Connes amenability is nothing but right $\varphi$-contractibility. Hence the following is straightforward.
\begin{theo} \label{4.1} Suppose that $\mathfrak{A}$ is a dual Banach algebra and $\varphi$ is a non-zero $w^*$-continuous multiplicative linear functional on $ \mathfrak{A}$. Then the following statements are equivalent:
$(i)$ $\mathfrak{A}$ is $\varphi$-Connes amenable (in the sense of \cite{Sh-Am});
$(ii)$ $\mathfrak{A}$ is $\varphi$-contractible;
$(iii)$ $\mathfrak{A}$ is $\varphi$-amenable. \end{theo}
Let $\mathfrak{A} $ be a dual Banach algebra and let $E$ be a Banach $\mathfrak{A}$-bimodule. Following \cite{Rund}, we write $\sigma wc(E)$ for the set of all elements $ x \in E$ such that the maps $$ \mathfrak{A} \longrightarrow E , \qquad a \longmapsto a \cdot x \quad \text{and} \quad a \longmapsto x \cdot a \ , $$ are $w^*$-weakly continuous.
We conclude by looking at the definition of $\varphi$-Connes amenability from \cite{mah}. Suppose that $\mathfrak{A}$ is a dual Banach algebra and $\varphi$ is a homomorphism from $\mathfrak{A}$ onto $\mathbb{C}$. Then it is an easy observation that $\varphi$ is $w^*$-continuous if and only if $ \varphi \in \sigma wc (\mathfrak{A}^*)$. Suppose that $\mathfrak{A}$ is a dual Banach algebra and $\varphi $ is a $w^*$-continuous homomorphism from $\mathfrak{A}$ onto $\mathbb{C}$. We call $\mathfrak{A}$ (\textit{right}) $\varphi$-\textit{Connes amenable} if $\mathfrak{A}$ admits a (\textit{right}) $\varphi$-\textit{Connes mean} $m$, i.e., there exists a bounded linear functional $m$ on $ \sigma wc (\mathfrak{A}^*)$ satisfying $ m(\varphi) = 1$ and $ m ( f \ . \ a) = \varphi(a) m(f)$ for all $a \in \mathfrak{A}$ and $ f \in \sigma wc (\mathfrak{A}^*)$. Similarly, we may consider \textit{left} $\varphi$-Connes amenability. Meanwhile, $\mathfrak{A}$ is $\varphi$-\textit{Connes amenable} if it is both left and right $\varphi$-Connes amenable.
\begin{theo} \label{4.2} Suppose that $\mathfrak{A}=( \mathfrak{A}_*)^*$ is a dual Banach algebra and $\varphi$ is a non-zero $w^*$-continuous multiplicative linear functional on $ \mathfrak{A}$. Then the following statements are equivalent:
$(i)$ $\mathfrak{A}$ is $\varphi$-Connes amenable (in the sense of \cite{mah});
$(ii)$ $\mathfrak{A}$ is $\varphi$-contractible;
$(iii)$ $\mathfrak{A}$ is $\varphi$-amenable. \end{theo} {\bf Proof.} Only $(i)\Longleftrightarrow (ii)$ needs proof, since $(ii)\Longleftrightarrow (iii)$ is contained in Theorem \ref{2.1}. Let $ \iota : \mathfrak{A} \longrightarrow \sigma wc (\mathfrak{A}^*)^*$ be the $\mathfrak{A}$-bimodule map obtained by composing the canonical inclusion $\mathfrak{A} \longrightarrow \mathfrak{A}^{**}$ with the quotient map $ \mathfrak{A}^{**} \longrightarrow \sigma wc (\mathfrak{A}^*)^*$, so that $ \langle \iota(a) , \psi \rangle = \psi (a)$ for all $a \in \mathfrak{A}$ and $ \psi \in \sigma wc (\mathfrak{A}^*)$.
$(i) \Longrightarrow (ii)$ Since $\mathfrak{A}$ is a dual Banach algebra, $\mathfrak{A}_*$ is an $\mathfrak{A}$-bimodule and $\mathfrak{A}_* \subseteq \sigma wc (\mathfrak{A}^*)$ \cite[Corollary 4.6]{Rund}. Therefore taking adjoints gives us a $w^*$-$w^*$-continuous $\mathfrak{A}$-bimodule map $ \xi : \sigma wc (\mathfrak{A}^*)^* \longrightarrow \mathfrak{A}$. Notice that $\xi \circ \iota (a) = a$ for all $a \in \mathfrak{A}$. By the assumption, there exists a $\varphi$-Connes mean $m \in \sigma wc (\mathfrak{A}^*)^*$. Setting $u = \xi(m) \in \mathfrak{A}$, we observe that $\mathfrak{A}$ is $\varphi$-contractible by Theorem \ref{3.1}.
$(ii) \Longrightarrow (i)$ Take $ u \in \mathfrak{A}$ satisfying $\varphi(u)=1$ and $u a = au = \varphi(a) u$, for all $a \in \mathfrak{A}$. Then $\iota(u)$ is a $\varphi$-Connes mean on $ \sigma wc (\mathfrak{A}^*)$. \qed
\section{$\varphi$-contractibility }
It was shown that right [left] $\varphi$-contractibility of $\mathfrak{A}$ is equivalent to the existence of a \textit{right} [\textit{left}] $\varphi$-\textit{diagonal} for $\mathfrak{A}$, i.e., an element $ m \in \mathfrak{A} \widehat{\otimes} \mathfrak{A}$ such that $ \varphi ( \pi (m) ) = 1$ and $ a\cdot m = \varphi (a) m $ $[m \cdot a = \varphi (a) m]$ for $a \in \mathfrak{A}$, where $ \pi : \mathfrak{A} \widehat{\otimes} \mathfrak{A} \longrightarrow \mathfrak{A}$ is the bounded linear map determined by $ \pi (a \otimes b ) = ab$. If $m$ is both a left and a right $\varphi$-diagonal, it is called a $\varphi$-\textit{diagonal}.
The following is likely to be well-known, but since we could not locate a reference, we include a proof.
\begin{theo} \label{3.1}Suppose that $\mathfrak{A}$ is a Banach algebra and $\varphi \in \Delta(\mathfrak{A} ) $. Then $\mathfrak{A}$ is $\varphi$-contractible if and only if there exists an element $u \in \mathfrak{A}$ satisfying $$ (*) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \varphi(u) = 1 \ \ \text{and} \ \ u a = a u = \varphi(a) u \ \ \ \ \ \ \ (a \in \mathfrak{A}) \ .$$ \end{theo} {\bf Proof.} Suppose first that $u \in \mathfrak{A}$ satisfies the conditions in $(*)$. Let $ D : \mathfrak{A} \longrightarrow E$ be a derivation for a Banach $\mathfrak{A}$-bimodule $E \in \mathbb{M}_\varphi$. It is routinely checked that $ D^* ( f \ . \ a ) = D^* ( f ) \ . \ a - \langle Da , f \rangle \varphi$, for each $f \in E^*$ and $a \in \mathfrak{A}$. Put $ t := Du \in E$. For $f \in E^*$ and $a \in \mathfrak{A}$ we have \begin{align*} \langle f, a \ . \ t \rangle &= \langle D^* ( f \ . \ a ) , u \rangle = \langle D^* ( f ) , a u \rangle - \langle Da , f \rangle \\&= \varphi(a) \langle D^* ( f ) , u \rangle - \langle Da , f \rangle = \varphi(a) \langle f , t \rangle - \langle Da , f \rangle \ . \end{align*} Therefore $a \ . \ t = \varphi(a) t - Da $ for all $a \in \mathfrak{A}$, and hence $ D = ad_{-t}$. Thus $\mathfrak{A}$ is right $\varphi$-contractible. A similar argument shows that $\mathfrak{A}$ is also left $\varphi$-contractible.
Conversely, suppose that $\mathfrak{A}$ is $\varphi$-contractible. Then, by the characterization quoted above, there is a $\varphi$-diagonal $ m \in \mathfrak{A} \widehat{\otimes} \mathfrak{A}$ for $\mathfrak{A}$. Put $u:= \pi(m) \in \mathfrak{A}$. Now, it is easily checked that $u$ has the desired properties in $(*)$.\qed
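For completeness, the final verification can be written out. Since $\pi$ is an $\mathfrak{A}$-bimodule map, for $u = \pi(m)$ and every $a \in \mathfrak{A}$ we have

```latex
\varphi(u) = \varphi(\pi(m)) = 1 ,
\qquad
a u = a\,\pi(m) = \pi( a \cdot m ) = \pi\big( \varphi(a)\, m \big) = \varphi(a)\, u ,
```

and $u a = \varphi(a) u$ follows symmetrically, using the two $\varphi$-diagonal identities $a \cdot m = \varphi(a) m$ and $m \cdot a = \varphi(a) m$.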
Let $\mathfrak{A}$ be a Banach algebra with the \textit{unitization} $\mathfrak{A}^\sharp $, and let $P$ be a left Banach $\mathfrak{A}$-module. We recall that $P$ is a \textit{projective} left $\mathfrak{A}$-module if the multiplication map $$ \pi : \mathfrak{A}^\sharp \widehat{\otimes} P \longrightarrow P \ ; \ \ a \otimes x \longmapsto a \cdot x \ \ \ (a \in \mathfrak{A}^\sharp, x \in P)$$ has a right inverse which is also a left $\mathfrak{A}$-module homomorphism. Similar definitions hold for projective right $\mathfrak{A}$-modules and projective $\mathfrak{A}$-bimodules.
For $\varphi \in \Delta(\mathfrak{A} ) $, the space $ \mathbb{C}_\varphi = \{ \alpha \varphi \ : \ \alpha \in \mathbb{C} \}$ is a Banach $\mathfrak{A}$-bimodule with module actions $ a \cdot \varphi = \varphi \cdot a := \varphi(a) \varphi$, $(a \in \mathfrak{A})$.
\begin{theo} \label{3.2}Suppose that $\mathfrak{A}$ is a Banach algebra and $\varphi \in \Delta(\mathfrak{A} ) $. Then $\mathfrak{A}$ is $\varphi$-contractible if and only if $ \mathbb{C}_\varphi$ is a projective Banach $\mathfrak{A}$-bimodule. \end{theo} {\bf Proof.} Without loss of generality, we may assume that $\mathfrak{A}$ is unital. Let $ \mathbb{C}_\varphi$ be projective as a left $\mathfrak{A}$-module. Then there exists a bounded linear map $ \rho : \mathbb{C}_\varphi \longrightarrow \mathfrak{A} \widehat{\otimes} \mathbb{C}_\varphi$ satisfying $ \pi \rho (\varphi) = \varphi$ and $a \cdot \rho (\varphi) = \varphi(a) \rho (\varphi)$ for each $a \in \mathfrak{A}$. We have $ \rho (\varphi) = \sum_{n=1}^\infty a_n \otimes \varphi$, where $a_n \in \mathfrak{A}$
$(n=1,2,\dots)$ with $ \sum_{n=1}^\infty \| a_n \| < \infty $. Putting $ u:=\sum_{n=1}^\infty a_n \in \mathfrak{A}$, we observe that $\varphi (u) = 1$ and $ a u = \varphi(a) u$ for all $a \in \mathfrak{A}$. Now, by the argument in the proof of Theorem \ref{3.1}, $\mathfrak{A}$ is right $\varphi$-contractible.
Conversely, let $\mathfrak{A}$ be right $\varphi$-contractible. Take $ u \in \mathfrak{A}$ with $\varphi (u) = 1$ and $ a u = \varphi(a) u$ for all $a \in \mathfrak{A}$. Then it is easy to verify that the map $ \rho : \mathbb{C}_\varphi \longrightarrow \mathfrak{A} \widehat{\otimes} \mathbb{C}_\varphi$ defined by $ \rho (\varphi) := u \otimes \varphi$ is a left $\mathfrak{A}$-module homomorphism which is a right inverse of $\pi$. Hence $\mathbb{C}_\varphi$ is a projective left $\mathfrak{A}$-module.
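The omitted verification in the converse direction can be displayed explicitly. With $\rho(\varphi) := u \otimes \varphi$ one checks

```latex
\pi( \rho(\varphi) ) = u \cdot \varphi = \varphi(u)\, \varphi = \varphi ,
\qquad
a \cdot \rho(\varphi) = (a u) \otimes \varphi = \varphi(a)\, u \otimes \varphi = \rho( \varphi(a)\, \varphi ) = \rho( a \cdot \varphi ) ,
```

so $\rho$ is indeed a left $\mathfrak{A}$-module homomorphism and a right inverse of $\pi$.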
Similarly, one can see that $\mathbb{C}_\varphi$ is a projective right $\mathfrak{A}$-module if and only if $\mathfrak{A}$ is left $\varphi$-contractible. \qed
\end{document}
\begin{document}
\title{Saturation numbers for Ramsey-minimal graphs} \author{Martin Rolek ~and~ Zi-Xia Song\thanks{Corresponding author. \newline
E-mail addresses: [email protected] (M. Rolek), [email protected] (Z-X. Song). }\\ Department of Mathematics\\ University of Central Florida\\
Orlando, FL 32816 }
\maketitle \begin{abstract}
Given graphs $H_1, \dots, H_t$, a graph $G$ is \dfn{$(H_1, \dots, H_t)$-Ramsey-minimal} if every $t$-coloring of the edges of $G$ contains a monochromatic $H_i$ in color $i$ for some $i\in\{1, \dots, t\}$, but no proper subgraph of $G$ possesses this property. We define $\mathcal{R}_{\min}(H_1, \dots, H_t)$ to be the family of $(H_1, \dots, H_t)$-Ramsey-minimal graphs. A graph $G$ is \dfn{$\mathcal{R}_{\min}(H_1, \dots, H_t)$-saturated} if no element of $\mathcal{R}_{\min}(H_1, \dots, H_t)$ is a subgraph of $G$, but for any edge $e$ in $\overline{G}$, some element of $\mathcal{R}_{\min}(H_1, \dots, H_t)$ is a subgraph of $G + e$. We define $sat(n, \mathcal{R}_{\min}(H_1, \dots, H_t))$ to be the minimum number of edges over all $\mathcal{R}_{\min}(H_1, \dots, H_t)$-saturated graphs on $n$ vertices. In 1987, Hanson and Toft conjectured that $sat(n, \mathcal{R}_{\min}(K_{k_1}, \dots, K_{k_t}) )= (r - 2)(n - r + 2)+\binom{r - 2}{2} $ for $n \ge r$, where $r=r(K_{k_1}, \dots, K_{k_t})$ is the classical Ramsey number for complete graphs. The first non-trivial case of Hanson and Toft's conjecture for sufficiently large $n$ was settled in 2011, and is so far the only settled case. Motivated by Hanson and Toft's conjecture, we study the minimum number of edges over all $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated graphs on $n$ vertices, where $\mathcal{T}_k$ is the family of all trees on $k$ vertices. We show that for $n \ge 18$, $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_4)) =\lfloor {5n}/{2}\rfloor$. 
For $k \ge 5$ and $n \ge 2k + (\lceil k/2 \rceil +1) \lceil k/2 \rceil -2$, we obtain an asymptotic bound for $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k))$ by showing that $\left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right) n -c\le sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k)) \le \left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right) n + C$, where $c=\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil + \frac{3}{2} \right) k -2$ and $C= 2k^2-6k+\frac32-\left\lceil\frac{k}{2}\right\rceil \left(k- \frac{1}{2}\left\lceil\frac{k}{2}\right\rceil -1\right)$. \end{abstract}
{\bf AMS Classification}: 05C55; 05C35.
{\bf Keywords}: Ramsey-minimal; saturation number; saturated graph \baselineskip=18pt
\section{Introduction}\label{sec:Intro}
All graphs considered in this paper are finite and without loops or multiple edges. For a graph $G$, we will use $V(G)$ to denote the vertex set, $E(G)$ the edge set, $|G|$ the number of vertices, $e(G)$ the number of edges, $\delta(G)$ the minimum degree, $\Delta(G)$ the maximum degree, and $\overline{G}$ the complement of $G$. Given vertex sets $A, B \subseteq V(G)$, we say that $A$ is \dfn{complete to} (resp. \dfn{anti-complete to}) $B$ if for every $a \in A$ and every $b \in B$, $ab \in E(G)$ (resp. $ab \notin E(G)$). The subgraph of $G$ induced by $A$, denoted $G[A]$, is the graph with vertex set $A$ and edge set $\{xy \in E(G): x, y \in A\}$. We denote by $B \backslash A$ the set $B - A$, $e_G(A, B)$ the number of edges between $A$ and $B$ in $G$, and $G \backslash A$ the subgraph of $G$ induced on $V(G) \backslash A$, respectively. If $A = \{a\}$, we simply write $B \backslash a$, $e_G(a, B)$, and $G \backslash a$, respectively. For any edge $e\in E(\overline{G})$, we use $G+e$ to denote the graph obtained from $G$ by adding the new edge $e$. The {\dfn{join}} $G\vee H$ (resp.~{\dfn{union}} $G\cup H$) of two vertex disjoint graphs $G$ and $H$ is the graph having vertex set $V(G)\cup V(H)$ and edge set $E(G) \cup E(H)\cup \{xy: x\in V(G), y\in V(H)\}$ (resp. $E(G)\cup E(H)$). Given two isomorphic graphs $G$ and $H$, we may (with a slight but common abuse of notation) write $G = H$. For an integer $t\ge1$ and a graph $H$, we define $tH$ to be the union of $t$ disjoint copies of $H$. We use $K_n$, $K_{1,{n-1}}$, $C_n$, $P_n$ and $T_n$ to denote the complete graph, star, cycle, path and a tree on $n$ vertices, respectively.
Given graphs $G$, ${H}_1, \dots, {H}_t$, we write \dfn{$G \rightarrow ({H}_1, \dots, {H}_t)$} if every $t$-edge-coloring of $G$ contains a monochromatic ${H}_i$ in color $i$ for some $i\in\{1,2, \dots, t\}$. The classical \dfn{Ramsey number} $r({H}_1, \dots, {H}_t)$ is the minimum positive integer $n$ such that $K_n \rightarrow ({H}_1, \dots, {H}_t)$. A graph $G$ is \dfn{$({H}_1, \dots, {H}_t)$-Ramsey-minimal} if $G \rightarrow ({H}_1, \dots, {H}_t)$, but for any proper subgraph $G'$ of $G$, $G' \not\rightarrow ({H}_1, \dots, {H}_t)$. We define $\mathcal{R}_{\min}({H}_1, \dots, {H}_t)$ to be the family of $({H}_1, \dots, {H}_t)$-Ramsey-minimal graphs. It is straightforward to prove by induction that a graph $G$ satisfies $G \rightarrow ({H}_1, \dots, {H}_t)$ if and only if there exists a subgraph $G'$ of $G$ such that $G'$ is $({H}_1, \dots, {H}_t)$-Ramsey-minimal. Ramsey's theorem~\cite{Ramsey1930} implies that $\mathcal{R}_{\min}({H}_1, \dots, {H}_t)\ne\emptyset$ for all integers $t$ and all finite graphs $H_1, \dots, H_t$. As pointed out in a recent paper of Fox, Grinshpun, Liebenau, Person, and Szab\'o~\cite{Fox2016}, ``it is still widely open to classify the graphs in $\mathcal{R}_{\min}(H_1, \dots, H_t)$, or even to prove that these graphs have certain properties". Some properties of $\mathcal{R}_{\min}({H}_1, \dots, {H}_t)$ have been studied, such as the minimum degree $s({H}_1, \dots, {H}_t) := \min\{\delta(G) : G \in \mathcal{R}_{\min}({H}_1, \dots, {H}_t)\}$, which was first introduced by Burr, Erd\H os, and Lov\'asz~\cite{Burr1976}. Recent results on $s({H}_1, \dots, {H}_t) $ can be found in \cite{Fox2007, Fox2016}. For more information on Ramsey-related topics, the readers are referred to a very recent informative survey due to Conlon, Fox, and Sudakov~\cite{Conlon2015}.
In this paper, we study the following problem. A graph $G$ is \dfn{$\mathcal{R}_{\min}(H_1, \dots, H_t)$-saturated} if no element of $\mathcal{R}_{\min}(H_1, \dots, H_t)$ is a subgraph of $G$, but for any edge $e$ in $\overline{G}$, some element of $\mathcal{R}_{\min}(H_1, \dots, H_t)$ is a subgraph of $G + e$. This notion was initiated by Ne\v{s}et\v{r}il~\cite{Nesetril1986} in 1986 when he asked whether there are infinitely many $\mathcal{R}_{\min}(H_1, \dots, H_t)$-saturated graphs. This was answered in the positive by Galluccio, Siminovits, and Simonyi~\cite{Galluccio1992}. We define $sat(n, \mathcal{R}_{\min}(H_1, \dots, H_t))$ to be the minimum number of edges over all $\mathcal{R}_{\min}(H_1, \dots, H_t)$-saturated graphs on $n$ vertices. This notion was first discussed by Hanson and Toft~\cite{Hanson1987} in 1987 when $H_1, \dots, H_t$ are complete graphs. They proposed the following conjecture.
\begin{conj}\label{HTC} Let $r = r(K_{k_1}, \dots, K_{k_t})$ be the classical Ramsey number for complete graphs. Then \[ sat(n, \mathcal{R}_{\min}(K_{k_1}, \dots, K_{k_t})) = \displaystyle\left\{ \begin{array}{ll} \binom{n}{2} \,\, & n < r \\[10pt]
(r - 2)(n - r + 2) + \binom{r - 2}{2} \,\, & n \ge r \end{array} \right. \] \end{conj}
Chen, Ferrara, Gould, Magnant, and Schmitt~\cite{Chen2011} proved that $sat(n, \mathcal{R}_{\min}(K_3, K_3)) = 4n - 10$ for $n\ge56$. This settles the first non-trivial case of Conjecture~\ref{HTC} for sufficiently large $n$, and is so far the only settled case. Ferrara, Kim, and Yeager~\cite{Ferrara2014} proved that $sat(n, \mathcal{R}_{\min}(m_1K_2, \dots, m_tK_2))=3(m_1+\cdots+m_t-t)$ for $m_1, \dots, m_t\ge1$ and $n>3(m_1+\cdots+m_t-t)$. The problem of finding $sat(n, \mathcal{R}_{\min}(K_3, T_k))$ was also explored in~\cite{Chen2011}.
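As a quick consistency check (ours, not part of the original argument), the Hanson--Toft formula evaluated at $r = r(K_3, K_3) = 6$ reduces to $(r-2)(n-r+2) + \binom{r-2}{2} = 4(n-4) + 6 = 4n - 10$, exactly the value established by Chen, Ferrara, Gould, Magnant, and Schmitt. A minimal script (the function name is ours):

```python
from math import comb

def hanson_toft(n, r):
    # Conjectured saturation number sat(n, R_min(K_{k_1},...,K_{k_t}))
    # for n >= r, where r is the corresponding Ramsey number.
    return (r - 2) * (n - r + 2) + comb(r - 2, 2)

# r(K_3, K_3) = 6, so the conjecture predicts 4n - 10 for n >= 6.
for n in range(6, 200):
    assert hanson_toft(n, 6) == 4 * n - 10
```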
\begin{prop}\label{prop:satRminKtTm} Let $k\ge2$ and $t\ge2$ be integers. Then \begin{align*} sat(n, \mathcal{R}_{\min}(K_t, T_k)) \le n(t -& 2)(k - 1) - (t - 2)^2(k - 1)^2 + \binom{(t - 2)(k - 1)}{2} \\
&+ \left\lfloor \frac{n}{k - 1} \right\rfloor \binom{k - 1}{2} + \binom{r}{2}, \end{align*} where $r = n ~ ($\emph{mod} $k - 1)$. \end{prop}
It was conjectured in \cite{Chen2011} that the upper bound in Proposition~\ref{prop:satRminKtTm} is asymptotically correct. Note that there is only one tree on three vertices, namely, $P_3$. A slightly better result was obtained for $\mathcal{R}_{\min}(K_3, P_3)$-saturated graphs in \cite{Chen2011}.
\begin{thm}\label{K3P3} For $n \ge 11$, $sat(n, \mathcal{R}_{\min}(K_3, P_3)) = \left\lfloor \dfrac{5n}{2} \right\rfloor - 5$. \end{thm}
Motivated by Conjecture~\ref{HTC}, we study the following problem. Let $\mathcal{T}_k$ be the family of all trees on $k$ vertices. Instead of fixing a tree on $k$ vertices as in Proposition~\ref{prop:satRminKtTm}, we will investigate $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k))$, where a graph $G$ is \dfn{$(K_3, \mathcal{T}_k)$-Ramsey-minimal} if for any $2$-coloring $c : E(G) \to \{\text{red, blue} \}$, $G$ has either a red $K_3$ or a blue tree $T_k\in \mathcal{T}_k$, and we define $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$ to be the family of $(K_3, \mathcal{T}_k)$-Ramsey-minimal graphs. By Theorem~\ref{K3P3}, we see that $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_3)) = \lfloor {5n}/{2} \rfloor - 5$ for $n \ge 11$. In this paper, we prove the following two main results. We first establish the exact bound for $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_4))$ for $n\ge18$, and then obtain an asymptotic bound for $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k))$ for all $k \ge 5$ and $n \ge 2k + (\lceil k/2 \rceil +1) \lceil k/2 \rceil -2$.
\begin{thm}\label{K3T4} For $n \ge 18$, $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_4)) =\left\lfloor \dfrac{5n}{2}\right\rfloor$.
\end{thm}
\begin{thm}\label{K3Tk} For any integers $k \ge 5$ and $n \ge 2k + (\lceil k/2 \rceil +1) \lceil k/2 \rceil -2$, there exist constants $c=\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil + \frac{3}{2} \right) k -2$ and $C= 2k^2-6k+\frac32-\left\lceil\frac{k}{2}\right\rceil \left(k- \frac{1}{2} \left\lceil\frac{k}{2}\right\rceil -1\right)$ such that \[ \left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right)n - c \le sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k)) \le \left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right)n + C. \] \end{thm}
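To get a feel for the constants, one can evaluate the slope $\frac{3}{2} + \frac{1}{2}\lceil k/2 \rceil$ and the constants $c$ and $C$ of the theorem exactly with rational arithmetic (an illustrative computation of ours; the function name is hypothetical). For instance, $k = 5$ gives slope $3$, $c = 13$ and $C = 14$:

```python
from fractions import Fraction

def k3tk_bounds(k, n):
    """Exact lower/upper bounds from the theorem on sat(n, R_min(K_3, T_k))."""
    h = (k + 1) // 2                      # ceil(k/2)
    slope = Fraction(3, 2) + Fraction(h, 2)
    c = (Fraction(h, 2) + Fraction(3, 2)) * k - 2
    C = 2 * k**2 - 6 * k + Fraction(3, 2) - h * (k - Fraction(h, 2) - 1)
    return slope * n - c, slope * n + C

# k = 5: the bounds are 3n - 13 and 3n + 14.
assert k3tk_bounds(5, 100) == (287, 314)
```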
The constants $c$ and $C$ in Theorem~\ref{K3Tk} are both quadratic in $k$. We believe that the true value of $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k))$ is closer to the upper bound in Theorem~\ref{K3Tk}. To establish the desired lower and upper bounds for each of Theorem~\ref{K3T4} and Theorem~\ref{K3Tk}, we need to introduce more notation and prove a useful lemma (see Lemma~\ref{blue} below). Given a graph $H$, a graph $G$ is \dfn{$H$-free} if $G$ does not contain $H$ as a subgraph. For a graph $G$, let $c : E(G) \to \{\text{red, blue} \}$ be a $2$-edge-coloring of $G$ and let $E_r$ and $E_b$ be the color classes of the coloring $c$. We use $G_{r}$ and $G_{b}$ to denote the spanning subgraphs of $G$ with edge sets $E_r$ and $E_b$, respectively. We define $c$ to be a \dfn{bad $2$-coloring} of $G$ if $G$ has neither a red $K_3$ nor a blue $T_k\in \mathcal{T}_k$, that is, if $G_r$ is $K_3$-free and $G_b$ is $T_k$-free for any $T_k\in\mathcal{T}_k$. For any $v\in V(G)$, we use $d_r(v)$ and $N_r(v) $ to denote the degree and neighborhood of $v$ in $G_r$, respectively. Similarly, we define $d_b(v)$ and $N_b(v)$ to be the degree and neighborhood of $v$ in $G_b$, respectively.
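The notion of a bad $2$-coloring is easy to test mechanically on small graphs. A minimal sketch (ours; the function and edge-list encoding are illustrative), using the fact that a connected graph on at least $k$ vertices contains some tree on $k$ vertices, so that $G_b$ being $T_k$-free for every $T_k \in \mathcal{T}_k$ amounts to every blue component having at most $k-1$ vertices:

```python
from itertools import combinations

def is_bad_coloring(n, red_edges, blue_edges, k):
    """Return True iff the 2-coloring of a graph on vertices 0..n-1 is
    'bad': no red K_3 and no blue tree on k vertices (equivalently,
    every blue component has at most k - 1 vertices)."""
    red = {frozenset(e) for e in red_edges}
    # Any red triangle?
    for tri in combinations(range(n), 3):
        if all(frozenset(p) in red for p in combinations(tri, 2)):
            return False
    # Blue component sizes via union-find.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in blue_edges:
        parent[find(u)] = find(v)
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) <= k - 1

# The classical triangle-free 2-coloring of K_5: red C_5, blue C_5.
red = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
blue = [(0, 2), (2, 4), (4, 1), (1, 3), (3, 0)]
assert not is_bad_coloring(5, red, blue, 5)  # blue component has 5 >= k vertices
assert is_bad_coloring(5, red, blue, 6)      # no red K_3, blue components small enough
```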
\noindent {\bf Remark.} One can see that if $G$ is $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated, then $G$ admits at least one bad $2$-coloring but, for any edge $e\in E(\overline{G})$, $G+e$ admits no bad $2$-coloring.
We will utilize the following Lemma~\ref{blue}(a) to force a unique bad $2$-coloring of certain graphs in order to establish an upper bound for $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k))$. Lemma~\ref{blue}(b) and Lemma~\ref{blue}(c) will be applied to establish a lower bound for $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k))$.
\begin{lem}\label{blue} For any integer $k\ge3$, let $c : E(G) \to \{\text{red, blue} \}$ be a bad $2$-coloring of a graph $G$ on $n\ge k+2$ vertices.
\par\hangindent\parindent\mytextindent{(a)} If $e \in E(G)$ belongs to at least $2k - 3$ triangles in $G$, then $e\in E_b$.
\par\hangindent\parindent\mytextindent{(b)} If $G$ is $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated and $D_1, \dots, D_p$ are the components of $G_b$ with $|D_i|< {k}/{2}$ for all $i\in\{1,\dots, p\}$, then $p\le2$. Moreover, if $p=2$, then $V(D_1)$ is complete to $V(D_2)$ in $G_r$.
\par\hangindent\parindent\mytextindent{(c)} If $G$ is $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated, and among all bad $2$-colorings of $G$, $c$ is chosen so that $|E_r|$ is maximum, then $\Delta(G_r) \le n-3$ and $G_r$ is 2-connected. \end{lem}
\noindent {\bf Proof.}~~ To prove (a), suppose that there exists an edge $e = uv \in E_r$ such that $e$ belongs to at least $2k-3$ triangles in $G$. Since $G_r$ is $K_3$-free, we see that either $d_b(u)\ge k-1$ or $d_b(v)\ge k-1$. In either case, $G_b$ contains $K_{1, {k-1}}$ as a subgraph, a contradiction.
To prove (b), let $D_1, \dots, D_p$ be given as in (b). We next show that $p\le2$. Since $G$ is $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated, we see that, for any edge $e$ in $\overline{G}$, $G+e$ admits no bad $2$-coloring. We claim that, for any $i,j\in\{1,\dots, p\}$ with $i\ne j$, $V(D_i)$ is complete to $V(D_j)$ in $G_r$. Suppose that there exist vertices $u \in V(D_i)$ and $v \in V(D_j)$ such that $uv \notin E_r$. Then $uv\notin E(G)$ and so we obtain a bad $2$-coloring of $G+uv$ from $c$ by coloring the edge $uv$ blue (the merged blue component still has fewer than $k$ vertices, since $|D_i|, |D_j| < k/2$), a contradiction. Thus $V(D_i)$ is complete to $V(D_j)$ in $G_r$ for any $i,j\in\{1,\dots, p\}$ with $i\ne j$. Since $G_r$ is $K_3$-free, it follows that $p\le2$: otherwise, choosing one vertex from each of three such components would yield a red triangle.
It remains to prove (c). By the choice of $c$, $G_{r}$ is $K_3$-free but $G_r+e$ contains a $K_3$ for any $e\in E(\overline{G_r})$, and $G_b$ is ${T}_k$-free for any $T_k\in \mathcal{T}_k$. Note that $G_b$ is disconnected and every component of $G_b$ contains at most $k - 1$ vertices. Since $G$ is $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated, we see that, for any edge $e$ in $\overline{G}$, $G+e$ admits no bad $2$-coloring. Suppose that $\Delta(G_r) \ge n-2$. Let $x \in V(G)$ with $d_{r}(x) = \Delta(G_r)$ and, if $d_r(x)=n-2$, let $v$ be the unique non-neighbor of $x$ in $G_r$.
Since $G_r$ is $K_3$-free, we see that $N_r(x)$ is an independent set in $G_r$. By the choice of $c$, $v$ must be complete to $N_r(x)$ in $G_r$. Since $n\ge k+2$, we have $|N_r(x)|\ge k$. Let $u\in N_r(x)$ and let $H$ be the component of $G_b$ containing $u$. Then $|H|\le k-1$ and $V(H)\subset N_r(x)$. Let $w\in N_r(x)\backslash V(H)$. Clearly, $uw\notin E(G)$. We obtain a bad $2$-coloring of $G+uw$ from $c$ by coloring the edge $uw$ red, and then recoloring all edges incident with $u$ in $G_r$ blue and all edges incident with $u$ in $G_b$ red, a contradiction. This proves that $\Delta(G_r) \le n-3$.
Finally, we show that $G_r$ is $2$-connected. Suppose that $G_r$ is not $2$-connected. Since $G_{r}$ is $K_3$-free but $G_r+e$ contains a $K_3$ for any $e\in E(\overline{G_r})$, we see that $G_r$ is connected and must have a cut vertex, say $u$. Since $\Delta(G_r) \le n-3$, $u$ has a non-neighbor, say $v$, in $G_r$. Let $G_1$ and $G_2$ be two components of $G_r\backslash u$ with $v \in V(G_2)$. Let $w \in V(G_1)$. By the choice of $c$, $wv\notin E_b$, otherwise we obtain a bad $2$-coloring of $G$ from $c$ by recoloring the blue edge $wv$ red. Thus
$wv\notin E(G)$ and then we obtain a bad $2$-coloring of $G+wv$ from $c$ by coloring the edge $wv$ red, a contradiction. Therefore $G_r$ is $2$-connected.
This completes the proof of Lemma~\ref{blue}.
\vrule height3pt width6pt depth2pt
The remainder of this paper is organized as follows. In Section~\ref{sec:K3Sat}, we discuss $K_3$-saturated graphs with a specified minimum degree and prove a structural result which we shall use in the proof of Theorem~\ref{K3T4}. We then prove Theorem~\ref{K3T4} in Section~\ref{sec:satRminK3T4} and Theorem~\ref{K3Tk} in Section~\ref{sec:K3Tk}.
\section{$K_3$-saturated graphs}\label{sec:K3Sat}
In this section we list known results and establish new ones on $K_3$-saturated graphs that we shall need to prove our main results.
Given a graph $H$, a graph $G$ is \dfn{$H$-saturated} if $G$ is $H$-free but, for any edge $e \in E(\overline{G})$, $G + e$ contains a copy of $H$ as a subgraph. We define $sat(n, H)$ to be the minimum number of edges over all $H$-saturated graphs on $n$ vertices. This notion
was introduced by Erd\H os, Hajnal, and Moon~\cite{Erdos1964} in 1964. Results on $H$-saturated graphs can be found in the surveys by Faudree, Faudree, and Schmitt~\cite{Faudree2011} and by Pikhurko~\cite{Pikhurko}. In this section we are interested in the case when $H=K_t$. Erd\H os, Hajnal, and Moon~\cite{Erdos1964} showed that if $G$ is a $K_t$-saturated graph on $n$ vertices, then $e(G)\ge(t-2)n- \binom{t - 1}{2}$. Moreover, they showed that the graph $K_{t- 2} \vee \overline{K}_{n - t + 2}$ is the unique $K_t$-saturated graph with $n$ vertices and $(t-2)n- \binom{t - 1}{2} $ edges. Notice that this extremal graph has minimum degree $t-2$. One may ask: what is the minimum number of edges in a $K_t$-saturated graph with specified minimum degree? This was first studied by Duffus and Hanson~\cite{Duffus1986} in 1986. They proved the following two results.
\begin{thm}\label{delta=2} If $G$ is a $K_3$-saturated graph on $n \ge 5$ vertices with $\delta(G) = 2$, then $e(G) \ge 2n - 5$ edges. Moreover, if $e(G) = 2n - 5$, then $G$ can be obtained from $C_5$ by repeatedly duplicating vertices of degree $2$. \end{thm}
\begin{thm}\label{delta=3}
If $G$ is a $K_3$-saturated graph on $n \ge 10$ vertices with $\delta(G) = 3$, then $e(G) \ge 3n - 15$. Moreover, if $e(G) = 3n - 15$, then $G$ contains the Petersen graph as a subgraph. \end{thm}
Alon, Erd\H{o}s, Holzman, and Krivelevich~\cite{Alon1996} showed that any $K_4$-saturated graph on $n \ge 11$ vertices with minimum degree $ 4$ has at least $4n-19$ edges. This has recently been generalized by Bosse, the second author, and Zhang~\cite{Bosse2017+} by showing that any $K_t$-saturated graph on $n \ge t+7$ vertices with minimum degree $ t\ge3$ has at least $tn-{{t+1}\choose 2}-9$ edges. Moreover, they showed that the graphs $K_{t- 3} \vee H$ are the only $K_t$-saturated graphs with $n$ vertices and $tn- \binom{t +1}{2} -9$ edges, where $H$ is a $K_3$-saturated graph on $n-t+3\ge10$ vertices with $\delta(H)=3$. Theorem~\ref{Kp} below is a result of Day~\cite{Day2017} on $K_t$-saturated graphs with prescribed minimum degree. It confirms a conjecture of Bollob\'as~\cite{Bollobas} when $t=3$. It is worth noting that the constant $c$ given in Theorem~\ref{Kp} does not depend on $t$. This is a consequence of the fact that every $K_t$-saturated graph has minimum degree at least $t-2$.
\begin{thm}\label{Kp}
For any integers $p \ge 1$ and $t \ge 3$, there exists a constant $c = c(p)$ such that if $G$ is a $K_t$-saturated graph on $n$ vertices with $\delta(G) \ge p$, then $e(G) \ge pn - c$. \end{thm}
For our proof of Theorem~\ref{K3T4}, we will need a structural result on $K_3$-saturated graphs with minimum degree at most $ 2$. The graph $J$ depicted in Figure~\ref{J} is a $K_3$-saturated graph with minimum degree $2$, where $A\ne\emptyset$ and either $B=C=\emptyset$ or $B\ne\emptyset$ and $C \ne \emptyset$; $A$, $B$ and $C$ are independent sets in $J$ and pairwise disjoint; $A$ is anti-complete to $B\cup C$ and $B$ is complete to $C$; $N_J(y)=A\cup B$ and $N_J(z)=A\cup C$; and $|A|+|B|+|C|=|J|-2$. It is straightforward to check that
$e(J) = 2(|J| - 2) + |B||C| - |B| - |C|\ge 2|J|-5$. Moreover, $e(J)=2|J|-5$ when $|B|=1$ or $|C|=1$. That is, $e(J)=2|J|-5$ when $J$ is obtained from $C_5$ by repeatedly duplicating vertices of degree $2$. Lemma~\ref{structural} below yields a new proof of Theorem~\ref{delta=2}, and has been generalized for all $K_t$-saturated graphs with minimum degree at most $ t - 1$ in~\cite{Bosse2017+}.
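For completeness, this formula can be verified directly: the edges of $J$ are those joining $y$ to $A\cup B$, those joining $z$ to $A\cup C$, and those between $B$ and $C$, so
\begin{align*} e(J) &= 2|A| + |B| + |C| + |B||C|\\ &= 2\big(|J|-2-|B|-|C|\big) + |B| + |C| + |B||C| = 2(|J| - 2) + |B||C| - |B| - |C|. \end{align*}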
\begin{figure}
\caption{The graph $J$ }
\label{J}
\end{figure}
\begin{lem}\label{structural} Let $G$ be a $K_3$-saturated graph with $n$ vertices and $\delta(G)=\delta$.
\par\hangindent\parindent\mytextindent{(a)} If $\delta = 1$, then $G=K_{1, n - 1}$.
\par\hangindent\parindent\mytextindent{(b)} If $\delta = 2$, then $G =J$, where the graph $J$ is depicted in Figure~\ref{J}. Moreover, $J=K_{2, n-2}$ when $B=C=\emptyset$.
\par\hangindent\parindent\mytextindent {(c)} If $\delta \ge 3$, then $2e(G) \ge \max\{(\delta+1)n-\delta^2-1,\, (\delta+2)n - \delta(\delta + t) -2 \}$, where $t := \min \{ d(v) : v \text{ is adjacent to } \text{a vertex of degree }\delta \text{ in } G\}$. \end{lem}
\noindent {\bf Proof.}~~ Let $x \in V(G)$ be a vertex with $d(x) = \delta$. Since $G$ is $K_3$-saturated, we see that $G$ is connected and $K_3$-free. First assume that $d(x) = 1$. Let $y$ be the neighbor of $x$. If there exists a vertex $z\in V(G)$ such that $yz\notin E(G)$, then $G+xz$ is $K_3$-free, contrary to the fact that $G$ is $K_3$-saturated. Thus $y$ is complete to $V(G) \setminus \{y\}$. Clearly, $N(y)$ is an independent set because $G$ is $K_3$-free. Thus $G = K_{1, n-1}$. This proves (a).
Next assume that $d(x) = 2$. Let $N(x) = \{y, z\}$. Then $yz \notin E(G)$ because $G$ is $K_3$-free. We next show that $N(y) \cup N(z) = V(G)\backslash\{y,z\}$. Suppose there exists a vertex $w\in V(G)$ such that $wy, wz\notin E(G)$. Then $G+xw$ is $K_3$-free, contrary to the fact that $G$ is $K_3$-saturated.
Hence $N(y) \cup N(z) = V(G)\backslash\{y,z\}$. Let $A := N(y) \cap N(z)$, $B := N(y) \setminus N(z)$, and $C := N(z) \setminus N(y)$. Then $|A| + |B| + |C| = n - 2$, and $A, B, C$ are pairwise disjoint. Clearly, $x \in A$, and either $B=C=\emptyset$ or $B\ne\emptyset$ and $C \ne \emptyset$ because $\delta(G)= 2$. Since $G$ is $K_3$-free, we see that $A, B, C$ are independent sets in $G$, and $A$ is anti-complete to $B \cup C$. We next show that $B$ must be complete to $C$ when $B\ne\emptyset$ and $C \ne \emptyset$. Suppose there exist vertices $b\in B$ and $c\in C$ such that $bc\notin E(G)$. Then $G+bc$ is $K_3$-free, a contradiction. Thus $G=J$, where $J$ is depicted in Figure~\ref{J}.
It remains to prove (c). Let $\delta\ge3$ and let $t$ be given as in (c). Then $d(x)\ge3$. We first show that $2e(G) \ge (\delta+1)n-\delta^2-1$. Since $G$ is $K_3$-saturated, every vertex in $V(G) \backslash N[x]$ has at least one neighbor in $N(x)$, yielding $ \sum_{v \in N(x)} d(v) \ge |V(G) \backslash N[x]|+d(x)=n-1$. Therefore \begin{align*} 2e(G) & = d(x) + \sum_{v \in N(x)} d(v) + \sum_{v \in V(G) \backslash N[x]} d(v)\\ & \ge \delta+n-1+\delta(n-\delta-1)\\ & \ge (\delta+1)n-\delta^2-1. \end{align*}
We next show that $2e(G) \ge (\delta+2)n - \delta(\delta + t) -2$. We may assume that there exists a vertex $y \in N(x)$ with $d(y) = t$. Notice that $x$ and $y$ have no common neighbor. Let $M: =V(G) \backslash ( N(x) \cup N(y) )$. Then $|M|=n-\delta-t$. Since $G$ is $K_3$-saturated, each vertex in $M$ has at least one neighbor in $N(x) \backslash y$ and at least one neighbor in $N(y) \backslash x$. Thus $\sum_{v \in N(x) \backslash y} d(v) \ge n-t-1$, and $\sum_{v \in N(y) \backslash x} d(v) \ge n-\delta -1$. Then
\begin{align*} 2e(G) & = d(x)+d(y) + \sum_{v \in N(x) \backslash y} d(v) + \sum_{v \in N(y) \backslash x} d(v) + \sum_{v \in M} d(v) \\ & \ge \delta +t+ (n-t-1) + (n-\delta-1) + \delta(n-\delta-t) \\ & =(\delta+2)n - \delta(\delta + t) -2. \end{align*}
This completes the proof of Lemma~\ref{structural}.
\vrule height3pt width6pt depth2pt
\begin{cor}\label{2n}
Let $G$ be a $K_3$-saturated graph on $n\ge5$ vertices with $\delta(G)=2$. If $e(G) = 2n - k$ for some $k \in \{0, 1, 2, 3, 4, 5\}$, then $G = J$ with $|B||C| - |B| - |C| = 4 - k$, where $A, B, C,$ and $J$ are as depicted in Figure~\ref{J} and the values of $|B|$ and $|C|$ are summarized in Table~\ref{table}. \end{cor}
\noindent {\bf Proof.}~~
Since $\delta(G) = 2$, by Lemma~\ref{structural}(b), $G =J$ with $e(G) = 2(n - 2) + |B||C| - |B| - |C|$ and either $B = C = \emptyset$ or $B, C \ne \emptyset$, where $A, B, C,$ and $J$ are as depicted in Figure~\ref{J}. We see that $|B||C| - |B| - |C| = 4 - k$ because $e(G) = 2n - k$, where $k \in \{0, 1, 2, 3, 4, 5\}$. Solving the resulting equation in each case of $k$ yields explicit constructions of $J$, which are summarized in Table~\ref{table}.
\vrule height3pt width6pt depth2pt
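We remark that the entries of Table~\ref{table} are easily recovered from the factorization
\begin{align*} |B||C| - |B| - |C| = 4 - k \iff (|B|-1)(|C|-1) = 5 - k: \end{align*}
for $B, C\ne\emptyset$, the admissible pairs $(|B|, |C|)$ correspond exactly to the factorizations of $5-k$, while the case $B=C=\emptyset$ arises only when $k=4$.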
\begin{table}[htbp] \centering \begin{tabular}{ *5l @{} *9l @{}*9l @{} } \toprule
$k$ & $e(J)$ & \emph{values of $|B|$ and $|C|$ with $|B|\le|C|$} \\\midrule
$5$ & $2n-5$ & $|B|=1$ and $|C| \ge 1$ \\
$4$ & $2n-4$ & $|B|=|C|=2$ or $|B|=|C|=0$\\
$3$ & $2n-3$ & $|B| =2$ and $|C| = 3$ \\
$2$ & $2n-2$ & $|B| = 2$ and $|C| = 4$ \\
$1$ & $2n-1$ & $|B|=2$ and $|C|=5$ or $|B|=|C|=3$ \\
$0$ & $2n$ & $|B|=2$ and $|C|=6$ \\\bottomrule
\end{tabular} \caption{Construction of the graph $J$ determined by $k$} \label{table} \end{table}
\section{Proof of Theorem~\ref{K3T4}}\label{sec:satRminK3T4}
We are now ready to prove Theorem~\ref{K3T4}. We first establish the desired upper bound for $sat(n, \mathcal{R}_{\min} (K_3, \mathcal{T}_4))$ by constructing an $\mathcal{R}_{\min}(K_3, \mathcal{T}_4)$-saturated graph with the desired number of edges. Let $n\ge8$ be an integer and let $H=(\lfloor n/2\rfloor -4)K_2$. When $n\ge8$ is even, let $G_{even}$ be the graph obtained from $H$ by adding eight new vertices $y, z, y_1, y_2, y_3, z_1, z_2, z_3$, and then joining: $y$ to all vertices in $V(H)\cup\{y_1, y_2, y_3, z_1, z_2, z_3\}$; $z$ to all vertices in $V(H)\cup\{y_1, y_2, y_3, z_1, z_2\}$; $y_1$ to all vertices in $\{y_2, z_1, z_2, z_3\}$; $y_2$ to all vertices in $\{z_1, z_2, z_3\}$; $z_1$ to $z_2$; and $z_3$ to $y_3$. When $n$ is odd, let $G_{odd}$ be the graph obtained from $H$ by adding nine new vertices $y, z, y_1, y_2, y_3, y_4, z_1, z_2, z_3$, and then joining: $y$ to all vertices in $V(H)\cup\{y_1, z_1, z_2, z_3\}$; $z$ to all vertices in $V(H)\cup\{y_1, y_2, y_3, y_4, z_1, z_2, z_3\}$; $z_1$ to all vertices in $\{ y_1, y_2, y_3, y_4, z_2\}$; $z_2$ to all vertices in $\{y_1, y_2, y_3, y_4\}$; $y_2$ to $y_3$; and $y_4$ to $z_3$. The graphs $G_{odd}$ and $G_{even}$ are depicted in Figure~\ref{EO}. It can be easily checked that $e(G_{odd})= (5n-1)/{2}$ and $e(G_{even})= {5n}/{2}$. We next show that $G_{odd}$ and $G_{even}$ are $\mathcal{R}_{\min}(K_3, \mathcal{T}_4)$-saturated.
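As a quick check of these edge counts, note that $e(H)=\lfloor n/2\rfloor-4$ and sum the joins listed above in order. For even $n$,
\begin{align*} e(G_{even}) = \left(\frac{n}{2}-4\right) + (n-2) + (n-3) + 4 + 3 + 1 + 1 = \frac{5n}{2}, \end{align*}
where the terms count the edges of $H$, the edges added at $y$, at $z$, at $y_1$, at $y_2$, and the edges $z_1z_2$ and $z_3y_3$, respectively. Similarly, for odd $n$,
\begin{align*} e(G_{odd}) = \left(\frac{n-1}{2}-4\right) + (n-5) + (n-2) + 5 + 4 + 1 + 1 = \frac{5n-1}{2}. \end{align*}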
\begin{figure}
\caption{Two $\mathcal{R}_{\min} (K_3, \mathcal{T}_4)$-saturated graphs with a unique bad $2$-coloring, where dashed lines indicate blue and solid lines indicate red.}
\label{EO}
\end{figure}
One can easily check that the coloring $c : E(G) \to \{\text{red, blue} \}$ for each of $G_{odd}$ and $G_{even}$ given in Figure~\ref{EO} is a bad $2$-coloring. We next show that $c$ is the unique bad $2$-coloring for each of $G_{odd}$ and $G_{even}$. To find a bad $2$-coloring for $G_{odd}$, by Lemma~\ref{blue}(a), the edges $zz_1, zz_2, z_1z_2$ must be colored blue and so all the other edges incident with $z, z_1, z_2$ must be red. Then $yy_1, y_2y_3, y_4z_3$ and all edges in $E(H)$ must be blue and all the other edges incident with $y$ must be red. This proves that $G_{odd}$ has a unique bad $2$-coloring, as depicted in Figure~\ref{EO}. To find a bad $2$-coloring for $G_{even}$, by Lemma~\ref{blue}(a), $y_1y_2$ must be colored blue. We next show that $z_1z_2$ must be colored blue. Suppose that $z_1z_2$ is colored red. To avoid a red $K_3$, we may assume that $yz_1$ is colored blue. Then all edges $z_1y_1, z_1y_2, yy_1, yy_2$ must be red, and so $z_2y_1, z_2y_2$ must be blue, which then forces $y_1z$ to be red and $z_1z$ to be blue. Now the edges $z_3y$ and $z_3y_1$ must be colored red, which yields a red $K_3$ with vertices $y, z_3, y_1$. This proves that $z_1z_2$ must be colored blue. Similar to the argument for $G_{odd}$, one can see that the coloring of $G_{even}$, depicted in Figure~\ref{EO}, is the unique bad $2$-coloring of $G_{even}$. It is straightforward to see that both $G_{odd}$ and $G_{even}$ are $\mathcal{R}_{\min}(K_3, \mathcal{T}_4)$-saturated, and so $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_4)) \le \lfloor {5n}/{2}\rfloor$. We next show that $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_4)) \ge \lfloor {5n}/{2}\rfloor$.
Let $G$ be an $\mathcal{R}_{\min} (K_3, \mathcal{T}_4)$-saturated graph on $n\ge18$ vertices. Then, for any edge $e\in E(\overline{G})$, $G+e$ has no bad $2$-coloring. Suppose that $e(G)< 5n/2$ if $n$ is even and $e(G)<(5n-1)/2$ if $n$ is odd.
Among all bad $2$-colorings of $G$, let $c : E(G) \to \{\text{red, blue} \}$ be a bad $2$-coloring of $G$ with $|E_r|$ maximum. By the choice of $c$, $G_r$ is $K_3$-saturated. Note that $G_b$ is disconnected and every component of $G_b$ is isomorphic to $K_1$, $K_2$, $P_3$ or $K_3$. By Lemma~\ref{blue}(c), we have
\noindent \refstepcounter{counter}\label{e:maxdeg} (\arabic{counter})\,\, $\Delta(G_r) \le n-3$ and $G_r$ is 2-connected.
We next show that
\noindent \refstepcounter{counter}\label{e:Gr=J} (\arabic{counter})\,\, $\delta(G_r)=2$ and so $G_r=J$ with $A\ne\emptyset$, $B\ne\emptyset$, and $C\ne\emptyset$, where $J$, $A, B, C$ are depicted in Figure~\ref{J}.
\noindent {\bf Proof.}~~ By \pr{e:maxdeg}, $\delta(G_r)\ge2$. Suppose that $\delta(G_r)\ge3$. We next show that $e(G_r)\ge \lceil (5n-17)/2\rceil$. This is trivially true if $\delta(G_r)\ge5$. So we may assume that $3\le \delta(G_r)\le4$. By Theorem~\ref{delta=3} applied to $G_r$ when $\delta(G_r)=3$ and Lemma~\ref{structural}(c) applied to $G_r$ when $\delta(G_r)=4$, we see that $e(G_r)\ge \lceil (5n-17)/2\rceil$ because $n\ge18$. By Lemma~\ref{blue}(b), $e(G_b)\ge \lceil (n-2)/2\rceil$. Thus $e(G)=e(G_r)+e(G_b) \ge \lfloor {5n}/{2}\rfloor$, a contradiction. Hence $\delta(G_r)=2$. By Lemma~\ref{structural}(b), $G_r=J$, where $J$ and $A, B, C$ with $A\ne\emptyset$ are depicted in Figure~\ref{J}. By \pr{e:maxdeg}, $B\ne\emptyset$ and $C\ne\emptyset$.
\vrule height3pt width6pt depth2pt
For the remainder of the proof, let $J$, $A, B, C$, and $y, z$ be given as in Figure~\ref{J}, where $A\ne\emptyset$, $B\ne\emptyset$, and $C\ne\emptyset$. By \pr{e:Gr=J}, $G_r=J$. We next show that
\noindent \refstepcounter{counter}\label{e:bc=2} (\arabic{counter})\,\,
$|B|\ge 2$ and $|C| \ge 2$.
\noindent {\bf Proof.}~~ Suppose that $|B|=1$ or $|C| = 1$, say the latter. Let $u$ be the vertex in $C$. If $yz, yu\in E_b$, then $d_b(u)=1$ because $G_b$ is $T_4$-free. Now for any $w\in A$, we obtain a bad $2$-coloring of $G+uw$ from $c$ by coloring the edge $uw$ red, and then recoloring the edge $zu$ blue. Thus either $yz\notin E(G)$ or $yu\notin E(G)$. We may assume that $yz\notin E(G)$. Then $yu\in E_b$, otherwise, we obtain a bad $2$-coloring of $G+yu$ from $c$ by coloring the edge $yu$ blue, and then recoloring the edge $zu$ blue, and all the edges incident with $z$ and $u$ in $G_b$ red. Notice that $d_b(u) = 1$, for otherwise let $w \in A$ be the other neighbor of $u$ in $G_b$ and $v \in B$. Then $d_b(w)=1$ and so we obtain a bad $2$-coloring of $G+wv$ from $c$ by coloring the edge $wv$ red, and then recoloring the edge $yw$ blue. We next claim that $B=N_b(z)$.
Suppose that $B \ne N_b(z)$. Let $w \in B \setminus N_b(z)$, and let $K$ be the component of $G_b$ containing $w$. If $V(K)\subseteq B$, then for any $v\in A$, we obtain a bad $2$-coloring of
$G+wv$ from $c$ by coloring the edge $wv$ red, and then recoloring the edges $yw, uw$ blue and all edges incident with $w$ in $G_b$ red, a contradiction. Thus $V(K)\cap A\ne\emptyset$. Let $v\in V(K)\cap A$. We claim that $V(K)=\{w, v\}$. Suppose that $|K|=3$. Let $v'$ be the third vertex of $K$. Then $K$ is isomorphic to $K_3$. If $v'\in A$, then we obtain a bad $2$-coloring of $G$ from $c$ by recoloring the edge $yw$ blue, and then recoloring the edges $wv, wv'$ red, contrary to the choice of $c$. Thus $v'\in B$, which again yields a bad $2$-coloring of $G$ from $c$ by recoloring the edge $yv$ blue, and then recoloring the edges $vw, vv'$ red, contrary to the choice of $c$. Thus $V(K)=\{w, v\}$, as claimed. For any $v^* \in (A\cup B)\backslash (\{w,v\}\cup N_b(z))$, we obtain a bad $2$-coloring of $G+wv^*$ from $c$ by coloring the edge $wv^*$ red, and then recoloring the edge $wv$ red, and the edges $yw, uw$ blue. Thus $B=N_b(z)$, as claimed.
Since $B=N_b(z)$, we have $|B|\le2$. Then $yu\in E_b$, otherwise, by an argument similar to that showing $B=N_b(z)$, we have $|A|=|N_b(u)|\le2$ and so $n\le 7$, a contradiction. Let $v\in B$. If $B = \{v\}$, then by an argument similar to that showing $d_b(u)=1$, we have $d_b(v) = 1$. But then we obtain a bad $2$-coloring of $G+yz$ from $c$ by coloring the edge $yz$ blue, and then recoloring the edge $yu$ red, and the edge $yv$ blue, a contradiction.
Thus $|B|=2$. Let $v'$ be the other vertex in $B$. Then $vv' \in E_b$, otherwise we obtain a bad $2$-coloring of $G+vv'$ from $c$ by coloring the edge $vv'$ blue. But now we obtain a bad $2$-coloring of $G+yz$ from $c$ by coloring the edge $yz$ blue, and then recoloring the edges $yu, vv', zv'$ red, and edges $yv, uv'$ blue, a contradiction.
\vrule height3pt width6pt depth2pt
By Lemma~\ref{blue}(b), $G_b$ has at most two isolated vertices. Thus $e(G_b)\ge (n-2)/2$. Since $e(G)< 5n/2$, we see that $e(G_r)\le 2n$. By \pr{e:bc=2}, $|B|\ge2$ and $|C|\ge2$. By Corollary~\ref{2n}, $e(G_r)\ge 2n-4$ and $|B|+|C|\le8$. Thus $|A|\ge n-10\ge8$. We next show that
\noindent \refstepcounter{counter}\label{e:P3} (\arabic{counter})\,\,
If $P_3$ is a component of $G_b\backslash \{y,z\}$ with vertices $x_1, x_2, x_3$ in order, then $x_2\in A$ and $|\{x_1, x_3\}\cap B|=|\{x_1, x_3\}\cap C|=1$.
\noindent {\bf Proof.}~~ Clearly, $\{x_1, x_2, x_3\}\not\subseteq A\cup B$ or $\{x_1, x_2, x_3\}\not\subseteq A\cup C$, otherwise $x_1x_3\notin E(G)$ and we obtain a bad $2$-coloring of $G+x_1x_3$ from $c$ by coloring the edge $x_1x_3$ blue. Since $y, z\notin \{x_1, x_2, x_3\}$, we see that $x_2\in A$. Then $|\{x_1, x_3\}\cap B|=|\{x_1, x_3\}\cap C|=1$.
\vrule height3pt width6pt depth2pt
\noindent \refstepcounter{counter}\label{e:yz} (\arabic{counter})\,\,
$yz\notin E(G)$.
\noindent {\bf Proof.}~~ Suppose that $yz\in E(G)$. Then $yz\in E_b$. Since $G_b$ does not contain a ${T}_4$, we see that either $d_b(y)=1$ or $d_b(z)=1$. We may assume that $d_b(z)=1$. We claim that $d_b(y)=1$ as well. Suppose that $d_b(y)=2$. Let $w\in C$ be the other neighbor of $y$ in $G_b$. Then $d_b(w)=1$. Let $v\in A$. We obtain a bad $2$-coloring of $G+wv$ from $c$ by coloring the edge $wv$ red, and recoloring the edge $zw$ blue. Thus $d_b(y)=d_b(z)=1$. Since $e(G_r)\le 2n$ and $|A|\ge n-10\ge8$, by Corollary~\ref{2n} and \pr{e:P3}, $G_b$ contains a component, say $K$, such that $V(K)\cap A\ne\emptyset$ and $V(K)\subset A\cup B$ or $V(K)\subset A\cup C$. Let $u\in V(K)\cap A$ and $w\in A\backslash V(K)$. We obtain a bad $2$-coloring of $G+uw$ from $c$ by coloring the edge $uw$ red, and then recoloring the edges $yu, zu$ blue, and all the edges incident with $u$ in $G_b$ red, a contradiction.
\vrule height3pt width6pt depth2pt
\noindent \refstepcounter{counter}\label{e:Gb} (\arabic{counter})\,\, $G_b$ has no isolated vertex.
\noindent {\bf Proof.}~~ Suppose for a contradiction that $G_b$ has an isolated vertex, say $u$. Then $d(u)=d_r(u)$. By \pr{e:maxdeg}, $d(u)\le n-3$. For any $w\in V(G) \backslash N[u]$, adding a blue edge $uw$ to $G$ must yield a blue ${T}_4$, because $G$ is $\mathcal{R}_{\min} (K_3, \mathcal{T}_4)$-saturated. Hence,
\noindent ($*$) \, every vertex of $V(G) \backslash N[u]$ belongs to a $P_3$ or $K_3$ in $G_b$.
We next claim that every vertex of $A\backslash u$ belongs to a $P_3$ or $K_3$ in $G_b$. By ($*$), this is obvious if $u\in A\cup B\cup C$. So we may assume that $u\in \{y,z\}$. By symmetry, we may further assume that $u=z$. By \pr{e:yz}, $yz\notin E(G)$. Suppose that there exists a vertex $v\in A$ such that $v$ belongs to a component, say $K$, with $|K|\le2$. Then $V(K)\subseteq A\cup B$ or $V(K)\subseteq A\cup C$. Let $w\notin V(K)$ be a vertex in $C$. This is possible because $|C|\ge 2$ by \pr{e:bc=2}. We then obtain a bad $2$-coloring of $G+vw$ from $c$ by coloring the edge $vw$ red, and recoloring the edge $vu$ blue, a contradiction. Thus every vertex of $A\backslash u$ belongs to a $P_3$ or $K_3$ in $G_b$, as claimed.
Since $|B|+|C|\le8$ and $|A|\ge n-10\ge8$, by \pr{e:P3} and Corollary~\ref{2n}, we see that $G_b[A]$ has at least two components isomorphic to $K_3$. By Lemma~\ref{blue}(b), $G_b$ has at most two isolated vertices and so $e(G_b)\ge 6+(n-8)/2$. Since $e(G)<5n/2$, we have $e(G_r)\le 2n-3$. By \pr{e:bc=2}, $|B|\ge2$ and $|C|\ge2$. By Corollary~\ref{2n}, $2n-4\le e(G_r)\le 2n-3$ and $\max\{|B|,|C|\}\le3$. Thus $|A|\ge n-8\ge10$. By \pr{e:P3} and Corollary~\ref{2n} again, $G_b[A]$ has at least three components isomorphic to $K_3$. Thus $e(G_b)\ge 9+\lceil(n-11)/2\rceil$ and so $e(G)\ge (2n-4)+9+\lceil(n-11)/2\rceil\ge\lfloor 5n/2\rfloor$, a contradiction.
\vrule height3pt width6pt depth2pt
\noindent \refstepcounter{counter}\label{e:1nbr} (\arabic{counter})\,\,
$d_b(y)=d_b(z)=2$.
\noindent {\bf Proof.}~~ Suppose that $d_b(y)\le1$ or $d_b(z)\le1$. By \pr{e:Gb}, $d_b(y), d_b(z)\ge1$. We may assume that $d_b(y)=1$. By \pr{e:yz}, $yz\notin E(G)$. Let $y_1\in C$ be the unique neighbor of $y$ in $G_b$, and let $z_1\in B$ be a neighbor of $z$ in $G_b$. We claim that $d_b(y_1)=1$. Suppose that $d_b(y_1)=2$. Let $y_1^*\in A\cup C$ be the other neighbor of $y_1$ in $G_b$. Then $y_1^*\in A$, otherwise, we obtain a bad $2$-coloring of $G+yy_1^*$ from $c$ by coloring the edge $yy_1^*$ blue. Let $w\in B$. Then we obtain a bad $2$-coloring of $G+y_1^*w$ from $c$ by coloring the edge $y_1^*w$ red and recoloring the edge $y_1^*y$ blue. Thus $d_b(y_1)=1$, as claimed.
By \pr{e:bc=2}, $ |B|\ge2$ and $|C|\ge 2$. We next claim that $N_b(z)=B$. Suppose that there exists a vertex $u\in B$ such that $uz\notin E(G_b)$. Then $uz_1\notin E_b$, otherwise, we obtain a bad $2$-coloring of $G+uz$ from $c$ by coloring the edge $uz$ blue. This implies that $B\backslash N_b(z)$ is anti-complete to $N_b(z)$ in $G_b$. Let $K$ be the component of $G_b$ containing $u$. By \pr{e:Gb}, $|K|\ge2$. Since $G_b$ is $T_4$-free, we see that $N_b[z]$ is anti-complete to $V(K)$ in $G_b$. Suppose first that $V(K) \subseteq B$.
If $K$ is isomorphic to $K_3$ or $|N_b(z)|=2$, then $|B| \ge 4$ and $G_b$ contains at least one $K_3$ ($K$ or $G_b[N_b[z]]$). By Corollary~\ref{2n}, $e(G_r) \ge 2n - 2$. By \pr{e:Gb}, $e(G_b) \ge 3 + \lceil (n - 3)/2 \rceil$. Hence $e(G) =e(G_r)+e(G_b)\ge (2n - 2) + 3 + \lceil(n - 3)/2\rceil \ge \lfloor 5n/2 \rfloor$, a contradiction. Thus $K$ is isomorphic to $K_2$ and $d_b(z)=1$.
Using a similar argument to show that $d_b(y_1)=1$, we have $d_b(z_1) = 1$. Let $V(K)=\{u, u'\}$. If $B=\{u,u', z_1\}$, then we obtain a bad 2-coloring of $G + yz$ from $c$ by coloring the edge $yz$ blue, and then recoloring the edges $y_1u, y_1u', yz_1$ blue, and the edge $yy_1$ red. Thus $|B|\ge4$. By Corollary~\ref{2n}, $|C|=2$. Let $C=\{w, y_1\}$. Let $v\in A$ be such that $v$ and $w$ are not in the same component of $G_b$. This is possible because $|A|\ge 8$. Then
we obtain a bad 2-coloring of $G + vw$ from $c$ by coloring the edge $vw$ red, and then recoloring the edges $z_1w, zw$ blue, and all the edges incident with $w$ in $G_b$ red.
This proves that $V(K) \not\subseteq B$ and so $V(K)\cap A\ne\emptyset$. Let $v\in V(K)\cap A$. We next show that $V(K)=\{u, v\}$. Suppose that $|K|=3$. Let $v'$ be the third vertex of $K$. Then $K$ is isomorphic to $K_3$. If $v'\in A$, then we obtain a bad $2$-coloring of $G$ from $c$ by recoloring the edge $uy$ blue, and then recoloring the edges $uv, uv'$ red, contrary to the choice of $c$. If $v'\in B$, then we obtain a bad $2$-coloring of $G$ from $c$ by recoloring the edge $vy$ blue, and then recoloring the edges $vu, vv'$ red, contrary to the choice of $c$. Thus $v'\in C$. Now for any $w\in A\backslash v$, we obtain a bad $2$-coloring of $G+uw$ from $c$ by coloring the edge $uw$ red, and then recoloring the edges $uy, uy_1$ blue, and $uv$ red. Hence $V(K)=\{u, v\}$. For any $v' \in A\backslash v$, we obtain a bad $2$-coloring of $G+uv'$ from $c$ by coloring the edge $uv'$ red, and then recoloring the edges $uy$ blue and $uv$ red.
Thus $N_b(z)=B$, as claimed.
Since $N_b(z)=B$ and $d_b(z)\le2\le |B|$, we see that $|B|=2$. Let $B=\{z_1, z_2\}$. Then $z_1z_2\in E(G_b)$, otherwise, we obtain a bad $2$-coloring of $G+z_1z_2$ from $c$ by coloring the edge $z_1z_2$ blue. Let $C=\{y_1, \dots, y_t\}$, where $t=|C|$. Then $y_1y_j\notin E(G_b)$ for all $j\in\{2, \dots, t\}$ because $d_b(y_1)=1$. If $t\ge4$, then by Corollary~\ref{2n}, $e(G_r)\ge 2n-2$. By \pr{e:Gb}, $e(G_b)\ge 3+\lceil(n-3)/2\rceil$. Thus $e(G)\ge (2n-2)+3+\lceil(n-3)/2\rceil\ge\lfloor 5n/2\rfloor$, a contradiction. Thus $2\le t\le 3$. Let $v\in A$ be such that $vy_j\notin E(G)$ for all $j\in\{1, 2, \dots, t\}$. This is possible because $|A|\ge 8$ and $t\le3$. We obtain a bad $2$-coloring of $G+y_2v$ from $c$ by coloring the edge $y_2v$ red, and then when $t=2$, recoloring the edges $yz_1, z_1y_1, z_2y_2,y_2z$ blue, the edges $z_1z, z_1z_2$, and all the edges incident with $y_2$ in $G_b$ red; when $t=3$, recoloring the edges $y_1z_1, y_1z_2, zy_2,zy_3$ blue, the edges $yy_1, zz_1, zz_2$, and all the edges between $A$ and $\{y_2, y_3\}$ in $G_b$ red.
\vrule height3pt width6pt depth2pt
By \pr{e:1nbr}, $d_b(y)=d_b(z)=2$. By \pr{e:yz}, $yz\notin E(G)$. Let $N_b(y)=\{y_1, y_2\}\subseteq C$ and $N_b(z)=\{z_1, z_2\}\subseteq B$. Then $y_1y_2, z_1z_2\in E_b$, otherwise, we obtain a bad $2$-coloring of $G+e$ from $c$ by coloring the edge $e$ blue, where $e\in \{y_1y_2, z_1z_2\}$. By \pr{e:Gb}, $e(G_b)\ge 6+\lceil(n-6)/2\rceil$. Since $e(G)< \lfloor 5n/2\rfloor$, by Corollary~\ref{2n}, we see that $n$ is even and $|B|=|C|=2$. Let $v\in A$. We obtain a bad $2$-coloring of $G+vz_1$ from $c$ by coloring the edge $vz_1$ red, and then recoloring the edges $yz_1, z_2y_1, z_2y_2$ blue, and edges $yy_1, yy_2, zz_2, z_1z_2 $ red, a contradiction.
This completes the proof of Theorem~\ref{K3T4}.
\vrule height3pt width6pt depth2pt\\
\section{Proof of Theorem~\ref{K3Tk}}\label{sec:K3Tk}
Finally, we prove Theorem~\ref{K3Tk}. We will construct an $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated graph on $n \ge 2k + (\lceil k/2 \rceil +1)\lceil k/2 \rceil-2$ vertices which yields the desired upper bound in Theorem~\ref{K3Tk}.
For positive integers $k, n$ with $k\ge 5$ and $n \ge 2k + (\lceil k/2 \rceil +1)\lceil k/2 \rceil-2$, let $t$ be the remainder of $n-2k-2\lceil k/2\rceil+2$ when divided by $\lceil k/2 \rceil$, and let $H= 2 K_{\lceil k/2\rceil-1}\cup 2K_{k-2} \cup s K_{\lceil k/2\rceil}\cup tK_{\lceil k/2\rceil+1}$, where $s\ge0$ is an integer satisfying $s\lceil k/2\rceil+ t(\lceil k/2\rceil+1)=n-2k-2\lceil k/2\rceil+2$. Let $H_1$, $H_2$ be the two disjoint copies of $K_{k-2}$, and let $H_3, H_4$ be the two disjoint copies of $K_{\lceil k/2\rceil-1}$ in $H$, respectively. Finally, let $G$ be the graph obtained from $H$ by adding four new vertices $y, z, u, w$, and then joining: every vertex in $H_1$ to all vertices in $H_2$; $y$ to all vertices in $V(H)\cup \{w\}$; $z$ to all vertices in $V(H)\cup \{u\}$; $u$ to all vertices in $\{w\}\cup V(H_2)\cup V(H_3)$; and $w$ to all vertices in $ V(H_1)\cup V(H_4)$, as depicted in Figure~\ref{SatK3Tk}.
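As a quick check of the parameters, using $s\lceil k/2\rceil+ t(\lceil k/2\rceil+1)=n-2k-2\lceil k/2\rceil+2$, the graph $G$ indeed has $n$ vertices:
\begin{align*} |V(G)| = 4 + 2\left(\left\lceil\frac{k}{2}\right\rceil-1\right) + 2(k-2) + \left(n-2k-2\left\lceil\frac{k}{2}\right\rceil+2\right) = n. \end{align*}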
\begin{figure}
\caption{An $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated graph with a unique bad $2$-coloring, where dashed lines indicate blue and solid lines indicate red.}
\label{SatK3Tk}
\end{figure}
Clearly, the coloring $c : E(G) \to \{\text{red, blue} \}$ given in Figure~\ref{SatK3Tk} is a bad $2$-coloring of $G$. We next show that $c$ is the unique bad $2$-coloring of $G$. By Lemma~\ref{blue}(a), each edge $e\in E(H_1)\cup E(H_2)$ must be colored blue because $e$ belongs to $2k-3$ triangles in $G$. Then all edges between $V(H_1)$ and $V(H_2)$ in $G$ must be colored red and the edge $yv$ must be colored red for some $v\in V(H_1)\cup V(H_2)$, because $G_b$ is $T_k$-free. Additionally, $y$ can only be joined by a blue edge to a vertex in either $V(H_1)$ or $V(H_2)$ but not both. It follows that $y$ is complete to one of $V(H_1)$ or $V(H_2)$ in $G_r$. We next show that $y$ is complete to $V(H_2)$ in $G_r$. Suppose that $y$ is complete to $V(H_1)$ in $G_r$. Then $y$ is complete to $V(H_2)$ in $G_b$ since $G_r$ is $K_3$-free, and so $yw \in E_r$ since $G_b$ is $T_k$-free. This implies that $z$ must be complete to $V(H_1)$ in $G_b$. But now $w$ must be complete to $V(H_1)$ in $G_r$, which yields a red $K_3$ on $y, w, v$ for any $v\in V(H_1)$, a contradiction. Hence $y$ is complete to $V(H_2)$ in $G_r$. Then $y$ must be complete to $V(H_1)$ in $G_b$. Since $G_b$ is $T_k$-free, $y$ is complete to $\{w\}\cup(V(H)\backslash V(H_1))$ in $G_r$, and $z$ is complete to $V(H_1)$ in $G_r$. Since $G_r$ is $K_3$-free, we see that all edges in each component of $H$ must be colored blue, and then $z$ must be complete to $V(H_2)$ in $G_b$ and $w$ must be complete to $V(H_4)$ in $G_b$. By symmetry of $y$ and $z$, it follows that $z$ is complete to $\{u\}\cup(V(H)\backslash V(H_2))$ in $G_r$, and $u$ is complete to $V(H_3)$ in $G_b$.
This proves that $c$ is the unique bad $2$-coloring of $G$. It is straightforward to see that $G$ is $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated. Using the facts that $s\lceil k/2\rceil+ t(\lceil k/2\rceil+1)=n-2k-2\lceil k/2\rceil+2$ and $t\le \lceil k/2\rceil -1$, we see that \begin{align*} e(G) & = 2(n-2)+{2k-4\choose 2} +(2(k-2)+1)+(s+2) {\lceil k/2\rceil\choose 2}+ t{\lceil k/2\rceil+1\choose 2} \\ & =(2n+2k^2-7k+3)+(s+2)\lceil k/2\rceil \frac{\lceil k/2\rceil-1}2+t(\lceil k/2\rceil+1)\frac{(\lceil k/2\rceil-1)+1}2\\ &=(2n+2k^2-7k+3)+ \dfrac{\lceil k/2\rceil-1}2 \left((s+2)\lceil k/2\rceil+t(\lceil k/2\rceil+1)\right) + \dfrac{t}{2} \left(\left\lceil\frac{k}{2}\right\rceil+1\right)\\ &=(2n+2k^2-7k+3)+ \dfrac{\lceil k/2\rceil-1}2 \left((s\lceil k/2\rceil+t(\lceil k/2\rceil+1))+2\lceil k/2\rceil\right) + \dfrac{t}{2} \left(\left\lceil\frac{k}{2}\right\rceil+1\right)\\ &\le (2n+2k^2-7k+3)+ \dfrac{\lceil k/2\rceil-1}2 \left(n-2k-2\lceil k/2\rceil+2+2\lceil k/2\rceil\right) + \dfrac{t}{2} \left(\left\lceil\frac{k}{2}\right\rceil+1\right)\\
&\le \left( \frac{3}{2} + \frac{1}{2} \left\lceil\frac{k}{2}\right\rceil \right) n + 2k^2-6k+2-(k-1)\left\lceil\frac{k}{2}\right\rceil+ \dfrac{\lceil k/2\rceil-1}{2} \left(\left\lceil\frac{k}{2}\right\rceil+1\right)\\
&\le \left( \frac{3}{2} + \frac{1}{2} \left\lceil\frac{k}{2}\right\rceil \right) n + 2k^2-6k+\frac32-\left\lceil\frac{k}{2}\right\rceil \left(k- \dfrac{1}{2} \left\lceil\frac{k}{2}\right\rceil -1\right)\\
&= \left( \frac{3}{2} + \frac{1}{2} \left\lceil\frac{k}{2}\right\rceil \right) n + C, \end{align*}
\noindent where $C=2k^2-6k+\frac32-\left\lceil\frac{k}{2}\right\rceil \left( k- \frac{1}{2} \left\lceil\frac{k}{2}\right\rceil -1\right)$. Therefore $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k)) \le e(G)\le \left( \frac{3}{2} + \frac{1}{2} \left\lceil\frac{k}{2}\right\rceil \right) n +C$.
Let $c=\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil + \frac{3}{2} \right) k -2$. We next show that $sat(n, \mathcal{R}_{\min}(K_3, \mathcal{T}_k)) \ge \left( \frac{3}{2} + \frac{1}{2} \left\lceil\frac{k}{2}\right\rceil \right) n -c$. Let $G$ be an $\mathcal{R}_{\min}(K_3, \mathcal{T}_k)$-saturated graph on $n \ge 2k + (\lceil k/2 \rceil +1) \lceil k/2 \rceil -2$ vertices. Then $G+e$ has no bad $2$-coloring for any edge $e\in E(\overline{G})$.
Among all bad $2$-colorings of $G$, let $c : E(G) \to \{\text{red, blue} \}$ be a bad $2$-coloring of $G$ with $|E_r|$ maximum. By the choice of $c$, $G_{r}$ is $K_3$-saturated and $G_b$ is ${T}_k$-free for any $T_k\in \mathcal{T}_k$. Note that $G_b$ is disconnected and every component of $G_b$ contains at most $k - 1$ vertices. By Lemma~\ref{blue}(c), we have
\setcounter{counter}{0}
\noindent \refstepcounter{counter}\label{e:maxdeg} (\arabic{counter})\,\, $\Delta(G_r) \le n-3$ and $G_r$ is 2-connected.
Let $D_1, D_2, \dots, D_p$ be the components of $G_b$. Since $n \ge 2k + (\lceil k/2 \rceil +1) \lceil k/2 \rceil -2$, we have $p\ge 3$. We next show that
\noindent \refstepcounter{counter}\label{e:clique} (\arabic{counter}) \, $G[V(D_i)] =K_{|D_i|}$ for all $i\in\{1,2, \dots, p\}$.
\noindent {\bf Proof.}~~ Suppose that there exists a component of $G_b$, say $D_1$, such that $G[V(D_1)] \ne K_{|D_1|}$. Let $u, v \in V(D_1)$ be such that $uv \notin E(G)$. We obtain a bad $2$-coloring of $G + uv$ from $c$ by coloring the edge $uv$ blue, a contradiction.
\vrule height3pt width6pt depth2pt
\noindent \refstepcounter{counter}\label{e:edgecount} (\arabic{counter}) \, $ \displaystyle \sum_{i = 1}^p e(G[V(D_i)]) \ge \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) n -\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) k$
\noindent {\bf Proof.}~~ By \pr{e:clique}, $G[V(D_i)] =K_{|D_i|}$ for all $i\in\{1,2, \dots, p\}$. By Lemma~\ref{blue}(b), at most two components $D_i$ have fewer than $k/2$ vertices. Let $t$ be the remainder of $n - k$ when divided by $\lceil k/2 \rceil$, and let $s \ge 0$ be an integer such that $n - k = s \lceil k/2 \rceil + t (\lceil k/2 \rceil + 1)$.
It is straightforward to see that $\displaystyle \sum_{i = 1}^p e(G[V(D_i)])$ is minimized when: two of the components, say $D_1, D_2$, are such that $|D_1|, |D_2|< k/2$; $t$ of the components, say $D_3, \dots, D_{t+2}$, are such that $|D_3 |=\cdots=|D_{t+2}| =\lceil k/2 \rceil + 1$; and $s$ of the components, say $D_{t+3}, \dots, D_{t+s+2}$, are such that $|D_{t+3}|=\cdots=|D_{t+s+2}|= \lceil k/2 \rceil$. Using the fact that $s\lceil k/2\rceil+ t(\lceil k/2\rceil+1)=n-k$, it follows that \begin{align*} \sum_{i = 1}^p e(G[V(D_i)]) &\ge s {\lceil k/2\rceil\choose 2} + t {\lceil k/2\rceil + 1\choose 2} \\
&= s \lceil k/2\rceil \frac{\lceil k/2\rceil-1}2+t(\lceil k/2\rceil+1)\frac{(\lceil k/2\rceil-1)+1}2\\
&= \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right)\left(s\left\lceil \frac{k}2\right\rceil+t\left(\left\lceil \frac{k}2\right\rceil+1\right)\right) + \dfrac{t}{2} \left(\left\lceil\frac{k}{2}\right\rceil+1\right)\\
&\ge \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right)(n-k) \\
&\ge \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) n - \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) k. \end{align*}
\vrule height3pt width6pt depth2pt
Assume that $G_b[V(D_i)] =K_{|D_i|}$ for all $i\in\{1,2, \dots, p\}$. By \pr{e:edgecount}, $ |E_b| \ge \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) n -\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) k$. By Lemma~\ref{blue}(b) and Theorem~\ref{delta=3},
$ |E_r| \ge 2n - 5$.
Therefore $e(G) = |E_r| + |E_b| \ge \left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right) n -\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) k -5\ge \left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right) n -c$, where $c=\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil + \frac{3}{2} \right) k -2$, as desired. So we may assume that $G_b[V(D_i)] \ne K_{|D_i|}$ for some $i\in\{1,2, \dots, p\}$, say $i=1$. Let $u_1, u_2 \in V(D_1)$ be such that $u_1u_2\notin E_b$. By \pr{e:clique}, $u_1u_2\in E_r$. Since $G_r$ is $K_3$-saturated, we have $N_r(u_1)\cap N_r(u_2)=\emptyset$. We next show that
\noindent \refstepcounter{counter}\label{e:rednbr} (\arabic{counter}) \,\, for any $j\in\{2,\dots, p\}$ and any $w \in V(D_j)$, if $wu_i\notin E_r$ for some $i\in\{1,2\}$, then $N_r(w)\cap N_r(u_i)\backslash (V(D_1)\cup V(D_j))\ne\emptyset$.
\noindent {\bf Proof.}~~ We may assume that $wu_1\notin E_r$. Since $G_r$ is $K_3$-saturated, we see that $N_r(w)\cap N_r(u_1)\ne\emptyset$. Note that $wu_1\notin E(G)$. If $N_r(w)\cap N_r(u_1)\backslash (V(D_1)\cup V(D_j))=\emptyset$, then we obtain a bad $2$-coloring of $G+wu_1$ from $c$ by coloring $wu_1$ red, and then recoloring all red edges incident with $u_1$ in $D_1$ blue and all red edges incident with $w$ in $D_j$ blue, a contradiction.
\vrule height3pt width6pt depth2pt
\noindent \refstepcounter{counter}\label{e:2rednbr} (\arabic{counter}) \,\,
For any $j\in\{2,\dots, p\}$ and any $w \in V(D_j)$, $|N_r(w)\backslash V(D_j)|\ge2 $.
\noindent {\bf Proof.}~~ This is obvious when $wu_1, wu_2\in E_r$. So we may assume that $wu_1\notin E_r$. Since $N_r(u_1)\cap N_r(u_2)=\emptyset$, it follows from \pr{e:rednbr} that either
$|N_r(w)\backslash (V(D_1)\cup V(D_j))|\ge2$ when $wu_2\notin E(G)$ or $|N_r(w)\backslash V(D_j)|=|N_r(w)\backslash (V(D_1)\cup V(D_j))|+|N_r(w)\cap V(D_1)|\ge1+1=2 $ when $wu_2\in E(G)$. In both cases, $|N_r(w)\backslash V(D_j)|\ge2 $, as desired.
\vrule height3pt width6pt depth2pt
For each vertex $w \in V(G) \backslash V(D_1)$, since $G_r$ is $K_3$-saturated, we see that either $wu_1\notin E_r$ or $wu_2\notin E_r$. Let $P :=\{ w \in V(G) \setminus V(D_1): \, wu_1, wu_2\notin E_r\}$, $Q :=\{ w \in V(G) \setminus V(D_1): \, wu_1\notin E_r, wu_2\in E_r \}$, and $R :=\{ w \in V(G) \setminus V(D_1): \, wu_1\in E_r, wu_2\notin E_r\}$. Further, let $Q_1$ denote the set of vertices $w\in Q$ such that $N_r(w)\cap V(D_1)=\{u_2\}$, and let $R_1$ denote the set of vertices $w\in R$ such that $N_r(w)\cap V(D_1)=\{u_1\} $. Let $Q_2:=Q\backslash Q_1$ and $R_2:=R\backslash R_1$. By definition, $P, Q_1, Q_2, R_1, R_2$ are pairwise disjoint and $|P|+|Q|+|R|=n-|V(D_1)|\ge n-k+1$. Let $H$ be obtained from $G\backslash V(D_1)$ by deleting all edges in $G[V(D_i)]$ for all $i\in\{2,3,\dots, p\}$. Then $E(H)\subset E_r$ and for each edge $e$ in $H$, $e$ is not in $G[V(D_i)]$ for any $i\in\{2,3,\dots, p\}$. For any $w\in Q_1\cup R_1$, by \pr{e:rednbr}, $N_{H}(w)\backslash P\ne\emptyset$. We next show that
\noindent \refstepcounter{counter}\label{e:1rednbr} (\arabic{counter}) \,\, for any $w \in Q_1$, if $w$ is adjacent to exactly one vertex, say $v$, in $H\backslash P$, then $v\in R_2 $.
\noindent {\bf Proof.}~~ We may assume that $w\in V(D_2)$. Since $w\in Q_1$, we have $N_r(w)\cap V(D_1)=\{u_2\}$. By \pr{e:rednbr}, $vu_1\in E_r$, and we may further assume that $v\in V(D_3)$. Then $vu_2\notin E_r$ because $G_r$ is $K_3$-free. Since $D_1$ is a component of $G_b$, there must exist a vertex, say $u\in V(D_1)$, such that $uu_2\in E_b$. Then $wu\notin E_r$ (and so $wu\notin E(G)$) because $N_r(w)\cap V(D_1)=\{u_2\}$. Hence $uv\in E_r$, otherwise, we obtain a bad $2$-coloring of $G+wu$ from $c$ by coloring $wu$ red and then recoloring all edges incident with $w$ in $D_2$ blue. Therefore $v\in R_2$.
\vrule height3pt width6pt depth2pt
By symmetry, for any $w \in R_1$, if $w$ is adjacent to exactly one vertex, say $v$, in $H\backslash P$, then $v\in Q_2 $. We next count the number of edges in $H$.
Since $N_r(u_1)\cap N_r(u_2)=\emptyset$, it follows from \pr{e:rednbr} that for each $w\in P$, $e_H(w, Q\cup R) \ge2$ and so $e_H(P, Q\cup R)\ge 2|P|$.
Let $Q_1^*$ be the set of vertices $w\in Q_1$ such that $w$ is adjacent to exactly one vertex in $H\backslash P$. Similarly, let $R_1^*$ be the set of vertices $w\in R_1$ such that $w$ is adjacent to exactly one vertex in $H\backslash P$. By \pr{e:1rednbr}, $e_H(Q_1^*, R_2)\ge |Q_1^*|$ and $e_H(R_1^*, Q_2)\ge |R_1^*|$. Notice that for any $w\in (Q_1\cup R_1)\backslash (Q_1^*\cup R_1^*)$, $w$ is adjacent to at least two vertices in $H\backslash (P\cup Q_1^*\cup R_1^*)$ and so
$e(H\backslash (P\cup Q_1^*\cup R_1^*))\ge |Q_1\backslash Q_1^*|+|R_1\backslash R_1^*|=|Q_1| + |R_1|-|Q_1^*|-|R_1^*|$. Therefore \begin{align*} e(H)&=e_H(P, Q\cup R)+e_H(Q_1^*, R_2)+e_H(R_1^*, Q_2)+e(H\backslash (P\cup Q_1^*\cup R_1^*))\\
&\ge 2|P|+|Q_1^*|+|R_1^*|+|Q_1| + |R_1|-|Q_1^*|-|R_1^*|\\
&=2|P|+|Q_1|+|R_1|. \end{align*}
Note that $e_G(V(D_1), Q\cup R)\ge |Q_1|+2|Q_2|+|R_1|+2|R_2|=|Q|+|R|+|Q_2|+|R_2|$. We see that $e(H)+e_G(V(D_1), Q\cup R) \ge (2|P|+|Q_1|+|R_1|)+(|Q|+|R|+|Q_2|+|R_2|)=2(|P|+|Q|+|R|)\ge 2n-2k+2$.
By \pr{e:edgecount}, \begin{align*} e(G) &\ge e(H)+e_G(V(D_1), Q\cup R) + \sum_{i = 1}^p e(G[V(D_i)]) \\ &\ge (2n-2k+2)+ \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) n -\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil - \frac{1}{2} \right) k\\
&= \left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right) n - \left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil + \frac{3}{2} \right) k+2\\
&= \left( \frac{3}{2} + \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right) n -c \end{align*} where $c=\left(\frac{1}{2} \left\lceil \frac{k}{2} \right\rceil + \frac{3}{2} \right) k -2$.
This completes the proof of Theorem~\ref{K3Tk}.
\vrule height3pt width6pt depth2pt
\noindent {\bf Conclusion}. For the graphs $G_{odd}$ and $G_{even}$ in the proof of Theorem~\ref{K3T4}, we want to point out here that we found the graph $G_{odd}$ when $d_b(y)=1$, $d_b(z)=2$, and $G_r=J$ with $|B|=2$ and $|C|=4$; and the graph $G_{even}$ when $d_b(y)=d_b(z)=2$, and $G_r=J$ with $|B|=3$ and $|C|=2$. We believe that the method we developed in this paper can be applied to determine $sat(n, \mathcal{R}_{\min}(K_p, T_k))$ for any given tree $T_k$ and any $p\ge3$.
\section*{Acknowledgments}
The authors would like to thank Christian Bosse, Michael Ferrara, and Jingmei Zhang for their helpful discussion. The authors thank the referees for helpful comments.
\end{document}
A Kimura Age to the Kern-Hahn Era: neutrality & selection
Posted on November 9, 2018 by Razib Khan
I'm pretty jaded about a lot of journalism, mostly due to the incentives in the industry driven by consumers and clicks. But Quanta Magazine has a really good piece out, Theorists Debate How 'Neutral' Evolution Really Is. It hits all the right notes (you can listen to one of the researchers quoted, Matt Hahn, on an episode of my podcast from last spring).
As someone who is old enough to remember reading about the 'controversy' more than 20 years ago, it's interesting to see how things have changed and how they haven't. We have so much more data today, so the arguments are really concrete and substantive, instead of shadow-boxing with strawmen. And yet still so much of the disagreement seems to hinge on semantic shadings and understandings even now.
But, as Richard McElreath suggested on Twitter part of the issue is that ultimately Neutral Theory might not even be wrong. It simply tries to shoehorn too many different things into a simple and seductively elegant null model when real biology is probably more complicated than that. With more data (well, exponentially more data) and computational power biologists don't need to collapse all the complexity of evolutionary process across the tree of life into one general model, so they aren't.
Let me finish with a quote from Ambrose, Bishop of Milan, commenting on the suffocation of the Classical religious rites of Late Antiquity:
It is undoubtedly true that no age is too late to learn. Let that old age blush which cannot amend itself. Not the old age of years is worthy of praise but that of character. There is no shame in passing to better things.
Posted in Evolutionary Genetics | Tagged Evolutionary Genetics | 7 Comments on A Kimura Age to the Kern-Hahn Era: neutrality & selection
A historical slice of evolutionary genetics
Posted on October 12, 2018 by Razib Khan
A few friends pointed out that I likely garbled my attribution of who the guiding forces behind the "classical" and "balance" schools were in the post below (Muller & Dobzhansky as opposed to Fisher & Wright as I said). I'll probably do some reading and update the post shortly…but it did make me reflect that in the hurry to keep up on the current literature it is easy to lose historical perspective and muddle what one had learned.
Of course on some level science is not as dependent on history as many other disciplines. The history is "baked-into-the-cake." This is clear when you read The Origin of Species. But if you are interested in a historical and sociological perspective on science, with a heavy dose of narrative biography, I highly recommend Ullica Segerstrale's Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond and Nature's Oracle: The Life and Work of W.D. Hamilton.
Defenders of the Truth in particular paints a broad and vivid picture of a period in the 1960s and later into the 1970s when evolutionary thinkers began to grapple with ideas such as inclusive fitness. E. O. Wilson's Sociobiology famously triggered a counter-reaction by some intellectuals (Wilson was also physically assaulted at the 1978 AAAS meeting). Characters such as Noam Chomsky make cameo appearances.
Segerstrale's Nature's Oracle focuses particularly on the life and times of W. D. Hamilton, though if you want that at high speed and max density, read Narrow Roads of Gene Land, Volume 2. Because Hamilton died before the editing phase, the biographical text is relatively unexpurgated. Hamilton also makes an appearance in The Price of Altruism: George Price and the Search for the Origins of Kindness.
The death of L. L. Cavalli-Sforza reminds us that the last of the students of the first generation of population geneticists are now passing on. With that, a great deal of history is going to be inaccessible. The same is not yet true of the acolytes of W. D. Hamilton, John Maynard Smith, or Robert Trivers.
Idle theories are the devil's workshop
Posted on February 28, 2018 by Razib Khan
In the 1970s Richard C. Lewontin wrote about how the allozyme era finally allowed for the testing of theories which had long been perfected and refined but lay unused like elegant machines without a task. Almost immediately the empirical revolution that Lewontin began in the 1960s kickstarted debates about the nature of selection and neutrality on the molecular level, now that molecular variation was something they could actually explore.
This led to further debates between "neutralists" and "selectionists." Sometimes the debates were quite acrimonious and personal. The most prominent neutralist, Motoo Kimura, took deep offense to the scientific criticisms of the theoretical population geneticist John Gillespie. The arguments around neutral theory in the 1970s eventually spilled over into other areas of evolutionary biology, and prominent public scientists such as Richard Dawkins and Stephen Jay Gould got pulled into it (neither of these two were population geneticists or molecular evolutionists, so one wonders what they truly added besides bluster and publicity).
Today we do not have these sorts of arguments from what I can tell. Why? I think it is the same reason that is the central thesis of Benjamin Friedman's The Moral Consequences of Economic Growth. In it, the author argues that liberalism, broadly construed, flourishes in an environment of economic growth and prosperity. As the pie gets bigger zero-sum conflicts are attenuated.
What's happened in empirical studies of evolutionary biology over the last decade or so is that in genetics a surfeit of genomic data has swamped the field. Some scholars have even suggested that in evolutionary genomics we have way more data than can be analyzed or understood (in contrast to medical genomics, where more data is still useful and necessary). Scientists still have disagreements, but instead of bickering or posturing, they've been trying to dig out from under the mountain of data.
It's easy to be gracious to your peers when you're rich in data….
Synergistic epistasis as a solution for human existence
Posted on May 6, 2017 by Razib Khan
Epistasis is one of those terms in biology which has multiple meanings, to the point that even biologists can get turned around (see this 2008 review, Epistasis — the essential role of gene interactions in the structure and evolution of genetic systems, for a little background). Most generically epistasis is the interaction of genes in terms of producing an outcome. But historically its meaning is derived from the fact that early geneticists noticed that crosses between individuals segregating for a Mendelian characteristic (e.g., smooth vs. curly peas) produced results conditional on the genotype of a secondary locus.
Molecular biologists tend to focus on a classical, and often mechanistic, view, whereby epistasis can be conceptualized as biophysical interactions across loci. But population geneticists utilize a statistical or evolutionary definition, where epistasis describes the extent of deviation from additivity and linearity, with the "phenotype" often being fitness. This goes back to early debates between R. A. Fisher and Sewall Wright. Fisher believed that in the long run epistasis was not particularly important. Wright eventually put epistasis at the heart of his enigmatic shifting balance theory, though according to Will Provine in Sewall Wright and Evolutionary Biology even he had a difficult time understanding the model he was proposing (e.g., Wright couldn't remember what the different axes on his charts actually meant all the time).
These different definitions can cause problems for students. A few years ago I was a teaching assistant for a genetics course, and the professor, a molecular biologist asked a question about epistasis. The only answer on the key was predicated on a classical/mechanistic understanding. But some of the students were obviously giving the definition from an evolutionary perspective! (e.g., they were bringing up non-additivity and fitness) Luckily I noticed this early on and the professor approved the alternative answer, so that graders would not mark those using a non-molecular answer down.
My interest in epistasis was fed to a great extent in the middle 2000s by my reading of Epistasis and the Evolutionary Process. Unfortunately not too many people read this book. I believe this is so because when I just went to look at the Amazon page it told me that "Customers who viewed this item also viewed" Robert Drews' The End of the Bronze Age. As it happened I read this book at about the same time as Epistasis and the Evolutionary Process…and to my knowledge I'm the only person who has a very deep interest in statistical epistasis and Mycenaean Greece (if there is someone else out there, do tell).
In any case, when I was first focused on this topic genomics was in its infancy. Papers with 50,000 SNPs in humans were all the rage, and the HapMap paper had literally just been published. A lot has changed.
So I was interested to see this come out in Science, Negative selection in humans and fruit flies involves synergistic epistasis (preprint version). Since the authors are looking at humans and Drosophila and because it's 2017 I assumed that genomic methods would loom large, and they do.
And as always on the first read through some of the terminology got confusing (various types of statistical epistasis keep getting renamed every few years it seems to me, and it's hard to keep track of everything). So I went to Google. And because it's 2017 a citation of the paper and further elucidation popped up in Google Books in Crumbling Genome: The Impact of Deleterious Mutations on Humans. Weirdly, or not, the book has not been published yet. Since the author is the second to last author on the above paper it makes sense that it would be cited in any case.
So what's happening in this paper? Basically they are looking for reduced variance in the number of really bad mutations, because a particular type of epistasis amplifies their deleterious impact (fitness is almost always really hard to measure, so you want to look at proxy variables).
Because de novo mutations are rare, they estimate about 7 are in functional regions of the genome (I think this may be high actually), and that the distribution should be Poisson. This distribution just tells you that the mean number of mutations and the variance of the number of mutations should be the same (e.g., if the mean is 5 the variance should be 5).
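The mean-equals-variance property is easy to check numerically. Below is a minimal sketch (stdlib only; the sampler and function names are mine, and λ = 7 is just the de novo figure quoted above):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: multiply uniforms until the product falls below exp(-lam).
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mutation_count_stats(lam=7.0, n_individuals=50_000, seed=1):
    """Simulate de novo functional mutation counts per individual and
    return the sample mean and sample variance, which for a Poisson
    distribution should both be close to lam."""
    rng = random.Random(seed)
    counts = [poisson_sample(lam, rng) for _ in range(n_individuals)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
    return mean, var

mean, var = mutation_count_stats()
# Both numbers hover around 7, so the dispersion ratio var/mean is near 1.
```

The paper's test amounts to asking whether the observed var/mean ratio for loss-of-function counts falls below this neutral benchmark of 1.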
Epistasis refers (usually) to interactions across loci. That is, different genes at different locations in the genome. Synergistic epistasis means that the total cumulative fitness after each successive mutation drops faster than the sum of the negative impact of each mutation. In other words, the negative impact is greater than the sum of its parts. In contrast, antagonistic epistasis produces a situation where new mutations on the tail of the distribution cause a lower decrement in fitness than you'd expect through the sum of its parts (diminishing returns on mutational load when it comes to fitness decrements).
These two dynamics have an effect the linkage disequilibrium (LD) statistic. This measures the association of two different alleles at two different loci. When populations are recently admixed (e.g., Brazilians) you have a lot of LD because racial ancestry results in lots of distinctive alleles being associated with each other across genomic segments in haplotypes. It takes many generations for recombination to break apart these associations so that allelic state at one locus can't be used to predict the odds of the state at what was an associated locus. What synergistic epistasis does is disassociate deleterious mutations. In contrast, antagonistic epistasis results in increased association of deleterious mutations.
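In its simplest form the LD statistic is just a covariance of allelic states across two loci. A toy computation (the function and the haplotype counts are illustrative, not from any dataset):

```python
def linkage_disequilibrium(hap_counts):
    """D = p_AB - p_A * p_B, computed from counts of the four two-locus
    haplotypes ('AB', 'Ab', 'aB', 'ab'). D = 0 means the allele carried
    at one locus tells you nothing about the allele at the other."""
    n = sum(hap_counts.values())
    p_ab_joint = hap_counts['AB'] / n
    p_a = (hap_counts['AB'] + hap_counts['Ab']) / n
    p_b = (hap_counts['AB'] + hap_counts['aB']) / n
    return p_ab_joint - p_a * p_b

# Freshly admixed population: A and B always travel together, so LD is maximal.
d_admixed = linkage_disequilibrium({'AB': 50, 'Ab': 0, 'aB': 0, 'ab': 50})  # 0.25

# After many generations of recombination the four haplotypes equalize.
d_equilibrium = linkage_disequilibrium({'AB': 25, 'Ab': 25, 'aB': 25, 'ab': 25})  # 0.0
```

Synergistic epistasis pushes this D statistic negative for pairs of deleterious alleles (they co-occur less often than chance), while antagonistic epistasis pushes it positive.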
Why? Because of selection. If a greater number of mutations means huge fitness hits, then there will be strong selection against individuals who randomly segregate out with higher mutational loads. This means that the variance of the mutational load is going to be lower than the value of the mean.
How do they figure out mutational load? They focus on the distribution of LoF mutations. These are extremely deleterious mutations which are the most likely to be a major problem for function and therefore a huge fitness hit. What they found was that the distribution of LoF mutations exhibited a variance which was 90-95% of a null Poisson distribution. In other words, there was stronger selection against high mutation counts, as one would predict due to synergistic epistasis.
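The logic of that test, that synergistic epistasis should leave survivors underdispersed relative to Poisson, can be sketched with a toy simulation. The quadratic fitness penalty and all parameter values here are my illustrative assumptions, not the paper's model:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method for Poisson draws.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fano_after_selection(lam=7.0, a=0.05, b=0.01, n=100_000, seed=2):
    """Draw mutation counts k ~ Poisson(lam); each individual survives with
    probability exp(-(a*k + b*k**2)). The b*k**2 term makes each extra
    mutation hurt more than the last (synergistic epistasis). Returns the
    Fano factor (variance / mean) of the survivors' counts."""
    rng = random.Random(seed)
    survivors = []
    for _ in range(n):
        k = poisson_sample(lam, rng)
        if rng.random() < math.exp(-(a * k + b * k * k)):
            survivors.append(k)
    mean = sum(survivors) / len(survivors)
    var = sum((k - mean) ** 2 for k in survivors) / (len(survivors) - 1)
    return var / mean

# With b > 0 the Fano factor drops below 1 (underdispersion), mirroring the
# observed variance of 90-95% of the Poisson expectation; with b = 0 selection
# is loglinear in k and the counts stay Poisson (Fano ~ 1).
```

The b = 0 case is the key control: purely multiplicative (non-epistatic) selection rescales a Poisson into another Poisson, so only the quadratic interaction term produces the variance deficit the paper is hunting for.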
They conclude:
Thus, the average human should carry at least seven de novo deleterious mutations. If natural selection acts on each mutation independently, the resulting mutation load and loss in average fitness are inconsistent with the existence of the human population (1 − e−7 > 0.99). To resolve this paradox, it is sufficient to assume that the fitness landscape is flat only outside the zone where all the genotypes actually present are contained, so that selection within the population proceeds as if epistasis were absent (20, 25). However, our findings suggest that synergistic epistasis affects even the part of the fitness landscape that corresponds to genotypes that are actually present in the population.
Overall this is fascinating, because evolutionary genetic questions which were still theoretical a little over ten years ago are now being explored with genomic methods. This is part of why I say genomics did not fundamentally revolutionize how we understand evolution. There were plenty of models and theories. Now we are testing them extremely robustly and thoroughly.
Addendum: Reading this paper reinforces to me how difficult it is to keep up with the literature, and how important it is to know the literature in a very narrow area to get the most out of a paper. Really the citations are essential reading for someone like me who just "drops" into a topic after a long time away….
Citation: Science, Negative selection in humans and fruit flies involves synergistic epistasis.
Why the rate of evolution may only depend on mutation
Posted on April 23, 2017 by Razib Khan
Sometimes people think evolution is about dinosaurs.
It is true that natural history plays an important role in inspiring and directing our understanding of evolutionary process. Charles Darwin was a natural historian, and evolutionary biologists often have strong affinities with the natural world and its history. Though many people exhibit a fascination with the flora and fauna around us during childhood, often the greatest biologists retain this wonderment well into adulthood (if you read W. D. Hamilton's collections of papers, Narrow Roads of Gene Land, which have autobiographical sketches, this is very evidently true of him).
But another aspect of evolutionary biology, which began in the early 20th century, is the emergence of formal mathematical systems of analysis. So you have fields such as phylogenetics, which have gone from intuitive and aesthetic trees of life, to inferences made using the most new-fangled Bayesian techniques. And, as told in The Origins of Theoretical Population Genetics, in the 1920s and 1930s a few mathematically oriented biologists constructed much of the formal scaffold upon which the Neo-Darwinian Synthesis was constructed.
The product of evolution
At the highest level of analysis evolutionary process can be described beautifully. Evolution is beautiful, in that its end product generates the diversity of life around us. But a formal mathematical framework is often needed to clearly and precisely model evolution, and so allow us to make predictions. R. A. Fisher's aim when he wrote The Genetical Theory of Natural Selection was to create for evolutionary biology something equivalent to the laws of thermodynamics. I don't really think he succeeded in that, though there are plenty of debates around something like Fisher's fundamental theorem of natural selection.
But the revolution of thought that Fisher, Sewall Wright, and J. B. S. Haldane unleashed has had real yields. As geneticists they helped us reconceptualize evolutionary process as more than simply heritable morphological change, but an analysis of the units of heritability themselves, genetic variation. That is, evolution can be imagined as the study of the forces which shape changes in allele frequencies over time. This reduces a big domain down to a much simpler one.
Genetic variation is concrete currency with which one can track evolutionary process. Initially this was done via inferred correlations between marker traits and particular genes in breeding experiments. Ergo, the origins of the "fly room".
But with the discovery of DNA as the physical substrate of genetic inheritance in the 1950s the scene was set for the revolution in molecular biology, which also touched evolutionary studies with the explosion of more powerful assays. Lewontin & Hubby's 1966 paper triggered an order of magnitude increase in our understanding of molecular evolution through both theory and results.
The theoretical side occurred in the form of the development of the neutral theory of molecular evolution, which also gave birth to the nearly neutral theory. Both of these theories hold that most of the polymorphism within species and divergence between species is due to random processes. In particular, genetic drift. As a null hypothesis neutrality was very dominant for the past generation, though in recent years some researchers are suggesting that selection has been undervalued as a parameter for various reasons.
Setting aside the live scientific debates, which continue to this day, one of the predictions of neutral theory is that the rate of evolution will depend only on the rate of mutation. More precisely, the rate of substitution of new mutations (where the allele goes from a single copy to fixation at ~100%) is proportional to the rate of mutation of new alleles. Population size doesn't matter.
The algebra behind this is straightforward.
First, remember that the frequency of a new mutation within a population is $\frac{1}{2N}$, where $N$ is the population size (the $2$ is because we're assuming diploid organisms with two gene copies). This is also the probability of fixation of a new mutation in a neutral scenario; its probability is just proportional to its initial frequency (it's a random walk process between 0 and 1.0 proportions). The rate of mutation is defined by $\mu$, the number of expected mutations at a given site per generation (this is a pretty small value, for humans it's on the order of $10^{-8}$). Again, there are $2N$ gene copies, so you expect $2N\mu$ new mutations per generation.
The probability of fixation of a new mutations multiplied by the number of new mutations is:
\[
\frac{1}{2N} \times 2N\mu = \mu
\]
So there you have it. The rate of fixation of these new mutations is just a function of the rate of mutation.
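The $\frac{1}{2N}$ fixation probability that anchors this algebra is easy to check by brute force with a toy Wright-Fisher simulation (the population size and trial count below are arbitrary small values I chose for speed):

```python
import random

def neutral_fixation_probability(N=20, trials=20_000, seed=3):
    """Follow a single new neutral mutation (1 copy among 2N gene copies)
    under Wright-Fisher resampling: each generation, every one of the 2N
    copies in the next generation descends from a random copy in the
    current one. Returns the fraction of runs in which the mutation fixes;
    neutral theory predicts 1/(2N)."""
    rng = random.Random(seed)
    two_n = 2 * N
    fixed = 0
    for _ in range(trials):
        copies = 1
        while 0 < copies < two_n:
            p = copies / two_n
            # binomial(2N, p) sampling, written out longhand
            copies = sum(1 for _ in range(two_n) if rng.random() < p)
        fixed += copies == two_n
    return fixed / trials

# With N = 20 the estimate lands near 1/(2N) = 0.025, and multiplying by the
# 2N * mu new mutations arising per generation recovers the substitution
# rate mu, with N cancelling out.
```

Note that drift is far stronger in the small population than in a large one, yet the cancellation in the algebra means the substitution rate would come out the same either way.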
Simple formalisms like this have a lot more gnarly math that extend them and from which they derive. But they're often pretty useful to gain a general intuition of evolutionary processes. If you are genuinely curious, I would recommend Elements of Evolutionary Genetics. It's not quite a core dump, but it is a way you can borrow the brains of two of the best evolutionary geneticists of their generation.
Also, you will be able to answer the questions on my survey better the next time!
Fisherianism in the genomic era
Posted on April 12, 2017 by Razib Khan
There are many things about R. A. Fisher that one could say. Professionally he was one of the founders of evolutionary genetics and statistics, and arguably the second greatest evolutionary biologist after Charles Darwin. With his work in the first few decades of the 20th century he reconciled the quantitative evolutionary framework of the school of biometry with mechanistic genetics, and formalized evolutionary theory in The Genetical Theory of Natural Selection.
He was also an asshole. This is clear in the major biography of him, R.A. Fisher: The Life of a Scientist. It was written by his daughter. But The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century also seems to indicate he was a dick. And W. D. Hamilton's Narrow Roads of Gene Land portrays Fisher as rather cold and distant, despite the fact that Hamilton idolized him.
Notwithstanding his unpleasant personality, R. A. Fisher seems to have been a veritable mentat in his early years. Much of his thinking crystallized in the first few decades of the 20th century, when genetics was a new science and mathematical methods were being brought to bear on a host of topics. It would be decades until DNA was understood to be the substrate of heredity. Instead of deriving from molecular first principles which were simply not known in that day, Fisher and his colleagues constructed a theoretical formal edifice which drew upon patterns of inheritance that were evident in lineages of organisms that they could observe around them (Fisher had a mouse colony which he utilized now and then to vent his anger by crushing mice with his bare hands). Upon that observational scaffold they placed a sturdy superstructure of mathematical formality. That edifice has been surprisingly robust down to the present day.
One of Fisher's frameworks which still gives insight is the geometric model of the distribution of fitness of mutations. If an organism is near its optimum of fitness, then large jumps in any direction will reduce its fitness. In contrast, small jumps have some probability of getting closer to the optimum of fitness. In plainer language, mutations of large effect are bad, and mutations of small effect are not as bad.
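The geometric intuition is easy to make concrete. The hypothetical Monte Carlo sketch below (names and parameters are mine, not Fisher's) puts a phenotype at distance d from the optimum in an n-dimensional trait space, fires random mutations of size r in a uniformly random direction, and counts how often the mutation lands closer to the optimum: tiny mutations are beneficial almost half the time, big ones almost never.

```python
import math, random

def p_beneficial(n, d, r, trials=20000, seed=2):
    """Monte Carlo estimate of the chance that a random mutation of
    magnitude r moves a phenotype at distance d from the optimum
    closer to it, in n trait dimensions (Fisher's geometric model)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # random direction on the n-sphere, scaled to length r
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        step = [r * x / norm for x in v]
        # phenotype starts at (d, 0, ..., 0); the optimum is the origin
        new_sq = (d + step[0]) ** 2 + sum(s * s for s in step[1:])
        if new_sq < d * d:
            hits += 1
    return hits / trials

# small mutations are nearly 50/50; large ones are almost always bad
print(p_beneficial(10, 1.0, 0.05), p_beneficial(10, 1.0, 1.0))
```

Note how the first probability sits just under one half while the second collapses toward zero, which is exactly the "large jumps are bad" claim above.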
A new paper in PNAS loops back to this framework, Determining the factors driving selective effects of new nonsynonymous mutations:
Our study addresses two fundamental questions regarding the effect of random mutations on fitness: First, do fitness effects differ between species when controlling for demographic effects? Second, what are the responsible biological factors? We show that amino acid-changing mutations in humans are, on average, more deleterious than mutations in Drosophila. We demonstrate that the only theoretical model that is fully consistent with our results is Fisher's geometrical model. This result indicates that species complexity, as well as distance of the population to the fitness optimum, modulated by long-term population size, are the key drivers of the fitness effects of new amino acid mutations. Other factors, like protein stability and mutational robustness, do not play a dominant role.
In the title of the paper itself is something that would have been alien to Fisher's understanding when he formulated his geometric model: the term "nonsynonymous" to refer to mutations which change the amino acid corresponding to the triplet codon. The paper is understandably larded with terminology from the post-DNA and post-genomic era, and yet comes to the conclusion that a nearly blind statistical geneticist from about a century ago correctly adduced the nature of mutation's effects on fitness in organisms.
The authors focused on two primary species with different histories, both well characterized in the evolutionary genomic literature: humans and Drosophila. The models they tested are as follows:
Basically they checked the empirical distribution of the site frequency spectra (SFS) of the nonsynonymous variants against expected outcomes based on particular details of demographics, which were inferred from synonymous variation. Drosophila have effective population sizes orders of magnitude larger than humans, so if that is not taken into account, then the results will be off. There are also a bunch of simulations in the paper to check for robustness of their results, and they also caveat the conclusion with admissions that other models besides the Fisherian one may play some role in their focal species, and more in other taxa. A lot of this strikes me as accruing through the review process, and I don't have the time to replicate all the details to confirm their results, though I hope some of the reviewers did so (again, I suspect that the reviewers were demanding some of these checks, so they definitely should have in my opinion).
In the Fisherian model more complex organisms are more fine-tuned due to pleiotropy and other such dynamics. So new mutations are more likely to deviate away from the optimum. This is the major finding that they confirmed. What does "complex" mean? The Drosophila genome is less than 10% of the human genome's size, but the migratory locust has twice as large a genome as humans, while wheat has a sequence more than five times as large. But organism to organism, it does seem that Drosophila has less complexity than humans. And they checked with other organisms besides their two focal ones…though the genomes there are not as complete presumably.
As I indicated above, the authors believe they've checked for factors such as background selection, which may confound selection coefficients on specific mutations. The paper is interesting as much for the fact that it illustrates how powerful analytic techniques developed in a pre-DNA era were. Some of the models above are mechanistic, and require a certain understanding of the nature of molecular processes. And yet they don't seem as predictive as a more abstract framework!
Citation: Christian D. Huber, Bernard Y. Kim, Clare D. Marsden, and Kirk E. Lohmueller, Determining the factors driving selective effects of new nonsynonymous mutations PNAS 2017 ; published ahead of print April 11, 2017, doi:10.1073/pnas.1619508114
Inference of genetic relatedness between viral quasispecies from sequencing data
Volume 18 Supplement 10
Selected articles from the 6th IEEE International Conference on Computational Advances in Bio and Medical Sciences (ICCABS): genomics
Olga Glebova1,
Sergey Knyazev1,
Andrew Melnyk1,
Alexander Artyomenko1,
Yury Khudyakov2,
Alex Zelikovsky1 &
Pavel Skums1,2
RNA viruses such as HCV and HIV mutate at extremely high rates, and as a result, they exist in infected hosts as populations of genetically related variants. Recent advances in sequencing technologies make it possible to identify such populations at great depth. In particular, these technologies provide new opportunities for inference of relatedness between viral samples and identification of transmission clusters and sources of infection, which are crucial tasks for viral outbreak investigations.
We present (i) an evolutionary simulation algorithm Viral Outbreak InferenCE (VOICE) inferring genetic relatedness, (ii) an algorithm MinDistB detecting possible transmission using minimal distances between intra-host viral populations and sizes of their relative borders, and (iii) a non-parametric recursive clustering algorithm Relatedness Depth (ReD) analyzing clusters' structure to infer possible transmissions and their directions. All proposed algorithms were validated using real sequencing data from HCV outbreaks.
All algorithms are applicable to the analysis of outbreaks of highly heterogeneous RNA viruses. Our experimental validation shows that they can successfully identify genetic relatedness between viral populations, as well as infer transmission clusters and outbreak sources.
Inferring transmission clusters, transmission directions, and sources of outbreaks from viral sequencing data is crucial for viral outbreak investigation. Outbreaks of RNA viruses, such as Human Immunodeficiency Virus (HIV) and Hepatitis C virus (HCV), are particularly dangerous and pose a significant problem for public health. It is well known that genomes of RNA viruses mutate at extremely high rates [1]. As a result, RNA viruses exist in infected hosts as populations of closely related variants called quasispecies [2, 3]. However, only recently has the progress of sequencing technologies made it possible to sample such populations at great depth. Consequently, the contribution of sequencing technologies to molecular surveillance of viral disease epidemic spread is becoming more and more substantial [10, 11].
Computational methods can be used to infer transmission characteristics from sequencing data. The first question usually is whether two viral populations belong to the same outbreak. The methods typically utilize the simple observation that all samples from the same outbreak are genetically related, so they use some measure of genetic relatedness as a predictor for epidemiological relatedness [10–12].
The second question is which samples constitute isolated outbreaks. For this purpose, we define a transmission cluster as a connected set of genetically related viral populations. The third question we address in this article is "Who is the source of infection?". This question is the most difficult to answer, and there have been only a few attempts to address it computationally using solely genomic data [13] without invoking additional epidemiological information [14]. To the best of our knowledge, there is still no freely available computational tool for this problem.
Computational methods for detection of viral transmissions and inference of transmission clusters are often consensus-based, i.e. they analyze only a single representative sequence per intra-host population (for example, consensus sequence). Such methods assign two hosts into one transmission cluster, if the distances between corresponding sequences do not exceed a predefined threshold [10, 11]. Although consensus-based methods proved to be useful, they do not take into account intra-host viral diversity. Inclusion of whole intra-host populations into analysis is important, because minor viral variants are frequently responsible for transmission of RNA viruses [15, 16].
Recently published computational approach (further referred to as MinDist) [12] uses the minimal genetic distance between sequences of two viral populations as a measure of genetic relatedness of intra-host viral populations. Since minimal genetic distances between different pairs of populations can be achieved on various pairs of sequences, this approach takes into account intra-host diversity.
However, both consensus-based and MinDist approaches have further limitations. First of all, they do not allow detection of transmission directions, which is crucial for identifying outbreak sources and transmission histories. Secondly, the distance thresholds utilized by both approaches may be derived from analysis of limited or incomplete experimental data and are highly data- and situation-specific, with different viruses or even different genomic regions of the same virus requiring specifically established thresholds.
In this paper, we address the above limitations by proposing two novel algorithms, ReD and VOICE, as well as by suggesting an improvement of the MinDist algorithm. The new algorithms allow inference of important epidemiological characteristics, including genetic relatedness, directions of transmissions and transmission clusters.
Relatedness Depth (ReD) method uses clustering-based analysis of intra-host viral populations. It is a non-parametric algorithm, so it does not rely on any virus-specific threshold values to predict epidemiological characteristics.
Viral Outbreak InferenCE (VOICE) is a simulation-based method which imitates viral evolution as a Markov process in the space of observed viral haplotypes.
MinDistB method is a modification of MinDist [12], which takes into account the sizes of relative borders of each pair of viral populations.
The proposed algorithms were validated on the experimental data obtained from HCV outbreaks. Comparative results suggest that our methods are efficient in epidemiological characteristics inference.
Relatedness depth (ReD) algorithm
ReD is a deterministic algorithm based on hierarchical clustering. The key concept of this method is a k-clustered intersection of viral populations (we used a similar idea previously for combinatorial pooling [17]). For two sets of viral sequences P 1 and P 2, their k-clustered intersection \(P_{1} \overline {\cap } P_{2}\) is calculated as follows:
Partition the union P 1∪P 2 into k clusters C 1,...,C k ;
\(P_{1} \overline {\cap } P_{2} = \bigcup \limits _{i\in B} C_{i}\), where B={i∈{1,...,k}:C i ∩P 1≠∅,C i ∩P 2≠∅}, i.e. \(P_{1} \overline {\cap } P_{2}\) is the union of clusters, which contain sequences from both P 1 and P 2 (see Fig. 1);
k-clustered intersection of two viral populations (blue and red). Union of populations is partitioned into k=2 clusters (dashed and solid). Dashed cluster is the k-clustered intersection. Direction of transmission is from the blue population to the red population
The parameter k is a scale of clustering. In particular, populations P 1 and P 2 are separable, if \(P_{1} \overline {\cap } P_{2} = \emptyset \), while the fact that \(P_{1} \overline {\cap } P_{2} \ne \emptyset \) indicates that they may be genetically related. In the most extreme case \(P_{1} \overline {\cap } P_{2} = P_{1}\cup P_{2}\), i.e. populations are completely inseparable under the scale k.
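The two-step definition above can be sketched in a few lines. The paper's implementation clusters via a neighbor-joining tree in Matlab; the Python sketch below instead takes any clustering routine as a parameter and, for the demonstration, uses a toy one-dimensional "split at the median" stand-in (all helper names are illustrative).

```python
def k_clustered_intersection(p1, p2, k, cluster):
    """k-clustered intersection of two sequence sets: cluster the union
    into k parts and keep the parts containing sequences from both
    populations. `cluster(seqs, k)` returns a list of k lists."""
    union = list(p1) + list(p2)
    parts = cluster(union, k)
    s1, s2 = set(p1), set(p2)
    mixed = [c for c in parts
             if any(x in s1 for x in c) and any(x in s2 for x in c)]
    return [x for c in mixed for x in c]

# toy stand-in clustering: split 1-D "sequences" (numbers) at the median
def split_in_two(seqs, k):
    assert k == 2
    srt = sorted(seqs)
    mid = len(srt) // 2
    return [srt[:mid], srt[mid:]]

# separable populations: empty intersection
print(k_clustered_intersection([1, 2, 3], [4, 10, 11], 2, split_in_two))
# completely inseparable populations: intersection is the whole union
print(k_clustered_intersection([1, 2, 5], [4, 10, 11], 2, split_in_two))
```

The first call returns an empty intersection (the populations are separable at scale k=2), while the second returns the entire union (completely inseparable), which is the case where Algorithm 1 increases k.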
The degree of confidence that the samples are genetically close is represented by the relatedness depth d(P 1,P 2), which is calculated by Algorithm 1. Simply speaking, Algorithm 1 tries to recursively separate populations P 1 and P 2. At each iteration, k-clustered intersection is calculated. If two populations are separable, then the algorithm stops. Otherwise, it continues the separation of sequences from P 1 and P 2 within their k-clustered intersection. The separation depth is a depth of this recursion. It is possible that at some iterations of Algorithm 1 two populations are completely inseparable under a current clustering scale. In this case, the scale k is increased and k-clustered intersection is recalculated. The initial value of k used by Algorithm 1 is k=2.
k-clustered intersections depend on a clustering method. Our implementation uses a hierarchical clustering based on neighbor-joining tree (as implemented in Matlab (MathWorks, Natick, MA)). The algorithm utilizes a standard Jukes-Cantor distance which is based on the simplest substitution-based evolutionary model.
Clustered intersections also allow for estimating the direction of transmissions. It is reasonable to assume that if two hosts share a population, then a host with more heterogeneous population is more likely to be the transmission source [18]. Formally, if \(I = P_{1} \overline {\cap } P_{2}\), P 1⊆I and P 2 ∖ I≠∅, then we assume that probable transmission direction is from P 2 to P 1 (see Fig. 1). The direction is defined according to the first occurrence of such situation during execution of Algorithm 1. Note that in some cases direction may not be identified.
Given the collection of viral populations \(\mathcal {P} = \{P_{1},...,P_{n}\}\), ReD produces the weighted directed genetic relatedness graph G=(V,A,d) with \(V=\mathcal {P}\). An arc (P i ,P j ) is in A whenever populations P i and P j are genetically related, i.e., have sufficiently high relatedness depth; the direction of an arc corresponds to the estimated direction of transmission and its weight to the relatedness depth. Transmission clusters are calculated as weakly connected components of the digraph G. To determine transmission clusters, the simplest depth cutoff T=1 can be used. In addition, only components containing at least one arc a of weight d(a)≥2 were considered as reliable. For each reliable component, a source s of the corresponding outbreak is identified as a vertex with highest eigenvector centrality.
Viral outbreak inference (VOICE) simulation method
VOICE is another approach to predict epidemiological characteristics. Unlike ReD, it is not deterministic. Instead, it simulates the process of evolution from one viral population (source) into another (recipient) as a Markov process on a union of both populations. VOICE starts evolution from a subset of source sequences called the border set and estimates the number of generations required to acquire a genetic heterogeneity observed in the recipient.
Formally, given two sets of viral sequences P 1 and P 2, VOICE simulates viral evolution to estimate times t 12 and t 21 needed to cover all sequences from the recipient population under the assumptions that first and second host were sources of infection. Based on the value min{t 12,t 21}, the algorithm decides whether the populations are related. The direction of possible transmission between the related pair is assumed to follow the direction which requires less time.
The simulation starts from the δ-border set B 1, which contains viral variants that are likely the closest to variants transmitted between P 1 and P 2. It is defined as the set of vertices of P 1 minimizing pairwise Hamming distance D between vertices from P 1 and P 2 up to a constant δ:
$$B_{1} = \left\{u\in P_{1} : \exists v\in P_{2} ~~ D(u,v) = \min_{x\in P_{1},y\in P_{2}}D(x,y) + \delta \right\} $$
(see Fig. 2). The constant δ is a parameter, with the default value 1.
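A minimal sketch of the δ-border set, reading the definition as "within δ of the minimum inter-population Hamming distance" (consistent with the ≤ formulation used for the δ-crossing later on); the helper names are illustrative:

```python
def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def delta_border(p1, p2, delta=1):
    """δ-border set B1 of population p1 with respect to p2: variants of
    p1 within delta of the minimum inter-population Hamming distance."""
    dmin = min(hamming(u, v) for u in p1 for v in p2)
    return [u for u in p1
            if any(hamming(u, v) <= dmin + delta for v in p2)]

p1 = ["ACGT", "ACGA", "TTTT"]
p2 = ["ACGG", "GGGG"]
print(delta_border(p1, p2, delta=1))  # TTTT is too far and is excluded
```

Here the minimum distance is 1 (e.g. ACGT vs ACGG), so with δ=1 the border keeps the two p1 variants within distance 2 of p2 and drops TTTT.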
δ-Crossing between two viral populations P 1 and P 2, i.e. pairs (u,v) with D(u,v) within δ of the minimum inter-population distance; (a) |B δ |=5; (b) |B δ |=2
The simulated evolutionary process is carried out in the evolutionary space represented by the variant graph G(B 1,P 2), which is constructed as follows. First, construct a union of all minimal spanning trees of the complete graph on a vertex set B 1∪P 2 with the edge weights equal to Hamming distances between variants (sometimes referred to as a pathfinder network PFNet(n−1,∞) [19, 20]). Then substitute every edge in the graph with two directed edges of the same weight. Next, subdivide each edge (u 1,u 2) of weight w≥2 with w−1 vertices v 1,...,v w−1 and add multiple directed edges as follows: add w−1 edges between vertices u 1 and v 1; w−2 edges between v 1 and v 2; and so forth, as shown in Fig. 3. This model can be explained as follows: to mutate from vertex u 1 to u 2 during simulation, there should occur mutations at w positions that are different between u 1 and u 2. During the first step, simulation can mutate any of w positions, then any of w−1 positions on the second step and so forth.
Edge subdividing
The simulation starts from all border vertices B 1 and runs until all the vertices of the population P 2 are reached. At the beginning of the simulation, border vertices get count equal to 1, and the rest of the vertices get count 0. Each tick simulates variant replication by updating vertex counts according to one of the three following scenarios happening with the specified probabilities (see Fig. 4). First, if during replication there are no mutations, then the vertex v replicates itself and its count label is incremented. This happens with the probability p 1 (1). Second, the vertex can mutate into one of its neighboring vertices with probability p 2 (see Eq. (2)), in which case the count of the neighbor is incremented. Finally, with probability p 3, the vertex does not produce any viable offspring, in which case vertex counts are not changed. If the count of a vertex reaches the maximum allowed variant population size C max , then it is not increased. The probabilities of these scenarios are calculated as follows:
$$\begin{array}{@{}rcl@{}} p_{1} & = & (1-3\epsilon)^{L} \end{array} $$
All possible moves of a vertex v
$$\begin{array}{@{}rcl@{}} p_{2} & = & p_{1}\frac{\epsilon}{1-3\epsilon} \end{array} $$
$$\begin{array}{@{}rcl@{}} p_{3} & = & 1 - p_{1} - p_{2}\deg^{-}(v) \end{array} $$
where ε is the mutation rate, L is the genome length and deg−(v) is an outdegree of a vertex v.
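The three probabilities follow directly from Eqs. (1)-(3). The sketch below is illustrative (parameter values are my own choices, with L = 264 matching the E1/E2 fragment length used in the validation data); note that by construction p 1, p 2 and p 3 account for all outcomes of one replication event.

```python
def move_probabilities(eps, L, out_degree):
    """Per-tick event probabilities for a vertex in VOICE's variant graph:
    p1 = error-free replication, p2 = mutation along one outgoing edge,
    p3 = no viable offspring.  eps is the per-site mutation rate and
    L the genome length, as in Eqs. (1)-(3)."""
    p1 = (1 - 3 * eps) ** L
    p2 = p1 * eps / (1 - 3 * eps)
    p3 = 1 - p1 - p2 * out_degree
    return p1, p2, p3

# illustrative values: eps is an assumed mutation rate, L = 264 bp
p1, p2, p3 = move_probabilities(eps=1e-4, L=264, out_degree=3)
print(p1, p2, p3)
```

With these inputs, error-free replication dominates (p 1 ≈ 0.92), single-edge mutations are rare, and the three outcomes over the vertex's out-degree sum to one, as required for a probability distribution over moves.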
Algorithm 2 represents the flow of the method. The time t 12 is computed as the average over s simulations. The same procedure is repeated for the opposite direction of the transmission with its border set B 2 and the time t 21 is computed. The value min{t 12,t 21} determines which direction of transmission is more likely.
The sizes of observed intra-host viral populations may significantly vary due to sampling and sequencing biases. Since the larger population will require more time to cover, the estimation of t 12 and t 21 could be biased. VOICE avoids such biases by normalizing the intra-host population sizes. The deterministic normalization partitions each viral population into q clusters using hierarchical clustering and each cluster is replaced with the consensus of its members. The subsampling normalization randomly chooses q sequences from each population. The procedure is repeated r times, and the final result is an average over all subsamplings.
Identification of genetic relatedness, transmission directions, clusters and sources of outbreaks
Analogously to ReD, VOICE produces a weighted directed genetic relatedness graph G=(V,A,w) with \(V=\mathcal {P}\). An arc P i P j is in A whenever populations P i and P j are genetically related, i.e., value min{t ij ,t ji } is less than a threshold. Weakly connected components of G represent transmission clusters or outbreaks. To determine the source of each outbreak, we build a Shortest Paths Tree (SPT) for every vertex in the corresponding component. The source is estimated as the vertex with an SPT of minimal weight.
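The SPT-based source step can be sketched with a standard Dijkstra computation: for each vertex, sum the shortest-path distances to all other vertices and pick the vertex with the minimal total. The graph below is a toy symmetric example with hypothetical weights; the paper's actual graph is directed, with weights derived from the simulated times.

```python
import heapq

def spt_weight(adj, src):
    """Total weight of the shortest-paths tree rooted at src, i.e. the
    sum of Dijkstra distances from src to every reachable vertex."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return sum(dist.values())

def likely_source(adj):
    """Vertex whose SPT has minimal total weight (VOICE's source rule)."""
    return min(adj, key=lambda v: spt_weight(adj, v))

# toy relatedness graph: vertex 0 sits centrally, so its SPT is lightest
adj = {
    0: [(1, 1.0), (2, 1.0), (3, 1.0)],
    1: [(0, 1.0)],
    2: [(0, 1.0)],
    3: [(0, 1.0)],
}
print(likely_source(adj))  # → 0
```

In this star-shaped toy graph the central vertex reaches everyone at cost 1, while a leaf pays an extra hop to reach the other leaves, so the center is chosen as the source.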
MinDistB method
The method extends the MinDist approach proposed in [12], which defines the distance between viral populations as the minimum Hamming distance between their representatives. The new approach also takes into account sizes of border sets, on which the minimum distance is achieved.
Formally, given an integer δ (by default δ=1), the δ-crossing between populations P 1 and P 2 is the set of pairs of variants (u,v) from different populations, the Hamming distance D(u,v) between which is within δ from the minimum Hamming distance:
$${{\begin{aligned} {}B_{\delta}(P_{1},P_{2})= \left\{(u,v): u \in P_{1}, v \in P_{2}, D(u,v) \le \min_{x\in P_{1},y\in P_{2}}D(x,y) + \delta\right\} \end{aligned}}} $$
(see Fig. 2). Our empirical study shows that when the crossing is large (see Fig. 2 a), the populations are less likely to be related than when the borders are small (see Fig. 2 b).
This effect can be explained intuitively. Two related populations likely diverge away from the common ancestor and from each other, and their borders are formed by a few old surviving variants closest to the common ancestor. Two unrelated populations diverging from two different ancestors may, in time, randomly reduce the minimum distance between each other, and their closest variants are relatively young and abundant (see Fig. 5).
Intuition behind the MinDistB method. a Related samples – crossing is between old surviving variants. b Unrelated samples – crossing is between many young variants which are close to each other by chance
We define a δ-distance between populations P 1 and P 2 as follows:
$$ D_{\delta}(P_{1}, P_{2}) = D(P_{1},P_{2}) + c \ln(|B_{\delta} (P_{1},P_{2})|) $$
where c=3 is an empirically chosen constant.
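A minimal sketch of the δ-distance above, with the empirical constant c = 3 and the δ-crossing computed exactly as defined (the toy sequences are illustrative):

```python
import math

def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def min_dist_b(p1, p2, delta=1, c=3.0):
    """MinDistB δ-distance: minimum inter-population Hamming distance
    plus a penalty growing with the size of the δ-crossing."""
    dmin = min(hamming(u, v) for u in p1 for v in p2)
    crossing = [(u, v) for u in p1 for v in p2
                if hamming(u, v) <= dmin + delta]
    return dmin + c * math.log(len(crossing))

p1 = ["ACGT", "ACGA"]
p2 = ["ACGG", "GGGG"]
print(min_dist_b(p1, p2))  # 1 + 3*ln(2): crossing has two pairs
```

A large crossing inflates the distance logarithmically, so two populations whose closest variants are numerous (the "young and abundant" case of Fig. 5) are pushed above the relatedness threshold even when their raw minimum distance is small.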
Identification of genetic relatedness, transmission clusters and sources of outbreaks
For the MinDistB method, the genetic relatedness graph G=(V,E,w) is a weighted undirected graph with the vertex set \(V=\mathcal {P}\) and an edge of weight w i,j connecting populations P i ,P j whenever w i,j =D δ (P i ,P j ) does not exceed a threshold. Transmission clusters are estimated as connected components of the graph G. For each transmission cluster, its source can be inferred either as the vertex with maximum eigenvector centrality or as the vertex with the shortest paths tree of minimal weight.
Results and discussions
ReD, VOICE and MinDistB were validated using experimental outbreak sequencing data, and their predictions were compared with the previously published MinDist method [12].
We used the benchmark data presented in [12], which is a collection of HCV intra-host populations sampled from 335 infected individuals.
Outbreak collection contains 142 HCV samples from 33 epidemiologically curated outbreaks reported to Centers for Disease Control and Prevention in 2008–2013. Outbreaks contain from 2 to 19 samples. Epidemiological histories, including sources of infection, are known for 10 outbreaks.
Collection of 193 epidemiologically unrelated HCV samples.
All viral sequences represent a fragment of E1/E2 genomic region of length 264 bp.
Prediction of epidemiological characteristics
The proposed methods were used to infer the following epidemiological characteristics:
genetic relatedness between populations;
transmission clusters representing outbreaks and isolated samples;
sources of outbreaks;
transmission directions between pairs of samples.
Comparison results are collected in Table 1. The variants of VOICE with deterministic and subsampling normalizations are referred to as V O I C E−D and V O I C E−S, and for them we used the normalization constants q=10 and q=4, respectively. For all VOICE runs, five independent simulations were performed, and the averages over these simulations are reported. For each simulation, VOICE-S performs 50 subsamplings, and the results of the algorithm are averaged over all subsamplings. For MinDist, sources of outbreaks were identified as vertices with highest eigenvector centralities in the corresponding genetic relatedness graphs, since for MinDist this method outperforms the shortest path tree-based approach.
Table 1 Validation results
Genetic relatedness between populations
Viral populations from two samples are genetically related if they belong to the same outbreak and unrelated otherwise. The genetic relatedness is validated on the union of both collections containing all outbreaks and unrelated samples. There are 55945 pairs of samples, and 479 of them are related. For all algorithms we choose the best thresholds, which produce no false positives, i.e. no unrelated populations are predicted to be related. The values of thresholds T are: R e D:T=2; M i n D i s t:T=11; M i n D i s t B:T=28.4; V o i c e−D:T=1710; V o i c e−S:T=4585. For each method, the sensitivity (i.e. the percentage of detected related pairs) was calculated (Table 1). The highest sensitivity is achieved by the MinDistB method. Figure 6 depicts the ROC curve for the tested methods (ReD is not shown, since for this method only a few viable discrete thresholds are possible). MinDistB and V O I C E−D have the highest area-under-curve values, followed by MinDist and V O I C E−S.
ROC curve for pairs relatedness detection
Detection of transmission clusters
The similarities between true and estimated partitions into transmission clusters were measured using an editing metric [21], which is defined as the minimum number of elementary operations required to transform one partition into another. An elementary operation is either merging (joining of two clusters into a single cluster) or division (partition of a cluster into two clusters) [21]. We calculate sensitivity by normalizing the editing distance E by the number N of elementary operations required to transform the trivial partition (i.e. the partition into singleton sets) into the true partition. The number N is equal to n−k, where n is the total number of samples and k is the number of true clusters:
$$ Sensitivity = \frac{E}{n-k} \times 100\%. $$
Table 1 shows that MinDistB and MinDist demonstrate the highest sensitivity.
Source identification
The accuracy of the source identification is defined as the percentage of correctly predicted sources for outbreaks, where the correct sources are known. The Source section of Table 1 shows that the best results are achieved by ReD and V O I C E−S which were able to detect sources in 90% of cases. At the same time, MinDist and MinDistB, which are not able to identify transmission directions, were significantly less accurate.
Transmission direction
Among the tested algorithms, only ReD and VOICE allow detection of transmission directions. For these algorithms, the percentages of correctly predicted source-recipient pairs were calculated (Table 1). Here the highest accuracy of 87.1% was achieved by ReD and V O I C E−S.
All tests were performed on a PC with DDR3-1333MHz 4 GBx12 RAM and 2 Intel Xeon-X5550 2.67 GHz processors. The fastest algorithms were MinDist and MinDistB, with running times of 9 ms per pair of samples in our dataset. ReD requires ∼0.1 s per pair of samples, while the running time of VOICE is ∼35 s per pair.
Currently, molecular viral analysis is one of the major approaches used for investigation of outbreaks and inference of transmission networks. Although modern sequencing technologies have significantly facilitated molecular analysis, providing unprecedented access to intra-host viral populations, they have also generated novel bioinformatics challenges.
This work proposed three novel algorithms for the investigation of viral transmissions based on analysis of intra-host viral populations, which allow clustering genetically related samples, inferring transmission directions and predicting sources of outbreaks. Evaluation of the algorithms on experimental data from HCV outbreaks demonstrated their ability to accurately reconstruct various transmission characteristics. It should be noted that although ReD proved to be accurate in estimating transmission clusters, directions and sources, its accuracy of relatedness detection is lower than that of the other evaluated methods. However, the advantage of this method over the others is its non-parametricity (i.e. independence from virus-specific and genomic region-specific thresholds), which makes it more universally applicable and extremely useful in situations when the lack of training data does not allow establishing reliable relatedness thresholds.
The clustering-based ReD approach may be further improved by using a more scalable clustering algorithm similar to the one proposed in [17]. The simulation-based approach VOICE presented here may be further improved by incorporating more complex viral evolution models that take into account the cell proliferation rate and immune responses against viral variants.
All algorithms are planned to be integrated into the pipeline of cloud-based web-system "Global Hepatitis Outbreak and Surveillance Technology" (GHOST), which is currently being developed by US Centers for Disease Control and Prevention (https://webappx.cdc.gov/GHOST/).
Drake JW, Holland JJ. Mutation rates among RNA viruses. Proc Natl Acad Sci. 1999; 96(24):13910–3.
Domingo E, Holland J. RNA virus mutations and fitness for survival. Annu Rev Microbiol. 1997; 51(1):151–78.
Domingo E, Sheldon J, Perales C. Viral quasispecies evolution. Microbiol Mol Biol Rev. 2012; 76(2):159–216.
Eriksson N, Pachter L, Mitsuya Y, Rhee SY, Wang C, Gharizadeh B, Ronaghi M, Shafer RW, Beerenwinkel N. Viral population estimation using pyrosequencing. PLoS Comput Biol. 2008; 4(5):1000074.
Archer J, Braverman MS, Taillon BE, Desany B, James I, Harrigan PR, Lewis M, Robertson DL. Detection of low-frequency pretherapy chemokine (CXC motif) receptor 4-using HIV-1 with ultra-deep pyrosequencing. AIDS (London, England). 2009; 23(10):1209.
Hoffmann C, Minkah N, Leipzig J, Wang G, Arens MQ, Tebas P, Bushman FD. DNA bar coding and pyrosequencing to identify rare HIV drug resistance mutations. Nucleic Acids Res. 2007; 35(13):91.
Wang W, Zhang X, Xu Y, Weinstock GM, Di Bisceglie AM, Fan X. High-resolution quantification of hepatitis C virus genome-wide mutation load and its correlation with the outcome of peginterferon-alpha2a and ribavirin combination therapy. PLoS ONE. 2014; 9(6):100131.
Skums P, Campo DS, Dimitrova Z, Vaughan G, Lau DT, Khudyakov Y. Numerical detection, measuring and analysis of differential interferon resistance for individual HCV intra-host variants and its influence on the therapy response. In Silico Biol. 2011; 11(5):263–9.
Campo DS, Skums P, Dimitrova Z, Vaughan G, Forbi JC, Teo CG, Khudyakov Y, Lau DT. Drug resistance of a viral population and its individual intrahost variants during the first 48 h of therapy. Clin Pharmacol Ther. 2014; 95(6):627–35.
Wertheim JO, Brown AJL, Hepler NL, Pond SLK. The global transmission network of HIV-1. J Infect Dis. 2014; 209(2):304–13.
Wertheim JO, Pond SLK, Forgione LA, Mehta SR, Murrell B, Shah S, Smith DM, Scheffler K, Torian LV. Social and genetic networks of HIV-1 transmission in New York City. PLoS Pathog. 2017; 13(1):1006000.
Campo DS, Xia GL, Dimitrova Z, Lin Y, Forbi JC, Ganova-Raeva L, Punkova L, Ramachandran S, Thai H, Skums P, et al. Accurate genetic detection of hepatitis C virus transmissions in outbreak settings. J Infect Dis. 2016; 213(6):957–65.
Romero-Severson EO, Bulla I, Leitner T. Phylogenetically resolving epidemiologic linkage. Proc Natl Acad Sci. 2016; 113(10):2690–5. doi:10.1073/pnas.1522930113.
De Maio N, Wu CH, Wilson DJ. SCOTTI: Efficient reconstruction of transmission within outbreaks with the structured coalescent. PLoS Comput Biol. 2016; 12(9):1005130.
Fischer GE, Schaefer MK, Labus BJ, Sands L, Rowley P, Azzam IA, Armour P, Khudyakov YE, Lin Y, Xia G. Hepatitis C virus infections from unsafe injection practices at an endoscopy clinic in Las Vegas, Nevada, 2007–2008. Clin Infect Dis. 2010; 51(3):267–73.
Apostolou A, Bartholomew ML, Greeley R, Guilfoyle SM, Gordon M, Genese C, Davis JP, Montana B, Borlaug G. Transmission of hepatitis C virus associated with surgical procedures - New Jersey 2010 and Wisconsin 2011. MMWR Morb Mortal Wkly Rep. 2015; 64(7):165–70.
Skums P, Artyomenko A, Glebova O, Ramachandran S, Mandoiu I, Campo DS, Dimitrova Z, Zelikovsky A, Khudyakov Y. Computational framework for next-generation sequencing of heterogeneous viral populations using combinatorial pooling. Bioinformatics. 2015; 31(5):682–90. doi:10.1093/bioinformatics/btu726.
Astrakhantseva IV, Campo DS, Araujo A, Teo CG, Khudyakov Y, Kamili S. Differences in variability of hypervariable region 1 of hepatitis C virus (HCV) between acute and chronic stages of HCV infection. In Silico Biol. 2011; 11(5):163–73.
Quirin A, Cordón O, Guerrero-Bote VP, Vargas-Quesada B, Moya-Anegón F. A quick MST-based algorithm to obtain pathfinder networks. J Am Soc Inf Sci Technol. 2008; 59(12):1912–24.
Campo DS, Dimitrova Z, Yamasaki L, Skums P, Lau DT, Vaughan G, Forbi JC, Teo CG, Khudyakov Y. Next-generation sequencing reveals large connected networks of intra-host HCV variants. BMC Genomics. 2014; 15(Suppl 5):4.
Deza MM, Deza E. Encyclopedia of Distances. Springer-Verlag Berlin Heidelberg; 2009.
AZ was partially supported by NSF Grant CCF-16119110 and NIH Grant 1R01EB025022-01; PS was partially supported by NIH Grant 1R01EB025022-01; OG, SK, AM, and AA were partially supported by GSU Molecular Basis of Disease Fellowship. The publication costs were funded by NSF Grant CCF-1611911.
ReD and VOICE are freely available at https://bitbucket.org/osaofgsu/red and https://bitbucket.org/osaofgsu/voicerep, respectively.
About this supplement
This article has been published as part of BMC Genomics Volume 18 Supplement 10, 2017: Selected articles from the 6th IEEE International Conference on Computational Advances in Bio and Medical Sciences (ICCABS): genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-18-supplement-10.
OG and SK designed, implemented and tested the algorithms; AM and AA implemented and tested the algorithms; YK designed the algorithms and analyzed the algorithms' results; AZ and PS designed and implemented the algorithms, analyzed the results and supervised the research. All authors read and approved the final manuscript.
Computer Science Department, Georgia State University, 25 Park Place NE, Atlanta, 30303, GA, USA
Olga Glebova, Sergey Knyazev, Andrew Melnyk, Alexander Artyomenko, Alex Zelikovsky & Pavel Skums
Centers for Disease Control and Prevention, 1600 Clifton Rd, Atlanta, 30329, GA, USA
Yury Khudyakov & Pavel Skums
Olga Glebova
Sergey Knyazev
Andrew Melnyk
Alexander Artyomenko
Yury Khudyakov
Alex Zelikovsky
Pavel Skums
Correspondence to Olga Glebova.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Glebova, O., Knyazev, S., Melnyk, A. et al. Inference of genetic relatedness between viral quasispecies from sequencing data. BMC Genomics 18 (Suppl 10), 918 (2017). https://doi.org/10.1186/s12864-017-4274-5
Genetic relatedness
Outbreaks investigations | CommonCrawl |
Srinivasa Ramanujan
Indian mathematician
In this Indian name, the name Srinivasa is a patronymic, and the person should be referred to by the given name, Ramanujan.
Born: 22 December 1887, Erode, Madras Presidency, British India
Died: 26 April 1920 (aged 32), Kumbakonam, Madras Presidency, British India
Birth name: Srinivasa Ramanujan Aiyangar
Education: Government Arts College (no degree); Pachaiyappa's College (no degree); Trinity College, Cambridge (Bachelor of Arts by Research, 1916)
Known for: Landau–Ramanujan constant, mock theta functions, Ramanujan conjecture, Ramanujan prime, Ramanujan–Soldner constant, Ramanujan theta function, Ramanujan's sum, Rogers–Ramanujan identities, Ramanujan's master theorem, Ramanujan–Sato series
Awards: Fellow of the Royal Society
Institutions: Trinity College, Cambridge
Thesis: Highly Composite Numbers (1916)
Academic advisors: G. H. Hardy, J. E. Littlewood
Influences: G. S. Carr
Srinivasa Ramanujan FRS (/ˈsrɪnɪvɑːs rɑːˈmɑːnʊdʒən/;[1] born Srinivasa Ramanujan Aiyangar, IPA: [sriːniʋaːsa ɾaːmaːnud͡ʑan ajːaŋgar]; 22 December 1887 – 26 April 1920)[2][3] was an Indian mathematician who lived during the British Rule in India. Though he had almost no formal training in pure mathematics, he made substantial contributions to mathematical analysis, number theory, infinite series, and continued fractions, including solutions to mathematical problems then considered unsolvable. Ramanujan initially developed his own mathematical research in isolation: according to Hans Eysenck: "He tried to interest the leading professional mathematicians in his work, but failed for the most part. What he had to show them was too novel, too unfamiliar, and additionally presented in unusual ways; they could not be bothered".[4] Seeking mathematicians who could better understand his work, in 1913 he began a postal correspondence with the English mathematician G. H. Hardy at the University of Cambridge, England. Recognizing Ramanujan's work as extraordinary, Hardy arranged for him to travel to Cambridge. In his notes, Hardy commented that Ramanujan had produced groundbreaking new theorems, including some that "defeated me completely; I had never seen anything in the least like them before",[5] and some recently proven but highly advanced results.
During his short life, Ramanujan independently compiled nearly 3,900 results (mostly identities and equations).[6] Many were completely novel; his original and highly unconventional results, such as the Ramanujan prime, the Ramanujan theta function, partition formulae and mock theta functions, have opened entire new areas of work and inspired a vast amount of further research.[7] Nearly all his claims have now been proven correct.[8] The Ramanujan Journal, a scientific journal, was established to publish work in all areas of mathematics influenced by Ramanujan,[9] and his notebooks—containing summaries of his published and unpublished results—have been analysed and studied for decades since his death as a source of new mathematical ideas. As late as 2012, researchers continued to discover that mere comments in his writings about "simple properties" and "similar outputs" for certain findings were themselves profound and subtle number theory results that remained unsuspected until nearly a century after his death.[10][11] He became one of the youngest Fellows of the Royal Society and only the second Indian member, and the first Indian to be elected a Fellow of Trinity College, Cambridge. Of his original letters, Hardy stated that a single look was enough to show they could have been written only by a mathematician of the highest calibre, comparing Ramanujan to mathematical geniuses such as Euler and Jacobi.
In 1919, ill health—now believed to have been hepatic amoebiasis (a complication from episodes of dysentery many years previously)—compelled Ramanujan's return to India, where he died in 1920 at the age of 32. His last letters to Hardy, written in January 1920, show that he was still continuing to produce new mathematical ideas and theorems. His "lost notebook", containing discoveries from the last year of his life, caused great excitement among mathematicians when it was rediscovered in 1976.
A deeply religious Hindu,[12] Ramanujan credited his substantial mathematical capacities to divinity, and said the mathematical knowledge he displayed was revealed to him by his family goddess Namagiri Thayar. He once said, "An equation for me has no meaning unless it expresses a thought of God."[13]
Ramanujan's birthplace on 18 Alahiri Street, Erode, now in Tamil Nadu
Ramanujan's home on Sarangapani Sannidhi Street, Kumbakonam
Ramanujan (literally, "younger brother of Rama", a Hindu deity[14]:12) was born on 22 December 1887 into a Tamil Brahmin Iyengar family in Erode, Madras Presidency (now Tamil Nadu, India), at the residence of his maternal grandparents.[14]:11 His father, Kuppuswamy Srinivasa Iyengar, originally from Thanjavur district, worked as a clerk in a sari shop.[14]:17–18[15] His mother, Komalatammal, was a housewife and sang at a local temple.[16] They lived in a small traditional home on Sarangapani Sannidhi Street in the town of Kumbakonam.[17] The family home is now a museum. When Ramanujan was a year and a half old, his mother gave birth to a son, Sadagopan, who died less than three months later. In December 1889 Ramanujan contracted smallpox, but recovered, unlike the 4,000 others who died in a bad year in the Thanjavur district around this time. He moved with his mother to her parents' house in Kanchipuram, near Madras (now Chennai). His mother gave birth to two more children, in 1891 and 1894, both of whom died before their first birthdays.[14]:12
On 1 October 1892 Ramanujan was enrolled at the local school.[14]:13 After his maternal grandfather lost his job as a court official in Kanchipuram,[14]:19 Ramanujan and his mother moved back to Kumbakonam and he was enrolled in Kangayan Primary School.[14]:14 When his paternal grandfather died, he was sent back to his maternal grandparents, then living in Madras. He did not like school in Madras, and tried to avoid attending. His family enlisted a local constable to make sure he attended school. Within six months, Ramanujan was back in Kumbakonam.[14]:14
Since Ramanujan's father was at work most of the day, his mother took care of the boy, and they had a close relationship. From her he learned about tradition and puranas, to sing religious songs, to attend pujas at the temple, and to maintain particular eating habits—all part of Brahmin culture.[14]:20 At Kangayan Primary School Ramanujan performed well. Just before turning 10, in November 1897, he passed his primary examinations in English, Tamil, geography and arithmetic with the best scores in the district.[14]:25 That year Ramanujan entered Town Higher Secondary School, where he encountered formal mathematics for the first time.[14]:25
A child prodigy by age 11, he had exhausted the mathematical knowledge of two college students who were lodgers at his home. He was later lent a book written by S. L. Loney on advanced trigonometry.[18][19] He mastered this by the age of 13 while discovering sophisticated theorems on his own. By 14 he received merit certificates and academic awards that continued throughout his school career, and he assisted the school in the logistics of assigning its 1,200 students (each with differing needs) to its approximately 35 teachers.[14]:27 He completed mathematical exams in half the allotted time, and showed a familiarity with geometry and infinite series. Ramanujan was shown how to solve cubic equations in 1902; he developed his own method to solve the quartic. The following year he tried to solve the quintic, not knowing that it could not be solved by radicals.
In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr's collection of 5,000 theorems.[14]:39[20] Ramanujan reportedly studied the contents of the book in detail.[21] The book is generally acknowledged as a key element in awakening his genius.[21] The next year Ramanujan independently developed and investigated the Bernoulli numbers and calculated the Euler–Mascheroni constant up to 15 decimal places.[14]:90 His peers at the time said they "rarely understood him" and "stood in respectful awe" of him.[14]:27
When he graduated from Town Higher Secondary School in 1904, Ramanujan was awarded the K. Ranganatha Rao prize for mathematics by the school's headmaster, Krishnaswami Iyer. Iyer introduced Ramanujan as an outstanding student who deserved scores higher than the maximum.[14] He received a scholarship to study at Government Arts College, Kumbakonam,[14]:28[14]:45 but was so intent on mathematics that he could not focus on any other subjects and failed most of them, losing his scholarship in the process.[14]:47 In August 1905 Ramanujan ran away from home, heading towards Visakhapatnam, and stayed in Rajahmundry[22] for about a month.[14]:47–48 He later enrolled at Pachaiyappa's College in Madras. There he passed in mathematics, choosing only to attempt questions that appealed to him and leaving the rest unanswered, but performed poorly in other subjects, such as English, physiology and Sanskrit.[23] Ramanujan failed his Fellow of Arts exam in December 1906 and again a year later. Without an FA degree, he left college and continued to pursue independent research in mathematics, living in extreme poverty and often on the brink of starvation.[14]:55–56
In 1910, after a meeting between the 23-year-old Ramanujan and the founder of the Indian Mathematical Society, V. Ramaswamy Aiyer, Ramanujan began to get recognition in Madras's mathematical circles, leading to his inclusion as a researcher at the University of Madras.[24]
Adulthood in India
On 14 July 1909, Ramanujan married Janaki (Janakiammal; 21 March 1899 – 13 April 1994),[25] a girl his mother had selected for him a year earlier and who was ten years old when they married.[14]:71[26][27] It was not unusual then for marriages to be arranged with girls at a young age. Janaki was from Rajendram, a village close to Marudur (Karur district) Railway Station. Ramanujan's father did not participate in the marriage ceremony.[28] As was common at that time, Janaki continued to stay at her maternal home for three years after marriage, until she reached puberty. In 1912, she and Ramanujan's mother joined Ramanujan in Madras.[29]
After the marriage, Ramanujan developed a hydrocele testis.[14]:72 The condition could be treated with a routine surgical operation that would release the blocked fluid in the scrotal sac, but his family could not afford the operation. In January 1910, a doctor volunteered to do the surgery at no cost.[30]
After his successful surgery, Ramanujan searched for a job. He stayed at a friend's house while he went from door to door around Madras looking for a clerical position. To make money, he tutored students at Presidency College who were preparing for their F.A. exam.[14]:73
In late 1910, Ramanujan was sick again. He feared for his health, and told his friend R. Radakrishna Iyer to "hand [his notebooks] over to Professor Singaravelu Mudaliar [the mathematics professor at Pachaiyappa's College] or to the British professor Edward B. Ross, of the Madras Christian College."[14]:74–75 After Ramanujan recovered and retrieved his notebooks from Iyer, he took a train from Kumbakonam to Villupuram, a city under French control.[31][32] In 1912, Ramanujan moved with his wife and mother to a house in Saiva Muthaiah Mudali street, George Town, Madras, where they lived for a few months.[33] In May 1913, upon securing a research position at Madras University, Ramanujan moved with his family to Triplicane.[34]
Pursuit of career in mathematics
In 1910, Ramanujan met deputy collector V. Ramaswamy Aiyer, who founded the Indian Mathematical Society.[14]:77 Wishing for a job at the revenue department where Aiyer worked, Ramanujan showed him his mathematics notebooks. As Aiyer later recalled:
I was struck by the extraordinary mathematical results contained in [the notebooks]. I had no mind to smother his genius by an appointment in the lowest rungs of the revenue department.[35]
Aiyer sent Ramanujan, with letters of introduction, to his mathematician friends in Madras.[14]:77 Some of them looked at his work and gave him letters of introduction to R. Ramachandra Rao, the district collector for Nellore and the secretary of the Indian Mathematical Society.[36][37][38] Rao was impressed by Ramanujan's research but doubted that it was his own work. Ramanujan mentioned a correspondence he had with Professor Saldhana, a notable Bombay mathematician, in which Saldhana expressed a lack of understanding of his work but concluded that he was not a fraud.[14]:80 Ramanujan's friend C. V. Rajagopalachari tried to quell Rao's doubts about Ramanujan's academic integrity. Rao agreed to give him another chance, and listened as Ramanujan discussed elliptic integrals, hypergeometric series, and his theory of divergent series, which Rao said ultimately convinced him of Ramanujan's brilliance.[14]:80 When Rao asked him what he wanted, Ramanujan replied that he needed work and financial support. Rao consented and sent him to Madras. He continued his research with Rao's financial aid. With Aiyer's help, Ramanujan had his work published in the Journal of the Indian Mathematical Society.[14]:86
One of the first problems he posed in the journal was to find the value of:[14]
\sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + \cdots}}}\,.
He waited for a solution to be offered in three issues, over six months, but failed to receive any. At the end, Ramanujan supplied the solution to the problem himself. On page 105 of his first notebook, he formulated an equation that could be used to solve the infinitely nested radicals problem.
x + n + a = \sqrt{ax + (n+a)^2 + x\sqrt{a(x+n) + (n+a)^2 + (x+n)\sqrt{\cdots}}}
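As a numerical sanity check (a Python sketch, not from the original article), the truncated radical for the case x = 2, n = 1, a = 0 can be evaluated from the inside out and rapidly approaches 3:

```python
from math import sqrt

# Truncated evaluation of Ramanujan's nested radical
#   sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + 4*sqrt(1 + ...)))) = 3,
# i.e. the case x = 2, n = 1, a = 0 of the identity above.
def nested_radical(depth):
    v = 1.0                            # innermost placeholder value
    for k in range(depth + 1, 1, -1):  # coefficients depth+1, depth, ..., 2
        v = sqrt(1 + k * v)
    return v

print(nested_radical(50))  # approaches 3 as depth grows
```

Each additional level roughly halves the truncation error, so even modest depths give many correct digits.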
Using this equation, the answer to the question posed in the Journal was simply 3, obtained by setting x = 2, n = 1, and a = 0.[14]:87 Ramanujan wrote his first formal paper for the Journal on the properties of Bernoulli numbers. One property he discovered was that the denominators (sequence A027642 in the OEIS) of the fractions of Bernoulli numbers are always divisible by six. He also devised a method of calculating Bn based on previous Bernoulli numbers. One of these methods follows:
It will be observed that if n is even but not equal to zero,
B_n is a fraction and the numerator of B_n/n in its lowest terms is a prime number,
the denominator of B_n contains each of the factors 2 and 3 once and only once,
2^n(2^n − 1)B_n/n is an integer and 2(2^n − 1)B_n consequently is an odd integer.
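The divisibility-by-six property is easy to probe computationally. Below is a sketch (Python, using the standard Bernoulli-number recurrence rather than Ramanujan's own method, which the article does not reproduce) checking the claim for the first several even indices:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return [B_0, ..., B_n] via the standard recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

B = bernoulli_numbers(20)
# Ramanujan's observation (a consequence of the von Staudt-Clausen
# theorem): for even n >= 2 the denominator of B_n is divisible by 6.
for n in range(2, 21, 2):
    assert B[n].denominator % 6 == 0
print(B[2], B[12])  # 1/6 -691/2730
```

Exact rational arithmetic via `Fraction` is essential here; floating point would destroy the denominators being tested.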
In his 17-page paper "Some Properties of Bernoulli's Numbers" (1911), Ramanujan gave three proofs, two corollaries and three conjectures.[14]:91 His writing initially had many flaws. As Journal editor M. T. Narayana Iyengar noted:
Mr. Ramanujan's methods were so terse and novel and his presentation so lacking in clearness and precision, that the ordinary [mathematical reader], unaccustomed to such intellectual gymnastics, could hardly follow him.[39]
Ramanujan later wrote another paper and also continued to provide problems in the Journal.[40] In early 1912, he got a temporary job in the Madras Accountant General's office, with a monthly salary of 20 rupees. He lasted only a few weeks.[41] Toward the end of that assignment, he applied for a position under the Chief Accountant of the Madras Port Trust.
In a letter dated 9 February 1912, Ramanujan wrote:
I understand there is a clerkship vacant in your office, and I beg to apply for the same. I have passed the Matriculation Examination and studied up to the F.A. but was prevented from pursuing my studies further owing to several untoward circumstances. I have, however, been devoting all my time to Mathematics and developing the subject. I can say I am quite confident I can do justice to my work if I am appointed to the post. I therefore beg to request that you will be good enough to confer the appointment on me.[42]
Attached to his application was a recommendation from E. W. Middlemast, a mathematics professor at the Presidency College, who wrote that Ramanujan was "a young man of quite exceptional capacity in Mathematics".[43] Three weeks after he applied, on 1 March, Ramanujan learned that he had been accepted as a Class III, Grade IV accounting clerk, making 30 rupees per month.[14]:96 At his office Ramanujan easily and quickly completed the work he was given and spent his spare time doing mathematical research. Ramanujan's boss, Sir Francis Spring, and S. Narayana Iyer, a colleague who was also treasurer of the Indian Mathematical Society, encouraged Ramanujan in his mathematical pursuits.
Contacting British mathematicians
In the spring of 1913, Narayana Iyer, Ramachandra Rao and E. W. Middlemast tried to present Ramanujan's work to British mathematicians. M. J. M. Hill of University College London commented that Ramanujan's papers were riddled with holes.[14]:105 He said that although Ramanujan had "a taste for mathematics, and some ability", he lacked the necessary educational background and foundation to be accepted by mathematicians.[44] Although Hill did not offer to take Ramanujan on as a student, he gave thorough and serious professional advice on his work. With the help of friends, Ramanujan drafted letters to leading mathematicians at Cambridge University.[14]:106
The first two professors, H. F. Baker and E. W. Hobson, returned Ramanujan's papers without comment.[14]:170–171 On 16 January 1913, Ramanujan wrote to G. H. Hardy.[45] Coming from an unknown mathematician, the nine pages of mathematics made Hardy initially view Ramanujan's manuscripts as a possible fraud.[46] Hardy recognised some of Ramanujan's formulae but others "seemed scarcely possible to believe".[47]:494 One of the theorems Hardy found amazing was on the bottom of page three (valid for 0 < a < b + 1/2):
\int_0^\infty \frac{1+\frac{x^2}{(b+1)^2}}{1+\frac{x^2}{a^2}} \times \frac{1+\frac{x^2}{(b+2)^2}}{1+\frac{x^2}{(a+1)^2}} \times \cdots \, dx = \frac{\sqrt{\pi}}{2} \times \frac{\Gamma\left(a+\frac{1}{2}\right)\Gamma(b+1)\Gamma(b-a+1)}{\Gamma(a)\Gamma\left(b+\frac{1}{2}\right)\Gamma\left(b-a+\frac{1}{2}\right)}.
Hardy was also impressed by some of Ramanujan's other work relating to infinite series:
1 - 5\left(\frac{1}{2}\right)^3 + 9\left(\frac{1\times 3}{2\times 4}\right)^3 - 13\left(\frac{1\times 3\times 5}{2\times 4\times 6}\right)^3 + \cdots = \frac{2}{\pi}
1 + 9\left(\frac{1}{4}\right)^4 + 17\left(\frac{1\times 5}{4\times 8}\right)^4 + 25\left(\frac{1\times 5\times 9}{4\times 8\times 12}\right)^4 + \cdots = \frac{2\sqrt{2}}{\sqrt{\pi}\,\Gamma^2\left(\frac{3}{4}\right)}.
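The first of these series converges slowly (its terms decay only like k^(-1/2)), but averaging two consecutive partial sums is enough to verify it numerically. A small Python sketch, not from the source:

```python
from math import pi

# Numerical check of Ramanujan's alternating series
#   1 - 5(1/2)^3 + 9(1*3/(2*4))^3 - 13(1*3*5/(2*4*6))^3 + ... = 2/pi.
# The k-th term is (-1)^k * (4k+1) * ((2k-1)!!/(2k)!!)^3; averaging two
# consecutive partial sums accelerates the slow alternating convergence.
s = prev = 0.0
ratio = 1.0                      # (2k-1)!!/(2k)!! for the current k
for k in range(20000):
    prev = s
    s += (-1) ** k * (4 * k + 1) * ratio ** 3
    ratio *= (2 * k + 1) / (2 * k + 2)
estimate = (s + prev) / 2
print(estimate, 2 / pi)
```

Without the averaging step, tens of millions of terms would be needed for comparable accuracy.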
The first result had already been determined by G. Bauer in 1859. The second was new to Hardy, and was derived from a class of functions called hypergeometric series, which had first been researched by Euler and Gauss. Hardy found these results "much more intriguing" than Gauss's work on integrals.[14]:167 After seeing Ramanujan's theorems on continued fractions on the last page of the manuscripts, Hardy said the theorems "defeated me completely; I had never seen anything in the least like them before",[14]:168 and that they "must be true, because, if they were not true, no one would have the imagination to invent them".[14]:168 Hardy asked a colleague, J. E. Littlewood, to take a look at the papers. Littlewood was amazed by Ramanujan's genius. After discussing the papers with Littlewood, Hardy concluded that the letters were "certainly the most remarkable I have received" and that Ramanujan was "a mathematician of the highest quality, a man of altogether exceptional originality and power".[47]:494–495 One colleague, E. H. Neville, later remarked that "not one [theorem] could have been set in the most advanced mathematical examination in the world".[40]
On 8 February 1913 Hardy wrote Ramanujan a letter expressing interest in his work, adding that it was "essential that I should see proofs of some of your assertions".[48] Before his letter arrived in Madras during the third week of February, Hardy contacted the India Office to plan for Ramanujan's trip to Cambridge. Secretary Arthur Davies of the Advisory Committee for Indian Students met with Ramanujan to discuss the overseas trip.[49] In accordance with his Brahmin upbringing, Ramanujan refused to leave his country to "go to a foreign land".[14]:185 Meanwhile, he sent Hardy a letter packed with theorems, writing, "I have found a friend in you who views my labour sympathetically."[50]
To supplement Hardy's endorsement, Gilbert Walker, a former mathematical lecturer at Trinity College, Cambridge, looked at Ramanujan's work and expressed amazement, urging the young man to spend time at Cambridge.[14]:175 As a result of Walker's endorsement, B. Hanumantha Rao, a mathematics professor at an engineering college, invited Ramanujan's colleague Narayana Iyer to a meeting of the Board of Studies in Mathematics to discuss "what we can do for S. Ramanujan".[51] The board agreed to grant Ramanujan a monthly research scholarship of 75 rupees for the next two years at the University of Madras.[52] While he was engaged as a research student, Ramanujan continued to submit papers to the Journal of the Indian Mathematical Society. In one instance Iyer submitted some of Ramanujan's theorems on summation of series to the journal, adding, "The following theorem is due to S. Ramanujan, the mathematics student of Madras University." Later in November, British Professor Edward B. Ross of Madras Christian College, whom Ramanujan had met a few years before, stormed into his class one day with his eyes glowing, asking his students, "Does Ramanujan know Polish?" The reason was that in one paper, Ramanujan had anticipated the work of a Polish mathematician whose paper had just arrived in the day's mail.[53] In his quarterly papers Ramanujan drew up theorems to make definite integrals more easily solvable. Working off Giuliano Frullani's 1821 integral theorem, Ramanujan formulated generalisations that could be made to evaluate formerly unyielding integrals.[14]:183
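Frullani's 1821 theorem mentioned above states that, for suitable f, the integral of (f(ax) - f(bx))/x over (0, ∞) equals (f(0) - f(∞)) ln(b/a). As an illustrative numerical check (a sketch with an assumed example, f(x) = e^(-x), not taken from Ramanujan's papers):

```python
from math import exp, log

# Illustrative check of Frullani's theorem,
#   int_0^inf (f(a*x) - f(b*x))/x dx = (f(0) - f(inf)) * ln(b/a),
# with the assumed example f(x) = e^{-x} and a = 1, b = 2, for which
# the right-hand side is ln 2.
a, b = 1.0, 2.0

def integrand(x):
    if x == 0.0:
        return b - a             # limit of (e^{-ax} - e^{-bx})/x at x = 0
    return (exp(-a * x) - exp(-b * x)) / x

# Composite trapezoidal rule on [0, 60]; the tail beyond 60 is negligible.
N, hi = 400_000, 60.0
h = hi / N
total = 0.5 * (integrand(0.0) + integrand(hi))
total += sum(integrand(i * h) for i in range(1, N))
total *= h
print(total, log(b / a))
```

The integrand extends smoothly to x = 0 (its limit there is b - a), so a plain trapezoidal rule already gives high accuracy.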
Hardy's correspondence with Ramanujan soured after Ramanujan refused to come to England. Hardy enlisted a colleague lecturing in Madras, E. H. Neville, to mentor and bring Ramanujan to England.[14]:184 Neville asked Ramanujan why he would not go to Cambridge. Ramanujan apparently had now accepted the proposal; Neville said, "Ramanujan needed no converting" and "his parents' opposition had been withdrawn".[40] Apparently Ramanujan's mother had a vivid dream in which the family goddess, the deity of Namagiri, commanded her "to stand no longer between her son and the fulfilment of his life's purpose".[40] Ramanujan traveled to England by ship, leaving his wife to stay with his parents in India.
Ramanujan (centre) and his colleague G. H. Hardy (extreme right), with other scientists, outside the Senate House, Cambridge, c.1914–19
Whewell's Court, Trinity College, Cambridge
Ramanujan departed from Madras aboard the S.S. Nevasa on 17 March 1914.[14]:196 When he disembarked in London on 14 April, Neville was waiting for him with a car. Four days later, Neville took him to his house on Chesterton Road in Cambridge. Ramanujan immediately began his work with Littlewood and Hardy. After six weeks Ramanujan moved out of Neville's house and took up residence on Whewell's Court, a five-minute walk from Hardy's room.[14]:202 Hardy and Littlewood began to look at Ramanujan's notebooks. Hardy had already received 120 theorems from Ramanujan in the first two letters, but there were many more results and theorems in the notebooks. Hardy saw that some were wrong, others had already been discovered, and the rest were new breakthroughs.[54] Ramanujan left a deep impression on Hardy and Littlewood. Littlewood commented, "I can believe that he's at least a Jacobi",[55] while Hardy said he "can compare him only with Euler or Jacobi."[56]
Ramanujan spent nearly five years in Cambridge collaborating with Hardy and Littlewood, and published part of his findings there. Hardy and Ramanujan had highly contrasting personalities. Their collaboration was a clash of different cultures, beliefs, and working styles. In the previous few decades the foundations of mathematics had come into question and the need for mathematically rigorous proofs had been recognised. Hardy was an atheist and an apostle of proof and mathematical rigour, whereas Ramanujan was a deeply religious man who relied very strongly on his intuition and insights. Hardy tried his best to fill the gaps in Ramanujan's education and to mentor him in the need for formal proofs to support his results, without hindering his inspiration, a conflict that neither found easy.
Ramanujan was awarded a Bachelor of Arts by Research degree[57][58] (the predecessor of the PhD degree) in March 1916 for his work on highly composite numbers, sections of the first part of which had been published the preceding year in the Proceedings of the London Mathematical Society. The paper was more than 50 pages long and proved various properties of such numbers. Hardy disliked this topic area but remarked that though it engaged with what he called the 'backwater of mathematics', in it Ramanujan displayed 'extraordinary mastery over the algebra of inequalities'.[59] On 6 December 1917, Ramanujan was elected to the London Mathematical Society. On 2 May 1918, he was elected a Fellow of the Royal Society,[60] the second Indian admitted, after Ardaseer Cursetjee in 1841. At age 31 Ramanujan was one of the youngest Fellows in the Royal Society's history. He was elected "for his investigation in elliptic functions and the Theory of Numbers." On 13 October 1918 he was the first Indian to be elected a Fellow of Trinity College, Cambridge.[14]:299–300
Illness and death
Ramanujan was plagued by health problems throughout his life. His health worsened in England; possibly he was also less resilient due to the difficulty of keeping to the strict dietary requirements of his religion there and because of wartime rationing in 1914–18. He was diagnosed with tuberculosis and a severe vitamin deficiency, and confined to a sanatorium. In 1919 he returned to Kumbakonam, Madras Presidency, and in 1920 he died at the age of 32. After his death his brother Tirunarayanan compiled Ramanujan's remaining handwritten notes, consisting of formulae on singular moduli, hypergeometric series and continued fractions.[29]
Ramanujan's widow, Smt. Janaki Ammal, moved to Bombay; in 1931 she returned to Madras and settled in Triplicane, where she supported herself on a pension from Madras University and income from tailoring. In 1950 she adopted a son, W. Narayanan, who eventually became an officer of the State Bank of India and raised a family. In her later years she was granted a lifetime pension from Ramanujan's former employer, the Madras Port Trust, and pensions from, among others, the Indian National Science Academy and the state governments of Tamil Nadu, Andhra Pradesh and West Bengal. She continued to cherish Ramanujan's memory, and was active in efforts to increase his public recognition; prominent mathematicians, including George Andrews, Bruce C. Berndt and Béla Bollobás made it a point to visit her while in India. She died at her Triplicane residence in 1994.[28][29]
A 1994 analysis of Ramanujan's medical records and symptoms by Dr. D. A. B. Young[61] concluded that his medical symptoms—including his past relapses, fevers, and hepatic conditions—were much closer to those resulting from hepatic amoebiasis, an illness then widespread in Madras, than tuberculosis. He had two episodes of dysentery before he left India. When not properly treated, amoebic dysentery can lie dormant for years and lead to hepatic amoebiasis, whose diagnosis was not then well established.[62] At the time, if properly diagnosed, amoebiasis was a treatable and often curable disease;[62][63] British soldiers who contracted it during the First World War were being successfully cured of amoebiasis around the time Ramanujan left England.[64]
Personality and spiritual life
Ramanujan has been described as a person of a somewhat shy and quiet disposition, a dignified man with pleasant manners.[65] He lived a simple life at Cambridge.[14]:234,241 Ramanujan's first Indian biographers describe him as a rigorously orthodox Hindu. He credited his acumen to his family goddess, Namagiri Thayar (Goddess Mahalakshmi) of Namakkal. He looked to her for inspiration in his work[14]:36 and said he dreamed of blood drops that symbolised her consort, Narasimha. Later he had visions of scrolls of complex mathematical content unfolding before his eyes.[14]:281 He often said, "An equation for me has no meaning unless it expresses a thought of God."[66]
Hardy cites Ramanujan as remarking that all religions seemed equally true to him.[14]:283 Hardy further argued that Ramanujan's religious belief had been romanticised by Westerners and overstated—in reference to his belief, not practice—by Indian biographers. At the same time, he remarked on Ramanujan's strict vegetarianism.[67]
Mathematical achievements
In mathematics there is a distinction between insight and formulating or working through a proof. Ramanujan proposed an abundance of formulae that could be investigated later in depth. G. H. Hardy said that Ramanujan's discoveries are unusually rich and that there is often more to them than initially meets the eye. As a byproduct of his work, new directions of research were opened up. Examples of the most intriguing of these formulae include infinite series for π, one of which is given below:
{\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}.}
This result is based on the negative fundamental discriminant d = −4 × 58 = −232 with class number h(d) = 2. Further, 26390 = 5 × 7 × 13 × 58 and 16 × 9801 = 396², which is related to the fact that
{\textstyle e^{\pi {\sqrt {58}}}=396^{4}-104.000000177\dots .}
This might be compared to Heegner numbers, which have class number 1 and yield similar formulae.
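The near-integer relation above is easy to spot-check numerically; here is a minimal sketch (our own illustration, with double precision limiting the achievable accuracy):

```python
import math

# Check how close e^(pi*sqrt(58)) comes to the integer 396^4 - 104.
# Double precision limits the resolvable gap to roughly 1e-3 here.
x = math.exp(math.pi * math.sqrt(58))
print(396**4 - x)  # close to 104
```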
Ramanujan's series for π converges extraordinarily rapidly and forms the basis of some of the fastest algorithms currently used to calculate π. Truncating the sum to the first term also gives the approximation 9801√2/4412 for π, which is correct to six decimal places; truncating it to the first two terms gives a value correct to 14 decimal places. See also the more general Ramanujan–Sato series.
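The truncation behaviour described above can be checked directly; the following is a minimal floating-point sketch (our own, not from any source, so treat the function name and structure as illustrative):

```python
import math

def ramanujan_pi(terms):
    # Partial sum of Ramanujan's 1/pi series, evaluated in double precision.
    s = sum(math.factorial(4 * k) * (1103 + 26390 * k)
            / (math.factorial(k) ** 4 * 396 ** (4 * k))
            for k in range(terms))
    # 1/pi = (2*sqrt(2)/9801) * s, so invert to recover pi.
    return 9801 / (2 * math.sqrt(2) * s)

print(abs(ramanujan_pi(1) - math.pi))  # single term: six correct decimals
print(abs(ramanujan_pi(2) - math.pi))  # two terms: at the limit of double precision
```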
One of Ramanujan's remarkable capabilities was the rapid solution of problems, illustrated by the following anecdote about an incident in which P. C. Mahalanobis posed a problem:
'Imagine that you are on a street with houses marked 1 through n. There is a house in between (x) such that the sum of the house numbers to the left of it equals the sum of the house numbers to its right. If n is between 50 and 500, what are n and x?' This is a bivariate problem with multiple solutions. Ramanujan thought about it and gave the answer with a twist: he gave a continued fraction. The unusual part was that it was the solution to the whole class of problems. Mahalanobis was astounded and asked how he did it. 'It is simple. The minute I heard the problem, I knew that the answer was a continued fraction. Which continued fraction, I asked myself. Then the answer came to my mind,' Ramanujan replied.[68][69]
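Equating the two sums of consecutive house numbers, x(x−1)/2 = n(n+1)/2 − x(x+1)/2, reduces the problem to 2x² = n(n+1), whose infinitely many solutions come from a Pell equation (hence a continued fraction solves the whole class). A brute-force sketch (illustrative only) confirms the unique answer in the stated range:

```python
# Sum of houses left of x equals sum of houses right of x:
#   x*(x-1)/2 == n*(n+1)/2 - x*(x+1)/2,  i.e.  2*x*x == n*(n+1).
solutions = [(n, x)
             for n in range(50, 501)
             for x in range(1, n + 1)
             if 2 * x * x == n * (n + 1)]
print(solutions)  # [(288, 204)]
```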
His intuition also led him to derive some previously unknown identities, such as
{\displaystyle {\begin{aligned}&\left(1+2\sum _{n=1}^{\infty }{\frac {\cos(n\theta )}{\cosh(n\pi )}}\right)^{-2}+\left(1+2\sum _{n=1}^{\infty }{\frac {\cosh(n\theta )}{\cosh(n\pi )}}\right)^{-2}\\[6pt]={}&{\frac {2\Gamma ^{4}\left({\frac {3}{4}}\right)}{\pi }}={\frac {8\pi ^{3}}{\Gamma ^{4}\left({\frac {1}{4}}\right)}}\end{aligned}}}
for all θ such that |ℜ(θ)| < π and |ℑ(θ)| < π, where Γ(z) is the gamma function; the identity is related to a special value of the Dedekind eta function. Expanding it into a power series in θ and equating the coefficients of θ⁰, θ⁴, and θ⁸ gives some deep identities for the hyperbolic secant.
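The identity can be spot-checked numerically at θ = 0, where both bracketed series coincide (a sketch of our own; `math.gamma` is Python's standard gamma function):

```python
import math

# At theta = 0 both bracketed series reduce to the same sum s,
# so the left-hand side of the identity is 2 * s**-2.
s = 1 + 2 * sum(1 / math.cosh(n * math.pi) for n in range(1, 40))
lhs = 2 * s ** -2
rhs = 2 * math.gamma(0.75) ** 4 / math.pi
alt = 8 * math.pi ** 3 / math.gamma(0.25) ** 4
print(lhs, rhs, alt)  # the three values agree to double precision
```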
In 1918 Hardy and Ramanujan studied the partition function P(n) extensively. They gave a non-convergent asymptotic series that permits exact computation of the number of partitions of an integer. In 1937 Hans Rademacher refined their formula to find an exact convergent series solution to this problem. Ramanujan and Hardy's work in this area gave rise to a powerful new method for finding asymptotic formulae called the circle method.[70]
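To give a sense of the asymptotic behaviour, here is a small sketch (our own illustration) that computes p(n) exactly by dynamic programming and compares it with the leading Hardy–Ramanujan term p(n) ≈ e^{π√(2n/3)}/(4n√3):

```python
import math

def partition_counts(n):
    # p(0..n) via the standard coin-style dynamic programme over part sizes.
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p

p = partition_counts(100)
exact = p[100]  # 190569292
approx = math.exp(math.pi * math.sqrt(2 * 100 / 3)) / (4 * 100 * math.sqrt(3))
print(exact, approx / exact)  # the ratio tends to 1 as n grows
```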
In the last year of his life, Ramanujan discovered mock theta functions.[71] For many years these functions were a mystery, but they are now known to be the holomorphic parts of harmonic weak Maass forms.
The Ramanujan conjecture
Main article: Ramanujan–Petersson conjecture
Although there are numerous statements that could have borne the name Ramanujan conjecture, one was highly influential on later work. In particular, the connection of this conjecture with conjectures of André Weil in algebraic geometry opened up new areas of research. That Ramanujan conjecture is an assertion on the size of the tau-function, which has as generating function the discriminant modular form Δ(q), a typical cusp form in the theory of modular forms. It was finally proven in 1973, as a consequence of Pierre Deligne's proof of the Weil conjectures. The reduction step involved is complicated. Deligne won a Fields Medal in 1978 for that work.[7]
In his paper "On certain arithmetical functions", Ramanujan defined the so-called delta-function, whose coefficients are called τ(n) (the Ramanujan tau function).[72] He proved many congruences for these numbers, such as τ(p) ≡ 1 + p11 mod 691 for primes p. This congruence (and others like it that Ramanujan proved) inspired Jean-Pierre Serre (1954 Fields Medalist) to conjecture that there is a theory of Galois representations that "explains" these congruences and more generally all modular forms. Δ(z) is the first example of a modular form to be studied in this way. Deligne (in his Fields Medal-winning work) proved Serre's conjecture. The proof of Fermat's Last Theorem proceeds by first reinterpreting elliptic curves and modular forms in terms of these Galois representations. Without this theory there would be no proof of Fermat's Last Theorem.[73]
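The congruence is easy to verify for small primes by expanding Δ(q) = q ∏_{n≥1} (1 − q^n)²⁴ as a power series with plain integer arithmetic; the following is an illustrative sketch, not Ramanujan's own method:

```python
# Coefficients of prod_{n>=1} (1 - q^n)^24, truncated at degree N.
N = 30
f = [1] + [0] * N
for n in range(1, N + 1):
    for _ in range(24):
        # Multiply the series in place by (1 - q^n), high degrees first
        # so each f[i - n] read is still the old value.
        for i in range(N, n - 1, -1):
            f[i] -= f[i - n]

# Delta(q) = q * f(q), so tau(m) is the coefficient of q^(m-1) in f.
tau = [0] + f[:N]
assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252  # known values
for p in (2, 3, 5, 7, 11, 13):
    assert (tau[p] - (1 + p**11)) % 691 == 0  # tau(p) ≡ 1 + p^11 (mod 691)
print(tau[1:8])
```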
Ramanujan's notebooks
Further information: Ramanujan's lost notebook
While still in Madras, Ramanujan recorded the bulk of his results in four notebooks of looseleaf paper. They were mostly written up without any derivations. This is probably the origin of the misapprehension that Ramanujan was unable to prove his results and simply thought up the final result directly. Mathematician Bruce C. Berndt, in his review of these notebooks and Ramanujan's work, says that Ramanujan most certainly was able to prove most of his results, but chose not to.
This may have been for any number of reasons. Since paper was very expensive, Ramanujan would do most of his work and perhaps his proofs on slate, and then transfer just the results to paper. Using a slate was common for mathematics students in the Madras Presidency at the time. He was also quite likely to have been influenced by the style of G. S. Carr's book, which stated results without proofs. Finally, it is possible that Ramanujan considered his work to be for his personal interest alone and therefore recorded only the results.[74]
The first notebook has 351 pages with 16 somewhat organised chapters and some unorganised material. The second has 256 pages in 21 chapters and 100 unorganised pages, and the third 33 unorganised pages. The results in his notebooks inspired numerous papers by later mathematicians trying to prove what he had found. Hardy himself wrote papers exploring material from Ramanujan's work, as did G. N. Watson, B. M. Wilson, and Bruce Berndt.[74] In 1976, George Andrews rediscovered a fourth notebook with 87 unorganised pages, the so-called "lost notebook".[62]
Hardy–Ramanujan number 1729
Main article: 1729 (number)
The number 1729 is known as the Hardy–Ramanujan number after a famous visit by Hardy to see Ramanujan at a hospital. In Hardy's words:[75]
I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No", he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."
Immediately before this anecdote, Hardy quoted Littlewood as saying, "Every positive integer was one of [Ramanujan's] personal friends."[76]
The two different ways are:
{\displaystyle 1729=1^{3}+12^{3}=9^{3}+10^{3}.}
Generalisations of this idea have created the notion of "taxicab numbers".
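A short brute-force search (an illustrative sketch, not tied to any source) confirms that 1729 is indeed the smallest number expressible as a sum of two positive cubes in two ways:

```python
from collections import defaultdict

# Group numbers by their representations as a^3 + b^3 with 1 <= a <= b.
# The bound 50 is ample: any smaller candidate uses cubes below 1729.
ways = defaultdict(list)
for a in range(1, 50):
    for b in range(a, 50):
        ways[a**3 + b**3].append((a, b))

taxicab = min(n for n, pairs in ways.items() if len(pairs) >= 2)
print(taxicab, ways[taxicab])  # 1729 [(1, 12), (9, 10)]
```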
Mathematicians' views of Ramanujan
In his obituary of Ramanujan, written for Nature in 1920, Hardy observed that Ramanujan's work primarily involved fields less known even among other pure mathematicians, concluding:
His insight into formulae was quite amazing, and altogether beyond anything I have met with in any European mathematician. It is perhaps useless to speculate as to his history had he been introduced to modern ideas and methods at sixteen instead of at twenty-six. It is not extravagant to suppose that he might have become the greatest mathematician of his time. What he actually did is wonderful enough… when the researches which his work has suggested have been completed, it will probably seem a good deal more wonderful than it does to-day.[47]
Hardy further said:[77]
He combined a power of generalisation, a feeling for form, and a capacity for rapid modification of his hypotheses, that were often really startling, and made him, in his own peculiar field, without a rival in his day. The limitations of his knowledge were as startling as its profundity. Here was a man who could work out modular equations and theorems... to orders unheard of, whose mastery of continued fractions was... beyond that of any mathematician in the world, who had found for himself the functional equation of the zeta function and the dominant terms of many of the most famous problems in the analytic theory of numbers; and yet he had never heard of a doubly periodic function or of Cauchy's theorem, and had indeed but the vaguest idea of what a function of a complex variable was..."
When asked about the methods Ramanujan employed to arrive at his solutions, Hardy said they were "arrived at by a process of mingled argument, intuition, and induction, of which he was entirely unable to give any coherent account."[78] He also said that he had "never met his equal, and can compare him only with Euler or Jacobi".[78]
K. Srinivasa Rao has said,[79] "As for his place in the world of Mathematics, we quote Bruce C. Berndt: 'Paul Erdős has passed on to us Hardy's personal ratings of mathematicians. Suppose that we rate mathematicians on the basis of pure talent on a scale from 0 to 100. Hardy gave himself a score of 25, J. E. Littlewood 30, David Hilbert 80 and Ramanujan 100.'" During a May 2011 lecture at IIT Madras, Berndt said that over the last 40 years, as nearly all of Ramanujan's conjectures have been proven, there had been greater appreciation of Ramanujan's work and brilliance, and that Ramanujan's work was now pervading many areas of modern mathematics and physics.[71][80]
Posthumous recognition
Further information: List of things named after Srinivasa Ramanujan
Bust of Ramanujan in the garden of Birla Industrial & Technological Museum in Kolkata, India
The 2012 Indian stamp dedicated to the National Mathematics Day and featuring Ramanujan
Ramanujan on stamp of India (2011)
The year after his death, Nature listed Ramanujan among other distinguished scientists and mathematicians on a "Calendar of Scientific Pioneers" who had achieved eminence.[81] Ramanujan's home state of Tamil Nadu celebrates 22 December (Ramanujan's birthday) as 'State IT Day'. Stamps picturing Ramanujan were issued by the government of India in 1962, 2011, 2012 and 2016.[82]
Since Ramanujan's centennial year, his birthday, 22 December, has been annually celebrated as Ramanujan Day by the Government Arts College, Kumbakonam, where he studied, and at the IIT Madras in Chennai. The International Centre for Theoretical Physics (ICTP) has created a prize in Ramanujan's name for young mathematicians from developing countries in cooperation with the International Mathematical Union, which nominates members of the prize committee. SASTRA University, a private university based in Tamil Nadu, has instituted the SASTRA Ramanujan Prize of US$10,000 to be given annually to a mathematician not exceeding age 32 for outstanding contributions in an area of mathematics influenced by Ramanujan. Based on the recommendations of a committee appointed by the University Grants Commission (UGC), Government of India, the Srinivasa Ramanujan Centre, established by SASTRA, has been declared an off-campus centre under the ambit of SASTRA University. House of Ramanujan Mathematics, a museum of Ramanujan's life and work, is also on this campus. SASTRA purchased and renovated the house where Ramanujan lived at Kumabakonam.[83]
In 2011, on the 125th anniversary of his birth, the Indian government declared that 22 December will be celebrated every year as National Mathematics Day.[84] Then Indian Prime Minister Manmohan Singh also declared that 2012 would be celebrated as National Mathematics Year.[85]
Ramanujan IT City is an information technology (IT) special economic zone (SEZ) in Chennai that was built in 2011. Situated next to the Tidel Park, it includes 25 acres (10 ha) with two zones, with a total area of 5.7 million square feet (530,000 m2), including 4.5 million square feet (420,000 m2) of office space.[86]
The Man Who Knew Infinity is a 2015 film based on Kanigel's book. British actor Dev Patel portrays Ramanujan.[87][88][89]
Ramanujan, an Indo-British collaboration film chronicling Ramanujan's life, was released in 2014 by the independent film company Camphor Cinema.[90] The cast and crew include director Gnana Rajasekaran, cinematographer Sunny Joseph and editor B. Lenin.[91][92] Indian and English stars Abhinay Vaddi, Suhasini Maniratnam, Bhama, Kevin McGowan and Michael Lieber star in pivotal roles.[93]
Nandan Kudhyadi directed the Indian documentary films The Genius of Srinivasa Ramanujan (2013) and Srinivasa Ramanujan: The Mathematician And His Legacy (2016) about the mathematician.[94]
Ramanujan (The Man Who Reshaped 20th Century Mathematics), an Indian docudrama film directed by Akashdeep released in 2018.[95]
M. N. Krish's thriller novel The Steradian Trail weaves Ramanujan and his accidental discovery into its plot connecting religion, mathematics, finance and economics.[96][97]
Partition, a play by Ira Hauptman about Hardy and Ramanujan, was first performed in 2013.[98][99][100][101]
The play First Class Man by Alter Ego Productions[102] was based on David Freeman's First Class Man. The play centres around Ramanujan and his complex and dysfunctional relationship with Hardy. On 16 October 2011 it was announced that Roger Spottiswoode, best known for his James Bond film Tomorrow Never Dies, is working on the film version, starring Siddharth.[103]
A Disappearing Number is a British stage production by the company Complicite that explores the relationship between Hardy and Ramanujan.[104]
David Leavitt's novel The Indian Clerk explores the events following Ramanujan's letter to Hardy.[105][106]
Google honoured Ramanujan on his 125th birth anniversary by replacing its logo with a doodle on its home page.[107][108]
Ramanujan was mentioned in the 1997 film Good Will Hunting, in a scene where professor Gerald Lambeau (Stellan Skarsgård) explains to Sean Maguire (Robin Williams) the genius of Will Hunting (Matt Damon) by comparing him to Ramanujan.[109]
The brilliant mathematician Amita Ramanujan on the TV show Numb3rs, played by half-Indian actress Navi Rawat, is named for Ramanujan.[citation needed]
Further works of Ramanujan's mathematics
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part I (Springer, 2005, ISBN 0-387-25529-X)[110]
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part II, (Springer, 2008, ISBN 978-0-387-77765-8)
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part III, (Springer, 2012, ISBN 978-1-4614-3809-0)
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part IV, (Springer, 2013, ISBN 978-1-4614-4080-2)
George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part V, (Springer, 2018, ISBN 978-3-319-77832-7)
M. P. Chaudhary, A simple solution of some integrals given by Srinivasa Ramanujan, (Resonance: J. Sci. Education – publication of Indian Academy of Science, 2008)[111]
M.P. Chaudhary, Mock theta functions to mock theta conjectures, SCIENTIA, Series A : Math. Sci., (22)(2012) 33–46.
M.P. Chaudhary, On modular relations for the Roger-Ramanujan type identities, Pacific J. Appl. Math., 7(3)(2016) 177–184.
Selected publications on Ramanujan and his work
Berndt, Bruce C. (1998). Butzer, P. L.; Oberschelp, W.; Jongen, H. Th. (eds.). Charlemagne and His Heritage: 1200 Years of Civilization and Science in Europe (PDF). Turnhout, Belgium: Brepols Verlag. pp. 119–146. ISBN 978-2-503-50673-9.
Berndt, Bruce C.; Rankin, Robert A. (1995). Ramanujan: Letters and Commentary. 9. Providence, Rhode Island: American Mathematical Society. ISBN 978-0-8218-0287-8.
Berndt, Bruce C.; Rankin, Robert A. (2001). Ramanujan: Essays and Surveys. 22. Providence, Rhode Island: American Mathematical Society. ISBN 978-0-8218-2624-9.
Berndt, Bruce C. (2006). Number Theory in the Spirit of Ramanujan. 9. Providence, Rhode Island: American Mathematical Society. ISBN 978-0-8218-4178-5.
Berndt, Bruce C. (1985). Ramanujan's Notebooks. Part I. New York: Springer. ISBN 978-0-387-96110-1.
Berndt, Bruce C. (1999). Ramanujan's Notebooks. Part II. New York: Springer. ISBN 978-0-387-96794-3.
Berndt, Bruce C. (2004). Ramanujan's Notebooks. Part III. New York: Springer. ISBN 978-0-387-97503-0.
Berndt, Bruce C. (1993). Ramanujan's Notebooks. Part IV. New York: Springer. ISBN 978-0-387-94109-7.
Berndt, Bruce C. (2005). Ramanujan's Notebooks. Part V. New York: Springer. ISBN 978-0-387-94941-3.
Hardy, G. H. (March 1937). "The Indian Mathematician Ramanujan". The American Mathematical Monthly. 44 (3): 137–155. doi:10.2307/2301659. JSTOR 2301659.
Hardy, G. H. (1978). Ramanujan. New York: Chelsea Pub. Co. ISBN 978-0-8284-0136-4.
Hardy, G. H. (1999). Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work. Providence, Rhode Island: American Mathematical Society. ISBN 978-0-8218-2023-0.
Henderson, Harry (1995). Modern Mathematicians. New York: Facts on File Inc. ISBN 978-0-8160-3235-8.
Kanigel, Robert (1991). The Man Who Knew Infinity: a Life of the Genius Ramanujan. New York: Charles Scribner's Sons. ISBN 978-0-684-19259-8.
Leavitt, David (2007). The Indian Clerk (paperback ed.). London: Bloomsbury. ISBN 978-0-7475-9370-6.
Narlikar, Jayant V. (2003). Scientific Edge: the Indian Scientist From Vedic to Modern Times. New Delhi, India: Penguin Books. ISBN 978-0-14-303028-7.
Ono, Ken; Aczel, Amir D. (13 April 2016). My Search for Ramanujan: How I Learned to Count. Springer. ISBN 978-3319255668.
Sankaran, T. M. (2005). "Srinivasa Ramanujan- Ganitha lokathile Mahaprathibha" (in Malayalam). Kochi, India: Kerala Sastra Sahithya Parishath. Cite journal requires |journal= (help)
Selected publications on works of Ramanujan
Ramanujan, Srinivasa; Hardy, G. H.; Seshu Aiyar, P. V.; Wilson, B. M.; Berndt, Bruce C. (2000). Collected Papers of Srinivasa Ramanujan. AMS. ISBN 978-0-8218-2076-6.
This book was originally published in 1927[112] after Ramanujan's death. It contains the 37 papers published in professional journals by Ramanujan during his lifetime. The third reprint contains additional commentary by Bruce C. Berndt.
S. Ramanujan (1957). Notebooks (2 Volumes). Bombay: Tata Institute of Fundamental Research.
These books contain photocopies of the original notebooks as written by Ramanujan.
S. Ramanujan (1988). The Lost Notebook and Other Unpublished Papers. New Delhi: Narosa. ISBN 978-3-540-18726-4.
This book contains photo copies of the pages of the "Lost Notebook".
Problems posed by Ramanujan, Journal of the Indian Mathematical Society.
This was produced from scanned and microfilmed images of the original manuscripts by expert archivists of Roja Muthiah Research Library, Chennai.
Mathematics portal
India portal
1729 (number)
Brown numbers
List of amateur mathematicians
List of Indian mathematicians
Ramanujan graph
Ramanujan summation
Ramanujan's constant
Ramanujan's ternary quadratic form
Rank of a partition
^ Olausson, Lena; Sangster, Catherine (2006). Oxford BBC Guide to Pronunciation. Oxford University Press. p. 322. ISBN 978-0-19-280710-6.
^ Kanigel, Robert. "Ramanujan, Srinivasa". Oxford Dictionary of National Biography (online ed.). Oxford University Press. doi:10.1093/ref:odnb/51582. (Subscription or UK public library membership required.)
^ https://trove.nla.gov.au/people/895585?c=people
^ Hans Eysenck (1995). Genius, p. 197. Cambridge University Press, ISBN 0-521-48508-8.
^ Hardy, Godfrey Harold (1940). Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work. Cambridge University Press. p. 9. ISBN 0-8218-2023-0.
^ Berndt, Bruce C. (12 December 1997). Ramanujan's Notebooks. Part 5. Springer Science & Business. p. 4. ISBN 978-0-38794941-3.
^ a b Ono, Ken (June–July 2006). "Honoring a Gift from Kumbakonam" (PDF). Notices of the American Mathematical Society. 53 (6): 640–51 [649–50]. Archived (PDF) from the original on 21 June 2007. Retrieved 23 June 2007.
^ "Rediscovering Ramanujan". Frontline. 16 (17): 650. August 1999. Archived from the original on 25 September 2013. Retrieved 20 December 2012.
^ Alladi, Krishnaswami; Elliott, P. D. T. A.; Granville, A. (30 September 1998). Analytic and Elementary Number Theory: A Tribute to Mathematical Legend Paul Erdos. Springer Science & Business. p. 6. ISBN 978-0-79238273-7.
^ Deep meaning in Ramanujan's 'simple' pattern Archived 3 August 2017 at the Wayback Machine
^ "Mathematical proof reveals magic of Ramanujan's genius" Archived 9 July 2017 at the Wayback Machine. New Scientist.
^ Kanigel, Robert (2016). The Man Who Knew Infinity: A Life of the Genius Ramanujan. Simon & Schuster. pp. 30–33. ISBN 978-1-47676349-1.
^ Kanigel, Robert (1991), "Prologue", The Man Who Knew Infinity, p. 7 .
^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay Kanigel, Robert (1991). The Man Who Knew Infinity: a Life of the Genius Ramanujan. New York: Charles Scribner's Sons. ISBN 978-0-684-19259-8.
^ "Ramanujan, Srinivasa (1887–1920), mathematician", Oxford Dictionary of National Biography, September 2004 (Oxford University Press). Retrieved 14 March 2019.
^ Berndt & Rankin 2001, p. 89 harvnb error: multiple targets (2×): CITEREFBerndtRankin2001 (help)
^ Srinivasan, Pankaja (19 October 2012). "The Nostalgia Formula". The Hindu. Retrieved 7 September 2016.
^ Berndt & Rankin 2001, p. 9 harvnb error: multiple targets (2×): CITEREFBerndtRankin2001 (help)
^ Hardy, G. H. (1999). Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work. Providence, Rhode Island: American Mathematical Society. p. 2. ISBN 978-0-8218-2023-0.
^ McElroy, Tucker (2005). A to Z of mathematicians. Facts on File. p. 221. ISBN 0-8160-5338-3.
^ a b Ramanujan Aiyangar, Srinivasa; Hardy, Godfrey Harold; Aiyar, P. Veṅkatesvara Seshu (2000), "Collected papers of Srinivasa Ramanujan", Nature, 123 (3104): xii, Bibcode:1929Natur.123..631L, doi:10.1038/123631a0, ISBN 978-0-8218-2076-6, S2CID 44812911
^ "Ramanujan lost and found: a 1905 letter from The Hindu". The Hindu. Chennai, India. 25 December 2011. [permanent dead link]
^ Krishnamachari, Suganthi (27 June 2013). "Travails of a Genius". The Hindu. Archived from the original on 26 August 2017. Retrieved 7 September 2016.
^ Krishnamurthy, V. "Srinivasa Ramanujan – His life and his genius". www.krishnamurthys.com. (Expository address delivered on Sep.16, 1987 at Visvesvarayya Auditorium as part of the celebrations of Ramanujan Centenary by the IISC, Bangalore). Archived from the original on 21 September 2016. Retrieved 7 September 2016.
^ "The seamstress and the mathematician". Live mint.
^ Bullough, V.L. (1990). "2. History in adult human sexual behavior with children and adolescents in Western societies". Pedophilia: Biosocial Dimensions. New York: Springer-Verlag. p. 71. ISBN 978-1-46139684-0.
^ Kolata, Gina (19 June 1987). "Remembering a 'Magical Genius'". Science. New Series. 236 (4808): 1519–21. Bibcode:1987Sci...236.1519K. doi:10.1126/science.236.4808.1519. PMID 17835731.
^ a b "Ramanujan's wife: Janakiammal (Janaki)" (PDF). Chennai: Institute of Mathematical Sciences. Archived from the original (PDF) on 24 December 2012. Retrieved 10 November 2012.
^ a b c Janardhanan, Arun (6 December 2015). "A passage to infinity". Indian Express. Archived from the original on 5 September 2016. Retrieved 7 September 2016.
^ Ramanujan, Srinivasa (1968). P. K. Srinivasan (ed.). Ramanujan Memorial Number: Letters and Reminiscences. 1. Madras: Muthialpet High School. 100.
^ Ranganathan, Shiyali Ramamrita (1967). Ramanujan: The Man and the Mathematician. Bombay: Asia Publishing House. p. 23. ISBN 9788185273372.
^ Srinivasan (1968), Vol. 1, p. 99.
Mathematics of apportionment
Mathematics of apportionment describes mathematical principles and algorithms for fair allocation of identical items among parties with different entitlements. Such principles are used to apportion seats in parliaments among federal states or political parties. See apportionment (politics) for the more concrete principles and issues related to apportionment, and apportionment by country for practical methods used around the world.
Mathematically, an apportionment method is just a method of rounding fractions to integers. As simple as it may sound, each and every method for rounding suffers from one or more paradoxes. The mathematical theory of apportionment aims to decide what paradoxes can be avoided, or in other words, what properties can be expected from an apportionment method.
The mathematical theory of apportionment was studied as early as 1907 by the mathematician Agner Krarup Erlang. It was later developed in great detail by the mathematician Michel Balinski and the economist Peyton Young.[1][2][3] Besides its application to political parties,[4] it is also applicable to fair item allocation when agents have different entitlements. It is also relevant in manpower planning, where jobs should be allocated in proportion to characteristics of the labor pool; in statistics, where reported rounded percentages should sum to 100%;[5][6] and in bankruptcy problems.[7]
Definitions
Input
The inputs to an apportionment method are:
• A positive integer $h$ representing the total number of items to allocate. It is also called the house size, since in many cases, the items to allocate are seats in a house of representatives.
• A positive integer $n$ representing the number of agents to which items should be allocated. For example, these can be federal states or political parties.
• A vector of numbers $(t_{1},\ldots ,t_{n})$ representing entitlements - $t_{i}$ represents the entitlement of agent $i$, that is, the amount of items to which $i$ is entitled (out of the total of $h$). These entitlements are often normalized such that $\sum _{i=1}^{n}t_{i}=1$. Alternatively, they can be normalized such that their sum is $h$; in this case the entitlements are called quotas and denoted by $q_{i}$, where $q_{i}:=t_{i}\cdot h$ and $\sum _{i=1}^{n}q_{i}=h$. Alternatively, one is given a vector of populations $(p_{1},\ldots ,p_{n})$; here, the entitlement of agent $i$ is $t_{i}=p_{i}/\sum _{j=1}^{n}p_{j}$.
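As a concrete illustration of these definitions, entitlements and quotas can be computed directly from raw populations. A minimal Python sketch (the function name and the sample populations are illustrative, not from the article):

```python
def quotas(populations, h):
    """Normalize populations p_i into entitlements t_i = p_i / sum(p),
    then scale by the house size h to obtain quotas q_i = t_i * h."""
    total = sum(populations)
    t = [p / total for p in populations]
    q = [ti * h for ti in t]
    return t, q

# three states with populations 737, 534, 329 and a house of 16 seats:
# entitlements sum to 1, quotas are approximately (7.37, 5.34, 3.29)
t, q = quotas([737, 534, 329], 16)
```

Note that the quotas sum to exactly $h$, but are generally not integers, which is what makes rounding methods necessary.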
Output
The output is a vector of integers $a_{1},\ldots ,a_{n}$ with $\sum _{i=1}^{n}a_{i}=h$, called an apportionment of $h$, where $a_{i}$ is the number of items allocated to agent i.
For each agent $i$, the real number $q_{i}:=t_{i}\cdot h$ is called the quota of $i$, and denotes the exact number of items that should be given to $i$. In general, a "fair" apportionment is one in which each allocation $a_{i}$ is as close as possible to the quota $q_{i}$.
An apportionment method may return a set of apportionment vectors (in other words: it is a multivalued function). This is required, since in some cases there is no fair way to distinguish between two possible solutions. For example, if $h=101$ (or any other odd number) and $t_{1}=t_{2}=1/2$, then (50,51) and (51,50) are both equally reasonable solutions, and there is no mathematical way to choose one over the other. While such ties are extremely rare in practice, the theory must account for them (in practice, when an apportionment method returns multiple outputs, one of them may be chosen by some external priority rules, or by coin flipping, but this is beyond the scope of the mathematical apportionment theory).
An apportionment method is denoted by a multivalued function $M(\mathbf {t} ,h)$; a particular $M$-solution is a single-valued function $f(\mathbf {t} ,h)$ which selects a single apportionment from $M(\mathbf {t} ,h)$.
A partial apportionment method is an apportionment method for specific fixed values of $n$ and $h$; it is a multivalued function $M^{*}(\mathbf {t} )$ that accepts only $n$-vectors.
Variants
Sometimes, the input also contains a vector of integers $r_{1},\ldots ,r_{n}$ representing minimum requirements - $r_{i}$ represents the smallest number of items that agent $i$ should receive, regardless of its entitlement. So there is an additional requirement on the output: $a_{i}\geq r_{i}$ for all $i$.
When the agents are political parties, these numbers are usually 0, so this vector is omitted. But when the agents are states or districts, these numbers are often positive in order to ensure that all are represented. They can be the same for all agents (e.g. 1 for USA states, 2 for France districts), or different (e.g. in Canada or the European parliament).
Sometimes there is also a vector of maximum requirements, but it is less common.
Basic requirements
There are basic properties that should be satisfied by any reasonable apportionment method. They were given different names by different authors: the names on the left are from Pukelsheim;[8]: 75 the names in parentheses on the right are from Balinski and Young.[1]
• Anonymity (=Symmetry) means that the apportionment does not depend on the agents' names or indices. Formally, if $\mathbf {t'} $ is any permutation of $\mathbf {t} $, then the apportionments in $M(\mathbf {t'} ,h)$ are exactly the corresponding permutations of the apportionments in $M(\mathbf {t} ,h)$.
• This requirement makes sense when there are no minimal requirements, or when the requirements are the same; if they are not the same, then anonymity should hold subject to the requirements being satisfied.
• Balancedness (=Balance) means that if two agents have equal entitlements, then their allocation should differ by at most 1: $t_{i}=t_{j}$ implies $a_{i}\geq a_{j}-1$.
• Concordance (=Weak population monotonicity) means that an agent with a strictly higher entitlement receives at least as many items: $t_{i}>t_{j}$ implies $a_{i}\geq a_{j}$.
• Decency (=Homogeneity) means that scaling the entitlement vector does not change the outcome. Formally, $M(c\cdot \mathbf {t} ,h)=M(\mathbf {t} ,h)$ for every constant c (this is automatically satisfied if the input to the apportionment method is normalized).
• Exactness (=Weak proportionality) means that if there exists a perfect solution, then it must be selected. Formally, if the quota $q_{i}=t_{i}\cdot h$ of each agent $i$ is an integer number, then $M(\mathbf {t} ,h)$ must contain a unique vector $(q_{1},\ldots ,q_{n})$. In other words, if an h-apportionment $\mathbf {a} $ is exactly proportional to $\mathbf {t} $, then it should be the unique element of $M(\mathbf {t} ,h)$.
• Strong exactness[9]: 13 means that exactness also holds "in the limit". That is, if a sequence of entitlement vectors converges to an integer quota vector $(q_{1},\ldots ,q_{n})$, then the only allocation vector in all elements of the sequence is $(q_{1},\ldots ,q_{n})$. To see the difference from weak exactness, consider the following rule. (a) Give each agent its quota rounded down, $\lfloor q_{i}\rfloor $; (b) give the remaining seats iteratively to the largest parties. This rule is weakly exact, but not strongly exact. For example, suppose h=6 and consider the sequence of quota vectors (4+1/k, 2-1/k). The above rule yields the allocation (5,1) for all k, even though the limit when k→∞ is the integer vector (4,2).
• Strong proportionality[1] means that, in addition, if $\mathbf {a'} \in M(\mathbf {t} ,h')$, and $h<h'$, and there is some h-apportionment $\mathbf {a} $ that is exactly proportional to $\mathbf {a} '$, then it should be the unique element of $M(\mathbf {t} ,h)$. For example, if one solution in $M(\mathbf {t} ,6)$ is (3,3), then the only solution in $M(\mathbf {t} ,4)$ must be (2,2).
• Completeness means that, if some apportionment $\mathbf {a} $ is returned for a converging sequence of entitlement vectors, then $\mathbf {a} $ is also returned for their limit vector. In other words, the set $\{\mathbf {t} |\mathbf {a} \in M(\mathbf {t} ,h)\}$ - the set of entitlement vectors for which $\mathbf {a} $ is a possible apportionment - is topologically closed. An incomplete method can be "completed" by adding the apportionment $\mathbf {a} $ to any limit entitlement if and only if it belongs to every entitlement in the sequence. The completion of a symmetric and proportional apportionment method is complete, symmetric and proportional.[1]: Prop.2.2
• Completeness is violated by methods that apply an external tie-breaking rule, as done by many countries in practice. The tie-breaking rule applies only in the limit case, so it might break the completeness.
• Completeness and weak-exactness together imply strong-exactness. If a complete and weakly-exact method is modified by adding an appropriate tie-breaking rule, then the resulting rule is no longer complete, but it is still strongly-exact.[9]: 13
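The failure of strong exactness described above can be checked numerically. A small sketch (the function name is ours) of the rule "give each agent its quota rounded down, then hand leftover seats to the largest agents":

```python
import math

def floor_then_largest(q, h):
    """The weakly-but-not-strongly-exact rule from the text:
    (a) give each agent floor(q_i); (b) give leftover seats to the largest agents."""
    a = [math.floor(qi) for qi in q]
    leftover = h - sum(a)
    for i in sorted(range(len(q)), key=lambda j: q[j], reverse=True)[:leftover]:
        a[i] += 1
    return a

# for every k, the quota vector (4 + 1/k, 2 - 1/k) yields (5, 1),
# even though the quotas converge to the integer vector (4, 2)
print(floor_then_largest([4 + 1/1000, 2 - 1/1000], 6))  # → [5, 1]
print(floor_then_largest([4.0, 2.0], 6))                # → [4, 2]
```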
Other considerations
The proportionality of apportionment can be measured by seats-to-votes ratio and Gallagher index. The proportionality of apportionment together with electoral thresholds impact political fragmentation and barrier to entry to the political competition.[10]
Common apportionment methods
There are many apportionment methods, and they can be classified into several approaches.
1. Largest remainder methods start by computing the vector of quotas rounded down, that is, $\lfloor q_{1}\rfloor ,\ldots ,\lfloor q_{n}\rfloor $. If the sum of these rounded values is exactly $h$, then this vector is returned as the unique apportionment. Typically, the sum is smaller than $h$. In this case, the remaining items are allocated among the agents according to their remainders $q_{i}-\lfloor q_{i}\rfloor $: the agent with the largest remainder receives one seat, then the agent with the second-largest remainder receives one seat, and so on, until all items are allocated. There are several variants of the LR method, depending on which quota is used:
• The simple quota, also called the Hare quota, is $t_{i}h$. Using LR with the Hare quota leads to Hamilton's method.
• The Hagenbach-Bischoff quota, also called the exact Droop quota, is $t_{i}\cdot (h+1)$. The quotas in this method are larger, so there are fewer remaining items. In theory, it is possible for the sum of the rounded-down quotas to be $h+1$, which is larger than $h$, but this rarely happens in practice.
• The Imperiali quota is $t_{i}\cdot (h+2)$. This quota is less common, since there are higher chances that the sum of rounded-down quotas will be larger than $h$.
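The largest-remainder procedure can be sketched as follows (the `extra` parameter selecting among the quota variants is our own device; the sketch assumes the common case where the rounded-down quotas sum to at most $h$):

```python
def largest_remainder(t, h, extra=0):
    """Largest-remainder apportionment from normalized entitlements t.
    extra=0 uses the Hare quota t_i*h (Hamilton's method),
    extra=1 the exact Droop quota t_i*(h+1), extra=2 the Imperiali quota."""
    shares = [ti * (h + extra) for ti in t]
    a = [int(si) for si in shares]   # round each share down
    missing = h - sum(a)             # seats still to hand out
    # give them out by descending fractional remainder
    for i in sorted(range(len(t)), key=lambda j: shares[j] - a[j],
                    reverse=True)[:max(missing, 0)]:
        a[i] += 1
    return a

# entitlements 737/1600, 534/1600, 329/1600 with h = 16:
# quotas (7.37, 5.34, 3.29) -> floors (7, 5, 3), last seat to the largest remainder
print(largest_remainder([737/1600, 534/1600, 329/1600], 16))  # → [8, 5, 3]
```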
2. Divisor methods, instead of using a fixed multiplier in the quota (such as $h$ or $h+1$), choose the multiplier such that the sum of rounded quotas is exactly equal to $h$, so there are no remaining items to allocate. Formally, $M(\mathbf {t} ,h):=\{\mathbf {a} |a_{i}=\operatorname {round} (t_{i}\cdot H){\text{ and }}\sum _{i=1}^{n}a_{i}=h{\text{ for some real number }}H\}$. Divisor methods differ by the method they use for rounding. A divisor method is parametrized by a divisor function $d(k)$ which specifies, for each integer $k\geq 0$, a real number in the interval $[k,k+1]$. It means that all numbers in $[k,d(k)]$ should be rounded down to $k$, and all numbers in $[d(k),k+1]$ should be rounded up to $k+1$. The rounding function is denoted by $\operatorname {round} ^{d}(x)$, and returns an integer $k$ such that $d(k-1)\leq x\leq d(k)$. The number $d(k)$ itself can be rounded both up and down, so the rounding function is multi-valued. For example, Adams' method uses $d(k)=k$, which corresponds to rounding up; D'Hondt/Jefferson method uses $d(k)=k+1$, which corresponds to rounding down; and Webster/Sainte-Laguë method uses $d(k)=k+0.5$, which corresponds to rounding to the nearest integer. A divisor method can also be computed iteratively: initially, $a_{i}$ is set to 0 for all parties. Then, at each iteration, the next seat is allocated to a party which maximizes the ratio ${\frac {t_{i}}{d(a_{i})}}$.
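The iterative formulation of a divisor method is straightforward to sketch (illustrative code; the handling of $d(0)=0$ for Adams' method simply means every party receives a first seat before any party receives a second):

```python
def divisor_method(votes, h, d):
    """Iterative (highest-averages) divisor apportionment.
    d is the divisor function d(k), e.g.
      Adams:     lambda k: k        (round up)
      Webster:   lambda k: k + 0.5  (round to nearest)
      Jefferson: lambda k: k + 1    (round down; D'Hondt)"""
    a = [0] * len(votes)
    for _ in range(h):
        # next seat goes to the party maximizing votes_i / d(a_i)
        i = max(range(len(votes)),
                key=lambda j: float('inf') if d(a[j]) == 0 else votes[j] / d(a[j]))
        a[i] += 1
    return a

# on the (737, 534, 329) example with h = 16, both Jefferson and Webster
# happen to return (8, 5, 3)
print(divisor_method([737, 534, 329], 16, lambda k: k + 1))    # → [8, 5, 3]
print(divisor_method([737, 534, 329], 16, lambda k: k + 0.5))  # → [8, 5, 3]
```

By decency (homogeneity), raw vote counts can be passed instead of normalized entitlements.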
3. Rank-index methods are parametrized by a function $r(t,a)$ which is decreasing in $a$. The apportionment is computed iteratively. Initially, set $a_{i}$ to 0 for all parties. Then, at each iteration, allocate the next seat to an agent which maximizes $r(t_{i},a_{i})$. Divisor methods are a special case of rank-index methods: a divisor method with divisor function $d(a)$ is equivalent to a rank-index method with rank-index $r(t,a)=t/d(a)$.
4. Optimization-based methods aim to attain, for each instance, an allocation that is "as fair as possible" for this instance. An allocation is "fair" if $a_{i}=q_{i}$ for all agents i; in this case, we say that the "unfairness" of the allocation is 0. If this equality is violated, one can define a measure of "total unfairness", and try to minimize it. One can minimize the sum of unfairness levels, or the maximum unfairness level. Each optimization criterion leads to a different rule.
Staying within the quota
Main article: Quota rule
The exact quota of agent $i$ is $q_{i}=t_{i}\cdot h$. A basic requirement from an apportionment method is that it allocates to each agent $i$ its quota $q_{i}$ if it is an integer; otherwise, it should allocate it an integer that is near the exact quota, that is, either its lower quota $\lfloor q_{i}\rfloor $ or its upper quota $\lceil q_{i}\rceil $.[11] We say that an apportionment method -
• Satisfies lower quota if $a_{i}\geq \lfloor q_{i}\rfloor $ for all $i$ (this holds iff $a_{i}+1>q_{i}$).
• Satisfies upper quota if $a_{i}\leq \lceil q_{i}\rceil $ for all $i$ (this holds iff $a_{i}-1<q_{i}$).
• Satisfies both quotas if both the above conditions hold (this holds iff ${\frac {q_{i}}{a_{i}+1}}<1<{\frac {q_{i}}{a_{i}-1}}$).
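These conditions are easy to verify mechanically for a given apportionment (illustrative sketch):

```python
import math

def quota_status(a, t, h):
    """Return (lower, upper): whether apportionment a satisfies lower quota
    (a_i >= floor(q_i) for all i) and upper quota (a_i <= ceil(q_i) for all i),
    where q_i = t_i * h."""
    q = [ti * h for ti in t]
    lower = all(ai >= math.floor(qi) for ai, qi in zip(a, q))
    upper = all(ai <= math.ceil(qi) for ai, qi in zip(a, q))
    return lower, upper

# quotas (7.37, 5.34, 3.29): the apportionment (8, 5, 3) stays within both quotas
print(quota_status([8, 5, 3], [737/1600, 534/1600, 329/1600], 16))  # → (True, True)
```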
Hamilton's largest-remainder method satisfies both lower quota and upper quota by construction. This does not hold for the divisor methods:[1]: Prop.6.2, 6.3, 6.4, 6.5
• All divisor methods satisfy both quotas when there are 2 agents;
• Webster's method is the only divisor method satisfying both quotas for 3 agents;
• Adams' method is the only divisor method satisfying upper quota for any number of agents;
• Jefferson's method is the only divisor method satisfying lower quota for any number of agents;
• No divisor method simultaneously violates upper quota for one agent and violates lower quota for another agent.
Therefore, no divisor method satisfies both upper quota and lower quota for any number of agents. The uniqueness of Jefferson and Adams holds even in the much larger class of rank-index methods.[12]
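Jefferson's failure of upper quota is easy to reproduce on a small hypothetical vote split (the numbers below are our own illustration, chosen so one party dominates):

```python
def dhondt(votes, h):
    """Self-contained D'Hondt (Jefferson) allocation: repeatedly give the
    next seat to the party maximizing votes_i / (seats_i + 1)."""
    a = [0] * len(votes)
    for _ in range(h):
        i = max(range(len(votes)), key=lambda j: votes[j] / (a[j] + 1))
        a[i] += 1
    return a

# quotas are (8.6, 0.6, 0.5, 0.3) for h = 10, so the large party's upper
# quota is 9 - yet D'Hondt gives it all 10 seats, since even its 10th
# quotient 86/10 = 8.6 beats every other party's first quotient
print(dhondt([86, 6, 5, 3], 10))  # → [10, 0, 0, 0]
```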
This can be seen as a disadvantage of divisor methods, but it can also be considered a disadvantage of the quota criterion:[1]: 129
"For example, to give D 26 instead of 25 seats in Table 10.1 would mean taking a seat from one of the smaller states A, B, or C. Such a transfer would penalize the per capita representation of the small state much more - in both absolute and relative terms - than state D is penalized by getting one less than its lower quota. Similar examples can be invented in which some state might reasonably get more than its upper quota. It can be argued that staying within the quota is not really compatible with the idea of proportionality at all, since it allows a much greater variance in the per capita representation of smaller states than it does for larger states."
In Monte-Carlo simulations, Webster's method satisfies both quotas with a very high probability. Moreover, Webster's method is the only divisor method that satisfies near quota:[1]: Thm.6.2 there are no agents $i,j$ such that moving a seat from $i$ to $j$ would bring both of them nearer to their quotas:
$q_{i}-(a_{i}-1)~<~a_{i}-q_{i}~~{\text{ and }}~~(a_{j}+1)-q_{j}~<~q_{j}-a_{j}$.
Jefferson's method can be modified to satisfy both quotas, yielding the Quota-Jefferson method.[11] Moreover, any divisor method can be modified to satisfy both quotas.[13] This yields the Quota-Webster method, Quota-Hill method, etc. This family of methods is often called the quotatone methods,[12] as they satisfy both quotas and house-monotonicity.
Minimizing pairwise inequality
One way to evaluate apportionment methods is by whether they minimize the amount of inequality between pairs of agents. Clearly, inequality should take into account the different entitlements: if $a_{i}/t_{i}=a_{j}/t_{j}$ then the agents are treated "equally" (w.r.t. to their entitlements); otherwise, if $a_{i}/t_{i}>a_{j}/t_{j}$ then agent $i$ is favored, and if $a_{i}/t_{i}<a_{j}/t_{j}$ then agent $j$ is favored. However, since there are 16 ways to rearrange the equality $a_{i}/t_{i}=a_{j}/t_{j}$, there are correspondingly many ways by which inequality can be defined.[1]: 100–102
• $|a_{i}/t_{i}-a_{j}/t_{j}|$. Webster's method is the unique apportionment method in which, for each pair of agents $i$ and $j$, this difference is minimized (that is, moving a seat from $i$ to $j$ or vice versa would not make the difference smaller).
• $a_{i}-(t_{i}/t_{j})a_{j}$ for $a_{i}/t_{i}\geq a_{j}/t_{j}$. This leads to Adams's method.
• $a_{i}(t_{j}/t_{i})-a_{j}$ for $a_{i}/t_{i}\geq a_{j}/t_{j}$. This leads to Jefferson's method.
• $|t_{i}/a_{i}-t_{j}/a_{j}|$. This leads to Dean's method.
• $\left|{\frac {a_{i}/t_{i}}{a_{j}/t_{j}}}-1\right|$. This leads to the Huntington-Hill method.
This analysis was done by Huntington in the 1920s.[14][15][16] Some of the possibilities do not lead to a stable solution. For example, if we define inequality as $|a_{i}/a_{j}-t_{i}/t_{j}|$, then there are instances in which, for any allocation, moving a seat from one agent to another might decrease their pairwise inequality. There is an example with 3 states with populations (737,534,329) and 16 seats.[1]: Prop.3.5
Bias towards large/small agents
When the agents are federal states, it is particularly important to avoid bias between large states and small states. There are several ways to measure this bias formally. All measurements lead to the conclusion that Jefferson's method is biased in favor of large states, Adams' method is biased in favor of small states, and Webster's method is the least biased divisor method.
Consistency properties
Consistency properties are properties that characterize an apportionment method, rather than a particular apportionment. Each consistency property compares the outcomes of a particular method on different inputs. Several such properties have been studied.
State-population monotonicity means that, if the entitlement of an agent increases, its apportionment should not decrease. The name comes from the setting where the agents are federal states, whose entitlements are determined by their population. A violation of this property is called the population paradox. There are several variants of this property. One variant - the pairwise PM - is satisfied exclusively by divisor methods. That is, an apportionment method is pairwise PM if-and-only-if it is a divisor method.[1]: Thm.4.3
When $n\geq 4$ and $h\geq n+3$, no partial apportionment method satisfies pairwise-PM, lower quota and upper quota.[1]: Thm.6.1 Combined with the previous statements, it implies that no divisor method satisfies both quotas.
House monotonicity means that, when the total number of seats $h$ increases, no agent loses a seat. The violation of this property is called the Alabama paradox. It was considered particularly important in the early days of the USA, when the congress size increased every ten years. House-monotonicity is weaker than pairwise-PM. All rank-index methods (hence all divisor methods) are house-monotone - this clearly follows from the iterative procedure. Besides the divisor methods, there are other house-monotone methods, and some of them also satisfy both quotas. For example, the Quota method of Balinski and Young satisfies house-monotonicity and upper-quota by construction, and it can be proved that it also satisfies lower-quota.[11] It can be generalized: there is a general algorithm that yields all apportionment methods which are both house-monotone and satisfy both quotas. However, all these quota-based methods (Quota-Jefferson, Quota-Hill, etc.) may violate pairwise-PM: there are examples in which one agent gains in population but loses seats.[1]: Sec.7
Uniformity (also called coherence[17]) means that, if we take some subset of the agents $1,\ldots ,k$, and apply the same method to their combined allocation $h_{k}=a_{1}+\cdots +a_{k}$, then the result is the vector $(a_{1},\ldots ,a_{k})$. All rank-index methods (hence all divisor methods) are uniform, since they assign seats to agents in a pre-determined order - the order determined by $r(t,a)$ - and this order does not depend on the presence or absence of other agents. Moreover, every uniform method that is also anonymous and balanced must be a rank-index method.[1]: Thm.8.3
Every uniform method that is also anonymous, weakly-exact and concordant (= $t_{i}>t_{j}$ implies $a_{i}\geq a_{j}$) must be a divisor method.[1]: Thm.8.4 Moreover, among all anonymous methods:[12]
• Jefferson's method is the only uniform method satisfying lower quota;
• Adams's method is the only uniform method satisfying upper quota;
• Webster's method is the only uniform method that is near quota;
• No uniform method satisfies both quotas. In particular, Hamilton's method and the Quota method are not uniform. However, the Quota method is the unique method that satisfies both quotas in addition to house-monotonicity and "quota-consistency", which is a weaker form of uniformity.
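For contrast with the uniform methods above, Hamilton's largest-remainder method, which satisfies both quotas but is not uniform, can be sketched as follows (sample votes are ours, chosen for illustration):

```python
import math

def hamilton(votes, seats):
    """Largest-remainder (Hamilton) method: each agent gets the floor of
    its quota q_i = seats * v_i / sum(votes); any leftover seats go to
    the agents with the largest fractional remainders."""
    total = sum(votes)
    quotas = [seats * v / total for v in votes]
    alloc = [math.floor(q) for q in quotas]
    # agents ordered by descending fractional part of their quota
    order = sorted(range(len(votes)),
                   key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[: seats - sum(alloc)]:
        alloc[i] += 1
    return alloc

# By construction floor(q_i) <= a_i <= ceil(q_i), i.e. both quotas hold.
```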
Encouraging coalitions
When the agents are political parties, they often split or merge. How such splitting/merging affects the apportionment will impact political fragmentation. Suppose a certain apportionment method gives two agents $i,j$ some $a_{i},a_{j}$ seats respectively, and then these two agents form a coalition, and the method is re-activated.
• An apportionment method encourages coalitions if the coalition receives at least $a_{i}+a_{j}$ seats (in other words, it is split-proof - a party cannot gain a seat by splitting).
• An apportionment method encourages schisms if the coalition receives at most $a_{i}+a_{j}$ seats (in other words, it is merge-proof - two parties cannot gain a seat by merging).
Among the divisor methods:[1]: Thm.9.1, 9.2, 9.3
• Jefferson's method is the unique divisor method that encourages coalitions;
• Adams's method is the unique divisor method that encourages schisms.
• Webster's method is neither split-proof nor merge-proof, but it is "coalition neutral": when votes are distributed randomly, a coalition is equally likely to gain a seat as to lose one.[1]: Prop.9.4
Since these are different methods, no divisor method gives every coalition of $i,j$ exactly $a_{i}+a_{j}$ seats. Moreover, this uniqueness can be extended to the much larger class of rank-index methods.[12]
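A small numerical example (the vote counts are ours, chosen for illustration) shows Jefferson/D'Hondt rewarding a merger:

```python
def dhondt(votes, seats):
    """Jefferson/D'Hondt: hand out seats by the largest quotients v/(k+1)."""
    quots = sorted(((v / (k + 1), i) for i, v in enumerate(votes)
                    for k in range(seats)), reverse=True)
    alloc = [0] * len(votes)
    for _, i in quots[:seats]:
        alloc[i] += 1
    return alloc

# Parties B (60 votes) and C (45 votes) running separately vs. merged:
separate = dhondt([100, 60, 45], 5)   # -> [3, 1, 1]
merged = dhondt([100, 60 + 45], 5)    # -> [2, 3]: the coalition gains a seat
```

Here the B-C coalition receives 3 seats, at least the 1 + 1 = 2 seats B and C got apart, consistent with Jefferson's method encouraging coalitions.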
A weaker property, called "coalitional-stability", is that every coalition of $i,j$ should receive between $a_{i}+a_{j}-1$ and $a_{i}+a_{j}+1$ seats; so a party can gain at most one seat by merging/splitting.
• The Hamilton method is coalitionally-stable.[18]: Thm.2 [12]: Appendix
• A divisor method with divisor $d$ is coalitionally-stable iff $d(a_{1}+a_{2})\leq d(a_{1})+d(a_{2})\leq d(a_{1}+a_{2}+1)$; this holds for all five standard divisor methods.[18]: Thm.1
Moreover, every method satisfying both quotas is "almost coalitionally-stable" - it gives every coalition between $a_{i}+a_{j}-2$ and $a_{i}+a_{j}+2$ seats.[12]
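The divisor-sequence criterion for coalitional stability is easy to check numerically; the divisor formulas below are the classical ones for the five standard methods, and the grid size is an arbitrary choice of ours:

```python
import math

# Divisor sequences d(a) of the five standard divisor methods.
DIVISORS = {
    "Adams":     lambda a: a,
    "Dean":      lambda a: a * (a + 1) / (a + 0.5),
    "Hill":      lambda a: math.sqrt(a * (a + 1)),
    "Webster":   lambda a: a + 0.5,
    "Jefferson": lambda a: a + 1,
}

def coalition_stable(d, amax=30, eps=1e-9):
    """Check d(a1+a2) <= d(a1)+d(a2) <= d(a1+a2+1) on a small grid."""
    return all(
        d(a + b) <= d(a) + d(b) + eps and d(a) + d(b) <= d(a + b + 1) + eps
        for a in range(amax) for b in range(amax)
    )
```

All five standard sequences pass the check (for Jefferson the right inequality holds with equality, which is why `eps` guards against floating-point noise).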
Summary table

Method | Lower quota | Upper quota | Near quota | House monotonicity | Uniformity | Pairwise population monotonicity | Encouraging coalitions | Encouraging schisms | Coalition neutrality

Positive results:
Divisor methods | [Only Jefferson] | [Only Adams] | [Only Webster] | Yes | Yes | Yes | [Only Jefferson] | [Only Adams] | [Only Webster]
Rank-index methods | [Only Jefferson] | [Only Adams] | [Only Webster] | Yes | Yes | [Only divisor methods] | [Only Jefferson?] | [Only Adams?] | [Only Webster?]
Hamilton | Yes | Yes | Yes | No | No | No | ? | ? | ?
Quota-capped divisor methods | Yes | Yes | Yes | Yes | No | No | ? | ? | ?

Impossibility results:
- Yes Yes Yes
- Yes Yes Yes
- Yes Yes Yes Yes
- Yes Yes Yes Yes
See also
• Proportional representation
• Multi-attribute proportional representation.[19]
• Apportionment when there are errors in the population counts.[20]
• Proportional cake-cutting with different entitlements
• Fair item allocation
References
1. Balinski, Michel L.; Young, H. Peyton (1982). Fair Representation: Meeting the Ideal of One Man, One Vote. New Haven: Yale University Press. ISBN 0-300-02724-9.
2. Balinski, Michel L.; Young, H. Peyton (2001). Fair Representation: Meeting the Ideal of One Man, One Vote (2nd ed.). Washington, DC: Brookings Institution Press. ISBN 0-8157-0111-X.
3. Balinski, M.L.; Young, H.P. (1994-01-01). "Chapter 15 Apportionment". Handbooks in Operations Research and Management Science. 6: 529–560. doi:10.1016/S0927-0507(05)80096-9. ISBN 9780444892041. ISSN 0927-0507.
4. Cotteret, J. M.; Emeri, C. (1973). Les systèmes électoraux.
5. Diaconis, Persi; Freedman, David (1979-06-01). "On Rounding Percentages". Journal of the American Statistical Association. 74 (366a): 359–364. doi:10.1080/01621459.1979.10482518. ISSN 0162-1459.
6. Balinski, M. L.; Demange, G. (1989-11-01). "An Axiomatic Approach to Proportionality Between Matrices". Mathematics of Operations Research. 14 (4): 700–719. doi:10.1287/moor.14.4.700. ISSN 0364-765X.
7. Csóka, Péter; Herings, P. Jean-Jacques (2016-01-01). "Decentralized Clearing in Financial Networks (RM/16/005-revised-)".
8. Pukelsheim, Friedrich (2017), Pukelsheim, Friedrich (ed.), "Divisor Methods of Apportionment: Divide and Round", Proportional Representation: Apportionment Methods and Their Applications, Cham: Springer International Publishing, pp. 71–93, doi:10.1007/978-3-319-64707-4_4, ISBN 978-3-319-64707-4, retrieved 2021-09-01
9. Palomares, Antonio; Pukelsheim, Friedrich; Ramírez, Victoriano (2016-09-01). "The whole and its parts: On the coherence theorem of Balinski and Young". Mathematical Social Sciences. 83: 11–19. doi:10.1016/j.mathsocsci.2016.06.001. ISSN 0165-4896.
10. Tullock, Gordon (1965). "Entry Barriers in Politics". The American Economic Review. 55 (1/2): 458–466.
11. Balinski, M. L.; Young, H. P. (1975-08-01). "The Quota Method of Apportionment". The American Mathematical Monthly. 82 (7): 701–730. doi:10.1080/00029890.1975.11993911. ISSN 0002-9890.
12. Balinski, M. L.; Young, H. P. (1978-09-01). "Stability, Coalitions and Schisms in Proportional Representation Systems*". American Political Science Review. 72 (3): 848–858. doi:10.2307/1955106. ISSN 0003-0554. JSTOR 1955106. S2CID 144161027.
13. Still, Jonathan W. (1979-10-01). "A Class of New Methods for Congressional Apportionment". SIAM Journal on Applied Mathematics. 37 (2): 401–418. doi:10.1137/0137031. ISSN 0036-1399.
14. Huntington, E. V. (1928). "The Apportionment of Representatives in Congress". Transactions of the American Mathematical Society. 30 (1): 85–110. doi:10.2307/1989268. ISSN 0002-9947. JSTOR 1989268.
15. Huntington, Edward V. (1921-09-01). "A New Method of Apportionment of Representatives". Quarterly Publications of the American Statistical Association. 17 (135): 859–870. doi:10.1080/15225445.1921.10503487. ISSN 1522-5445. S2CID 129746319.
16. Huntington, Edward V. (1921-04-01). "The Mathematical Theory of the Apportionment of Representatives". Proceedings of the National Academy of Sciences of the United States of America. 7 (4): 123–127. Bibcode:1921PNAS....7..123H. doi:10.1073/pnas.7.4.123. ISSN 0027-8424. PMC 1084767. PMID 16576591.
17. Pukelsheim, Friedrich (2017), Pukelsheim, Friedrich (ed.), "Securing System Consistency: Coherence and Paradoxes", Proportional Representation: Apportionment Methods and Their Applications, Cham: Springer International Publishing, pp. 159–183, doi:10.1007/978-3-319-64707-4_9, ISBN 978-3-319-64707-4, retrieved 2021-09-01
18. Balinski, M. L.; Young, H. P. (1979-02-01). "Criteria for Proportional Representation". Operations Research. 27 (1): 80–95. doi:10.1287/opre.27.1.80. ISSN 0030-364X.
19. Lang, Jérôme; Skowron, Piotr (2018-10-01). "Multi-attribute proportional representation". Artificial Intelligence. 263: 74–106. doi:10.1016/j.artint.2018.07.005. ISSN 0004-3702. S2CID 11079872.
20. Spencer, Bruce D. (1985-12-01). "Statistical Aspects of Equitable Apportionment". Journal of the American Statistical Association. 80 (392): 815–822. doi:10.1080/01621459.1985.10478188. ISSN 0162-1459.
\begin{document}
\title[Piltz divisor problem over number fields \`a la Vorono\"i]{Piltz divisor problem over number fields \`a la Vorono\"i} \author{Soumyarup Banerjee} \address{\rm Department of Mathematics, University of Hong Kong, Pokfulam, Hong Kong} \email{[email protected]}
\subjclass[2010] {11R42, 11R11, 11S40, 33C60}
\keywords{Piltz divisor problem, Dedekind zeta function, Special function, Riesz sum}
\begin{abstract} In this article, we study the {\rm Piltz divisor problem}, which is sometimes called the {\rm generalized Dirichlet divisor problem}, over number fields. We establish an identity akin to Vorono\"i's formula concerning the error term in the Dirichlet divisor problem. \end{abstract}
\thanks{The research of the author was supported by grants from the Research Grants Council of the Hong Kong SAR, China.} \maketitle
\section{Introduction} The asymptotic behaviour of arithmetic functions has long been a fascinating subject in analytic number theory. In particular, one often investigates the behaviour of an arithmetic function $a(n)$ as $n$ increases. A common technique to understand arithmetic functions involves studying the partial sums $\sum_{n\leq x}a(n)$. For example, Dirichlet famously estimated the asymptotic behaviour of the partial sums $\sum_{n\leq x} d(n)$ by relating it to the problem of counting the number of lattice points lying inside or on the hyperbola $uv = x$. Here $d(n)$ denotes the divisor function, i.e., $d(n) = \sum_{d\mid n} 1$. He obtained an asymptotic formula with the main term $x\log x + (2\gamma - 1)x$, where $\gamma$ is the Euler-Mascheroni constant, and an error term of order $\sqrt{x}$. The problem of estimating the error term between the sum $\sum_{n\leq x} d(n)$ and the main term is known as the Dirichlet hyperbola problem or the Dirichlet divisor problem. The bound on the error term has been further improved by many mathematicians.
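The lattice-point interpretation can be made explicit. Since $d(n)$ counts the points $(u,v)\in\mathbb{N}^2$ with $uv=n$, counting the points under the hyperbola symmetrically about the line $u=v$ yields the classical identity
\begin{equation*}
\sum_{n\le x} d(n) = 2\sum_{u\le\sqrt{x}}\left\lfloor \frac{x}{u}\right\rfloor-\left\lfloor\sqrt{x}\right\rfloor^2,
\end{equation*}
from which Dirichlet's asymptotic formula, with an error term of order $\sqrt{x}$, follows upon writing $\lfloor t\rfloor = t+O(1)$.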
Vorono\"i \cite{Voronoi} introduced a new phase into the Dirichlet divisor problem. He was able to express the error term as an infinite series containing the Bessel functions. More precisely, letting $Y_1$ (resp. $K_1$) denote the Bessel function of the second kind (resp. modified Bessel function of the second kind) and $\gamma$ denote the Euler-Mascheroni constant, Vorono\"i obtained the following. \begin{thma}[Vorono\"i identity] For every $x > 0$, we have \begin{equation*} \sideset{}{'}\sum_{n\le x}\!\!d(n) =x\log x+(2\gamma-1)x+\frac{1}{4} - \sum_{k=1}^\infty\frac{d(k)}{k} \left(Y_1\left(4\pi\,\sqrt{xk}\,\right)+\frac{2}{\pi}K_1\left(4\pi\,\sqrt{xk}\,\right)\right)\sqrt{xk}, \end{equation*} where $\sum'$ means that the term corresponding to $n=x$ is halved. \end{thma} A very natural generalization of the Dirichlet divisor problem is the determination of asymptotics for the partial sums $\sum_{n\leq x} d_k(n)$, where $d_k(n)$ counts the number of ways that $n$ can be written as an ordered product of $k$ positive integers. The problem of estimating the error term is known as the Piltz divisor problem, named in honor of Adolf Piltz. An error term of Vorono\"i-type was previously obtained by the author and Wang \cite{Wang} for the shifted Piltz divisor problem; this is the problem of counting the number of lattice points lying inside or on the hyperbola after shifting the origin to a fixed coordinate. In this article, we consider the Piltz divisor problem over number fields, which we next describe. Let $\mathbb{K}$ be a number field with extension degree $[\mathbb{K} : \mathbb{Q}] = d$ and signature $(r_1, r_2)$ (i.e., $d = r_1+2r_2$). We let $D_\mathbb{K}$ denote the absolute value of the discriminant of $\mathbb{K}$. Let $\mathcal{O}_\mathbb{K}$ be its ring of integers and $v_{\mathbb{K}}(m)$ denote the number of non-zero integral ideals in $\mathcal{O}_\mathbb{K}$ with norm $m$. Let $\mathfrak{N}$ be the norm map of $\mathbb{K}$ over $\mathbb{Q}$. 
The main question considered in this paper is the problem of counting the number of $m$-tuples of ideals $(\mathfrak{a}_1, \mathfrak{a}_2, \ldots, \mathfrak{a}_m)$ for which the product of their ideal norms
$\mathfrak{N}_{\mathfrak{a}_1}\mathfrak{N}_{\mathfrak{a}_2}\cdots \mathfrak{N}_{\mathfrak{a}_m}$ is at most $x$. In other words, we study the asymptotic behaviour of the partial sum $$ I_\mathbb{K}^m(x) = \sideset{}{'}\sum_{n \leq x} v_{\mathbb{K}}^m(n) $$ where $v_{\mathbb{K}}^m(n)$ counts the number of $m$-tuples of ideals $(\mathfrak{a}_1, \mathfrak{a}_2, \ldots, \mathfrak{a}_m)$ with $\mathfrak{N}_{\mathfrak{a}_1}\mathfrak{N}_{\mathfrak{a}_2}\cdots \mathfrak{N}_{\mathfrak{a}_m} = n$. We denote the main term and the error term of $I_\mathbb{K}^m(x)$ by $P_\mathbb{K}^m(x)$ and $\Delta_\mathbb{K}^m(x)$, respectively. It is known that the main term is $P_\mathbb{K}^m(x) = \Res_{s=1} \zeta_{\mathbb{K}}(s)^m x^s s^{-1}$, where $\zeta_\mathbb{K}(s)$ denotes the Dedekind zeta function of the number field $\mathbb{K}$. The main term can be obtained by a standard procedure using complex analysis. The main goal of this article is to estimate the error term $\Delta_\mathbb{K}^m(x)$ in terms of special functions.
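For orientation, when $\mathbb{K}=\mathbb{Q}$ one has $v_{\mathbb{Q}}^m(n)=d_m(n)$, the classical Piltz divisor function, and the residue defining $P_{\mathbb{Q}}^m(x)$ may be computed from the Laurent expansion of $\zeta(s)$ at $s=1$. For instance, in the case $m=2$, writing $\zeta(s)^2 = (s-1)^{-2}+2\gamma(s-1)^{-1}+\cdots$ and $x^s s^{-1} = x\left(1+(s-1)(\log x-1)+\cdots\right)$ gives
\begin{equation*}
P_{\mathbb{Q}}^2(x)=\Res_{s=1} \zeta(s)^2 x^s s^{-1}=x\log x+(2\gamma-1)x,
\end{equation*}
recovering the main term in the Dirichlet divisor problem.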
The asymptotic behaviour of the partial sum $I_\mathbb{Q}^m(x)$ is well studied, with many mathematicians having already investigated this sum as early as the 19th century. The best known result so far for $\Delta_\mathbb{Q}^2(x)$ is $x^{517/1648+\epsilon}$ \cite{Bourgain}. The partial sum $I_\mathbb{K}^m(x)$ with $m=1$ is the ordinary ideal counting function over $\mathbb{K}$ and there are numerous investigations of the asymptotics for these dating back to the 19th century. The best known result can be found in \cite{Takeda} or \cite{Krishnarjun}. In comparison to other divisor problems, it seems that very few improvements (cf. \cite{Nowak}, \cite{Takeda}, \cite{Krishnarjun}) have been made on the Piltz divisor problem over number fields. In this article, we obtain a Vorono\"i-type identity for the Piltz divisor problem over number fields. In particular, we express the error term in terms of an infinite series containing the ``Meijer $G$-function''. Moreover, we discuss certain special cases in \S 4 where the error terms may be written in terms of different special functions.
\begin{theorem}\label{Theorem 1}
Let $\mathbb{K}$ be any number field of degree $d$ with signature $(r_1, r_2)$ and discriminant $d_\mathbb{K}$ {\rm(}with $|d_{\mathbb{K}}|=D_{\mathbb{K}}${\rm)}. Then \begin{align} I_\mathbb{K}^m(x) =\zeta_\mathbb{K}(0)^m + H_m x &- \frac{i^{mr_1}D_\mathbb{K}^{m/2}}{(2\pi)^{mr_1+mr_2}}\sum\limits_{j=0}^{mr_1}(-1)^j \(\begin{matrix} mr_1\\ j \end{matrix}\) \sum\limits_{n=1}^\infty \frac{v_\mathbb{K}^m(n)}{n} \nonumber\\ &\times G_{0, \,\ md}^{m(r_1+r_2), 0}\(\begin{matrix} - \\ \boldsymbol{1}_{mr_1+mr_2-1}, 0, \boldsymbol{1}_{mr_2} \end{matrix}
\bigg| (e^{\frac{i\pi}{2}})^{2j-mr_1} \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} nx\), \end{align} where $\boldsymbol{1}_\ell$ denotes the $\ell$-tuple all of whose entries equal $1$ and $H_m =\underset{s=1} \Res \, \ \zeta_\mathbb{K}(s)^m$. \end{theorem} Steen introduced a new function in \cite{Steen} that naturally appears when investigating the general divisor problem; we call this function the Vorono\"i-Steen function, and its definition may be found in \S \ref{sec:specialfunctions}. It reduces to a modified Bessel function of the second kind as a special case. In the special case that the number field $\mathbb{K}$ is totally real, the error term can be expressed as an infinite series containing the Vorono\"i-Steen function. \begin{corollary}\label{Corollary 1.2} Let $\mathbb{K}$ be any totally real number field of degree $d$ with discriminant $d_\mathbb{K}$. Then \begin{equation} I_\mathbb{K}^m(x) =\zeta_\mathbb{K}(0)^m + H_m x - \frac{i^{md}D_\mathbb{K}^{m/2}}{(2\pi)^{md}}\sum\limits_{j=0}^{md}(-1)^j \binom{md}{j}\sum\limits_{n=1}^\infty \frac{v_\mathbb{K}^m(n)}{n}
\, V\( (e^{\frac{i\pi}{2}})^{2j-md} \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} nx ; \boldsymbol{1}_{md-1}, 0 \). \end{equation}
\end{corollary} We next consider the special case where the number field is purely imaginary. \begin{corollary}\label{Corollary 1.3} Let $\mathbb{K}$ be any purely imaginary number field of degree $d$ with discriminant $d_\mathbb{K}$. Then \begin{align} I_\mathbb{K}^m(x) =\zeta_\mathbb{K}(0)^m + H_m x - \(\frac{D_{\mathbb{K}}}{(2\pi)^d}\)^{m/2} \sum\limits_{n=1}^\infty \frac{v_\mathbb{K}^m(n)}{n} G_{0, \,\ md}^{\frac{md}{2}, 0}\(\begin{matrix} - \\ \boldsymbol{1}_{\frac{md}{2}-1}, 0, \boldsymbol{1}_{\frac{md}{2}} \end{matrix}
\bigg| \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} nx\). \end{align}
\end{corollary}
The paper is organised as follows. In \S 2, we discuss the main ingredients that are needed to prove our results. In \S 3, we provide the proof of Theorem \ref{Theorem 1} and the corollaries. In \S 4, we discuss special cases of Theorem \ref{Theorem 1}.
\section{Preliminaries} Throughout the paper, we require some basic tools of analytic number theory and complex analysis.
\subsection{Gamma function} The Gamma function plays a significant role in this paper. For $\mathfrak{R}(z) > 0$, it can be defined via the convergent improper integral \begin{equation} \Gamma(z) = \int_0^\infty e^{-t} t^{z-1} {\rm d}t. \end{equation} The analytic properties and functional equation of the $\Gamma$-function are given in the following proposition. \begin{proposition} The integral defining $\Gamma(z)$ converges absolutely for $\mathfrak{R}(z) > 0$. The function can be analytically continued to the whole complex plane except for simple poles at the non-positive integers. It also satisfies the functional equation: \begin{equation}\label{Gamma functional equation} \Gamma(z+1) = z\Gamma(z). \end{equation} \end{proposition} \begin{proof} This is well known, and a proof may be found, for example, in \cite[Appendix A]{Ayoub}. \end{proof} The $\Gamma$-function satisfies many important properties. Here we mention two of them. \begin{itemize} \item[(i)]
Euler's reflection formula:
\begin{equation}\label{Reflection formula}
\Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z}
\end{equation}
where $z \notin \mathbb{Z}$.
\item[(ii)]
Legendre duplication formula:
\begin{equation}\label{Duplication formula}
\Gamma(z)\Gamma \left(z+\frac{1}{2}\right) = 2^{1-2z} \sqrt{\pi} \Gamma(2z).
\end{equation} \end{itemize} The proof of these properties can be found in \cite[Appendix A]{Ayoub}.
\subsection{Dedekind zeta function} Let $\mathbb{K}$ be any number field with extension degree $[\mathbb{K} : \mathbb{Q}] = d$ and signature $(r_1, r_2)$ (i.e., $d = r_1+2r_2$), and let $D_\mathbb{K}$ denote the absolute value of the discriminant of $\mathbb{K}$. Let $\mathcal{O}_\mathbb{K}$ be its ring of integers and $\mathfrak{N}$ be the norm map of $\mathbb{K}$ over $\mathbb{Q}$. Then the \begin{it}Dedekind zeta function\end{it} attached to the number field $\mathbb{K}$ is defined by $$ \zeta_\mathbb{K}(s)=\sum_{\mathfrak{a}\subset\mathcal{O}_\mathbb{K}}\frac{1}{\mathfrak{N}(\mathfrak{a})^s}=\prod_{\mathfrak{p}\subset \mathcal{O}_\mathbb{K}}\bigg(1-\frac{1}{\mathfrak{N}(\mathfrak{p})^s}\bigg)^{-1}, $$ for all $s \in \mathbb{C}$ with $\mathfrak{R} (s)>1$, where $\mathfrak{a}$ and $\mathfrak{p}$ run over the non-zero integral ideals and prime ideals of $\mathcal{O}_\mathbb{K}$ respectively. If $v_\mathbb{K}(m)$ denotes the number of non-zero integral ideals in $\mathcal{O}_\mathbb{K}$ with norm $m$, then $\zeta_\mathbb{K}$ can also be expressed as $$ \zeta_\mathbb{K}(s)=\sum_{m=1}^\infty \frac{v_\mathbb{K}(m)}{m^s}. $$ Set \begin{equation*} \Lambda_\mathbb{K}(s) = D_\mathbb{K}^{s/2}\Gamma_\mathbb{R}(s)^{r_1} \Gamma_\mathbb{C}(s)^{r_2}\zeta_\mathbb{K}(s), \end{equation*} where $\Gamma_\mathbb{R}(s) = \pi^{-s/2}\Gamma(s/2)$ and $\Gamma_\mathbb{C}(s) = 2(2\pi)^{-s}\Gamma(s)$. The following proposition provides the analytic behaviour and the functional equation satisfied by the Dedekind zeta function. \begin{proposition}\label{Prop 2.2} The series defining $\zeta_\mathbb{K}(s)$ converges absolutely for $\mathfrak{R}(s) > 1$. The completed function $\Lambda_\mathbb{K}(s)$ can be analytically continued to the whole complex plane except for simple poles at $s=0$ and $s = 1$. It also satisfies the functional equation \begin{equation}\label{functional equation} \Lambda_\mathbb{K}(s) = \Lambda_\mathbb{K}(1-s). \end{equation} \end{proposition} \begin{proof} For example, one can find a proof of this statement in \cite[pp. 254-255]{Lang}. 
\end{proof} The following lemma gives the convexity bound of the Dedekind zeta function in the critical region. \begin{lemma}\label{Convexity bound}
Let $\mathbb{K}$ be any number field of degree $d$ with discriminant $d_\mathbb{K}$ (so $D_{\mathbb{K}}=|d_{\mathbb{K}}|$). Then \begin{equation*} \zeta_\mathbb{K}(\sigma+it) \ll \begin{cases}
|t|^{\frac{d}{2}-d\sigma+\epsilon} D_\mathbb{K}^{\frac{1}{2}-\sigma+\epsilon}, & \text{if} \, \ \sigma \leq 0, \\
|t|^{\frac{d(1-\sigma)}{2}+\epsilon} D_\mathbb{K}^{\frac{1-\sigma}{2}+\epsilon}, & \text{if} \, \ 0 \leq \sigma \leq 1,\\
|t|^\epsilon D_\mathbb{K}^\epsilon, & \text{if} \, \ \sigma \geq 1 \end{cases} \end{equation*} holds true for any $\epsilon>0$. \end{lemma} \begin{proof} This follows from a standard argument by applying the Phragmen-Lindel\"of principle and the functional equation \eqref{functional equation} of the Dedekind zeta function. The details may be found in \cite[Chapter 5]{Iwaniec}, for example. \end{proof}
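As a concrete example of the coefficients $v_\mathbb{K}(m)$, take $\mathbb{K}=\mathbb{Q}(i)$, for which $d=2$, $(r_1,r_2)=(0,1)$ and $D_\mathbb{K}=4$. The classical factorization $\zeta_{\mathbb{Q}(i)}(s)=\zeta(s)L(s,\chi_{-4})$, where $\chi_{-4}$ is the non-trivial character modulo $4$, shows that $v_{\mathbb{Q}(i)}(n)=\sum_{\delta\mid n}\chi_{-4}(\delta)$; for instance $v_{\mathbb{Q}(i)}(5)=2$, corresponding to the prime ideals $(2+i)$ and $(2-i)$ of norm $5$.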
\subsection{Special function}\label{sec:specialfunctions} Special functions are those mathematical functions which have more or less established names and notations due to their importance in mathematical analysis, functional analysis, geometry, physics, and other applications. They typically arise as solutions of differential equations or as integrals of elementary functions.
One of the most important families of special functions are the Bessel functions, which are the canonical solutions of Bessel's differential equation \begin{equation*} x^2\frac{d^2y}{dx^2}+x\frac{dy}{dx}+(x^2-a^2)y = 0 \end{equation*} where $a$ is an arbitrary complex number.
The G-function was introduced initially by Meijer as a very general function using a series. Later, it was defined more generally via a line integral in the complex plane (cf. \cite{Bateman}) given by \begin{equation}\label{G-function} \begin{aligned} G^{m, \ n}_{p, \ q}\bigg(\begin{matrix} a_1, \ldots, a_p \\ b_1, \ldots, b_q
\end{matrix} \ \bigg|\ z\bigg)=\frac{1}{2\pi i}\underset{{(C)}}{\bigints} \frac{\prod\limits_{j=1}^m\Gamma(b_j-s)\prod\limits_{j=1}^n\Gamma(1-a_j+s)}{\prod\limits_{j=m+1}^q\Gamma(1-b_j+s)\prod\limits_{j=n+1}^p\Gamma(a_j-s)}z^s \rm{d}s , \end{aligned} \end{equation}
where $z \neq 0$ and $m$, $n$, $p$, $q$ are integers which satisfy $0 \leq m \leq q$ and $0 \leq n \leq p$. The poles of the integrand must all be simple. Here $(C)$ in the integral denotes the vertical line from $C-i\infty$ to $C+i\infty$ such that all poles of $\Gamma(b_j-s)$ for $ j=1, \ldots, m$, must lie on one side of the vertical line while all poles of $\Gamma(1-a_j+s)$ for $ j=1, \ldots, n$ must lie on the other side. The integral then converges for $|\arg z| < \delta \pi$ where $$\delta = m+n - \frac{1}{2}(p+q).$$
The integral additionally converges for $|\arg z|= \delta \pi$ if $(q-p)(\Re(s) + 1/2) > \Re (v) + 1$, where $$ v = \sum_{j=1}^{q}b_j - \sum_{j=1}^{p}a_j. $$ Special cases of the $G$-function include many other special functions. For instance, there are many formulae which yield relations between the $G$-function and the Bessel functions (cf. \cite{Bateman}). Two important formulae among them are given by \begin{equation}\label{G and J function} G_{0,2}^{1,0}\( \begin{array}{cl} - \\ a, b
\end{array} \bigg | z \) = z^{\frac{1}{2}(a+b)} J_{a-b} (2z^{1/2}), \end{equation} \begin{equation}\label{G and K function} G_{0,2}^{2,0}\( \begin{array}{cl} - \\ a, b
\end{array} \bigg | z \) = 2z^{\frac{1}{2}(a+b)} K_{a-b} (2z^{1/2}). \end{equation} The Vorono\"i-Steen function $V=V(x;a_1,\ldots,a_n)$ (cf. \cite{Steen}) is defined by \begin{align*}
&\int_{0}^{\infty}x^sV(x;a_1,\ldots,a_n)\,\frac{{\rm d}x}{x} =\Gamma(s+a_1)\cdots \Gamma(s+a_n). \end{align*} It is a special case of the $G$-function: \begin{equation}\label{Voronoi function} V(x;a_1,\ldots,a_n)=G_{0,n}^{n,0}\!\left( \begin{array}{c} - \\ a_1,\ldots,a_n
\end{array} \bigg| \, \ x \right). \end{equation}
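In particular, taking $n=2$ in \eqref{Voronoi function} and applying \eqref{G and K function}, the Vorono\"i-Steen function reduces to the modified Bessel function of the second kind:
\begin{equation*}
V(x; a, b)=G_{0,2}^{2,0}\( \begin{array}{cl} - \\ a, b
\end{array} \bigg | x \)=2x^{\frac{1}{2}(a+b)} K_{a-b} (2x^{1/2}).
\end{equation*}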
\subsection{Riesz sum}\label{Riesz sum} Riesz sums (cf. \cite{CM}, \cite{Hardy}, \cite{KT}) were introduced by M. Riesz and have been studied in connection with summability of Fourier series and that of Dirichlet series. For a given increasing sequence $\{\lambda_n\}$ of positive real numbers and a given sequence $\{\alpha_n\}$ of complex numbers, the Riesz sum of order $\rho$ is defined by \begin{equation} \mathcal{A}^\rho(x)=\mathcal{A}_\lambda^\rho (x)= \sideset{}{'}\sum_{\lambda_n\leq x}(x-\lambda_n)^\rho\alpha_n, \end{equation} where $\rho$ is any non-negative integer and the prime appearing next to the summation sign means the corresponding term is to be halved for $\lambda_n=x$.
It can also be expressed as \begin{equation}\label{Rieszsum} \mathcal{A}_\lambda^\rho(x)= \rho\int_0^x(x-t)^{\rho-1}\mathcal{A}_\lambda (t){\rm d}t \end{equation} for $\rho \geq 1$, where $\mathcal{A}_\lambda(x)=\mathcal{A}_\lambda^0(x)=\sideset{}{'}\sum_{\lambda_n\leq x}\alpha_n$ (cf. \cite{Hardy}, \cite{KT}). The generalization of Perron's formula for the $\rho$-order Riesz sum is given by \begin{equation}\label{Perron's formula} \frac{1}{\Gamma(\rho+1)}\sideset{}{'}\sum\limits_{\lambda_n\leq x}(x-\lambda_n)^\rho \alpha_n=\frac{1}{2\pi i}\int_{C-i\infty}^{C+i\infty}\frac{\Gamma(w)\varphi(w)x^{\rho+w}}{\Gamma(w+\rho+1)}{\rm d}w, \end{equation} where $\varphi(w)=\sum\limits^\infty_{n=1}\frac{\alpha_n}{\lambda_n^w}$ and $C$ is bigger than the abscissa of absolute convergence of $\varphi(s)$ (cf. \cite{Hardy}, \cite{KT}). \begin{remark}\label{Remark} Note that the integral in \eqref{Perron's formula} is an improper integral for the unbounded region. Hence it is not obvious that one can interchange the integral and the summation which is coming from the Dirichlet series $\varphi(s)$. Moreover, it is warned in \cite{Davenport} that applying the 0th order Perron's formula is problematic for this reason. It is usually safer to apply the truncated Perron's formula which can be found in many textbooks. The integral \begin{align*}
\int_{(C)}\bigg | \frac{\Gamma(w)\varphi(w)x^{\rho+w}}{\Gamma(w+\rho+1)}\bigg |{\rm d}w &\ll x^{C+\rho} \int_{-\infty}^{\infty}\frac{1}{(C^2 +t^2)^{(\rho+1)/2}}\,{\rm d}t \ll x^{C+\rho}\int_0^{\infty}\frac{1}{(C^2 +t^2)^{(\rho+1)/2}}\,{\rm d}t\\ &\ll x^{C+\rho}\[\int_0^1 \frac{1}{(C^2 +t^2)^{(\rho+1)/2}}\,{\rm d}t + \lim_{R \to \infty} \int_1^R \frac{1}{t^{\rho+1}} \,{\rm d}t \right]
< \infty \end{align*} for $\rho \geq 1$. Hence it follows from Fubini's theorem that the integral and summation appearing in the $\rho$-order Riesz sum can be interchanged for $\rho \geq 1$. \end{remark} The lower-order Riesz sums can be obtained from the higher-order Riesz sums by using the following lemma. \begin{lemma}\label{Lemma Riesz sum} Let $\mathcal{A}_\lambda^\rho$ be the Riesz sum of order $\rho$ where $\rho$ is any non-negative integer. Then \begin{equation} \frac{d^i}{dx^i}\mathcal{A}_\lambda^\rho(x) =\rho(\rho-1)\cdots(\rho-i+1)\mathcal{A}_\lambda^{\rho-i}(x) \end{equation} holds true for every $0 \leq i \leq \rho$. In particular, we have \begin{equation} \frac{d^\rho}{dx^\rho}\(\frac{1}{\Gamma(\rho + 1)}\mathcal{A}_\lambda^\rho(x) \)= \mathcal{A}_\lambda(x). \end{equation} \end{lemma} \begin{proof} We will prove this lemma by induction on $\rho$. The statement holds trivially for the case $\rho = 0$ and the case of $\rho = 1$ follows from \eqref{Rieszsum}. Let us assume that the statement is true for all $2 \leq \rho \leq k$. We need to show that the statement is true for $\rho = k+1$.
It follows from \eqref{Rieszsum} and integration by parts that \begin{align*} \mathcal{A}_\lambda^{k+1}(x) &= (k+1)\int_0^x(x-t)^k \mathcal{A}_\lambda (t){\rm d}t\\ &= (k+1) \[ (x-t)^k \mathcal{A}_\lambda^1(t) \right]_{t=0}^{t=x} + (k+1)k\int_0^x \mathcal{A}_\lambda^1(t) (x-t)^{k-1}{\rm d}t \end{align*} where the first term of the right-hand side becomes $0$, since $\mathcal{A}_\lambda^j(0) = 0$ for any non-negative $j$. We have from the inductive hypothesis that \begin{equation}\label{Derivative} \frac{d}{dx}\mathcal{A}_\lambda^\rho(x) = \rho \mathcal{A}_\lambda^{\rho-1}(x) \end{equation} for every $2 \leq \rho \leq k$. Now proceeding similarly and applying \eqref{Derivative} repeatedly, we have \begin{align}\label{Derivative2} \mathcal{A}_\lambda^{k+1}(x) &= (k+1)k\int_0^x \mathcal{A}_\lambda^1(t) (x-t)^{k-1}{\rm d}t \nonumber\\ & = (k+1)k \frac{k-1}{2}\int_0^x \mathcal{A}_\lambda^2(t) (x-t)^{k-2}{\rm d}t \nonumber\\ & = (k+1)k \frac{k-1}{2}\frac{k-2}{3}\int_0^x \mathcal{A}_\lambda^3(t) (x-t)^{k-3}{\rm d}t \nonumber\\ & \hspace{.2cm} \vdots \nonumber\\ & = (k+1) \int_0^x \mathcal{A}_\lambda^k(t){\rm d}t. \end{align}
Now applying \eqref{Derivative2} and the inductive hypothesis respectively, we can finally conclude \begin{align} \frac{d^i}{dx^i}\mathcal{A}_\lambda^{k+1}(x) = \frac{d^{i-1}}{dx^{i-1}}\frac{d}{dx}\mathcal{A}_\lambda^{k+1}(x) =(k+1) \frac{d^{i-1}}{dx^{i-1}} \mathcal{A}_\lambda^k(x) = (k+1)k\cdots (k-i+2) \mathcal{A}_\lambda^{k+1-i}(x) \end{align} for every $0 \leq i \leq k+1$, which yields the claim. \end{proof}
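As a simple illustration of Lemma \ref{Lemma Riesz sum}, take $\lambda_n=n$ and $\alpha_n=1$. Then $\mathcal{A}_\lambda^1(x)=\sideset{}{'}\sum_{n\le x}(x-n)$ is continuous and piecewise linear, and for non-integral $x$,
\begin{equation*}
\frac{d}{dx}\mathcal{A}_\lambda^1(x)=\sum_{n\le x}1=\lfloor x\rfloor=\mathcal{A}_\lambda(x),
\end{equation*}
in accordance with the lemma.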
\section{Proof of results} In this section we prove Theorem \ref{Theorem 1} and the corollaries of Theorem \ref{Theorem 1}. \subsection{Proof of Theorem \ref{Theorem 1}} Consider the Riesz sum $I_\mathbb{K}^{m, \rho}(x)$ of positive integral order $\rho$ satisfying \begin{equation}\label{Assumption order} \rho \geq \frac{md}{2}(1-\mu) + 1 \end{equation} where $-1 < \mu < 0$, normalized by the $\Gamma$-factor via \begin{equation} I_\mathbb{K}^{m, \rho}(x) =\frac{1}{\Gamma(\rho+1)} \sideset{}{'}\sum_{n \leq x} (x-n)^\rho v_{\mathbb{K}}^m(n), \end{equation} where $ v_{\mathbb{K}}^m(n)$ counts the number of $m$-tuples of ideals $(\mathfrak{a}_1, \mathfrak{a}_2, \ldots, \mathfrak{a}_m)$ with $\mathfrak{N}_{\mathfrak{a}_1}\mathfrak{N}_{\mathfrak{a}_2}\cdots \mathfrak{N}_{\mathfrak{a}_m} = n$. The Dirichlet series associated to the arithmetic function $ v_{\mathbb{K}}^m(n)$ is $$ \sum_{n=1}^\infty \frac{v_{\mathbb{K}}^m(n)}{n^w} = \zeta_\mathbb{K}(w)^m, $$
which naturally arises from the product of $m$ Dedekind zeta functions. We apply the generalized Perron's formula \eqref{Perron's formula} on $I_\mathbb{K}^{m, \rho}(x)$ and obtain \begin{equation}\label{Generalized Perron} I_\mathbb{K}^{m, \rho}(x) = \frac{1}{2\pi i}\int_{C - i\infty}^{C + i\infty}f(w){\rm d}w, \end{equation} where $C > 1$ and $$ f(w) = \frac{\Gamma(w)}{\Gamma(w+\rho+1)}\zeta_\mathbb{K}(w)^m x^{\rho+w}= \frac{1}{w(w+1)\cdots (w+\rho)}\zeta_\mathbb{K}(w)^m x^{\rho+w}. $$ We consider the contour $\mathcal{C}$ given by the rectangle with vertices $\{C - iT, C + iT, \mu +iT, \mu - iT \}$ in the anticlockwise direction as $T \to \infty$. The integrand $f(w)$ is analytic inside the contour $\mathcal{C}$ except for simple poles at $w=0$ and $w = 1$. The residues of $f(w)$ at $w = 0$ and $w=1$ are \begin{equation*} \underset{w=0} \Res \, \ f(w) = \zeta_\mathbb{K}(0)^m\frac{x^{\rho}}{\rho !} \end{equation*} and \begin{equation*} \underset{w=1} \Res \, \ f(w) = \(\underset{w=1} \Res \, \ \zeta_\mathbb{K}(w)^m \)\frac{x^{\rho+1}}{(\rho+1) !} = H_m \frac{x^{\rho+1}}{(\rho+1) !} \end{equation*} respectively, where $H_m = \underset{w=1} \Res \, \ \zeta_\mathbb{K}(w)^m$. Hence by Cauchy's residue formula we have \begin{equation}\label{Residue theorem} \frac{1}{2\pi i}\int_{\mathcal{C}} f(w){\rm d}w =\underset{w=0} \Res \, \ f(w) + \underset{w=1}\Res \, \ f(w) =\zeta_\mathbb{K}(0)^m\frac{x^{\rho}}{\rho !} + H_m \frac{x^{\rho+1}}{(\rho+1) !}. 
\end{equation} Combining \eqref{Generalized Perron} and \eqref{Residue theorem}, it follows that \begin{equation}\label{Horizontal and vertical} I_\mathbb{K}^{m, \rho}(x) =\zeta_\mathbb{K}(0)^m\frac{x^{\rho}}{\rho !} + H_m \frac{x^{\rho+1}}{(\rho+1) !} + \mathcal{H}_1 + \mathcal{H}_2 + \mathcal{V} \end{equation} where $\mathcal{H}_1 = \underset{T \to \infty} \lim \frac{1}{2\pi i}\int_{\mu + iT}^{C+iT} f(w) {\rm d}w$ and $\mathcal{H}_2 = \underset{T \to \infty} \lim \frac{1}{2\pi i}\int_{C - iT}^{\mu - iT} f(w) {\rm d}w$ are the horizontal integrals and $\mathcal{V} = \frac{1}{2\pi i}\int_{\mu - i\infty}^{\mu + i\infty} f(w){\rm d}w$ is the vertical integral. Firstly, we want to estimate the horizontal integrals. We replace $w$ by $\sigma + iT$ and apply Lemma \ref{Convexity bound} respectively to obtain \begin{align}
\bigg| \frac{1}{2\pi i}\int_{\mu + iT}^{C+iT} f(w) {\rm d}w\bigg| &\leq \frac{1}{2\pi }\int_{\mu}^{C} |\zeta_\mathbb{K} (\sigma + iT)|^m \frac{x^\sigma}{T^{\rho + 1}} {\rm d}\sigma \nonumber\\ & \ll \int_{\mu}^{C} (T^{md}D_\mathbb{K}^m)^{\frac{1-\sigma}{2}+\epsilon} \frac{x^\sigma}{T^{\rho +1}} {\rm d}\sigma \nonumber\\ & \ll T^{\frac{md}{2} + \epsilon - (\rho + 1)} D_\mathbb{K}^{\frac{m}{2} + \epsilon} \underset{\mu \leq \sigma \leq C} \max x^\sigma (T^{md}D_\mathbb{K}^m)^{- \frac{\sigma}{2}}\nonumber\\ & \leq T^{\frac{md}{2} + \epsilon - (\rho + 1)} D_\mathbb{K}^{\frac{m}{2} + \epsilon} \{x^C (T^{md}D_\mathbb{K}^m)^{- \frac{C}{2}} + x^\mu (T^{md}D_\mathbb{K}^m)^{- \frac{\mu}{2}} \}\nonumber\\ &\leq T^{\frac{md}{2}(1 - \mu) + \epsilon - (\rho + 1)} D_\mathbb{K}^{\frac{m}{2}(1 - \mu) + \epsilon} \{x^C + x^\mu \}. \end{align} Now, from the assumption \eqref{Assumption order} on $\rho$, it follows that \begin{equation}
\bigg| \frac{1}{2\pi i}\int_{\mu + iT}^{C+iT} f(w) {\rm d}w \bigg| \leq \frac{D_\mathbb{K}^{\frac{m}{2}(1 - \mu) + \epsilon}}{T^{2-\epsilon}}(x^C + x^\mu). \end{equation} Therefore, we can conclude that $\mathcal{H}_1$ vanishes as $T \to \infty$. Similarly, one can show that $\mathcal{H}_2$ vanishes as $T \to \infty$.
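To make the role of \eqref{Assumption order} explicit: the exponent of $T$ in the last bound above is at most $-(2-\epsilon)$ precisely when $\rho \geq \frac{md}{2}(1-\mu) + 1$, and any such $\rho$ guarantees the decay
$$
T^{\frac{md}{2}(1-\mu)+\epsilon-(\rho+1)} \leq T^{-(2-\epsilon)}, \qquad T \geq 1,
$$
used in the estimate of the horizontal integrals.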
We now shift our attention to the vertical integral $\mathcal{V}$. It follows from \eqref{functional equation} that \begin{equation}\label{Functional equation 2 } \zeta_\mathbb{K}(w) = D_\mathbb{K}^{1/2 - w}\(\frac{\pi^{w-1/2} \Gamma(\frac{1-w}{2})}{\Gamma(\frac{w}{2})} \)^{r_1} \(\frac{(2\pi)^{2w-1} \Gamma(1-w)}{\Gamma(w)} \)^{r_2} \zeta_\mathbb{K} (1-w). \end{equation} We apply \eqref{Reflection formula} and \eqref{Duplication formula} to obtain \begin{align}\label{Gamma properties} \frac{\Gamma(\frac{1-w}{2})^{r_1}}{\Gamma(\frac{w}{2})^{r_1}} = \frac{[\Gamma(\frac{1-w}{2})\Gamma(1 - \frac{w}{2})]^{r_1}}{[\Gamma(\frac{w}{2})\Gamma(1 - \frac{w}{2})]^{r_1}} = \frac{[\Gamma(\frac{1-w}{2})\Gamma(\frac{1}{2} + \frac{1 - w}{2})]^{r_1}}{\(\frac{\pi}{\sin \frac{\pi}{2}w} \)^{r_1}} = \( \frac{2^w}{\sqrt{\pi}} \)^{r_1} \(\sin \frac{\pi}{2}w \)^{r_1} \Gamma(1-w)^{r_1}. \end{align} Inserting \eqref{Gamma properties} into \eqref{Functional equation 2 }, we get \begin{align*} \zeta_\mathbb{K}(w) &= D_{\mathbb{K}}^{1/2 - w}i^{r_1}(2\pi)^{dw - r_1 - r_2} \(e^{-\frac{i\pi w}{2}} - e^{\frac{i\pi w}{2}} \)^{r_1} \frac{\Gamma(1-w)^{r_1+r_2}}{\Gamma(w)^{r_2}}\zeta_\mathbb{K}(1-w) \\ &= D_\mathbb{K}^{1/2 - w}i^{r_1}(2\pi)^{dw - r_1 - r_2} \sum_{j=0}^{r_1} (-1)^j \( \begin{matrix} r_1 \\ j \end{matrix} \) \( e^{-\frac{i\pi w}{2}} \)^{r_1 - j} \( e^{\frac{i\pi w}{2}} \)^j \frac{\Gamma(1-w)^{r_1+r_2}}{\Gamma(w)^{r_2}}\zeta_\mathbb{K}(1-w). \end{align*} Therefore, taking the $m$-th power, we have \begin{equation}\label{Equation 3.10} \zeta_\mathbb{K} (w)^m = \frac{i^{mr_1}D_\mathbb{K}^{m/2}}{(2\pi)^{mr_1 + mr_2}} \sum_{j=0}^{mr_1} (-1)^j \( \begin{matrix} mr_1 \\ j \end{matrix} \) \[ (e^{\frac{i\pi}{2}})^{2j-mr_1}\right]^w \( \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} \)^w \frac{\Gamma(1-w)^{mr_1+mr_2}}{\Gamma(w)^{mr_2}}\zeta_\mathbb{K}(1-w)^m. 
\end{equation} We now insert \eqref{Equation 3.10} into the integrand of the vertical integral and obtain \begin{align}\label{Equation 3.11} \mathcal{V} = \frac{i^{mr_1}D_\mathbb{K}^{m/2}}{(2\pi)^{mr_1 + mr_2}} \sum_{j=0}^{mr_1} (-1)^j \( \begin{matrix} mr_1 \\ j \end{matrix} \) \frac{1}{2\pi i} \int_{\mu - i\infty}^{\mu + i\infty} &\[ \(e^{\frac{i\pi}{2}}\)^{2j-mr_1} \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} \right]^w \frac{\Gamma(1-w)^{mr_1+mr_2}}{\Gamma(w)^{mr_2}} \nonumber\\ &\times \zeta_\mathbb{K}(1-w)^m \frac{1}{w(w+1)\cdots (w+\rho)} x^{\rho+w} {\rm d}w \nonumber\\ = \frac{i^{mr_1}D_{\mathbb{K}}^{m/2}}{(2\pi)^{mr_1 + mr_2}} \sum_{j=0}^{mr_1} (-1)^j \( \begin{matrix} mr_1 \\ j \end{matrix} \) \frac{1}{2\pi i} \int_{\mu - i\infty}^{\mu + i\infty} &\[ \(e^{\frac{i\pi}{2}}\)^{2j-mr_1} \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} \right]^w \frac{\Gamma(1-w)^{mr_1+mr_2}}{\Gamma(w)^{mr_2}} \nonumber\\ &\times \sum_{n=1}^\infty \frac{v_{\mathbb{K}}^m(n)}{n^{1-w}} \frac{1}{w(w+1)\cdots (w+\rho)} x^{\rho+w} {\rm d}w. \end{align} It follows from the remark in \S \ref{Riesz sum} that we can interchange the integral and the summation in \eqref{Equation 3.11} under our assumption $\rho > 1$. Hence we have \begin{align*} \mathcal{V} = \frac{i^{mr_1}D_{\mathbb{K}}^{m/2}}{(2\pi)^{mr_1 + mr_2}} \sum_{j=0}^{mr_1} (-1)^j \( \begin{matrix} mr_1 \\ j \end{matrix} \) &\sum_{n=1}^\infty \frac{v_{\mathbb{K}}^m(n)}{n} \frac{1}{2\pi i} \int_{\mu - i\infty}^{\mu + i\infty} \[ \(e^{\frac{i\pi}{2}}\)^{2j-mr_1} \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m}n \right]^w \\ &\times \frac{\Gamma(1-w)^{mr_1+mr_2}}{\Gamma(w)^{mr_2}} \frac{1}{w(w+1)\cdots (w+\rho)} x^{\rho+w} {\rm d}w. 
\end{align*} We now differentiate the vertical integral $\rho$-times with respect to $x$ to obtain \begin{align} \frac{d^\rho}{dx^\rho}\mathcal{V} = \frac{i^{mr_1}D_{\mathbb{K}}^{m/2}}{(2\pi)^{mr_1 + mr_2}} \sum_{j=0}^{mr_1} (-1)^j \( \begin{matrix} mr_1 \\ j \end{matrix} \) \sum_{n=1}^\infty \frac{v_{\mathbb{K}}^m(n)}{n} \frac{1}{2\pi i} &\int_{\mu - i\infty}^{\mu + i\infty} \[ \(e^{\frac{i\pi}{2}}\)^{2j-mr_1} \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m}n \right]^w \nonumber\\ &\times \frac{\Gamma(1-w)^{mr_1+mr_2}}{\Gamma(w)^{mr_2}} \, \ \frac{x^w}{w} \, \ {\rm d}w. \end{align} It follows from the functional equation \eqref{Gamma functional equation} for the $\Gamma$-function that $\Gamma(1-w) = -w\Gamma (-w)$. Hence we have \begin{align}\label{Equation 3.13} \frac{d^\rho}{dx^\rho}\mathcal{V} = - \frac{i^{mr_1}D_{\mathbb{K}}^{m/2}}{(2\pi)^{mr_1 + mr_2}} \sum_{j=0}^{mr_1} (-1)^j \( \begin{matrix} mr_1 \\ j \end{matrix} \) \sum_{n=1}^\infty \frac{v_{\mathbb{K}}^m(n)}{n} \frac{1}{2\pi i} &\int_{\mu - i\infty}^{\mu + i\infty} \[ \(e^{\frac{i\pi}{2}}\)^{2j-mr_1} \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m}nx \right]^w \nonumber \\ &\times \frac{\Gamma(1-w)^{mr_1+mr_2-1}\Gamma(-w)}{\Gamma(w)^{mr_2}} \, \ {\rm d}w. \end{align} As before, we let $\boldsymbol{1}_\ell$ denote the $\ell$-tuple all of whose entries equal $1$. We plug the definition of the Meijer G-function \eqref{G-function} into \eqref{Equation 3.13} and obtain \begin{align}\label{Equation 3.14} \frac{d^\rho}{dx^\rho}\mathcal{V} = - \frac{i^{mr_1}D_{\mathbb{K}}^{m/2}}{(2\pi)^{mr_1 + mr_2}} &\sum_{j=0}^{mr_1} (-1)^j \( \begin{matrix} mr_1 \\ j \end{matrix} \) \sum_{n=1}^\infty \frac{v_{\mathbb{K}}^m(n)}{n}\nonumber\\ & \times G_{0, \,\ md}^{m(r_1+r_2), 0}\(\begin{matrix} - \\ \boldsymbol{1}_{mr_1+mr_2-1}, 0, \boldsymbol{1}_{mr_2} \end{matrix}
\bigg| \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} (e^{\frac{i\pi}{2}})^{2j-mr_1}nx\). \end{align} Differentiating both sides of \eqref{Horizontal and vertical} $\rho$-times with respect to $x$, we obtain \begin{equation} \frac{d^\rho}{dx^\rho}I_\mathbb{K}^{m, \rho}(x) = \zeta_\mathbb{K}(0)^m + H_m x + \frac{d^\rho}{dx^\rho}\mathcal{V}. \end{equation} Finally, we obtain the result \begin{align} I_\mathbb{K}^m(x) = \zeta_\mathbb{K}(0)^m + H_m x &- \frac{i^{mr_1}D_{\mathbb{K}}^{m/2}}{(2\pi)^{mr_1+mr_2}}\sum\limits_{j=0}^{mr_1}(-1)^j \(\begin{matrix} mr_1\\ j \end{matrix}\) \sum\limits_{n=1}^\infty \frac{v_k^m(n)}{n} \nonumber\\ &\times G_{0, \,\ md}^{m(r_1+r_2), 0}\(\begin{matrix} - \\ \boldsymbol{1}_{mr_1+mr_2-1}, 0, \boldsymbol{1}_{mr_2} \end{matrix}
\bigg| \frac{(2\pi)^{dm}}{D_{\mathbb{K}}^m} (e^{\frac{i\pi}{2}})^{2j-mr_1}nx\), \end{align} using Lemma \ref{Lemma Riesz sum} and \eqref{Equation 3.14}. This completes the proof of Theorem \ref{Theorem 1}.
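Both $\Gamma$-function identities invoked in \eqref{Gamma properties}, Euler's reflection formula $\Gamma(w)\Gamma(1-w) = \pi/\sin(\pi w)$ and Legendre's duplication formula $\Gamma(z)\Gamma(z+\tfrac12) = 2^{1-2z}\sqrt{\pi}\,\Gamma(2z)$, are easy to sanity-check numerically. A minimal sketch (real arguments only; the sample points and tolerances are illustrative):

```python
import math

def reflection_gap(w: float) -> float:
    # |Gamma(w)Gamma(1-w) - pi/sin(pi w)| for 0 < w < 1
    return abs(math.gamma(w) * math.gamma(1.0 - w) - math.pi / math.sin(math.pi * w))

def duplication_gap(z: float) -> float:
    # |Gamma(z)Gamma(z+1/2) - 2^(1-2z) sqrt(pi) Gamma(2z)| for z > 0
    lhs = math.gamma(z) * math.gamma(z + 0.5)
    rhs = 2.0 ** (1.0 - 2.0 * z) * math.sqrt(math.pi) * math.gamma(2.0 * z)
    return abs(lhs - rhs)

for w in (0.1, 0.3, 0.5, 0.9):
    assert reflection_gap(w) < 1e-9
for z in (0.25, 1.0, 2.5):
    assert duplication_gap(z) < 1e-9
```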
\subsection{Proof of the corollaries} Corollary \ref{Corollary 1.2} follows from Theorem \ref{Theorem 1} by taking $r_1 = d$ and $r_2 = 0$, where $(r_1, r_2)$ is the signature of the number field $\mathbb{K}$. Here the error terms can be obtained from the relation \eqref{Voronoi function}. Corollary \ref{Corollary 1.3} follows similarly by taking $r_1 = 0$ and $r_2 = d/2$ in Theorem \ref{Theorem 1}.
\section{Applications} In this section, we investigate some special cases of Theorem \ref{Theorem 1}. \subsection{Piltz divisor problem in $\mathbb{Q}$} We consider first the problem of estimating the partial sum $$ I_{\mathbb{Q}}^m(x) = \sideset{}{'} \sum_{n\leq x} d_m(n) $$ where $\sum'$ means that the term corresponding to $n=x$ is halved and $d_m(n)$ counts the number of ways that $n$ can be written as an ordered product of $m$ positive integers. This problem is the special case of Corollary \ref{Corollary 1.2} obtained by taking $d=1$. We can conclude the following. \begin{theorem}\label{Theorem 2} For every $x> 0$, we have \begin{align} \sideset{}{'}\sum_{n\leq x} d_m(n) =xP_{m-1}(\log x) + \(-\frac{1}{2}\)^m &- \frac{i^{m}}{(2\pi)^{m}}\sum\limits_{j=0}^{m}(-1)^j \(\begin{matrix} m\\ j \end{matrix}\) \sum\limits_{n=1}^\infty \frac{d_m(n)}{n} \nonumber\\
&\times V\( (e^{\frac{i\pi}{2}})^{2j-m} (2\pi)^{m} nx ; \boldsymbol{1}_{m-1}, 0 \), \end{align} where $P_{m-1}(t)$ is a polynomial of degree $m-1$ in $t$ such that the coefficients can be evaluated from the relation $$P_{m-1}(\log x) = \underset{s=1}\Res \, \ \zeta^m(s)\frac{x^{s-1}}{s}.$$ \end{theorem} Here the main term of the partial sum $I_\mathbb{Q}^m(x)$ can be obtained from the sum of the residues of the function $\zeta^m(s)x^s s^{-1}$ at $s=0$ and $s=1$. We obtain Vorono\"i's theorem from Theorem \ref{Theorem 2} by taking $m=2$. Let $d_2(n) = d(n)$ be the divisor function. Then we can conclude the following as a corollary. \begin{corollary}[Vorono\"i's Theorem]\label{Voronoi proof} For every $x > 0$, we have \begin{equation*} \sideset{}{'}\sum_{n\le x}\!\!d(n) =x\log x+(2\gamma-1)x+\frac{1}{4} - \sum_{n=1}^\infty\frac{d(n)}{n} \left(Y_1\left(4\pi\,\sqrt{xn}\,\right)+\frac{2}{\pi}K_1\left(4\pi\,\sqrt{xn}\,\right)\right)\sqrt{xn}, \end{equation*} where $\gamma$ is the Euler--Mascheroni constant. \end{corollary} \begin{proof} The main term of the partial sum $I_\mathbb{Q}^2(x)$ is the sum of the residues of $\zeta^2(s)x^s s^{-1}$ at $s=0$ and $s=1$. Let $\Delta_\mathbb{Q}^2(x)$ denote the error term of $I_\mathbb{Q}^2(x)$. It follows from Theorem \ref{Theorem 2} that \begin{align*} \Delta_\mathbb{Q}^2(x) = \sum_{n=1}^\infty \frac{d(n)}{n}\frac{1}{4\pi^2}\[ V(e^{-i\pi} 4\pi^2nx ; 1,0) -2V(4\pi^2nx ; 1,0) + V(e^{i\pi} 4\pi^2nx ; 1,0) \right]. \end{align*} We apply \eqref{G and K function} here and obtain \begin{align}\label{Error term with K} \Delta_\mathbb{Q}^2(x) =\sum_{n=1}^\infty \frac{d(n)}{n} \frac{\sqrt{nx}}{\pi}\[ -i K_1(- 4\pi i \sqrt{nx}) -2K_1(4\pi \sqrt{nx}) + i K_1(4\pi i \sqrt{nx}) \right]. \end{align} The Bessel functions are interconnected via the following relations (cf. \cite{BatemanE}).
\begin{equation}\label{Y I and K functions} Y_\nu(iz) = e^{\frac{\pi i (\nu +1)}{2}}I_\nu(z) - \frac{2}{\pi} e^{-\frac{\pi i \nu}{2}}K_\nu(z), \end{equation} where $-\pi < \arg z \leq \frac{\pi}{2}$ and \begin{equation}\label{J and I functions} J_\nu(iz) = e^{\frac{\pi i \nu}{2}} I_\nu(z). \end{equation} It follows from \eqref{Y I and K functions} and \eqref{J and I functions} that for $\nu = 1$, we have \begin{equation}\label{K Y and J result} K_1(iz) = -\frac{\pi}{2} [J_1(-z) + i Y_1(-z)]. \end{equation} We also need the following two basic formulas to evaluate Bessel functions of integer order at negative arguments: \begin{equation}\label{J at negative arg} J_n(-z) = (-1)^n J_n(z) \end{equation} and \begin{equation}\label{Y at negative arg} Y_n(-z) = (-1)^n Y_n(z) + 2i (-1)^n J_n(z). \end{equation} We can now conclude the desired result by inserting \eqref{K Y and J result}, \eqref{J at negative arg} and \eqref{Y at negative arg} into \eqref{Error term with K}. \end{proof}
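The main term in Theorem \ref{Theorem 2} can be tested numerically in the case $m=2$: by the identity $\sum_{n \leq x} d(n) = \sum_{k \leq x} \lfloor x/k \rfloor$, the partial sum should track $x\log x + (2\gamma-1)x + \tfrac{1}{4}$ up to Dirichlet's classical $O(\sqrt{x})$ error. A minimal sketch (the cutoff $x$ and the error tolerance are illustrative choices):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def divisor_partial_sum(x: int) -> int:
    # sum_{n <= x} d(n) = sum_{k <= x} floor(x/k)
    return sum(x // k for k in range(1, x + 1))

def voronoi_main_term(x: float) -> float:
    return x * math.log(x) + (2.0 * EULER_GAMMA - 1.0) * x + 0.25

x = 10_000
error = divisor_partial_sum(x) - voronoi_main_term(x)
# Dirichlet's hyperbola method gives O(sqrt(x)); Voronoi improved this to O(x^{1/3} log x)
assert abs(error) < 3 * math.sqrt(x)
```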
\subsection{Ideal counting problem} We now consider the problem of counting the number of ideals $\mathfrak{a}$ in any number field $\mathbb{K}$ whose norm satisfies $\mathfrak{N}_\mathfrak{a} \leq x$. This problem is the special case of Theorem \ref{Theorem 1} obtained by taking $m=1$. We write $I_\mathbb{K}(x)$ and $H$ in place of $I_\mathbb{K}^1(x)$ and $H_1$ respectively for simplicity. We can conclude the following.
\begin{theorem}\label{Theorem 4.3} Let $\mathbb{K}$ be any number field of degree $d$ with signature $(r_1, r_2)$ and discriminant $d_\mathbb{K}$. Then we have \begin{align}\label{Eqn 4.8} I_\mathbb{K}(x) =\zeta_\mathbb{K}(0) + Hx &- \frac{i^{r_1}D_\mathbb{K}^{1/2}}{(2\pi)^{r_1+r_2}}\sum\limits_{j=0}^{r_1}(-1)^j \(\begin{matrix} r_1\\ j \end{matrix}\) \sum\limits_{n=1}^\infty \frac{v_\mathbb{K}(n)}{n} \nonumber\\ &\times G_{0, \,\ d}^{r_1+r_2, 0}\(\begin{matrix} - \\ \boldsymbol{1}_{r_1+r_2-1}, 0, \boldsymbol{1}_{r_2} \end{matrix}
\bigg| (e^{\frac{i\pi}{2}})^{2j-r_1} \frac{(2\pi)^{d}}{D_{\mathbb{K}}} nx\). \end{align}
\end{theorem} The following theorem provides the result for the partial sum $I_\mathbb{K}(x)$ when $\mathbb{K}$ is totally real. \begin{theorem}\label{Theorem 4.4} Let $\mathbb{K}$ be any totally real number field of degree $d \geq 2$ with discriminant $d_\mathbb{K}$. Then we have \begin{align}\label{Eqn 4.9} I_\mathbb{K}(x) = H x - \frac{i^{d}D_{\mathbb{K}}^{1/2}}{(2\pi)^{d}}\sum\limits_{j=0}^{d}(-1)^j \(\begin{matrix} d\\ j \end{matrix}\) \sum\limits_{n=1}^\infty \frac{v_\mathbb{K}(n)}{n}
\, V\( (e^{\frac{i\pi}{2}})^{2j-d} \frac{(2\pi)^{d}}{D_{\mathbb{K}}} nx ; \boldsymbol{1}_{d-1}, 0 \). \end{align}
\end{theorem} \begin{proof} The proof of the theorem follows immediately from Theorem \ref{Theorem 4.3} by replacing $r_1$ by $d$ and $r_2$ by $0$ in \eqref{Eqn 4.8}. It follows from Proposition \ref{Prop 2.2} that for any number field of degree $d$ with signature $(r_1 , r_2)$, the Dedekind zeta function vanishes at $0$ when $r_1 + r_2 >1$. Hence the term $\zeta_\mathbb{K}(0)$ does not appear in the formula when considering number fields with $r_1 \geq 2$ and $r_2 = 0$. \end{proof} We next consider the result for a real quadratic field as a corollary of Theorem \ref{Theorem 4.4} and estimate the partial sum $I_\mathbb{K}(x)$. \begin{corollary} Let $\mathbb{K}$ be any real quadratic field with discriminant $d_\mathbb{K}$. Then we have \begin{align} I_\mathbb{K}(x) = H x - \sum_{n=1}^\infty\frac{v_\mathbb{K}(n)}{n} \left[Y_1\left(\frac{4\pi\,\sqrt{xn}}{D_\mathbb{K}^{1/2}}\,\right)+\frac{2}{\pi}K_1\left(\frac{4\pi\,\sqrt{xn}}{D_\mathbb{K}^{1/2}}\right) \right]\sqrt{xn}. \end{align} \end{corollary} \begin{proof} The proof follows an argument similar to that given in the proof of Corollary \ref{Voronoi proof}. \end{proof} The next theorem provides the result for the partial sum $I_\mathbb{K}(x)$ when $\mathbb{K}$ is purely imaginary, which follows directly from Theorem \ref{Theorem 4.3}. \begin{theorem}\label{Theorem 4.6} Let $\mathbb{K}$ be any purely imaginary number field of degree $d$ with discriminant $d_\mathbb{K}$. Then \begin{align} I_\mathbb{K}(x) =\zeta_\mathbb{K}(0) + H x - \(\frac{D_\mathbb{K}}{(2\pi)^d}\)^{1/2} \sum\limits_{n=1}^\infty \frac{v_\mathbb{K}(n)}{n} G_{0, \, d}^{\frac{d}{2}, \, 0}\(\begin{matrix} - \\ \boldsymbol{1}_{\frac{d}{2}-1}, 0, \boldsymbol{1}_{\frac{d}{2}} \end{matrix}
\bigg| \frac{(2\pi)^{d}}{D_{\mathbb{K}}} nx\). \end{align} \end{theorem} We finally consider the special case of Theorem \ref{Theorem 4.6} for imaginary quadratic fields and estimate the partial sum $I_\mathbb{K}(x)$. \begin{corollary} Let $\mathbb{K}$ be any imaginary quadratic field with discriminant $d_\mathbb{K}$. Then we have \begin{align} I_\mathbb{K}(x) =\zeta_\mathbb{K}(0) + H x + \sum_{n=1}^\infty \frac{v_\mathbb{K}(n)}{n} J_1\(\frac{4\pi\sqrt{nx}}{D_\mathbb{K}^{1/2}}\)\sqrt{nx}. \end{align} \end{corollary} \begin{proof} It follows from Theorem \ref{Theorem 4.6} that since $\mathbb{K}$ is a purely imaginary number field of degree $2$, we have \begin{align} I_\mathbb{K}(x) = \zeta_\mathbb{K}(0) + H x - \frac{D_\mathbb{K}^{1/2}}{2\pi} \sum\limits_{n=1}^\infty \frac{v_\mathbb{K}(n)}{n} G_{0, \, 2}^{1, \, 0}\(\begin{matrix} - \\ 0, 1 \end{matrix}
\bigg| \frac{4\pi^2}{D_{\mathbb{K}}} nx\). \end{align} We now apply \eqref{G and J function} to conclude our result. \end{proof}
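For $\mathbb{K} = \mathbb{Q}(i)$ the partial sum $I_\mathbb{K}(x)$ reduces to the Gauss circle count: since $\mathbb{Z}[i]$ has four units, $v_\mathbb{K}(n) = \tfrac{1}{4}\#\{(a,b) \in \mathbb{Z}^2 : a^2+b^2 = n\}$, and the residue is $H = \pi/4$, so $I_\mathbb{K}(x) \sim \tfrac{\pi}{4}x$. A quick numerical check of this leading term (the cutoff and tolerance are illustrative choices):

```python
import math

def ideal_count_Qi(x: int) -> float:
    # I_K(x) for K = Q(i): one quarter of the number of (a, b) in Z^2
    # with 0 < a^2 + b^2 <= x  (each nonzero ideal of Z[i] has 4 generators)
    r = math.isqrt(x)
    count = 0
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            if 0 < a * a + b * b <= x:
                count += 1
    return count / 4.0

x = 10_000
# leading term H*x with H = pi/4, the residue of the Dedekind zeta of Q(i) at s=1
assert abs(ideal_count_Qi(x) - math.pi / 4.0 * x) < 3 * math.sqrt(x)
```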
\end{document}
\begin{document}
\title{Regularity properties of the Schrödinger cost}
\author{Gauthier Clerc\thanks{Institut Camille Jordan, Umr Cnrs 5208, Universit\'e Claude Bernard Lyon 1, 43 boulevard du 11 novembre 1918, F-69622 Villeurbanne cedex. [email protected]}}
\date{\today} \maketitle
\abstract{The Schrödinger problem is an entropy minimisation problem on the space of probability measures. Its optimal value is a cost between two probability measures. In this article we investigate some regularity properties of this cost: continuity with respect to the marginals and the time derivative of the cost along probability-measure-valued curves.}
\section{Introduction}
The Schrödinger problem was formulated by Schrödinger himself in the articles~\cite{Sch31, Sch32} in the thirties. The modern approach to this problem has been mainly developed in the two seminal papers~\cite{follmer1988} and~\cite{leonard2014}. The discovery in~\cite{mikami04} that the Monge-Kantorovitch problem is recovered as the short time limit of the Schrödinger problem has triggered intense research activity in the last decade. This interest is due to the fact that adding an entropic penalty to the Monge-Kantorovitch problem leads to major computational advantages via the Sinkhorn algorithm (see for instance~\cite{COTFNT}). The Schrödinger problem can also be a fruitful tool to prove some functional inequalities (see~\cite{clerc2020},~\cite{HWI}, etc.). \\ The problem is, observing the empirical distribution of a cloud of Brownian particles at times $t=0$ and $t=1$, to find the distribution of the cloud at any intermediate time $0 < s < 1$. In modern language, this is an entropy minimization problem. The relative entropy of two measures is loosely defined by $$
H(p|r):= \left\{\begin{array}{cc}
\int \log \left( \frac{dp}{dr}\right)dp \ \text{if} \ p \ll r ,\\
+ \infty\ \text{else}. \end{array} \right. $$ We leave the precise definition of the relative entropy to the main body of the paper. Given two probability measures $\mu$ and $\nu$ on a Riemannian manifold $N$ equipped with a generator $L$ of reversible measure $m$, the Schrödinger cost is defined as \begin{equation*}
\mathrm{Sch}(\mu,\nu):= \inf H(\gamma | R_{01}). \end{equation*} Here the infimum is taken over all probability measures on $N \times N$ with $\mu$ and $\nu$ as marginals, and $R_{01}$ is the joint law of the initial and final positions of the unique diffusion measure with generator $L$ on $C\left( [0,1],N\right)$ starting from $m$. Independently proven in different papers (see~\cite{chengeorgiou2016, gentil-leonard2017, gentil-leonard2020, gigli-tamanini2020}), the Benamou-Brenier-Schrödinger formula states that \begin{equation*} \label{BBS1}
\mathrm{Sch}(\mu,\nu)= \frac{\mathcal{C}(\mu,\nu)}{4}+ \frac{H(\mu|m)+H(\nu|m)}{2}. \end{equation*} Here $\mathcal C(\mu,\nu)$ is the entropic cost given by \begin{equation} \label{cout intro}
\mathcal{C}(\mu,\nu):= \inf \int_0^1 \left( \|v_s\|^2_{L^2(\mu_s)}+\left\|\nabla \log \frac{d \mu_s}{dm}\right\|_{L^2(\mu_s)}^2 \right) ds, \end{equation} and the infimum is taken over all pairs $(\mu_s,v_s)_{0 \leq s \leq 1}$ such that $(\mu_s)_{0 \leq s \leq 1}$ is an absolutely continuous path with respect to the Wasserstein distance which connects $\mu$ to $\nu$ and satisfies in a weak sense for every $s \in [0,1]$ $$ \left\{\begin{array}{cc}
v_s \in L^2(\mu_s), \\
\partial_s \mu_s=- \nabla \cdot (\mu_sv_s). \end{array} \right. $$ In this paper we investigate regularity properties of the functions $(\mu,\nu) \mapsto \mathrm{Sch}(\mu,\nu)$ and $(\mu,\nu) \mapsto \mathcal C(\mu,\nu)$. To my knowledge, regularity properties of the Schrödinger cost as a function of its marginals have not been investigated yet, but the stability of optimizers has been investigated in~\cite{tamanini2017} and more recently in~\cite{ghosal2021stability} and~\cite{backhoff2019stability}. We give an overview of the main contributions of this paper, leaving precise statements to other sections. \begin{itemize} \item In Section~\ref{sec-cont} we investigate continuity properties of the cost function $\mathcal{C}$. In Theorem~\ref{continuityofthecost} we show that $$ \underset{k \rightarrow \infty}{\lim} \mathcal{C}(\mu_k,\nu_k)= \mathcal{C}(\mu,\nu) $$ if $W_2(\mu_k,\mu) \underset{k \rightarrow \infty}{\rightarrow} 0$ (resp. $\nu_k$ to $\nu$) with additional hypotheses on the entropy and the Fisher information along the sequences. \item In Section~\ref{sec-appli} we provide a few applications of the preceding continuity properties. The main result of this section is that using the continuity properties of $\mathrm{Sch}$ and $\mathcal{C}$ we are able to show that the Benamou-Brenier-Schrödinger formula~(\ref{BBS1}) is valid assuming that both measures have finite entropy, finite Fisher information and locally bounded densities. To my knowledge this is a new result, because no compactness assumptions are needed for the two marginals. \item In Section~\ref{sec-deriv} we investigate the differentiability of the functions $t \mapsto \mathrm{Sch}(\mu_t,\nu_t)$ and $t \mapsto \mathcal C (\mu_t,\nu_t)$, where $(\mu_t)_{t \geqslant 0}$ and $(\nu_t)_{t \geqslant 0}$ are some curves on the Wasserstein space.
These results extend the existing ones for the Wasserstein distance, see~\cite[Theorem 8.4.7]{ambrosio-gigli2008} and~\cite[Theorem 23.9]{villani2009}. We prove that the derivative of the entropic cost is given, for almost every $t$, by $$
\frac{d}{dt} \mathcal C(\mu_t,\nu_t)= \langle \dot \mu_s^t|_{s=1}, \dot \nu_t \rangle_{L^2(\nu_t)} - \langle \dot \mu_s^t|_{s=0}, \dot \mu_t \rangle_{L^2(\mu_t)}, $$ where $(\mu_s^t)_{s \in [0,1]}$ is the minimizer of the problem~(\ref{cout intro}) from $\mu_t$ to $\nu_t$ and $\dot \mu_s^t$ is the velocity of the path $s \mapsto \mu_s^t$ defined in a later section. Such minimizers are called entropic interpolations. Note that this is exactly the formula which holds for the Wasserstein distance, replacing the Wasserstein geodesics by the entropic interpolations. For technical reasons we prove this formula in the case where $N= \mathbb R^n$ and $L$ is the classical Laplacian operator.
\end{itemize}
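As recalled above, the computational appeal of the entropic penalty comes from the fact that the discrete analogue of the Schrödinger problem can be solved by Sinkhorn's matrix-scaling iterations. A minimal sketch for two discrete marginals (the grid, cost, regularisation parameter and iteration count are illustrative choices):

```python
import math

def sinkhorn(mu, nu, cost, eps=1.0, iters=300):
    # Sinkhorn matrix scaling: seeks P = diag(u) K diag(v) with
    # K_ij = exp(-cost_ij / eps), so that P has row sums mu and column sums nu.
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# three-point marginals on the grid {0, 1, 2} with quadratic cost
xs = [0.0, 1.0, 2.0]
mu = [0.2, 0.5, 0.3]
nu = [0.4, 0.4, 0.2]
cost = [[(a - b) ** 2 for b in xs] for a in xs]
P = sinkhorn(mu, nu, cost)
assert all(abs(sum(P[i]) - mu[i]) < 1e-6 for i in range(3))
assert all(abs(sum(P[i][j] for i in range(3)) - nu[j]) < 1e-6 for j in range(3))
```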
\section{Setting of our work}
\subsection{Markov semigroups}
Let $(N, \mathfrak{g})$ be a smooth, connected and complete Riemannian manifold. We denote by $dx$ the Riemannian measure and by $\langle \cdot , \cdot \rangle$ the Riemannian metric (we omit $\mathfrak{g}$ for simplicity). Let $\nabla$ denote the gradient operator associated to $(N, \mathfrak{g})$ and $\nabla \cdot$ be the associated divergence, so that for every smooth function $f$ and vector field $\zeta$ $$ \int \langle \nabla f(x) , \zeta(x) \rangle dx = -\int f(x) \nabla \cdot \zeta(x) dx. $$ Hence the Laplace-Beltrami operator can be defined as $\Delta = \nabla \cdot \nabla$. We consider a differential generator $L:= \Delta - \langle \nabla , \nabla W \rangle $ for some smooth function $W:N \rightarrow \mathbb R$. We define the carré du champ operator for all smooth functions $f$ and $g$ by $$ \Gamma(f,g):= \frac{1}{2}\left(L(fg)-fLg-gLf \right). $$
Under our current hypotheses we have $ \Gamma(f):= \Gamma(f,f)= |\nabla f|^2$, which is the squared length of $\nabla f$ with respect to the Riemannian metric $\mathfrak{g}$. Let $Z:= \int e^{-W}dx$; if $Z<\infty$ the reversible probability measure associated with $L$ is given by $$ d m:= \frac{e^{-W}}{Z}dx. $$ If $Z= \infty$, the reversible measure associated with $L$ is $dm:= e^{-W}dx$, which has infinite mass. Following the work of~\cite{bakry-emery1985} we define the iterated carré du champ operator given by $$ \Gamma_2(f,g)= \frac{1}{2} \left( L \Gamma(f,g)-\Gamma(Lf,g)-\Gamma(f,Lg) \right), $$ for any smooth functions $f$ and $g$, and we denote $\Gamma_2(f):= \Gamma_2(f,f)$. We say that the operator $L$ satisfies the $CD(\rho,n)$ curvature-dimension condition with $\rho \in \mathbb R$ and $n \in (0,\infty]$ if for every smooth function $f$ $$ \Gamma_2(f) \geqslant \rho \Gamma(f)+ \frac{1}{n}(Lf)^2. $$ For instance, $\mathbb R^n$ endowed with the classical Laplacian operator satisfies the $CD(0,n)$ curvature-dimension condition. With the Ornstein-Uhlenbeck operator, $\mathbb R^n$ satisfies the $CD(1,\infty)$ curvature-dimension condition. More generally, a Riemannian manifold of dimension $n \in \mathbb N$ with a Ricci tensor bounded from below by $\rho \in \mathbb R$, endowed with its Laplace-Beltrami operator, satisfies the $CD(\rho , n)$ curvature-dimension condition. We assume that $L$ is the generator of a Markov semigroup $(P_t)_{t \geqslant 0}$; this is for example the case when a $CD(\rho,\infty)$ curvature-dimension condition holds for some $\rho \in \mathbb R$. For every $f \in L^2(m)$ the family $(P_tf)_{t \geqslant 0}$ is defined as the unique solution of the Cauchy system $$ \left\{ \begin{array}{cc}
\partial_tu=Lu, \\
u(\cdot,0)=f( \cdot). \end{array} \right. $$
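For example, when $N = \mathbb{R}^n$ and $W = 0$ we have $L = \Delta$, and $(P_t)_{t \geqslant 0}$ is the heat semigroup, given explicitly by the Gauss-Weierstrass formula
$$
P_t f(x) = (4\pi t)^{-n/2} \int_{\mathbb{R}^n} f(y)\, e^{-\frac{|x-y|^2}{4t}} \, dy, \qquad t > 0.
$$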
Under the $CD(\rho,\infty)$ curvature-dimension condition this Markov semigroup admits a probability kernel $p_t(x,dy)$ with density $p_t(x,y)$, that is, for every $t \geqslant 0$ and $f \in L^2(m)$ $$ \forall x \in N, \ P_tf(x)= \int f(y)p_t(x,dy)= \int f(y)p_t(x,y) dm(y); $$ for the existence of the kernel, see~\cite[Theorem 7.7]{grigor}. We also define the dual semigroup $(P_t^*)_{t \geqslant 0}$ which acts on probability measures. Given a probability measure $\mu$ the family $(P_t^* \mu)_{t \geqslant 0}$ is given by the following equation $$ \int f dP_t^{*} \mu = \int P_tf d\mu, $$
for every $t \geqslant 0$ and every test function $f$. When $\mu \ll m$, we have $\frac{dP_t^* \mu}{dm} =P_t \left( \frac{d \mu}{dm}e^W\right)$. The function $(t,x) \mapsto \frac{dP_t^* \mu}{dx}(x)$ is a solution of the following Fokker-Planck type equation \begin{equation} \label{eq-51} \partial_t \nu_t= L^* \nu_t := \Delta \nu_t+ \nabla \cdot \left(\nu_t \nabla W \right), \end{equation} with initial value $\frac{d \mu}{dx}$. Here $L^*$ is the dual operator of $L$ in $L^2(dx)$.
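In the Euclidean case $W = 0$, equation \eqref{eq-51} is the heat equation: mass is conserved along the flow, while the entropy $\int \nu_t \log \nu_t \, dx$ decreases. A crude finite-difference sketch of these two facts on a periodic grid (grid size and time step are illustrative; explicit-Euler stability requires $\delta t \leq \delta x^2/2$):

```python
import math

def heat_step(nu, dx, dt):
    # one explicit Euler step of d/dt nu = nu'' on a periodic grid
    n = len(nu)
    return [nu[i] + dt / dx**2 * (nu[(i - 1) % n] - 2 * nu[i] + nu[(i + 1) % n])
            for i in range(n)]

def entropy(nu, dx):
    # discrete entropy sum nu log(nu) dx  (nu > 0 on the grid)
    return sum(v * math.log(v) * dx for v in nu)

n, L = 200, 10.0
dx = L / n
xs = [-L / 2 + (i + 0.5) * dx for i in range(n)]
nu = [math.exp(-x * x) + 0.01 for x in xs]
mass = sum(nu) * dx
nu = [v / mass for v in nu]          # normalise to a probability density

dt = 0.4 * dx**2
ents = [entropy(nu, dx)]
for _ in range(200):
    nu = heat_step(nu, dx, dt)
    ents.append(entropy(nu, dx))

assert abs(sum(nu) * dx - 1.0) < 1e-9                        # mass conservation
assert all(b <= a + 1e-12 for a, b in zip(ents, ents[1:]))   # entropy decreases
```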
\subsection{Wasserstein space and absolutely continuous curves}
The set $\mathcal P_2(N)$ of probability measures on $N$ with finite second order moment can be endowed with the Wasserstein distance given for every $\mu,\nu \in \mathcal P_2(N)$ by $$ W_2^2(\mu,\nu):= \inf \int d^2(x,y)\,d \pi(x,y), $$ where the infimum runs over all $\pi \in \mathcal P(N \times N)$ with $\mu$ and $\nu$ as marginals, and $d$ is the Riemannian distance on $(N, \mathfrak{g})$. Recall that a path $(\mu_t)_{t \in [0,1]} \subset \mathcal P_2(N)$ is absolutely continuous with respect to the Wasserstein distance $W_2$ if and only if $$
|\dot \mu_t|:= \underset{s \rightarrow t}{\lim} \frac{W_2(\mu_s,\mu_t)}{|t-s|} \in L^1([0,1]). $$
In this case, there exists a unique vector field $(V_t)_{t \in [0,1]}$ such that $V_t \in L^2(\mu_t)$ and $|\dot \mu_t|=\|V_t\|_{L^2(\mu_t)}$. Furthermore this vector field can be characterized as the solution of the continuity equation $$ \partial_t \mu_t = - \nabla \cdot (V_t \mu_t) $$ with minimal norm in $L^2(\mu_t)$. We denote $\dot \mu_t=V_t$, and $(\dot \mu_t)_{t \in [0,1]}$ is called the velocity vector field of $(\mu_t)_{t \in [0,1]}$ or the velocity for short. Sometimes we also use the notation $\mathbf{dt} \mu_t= \dot \mu_t$. \\ In the famous paper~\cite{benamou-brenier2000} Benamou and Brenier showed that the Wasserstein distance admits a dynamical formulation \begin{equation} \label{eq-50}
W_2^2(\mu,\nu)= \inf \int_0^1 \|\dot \mu_t\|^2_{L^2(\mu_t)}dt, \end{equation} where the infimum runs over all absolutely continuous paths which connect $\mu$ to $\nu$ in $\mathcal P_2(N)$. In his article~\cite{otto2001}, Felix Otto gave birth to a theory which allows one to consider $(\mathcal P_2(N),W_2)$, heuristically at least, as an infinite-dimensional Riemannian manifold. This theory was baptised \textquotedblleft Otto calculus\textquotedblright later by Cédric Villani. For every $\mu \in \mathcal P_2(N)$ the tangent space of $\mathcal P_2(N)$ at $\mu$ can be defined as $$ T_{\mu} \mathcal P_2(N):= \overline{\left\{ \nabla \varphi: \ \varphi \in C_c^{\infty}(N)\right\}}^{L^2(\mu)}, $$ and the Riemannian metric is induced by the scalar product $\langle \cdot , \cdot \rangle_{L^2(\mu)}$, see for instance~\cite[Section 1.4]{gigli2012} or~\cite[Section 3.2]{gentil-leonard2017}. \\ As in the Riemannian case, the acceleration of a curve can be defined as the covariant derivative of the velocity field along the curve itself. If $(\mu_t)_{t \in [0,1]}$ is an absolutely continuous curve in $\mathcal P_2(N)$ and $(v_t)_{t \in [0,1]}$ is a vector field along $(\mu_t)_{t \in [0,1]}$, for every $t \in [0,1]$ we denote by $\mathbf{D_t}v_t$ the covariant derivative of $v_t$ along $(\mu_t)_{t \in [0,1]}$ defined in~\cite[Section 3.3]{gentil-leonard2017}. It turns out that when the velocity field of $(\mu_t)_{t \in [0,1]}$ has the form $(\nabla \varphi_t)_{t \in [0,1]}$, the acceleration of $(\mu_t)_{t \in [0,1]}$ is given by $$ \forall t \in [0,1], \ \ddot \mu_t:=\mathbf{D_t} \dot \mu_t = \nabla \left( \frac{d}{dt} \varphi_t+ \frac{1}{2} \Gamma(\varphi_t)\right), $$ see~\cite[Section 3.3]{gentil-leonard2017}. Covariant derivative and acceleration can be defined in a more general framework, see~\cite[Section 5.1]{gigli2012}.
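On the real line the optimal coupling for $W_2$ is the quantile coupling, so that $W_2^2(\mu,\nu) = \int_0^1 \big(F_\mu^{-1}(p) - F_\nu^{-1}(p)\big)^2 \, dp$; for two Gaussian measures this yields the closed form $W_2^2\big(\mathcal{N}(m_1,\sigma_1^2), \mathcal{N}(m_2,\sigma_2^2)\big) = (m_1-m_2)^2 + (\sigma_1-\sigma_2)^2$. A numerical sketch of this one-dimensional formula (the discretisation size and tolerance are illustrative):

```python
from statistics import NormalDist

def w2_squared_1d(dist_a, dist_b, grid=100_000):
    # midpoint discretisation of int_0^1 (F_a^{-1}(p) - F_b^{-1}(p))^2 dp
    total = 0.0
    for i in range(grid):
        p = (i + 0.5) / grid
        total += (dist_a.inv_cdf(p) - dist_b.inv_cdf(p)) ** 2
    return total / grid

a = NormalDist(mu=0.0, sigma=1.0)
b = NormalDist(mu=1.0, sigma=2.0)
# closed form: (0 - 1)^2 + (1 - 2)^2 = 2
assert abs(w2_squared_1d(a, b) - 2.0) < 1e-2
```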
\subsection{Schrödinger problem} \label{sec-sch}
Here we introduce the Schrödinger problem in its modern formulation, following the two seminal papers \cite{leonard2014} and \cite{follmer1988}. The first object of interest is the relative entropy of two measures. The relative entropy of a probability measure $p$ with respect to a measure $r$ is loosely defined by \begin{equation} \label{eq-41}
H(p|r):= \int \log \left( \frac{dp}{dr} \right)dp, \end{equation} if $p \ll r$ and $+ \infty$ otherwise. This definition is meaningful when $r$ is a probability measure but not necessarily when $r$ has infinite mass. Assuming that $r$ is $\sigma$-finite, there exists a function $\mathcal{W}: N \rightarrow [1, \infty)$ such that $z_{\mathcal{W}}:= \int e^{-\mathcal{W}}dr < \infty$. Hence we can define a probability measure $r_{\mathcal{W}}:=z_{\mathcal{W}}^{-1}e^{-\mathcal{W}}r$ and for every measure $p$ such that $\int \mathcal{W} dp < \infty$ $$
H(p|r):= H(p|r_{\mathcal{W}})- \int \mathcal{W} dp - \log(z_{\mathcal{W}}), $$
where $H(p|r_{\mathcal{W}})$ is defined by the equation~(\ref{eq-41}). \\ For $\mu, \nu \in \mathcal P(N)$ we define the Schrödinger cost from $\mu$ to $\nu$ by $$
\mathrm{Sch}(\mu,\nu):= \inf \left\{H(\gamma|R_{01}): \ \gamma \in \mathcal{P}(N \times N), \ \gamma_0=\mu, \ \gamma_1=\nu \right\}, $$ where $R_{01}$ is the joint law of the initial and final positions of the Markov process associated with $L$ starting from $m$, which is given by $$ dR_{01}(x,y)=p_1(x,y) dm(x)dm(y). $$ To ensure the existence and uniqueness of a minimizer, more hypotheses are needed. Namely, we assume that there exist two non-negative measurable functions $A,B:N \rightarrow \mathbb R$ such that \begin{enumerate}[(i)]
\item $- \infty<H(\mu|m), H(\nu|m)<\infty$. \end{enumerate} We define the set $$
\mathcal{P}^*_2(N):= \left\{ \mu \in \mathcal P_2(N): \ -\infty < H(\mu|m) < \infty , \ \int \left(A+B \right) d\mu < \infty \right\}. $$ If $\mu, \nu \in \mathcal P_2^*(N)$, it is proven that the Schrödinger cost $\mathrm{Sch}(\mu,\nu)$ is finite and admits a unique minimizer which takes the form \begin{equation} \label{eq-57} d\gamma= f \otimes g dR_{01}, \end{equation} for two measurable non-negative functions $f$ and $g$, see~\cite[Proposition 4.1.5]{tamanini2017}. Another fundamental result about the Schrödinger problem is an analogous formula to~(\ref{eq-50}) for the Schrödinger cost.
\begin{ethm}[Benamou-Brenier-Schrödinger formula] \label{BBS-new} Let $\mu,\nu \in \mathcal P_2^*(N)$ be two compactly supported probability measures with bounded densities with respect to $m$. Then the following formula holds \begin{equation} \label{eq-56}
\mathrm{Sch}(\mu,\nu)=\frac{ \mathcal{C}(\mu,\nu)}{4}+\frac{\mathbf{\mathcal F}(\mu)+\mathbf{\mathcal F}(\nu)}{2}, \end{equation} where $\mathcal{C}(\mu,\nu)$ is the entropic cost between $\mu$ and $\nu$ given by $$
\mathcal{C}(\mu,\nu):= \inf \int_0^1 \left(\|\dot \mu_s\|_{L^2(\mu_s)}^2 + \|\nabla \log (\mu_s)\|_{L^2(\mu_s)}^2\right) ds. $$ Here the infimum runs over all absolutely continuous paths $(\mu_s)_{s \in [0,1]}$ which connect $\mu$ to $\nu$ in $\mathcal P_2(N)$, and $\mathbf{\mathcal F}$ is defined as $$
\mathbf{\mathcal F}(\mu):= H(\mu|m). $$ \end{ethm} Different versions of this theorem have been obtained under various hypotheses, see~\cite{chengeorgiou2016, gentil-leonard2017, gentil-leonard2020, gigli-tamanini2020}. \\ The functional $\mathbf{\mathcal F}: \mathcal P_2(N) \rightarrow [-\infty,\infty]$ is central to this work. Its gradient can be identified through the equation $\frac{d}{dt} \mathbf{\mathcal F}(\mu_t)= \langle \mathbf{\mathbf{grad}}_{\mu_t} \mathbf{\mathcal F} , \dot \mu_t \rangle_{\mu_t}$ and is given, for every $\mu \in \mathcal P_2(N)$ with smooth density with respect to $m$, by $$ \mathbf{\mathbf{grad}}_{\mu} \mathbf{\mathcal F} := \nabla \log \left(\frac{d \mu}{dm}\right). $$ These definitions allow us to view the Fokker-Planck type equation~(\ref{eq-51}) as the gradient flow equation of $\mathbf{\mathcal F}$. Indeed, every solution $(\nu_t)_{t \geqslant 0}$ of this equation satisfies $$ \dot \nu_t= - \nabla \left(\log \frac{d\nu_t}{dx} +W \right) = - \nabla \log \left(\frac{d \nu_t}{dm}\right)=- \mathbf{\mathbf{grad}}_{\nu_t} \mathbf{\mathcal F}, $$ see \cite[Section 3.2]{gentil-leonard2020}. With Otto calculus, we can also introduce the notions of Hessian and covariant derivative. A remarkable fact is that the Hessian of $\mathbf{\mathcal F}$ can be expressed in terms of $\Gamma_2$: indeed, $$ \forall \mu \in \mathcal P_2(N), \ \forall \ \nabla \varphi, \nabla \psi \in T_{\mu} \mathcal P_2(N), \ \mathbf{Hess}_{\mu} \mathbf{\mathcal F}(\nabla \varphi, \nabla \psi) = \int \Gamma_2(\nabla \varphi , \nabla \psi) d \mu, $$
see~\cite[Section 3.3]{gentil-leonard2017}. The quantity $\mathcal I(\mu):=\left\|\nabla \log \frac{d \mu}{dm}\right\|_{L^2(\mu)}^2$, which already appeared in the definition of the entropic cost, is central to this work; it is called the Fisher information. In the Otto calculus formalism, the Fisher information admits the nice interpretation $$
\mathcal{I}(\mu):= \|\mathbf{\mathbf{grad}}_{\mu} \mathbf{\mathcal F} \|^2_{L^2(\mu)}. $$ Minimizers of the entropic cost $\mathcal{C}(\mu,\nu)$ are called entropic interpolations and take the form $$ d\mu_t=P_t f \, P_{1-t}g \, dm, $$ where $f$ and $g$ are the two positive functions which appear in equation~(\ref{eq-57}). Due to this particular structure, the velocity and acceleration of entropic interpolations can be computed explicitly. It holds that for every $t \in [0,1]$ $$ \dot \mu_t= \nabla \left(\log P_{1-t}g - \log P_t f\right). $$ But the most important fact is that entropic interpolations are solutions of the following Newton equation \begin{equation*} \ddot \mu_t= \nabla \frac{d}{dt} \log \mu_t + \nabla^2 \log \mu_t \dot \mu_t , \end{equation*} which can be rewritten in the Otto calculus formalism as \begin{equation} \label{eq-newton} \ddot \mu_t = \mathbf{Hess}_{\mu_t} \mathbf{\mathcal F} \ \mathbf{\mathbf{grad}}_{\mu_t} \mathbf{\mathcal F}. \end{equation} This equation was first derived in~\cite[Theorem 1.2]{conforti2017}, see also~\cite[Sec 3.3, Proposition 3.5]{gentil-leonard2020}.
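As an illustration (a standard consequence of the Bochner formula, not used in the sequel): in the flat case $N= \mathbb R^n$ with $L= \Delta$ and $m$ the Lebesgue measure, one has $\Gamma_2(\nabla \varphi, \nabla \varphi)= \|\nabla^2 \varphi\|_{HS}^2$, so that for every $\mu \in \mathcal P_2(\mathbb R^n)$ and $\nabla \varphi \in T_{\mu} \mathcal P_2(\mathbb R^n)$
\begin{equation*}
\mathbf{Hess}_{\mu} \mathbf{\mathcal F}(\nabla \varphi, \nabla \varphi) = \int \|\nabla^2 \varphi\|_{HS}^2 \, d\mu \geqslant 0,
\end{equation*}
which recovers the convexity of the entropy along Wasserstein geodesics, in accordance with the $CD(0,\infty)$ condition.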
\subsection{Flow maps} \label{sec-flowmaps}
In this subsection we follow~\cite[Sec 2.1]{gigli2012}. We need this result only in the Euclidean framework, hence in this subsection we take $N= \mathbb R^n$ for simplicity. A crucial ingredient of the proof of Theorem~\ref{lem-1} is, given a path $(\mu_t)_{t \in [0,1]}$, the existence of a family of maps $(T_{t \rightarrow s})_{t,s \in [0,1]}$ such that for every $s,t \in [0,1]$ $$ \frac{d}{ds}T_{t \rightarrow s}= \dot \mu_s \circ T_{t \rightarrow s} $$ and $$ T_{t \rightarrow s} \# \mu_t=\mu_s. $$ These maps are called the flow maps associated with $(\mu_s)_{s \in [0,1]}$. The existence of such maps can be guaranteed by some regularity assumptions on the path. Before the statement, we recall the definition of the Lipschitz constant of a vector field proposed by Gigli in~\cite{gigli2012}. \begin{edefi}[Lipschitz constant of a vector field] \label{constant} For every smooth compactly supported vector field $\zeta$ on $\mathbb{R}^n$ we define $$
L(\zeta):= \underset{x,y \in \mathbb R^n, \, x \neq y}{\sup} \frac{|\zeta(x)- \zeta(y)|}{|x-y|}. $$ Then for every $\mu \in \mathcal{P}_2(\mathbb R^n)$ and every $v \in T_{\mu} \mathcal{P}_2(\mathbb R^n)$ we define $$ \mathcal{L}(v):= \inf \underset{n \rightarrow \infty}{\underline{\lim}} L(\zeta_n), $$ where the infimum is taken over sequences $(\zeta_n)_{n \in \mathbb N}$ of smooth compactly supported vector fields which converge to $v$ in $L^2(\mu)$ as $n \rightarrow \infty$. \end{edefi}
Note that in the case where $v$ is smooth and compactly supported, $\mathcal{L}(v)$ is the Lipschitz constant of $v$.
\begin{ethm}[Cauchy--Lipschitz on manifolds,~\text{\cite[Theorem 2.6]{gigli2012}}] Let $(\mu_t)_{t \in [0,1]} \subset \mathcal P_2(\mathbb R^n)$ be an absolutely continuous path such that $$
\int_0^1 \mathcal{L}(\dot \mu_t)dt< \infty, \ \int_0^1 \| \dot \mu_t \|_{L^2(\mu_t)}dt < \infty. $$ Then there exists a family of maps $(T_{t \rightarrow s})_{t,s \in [0,1]}$ such that \begin{equation*} \left\{ \begin{array}{cc} T_{t \rightarrow s}: \mathrm{supp}(\mu_t) \rightarrow \mathrm{supp} (\mu_s),& \ \forall t,s \in [0,1], \\ T_{t \rightarrow t} (x) =x,& \ \forall x \in \mathrm{supp}(\mu_t), \ t \in [0,1], \\
\frac{d}{dr}\Big|_{r=s}T_{t \rightarrow r}= \dot \mu_s \circ T_{t \rightarrow s},& \ \forall t \in [0,1], \ \text{a.e. } s\in[0,1], \end{array} \right. \end{equation*} and the map $x \mapsto T_{t \rightarrow s}(x)$ is Lipschitz for every $s,t \in [0,1]$. Furthermore for every $s,t,r \in [0,1]$ and $x \in \mathrm{supp}(\mu_t)$ $$ T_{r \rightarrow s} \circ T_{t \rightarrow r}(x)=T_{t \rightarrow s} (x), $$ and $$ T_{t \rightarrow s} \# \mu_t = \mu_s. $$
\end{ethm}
\subsection{Hypotheses on the heat kernel}
Here is a summary of all the hypotheses needed throughout the paper. \begin{enumerate}[(H1)] \item The $CD(\rho,\infty)$ curvature-dimension condition holds for some $\rho \in \mathbb R$. \item Hypotheses $(i)$ to $(iv)$ in Section~\ref{sec-sch} hold. \end{enumerate} The first hypothesis~(H1) is needed to define Markov semigroups as introduced in~\cite{bgl-book}. The second hypothesis~(H2) is needed to ensure existence and uniqueness of minimizers of the Schrödinger problem. For instance, these hypotheses hold true when $N= \mathbb R^n$ is equipped with the classical Laplacian operator or the Ornstein--Uhlenbeck operator, or when $N$ is compact.
\section{Continuity of the entropic cost} \label{sec-cont}
Here we are interested in the continuity of the function $(\mu,\nu) \mapsto \mathcal{C}(\mu,\nu)$ where $\mathcal{C}(\mu,\nu)$ is defined as an infimum over all absolutely continuous paths connecting $\mu$ to $\nu$.
\begin{ethm}[Continuity of the entropic cost] \label{continuityofthecost} Let $\mu,\nu \in \mathcal{P}_2^*(N)$ and $(\mu_k)_{k \in \mathbb N},(\nu_k)_{k \in \mathbb N} \subset \mathcal P_2^*(N)$ be two sequences such that $\mu_k$ converges toward $\mu$ with respect to the Wasserstein distance (resp. $\nu_k$ toward $\nu$). We also assume that for every $k \in \mathbb N$ there exists an entropic interpolation from $\mu_k$ to $\nu_k$ (resp. from $\mu$ to $\nu$), and that $$ \sup \left\{\mathcal{I}(\mu_k),\mathcal{I}(\nu_k); \, k \in \mathbb{N} \right\}< + \infty $$ and $$ \sup \left\{\mathbf{\mathcal F}(\mu_k),\mathbf{\mathcal F}(\nu_k); \, k \in \mathbb{N} \right\}< + \infty. $$ Then $$ \mathcal{C}(\mu_k,\nu_k) \underset{k \rightarrow \infty}{\rightarrow} \mathcal{C}(\mu,\nu). $$ \end{ethm}
\begin{eproof} To begin we will show that $$ \underset{k \rightarrow \infty}{\overline{\lim}}\mathcal{C}(\mu_k,\nu_k) \leq \mathcal{C}(\mu,\nu). $$ To do so, let us construct a particular path from $\mu_k$ to $\nu_k$. For every $k \in \mathbb{N}$, $\varepsilon \in (0,1/2)$ and $\delta \in \left(0, \varepsilon/2 \right)$, we define a path $\eta^{k,\varepsilon , \delta}$ from $\mu_k$ to $\mu$ given by $$ \eta_t^{k,\varepsilon,\delta}= \left\{ \begin{array}{cc}
P_t^*(\mu_k), & \, t \in [0,\varepsilon/2-\delta), \\
P_{\varepsilon/2-\delta}^*(\gamma_t), & t \in [\varepsilon/2- \delta, \varepsilon/2+ \delta], \\
P_{\varepsilon -t}^*(\mu), & t \in (\varepsilon/2+\delta, \varepsilon], \end{array}\right. $$ where for all $t \in (\varepsilon/2- \delta, \varepsilon/2 + \delta)$ we define $\gamma_t= \alpha_{\frac{t-(\varepsilon/2 - \delta)}{2 \delta}}$ and $\left( \alpha_t\right)_{t \in [0,1]}$ is a constant-speed Wasserstein geodesic from $\mu_k$ to $\mu$. We also define a path $(\tilde{\eta}_t^{k,\varepsilon,\delta})_{t \in [0,\varepsilon]}$ in exactly the same way, but replacing $\mu_k$ by $\nu$ and $\mu$ by $\nu_k$; that is, a path from $\nu$ to $\nu_k$. \newline We denote by $\left(\mu_t\right)_{t \in [0,1]}$ the entropic interpolation from $\mu$ to $\nu$. Then for every $0 < \varepsilon < 1/2$, $k \in \mathbb{N}$ and $\delta \in (0, \varepsilon/2)$ we define a path $\left(\zeta_t^{k, \varepsilon, \delta}\right)_{t \in[0,1]}$ by \begin{equation*} \zeta_t^{k, \varepsilon, \delta}= \left\{ \begin{array}{cc}
\eta^{k,\varepsilon,\delta}_t,& t \in [0,\varepsilon), \\
\mu_{\frac{t-\varepsilon}{1-2 \varepsilon}},& t \in [\varepsilon,1-\varepsilon], \\
\tilde{\eta}_{t-(1-\varepsilon)}^{k, \varepsilon, \delta},& t \in (1-\varepsilon,1]. \end{array}\right. \end{equation*} This is an absolutely continuous path which connects $\mu_k$ to $\nu_k$, hence, by the very definition of the cost $\mathcal C$ we have $$
\mathcal{C}(\mu_k,\nu_k) \leq \int_0^1 \left\| \dot \zeta_t^{k,\varepsilon, \delta}\right\|^2_{L^2(\zeta_t^{k,\varepsilon, \delta})}+ \mathcal I(\zeta_t^{k, \varepsilon , \delta})dt. $$ Due to hypothesis (H1), we can apply the local logarithmic Sobolev inequalities stated in~\cite[Theorem 5.5.2]{bgl-book} and the $\rho$-convexity of the entropy (see~\cite[Corollary 17.19]{villani2009}) to find $$ 2 \int_{0}^{\varepsilon/2-\delta} \mathcal{I}(P_t^*\mu_k)dt+2 \int_{0}^{\varepsilon/2-\delta} \mathcal{I}(P_t^*\mu)dt \leq \frac{1-e^{- \rho\varepsilon}}{\rho} \left(\mathcal{I}(\mu)+ \mathcal{I}(\mu_k) \right), $$ and $$ \int_{\varepsilon/2-\delta}^{\varepsilon/2+\delta}\mathcal{I}(P_{\varepsilon/2-\delta}^* \gamma_t)dt \leq \frac{4 \delta \rho}{e^{\rho(\varepsilon-2\delta)}-1}\int_0^1 \mathbf{\mathcal F}(\alpha_t)dt \leq \frac{2 \delta \rho}{e^{\rho(\varepsilon-2\delta)}-1} \left( \mathbf{\mathcal F}(\mu) + \mathbf{\mathcal F} (\mu_k)- \frac{\rho}{2}W_2^2(\mu,\mu_k)\right). $$ Here for $t \in (\varepsilon/2-\delta , \varepsilon/2+\delta)$ we denote by $\mathbf{dt}P^*_{\varepsilon/2 - \delta} \gamma_t$ the velocity field of the path $(P^*_{\varepsilon/2 - \delta} \gamma_s)_{s \in (\varepsilon/2 - \delta , \varepsilon/2+\delta)}$ at time $t$. We need to estimate $$
\int_{\varepsilon/2 - \delta}^{\varepsilon/2 + \delta} \left\| \mathbf{dt} P_{\varepsilon/2-\delta}^*\gamma_t \right\|_{L^2(P_{\varepsilon/2-\delta}^* \gamma_t)}dt. $$ Using ~\cite[Theorem 8.3.1]{ambrosio-gigli2008}, for every $t \in (\varepsilon/2-\delta, \varepsilon/2+\delta)$ we have \begin{equation*}
\left\| \mathbf{dt} P_{\varepsilon/2-\delta}^*\gamma_t \right\|_{L^2(P_{\varepsilon/2-\delta}^*(\gamma_t))} = \underset{u \rightarrow t}{\lim} \frac{W_2(P_{\varepsilon/2- \delta }^*\gamma_t,P_{\varepsilon/2-\delta}^* \gamma_u)}{|t-u|}. \end{equation*}
Finally, using the $CD(\rho,\infty)$ contraction property~\cite[Theorem 9.7.2]{bgl-book} we obtain \begin{equation*}\left\| \mathbf{dt} P_{\varepsilon/2-\delta}^*\gamma_t \right\|_{L^2(P_{\varepsilon/2-\delta}^*(\gamma_t))} \leq e^{- \rho( \varepsilon/2 - \delta)} \frac{W_2(\mu_k,\mu)}{2 \delta}.\end{equation*} We have shown \begin{multline*}
\int_0^{\varepsilon} \left\| \dot \zeta_t^{k,\varepsilon,\delta}\right\|^2_{L^2(\zeta_t^{k,\varepsilon,\delta})}+ \mathcal I\left(\zeta_t^{k,\varepsilon,\delta}\right)dt \leq \frac{1-e^{- \rho\varepsilon}}{\rho} \left(\mathcal{I}(\mu)+ \mathcal{I}(\mu_k) \right) \\ + \frac{2 \delta \rho}{e^{\rho(\varepsilon-2\delta)}-1} \left( \mathbf{\mathcal F}(\mu) + \mathbf{\mathcal F} (\mu_k)- \frac{\rho}{2}W_2^2(\mu,\mu_k)\right) + e^{-\rho(\varepsilon - 2 \delta)} \frac{W_2^2(\mu_k,\mu)}{4\delta^2}. \end{multline*} A similar estimate holds for the integral from $1-\varepsilon$ to $1$ and we obtain \begin{multline*}
\mathcal{C}(\mu_k,\nu_k)\leq \frac{1-e^{- \rho\varepsilon}}{\rho} \left(\mathcal{I}(\mu)+ \mathcal{I}(\mu_k) + \mathcal I(\nu) + \mathcal I(\nu_k) \right) + \\ \frac{2 \delta \rho}{e^{\rho(\varepsilon-2\delta)}-1} \left( \mathbf{\mathcal F}(\mu) + \mathbf{\mathcal F} (\mu_k) + \mathbf{\mathcal F}(\nu) + \mathbf{\mathcal F}(\nu_k)- \frac{\rho}{2}\left(W_2^2(\mu,\mu_k)+W_2^2(\nu,\nu_k)\right)\right) \\ + e^{-\rho(\varepsilon - 2 \delta)} \frac{W_2^2(\mu_k,\mu)+W_2^2(\nu_k,\nu)}{4\delta^2}+ \int_0^1 \frac{1}{1-2\varepsilon} \left\| \dot \mu_t \right\|_{L^2(\mu_t)}^2 + (1-2 \varepsilon)\mathcal{I}(\mu_t) dt. \end{multline*} Finally, letting in this order $k$ tend to $\infty$, $\delta$ tend to $0$, and $\varepsilon$ tend to $0$, we obtain the desired inequality. \newline To obtain the $\liminf$ inequality, we consider the same path but with the roles of $\mu_k$ and $\mu$ (resp. $\nu_k$ and $\nu$) swapped; using the fact that $1-2 \varepsilon < \frac{1}{1-2\varepsilon}$, we obtain for every $k \in \mathbb N$, $\varepsilon \in (0,1/2)$ and $\delta \in (0,\varepsilon)$ \begin{multline*} \mathcal{C}(\mu,\nu) \leq \frac{1-e^{-\rho \varepsilon}}{\rho}\left( \mathcal I(\mu) + \mathcal I(\mu_k)+ \mathcal I(\nu)+ \mathcal I(\nu_k)\right) \\ + \frac{2 \delta \rho}{e^{\rho(\varepsilon-2\delta)}-1} \left( \mathbf{\mathcal F}(\mu) + \mathbf{\mathcal F} (\mu_k) + \mathbf{\mathcal F}(\nu) + \mathbf{\mathcal F}(\nu_k)- \frac{\rho}{2}\left(W_2^2(\mu,\mu_k)+W_2^2(\nu,\nu_k)\right)\right) \\ + e^{-\rho(\varepsilon - 2 \delta)} \frac{W_2^2(\mu_k,\mu)+W_2^2(\nu_k,\nu)}{4\delta^2}+ \frac{1}{1-2 \varepsilon} \mathcal{C}(\mu_k,\nu_k). \end{multline*} Letting $k$ tend to $\infty$, then $\delta$ to $0$, then $\varepsilon$ to $0$, we obtain $$ \mathcal{C}(\mu,\nu) \leq \underset{k \rightarrow \infty}{\underline{\lim}}\mathcal{C}(\mu_k,\nu_k). $$ \end{eproof}
\section{Extension of some properties to the non compactly supported case} \label{sec-appli}
\subsection{Benamou-Brenier-Schrödinger formula}
As mentioned before, the Benamou-Brenier-Schrödinger formula has been obtained under various hypotheses. Here we show that the result holds true when the two measures are not compactly supported, assuming instead that they have finite Fisher information, finite entropy and locally bounded densities; the proof combines the continuity properties of the cost proved above with existing results. Recall that, in the existing literature, this formula is proved assuming that the two measures have bounded supports and densities, see~\cite[Theorem 4.3]{gigli-tamanini2020}.
\begin{eprop}[Benamou-Brenier-Schrödinger formula] \label{BBS-formula} Let $\mu,\nu \in \mathcal P_2^*(N)$ be two measures with locally bounded densities with respect to $m$ such that $\mathcal I(\mu), \mathcal I(\nu) < \infty$. Furthermore, assume that there exists an entropic interpolation from $\mu$ to $\nu$. Then $$ \mathrm{Sch}(\mu,\nu)= \frac{\mathcal{C}(\mu,\nu)}{4}+ \frac{\mathbf{\mathcal F}(\mu) + \mathbf{\mathcal F}(\nu)}{2}. $$ \end{eprop}
Notice that the hypothesis of existence of entropic interpolations is not very restrictive. Indeed, if $N= \mathbb R^n$, entropic interpolations always exist for measures in $\mathcal{P}_2^*(N)$, see~\cite[Proposition 4.1]{leonard2014}.
\begin{eproof} Let $x \in N$. For every $n \in \mathbb N$, we define $$ \mu_n = \alpha_n \mathds{1}_{B(x,n)} \frac{d \mu}{d m} d m, $$ where $\alpha_n$ is a normalization constant. Analogously we can define a sequence $(\nu_n)_{n \in \mathbb N}$ which converges to $\nu$ when $n \rightarrow \infty$. As $\mu_n$ and $\nu_n$ are compactly supported, we can apply the Benamou-Brenier-Schrödinger formula, namely \begin{equation} \label{eq-39} \mathrm{Sch}(\mu_n,\nu_n)= \frac{\mathcal{C}(\mu_n,\nu_n)}{4}+ \frac{\mathbf{\mathcal F}(\mu_n) + \mathbf{\mathcal F}(\nu_n)}{2}. \end{equation} It can be easily shown that $W_2(\mu_n,\mu) \underset{n \rightarrow \infty}{\rightarrow} 0$, $\mathcal I(\mu_n) \underset{n \rightarrow \infty}{\rightarrow} \mathcal I(\mu)$, and $\mathbf{\mathcal F}(\mu_n) \underset{n \rightarrow \infty}{\rightarrow} \mathbf{\mathcal F}(\mu)$ (resp. $\nu_n$ and $\nu$). Hence by Theorem~\ref{continuityofthecost} the right-hand side of~(\ref{eq-39}) converges toward $\frac{\mathcal{C}(\mu,\nu)}{4}+ \frac{\mathbf{\mathcal F}(\mu) + \mathbf{\mathcal F}(\nu)}{2}$ when $n \rightarrow \infty$. \\
For the left-hand side, note that by the space restriction property of the Schrödinger cost~\cite[Proposition 4.2.2]{tamanini2017}, for every $n \in \mathbb N$ the optimal transport plan for the Schrödinger problem from $\mu_n$ to $\nu_n$ is given, for every Borel set $A \subset N \times N$, by $$ \gamma_n(A):= \frac{\gamma(A \cap B(x,n)^2)}{\mu(B(x,n)) \nu(B(x,n))}, $$
where $\gamma$ is the optimal transport plan for the Schrödinger problem from $\mu$ to $\nu$. Hence $\mathrm{Sch}(\mu_n,\nu_n)=H(\gamma_n|R_{01}) \underset{n \rightarrow \infty}{\rightarrow} H(\gamma|R_{01})= \mathrm{Sch}(\mu,\nu)$, and the result is proved. \end{eproof}
\subsection{Longtime properties of the entropic cost}
The entropic cost $\mathcal{C}(\mu,\nu)$ can be defined in greater generality using a parameter $T>0$. For $\mu,\nu \in \mathcal P_2(N)$ and $T>0$ we define $$
C_T(\mu,\nu):= \inf \int_0^T \|\dot \mu_t\|_{L^2(\mu_t)}^2+ \mathcal I (\mu_t) \, dt. $$ In~\cite[Theorem 3.6]{clerclongtime2020} and~\cite[Theorem 1.4]{conforti2017}, estimates are provided for large values of $T$, but only in the case where both measures are compactly supported and smooth. Using Theorem~\ref{continuityofthecost} we are able to extend these estimates to the non-compactly supported and non-smooth case. The following lemma, proved in~\cite[Lemma 3.1]{HWI}, will be very useful.
\begin{elem}[Approximation by compactly supported measures] Let $\mu \in \mathcal P_2(N)$ be a probability measure such that $\mathbf{\mathcal F}(\mu) < \infty$ and $\mathcal I(\mu) < \infty$. Then there exists a sequence $(\mu_k)_{k \in \mathbb N} \subset \mathcal P_2(N)$ such that \begin{enumerate}[(i)] \item $\mathbf{\mathcal F}(\mu_k) \underset{k \rightarrow \infty}{ \rightarrow} \mathbf{\mathcal F}(\mu), \ \mathcal I(\mu_k) \underset{k \rightarrow \infty}{ \rightarrow} \mathcal I(\mu)$ and $W_2(\mu_k,\mu) \underset{k \rightarrow \infty}{ \rightarrow} 0$. \item $\frac{d \mu_k}{dm} \in C_c^{\infty}(N)$ for every $k \in \mathbb N$. \end{enumerate} \end{elem}
Using this lemma and Theorem~\ref{continuityofthecost} we can easily extend the estimates provided in~\cite[Theorem 1.4]{conforti2017} and~\cite[Theorem 3.6]{clerclongtime2020}. Note that in~\cite{conforti2017} the author has already extended the estimate which holds under the $CD(\rho , \infty)$ curvature-dimension condition to the non-compact case, but we believe this is a pertinent example to illustrate the utility of Theorem~\ref{continuityofthecost}. The validity of the $CD(0,n)$ estimate for non-compactly supported measures is, to the best of our knowledge, a new result.
\begin{ecor} [Talagrand type inequality for the entropic cost] Let $\mu,\nu \in \mathcal P_2(N)$ be two probability measures with finite entropy and Fisher information. Assume that there exists an entropic interpolation from $\mu$ to $\nu$. Then if the $CD(\rho,\infty)$ curvature-dimension condition holds for some $\rho >0$ $$ C_T(\mu,\nu) \leq 2 \underset{t \in (0,T)}{\inf} \left\{ \frac{1+e^{- 2 \rho t}}{1-e^{-2 \rho t}} \mathbf{\mathcal F}(\mu) + \frac{1+e^{- 2 \rho (T-t)}}{1-e^{-2 \rho (T-t)}} \mathbf{\mathcal F}(\nu) \right\}. $$ If the $CD(0,n)$ curvature-dimension condition holds for some $n>0$ then $$ C_T(\mu,\nu) \leq C_1(\mu, \nu)+2n \log(T). $$
\end{ecor} These estimates are very useful; for instance, they are fundamental in proving the longtime convergence of entropic interpolations, see~\cite{clerclongtime2020}.
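As a side remark, the dependence on $T$ can be made transparent by a standard time change: if $(\mu_t)_{t \in [0,T]}$ is an admissible path and $\nu_s := \mu_{sT}$, then $\dot \mu_{sT}= T^{-1} \dot \nu_s$, and a change of variables yields
$$
C_T(\mu,\nu)= \inf \int_0^1 \frac{1}{T} \|\dot \nu_s\|_{L^2(\nu_s)}^2 + T \, \mathcal I(\nu_s) \, ds,
$$
where the infimum now runs over absolutely continuous paths $(\nu_s)_{s \in [0,1]}$ from $\mu$ to $\nu$. In particular $C_1(\mu,\nu)=\mathcal{C}(\mu,\nu)$, and the kinetic term becomes negligible compared to the Fisher information as $T \rightarrow \infty$.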
\section{Derivability of the Schrödinger cost} \label{sec-deriv}
In this section, we take $N= \mathbb R^n$ for some $n \in \mathbb N$ and $L= \Delta$ is the classical Laplacian operator. In this case the heat semigroup $(P_t)_{t \geqslant 0}$ is given by the following density $$
\forall x,y \in \mathbb R^n, \ t >0, \ p_t(x,y)= \frac{1}{(4 \pi t)^{n/2}}e^{-\frac{|x-y|^2}{4t}}, $$ and the reversible measure $m$ is the Lebesgue measure. Notice that in this case, the functions $A$ and $B$ which appear in hypotheses $(i)$ to $(iv)$ in Section~\ref{sec-sch} can be chosen as $$
\forall x \in \mathbb R^n, \ A(x)=B(x):=|x|^2. $$ Hence in this case $$ \mathcal P_2^*(\mathbb{R}^n)= \left\{ \mu \in \mathcal P_2(\mathbb R^n): \ - \infty < \mathbf{\mathcal F}(\mu) < \infty \right\}. $$
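As a simple sanity check (a standard Gaussian computation), the standard Gaussian measure $d\mu:=(2 \pi)^{-n/2}e^{-|x|^2/2}dx$ belongs to $\mathcal P_2^*(\mathbb{R}^n)$: since $\nabla \log \frac{d\mu}{dm}(x)=-x$ and $\int |x|^2 d\mu =n$, we find
$$
\mathbf{\mathcal F}(\mu)= \int \log\left(\frac{d\mu}{dm}\right) d\mu = -\frac{n}{2}\log(2\pi)-\frac{1}{2}\int |x|^2 d\mu= -\frac{n}{2}\log(2 \pi e), \qquad \mathcal I(\mu)= \int |x|^2 d\mu = n.
$$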
A natural question is the following: given a probability measure $\nu$, can we find a formula for the derivative of the function $t \mapsto \mathcal{C}(\mu_t,\nu)$, where $(\mu_t)_{t \in [0,1]}$ is a smooth curve in $\mathcal{P}_2(N)$? From a formal point of view, we can easily find an answer. Here we use the notation $\mathbf{dt} \mu_s^t$ (resp. $\mathbf{ds} \mu_s^t$) for the velocity of a given path $(\mu_s^t)_{(s,t) \in [0,1] \times [0,1]}$ with respect to $t$ (resp. with respect to $s$), to avoid confusion between the two variables. For every $t \in [0,1]$, let $(\mu_s^t)_{s \in [0,1]}$ be the entropic interpolation from $\mu_t$ to $\nu$; then \begin{equation*} \begin{split}
\frac{1}{2}\frac{d}{dt}\mathcal{C}(\mu_t,\nu)&=\frac{1}{2}\frac{d}{dt} \int _0^1 \|\mathbf{ds}\mu_s^t\|^2_{L^2(\mu_s^t)}+\|\mathbf{\mathbf{grad}}_{\mu_s^t} \mathbf{\mathcal F}\|_{L^2(\mu_s^t)}^2ds \\ &=\int_0^1 \langle \mathbf{D_t} \mathbf{ds} \mu_s^t, \mathbf{ds} \mu_s^t \rangle_{L^2(\mu_s^t)} + \mathbf{Hess}_{\mu_s^t} \mathbf{\mathcal F} ( \mathbf{dt} \mu_s^t, \mathbf{\mathbf{grad}}_{\mu_s^t} \mathbf{\mathcal F})ds \\ &= \int_0^1 \langle \mathbf{D_s} \mathbf{dt} \mu_s^t, \mathbf{ds} \mu_s^t \rangle_{L^2(\mu_s^t)} + \mathbf{Hess}_{\mu_s^t} \mathbf{\mathcal F} ( \mathbf{dt} \mu_s^t, \mathbf{\mathbf{grad}}_{\mu_s^t} \mathbf{\mathcal F })ds. \end{split} \end{equation*} Here we have used~\cite[Lemma~20]{gentil-leonard2020} to invert the derivatives. Noticing that $$\langle \mathbf{D_s} \mathbf{dt} \mu_s^t, \mathbf{ds} \mu_s^t \rangle_{L^2(\mu_s^t)}= \frac{d}{ds} \langle \mathbf{dt} \mu_s^t, \mathbf{ds} \mu_s^t \rangle_{L^2(\mu_s^t)}- \langle \mathbf{dt} \mu_s^t , \mathbf{D_s} \mathbf{ds} \mu_s^t \rangle_{L^2(\mu_s^t) }$$ and using the Newton equation~(\ref{eq-newton}) we have \begin{equation} \label{eq-1}
\frac{1}{2}\frac{d}{dt}\mathcal{C}(\mu_t,\nu)= \int_0^1 \frac{d}{ds} \langle \mathbf{dt} \mu_s^t, \mathbf{ds} \mu_s^t \rangle_{L^2(\mu_s^t)}ds= -\langle \dot \mu_t , \mathbf{ds} \mu_s^t|_{s=0} \rangle_{L^2(\mu_t)}. \end{equation} Here the boundary term at $s=1$ vanishes because $\mu_1^t=\nu$ does not depend on $t$. Unfortunately we do not see how to turn this computation into a rigorous proof. \\
From another point of view, we can try to derive the static formulation of the Schrödinger problem. Once again, we can easily guess a formula from a heuristic point of view. Indeed, let $(\mu_t)_{t \in [0,1]}$ be a smooth curve in $\mathcal P_2(N)$. For every $t \in [0,1]$ we denote by $\gamma_t=f^t \otimes g^t dR_{01}$ the optimal transport plan for the Schrödinger problem from $\mu_t$ to $\nu$. Then \begin{equation*} \begin{split}
\frac{d}{dt}\mathrm{Sch}(\mu_t,\nu)&=\frac{d}{dt}H(\gamma_t|R_{01}) \\ &= \langle \dot \gamma_t, \nabla \log \gamma_t \rangle_{L^2(\gamma_t)}. \\ \end{split} \end{equation*} Since $\gamma_t$ is a transport plan from $\mu_t$ to $\nu$, it can easily be shown that $\langle \dot \gamma_t, \nabla \log \gamma_t \rangle_{L^2(\gamma_t)}=\langle \dot \mu_t, \nabla \log f^t \rangle_{L^2(\mu_t)}$. Hence we obtain \begin{equation*} \frac{d}{dt}\mathrm{Sch}(\mu_t,\nu)=\langle \dot \mu_t, \nabla \log f^t \rangle_{L^2(\mu_t)}. \end{equation*} Note that this is equivalent to equation~(\ref{eq-1}) thanks to the Benamou-Brenier-Schrödinger formula. This argument is not rigorous because we do not have the required regularity of $\gamma_t$. To prove our results, we follow the idea of Villani in~\cite[Theorem~23.9]{villani2009}, where he computes the derivative of the Wasserstein distance along curves. Before the statement of our main theorem, a technical lemma is needed. This lemma is an easy corollary of the proof of~\cite[Theorem 4.2.3]{tamanini2017}.
\begin{elem} \label{lemunif} Let $(\mu_k)_{k \in \mathbb N},(\nu_k)_{k \in \mathbb N} \subset \mathcal P^*_2(\mathbb{R}^n)$ and $\mu,\nu \in \mathcal P^*_2(\mathbb{R}^n)$ such that $\mu_k$ converges toward $\mu$ with respect to the Wasserstein distance when $k \rightarrow \infty$ (resp. $\nu_k$ to $\nu$). For every $k \in \mathbb N$, we denote by $\gamma_k=f^k \otimes g^k dR_{01}$ the optimal transport plan for the Schrödinger problem from $\mu_k$ to $\nu_k$ and $\gamma = f \otimes g dR_{01}$ the optimal transport plan for the Schrödinger problem from $\mu$ to $\nu$. Assume that $(\frac{d \mu_k}{d m})_{k \in \mathbb N}$ and $(\frac{d \nu_k}{d m})_{k \in \mathbb N}$ are uniformly bounded on compact sets. Then for every compact set $K \subset N$, up to extraction, $(f^k)_{k \in \mathbb N}$ and $(g^k)_{k \in \mathbb N}$ are uniformly bounded in $L^{\infty}(K,m)$. Furthermore $$ f^k \underset{k \rightarrow \infty}{\overset{*}{\rightharpoonup}} f, \ g^k \underset{k \rightarrow \infty}{\overset{*}{\rightharpoonup}} g, $$ where the weak star convergence is understood in $L^{\infty}(K,m)$. \end{elem}
In addition to this lemma, the following fact is central to our proof. Given two probability measures $p,r$ on $\mathbb R^n$ and a smooth enough function $\varphi: \mathbb R^n \rightarrow \mathbb R^n$, we have $$
\frac{d \varphi \#p}{dm}=\frac{\frac{dp}{dm}}{|\det J_{\varphi}|} \circ \varphi^{-1}, $$
where $|\det J_{\varphi}|$ is the Jacobian determinant of $\varphi$. We often refer to this result as the Monge-Ampère equation or the Jacobian equation, see~\cite[Theorem 11.1]{villani2009} or \cite[Lemma 5.5.3]{ambrosio-gigli2008}. Using this equation, we obtain \begin{equation} \label{eq-devent}
H(\varphi \#p|r)=H(p|r)- \int \log |\det J_{\varphi}|dp + \int \left( \log \frac{dr}{dm}-\log \frac{dr}{dm} \circ \varphi \right)dp, \end{equation}
Given a curve $(\mu_t)_{t} \subset \mathcal P_2(N)$ and a measure $\nu \in \mathcal P_2(N)$, the idea of the following proof is to apply equation~(\ref{eq-devent}) with $r=R_{01}$, $p= \gamma_t$ the optimal transport plan for the Schrödinger problem from $\mu_t$ to $\nu$, and $\varphi= T_{t \rightarrow s} \times \mathrm{Id}$, in order to bound $\mathrm{Sch}(\mu_s,\nu)$ from above, and then let $s \rightarrow t$.
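For the reader's convenience, equation~(\ref{eq-devent}) follows from the Jacobian equation together with the change of variables $\int h \, d(\varphi \# p)= \int h \circ \varphi \, dp$:
\begin{equation*}
\begin{split}
H(\varphi \# p|r)&= \int \log \left(\frac{d \varphi \# p}{dm}\right) d \varphi \# p - \int \log \left(\frac{dr}{dm}\right) d \varphi \# p \\
&= \int \left( \log \frac{dp}{dm} - \log |\det J_{\varphi}| \right) dp - \int \log \left(\frac{dr}{dm}\right) \circ \varphi \, dp,
\end{split}
\end{equation*}
and adding and subtracting $\int \log \frac{dr}{dm}\, dp$ gives the claimed identity.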
\begin{ethm}[Derivative of the Schrödinger cost] \label{lem-1} Let $(\nu_t)_{t \in (t_1,t_2)} \subset \mathcal P_2^* (\mathbb{R}^n)$ and $(\mu_t)_{t \in (t_1,t_2)} \subset \mathcal P_2^*(\mathbb{R}^n)$ be two absolutely continuous curves, for some $(t_1,t_2) \subset \mathbb R$. Furthermore assume that \begin{enumerate}[(i)]
\item For every $t \in (t_1,t_2)$ the measure $\mu_t$ has a smooth bounded density with respect to $m$.
\item There exists a constant $C>0$ such that for every $t \in (t_1,t_2)$ and $x \in \mathbb R^n$ we have $|\dot \mu_t (x)| \leq C(1+|x|)$. \label{h5}
\item The families $\left( \frac{d \mu_t}{d m}\right)_{t \in (t_1,t_2)}$ and $\left( \frac{d \nu_t}{d m}\right)_{t \in (t_1,t_2)}$ are uniformly bounded on compact sets.
\item The functions $t \mapsto \mathbf{\mathcal F}( \mu_t)$ and $t \mapsto \mathbf{\mathcal F} (\nu_t)$ are differentiable, with $\frac{d}{dt} \mathbf{\mathcal F}(\mu_t)= \langle \nabla \log \frac{d \mu_t}{dm} , \dot \mu_t \rangle_{L^2(\mu_t)}$ (resp. for $\mathbf{\mathcal F}(\nu_t)$).
\item $\int_{t_1}^{t_2} \mathcal{L}( \dot \mu_t) dt < \infty$ and $\int_{t_1}^{t_2} \| \dot \mu_t \|_{L^2(\mu_t)} dt < \infty$, where $\mathcal{L}(\dot \mu_t)$ is defined in Definition~\ref{constant}.
\item \label{h6} For every $t \in (t_1,t_2)$ the functions $f^t$ and $g^t$ are in $L^1(m)$, where $(f^t,g^t)$ is the unique solution in $L^{\infty}(m) \times L^{\infty}(m)$ of the Schrödinger system
$$
\left\{\begin{array}{cc}
\frac{d \mu_t}{dm}=f^t P_1 g^t, \\
\frac{d \nu_t}{dm}=g^t P_1 f^t.
\end{array} \right.
$$ \end{enumerate} Then the map $t \mapsto \mathrm{Sch}(\mu_t,\nu_t)$ is differentiable almost everywhere and for almost every $t \in (t_1,t_2)$ we have $$ \frac{d}{dt}\mathrm{Sch}(\mu_t,\nu_t)=\langle \dot \mu_t, \nabla \log f^t \rangle_{L^2(\mu_t)}+\langle \dot \nu_t , \nabla \log g^t \rangle_{L^2(\nu_t)}. $$ Furthermore for almost every $t \in (t_1,t_2)$ this equality can be rewritten as $$
\frac{d}{dt} \mathcal C (\mu_t,\nu_t)= 2\langle \dot \nu_t , \mathbf{ds} {\mu_s^t}_{|s=1} \rangle_{L^2(\nu_t)} - 2\langle \dot \mu_t , \mathbf{ds} {\mu_s^t}_{|s=0} \rangle_{L^2(\mu_t)}, $$ where $(\mu_s^t)_{s \in [0,1]}$ is the entropic interpolation from $\mu_t$ to $\nu_t$; the factor $2$ is consistent with equation~(\ref{eq-1}). \end{ethm}
\begin{eproof} To begin, we want to show that $$ \frac{d }{dt}\mathrm{Sch}(\mu_t,\nu)=\langle \dot \mu_t , \nabla \log f^t \rangle_{L^2(\mu_t)}, $$ for every $\nu \in \mathcal P_2^*(\mathbb R^n)$ such that $\frac{d \nu}{dm} \in L^{\infty}(m)$. \\
For every $t \in [0,1]$, $\gamma_t$ denotes the optimal transport plan in the Schrödinger problem from $\mu_t$ to $\nu$. Let $t \in [0,1]$ be fixed. Then for every $s$ small enough, by the very definition of the cost, $\mathrm{Sch}(\mu_{t+s},\nu) \leq H((T_{t \rightarrow t+s} \times \mathrm{Id}) \# \gamma_t|R_{01})$, where $(T_{t_1 \rightarrow t_2})_{t_1,t_2 \in [0,1]}$ are the flow maps associated with $(\mu_s)_{s \in [0,1]}$ defined in Subsection~\ref{sec-flowmaps}. Applying equation~(\ref{eq-devent}) with $r=R_{01}$, $p= \gamma_t$ and $\varphi = T_{t \rightarrow t+s} \times \mathrm{Id}$ we obtain \begin{multline*}
H((T_{t \rightarrow t+s} \times \mathrm{Id}) \# \gamma_t |R_{01}) = H(\gamma_t |R_{01}) + \int \log p_1 d \gamma_t- \int \log p_1 d (T_{t \rightarrow t+s} \times \mathrm{Id}) \# \gamma_t \\ -\int \log |\det J_{T_{t \rightarrow t+s}}(x)| d \mu_t(x). \end{multline*} As noticed in~\cite[Eq~(23.11)]{villani2009}, by hypothesis~$(\ref{h5})$ there exists a constant $C$ such that for every $x \in \mathbb R^n$ and $s_1,s_2 \in [0,1]$ \begin{equation} \label{eq-5} \left\{ \begin{array}{cc}
|T_{s_1 \rightarrow s_2}(x)| \leq C(1+|x|) \\
|x - T_{s_1 \rightarrow s_2}(x)| \leq C|s_1 - s_2|(1+|x|). \end{array}\right. \end{equation}
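These bounds can be recovered, for instance, from Grönwall's lemma: since $\frac{d}{ds}T_{s_1 \rightarrow s}(x)= \dot \mu_s \circ T_{s_1 \rightarrow s}(x)$, hypothesis~$(\ref{h5})$ gives
\begin{equation*}
\frac{d}{ds}\left(1+|T_{s_1 \rightarrow s}(x)|\right) \leq |\dot \mu_s \circ T_{s_1 \rightarrow s}(x)| \leq C\left(1+|T_{s_1 \rightarrow s}(x)|\right),
\end{equation*}
hence $1+|T_{s_1 \rightarrow s_2}(x)| \leq e^{C|s_1-s_2|}(1+|x|)$, which yields the first bound; the second follows by integrating $|\frac{d}{ds}T_{s_1 \rightarrow s}(x)| \leq C(1+|T_{s_1 \rightarrow s}(x)|)$ over $s$ between $s_1$ and $s_2$.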
For every $x,y \in \mathbb R^n$, we have $\log p_1(T_{t \rightarrow t+s} x,y)=- \frac{|T_{t \rightarrow t+s}x-y|^2}{4}- \frac{n}{2} \log (4 \pi)$ and \begin{equation*}
\begin{split}
\left| \frac{1}{2}\frac{d}{ds}|T_{t \rightarrow t+s} (x)-y|^2 \right|&= \left| \langle \dot \mu_{t+s}\ \circ T_{t \rightarrow t+s}(x), T_{t \rightarrow t+s}(x)-y \rangle \right| \\
& \leq \frac{|\dot \mu_{t+s}\circ T_{t \rightarrow t+s}(x)|^2}{2} + \frac{|T_{t \rightarrow t+s}(x)-y|^2}{2} \\
& \leq C(1+|x|^2+|y|^2) \in L^1(\gamma_t),
\end{split} \end{equation*} for some constant $C>0$. Hence we can differentiate under the integral sign at $s=0$ to find \begin{equation} \label{eq-4} \int \log p_1 \ d (T_{t \rightarrow t+s} \times \mathrm{Id}) \# \gamma_t = \int \log p_1 d\gamma_t + s \int \langle \dot \mu_{t}(x), \nabla_x \log p_1(x,y)\rangle d \gamma_t(x,y)+o(s). \end{equation} Notice that thanks to the Monge-Ampère equation we have $$
\int \log |\det J_{T_{t \rightarrow t+s}} | d \mu_t= \mathbf{\mathcal F} (\mu_t) - \mathbf{\mathcal F} (\mu_{t+s}) = -s \langle \nabla \log \frac{d\mu_t}{dm}, \dot \mu_t \rangle_{L^2(\mu_t)} + o(s). $$ Combining this with the equation~(\ref{eq-4}), we have $$ \mathrm{Sch}(\mu_{t+s},\nu) \leq \mathrm{Sch}(\mu_t,\nu)-s \left( \int \langle \nabla_x \log p_1(x,y),\dot \mu_t(x) \rangle d \gamma_{t}(x,y) - \langle \nabla \log \frac{d\mu_t}{dm}, \dot \mu_t \rangle_{L^2(\mu_t)} \right) + o(s). $$
Observe that using the hypothesis~$(\ref{h6})$ we have \begin{equation*} \begin{split} \int \langle \nabla_x \log p_1(x,y),\dot \mu_t(x) \rangle d \gamma_{t}(x,y)&= \int \int \langle \nabla_x p_1(x,y),\dot \mu_t(x) \rangle f^t(x)g^t(y)dm(x)dm(y) \\ &= \int \langle \nabla_x \int p_1(x,y) g^t(y) dm(y) , \dot \mu_t (x) \rangle f^t(x) dm(x) \\ &= \int \langle \nabla P_1g^t(x), \dot \mu_t(x) \rangle f^t(x) dm(x) \\ &= \int \langle \nabla \log \frac{d \mu_t}{dm}(x), \dot \mu_t(x) \rangle d \mu_t(x) - \int \langle \nabla \log f^t(x), \dot \mu_t(x)\rangle d\mu_t(x). \end{split} \end{equation*} Hence we obtain $$ \overline{\underset{s \rightarrow 0}{\lim}}\frac{\mathrm{Sch}(\mu_{t+s},\nu)-\mathrm{Sch}(\mu_t,\nu)}{s} \leq \langle \dot \mu_t , \nabla \log f^t\rangle_{L^2(\mu_t)}. $$
For the reverse inequality we use the same kind of estimates. By definition we have $\mathrm{Sch}(\mu_t,\nu) \leq H((T_{t+s \rightarrow t} \times \mathrm{Id}) \# \gamma_{t+s}|R_{01})$. Applying equation~(\ref{eq-devent}) we have \begin{multline*}
H((T_{t+s \rightarrow t} \times \mathrm{Id}) \# \gamma_{t+s}|R_{01})= H(\gamma_{t+s}|R_{01})- \int \log |\det J_{T_{t+s \rightarrow t}}| d \mu_{t+s} \\ - \int \left( \frac{|T_{t+s \rightarrow t}x-y|^2-|x-y|^2}{4} \right) d \gamma_{t+s}. \end{multline*}
As already noticed we have $\int \log |\det J_{T_{t+s \rightarrow t}}| d \mu_{t+s}= \mathbf{\mathcal F}(\mu_{t+s}) - \mathbf{\mathcal F}(\mu_t)= s \langle \dot \mu_t , \nabla \log \frac{d\mu_t}{dm}\rangle_{L^2(\mu_t)}+o(s)$. Now we have to deal with a more complicated term. We want to show that $$
\int \left( \frac{|T_{t+s \rightarrow t}x-y|^2-|x-y|^2}{4} \right) d \gamma_{t+s}(x,y)= s \int \langle \nabla_x \log p_1 (x,y), \dot \mu_t(x) \rangle d \gamma_t(x,y) + o(s). $$ Notice that using~(\ref{eq-5}) we have for every $s>0$ \begin{equation} \label{eq-6}
\left||T_{t+s \rightarrow t}x-y|^2-|x-y|^2 \right| \leq Cs (1+|x|^2+|y|^2) \end{equation} for some $C>0$.
For every $s \in \mathbb R$ small enough, we denote $v_s(x,y)= \frac{|T_{t+s \rightarrow t}x-y|^2-|x-y|^2}{s} $ and $v(x,y)=-2 \langle x-y , \dot \mu_{t}(x)\rangle$. Of course for every $x,y \in \mathbb R^n$, we have $$v_s(x,y) \underset{s \rightarrow 0}{\rightarrow}v(x,y)$$ and by~(\ref{eq-6}) $$
| v_s(x,y) | \leq P(x,y) :=C(1+|x|^2+|y|^2). $$ Let $\chi_R$ be the product function $\chi_R= \mathds{1}_{B(0,R)} \otimes \mathds{1}_{B(0,R)} $. By Lemma~\ref{lemunif}, for every $R>0$ there exists a sequence $(s_k^R)_{k \in \mathbb N}$ which tends to zero as $k$ tends to $\infty$ such that the sequences $(f^{t+s_k^R})$, $(g^{t+s_k^R})$ are uniformly bounded in $L^{\infty}(B(0,R),m)$ and $$ f^{t+s_k^R} \underset{k \rightarrow \infty}{\overset{*}{\rightharpoonup}}f^t,\ g^{t+s_k^R} \underset{k \rightarrow \infty}{\overset{*}{\rightharpoonup}}g^t, $$ where the weak star convergence is understood in $L^{\infty}(K^R,m)$. Now for simplicity we denote $s_k^R=s_k$ and $K^R=B(0,R)$. \\
Note that \begin{multline} \label{eq-7} \int v_{s_k^R}(x,y) d \gamma_{t + s_k^R} (x,y) - \int v(x,y) d \gamma_t(x,y) = \int (1- \chi_R(x,y))v_{s_k^R}(x,y) d \gamma_{t+s_k^R}(x,y) \\ + \int \chi_R(x,y) \left(v_{s_k^R}(x,y)f^{t+s_k^R}(x)g^{t+s_k^R}(y)-v(x,y)f^t(x)g^t(y) \right)dR_{01}(x,y) \\ + \int ( \chi_R(x,y)-1)v(x,y) d \gamma_t(x,y). \end{multline} To obtain the desired estimate we are going to pass to the limsup in $k$, then let $R$ tend to $+ \infty$. The third term is independent of $k$, and by the dominated convergence theorem it is immediate that it tends to $0$ when $R \rightarrow \infty$. Things are trickier for the second term. Denote $$ \varphi_k(x):= \int v_{s_k^R}(x,y)g^{t+s_k^R}(y) p_1(x,y) dm (y) $$ and $$ \varphi(x):= \int v(x,y)g^{t}(y) p_1(x,y) dm (y). $$ Then \begin{equation*} \begin{split}
& \left| \int \chi_R(x,y)v_{s_k^R}(x,y) d \gamma_{t+s_k^R}(x,y) - \int \chi_R(x,y) v(x,y) d \gamma_t(x,y) \right| \\ &= \left| \int_{K^R} \varphi_k f^{t+s_k^R}dm - \int_{K^R} \varphi f^t dm \right| \\
& \leq \left| \int_{K^R} f^{t+s_k^R} (\varphi_k - \varphi) dm \right| + \left| \int_{K^R} \left( f^{t+ s_k^R} -f^t \right) \varphi dm \right| \\
& \leq \underset{n \in \mathbb N}{\sup} \| f^{t+s_n}\|_{L^{\infty}(K^R,m)} \|\varphi_k - \varphi\|_{L^1(K^R,m)}+ \left| \int_{K^R} \left( f^{t+ s_k^R} -f^t \right) \varphi dm \right|. \end{split} \end{equation*} The second term tends to zero thanks to the weak star convergence of $f^{t+s_k^R}$ toward $f^t$ when $k \rightarrow \infty$. Furthermore, the same kind of computation gives for every $k \in \mathbb N$ \begin{multline*}
|\varphi_k(x)-\varphi(x)| \leq \underset{n \in \mathbb N}{\sup}\|g^{t+s_n}\|_{L^{\infty}(K^R,m)} \|\left( v_k(x,\cdot) - v(x, \cdot)\right) p_1(x, \cdot)\|_{L^1(K^R,m)} \\ + \left| \int v(x,y)\left(g^t(y)-g^{t+s_k^R}(y)\right)p_1(x,y)dm(y) \right|. \end{multline*} Again the second term tends to zero thanks to the weak star convergence of $g^{t+s_k^R}$. Using the upper bound $$
\left|\left( v_k(x,\cdot) - v(x, \cdot)\right) p_1(x, \cdot) \right| \leq 2 P(x, \cdot)p_1(x, \cdot) \in L^1(K^R,m) $$ we have by the dominated convergence theorem $$
\|\left( v_k(x,\cdot) - v(x, \cdot)\right) p_1(x, \cdot)\|_{L^1(K^R,m)} \underset{k \rightarrow \infty}{\rightarrow 0}. $$
Hence $\varphi_k \underset{k \rightarrow \infty}{\rightarrow} \varphi$ pointwise. Noting that for every $x \in K^R$ we have $$|\varphi_k(x)| \leq \underset{n \in \mathbb N}{\sup} \| g^{t+s_n}\|_{L^{\infty}(K^R,m)} \int P(x,y)p_1(x,y)dm(y) \in L^{1}(K^R,m),$$
the dominated convergence theorem gives $$ \|\varphi_k-\varphi\|_{L^1(K^R,m)} \underset{k \rightarrow \infty}{\rightarrow} 0. $$ Thus the second term in~(\ref{eq-7}) tends to zero when $k \rightarrow + \infty$. For the first term, notice that for $R \geqslant 1$ $$
\int (1- \chi_R) v_{s_k^R} d \gamma_{t+s_k^R} \leq 2C \int_{|x| \geqslant R}|x|^2 d \mu_{t+s_k^R}(x) + C \int_{|y| \geqslant R}|y|^2 d \nu(y), $$ thus it converges to zero, see~\cite[Definition 6.8 and Theorem 6.9]{villani2009}. Hence, letting $k$ tend to $+ \infty$ and then $R$ tend to $+ \infty$ in $$
\frac{\mathrm{Sch}(\mu_t , \nu) - \mathrm{Sch}(\mu_{t+s_k^R}, \nu)}{s_k^R} \leq - \frac{1}{s_k^R}\int \log |\det J_{T_{t+s_k^R \rightarrow t}}|d\mu_{t+s_k^R} + \int v_{s_k^R}(x,y) d \gamma_{t + s_k^R}(x,y) $$ we obtain $$ \underset{ s \rightarrow 0}{\overline{\lim}} \frac{\mathrm{Sch}(\mu_t, \nu) - \mathrm{Sch}(\mu_{t+s}, \nu)}{s} \leq \int \langle\nabla \log p_1(x,y),\dot \mu_t(x) \rangle d \gamma_{t}(x,y)+ \langle \dot \mu_t , \nabla \log \frac{d \mu_t}{dm} \rangle_{\mu_t}. $$ This is enough to conclude as in the previous case and obtain $$ \underset{s \rightarrow 0}{\underline{\lim}}\frac{\mathrm{Sch}(\mu_{t+s},\nu)-\mathrm{Sch}(\mu_t,\nu)}{s} \geqslant \langle \dot \mu_t , \nabla \log f^t\rangle_{\mu_t}. $$ This ends the case where $\nu_t= \nu$ is constant. Now we need to use a \textquotedblleft doubling of variables \textquotedblright\ technique. Let $s,s',t \in [0,1]$ and let $\gamma_{s,t}$ (resp. $\gamma_{s',t}$) be the optimal transport plan for the Schrödinger problem from $\mu_s$ (resp. $\mu_{s'}$) to $\nu_t$. Then, using the same tricks as before we have $$
H(\gamma_{s',t}|R_{01}) -H(\gamma_{s,t}|R_{01}) \leq \mathbf{\mathcal F}(\mu_{s'})- \mathbf{\mathcal F}(\mu_s)+ \frac{1}{4} \int \left( |x-y|^2 - |T_{s \rightarrow s'}x-y|^2 \right)d \gamma_{s,t}(x,y). $$ Now using~(\ref{eq-6}), the fact that $s \mapsto \mathbf{\mathcal F} (\mu_s)$ is Lipschitz continuous and the fact that the second order moments of both curves are locally bounded, there exists a constant $C>0$ such that $$
H(\gamma_{s',t}|R_{01}) -H(\gamma_{s,t}|R_{01}) \leq C|s-s'|. $$ By symmetry we can take absolute values in this inequality and it follows that the function $(s,t) \mapsto \mathrm{Sch}(\mu_s,\nu_t)$ is locally absolutely continuous in $s$ uniformly in $t$ (also absolutely continuous in $t$ uniformly in $s$). Hence by~\cite[Lemma 23.28]{villani2009} the desired result follows. \end{eproof}
\begin{eex}[Contraction property along Gaussian curves] For $m, \sigma^2 >0$ we denote by $\mathcal{N}(m,\sigma^2)$ the probability measure on $\mathbb R$ given by $$ d\mathcal{N}(m,\sigma^2)(x):= \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(- \frac{(x-m)^2}{2 \sigma^2}\right)dx. $$ Considering the measures $\mu:= \mathcal{N}(m_0,1)$ and $\nu:= \mathcal{N}(m_1 , 1)$, it follows from~\cite[Sec A.2]{clerclongtime2020} that the curves $(P_t^* \mu)_{t \geqslant 0}$ and $(P_t^* \nu)_{t \geqslant 0}$ satisfy the hypotheses of Theorem~\ref{lem-1}. If we denote by $(\mu_s^t)_{s \in [0,1]}$ the entropic interpolation from $P_t^* \mu$ to $P_t^* \nu$, then applying Theorem 5.2 we obtain for almost every $t > 0$ \begin{equation} \label{eq-666}
\begin{split}
\frac{d}{dt} \mathcal C (P_t^* \mu , P_t^* \nu)&= - \langle \mathbf{\mathbf{grad}}_{P_t^* \nu} \mathcal Ent , \mathbf{ds} \mu_s^t|_{s=1}\rangle_{L^2(\nu_t)} + \langle \mathbf{\mathbf{grad}}_{P_t^* \mu} \mathcal Ent , \mathbf{ds} \mu_s^t|_{s=0}\rangle_{L^2(\mu_t)} \\
&=-\frac{d}{ds} \mathcal Ent (\mu_s^t)|_{s=1} + \frac{d}{ds} \mathcal Ent(\mu_s^t)|_{s=0} \\
&=- \int_0^1 \frac{d^2}{ds^2} \mathcal Ent (\mu_s^t) ds. \\
\end{split} \end{equation} Using the Newton equation~(\ref{eq-newton}) and the $CD(0,1)$ curvature-dimension condition, for every $t >0 $ and $s \in [0,1]$ we have \begin{equation*} \begin{split} \frac{d^2}{ds^2} \mathcal Ent(\mu_s^t)&= \mathbf{Hess}_{\mu_s^t} \mathcal Ent (\mathbf{ds} \mu_s^t, \mathbf ds \mu_s^t)+\mathbf{Hess}_{\mu_s^t}\mathcal Ent (\mathbf{\mathbf{grad}}_{\mu_s^t} \mathcal Ent, \mathbf{\mathbf{grad}}_{\mu_s^t} \mathcal Ent) \\
&\geqslant \left( \left( \frac{d}{ds} \mathcal Ent(\mu_s^t)\right)^2 + |\mathbf{\mathbf{grad}}_{\mu_s^t}\mathcal Ent(\mu_s^t)|_{L^2(\mu_s^t)}^4\right) . \end{split} \end{equation*} Then by Jensen's inequality, neglecting the second term, we obtain for every $t \geqslant 0$, $$ \int_0^1 \frac{d^2}{ds^2} \mathcal Ent(\mu_s^t) ds \geqslant \left(\mathcal Ent ( P_t^* \mu) - \mathcal Ent (P_t^* \nu ) \right)^2. $$ Using this and integrating the equality~(\ref{eq-666}) we obtain for every $t \geqslant 0$, $$ \mathcal{C} (P_t^* \mu, P_t^* \nu) \leq \mathcal{C}(\mu,\nu) - \int_0^t \left(\mathcal Ent ( P_u^* \mu) - \mathcal Ent( P_u^* \nu ) \right)^2 du. $$ Hence we recover the $(0,1)$-contraction property of the entropic cost proved in~\cite[Theorem 37]{gentil-leonard2020}. The result could, of course, be proven in $\dR^n$ for $n\geq1$.
\end{eex}
\paragraph{Acknowledgements} This work was supported by the French ANR-17-CE40-0030 EFI project. I warmly thank the anonymous referees for their work.
\end{document} | arXiv |
\begin{definition}[Definition:Solid Angle/Subtend]
Let $S$ be a surface oriented in space.
Let $P$ be a point in that space.
The '''solid angle subtended''' by $S$ at $P$ is equal to the surface integral:
:$\ds \Omega = \iint_S \frac {\mathbf {\hat r} \cdot \mathbf {\hat n} \rd S} {r^2}$
where:
:$\mathbf {\hat r} = \dfrac {\mathbf r} r$ is the unit vector corresponding to the position vector $\mathbf r$ of the infinitesimal surface element $\d S$ with respect to $P$
:$r$ is the magnitude of $\mathbf r$
:$\mathbf {\hat n}$ represents the unit normal to $\d S$.
\end{definition} | ProofWiki |
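As a numerical sanity check of this surface integral, the Python sketch below (the function names, the midpoint-rule discretization, and the test configuration are illustrative choices, not part of the definition) computes the solid angle subtended at the origin by a rectangle lying in the plane $z = d$, and compares it with the standard closed-form "pyramid" formula for that configuration.

```python
import math

def solid_angle_rect(a, b, d, n=400):
    """Midpoint-rule approximation of Omega = double integral of
    (r_hat . n_hat)/r^2 dS over an a-by-b rectangle in the plane z = d
    (unit normal z_hat), viewed from the origin."""
    total, dx, dy = 0.0, a / n, b / n
    for i in range(n):
        x = -a / 2 + (i + 0.5) * dx
        for j in range(n):
            y = -b / 2 + (j + 0.5) * dy
            r2 = x * x + y * y + d * d
            # (r_hat . n_hat)/r^2 = (d/r)/r^2 = d/r^3 for this geometry
            total += d / (r2 * math.sqrt(r2)) * dx * dy
    return total

def solid_angle_rect_exact(a, b, d):
    """Closed-form 'pyramid' formula for the same on-axis rectangle."""
    al, be = a / (2 * d), b / (2 * d)
    return 4 * math.atan(al * be / math.sqrt(1 + al * al + be * be))

num = solid_angle_rect(2.0, 1.0, 1.5)
exact = solid_angle_rect_exact(2.0, 1.0, 1.5)
```

As the rectangle grows, both values approach $2\pi$, the solid angle of a half-space.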
1.5: Lines and Planes
Book: Vector Calculus (Corral), Chapter 1: Vectors in Euclidean Space. Contributed by Michael Corral, Professor (Mathematics) at Schoolcraft College.
Line through a point, parallel to a vector
Line through two points
Distance between a point and a line
Plane through a point, perpendicular to a vector
Plane containing three noncollinear points
Distance between a point and a plane
Line of intersection of two planes
Now that we know how to perform some operations on vectors, we can start to deal with some familiar geometric objects, like lines and planes, in the language of vectors. The reason for doing this is simple: using vectors makes it easier to study objects in 3-dimensional Euclidean space. We will first consider lines.
Let \(P = (x_{0}, y_{0}, z_{0})\) be a point in \(\mathbb{R}^{3}\), let \(\textbf{v} = (a,b,c)\) be a nonzero vector, and let \(L\) be the line through \(P\) which is parallel to \(\textbf{v}\) (see Figure 1.5.1).
Figure 1.5.1
Let \(\textbf{r} = (x_{0}, y_{0}, z_{0})\) be the \(\textit{vector}\) pointing from the origin to \(P\). Since multiplying the vector \(\textbf{v}\) by a scalar \(t\) lengthens or shrinks \(\textbf{v}\), preserving its direction if \(t > 0\) and reversing it if \(t < 0\), we see from Figure 1.5.1 that every point on the line \(L\) can be obtained by adding the vector \(t \textbf{v}\) to the vector \(\textbf{r}\) for some scalar \(t\). That is, as \(t\) varies over all real numbers, the vector \(\textbf{r} + t \textbf{v}\) will point to every point on \(L\). We can summarize the \(\textit{vector representation of } L\) as follows:
For a point \(P = (x_{0}, y_{0}, z_{0})\) and nonzero vector \(\textbf{v}\) in \(\mathbb{R}^{3}\), the line \(L\) through \(P\) parallel to \(\textbf{v}\) is given by
$$\textbf{r} + t \textbf{v}, \text{for} -\infty < t < \infty \label{Eq1.16}$$
where \(\textbf{r} = (x_{0}, y_{0}, z_{0})\) is the vector pointing to \(P\).
Note that we used the correspondence between a vector and its terminal point. Since \(\textbf{v} = (a,b,c)\), then the terminal point of the vector \(\textbf{r} + t \textbf{v}\) is \((x_{0} + at, y_{0} + bt, z_{0} + ct)\). We then get the \(\textit{parametric representation of L}\) with the \(\textit{parameter}\) \(t\):
For a point \(P = (x_{0}, y_{0}, z_{0})\) and nonzero vector \(\textbf{v} = (a,b,c)\) in \(\mathbb{R}^{3}\), the line \(L\) through \(P\) parallel to \(\textbf{v}\) consists of all points \((x,y,z)\) given by
$$x = x_{0} + at,\quad y = y_{0} + bt,\quad z = z_{0} + ct, \text{for} -\infty < t < \infty \label{Eq1.17}$$
Note that in both representations we get the point \(P\) on \(L\) by letting \(t = 0\).
In Equation \ref{Eq1.17}, if \(a \ne 0\), then we can solve for the parameter \(t\): \(t = (x - x_{0})/a\). We can also solve for \(t\) in terms of \(y\) and in terms of \(z\) if neither \(b\) nor \(c\), respectively, is zero: \(t = (y - y_{0})/b\) and \(t = (z - z_{0})/c\). These three values all equal the same value \(t\), so we can write the following system of equalities, called the \(\textit{symmetric representation of L}\):
For a point \(P = (x_{0}, y_{0}, z_{0})\) and vector \(\textbf{v} = (a,b,c)\) in \(\mathbb{R}^{3}\) with \(a\), \(b\) and \(c\) all nonzero, the line \(L\) through \(P\) parallel to \(\textbf{v}\) consists of all points \((x,y,z)\) given by the equations
$$\frac{x - x_{0}}{a} = \frac{y - y_{0}}{b} = \frac{z - z_{0}}{c}$$
What if, say, \(a = 0\) in the above scenario? We can not divide by zero, but we do know that \(x = x_{0} + at\), and so \(x = x_{0} + 0t = x_{0}\). Then the symmetric representation of \(L\) would be:
$$x = x_{0}, \frac{y - y_{0}}{b} = \frac{z - z_{0}}{c}$$
Note that this says that the line \(L\) lies in the \(\textit{plane}\) \(x = x_{0}\), which is parallel to the \(yz\)-plane (Figure 1.5.2). Similar equations can be derived for the cases when \(b = 0\) or \(c = 0\).
You may have noticed that the vector representation of \(L\) in Equation \ref{Eq1.16} is more compact than the parametric and symmetric formulas. That is an advantage of using vector notation. Technically, though, the vector representation gives us the \(\textit{vectors}\) whose terminal points make up the line \(L\), not just \(L\) itself. So you have to remember to identify the vectors \(\textbf{r} + t \textbf{v}\) with their terminal points. On the other hand, the parametric representation \(\textit{always}\) gives just the points on \(L\) and nothing else.
Write the line \(L\) through the point \(P = (2,3,5)\) and parallel to the vector \(\textbf{v} = (4,-1,6)\), in the following forms: (a) vector, (b) parametric, (c) symmetric. Lastly: (d) find two points on \(L\) distinct from \(P\).
(a) Let \(\textbf{r} = (2,3,5)\). Then by Equation \ref{Eq1.16}, \(L\) is given by:
$$\nonumber \textbf{r} + t \textbf{v} = (2,3,5) + t(4,-1,6), \text{for} -\infty < t < \infty$$
(b) \(L\) consists of the points \((x,y,z)\) such that
$$\nonumber x = 2 + 4t, y = 3 - t, z = 5 + 6t, \text{for} -\infty < t < \infty$$
(c) \(L\) consists of the points \((x,y,z)\) such that
$$\nonumber \frac{x - 2}{4} = \frac{y - 3}{-1} = \frac{z - 5}{6}$$
(d) Letting \(t=1\) and \(t=2\) in part(b) yields the points \((6,2,11)\) and \((10,1,17)\) on \(L\).
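Since the parametric form is just componentwise arithmetic, it can be checked mechanically. The short Python sketch below (the helper name `line_point` is mine, not from the text) regenerates the points of part (d) and verifies that each satisfies the symmetric equations of part (c), with all three ratios equal to the parameter \(t\).

```python
def line_point(r, v, t):
    """Return the point r + t*v on the line through r with direction v."""
    return tuple(ri + t * vi for ri, vi in zip(r, v))

r, v = (2, 3, 5), (4, -1, 6)    # P and v from Example 1.19
p1 = line_point(r, v, 1)        # (6, 2, 11)
p2 = line_point(r, v, 2)        # (10, 1, 17)
# Each point satisfies (x-2)/4 = (y-3)/(-1) = (z-5)/6, all equal to its t.
ratios = [(p1[0] - 2) / 4, (p1[1] - 3) / (-1), (p1[2] - 5) / 6]
```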
Let \(P_{1} = (x_{1}, y_{1}, z_{1})\) and \(P_{2} = (x_{2}, y_{2}, z_{2})\) be distinct points in \(\mathbb{R}^{3}\), and let \(L\) be the line through \(P_{1}\) and \(P_{2}\). Let \(\textbf{r}_{1} = (x_{1}, y_{1}, z_{1})\) and \(\textbf{r}_{2} = (x_{2}, y_{2}, z_{2})\) be the vectors pointing to \(P_{1}\) and \(P_{2}\), respectively. Then as we can see from Figure 1.5.3, \(\textbf{r}_{2} - \textbf{r}_{1}\) is the vector from \(P_{1}\) to \(P_{2}\). So if we multiply the vector \(\textbf{r}_{2} - \textbf{r}_{1}\) by a scalar \(t\) and add it to the vector \(\textbf{r}_{1}\), we will get the entire line \(L\) as \(t\) varies over all real numbers. The following is a summary of the vector, parametric, and symmetric forms for the line \(L\):
Let \(P_{1} = (x_{1}, y_{1}, z_{1})\), \(P_{2} = (x_{2}, y_{2}, z_{2})\) be distinct points in \(\mathbb{R}^{3}\), and let \(\textbf{r}_{1} = (x_{1}, y_{1}, z_{1})\), \(\textbf{r}_{2} = (x_{2}, y_{2}, z_{2})\). Then the line \(L\) through \(P_{1}\) and \(P_{2}\) has the following representations:
\(\textit{Vector:}\)
$$\textbf{r}_{1} + t(\textbf{r}_{2} - \textbf{r}_{1}) \text{, for} -\infty < t < \infty$$
\(\textit{Parametric:}\)
$$x = x_{1} + (x_{2} - x_{1})t, y = y_{1} + (y_{2} - y_{1})t, z = z_{1} + (z_{2} - z_{1})t, \text{for} -\infty < t < \infty \label{Eq1.21}$$
\(\textit{Symmetric:}\)
$$\frac{x - x_{1}}{x_{2} - x_{1}} = \frac{y - y_{1}}{y_{2} - y_{1}} = \frac{z - z_{1}}{z_{2} - z_{1}} \text{if \(x_{1} \ne x_{2}\), \(y_{1} \ne y_{2}\), and \(z_{1} \ne z_{2}\)}$$
Write the line \(L\) through the points \(P_{1} = (-3,1,-4)\) and \(P_{2} = (4,4,-6)\) in parametric form.
By Equation \ref{Eq1.21}, \(L\) consists of the points \((x,y,z)\) such that
$$\nonumber x = -3 + 7t, y = 1 +3t, z = -4 -2t, \text{for} -\infty < t < \infty$$
Let \(L\) be a line in \(\mathbb{R}^{3}\) in vector form as \(\textbf{r} + t \textbf{v}\) (for \(-\infty < t < \infty\)), and let \(P\) be a point not on \(L\). The distance \(d\) from \(P\) to \(L\) is the length of the line segment from \(P\) to \(L\) which is perpendicular to \(L\) (see Figure 1.5.4). Pick a point \(Q\) on \(L\), and let \(\textbf{w}\) be the vector from \(Q\) to \(P\). If \(\theta\) is the angle between \(\textbf{w}\) and \(\textbf{v}\), then \(d = \norm{\textbf{w}}\,\sin \theta\). So since \(\norm{\textbf{v} \times \textbf{w}} = \norm{\textbf{v}}\,\norm{\textbf{w}}\,\sin \theta\) and \(\textbf{v} \ne \textbf{0}\), then:
$$d = \frac{\norm{\textbf{v} \times \textbf{w}}}{\norm{\textbf{v}}}$$
Find the distance \(d\) from the point \(P = (1,1,1)\) to the line \(L\) in Example 1.20.
From Example 1.20, we see that we can represent \(L\) in vector form as: \(\textbf{r} + t \textbf{v}\), for \(\textbf{r} = (-3,1,-4)\) and \(\textbf{v} = (7,3,-2)\). Since the point \(Q = (-3,1,-4)\) is on \(L\), then for \(\textbf{w} = \overrightarrow{QP} = (1,1,1) - (-3,1,-4) = (4,0,5)\), we have:
$$\nonumber \textbf{v} \times \textbf{w} = \left|\begin{array}{rrr}\textbf{i} & \textbf{j} & \textbf{k}\\7 & 3 & -2\\
4 & 0 & 5\end{array}\right|
= \left|\begin{array}{rr}3 & -2\\0 & 5\end{array}\right| \textbf{i} \;-\;
\left|\begin{array}{rr}7 & -2\\4 & 5\end{array}\right| \textbf{j} \;+\;
\left|\begin{array}{rr}7 & 3\\4 & 0\end{array}\right| \textbf{k}
= 15\,\textbf{i} - 43\,\textbf{j} - 12\,\textbf{k} \text{ , so}\\ \nonumber
d = \frac{\norm{\textbf{v} \times \textbf{w}}}{\norm{\textbf{v}}} =
\frac{\norm{15\,\textbf{i} - 43\,\textbf{j} - 12\,\textbf{k}}}{\norm{(7,3,-2)}}
= \frac{\sqrt{15^{2} + (-43)^{2} + (-12)^{2}}}{\sqrt{7^{2} + 3^{2} + (-2)^{2}}}
= \frac{\sqrt{2218}}{\sqrt{62}} = 5.98$$
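The formula \(d = \norm{\textbf{v} \times \textbf{w}}/\norm{\textbf{v}}\) translates directly into code. The Python sketch below (the helper names are mine) reproduces the computation of Example 1.21 from scratch.

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(u):
    """Euclidean length of a vector."""
    return math.sqrt(sum(c * c for c in u))

def dist_point_line(p, q, v):
    """Distance from p to the line through q with direction v:
    ||v x w|| / ||v||, where w is the vector from q to p."""
    w = tuple(pi - qi for pi, qi in zip(p, q))
    return norm(cross(v, w)) / norm(v)

d = dist_point_line((1, 1, 1), (-3, 1, -4), (7, 3, -2))  # setup of Example 1.21
```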
It is clear that two lines \(L_{1}\) and \(L_{2}\), represented in vector form as \(\textbf{r}_{1} + s \textbf{v}_{1}\) and \(\textbf{r}_{2} + t \textbf{v}_{2}\), respectively, are parallel (denoted as \(L_{1} \parallel L_{2}\)) if \(\textbf{v}_{1}\) and \(\textbf{v}_{2}\) are parallel. Also, \(L_{1}\) and \(L_{2}\) are perpendicular (denoted as \(L_{1} \perp L_{2}\)) if \(\textbf{v}_{1}\) and \(\textbf{v}_{2}\) are perpendicular.
In 2-dimensional space, two lines are either identical, parallel, or they intersect. In 3-dimensional space, there is an additional possibility: two lines can be \(\textbf{skew}\), that is, they do not intersect but they are not parallel. However, even though they are not parallel, skew lines are on parallel planes (see Figure 1.5.5).
To determine whether two lines in \(\mathbb{R}^{3}\) intersect, it is often easier to use the parametric representation of the lines. In this case, you should use different parameter variables (usually \(s\) and \(t\)) for the lines, since the values of the parameters may not be the same at the point of intersection. Setting the two \((x,y,z)\) triples equal will result in a system of 3 equations in 2 unknowns (\(s\) and \(t\)).
Find the point of intersection (if any) of the following lines:
$$\nonumber \frac{x + 1}{3} = \frac{y - 2}{2} = \frac{z - 1}{-1} \text{ and } x + 3 = \frac{y - 8}{-3} = \frac{z + 3}{2}$$
First we write the lines in parametric form, with parameters \(s\) and \(t\):
$$\nonumber x = -1 + 3s, y = 2 + 2s, z = 1 - s \text{ and } x = -3 + t, y = 8 - 3t, z = -3 + 2t$$
The lines intersect when \((-1 + 3s,2 + 2s,1 - s) = (-3 + t,8 - 3t,-3 + 2t)\) for some \(s\), \(t\):
\[\nonumber \begin{align} -1 + 3s &= -3 + t && \Rightarrow \; t = 2 + 3s \\ \nonumber
2 + 2s &= 8 - 3t && \Rightarrow \; 2 + 2s = 8 - 3(2 + 3s) = 2 - 9s \;\Rightarrow\; 11s = 0 \;\Rightarrow\; s = 0 \;\Rightarrow\; t = 2 + 3(0) = 2 \\ \nonumber
1 - s &= -3 + 2t && \Rightarrow \; 1 - 0 = -3 + 2(2) \;\Rightarrow\; 1 = 1 \;\checkmark \text{ (note that we had to check this)} \end{align}\]
Letting \(s = 0\) in the equations for the first line, or letting \(t = 2\) in the equations for the second line, gives the point of intersection \((-1,2,1)\).
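The same procedure (solve the 2×2 system in \(s\) and \(t\) from two coordinates, then check the remaining one) can be sketched in Python. The function below is my own illustration, not from the text; for simplicity it assumes the first two coordinates give a usable system, and returns `None` for parallel or skew lines.

```python
def intersect_lines(p1, v1, p2, v2, tol=1e-9):
    """Solve p1 + s*v1 = p2 + t*v2 using the first two coordinates, then
    check the third. Returns the intersection point, or None if the 2x2
    system is degenerate or the third coordinate fails (parallel/skew)."""
    # 2x2 system: s*v1x - t*v2x = p2x - p1x ; s*v1y - t*v2y = p2y - p1y
    a, b = v1[0], -v2[0]
    c, d = v1[1], -v2[1]
    e, f = p2[0] - p1[0], p2[1] - p1[1]
    det = a * d - b * c
    if abs(det) < tol:
        return None
    s = (e * d - b * f) / det
    t = (a * f - e * c) / det
    pt = tuple(p + s * v for p, v in zip(p1, v1))
    qt = tuple(p + t * v for p, v in zip(p2, v2))
    if all(abs(x - y) < tol for x, y in zip(pt, qt)):
        return pt
    return None  # consistent in two coordinates but not the third: skew

# the two lines of Example 1.22, in point/direction form
pt = intersect_lines((-1, 2, 1), (3, 2, -1), (-3, 8, -3), (1, -3, 2))
```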
We will now consider planes in 3-dimensional Euclidean space.
Let \(P\) be a plane in \(\mathbb{R}^{3}\), and suppose it contains a point \(P_{0} = (x_{0}, y_{0}, z_{0})\). Let \(\textbf{n} = (a,b,c)\) be a nonzero vector which is perpendicular to the plane \(P\). Such a vector is called a \(\textbf{normal vector}\) (or just a \(\textit{normal}\)) to the plane. Now let \((x,y,z)\) be any point in the plane \(P\). Then the vector \(\textbf{r} = (x - x_{0}, y - y_{0}, z - z_{0})\) lies in the plane \(P\) (Figure 1.5.6). So if \(\textbf{r} \ne \textbf{0}\), then \(\textbf{r} \perp \textbf{n}\) and hence \(\textbf{n} \cdot \textbf{r} = 0\). And if \(\textbf{r} = \textbf{0}\) then we still have \(\textbf{n} \cdot \textbf{r} = 0\).
Figure 1.5.6 The plane \(P\)
Conversely, if \((x,y,z)\) is any point in \(\mathbb{R}^{3}\) such that \(\textbf{r} = (x - x_{0}, y - y_{0}, z - z_{0}) \ne \textbf{0}\) and \(\textbf{n} \cdot \textbf{r} = 0\), then \(\textbf{r} \perp \textbf{n}\) and so \((x,y,z)\) lies in \(P\). This proves the following theorem:
Theorem 1.18
Let \(P\) be a plane in \(\mathbb{R}^{3}\), let \((x_{0}, y_{0}, z_{0})\) be a point in \(P\), and let \(\textbf{n} = (a,b,c)\) be a nonzero vector which is perpendicular to \(P\). Then \(P\) consists of the points \((x,y,z)\) satisfying the vector equation:
$$\textbf{n} \cdot \textbf{r} = 0$$
where \(\textbf{r} = (x - x_{0}, y - y_{0}, z - z_{0})\), or equivalently:
$$a(x - x_{0}) + b(y - y_{0}) + c(z - z_{0}) = 0 \label{Eq1.25}$$
The above equation is called the \(\textbf{point-normal form}\) of the plane \(P\).
Find the equation of the plane \(P\) containing the point \((-3,1,3)\) and perpendicular to the vector \(\textbf{n} = (2,4,8)\).
By Equation \ref{Eq1.25}, the plane \(P\) consists of all points \((x,y,z)\) such that:
$$\nonumber 2(x + 3) + 4(y - 1) + 8(z - 3) = 0 $$
If we multiply out the terms in Equation \ref{Eq1.25} and combine the constant terms, we get an equation of the plane in \(\textbf{normal form}\):
$$ax + by + cz + d = 0$$
For example, the normal form of the plane in Example 1.23 is \(2x + 4y + 8z - 22 = 0\).
In 2-dimensional and 3-dimensional space, two points determine a line. Two points do not determine a plane in \(\mathbb{R}^{3}\). In fact, three \(\textit{collinear}\) points (i.e. all on the same line) do not determine a plane; an infinite number of planes would contain the line on which those three points lie. However, three \(\textit{noncollinear}\) points do determine a plane. For if \(Q\), \(R\) and \(S\) are noncollinear points in \(\mathbb{R}^{3}\), then \(\overrightarrow{QR}\) and \(\overrightarrow{QS}\) are nonzero vectors which are not parallel (by noncollinearity), and so their cross product \(\overrightarrow{QR} \times \overrightarrow{QS}\) is perpendicular to both \(\overrightarrow{QR}\) and \(\overrightarrow{QS}\). So \(\overrightarrow{QR}\) and \(\overrightarrow{QS}\) (and hence \(Q\), \(R\) and \(S\)) lie in the plane through the point \(Q\) with normal vector \(\textbf{n} = \overrightarrow{QR} \times \overrightarrow{QS}\) (see Figure 1.5.7).
Figure 1.5.7: Noncollinear points \(Q, R, S\)
Find the equation of the plane \(P\) containing the points \((2,1,3)\), \((1,-1,2)\) and \((3,2,1)\).
Let \(Q = (2,1,3)\), \(R = (1,-1,2)\) and \(S = (3,2,1)\). Then for the vectors \(\overrightarrow{QR} = (-1,-2,-1)\) and \(\overrightarrow{QS} = (1,1,-2)\), the plane \(P\) has a normal vector
$$\nonumber \textbf{n} = \overrightarrow{QR} \times \overrightarrow{QS} = (-1,-2,-1) \times (1,1,-2) = (5,-3,1)$$
So using Equation \ref{Eq1.25} with the point \(Q\) (we could also use \(R\) or \(S\)), the plane \(P\) consists of all points \((x,y,z)\) such that:
\[\nonumber 5(x - 2) - 3(y - 1) + (z - 3) = 0\]
or in normal form,
\[\nonumber 5x - 3y + z - 10 = 0\]
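The construction in Example 1.24 (normal \(\textbf{n} = \overrightarrow{QR} \times \overrightarrow{QS}\), then the point-normal form expanded to normal form) can be packaged as a small routine. The Python sketch below is my own illustration; the helper names are not from the text.

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def plane_through(q, r, s):
    """Normal-form coefficients (a, b, c, d) of the plane through the
    noncollinear points q, r, s, using n = QR x QS as in Example 1.24."""
    qr = tuple(ri - qi for qi, ri in zip(q, r))
    qs = tuple(si - qi for qi, si in zip(q, s))
    n = cross(qr, qs)
    d = -sum(ni * qi for ni, qi in zip(n, q))  # from a(x-x0)+b(y-y0)+c(z-z0)=0
    return n + (d,)

coeffs = plane_through((2, 1, 3), (1, -1, 2), (3, 2, 1))  # points of Example 1.24
```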
We mentioned earlier that skew lines in \(\mathbb{R}^{3}\) lie on separate, parallel planes. So two skew lines do not determine a plane. But two (nonidentical) lines which either intersect or are parallel do determine a plane. In both cases, to find the equation of the plane that contains those two lines, simply pick from the two lines a total of three noncollinear points (i.e. one point from one line and two points from the other), then use the technique above, as in Example 1.24, to write the equation. We will leave examples of this as exercises for the reader.
The distance between a point in \(\mathbb{R}^{3}\) and a plane is the length of the line segment from that point to the plane which is perpendicular to the plane. The following theorem gives a formula for that distance.
Let \(Q = (x_{0}, y_{0}, z_{0})\) be a point in \(\mathbb{R}^{3}\), and let \(P\) be a plane with normal form \(ax + by + cz + d = 0\) that does not contain \(Q\). Then the distance \(D\) from \(Q\) to \(P\) is:
$$D = \frac{|ax_{0} + by_{0} + cz_{0} + d|}{\sqrt{a^{2} + b^{2} + c^{2}}}$$
Proof: Let \(R = (x,y,z)\) be any point in the plane \(P\) (so that \(ax + by + cz + d = 0\)) and let \(\textbf{r} = \overrightarrow{RQ} = (x_{0} - x, y_{0} - y, z_{0} - z)\). Then \(\textbf{r} \ne \textbf{0}\) since \(Q\) does not lie in \(P\). From the normal form equation for \(P\), we know that \(\textbf{n} = (a,b,c)\) is a normal vector for \(P\). Now, any plane divides \(\mathbb{R}^{3}\) into two disjoint parts. Assume that \(\textbf{n}\) points toward the side of \(P\) where the point \(Q\) is located. Place \(\textbf{n}\) so that its initial point is at \(R\), and let \(\theta\) be the angle between \(\textbf{r}\) and \(\textbf{n}\). Then \(0^{\circ} < \theta < 90^{\circ}\), so \(\cos \theta > 0\). Thus, the distance \(D\) is \(\cos \theta \,\norm{\textbf{r}} = |\cos \theta|\,\norm{\textbf{r}}\) (see Figure 1.5.8).
By Theorem 1.6 in Section 1.3, we know that \(\cos \theta = \dfrac{\textbf{n} \cdot \textbf{r}}{\norm{\textbf{n}} \norm{\textbf{r}}}\), so
\[\nonumber \begin{align}D &= |\cos \theta|\,\norm{\textbf{r}}= \dfrac{|\textbf{n} \cdot \textbf{r}|}{\norm{\textbf{n}} \norm{\textbf{r}}}\,\norm{\textbf{r}}= \dfrac{|\textbf{n} \cdot \textbf{r}|}{\norm{\textbf{n}}}= \dfrac{|a(x_{0} - x) + b(y_{0} - y) + c(z_{0} - z)|}{\sqrt{a^{2} + b^{2} + c^{2}}} \\ \nonumber &= \dfrac{|ax_{0} + by_{0} + cz_{0} - (ax + by + cz)|}{\sqrt{a^{2} + b^{2} + c^{2}}}= \dfrac{|ax_{0} + by_{0} + cz_{0} - (-d)|}{\sqrt{a^{2} + b^{2} + c^{2}}}= \dfrac{|ax_{0} + by_{0} + cz_{0} + d|}{\sqrt{a^{2} + b^{2} + c^{2}}} \\ \end{align}\]
If \(\textbf{n}\) points away from the side of \(P\) where the point \(Q\) is located, then \(90^{\circ} < \theta < 180^{\circ}\) and so \(\cos \theta < 0\). The distance \(D\) is then \(|\cos \theta| \, \norm{\textbf{r}}\), and thus repeating the same argument as above still gives the same result.
Find the distance \(D\) from \((2,4,-5)\) to the plane from Example 1.24.
Recall that the plane is given by \(5x - 3y + z - 10 = 0\). So
$$\nonumber D = \frac{|5(2) - 3(4) + 1(-5) - 10|}{\sqrt{5^{2} + (-3)^{2} + 1^{2}}} = \frac{|-17|}{\sqrt{35}} = \frac{17}{\sqrt{35}} \approx 2.87$$
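Theorem 1.19 is a one-line computation in code. The Python sketch below (function name mine) reproduces Example 1.25.

```python
import math

def dist_point_plane(point, a, b, c, d):
    """Distance |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)
    from the point to the plane ax + by + cz + d = 0 (Theorem 1.19)."""
    x0, y0, z0 = point
    return abs(a * x0 + b * y0 + c * z0 + d) / math.sqrt(a * a + b * b + c * c)

D = dist_point_plane((2, 4, -5), 5, -3, 1, -10)  # setup of Example 1.25
```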
Note that two planes are parallel if they have normal vectors that are parallel, and the planes are perpendicular if their normal vectors are perpendicular. If two planes do intersect, they do so in a line (see Figure 1.5.9). Suppose that two planes \(P_{1}\) and \(P_{2}\) with normal vectors \(\textbf{n}_{1}\) and \(\textbf{n}_{2}\), respectively, intersect in a line \(L\). Since \(\textbf{n}_{1} \times \textbf{n}_{2} \perp \textbf{n}_{1}\), then \(\textbf{n}_{1} \times \textbf{n}_{2}\) is parallel to the plane \(P_{1}\). Likewise, \(\textbf{n}_{1} \times \textbf{n}_{2} \perp \textbf{n}_{2}\) means that \(\textbf{n}_{1} \times \textbf{n}_{2}\) is also parallel to \(P_{2}\). Thus, \(\textbf{n}_{1} \times \textbf{n}_{2}\) is parallel to the intersection of \(P_{1}\) and \(P_{2}\), i.e. \(\textbf{n}_{1} \times \textbf{n}_{2}\) is parallel to \(L\). Thus, we can write \(L\) in the following vector form:
$$L: \textbf{r} + t(\textbf{n}_{1} \times \textbf{n}_{2}) \text{, for} -\infty < t < \infty$$
where \(\textbf{r}\) is any vector pointing to a point belonging to both planes. To find a point in both planes, find a common solution \((x,y,z)\) to the two normal form equations of the planes. This can often be made easier by setting one of the coordinate variables to zero, which leaves you to solve two equations in just two unknowns.
Find the line of intersection \(L\) of the planes \(5x - 3y + z - 10 = 0\) and \(2x + 4y - z + 3 = 0\).
The plane \(5x - 3y + z - 10 = 0\) has normal vector \(\textbf{n}_{1} = (5,-3,1)\) and the plane \(2x + 4y - z + 3 = 0\) has normal vector \(\textbf{n}_{2} = (2,4,-1)\). Since \(\textbf{n}_{1}\) and \(\textbf{n}_{2}\) are not scalar multiples, then the two planes are not parallel and hence will intersect. A point \((x,y,z)\) on both planes will satisfy the following system of two equations in three unknowns:
\[\nonumber \begin{align}&5x - 3y + z - 10 = 0 \\ \nonumber &2x + 4y - z + 3 = 0 \\ \end{align}\]
Set \(x = 0\) (why is that a good choice?). Then the above equations are reduced to:
\[\nonumber \begin{align} -&3y + z - 10 = 0 \\ \nonumber &4y - z + 3 = 0 \\ \end{align}\]
The second equation gives \(z = 4y + 3\); substituting this into the first equation gives \(y = 7\). Then \(z = 31\), and so the point \((0,7,31)\) is on \(L\). Since \(\textbf{n}_{1} \times \textbf{n}_{2} = (-1,7,26)\), \(L\) is given by:
$$\nonumber \textbf{r} + t(\textbf{n}_{1} \times \textbf{n}_{2}) = (0,7,31) + t(-1,7,26), \text{ for } -\infty < t < \infty$$
or in parametric form:
$$\nonumber x = -t, \quad y = 7 + 7t, \quad z = 31 + 26t, \text{ for } -\infty < t < \infty$$
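The computation in this example can be sanity-checked numerically. The following pure-Python sketch (the helper `cross` is ours, not from the text) verifies that \(\textbf{n}_{1} \times \textbf{n}_{2} = (-1,7,26)\) and that every point of the parametric line satisfies both plane equations.

```python
# Numeric sanity check of the worked example above (helper names are ours).
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

n1, n2 = (5, -3, 1), (2, 4, -1)
direction = cross(n1, n2)  # direction vector of the line L
assert direction == (-1, 7, 26)

# Every point (x, y, z) = (-t, 7 + 7t, 31 + 26t) lies on both planes.
for t in range(-10, 11):
    x, y, z = -t, 7 + 7*t, 31 + 26*t
    assert 5*x - 3*y + z - 10 == 0
    assert 2*x + 4*y - z + 3 == 0
```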
Michael Corral (Schoolcraft College). The content of this page is distributed under the terms of the GNU Free Documentation License, Version 1.2.
\begin{document}
\title[Affine Actions and the YBE]{Affine Actions and the Yang-Baxter Equation} \author[D. Yang] {Dilian Yang} \address{Dilian Yang, Department of Mathematics $\&$ Statistics, University of Windsor, Windsor, ON N9B 3P4, CANADA} \email{[email protected]}
\begin{abstract} In this paper, the relations between the Yang-Baxter equation and affine actions are explored in detail. In particular, we classify solutions of the Yang-Baxter equations in two ways: (i) by their associated affine actions of their structure groups on their derived structure groups, and (ii) by the C*-dynamical systems obtained from their associated affine actions. On the way to our main results, several other useful results are also obtained. \end{abstract}
\subjclass[2010]{16T25.} \keywords{Yang-Baxter equation, set-theoretic solution, affine action} \thanks{The author was partially supported by an NSERC Discovery grant.}
\date{} \maketitle
The Yang-Baxter equation has been extensively studied in the literature since \cite{Yan67}. It plays important roles not only in statistical mechanics, but also in other areas such as quantum groups, link invariants, operator algebras, and conformal field theory. In general, it is a rather challenging problem to find all solutions of the Yang-Baxter equation. Following a suggestion given in \cite{Dri92}, many researchers have done a lot of work on a special but important class of solutions, which are now known as set-theoretic solutions. See, for example, \cite{CJO14, CJdR10, ESS, GC, GM, LYZ00, Sol00, Yan16}, to name just a few, and the references therein.
The main aim of this paper is to explore the relations between the Yang-Baxter equation and affine actions on groups. The main ideas here are motivated by \cite{ESS, LYZ00, Sol00}. The rest of this paper is organized as follows. In Section \ref{S:YBE}, we recall some necessary background on the Yang-Baxter equation which will be needed later. In Section \ref{S:affine}, we first introduce affine actions and some related notions, then associate to every solution of the Yang-Baxter equation a regular affine action of its structure group on its derived structure group (Proposition \ref{P:YBEaff}), and finally describe two constructions of solutions to the Yang-Baxter equation via their associated affine actions. The main results of this paper are given in Section \ref{S:classification}. We classify injective solutions of the Yang-Baxter equation in terms of their associated affine actions (Theorem \ref{T:giso}). We further obtain a connection with C*-dynamical systems. It is shown that injective solutions can also be classified via their associated C*-dynamical systems (Theorem \ref{T:ds}). We end this paper with an appendix, which provides a commutation relation for semi-direct products of solutions to the Yang-Baxter equation determined by cycle sets, which might be useful in future studies.
\section{The Yang-Baxter equation}
\label{S:YBE}
In this section, we provide some background on the Yang-Baxter equation which will be useful later.
Let $X$ be a (non-empty) set, and $X^n:=\overbrace{X\times\cdots \times X}^n$ for $n\ge 2$.
\begin{defn} \label{D:YBE} Let $R(x,y)=(\alpha_x(y), \beta_y(x))$ be a bijection on $X^2$. We call $R$ a \textit{set-theoretic solution of the Yang-Baxter equation} (abbreviated as \textit{YBE}) if \begin{align} \label{E:YBE} R_{12}R_{23}R_{12}=R_{23}R_{12}R_{23} \end{align} on $X^3$, where $R_{12}=R\times {\operatorname{id}}_X$ and $R_{23}={\operatorname{id}}_X\times R$. We often simply call $R$ a \textit{YBE solution on $X$}. Sometimes, we write it as $R_X$ or a pair $(R,X)$. A YBE solution $R$ on $X$ is said to be \begin{itemize} \item \textit{involutive} if $R^2={\operatorname{id}}_{X^2}$;
\item \textit{non-degenerate} if, for all $x\in X$, $\alpha_x$ and $\beta_x$ are bijections on $X$;
\item \textit{symmetric} if $R$ is involutive and non-degenerate. \end{itemize} \end{defn}
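For a concrete feel for the definition, the braid relation \eqref{E:YBE} can be checked by brute force on a small set. The following Python sketch (ours, not part of the paper) does this for the flip $R(x,y)=(y,x)$, the simplest symmetric solution.

```python
from itertools import product

# Brute-force check of R12 R23 R12 = R23 R12 R23 on X^3 for a small X,
# where R is the flip R(x, y) = (y, x), a standard involutive YBE solution.
X = range(4)

def R(x, y):
    return (y, x)

def R12(t):
    """Apply R to the first two coordinates of a triple."""
    a, b = R(t[0], t[1])
    return (a, b, t[2])

def R23(t):
    """Apply R to the last two coordinates of a triple."""
    a, b = R(t[1], t[2])
    return (t[0], a, b)

for t in product(X, repeat=3):
    assert R12(R23(R12(t))) == R23(R12(R23(t)))

# The flip is involutive: R^2 = id on X^2.
for x, y in product(X, repeat=2):
    assert R(*R(x, y)) == (x, y)
```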
\subsection*{Standing assumptions:} \textsf{All YBE solutions in the rest of this paper are always assumed to be set-theoretic and non-degenerate.}
\subsection{Two characterizations of YBE solutions} The following lemma is well-known in the literature and also easy to prove.
\begin{lem}
\label{L:basic}
Let $R(x,y)=(\alpha_x(y),\beta_y(x))$. Then $R$ is a YBE solution on $X$, if and only if the following properties hold true:
for all $x,y,z\in X$, \begin{itemize} \item[(i)] $\alpha_x \alpha_y=\alpha_{\alpha_x(y)}\alpha_{\beta_y(x)},$ \item[(ii)] $\beta_y\beta_x=\beta_{\beta_y(x)}\beta_{\alpha_x(y)},$ and \item[(iii)] $\beta_{\alpha_{\beta_y(x)}(z)} (\alpha_x(y))=\alpha_{\beta_{\alpha_y(z)}(x)} (\beta_z(y))$ {\rm (Compatibility Condition)}. \end{itemize} Furthermore, $R$ is involutive if and only if \[ \alpha_{\alpha_x(y)}(\beta_y(x))=x \quad\text{and}\quad \beta_{\beta_y(x)}(\alpha_x(y))=y\quad\text{for all}\quad x,y\in X. \]
\end{lem}
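To illustrate the lemma, conditions (i)--(iii) can be verified mechanically for a Lyubashenko-style permutation solution $R(x,y)=(s(y),s^{-1}(x))$, where $\alpha_x=s$ and $\beta_y=s^{-1}$ for all $x,y$. The following Python sketch is our illustration (with $X=\mathbb{Z}/5$ and $s$ the cyclic shift), not an example from the paper.

```python
# Verify the lemma's conditions (i)-(iii) for R(x, y) = (s(y), s_inv(x)) on Z/5,
# where alpha_x = s and beta_y = s_inv for every x, y (our illustrative choice).
n = 5
X = range(n)
s = lambda x: (x + 1) % n        # cyclic shift
s_inv = lambda x: (x - 1) % n

alpha = lambda x: s              # alpha_x is independent of x here
beta = lambda y: s_inv           # beta_y is independent of y here

for x in X:
    for y in X:
        for z in X:
            # (i)  alpha_x alpha_y = alpha_{alpha_x(y)} alpha_{beta_y(x)}
            assert alpha(x)(alpha(y)(z)) == alpha(s(y))(alpha(s_inv(x))(z))
            # (ii) beta_y beta_x = beta_{beta_y(x)} beta_{alpha_x(y)}
            assert beta(y)(beta(x)(z)) == beta(s_inv(x))(beta(s(y))(z))
            # (iii) compatibility condition
            lhs = beta(alpha(s_inv(x))(z))(alpha(x)(y))    # beta_{alpha_{beta_y(x)}(z)}(alpha_x(y))
            rhs = alpha(beta(alpha(y)(z))(x))(beta(z)(y))  # alpha_{beta_{alpha_y(z)}(x)}(beta_z(y))
            assert lhs == rhs
```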
Let us associate to a given YBE solution an important object -- its structure group.
\begin{defn} Let $R(x,y)=(\alpha_x(y),\beta_y(x))$ be a YBE solution on $X$.
The structure group of $R$, denoted as $G_{R_X}$, is the group generated by $X$ with commutation relations determined by $R$: \[ G_{R_X}={}_{\text{gp}}\big\langle X; xy=\alpha_x(y)\beta_y(x)\text{ for all } x,y\in X\big\rangle. \] Sometimes we also write $G_{R_X}$ as $G_{R,X}$ or $G_{X}$. \end{defn}
One can easily rephrase the characterization given in Lemma \ref{L:basic} in terms of actions of structure groups (cf., e.g., \cite{ESS, GM}).
\begin{cor} \label{C:char} A map $R(x,y)=(\alpha_x(y), \beta_y(x))$ is a YBE solution on $X$, if and only if \begin{itemize} \item[(i)] $\alpha$ can be extended to a left action of $G_{R_X}$ on $X$, \item[(ii)] $\beta$ can be extended to a right action of $G_{R_X}$ on $X$, and \item[(iii)] the compatibility condition in Lemma \ref{L:basic} (iii) holds. \end{itemize}
\end{cor}
\subsection{Constructing YBE solutions from old to new} There are several known ways of constructing new YBE solutions from old ones. For our purpose, we introduce only two below. The first one seems to have been overlooked in the literature.
\subsection*{$\blacktriangleright$ Dual of $R$} Let $R(x,y)=(\alpha_x(y),\beta_y(x))$ be a YBE solution on $X$. Define $R^\circ$ on $X^2$ by
\[
R^\circ(x,y)=(\beta_x(y),\alpha_y(x))\quad\text{for all}\quad x,y\in X.
\] We call $R^\circ$ the \textit{dual of $R$}. It is also a YBE solution on $X$. Indeed,
this can be seen by switching $x$ and $y$ in the first two identities, and $x$ and $z$ in the third one in Lemma \ref{L:basic}.
We give it such a name because we `dualize' the process $xy=\alpha_x(y)\beta_y(x)$ in $G_{R_X}$ via
$y\circ x=\beta_y(x)\circ \alpha_x(y)$ (by switching the factors on both sides).
Clearly, $R^{\circ\circ}=R$.
Let $\Phi: G_{R_X}\to G_{R^\circ_X}$ be defined via
$\Phi(x):=x$ for $x\in X$ and $\Phi(xy):=y\circ x$ for all $x,y \in X$. Since $\Phi(xy)=\Phi(\alpha_x(y)\beta_y(x))$ for all $x,y\in X$,
$\Phi$ can be extended to an anti-isomorphism from $G_{R_X}$ to $G_{R^\circ_X}$.
\subsection*{$\blacktriangleright$ Derived solution of $R$ \cite{ESS,Sol00}}
Let $R(x,y)=(\alpha_x(y),\beta_y(x))$ be a YBE solution on $X$. Then \[ (x,y)\stackrel{R}\mapsto (\alpha_x(y),\beta_y(x))\stackrel{R}\mapsto \big(\alpha_{\alpha_x(y)}(\beta_y(x)), \beta_{\beta_y(x)}(\alpha_x(y))\big) \] determines a YBE solution
\[
(x,\alpha_x(y))\mapsto \big(\alpha_x(y), \alpha_{\alpha_x(y)}(\beta_y(x))\big),
\]
namely,
\[
R': (x,y)\mapsto \big(y, \alpha_y(\beta_{\alpha_x^{-1}(y)}(x))\big).
\] This solution $R'$ is called the \textit{derived solution of $R$}.
The \textit{derived structure group $A_{R_X}$ of $R$} is defined as \[ A_{R_X}=\big\langle X: x\bullet y=y\bullet\alpha_y(\beta_{\alpha_x^{-1}(y)}(x))\text{ for all } x,y\in X \big\rangle. \] As with $G_{R_X}$, $A_{R_X}$ is sometimes also written as $A_{R,X}$ or $A_X$.
\begin{rem} It is often useful to think that $A_{R_X}$ and $G_{R_X}$ have the same generator set $X$ with the relations \[ x\bullet y=x\alpha_{x^{-1}}(y)\quad\text{for all}\quad x,y\in X, \] equivalently, $ xy=x\bullet \alpha_x(y) $ for all $x,y\in X$. \end{rem}
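As a concrete instance, for the permutation solution $R(x,y)=(s(y),s^{-1}(x))$ one has $\alpha_y=s$ and every $\beta_\ast=s^{-1}$, so $\alpha_y(\beta_{\alpha_x^{-1}(y)}(x))=s(s^{-1}(x))=x$ and the derived solution $R'$ is just the flip $(x,y)\mapsto(y,x)$. The following Python sketch (our example, not from the paper) confirms that this $R'$ satisfies the braid relation on $\mathbb{Z}/5$.

```python
from itertools import product

# Derived solution R'(x, y) = (y, alpha_y(beta_{alpha_x^{-1}(y)}(x))) for the
# permutation solution R(x, y) = (s(y), s_inv(x)) on Z/5 (our illustration).
n = 5
s = lambda x: (x + 1) % n
s_inv = lambda x: (x - 1) % n

def R_derived(x, y):
    # alpha_y = s and every beta is s_inv for this R, so the second
    # component collapses to s(s_inv(x)) = x: R' is the flip.
    return (y, s(s_inv(x)))

def R12(t):
    a, b = R_derived(t[0], t[1])
    return (a, b, t[2])

def R23(t):
    a, b = R_derived(t[1], t[2])
    return (t[0], a, b)

for t in product(range(n), repeat=3):
    assert R12(R23(R12(t))) == R23(R12(R23(t)))
```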
\begin{rem} \label{R:psi}
(i) If $R(x,y)=(\alpha_x(y), x)$, then $R'(x,y)=(y, \alpha_y(x))$. Namely, $R'=R^\circ$.
(ii) As in \cite{Sol00}, one can also define another derived YBE solution \[ {}'\!R(x,y) =\big (\beta_x(\alpha_{\beta_y^{-1}(x)}(y)),x\big)\quad\text{for all}\quad x,y\in X. \]
(iii) One can easily check the following:
$R$ is symmetric $\Leftrightarrow$ ${}'\!R=R'$ $\Leftrightarrow$ $R^\circ$ is symmetric $\Leftrightarrow$ $A_{R_X}$ and $A_{R'_X}$ are abelian $\Leftrightarrow$
$A_{R_X}$ and $A_{R^\circ_X}$ are abelian.
\end{rem}
\subsection{A distinguished action of $G_{R_X}$ on $A_{R_X}$} Let $R(x,y)=(\alpha_x(y),\beta_y(x))$ be a YBE solution on $X$. By Corollary \ref{C:char}, both $\alpha$ and $\beta^{-1}$ can be extended to actions of $G_X$ on $X$.
For our convenience, let \begin{align*} \label{E:phi} \phi_R(x,y)&:=\alpha_y(\beta_{\alpha_x^{-1}(y)}(x)),\\ \psi_R(x,y)&:= \beta_x(\alpha_{\beta_y^{-1}(x)}(y)) \end{align*} for all $x,y\in X$. Similar to \cite[Theorem 2.3]{Sol00}, one has the following.
\begin{lem} \label{L:inv} $\phi$ is $G_{R_X}$-equivariant with respect to the action $\alpha$: \[ \phi_R(\alpha_g(x),\alpha_g(y))=\alpha_g(\phi_R(x,y)) \quad\text{for all}\quad x,y\in X \text {and } g\in G_{R_X}. \] \end{lem}
\begin{proof}
Notice that $\psi_{R^\circ}(x,y)=\phi_R(y,x)$ for all $x,y\in X$. Now first apply \cite[Theorem 2.3]{Sol00} to $R^\circ$ and then use the relation between $R$ and $R^\circ$ to obtain the following: \begin{align*} &\ \alpha_{g^{-1}}\psi_{R^\circ}(x,y) =\psi_{R^\circ}(\alpha_{g^{-1}}(x),\alpha_{g^{-1}}(y))\quad\text{for all}\quad x,y\in X, g\in G_{R^\circ_X}\\ \Rightarrow &\ \alpha_{g^{-1}}\phi_{R}(y,x) =\phi_{R}(\alpha_{g^{-1}}(y),\alpha_{g^{-1}}(x))\quad\text{for all}\quad x,y\in X, g\in G_{R^\circ_X}\\ \Rightarrow &\ \alpha_{g}\phi_{R}(y,x) =\phi_{R}(\alpha_{g}(y),\alpha_{g}(x))\quad\text{for all}\quad x,y\in X, g\in G_{R_X}. \end{align*} We are done. \end{proof}
Let ${\rm Aut}_X(A_{R_X})$ be the group of all automorphisms of $A_{R_X}$ preserving $X$.
\begin{prop}[\cite{Sol00}] \label{P:inv} Keep the above notation. The action $\alpha$ of $G_{R_X}$ on $X$ induces an action of $G_{R_X}$ on $A_{R_X}$ preserving $X$. That is, there is a group homomorphism from $G_{R_X}$ to ${\rm Aut}_X(A_{R_X})$. \end{prop}
\begin{proof}
Notice that for all $g\in G_{R_X}$ \begin{align*} &\ x\bullet y=y\bullet \phi_R(x,y)\\ \Rightarrow &\ \alpha_g(x)\bullet \alpha_g(y)=\alpha_g(y)\bullet \phi_R(\alpha_g(x),\alpha_g(y))\ (\text{replacing } x,y\text{ by }\alpha_g(x), \alpha_g(y))\\ \Rightarrow &\ \alpha_g(x)\bullet \alpha_g(y)=\alpha_g(y)\bullet \alpha_g(\phi_R(x,y))\ (\text{by Lemma }\ref{L:inv}). \end{align*} This implies that $\alpha_g$ can be extended to an element in ${\rm Aut}_X(A_{R_X})$, as desired.
\end{proof}
By Proposition \ref{P:inv}, one has an action $\alpha: G_{R_X}\curvearrowright A_{R_X}$.
\section{Affine actions on groups}
\label{S:affine}
For a given YBE solution, we associate to it a regular affine action (Proposition \ref{P:YBEaff}). This plays a vital role in Section \ref{S:classification}. Conversely, in Subsection \ref{SS:constructing}, we use the two constructions of affine actions described in Subsection \ref{SS:aff} to construct new YBE solutions.
Let $A$ be a group. Denote by $\operatorname{Aff}(A)$ the semi-direct product \[ \operatorname{Aff}(A)= {\rm Aut}(A)\ltimes A, \] where $(S,a)(T,b)=(ST,aS(b))$ for all $S,T\in {\rm Aut}(A)$ and $a,b\in A$. $\operatorname{Aff}(A)$ acts on $A$ via $(S,a)b=aS(b)$.
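As a small sanity check (our toy example, not from the paper), $\operatorname{Aff}(A)$ can be realized concretely for $A=\mathbb{Z}/7$ written additively, identifying ${\rm Aut}(A)$ with the units modulo $7$ acting by multiplication; the sketch below verifies the action axiom $(gh)\cdot x = g\cdot(h\cdot x)$ for the stated multiplication and action.

```python
from math import gcd
from itertools import product

# Aff(A) for A = Z/7 (additive): Aut(A) ~ units mod 7 acting by multiplication,
# (S, a)(T, b) = (S*T, a + S*b), and the action is (S, a) . x = a + S*x.
n = 7
units = [u for u in range(1, n) if gcd(u, n) == 1]
A = range(n)

def mul(g, h):
    (S, a), (T, b) = g, h
    return ((S * T) % n, (a + S * b) % n)

def act(g, x):
    S, a = g
    return (a + S * x) % n

# Action axiom: (g h) . x == g . (h . x) for all g, h in Aff(A) and x in A.
for S, a, T, b in product(units, A, units, A):
    for x in A:
        assert act(mul((S, a), (T, b)), x) == act((S, a), act((T, b), x))
```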
\begin{defn} Let $G$ and $A$ be groups. An affine action of $G$ on $A$ is a group homomorphism $\rho: G\to \operatorname{Aff}(A)$. \end{defn}
By definition, any affine action $\rho: G\to \operatorname{Aff}(A)$ has the following form: \[ \rho_g(a)=b(g)\pi_g(a)\quad\text{for all}\quad g\in G\text{ and } a\in A, \] where $\pi:G\to {\rm Aut}(A)$ is a group homomorphism, called the \textit{linear part} of $\rho$, and $b:G\to A$, called the \textit{translational part} of $\rho$, is a 1-cocycle with respect to $\pi$ with coefficients in $A$: \[ b(g_1g_2)=b(g_1)\pi_{g_1}(b(g_2))\quad\text{for all}\quad g_1,g_2\in G. \] We sometimes simply write $\rho=(\pi,b)$, and also write $b(g)$ as $b_g$ for convenience.
Recall that a group action is called \textit{regular} if it is transitive and free.
The following lemma should be known, but we include a proof below for completeness.
\begin{lem} An affine action $\rho=(\pi,b)$ of a group $G$ on a group $A$ is regular, if and only if $b$ is bijective. \end{lem}
\begin{proof} $(\Rightarrow)$: Since $\rho$ is regular, for arbitrary $x,y$ in $A$ there is a unique $g\in G$ such that $\rho_g(x)=y$. Letting $x=e$ and $y\in A$ arbitrary shows that $b$ is surjective.
Now suppose that $b(g_1)=b(g_2)$ for some $g_1,g_2\in G$. Then $\rho_{g_1}(e)=b(g_1)\pi_{g_1}(e)=b(g_2)\pi_{g_2}(e)=\rho_{g_2}(e)$. So $g_1=g_2$ as $\rho$ is free. Thus $b$ is injective.
$(\Leftarrow)$: Let $x,y\in A$. Since $b$ is bijective, there is a unique $h_0\in G$ such that $b(h_0)=x$, and further a unique $g\in G$ such that $b(gh_0)=y$. Then $\rho_g(x)=b(g)\pi_g(x)=b(g)\pi_g(b(h_0))=b(gh_0)=y$. Thus $\rho$ is transitive.
To show that $\rho$ is free, suppose that there are $g_1,g_2\in G$ such that $\rho_{g_1}(x)=\rho_{g_2}(x)$ for some $x\in A$. Then $b(g_1)\pi_{g_1}(x)=b(g_2)\pi_{g_2}(x)$. Since $b$ is surjective, there is $g\in G$ such that $b(g)=x$. Hence $b(g_1)\pi_{g_1}(b(g))=b(g_2)\pi_{g_2}(b(g))$, i.e., $b(g_1g)=b(g_2g)$. Since $b$ is injective, $g_1g=g_2g$ and so $g_1=g_2$. Therefore, $\rho$ is free. \end{proof}
\begin{defn} \label{D:conj} Let $\rho^i$ be an affine action of a group $G$ on a group $A_i$ ($i=1,2$). A group homomorphism $\varphi:A_1\to A_2$ is said to be $G$-equivariant relative to $(\rho^1,\rho^2)$ if \begin{align} \label{E:rho} \varphi\circ\rho^1_g=\rho^2_g \circ \varphi\quad\text{for all}\quad g\in G. \end{align} That is, for every $g\in G$, the following diagram commutes: \begin{align*}
\xymatrix{
A_1 \ar[r]^{\rho^1_g} \ar[d]_{\varphi} & A_1\ar[d]^{\varphi}& \\
A_2 \ar[r]_{\rho^2_g} &A_2. &
} \end{align*} If, furthermore, the above $\varphi$ is bijective, then $\rho^1$ and $\rho^2$ are said to be conjugate. \end{defn}
\begin{rem} \label{R:equ} (i) Let $\rho^i=(\pi^i,b^i)$ ($i=1,2$). It is easy to see that \eqref{E:rho} is equivalent to \begin{align*} \varphi \circ\pi^1_g&=\pi^2_g\circ \varphi,\\ b^2_g&=\varphi\circ b^1_g \end{align*} for all $g\in G$. So, in particular, $\varphi$ is also $G$-equivariant relative to $(\pi^1,\pi^2)$.
(ii) If $b^1$ is surjective, then using the definition of 1-cocycles, it is easy to see that the second identity in (i) above determines the first one. In fact, from the second one has for all $g,h\in G$ \begin{align*} b^2_{gh}=\varphi(b^1_{gh}) \Rightarrow &\ b^2_g\pi_{g}^2(b^2_h)=\varphi(b^1_g)\varphi(\pi_g^1(b^1_h))\\ \Rightarrow &\ \pi_{g}^2(b^2_h)=\varphi(\pi_g^1(b^1_h))\ (\text{as }b_g^2=\varphi(b_g^1))\\ \Rightarrow &\ \pi_{g}^2(\varphi(b^1_h))=\varphi(\pi_g^1(b^1_h))\ (\text{as }b_h^2=\varphi(b_h^1))\\ \Rightarrow &\ \pi_{g}^2\circ \varphi=\varphi\circ \pi_g^1\ (\text{as } b^1(G)=A_1). \end{align*} \end{rem}
\subsection{Affine actions associated to YBE solutions} This subsection shows why we are interested in affine actions.
\begin{prop}[and \textbf{Definition}] \label{P:YBEaff} Any YBE solution $R$ on $X$ induces a regular affine action $\rho^X$ of $G_X$ on $A_X$.
The action $\rho^X$ is called the {\rm affine action associated to $R_X$}, also denoted as $\rho_{R_X}$ or even just $\rho$ if the context is clear. \end{prop}
\begin{proof} The proof is completely similar to \cite[Theorem 2.5]{Sol00}. We only sketch it here. By Proposition \ref{P:inv}, there is an action $\alpha: G_X\curvearrowright A_X$.
\underline{Step 1}: Extend the mapping \[ \rho: X\to G_X\ltimes_\alpha A_X, \ x\mapsto (x,x) \] to a group homomorphism \[ \rho_G: G_X\to G_X\ltimes_\alpha A_X. \]
To do so, one needs to check that \[ \rho(x)\rho(y)=\rho(\alpha_x(y))\rho(\beta_y(x))\quad\text{for all}\quad x,y\in X. \] But \[ \rho(x)\rho(y)=(x,x)(y,y)=(xy,x\bullet \alpha_x(y)), \] and similarly \[ \rho(\alpha_x(y))\rho(\beta_y(x))=\big(\alpha_x(y)\beta_y(x), \alpha_x(y)\bullet\alpha_{\alpha_x(y)}(\beta_y(x))\big). \] They are obviously equal.
\underline{Step 2}: Let $p: G_X\ltimes_\alpha A_X\to A_X$ be the second projection to $A_X$, and let $b:=p\circ \rho_G$. Then $b$ is a 1-cocycle with respect to the action $G_X\stackrel{\alpha}\curvearrowright A_X$: \[ b(gh)=b(g)\bullet \alpha_g(b(h)) \quad\text{for all}\quad g,h\in G_X. \] In fact, for all $g,h\in X$, \[ b(gh)=p\rho_G(gh)=p(\rho(g)\rho(h))=p(gh,g\bullet \alpha_g(h))=g\bullet\alpha_g(h), \] and \[ b(g)\bullet \alpha_g(b(h))=p\rho_G(g)\bullet \alpha_g(p\rho_G(h))=g\bullet \alpha_g(h). \]
\underline{Step 3}: Check that $b$ is bijective (cf. \cite[Theorem 2.5]{Sol00}). \end{proof}
\begin{rem} In the sequel, we will frequently use the simple fact that $b(x)=x$ for all $x\in X$ in the associated affine action $\rho^X=(\alpha,b)$ obtained from Proposition \ref{P:YBEaff}. \end{rem}
\subsection{Two constructions of affine actions} \label{SS:aff}
In this subsection, we construct two new affine actions from given ones.
\subsection*{$1^\circ$ Lifting} This generalizes a construction given in \cite{Bac14, BCJ15}, which plays key roles there.
Let $A$ and $H$ be two groups, and $\theta:H\to A$ a homomorphism. Suppose that $\rho=(\pi,b)$ is a regular affine action of $G$ on $A$, and that $\sigma$ is an action of $G$ on $H$, such that $\theta$ is $G$-equivariant relative to $(\sigma, \pi)$: \[ \theta\circ\sigma_g=\pi_g \circ\theta\quad\text{for all}\quad g\in G. \] Introduce a new multiplication $\cdot$ on $H$ via \begin{align} \label{E:mul} x\cdot y:=x\sigma_{b^{-1}\circ\theta(x)}(y) \quad\text{for all}\quad x,y\in H. \end{align} Then the \textit{lifting of $\rho$ from $A$ to $H$} is defined as \[ \tilde\rho_x(y)=x\cdot y \quad\text{for all}\quad x,y\in H. \]
\begin{con} The lifting $\tilde \rho$ is an affine action of $(H,\cdot)$ on $H$. Furthermore, $\theta$ is $(H,\cdot)$-equivariant relative to $(\tilde \rho, \rho\circ b^{-1}\circ\theta)$. \end{con}
Pictorially, one can summarize the above as follows: for all $g\in G$ and $h\in (H,\cdot)$ \begin{align*}
\xymatrix{
H \ar[r]^{\sigma_g} \ar[d]_{\theta} & H\ar[d]^{\theta}& \\
A\ar[r]_{\pi_g} &A &
} \quad \xymatrix{
&\rightsquigarrow&\\
&\rightsquigarrow&\\
&&
} \quad \xymatrix{
H \ar@{-->}[rr]^{\tilde\rho_h} \ar[d]_{\theta} && H\ar[d]^\theta& \\
A\ar[rr]_{\rho_{b^{-1}\circ\theta(h)}} &&A&\\
} \end{align*}
\begin{proof} One can show that $(H, \cdot)$ is indeed a group: $\cdot$ is closed and associative, the identity is (still) $e$, and the inverse of $x$ in $(H,\cdot)$ is $\sigma_{(b^{-1}\circ\theta(x))^{-1}}(x^{-1})$. The verification is tedious and left to the reader.
Also, $\tilde \rho$ is an affine action of $(H,\cdot)$ on $H$. In fact, \begin{align*} \tilde \rho_{x_1\cdot x_2}(y) &=(x_1\cdot x_2)\sigma_{b^{-1}\circ\theta(x_1\cdot x_2)}(y)\\ &=x_1\sigma_{b^{-1}\circ\theta(x_1)}(x_2)\sigma_{b^{-1}\circ\theta(x_1\sigma_{b^{-1}\circ\theta(x_1)}(x_2))}(y)\ (\text{by }\eqref{E:mul})\\ &=x_1\sigma_{b^{-1}\circ\theta(x_1)}(x_2)\sigma_{b^{-1}(\theta(x_1) \theta(\sigma_{b^{-1}\circ\theta(x_1)}(x_2)))}(y)\ (\text{as }{\theta} \text{ is a homomorphism})\\ &=x_1\sigma_{b^{-1}\circ\theta(x_1)}(x_2)\sigma_{b^{-1}(\theta(x_1) \pi_{b^{-1}\circ\theta(x_1)}(\theta(x_2)))}(y)\ (\text{as }{\theta} \text{ is }G\text{-equivariant})\\ &=x_1\sigma_{b^{-1}\circ\theta(x_1)}(x_2)\sigma_{b^{-1}(\theta(x_1))b^{-1}\circ\theta(x_2)}(y)\ (\text{as } b \text{ is a 1-cocycle w.r.t. }\pi)\\ &=x_1\sigma_{b^{-1}\circ\theta(x_1)}(x_2\sigma_{b^{-1}\circ\theta(x_2)}(y))\ (\text{as } \sigma \text{ is an action by automorphisms})\\ &=x_1\sigma_{b^{-1}\circ\theta(x_1)}(\tilde\rho_{x_2}(y))\ (\text{by }\eqref{E:mul})\\ &=\tilde\rho_{x_1}(\tilde\rho_{x_2}(y))\ (\text{by }\eqref{E:mul}). \end{align*}
Furthermore, $\rho\circ b^{-1}\circ \theta$ is an affine action of $(H,\cdot)$ on $A$. For this, since $\theta$ is $G$-equivariant for $(\sigma,\pi)$ and $b$ is a 1-cocycle with respect to $\pi$, one has \begin{align*} b^{-1}\circ\theta(h_1\cdot h_2) &=b^{-1}\big(\theta(h_1)\theta\circ\sigma_{b^{-1}(\theta(h_1))}(h_2)\big)\\ &=b^{-1}\big(\theta(h_1)\pi_{b^{-1}(\theta(h_1))}(\theta(h_2))\big)\\ &=b^{-1}\circ \theta(h_1)b^{-1}\circ \theta(h_2). \end{align*} Hence, for all $h_1,h_2\in H$ and $a\in A$, we get \begin{align*} \rho_{b^{-1}\circ\theta(h_1\cdot h_2)}(a) =b(b^{-1}\circ \theta(h_1)b^{-1}\circ \theta(h_2))\pi_{b^{-1}\circ \theta(h_1)b^{-1}\circ \theta(h_2)}(a) \end{align*} and \begin{align*} &\ \rho_{b^{-1}\circ \theta(h_1)}\rho_{b^{-1}\circ\theta(h_2)}(a)\\ =&\ \rho_{b^{-1}\circ \theta(h_1)}\big(b(b^{-1}\circ\theta(h_2))\pi_{b^{-1}\circ\theta(h_2)}(a)\big)\\ =&\ b(b^{-1}\circ \theta(h_1))\pi_{b^{-1}\circ\theta(h_1)}\big(b(b^{-1}\circ\theta(h_2))\pi_{b^{-1}\circ\theta(h_2)}(a)\big)\\ =&\ b(b^{-1}\circ \theta(h_1))\pi_{b^{-1}\circ\theta(h_1)}(b(b^{-1}\circ\theta(h_2)))\pi_{b^{-1}\circ\theta(h_1)b^{-1}\circ\theta(h_2)}(a). \end{align*} This implies \[ \rho_{b^{-1}\circ\theta(h_1\cdot h_2)}=\rho_{b^{-1}\circ \theta(h_1)}\rho_{b^{-1}\circ\theta(h_2)} \] as $b$ is a 1-cocycle with respect to $\pi$.
Using the property that $\theta$ is $G$-equivariant relative to $(\sigma,\pi)$ again, we have for all $x,z\in H$ \begin{align*} \theta (\tilde \rho_z(x)) &=\theta(z\sigma_{b^{-1}\circ\theta(z)}(x))=\theta(z)\theta(\sigma_{b^{-1}\circ\theta(z)}(x))\\ &=b(b^{-1}(\theta(z)))\pi_{b^{-1}\circ\theta(z)}(\theta(x))\\ &=\rho_{b^{-1}\circ\theta(z)}(\theta(x)). \end{align*} Thus $ \theta \circ \tilde \rho_z=\rho_{b^{-1}\circ\theta(z)}\circ \theta, $ as desired. \end{proof}
\subsection*{$2^\circ$ Semi-direct product}
Let $\rho$ be an affine action of $G$ on $A$, and $\tilde\rho=(\tilde\pi, \tilde b)$ be a regular affine action of $\tilde G$ on $\tilde A$. Suppose $\theta: G\curvearrowright \tilde G$ is an action of $G$ on $\tilde G$ such that \begin{align} \label{E:asemi} \theta_g(\tilde b^{-1}\tilde\pi_{h}\tilde b)=(\tilde b^{-1}\tilde \pi_{\theta_g(h)}\tilde b) \theta_g \quad\text{for all}\quad g\in G, h\in \tilde G. \end{align}
Then the \textit{semi-direct product of $\rho$ and $\tilde \rho$ via $\theta$} is defined as \begin{align*}
\rho\ltimes_\theta\tilde\rho: & G\ltimes_\theta\tilde G\to \operatorname{Aff}(A\times \tilde A)\\
&(g,h)\mapsto (\rho_g, \tilde \rho_{h}\circ\tilde b\circ \theta_g \circ \tilde b^{-1}). \end{align*}
\begin{con} The semi-direct product $\rho\ltimes_\theta\tilde\rho$ is an affine action of $G\ltimes_\theta\tilde G$ on $A\times \tilde A$. \end{con}
\begin{proof} First notice that \eqref{E:asemi} guarantees that the mapping \[ (g,h)\mapsto (\pi_{g},\tilde\pi_{h}\circ\tilde b\circ \theta_{g}\circ \tilde b^{-1}) \] is a group homomorphism from $G\ltimes_\theta \tilde G$ to ${\rm Aut}(A\times \tilde A)$. The tedious verification is left to the reader.
We now show the following identity: \begin{align} \label{E:theta} \theta_g(\tilde b^{-1}\tilde\rho_{h}\tilde b)=(\tilde b^{-1}\tilde \rho_{\theta_g(h)}\tilde b) \theta_g \quad\text{for all}\quad g\in G\text{ and } h\in \tilde G. \end{align} In fact, one has \begin{align*} &\ \theta_g(h_1h_2)=\theta_g(h_1)\theta_g(h_2)\quad\text{for all}\quad g\in G, h_1,h_2\in \tilde G\\ \Rightarrow&\ \tilde b(\theta_g(h_1h_2))=\tilde b(\theta_g(h_1)\theta_g(h_2))\\ \Rightarrow&\ \tilde b\big(\theta_g\big(\tilde b^{-1}(\tilde b(h_1)\tilde\pi_{h_1}(\tilde b(h_2)))\big)\big)=\tilde b(\theta_g(h_1))\tilde \pi_{\theta_g(h_1)}(\tilde b \theta_g(h_2)) \ (\text{as }\tilde b \text{ is a 1-cocycle})\\ \Rightarrow&\ \tilde b\theta_g\tilde b^{-1}\tilde\rho_{h_1}(\tilde b(h_2))=\tilde \rho_{\theta_g(h_1)}(\tilde b(\theta_g(h_2))) \ (\text{by the definition of }\tilde\rho)\\ \Rightarrow&\ \theta_g(\tilde b^{-1}\tilde\rho_{h_1}\tilde b)=(\tilde b^{-1}\tilde \rho_{\theta_g(h_1)}\tilde b) \theta_g. \end{align*}
Set $\Gamma:= \rho\ltimes_\theta\tilde\rho$. In order to show that $\Gamma$ is an affine action, it suffices to check that \[ \Gamma_{(g,h)(g',h')}=\Gamma_{(g,h)}\Gamma_{(g',h')}\quad\text{for all}\quad g,g'\in G, h,h'\in\tilde G. \] For this, let $y\in A$ and $t\in \tilde A$. We have \begin{align*} \Gamma_{(g,h)(g',h')}(y,t) &=\Gamma_{(gg',h\theta_g(h'))}(y,t)\\ &=\big(\rho_{gg'}(y),\tilde\rho_{h\theta_g(h')}\,\tilde b\,\theta_{gg'} \tilde b^{-1}(t)\big)\\ &=\big(\rho_{gg'}(y),\tilde\rho_{h\theta_g(h')}\,\tilde b\,\theta_{g}\theta_{g'}\tilde b^{-1}(t)\big)\\ &=\big(\rho_{gg'}(y),\tilde\rho_{h} \tilde b \tilde b^{-1}\tilde\rho_{\theta_g(h')}\,\tilde b\,\theta_{g} \theta_{g'}\tilde b^{-1}(t)\big)\\ &=\big(\rho_{gg'}(y),\tilde\rho_{h} \tilde b \theta_g\tilde b^{-1}\tilde\rho_{h'} \,\tilde b\, \theta_{g'}\tilde b^{-1}(t)\big)\ (\text{by }\eqref{E:theta}), \end{align*} and \begin{align*} \Gamma_{(g,h)}\Gamma_{(g',h')}(y,t) &=\Gamma_{(g,h)}\big(\rho_{g'}(y),\tilde \rho_{h'}\tilde b\theta_{g'}\tilde b^{-1}(t)\big)\\ &=\big(\rho_g\rho_{g'}(y), \tilde \rho_h\tilde b\theta_g\tilde b^{-1}(\tilde \rho_{h'}\tilde b\theta_{g'}\tilde b^{-1}(t))\big). \end{align*} We are done. \end{proof}
When $\theta$ is the trivial action, condition \eqref{E:asemi} is redundant and the corresponding affine action is just the direct product of $\rho$ and $\tilde\rho$.
An application of the above semi-direct product construction is given in the appendix.
\subsection{Constructing YBE solutions} \label{SS:constructing}
Let us first recall the following result.
\begin{thm}\cite{LYZ00} \label{T:gYBE} Let $G$ be a group. Then the following two sets of data are equivalent: \begin{enumerate} \item There is a pair of left-right actions $(\alpha,\beta)$ of the group $G$ on $G$, which is compatible (i.e., $gh=\alpha_g(h)\beta_h(g)$ for all $g,h$ in $G$).
\item There is a regular affine action $\rho=(\pi,b)$ of $G$ on some group $A$. \end{enumerate} \end{thm}
\begin{proof} This is proved in \cite{LYZ00}. Since the idea of the proof will be useful later, we sketch it below.
(1)$\Rightarrow$(2):
Let $A:=G$ as sets but the multiplication $\odot$ on $A$ is given by \[
g\odot h=g\alpha_{g^{-1}}(h)\quad\text{for all}\quad g,h\in G,
\]
namely,
\[
g h=g\odot\alpha_g(h)\quad\text{for all}\quad g,h\in G.
\]
This implies that the identity mapping ${\operatorname{id}}$ is a (bijective) 1-cocycle with respect to $\alpha$.
(2)$\Rightarrow$(1): Set \[ \alpha_g(h):=b^{-1}\circ\pi_g\circ b(h)\quad\text{and}\quad \beta_h(g):=\alpha_g(h)^{-1}gh \] for all $g,h \in G$. \end{proof}
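To make the construction of $(\alpha,\beta)$ from a regular affine action concrete, here is a minimal Python sketch of a toy instance (ours, not from the paper): take $G=A=\mathbb{Z}/6$ written additively, with trivial linear part and $b={\operatorname{id}}$, so that $\rho_g(a)=g+a$ is regular; the recipe then yields $\alpha_g(h)=h$ and $\beta_h(g)=g$, and the compatibility $gh=\alpha_g(h)\beta_h(g)$ holds.

```python
# Toy instance of constructing a compatible pair (alpha, beta) from a regular
# affine action: G = A = Z/6 (additive), trivial linear part, b = id, so
# rho_g(a) = g + a.  Then alpha_g(h) = b^{-1}(pi_g(b(h))) and
# beta_h(g) = alpha_g(h)^{-1} g h, written additively.  (Our toy example.)
n = 6
b = lambda g: g                       # bijective 1-cocycle (the identity)
b_inv = b
pi = lambda g: (lambda a: a)          # trivial linear part

alpha = lambda g: (lambda h: b_inv(pi(g)(b(h))))
beta = lambda h: (lambda g: (-alpha(g)(h) + g + h) % n)

# Compatibility: g h = alpha_g(h) beta_h(g), i.e. g + h = alpha_g(h) + beta_h(g).
for g in range(n):
    for h in range(n):
        assert (g + h) % n == (alpha(g)(h) + beta(h)(g)) % n
```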
\begin{rem} \label{R: extension} (i) Let $G$ and $A$ be groups. Given a regular affine action $\rho$ of $G$ on $A$, by Theorem \ref{T:gYBE} and \cite[Corollary 1]{LYZ00}, one obtains a YBE solution on $G$ given by $R(g,h)=(\alpha_g(h),\beta_h(g))$ for all $g,h\in G$.
(ii) Let $R$ be a YBE solution on $X$, and $\rho^X$ be its associated regular affine action of $G_X$ on $A_X$ (see Proposition \ref{P:YBEaff}). From (i) above, there is a YBE solution $\bar{R}$ on $G_X$. From its construction, one can see that this is nothing but the universal extension of $R$ mentioned in \cite[Theorem 4.1]{LYZ00}.
\end{rem}
\begin{rem} This remark shows that there is a natural generalization of the relation $\beta_h(g)=\alpha^{-1}_{\alpha_g(h)}(g)$ holding for symmetric YBE solutions (cf. Lemma \ref{L:basic}).
Let us return to the proof of (1)$\Rightarrow$(2) in Theorem \ref{T:gYBE}. The property of $b:={\operatorname{id}}$ being a 1-cocycle with respect to $\alpha$ gives \[ g\odot h=g\alpha_{g^{-1}}(h)=g\alpha_g^{-1}(h)\quad\text{for all}\quad g,h\in G, \] which implies \[ gh=g\odot\alpha_g(h)\quad\text{for all}\quad g,h\in G. \] In particular\footnote{To distinguish, we write $\bar g$ as the inverse of $g$ in $A$, while $g^{-1}$ as the inverse of $g$ in $G$ as usual.}, \[ \bar g=\alpha_g(g^{-1})\quad\text{for all}\quad g\in G. \]
If $(\alpha,\beta)$ is a compatible pair, then we claim \[ \beta_h(g)=\alpha_{{\alpha_g(h)}^{-1}}\circ\operatorname{Ad}_{\overline{\alpha_g(h)}}(g), \] where $\operatorname{Ad}_{\overline{\alpha_g(h)}}$ acts on $A$.
Indeed, since $(\alpha,\beta)$ is a compatible pair, one has \begin{align*} &\ g\odot h=g\alpha_{g^{-1}}(h)=h\beta_{\alpha_{g^{-1}}(h)}(g)\\ \Rightarrow&\ g\odot \alpha_g(h)=\alpha_g(h)\beta_h(g)\\ \Rightarrow&\ \beta_h(g)=\alpha_g(h)^{-1}(g\odot \alpha_g(h))\\ &\hskip 1cm =\alpha_g(h)^{-1}\odot \alpha_{\alpha_g(h)^{-1}}\big(g\odot \alpha_g(h)\big)\\ &\hskip 1cm =\alpha_{\alpha_g(h)^{-1}}\left[\alpha_{\alpha_g(h)}(\alpha_g(h)^{-1})\odot g\odot \alpha_g(h)\right]\ (\text{as }\alpha_g\in{\rm Aut}(A))\\ &\hskip 1cm =\alpha_{{\alpha_g(h)}^{-1}}\left[\overline{\alpha_g(h)}\odot g\odot \alpha_g(h)\right]\\ &\hskip 1cm =\alpha_{{\alpha_g(h)}^{-1}}\circ\operatorname{Ad}_{\overline{\alpha_g(h)}}(g). \end{align*} This proves our claim.
In particular, if the YBE solution $R$ on $G$ determined by $(\alpha,\beta)$ is symmetric, then $A$ is abelian \cite{LYZ00}. So in this case
$\operatorname{Ad}_g$ ($g\in A$) is nothing but the identity mapping on $A$.
\end{rem}
Making use of Theorem \ref{T:gYBE}, Remark \ref{R: extension} and the constructions of affine actions in Subsection \ref{SS:aff}, we get two constructions of YBE solutions on groups.
\subsection*{Lifting revisited} Let ${R_X}$ be a YBE solution. In the lifting construction on affine actions, let $G=G_X$, $A=A_X$, and $\rho$ be the affine action associated to ${R_X}$. Then \eqref{E:mul} becomes \[ x\cdot y=x\sigma_{\theta(x)}(y)\quad\text{for all}\quad x,y\in H. \] In this case, $\tilde\rho_x(y)=x\cdot y$ is a regular affine action, and so it yields a YBE solution on $(H,\cdot)$.
\subsection*{Semi-direct product revisited}
Let ${R_X}$ and ${R_Y}$ be two YBE solutions. Let $G=G_X$, $A=A_X$, $\tilde G=G_Y$, $\tilde A=A_Y$
in the semi-direct product construction on affine actions. Suppose $\theta$ is an action of $G_X$ on $G_Y$ satisfying \eqref{E:asemi}. In this case, \[ \rho^X\ltimes_\theta\rho^Y(g,h)=(\rho^X_g, \rho_h^Y\circ\theta_g), \] and $\rho^X\ltimes_\theta\rho^Y$ is also regular. It follows that $\rho^X\ltimes_\theta\rho^Y$ determines a YBE solution, say $\bar R$, on $G_X\ltimes_\theta G_Y$. Notice that if $\theta$ is the trivial action, then $\bar R$ is nothing but the trivial extension of ${R_X}$ and ${R_Y}$ in the sense of \cite{ESS} (also cf. \cite[2.2 2$^\circ$]{Yan16}).
\section{Classifying solutions of the Yang-Baxter equation via their associated affine actions}
\label{S:classification}
In this section, we state and prove our main results in this paper. We classify all injective YBE solutions in terms of their associated regular affine actions (Theorem \ref{T:giso}). Furthermore, a connection with C*-dynamical systems is obtained: All injective YBE solutions can also be classified via their associated C*-dynamical systems (Theorem \ref{T:ds}).
Let $R_X$ be a YBE solution. Denote by $\iota_G$ and $\iota_A$ the natural mappings from $X$ into $G_X$ and $A_X$, respectively.
\begin{defn} \label{D:inj} If $\iota_G$ is injective, then ${R_X}$ is said to be injective. \end{defn}
It is known from \cite{Sol00} that $\iota_G$ is injective if and only if so is $\iota_A$. Also, every symmetric YBE solution is injective.
Let ${R_X}$ and ${R_Y}$ be two YBE solutions. Recall that a mapping $h: X\to Y$ is a \textit{YB-homomorphism between ${R_X}$ and ${R_Y}$}, if ${R_Y}(h\times h)=(h\times h){R_X}$. This amounts to saying that \begin{align} \label{E:alphainv} \alpha^Y_{h(x)}(h(y))=h(\alpha^X_x(y))\quad \text{and}\quad \beta^Y_{h(x)}(h(y))=h(\beta^X_x(y)) \end{align} for all $x,y\in X$. In this case, we also say that $R_X$ is \textit{homomorphic to $R_Y$ via $h$}. Of course, if $h$ is bijective, then $R_X$ and $R_Y$ are called \textit{isomorphic}.
If ${R_X}$ and ${R_Y}$ are symmetric, then only one of the two identities in \eqref{E:alphainv} suffices.
\begin{prop} \label{P:symcon} Let ${R_X}$ and ${R_Y}$ be two arbitrary YBE solutions. If $R_X$ is homomorphic to $R_Y$ via $h$, then $h$ induces group homomorphisms $h_G: G_X\to G_Y$ and $h_A: A_X\to A_Y$, such that $h_A$ is $G_X$-equivariant relative to $(\rho^X,\rho^Y \circ h_G)$.
If $h$ is furthermore bijective, then $\rho^X$ and $\rho^Y \circ h_G$ are conjugate. \end{prop}
\begin{proof} For convenience, let ${R_X}(x_1,x_2)=(\alpha^X_{x_1}(x_2),\beta^X_{x_2}(x_1))$ for all $x_1,x_2\in X$, and ${R_Y}(y_1,y_2)=(\alpha^Y_{y_1}(y_2),\beta^Y_{y_2}(y_1))$ for all $y_1,y_2\in Y$.
Notice that since $h:X\to Y$ is a YB-homomorphism between $R_X$ and $R_Y$, it is easy to check that $h$ can be extended to a group homomorphism, say $h_G$, from $G_X$ to $G_Y$. Indeed, it follows from \eqref{E:alphainv} and the definition of $G_Y$ that \[ h(\alpha^X_x(y))h(\beta^X_y(x))=\alpha^Y_{h(x)}(h(y))\beta^Y_{h(y)}(h(x))=h(x)h(y) \] for all $x,y\in X$. Obviously, $\rho^Y\circ h_G$ is an affine action of $G_X$ on $A_Y$.
Similarly, one can extend $h$ to a group homomorphism, say $h_A$, from $A_X$ to $A_Y$. In fact, repeatedly using \eqref{E:alphainv} yields \begin{align*} &\ h(\alpha^X_x(y))\bullet h(\alpha^X_{\alpha^X_x(y)}(\beta^X_y(x)))\\ =&\ \alpha^Y_{h(x)}(h(y))\bullet \alpha^Y_{h(\alpha_x(y))}(h(\beta^X_y(x)))\\ =&\ \alpha^Y_{h(x)}(h(y))\bullet \alpha^Y_{\alpha^Y_{h(x)}(h(y))}(\beta^Y_{h(y)}(h(x))) \end{align*} for all $x,y\in X$. But the definition of $A_Y$ gives \[ h(x)\bullet \alpha^Y_{h(x)}(h(y))=\alpha^Y_{h(x)}(h(y))\bullet \alpha^Y_{\alpha^Y_{h(x)}(h(y))}(\beta^Y_{h(y)}(h(x))), \] and so \begin{align*} h(x)\bullet h(\alpha^X_x(y)) &=h(x)\bullet \alpha^Y_{h(x)}(h(y))\\ &=h(\alpha^X_x(y))\bullet h(\alpha^X_{\alpha^X_x(y)}(\beta^X_y(x)))\quad\text{for all}\quad x,y\in X. \end{align*}
In what follows, we show that $h_A$ is $G_X$-equivariant relative to $\rho^X$ and $\rho^Y\circ h_G$. By Remark \ref{R:equ} it is equivalent to show
\begin{align}
\label{E:halpha}
\alpha^Y_{h_G(g)}(h_A(a))&=h_A(\alpha^X_g(a))\quad\text{for all}\quad g\in G_X, a\in A_X,\\
\label{E:hb}
h_A(b^X(g))&=b^Y(h_G(g))\quad\text{for all}\quad g\in G_X.
\end{align} Applying \eqref{E:alphainv} and Proposition \ref{P:inv}, one has that \[ \alpha^Y_{h_G(g)}(h_A(x))=h_A(\alpha^X_g(x))\quad\text{for all}\quad g\in G_X, x\in X. \] Now from this identity and Proposition \ref{P:inv}, one can easily verify \eqref{E:halpha}.
For \eqref{E:hb}, first notice that it is true when $g\in X$, as both sides are equal to $h(g)$. Then the general case follows from \eqref{E:halpha} and the definition of 1-cocycles.
The last assertion of the proposition is clear. \end{proof}
The following theorem generalizes the case of symmetric YBE solutions (cf., e.g., \cite{ESS}).
\begin{thm} \label{T:giso} Let ${R_X}$ and ${R_Y}$ be two injective YBE solutions. Then they are isomorphic, if and only if there is a group isomorphism $\phi:G_X\to G_Y$ such that $\phi(X)=Y$, and $\rho^X$ and $\rho^Y\circ\phi$ are conjugate. \end{thm}
\begin{proof} \underline{``Only if" part:} Let $h:X\to Y$ be a YB-isomorphism between ${R_X}$ and ${R_Y}$. Keep the same notation used in the proof of Proposition \ref{P:symcon}. Then $\phi:=h_G$ has all desired properties, and furthermore $\rho^X$ and $\rho^Y\circ \phi$ are conjugate via $h_A$.
\underline{``If" part:} As before, write $\rho^X=(\alpha^X, b^X)$ and $\rho^Y=(\alpha^Y, b^Y)$. Let $h: A_X\to A_Y$ be a $G_X$-equivariant mapping relative to $(\rho^X, \rho^Y \circ\phi)$. Then by Remark \ref{R:equ} we have \begin{align} \label{E:alpha} h \circ\alpha^X_g&=\alpha^Y_{\phi(g)}\circ h,\\ \label{E:b} b^Y \circ \phi(g) &=h\circ b^X(g) \end{align} for all $g\in G_X$.
On the other hand, it follows from the proof of Theorem \ref{T:gYBE} and Remark \ref{R: extension} that $\rho^X$ and $\rho^Y$ induce YBE solutions $\bar{R}_X$ and $\bar{R}_Y$ on $G_X$ and $G_Y$, respectively. Actually, \begin{align*} \bar{R}_X(g_1,g_2)&=(\tilde\alpha^X_{g_1}(g_2), \tilde\beta^X_{g_2}(g_1))\quad\text{for all}\quad g_1,g_2\in G_X, \\ \bar{R}_Y(g_1',g_2')&=(\tilde\alpha^Y_{g_1'}(g_2'), \tilde\beta^Y_{g_2'}(g_1'))\quad\text{for all}\quad g_1',g_2'\in G_Y, \end{align*} where \[ \tilde\alpha^X_{g_1}:=(b^X)^{-1}\alpha^X_{g_1}b^X, \quad\tilde\beta^X_{g_2}(g_1):=\tilde\alpha^X_{g_1}(g_2)^{-1}g_1g_2, \] and similarly for $\tilde\alpha^Y$, $\tilde\beta^Y$.
From \eqref{E:b} one has \begin{align} \label{E:tildeh} \phi=(b^Y)^{-1}\circ h\circ b^X. \end{align} We claim that $\phi$ is actually a YB-isomorphism between $\bar{R}_X$ and $\bar{R}_Y$. To this end, we must show that the two identities in \eqref{E:alphainv} hold true.
$\blacktriangleright$ Firstly, we check \[ \phi\circ\tilde\alpha^X_g=\tilde\alpha^Y_{\phi(g)}\circ\phi\quad\text{for all}\quad g\in G_X. \] But this follows from \eqref{E:alpha} and the definitions of $\tilde\alpha^X$ and $\tilde\alpha^Y$: \begin{align*} \eqref{E:alpha}\Rightarrow &\ hb^X((b^X)^{-1} \alpha^X_gb^X)=b^Y((b^Y)^{-1}\alpha^Y_{\phi(g)}b^Y)((b^Y)^{-1} hb^X)\\ \Rightarrow&\ [(b^Y)^{-1}hb^X][(b^X)^{-1} \alpha^X_gb^X]=[(b^Y)^{-1}\alpha^Y_{\phi(g)}b^Y][(b^Y)^{-1} hb^X]\\ \Rightarrow&\ \phi \circ \tilde \alpha^X_g=\tilde\alpha^Y_{\phi(g)}\circ\phi. \end{align*}
$\blacktriangleright$ Secondly, we verify that \[ \phi\circ\tilde\beta^X_g=\tilde\beta^Y_{\phi(g)}\circ\phi\quad\text{for all}\quad g\in G_X. \]
Define a new multiplication $\odot$ on $G_X$ by \[ g_1\odot g_2=(b^{X})^{-1}(b^X_{g_1}\bullet b^X_{g_2}) \quad\text{for all}\quad g_1,g_2\in G_X, \] and similarly on $G_Y$. Then it is easy to check that $(G_X,\odot)$ and $(G_Y,\odot)$ are groups.
Since $b^X$ is a 1-cocycle with respect to $\alpha^X$ in coefficient $A_X$, one has that for all $g_1,g_2\in G_X$ \begin{align} \nonumber &\ b^X_{g_1g_2}=b^X_{g_1}\bullet\alpha^X_{g_1}(b^X_{g_2})\\ \nonumber \Rightarrow &\ g_1(b^X)^{-1}(\alpha^X_{g_1^{-1}}(b^X_{g_2}))=(b^X)^{-1}(b^X_{g_1}\bullet b^X_{g_2})=g_1\odot g_2\\ \nonumber \Rightarrow &\ g_1\tilde\alpha^X_{g_1^{-1}}(g_2)=g_1\odot g_2\\ \nonumber \Rightarrow &\ g_1^{-1}\tilde\alpha^X_{g_1}(g_2)=g_1^{-1}\odot g_2\\ \label{E:alpha-1} \Rightarrow &\ \tilde\alpha^X_{g_1}(g_2)^{-1}=(g_1^{-1}\odot g_2)^{-1}g_1^{-1}. \end{align} Similarly, \[ \tilde\alpha^Y_{g_1'}(g_2')^{-1}=(g_1'^{-1}\odot g_2')^{-1}g_1'^{-1}\quad\text{for all}\quad g_1',g_2'\in G_Y. \tag{12'} \]
In what follows, we claim that $\phi$ is also a group homomorphism from $(G_X, \odot)$ to $(G_Y, \odot)$. As a matter of fact, for all $g_1$, $g_2\in G_X$ one has \begin{align*} \phi(g_1\odot g_2) &=\phi\circ (b^X)^{-1}(b^X_{g_1}\bullet b^X_{g_2})\\ &=(b^Y)^{-1}\circ h(b^X_{g_1}\bullet b^X_{g_2})\ (\text{by }\eqref{E:b})\\ &=(b^Y)^{-1}(h(b^X_{g_1})\bullet h(b^X_{g_2}))\ (\text{as }h:A_X\to A_Y\text{ is a homomorphism})\\ &=(b^Y)^{-1}(b^Y_{\phi(g_1)}\bullet b^Y_{\phi(g_2)})\ (\text{by }\eqref{E:b})\\ &=\phi(g_1)\odot \phi(g_2) \ (\text{by the definition of }\odot\text{ in }G_Y). \end{align*}
We now have \begin{align*}
\phi\circ \tilde\beta^X_{g_2}(g_1) &=\phi(\tilde\alpha^X_{g_1}(g_2)^{-1}g_1g_2)\ (\text{by the definition of }\tilde\beta^X)\\ &=\phi((g_1^{-1}\odot g_2)^{-1}g_1^{-1}g_1g_2)\ (\text{by }\eqref{E:alpha-1})\\ &=\phi(g_1^{-1}\odot g_2)^{-1}\phi(g_2)\ (\text{as }\phi \text{ is a homomorphism from }G_X\text{ to }G_Y)\\ &=(\phi(g_1^{-1})\odot \phi(g_2))^{-1}\phi(g_2)\ (\text{by the above claim})\\ &=(\phi(g_1)^{-1}\odot \phi(g_2))^{-1}\phi(g_2)\ (\text{as }\phi \text{ is a homomorphism from }G_X\text{ to }G_Y)\\ &=(\phi(g_1)^{-1}\odot \phi(g_2))^{-1}\phi(g_1)^{-1}\phi(g_1)\phi(g_2)\\
&=\tilde\alpha^Y_{\phi(g_1)}(\phi(g_2))^{-1}\phi(g_1)\phi(g_2)\ (\text{by (12')})\\ &=\tilde\beta^Y_{\phi(g_2)}(\phi(g_1))\ (\text{by the definition of }\tilde\beta^Y) \end{align*} for all $g_1,g_2\in G_X$.
Therefore, $\phi$ is a YB-isomorphism between $\bar {R}_X$ and $\bar {R}_Y$.
Recall from Remark \ref{R: extension} that $\bar{R}_X$ is an extension of $R_X$ from $X$ to $G_X$ and similarly for $\bar R_Y$.
Since $R_X$ and $R_Y$ are injective and $\phi(X)=Y$, the restriction $\phi|_X$ yields a YB-isomorphism between ${R_X}$ and ${R_Y}$. \end{proof}
We are now ready to provide a characterization of when the extensions $\bar{R}_X$ and $\bar{R}_Y$ are isomorphic.
\begin{thm} \label{T:iso} Let ${R_X}$ and ${R_Y}$ be two arbitrary YBE solutions. Then the extensions $\bar R_X$ and $\bar R_Y$ on $G_X$ and $G_Y$ are YB-isomorphic, if and only if there is a group isomorphism $\phi:G_X\to G_Y$ such that $\rho^X$ and $\rho^Y\circ\phi$ are conjugate. \end{thm}
\begin{proof} $(\Leftarrow)$: It directly follows from the proof of ``If" part of Theorem \ref{T:giso}.
($\Rightarrow$): Let $h:G_X\to G_Y$ be a YB-isomorphism between $\bar R_X$ and $\bar R_Y$. Now consider $h|_{\iota_G(X)}$. Then, completely similarly to the proof of Proposition \ref{P:symcon}, $h|_{\iota_G(X)}$ can be extended to an isomorphism $\phi$ from $G_X$ to $G_Y$, such that $\rho^X$ and $\rho^Y\circ\phi$ are conjugate. \end{proof}
In the rest of this section, we provide a connection with C*-dynamical systems. For any group $G$, by $\mathrm{C}^*(G)$ we mean the group C*-algebra of $G$. Since all groups here are assumed to be discrete, $\mathrm{C}^*(G)$ is unital. Furthermore, $G$ embeds canonically into $\mathrm{C}^*(G)$ as its unitary generators. For the background on C*-dynamical systems needed below, we refer to \cite{Bla06}.
\begin{prop} \label{P:pi}
(i) A YBE solution ${R_X}$ determines an action $\pi^X$ of $G_X$ on $M_2(\mathrm{C}^*(A_X))$ such that \[ \pi_g^X(\operatorname{diag}(x,y))=\operatorname{diag}(\gamma_g(x), \zeta_g(y)) \quad\text{for all}\quad g\in G_X, x,y\in\mathrm{C}^*(A_X), \] where $\gamma$ and $\zeta$ are representations of $G_X$ on $\mathrm{C}^*(A_X)$.
(ii) If $h$ is a YB-homomorphism between ${R_X}$ and ${R_Y}$, then there are group homomorphisms $h_G : G_X\to G_Y$ and $h_A:A_X\to A_Y$ such that the inflation $h_A^{(2)}$ is $G_X$-equivariant relative to $(\pi^X,\pi^Y \circ h_G)$. \end{prop}
\begin{proof} (i) Let $\pi^X$ be defined as \begin{align*} \pi^X_g\left(\begin{matrix}a_1&a_2\\a_3&a_4\end{matrix}\right) &=\left(\begin{matrix}\alpha^X_g(a_1)&\alpha^X_g(a_2)(b^X_g)^*\\ b^X_g\alpha^X_g(a_3)&b^X_g\alpha^X_g(a_4)(b^X_g)^*\end{matrix}\right)\\ &=\left(\begin{matrix}1&0\\ 0& b^X_g\end{matrix}\right) (\alpha_g^X)^{(2)}\left(\begin{matrix}a_1&a_2\\ a_3&a_4\end{matrix}\right)
\left(\begin{matrix}1&0\\ 0& (b^X_g)^*\end{matrix}\right) \end{align*}
for all $g\in G_X$ and $a_1,a_2,a_3,a_4\in A_X$. Then one can use the properties of $\alpha$ and $b$ to easily check that $\pi^X$
is an action of $G_X$ on the matrix C*-algebra $M_2(\mathrm{C}^*(A_X))$. Also $\gamma_g(\cdot)=\alpha_g^X(\cdot)$ and $\zeta_g(\cdot)=b^X_g\alpha^X_g(\cdot)(b^X_g)^*$
are two representations of $G_X$ on $\mathrm{C}^*(A_X)$.
(ii) Since $h$ is a YB-homomorphism between $R_X$ and $R_Y$, as in the proof of Proposition \ref{P:symcon}, it induces group homomorphisms $h_G: G_X\to G_Y$ and $h_A:A_X\to A_Y$ satisfying \eqref{E:halpha} and \eqref{E:hb}. Then we extend $h_A$ to a C*-homomorphism, still denoted by $h_A$, from $\mathrm{C}^*(A_X)$ to $\mathrm{C}^*(A_Y)$. Furthermore, its inflation $h_A^{(2)}: M_2(\mathrm{C}^*(A_X))\to M_2(\mathrm{C}^*(A_Y))$ gives a $G_X$-equivariant mapping relative to $\pi^X$ and $\pi^Y\circ h_G$. In fact, a simple calculation gives \[ h_A^{(2)} \circ \pi^X_g\left(\begin{matrix}a_1&a_2\\a_3&a_4\end{matrix}\right) =\left(\begin{matrix} h_A(\alpha^X_g(a_1))&h_A(\alpha^X_g(a_2))h_A((b^X_g)^*)\\ h_A(b^X_g)h_A(\alpha^X_g(a_3))&h_A(b^X_g)h_A(\alpha^X_g(a_4))h_A((b^X_g)^*) \end{matrix}\right) \] and \[ \pi^Y_{h_G(g)}\circ h_A^{(2)}\left(\begin{matrix}a_1&a_2\\a_3&a_4\end{matrix}\right) =\left(\begin{matrix} \alpha_{h_G(g)}^Y(h_A(a_1))&\alpha_{h_G(g)}^Y(h_A(a_2))(b^Y_{h_G(g)})^*\\ b^Y_{h_G(g)}\alpha_{h_G(g)}^Y(h_A(a_3))&b^Y_{h_G(g)}\alpha_{h_G(g)}^Y(h_A(a_4)) (b^Y_{h_G(g)})^* \end{matrix}\right). \] Then applying \eqref{E:halpha} and \eqref{E:hb} shows that the right-hand sides are equal. \end{proof}
As a consequence of Proposition \ref{P:pi}, from the associated regular affine action $\rho^X$ of a given YBE solution $R_X$, one obtains a C*-dynamical system $(G_X, M_2(\mathrm{C}^*(A_X)),\pi^X)$.
\begin{thm} \label{T:ds} Two injective YBE solutions ${R_X}$ and ${R_Y}$ are isomorphic, if and only if there is a group isomorphism $\phi: G_X\to G_Y$ mapping $X$ onto $Y$ such that $(G_X, M_2(\mathrm{C}^*(A_X)),\pi^X)$ and $(G_X, M_2(\mathrm{C}^*(A_Y)),\pi^Y\circ \phi)$ are conjugate. \end{thm}
\begin{proof} ($\Rightarrow$): Keep the same notation as in the proof of Proposition \ref{P:pi}. If $h: X\to Y$ is a YB-isomorphism between ${R_X}$ and ${R_Y}$, then set $\phi:=h_G$; by the proof of Proposition \ref{P:pi}, $\pi^X$ and $\pi^Y\circ \phi$ are conjugate via $h_A^{(2)}$.
($\Leftarrow$): Let ${\mathfrak{h}}:M_2(\mathrm{C}^*(A_X))\to M_2(\mathrm{C}^*(A_Y))$ be an intertwining homomorphism between $\pi^X$ and $\pi^Y\circ \phi$. Let us write ${\mathfrak{h}}=\left(\begin{matrix} h_{11}&h_{12}\\ h_{21}& h_{22}\end{matrix}\right)$. Then ${\mathfrak{h}}\circ\pi_g^X\left(\begin{matrix} a&0\\0&0\end{matrix}\right)= \pi^Y_{\phi(g)}\circ{\mathfrak{h}}\left(\begin{matrix} a&0\\0&0\end{matrix}\right)$ yields \[ h_{11}\alpha^X_g(a)=\alpha^Y_{\phi(g)}h_{11}(a)\quad\text{for all}\quad a\in A_X. \] Also ${\mathfrak{h}}\circ\pi_g^X\left(\begin{matrix} 0&I\\0&0\end{matrix}\right)= \pi^Y_{\phi(g)}\circ{\mathfrak{h}}\left(\begin{matrix} 0&I\\ 0&0\end{matrix}\right)$ yields $ (h_{11}(b^X_g))^*=(b^Y_{\phi(g)})^*, $ which implies \[ h_{11}(b^X_g)=b^Y_{\phi(g)}. \] The above two identities give \eqref{E:alpha} and \eqref{E:b}. Then applying the proof of ``If" part of Theorem \ref{T:giso} ends the proof. \end{proof}
\appendix\section{A Commutation Relation for Semi-Direct Products}
\label{S:cycle}
In this appendix, we prove a commutation relation for semi-direct products of YBE solutions derived from cycle sets, which might be useful in future studies. We further describe a connection between the structure group of the semi-direct product of two YBE solutions and the semi-direct product of their structure groups.
\begin{defn} \label{D:cycle} A non-empty set $X$ with a binary operation $\cdot$ is called a cycle set, if \[ (x\cdot y)\cdot (x\cdot z)=(y\cdot x)\cdot (y\cdot z)\quad\text{for all}\quad x,y,z\in X. \]
A cycle set $X$ is said to be non-degenerate if $x\mapsto x\cdot x$ is bijective. \end{defn}
The main motivation to study cycle sets is the following theorem due to Rump (\cite{Rum05}): there is a one-to-one correspondence between the set of symmetric YBE solutions and the set of non-degenerate cycle sets.
In fact, let $(X,\cdot)$ be a non-degenerate cycle set. If we let $\ell_x(y)=x\cdot y$, then \[ R(x,y)=\left(\ell_{\ell^{-1}_y(x)}(y), \ell_y^{-1}(x)\right) \] is a symmetric YBE solution on $X$. Conversely, given a symmetric YBE solution $R(x,y)=(\alpha_x(y),\beta_y(x))$ on $X$, let $x\cdot y=\beta_x^{-1}(y)$. Then $(X,\cdot)$ is a non-degenerate cycle set.
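To illustrate the correspondence, consider the following standard example. Let $X={\mathbb{Z}}/n{\mathbb{Z}}$ with $x\cdot y:=y+1$ for all $x,y\in X$. Then \[ (x\cdot y)\cdot(x\cdot z)=z+2=(y\cdot x)\cdot(y\cdot z), \] so $(X,\cdot)$ is a cycle set, and it is non-degenerate as $x\mapsto x\cdot x=x+1$ is bijective. Here $\ell_x(y)=y+1$ for every $x$, and the recipe above gives \[ R(x,y)=\left(\ell_{\ell_y^{-1}(x)}(y),\ell_y^{-1}(x)\right)=(y+1,x-1), \] which is easily checked to be a symmetric YBE solution on $X$.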
The following small lemma turns out to be very handy.
\begin{lem} \label{L:xyx} Keep the above notation. Then \begin{align*} G_{R_X}={}_{{\rm gp}}\left\langle X; (y\cdot x)y=(x\cdot y)x\quad\text{for all}\quad x,y\in X\right\rangle. \end{align*} \end{lem}
\begin{proof} By Lemma \ref{L:basic}, $\alpha_x(y)=\beta^{-1}_{\beta_y(x)}(y)$ for all $x,y \in X$. Hence \begin{align*} R(x,y)=(\beta^{-1}_{\beta_y(x)}(y),\beta_y(x)) \Leftrightarrow &\ R(\beta_y^{-1}(x),y)=(\beta^{-1}_x(y),x)\\ \Leftrightarrow &\ R(y\cdot x,y)=(x\cdot y,x) \end{align*} for all $x,y\in X$. \end{proof}
Let us now recall Rump's semi-direct product of cycle sets below.
\begin{defn} \label{D:rump} Let $X$ and $S$ be two finite cycle sets, and $\pi$ be an action of $X$ on $S$. That is, $\pi: X\times S\to S, \ (x,s)\mapsto \pi_x(s)$, satisfies \begin{enumerate} \item $\pi_x(s\cdot t)=\pi_x(s)\cdot \pi_x(t)$ for every $x\in X$ and for all $s,t\in S$;
\item $\pi_{y\cdot x}\pi_y(s)=\pi_{x\cdot y}\pi_x(s)$ for all $x,y\in X$ and $s\in S$;
\item $\pi_x\in \operatorname{Sym}(S)$ for every $x\in X$. \end{enumerate} \end{defn}
Set \begin{align*} \gamma_{x,y}(s,t)=\pi_{x\cdot y}(s)\cdot \pi_{y\cdot x}(t). \end{align*} Now define \begin{align} \label{E:sdp} (x,s)\cdot (y,t):=(x\cdot y, \gamma_{x,y}(s,t)). \end{align} Then this gives a cycle structure on $X\times S$, which is denoted by $X\ltimes_{\pi} S$, called the \textit{semi-direct product of $X$ and $S$ by $\pi$}. The symmetric YBE solution determined by $X\ltimes_\pi S$ is written as $R_{X\ltimes_\pi S}$.
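As a quick sanity check on \eqref{E:sdp}, suppose that $\pi$ is the trivial action, i.e., $\pi_x=\operatorname{id}_S$ for every $x\in X$. Then $\gamma_{x,y}(s,t)=s\cdot t$, and \eqref{E:sdp} reduces to \[ (x,s)\cdot(y,t)=(x\cdot y, s\cdot t), \] so that $X\ltimes_\pi S$ is simply the direct product of the cycle sets $X$ and $S$.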
\begin{rem} \label{R:(i)} Notice that for Definition \ref{D:rump} (i) one has \begin{align*} \pi_x(s\cdot t)=\pi_x(s)\cdot \pi_x(t) &\Leftrightarrow \pi_x(\beta_s^{-1}(t))=\beta_{\pi_x(s)}^{-1}(\pi_x(t))\\ &\Leftrightarrow \beta_{\pi_x(s)}(\pi_x(t))=\pi_x(\beta_s(t))\\ &\Leftrightarrow \pi_x(\alpha_s(t))=\alpha_{\pi_x(s)}(\pi_x(t)) \end{align*} for all $s,t\in S$ and $x\in X$. In particular, this shows that, for every $x\in X$, $\pi_x$ is a YB-isomorphism between $R_S$ and itself. \end{rem}
In the sequel, let us write \[ R_X(x,y)=(\alpha_x(y),\beta_y(x))\quad\text{and}\quad R_S(s,t)=(\tilde\alpha_s(t), \tilde\beta_t(s)). \]
\begin{cor} \label{C:semi} Let $X$ and $S$ be cycle sets, and $\pi$ be an action of $X$ on $S$. Then the YBE solution $R_{X\ltimes_\pi S}$ is explicitly given by the following formula \[ R_{X\ltimes_\pi S}\big((x,s),(y,t)\big)=\left(\big(\alpha_x(y), \tilde\alpha_s(\pi_x(t))\big),\big(\beta_y(x),\pi_{\alpha_x(y)}^{-1}(\tilde\beta_{\pi_x(t)}(s))\big)\right) \] for all $x,y\in X$ and $s,t\in S$. \end{cor}
\begin{proof} First observe that \[ x\cdot y=\beta_x^{-1}(y)\Rightarrow x\cdot \beta_x(y)=y\Rightarrow y\cdot \beta_y(x)=x \] and \[ x\cdot y=\beta_x^{-1}(y)\Rightarrow \beta_y(x)\cdot y=\beta^{-1}_{\beta_y(x)}(y)=\alpha_x(y). \] The above identities will be frequently used in the sequel.
Suppose that \[ R_{X\ltimes_\pi S}((x,s),(y,t))=\big(\alpha'_{(x,s)}(y,t), \beta'_{(y,t)}(x,s)\big)\quad\text{for all}\quad x,y\in X, s,t\in S. \] Let $ \beta'_{(y,t)}(x,s)=(z,p). $ Then \begin{align*} (x,s) =\ell_{(y,t)}(z,p)=(y,t)\cdot (z,p) =(y\cdot z, \pi_{y\cdot z}(t)\cdot \pi_{z\cdot y}(p)). \end{align*} So $ z=\beta_y(x) $ and \begin{align*} s=\pi_{y\cdot z}(t)\cdot \pi_{z\cdot y}(p) &\Rightarrow\pi_{z\cdot y}(p)=\tilde\beta_{\pi_{y\cdot z}(t)}(s) \Rightarrow p=\pi_{z\cdot y}^{-1}(\tilde\beta_{\pi_{y\cdot z}(t)}(s))\\ &\Rightarrow p=\pi_{\beta_y(x)\cdot y}^{-1}(\tilde\beta_{\pi_{y\cdot \beta_y(x)}(t)}(s)) \Rightarrow p=\pi_{\alpha_x(y)}^{-1}(\tilde\beta_{\pi_x(t)}(s)). \end{align*} Thus \[ \beta'_{(y,t)}(x,s)=\big(\beta_y(x),\pi_{\alpha_x(y)}^{-1}(\tilde\beta_{\pi_x(t)}(s))\big). \]
Now \[ \alpha'_{(x,s)}(y,t)=\beta'^{-1}_{\beta'_{(y,t)}(x,s)}(y,t)=:(u,v). \] Then \begin{align*} (u,v) &=\big(\beta_y(x),\pi_{\alpha_x(y)}^{-1}(\tilde\beta_{\pi_x(t)}(s))\big)\cdot (y,t)\\ &=\big(\beta_y(x)\cdot y, \pi_{\beta_y(x)\cdot y}(\pi^{-1}_{\alpha_x(y)}(\tilde\beta_{\pi_x(t)}(s))\cdot \pi_{y\cdot \beta_y(x)}(t))\big)\\ &=\big(\beta_y(x)\cdot y, \pi_{\alpha_x(y)}(\pi^{-1}_{\alpha_x(y)}(\tilde\beta_{\pi_x(t)}(s))\cdot \pi_{x}(t))\big)\\ &=\big(\beta_y(x)\cdot y, \tilde\beta_{\pi_x(t)}(s)\cdot \pi_{x}(t)\big)\\ &=\big(\beta_y(x)\cdot y, \tilde\alpha_s(\pi_{x}(t))\big). \end{align*} Therefore \[ \alpha'_{(x,s)}(y,t)=\big(\beta_y(x)\cdot y, \tilde\alpha_s(\pi_{x}(t))\big). \] This ends the proof.
\end{proof}
\begin{lem} \label{L:caction} Let $X$ and $S$ be cycle sets, and $\pi$ be an action of $X$ on $S$. Then $\pi$ can be extended to an action of $G_{R_X}$ on $G_{R_S}$. \end{lem}
\begin{proof} Notice that, in Definition \ref{D:rump}, (i) says that $\pi_x$ is a cycle morphism on $S$ for every $x\in X$, and (ii) says that $\pi_{y\cdot x}\pi_y=\pi_{x\cdot y}\pi_x$ for all $x, y\in X$. Thus Lemma \ref{L:xyx} and the latter imply that the action $\pi$ can be extended to an action $ \pi: G_{R_X}\curvearrowright S. $
Applying Lemma \ref{L:xyx} to $G_{R_S}$, one has \begin{align*} (s\cdot t) s=(t\cdot s)t\quad\text{for all}\quad s, t\in S. \end{align*} Since $\pi_x(s), \pi_x(t)\in S$ by Definition \ref{D:rump} (iii),
replacing $s$ and $t$ by $\pi_x(s)$ and $\pi_x(t)$, respectively, in the identity obtained above gives \[ (\pi_x(s)\cdot \pi_x(t))\pi_x(s)= (\pi_x(t)\cdot \pi_x(s))\pi_x(t)\quad\text{for all}\quad x\in X, s,t\in S. \] This implies \[ \pi_x(s\cdot t)\pi_x(s)= \pi_x(t\cdot s)\pi_x(t)\quad\text{for all}\quad x\in X, s,t\in S \] as $\pi_x$ is a cycle morphism on $S$. Therefore, by Lemma \ref{L:xyx}, $\pi$ can be extended to an action $\pi: G_{R_X}\curvearrowright G_{R_S}$. \end{proof}
Under the conditions of Lemma \ref{L:caction}, one can form the semi-direct product $G_{R_X}\ltimes_\pi G_{R_S}$, where \[ \pi: G_{R_X}\times G_{R_S}\to G_{R_S}, \ (x,s)\mapsto \pi_x(s). \] It is also worth mentioning that the identity \eqref{E:asemi} automatically holds true for the action $\pi$ of $G_{R_X}$ on $G_{R_S}$ obtained in Lemma \ref{L:caction}. In fact, let $\theta=\pi$; then it suffices to show that $\pi_x(\tilde b^{-1}\alpha_s \tilde b)=(\tilde b^{-1}\alpha_{\pi_x(s)}\tilde b)\pi_x$ for all $x\in X$ and $s\in S$. But the restrictions of $b$ and $\tilde b$ to $X$ and $S$ are the identity mappings, so this amounts to $\pi_x(\alpha_s(t))=\alpha_{\pi_x(s)}(\pi_x(t))$, which holds true by Remark \ref{R:(i)}.
Therefore, one obtains a regular affine action $\rho^X\ltimes_\pi\rho^S$ of $G_{R_X}\ltimes_\pi G_{R_S}$ on $A_{R_X}\times A_{R_S}$ (cf. Subsection \ref{SS:aff}), and so a YBE solution $\hat R$ on $G_{R_X}\ltimes_\pi G_{R_S}$ (cf. Subsection \ref{SS:constructing}). It is natural to write $\hat R|_{(X\times S)^2}$ as $R_X\ltimes_\pi R_S$, called the \textit{semi-direct product of $R_X$ and $R_S$ by $\pi$}. Then we obtain the following commutation relation:
\begin{prop}[\textbf{Commutation Relation for Semi-Direct Products}] \label{P:RXRS} Let $X$ and $S$ be cycle sets, and $\pi$ be an action of $X$ on $S$. Then \[ R_X\ltimes_\pi R_S=R_{X\ltimes_\pi S}. \] \end{prop}
\begin{proof} Assume that \[ \hat R((x,s),(y,t))=\big(\hat\alpha_{(x,s)}(y,t), \hat\beta_{(y,t)}(x,s)\big)\quad\text{for all}\quad x,y\in X, s,t\in S. \] It follows from Subsection \ref{SS:aff} $2^\circ$ that \[ \hat \alpha_{(x,s)}(y,t)=(\alpha_x(y),\tilde\alpha_s(\pi_x(t))). \] Then an easy calculation yields \begin{align*} \hat\beta_{(y,t)}(x,s) &=\hat\alpha_{(x,s)}(y,t)^{-1}(xy,s\pi_x(t))\\ &=(\alpha_x(y)^{-1}xy, \pi_{\alpha_x(y)^{-1}}(\tilde\alpha_s(\pi_x(t))^{-1}s\pi_x(t)))\\ &=(\beta_y(x), \pi_{\alpha_x(y)}^{-1}(\tilde\beta_{\pi_x(t)}(s))). \end{align*} Therefore comparing with the formula of $R_{X\ltimes_\pi S}$ given in Corollary \ref{C:semi} yields the desired commutation relation. \end{proof}
For the structure groups $G_{R_X}$, $G_{R_S}$ and $G_{R_X\ltimes_\pi R_S}$, we have the following:
\begin{prop} \label{P:semidp} Keep the above notation. Then there is a group homomorphism \[ \Pi: G_{R_X\ltimes_\pi R_S}\to G_{R_X}\ltimes_\pi G_{R_S}. \] \end{prop}
\begin{proof} By Proposition \ref{P:RXRS}, $G_{R_X\ltimes_\pi R_S}=G_{R_{X\ltimes_\pi S}}$. Applying Lemma \ref{L:xyx} to $G_{R_{X\ltimes_\pi S}}$, we have the following relations \[ ((x,s)\cdot (y,t))(x,s)=((y,t)\cdot (x,s))(y,t)\quad\text{for all}\quad x,y\in X, \ s,t\in S. \] From \eqref{E:sdp}, this is equivalent to \[ (x\cdot y, \gamma_{x,y}(s,t))(x,s)=(y\cdot x, \gamma_{y,x}(t,s))(y,t). \]
Let $\Pi: X\ltimes_\pi S\to G_{R_X}\ltimes_\pi G_{R_S}$ be defined via \[ \Pi(x,s)=(x,s)\quad\text{for all}\quad x\in X, s\in S. \] Simple calculations show that \begin{align*}
\Pi(x\cdot y, \gamma_{x,y}(s,t))\, \Pi(x,s) &=\big(x\cdot y, \pi_{x\cdot y}(s)\cdot \pi_{y\cdot x}(t)\big)\, (x,s)\\ &=\big((x\cdot y)x, (\pi_{x\cdot y}(s)\cdot \pi_{y\cdot x}(t))\pi_{x\cdot y}(s)\big) \end{align*} and \begin{align*}
\Pi(y\cdot x, \gamma_{y,x}(t,s))\, \Pi(y,t) &=(y\cdot x, \pi_{y\cdot x}(t)\cdot \pi_{x\cdot y}(s))(y,t)\\ &=\big((y\cdot x)y, (\pi_{y\cdot x}(t)\cdot \pi_{x\cdot y}(s))\pi_{y\cdot x}(t)\big). \end{align*}
Therefore, by Lemma \ref{L:xyx}, $\Pi$ can be extended to a group homomorphism, still denoted by $\Pi$, from $G_{R_X\ltimes_\pi R_S}$ to $G_{R_X}\ltimes_\pi G_{R_S}$.
\end{proof}
In general, the homomorphism $\Pi$ obtained in Proposition \ref{P:semidp} is not an isomorphism. For instance, if $R_X$ and $R_S$ are the trivial YBE solutions, and $\pi$ is the trivial action of $X$ on $S$, then $G_{R_X\ltimes_\pi R_S}\cong {\mathbb{Z}}^{X\times S}$ by Corollary \ref{C:semi}, while $G_{R_X}\ltimes_\pi G_{R_S}=G_{R_X}\times G_{R_S}\cong {\mathbb{Z}}^{X\sqcup S}$; these free abelian groups have ranks $|X||S|$ and $|X|+|S|$, respectively, which differ in general.
\end{document} | arXiv |
\begin{document}
\title[On the Zero First Eigenvalue of the Conformal Laplacian]{The Conformal Laplacian and The Kazdan-Warner Problem: Zero First Eigenvalue Case} \author[J. Xu]{Jie Xu} \address{ Department of Mathematics and Statistics, Boston University, Boston, MA, U.S.A.} \email{[email protected]} \address{ Institute for Theoretical Sciences, Westlake University, Hangzhou, Zhejiang Province, China} \email{[email protected]}
\date{}
\maketitle
\begin{abstract} In this article, we first show that given a smooth function $ S $ either on a closed manifold $ (M, g) $ or a compact manifold $ (\bar{M}, g) $ with non-empty boundary, in both cases of dimension at least $ 3 $, the condition that $ S \equiv 0 $, or that $ S $ changes sign and $ \int_{M} S d\text{Vol}_{g} < 0 $ (with zero mean curvature if the boundary is non-empty), is both necessary and sufficient for the prescribed scalar curvature problem within the conformal class $ [g] $, provided that the first eigenvalue of the conformal Laplacian is zero. We then extend the same necessary and sufficient condition, in terms of a prescribed Gauss curvature function and zero geodesic curvature, to compact Riemann surfaces with non-empty boundary, provided that the Euler characteristic is zero. These results are the first full extensions since the results of Kazdan and Warner \cite{KW2} on the 2-dimensional torus, and of Escobar and Schoen \cite{ESS} on closed manifolds of dimensions $ 3 $ and $ 4 $. We then give results on prescribing nonzero scalar and mean curvature on $ (\bar{M}, g) $, still with zero first eigenvalue and dimension at least $ 3 $. Analogously, results on prescribing Gauss and geodesic curvature on compact Riemann surfaces with boundary are given in the zero Euler characteristic case. Lastly, we show a generalization of the Han-Li conjecture. Technically, the key step for manifolds of dimension at least $ 3 $ is to combine local variational methods, local Yamabe-type equations, and a new version of the monotone iteration scheme. The key features include the smoothness of the upper solution and the technical difference between constant and non-constant prescribed scalar curvature functions. \end{abstract}
\section{Introduction} In this article, we give necessary and sufficient conditions for the prescribed scalar curvature problem within a conformal class of metrics $ [g] $, both on closed manifolds $ (M, g) $ and on compact manifolds $ (\bar{M}, g) $ with non-empty smooth boundary $ \partial M $, of dimensions $ n = \dim M \geqslant 3 $ or $ n = \dim \bar{M} \geqslant 3 $, provided that the first eigenvalue of the conformal Laplacian (with an appropriate Robin boundary condition when the boundary is non-empty) is zero. We also give necessary and sufficient conditions for the prescribed Gauss curvature problem under conformal deformation on compact Riemann surfaces with non-empty boundary. This problem was solved on closed Riemann surfaces by Kazdan and Warner \cite{KW2} in 1974 in the analogous zero Euler characteristic case. When the dimension is either $ 3 $ or $ 4 $, the problem was solved by Escobar and Schoen \cite{ESS} in 1986. To the best of our knowledge, this article is the first progress in this direction since then. The results here completely solve the problem for closed manifolds, for compact manifolds with non-empty, smooth, minimal boundary (zero mean curvature after some conformal change), and for compact Riemann surfaces with zero geodesic curvature after some appropriate conformal change. Our methods, which involve a local variational method, the construction of lower and upper solutions of a nonlinear PDE, and the monotone iteration scheme, are not dimension-specific and apply to all manifolds of dimension at least $ 3 $. In the $ 2 $-dimensional case, we apply a variational method, since the PDE we are dealing with is different.
In addition, we show the existence of a positive, smooth solution of a local Yamabe equation with Dirichlet boundary condition in which the coefficient function of the nonlinear term $ u^{\frac{n + 2}{n - 2}} $ is non-constant. This generalizes the local Yamabe equation with a constant coefficient in the nonlinear term \cite{XU6}; this local result plays a key role in solving the global problems mentioned above when the dimensions of the manifolds are at least $ 3 $. We also give new results for prescribing non-zero scalar and mean curvature functions under conformal deformation on $ (\bar{M}, g) $ with zero first eigenvalue of the conformal Laplacian. Thanks to the new version of the monotone iteration scheme, we obtain an extension of the Han-Li conjecture, which was first stated in \cite{HL} and completely proved in \cite{XU5}.
Given a compact manifold whose dimension is at least $ 3 $, possibly with boundary, the Kazdan-Warner problem asks which functions can be realized as the scalar and mean curvature of a metric $ \tilde{g} $ pointwise conformal to the original metric $ g $. Interchangeably, we call such a $ \tilde{g} \in [g] $ a Yamabe metric. Let $ R_{g} $ and $ h_{g} $ be the scalar curvature and mean curvature of a metric $ g $. Let $ a = \frac{4(n - 1)}{n -2} $, $ p = \frac{2n}{n - 2} $, and let $ -\Delta_{g} $ be the positive definite Laplace-Beltrami operator. We define $ \eta_{1} $ to be the first eigenvalue of the conformal Laplacian $ \Box_{g}u : = -a\Delta_{g} u + R_{g} u $ on a closed manifold $ (M, g) $: \begin{equation*} \Box_{g} \varphi = -a\Delta_{g} \varphi + R_{g} \varphi = \eta_{1} \varphi \; {\rm in} \; M. \end{equation*} Here the positive function $ \varphi \in \mathcal C^{\infty}(M) $ is the associated first eigenfunction. Similarly, we define the first eigenvalue $ \eta_{1}' $ of the conformal Laplacian $ \Box_{g} $ with Robin boundary condition on a compact manifold with non-empty boundary $ \partial M $: \begin{equation*} \Box_{g} \varphi' = -a\Delta_{g} \varphi' + R_{g} \varphi' = \eta_{1}' \varphi' \; {\rm in} \; M, \quad B_{g} \varphi' : = \frac{\partial \varphi'}{\partial \nu} + \frac{2}{p - 2} h_{g} \varphi' = 0 \; {\rm on} \; \partial M \end{equation*} with the associated first eigenfunction $ \varphi' $.
The first two main results of this article are listed as follows; they are proven in Theorem \ref{zero:thm1} and Theorem \ref{zero:thm2}, respectively. We would like to point out that essentially the same methods apply to both the closed manifold case and the compact manifold with boundary case; these methods involve no boundary issues at all.
\begin{theorem}\label{intro:thm1} Let $ (M, g) $ be a closed manifold, $ n = \dim M \geqslant 3 $. Let $ S \in \mathcal C^{\infty}(M) $ be a given function. Assume that $ \eta_{1} = 0 $. If the function $ S $ satisfies \begin{equation*} \begin{split} & S \equiv 0; \\ & \text{or $ S $ changes sign and} \; \int_{M} S d\text{Vol}_{g} < 0, \end{split} \end{equation*} then $ S $ can be realized as a prescribed scalar curvature function of some pointwise conformal metric $ \tilde{g} \in [g] $. \end{theorem}
\begin{theorem}\label{intro:thm2} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $. Let $ S \in \mathcal C^{\infty}(\bar{M}) $ be a given function. Assume that $ \eta_{1}' = 0 $. If the function $ S $ satisfies \begin{equation*} \begin{split} & S \equiv 0; \\ & \text{or $ S $ changes sign and} \; \int_{M} S d\text{Vol}_{g} < 0, \end{split} \end{equation*} then $ S $ can be realized as a prescribed scalar curvature function of some pointwise conformal metric $ \tilde{g} \in [g] $ with minimal boundary ($ h_{\tilde{g}} \equiv 0 $). \end{theorem} When the manifold is a compact Riemann surface with non-empty smooth boundary, we denote by $ K_{g} $ and $ \sigma_{g} $ the Gauss and geodesic curvature, respectively. The next result, proven in Theorem \ref{de2:thm1}, is the $ 2 $-dimensional analogue of Theorem \ref{intro:thm2}.
\begin{theorem}\label{intro:thm3} Let $ (\bar{M}, g) $ be a compact Riemann surface with non-empty smooth boundary $ \partial M $. Let $ K \in \mathcal C^{\infty}(\bar{M}) $ be a given function. Assume that $ \chi(\bar{M}) = 0 $. If the function $ K $ satisfies \begin{equation*} \begin{split} & K \equiv 0; \\ & \text{or $ K $ changes sign and} \; \int_{M} K d\text{Vol}_{g} < 0, \end{split} \end{equation*} then $ K $ can be realized as a prescribed Gauss curvature function of some pointwise conformal metric $ \tilde{g} \in [g] $ with $ \sigma_{\tilde{g}} \equiv 0 $. \end{theorem} As in the Yamabe problem \cite{PL}, the Kazdan-Warner problem for prescribing functions $ S, H $ on compact manifolds of dimension at least $ 3 $ reduces to the existence of a positive, smooth solution of the following PDEs; the first is for closed manifolds, and the second is for compact manifolds with non-empty smooth boundary: \begin{equation}\label{intro:eqn1} -a\Delta_{g} u + R_{g} u = S u^{p-1} \; {\rm in} \; M. \end{equation} \begin{equation}\label{intro:eqn2} -a\Delta_{g} u + R_{g} u = S u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} + \frac{2}{p-2} h_{g} u = \frac{2}{p-2} H u^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{equation} We point out that the result of Theorem \ref{intro:thm2} is the special case of (\ref{intro:eqn2}) with $ H \equiv 0 $. One key technique for solving these PDEs is a new version of the monotone iteration scheme for nonlinear elliptic PDEs, given in Theorem \ref{pre:thm4}. The monotone iteration scheme requires the construction of upper and lower solutions. As shown in \cite{XU4, XU5, XU6, XU7, XU3}, we use the solution of a local Yamabe equation to construct the global lower solution, and apply a gluing technique between the local solution and a candidate upper solution to construct the global upper solution.
We would like to point out two key features: (a) the upper solution we construct is a smooth function, not just piecewise smooth; (b) the iterative methods differ according to whether the prescribed scalar curvature is a global constant or not. This procedure was first developed for solving the Yamabe problem, the Escobar problem, and prescribed scalar curvature problems in the positive first eigenvalue, or equivalently, positive Yamabe invariant case. The iterative method is inspired by previous work on nonlinear elliptic PDEs and Yamabe-type problems on Euclidean spaces in \cite{XU2, XU}.
On a compact Riemann surface with non-empty smooth boundary, the problem of prescribing functions $ K, \sigma \in \mathcal C^{\infty}(\bar{M}) $ reduces to the existence of a smooth solution $ u $ of the following PDE: \begin{equation}\label{intro:eqn3} -\Delta_{g} u + K_{g} = K e^{2u} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} + \sigma_{g} = \sigma e^{u} \; {\rm on} \; \partial M. \end{equation} When $ \chi(\bar{M}) = 0 $, we apply a variational method due to Kazdan and Warner \cite[Thm.~5.3]{KW2}. The reason for using a different method, instead of the monotone iteration scheme, is in part the lack of a local solution. Note that $ \chi(\bar{M}) = 0 $ is the $ 2 $-dimensional analogue of the classification $ \eta_{1} = 0 $ or $ \eta_{1}' = 0 $ in the higher dimensional case.
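The assumption $ \chi(\bar{M}) = 0 $ interacts with (\ref{intro:eqn3}) through the Gauss-Bonnet theorem: for any metric $ \tilde{g} = e^{2u} g \in [g] $ we have \begin{equation*} \int_{M} K_{\tilde{g}} d\text{Vol}_{\tilde{g}} + \int_{\partial M} \sigma_{\tilde{g}} ds_{\tilde{g}} = 2\pi \chi(\bar{M}) = 0, \end{equation*} so if $ \sigma_{\tilde{g}} \equiv 0 $ then $ \int_{M} K e^{2u} d\text{Vol}_{g} = 0 $, and hence either $ K \equiv 0 $ or $ K $ must change sign. This explains the shape of the condition in Theorem \ref{intro:thm3}.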
We also give results for prescribing scalar and non-zero mean curvature functions on $ (\bar{M}, g) $, provided that $ \eta_{1}' = 0 $. The next main result, which is proven in Theorem \ref{zerog:thm1}, Corollary \ref{zerog:cor1} and Corollary \ref{zerog:cor2}, is given below:
\begin{theorem}\label{intro:thm4} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $. Let $ S, H \in \mathcal C^{\infty}(\bar{M}) $ be given nonzero functions. Assume that $ \eta_{1}' = 0 $. If the function $ S $ satisfies \begin{equation*} \text{$ S $ changes sign and} \; \int_{M} S d\text{Vol}_{g} < 0, \end{equation*} then there exists a pointwise conformal metric $ \tilde{g} \in [g] $ that has scalar curvature $ R_{\tilde{g}} = S $ and mean curvature $ h_{\tilde{g}} = cH $ for some small enough positive constant $ c $. \end{theorem} Beginning with Aubin \cite{Aubin}, Kazdan and Warner \cite{KW2, KW3, KW}, and Nirenberg \cite{AC}, many people have made great progress on the so-called Kazdan-Warner problem. On closed manifolds, other contributions include the complete solution of the Yamabe problem due to Yamabe, Trudinger, Aubin and Schoen \cite{PL}, and the prescribed scalar curvature problems on $ \mathbb{S}^{n} $ by Y. Y. Li \cite{YYL}, Schoen and Yau \cite{SY}, Bourguignon and Ezin \cite{BE}, Malchiodi and Mayer \cite{MaMa}, etc. We gave a comprehensive solution of the prescribed scalar curvature problem with positive first eigenvalue in \cite{XU6}. On compact manifolds with non-empty boundary, Escobar \cite{ESC, ESC2, Escobar2, ESS} introduced the boundary Yamabe problem and discussed many variations in prescribing different types of scalar and mean curvature functions. Other contributions include the work of Marques, Brendle, X.Z. Chen, etc.; see \cite{BM}. For constant scalar and mean curvatures, we gave a complete solution of the Escobar problem in \cite{XU4}, and of the Han-Li conjecture in \cite{XU5}. For the positive first eigenvalue case with Robin condition, we gave a comprehensive discussion of the prescribed scalar curvature problem with minimal boundary in \cite{XU6}. For Dirichlet boundary conditions, we gave a generalization of the ``Trichotomy Theorem" in \cite{XU7}.
Due to the new monotone iteration scheme in Theorem \ref{pre:thm4}, we can give a generalization of the Han-Li conjecture for the positive and negative first eigenvalue cases:
\begin{theorem}\label{intro:thm5} Let $ (\bar{M}, g) $ be a compact manifold with smooth boundary, $ \dim \bar{M} \geqslant 3 $. Let $ \eta_{1}' $ be the first eigenvalue of the boundary eigenvalue problem $ \Box_{g} u = \eta_{1}' u $ in $ M $, $ B_{g} u = 0 $ on $ \partial M $. Then: \\ \begin{enumerate}[(i).] \item If $ \eta_{1}' < 0 $, (\ref{intro:eqn2}) with constant functions $ S = \lambda \in \mathbb R $, $ H = \zeta \in \mathbb R $ admits a real, positive solution $ u \in \mathcal C^{\infty}(\bar{M}) $ with some $ \lambda < 0 $ and $ \zeta < 0 $; \item If $ \eta_{1}' > 0 $, (\ref{intro:eqn2}) with constant functions $ S = \lambda \in \mathbb R $, $ H = \zeta \in \mathbb R $ admits a real, positive solution $ u \in \mathcal C^{\infty}(\bar{M}) $ with some $ \lambda > 0 $ and $ \zeta < 0 $. \end{enumerate} \end{theorem} The original Han-Li conjecture \cite{HL} was stated for positive constant mean curvatures. The new monotone iteration scheme removes the restriction that the prescribed function $ H $ be positive on $ \partial M $.
This article is organized as follows:
In \S2, we give essential definitions and tools for later sections, such as Sobolev spaces, $ W^{s, q} $-type elliptic regularity, etc. We assume general background in elliptic PDE theory and Sobolev embeddings. We then introduce two versions of the monotone iteration scheme, one for closed manifolds in Theorem \ref{pre:thm3} and the other for compact manifolds with boundary in Theorem \ref{pre:thm4}.
In \S3, we introduce the theory of the local Yamabe-type equation $ -a\Delta_{g} u + R_{g} u = f u^{p-1} \; {\rm in} \; \Omega, u = 0 \; {\rm on} \; \partial \Omega $ on a small enough Riemannian domain $ (\Omega, g) $. In Proposition \ref{local:prop2}, we show the existence of a positive, smooth solution of the local Yamabe-type equation for a non-constant positive function $ f $, which generalizes the constant-function case given in \cite[Lemma.~3.2]{XU6}. This result holds when $ \dim \Omega \geqslant 3 $ and $ g $ is not locally conformally flat, and thus is not dimensionally specific. When $ g $ is locally conformally flat, we introduce the local solution in Proposition \ref{local:prop3}.
In \S4, we give the necessary and sufficient condition for prescribing a scalar curvature function $ S $ on both closed manifolds and compact manifolds with non-empty minimal boundary, of dimension at least $ 3 $, provided that $ \eta_{1} = 0 $ (resp. $ \eta_{1}' = 0 $). The condition, that $ S \equiv 0 $ or that $ S $ changes sign with $ \int_{M} S d\text{Vol}_{g} < 0 $, is exactly the same as for the $ 2 $-dimensional torus and for dimensions $ 3 $ and $ 4 $. We construct lower solutions in Proposition \ref{zero:prop3} and upper solutions in Proposition \ref{zero:prop4}. The two major results are given in Theorem \ref{zero:thm1} for closed manifolds $ (M, g) $, and in Theorem \ref{zero:thm2} for compact manifolds $ (\bar{M}, g) $ with non-empty, smooth, minimal boundary.
In \S5, we generalize the problems of prescribing the scalar curvature function $ S \in \mathcal C^{\infty}(\bar{M}) $ and the mean curvature function $ H \in \mathcal C^{\infty}(\partial M) $ discussed in \S4, now assuming that both $ S $ and $ H $ are non-zero functions. This section begins with an inequality between the average of the scalar curvature and the average of the mean curvature within the conformal class. Using the monotone iteration schemes, we show that if $ S $ changes sign and $ \int_{M} S d\text{Vol}_{g} < 0 $, then there exists some Yamabe metric $ \tilde{g} \in [g] $ that has scalar curvature $ S $ and mean curvature $ cH $ for some small enough positive constant $ c $. In particular, Theorem \ref{zerog:thm1} handles the case $ \int_{\partial M} H dS_{g} > 0 $; Corollary \ref{zerog:cor1} is for the case $ \int_{\partial M} H dS_{g} = 0 $; and Corollary \ref{zerog:cor2} is for the case $ \int_{\partial M} H dS_{g} < 0 $. We conjecture that the condition that $ S $ changes sign and $ \int_{M} S d\text{Vol}_{g} < 0 $ is also necessary when $ \eta_{1}' = 0 $, but we can only give a partial justification.
In \S6, we apply a global variational method to obtain a necessary and sufficient condition for prescribing a Gauss curvature function $ K $ on a compact Riemann surface with non-empty boundary for some Yamabe metric $ \tilde{g} \in [g] $ with $ \sigma_{\tilde{g}} = 0 $. The main result, Theorem \ref{de2:thm1}, is an extension of the result on the $ 2 $-dimensional torus \cite[Thm.~5.3]{KW2}.
In \S7, we give a generalization of the Han-Li conjecture for the following two cases: \begin{enumerate}[(i).] \item If $ \eta_{1}' < 0 $, then (\ref{intro:eqn2}) admits a positive, smooth solution with some $ S = \lambda < 0 $ and $ H = \zeta < 0 $; \item If $ \eta_{1}' > 0 $, then (\ref{intro:eqn2}) admits a positive, smooth solution with some $ S = \lambda > 0 $ and $ H = \zeta < 0 $. \end{enumerate} We prove Case (i) in Theorem \ref{HL:thm2} and Case (ii) in Theorem \ref{HL:thm3}. Overall, the key step is the new version of the monotone iteration scheme in Theorem \ref{pre:thm4}, which is used in proving all major results. In addition, the systematic procedure (local variational method, local Yamabe-type equation, gluing method, and iteration scheme) developed in our previous work is also crucial.
\section{The Preliminaries and The Monotone Iteration Schemes} In this section, we introduce the Sobolev spaces both on manifolds and on local domains. We then introduce the $ \mathcal L^{q} $-type elliptic regularity, both on closed manifolds and on compact manifolds with non-empty boundary. Next we introduce essential tools for later sections, especially variations of the monotone iteration schemes. The new variation of the monotone iteration scheme is quite useful in dealing with prescribed non-constant scalar and mean curvature problems. We assume the background of standard elliptic theory, the strong and weak maximum principles for second order elliptic operators, the $ W^{s, q} $-type and $ \mathcal C^{0, \alpha} $-type Sobolev embeddings, the trace theorem, etc.
We begin with some setup. Let $ n $ be the dimension of the compact manifold, with or without boundary. Let $ \Omega $ be a connected, bounded, open subset of $ \mathbb R^{n} $ with smooth boundary $ \partial \Omega $, equipped with some Riemannian metric $ g $ that can be extended smoothly to $ \bar{\Omega} $. We call $ (\Omega, g) $ a Riemannian domain. Furthermore, let $ (\bar{\Omega}, g) $ be a compact manifold with smooth boundary extended from $ (\Omega, g) $. Throughout this article, we denote by $ (M, g) $ a closed manifold with $ \dim M \geqslant 3 $, and by $ (\bar{M}, g) $ a general compact manifold with interior $ M $ and smooth boundary $ \partial M $; we denote the space of smooth functions with compact support by $ \mathcal C_{c}^{\infty} $, smooth functions by $ \mathcal C^{\infty} $, and continuous functions by $ \mathcal C^{0} $.
First, we define Sobolev spaces on $ (M, g) $, $ (\bar{M}, g) $ and $ (\Omega, g) $, both in global expressions and in local coordinates.
\begin{definition}\label{pre:def1} Let $ (\Omega, g) $ be a Riemannian domain. Let $ (M, g) $ be a closed Riemannian $n$-manifold with volume density $ d\text{Vol}_{g}$. Let $u$ be a real valued function. Let $ \langle v,w \rangle_g$ and $ |v|_g = \langle v,v \rangle_g^{1/2} $ denote the inner product and norm with respect to $g$.
(i) For $1 \leqslant q < \infty $, \begin{align*} \mathcal{L}^{q}(\Omega)\ &{\rm is\ the\ completion\ of}\ \left\{ u \in \mathcal C_c^{\infty}(\Omega) : \Vert u\Vert_{\mathcal L^{q}(\Omega)}^q :=\int_{\Omega} \lvert u \rvert^{q} dx < \infty \right\},\\ \mathcal{L}^{q}(\Omega, g)\ &{\rm is\ the\ completion\ of}\ \left\{ u \in \mathcal C_c^{\infty}(\Omega) : \Vert u\Vert_{\mathcal L^{q}(\Omega, g)}^q :=\int_{\Omega} \left\lvert u \right\rvert^{q} d\text{Vol}_{g} < \infty \right\}, \\ \mathcal{L}^{q}(M, g)\ &{\rm is\ the\ completion\ of}\ \left\{ u \in \mathcal C^{\infty}(M) : \Vert u\Vert_{\mathcal L^{q}(M, g)}^q :=\int_{M} \left\lvert u \right\rvert^{q} d\text{Vol}_{g} < \infty \right\}. \end{align*}
(ii) For $ q = \infty $, \begin{equation*} \lVert f \rVert_{\mathcal L^{\infty}(M)} = \inf \lbrace C \geqslant 0 : \lvert f(x) \rvert \leqslant C \; \text{for almost all $ x \in M $} \rbrace. \end{equation*}
(iii) For $\nabla u$ the Levi-Civita connection of $g$, and for $ u \in \mathcal C^{\infty}(\Omega) $ or $ u \in \mathcal C^{\infty}(M) $, \begin{equation*} \lvert \nabla^{k} u \rvert_g^{2} := (\nabla^{\alpha_{1}} \dotso \nabla^{\alpha_{k}}u)( \nabla_{\alpha_{1}} \dotso \nabla_{\alpha_{k}} u). \end{equation*} \noindent In particular, $ \lvert \nabla^{0} u \rvert^{2}_g = \lvert u \rvert^{2} $ and $ \lvert \nabla^{1} u \rvert^{2}_g = \lvert \nabla u \rvert_{g}^{2}.$\\
(iv) For $ s \in \mathbb{N}, 1 \leqslant q < \infty $, \begin{align*} W^{s, q}(\Omega) &= \left\{ u \in \mathcal{L}^{q}(\Omega) : \lVert u \rVert_{W^{s,q}(\Omega)}^{q} : = \int_{\Omega} \sum_{j=0}^{s} \left\lvert D^{j}u \right\rvert^{q} dx < \infty \right\}, \\ W^{s, q}(\Omega, g) &= \left\{ u \in \mathcal{L}^{q}(\Omega, g) : \lVert u \rVert_{W^{s, q}(\Omega, g)}^{q} = \sum_{j=0}^{s} \int_{\Omega} \left\lvert \nabla^{j} u \right\rvert^{q}_g d\text{Vol}_{g} < \infty \right\}, \\ W^{s, q}(M, g) &= \left\{ u \in \mathcal{L}^{q}(M, g) : \lVert u \rVert_{W^{s, q}(M, g)}^{q} = \sum_{j=0}^{s} \int_{M} \left\lvert \nabla^{j} u \right\rvert^{q}_g d\text{Vol}_{g} < \infty \right\}. \end{align*} \noindent Here $ \lvert D^{j}u \rvert^{q} := \sum_{\lvert \alpha \rvert = j} \lvert \partial^{\alpha} u \rvert^{q} $ in the weak sense. Similarly, $ W_{0}^{s, q}(\Omega) $ is the completion of $ \mathcal C_{c}^{\infty}(\Omega) $ with respect to the $ W^{s, q} $-norm.
In particular, $ H^{s}(\Omega) : = W^{s, 2}(\Omega) $ and $ H^{s}(\Omega, g) : = W^{s, 2}(\Omega, g) $, $ H^{s}(M, g) : = W^{s, 2}(M, g) $ are the usual Sobolev spaces. We similarly define $H_{0}^{s}(\Omega), H_{0}^{s}(\Omega,g)$.
(v) We define the $ W^{s, q} $-type Sobolev space on $ (\bar{M}, g) $ in the same way as in (iv) when $ s \in \mathbb{N}, 1 \leqslant q < \infty $. \end{definition}
In general, if we assume the solvability of the PDE \begin{equation*} Lu = f \; {\rm in} \; M \end{equation*} for some second order elliptic operator $ L $, the standard $ \mathcal L^{p} $-type estimate says \begin{equation*} \lVert u \rVert_{W^{2,p}(M, g)} \leqslant C \left( \lVert f \rVert_{\mathcal L^{p}(M, g)} + \lVert u \rVert_{\mathcal L^{p}(M, g)} \right). \end{equation*} In order to estimate $ \lVert u \rVert_{\mathcal L^{\infty}} $ and $ \lVert \nabla u \rVert_{\mathcal L^{\infty}} $, we would like to remove the term $ \lVert u \rVert_{\mathcal L^{p}(M, g)} $ on the right side of this estimate. The next two results show that sometimes we can do this.
\begin{theorem}\label{pre:thm1}\cite[\S7]{Niren4} Let $ (M, g) $ be a closed manifold and $ q > n = \dim M $ be a given constant. Let $ L: \mathcal C^{\infty}(M) \rightarrow \mathcal C^{\infty}(M) $ be a uniformly elliptic second order operator on $ M $ that extends to $ L : W^{2, q}(M, g) \rightarrow \mathcal L^{q}(M, g) $. Let $ f \in \mathcal L^{q}(M, g) $ be a given function. Let $ u \in H^{1}(M, g) $ be a weak solution of the following linear PDE \begin{equation}\label{pre:eqn1} L u = f \; {\rm in} \; M. \end{equation} Assume that $ \text{Ker}(L) = \lbrace 0 \rbrace $. If, in addition, $ u \in \mathcal L^{q}(M, g) $, then $ u \in W^{2, q}(M, g) $ with the following estimate \begin{equation}\label{pre:eqn2} \lVert u \rVert_{W^{2, q}(M, g)} \leqslant \gamma \lVert Lu \rVert_{\mathcal L^{q}(M, g)}. \end{equation} Here $ \gamma $ depends on $ L, q $ and the manifold $ (M, g) $, and is independent of $ u $. \end{theorem}
\begin{theorem}\label{pre:thm2}\cite[Thm.~2.2]{XU5} Let $ (\bar{M}, g) $ be a compact manifold with smooth boundary $ \partial M $. Let $ \nu $ be the unit outward normal vector along $ \partial M $ and $ q > n = \dim \bar{M} $. Let $ L: \mathcal C^{\infty}(\bar{M}) \rightarrow \mathcal C^{\infty}(\bar{M}) $ be a uniformly elliptic second order operator on $ M $ with smooth coefficients up to $ \partial M $ that extends to $ L : W^{2, q}(M, g) \rightarrow \mathcal L^{q}(M, g) $. Let $ f \in \mathcal L^{q}(M, g), \tilde{f} \in W^{1, q}(M, g) $. Let $ u \in H^{1}(M, g) $ be a weak solution of the following boundary value problem \begin{equation}\label{pre:eqn3} L u = f \; {\rm in} \; M, Bu = \frac{\partial u}{\partial \nu} + c(x) u = \tilde{f} \; {\rm on} \; \partial M. \end{equation} Here $ c \in \mathcal C^{\infty}(M) $. Assume also that $ \text{Ker}(L) = \lbrace 0 \rbrace $ with respect to the homogeneous Robin boundary condition. If, in addition, $ u \in \mathcal L^{q}(M, g) $, then $ u \in W^{2, q}(M, g) $ with the following estimate \begin{equation}\label{pre:eqn4} \lVert u \rVert_{W^{2, q}(M, g)} \leqslant \gamma' \left(\lVert Lu \rVert_{\mathcal L^{q}(M, g)} + \lVert Bu \rVert_{W^{1, q}(M, g)} \right). \end{equation} Here $ \gamma' $ depends on $ L, q, c $ and the manifold $ (\bar{M}, g) $, and is independent of $ u $. \end{theorem} \begin{remark}\label{pre:re1} According to the results of Theorem \ref{pre:thm1} and Theorem \ref{pre:thm2}, we have \begin{equation}\label{pre:eqn5} \begin{split} & \frac{1}{q} - \frac{1}{n} \leqslant -\frac{\alpha}{n} \Rightarrow \lVert u \rVert_{\mathcal C^{1, \alpha}(M)} \leqslant K \lVert u \rVert_{W^{2, q}(M, g)}; \\ & \frac{1}{q} - \frac{1}{n} \leqslant -\frac{\alpha}{n} \Rightarrow \lVert u \rVert_{\mathcal C^{1, \alpha}(\bar{M})} \leqslant K' \lVert u \rVert_{W^{2, q}(M, g)}, \end{split} \end{equation} due to the H\"older-type Schauder estimates.
Here the constants $ K, K' $ depend only on $ q, n, \alpha $ and the manifolds $ (M, g) $ or $ (\bar{M}, g) $, and are independent of $ u $. The estimates (\ref{pre:eqn5}) give control of the $ \mathcal L^{\infty} $-norms of $ u $ and $ \nabla u $, respectively. \end{remark}
Our local-to-global analysis for solving Yamabe-type problems constructs lower and upper solutions of the Yamabe-type equation; we then apply the monotone iteration schemes to obtain solutions of these Yamabe-type equations. In particular, we need the following two versions of the monotone iteration scheme, one for closed manifolds, the other for compact manifolds with non-empty boundary.
For closed manifolds, we have \begin{theorem}\label{pre:thm3}\cite[Lemma~2.6]{KW}\cite[Thm.~2.5]{XU3} Let $ (M, g) $ be a closed manifold with $ \dim M \geqslant 3 $. Let $ h, H \in \mathcal C^{\infty}(M) $, and let $ p > n = \dim M $ be a given constant. Let $ m > 1 $ be a constant. If there exist functions $ u_{-}, u_{+} \in \mathcal C_{0}(M) \cap H^{1}(M, g) $ such that \begin{equation}\label{pre:eqn6} \begin{split} -a\Delta_{g} u_{-} + hu_{-} & \leqslant Hu_{-}^{m} \; {\rm in} \; (M, g); \\ -a\Delta_{g} u_{+} + hu_{+} & \geqslant Hu_{+}^{m} \; {\rm in} \; (M, g), \end{split} \end{equation} hold weakly, with $ 0 \leqslant u_{-} \leqslant u_{+} $ and $ u_{-} \not\equiv 0 $, then there is a $ u \in W^{2, p}(M, g) $ satisfying \begin{equation}\label{pre:eqn7} -a\Delta_{g} u + hu = Hu^{m} \; {\rm in} \; (M, g). \end{equation} In particular, $ u \in \mathcal C^{\infty}(M) $. \end{theorem}
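The mechanism behind Theorem \ref{pre:thm3} can be sketched as follows; the detailed boundary-value analogue is carried out in the proof of Theorem \ref{pre:thm4} below. Choose a constant $ A > 0 $ so large that the map $ u \mapsto Au - hu + Hu^{m} $ is monotone increasing for $ u \in [\min_{M} u_{-}, \max_{M} u_{+}] $, set $ u_{0} = u_{+} $, and solve the linear problems \begin{equation*} -a\Delta_{g} u_{k+1} + A u_{k+1} = A u_{k} - h u_{k} + H u_{k}^{m} \; {\rm in} \; M, \quad k = 0, 1, 2, \dotsc \end{equation*} The maximum principle yields $ u_{-} \leqslant u_{k+1} \leqslant u_{k} \leqslant u_{+} $ for all $ k $, and elliptic estimates then give convergence of $ u_{k} $ to a solution of (\ref{pre:eqn7}).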
For compact manifolds with non-empty boundary, we now introduce a variation of the monotone iteration scheme we used in \cite{XU4}, \cite{XU5} and \cite{XU6}. In particular, we do require $ h_{g} = h \geqslant 0 $ to be some nonnegative constant on $ \partial M $ here; this can be arranged thanks to the proof of the Han-Li conjecture in \cite{XU5}. We point out that the proof of the result below is similar to Theorem 4.1 in \cite{XU5}, but technically more subtle. \begin{theorem}\label{pre:thm4} Let $ (\bar{M}, g) $ be a compact manifold with smooth boundary $ \partial M $. Let $ \nu $ be the unit outward normal vector along $ \partial M $ and $ q > \dim \bar{M} $. Let $ S \in \mathcal C^{\infty}(\bar{M}) $ and $ H \in \mathcal C^{\infty}(\bar{M}) $ be given functions. Let the mean curvature $ h_{g} = h \geqslant 0 $ be some nonnegative constant. In addition, we assume that $ \sup_{\bar{M}} \lvert H \rvert $ is small enough. Suppose that there exist $ u_{-} \in \mathcal C_{0}(\bar{M}) \cap H^{1}(M, g) $ and $ u_{+} \in W^{2, q}(M, g) \cap \mathcal C_{0}(\bar{M}) $, $ 0 \leqslant u_{-} \leqslant u_{+} $, $ u_{-} \not\equiv 0 $ on $ \bar{M} $, some constants $ \theta_{1} \leqslant 0, \theta_{2} \geqslant 0 $ such that \begin{equation}\label{pre:eqn8} \begin{split} -a\Delta_{g} u_{-} + R_{g} u_{-} - S u_{-}^{p-1} & \leqslant 0 \; {\rm in} \; M, \frac{\partial u_{-}}{\partial \nu} + \frac{2}{p-2} h_{g} u_{-} \leqslant \theta_{1} u_{-} \leqslant \frac{2}{p-2} H u_{-}^{\frac{p}{2}} \; {\rm on} \; \partial M \\ -a\Delta_{g} u_{+} + R_{g} u_{+} - S u_{+}^{p-1} & \geqslant 0 \; {\rm in} \; M, \frac{\partial u_{+}}{\partial \nu} + \frac{2}{p-2} h_{g} u_{+} \geqslant \theta_{2} u_{+} \geqslant \frac{2}{p-2} H u_{+}^{\frac{p}{2}} \; {\rm on} \; \partial M \end{split} \end{equation} hold weakly.
In particular, $ \theta_{1} $ can be zero if $ H \geqslant 0 $ on $ \partial M $, and $ \theta_{1} $ must be negative if $ H < 0 $ somewhere on $ \partial M $; similarly, $ \theta_{2} $ can be zero if $ H \leqslant 0 $ on $ \partial M $, and $ \theta_{2} $ must be positive if $ H > 0 $ somewhere on $ \partial M $. Then there exists a real, positive solution $ u \in \mathcal C^{\infty}(M) \cap \mathcal C^{1, \alpha}(\bar{M}) $ of \begin{equation}\label{pre:eqn9} \Box_{g} u = -a\Delta_{g} u + R_{g} u = S u^{p-1} \; {\rm in} \; M, B_{g} u = \frac{\partial u}{\partial \nu} + \frac{2}{p-2} h_{g} u = \frac{2}{p-2} H u^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{equation} \end{theorem} \begin{proof} From now on, we replace $ h_{g} $ by the constant $ h $ due to the hypothesis. Fix some $ q > \dim \bar{M} $. Denote $ u_{0} = u_{+} $. Due to the compactness of $ \bar{M} $, we can choose a constant $ A > 0 $ such that \begin{equation}\label{pre:eqn10} -R_{g}(x) + S(x) (p - 1) u(x)^{p-2} + A > 0, \forall u(x) \in [\min_{\bar{M}} u_{-}(x), \max_{\bar{M}} u_{+}(x) ], \forall x \in \bar{M} \end{equation} pointwise. Similarly, we can also choose a constant $ B \geqslant 0 $ such that \begin{equation}\label{pre:eqn11} -\frac{2}{p-2} h + \frac{p}{p-2} H(y) u(y)^{\frac{p-2}{2}} + B > 0, \forall u(y) \in [\min_{\bar{M}} u_{-}(y), \max_{\bar{M}} u_{+}(y) ], \forall y \in \partial M. \end{equation} For the first step, consider the linear PDE \begin{equation}\label{pre:eqn12} -a\Delta_{g} u_{1} + Au_{1} = Au_{0} - R_{g} u_{0} + S u_{0}^{p-1} \; {\rm in} \; M, \frac{\partial u_{1}}{\partial \nu} + Bu_{1} = Bu_{0} - \frac{2}{p-2} h u_{0} + \frac{2}{p-2} H u_{0}^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{equation} Since $ u_{0} = u_{+} \in W^{2, q}(M, g) \cap \mathcal C_{0}(\bar{M}) $, and since $ A > 0 $ and $ B \geqslant 0 $, the standard Lax-Milgram theorem applies and yields a unique solution $ u_{1} \in H^{1}(M, g) $.
Since $ u_{0} \in W^{2, q}(M, g) \cap \mathcal C_{0}(\bar{M}) $, we have $ u_{0} \in \mathcal C^{1, \alpha}(\bar{M}) \cap \mathcal L^{r}(M, g) $ for all $ 1 < r < \infty $; it follows from the $ \mathcal L^{p} $-regularity in Theorem 2.1 of \cite{XU5} that $ u_{1} \in W^{2, q}(M, g) $. By the standard $ (s, p) $-type Sobolev embedding, it follows that $ u_{1} \in \mathcal C^{1, \alpha}(\bar{M}) $ for some $ \alpha \in (0, 1) $.
Next we show that $ u_{1} \leqslant u_{0} = u_{+} $. Subtracting (\ref{pre:eqn12}) from the second inequality in (\ref{pre:eqn8}), we have \begin{equation*} \left( -a\Delta_{g} + A \right) (u_{0} - u_{1}) \geqslant 0 \; {\rm in} \; M, \left( \frac{\partial}{\partial \nu} + B \right) (u_{0} - u_{1}) \geqslant 0 \; {\rm on} \; \partial M \end{equation*} in the weak sense, due to the choices of $ A $ and $ B $ in (\ref{pre:eqn10}) and (\ref{pre:eqn11}). Denote \begin{equation*} w = \max \lbrace 0, u_{1} - u_{0} \rbrace. \end{equation*} It is immediate that $ w \in H^{1}(M, g) \cap \mathcal C_{0}(\bar{M}) $ and $ w \geqslant 0 $. It follows that \begin{align*} 0 & \geqslant \int_{M} \left( a \nabla_{g} (u_{1} - u_{0}) \cdot \nabla_{g} w + A(u_{1} - u_{0}) w \right) d\omega + \int_{\partial M} B (u_{1} - u_{0}) w dS \\ & = \int_{M} \left( a \lvert \nabla_{g} w \rvert^{2} + A w^{2} \right) d\omega + \int_{\partial M} B w^{2} dS \geqslant 0. \end{align*} The last inequality holds since $ A > 0 $ and $ B \geqslant 0 $. It follows that \begin{equation*} w \equiv 0 \Rightarrow 0 \geqslant u_{1} - u_{0} \Rightarrow u_{0} \geqslant u_{1}. \end{equation*} By the same argument, we can show that $ u_{1} \geqslant u_{-} $ and hence $ u_{-} \leqslant u_{1} \leqslant u_{+} $. Assume inductively that $ u_{-} \leqslant \dotso \leqslant u_{k-1} \leqslant u_{k} \leqslant u_{+} $ for some $ k > 1 $ with $ u_{k} \in W^{2, q}(M, g) $; the $ (k + 1) $-th iteration step is \begin{equation}\label{pre:eqn13} \begin{split} -a\Delta_{g} u_{k+1} + Au_{k+1} & = Au_{k} - R_{g} u_{k} + S u_{k}^{p-1} \; {\rm in} \; M, \\
\frac{\partial u_{k+1}}{\partial \nu} +Bu_{k + 1} & = Bu_{k} - \frac{2}{p-2} h u_{k} + \frac{2}{p-2} H u_{k}^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{split} \end{equation} Since $ u_{k} \in W^{2, q}(M, g) $, and thus $ u_{k} \in \mathcal C^{1, \alpha}(\bar{M}) $ by Sobolev embedding, the same reasoning as in the first step shows that there exists $ u_{k+1} \in W^{2, q}(M, g) $ that solves (\ref{pre:eqn13}). In particular, $ u_{k}^{\frac{p}{2}} = u_{k}^{\frac{n}{n-2}} \in \mathcal C^{1}(\bar{M}) $, hence the hypotheses of the boundary condition in Theorem 2.1 and Theorem 2.4 of \cite{XU5} are satisfied.
We show that $ u_{-} \leqslant u_{k + 1} \leqslant u_{k} \leqslant u_{+} $. The $ k $-th iteration step is of the form \begin{equation}\label{pre:eqn14} \begin{split} -a\Delta_{g} u_{k} + Au_{k} & = Au_{k - 1} - R_{g} u_{k - 1} + S u_{k - 1}^{p-1} \; {\rm in} \; M, \\ \frac{\partial u_{k}}{\partial \nu} + Bu_{k} & = Bu_{k - 1} - \frac{2}{p-2} h u_{k - 1} + \frac{2}{p-2} H u_{k - 1}^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{split} \end{equation} Subtracting (\ref{pre:eqn14}) from (\ref{pre:eqn13}), we conclude that \begin{align*} & \left( -a\Delta_{g} + A \right) \left( u_{k + 1} - u_{k} \right) = A(u_{k} - u_{k - 1}) - R_{g} (u_{k} - u_{k - 1}) + S \left( u_{k}^{p-1} - u_{k - 1}^{p-1} \right) \leqslant 0 \; {\rm in} \; M; \\ & \frac{\partial \left(u_{k+1} - u_{k}\right)}{\partial \nu} + B(u_{k + 1} - u_{k}) = B(u_{k} - u_{k-1}) \\ & \qquad - \frac{2}{p-2} h (u_{k} - u_{k - 1}) + \frac{2}{p-2} H u_{k}^{\frac{p}{2}} - \frac{2}{p-2} H u_{k - 1}^{\frac{p}{2}} \leqslant 0 \; {\rm on} \; \partial M. \end{align*} By induction we have $ u_{-} \leqslant u_{k} \leqslant u_{k - 1} \leqslant u_{+} $. The first inequality above is then due to the pointwise mean value theorem and the choice of $ A $ in (\ref{pre:eqn10}). Similarly, the second inequality is due to the choice of $ B $ in (\ref{pre:eqn11}). Note that since both $ u_{k}, u_{k-1} \in W^{2, q}(M, g) $, the above inequalities hold in the strong sense. We choose \begin{equation*} \tilde{w} = \max \lbrace 0, u_{k+1} - u_{k} \rbrace. \end{equation*} Clearly $ \tilde{w} \geqslant 0 $ with $ \tilde{w} \in H^{1}(M, g) \cap \mathcal C_{0}(\bar{M}) $.
Pairing $ \tilde{w} $ with $ \left( -a\Delta_{g} + A \right) \left( u_{k + 1} - u_{k} \right) \leqslant 0 $, we have \begin{align*} 0 & \geqslant \int_{M} \left( -a\Delta_{g} + A \right) \left( u_{k+1} - u_{k} \right) \tilde{w} d\omega \\ & = \int_{M} \left( a \nabla_{g} (u_{k+1} - u_{k}) \cdot \nabla_{g} \tilde{w} + A (u_{k+1} - u_{k}) \tilde{w} \right) d\omega - a \int_{\partial M} \frac{\partial \left(u_{k+1} - u_{k}\right)}{\partial \nu} \tilde{w} dS \\ & \geqslant \int_{M} \left( a \nabla_{g} (u_{k+1} - u_{k}) \cdot \nabla_{g} \tilde{w} + A (u_{k+1} - u_{k}) \tilde{w} \right) d\omega + a \int_{\partial M} B (u_{k+1} - u_{k}) \tilde{w} dS \\ & = a \lVert \nabla_{g} \tilde{w} \rVert_{\mathcal L^{2}(M, g)}^{2} + A \lVert \tilde{w} \rVert_{\mathcal L^{2}(M, g)}^{2} + aB \int_{\partial M} \tilde{w}^{2} dS \geqslant 0. \end{align*} It follows that \begin{equation*} \tilde{w} \equiv 0 \Rightarrow 0 \geqslant u_{k + 1} - u_{k} \Rightarrow u_{k + 1} \leqslant u_{k}. \end{equation*} By the same argument and the induction hypothesis $ u_{k} \geqslant u_{-} $, we conclude that $ u_{k+1} \geqslant u_{-} $. Thus \begin{equation}\label{pre:eqn15} 0 \leqslant u_{-} \leqslant u_{k+1} \leqslant u_{k} \leqslant u_{+}, u_{k} \in W^{2, q}(M, g), \forall k \in \mathbb{N}. \end{equation} To apply the Arzela-Ascoli theorem, we need to show that $ \lVert u_{k} \rVert_{W^{2,q}(M, g)} $ is uniformly bounded above in $ k $. By Theorem 2.4 of \cite{XU5}, the operator $ -a\Delta_{g}u + Au $ with the homogeneous Robin condition $ \frac{\partial u}{\partial \nu} + B u = 0 $ for $ B \geqslant 0 $ is injective. Applying the $ \mathcal L^{p} $-regularity in Theorem \ref{pre:thm2}, we conclude from the first iteration step (\ref{pre:eqn12}) that \begin{equation}\label{pre:eqn16} \lVert u_{1} \rVert_{W^{2, q}(M, g)} \leqslant C' \left( \lVert Au_{0} - R_{g} u_{0} + S u_{0}^{p-1} \rVert_{\mathcal L^{q}(M, g)} + \left\lVert \left( B - \frac{2}{p-2} h \right) u_{0} + \frac{2}{p-2} H u_{0}^{\frac{p}{2}} \right\rVert_{W^{1, q}(M, g)} \right).
\end{equation} We point out that the constant $ C' $ depends only on the metric $ g $ and the differential operators in the interior and on the boundary; in particular, the constant $ C' $ remains the same if we make $ B $ smaller. Briefly speaking, the $ Bu $ term is reflected in $ \lVert Bu \rVert_{W^{1, q}(M, g)} $ on the right side; hence the Peter-Paul inequality allows us to subtract less in terms of $ \lVert u \rVert_{W^{2,q}(M, g)} $ on the left side of the elliptic regularity estimate. Consider the formula (\ref{pre:eqn11}), in which we made the choice of the constant $ B $. Since $ u_{-} $ and $ u_{+} $ are fixed, it follows that the smaller $ \sup_{\partial M} \lvert H \rvert $ is, the smaller the constant $ B $ we can choose, so that $ B - \frac{2}{p-2} h \rightarrow 0 $.
Choose $ \sup_{\partial M} \lvert H \rvert > 0 $ small enough so that \begin{equation}\label{pre:eqn17} \begin{split} & \left\lVert \left( B - \frac{2}{p-2} h \right) u_{0} + \frac{2}{p-2} H u_{0}^{\frac{p}{2}} \right\rVert_{W^{1, q}(M, g)}\leqslant 1; \\ & \frac{2}{p-2} \sup_{\bar{M}} \left( u_{0}^{\frac{p}{2}} \right) \cdot \text{Vol}_{g}(\bar{M}) \left( \sup_{\bar{M}} \lvert H \rvert + \sup_{\partial M} \lvert \nabla H \rvert^{p} \right) \\ & \qquad +\left( B - \frac{2}{p-2}h + \frac{p}{p-2} \sup_{\bar{M}} \lvert H \rvert \sup_{\bar{M}} \left( u_{0}^{\frac{p - 2}{2}} \right) \right) \cdot \\ & \qquad \qquad \cdot C' \left( \left( A + \sup_{\bar{M}} \lvert R_{g} \rvert + \sup_{\bar{M}} \lvert S \rvert \sup_{\bar{M}} \left( u_{0}^{p-2} \right) \right) \sup_{\bar{M}} \left( u_{0} \right) \cdot \text{Vol}_{g}(\bar{M}) + 1 \right) \leqslant 1. \end{split} \end{equation} Note that for smaller $ \sup_{\bar{M}} \lvert H \rvert $, the sub-solution and super-solution in (\ref{pre:eqn8}) still hold, due to our hypotheses on $ \theta_{1} $ and $ \theta_{2} $. Note also that the choice of $ H $ in terms of (\ref{pre:eqn17}) does not depend on $ k $. Due to (\ref{pre:eqn17}), we conclude that \begin{align*} \lVert u_{1} \rVert_{W^{2, q}(M, g)} & \leqslant C' \left( \lVert Au_{0} - R_{g} u_{0} +S u_{0}^{p-1} \rVert_{\mathcal L^{q}(M, g)} + 1 \right) \\ & \leqslant C' \left( \left( A + \sup_{\bar{M}} \lvert R_{g} \rvert + \sup_{\bar{M}} \lvert S \rvert \sup_{\bar{M}} \left( u_{0}^{p-2} \right) \right) \sup_{\bar{M}} \left( u_{0} \right) \cdot \text{Vol}_{g}(\bar{M}) + 1 \right). \end{align*} Inductively, we assume \begin{equation}\label{pre:eqn18} \lVert u_{k} \rVert_{W^{2, q}(M, g)} \leqslant C' \left( \left( A + \sup_{\bar{M}} \lvert R_{g} \rvert + \sup_{\bar{M}} \lvert S \rvert \sup_{\bar{M}} \left( u_{0}^{p-2} \right) \right) \sup_{\bar{M}} \left( u_{0} \right) \cdot \text{Vol}_{g}(\bar{M}) + 1 \right).
\end{equation} For $ u_{k + 1} $, we conclude from (\ref{pre:eqn13}) that \begin{equation}\label{pre:eqn19} \begin{split} \lVert u_{k+1} \rVert_{W^{2, q}(M, g)} & \leqslant C' \lVert Au_{k} - R_{g} u_{k} + S u_{k}^{p-1} \rVert_{\mathcal L^{q}(M, g)} \\ & \qquad + C' \left\lVert \left( B - \frac{2}{p-2} h \right)u_{k} \right\rVert_{W^{1, q}(M, g)} + C' \left\lVert \frac{2}{p-2} H u_{k}^{\frac{p}{2}} \right\rVert_{W^{1, q}(M, g)}. \end{split} \end{equation} The last term in (\ref{pre:eqn19}) can be estimated as \begin{align*} \left\lVert \frac{2}{p-2} H u_{k}^{\frac{p}{2}} \right\rVert_{W^{1, q}(M, g)} & = \frac{2}{p-2} \left( \lVert H u_{k}^{\frac{p}{2}} \rVert_{\mathcal L^{q}(M, g)} + \left\lVert \nabla_{g} \left(H \left( u_{k}^{\frac{p}{2}} \right) \right) \right\rVert_{\mathcal L^{q}(M, g)} \right) \\ & \leqslant \frac{2}{p-2} \sup_{\bar{M}} \lvert H \rvert \sup_{\bar{M}} \left( u_{k}^{\frac{p}{2}} \right) \cdot \text{Vol}_{g}(\bar{M}) \\ & \qquad + \frac{p}{p-2} \sup_{\bar{M}} \lvert H \rvert \sup_{\bar{M}} \left( u_{k}^{\frac{p - 2}{2}} \right) \lVert \nabla_{g} u_{k} \rVert_{\mathcal L^{q}(M, g)} \\ & \qquad \qquad + \frac{2}{p-2} \sup_{\bar{M}} \left( u_{k}^{\frac{p}{2}} \right) \sup_{\partial M} \lvert \nabla H \rvert^{p} \cdot \text{Vol}_{g}(\bar{M}) \\ & \leqslant \frac{2}{p-2} \sup_{\bar{M}} \left( u_{0}^{\frac{p}{2}} \right) \cdot \text{Vol}_{g}(\bar{M}) \left( \sup_{\bar{M}} \lvert H \rvert + \sup_{\partial M} \lvert \nabla H \rvert^{p} \right) \\ & \qquad +\frac{p}{p-2} \sup_{\bar{M}} \lvert H \rvert \cdot \sup_{\bar{M}} \left( u_{0}^{\frac{p - 2}{2}} \right) \lVert u_{k} \rVert_{W^{2, q}(M, g)}. \end{align*} By the choice of $ H $ in (\ref{pre:eqn17}) and the induction assumption (\ref{pre:eqn18}), we conclude that \begin{equation*} \left\lVert \left( B - \frac{2}{p-2} h \right)u_{k} \right\rVert_{W^{1, q}(M, g)} + \left\lVert \frac{2}{p-2} H u_{k}^{\frac{p}{2}} \right\rVert_{W^{1, q}(M, g)} \leqslant 1.
\end{equation*} It follows from (\ref{pre:eqn19}) that \begin{equation}\label{pre:eqn20} \begin{split} \lVert u_{k+1} \rVert_{W^{2, q}(M, g)} & \leqslant C' \lVert Au_{k} - R_{g} u_{k} + S u_{k}^{p-1} \rVert_{\mathcal L^{q}(M, g)} \\ & \qquad + C' \left\lVert \left( B - \frac{2}{p-2} h \right)u_{k} \right\rVert_{W^{1, q}(M, g)} + C' \left\lVert \frac{2}{p-2} H u_{k}^{\frac{p}{2}} \right\rVert_{W^{1, q}(M, g)} \\ & \leqslant C' \left( \left( A + \sup_{\bar{M}} \lvert R_{g} \rvert + \sup_{\bar{M}} \lvert S \rvert \sup_{\bar{M}} \left( u_{0}^{p-2} \right) \right) \sup_{\bar{M}} \left( u_{0} \right) \cdot \text{Vol}_{g}(\bar{M}) + 1 \right). \end{split} \end{equation} It follows that the sequence $ \lbrace u_{k} \rbrace_{k \in \mathbb{N}} $ is uniformly bounded in the $ W^{2, q} $-norm. By Sobolev embedding, we conclude that the same sequence is uniformly bounded in the $ \mathcal C^{1, \alpha} $-norm for some $ \alpha \in (0, 1) $. Thus by the Arzel\`{a}-Ascoli theorem, we conclude that there exists $ u $ such that \begin{equation*} u = \lim_{k \rightarrow \infty} u_{k}, \quad 0 \leqslant u_{-} \leqslant u \leqslant u_{+}, \quad \Box_{g} u = S u^{p-1} \; {\rm in} \; M, \quad B_{g} u = \frac{2}{p-2} H u^{\frac{p}{2}} \; {\rm on} \; \partial M \end{equation*} in the strong sense. Applying elliptic regularity, we conclude that $ u \in W^{2, q}(M, g) $. A standard bootstrapping argument, using Schauder estimates, shows that $ u \in \mathcal C^{\infty}(M) \cap \mathcal C^{1, \alpha}(\bar{M}) $. The regularity of $ u $ on $ \partial M $ is determined by $ u^{p-1} $. We then apply the trace theorem \cite[Prop.~4.5]{T} to show that $ u $ is also smooth on $ \partial M $.
Lastly we show that $ u $ is positive. Since $ u \in \mathcal C^{\infty}(M) $, it is smooth locally; the strong maximum principle says that if $ u $ vanishes at some point of an interior domain $ \Omega $, then $ u \equiv 0 $ on $ \Omega $, and a continuation argument then shows that $ u \equiv 0 $ in $ M $. But $ u \geqslant u_{-} $ and $ u_{-} > 0 $ within some region, a contradiction. Thus $ u > 0 $ in the interior $ M $. By the same argument as in \cite[\S1]{ESC}, we conclude that $ u > 0 $ on $ \bar{M} $. \end{proof} We have an immediate consequence of Theorem \ref{pre:thm4} for a perturbed conformal Laplacian operator. \begin{corollary}\label{pre:cor1} Let $ (\bar{M}, g) $ be a compact manifold with smooth boundary $ \partial M $. Let $ \nu $ be the unit outward normal vector along $ \partial M $ and $ q > \dim \bar{M} $. Let $ S \in \mathcal C^{\infty}(\bar{M}) $ and $ H \in \mathcal C^{\infty}(\bar{M}) $ be given functions. Let the mean curvature $ h_{g} = h \geqslant 0 $ be a nonnegative constant and let $ \beta < 0 $ be a negative constant. In addition, we assume that $ \sup_{\bar{M}} \lvert H \rvert $ is small enough.
Suppose that there exist $ u_{-} \in \mathcal C_{0}(\bar{M}) \cap H^{1}(M, g) $ and $ u_{+} \in W^{2, q}(M, g) \cap \mathcal C_{0}(\bar{M}) $, $ 0 \leqslant u_{-} \leqslant u_{+} $, $ u_{-} \not\equiv 0 $ on $ \bar{M} $, and some constants $ \theta_{1} \leqslant 0, \theta_{2} \geqslant 0 $ such that \begin{equation}\label{pre:eqn21} \begin{split} -a\Delta_{g} u_{-} + \left(R_{g} + \beta \right) u_{-} - S u_{-}^{p-1} & \leqslant 0 \; {\rm in} \; M, \frac{\partial u_{-}}{\partial \nu} + \frac{2}{p-2} h_{g} u_{-} \leqslant \theta_{1} u_{-} \leqslant \frac{2}{p-2} H u_{-}^{\frac{p}{2}} \; {\rm on} \; \partial M \\ -a\Delta_{g} u_{+} + \left(R_{g} + \beta \right) u_{+} - S u_{+}^{p-1} & \geqslant 0 \; {\rm in} \; M, \frac{\partial u_{+}}{\partial \nu} + \frac{2}{p-2} h_{g} u_{+} \geqslant \theta_{2} u_{+} \geqslant \frac{2}{p-2} H u_{+}^{\frac{p}{2}} \; {\rm on} \; \partial M \end{split} \end{equation} hold weakly. In particular, $ \theta_{1} $ can be zero if $ H \geqslant 0 $ on $ \partial M $, and $ \theta_{1} $ must be negative if $ H < 0 $ somewhere on $ \partial M $; similarly, $ \theta_{2} $ can be zero if $ H \leqslant 0 $ on $ \partial M $, and $ \theta_{2} $ must be positive if $ H > 0 $ somewhere on $ \partial M $. Then there exists a real, positive solution $ u \in \mathcal C^{\infty}(M) \cap \mathcal C^{1, \alpha}(\bar{M}) $ of \begin{equation}\label{pre:eqn22} \Box_{g} u = -a\Delta_{g} u +\left( R_{g} + \beta \right) u = S u^{p-1} \; {\rm in} \; M, B_{g} u = \frac{\partial u}{\partial \nu} + \frac{2}{p-2} h_{g} u = \frac{2}{p-2} H u^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{equation} \end{corollary} \begin{proof} By replacing $ R_{g} $ with $ R_{g} + \beta $, everything is essentially the same as in the proof of Theorem \ref{pre:thm4}. \end{proof}
\section{The Local Analysis on Small Riemannian Domains} Another key step in our local-to-global analysis is the existence of a positive, smooth solution of the following local Yamabe equation with Dirichlet boundary condition, \begin{equation}\label{local:eqn1} -a\Delta_{g} u + R_{g} u = f u^{p-1} \; {\rm in} \; \Omega, u \equiv 0 \; {\rm on} \; \partial \Omega. \end{equation} Here $ f $ is a positive, smooth function defined on a neighborhood of $ \Omega $. We have shown results for positive constant functions $ f = \lambda > 0 $, especially in \cite[Prop.~2.4]{XU6} and \cite[Prop.~2.5]{XU6}. We point out that when $ f $ is a positive constant within some open region and the dimension is at least $ 6 $, a simpler method can be applied, essentially due to Aubin's local test function for the Yamabe problem. We revisit the constant-function case below using this simpler method, and then explain why the new method developed in \cite{XU6} is essential for non-constant positive functions $ f $.
Although the method when $ f $ is not a constant is quite similar to the cases in \cite{XU6}, some subtle technicalities force us to provide all details here. We now discuss the case when the manifold is not locally conformally flat; the analysis essentially follows the arguments in the previous papers, see \cite{XU4}, \cite{XU5}, \cite{XU6} and \cite{XU3}.
Locally we treat (\ref{local:eqn1}) as a general second order linear elliptic PDE with Dirichlet boundary condition: \begin{equation}\label{local:eqn2} \begin{split} Lu & : = -\sum_{i, j} \partial_{i} \left (a_{ij}(x) \partial_{j} u \right) = b(x) u^{p- 1} + f(x, u) \; {\rm in} \; \Omega; \\ u & > 0 \; {\rm in} \; \Omega, u = 0 \; {\rm on} \; \partial \Omega. \end{split} \end{equation} Recall that $ p - 1 = \frac{n+2}{n -2} $ is the critical exponent with respect to the $ H_{0}^{1} $-solutions of (\ref{local:eqn2}) in the sense of Sobolev embedding. By the variational method, (\ref{local:eqn2}) is the Euler-Lagrange equation of the functional \begin{equation}\label{local:eqn3} J(u) = \int_{\Omega} \left( \frac{1}{2} \sum_{i, j} a_{ij}(x) \partial_{i}u \partial_{j} u - \frac{b(x)}{p} u_{+}^{p} - F(x, u) \right) dx, \end{equation} with appropriate choices of $ a_{ij}, b $ and $ F $. Here $ u_{+} = \max \lbrace u, 0 \rbrace $ and $ F(x, u) = \int_{0}^{u} f(x, t)dt $. Set \begin{equation}\label{local:eqn4} \begin{split} A(O) & = \text{essinf}_{x \in O} \frac{\det(a_{ij}(x))}{\lvert b(x) \rvert^{n-2}}, \forall O \subset \Omega; \\ T & = \inf_{u \in H_{0}^{1}(\Omega)} \frac{\int_{\Omega} \lvert Du \rvert^{2} dx}{\left( \int_{\Omega} \lvert u \rvert^{p} dx \right)^{\frac{2}{p}}}; \\ K & = \inf_{u \neq 0} \sup_{t > 0} J(tu), K_{0} = \frac{1}{n} T^{\frac{n}{2}} \left( A(\Omega) \right)^{\frac{1}{2}}. \end{split} \end{equation} The core theorem we use is due to Wang \cite[Thm.~1.1]{WANG}. \begin{theorem}\label{local:thm1}\cite[Thm.~1.1, Thm.~1.4]{WANG} Let $ \Omega $ be a bounded smooth domain in $ \mathbb R^{n}, n \geqslant 3 $. Let $ Lu = -\sum_{i, j} \partial_{i} \left (a_{ij}(x) \partial_{j} u \right) $ be a second order elliptic operator with smooth coefficients in divergence form. Let $ {\rm Vol}_g(\Omega) $ and the diameter of $ \Omega $ be sufficiently small. Let $ b(x) \neq 0 $ be a nonnegative bounded measurable function.
Let $ f(x, u) $ be measurable in $ x $ and continuous in $ u $. Assume \begin{enumerate}[(P1).] \item There exist $ c_{1}, c_{2} > 0 $ such that $ c_{1} \lvert \xi \rvert^{2} \leqslant \sum_{i, j} a_{ij}(x) \xi_{i} \xi_{j} \leqslant c_{2} \lvert \xi \rvert^{2}, \forall x \in \Omega, \xi \in \mathbb R^{n} $; \item $ \lim_{u \rightarrow + \infty} \frac{f(x, u)}{u^{p-1}} = 0 $ uniformly for $ x \in \Omega $; \item $ \lim_{u \rightarrow 0} \frac{f(x, u)}{u} < \lambda_{1} $ uniformly for $ x \in \Omega $, where $ \lambda_{1} $ is the first eigenvalue of $ L $; \item There exists $ \theta \in (0, \frac{1}{2}), M \geqslant 0, \sigma > 0 $, such that $ F(x, u) = \int_{0}^{u} f(x, t)dt \leqslant \theta u f(x, u) $ for any $ u \geqslant M $, $ x \in \Omega(\sigma) = \lbrace x \in \Omega, 0 \leqslant b(x) \leqslant \sigma \rbrace $. \end{enumerate} Furthermore, we assume that $ f(x, u) \geqslant 0 $, $ f(x, u) = 0 $ for $ u \leqslant 0 $. We also assume that $ a_{ij}(x) \in \mathcal C^{0}(\bar{\Omega}) $. If \begin{equation}\label{local:eqn5} K < K_{0} \end{equation} then the Dirichlet problem (\ref{local:eqn2}) possesses a positive solution $ u \in \mathcal C^{\infty}(\Omega) \cap \mathcal C^{0}(\bar{\Omega}) $ which satisfies $ J(u) \leqslant K $. \end{theorem}
When the manifold is not locally conformally flat and the dimension is at least 6, we apply Aubin's local test function in terms of conformal normal coordinates \cite{PL} to get our first existence result for (\ref{local:eqn1}), provided that the function $ f $ is a positive constant within the domain. \begin{proposition}\label{local:prop1} Let $ (\Omega, g) $ be a Riemannian ball in $\mathbb R^n$ with $C^{\infty} $ boundary, $ n \geqslant 6 $, centered at a point $ P $ such that the Weyl tensor at $ P $ does not vanish. Let $ f = \lambda > 0 $ be a positive constant function in $ \Omega $. Assume that $ {\rm Vol}_g(\Omega) $ and the Euclidean diameter of $ \Omega $ are sufficiently small. In addition, we assume that the first eigenvalue of the Laplace-Beltrami operator $ -\Delta_{g} $ on $ \Omega $ with Dirichlet condition satisfies $ \lambda_{1} \rightarrow \infty $ as $ \Omega $ shrinks. If $ R_{g} < 0 $ within the small enough closed domain $ \bar{\Omega} $, then the Dirichlet problem (\ref{local:eqn1}) has a real, positive, smooth solution $ u \in \mathcal C^{\infty}(\Omega) \cap H_{0}^{1}(\Omega, g) \cap \mathcal C^{0}(\bar{\Omega}) $. \end{proposition} \begin{proof} We apply Theorem \ref{local:thm1} to get the existence result. As shown in \cite[Prop.~3.3]{XU3}, the hypotheses (P1) through (P4) are satisfied. The key is to show that the inequality (\ref{local:eqn5}) holds. By \cite[Prop.~2.2]{XU6} and \cite[Prop.~3.3]{XU3}, showing $ K < K_{0} $ is equivalent to showing that there exists a positive test function $ u \in \mathcal C_{c}^{\infty}(\Omega) $ such that \begin{equation}\label{local:eqn6} J_{0} : = \frac{\int_{\Omega} \sqrt{\det(g)} g^{ij} \partial_{i} u \partial_{j} u dx + \frac{1}{a} \int_{\Omega} \sqrt{\det(g)} R_{g} u^{2} dx}{\left( \int_{\Omega} \sqrt{\det(g)} u^{p} dx \right)^{\frac{2}{p}}} < T.
\end{equation} With Aubin's choice of test function $ u = \frac{\varphi_{\Omega}(x)}{\left( \epsilon + \lvert x \rvert^{2} \right)^{\frac{n-2}{2}}} $, where the cut-off function satisfies $ \varphi_{\Omega} \equiv 1 $ in a small ball around $ P $, Aubin showed that \begin{equation*} J_{0} \leqslant \Lambda < T. \end{equation*} Here $ \Lambda $ is a positive constant that only depends on the evaluation of the Weyl tensor at the point $ P $ and the constant $ \epsilon $ in the test function, see e.g. the proof of Theorem B in \cite{PL}. Thus all hypotheses in Theorem \ref{local:thm1} hold. It follows that (\ref{local:eqn1}) has a smooth, positive solution within a small enough domain $ \Omega $. The regularity argument is exactly the same as in \cite[Prop.~2.2]{XU6} and \cite[Prop.~3.3]{XU3}. \end{proof} \begin{remark}\label{local:re1} If $ f $ is not a constant function, then (\ref{local:eqn6}) becomes \begin{equation*} \frac{\int_{\Omega} \sqrt{\det(g)} g^{ij} \partial_{i} u \partial_{j} u dx + \frac{1}{a} \int_{\Omega} \sqrt{\det(g)} R_{g} u^{2} dx}{\left( \int_{\Omega} \sqrt{\det(g)} f u^{p} dx \right)^{\frac{2}{p}}} < \left( \max_{\bar{\Omega}} (f) \right)^{\frac{2 - n}{n}} T. \end{equation*} We must show that \begin{equation}\label{local:eqn7} \frac{\int_{\Omega} \sqrt{\det(g)} g^{ij} \partial_{i} u \partial_{j} u dx + \frac{1}{a} \int_{\Omega} \sqrt{\det(g)} R_{g} u^{2} dx}{\left( \int_{\Omega} \sqrt{\det(g)} u^{p} dx \right)^{\frac{2}{p}}} < \left( \frac{\min_{\bar{\Omega}} (f)}{\max_{\bar{\Omega}} (f)} \right)^{\frac{n - 2}{n}} T. \end{equation} When we shrink the domain so that $ \left( \frac{\min_{\bar{\Omega}} f}{\max_{\bar{\Omega}} f} \right)^{\frac{n - 2}{n}} \rightarrow 1 $, the choice of $ \epsilon $ becomes smaller, since it depends on the size of the domain. This may force the threshold $ \Lambda $ to be closer to $ T $. It is not clear how to obtain the inequality (\ref{local:eqn7}) unless we put restrictions on the function $ f $.
\end{remark}
Due to the difficulty in Remark \ref{local:re1}, we can use neither Aubin's test function nor the local Yamabe equation directly. We turn to the perturbed local Yamabe equation \begin{equation}\label{local:eqn8} -a\Delta_{g} u + \left( R_{g} + \beta \right) u = f u^{p-1} \; {\rm in} \; \Omega, u = 0 \; {\rm on} \; \partial \Omega \end{equation} for some constant $ \beta < 0 $ and modify the analysis in \cite[\S2]{XU6}. The goal is to gain a uniform gap between $ K $ and $ K_{0} $ as given in (\ref{local:eqn4}) with respect to the perturbed local Yamabe equation. A limiting argument then allows us to consider the sequence of solutions of (\ref{local:eqn8}) as $ \beta \rightarrow 0^{-} $. \begin{proposition}\label{local:prop2} Let $ (\Omega, g) $ be a Riemannian domain in $\mathbb R^n$ with $C^{\infty} $ boundary, $ n \geqslant 3 $, which is not locally conformally flat. Let $ f $ be a positive, smooth function on some open set $ \Omega' \supset \bar{\Omega} $. Assume that $ {\rm Vol}_g(\Omega) $ and the Euclidean diameter of $ \Omega $ are sufficiently small. In addition, we assume that the first eigenvalue of the Laplace-Beltrami operator $ -\Delta_{g} $ on $ \Omega $ with Dirichlet condition satisfies $ \lambda_{1} \rightarrow \infty $ as $ \Omega $ shrinks. If $ R_{g} < 0 $ within the small enough closed domain $ \bar{\Omega} $, then the Dirichlet problem (\ref{local:eqn1}) has a real, positive, smooth solution $ u \in \mathcal C^{\infty}(\Omega) \cap H_{0}^{1}(\Omega, g) \cap \mathcal C^{0}(\bar{\Omega}) $. \end{proposition} \begin{proof} As in \cite[\S2]{XU6}, we first show that (\ref{local:eqn8}) has a solution by applying Theorem \ref{local:thm1} again, provided that the domain is small enough. Without loss of generality, we may assume that $ \Omega $ is some geodesic normal ball of radius $ r $ centered at some point $ P $.
As mentioned in Remark \ref{local:re1}, $ K < K_{0} $ in (\ref{local:eqn5}) is equivalent to the inequality \begin{equation}\label{local:eqn9} J_{1, \beta} : = \frac{\int_{\Omega} \sqrt{\det(g)} g^{ij} \partial_{i} u \partial_{j} u dx + \frac{1}{a} \int_{\Omega} \sqrt{\det(g)} \left(R_{g} + \beta \right) u^{2} dx}{\left( \int_{\Omega} \sqrt{\det(g)} f u^{p} dx \right)^{\frac{2}{p}}} < \left(\max_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{n}} T \end{equation} for some appropriate choice of the test function $ u $. It is straightforward to check the equivalence between (\ref{local:eqn5}) and (\ref{local:eqn9}) by the formulas in (\ref{local:eqn4}) and a standard argument for critical points of the functional $ J(u) $ in (\ref{local:eqn3}) with \begin{equation*} a_{ij} = a \sqrt{\det(g)} g^{ij}, b(x) = f(x) \sqrt{\det(g)}, F(x, u) = \frac{1}{2} \left( R_{g} + \beta \right) u^{2}. \end{equation*} We have shown this in \cite[Prop.~2.2]{XU6} and \cite[Prop.~3.3]{XU3} for constant functions $ f $; only a very minor change is needed. It suffices to show that \begin{equation}\label{local:eqn10} J_{2, \beta, \Omega} : = \frac{\int_{\Omega} \sqrt{\det(g)} g^{ij} \partial_{i} u_{\beta} \partial_{j} u_{\beta} dx + \frac{1}{a} \int_{\Omega} \sqrt{\det(g)} \left(R_{g} + \beta \right) u_{\beta}^{2} dx}{\left( \int_{\Omega} \sqrt{\det(g)} u_{\beta}^{p} dx \right)^{\frac{2}{p}}} < \left( \frac{\min_{\bar{\Omega}}(f)}{\max_{\bar{\Omega}}(f)} \right)^{\frac{n-2}{n}} T \end{equation} for some good choice of the test function $ u_{\beta} $. We showed in Appendix A of \cite{XU3} that for every $ \beta < 0 $, \begin{equation}\label{local:eqn11} J_{2, \beta, \Omega} < T \end{equation} with the test function \begin{equation*} u_{\beta, \epsilon, \Omega} = \frac{\varphi_{\Omega}(x)}{\left( \epsilon + \lvert x \rvert^{2} \right)^{\frac{n-2}{2}}}.
\end{equation*} When $ n \geqslant 4 $, we choose $ \varphi_{\Omega} $ to be a radial cut-off function which is equal to $ 1 $ in a neighborhood of $ P $ and is equal to zero at the boundary. When $ n = 3 $, we apply a different function $ \varphi_{\Omega}(x) = \cos \left( \frac{\pi \lvert x \rvert}{2r} \right) $. We point out that the choice of $ \epsilon $ depends on the constant $ \beta $ and the size of the domain $ \Omega $. The smaller the $ \lvert \beta \rvert $ and/or the size of the domain $ \Omega $, the smaller the gap between $ J_{2, \beta, \Omega} $ and $ T $.
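We recall the standard fact behind this choice of test function (see e.g. \cite{PL}): the infimum $ T $ in (\ref{local:eqn4}) is independent of the bounded domain $ \Omega $ and coincides with the best Sobolev constant of $ \mathbb R^{n} $,
\begin{equation*}
T = \inf_{u \in \mathcal D^{1, 2}(\mathbb R^{n}) \setminus \lbrace 0 \rbrace} \frac{\int_{\mathbb R^{n}} \lvert Du \rvert^{2} dx}{\left( \int_{\mathbb R^{n}} \lvert u \rvert^{p} dx \right)^{\frac{2}{p}}},
\end{equation*}
which is attained exactly at the translates and dilates of $ U_{\epsilon}(x) = \left( \epsilon + \lvert x \rvert^{2} \right)^{\frac{2 - n}{2}} $; on a bounded domain the same infimum is never attained. This explains the choice $ u_{\beta, \epsilon, \Omega} = \varphi_{\Omega} U_{\epsilon} $: the cut-off only introduces an error that vanishes as $ \epsilon \rightarrow 0^{+} $.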
Next we show that for any small enough $ \Omega $, in the sense of volume and radial smallness, the functional $ J_{2, \beta, \Omega} $ satisfies \begin{equation}\label{local:eqn12} J_{2, \beta, \Omega} < J_{2, \beta_{0}, \Omega}, \forall \beta \in (\beta_{0}, 0), J_{2, \beta_{0}, \Omega} - J_{2, \beta, \Omega} > \delta > 0, \forall \beta \in (\beta_{0}, 0). \end{equation} Here $ \delta $ is a fixed positive constant, independent of the size of the domain and of the choice of $ \beta $ and $ \epsilon $ in the old test function. To get this, we need a new test function. The details below are exactly the same as in \cite[Prop.~2.3]{XU6}. We consider the function $ v_{\beta, \Omega} $ satisfying \begin{equation}\label{local:eqn13} -a\Delta_{g} v_{\beta, \Omega} = -2R_{g} u_{\beta, \epsilon, \Omega} \; {\rm in} \; \Omega, v_{\beta, \Omega} = 0 \; {\rm on} \; \partial \Omega. \end{equation} Note that $ R_{g} < 0 $ in $ \Omega $, hence $ v_{\beta, \Omega} > 0 $ by the maximum principle. Denote \begin{equation}\label{local:eqn14} \begin{split} \Gamma_{1} & : = \frac{\int_{\Omega} (u_{\beta, \epsilon, \Omega} + v_{\beta, \Omega})^{p} d\text{Vol}_{g}}{\int_{\Omega} u_{\beta, \epsilon, \Omega}^{p} d\text{Vol}_{g}}; \\ \Gamma_{2} & : = \frac{a \int_{\Omega} \nabla_{g} (u_{\beta, \epsilon, \Omega} + v_{\beta, \Omega}) \cdot \nabla_{g}(u_{\beta, \epsilon, \Omega} + v_{\beta, \Omega}) d\text{Vol}_{g} + \int_{\Omega} \left( R_{g} + \beta \right) (u_{\beta, \epsilon, \Omega} + v_{\beta, \Omega})^{2} d\text{Vol}_{g}}{a \int_{\Omega} \nabla_{g} u_{\beta, \epsilon, \Omega} \cdot \nabla_{g}u_{\beta, \epsilon, \Omega} d\text{Vol}_{g} + \int_{\Omega} \left( R_{g} + \beta_{0} \right) u_{\beta, \epsilon, \Omega}^{2} d\text{Vol}_{g}}.
\end{split} \end{equation} By the same argument as in \cite[Prop.~2.3]{XU6}, we showed that \begin{equation*} \Gamma_{2} \leqslant 1 + \frac{\left( 4 \inf_{\Omega} \lvert R_{g} \rvert+ \left( \beta - \beta_{0} \right) \right) \lambda_{1}^{-1}}{a + \lambda_{1}^{-1} \left( - \sup_{\Omega} \lvert R_{g} \rvert + \beta_{0} \right)}. \end{equation*} Here $ \lambda_{1} $ is the first eigenvalue of the Laplace-Beltrami operator, which increases to positive infinity as the size of $ \Omega $ shrinks. When we consider $ \Gamma_{1} $ in a smaller geodesic ball, say the ball of radius $ \xi r $ centered at $ P $, $ \Gamma_{1} $ is given by the function $ u_{\beta, \epsilon, \Omega}(\xi x) $. At the level of the PDE, this is the same as dealing with the PDE (\ref{local:eqn13}) with the metric $ \xi g $. The PDE (\ref{local:eqn13}) is scaling invariant, and thus the new solution is of the form $ v_{\beta, \Omega}(\xi x) $. Changing $ \epsilon $ will not affect the evaluation of $ \Gamma_{1} $, as the numerator and denominator change at the same rate simultaneously. It follows that $ \Gamma_{1} $, which is larger than $ 1 $, has a uniform lower bound for all geodesic normal balls $ \Omega $ of radius $ \xi r $ centered at $ P $, $ \xi < 1 $, and all $ \epsilon > 0 $, i.e. \begin{equation}\label{local:eqn15} \Gamma_{1} \geqslant 1 + 2B \end{equation} for some constant $ B > 0 $. We then make $ \Omega $ small enough so that \begin{equation}\label{local:eqn16} \Gamma_{2} \leqslant 1 + B \end{equation} due to the expression for $ \Gamma_{2} $. As we pointed out, we can plug $ u_{\beta, \epsilon, \Omega} $ into $ J_{2, \beta_{0}, \Omega} $. The only difference is the choice of $ \epsilon $, which is smaller; thus $ T - J_{2, \beta_{0}, \Omega} $ with the test function $ u_{\beta, \epsilon, \Omega} $ is smaller but still positive.
Choosing the test function $ u_{\beta} = u_{\beta, \epsilon, \Omega} + v_{\beta, \Omega} $ for any $ J_{2, \beta, \Omega} $ with some $ \beta \in (\beta_{0}, 0) $, and the test function $ u_{\beta_{0}} = u_{\beta, \epsilon, \Omega} $ for $ J_{2, \beta_{0}, \Omega} $, we have shown that \begin{equation}\label{local:eqn17} \frac{J_{2, \beta, \Omega}}{J_{2, \beta_{0}, \Omega}} \leqslant \frac{1 + B}{1 + 2B} \leqslant 1 - \delta \end{equation} for some fixed $ \delta $. Note that (\ref{local:eqn17}) holds for all $ \beta \in (\beta_{0}, 0) $ and all geodesic normal balls $ \Omega $ with radius $ \xi r $ centered at $ P $, $ \xi < 1 $. We can make $ \Omega $ even smaller so that \begin{equation*} \left( \frac{\min_{\bar{\Omega}}(f)}{\max_{\bar{\Omega}}(f)} \right)^{\frac{n-2}{n}} > 1 - \frac{\delta}{2}. \end{equation*} It then follows that (\ref{local:eqn10}) holds for small enough $ \Omega $, due to (\ref{local:eqn17}). Not only that, we have also shown that there exists a constant $ \delta_{0} $ such that \begin{align*} J_{1, \beta} & \leqslant J_{2, \beta, \Omega} \cdot \left( \min_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{n}} < (1 - \delta ) T \left( \min_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{n}} \\ & = (1 - \delta ) \left( \frac{\max_{\bar{\Omega}}(f)}{\min_{\bar{\Omega}}(f)} \right)^{\frac{n - 2}{n}} \cdot \left(\max_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{n}} T \\ & \leqslant \frac{1 - \delta}{1 - \frac{\delta}{2}} \cdot \left(\max_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{n}} T. \end{align*} By the same argument as in \cite[Prop.~2.2]{XU6} and \cite[Prop.~3.3]{XU3}, we conclude that \begin{equation}\label{local:eqn18} K_{0} - \frac{1}{n} \left( a J_{1, \beta} \right)^{\frac{n}{2}} \geqslant \delta_{0} > 0 \end{equation} for some fixed constant $ \delta_{0} $, provided that $ \Omega $ is small enough.
The validity of (\ref{local:eqn10}) implies that all hypotheses in Theorem \ref{local:thm1} hold, hence the perturbed local Yamabe equation (\ref{local:eqn8}) has a solution, for all $ \beta < 0 $.
We now have a sequence of positive, smooth solutions $ \lbrace u_{\beta, *} \rbrace $ that solve (\ref{local:eqn8}) for each $ \beta < 0 $. To take the limit $ \beta \rightarrow 0^{-} $, we need to estimate $ \lVert u_{\beta, *} \rVert_{\mathcal L^{t}(\Omega, g)} $ for some $ t > p = \frac{2n}{n - 2} $. We consider the norm within a range $ \beta \in (\beta_{0}, 0) $. The choice of $ \beta_{0} $ is quite flexible. According to Wang's result \cite[Thm.~1.1]{WANG}, we know that the solutions of (\ref{local:eqn8}) satisfy \begin{equation*} J(u_{\beta, *}) \leqslant \frac{1}{n} \left( a J_{1, \beta} \right)^{\frac{n}{2}} \leqslant K_{0} - \delta_{0}, \forall \beta \in (\beta_{0}, 0). \end{equation*} Pairing (\ref{local:eqn8}) with the solution $ u_{\beta, *} $ on both sides, we have \begin{equation}\label{local:eqn19} a \lVert \nabla_{g} u_{\beta, *} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} + \int_{\Omega} \left( R_{g} + \beta \right) u_{\beta, *}^{2} d\text{Vol}_{g} = \int_{\Omega} f u_{\beta, *}^{p} d\text{Vol}_{g}.
\end{equation} Recalling that $ p = \frac{2n}{n - 2} $, we rewrite $ J(u_{\beta, *}) \leqslant K_{0} - \delta_{0} $, using the identity (\ref{local:eqn19}), as \begin{align*} J(u_{\beta, *}) \leqslant K_{0} - \delta_{0} & \Leftrightarrow \frac{a}{2} \lVert \nabla_{g} u_{\beta, *} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} - \frac{1}{p} \int_{\Omega} f u_{\beta, *}^{p} d\text{Vol}_{g} + \frac{1}{2} \int_{\Omega} \left( R_{g} + \beta \right) u_{\beta, *}^{2} d\text{Vol}_{g} \leqslant K_{0} - \delta_{0} \\ & \Leftrightarrow \frac{a}{2} \lVert \nabla_{g} u_{\beta, *} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} - \frac{n - 2}{2n} \left( a \lVert \nabla_{g} u_{\beta, *} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} + \int_{\Omega} \left( R_{g} + \beta \right) u_{\beta, *}^{2} d\text{Vol}_{g}\right) \\ & \qquad + \frac{1}{2} \int_{\Omega} \left( R_{g} + \beta \right) u_{\beta, *}^{2} d\text{Vol}_{g} \leqslant K_{0} - \delta_{0} \\ & \Leftrightarrow \frac{a}{n} \lVert \nabla_{g} u_{\beta, *} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} + \frac{1}{n} \int_{\Omega} \left( R_{g} + \beta \right) u_{\beta, *}^{2} d\text{Vol}_{g} \leqslant K_{0} - \delta_{0}. \end{align*} Note that \begin{equation*} K_{0} - \delta_{0} = \frac{K_{0} - \delta_{0}}{K_{0}} \cdot K_{0} = \frac{K_{0} - \delta_{0}}{K_{0}} \cdot \frac{1}{n} \left( \max_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{2}} a^{\frac{n}{2}} T^{\frac{n}{2}}. \end{equation*} It follows from the previous two calculations that \begin{equation}\label{local:eqn20} a \lVert \nabla_{g} u_{\beta, *} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} + \int_{\Omega} \left( R_{g} + \beta \right) u_{\beta, *}^{2} d\text{Vol}_{g} \leqslant \frac{K_{0} - \delta_{0}}{K_{0}} \cdot \left( \max_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{2}} a^{\frac{n}{2}} T^{\frac{n}{2}}.
\end{equation} Substituting (\ref{local:eqn20}) into (\ref{local:eqn19}), we conclude that \begin{equation}\label{local:eqn21} \int_{\Omega} f u_{\beta, *}^{p} d\text{Vol}_{g} \leqslant \frac{K_{0} - \delta_{0}}{K_{0}} \cdot \left( \max_{\bar{\Omega}}(f) \right)^{\frac{2 - n}{2}} a^{\frac{n}{2}} T^{\frac{n}{2}}, \forall \beta \in (\beta_{0}, 0). \end{equation} Let $ \theta > 0 $ be some constant that will be determined later. Define \begin{equation*} w_{\beta} : = u_{\beta, *}^{1 + \theta} \end{equation*} and pair the perturbed local Yamabe equation (\ref{local:eqn8}) with $ u_{\beta, *}^{1 + 2\theta} $ on both sides; we have \begin{align*} & \int_{\Omega} a \nabla_{g} u_{\beta, *} \cdot \nabla_{g} \left(u_{\beta, *}^{1 + 2\theta} \right) d\text{Vol}_{g} + \int_{\Omega} \left(R_{g} + \beta \right) u_{\beta, *}^{2 + 2\theta} d\text{Vol}_{g} = \int_{\Omega} f u_{\beta, *}^{p + 2\theta} d\text{Vol}_{g}; \\ \Rightarrow & \frac{1 + 2\theta}{(1 + \theta )^{2}} \int_{\Omega} a \lvert \nabla_{g} w_{\beta} \rvert^{2} d\text{Vol}_{g} = \int_{\Omega} f w_{\beta}^{2} u_{\beta, *}^{p-2} d\text{Vol}_{g} - \int_{\Omega} \left(R_{g} + \beta \right) w_{\beta}^{2} d\text{Vol}_{g} \\ & \qquad \leqslant \left( \max_{\bar{\Omega}} (f) \right)^{\frac{n - 2}{n}} \int_{\Omega} w_{\beta}^{2} f^{\frac{p-2}{p}} u_{\beta, *}^{p-2} d\text{Vol}_{g} - \int_{\Omega} \left(R_{g} + \beta \right) w_{\beta}^{2} d\text{Vol}_{g}. \end{align*} The last inequality holds since $ f, w_{\beta}, u_{\beta, *} $ are all positive functions.
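For completeness, the factor $ \frac{1 + 2\theta}{(1 + \theta)^{2}} $ above comes from the elementary chain-rule identities (writing $ u = u_{\beta, *} $):
\begin{equation*}
\nabla_{g} u \cdot \nabla_{g} \left( u^{1 + 2\theta} \right) = (1 + 2\theta) u^{2\theta} \lvert \nabla_{g} u \rvert^{2}, \qquad \lvert \nabla_{g} w_{\beta} \rvert^{2} = \left\lvert \nabla_{g} \left( u^{1 + \theta} \right) \right\rvert^{2} = (1 + \theta)^{2} u^{2\theta} \lvert \nabla_{g} u \rvert^{2},
\end{equation*}
so that $ \nabla_{g} u \cdot \nabla_{g} \left( u^{1 + 2\theta} \right) = \frac{1 + 2\theta}{(1 + \theta)^{2}} \lvert \nabla_{g} w_{\beta} \rvert^{2} $ pointwise.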
By the sharp Sobolev inequality on closed manifolds \cite[Thm.~2.3, Thm.~3.3]{PL}, it follows that for any $ \alpha > 0 $, \begin{align*} \lVert w_{\beta} \rVert_{\mathcal L^{p}(\Omega, g)}^{2} & \leqslant (1 + \alpha) \frac{1}{T} \lVert \nabla_{g} w_{\beta} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} + C_{\alpha}' \lVert w_{\beta} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} \\ & = (1 + \alpha) \frac{1}{aT} \cdot \frac{(1 + \theta )^{2}}{1 + 2\theta} \left( \frac{1 + 2\theta}{(1 + \theta )^{2}} \int_{\Omega} a \lvert \nabla_{g} w_{\beta} \rvert^{2} d\text{Vol}_{g} \right) + C_{\alpha}' \lVert w_{\beta} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} \\ & \leqslant (1 + \alpha) \frac{\left( \max_{\bar{\Omega}} (f) \right)^{\frac{n - 2}{n}}}{aT} \cdot \frac{(1 + \theta )^{2}}{1 + 2\theta} \int_{\Omega} w_{\beta}^{2} f^{\frac{p-2}{p}} u_{\beta, *}^{p - 2} d\text{Vol}_{g} + C_{\alpha} \lVert w_{\beta} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} \\ & \leqslant (1 + \alpha) \frac{\left( \max_{\bar{\Omega}} (f) \right)^{\frac{n - 2}{n}}}{aT} \cdot \frac{(1 + \theta )^{2}}{1 + 2\theta} \lVert w_{\beta} \rVert_{\mathcal L^{p}(\Omega, g)}^{2} \left( \int_{\Omega} f u_{\beta, *}^{p} d\text{Vol}_{g} \right)^{\frac{p - 2}{p}} + C_{\alpha} \lVert w_{\beta} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} \\ & \leqslant (1 + \alpha) \frac{\left( \max_{\bar{\Omega}} (f) \right)^{\frac{n - 2}{n}}}{aT} \cdot \frac{(1 + \theta )^{2}}{1 + 2\theta} \lVert w_{\beta} \rVert_{\mathcal L^{p}(\Omega, g)}^{2} \cdot \left( \frac{K_{0} - \delta_{0}}{K_{0}} \left( \max_{\bar{\Omega}} (f) \right)^{\frac{2 - n}{2}}a^{\frac{n}{2}} T^{\frac{n}{2}} \right)^{\frac{2}{n}} \\ & \qquad + C_{\alpha} \lVert w_{\beta} \rVert_{\mathcal L^{2}(\Omega, g)}^{2} \\ & = (1 + \alpha) \cdot \frac{(1 + \theta )^{2}}{1 + 2\theta} \cdot \left( \frac{K_{0} - \delta_{0}}{K_{0}} \right)^{\frac{2}{n}} \lVert w_{\beta} \rVert_{\mathcal L^{p}(\Omega, g)}^{2} + C_{\alpha} \lVert w_{\beta} \rVert_{\mathcal L^{2}(\Omega, g)}^{2}.
\end{align*} It follows that we can choose appropriate positive constants $ \alpha, \theta $ so that \begin{equation*} (1 + \alpha) \cdot \frac{(1 + \theta )^{2}}{1 + 2\theta} \cdot \left( \frac{K_{0} - \delta_{0}}{K_{0}} \right)^{\frac{2}{n}} < 1, \forall \beta \in (\beta_{0}, 0). \end{equation*} Hence we have \begin{equation}\label{local:eqn22} \lVert w_{\beta} \rVert_{\mathcal L^{p}(\Omega, g)}^{2} = \lVert u_{\beta, *} \rVert_{\mathcal L^{p(1 + \theta)}(\Omega, g)}^{2(1 + \theta)} < C, \forall \beta \in (\beta_{0}, 0). \end{equation} Note that $ p(1 + \theta) > p $, so (\ref{local:eqn22}) is the desired uniform $ \mathcal L^{t} $-bound with $ t > p $. The rest of the argument is exactly the same as in \cite[Prop.~2.4]{XU6}, which applies bootstrapping methods, the Arzel\`{a}-Ascoli theorem, etc., to take the limit $ \beta \rightarrow 0^{-} $ in the classical sense and obtain a positive, smooth solution of (\ref{local:eqn1}). \end{proof}
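We remark that the choice of the constants $ \alpha, \theta $ at the end of the proof above is elementary: since $ \left( \frac{K_{0} - \delta_{0}}{K_{0}} \right)^{\frac{2}{n}} < 1 $ is a fixed number independent of $ \beta $, and
\begin{equation*}
\frac{(1 + \theta)^{2}}{1 + 2\theta} = 1 + \frac{\theta^{2}}{1 + 2\theta} \rightarrow 1 \; {\rm as} \; \theta \rightarrow 0^{+},
\end{equation*}
we may first fix $ \theta > 0 $ small enough that $ \frac{(1 + \theta)^{2}}{1 + 2\theta} \left( \frac{K_{0} - \delta_{0}}{K_{0}} \right)^{\frac{2}{n}} < 1 $, and then fix $ \alpha > 0 $ small enough that the product with $ 1 + \alpha $ stays below $ 1 $; both choices are uniform in $ \beta \in (\beta_{0}, 0) $.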
When the manifold is locally conformally flat, we can get the existence result in a different, simpler way. Consider a manifold $ (\bar{M}, g) $, with or without boundary. For the conformal change $ \tilde{g} = \varphi^{p-2} g $, we have \begin{equation}\label{local:eqn23} \left( -a\Delta_{\tilde{g}} + R_{\tilde{g}} \right) u = \varphi^{1 - p} \left( -a\Delta_{g} + R_{g} \right) \left( \varphi u \right), u \in \mathcal C^{\infty}(\bar{M}). \end{equation} This is the conformal invariance of the conformal Laplacian. The next result relies on the existence of solutions of the nonlinear eigenvalue problem $ -\Delta_{e} u = f u^{p-1} $ in some open subset $ \Omega \subset \mathbb R^{n} $, with Dirichlet boundary condition. When $ f $ is a constant function, we refer to Bahri and Coron \cite{BC}; when $ f $ is a general positive function, we refer to Clapp, Faya and Pistoia \cite{CFP}. \begin{proposition}\label{local:prop3}\cite[Prop.~2.5]{XU6} Let $ (\Omega, g) $ be a Riemannian domain in $\mathbb R^n$, $ n \geqslant 3 $, with $C^{\infty} $ boundary. Let the metric $ g $ be locally conformally flat on some open subset $ \Omega' \supset \bar{\Omega} $. For any point $ \rho \in \Omega $ and any positive constant $ \epsilon $, denote the region $ \Omega_{\epsilon} $ to be \begin{equation*}
\Omega_{\epsilon} = \lbrace x \in \Omega | \lvert x - \rho \rvert > \epsilon \rbrace. \end{equation*} Assume that $ Q \in \mathcal C^{2}(\bar{\Omega}) $, $ \min_{x \in \bar{\Omega}} Q(x) > 0 $ and $ \nabla Q(\rho) \neq 0 $. Then there exists some $ \epsilon_{0} $ such that for every $ \epsilon \in (0, \epsilon_{0}) $ the Dirichlet problem \begin{equation}\label{local:eqn24} -a\Delta_{g}u + R_{g} u = Qu^{p-1} \; {\rm in} \; \Omega_{\epsilon}, u = 0 \; {\rm on} \; \partial \Omega_{\epsilon} \end{equation} has a real, positive, smooth solution $ u \in \mathcal C^{\infty}(\Omega_{\epsilon}) \cap H_{0}^{1}(\Omega_{\epsilon}, g) \cap \mathcal C^{0}(\bar{\Omega}_{\epsilon}) $. \end{proposition} \begin{remark}\label{local:re2} The results in Propositions \ref{local:prop2} and \ref{local:prop3} are all we need for this article. We observe that when the manifold is locally conformally flat, the local Yamabe problem can only have nontrivial solutions within a topologically nontrivial set, such as an annulus. We would like to point out that Wang's result \cite[Thm.~1.1]{WANG} can be used to obtain further local results, especially at the critical exponent. For super-critical exponents, the Hopf fibration can be applied to obtain interesting local results; we refer to \cite{CFP}. \end{remark}
\section{Necessary and Sufficient Conditions for Prescribed Scalar Curvature Problem with Zero First Eigenvalue} In this section, we show two main results. For closed manifolds $ (M, g) $, $ n = \dim M \geqslant 3 $, with zero first eigenvalue of the conformal Laplacian, the necessary and sufficient condition for a given function $ S $ to be realized as a prescribed scalar curvature function for some pointwise conformal metric $ \tilde{g} \in [g] $ is that either $ S \equiv 0 $ or $ S $ changes sign and $ \int_{M} S d\text{Vol}_{g} < 0 $. This problem is equivalent to the existence of some positive, smooth solution of the PDE \begin{equation}\label{zero:eqn1} -a\Delta_{g} u = S u^{p-1} \; {\rm in} \; M \end{equation} since we may assume the initial metric $ g $ is scalar-flat due to the resolution of the Yamabe problem, see \cite{XU3}. We always start with the scalar-flat metric $ g $ throughout this section. The closed Riemann surface case was settled by Kazdan and Warner \cite{KW2}. Closed manifolds of dimensions $ 3 $ and $ 4 $ were settled by Escobar and Schoen \cite{ESS}. We extend the same necessary and sufficient condition to all closed manifolds of dimension at least $ 3 $, provided that the first eigenvalue of the conformal Laplacian is zero.
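For the reader's convenience, we recall the standard conformal transformation law behind this reduction, stated in the conventions of this article with $ p = \frac{2n}{n-2} $: under the conformal change $ \tilde{g} = u^{p-2} g $ with $ u > 0 $, the scalar curvature transforms as \begin{equation*} R_{\tilde{g}} = u^{1 - p} \left( -a\Delta_{g} u + R_{g} u \right). \end{equation*} Since the background metric $ g $ is scalar-flat, $ R_{\tilde{g}} = S $ holds precisely when $ -a\Delta_{g} u = S u^{p-1} $, which is (\ref{zero:eqn1}).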
Next we consider the analogous problem on compact manifolds $ (\bar{M}, g) $ with non-empty smooth boundary, $ n = \dim \bar{M} \geqslant 3 $, again with zero first eigenvalue of the conformal Laplacian and Robin boundary condition. We show that the necessary and sufficient condition for a given function $ S $ to be realized as a prescribed scalar curvature function for some Yamabe metric $ \tilde{g} \in [g] $ with minimal boundary, i.e., mean curvature $ h_{\tilde{g}} = 0 $, is that either $ S \equiv 0 $ or $ S $ changes sign and $ \int_{M} S d\text{Vol}_{g} < 0 $. To the best of our knowledge, this result is new. Again we always start with the scalar-flat metric $ g $ with minimal boundary in this case. This problem is equivalent to the existence of some positive, smooth solution of the PDE \begin{equation}\label{zero:eqn2} -a\Delta_{g} u = Su^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} = 0 \; {\rm on} \; \partial M \end{equation} since we may assume the initial metric $ g $ is both scalar-flat and mean-flat due to the resolution of the Escobar problem, see \cite{XU4}.
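To see why the Neumann condition in (\ref{zero:eqn2}) encodes the minimal boundary requirement, we recall the standard transformation law for the mean curvature, again a routine computation in the conventions of this article: under $ \tilde{g} = u^{p-2} g $, \begin{equation*} h_{\tilde{g}} = u^{-\frac{p}{2}} \left( \frac{p - 2}{2} \cdot \frac{\partial u}{\partial \nu} + h_{g} u \right). \end{equation*} Since our background metric satisfies $ h_{g} = 0 $, we have $ h_{\tilde{g}} = 0 $ if and only if $ \frac{\partial u}{\partial \nu} = 0 $ on $ \partial M $.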
The key step for both cases is to construct both lower and upper solutions of the Yamabe equations (\ref{zero:eqn1}) or (\ref{zero:eqn2}). If $ S \equiv 0 $, then the problem is trivial. We are interested in the nontrivial case from now on. In order to construct the upper solution, we need to observe the following relation between (\ref{zero:eqn1}) or (\ref{zero:eqn2}) and a new PDE. This is essentially due to Kazdan and Warner \cite{KW}. \begin{lemma}\label{zero:lemma1} Let $ (M, g) $ be a closed manifold, $ n \geqslant 3 $. Let $ S \in \mathcal C^{\infty}(M) $ be a given function. Then there exists some positive function $ u \in \mathcal C^{\infty}(M) $ satisfying \begin{equation}\label{zero:eqn3} -a\Delta_{g} u \geqslant S u^{p-1} \; {\rm in} \; M \end{equation} if and only if there exists some positive function $ v \in \mathcal C^{\infty}(M) $ satisfying \begin{equation}\label{zero:eqn4} -a\Delta_{g} v + \frac{(p - 1)a}{p - 2} \cdot \frac{\lvert \nabla_{g} v \rvert^{2}}{v} \leqslant (2 - p)S. \end{equation} Moreover, the equality in (\ref{zero:eqn3}) holds if and only if the equality in (\ref{zero:eqn4}) holds. \end{lemma} \begin{proof} Assume that there is a positive function $ u \in \mathcal C^{\infty}(M) $ that satisfies (\ref{zero:eqn3}). Define \begin{equation*} v = u^{2 - p}. \end{equation*} Note that $ 2 - p = -\frac{4}{n - 2} < 0 $ since $ n \geqslant 3 $ by hypothesis. We compute that \begin{equation*} \nabla v = (2 - p) u^{1 - p} \nabla u \Leftrightarrow \nabla u = u^{p-1} (2 - p)^{-1} \nabla v, \end{equation*} and \begin{equation*} \Delta_{g} v = (2 - p) u^{1 - p} \Delta_{g} u + (2 - p)(1 - p) u^{-p} \lvert \nabla_{g} u \rvert^{2}. 
\end{equation*} Using the relation between $ \nabla u $ and $ \nabla v $, $ v = u^{2-p} $ and the inequality (\ref{zero:eqn3}), we have \begin{align*} a\Delta_{g} v & = (2 - p) u^{1 - p} \left( a\Delta_{g} u \right) +a (2 - p) (1 - p) u^{-p} \lvert \nabla_{g} u \rvert^{2} \\ & \geqslant (p - 2) u^{1-p} S u^{p-1} + a(2 - p) (1 - p) (2 - p)^{-2} u^{2p - 2} u^{-p} \lvert \nabla_{g} v \rvert^{2} \\ & = (p - 2) S + \frac{a(p - 1)}{p - 2} u^{p -2} \lvert \nabla_{g} v \rvert^{2} = (p - 2) S + \frac{a(p - 1)}{p - 2} \frac{ \lvert \nabla_{g} v \rvert^{2}}{v}. \end{align*} Shifting $ (p -2) S $ to the left side and $ a\Delta_{g} v $ to the right side, we get the inequality (\ref{zero:eqn4}). For the other direction, we start with some positive function $ v \in \mathcal C^{\infty}(M) $ and define $ u = v^{\frac{1}{2 - p}} $. The argument in the other direction is similar, and we omit it.
The equality case is also straightforward: one just changes the inequalities above into equalities. \end{proof}
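For the reader's convenience, the omitted direction can be sketched as follows. With $ u = v^{\frac{1}{2 - p}} $, a direct computation gives \begin{equation*} \Delta_{g} u = \frac{1}{2 - p} v^{\frac{p - 1}{2 - p}} \left( \Delta_{g} v + \frac{p - 1}{2 - p} \cdot \frac{\lvert \nabla_{g} v \rvert^{2}}{v} \right). \end{equation*} Hence, using $ u^{p - 1} = v^{\frac{p - 1}{2 - p}} $ and the inequality (\ref{zero:eqn4}), \begin{align*} a\Delta_{g} u & = -\frac{1}{2 - p} v^{\frac{p - 1}{2 - p}} \left( -a\Delta_{g} v + \frac{(p - 1)a}{p - 2} \cdot \frac{\lvert \nabla_{g} v \rvert^{2}}{v} \right) \\ & \leqslant -\frac{1}{2 - p} v^{\frac{p - 1}{2 - p}} (2 - p) S = -S u^{p - 1}, \end{align*} since the factor $ -\frac{1}{2 - p} $ is positive; this is exactly (\ref{zero:eqn3}).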
We now construct a good candidate of the upper solution of (\ref{zero:eqn1}) on closed manifolds $ (M, g) $. \begin{proposition}\label{zero:prop1} Let $ (M, g) $ be a closed manifold, $ n = \dim M \geqslant 3 $. Let $ S \not\equiv 0 $ be a given smooth function such that \begin{equation}\label{zero:eqn5} \int_{M} S d\text{Vol}_{g} < 0. \end{equation} If $ \eta_{1} = 0 $, there exists a positive function $ u \in \mathcal C^{\infty}(M) $ such that \begin{equation}\label{zero:eqn6} -a\Delta_{g} u \geqslant S u^{p-1} \; {\rm in} \; M. \end{equation} \end{proposition} \begin{proof} By Lemma \ref{zero:lemma1}, it suffices to show that there exists some positive function $ v \in \mathcal C^{\infty}(M) $ such that (\ref{zero:eqn4}) holds. Note that (\ref{zero:eqn5}) implies that $ \int_{M} (2 - p) S d\text{Vol}_{g} > 0 $ since $ 2 - p < 0 $. Define \begin{equation*} \gamma : = \frac{2 - p}{\text{Vol}_{g}(M)} \int_{M} S d\text{Vol}_{g}. \end{equation*} We consider the PDE \begin{equation}\label{zero:eqn7} -a\Delta_{g} v_{0} = (2 - p ) S - \gamma \; {\rm in} \; M. \end{equation} Due to the definition of $ \gamma $, we observe that the average of the function $ (2 - p) S - \gamma $ is zero. By standard elliptic theory, there exists a smooth solution $ v_{0} $ of (\ref{zero:eqn7}). Fix this $ v_{0} $. Take $ C $ large enough so that \begin{equation*} v : = v_{0} + C > 0 \; {\rm on} \; M. \end{equation*} Clearly $ v $ solves (\ref{zero:eqn7}). We enlarge the constant $ C $, if necessary, so that \begin{equation*} \frac{a(p - 1)}{ p - 2} \frac{\lvert \nabla_{g} v \rvert^{2}}{v} = \frac{a(p - 1)}{ p - 2} \frac{\lvert \nabla_{g} v_{0} \rvert^{2}}{v_{0} + C} < \gamma. \end{equation*} Fix the constant $ C $. 
It follows that for this choice of $ v $, we have \begin{equation*} -a\Delta_{g} v + \frac{(p - 1)a}{p - 2} \cdot \frac{\lvert \nabla_{g} v \rvert^{2}}{v} = (2 - p) S - \gamma + \frac{(p - 1)a}{p - 2} \cdot \frac{\lvert \nabla_{g} v \rvert^{2}}{v} \leqslant (2 - p) S - \gamma + \gamma \leqslant (2 - p) S. \end{equation*} Therefore (\ref{zero:eqn4}) holds. \end{proof} \begin{remark}\label{zero:re1} Note that the assumption $ \int_{M} S d\text{Vol}_{g} < 0 $ is essential here since otherwise we may not be able to compensate the positive term $ \frac{(p - 1)a}{p - 2} \cdot \frac{\lvert \nabla_{g} v \rvert^{2}}{v} $ pointwise.
Note also that the construction of the upper solution does not require $ S $ to change sign. This condition is necessary when we apply our local analysis in \S3 to construct a lower solution.
Lastly, we point out that a function satisfying (\ref{zero:eqn6}) alone may not be good enough. In order to apply the monotone iteration scheme stated in \S2, the function $ u $ must also be larger than the lower solution we construct. We will handle this later. \end{remark} As we mentioned in Remark \ref{zero:re1}, we have to consider the largeness of $ u $. However, it is in general very hard to get uniform largeness on the whole manifold $ M $ unless we are dealing with the negative first eigenvalue case. We therefore introduce the next result as preparation for the construction of the upper solution.
\begin{corollary}\label{zero:cor1} Let $ (M, g) $ be a closed manifold, $ n = \dim M \geqslant 3 $. Let $ S \not\equiv 0 $ be a given smooth function such that \begin{equation}\label{zero:eqn8} \int_{M} S d\text{Vol}_{g} < 0. \end{equation} If $ \eta_{1} = 0 $, there exists a positive function $ u \in \mathcal C^{\infty}(M) $ and a small enough constant $ \gamma_{0} > 0 $ such that \begin{equation}\label{zero:eqn9} -a\Delta_{g} u \geqslant \left( S + \gamma_{0} \right) u^{p-1} \; {\rm in} \; M. \end{equation} \end{corollary} \begin{proof} Since $ \int_{M} S d\text{Vol}_{g}< 0 $, we can always find some constant $ \gamma_{0} > 0 $ such that \begin{equation*} \int_{M} \left( S + \gamma_{0} \right) d\text{Vol}_{g} < 0. \end{equation*} We then repeat the argument in Proposition \ref{zero:prop1} for the new function $ S + \gamma_{0} $. \end{proof} We now construct the lower solution. There is still an obstacle to constructing a lower solution. If the manifold is locally conformally flat, we apply Proposition \ref{local:prop3} directly since locally the metric is conformal to the Euclidean metric. Precisely speaking, the metric $ g = \phi^{p-2} g_{e} $ for some positive function $ \phi $. By (\ref{local:eqn23}), $ u $ solving (\ref{local:eqn24}) is equivalent to \begin{equation*} -a\Delta_{e} (\phi u) = Q (\phi u)^{p-1} \; {\rm in} \; \Omega_{\epsilon}, \phi u = 0 \; {\rm on} \; \partial \Omega_{\epsilon}. \end{equation*} Extending the function $ \phi u $ by zero gives a good candidate for the lower solution. But if the manifold is not locally conformally flat, we will use the result of Proposition \ref{local:prop2}, in which we require $ R_{g} < 0 $ within the small domain $ \Omega $. Moreover, we require that the region on which $ R_{g} < 0 $ overlap the region on which $ S > 0 $. The next result, which was first given in \cite[Thm.~4.6]{XU3}, resolves this issue.
\begin{proposition}\label{zero:prop2} Let $ (M, g) $ be a closed manifold with $ n = \dim M \geqslant 3 $. Let $ P $ be any fixed point of $ M $. Then there exists some smooth function $ R_{0} \in \mathcal C^{\infty}(M) $, which is negative at $ P $, such that $ R_{0} $ is realized as the scalar curvature function with respect to some conformal change of the original metric $ g $. \end{proposition} \begin{proof} See \cite[Thm.~4.6]{XU3}. Although \cite[Thm.~4.6]{XU3} is stated for the positive first eigenvalue case, the same argument applies here. The key observation is that even in the scalar-flat case, the size of the geodesic ball is comparable with the size of the Euclidean ball with the same radius, provided that the radius is small enough. \end{proof}
We now construct the lower solution of the PDE (\ref{zero:eqn1}). \begin{proposition}\label{zero:prop3} Let $ (M, g) $ be a closed manifold, $ n = \dim M \geqslant 3 $. Let $ S \not\equiv 0 $ be a given smooth function on $ M $; in addition, $ S $ changes sign. If the metric $ g $ is scalar-flat, then there exists a nonnegative function $ u_{-} \in \mathcal C^{0}(M) \cap H^{1}(M, g) $, not identically zero, such that \begin{equation}\label{zero:eqn10} -a\Delta_{g} u_{-} \leqslant S u_{-}^{p-1} \; {\rm in} \; M \end{equation} holds in the weak sense. \end{proposition} \begin{proof} We split the argument according to the vanishing of the Weyl tensor. Let $ O $ be the set on which $ S > 0 $. Since $ S $ changes sign, we may assume, without loss of generality, that there exists a point in $ O $ at which $ \nabla S \neq 0 $.
We assume first that there exists a point $ P \in O $ such that the Weyl tensor does not vanish, and hence it does not vanish on some neighborhood of $ P $. By Proposition \ref{zero:prop2}, there exists a conformal metric $ \tilde{g} = v^{p-2} g $, where $ v \in \mathcal C^{\infty}(M) $ is positive, such that the scalar curvature satisfies $ R_{\tilde{g}} < 0 $ at $ P $ and hence in some neighborhood of $ P $. Due to the conformal invariance of the conformal Laplacian in (\ref{local:eqn23}), it suffices to consider the PDE \begin{equation}\label{zero:eqn11} -a\Delta_{\tilde{g}} u_{0} + R_{\tilde{g}} u_{0} = S u_{0}^{p-1} \; {\rm in} \; \Omega, u_{0} = 0 \; {\rm on} \; \partial \Omega. \end{equation} Here $ \Omega $ is any open Riemannian domain on which $ S > 0 $, the Weyl tensor does not vanish, and $ R_{\tilde{g}} < 0 $. Shrinking $ \Omega $ if necessary, we conclude from Proposition \ref{local:prop2} that (\ref{zero:eqn11}) has a positive solution $ u_{0} \in \mathcal C^{\infty}(\Omega) \cap H_{0}^{1}(\Omega, \tilde{g}) \cap \mathcal C^{0}(\bar{\Omega}) $. Define \begin{equation}\label{zero:eqn12} \tilde{u}_{-} : = \begin{cases} u_{0}, & \; {\rm in} \; \Omega \\ 0, & \; {\rm in} \; M \backslash \Omega \end{cases}. \end{equation} It is easy to check that $ \tilde{u}_{-} \geqslant 0 $, not identically zero, and $ \tilde{u}_{-} \in H^{1}(M, g) \cap \mathcal C^{0}(M) $. For the same reason as in, e.g., \cite[Thm.~4.4]{XU3}, we can check that $ \tilde{u}_{-} $ satisfies \begin{equation*} -a\Delta_{\tilde{g}} \tilde{u}_{-} + R_{\tilde{g}} \tilde{u}_{-} \leqslant S \tilde{u}_{-}^{p - 1} \; {\rm in} \; M \end{equation*} in the weak sense. 
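The verification of this weak inequality for the zero extension can be sketched as follows; the key point is the sign of the outward normal derivative of $ u_{0} $ on $ \partial \Omega $. For any nonnegative test function $ \phi $, \begin{align*} \int_{M} \left( a \nabla_{\tilde{g}} \tilde{u}_{-} \cdot \nabla_{\tilde{g}} \phi + R_{\tilde{g}} \tilde{u}_{-} \phi \right) d\text{Vol}_{\tilde{g}} & = \int_{\Omega} \left( a \nabla_{\tilde{g}} u_{0} \cdot \nabla_{\tilde{g}} \phi + R_{\tilde{g}} u_{0} \phi \right) d\text{Vol}_{\tilde{g}} \\ & = \int_{\Omega} S u_{0}^{p-1} \phi \, d\text{Vol}_{\tilde{g}} + a \int_{\partial \Omega} \frac{\partial u_{0}}{\partial \nu} \phi \, dS_{\tilde{g}} \leqslant \int_{M} S \tilde{u}_{-}^{p-1} \phi \, d\text{Vol}_{\tilde{g}}, \end{align*} since $ u_{0} $ is positive in $ \Omega $ and vanishes on $ \partial \Omega $, so that $ \frac{\partial u_{0}}{\partial \nu} \leqslant 0 $ for the outward unit normal $ \nu $ of $ \partial \Omega $.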
Due to Aubin's result for conformal invariance of the Yamabe quotient \cite[\S5.8]{Aubin}, we observe that for any nonnegative test function $ \phi \in \mathcal C^{0}(M) \cap H^{1}(M, g) $, we have \begin{align*} & \int_{M} a\nabla_{g} (v \tilde{u}_{-}) \cdot \nabla_{g} (v \phi) d\text{Vol}_{g} - \int_{M} S \left( v\tilde{u}_{-} \right)^{p-1} \left( v \phi \right) d\text{Vol}_{g} \\ & \qquad = \int_{M} a \nabla_{\tilde{g}} \tilde{u}_{-} \nabla_{\tilde{g}} \phi d\text{Vol}_{\tilde{g}} + \int_{M} R_{\tilde{g}} \tilde{u}_{-} \cdot \phi d\text{Vol}_{\tilde{g}} - \int_{M} S \tilde{u}_{-}^{p-1} \phi d\text{Vol}_{\tilde{g}} \leqslant 0. \end{align*} Here we use the assumption that $ \eta_{1} = 0 $, so that the model metric is scalar-flat due to the Yamabe problem. Since $ v $ is a positive, smooth function, the map \begin{equation*} \phi \mapsto v \phi, H^{1}(M, g) \rightarrow H^{1}(M, g) \end{equation*} is an isomorphism. It follows that (\ref{zero:eqn10}) holds weakly for \begin{equation}\label{zero:eqn13} u_{-} : = v\tilde{u}_{-}. \end{equation} Clearly $ u_{-} \geqslant 0 $, not identically zero, and $ u_{-} \in \mathcal C^{0}(M) \cap H^{1}(M, g) $.
If the Weyl tensor is identically zero within the domain $ O $, we apply Proposition \ref{local:prop3} to get a positive, smooth solution of (\ref{zero:eqn11}) within a different region $ \Omega $ given in Proposition \ref{local:prop3}, centered at some point $ P $ at which $ \nabla S \neq 0 $. The rest of the argument is exactly the same. \end{proof}
We do not know whether the function $ u $ in (\ref{zero:eqn6}) is larger than $ u_{-} $ pointwise. To resolve this issue, we apply the local gluing strategy in \cite[Lemma~3.2]{XU6} and the upper solution in Corollary \ref{zero:cor1} to construct an upper solution. \begin{proposition}\label{zero:prop4} Let $ (M, g) $ be a closed manifold, $ n = \dim M \geqslant 3 $. Let $ S \not\equiv 0 $ be a given smooth function on $ M $; in addition, $ S $ changes sign. If the metric $ g $ is scalar-flat, then there exists a positive function $ u_{+} \in \mathcal C^{\infty}(M) $ such that \begin{equation}\label{zero:eqn14} -a\Delta_{g} u_{+} \geqslant S u_{+}^{p-1} \; {\rm in} \; M. \end{equation} Furthermore, $ 0 \leqslant u_{-} \leqslant u_{+} $ pointwise on $ M $. \end{proposition} \begin{proof} By Corollary \ref{zero:cor1}, we see that there exists a positive, smooth function $ u_{1} $ satisfying (\ref{zero:eqn9}). By conformal invariance of the conformal Laplacian in the strong sense (\ref{local:eqn23}), the same conformal change $ \tilde{g} = v^{p-2} g $ given in Proposition \ref{zero:prop3} above implies that \begin{equation}\label{zero:eqn15} \tilde{u}_{1} : = v^{-1} u_{1} \Rightarrow -a\Delta_{\tilde{g}} \tilde{u}_{1} + R_{\tilde{g}} \tilde{u}_{1} \geqslant \left( S + \gamma_{0} \right) \tilde{u}_{1}^{p-1} \; {\rm in} \; M. \end{equation} This inequality holds since $ v > 0 $ on $ M $ and $ \eta_{1} = 0 $. 
Gluing the functions $ \tilde{u}_{1} $ in (\ref{zero:eqn15}) and $ u_{0} $ in (\ref{zero:eqn11}) by exactly the same argument as in \cite[Lemma~3.2]{XU6}, we conclude that there exists a positive, smooth function $ \tilde{u}_{2} \in \mathcal C^{\infty}(\Omega) $ such that \begin{equation}\label{zero:eqn16} \begin{split} & -a\Delta_{\tilde{g}} \tilde{u}_{2} + R_{\tilde{g}} \tilde{u}_{2} \geqslant S \tilde{u}_{2}^{p-1} \; {\rm in} \; \Omega; \\ & \tilde{u}_{2} \geqslant u_{0} \; {\rm in} \; \Omega, \tilde{u}_{2} = \tilde{u}_{1} \; {\rm in} \; \Omega_{0} : = \lbrace x \in \Omega : d(x, \partial \Omega) < \xi \; \text{for some $ \xi > 0 $} \rbrace. \end{split} \end{equation} For the full details of the gluing procedure we refer to \cite[Thm.~4.4]{XU3}. We point out that we need the extra $ \gamma_{0} \tilde{u}_{1}^{p-1} $ term in (\ref{zero:eqn15}) to give us some room to compensate the negative terms on $ \partial \Omega $. Precisely speaking, the term $ \beta $ in formula \cite[(49)]{XU6} is given by $ \beta = \max_{\Omega} \gamma_{0} \tilde{u}_{1}^{p-1} $ here. Define \begin{equation}\label{zero:eqn17} \tilde{u}_{+} : = \begin{cases} \tilde{u}_{2}, & \; {\rm in} \; \Omega \\ \tilde{u}_{1}, & \; {\rm in} \; M \backslash \bar{\Omega} \end{cases}. \end{equation} It is straightforward to check that $ \tilde{u}_{+} $ is a positive, smooth function on $ M $ and satisfies \begin{equation}\label{zero:eqn18} -a\Delta_{\tilde{g}} \tilde{u}_{+} + R_{\tilde{g}} \tilde{u}_{+} \geqslant S \tilde{u}_{+}^{p-1} \; {\rm in} \; M. \end{equation} In addition, $ \tilde{u}_{+} \geqslant \tilde{u}_{-} \geqslant 0 $ on $ M $. Define \begin{equation}\label{zero:eqn19} u_{+} : = v\tilde{u}_{+}. \end{equation} The conformal invariance of the conformal Laplacian indicates that (\ref{zero:eqn14}) holds, provided that $ \eta_{1} = 0 $, or equivalently, $ g $ is scalar-flat. 
Comparing the definitions of $ u_{-} $ and $ u_{+} $ in (\ref{zero:eqn13}) and (\ref{zero:eqn19}) respectively, we conclude that \begin{equation*} u_{+} \geqslant u_{-} \geqslant 0 \; \text{pointwise in $ M $}. \end{equation*} \end{proof}
Now we can show the necessary and sufficient condition of the prescribed scalar curvature problem within a pointwise conformal class of metrics $ [g] $ on closed manifolds $ (M, g) $, $ n = \dim M \geqslant 3 $, provided that $ \eta_{1} = 0 $. \begin{theorem}\label{zero:thm1} Let $ (M, g) $ be a closed manifold, $ n = \dim M \geqslant 3 $. Let $ S \in \mathcal C^{\infty}(M) $ be a given function. Assume that $ \eta_{1} = 0 $. The function $ S $ can be realized as the prescribed scalar curvature of some conformal metric $ \tilde{g} \in [g] $ if and only if one of the following holds: \begin{enumerate}[(i).] \item $ S \equiv 0 $ on $ M $; \item $ S $ changes sign, and $ \int_{M} S d\text{Vol}_{g} < 0 $. \end{enumerate} \end{theorem} \begin{proof} Since $ \eta_{1} = 0 $, we may assume that $ g $ is scalar-flat since otherwise we can arrange a conformal change to get this in advance. Then this problem is reduced to the existence of some positive solution $ u \in \mathcal C^{\infty}(M) $ such that \begin{equation}\label{zero:eqn20} -a\Delta_{g} u = S u^{p-1} \; {\rm in} \; M. \end{equation} We consider the sufficient condition first. When $ S \equiv 0 $, the problem is trivial by taking $ u \equiv 1 $. When $ S $ satisfies (ii), we apply Proposition \ref{zero:prop3} and Proposition \ref{zero:prop4} to construct a lower solution $ u_{-} $ and an upper solution $ u_{+} $. Due to the results of these two propositions, all hypotheses in the monotone iteration scheme, Theorem \ref{pre:thm3}, hold. Applying Theorem \ref{pre:thm3}, we get the desired solution $ u $ of (\ref{zero:eqn20}).
We now consider the necessary condition. In this case, we know that (\ref{zero:eqn20}) has a positive, smooth solution. We may assume that $ S \not\equiv 0 $ since otherwise we do not need to do anything. Integrating both sides of (\ref{zero:eqn20}), \begin{equation*} \int_{M} -a\Delta_{g} u d\text{Vol}_{g} = \int_{M} S u^{p-1} d\text{Vol}_{g} \Rightarrow \int_{M} S u^{p-1} d\text{Vol}_{g} = 0. \end{equation*} Since $ u > 0 $ on $ M $, it follows that $ S $ must change sign. Multiplying both sides of (\ref{zero:eqn20}) by $ u^{1 - p} $ and then integrating, we have \begin{equation*} \int_{M} S d\text{Vol}_{g} = \int_{M} -a\Delta_{g} u \cdot u^{1 - p} d\text{Vol}_{g} = a(1 - p) \int_{M} \lvert \nabla_{g} u \rvert^{2} \cdot u^{-p} d\text{Vol}_{g} < 0. \end{equation*} The last inequality is due to the facts that $ u > 0 $ everywhere on $ M $, $ (1 - p) = 1 - \frac{2n}{n - 2} < 0 $, and $ u $ cannot be constant since $ S \not\equiv 0 $, so the gradient term is not identically zero. \end{proof} \begin{remark}\label{zero:re2} We can explain analytically why $ S $ must change sign. From the construction of the lower and upper solutions by our method, we observe that $ S > 0 $ somewhere in $ M $ is essential since otherwise we cannot get a nontrivial solution of the local Yamabe equation (\ref{local:eqn11}), whether or not the manifold is locally conformally flat. \end{remark}
We now consider the prescribed scalar curvature problem for pointwise conformal metrics on compact manifolds $ (\bar{M}, g) $ with non-empty smooth boundary, $ n = \dim \bar{M} \geqslant 3 $. In our previous paper that solves the Escobar problem \cite{XU4}, we realized that the local to global method applied on closed manifolds can be transplanted to compact manifolds with non-empty boundary. We follow the same idea, and give the necessary and sufficient condition for the prescribed scalar curvature problem within a pointwise conformal class $ [g] $ of the compact manifolds $ (\bar{M}, g) $ with non-empty smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $, provided that $ \eta_{1}' = 0 $. Due to the result of the Escobar problem, the model case is a metric $ g $ which is scalar-flat and has zero mean curvature. \begin{theorem}\label{zero:thm2} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary $ \partial M $, $ n = \dim M \geqslant 3 $. Let $ S \in \mathcal C^{\infty}(M) $ be a given function. Assume that $ \eta_{1}' = 0 $. The function $ S $ can be realized as the prescribed scalar curvature of some conformal metric $ \tilde{g} \in [g] $ with $ h_{\tilde{g}} = 0 $ if and only if one of the following holds: \begin{enumerate}[(i).] \item $ S \equiv 0 $ on $ M $; \item $ S $ changes sign, and $ \int_{M} S d\text{Vol}_{g} < 0 $. \end{enumerate} \end{theorem} \begin{proof} Since $ \eta_{1}' = 0 $, we may assume that $ g $ is scalar-flat with minimal boundary since otherwise we can arrange a conformal change to get this in advance. Then this problem is reduced to the existence of some positive solution $ u \in \mathcal C^{\infty}(M) $ such that \begin{equation}\label{zero:eqn21} -a\Delta_{g} u = S u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} = 0 \; {\rm on} \; \partial M. \end{equation} We consider the sufficient condition first. The case $ S \equiv 0 $ is trivial. Assume that $ S $ satisfies (ii). 
We use the same function as the lower solution of (\ref{zero:eqn21}) as in Proposition \ref{zero:prop3}, in some interior open region $ \Omega \subset \bar{\Omega} \subset M $. The only extra thing we have to check is the boundary condition. We mention that under any conformal change $ g_{2} = \Phi^{p-2} g_{1} $, the boundary condition satisfies \begin{equation*} \frac{\partial \left( \Phi^{-1} u\right)}{\partial \nu_{g_{2}}} = \Phi^{-\frac{p}{2}} \frac{\partial u}{\partial \nu_{g_{1}}} = 0. \end{equation*} Thus we have the Neumann condition for every metric in the same conformal class. It is straightforward to check the boundary condition now, since $ u_{-} \equiv 0 $ near the boundary $ \partial M $. Note that $ u_{-} \in \mathcal C^{0}(\bar{M}) \cap H^{1}(M, g) $ is nonnegative and not identically zero, such that \begin{equation*} -a\Delta_{g} u_{-} \leqslant S u_{-}^{p-1} \; {\rm in} \; M, \frac{\partial u_{-}}{\partial \nu} = 0 \; {\rm on} \; \partial M \end{equation*} holds in the weak sense.
For the upper solution, we have to adjust the result in Corollary \ref{zero:cor1} to get a good ``almost" upper solution except within the region $ \Omega $. Since $ \int_{M} S d\text{Vol}_{g} < 0 $, we can take some $ \gamma_{0} > 0 $ so that \begin{equation*} \gamma : = \frac{1}{\text{Vol}_{g}(M)} \int_{M} \left( S + \gamma_{0} \right) d\text{Vol}_{g} < 0. \end{equation*} By standard elliptic theory, the PDE \begin{equation*} -a\Delta_{g} v_{0} = (2 - p) \left( S + \gamma_{0} \right) - (2 - p) \gamma \; {\rm in} \; M, \frac{\partial v_{0}}{\partial \nu} = 0 \; {\rm on} \; \partial M \end{equation*} has a solution $ v_{0} \in \mathcal C^{\infty}(\bar{M}) $. We take $ C > 0 $ large enough, so that \begin{equation*} v : = v_{0} + C > 0 \; {\rm on} \; \bar{M}, \frac{a(p - 1)}{p - 2}\frac{\lvert \nabla_{g} v \rvert^{2}}{v} < (2 - p) \gamma. \end{equation*} It follows that the positive function $ v \in \mathcal C^{\infty}(\bar{M}) $ satisfies \begin{equation*} -a\Delta_{g} v + \frac{a(p - 1)}{p - 2}\frac{\lvert \nabla_{g} v \rvert^{2}}{v} \leqslant (2 - p) \left( S + \gamma_{0} \right) \; {\rm in} \; M, \frac{\partial v}{\partial \nu} = 0 \; {\rm on} \; \partial M. \end{equation*} Setting $ u = v^{\frac{1}{2 - p}} $, we conclude that $ u > 0 $ satisfies \begin{equation}\label{zero:eqn22} -a\Delta_{g} u \geqslant \left( S + \gamma_{0} \right) u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} = 0 \; {\rm on} \; \partial M. \end{equation} The argument for (\ref{zero:eqn22}) follows from Proposition \ref{zero:prop1} exactly. Then we apply the same argument as in Proposition \ref{zero:prop4} to construct an upper solution $ u_{+} $ such that $ u_{+} \in \mathcal C^{\infty}(\bar{M}) $ is positive with $ u_{+} \geqslant u_{-} \geqslant 0 $ pointwise on $ \bar{M} $, and \begin{equation*} -a\Delta_{g} u_{+} \geqslant S u_{+}^{p-1} \; {\rm in} \; M, \frac{\partial u_{+}}{\partial \nu} = 0 \; {\rm on} \; \partial M \end{equation*} holds strongly. 
Thus by the monotone iteration scheme in Theorem \ref{pre:thm4}, we conclude that there exists a positive function $ u \in \mathcal C^{\infty}(\bar{M}) $ that solves (\ref{zero:eqn21}). Note that the regularity of $ u $ is due to \cite{Che}.
For the necessary condition, the case $ S \equiv 0 $ is trivial. Assume that $ S \not\equiv 0 $; integrating both sides of (\ref{zero:eqn21}), \begin{equation*} \int_{M} S u^{p-1} d\text{Vol}_{g} = \int_{M} -a\Delta_{g} u d\text{Vol}_{g} = - a \int_{\partial M} \frac{\partial u}{\partial \nu} dS_{g} = 0. \end{equation*} Since $ u > 0 $ on $ \bar{M} $, it follows that $ S $ must change sign. Multiplying both sides of (\ref{zero:eqn21}) by $ u^{1 - p} $ and integrating, the Neumann condition implies that \begin{equation*} \int_{M} S d\text{Vol}_{g} = \int_{M} - a\Delta_{g} u \cdot u^{1 - p} d\text{Vol}_{g} = a (1 - p) \int_{M} \lvert \nabla_{g} u \rvert^{2} u^{-p} d\text{Vol}_{g} < 0. \end{equation*} \end{proof}
\section{Prescribing Non-Constant Scalar and Mean Curvature Functions with Zero First Eigenvalue} In \S4, we have shown in Theorem \ref{zero:thm2} the necessary and sufficient condition for the prescribed scalar curvature problem with minimal boundary on compact manifolds $ (\bar{M}, g) $ with non-empty smooth boundary, $ n = \dim \bar{M} \geqslant 3 $, provided that the first eigenvalue $ \eta_{1}' $ of the conformal Laplacian is zero. Any prescribed scalar curvature function $ S $ must either change sign and satisfy $ \int_{M} S d\text{Vol}_{g} < 0 $, or be identically zero. In this section, we generalize our results to the non-trivial mean curvature scenario, still provided that $ \eta_{1}' = 0 $. We will discuss the following general question:
{\it{Given functions $ S, H \in \mathcal C^{\infty}(\bar{M}) $ such that $ S > 0 $ somewhere and $ H \not\equiv 0 $ on $ \partial M $, when can $ S $ and $ H \bigg|_{\partial M} $ be realized as prescribed scalar and mean curvature functions, respectively?}}
Due to the Escobar problem, we may assume $ R_{g} = h_{g} = 0 $ for our initial metric $ g $ throughout this section. Thus the problem is again reduced to the existence of some positive, smooth solution of the following PDE \begin{equation}\label{zerog:eqn1} -a\Delta_{g} u = Su^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} = \frac{2}{p-2} H u^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{equation} The question above covers cases in which, e.g., both $ S $ and $ H $ change sign, or $ S > 0 $ everywhere and $ H < 0 $ everywhere. Although the general case is much more flexible than the minimal boundary case, there are still some restrictions on the choices of $ S $ and $ H $ in terms of their relations.
Let us consider Riemann surfaces as inspiration, since the problem of prescribed scalar and mean curvature functions is a higher-dimensional generalization of prescribing Gauss and geodesic curvature functions on a Riemann surface $ (\bar{N}, g) $ with boundary $ \partial N $, provided that the Euler characteristic $ \chi(\bar{N}) = 0 $. In the 2-dimensional case, the Gauss-Bonnet theorem says \begin{equation}\label{zerog:eqn2} \int_{N} K_{g} d\text{Vol}_{g} + \int_{\partial N} \sigma_{g} dA_{g} = 2\pi \chi(\bar{N}). \end{equation} Here $ K_{g} $ is the Gauss curvature, $ \sigma_{g} $ is the geodesic curvature, and $ dA_{g} $ is the volume form on $ \partial N $. When $ \chi(\bar{N}) = 0 $, it follows from (\ref{zerog:eqn2}) that \begin{equation}\label{zerog:eqn3} \int_{N} K_{g} d\text{Vol}_{g} = - \int_{\partial N} \sigma_{g} dA_{g}. \end{equation}
It is worthwhile to show that the Euler characteristic for Riemann surfaces $ (\bar{N}, g) $ is invariant under conformal change. As an example, we treat the case $ \chi(\bar{N}) < 0 $; the same argument applies equally to the other two cases. By the uniformization theorem, we may assume that $ K_{g} = -1 $ and $ \sigma_{g} = 0 $, provided that $ \chi(\bar{N}) < 0 $. Furthermore, by Gauss-Bonnet \begin{equation*} - \text{Vol}_{g}(\bar{N}) = \int_{N} K_{g} d\text{Vol}_{g} = 2\pi \chi(\bar{N}). \end{equation*} Under the pointwise conformal change $ \tilde{g} = e^{2f} g $ with Gauss and geodesic curvatures $ K_{\tilde{g}} $ and $ \sigma_{\tilde{g}} $ with respect to $ \tilde{g} $, the following PDE holds: \begin{equation}\label{zerog:eqn4} -\Delta_{g} f - 1 = K_{\tilde{g}} e^{2f} \; {\rm in} \; N, \frac{\partial f}{\partial \nu} = \sigma_{\tilde{g}} e^{f} \; {\rm on} \; \partial N. \end{equation} Integrating both sides and applying (\ref{zerog:eqn3}), \begin{align*} & \int_{N} K_{\tilde{g}} e^{2f} d\text{Vol}_{g} = \int_{N} -\Delta_{g} f d\text{Vol}_{g} + 2\pi \chi(\bar{N}) = \int_{\partial N} - \frac{\partial f}{\partial \nu} dA_{g} + 2\pi \chi(\bar{N}) \\ \Rightarrow & \int_{N} K_{\tilde{g}} e^{2f} d\text{Vol}_{g} = -\int_{\partial N} \sigma_{\tilde{g}} e^{f} dA_{g} + 2\pi \chi(\bar{N}) \Rightarrow \int_{N} K_{\tilde{g}} d\text{Vol}_{\tilde{g}} = - \int_{\partial N} \sigma_{\tilde{g}} dA_{\tilde{g}} + 2\pi \chi(\bar{N}) \\ \Rightarrow & 2 \pi \chi(\bar{N}) = \int_{N} K_{\tilde{g}} d\text{Vol}_{\tilde{g}} + \int_{\partial N} \sigma_{\tilde{g}} dA_{\tilde{g}}. \end{align*} Thus the Euler characteristic is invariant under conformal change, and so is the Gauss-Bonnet theorem.
Roughly speaking, (\ref{zerog:eqn3}) indicates that in general the sign of $ K_{g} $ should be opposite to the sign of $ \sigma_{g} $. More precisely, the sign of the average of $ K_{g} $ should be opposite to the sign of the average of $ \sigma_{g} $, unless both are identically zero. We are looking for the analogue of (\ref{zerog:eqn3}) for manifolds of dimension at least 3, provided that $ \eta_{1}' = 0 $. Since the relation (\ref{zerog:eqn3}) holds under conformal changes, for which the Euler characteristic and the Gauss-Bonnet theorem are invariant, it is natural to look for this type of relation under conformal change. However, due to the lack of a Gauss-Bonnet theorem in the high-dimensional case, we can only obtain an inequality as a necessary condition for prescribing scalar and mean curvature functions for some Yamabe metric, provided that $ \eta_{1}' = 0 $.
Recall that the model case for $ \eta_{1}' = 0 $ is $ R_{g} = 0 $ and $ h_{g} = 0 $. Assume that there exists a Yamabe metric $ \tilde{g} = u^{p-2} g $ with $ R_{\tilde{g}} = S $ and $ h_{\tilde{g}} = H $. Then the function $ u > 0 $ satisfies (\ref{zerog:eqn1}). In addition, we have the following relation under conformal change, \begin{equation}\label{zerog:eqn5} d\text{Vol}_{\tilde{g}} = u^{p} d\text{Vol}_{g}, dS_{\tilde{g}} = u^{\frac{p+2}{2}} dS_{g}. \end{equation} Pairing (\ref{zerog:eqn1}) with $ u $, integrating on both sides and using (\ref{zerog:eqn5}), \begin{align*} & \int_{M} S u^{p} d\text{Vol}_{g} = a\int_{M} - \Delta_{g}u \cdot u d\text{Vol}_{g} = -a \int_{\partial M} \frac{\partial u}{\partial \nu} \cdot u dS_{g} + a \int_{M} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g} \\ \Rightarrow & \int_{M} Su^{p} d\text{Vol}_{g} = -a \cdot \frac{2}{p-2} \int_{\partial M} H u^{\frac{p+2}{2}} dS_{g} + a \int_{M} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g} \\ \Rightarrow & \int_{M} S d\text{Vol}_{\tilde{g}} = -2(n - 1) \int_{\partial M} H dS_{\tilde{g}} + a \int_{M} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g} \geqslant -2(n - 1) \int_{\partial M} H dS_{\tilde{g}}. \end{align*} It is straightforward to check that the model case for $ \eta_{1}' = 0 $ satisfies the inequality above. In general, we conclude that \begin{proposition}\label{zerog:prop1} Let $ (\bar{M}, g) $ be a compact manifold with smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $. If $ \eta_{1}' = 0 $, then a necessary condition on the scalar curvature $ R_{g} $ and the mean curvature $ h_{g} $ is \begin{equation}\label{zerog:eqn6} \int_{M} R_{g} d\text{Vol}_{g} \geqslant -2(n - 1) \int_{\partial M} h_{g} dS_{g}. \end{equation} \end{proposition} \begin{proof} One proof is given above.
An alternative way to see this is to pair the eigenfunction $ \varphi $ with both sides of the eigenvalue problem \begin{equation*} -a\Delta_{g} \varphi + R_{g} \varphi = 0 \; {\rm in} \; M, \frac{\partial \varphi}{\partial \nu} + \frac{2}{p-2} h_{g} \varphi = 0 \; {\rm on} \; \partial M \end{equation*} and integrate. \end{proof} Analogously to the Riemann surface case, the necessary condition (\ref{zerog:eqn6}) roughly indicates that the sign of $ R_{g} $ should be opposite to the sign of $ h_{g} $. This is clear when $ R_{g} < 0 $. When $ R_{g} > 0 $, we integrate (\ref{zerog:eqn1}), \begin{equation*} \int_{M} S u^{p-1} d\text{Vol}_{g} = -a\int_{M} \Delta_{g} u d\text{Vol}_{g} = -a \cdot \frac{2}{p-2} \int_{\partial M} H u^{\frac{p}{2}} dS_{g} = -2(n - 1) \int_{\partial M} H u^{\frac{p}{2}} dS_{g}. \end{equation*} Roughly speaking, when $ R_{g} > 0 $, then $ h_{g} $ must be negative somewhere. We also mention that (\ref{zerog:eqn6}) is a Kazdan-Warner type restriction, but it provides only a one-sided control; in addition, this restriction involves the choice of the conformal factor, so it is different from the restriction we gave in \S4. We will give analytic conditions on the functions $ S, H $, both pointwise and on average with respect to the initial metric $ g $.
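As a quick sanity check on the dimensional constants used in the computations above, the identities $ 2a/(p-2) = 2(n-1) $ and $ 2/(p-2) = (n-2)/2 $ can be verified symbolically. This is only a sketch and assumes the standard normalizations $ a = 4(n-1)/(n-2) $ and $ p = 2n/(n-2) $, which are not restated in this section:

```python
from sympy import symbols, simplify

# Assumed standard normalizations (not restated in this section):
# a = 4(n-1)/(n-2) is the conformal Laplacian constant, p = 2n/(n-2).
n = symbols('n', positive=True)
a = 4*(n - 1)/(n - 2)
p = 2*n/(n - 2)

# Constant in the integrated identity: a * 2/(p-2) = 2(n-1).
coeff = simplify(a*2/(p - 2))
assert simplify(coeff - 2*(n - 1)) == 0

# Boundary coefficient 2/(p-2) = (n-2)/2, used throughout.
assert simplify(2/(p - 2) - (n - 2)/2) == 0

# Exponent in dS_gtilde = u^{(p+2)/2} dS_g equals 2(n-1)/(n-2).
assert simplify((p + 2)/2 - 2*(n - 1)/(n - 2)) == 0

print("dimensional constants verified")
```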
We are using the monotone iteration scheme again. As in Lemma \ref{zero:lemma1}, we convert the upper solution of (\ref{zerog:eqn1}) for given functions $ S, H $ into another pair of PDE-type inequalities. \begin{lemma}\label{zerog:lemma1} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary $ \partial M $, $ n \geqslant 3 $. Let $ S, H \in \mathcal C^{\infty}(\bar{M}) $ be given functions. Then there exists some positive function $ u \in \mathcal C^{\infty}(\bar{M}) $ satisfying \begin{equation}\label{zerog:eqn7} -a\Delta_{g} u \geqslant S u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} \geqslant \frac{2}{p -2 } H u^{\frac{p}{2}} \; {\rm on} \; \partial M \end{equation} if and only if there exists some positive function $ w \in \mathcal C^{\infty}(\bar{M}) $ satisfying \begin{equation}\label{zerog:eqn8} -a\Delta_{g} w + \frac{(p - 1)a}{p - 2} \cdot \frac{\lvert \nabla_{g} w \rvert^{2}}{w} \leqslant (2 - p)S \; {\rm in} \; M, \frac{\partial w}{\partial \nu} \leqslant -2H w^{\frac{1}{2}} \; {\rm on} \; \partial M. \end{equation} Moreover, the equality in (\ref{zerog:eqn7}) holds if and only if the equality in (\ref{zerog:eqn8}) holds. \end{lemma} \begin{proof} We assume (\ref{zerog:eqn7}) first. Denote \begin{equation*} w = u^{2 - p}. \end{equation*} The proof of the inequalities in the interior $ M $ is exactly the same as in Lemma \ref{zero:lemma1}. For the boundary condition, since $ u = w^{\frac{1}{2 - p}} $ and $ p = \frac{2n}{n - 2} $, it follows that \begin{align*} & \frac{\partial u}{\partial \nu} \geqslant \frac{2}{p -2 } H u^{\frac{p}{2}} \Leftrightarrow \frac{1}{2 - p} w^{\frac{1}{2 - p} - 1} \frac{\partial w}{\partial \nu} \geqslant \frac{2}{p - 2} H w^{\frac{p}{2(2 - p)}} \\ \Leftrightarrow & -\frac{n - 2}{4} w^{-\frac{n}{4} - \frac{1}{2}} \frac{\partial w}{\partial \nu} \geqslant \frac{n-2}{2} H w^{-\frac{n}{4}} \Leftrightarrow \frac{\partial w}{\partial \nu} \leqslant -2H w^{\frac{1}{2}}.
\end{align*} The equality holds if and only if all inequalities above are equalities. For the other direction, we assume (\ref{zerog:eqn8}). Denote \begin{equation*} u = w^{\frac{1}{2 - p}}. \end{equation*} Reversing the computation above, we obtain (\ref{zerog:eqn7}); we omit the details. The same argument applies for the equalities. \end{proof}
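For the reader's convenience, here is a sketch of the omitted boundary computation in the reverse direction; it simply reverses the chain of equivalences above, using $ \frac{1}{2-p} = -\frac{n-2}{4} < 0 $:

```latex
\begin{align*}
\frac{\partial u}{\partial \nu}
&= \frac{1}{2 - p} w^{\frac{1}{2 - p} - 1} \frac{\partial w}{\partial \nu}
 = -\frac{n - 2}{4} w^{-\frac{n}{4} - \frac{1}{2}} \frac{\partial w}{\partial \nu} \\
&\geqslant -\frac{n - 2}{4} w^{-\frac{n}{4} - \frac{1}{2}} \cdot \left( -2H w^{\frac{1}{2}} \right)
 = \frac{n - 2}{2} H w^{-\frac{n}{4}}
 = \frac{2}{p - 2} H u^{\frac{p}{2}}.
% The inequality reverses because the factor -((n-2)/4) w^{-n/4 - 1/2} is negative,
% and u^{p/2} = w^{p/(2(2-p))} = w^{-n/4}.
\end{align*}
```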
We now consider the first case: $ S < 0 $ somewhere with $ \int_{M} S d\text{Vol}_{g} < 0 $, and $ H \not\equiv 0 $ on $ \partial M $ with $ \int_{\partial M} H dS_{g} > 0 $. By the relation (\ref{zerog:eqn6}), the condition $ \int_{\partial M} H dS_{g} > 0 $ provides the most flexibility for the choices of $ S $. \begin{theorem}\label{zerog:thm1} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary $ \partial M $, $ n \geqslant 3 $. Let $ S, H \in \mathcal C^{\infty}(\bar{M}) $ be given functions. Assume $ \eta_{1}' = 0 $. If the functions $ S, H $ satisfy \begin{equation}\label{zerog:eqn9} \begin{split} & S \; \text{changes sign in $ M $}, \; \int_{M} S d\text{Vol}_{g} < 0; \\ & \int_{\partial M} H dS_{g} > 0, \end{split} \end{equation}
then there exists some pointwise conformal metric $ \tilde{g} \in [g] $ such that $ R_{\tilde{g}} = S $ and $ h_{\tilde{g}} = cH \bigg|_{\partial M} $ for some small enough constant $ c > 0 $. Equivalently, the PDE (\ref{zerog:eqn1}) has a positive, smooth solution for the given $ S, cH $ satisfying (\ref{zerog:eqn9}). \end{theorem} \begin{proof} By hypothesis, we pick a point $ P $ and a neighborhood $ O $ containing $ P $ such that $ S > 0 $ in $ O \subset \bar{O} \subset M $. According to the conformal invariance of the conformal Laplacian in (\ref{local:eqn23}), we apply some conformal change $ g_{0} = v^{p-2} g $ as in Proposition \ref{zero:prop2}, such that $ R_{g_{0}} < 0 $ in some open subset $ \Omega \subset O $. By the same argument as in Proposition \ref{zero:prop3}, in which we used either Proposition \ref{local:prop2} or Proposition \ref{local:prop3}, the following PDE \begin{equation}\label{zerog:eqn10a} -a\Delta_{g_{0}} \tilde{u} + R_{g_{0}} \tilde{u} = S \tilde{u}^{p-1} \; {\rm in} \; \Omega, \tilde{u} = 0 \; {\rm on} \; \partial \Omega \end{equation} has a positive solution $ \tilde{u} \in \mathcal C^{\infty}(\Omega) \cap \mathcal C^{0}(\bar{\Omega}) \cap H_{0}^{1}(\Omega, g_{0}) $ by shrinking $ \Omega $ further if necessary. Note that under any conformal change $ g_{2} = \Phi^{p-2} g_{1} $, the boundary operator satisfies \begin{equation*} B_{g_{2}} \left( \Phi^{-1} u \right) = \Phi^{-\frac{p}{2}} B_{g_{1}} u. \end{equation*} Thus a lower solution of the PDE \begin{equation}\label{zerog:eqn10} -a\Delta_{g_{0}} u + R_{g_{0}} u = S u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu_{g_{0}}} = cH u^{\frac{p}{2}} \; {\rm on} \; \partial M \end{equation} is given by \begin{equation}\label{zerog:eqn11} u_{-} : = \begin{cases} \tilde{u}, & \; {\rm in} \; \Omega \\ 0, & \; {\rm in} \; \bar{M} \backslash \Omega \end{cases} \end{equation} in the weak sense.
The interior part is the same as in Proposition \ref{zero:prop3}; see also \cite[Thm.~4.4]{XU3}. The boundary condition is trivial to check since $ u_{-} \equiv 0 $ on a collar region of $ \partial M $.
We now construct a good candidate for the upper solution. Since $ \int_{M} S d\text{Vol}_{g} < 0 $, we can choose two small enough constants $ \gamma, \gamma' > 0 $ such that $ \int_{M} (S + \gamma + \gamma' ) d\text{Vol}_{g} < 0 $ still. Take the constant $ \gamma'' < 0 $ such that \begin{equation*} \frac{2- p}{a} \int_{M} (S + \gamma + \gamma' ) d\text{Vol}_{g} = - \int_{\partial M} \gamma'' dS_{g}. \end{equation*} By the standard linear elliptic PDE theory, the equality above implies that the following PDE \begin{equation*} -a \Delta_{g} \phi = (2 - p) (S + \gamma + \gamma') \; {\rm in} \; M, \frac{\partial \phi}{\partial \nu} = \gamma'' \; {\rm on} \; \partial M \end{equation*} has a smooth solution $ \phi $. We define \begin{equation*} \tilde{\phi} = \phi + C \end{equation*} for large enough $ C $ such that \begin{equation}\label{zerog:eqn12} \tilde{\phi} > 0 \; {\rm on} \; \bar{M}, \frac{(p - 1)a}{p - 2} \frac{\lvert \nabla_{g} \tilde{\phi} \rvert^{2}}{\tilde{\phi}} + (2 - p) \gamma < 0 \; {\rm in} \; M. \end{equation} The condition (\ref{zerog:eqn12}) implies that \begin{equation*} -a\Delta_{g} \tilde{\phi} + \frac{(p - 1)a}{p - 2} \frac{\lvert \nabla_{g} \tilde{\phi} \rvert^{2}}{\tilde{\phi}} < (2 - p) ( S + \gamma') \; {\rm in} \; M. \end{equation*} Fix this $ \tilde{\phi} $. We now choose the constant $ c > 0 $ small enough such that \begin{equation*} \frac{\partial \tilde{\phi}}{\partial \nu} = \frac{\partial \phi}{\partial \nu} = \gamma'' \leqslant -2cH \tilde{\phi}^{\frac{1}{2}}. \end{equation*} This can be done since $ \gamma'' < 0 $. By Lemma \ref{zerog:lemma1}, the function $ \tilde{\varphi} = \tilde{\phi}^{\frac{1}{2 - p}} $ satisfies \begin{equation*} -a\Delta_{g} \tilde{\varphi} \geqslant (S + \gamma') \tilde{\varphi}^{p - 1} \; {\rm in} \; M, \frac{\partial \tilde{\varphi}}{\partial \nu} \geqslant \frac{2}{p-2} cH \tilde{\varphi}^{\frac{p}{2}} \; {\rm on} \; \partial M.
\end{equation*} Applying the conformal change $ g_{0} = v^{p-2} g $, the function $ \varphi : = v^{-1} \tilde{\varphi} > 0 $ on $ \bar{M} $ satisfies \begin{equation}\label{zerog:eqn13} -a\Delta_{g_{0}} \varphi + R_{g_{0}} \varphi \geqslant (S + \gamma') \varphi^{p - 1} \; {\rm in} \; M, \frac{\partial \varphi}{\partial \nu} \geqslant \frac{2}{p-2} cH \varphi^{\frac{p}{2}} \; {\rm on} \; \partial M \end{equation} due to the conformal invariance of the conformal Laplacian as well as of the boundary operator.
By the same argument as in Proposition \ref{zero:prop4}, we apply the solution of the local equation (\ref{zerog:eqn10a}) and the function $ \varphi $ in (\ref{zerog:eqn13}) to conclude that there exists a positive function $ u_{+} \in \mathcal C^{\infty}(\bar{M}) $ such that \begin{equation}\label{zerog:eqn14} -a\Delta_{g_{0}} u_{+} + R_{g_{0}} u_{+} \geqslant (S + \gamma') u_{+}^{p - 1} \; {\rm in} \; M, \frac{\partial u_{+}}{\partial \nu} \geqslant \frac{2}{p-2} cH u_{+}^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{equation} Furthermore, $ 0 \leqslant u_{-} \leqslant u_{+} $. Note that (\ref{zerog:eqn14}) holds for all smaller constants $ c > 0 $. We therefore shrink $ c $ if necessary so that the condition (\ref{pre:eqn17}) holds for the function $ cH $. We apply the monotone iteration scheme in Theorem \ref{pre:thm4} to conclude the existence of a positive, smooth solution $ u $ of (\ref{zerog:eqn10}). Due to the conformal invariance of the conformal Laplacian and the Robin boundary condition, the function $ v u $ solves (\ref{zerog:eqn1}) for $ S $ and $ cH $. Note that the last step holds since $ \eta_{1}' = 0 $. \end{proof}
Analogously, we would like to consider the following two cases: \begin{enumerate}[(i).] \item $ S < 0 $ somewhere with $ \int_{M} S d\text{Vol}_{g} < 0 $, and $ H \not\equiv 0 $ on $ \partial M $ with $ \int_{\partial M} H dS_{g} = 0 $; \item $ S < 0 $ somewhere with $ \int_{M} S d\text{Vol}_{g} < 0 $, and $ H \not\equiv 0 $ on $ \partial M $ with $ \int_{\partial M} H dS_{g} < 0 $. \end{enumerate} Note that although no direct reasoning forces the function $ S $ to change sign, we can see from case (ii) that if $ H < 0 $ everywhere, then the inequality (\ref{zerog:eqn6}) implies that $ S $ must be positive somewhere. Since we have assumed that $ \int_{M} S d\text{Vol}_{g} < 0 $, $ S $ must then change sign. We have the following two results. \begin{corollary}\label{zerog:cor1} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary $ \partial M $, $ n \geqslant 3 $. Let $ S, H \in \mathcal C^{\infty}(\bar{M}) $ be given functions. Assume $ \eta_{1}' = 0 $. If the functions $ S, H $ satisfy \begin{equation}\label{zerog:eqn15} \begin{split} & S \; \text{changes sign in $ M $}, \; \int_{M} S d\text{Vol}_{g} < 0; \\ & \int_{\partial M} H dS_{g} = 0, \end{split} \end{equation}
then there exists some pointwise conformal metric $ \tilde{g} \in [g] $ such that $ R_{\tilde{g}} = S $ and $ h_{\tilde{g}} = cH \bigg|_{\partial M} $ for some small enough constant $ c > 0 $. \end{corollary} \begin{proof} It is essentially the same as the proof in Theorem \ref{zerog:thm1}. \end{proof} \begin{corollary}\label{zerog:cor2} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary $ \partial M $, $ n \geqslant 3 $. Let $ S, H \in \mathcal C^{\infty}(\bar{M}) $ be given functions. Assume $ \eta_{1}' = 0 $. If the functions $ S, H $ satisfy \begin{equation}\label{zerog:eqn16} \begin{split} & S \; \text{changes sign in $ M $}, \; \int_{M} S d\text{Vol}_{g} < 0; \\ & \int_{\partial M} H dS_{g} < 0, \end{split} \end{equation}
then there exists some pointwise conformal metric $ \tilde{g} \in [g] $ such that $ R_{\tilde{g}} = S $ and $ h_{\tilde{g}} = cH \bigg|_{\partial M} $ for some small enough constant $ c > 0 $. \end{corollary} \begin{proof} It is essentially the same as the proof in Theorem \ref{zerog:thm1}. \end{proof}
\begin{remark}\label{zerog:re1} According to the previous three results, we see that as long as $ \int_{M} S d\text{Vol}_{g} < 0 $ and $ S $ changes sign, the sign of $ \int_{\partial M} H dS_{g} $ does not matter. We discussed above that we can only see why $ S $ must change sign when $ H < 0 $ everywhere on $ \partial M $ or when $ H \equiv 0 $ on $ \partial M $. Similarly, we have no direct uniform reason why we must assume $ \int_{M} S d\text{Vol}_{g} < 0 $. However, we can see this as a necessary condition when $ H > 0 $ everywhere on $ \partial M $ or when $ H \equiv 0 $ on $ \partial M $. The latter case $ H \equiv 0 $ is treated in \S4, and it corresponds to the case $ \int_{\partial M} H dS_{g} = 0 $.
When $ H > 0 $ everywhere, which corresponds to the case $ \int_{\partial M} H dS_{g} > 0 $, we assume that the PDE (\ref{zerog:eqn1}) holds for some non-constant positive, smooth function $ u $; if $ u $ is constant, then we are reduced to the model case $ S = H \equiv 0 $. Multiplying both sides by $ u^{1 - p} $ and integrating, we have \begin{align*} \int_{M} S d\text{Vol}_{g} & = -a \int_{M} u^{1 - p} \Delta_{g} u d\text{Vol}_{g} = - a \int_{\partial M} u^{1 - p} \frac{\partial u}{\partial \nu} dS_{g} + a \int_{M} ( 1- p) u^{-p} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g} \\ & = - \frac{2a}{ p -2} \int_{\partial M} H u^{1 - \frac{p}{2}} dS_{g} + a( 1- p) \int_{M} u^{-p} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g}. \end{align*} If $ H > 0 $ everywhere, then the first term in the last line above is negative, as $ -\frac{2a}{ p -2} = -2(n - 1) < 0, \forall n \geqslant 3 $. Since $ u $ is non-constant, the second term in the last line above is also negative, as $ 1 - p = 1 - \frac{2n}{n - 2} < 0, \forall n \geqslant 3 $. Thus we conclude that $ \int_{M} S d\text{Vol}_{g} < 0 $.
Since we have full flexibility to choose the prescribed mean curvature function $ H $ when $ \int_{M} S d\text{Vol}_{g} < 0 $ and $ S $ changes sign, we would like to conjecture, although we only have partial reasoning, that the prescribed scalar curvature function must satisfy the two conditions: $ \int_{M} S d\text{Vol}_{g} < 0 $ and $ S $ changes sign, even when the mean curvature function is not identically zero, provided that $ \eta_{1}' = 0 $. \end{remark}
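The sign analysis in the remark above can also be checked symbolically; a small sketch, again assuming the standard normalizations $ a = 4(n-1)/(n-2) $ and $ p = 2n/(n-2) $ (the value of $ a $ is not restated in this section):

```python
from sympy import symbols, simplify

# Assumed standard normalizations: a = 4(n-1)/(n-2), p = 2n/(n-2).
n = symbols('n', positive=True)
a = 4*(n - 1)/(n - 2)
p = 2*n/(n - 2)

# Coefficients of the boundary and gradient terms in the integrated identity.
boundary_coeff = simplify(-2*a/(p - 2))  # equals -2(n-1)
gradient_coeff = simplify(a*(1 - p))     # coefficient of int u^{-p} |grad u|^2

assert simplify(boundary_coeff + 2*(n - 1)) == 0

# Both coefficients are negative for every dimension n >= 3, which is what
# forces int_M S dVol_g < 0 when H > 0 everywhere and u is non-constant.
for nv in range(3, 12):
    assert boundary_coeff.subs(n, nv) < 0
    assert gradient_coeff.subs(n, nv) < 0

print("both coefficients negative for n = 3, ..., 11")
```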
\section{Prescribing Gauss and Geodesic Curvature Problem on Compact Riemann Surfaces with Boundary with Zero Euler Characteristic} Throughout this section, $ (\bar{M}, g) $ denotes a $ 2 $-dimensional compact Riemann surface with non-empty smooth boundary $ \partial M $ and unit outward normal vector field $ \nu $ along $ \partial M $; the Gauss and geodesic curvatures of $ g $ are denoted by $ K_{g} $ and $ \sigma_{g} $, respectively. In this section, we extend our necessary and sufficient condition for the prescribed scalar curvature problem on compact manifolds with boundary of dimension at least $ 3 $ to $ (\bar{M}, g) $, in terms of prescribing a Gauss curvature function together with a zero geodesic curvature function for some conformal metric $ \tilde{g} \in [g] $. Instead of the local-to-global analysis and monotone iteration schemes, we apply a global variational method here, inspired by Kazdan and Warner \cite{KW2}. We then discuss the prescribing Gauss and geodesic curvature problems on $ (\bar{M}, g) $, especially for non-trivial geodesic curvature functions.
The model case when $ \chi(\bar{M}) = 0 $ is $ K_{g} \equiv \sigma_{g} \equiv 0 $, by the uniformization theorem and the Gauss-Bonnet theorem. From now on, we always assume that our initial metric $ g $ has zero Gauss and zero geodesic curvatures. Given two functions $ K, \sigma \in \mathcal C^{\infty}(\bar{M}) $, the existence of a conformal metric $ \tilde{g} = e^{2u} g $ for some $ u \in \mathcal C^{\infty}(\bar{M}) $ such that $ K_{\tilde{g}} = K $ and $ \sigma_{\tilde{g}} = \sigma $ is reduced to the existence of some smooth solution of the following PDE \begin{equation}\label{de2:eqn1} -\Delta_{g} u = K e^{2u} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} = \sigma e^{u} \; {\rm on} \; \partial M. \end{equation} When we discuss the necessary and sufficient conditions for the prescribed Gauss curvature problem with zero geodesic curvature, the boundary condition in (\ref{de2:eqn1}) further reduces to the Neumann condition, i.e. \begin{equation}\label{de2:eqn2} -\Delta_{g} u = K e^{2u} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} = 0 \; {\rm on} \; \partial M. \end{equation}
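The reduction above uses the standard two-dimensional conformal transformation laws; for the reader's convenience, they read as follows for $ \tilde{g} = e^{2u} g $ (compare (\ref{zerog:eqn4}), which is the special case $ K_{g} = -1 $, $ \sigma_{g} = 0 $):

```latex
\begin{equation*}
K_{\tilde{g}} = e^{-2u} \left( K_{g} - \Delta_{g} u \right) \; {\rm in} \; M, \qquad
\sigma_{\tilde{g}} = e^{-u} \left( \sigma_{g} + \frac{\partial u}{\partial \nu} \right) \; {\rm on} \; \partial M.
\end{equation*}
% With K_g = sigma_g = 0, requiring K_{gtilde} = K and sigma_{gtilde} = sigma
% gives exactly the PDE (de2:eqn1), and sigma = 0 gives (de2:eqn2).
```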
Let us discuss the necessary condition first, by assuming the existence of some solution $ u \in \mathcal C^{\infty}(\bar{M}) $ of (\ref{de2:eqn2}). The relation (\ref{zerog:eqn3}) indicates that \begin{equation*} \int_{M} K d\text{Vol}_{\tilde{g}} = 0 \end{equation*} for the metric $ \tilde{g} = e^{2u} g $. Therefore $ K $ must change sign or be identically zero. Multiplying both sides of (\ref{de2:eqn2}) by $ e^{-2u} $ and integrating, we have \begin{equation*} \int_{M} K d\text{Vol}_{g} = -2\int_{M} e^{-2u} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g}. \end{equation*} The right-hand side above is negative unless the function $ u $ is constant, which is exactly the case when $ K \equiv 0 $. Therefore the necessary condition for prescribing the Gauss curvature function $ K $ for some conformal metric $ \tilde{g} \in [g] $ with zero geodesic curvature is: either $ K \equiv 0 $, or $ K $ changes sign and $ \int_{M} K d\text{Vol}_{g} < 0 $.
We would like to show that the condition given above is also sufficient. This reduces to solving the PDE (\ref{de2:eqn2}) for a function $ K $ satisfying the condition above. We apply the variational method. It is standard to handle the weak solution of (\ref{de2:eqn2}) in the standard Sobolev space $ H^{1}(M, g) $, since the weak formulation of (\ref{de2:eqn2}) encodes both the PDE and the boundary condition through different choices of test functions, provided that the solution is regular enough, say, at least $ \mathcal C^{2}(\bar{M}) $. We observe that the main issue is to control the size of the $ e^{2u} $ term in an appropriate function space.
In the $ 2 $-dimensional case, with the exponential nonlinearity, we need the ideas of Moser and Trudinger. We start with the definition of a different Hilbert space, which is a subspace of $ H^{1}(M, g) $. \begin{definition}\label{de2:def1} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary, $ n = \dim \bar{M} \geqslant 1 $. We define \begin{equation*} H_{\perp}^{1}(M, g) : = \lbrace u \in H^{1}(M, g) : \int_{M} u d\text{Vol}_{g} = 0 \rbrace. \end{equation*} \end{definition} It is standard that $ H_{\perp}^{1}(M, g) $ is a Hilbert space.
The next two results are a variation of Trudinger's inequality and a consequence of it. \begin{proposition}\label{de2:prop1}\cite[Formula.~4.15]{T3} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary, $ n = \dim \bar{M} \geqslant 1 $. Then there exists a natural inclusion $ \imath $ such that \begin{equation*} \imath : H^{\frac{n}{2}}(M, g) \rightarrow \mathcal L^{q}(M, g), \forall q \in [1, \infty). \end{equation*} The inclusion map $ \imath $ is compact. In addition, we have \begin{equation}\label{de2:eqn3} \lVert u \rVert_{\mathcal L^{q}(M, g)} \leqslant C_{q} \lVert u \rVert_{H^{\frac{n}{2}}(M, g)}, \forall u \in H^{\frac{n}{2}}(M, g). \end{equation} The constant $ C_{q} $ is independent of the choice of $ u $. \end{proposition} \begin{proposition}\label{de2:prop2} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary, $ n = \dim \bar{M} \geqslant 1 $. Let $ \alpha \in \mathbb R $ be some constant. If there exists a sequence $ \lbrace u_{k} \rbrace $ such that $ u_{k} \rightarrow u $ weakly in $ H^{\frac{n}{2}}(M, g) $, then \begin{equation}\label{de2:eqn4} e^{\alpha u_{k}} \rightarrow e^{\alpha u} \end{equation} strongly in $ \mathcal L^{1}(M, g) $-norm. \end{proposition} It is well-known that the Poincar\'e inequality holds for elements of $ H_{\perp}^{1}(M, g) $, i.e. \begin{equation}\label{de2:eqn5} \lVert u \rVert_{\mathcal L^{2}(M, g)} \leqslant C_{1} \lVert \nabla u \rVert_{\mathcal L^{2}(M, g)}, \forall u \in H_{\perp}^{1}(M, g). \end{equation} The constant $ C_{1} $ is independent of the choice of $ u $. According to the Poincar\'e inequality and Proposition \ref{de2:prop1}, which states that the compact embedding $ H^{1}(M, g) \hookrightarrow \mathcal L^{q}(M, g) $ holds for all $ q \in [1, \infty) $ when $ n = \dim \bar{M} = 2 $, we have the following consequences.
\begin{proposition}\label{de2:prop3} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary, $ n = \dim \bar{M} = 2 $. Let $ u \in H_{\perp}^{1}(M, g) $ satisfy $ \lVert \nabla u \rVert_{\mathcal L^{2}(M, g)} \leqslant 1 $. Then there exist positive constants $ C_{2}, C_{3} $ such that \begin{equation}\label{de2:eqn6} \int_{M} e^{C_{2} u} d\text{Vol}_{g} \leqslant C_{3}. \end{equation} The constants $ C_{2}, C_{3} $ are independent of the choice of $ u $. \end{proposition} \begin{proposition}\label{de2:prop4} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary, $ n = \dim \bar{M} = 2 $. Then for any function $ u \in H^{1}(M, g) $ and any constant $ \beta > 0 $, there exist positive constants $ C_{4}, C_{5} $ such that \begin{equation}\label{de2:eqn7} \int_{M} e^{\beta \lvert u \rvert} d\text{Vol}_{g} \leqslant C_{4} e^{\left( \frac{\beta}{\text{Vol}_{g}(M)} \left\lvert \int_{M} u d\text{Vol}_{g} \right\rvert+ \frac{ \beta^{2} \lVert \nabla u \rVert^{2} }{4C_{5} } \right)}. \end{equation} Here $ C_{4}, C_{5} $ are constants independent of the choice of $ u $. \end{proposition} \begin{proposition}\label{de2:prop5} Let $ (\bar{M}, g) $ be a compact manifold with non-empty smooth boundary, $ n = \dim \bar{M} = 2 $. Then \begin{equation}\label{de2:eqn8} u \in H^{1}(M, g) \Rightarrow e^{u} \in \mathcal L^{q}(M, g), \forall q \in [1, \infty). \end{equation} \end{proposition} \begin{remark}\label{de2:re1} The proofs of Proposition \ref{de2:prop3}, Proposition \ref{de2:prop4} and Proposition \ref{de2:prop5} are exactly the same as in \cite{KW2}.
Roughly speaking, Proposition \ref{de2:prop3} is proven by a Taylor expansion of the exponential function, due to Trudinger; Proposition \ref{de2:prop4} follows from Proposition \ref{de2:prop3} by applying $ v = \frac{u - \frac{1}{\text{Vol}_{g}(M)}\int_{M}u d\text{Vol}_{g}}{\beta} $ in (\ref{de2:eqn6}), and then applying Young's inequality; Proposition \ref{de2:prop5} is a natural consequence of Proposition \ref{de2:prop4}, obtained by choosing an appropriate constant $ \beta $ in (\ref{de2:eqn7}). \end{remark}
With all the preparations above, we introduce the next result, giving sufficient conditions for prescribing Gauss curvature with $ \chi(\bar{M}) = 0 $. \begin{theorem}\label{de2:thm1} Let $ (\bar{M}, g) $ be a compact Riemann surface with non-empty smooth boundary $ \partial M $. Assume that $ \chi(\bar{M}) = 0 $. If the given function $ K \in \mathcal C^{\infty}(\bar{M}) $ satisfies \begin{equation}\label{de2:eqn9} \begin{split} & \text{either} \; K \equiv 0; \\ & \text{or} \; \int_{M} K d\text{Vol}_{g} < 0 \; \text{and $ K $ changes sign}, \\ \end{split} \end{equation} then there exists a smooth function $ u $ that solves (\ref{de2:eqn2}) with the function $ K $ given in (\ref{de2:eqn9}), i.e. there exists a metric $ \tilde{g} = e^{2u} g $ such that $ K_{\tilde{g}} = K $ and $ \sigma_{\tilde{g}} = 0 $. \end{theorem} \begin{proof} When $ K \equiv 0 $, it is trivial. So assume that $ K $ is nontrivial and such that (\ref{de2:eqn9}) holds. The variational method used here is essentially due to Kazdan and Warner \cite{KW2}. Consider the space \begin{equation}\label{de2:eqn10} B : = \lbrace u \in H_{\perp}^{1}(M, g) : \int_{M} K e^{2u} d\text{Vol}_{g} = 0 \rbrace. \end{equation} Since $ K $ changes sign, by the same reasoning as in \cite{KW2}, the space $ B $ is not empty, i.e. there exists at least one element $ u_{0} \in B $. Define the functional \begin{equation}\label{de2:eqn11} J(u) : = \frac{1}{2} \int_{M} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g}. \end{equation} Our goal is to minimize $ J $ within the space $ B $. Since $ J(u) \geqslant 0, \forall u \in B $, there exists a minimizing sequence $ \lbrace u_{k} \rbrace_{k \in \mathbb{N}} \subset B $ such that $ J(u_{k}) $ converges from the right to $ A : = \inf_{u \in B} J(u) $. By refining the choice of the sequence if necessary, we may assume that $ J(u_{k}) \leqslant J(u_{0}), \forall k \in \mathbb{N} $.
By the Poincar\'e inequality, we have \begin{equation*} \lVert u_{k} \rVert_{H^{1}(M, g)} \leqslant \left(1 + C_{1} \right) \lVert \nabla_{g} u_{k} \rVert_{\mathcal L^{2}(M, g)} \leqslant \left(1 + C_{1} \right) \left( 2 J(u_{0}) \right)^{\frac{1}{2}}, \forall k \in \mathbb{N}. \end{equation*} Therefore $ \lbrace u_{k} \rbrace_{k \in \mathbb{N}} $ is uniformly bounded in $ H^{1} $-norm. Standard Hilbert space theory as well as weak compactness imply that there exists a subsequence $ \lbrace u_{k_{j}} \rbrace $ of the original sequence such that \begin{equation}\label{de2:eqn12} u_{k_{j}} \rightharpoonup u \; \text{weakly in $ H^{1}(M, g) $}. \end{equation} For simplicity of notation, we still denote the subsequence by $ u_{k} $ in this proof. Note that by Rellich's theorem, $ u_{k} \rightarrow u $ strongly in $ \mathcal L^{2} $-norm. We show that $ u \in H_{\perp}^{1}(M, g) $. To see this, we observe that \begin{equation*} \left\lvert \int_{M} u d\text{Vol}_{g} \right\rvert = \left\lvert \int_{M} \left( u - u_{k} \right) d\text{Vol}_{g} \right\rvert \leqslant \text{Vol}_{g}(M)^{\frac{1}{2}} \left( \int_{M} \lvert u_{k} - u \rvert^{2} d\text{Vol}_{g} \right)^{\frac{1}{2}}. \end{equation*} The last term above can be made arbitrarily small by taking $ k $ large enough. It follows that $ u \in H_{\perp}^{1}(M, g) $. By Proposition \ref{de2:prop2}, the weak convergence in (\ref{de2:eqn12}) implies that \begin{equation*} \left\lvert \int_{M} K e^{2u} d\text{Vol}_{g} \right\rvert = \left\lvert \int_{M} \left( K e^{2u} - K e^{2u_{k}} \right) d\text{Vol}_{g} \right\rvert \rightarrow 0 \end{equation*} as $ k \rightarrow \infty $. Hence $ \int_{M} K e^{2u} d\text{Vol}_{g} = 0 $. We conclude that the weak limit $ u $ in (\ref{de2:eqn12}) is an element of $ B $.
Following exactly the same argument as in \cite[Thm.~5.3]{KW2}, we conclude that \begin{equation*} A = J(u), \end{equation*} i.e. $ u $ minimizes the functional $ J $ over $ B $.
According to the variational method with constraint, the Euler-Lagrange equation for the minimizer $ u $ is of the form \begin{equation}\label{de2:eqn13} \int_{M} \nabla_{g} u \cdot \nabla_{g} v d\text{Vol}_{g} + \int_{M} c_{1} K e^{2u} v d\text{Vol}_{g} + \int_{M} c_{2} v d\text{Vol}_{g} = 0, \forall v \in H^{1}(M, g) \end{equation} for some constants $ c_{1}, c_{2} $ to be determined later. Since (\ref{de2:eqn13}) contains no boundary term and holds for all $ v \in H^{1}(M, g) $, the homogeneous Neumann condition arises as the natural boundary condition for the weak solution $ u $. We now consider the regularity of $ u $. Since $ u \in H^{1}(M, g) $, Proposition \ref{de2:prop5} implies that $ K e^{2u} \in \mathcal L^{q}(M, g), \forall q \in [1, \infty) $. We take some $ q > 2 $. It follows from the $ W^{s, q} $-type elliptic regularity, see e.g. \cite[Prop.~2.2]{XU4}, that $ u \in H^{2, q}(M, g) $. Then by the standard bootstrapping argument, we conclude that $ u \in \mathcal C^{\infty}(\bar{M}) $. It follows from (\ref{de2:eqn13}) that $ \frac{\partial u}{\partial \nu} = 0 $ in the strong sense.
To determine $ c_{2} $, we take $ v \equiv 1 $. Since $ u \in B $ and hence $ \int_{M} K e^{2u} d\text{Vol}_{g} = 0 $, we have $ c_{2} = 0 $. Taking $ v = e^{-2u} $, it follows that \begin{equation*} \int_{M} - 2e^{-2u} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g} + c_{1} \int_{M} K d\text{Vol}_{g} = 0. \end{equation*} Hence $ c_{1} < 0 $ since $ \int_{M} K d\text{Vol}_{g} < 0 $. Since $ u \in \mathcal C^{\infty}(\bar{M}) $, we conclude that \begin{equation}\label{de2:eqn14} -\Delta_{g} u = -c_{1} K e^{2u} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} = 0 \; {\rm on} \; \partial M \end{equation} holds in the classical sense. Since $ c_{1} < 0 $, we may write $ -c_{1} = e^{2\gamma} $ for some constant $ \gamma \in \mathbb R $. Setting \begin{equation*} \tilde{u} : = u + \gamma, \end{equation*} we have \begin{equation*} -\Delta_{g} \tilde{u} = -\Delta_{g} u = e^{2\gamma} K e^{2u} = K e^{2 \tilde{u}} \; {\rm in} \; M, \frac{\partial \tilde{u}}{\partial \nu} = 0 \; {\rm on} \; \partial M. \end{equation*} Thus $ \tilde{u} $ is the desired solution of (\ref{de2:eqn2}) for $ K $ satisfying (\ref{de2:eqn9}). \end{proof}
\section{The Generalization of the Han-Li Conjecture} In this section, we discuss the generalization of the Han-Li conjecture \cite{HL}. The standard Han-Li conjecture is equivalent to the existence of a positive, smooth solution of the following PDE \begin{equation}\label{HL:eqn1} -a\Delta_{g} u + R_{g} u = \lambda u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} + \frac{2}{p-2} h_{g} u = \frac{2}{p-2} \zeta u^{\frac{p}{2}} \; {\rm on} \; \partial M \end{equation} for some constants $ \lambda, \zeta \in \mathbb R $. It was first proven in \cite{XU5}; the result is stated below. Note that the constant mean curvature $ \zeta $ on $ \partial M $ is required to be positive. \begin{theorem}\label{HL:thm1}\cite[\S1]{XU5} Let $ (\bar{M}, g) $ be a compact manifold with smooth boundary, $ \dim \bar{M} \geqslant 3 $. Let $ \eta_{1}' $ be the first eigenvalue of the boundary eigenvalue problem $ \Box_{g} u = \eta_{1}' u $ in $ M $, $ B_{g} u = 0 $ on $ \partial M $. Then: \\ \begin{enumerate}[(i).] \item If $ \eta_{1}' = 0 $, (\ref{HL:eqn1}) with constant functions $ S = \lambda \in \mathbb R $, $ H = \zeta \in \mathbb R $ admits a real, positive solution $ u \in \mathcal C^{\infty}(\bar{M}) $ with $ \lambda = \zeta = 0 $; \item If $ \eta_{1}' < 0 $, (\ref{HL:eqn1}) with constant functions $ S = \lambda \in \mathbb R $, $ H = \zeta \in \mathbb R $ admits a real, positive solution $ u \in \mathcal C^{\infty}(\bar{M}) $ with some $ \lambda < 0 $ and $ \zeta > 0 $; \item If $ \eta_{1}' > 0 $, (\ref{HL:eqn1}) with constant functions $ S = \lambda \in \mathbb R $, $ H = \zeta \in \mathbb R $ admits a real, positive solution $ u \in \mathcal C^{\infty}(\bar{M}) $ with some $ \lambda > 0 $ and $ \zeta > 0 $. \end{enumerate} \end{theorem} With the aid of the new version of the monotone iteration scheme in Theorem \ref{pre:thm4}, we can extend the Han-Li conjecture by showing that on compact manifolds $ (\bar{M}, g) $ with boundary, \begin{enumerate}[(i).] 
\item If $ \eta_{1}' < 0 $, then (\ref{HL:eqn1}) admits a positive, smooth solution with some $ \lambda < 0 $ and $ \zeta < 0 $; \item If $ \eta_{1}' > 0 $, then (\ref{HL:eqn1}) admits a positive, smooth solution with some $ \lambda > 0 $ and $ \zeta < 0 $. \end{enumerate} As a prerequisite, we need a result on the perturbation of the negative first eigenvalue of the conformal Laplacian. \begin{proposition}\label{HL:prop1} Let $ (\bar{M}, g) $ be a compact Riemannian manifold with non-empty smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $. Let $ \beta > 0 $ be a small enough constant. If $ \eta_{1}' < 0 $, then the quantity \begin{equation*} \eta_{1, \beta}' = \inf_{u \neq 0} \frac{a\int_{M} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g} + \int_{M} R_{g} u^{2} d\text{Vol}_{g} + \frac{2a}{p-2} \int_{\partial M} (h_{g} + \beta) u^{2} dS}{\int_{M} u^{2} d\text{Vol}_{g}} < 0. \end{equation*} In particular, $ \eta_{1, \beta}' $ satisfies \begin{equation}\label{HL:eqn2} -a\Delta_{g} \varphi + R_{g} \varphi = \eta_{1, \beta}' \varphi \; {\rm in} \; M, \frac{\partial \varphi}{\partial \nu} + \frac{2}{p-2} (h_{g} + \beta) \varphi = 0 \; {\rm on} \; \partial M \end{equation} with some positive function $ \varphi \in \mathcal C^{\infty}(\bar{M}) $. \end{proposition} \begin{proof} Since $ \eta_{1}' < 0 $, the normalized first eigenfunction $ \varphi_{1} $, i.e. 
$ \int_{M} \varphi_{1}^{2} d\text{Vol}_{g} = 1 $, satisfies \begin{equation*} \eta_{1}' = a\int_{M} \lvert \nabla_{g} \varphi_{1} \rvert^{2} d\text{Vol}_{g} + \int_{M} R_{g} \varphi_{1}^{2} d\text{Vol}_{g} + \frac{2a}{p-2} \int_{\partial M} h_{g} \varphi_{1}^{2} dS. \end{equation*} By the characterization of $ \eta_{1, \beta}' $, we have \begin{equation*} \eta_{1, \beta}' \leqslant a\int_{M} \lvert \nabla_{g} \varphi_{1} \rvert^{2} d\text{Vol}_{g} + \int_{M} R_{g} \varphi_{1}^{2} d\text{Vol}_{g} + \frac{2a}{p-2} \int_{\partial M} (h_{g} + \beta) \varphi_{1}^{2} dS = \eta_{1}' + \frac{2a\beta}{p-2} \int_{\partial M} \varphi_{1}^{2} dS. \end{equation*} Since $ \varphi_{1} $ is fixed, it follows that $ \eta_{1, \beta}' < 0 $ provided $ \beta > 0 $ is small enough. \end{proof} The above result says that when $ \eta_{1}' < 0 $, there is some room to perturb the boundary condition while keeping the sign of the perturbed eigenvalue unchanged. We anticipate the same property in the positive first eigenvalue case; here, however, the perturbation is applied to the conformal Laplacian operator rather than to the Robin boundary condition, as mentioned in \cite[\S5]{XU4}. \begin{proposition}\label{HL:prop2} Let $ (\bar{M}, g) $ be a compact Riemannian manifold with non-empty smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $. Let $ \beta < 0 $ be a constant with small enough absolute value. If $ \eta_{1}' > 0 $, then the quantity \begin{equation*} \eta_{1, \beta}' = \inf_{u \neq 0} \frac{a\int_{M} \lvert \nabla_{g} u \rvert^{2} d\text{Vol}_{g} + \int_{M} \left( R_{g} + \beta \right) u^{2} d\text{Vol}_{g} + \frac{2a}{p-2} \int_{\partial M} h_{g} u^{2} dS}{\int_{M} u^{2} d\text{Vol}_{g}} > 0. 
\end{equation*} In particular, $ \eta_{1, \beta}' $ satisfies \begin{equation}\label{HL:eqn3} -a\Delta_{g} \varphi + \left( R_{g} + \beta \right) \varphi = \eta_{1, \beta}' \varphi \; {\rm in} \; M, \frac{\partial \varphi}{\partial \nu} + \frac{2}{p-2} h_{g} \varphi = 0 \; {\rm on} \; \partial M \end{equation} with some positive function $ \varphi \in \mathcal C^{\infty}(\bar{M}) $. \end{proposition} \begin{proof} Setting $ \eta_{1, \beta}' = \eta_{1}' + \beta $, we have \begin{equation*} -a\Delta_{g} \varphi + \left( R_{g} + \beta \right) \varphi = \eta_{1}' \varphi + \beta \varphi \; {\rm in} \; M, \frac{\partial \varphi}{\partial \nu} + \frac{2}{p-2} h_{g} \varphi = 0 \; {\rm on} \; \partial M, \end{equation*} where $ \varphi $ is the first eigenfunction with respect to $ \eta_{1}' $. If $ \lvert \beta \rvert $ is small enough, then $ \eta_{1, \beta}' > 0 $. \end{proof} We are ready to show the extension of the Han-Li conjecture in Case (i), i.e. $ \eta_{1}' < 0 $.
\begin{theorem}\label{HL:thm2} Let $ (\bar{M}, g) $ be a compact Riemannian manifold with non-empty smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $. If $ \eta_{1}' < 0 $, there exist some negative constants $ \lambda, \zeta < 0 $ such that the PDE (\ref{HL:eqn1}) with $ S = \lambda $, $ H = \zeta $ admits a positive, smooth solution $ u $ on $ \bar{M} $. Equivalently, there exists a Yamabe metric $ \tilde{g} = u^{p-2} g $ such that $ R_{\tilde{g}} = \lambda < 0 $ and $ h_{\tilde{g}} = \zeta < 0 $. \end{theorem} \begin{proof} We construct lower and upper solutions of the following PDE \begin{equation}\label{HL:eqn4} -a\Delta_{g} u + R_{g} u = \lambda u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} + \frac{2}{p-2} h_{g} u = \frac{2}{p-2} \zeta u^{\frac{p}{2}} \; {\rm on} \; \partial M \end{equation} for some appropriate choices of $ \lambda $ and $ \zeta $. According to the proof of the Han-Li conjecture in Theorem \ref{HL:thm1}, we may assume that $ h_{g} = h > 0 $ for some positive constant $ h $, since otherwise we apply a pointwise conformal change to $ g $. Choose $ \beta > 0 $ small enough so that the conclusion and equation (\ref{HL:eqn2}) of Proposition \ref{HL:prop1} hold, and fix this $ \beta $. It follows that we can choose some constant $ \lambda < 0 $ such that \begin{equation*} -a\Delta_{g} \varphi + R_{g} \varphi = \eta_{1, \beta}' \varphi \leqslant \lambda \varphi^{p-1} \; {\rm in} \; M. \end{equation*} With $ \lambda < 0 $ fixed, we choose $ \zeta < 0 $ with small enough absolute value so that the requirement in (\ref{pre:eqn17}) holds; we can make $ \lvert \zeta \rvert $ even smaller if necessary so that \begin{equation*} \frac{\partial \varphi}{\partial \nu} + \frac{2}{p-2} h_{g} \varphi = -\frac{2}{p-2} \beta \varphi \leqslant \frac{2}{p-2} \zeta \varphi^{\frac{p}{2}}. \end{equation*} The inequality above holds for small enough $ \lvert \zeta \rvert $ since $ \zeta < 0 $. 
We then define \begin{equation}\label{HL:eqn5} u_{-} : = \varphi \; {\rm on} \; \bar{M} \end{equation} as a lower solution of (\ref{HL:eqn4}). For the upper solution, we take \begin{equation}\label{HL:eqn6} u_{+} : = C \gg 1 \; {\rm on} \; \bar{M}. \end{equation} When the constant $ C $ is large enough, it is clear that \begin{equation*} -a\Delta_{g} u_{+} + R_{g} u_{+} = R_{g} C \geqslant \lambda C^{p-1} \; {\rm in} \; M \end{equation*} since both $ R_{g} $ and $ \lambda $ are negative. On $ \partial M $, we check that \begin{equation*} \frac{\partial u_{+}}{\partial \nu} + \frac{2}{p-2} h_{g} u_{+} = \frac{2}{p-2} h C \geqslant 0 \geqslant \frac{2}{p-2} \zeta u_{+}^{\frac{p}{2}}. \end{equation*} We also require the constant $ C \geqslant \sup_{\bar{M}} \varphi = \sup_{\bar{M}} u_{-} $. Since both $ u_{+} $ and $ u_{-} $ are smooth and $ 0 < u_{-} \leqslant u_{+} $, we apply Theorem \ref{pre:thm4} and conclude that (\ref{HL:eqn4}) has a positive, smooth solution. \end{proof}
As we know, the negative first eigenvalue case is relatively easy in the sense that global lower and upper solutions are not hard to find. As we have shown in \cite{XU4}, \cite{XU5}, \cite{XU6}, \cite{XU7} and \cite{XU3}, it is not easy to apply the monotone iteration scheme when the first eigenvalue is positive. In particular, there is a difference between prescribing constant and non-constant scalar curvature functions. In order to show the generalization of the Han-Li conjecture, we need the solution of a perturbed local Yamabe equation. \begin{proposition}\label{HL:prop3}\cite[Prop.~3.3]{XU3} Let $ (\Omega, g) $ be a Riemannian domain in $\mathbb R^n$, $ n \geqslant 3 $, with $C^{\infty} $ boundary, and with ${\rm Vol}_g(\Omega)$ and the Euclidean diameter of $\Omega$ sufficiently small. Let $ \beta < 0 $ be any constant. Assume $ S_{g} < 0 $ within the small enough closed domain $ \bar{\Omega} $. Then for any $ \lambda > 0 $, the Dirichlet problem \begin{equation}\label{HL:eqn8} -a\Delta_{g} u + \left(S_{g} + \beta \right) u = \lambda u^{p-1} \; {\rm in} \; \Omega, u = 0 \; {\rm on} \; \partial \Omega \end{equation} has a real, positive, smooth solution $ u \in \mathcal C^{\infty}(\Omega) \cap H_{0}^{1}(\Omega, g) $ that vanishes on $ \partial \Omega $. \end{proposition} We are ready to show the extension of the Han-Li conjecture in Case (ii), i.e. $ \eta_{1}' > 0 $. Note that there is a significant difference between constant and non-constant prescribed scalar curvatures. When $ S $ is not globally constant, we can use the local solution in Proposition \ref{local:prop2} and the gluing procedure in \cite[Lemma~3.2]{XU6} to construct the lower and upper solutions. When $ S = \lambda > 0 $ on $ M $, we have to introduce a perturbed Yamabe operator $ -a\Delta_{g} + R_{g} + \beta $ for $ \beta < 0 $, solve the perturbed Yamabe equation first, and then take the limit as $ \beta \rightarrow 0^{-} $. 
In this case, the local solution of (\ref{HL:eqn8}) is the key.
\begin{theorem}\label{HL:thm3} Let $ (\bar{M}, g) $ be a compact Riemannian manifold with non-empty smooth boundary $ \partial M $, $ n = \dim \bar{M} \geqslant 3 $. If $ \eta_{1}' > 0 $, there exist constants $ \lambda > 0 $ and $ \zeta < 0 $ such that the PDE (\ref{HL:eqn1}) with $ S = \lambda $, $ H = \zeta $ admits a positive, smooth solution $ u $ on $ \bar{M} $. Equivalently, there exists a Yamabe metric $ \tilde{g} = u^{p-2} g $ such that $ R_{\tilde{g}} = \lambda > 0 $ and $ h_{\tilde{g}} = \zeta < 0 $. \end{theorem} \begin{proof} Here we have no restriction on the choice of the initial mean curvature $ h_{g} $ for the construction of the upper solution since the target mean curvature is negative. We also have no restriction for the construction of the lower solution since the lower solution will be identically zero within a collar region containing $ \partial M $. By Proposition \ref{zero:prop2}, see also \cite[Thm.~4.6]{XU3} and \cite[Thm.~5.7]{XU4}, we may assume that the initial metric has scalar curvature $ R_{g} $ that is negative somewhere and mean curvature $ h_{g} > 0 $ everywhere on $ \partial M $. Note that the small enough region on which $ R_{g} < 0 $ can be chosen arbitrarily. The following argument is essentially the same as in \cite[Thm.~6.3, Thm.~6.4, Prop.~6.1]{XU5}, so we only give a concise sketch here. By Proposition \ref{HL:prop2} above, we pick a small enough constant $ \beta < 0 $ and consider the eigenvalue problem \begin{equation*} -a\Delta_{g} \varphi + \left(R_{g} + \beta \right) \varphi = \eta_{1, \beta}' \varphi \; {\rm in} \; M, \frac{\partial \varphi}{\partial \nu} + \frac{2}{p-2} h_{g} \varphi = 0 \; {\rm on} \; \partial M. \end{equation*} Choose $ \lambda > 0 $ so that \begin{equation*} \eta_{1, \beta}' \inf_{\bar{M}} \varphi > \lambda \cdot 2^{p-2} \cdot \sup_{\bar{M}} \varphi^{p-1}. \end{equation*} Fix this $ \lambda $. 
Note that we need the strict inequality above to leave some room for the gluing procedure in the construction of the upper solution. This is possible since we have assumed that $ R_{g} $ is negative somewhere; in addition, the introduction of $ \beta $ breaks the conformal invariance.
We now apply Proposition \ref{HL:prop3} to obtain a local solution $ u_{0} $ of (\ref{HL:eqn8}) on a small enough domain $ \Omega $ on which $ R_{g} < 0 $, with the fixed $ \lambda $. Set \begin{equation}\label{HL:eqn11} u_{-} : = \begin{cases} u_{0} & \; {\rm in} \; \Omega \\ 0 & \; {\rm in} \; \bar{M} \backslash \bar{\Omega} \end{cases}. \end{equation} It is clear that $ u_{-} $ is a lower solution of \begin{equation}\label{HL:eqn9} -a\Delta_{g} u + \left( R_{g} + \beta \right) u = \lambda u^{p-1} \; {\rm in} \; M, \frac{\partial u}{\partial \nu} + \frac{2}{p - 2} h_{g} u = \frac{2}{p-2} \cdot \zeta u^{\frac{p}{2}} \; {\rm on} \; \partial M. \end{equation} Note that the boundary condition holds since $ u_{-} \equiv 0 $ within a collar region containing $ \partial M $. Furthermore, $ u_{-} \in H^{1}(M, g) \cap \mathcal C^{0}(\bar{M}) $. For details, see e.g. Theorem 6.3 of \cite{XU5}. For the upper solution, we glue the two functions $ u_{0} $ and $ \varphi $ in $ \Omega $ together to obtain a new function $ u_{1} \in \mathcal C^{\infty}(\bar{\Omega}) $ which satisfies \begin{equation*} -a\Delta_{g} u_{1} + \left(R_{g} + \beta \right) u_{1} \geqslant \lambda u_{1}^{p-1} \; {\rm in} \; \Omega, u_{1} = \varphi \; {\rm on} \; \partial \Omega, u_{1} \geqslant u_{-} \; {\rm in} \; \bar{\Omega}. \end{equation*} The gluing strategy has been used repeatedly in previous papers; for details, see e.g. Theorem 6.3 of \cite{XU5}. Set \begin{equation}\label{HL:eqn12} u_{+} : = \begin{cases} u_{1} & \; {\rm in} \; \Omega \\ \varphi & \; {\rm in} \; \bar{M} \backslash \bar{\Omega} \end{cases}. \end{equation} Since $ u_{1} = \varphi $ near $ \partial \Omega $, we conclude that $ u_{+} $ is smooth on $ \bar{M} $. Due to the choice of $ \lambda $, it is clear that \begin{equation*} -a\Delta_{g} u_{+} + \left( R_{g} + \beta \right) u_{+} \geqslant \lambda u_{+}^{p-1} \; {\rm in} \; M. 
\end{equation*} Since $ \zeta < 0 $, the boundary condition satisfies \begin{equation*} \frac{\partial u_{+}}{\partial \nu} + \frac{2}{p-2} h_{g} u_{+} = 0 \geqslant \frac{2}{p-2} \zeta u_{+}^{\frac{p}{2}}, \forall \zeta < 0. \end{equation*} Furthermore, we have $ 0 \leqslant u_{-} \leqslant u_{+} $ with $ u_{-} \not\equiv 0 $. Choosing $ \zeta $ with small enough absolute value so that (\ref{pre:eqn17}) holds, we apply Theorem \ref{pre:cor1} and conclude that (\ref{HL:eqn9}) has a positive solution $ u_{\beta} \in \mathcal C^{\infty}(\bar{M}) $ for appropriate choices of $ \lambda $ and $ \zeta $. Choose a threshold $ \beta_{0} < 0 $ with $ \lvert \beta_{0} \rvert $ small enough; every $ \beta \in (\beta_{0}, 0) $ is associated with a solution $ u_{\beta} $ of (\ref{HL:eqn9}). By the same argument as in \cite[Prop.~6.1]{XU5}, we can show that \begin{equation*} \lVert u_{\beta} \rVert_{\mathcal L^{r}(M, g)} \leqslant C, \lVert u_{\beta} \rVert_{\mathcal L^{p}(M, g)} \in [a, b], \forall \beta \in (\beta_{0}, 0). \end{equation*} Here $ r > p $, and $ C, a, b $ are some positive constants. By the Arzel\`a-Ascoli theorem, there exists a subsequence of $ \lbrace u_{\beta} \rbrace $, uniformly bounded in $ \mathcal C^{2, \alpha} $-norm for some $ \alpha \in (0, 1) $ by the standard bootstrapping method, that converges to a limit $ u $ in the classical sense. By the same argument as in \cite[Thm.~6.4]{XU5}, the limit $ u \in \mathcal C^{\infty}(\bar{M}) $ solves (\ref{HL:eqn1}) with $ S = \lambda > 0 $ and $ H = \zeta < 0 $, as chosen above. \end{proof}
\end{document}
\begin{document}
\title{Twisted Alexander norms give lower bounds on the Thurston norm} \author{Stefan Friedl and Taehee Kim} \date{\today} \address{Rice University, Houston, Texas, 77005-1892} \email{[email protected]}\address{ Department of Mathematics, Konkuk University, Hwayang-dong, Gwangjin-gu, Seoul 143-701, Korea} \email{[email protected]} \def\textup{2000} Mathematics Subject Classification{\textup{2000} Mathematics Subject Classification} \expandafter\let\csname subjclassname@1991\endcsname=\textup{2000} Mathematics Subject Classification \expandafter\let\csname subjclassname@2000\endcsname=\textup{2000} Mathematics Subject Classification \subjclass{Primary 57M27; Secondary 57N10} \keywords{Thurston norm, Twisted Alexander norm, 3-manifolds}
\begin{abstract} We introduce twisted Alexander norms of a compact connected orientable 3-manifold with first Betti number bigger than one, generalizing norms of McMullen and Turaev. We show that twisted Alexander norms give lower bounds on the Thurston norm of a 3-manifold. Using these we completely determine the Thurston norm of many 3-manifolds which are not determined by norms of McMullen and Turaev. \end{abstract}
\maketitle
\section{Introduction} \label{sec:introduction}
Let $M$ be a 3-manifold. Throughout the paper we will assume that all 3-manifolds are compact, connected and orientable. Let $\phi\in H^1(M;\Bbb{Z})$. There exists a (possibly disconnected) properly embedded surface $S$ which represents a homology class which is dual to $\phi$. (We also say that $S$ is \emph{dual} to $\phi$.) The \emph{Thurston norm} of $\phi$ is now defined as
\[
||\phi||_{T,M}=\min \{ -\chi(\hat{S})\, | \, S \subset M \mbox{ properly embedded surface dual to }\phi\} \] where $\hat{S}$ denotes the result of discarding all connected components of $S$ with positive Euler characteristic.
If the manifold $M$ is clear, we will just write $||\phi||_T$.
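For orientation we recall a standard example (added here for illustration; it is not part of the original text): if $X(K)$ is the exterior of a knot $K\subset S^3$ of genus $g\geqslant 1$ and $\phi$ generates $H^1(X(K);\Bbb{Z})\cong \Bbb{Z}$, then a minimal genus Seifert surface $S$ is dual to $\phi$ and realizes the Thurston norm, so
\[ ||\phi||_{T}=-\chi(S)=2g-1.\]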
Thurston \cite{Th86} introduced $||-||_T$ in a preprint in 1976.
He proved that the Thurston norm on $H^1(M;\Bbb{Z})$ is homogeneous and convex (that is, for
$\phi,\phi_1,\phi_2\in H^1(M;\Bbb{Z})$ and $k\in \Bbb{N}$, $||k\phi||_T=k
||\phi||_T$ and $||\phi_1+\phi_2||_T\leq
||\phi_1||_T+||\phi_2||_T$). He also showed that the Thurston norm can be extended to a seminorm on $H^1(M;\Bbb{R})$ and that the Thurston norm ball
(which is the set of $\phi\in H^1(M;\Bbb{R})$ with $||\phi||_T \le 1$) is a (possibly noncompact) finite convex polyhedron. A natural question arises; how do we determine the Thurston norm on $H^1(M;\Bbb{R})$?
To address this question McMullen \cite{Mc02} used a homological approach. It is well-known that for a knot $K$ in the 3-sphere \[ 2\,\mbox{genus}(K) \geq \deg\left(\Delta_K(t)\right), \] where $\Delta_K(t) \in \Bbb{Z}[t^{\pm 1}]$ denotes the Alexander polynomial of $K$. Generalizing this McMullen \cite{Mc02} considered the multivariable Alexander polynomial $\Delta_M \in \Bbb{Z}[FH_1(M;\Bbb{Z})]$ (cf. Section \ref{sectiontwialex} for a definition) where
$FH_1(M;\Bbb{Z}):=H_1(M;\Bbb{Z})/\mbox{Tor}_\Bbb{Z}(H_1(M;\Bbb{Z}))$ is the maximal free abelian quotient of $H_1(M;\Bbb{Z})$. Using the multivariable Alexander polynomial he defined another seminorm (called \emph{the Alexander norm of $M$}) $||-||_A$ on $H^1(M;\Bbb{R})$ as follows. If $\Delta_{M}=0$ then we set
$||\phi||_{A}=0$ for all $\phi\in H^1(M;\Bbb{R})$. Otherwise for $\Delta_{M}=\sum a_if_i$ with $a_i\in \Bbb{Z}$ and $f_i \in FH_1(M;\Bbb{Z})$ and given $\phi \in H^1(M;\Bbb{R})$ we define
\[ ||\phi||_{A} :=\sup \phi(f_i-f_j),\] where the supremum is over pairs $(f_i, f_j)$ such that $a_ia_j\ne 0$. Note that $\phi\in H^1(M;\Bbb{R})$ naturally induces a homomorphism $H_1(M;\Bbb{R}) \to \Bbb{R}$.
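To illustrate the definition (an example added here, not taken from \cite{Mc02}): suppose $b_1(M)=2$ and write $FH_1(M;\Bbb{Z})\cong \Bbb{Z}^2$ multiplicatively with basis $t_1,t_2$. If $\Delta_M=1+t_1+t_2$, then for $\phi\in H^1(M;\Bbb{R})$ with $\phi(t_1)=a$ and $\phi(t_2)=b$ we obtain
\[ ||\phi||_{A}=\max\{0,a,b\}-\min\{0,a,b\},\]
so the Alexander norm ball is the hexagon $\{(a,b)\, :\, |a|\le 1,\ |b|\le 1,\ |a-b|\le 1\}$.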
The Alexander norm ball is again a (possibly noncompact) finite convex polyhedron. McMullen showed that the Alexander norm gives a lower bound on the Thurston norm. More precisely he proved the following theorem.
\begin{theorem}\label{thmalexnorm}\cite[Theorem 1.1]{Mc02} Let $M$ be a 3-manifold whose boundary is empty or consists of tori. Then the Alexander and Thurston norms on $H^1(M;\Bbb{Z})$ satisfy
\[ ||\phi||_T \geq ||\phi||_A - \left\{ \begin{array}{ll} 1+b_3(M), &\mbox{ if }b_1(M)=1 \mbox{ and } H^1(M;\Bbb{Z}) \mbox{ is generated by } \phi, \\ 0, &\mbox{ if }b_1(M)>1. \end{array} \right. \] Equality holds if $\phi : \pi_1(M) \to \Bbb{Z}$ is represented by a fibration $M \to S^1$ such that $M\ne S^1\times D^2$ and $M\ne S^1\times S^2$. \end{theorem}
In \cite{Mc02}, using the Alexander norm, McMullen completely determined the Thurston norm of many link complements. The computation was based on the following observation for the case $b_1(M) >1$. \\
\noindent \emph{Observation:} The Thurston norm ball lies inside the Alexander norm ball. If the Alexander norm ball and the Thurston norm ball agree on all extreme vertices of the Alexander norm ball, then they agree everywhere by convexity. \\
Note that Seiberg-Witten theory \cite{KM97} and Heegaard-Floer homology \cite{OS04} can be used to completely determine the Thurston norm (cf. \cite{Kr98, Kr99, Vi99,Vi03}),
but computations are not combinatorial and sometimes difficult to apply in practice. In this paper we will take a homology theoretic approach and find lower bounds on the Thurston norm which are easily computed in a combinatorial way.
McMullen's homological approach has been generalized by many authors. In
\cite{Co04,Ha05,Tu02b,FK05} much stronger lower bounds for $||\phi||_T$ for \emph{specific} $\phi \in H^1(M;\Bbb{R})$ were found. In particular when $b_1(M)=1$ these methods allow us to determine the Thurston norm ball in many cases. For the case $b_1(M) >1$ Turaev introduced \emph{the torsion norm} generalizing McMullen's Alexander norm using \emph{abelian} representations \cite[Chapter 4]{Tu02a}.
In this paper, given any finite dimensional representation over a field, we define the \emph{twisted Alexander norm} and prove that it gives a lower bound on the Thurston norm. This generalizes the work of McMullen \cite{Mc02} and Turaev \cite{Tu02a}. Note that in a separate paper the first author and Shelly Harvey \cite{FH06} will show that the invariants in \cite{Ha05} are a norm as well.
In the following let $\Bbb{F}$ be a commutative field and $\alpha:\pi_1(M)\to \GL(\F,k)$ a representation. Then we define \emph{the twisted multivariable Alexander polynomial} $\Delta_M^\alpha\in \Bbb{F}[FH_1(M;\Bbb{Z})]$ associated to $\alpha$ and the natural surjection $\pi_1(M) \to FH_1(M;\Bbb{Z})$ (see Section \ref{sectiontwialex}). Similarly to the way the multivariable Alexander polynomial gives rise to the Alexander norm we use the twisted multivariable Alexander polynomial to define the twisted Alexander norm
$||-||_A^\alpha$ on $H^1(M;\Bbb{R})$ associated to $\alpha$ (see Section \ref{sec:twisted alexander norm}).
Let $\phi \in H^1(M;\Bbb{Z})$. This defines a homomorphism $\phi:\pi_1(M)\to \Bbb{Z}\cong \langle t^{\pm 1}\rangle$. We now define $\Delta_{\phi}^{\alpha,i}(t) \in \F[t^{\pm 1}]$ to be the order of the $i$--th twisted homology module $H_i^\alpha(M;\F^k \otimes_\Bbb{F} \F[t^{\pm 1}])$ associated to $\alpha$ and $\phi$. (See Section \ref{sectiontwialex}. We also refer to \cite{KL99, FK05}.) We write $\Delta^\alpha_\phi(t)$ for $\Delta_{\phi}^{\alpha,1}(t)$. The notion of twisted Alexander polynomial originated from a preprint of Lin \cite{Lin01} from 1990 and was developed by Wada \cite{Wa94}. The homological definition of twisted Alexander polynomials, which we use in this paper, was first introduced by Kirk and Livingston \cite{KL99}. We also refer to \cite{Kit96, FK05} for more about twisted Alexander polynomials.
In \cite[Theorem~1.1]{FK05} the authors show that twisted one-variable Alexander polynomials give lower bounds on $||\phi||_T$ for \emph{specific} $\phi\in H^1(M;\Bbb{Z})$. The following theorem allows us to translate bounds on $||\phi||_T$ for specific $\phi\in H^1(M;\Bbb{Z})$ from \cite{FK05} to bounds on $||-||_T$ given by twisted Alexander norms. Note that $\phi$ induces a homomorphism $\phi : \Bbb{F}[FH_1(M;\Bbb{Z})] \to \F[t^{\pm 1}]$.
\newtheorem*{thm1a}{Theorem~\ref{mainthmalex}} \begin{thm1a} \it{ Let $M$ be a 3-manifold with $b_1(M)>1$ whose boundary is empty or consists of tori. Let $\alpha : \pi_1(M) \to \GL(\F,k)$ be a representation. Let $\phi\in H^1(M;\Bbb{Z})$. Then \[ \Delta^\alpha_\phi(t)=\phi(\Delta^\alpha_M){\Delta_{\phi}^{\a,0}(t)}\Delta_{\phi}^{\a,2}(t).\] Furthermore if $\phi(\Delta^\alpha_M)\ne 0$, then $\Delta_{\phi}^{\a,0}(t)\ne 0$ and $\Delta_{\phi}^{\a,2}(t)\ne 0$ and hence $\Delta_{\phi}^{\a}(t)\ne 0$.} \end{thm1a}
The proof is based on the functoriality of Reidemeister torsion (see Section \ref{sec:multi}) and builds on ideas of Turaev. The following two theorems are our main results.
\newtheorem*{thm1}{Theorem~\ref{mainthm}} \begin{thm1}[{\bf Main Theorem 1}] \it{ Let $M$ be a 3-manifold with $b_1(M)>1$ whose boundary is empty or consists of tori. Let $\alpha:\pi_1(M)\to \GL(\F,k)$ be a representation. Then for the corresponding twisted Alexander norm
$||-||^\alpha_A$, we have
\[ ||\phi||_T \ge \frac1k ||\phi||^\alpha_A \] for all $\phi\in H^1(M;\Bbb{R})$. } \end{thm1}
Let $M$ be a 3-manifold and $\phi\in H^1(M;\Bbb{Z})$. We say \emph{$(M,\phi)$ fibers over $S^{1}$} if the homotopy class of maps $M\to S^1$ induced by $\phi:\pi_1(M)\to H_1(M;\Bbb{Z})\to \Bbb{Z}$ contains a representative that is a fiber bundle over $S^{1}$. Thurston \cite{Th86} showed that if $(M,\phi)$ fibers over $S^1$, then $\phi$ lies in the cone on a top-dimensional open face of the Thurston norm ball. We denote this cone by $C(\phi)$.
\newtheorem*{thm2}{Theorem~\ref{mainthmfib}} \begin{thm2}[{\bf Main Theorem 2}] \it{ Let $M$ be a 3-manifold with $b_1(M)>1$ whose boundary is empty or consists of tori such that $M\ne S^1\times D^2$ and $M\ne S^1\times S^2$. Let $\alpha:\pi_1(M)\to \GL(\F,k)$ be a representation. If $\phi\in H^1(M;\Bbb{Z})$ is such that $(M,\phi)$ fibers over $S^1$, then
\[ ||\psi||_T = \frac1k ||\psi||^\alpha_A \]
for all $\psi\in C(\phi)$. } \end{thm2}
By Theorem \ref{mainthm} twisted Alexander norms give lower bounds on the Thurston norm. With the same reason as for the Alexander norm ball, twisted Alexander norm balls are (possibly noncompact) finite convex polyhedra. Therefore we can use McMullen's observation in the above to determine the Thurston norm using twisted Alexander norms.
In Section \ref{sec:example} we give examples which show how powerful twisted Alexander norms are. For example we determine the Thurston norm of the complement of the link $L$ in Figure \ref{link11n73intro}, which cannot be determined by the (usual) Alexander norm. The components of $L$ are $K_1$, the trefoil, and $K_2=11_{440}$ (here we use \emph{knotscape} notation).
\begin{figure}
\caption{Link $L$}
\label{link11n73intro}
\end{figure} Let $X(L)$ denote the complement of an open tubular neighborhood of $L$ in the 3-sphere. Then
\[ \Delta_{X(L)}(x_1,x_2)=(x_1^2-x_1+1)(x_2^4-2x_2^3+3x_2^2-2x_2+1)\in \Bbb{Q} \xypm.\] The resulting
Alexander norm ball is given in Figure \ref{normballintro} on the left. On the other hand using the program \emph{KnotTwister} \cite{F05} we found a representation $\alpha:\pi_1(X(L))\to \mbox{GL}(\Bbb{F}_{13},2)$ such that \[ \Delta_{X(L)}^\alpha(x_1,x_2)=\Delta_1(x_1)\Delta_2(x_2)\] where $\deg(\Delta_1(x_1))=4$ and $\deg(\Delta_2(x_2))=12$. (Here
$\Bbb{F}_n$ denotes the field of $n$ elements.) Hence the twisted Alexander norm ball for $\frac{1}{2}||-||_A^\alpha$ is the shaded region given in Figure \ref{normballintro} on the right. \begin{figure}
\caption{The untwisted and the twisted Alexander norm ball of $L$.}
\label{normballintro}
\end{figure}
By Theorem \ref{mainthm} we have $||\phi||_T \ge \frac{1}{2}
||\phi||_{A}^\alpha$. It is clear from Figure \ref{normballintro} that
$\frac12||-||_A^\alpha$ gives a strictly sharper bound on the Thurston norm than $||-||_A$ does. In Section \ref{sec:hophlike} we will see that the norms $||-||_T$ and $\frac12||-||_A^\alpha$ agree on the vertices of the norm ball of $\frac12||-||_A^\alpha$. Therefore by McMullen's observation the norms agree everywhere.
Hence the shaded region in Figure \ref{normballintro} on the right is in fact the Thurston norm ball of the link $L$. We point out that it follows immediately from Theorem \ref{mainthmfib} that $(X(L),\phi)$ does not fiber over $S^1$ for any $\phi \in H^1(M;\Bbb{Z})$. See Section \ref{sec:example} for more details.
Our approach works very well in many cases, but sometimes it is difficult to find an appropriate representation. Therefore it is sometimes convenient to find lower bounds on the Thurston norm of a finite cover $\tilde{M}$ of $M$. By a result of Gabai \cite[p.~484]{Ga83} (cf. also Theorem \ref{lemmathurstong}) the Thurston norm on $\tilde{M}$ determines the Thurston norm on $M$. In many cases it is easier to find representations of $\tilde{M}$. This approach allows us to determine the Thurston norm ball of Dunfield's link \cite{Du01} (see Section \ref{sec:dunfield}). \\
{\bf Outline of the paper:} In Section \ref{sec:twisted invariants} we define twisted Alexander modules and twisted Alexander polynomials. In Section \ref{sec:bound} we define twisted Alexander norms and prove the main theorems. We quickly discuss how to compute twisted Alexander polynomials in Section \ref{sec:computation} and give examples in Section \ref{sec:example}. In Section \ref{sec:multi} we give a proof of Theorem \ref{mainthmalex} which shows the precise relationship between the twisted multivariable Alexander polynomials and the twisted one-variable Alexander polynomials. \\
{\bf Notations and conventions:} For a link $L$ in $S^3$, $X(L)$ denotes the exterior of $L$ in $S^3$. (That is, $X(L) = S^3\setminus \nu L$ where $\nu L$ is an open tubular neighborhood of $L$ in $S^3$.) An arbitrary (commutative) field is denoted by $\Bbb{F}$. $\Bbb{F}_n$ denotes the finite field of $n$ elements. We identify the group ring $\Bbb{F}[\Bbb{Z}]$ with $\F[t^{\pm 1}]$. We denote the permutation group of order $k$ by $S_k$. For a 3-manifold $M$ we use the canonical isomorphisms to identify $H^1(M;\Bbb{Z}) = \mbox{Hom}(H_1(M;\Bbb{Z}), \Bbb{Z}) = \mbox{Hom}(\pi_1(M),\Bbb{Z})$. Hence sometimes $\phi\in H^1(M;\Bbb{Z})$ is regarded as a homomorphism $\phi : \pi_1(M) \to \Bbb{Z}$ (or $\phi : H_1(M;\Bbb{Z}) \to \Bbb{Z}$) depending on the context. \\
{\bf Acknowledgments:} The authors would like to thank Stefano Vidussi and Jae Choon Cha for helpful conversations and suggestions.
\section{Twisted Alexander polynomials} \label{sec:twisted invariants}
In this section we give the definition of twisted Alexander polynomials.
\subsection{Torsion invariants} Let $R$ be a commutative Noetherian unique factorization domain (henceforth UFD). An example of $R$ to keep in mind is $\Bbb{F}[t_1^\pm, t_2^\pm, \ldots, t_n^\pm]$, a (multivariable) Laurent polynomial ring over a field $\Bbb{F}$. For a finitely generated $R$-module $A$, we can find a presentation $$ R^r \xrightarrow{P} R^s \to A \to 0 $$ since $R$ is Noetherian. Let $i\ge 0$ and suppose $s-i\le r$. We define $E_i(A)$, \emph{the $i$-th elementary ideal} of $A$, to be the ideal in $R$ generated by all $(s-i)\times (s-i)$ minors of $P$ if $s-i>0$ and to be $R$ if $s-i\le 0$. If $s-i > r$, we define $E_i(A)= 0$. It is known that $E_i(A)$ does not depend on the choice of a presentation of $A$ (cf. \cite{CF77}).
Since $R$ is a UFD there exists a unique smallest principal ideal of $R$ that contains $E_0(A)$. A generator of this principal ideal is defined to be the \emph{order of $A$} and denoted by $\operatorname{ord} (A)\in R$. The order is well-defined up to multiplication by a unit in $R$. Note that $A$ is not $R$-torsion if and only if $\operatorname{ord} (A) =0$. For more details, we refer to \cite{Hi02}.
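The following elementary example, which we include only for illustration (it is not needed in the sequel), shows how the elementary ideals and the order behave for a direct sum of cyclic modules. Let $R=\F[t^{\pm 1}]$ and $A=R/(f)\oplus R/(g)$ with nonzero $f,g\in R$. Then $A$ admits the presentation $R^2\xrightarrow{P} R^2\to A\to 0$ with $P=\left(\begin{array}{cc} f & 0 \\ 0 & g \end{array}\right)$, so \[ E_0(A)=(fg), \qquad E_1(A)=(f,g), \qquad E_i(A)=R \mbox{ for } i\ge 2.\] In particular $\operatorname{ord}(A)=fg$, while the smallest principal ideal containing $E_1(A)$ is generated by $\gcd(f,g)$.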
\subsection{Twisted Alexander invariants}\label{sectiontwialex} Let $M$ be a 3-manifold and $\psi:\pi_1(M)\to F$ a homomorphism to a free abelian group $F$. We do not demand that $\psi$ is surjective. Note that $\Lambda:=\Bbb{F}[F]$ is a commutative Noetherian UFD. Let $\alpha:\pi_1(M)\to \mbox{GL}(\Bbb{F},k)$ be a representation.
Using $\alpha$ and $\psi$, we define a left $\Bbb{Z}[\pi_1(M)]$-module structure on $\Bbb{F}^k\otimes_\Bbb{F} \Lambda=:\Lambda^k$ as follows: \[ g\cdot (v\otimes p):= (\alpha(g)\cdot v)\otimes (\psi(g)p)\] where $g\in \pi_1(M)$ and $v\otimes p \in \Bbb{F}^k\otimes_\Bbb{F} \Lambda=\Lambda^k$. Together with the natural structure of $\Lambda^k$ as a $\Lambda$--module we can view $\Lambda^k$ as a $\Bbb{Z}[\pi_1(M)]$--$\Lambda$ bi--module.
Recall there exists a canonical left $\pi_1(M)$--action on the universal cover $\tilde{M}$. We consider the chain complex $C_*(\tilde{M})$ as a right $\Bbb{Z}[\pi_1(M)]$-module by defining $\sigma \cdot g:=g^{-1}\sigma$ for a singular chain $\sigma$. For $i\ge 0$, we define \emph{the $i$-th twisted Alexander module of $(M,\psi,\alpha)$} to be $$ H_i^\alpha(M;\Lambda^k) := H_i(C_*(\tilde{M})\otimes_{\Bbb{Z}[\pi_1(M)]}\Lambda^k). $$ Since $\Lambda^k$ is a right $\Lambda$-module, twisted Alexander modules can be regarded as right $\Lambda$-modules. Since $M$ is compact and $\Lambda$ is Noetherian, these modules are finitely generated over $\Lambda$.
\begin{defn} \label{def:polynomial} The \emph{$i$-th (twisted) Alexander polynomial of $(M,\psi,\alpha)$} is defined to be $\operatorname{ord} (H_i^\alpha(M;\Lambda^k))\in \Lambda$ and denoted by $\Delta^{\alpha,i}_{M,\psi}$. When $i=1$, we drop the superscript $i$ and abbreviate $\Delta^{\alpha,i}_{M,\psi}$ by $\Delta^\alpha_{M,\psi}$, and we call it the \emph{(twisted) Alexander polynomial of $(M,\psi,\alpha)$}. \end{defn}
Twisted Alexander polynomials are well-defined up to multiplication by a unit in $\Lambda$. We drop the notation $\psi$ when $\psi$ is the natural surjection to $FH_1(M;\Bbb{Z})$. We also drop $\alpha$ when $\alpha$ is the trivial representation to $\mbox{GL}(\Bbb{Q},1)$ and drop $M$ in the case that $M$ is clear from the context. If $\psi$ is a homomorphism to $\Bbb{Z}$ then we identify $\Bbb{F}[\Bbb{Z}]$ with $\F[t^{\pm 1}]$ and we write $\Delta_{M,\psi}^{\alpha,i}(t)\in \F[t^{\pm 1}]$. The above homological definition of twisted Alexander polynomials was first introduced by Kirk and Livingston \cite{KL99}.
\section{Twisted Alexander norms as lower bounds on the Thurston norm} \label{sec:bound}
In this section we define twisted Alexander norms, which generalize the Alexander norm of McMullen \cite{Mc02} and the torsion norm of Turaev \cite{Tu02a}. We show that twisted Alexander norms give lower bounds on the Thurston norm and that they give fibering obstructions of 3-manifolds.
\subsection{Twisted Alexander norm} \label{sec:twisted alexander norm} Following an idea of McMullen \cite{Mc02} we now use the twisted multivariable Alexander polynomial corresponding to $\psi:\pi_1(M)\to FH_1(M;\Bbb{Z})$ to define a seminorm on $H^1(M;\Bbb{R})$. Let $\alpha:\pi_1(M)\to \mbox{GL} (\Bbb{F},k)$ be a representation. If $\Delta_{M}^{\alpha}=0$ then we set
$||\phi||_{A}^{\alpha}=0$ for all $\phi\in H^1(M;\Bbb{R})$. Otherwise we write $\Delta_{M}^{\alpha}=\sum a_if_i$ for $a_i\in \Bbb{F}$ and $f_i \in FH_1(M;\Bbb{Z})$. Given $\phi \in H^1(M;\Bbb{R})$ we then define
\[ ||\phi||_{A}^{\alpha} :=\sup \phi(f_i-f_j),\] with the supremum over $(f_i, f_j)$ such that $a_ia_j\ne 0$. Clearly this defines a seminorm on $H^1(M;\Bbb{R})$ which we call the \emph{twisted Alexander norm of $(M,\alpha)$}. This is a generalization of the Alexander norm introduced by McMullen \cite{Mc02}. Indeed, the Alexander norm is the same as the twisted Alexander norm corresponding to the trivial representation $\alpha :
\pi_1(M) \to \mbox{GL} (\Bbb{Q},1)$. In this case we just write $||-||_{A}$. Twisted Alexander norms also generalize the torsion norm of Turaev \cite{Tu02a}.
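To illustrate the definition, consider the following toy example, not attached to any particular 3-manifold: suppose $FH_1(M;\Bbb{Z})$ is free abelian of rank 2 with basis $x,y$ and $\Delta_M^{\alpha}=1+x+y$. For $\phi\in H^1(M;\Bbb{R})$ with $\phi(x)=p$ and $\phi(y)=q$ the differences $\phi(f_i-f_j)$ are $\pm p$, $\pm q$ and $\pm(p-q)$, hence \[ ||\phi||_A^{\alpha}=\max\{|p|,\,|q|,\,|p-q|\},\] and the corresponding norm ball is the hexagon with vertices $\pm(1,0)$, $\pm(0,1)$ and $\pm(1,1)$.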
\subsection{Lower bounds on the Thurston norm}
Recall that McMullen showed that in the case $b_1(M)>1$ the Alexander norm $||-||_A$ is a lower bound on the Thurston norm (see Theorem \ref{thmalexnorm}). We extend this result to twisted Alexander norms.
\begin{theorem}[{\bf Main Theorem 1}]\label{mainthm}
Let $M$ be a 3-manifold with $b_1(M)>1$ whose boundary is empty or consists of tori. Let $\alpha:\pi_1(M)\to \GL(\F,k)$ be a representation. Then for the corresponding twisted Alexander norm $||-||^\alpha_A$ we have
\[ ||\phi||_T \ge \frac1k ||\phi||^\alpha_A \] for all $\phi\in H^1(M;\Bbb{R})$. \end{theorem} \noindent This theorem generalizes McMullen's theorem (Theorem \ref{thmalexnorm}). Turaev \cite{Tu02a} proved this theorem in the special case of abelian representations.
\begin{theorem}[{\bf Main Theorem 2}]\label{mainthmfib} Let $M$ be a 3-manifold with $b_1(M)>1$ whose boundary is empty or consists of tori such that $M\ne S^1\times D^2$ and $M\ne S^1\times S^2$ . Let $\alpha:\pi_1(M)\to \GL(\F,k)$ be a representation. If $\phi\in H^1(M;\Bbb{Z})$ is such that $(M,\phi)$ fibers over $S^1$,
then
\[ ||\psi||_T = \frac1k ||\psi||^\alpha_A \]
for all $\psi\in C(\phi)$. \end{theorem}
\noindent The idea of the proofs of the main theorems is to combine the lower bounds for one-variable Alexander polynomials from \cite{FK05} with Theorem \ref{mainthmalex}. In \cite{FK05} we proved the following theorem.
\begin{theorem} \cite[Theorem~1.1~and~Theorem~1.2]{FK05} \label{mainthmfk05} Let $M$ be a 3-manifold whose boundary is empty or consists of tori. Let $\phi \in H^1(M)$ be nontrivial and $\alpha:\pi_1(M)\to \GL(\F,k)$ a representation such that $\Delta_{\phi}^{\a}(t) \ne 0$. Then $\Delta_{\phi}^{\a,i}(t)\ne 0$ for $i=0,2$ and \[ ||\phi||_T \geq
\frac{1}{k}\big(\deg\left(\Delta_{\phi}^{\a}(t)\right)- \deg\left(\Delta_{\phi}^{\a,0}(t)\right) -\deg\left(\Delta_{\phi}^{\a,2}(t)\right) \big).\] Furthermore, if $(M,\phi)$ fibers over $S^1$ and if $M\ne S^1\times D^2$ and $M\ne S^1\times S^2$, then equality holds. \end{theorem}
We also need the following theorem to prove the main theorems. This theorem clarifies the precise relationship between the twisted multivariable Alexander polynomial and the twisted one-variable Alexander polynomials of a 3-manifold.
\begin{theorem}\label{mainthmalex} Let $M$ be a 3-manifold with $b_1(M)>1$ whose boundary is empty or consists of tori. Let $\alpha : \pi_1(M) \to \GL(\F,k)$ be a representation. Let $\phi\in H^1(M;\Bbb{Z})$ be nontrivial. Then \[ \Delta_{\phi}^{\a}(t)=\phi(\Delta^\alpha_M)\,\Delta_{\phi}^{\a,0}(t)\,\Delta_{\phi}^{\a,2}(t). \] Furthermore, if $\phi(\Delta^\alpha_M)\ne 0$, then $\Delta_{\phi}^{\a,0}(t)\ne 0$ and $\Delta_{\phi}^{\a,2}(t)\ne 0$, and hence $\Delta_{\phi}^{\a}(t)\ne 0$. \end{theorem}
\noindent The idea of the proof of Theorem \ref{mainthmalex} is to go from the twisted multivariable Alexander polynomials to Reidemeister torsion which is functorial, and then to go back to the twisted one-variable Alexander polynomials. The proof of Theorem \ref{mainthmalex} is postponed to Section \ref{sec:funtoriality}. Now we give a proof of Theorem \ref{mainthm}.
\begin{proof}[Proof of Theorem \ref{mainthm}]
If $\Delta_M^\alpha = 0$, then $||\phi||^\alpha_A = 0$ for all $\phi\in H^1(M;\Bbb{R})$, hence the theorem holds. We now consider the case $\Delta_M^\alpha \ne 0$.
First suppose that $\phi\in H^1(M;\Bbb{Z})$ is nontrivial and lies inside the cone on an open top-dimensional face of the twisted Alexander norm ball. Write $\Delta_M^\alpha = \sum a_i f_i$ where $a_i \in \Bbb{F} \setminus \{0\}$ and $f_i \in FH_1(M;\Bbb{Z})$. We have \[ \phi(\Delta_M^\alpha) = \sum a_i t^{\phi(f_i)} \] in $\F[t^{\pm 1}]$. Since $\phi$ is inside the cone on an open top-dimensional face of the twisted Alexander norm ball, the highest and lowest values of $\phi(f_i)$ occur only once in the above equation. Therefore $\phi(\Delta_M^\alpha) \ne 0$ and \[
\deg \left(\phi(\Delta_M^\alpha)\right) =||\phi||^\alpha_A. \] By Theorem \ref{mainthmalex} we have $\Delta_{\phi}^{\a}(t) \ne 0$, $\Delta_{\phi}^{\a,0}(t) \ne 0$, $\Delta_{\phi}^{\a,2}(t)\ne 0$ and \begin{equation} \label{equn1} \deg \left(\Delta_\phi^\alpha(t) \right) =
||\phi||^\alpha_A+\deg\left(\Delta_{\phi}^{\a,0}(t)\right)+ \deg\left(\Delta_{\phi}^{\a,2}(t)\right). \end{equation}
Since $\Delta_{\phi}^{\a}(t)\ne 0$ we get by Theorem \ref{mainthmfk05} that
\begin{equation} \label{equn2} ||\phi||_T \geq \frac{1}{k}\left( \deg\left(\Delta_{\phi}^{\a}(t)\right)- \deg\left(\Delta_{\phi}^{\a,0}(t)\right)-\deg\left(\Delta_{\phi}^{\a,2}(t)\right)\right). \end{equation}
Combining the equality (\ref{equn1}) with the inequality (\ref{equn2}) we clearly get $||\phi||_T \geq \frac{1}{k} ||\phi||^\alpha_A$.
This proves Theorem \ref{mainthm} for all $\phi\in H^1(M;\Bbb{Z})$
inside the cone on an open top-dimensional face of the twisted Alexander norm ball. By homogeneity and continuity we get that in fact $||\phi||_T \ge \frac{1}{k} ||\phi||^\alpha_A$ for all $\phi\in H^1(M;\Bbb{R})$. \end{proof}
For the proof of Theorem \ref{mainthmfib} we need the following theorem proved by Thurston \cite{Th86} and which can also be found in \cite[Theorem 9, p.~259]{Oe86}. \begin{theorem}[Thurston] \label{fiberface} Let $M$ be a 3-manifold. If $\phi\in H^1(M;\Bbb{Z})$ is such that $(M,\phi)$ fibers over $S^1$, then $\phi$ lies in the cone on a top-dimensional open face of the Thurston norm ball. Furthermore, if we denote this cone by $C(\phi)$, then $(M,\psi)$ fibers over $S^1$ for all $\psi \in C(\phi)\cap H^1(M;\Bbb{Z})$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{mainthmfib}] Suppose $\phi\in H^1(M;\Bbb{Z})$ is nontrivial and $(M,\phi)$ fibers over $S^1$. Then the inequality in Theorem \ref{mainthmfk05}, and hence by the proof of Theorem \ref{mainthm} the inequality in Theorem \ref{mainthm}, become equalities. Furthermore, by Theorem \ref{fiberface}, $\phi$ lies in the cone $C(\phi)$ on a top-dimensional open face of the Thurston norm ball, and $(M,\psi)$ fibers over $S^1$ for any $\psi\in C(\phi)\cap H^1(M;\Bbb{Z})$. In particular we have \[
||\psi||_T = \frac1k
||\psi||^\alpha_A \] for every nontrivial $\psi\in C(\phi)\cap H^1(M;\Bbb{Z})$. By homogeneity and continuity it follows that \[
||\psi||_T=\frac{1}{k}||\psi||_A^{\alpha} \] for all $\psi\in C(\phi)$. \end{proof}
\section{Computation of twisted Alexander norms} \label{sec:computation}
Let $M$ be a 3-manifold and $\psi:\pi_1(M)\to F$ a homomorphism to a free abelian group $F$ such that the induced map $H_1(M;\Bbb{Q})\to F\otimes_\Bbb{Z} \Bbb{Q}$ is surjective. (In this case we say $\psi$ is \emph{rationally surjective}.) Given a representation $\alpha:\pi_1(M)\to \GL(\F,k)$ we quickly outline how to compute $\Delta_{M,\psi}^{\alpha}$ and hence the twisted Alexander norm.
Denote the universal cover of $M$ by $\tilde{M}$. If $p$ is a point in $M$, then denote the preimage of $p$ under the map $\tilde{M}\to M$ by $\tilde{p}$. Then a presentation matrix for $$ H_i^\alpha(M, p;\Bbb{F}^k[F]):=H_i(C_*(\tilde{M},\tilde{p})\otimes_{\Bbb{Z}[\pi_1(M)]}\Bbb{F}^k[F]) $$ can be found using Fox calculus from a presentation of the group $\pi_1(M)$.
We also refer to the literature \cite{Fo53,Fo54,CF77}, but we point out that we view $C_*(\tilde{M})$ as a \emph{right} $\Bbb{Z}[\pi_1(M)]$-module, whereas the literature normally views $C_*(\tilde{M})$ as a \emph{left} $\Bbb{Z}[\pi_1(M)]$-module (cf. also \cite[Section~6]{Ha05}).
By using the long exact sequence of the twisted homology modules of the pair of spaces $(M,p)$, one can obtain the following short exact sequence of $\Bbb{F}[F]$-modules:
$$ 0\to H_1^\alpha(M;\Bbb{F}^k[F]) \to H_1^\alpha(M,p;\Bbb{F}^k[F]) \to A \to 0 $$
where $A=\mbox{Ker}\{H_0^\alpha(p;\Bbb{F}^k[F]) \to H_0^\alpha(M;\Bbb{F}^k[F])\}$. Note that $H_0^\alpha(p;\Bbb{F}^k[F])\cong \Bbb{F}^k[F]$ whereas $H_0^\alpha(M;\Bbb{F}^k[F])$ is a finite-dimensional $\Bbb{F}$-vector space by the following well-known lemma. \begin{lemma} \label{lemmah0m}
Let $X$ be a 3-manifold, $\psi:\pi_1(X) \to F$ a rationally surjective map with $F$ a free abelian group, and $\alpha:\pi_1(X)\to \GL(\F,k)$ a representation. Then \[ H_i^\alpha(X;\F^k[F])=H_i(\mbox{Ker}(\psi);\F^k)^n, \quad i=0,1,\]
where $n = |F/\mbox{Im}(\psi)|$. \end{lemma}
It follows that $A$ is an $\Bbb{F}[F]$-module of rank $k$. (For the notion of rank over $\Bbb{F}[F]$ we refer to the first paragraph in Section \ref{sec:calc}.) If $H_1^\alpha(M;\F^k[F])$ is $\Bbb{F}[F]$-torsion, then by \cite[Theorem 3.4]{Hi02} $$ \Delta^\alpha_{M,\psi} = \operatorname{ord}(E_0(H_1^\alpha(M;\F^k[F])))= \operatorname{ord} (E_k(H_1^\alpha(M,p;\F^k[F]))), $$ which can be computed using the presentation matrix for $H_1^{\alpha}(M,p;\Bbb{F}^k[F])$. If $H_1^\alpha(M;\Bbb{F}^k[F])$ is not $\Bbb{F}[F]$-torsion, then $E_k(H_1^\alpha(M,p;\F^k[F])) = 0$ and $\Delta^\alpha_{M,\psi} = \operatorname{ord} (E_k(H_1^\alpha(M,p;\F^k[F]))) = 0$.
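To illustrate the procedure in the simplest (untwisted) case, consider the following standard example, included only for the reader's convenience. Let $K$ be the trefoil knot, $\alpha$ the trivial one-dimensional representation and $\psi$ the abelianization $\pi_1(X(K))\to \Bbb{Z}$ sending both generators to $t$. From the standard presentation $\pi_1(X(K))=\langle a,b \mid abab^{-1}a^{-1}b^{-1}\rangle$ the Fox derivatives of the relator $r=abab^{-1}a^{-1}b^{-1}$ are \[ \frac{\partial r}{\partial a}=1+ab-abab^{-1}a^{-1}, \qquad \frac{\partial r}{\partial b}=a-abab^{-1}-abab^{-1}a^{-1}b^{-1}, \] which under $a,b\mapsto t$ become $1-t+t^2$ and $-(1-t+t^2)$. The resulting $1\times 2$ matrix presents $H_1(X(K),p;\Bbb{Q}[t^{\pm 1}])$, so (with $k=1$) $E_1$ is generated by $1-t+t^2$ and we recover the familiar Alexander polynomial $\Delta_K(t)=1-t+t^2$.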
In the case that $\partial M\ne \emptyset$ we can compute $\Delta_{M,\psi}^{\alpha}$ from Wada's invariant, which tends to be easier to compute. We refer to \cite{Wa94, KL99} for more details.
\section{Examples for twisted Alexander norms} \label{sec:example}
In this section, using twisted Alexander norms, we completely determine the Thurston norm of two examples: certain Hopf-like links and Dunfield's link \cite{Du01}.
\subsection{Hopf-like links} \label{sec:hophlike} In this section, for a link $L$ (possibly with one component), we write $\Delta_L^\alpha$ for $\Delta_{X(L)}^\alpha$. Consider a link $L$ as in Figure \ref{linkk1k2}. We will call these links \emph{Hopf-like}. Denote the meridian of $K_1$ by $\mu_1$ and the meridian of $K_2$ by $\mu_2$. Denote the corresponding elements in $H_1(X(L);\Bbb{Z})$ by $x_1$ and $x_2$. We then identify $\Bbb{Z}[H_1(X(L);\Bbb{Z})]$ with $\Bbb{Z} \xypm$.
Let $D_1$ (respectively, $D_2$) be the annulus cutting through $L$ just below $K_1$ (respectively, above $K_2$). Denote the three components of $X(L)$ cut along $D_1\cup D_2$ by $P_1, P_0, P_2$ (see Figure \ref{linkcut} below). Note that $P_i \cong X(K_i)$, $i=1,2$. In particular any representation $\alpha:\pi_1(X(L))\to \GL(\F,k)$ induces representations $\pi_1(X(K_i))\to \GL(\F,k), i=1,2,$ which we also denote by $\alpha$.
\begin{figure}
\caption{The link $L$ and the link complement cut along annuli $D_1$ and $D_2$}
\label{linkcut}
\label{hopflink}
\label{linkk1k2}
\end{figure}
\begin{proposition}\label{propalexlink} Let $\alpha:\pi_1(X(L))\to \GL(\F,k)$ be a representation. Assume $\Delta_{K_i}^\alpha(x_i)\ne 0$ for $i=1,2$. Then \[ \Delta_L^\alpha(x_1,x_2)=\Delta_1(x_1)\Delta_2(x_2)\in \Bbb{F} [x_1^{\pm 1},x_2^{\pm 1}]\] where \[ \Delta_i(x_i)=\Delta_{K_i}^\alpha(x_i)\frac{\det(\alpha(\mu_i)x_i-\operatorname{id})}{\Delta_{K_i}^{\alpha,0}(x_i)} \in \Bbb{F}[x_i^{\pm 1}], \phantom{a}i=1,2. \] In particular \[ \deg(\Delta_i(x_i))=\deg\left(\Delta_{K_i}^\alpha(x_i)\right)+k-\deg\left(\Delta_{K_i}^{\alpha,0}(x_i)\right), \phantom{a} i=1,2. \] \end{proposition}
\begin{proof} First note that $D_i$ is homotopy equivalent to the circle for $i=1,2$, hence it follows from Lemma \ref{lemmah0m} that $H_1^\alpha(D_i;\F^k\xypm)= 0$. We now consider the Mayer-Vietoris sequence of $X(L)=P_1\cup_{D_1}P_0\cup_{D_2}P_2$. \[ \begin{array}{rcccccccccccccc} 0&\hspace{-0.1cm} \to \hspace{-0.1cm} & \bigoplus\limits_{i=0}^2 H_1^\alpha(P_i;\F^k \xypm)&\hspace{-0.1cm} \to\hspace{-0.1cm} & H_1^\alpha(X(L);\F^k \xypm)&\hspace{-0.1cm} \to\hspace{-0.1cm} &\\ \bigoplus\limits_{i=1}^2 H_0^\alpha(D_i;\F^k \xypm)&\hspace{-0.1cm} \to\hspace{-0.1cm} &\bigoplus\limits_{i=0}^2 H_0^\alpha(P_i;\F^k \xypm) &\hspace{-0.1cm} \to\hspace{-0.1cm} & H_0^\alpha(X(L);\F^k \xypm)&\hspace{-0.1cm} \to\hspace{-0.1cm} & 0.\end{array} \]
By \cite[Lemma 5, p.~76]{Le67} for any exact sequence of $\F \xypm$--torsion modules the alternating product of the respective orders in $\F \xypm$ equals one. The proposition now follows immediately from the following computations.
By Lemma \ref{lem:delta03} we have that $\operatorname{ord}(H_0^\alpha(X(L);\F^k \xypm))=1$. We compute the orders of the twisted Alexander modules of $P_1$ and $P_2$. Since $P_i\cong X(K_i)$, $i=1,2$, the natural surjection $\psi : \Bbb{Z}[\pi_1(X(L))]\to \Bbb{Z} \xypm$ restricted to $P_i$ only has values in $\Bbb{Z} \xipm$. Thus we get \[ \begin{array}{rcl} H_j^{\alpha}(P_1;\F^k \xypm) &\cong &H_j^\alpha(X(K_1);\F^k \xpm)\otimes_{\F} \F [x_2^{\pm 1}]\mbox{ for all } j, \mbox{ and }\\ H_j^{\alpha}(P_2;\F^k \xypm) &\cong &H_j^\alpha(X(K_2);\F^k \ypm)\otimes_{\F} \F [x_1^{\pm 1}]\mbox{ for all } j. \end{array} \] Therefore \[ \operatorname{ord}\left(H_j^{\alpha}(P_i;\F^k [x_1^{\pm 1},x_2^{\pm 1}])\right) = \Delta^{\alpha,j}_{K_i}(x_i) \] for all $j\ge 0$ and $i=1,2$.
Let us consider $P_0$. $P_0$ is homotopy equivalent to the torus and $\pi_1(P_0)$ is the free abelian group spanned by $\mu_1$ and $\mu_2$. By Lemma \ref{lemmah0m} we have $H_1^\alpha(P_0;\F^k \xypm)=0$. Therefore $\operatorname{ord}(H_1^\alpha(P_0;\F^k \xypm))=1$. Furthermore the argument in the proof of Lemma \ref{lem:delta03} shows that $\operatorname{ord}(H_0^\alpha(P_0;\F^k \xypm))=1$.
Now consider $D_1$ and $D_2$.
Using the cellular chain complex of the circle, one easily sees that \[ \operatorname{ord}(H_0^\alpha(D_i;\F^k \xypm))=\det(\alpha(\mu_i)x_i-\operatorname{id}) \] for $i=1,2$. \end{proof}
\begin{corollary} \label{coralexlink} For the trivial representation $\alpha : \pi_1(X(L)) \to GL(\Bbb{F},1)$, \[ \Delta^\alpha_L(x_1,x_2)=\Delta^\alpha_{K_1}(x_1) \Delta^\alpha_{K_2}(x_2). \] \end{corollary} \begin{proof} Since $\alpha$ is a one-dimensional trivial representation, \[ H_0^\alpha(X(K_1);\F\xpm)=\F\xpm/(x_1-1). \] Hence $\Delta^{\alpha,0}_{K_1}(x_1)=x_1-1$. Also $\det(\alpha(\mu_1)x_1-\operatorname{id}) = x_1 - 1 = \Delta^{\alpha,0}_{K_1}(x_1)$. Similarly $\det(\alpha(\mu_2)x_2-\operatorname{id}) = \Delta^{\alpha,0}_{K_2}(x_2)=x_2-1$. Now use Proposition \ref{propalexlink}. \end{proof}
\begin{corollary} \label{corvertex} Let $d_i:=\deg(\Delta_i(x_i))$, $i=1,2$, with $\Delta_i$ as in Proposition
\ref{propalexlink}. Then the norm ball of $\frac1k||-||^\alpha_A$ has exactly four extreme vertices, namely $(\pm \frac{k}{d_1},0)$ and $(0,\pm \frac{k}{d_2})$. \end{corollary} \noindent The above corollary easily follows from Proposition \ref{propalexlink}.
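For the reader's convenience we spell out the argument. Since $\Delta_L^\alpha(x_1,x_2)=\Delta_1(x_1)\Delta_2(x_2)$, the Newton polygon of $\Delta_L^\alpha$ is a rectangle with side lengths $d_1$ and $d_2$, and all four corner coefficients are nonzero, being products of the extreme coefficients of $\Delta_1$ and $\Delta_2$. Hence for $\phi\in H^1(X(L);\Bbb{R})$ with coordinates $(p,q)$ in the basis dual to $\{x_1,x_2\}$ we get \[ ||\phi||^\alpha_A = d_1|p|+d_2|q|, \] so the norm ball of $\frac1k||-||^\alpha_A$ is the set $\{(p,q) \,:\, d_1|p|+d_2|q|\le k\}$, a quadrilateral with the four claimed extreme vertices.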
Now consider the Hopf-like link $L$ in Figure \ref{link11n73}. This consists of the knot $K_1$, the trefoil, and $K_2=11_{440}$ (here we use the \emph{knotscape} notation). By Corollary \ref{coralexlink} the usual multivariable Alexander polynomial with rational coefficients equals \[ \Delta_L(x_1,x_2)=\Delta_{K_1}(x_1)\Delta_{K_2}(x_2)=(x_1^2-x_1+1)(x_2^4-2x_2^3+3x_2^2-2x_2+1)\in \Bbb{Q} \xypm.\]
\begin{figure}
\caption{Link $L$ and knot $K_2=11_{440}$ with meridians.}
\label{link11n73}
\end{figure} Let $\{\phi_1,\phi_2\} \subset H^1(X(L);\Bbb{Z}) = \hom(H_1(X(L);\Bbb{Z}),\Bbb{Z})$ be the dual basis to $\{x_1,x_2\}$. It is known that $\mbox{genus}(K_1)=1$
and $\mbox{genus}(K_2)=3$. We can arrange the minimal Seifert surfaces such that they are punctured once by the other component. It follows that $||\phi_1||_T \leq 2\, \mbox{genus}(K_1)=2$ and $||\phi_2||_T \leq 2\, \mbox{genus}(K_2) = 6$. In fact it is easy to see that equality holds in each case since each surface dual to $\phi_1$ (respectively $\phi_2$) becomes a Seifert surface for $K_1$ (respectively $K_2$) after adding one or more disks. On the other hand it follows from the calculation of $\Delta_L(x_1,x_2)$ that
$||\phi_1||_A=2$ and $||\phi_2||_A=4$. Therefore the Alexander norm and the Thurston norm do not agree for $L$. We also note that since $H_1(X(L);\Bbb{Z})$ is torsion-free, Turaev's torsion norm \cite{Tu02a} agrees with the Alexander norm.
The fundamental group $\pi_1(X(K_2))$ is generated by the meridians $a,b,\dots,k$ of the segments in the knot diagram in Figure \ref{link11n73}. Using the program \emph{KnotTwister} \cite{F05} we found the homomorphism $\varphi:\pi_1(X(K_2))\to S_3$ given by
\[ \begin{array}{rclrclrclrclrclrcl}
A&=&(23),& B&=&(12),& C&=&(13),& D&=&(23), &E&=&(23),& F&=&(12),\\
G&=&(13),& H&=&(23),& I&=&(12),& J&=&(13),& K&=&(23),&\end{array} \]
where we use the cycle notation. The generators of $\pi_1(X(K_2))$ are sent to the elements in $S_3$ given by the cycle with the corresponding capital letter. We then consider $\alpha:=\alpha(\varphi):\pi_1(X(K_2))\xrightarrow{\varphi} S_3\to \mbox{GL}(V_2)$ where \[
V_2:=\{(v_1,v_2,v_3)\in \Bbb{F}_{13}^{3} | \sum_{i=1}^{3} v_i =0 \}. \] Clearly $\dim_{\F_{13}}(V_2)=2$ and $S_{3}$ acts on $V_2$ by permuting the coordinates. With \emph{KnotTwister} we compute \[ \Delta^\alpha_{K_2}(x_2)=1+3x_2^2+12x_2^4+x_2^6+10x_2^8+12x_2^{10} \in \Bbb{F}_{13}[x_2^{\pm 1}]\] and $H_0^\alpha\left(X(K_2);\Bbb{F}_{13}^2[x_2^{\pm 1}]\right)=0$. Hence $\Delta^{\alpha,0}_{K_2}(x_2)=1$.
Denote the homomorphism $\alpha:\pi_1(X(L))\to \pi_1(X(K_2))\to \mbox{GL}(V_2)$ by $\alpha$ as well. Here the map $\pi_1(X(L)) \to \pi_1(X(K_2))$ is induced from the inclusion. This induces a representation of $\pi_1(X(K_1))$ as in the proof of Proposition \ref{propalexlink}, and we also denote it by $\alpha$. In fact, one easily sees that $\alpha : \pi_1(X(K_1)) \to \mbox{GL}(V_2)$ is trivial. This implies that $\Delta^\alpha_{K_1}(x_1) = (\Delta_{K_1}(x_1))^2 = (1-x_1 + x_1^2)^2$ and $\Delta^{\alpha,0}_{K_1}(x_1)= (x_1-1)^2$. By Proposition \ref{propalexlink} we have \[ \Delta_L^\alpha(x_1,x_2)=\Delta_{1}^\alpha(x_1)\cdot \Delta^\alpha_{2}(x_2) \] where \[ \deg(\Delta_1^\alpha(x_1))=2\, \deg\left(\Delta_{K_1}(x_1)\right)+2-2 = 4 \] and \[ \deg(\Delta_2^\alpha(x_2))=\deg\left(\Delta^\alpha_{K_2}(x_2)\right)+2-0 = 12. \] Hence the twisted Alexander norm ball corresponding to
$\frac{1}{2}||-||_A^\alpha$ has exactly four extreme vertices $(\pm\frac12,0)$ and $(0,\pm\frac16)$ by Corollary
\ref{corvertex}. Since $||\phi_1||_T = 2$ and $||\phi_2||_T = 6$, the norms $||\phi||_T$ and $\frac{1}{2} ||\phi||_{A}^\alpha$ agree at the extreme vertices of the norm ball of $\frac12||-||_{A}^\alpha$. Note that by Theorem \ref{mainthm} we have $||\phi||_T \ge
\frac{1}{2} ||\phi||_{A}^\alpha$. Since the norms $||\phi||_T$ and
$\frac{1}{2} ||\phi||_{A}^\alpha$ agree at all of the extreme vertices of the norm ball of $\frac12 ||-||_{A}^\alpha$, they agree everywhere by convexity. Therefore the shaded region on the right in Figure \ref{normball} is the Thurston norm ball of the link $L$.
\begin{figure}
\caption{The untwisted and the twisted Alexander norm ball of $L$.}
\label{normball}
\end{figure}
In Figure \ref{normball} on the right the closed region bounded by the dashed polygon is the Alexander norm ball. If $(X(L),\phi)$ fibers over $S^1$ for some $\phi\in H^1(X(L);\Bbb{Z})$ then it follows from Theorem \ref{mainthmfib} that the (usual) Alexander norm and the Thurston norm agree on the cone on a top-dimensional face of the Thurston norm ball. Figure \ref{normball} shows that the Alexander norm and the Thurston norm agree only for a multiple of $\phi_1$. Hence $(X(L),\phi)$ does not fiber over $S^1$ for any $\phi \in H^1(X(L);\Bbb{Z})$. We state these results in the proposition below. \begin{proposition} \label{thmhophlikelink} The Thurston norm ball of $X(L)$ is the shaded region on the right in Figure \ref{normball}. Furthermore, $(X(L), \phi)$ does not fiber over $S^1$ for any $\phi \in H^1(X(L);\Bbb{Z})$. \end{proposition}
There exist 36 knots with 12 crossings or less such that $2\, \mbox{genus}(K)>\deg(\Delta_K(t))$. In all but three cases we found representations similar to the above such that the Thurston norm bound from Theorem \ref{mainthmfk05} equals the Thurston norm of $X(K)$. Let $L$ be the Hopf-like link as in Figure \ref{hopflink} with $K_1$ any knot such that $2\, \mbox{genus}(K_1)=\deg(\Delta_{K_1}(t))$ and $K_2$ any of the 33 knots mentioned above. In this case the argument above can be used to show that twisted Alexander norms completely determine the Thurston norm ball of $X(L)$ and it is always strictly smaller than the Alexander norm ball. \\
Now consider the case with $K_1$ the unknot and $K_2=11_{440}$. We use the same representation as above. In this case the norm ball for $\frac{1}{2}||-||_A^\alpha$ is given in Figure \ref{normball2}. The norm ball is a horizontal infinite strip, hence noncompact. \begin{figure}
\caption{Thurston norm ball of $L$.}
\label{normball2}
\end{figure}
To show that $\frac{1}{2}||-||_A^\alpha=||-||_T$ it is enough to show that for every $\phi=(n,\pm 1)$, $n\in \Bbb{Z}$, there exists a connected dual surface $S$ with $\chi(S)=-6$. Let $S$ be a Seifert surface of genus 3 for $K_2$ which intersects $K_1$ just once. By deleting a disk from $S$ we get a surface $S'$ which is disjoint from $K_1$. The surface $S'$ is dual to $\phi = (0,1)$. We can arrange $S'$ so that its two boundary components are as close to each other as we wish. Now take a short path from one boundary component of $S'$ to the other boundary component. Cut $S'$ along that path and reglue the cut parts together after giving $n$ full twists. The resulting surface is dual to $\phi=(n,1)$ and has Euler characteristic $-6$. Hence the Thurston norm ball in this case is the shaded (infinite) strip in Figure \ref{normball2}.
\subsection{Dunfield's example} \label{sec:dunfield} McMullen asked whether for a fibered manifold the Thurston norm and the Alexander norm agree everywhere. To answer this question Dunfield \cite{Du01} considered the link $L$ in Figure \ref{dunfield}.
\begin{figure}
\caption{Dunfield's example.}
\label{dunfield}
\end{figure}
Denote the knotted component by $K_1$ and the unknotted component by $K_2$. Let $x,y\in H_1(X(L);\Bbb{Z})$ be the elements represented by meridians of $K_1$ and $K_2$, respectively. Then the Alexander polynomial equals
\[ \Delta_{X(L)}=xy-x-y+1 \in \Bbb{Z}[H_1(X(L);\Bbb{Z})]=\Bbb{Z}[x^{\pm 1},y^{\pm 1}].\] We consider $H^1(X(L);\Bbb{Z})$ with the dual basis corresponding to $\{x,y\}\in H_1(X(L);\Bbb{Z})$. The Alexander norm ball is given in Figure \ref{dunfieldanorm}.
\begin{figure}
\caption{Alexander norm ball for Dunfield's link.}
\label{dunfieldanorm}
\end{figure}
Dunfield \cite{Du01} showed that $(X(L),\phi)$ fibers over $S^1$ for all $\phi\in H^1(X(L);\Bbb{Z})$ in the cones on the two open faces of the Alexander norm ball with vertices $(-\frac{1}{2},\frac{1}{2}), (0,1)$ respectively $(0,-1), (\frac{1}{2},-\frac{1}{2})$. Dunfield used the Bieri-Neumann-Strebel (BNS) invariant (see \cite{BNS87}) to show that the Alexander norm and the Thurston norm do not agree for the 3-manifold $X(L)$. We will go one step further and completely determine the Thurston norm of $X(L)$.
We did not find a representation of $\pi_1(X(L))$ for which we can compute the twisted Alexander polynomial and which determines the Thurston norm. Therefore we study the Thurston norm of a 2-fold cover of $X(L)$ for which it is easier to find a representation.
The following theorem by Gabai shows the relationship between the Thurston norm of $X(L)$ and that of a finite cover of $X(L)$.
\begin{theorem} \label{lemmathurstong} \cite[p.~484]{Ga83} Let $M$ be a 3-manifold and $\alpha:\pi_1(M)\to G$ a homomorphism to a finite group $G$. Denote the induced $G$-cover of $M$ by $M_G$. Let $\phi\in H^1(M;\Bbb{Z})$ be nontrivial and denote the induced map $H_1(M_G;\Bbb{Z})\to H_1(M;\Bbb{Z})\to \Bbb{Z}$ by $\phi_G$, which can be regarded as an element in $H^1(M_G;\Bbb{Z})$. Then $\phi_G$ is nontrivial and
\[ |G|\cdot||\phi||_{T,M}= ||\phi_G||_{T,M_G}.\] \end{theorem} \noindent Thus to determine the Thurston norm of $M$, we only need to determine the Thurston norm of $M_G$. For this purpose, we generalize twisted Alexander norms and the main theorems a little bit further as follows.
Let $M$ be a 3-manifold and $\psi:\pi_1(M)\to F$ a homomorphism to a free abelian group; we do not demand that $\psi$ is surjective. We define a seminorm on $\mbox{Hom} (F,\Bbb{R})$. Note that if $F = FH_1(M;\Bbb{Z})$, then $\mbox{Hom} (F,\Bbb{R}) \cong H^1(M;\Bbb{R})$. Let $\alpha:\pi_1(M)\to \mbox{GL} (\Bbb{F},k)$ be a representation. If $\Delta_{M,\psi}^{\alpha}=0\in \Bbb{F}[F]$ then we set
$||\phi||_{A,\psi}^{\alpha}=0$ for all $\phi\in \mbox{Hom}(F,\Bbb{R})$. Otherwise we write $\Delta_{M,\psi}^{\alpha}=\sum a_if_i$ for $a_i\in \Bbb{F}$ and $f_i \in F$. Given $\phi \in \mbox{Hom}(F,\Bbb{R})$, we define \emph{the (generalized) twisted Alexander norm of $(M,\psi,\alpha)$} to be
\[ ||\phi||_{A,\psi}^{\alpha} :=\sup \phi(f_i-f_j)\] with the supremum over $(f_i, f_j)$ such that $a_ia_j\ne 0$. If we consider the natural surjection
$\psi:\pi_1(M)\to FH_1(M;\Bbb{Z})$, then clearly $||-||_{A,\psi}^{\alpha}=||-||_A^{\alpha}$. Note that
$||-||_{A,\psi}^{\alpha}$ is a seminorm on $\hom(F,\Bbb{R})$. The following theorem generalizes Theorem \ref{mainthm} and Theorem \ref{mainthmfib}. The proof is almost identical.
\begin{theorem} \label{thmgeneral} Let $M$ be a 3-manifold whose boundary is empty or consists of tori. Let $\alpha:\pi_1(M)\to \GL(\F,k)$ be a representation. Let $\psi:\pi_1(M)\to F$ be a homomorphism to a free abelian group such that $\mbox{rank}\, F>1$ and such that $H_1(M;\Bbb{Z})\otimes_\Bbb{Z} \Bbb{Q} \to F\otimes_\Bbb{Z} \Bbb{Q}$ is surjective. Then
\[ ||\phi\circ \psi ||_T \ge \frac1k ||\phi||^\alpha_{A,\psi} \] for all $\phi\in \mbox{Hom}(F,\Bbb{R})$.
Furthermore, if $M\ne S^1\times D^2, M \ne S^1\times S^2$ and if $\phi\in \mbox{Hom}(F,\Bbb{Z})$ is such that $(M,\phi\circ \psi)$ fibers over $S^1$, then $\phi\circ \psi$ lies in the cone on a top-dimensional open face of the Thurston norm ball (denoted by $C$) and for all $\phi'\in \mbox{Hom}(F,\Bbb{R})$ such that $\phi'\circ \psi \in C$ we have
\[ ||\phi'\circ \psi||_T = \frac1k ||\phi'||^\alpha_{A,\psi}. \] \end{theorem}
We now return to the link $L$ in Figure \ref{dunfield}. Let $\varphi:H_1(X(L);\Bbb{Z})\to \Bbb{Z}/2$ be the homomorphism given by $\varphi(x)=1$, $\varphi(y)=0$. Denote the induced two-fold cover by $X(L)_2$. Denote by $\psi$ the homomorphism $\pi_1(X(L)_2) \to H_1(X(L)_2;\Bbb{Z})\to H_1(X(L);\Bbb{Z})$ induced from the covering map $\pi : X(L)_2 \to X(L)$. We found a representation $\alpha:\pi_1(X(L)_2)\to \mbox{GL}(\Bbb{F}_7,1)$ such that \[ \Delta^\alpha_{X(L)_2,\psi} = 3x^6y^2+3x^4y^2+4x^4y+2x^4+x^2y^2+3x^2y-x^2-1 \in \Bbb{F}_7[H_1(X(L);\Bbb{Z})]=\Bbb{F}_7[x^{\pm 1},y^{\pm 1}].\] This polynomial is not of the form $f(x^ay^b)$ for a one-variable polynomial $f$ and integers $a,b$. This shows that $H_1(X(L)_2;\Bbb{Z})\to H_1(X(L);\Bbb{Z})=\Bbb{Z}^2$ is rationally surjective, so in particular we can apply Theorem \ref{thmgeneral}.
Now let $\phi \in H^1(X(L);\Bbb{Z})$. By Theorem \ref{lemmathurstong} and Theorem \ref{thmgeneral}, we have \[
||\phi||_{T,X(L)}=\frac{1}{2}||\phi \circ\pi ||_{T,X(L)_2} \ge \frac{1}{2} ||\phi||^\alpha_{A,\psi}. \]
The norm ball of $\frac{1}{2} ||-||^\alpha_{A,\psi}$ is drawn as the shaded region in Figure \ref{dunfieldtwinorm}. We claim that this is exactly the Thurston norm ball.
By Theorem \ref{thmgeneral} the twisted Alexander norm ball in Figure \ref{dunfieldtwinorm} is an `outer bound' for the Thurston norm ball of $X(L)$. But as we pointed out above, Dunfield showed that $(X(L),\phi)$ fibers over $S^1$ for all $\phi\in H^1(X(L);\Bbb{Z})$ which lie in the cones on the two open faces of the Alexander norm ball with vertices $(-\frac{1}{2},\frac{1}{2}), (0,1)$ and $(0,-1), (\frac{1}{2},-\frac{1}{2})$, respectively. In particular, the Thurston norm ball and the twisted Alexander norm ball agree on these cones by the second part of Theorem \ref{thmgeneral}. By continuity, the norms also agree on the vertices $(-\frac{1}{2},\frac{1}{2}), (0,1), (0,-1)$ and $(\frac{1}{2},-\frac{1}{2})$. Now it follows from convexity that the Thurston norm ball coincides everywhere with the twisted Alexander norm ball given in Figure \ref{dunfieldtwinorm}. Therefore the shaded region in Figure \ref{dunfieldtwinorm} is the Thurston norm ball of $X(L)$.
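Since $k=1$ here, $\frac{1}{2}||\phi||^{\alpha}_{A,\psi}$ is half the width of the exponent support of $\Delta^\alpha_{X(L)_2,\psi}$ in the direction $\phi$. The following sketch (function name illustrative, support hardcoded from the polynomial above) confirms that the four vertices named above have norm one:

```python
# Exponent support of the twisted Alexander polynomial above.
support = [(6, 2), (4, 2), (4, 1), (4, 0), (2, 2), (2, 1), (2, 0), (0, 0)]

def half_alexander_norm(phi):
    """(1/2) * sup phi(f_i - f_j) over exponent vectors in the support."""
    vals = [phi[0] * e[0] + phi[1] * e[1] for e in support]
    return (max(vals) - min(vals)) / 2

for v in [(-0.5, 0.5), (0.0, 1.0), (0.0, -1.0), (0.5, -0.5)]:
    print(v, half_alexander_norm(v))   # each listed vertex has norm 1
```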
\begin{figure}
\caption{Twisted Alexander norm ball for Dunfield's link}
\label{dunfieldtwinorm}
\end{figure}
Note that our calculation confirms Dunfield's result that $(X(L),\phi)$ does not fiber over $S^1$ for any $\phi$ outside
the cones. We summarize these results in the following proposition.
\begin{proposition} \label{prop:dunfield} The Thurston norm ball of $X(L)$ is the shaded region in Figure \ref{dunfieldtwinorm}. Furthermore, $(X(L),\phi)$ fibers over $S^1$ exactly when $\phi$ lies inside the cones on the two smaller open faces of the Thurston norm ball of $X(L)$. \end{proposition}
\section{Twisted multivariable Alexander polynomial and twisted one-variable Alexander polynomial} \label{sec:multi}
This section is devoted to the proof of Theorem \ref{mainthmalex}. The main idea of the proof is to use the functoriality of Reidemeister torsion. To prove Theorem \ref{mainthmalex} we need some lemmas which show the nontriviality of certain twisted Alexander polynomials. Throughout this section we assume that $M$ is a 3-manifold whose boundary is empty or consists of tori. Furthermore let $\alpha : \pi_1(M) \to \GL(\F,k)$ be a representation.
\subsection{Computation of twisted Alexander polynomials} \label{sec:calc} We introduce the notion of \emph{rank over a UFD}. Let $\Lambda$ be a UFD and $Q(\Lambda)$ its quotient field. Let $H$ be a $\Lambda$-module. Then we define $\mbox{rank}_\Lambda(H):=\dim_{Q(\Lambda)}(H\otimes_{\Lambda}Q(\Lambda))$. We need the following well-known lemma. For the first part we refer to \cite[Remark~4.5]{Tu01}. The second part is well--known. The last statement follows from the fact that $Q(\Lambda)$ is flat over $\Lambda$.
\begin{lemma} \label{lemma:rank} Let $\Lambda$ be a UFD. \begin{enumerate} \item Let $H$ be a finitely generated $\Lambda$-module. Then the following are equivalent: \begin{enumerate} \item $H$ is $\Lambda$-torsion, \item $\operatorname{ord}_{\Lambda}(H)\ne 0$, \item $\mbox{rank}_\Lambda(H)=0$, \item $\hom_{\Lambda}(H,\Lambda)=0$. \end{enumerate} \item Let $N$ be an n-manifold and assume that $\Lambda^k$ has a left $\Bbb{Z}[\pi_1(N)]$-module structure, then \[ \sum \limits_{i=0}^n (-1)^i \mbox{rank}_\Lambda(H_i(N;\Lambda^k))=k\chi(N).\] \item $H_i(N;\Lambda^k\otimes_\Lambda {Q(\Lambda)})=H_i(N;\Lambda^k)\otimes_\Lambda {Q(\Lambda)}$ for any $i$. \end{enumerate} \end{lemma}
\begin{lemma} \label{lem:delta03} Let $M$ be a 3-manifold. Let $\varphi : \pi_1(M) \to H$ be a surjection to a free abelian group. Then $\Delta_{M,\varphi}^{\alpha,3}=1$ and $\Delta_{M,\varphi}^{\alpha,0}\ne 0$. If furthermore $\mbox{rank}\, H > 1$, then $ \Delta_{M,\varphi}^{\alpha,0}=1 \in \Bbb{F}[H]$. \end{lemma}
\begin{proof} We prove the lemma only in the case that $M$ is closed. The proof for the case that $\partial M$ consists of tori is very similar. Let $b:=\mbox{rank}\, H$. Pick a basis $t_1,\dots,t_b$ for $H$. We identify $\Bbb{F}^k[H]:=\Bbb{F}^k \otimes \Bbb{F}[H]$ with $\F^k[t_1^{\pm 1},\dots,t_b^{\pm 1}]$.
Since $M$ is closed it follows that $\chi(M)=0$. Then it is well--known that $M$ has a CW--structure with one cell each in dimensions zero and three, and with the same number of cells in dimensions one and two (cf. e.g. \cite[Theorem 5.1]{Mc02}).
Denote the 1-cells by $h_1,\dots,h_n$. Denote the corresponding elements in $\pi_1(M)$ by $h_1,\dots,h_n$ as well. If $\mbox{rank}\, H>1$ then we can arrange that $\varphi(h_i)=t_i$ for $i=1,2$.
Write $\pi:=\pi_1(M)$. From the CW structure we obtain a chain complex $C_*:=C_*(\tilde{M})$ (where $\tilde{M}$ denotes the universal cover of $M$): \[ 0 \to C_3^1 \xrightarrow{\partial_3} C_2^n \xrightarrow{\partial_2} C_1^n \xrightarrow{\partial_1} C_0^1 \to 0 \] for $M$, where the $C_i$ are free $\Bbb{Z}[\pi]$-right modules. In fact $C_i^k\cong \Bbb{Z}[\pi]^k$. Consider the chain complex $C_*\otimes_{\Bbb{Z}[\pi]}\Bbb{F}^k[H]$: \[ 0 \to C_3^1\otimes_{\Bbb{Z}[\pi]}\F^k[H] \xrightarrow{\partial_3\otimes \operatorname{id}} C_2^n\otimes_{\Bbb{Z}[\pi]}\F^k[H] \xrightarrow{\partial_2\otimes \operatorname{id}} C_1^n \otimes_{\Bbb{Z}[\pi]}\F^k[H] \xrightarrow{\partial_1\otimes \operatorname{id}} C_0^1\otimes_{\Bbb{Z}[\pi]}\F^k[H] \to 0. \]
Let $A_i$, $i=0,\dots,3$, be the matrices with entries in $\Bbb{Z}[\pi]$ corresponding to the boundary maps $\partial_i:C_i\to C_{i-1}$ with respect to the bases given by the lifts of the cells of $M$ to $\tilde{M}$. Then $A_3$ and $A_1$ are well--known to be of the form
\[ \begin{array}{rcl} A_3 &=& (a_1(1-g_1), a_2(1-g_2), \ldots, a_n(1-g_n))^t,\\ A_1 &=& (b_1(1-h_1), b_2(1-h_2), \ldots, b_n(1-h_n)), \end{array} \] where $\{g_1,\dots,g_n\}$ and $\{h_1,\dots,h_n\}$ are generating sets for $\pi_1(M)$ and $a_i,b_i \in \pi_1(M)$ for $i=1,\dots,n$. By picking different lifts of the cells in dimensions one and two we can assume that in fact $a_i=b_i=e\in \pi_1(M)$ for $i=1,\dots,n$. We can and will therefore assume that
\[ \begin{array}{rcl} A_3 &=& (1-g_1, 1-g_2, \ldots, 1-g_n)^t,\\ A_1 &=& (1-h_1, 1-h_2, \ldots, 1-h_n). \end{array} \]
Let $B = (b_{rs})$ be a $p\times q$ matrix with entries in $\Bbb{Z}[\pi]$. We write $b_{rs}=\sum b_{rs}^gg$ for $b_{rs}^g\in \Bbb{Z}, g\in \pi$. We define $(\alpha\otimes \varphi)(B)$ to be the $p\times q$ matrix with entries $\sum b_{rs}^g \alpha(g)\varphi(g)$. Since each $\sum b_{rs}^g \alpha(g)\varphi(g)$ is a $k\times k$ matrix with entries in $\Bbb{F}[H]$ we can think of $(\alpha\otimes \varphi)(B)$ as a $pk\times qk$ matrix with entries in $\Bbb{F}[H]$.
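As a concrete illustration of this definition, the sketch below evaluates $(\alpha\otimes \varphi)$ on the $1\times 1$ matrix $B=(1-g)$ in a toy example — $\pi=\langle g\rangle\cong\Bbb{Z}$, $k=2$, a fixed matrix $\alpha(g)$ over $\Bbb{F}_7$, and $\varphi(g)=t$ — all of which are hypothetical choices, not data from the paper. Laurent polynomials are stored as exponent-to-coefficient dictionaries:

```python
F = 7  # coefficient field F_7

# Laurent polynomials in t, stored as {exponent: coefficient mod 7}.
def padd(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = (r.get(e, 0) + c) % F
    return {e: c for e, c in r.items() if c}

def pmul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = (r.get(e1 + e2, 0) + c1 * c2) % F
    return {e: c for e, c in r.items() if c}

def pscale(c, e, p):  # multiply p by c * t^e
    return {k + e: (c * v) % F for k, v in p.items() if (c * v) % F}

# Toy data: alpha(g) is a fixed 2x2 matrix over F_7, phi(g) = t.
alpha_g = [[0, 1], [1, 0]]

def mat_pow(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = [[sum(R[i][l] * A[l][j] for l in range(2)) % F
              for j in range(2)] for i in range(2)]
    return R

def alpha_tensor_phi(elem):
    """(alpha tensor phi) of a group-ring element sum_i c_i g^{n_i},
    given as a list of (c_i, n_i): a 2x2 matrix of Laurent polynomials."""
    out = [[{} for _ in range(2)] for _ in range(2)]
    for c, n in elem:
        A = mat_pow(alpha_g, n)
        for i in range(2):
            for j in range(2):
                out[i][j] = padd(out[i][j], pscale(c, n, {0: A[i][j]}))
    return out

B = alpha_tensor_phi([(1, 0), (-1, 1)])  # the entry 1 - g
det = padd(pmul(B[0][0], B[1][1]), pscale(-1, 0, pmul(B[0][1], B[1][0])))
print(det)  # → {0: 1, 2: 6}, i.e. det = 1 - t^2 over F_7
```

Here the $1\times 1$ group-ring matrix $(1-g)$ becomes the $2\times 2$ matrix $\operatorname{id}-t\,\alpha(g)$ over $\Bbb{F}_7[t^{\pm 1}]$, exactly the kind of entry that appears in $(\alpha\otimes\varphi)(A_1)$ and $(\alpha\otimes\varphi)(A_3)$ above.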
Since $\varphi$ is nontrivial there exist $k,l$ such that $\varphi(g_k)\ne 0$ and $\varphi(h_l)\ne 0$. It follows that $(\alpha \otimes \varphi)(A_1)$ and $(\alpha \otimes \varphi)(A_3)$ have full rank over $\F[H]$. The first part of the lemma now follows immediately.
Now assume that $\mbox{rank}\,H>1$. Then $\operatorname{ord}\left(H_0^\alpha(M;\F^k[t_1^{\pm 1},\dots,t_b^{\pm 1}])\right)$ divides $\det(\alpha(h_1)t_1-\operatorname{id})\in \F [t_1^{\pm 1}]$ and $\det(\alpha(h_2)t_2-\operatorname{id})\in \F[t_2^{\pm 1}]$. These two polynomials are clearly relatively prime. This implies that $\operatorname{ord}\left(H_0^\alpha(M;\F^k[t_1^{\pm 1},\dots,t_b^{\pm 1}])\right)=1$.
\end{proof}
\begin{lemma} \label{lem:delta2} Let $\varphi:\pi_1(M)\to H$ be a surjection to a free abelian group $H$. If $\Delta_{M,\varphi}^{\alpha,1}\ne 0$ then $\Delta_{M,\varphi}^{\alpha,2}\ne 0$. \end{lemma}
\begin{proof} Note that by assumption and by Lemma \ref{lem:delta03} we have $\Delta_{M,\varphi}^{\alpha,i} \ne 0$ for $i=0,1,3$. Let $\Lambda:=\Bbb{F}[H]$. It follows from the long exact homology sequence for $(M,\partial M)$ and from duality that $\chi(M)=\frac{1}{2}\chi(\partial M)$. So $\chi(M)=0$ in our case. It follows from Lemma \ref{lemma:rank} that \[ \sum_{i=0}^3 (-1)^i \dim_{Q(\Lambda)} \left( H_i^\alpha(M;\Lambda^k \otimes_\Lambda Q(\Lambda) )\right)=k\chi(M)=0.\] Note that $H_i^\alpha(M;\Lambda^k\otimes_\Lambda Q(\Lambda)) \cong H_i^\alpha(M;\Lambda^k)\otimes_\Lambda Q(\Lambda)$ by Lemma \ref{lemma:rank}. By assumption $H_i^\alpha(M;\Lambda^k)\otimes_{\Lambda} Q(\Lambda)=0$ for $i\ne 2$, hence $H_2^\alpha(M;\Lambda^k)\otimes_\Lambda Q(\Lambda)=0$. \end{proof}
The following corollary is now immediate.
\begin{corollary} \label{coracyclic} Let $\varphi:\pi_1(M)\to H$ be a surjection to a free abelian group $H$. If $\Delta_{M,\varphi}^{\alpha,1}\ne 0$ then $\Delta_{M,\varphi}^{\alpha,i}\ne 0$ for all $i$. \end{corollary}
\begin{lemma} \label{lem:delta2multi} Let $\varphi : \pi_1(M) \to H$ be a surjection to a free abelian group with $\mbox{rank}\, H > 1$. If $\Delta_{M,\varphi}^{\alpha,1} \ne 0$ then \[ \Delta_{M,\varphi}^{\alpha,2} =1 \in \Bbb{F}[H].\] \end{lemma}
\begin{proof} Let $\Lambda:=\Bbb{F}[H]$ and $\pi :=\pi_1(M)$.
By Poincar\'e duality, \[ H_2^\alpha(M;\Lambda^k) \cong H^1_\alpha(M,\partial M;\Lambda^k)=H^1(\mbox{Hom}_{\Bbb{Z}[\pi]}(C_*(\tilde{M},\partial \tilde{M}), \Lambda^k)) \] where $\tilde{M}$ is the universal cover of $M$. On the right we view $\Lambda^k$ as a right $\Bbb{Z}[\pi]$-module by taking $f\cdot g = g^{-1}\cdot f=\varphi(g^{-1})\alpha(g^{-1})f$ for $f\in \Lambda^k$ and $g\in \pi$.
We use an argument in \cite[p.~638]{KL99}. Let $\langle \, , \, \rangle : \F^k \times \F^k \to \F$ be the canonical inner product on $\F^k$.
Then there exists a unique representation $\overline{\alpha}:\pi_1(M)\to \GL(\F,k)$ such that \[ \langle \alpha(g^{-1})v,w \rangle = \langle v,\overline{\alpha}(g)w\rangle\] for all $g\in \pi_1(M)$ and $v,w\in \F^k$. We denote by $\overline{\Lambda^k}$ the left $\Bbb{Z}[\pi]$-module with underlying $\Lambda$-module $\Lambda^k$ and $\Bbb{Z}[\pi]$-module structure given by $\overline{\a} \otimes (-\phi)$.
Using the inner product we get a map \[\begin{array}{rcl}\mbox{Hom}_{\Bbb{Z}[\pi]}(C_*(\tilde{M},\partial \tilde{M}), \Lambda^k)&\to& \hom_{\Lambda}\big(C_*(\tilde{M},\partial \tilde{M})\otimes_{\Bbb{Z}[\pi]} \overline{\Lambda^k},\Lambda\big)\\ f&\mapsto& (c\otimes w)\mapsto \langle f(c),w\rangle. \end{array} \] Using $\langle \alpha(g^{-1})v,w\rangle =\langle v,\overline{\a}(g)w\rangle$ it is now easy to see that this map is well-defined and that it defines in fact an isomorphism of $\Lambda$-module chain complexes.
Now we can apply the universal coefficient spectral sequence to the $\Lambda$-module chain complex $\hom_{\Lambda}\big(C_*(\tilde{M},\partial \tilde{M})\otimes_{\Bbb{Z}[\pi]} \overline{\Lambda^k},\Lambda\big)$ to conclude that there exists a short exact sequence \[ 0\to \ext_{\Lambda}^1(H_0^{\overline{\a}}(M,\partial M;\overline{\Lambda^k}),\Lambda)\to H^1_\alpha(M,\partial M;\Lambda^k) \to \mbox{Hom}_{\Lambda}(H_1^{\overline{\a}}(M,\partial M;\overline{\Lambda^k}),\Lambda).\]
Since $\Delta^{\alpha,2}_{M,\varphi}\ne 0$ by Lemma \ref{lem:delta2}, it follows that $H^1_\alpha(M,\partial M;\Lambda^k)$ is $\Lambda$-torsion. Hence \[ H^1_\alpha(M,\partial M;\Lambda^k) \cong \ext_{\Lambda}^1(H_0^{\overline{\a}}(M,\partial M;\overline{\Lambda^k}),\Lambda).\] First assume that $\partial M$ is nonempty. Note that $\pi_1(\partial M)\to \GL(\F,k)$ factors through $\pi_1(M)$. It follows from
\[ H_0^{\overline{\a}}(X;\overline{\Lambda^k})\cong \overline{\Lambda^k} /\{ gv-v | g\in \pi_1(X), v\in \overline{\Lambda^k}\}\] that ${H}_0^{\overline{\a}}(\partial M;\overline{\Lambda^k})$ surjects onto ${H}_0^{\overline{\a}}(M;\overline{\Lambda^k})=0$, hence ${H}_0^{\overline{\a}}(M,\partial M;\overline{\Lambda^k})=0$ (cf. \cite[Lemma~2.6]{FK05}).
Now assume that $M$ is closed. Let $H_0 := {H}_0^{\overline{\a}}(M;\overline{\Lambda^k})$. We define a finitely generated $\Lambda$-module $A$ to be \emph{pseudonull} if $A_\wp = 0$ for every height 1 prime ideal $\wp$ of $\Lambda$ where $A_\wp$ is the localization of $A$ at $\wp$. (See p.~51 in \cite{Hi02}.) By \cite[Theorem 3.1]{Hi02}, $E_0(H_0) \subset \operatorname{Ann}(H_0)$. Since $\Delta^{\alpha,0}_{M,\varphi} = 1$ by Lemma \ref{lem:delta03}, $\widetilde{\operatorname{Ann}}(H_0) = \Lambda$ where $\widetilde{\operatorname{Ann}}(H_0)$ is the smallest principal ideal of $\Lambda$ which contains $\operatorname{Ann}(H_0)$. Thus by \cite[Theorem 3.5]{Hi02}, $H_0$ is pseudonull. Finally, by \cite[Theorem 3.9]{Hi02}, $\Ext^1_\Lambda(H_0, \Lambda) = 0$. Hence $H_2^\alpha(M;\Lambda^k) \cong H^1_\alpha(M,\partial M;\Lambda^k)= 0$. \end{proof}
\subsection{Functoriality of torsion} \label{sec:funtoriality} Define $F$ to be the free abelian group $FH_1(M;\Bbb{Z})$. Let $\psi : \pi_1(M) \to F$ be the natural surjection and $\phi \in H^1(M;\Bbb{Z})$ nontrivial. Note that $\phi$ induces a homomorphism $\phi : \Bbb{F}[F] \to \Bbb{F}[t^{\pm 1}]$. In this section we go back to the notation $\Delta_M^{\alpha,i} = \Delta_{M,\psi}^{\alpha,i}$ and $\Delta_\phi^{\alpha,i} = \Delta_{M,\phi}^{\alpha,i}$.
\begin{theorem} \label{thm:turaev} Suppose $b_1(M) > 1$. \begin{enumerate} \item If $\phi\left(\Delta_{M,\psi}^{\alpha,1}\right) \ne 0$ then $\Delta_{M,\phi}^{\alpha,1} \ne 0$ and \[ \phi\left(\Delta_{M,\psi}^{\alpha,1}\right) = \prod \limits_{i=0}^3 \phi\left(\Delta_{M,\psi}^{\alpha,i}\right)^{(-1)^{i+1}}=\prod \limits_{i=0}^3 \Delta_{M,\phi}^{\alpha,i}(t)^{(-1)^{i+1}}\in \F[t^{\pm 1}]. \] \item If $\phi\left(\Delta_{M,\psi}^{\alpha,1}\right) = 0$ then $\Delta_{M,\phi}^{\alpha,1} = 0$. \end{enumerate} \end{theorem}
\noindent Note that if $\Delta_{M,\psi}^{\alpha,1} \ne 0$, then by Lemmas \ref{lem:delta03} and \ref{lem:delta2multi} $\prod_{i=0}^3 \phi\left(\Delta_{M,\psi}^{\alpha,i}\right)^{(-1)^{i+1}}$ is defined and the first equality in the first part is obvious. Also if $\Delta_{M,\phi}^{\alpha,1} \ne 0$ then by Lemmas \ref{lem:delta03} and \ref{lem:delta2}, $\prod_{i=0}^3 \Delta_{M,\phi}^{\alpha,i}(t)^{(-1)^{i+1}}$ is defined.
\begin{proof} We will only consider the case that $M$ is a closed 3-manifold. The proof for the case that $\partial M\ne \emptyset$ is similar.
Let us prove (1). Write $\pi:=\pi_1(M)$. As in the proof of Lemma \ref{lem:delta03} we can find a CW-structure for $M$ such that the chain complex $C_*(\tilde{M})$ of the universal cover is of the form \[ 0 \to C_3^1 \xrightarrow{\partial_3} C_2^n \xrightarrow{\partial_2} C_1^n \xrightarrow{\partial_1} C_0^1 \to 0 \] for $M$, where the $C_i$ are free $\Bbb{Z}[\pi]$-right modules. In fact $C_i^k\cong \Bbb{Z}[\pi]^k$. Let $\varphi:\pi_1(M)\to H$ be an epimorphism to a free abelian group $H$. Consider the chain complex $C_*\otimes_{\Bbb{Z}[\pi]}\Bbb{F}^k[H]$: \[ 0 \to C_3^1\otimes_{\Bbb{Z}[\pi]}\Bbb{F}^k[H] \xrightarrow{\partial_3\otimes \operatorname{id}} C_2^n\otimes_{\Bbb{Z}[\pi]}\Bbb{F}^k[H] \xrightarrow{\partial_2\otimes \operatorname{id}} C_1^n \otimes_{\Bbb{Z}[\pi]}\Bbb{F}^k[H]\xrightarrow{\partial_1\otimes \operatorname{id}} C_0^1\otimes_{\Bbb{Z}[\pi]}\Bbb{F}^k[H] \to 0. \] Lifting the cells of $M$ to $\tilde{M}$ makes $C_*$ a based complex. Denote the quotient field of $\Bbb{F}[H]$ by $Q(H)$. If
\[ C_*\otimes_{\Bbb{Z}[\pi]}Q(H)^k:=C_*\otimes_{\Bbb{Z}[\pi]}\Bbb{F}^k[H] \otimes_{\Bbb{F}[H]} Q(H)\] is acyclic, then we can define the Reidemeister torsion $\tau(M,\alpha,\varphi)\in Q(H)\setminus \{0\}$ which is well-defined up to multiplication by a unit in $\Bbb{F}[H]$. We refer to \cite{Tu01} for the definition of Reidemeister torsion and its properties.
Let $A_i$, $i=0,\dots,3$, be the matrices with entries in $\Bbb{Z}[\pi]$ corresponding to the boundary maps $\partial_i:C_i\to C_{i-1}$ with respect to the bases given by the lifts of the cells of $M$ to $\tilde{M}$. Then we can arrange the lifts such that
\[ \begin{array}{rcl} A_3 &=& (1-g_1, 1-g_2, \ldots, 1-g_n)^t,\\ A_1 &=& (1-h_1, 1-h_2, \ldots, 1-h_n), \end{array} \] where $\{g_1,\dots,g_n\}$ and $\{h_1,\dots,h_n\}$ are generating sets for $\pi_1(M)$. Since $\phi$ is nontrivial there exist $k,l$ such that $\phi(g_k)\ne 0$ and $\phi(h_l)\ne 0$. Let $B_3$ be the $k$-th row of $A_3$. Let $B_2$ be the result of deleting the $k$-th column and the $l$-th row of $A_2$. Let $B_1$ be the $l$-th column of $A_1$.
Note that \[ \det((\alpha\otimes \phi)(B_3))=\det(\operatorname{id}-(\alpha\otimes \phi)(g_k)) =\det(\operatorname{id}-\phi(g_k)\alpha(g_k)) \ne 0 \in \Bbb{F}[t^{\pm 1}] \]
since $\phi(g_k)\ne 0$. Similarly $\det((\alpha\otimes \phi)(B_1))\ne 0$ and $\det((\alpha\otimes \psi)(B_i))\ne 0, i=1,3$. We need the following theorem. Note that $C_*\otimes_{\Bbb{Z}[\pi]}Q(H)^k$ is acyclic if and only if $\Delta_{M,\varphi}^{\alpha,1}\ne 0$ by Corollary \ref{coracyclic}.
\begin{theorem}\cite[Theorem~2.2, Lemma~2.5 and Theorem~4.7]{Tu01} \label{thm:Tu22} Let $\varphi:\pi\to H$ be a homomorphism to a free abelian group. Suppose $\det((\alpha\otimes \varphi)(B_i))\ne 0$, $i=1,3$. \begin{enumerate} \item $C_*\otimes_{\Bbb{Z}[\pi]}Q(H)^k$ is acyclic $\Leftrightarrow \det((\alpha\otimes \varphi)(B_2))\ne 0 \Leftrightarrow \Delta_{M,\varphi}^{\alpha,1} \ne 0$. \item If $C_*\otimes_{\Bbb{Z}[\pi]}Q(H)^k$ is acyclic then \[ \tau(M,\alpha,\varphi) = \prod\limits_{i=1}^3 \det((\alpha\otimes \varphi)(B_i))^{(-1)^{i+1}} =\prod\limits_{i=0}^3 \left(\Delta_{M,\varphi}^{\alpha,i}\right)^{(-1)^{i+1}} .\] \end{enumerate} \end{theorem}
By Theorem \ref{thm:Tu22} we only need to prove that $C_*\otimes_{\Bbb{Z}[\pi]}Q(F)^k$ and $C_*\otimes_{\Bbb{Z}[\pi]}\Bbb{F}(t)^k$ are acyclic and $\tau(M,\alpha,\phi)=\phi(\tau(M,\alpha,\psi))$. (We define $\phi(f/g) := \phi(f)/\phi(g)$ for $f,g \in \Bbb{F}[F]$.)
Since $\phi\left(\Delta_{M,\psi}^{\alpha,1}\right)\ne 0$ by our assumption, $\Delta_{M,\psi}^{\alpha,1}\ne 0$. Therefore $C_*\otimes_{\Bbb{Z}[\pi]}Q(F)^k$ is acyclic by Corollary \ref{coracyclic}. Since $\det((\alpha\otimes \psi)(B_i))\ne 0, i=1,3$, it follows from Theorem \ref{thm:Tu22} that $\det((\alpha\otimes \psi)(B_2))\ne 0$ and \[ \tau(M,\alpha,\psi)=\prod\limits_{i=1}^3 \det((\alpha\otimes \psi)(B_i))^{(-1)^{i+1}}.\] Note that \[ \begin{array}{rcl} \prod\limits_{i=1}^3 \det((\alpha\otimes \phi)(B_i))^{(-1)^{i+1}}&=& \prod\limits_{i=1}^3 \phi\big(\det((\alpha\otimes \psi)(B_i))\big)^{(-1)^{i+1}}\\ &=& \prod\limits_{i=0}^3 \phi \left(\Delta_{M,\psi}^{\alpha,i}\right)^{(-1)^{i+1}}\\ &=&\phi(\tau(M,\alpha,\psi)). \end{array} \] In the above the second equality follows from Theorem \ref{thm:Tu22}. Since $\phi(\Delta^{\alpha,1}_{M,\psi})\ne 0$ and $\det((\alpha\otimes \phi)(B_i))\ne 0$ for $i=1,3$, it follows that $\det((\alpha\otimes \phi)(B_2))\ne 0$. It follows from Theorem \ref{thm:Tu22} that $C_*\otimes_{\Bbb{Z}[\pi]}\Bbb{F}(t)^k$ is acyclic and \[ \tau(M,\alpha,\phi)=\prod\limits_{i=1}^3 \det((\alpha\otimes \phi)(B_i))^{(-1)^{i+1}}.\] Therefore $\tau(M,\alpha,\phi)=\phi(\tau(M,\alpha,\psi))$.
For part (2), using arguments similar to those above, one can easily show that if $\Delta_{M,\phi}^{\alpha,1} \ne 0$ then $\phi\left(\Delta_{M,\psi}^{\alpha,1}\right) \ne 0$.
\end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{mainthmalex}}] Clearly Theorem \ref{mainthmalex} follows from Theorem \ref{thm:turaev} and Lemmas \ref{lem:delta03}, \ref{lem:delta2} (applied to $\psi:\pi_1(M)\to FH_1(M)$ and $\phi:\pi_1(M)\to \Bbb{Z}$) and from Lemma \ref{lem:delta2multi}. \end{proof}
\end{document}
Room square
A Room square, named after Thomas Gerald Room, is an n × n array filled with n + 1 different symbols in such a way that:
1. Each cell of the array is either empty or contains an unordered pair from the set of symbols
2. Each symbol occurs exactly once in each row and column of the array
3. Every unordered pair of symbols occurs in exactly one cell of the array.
An example of a Room square of order seven, where the set of symbols is the integers from 0 to 7:
(each row lists its four filled cells in left-to-right order; the three empty cells in each row are not shown)

0,7  1,5  4,6  2,3
3,4  1,7  2,6  0,5
1,6  4,5  2,7  0,3
0,2  5,6  3,7  1,4
2,5  1,3  0,6  4,7
3,6  2,4  0,1  5,7
0,4  3,5  1,2  6,7
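Two of the three conditions can be checked mechanically from the rows of the order-7 example; verifying the column condition would additionally require the positions of the empty cells. A sketch in Python, with the row data transcribed from the example:

```python
from itertools import chain

rows = [
    [(0, 7), (1, 5), (4, 6), (2, 3)],
    [(3, 4), (1, 7), (2, 6), (0, 5)],
    [(1, 6), (4, 5), (2, 7), (0, 3)],
    [(0, 2), (5, 6), (3, 7), (1, 4)],
    [(2, 5), (1, 3), (0, 6), (4, 7)],
    [(3, 6), (2, 4), (0, 1), (5, 7)],
    [(0, 4), (3, 5), (1, 2), (6, 7)],
]
symbols = set(range(8))  # n + 1 = 8 symbols for order n = 7

# Condition 2 (row part): every symbol occurs exactly once in each row.
assert all(set(chain.from_iterable(r)) == symbols for r in rows)

# Condition 3: each of the C(8,2) = 28 unordered pairs occurs exactly once.
pairs = [frozenset(p) for r in rows for p in r]
assert len(pairs) == len(set(pairs)) == 28
print("row and pair conditions hold")
```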
It is known that a Room square of order n exists if and only if n is odd and n is neither 3 nor 5.
History
The order-7 Room square was used by Robert Richard Anstice to provide additional solutions to Kirkman's schoolgirl problem in the mid-19th century, and Anstice also constructed an infinite family of Room squares, but his constructions did not attract attention.[1] Thomas Gerald Room reinvented Room squares in a note published in 1955,[2] and they came to be named after him. In his original paper on the subject, Room observed that n must be odd and unequal to 3 or 5, but it was not shown that these conditions are both necessary and sufficient until the work of W. D. Wallis in 1973.[3]
Applications
Pre-dating Room's paper, Room squares had been used by the directors of duplicate bridge tournaments in the construction of the tournaments. In this application they are known as Howell rotations. The columns of the square represent tables, each of which holds a deal of the cards that is played by each pair of teams that meet at that table. The rows of the square represent rounds of the tournament, and the numbers within the cells of the square represent the teams that are scheduled to play each other at the table and round represented by that cell.
Archbold and Johnson used Room squares to construct experimental designs.[4]
There are connections between Room squares and other mathematical objects including quasigroups, Latin squares, graph factorizations, and Steiner triple systems.[5]
See also
• Combinatorial design
• Magic square
• Square matrices
References
1. O'Connor, John J.; Robertson, Edmund F., "Robert Anstice", MacTutor History of Mathematics Archive, University of St Andrews.
2. Room, T. G. (1955), "A new type of magic square", The Mathematical Gazette, 39: 307, doi:10.2307/3608578, JSTOR 3608578, S2CID 125711658
3. Hirschfeld, J. W. P.; Wall, G. E. (1987), "Thomas Gerald Room. 10 November 1902–2 April 1986", Biographical Memoirs of Fellows of the Royal Society, 33: 575–601, doi:10.1098/rsbm.1987.0020, JSTOR 769963, S2CID 73328766; also published in Historical Records of Australian Science 7 (1): 109–122, doi:10.1071/HR9870710109; an abridged version is online at the web site of the Australian Academy of Science
4. Archbold, J. W.; Johnson, N. L. (1958), "A construction for Room's squares and an application in experimental design", Annals of Mathematical Statistics, 29: 219–225, doi:10.1214/aoms/1177706719, MR 0102156
5. Wallis, W. D. (1972), "Part 2: Room squares", in Wallis, W. D.; Street, Anne Penfold; Wallis, Jennifer Seberry (eds.), Combinatorics: Room Squares, Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics, vol. 292, New York: Springer-Verlag, pp. 30–121, doi:10.1007/BFb0069909, ISBN 0-387-06035-9; see in particular p. 33
Further reading
• Dinitz, J. H.; Stinson, D. R. (1992), "Room squares and related designs", in Dinitz, J. H.; Stinson, D. R. (eds.), Contemporary Design Theory: A Collection of Surveys, Wiley–Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, pp. 137–204, ISBN 0-471-53141-3
• Weisstein, Eric W., "Room Square", MathWorld
Archimedean class
A class resulting from the subdivision induced by the Archimedean equivalence relation on a totally ordered semi-group. This equivalence is defined as follows. Two elements $ a $ and $ b $ of a semi-group $ S $ are called Archimedean equivalent if one of the following four relations is satisfied:
$$ \begin{array}{ll} a \leq b \leq a ^ {n} ,\ &b \leq a \leq b ^ {n} ,\ \\ a ^ {n} \leq b \leq a , &b ^ {n} \leq a \leq b ; \\ \end{array} $$
which amounts to saying that $ a $ and $ b $ generate the same convex sub-semi-group in $ S $. Thus, the subdivision into Archimedean classes is a subdivision into pairwise non-intersecting convex sub-semi-groups. Moreover, each subdivision into pairwise non-intersecting convex sub-semi-groups can be extended to a subdivision into Archimedean classes.
The Archimedean equivalence on a totally ordered group is induced by the Archimedean equivalence of its positive cone: It is considered that $ a \sim b $ if there exist positive integers $ m $ and $ n $ such that
$$ | a | < | b | ^ {m} \ \textrm{ and } \ \ | b | < | a | ^ {n} , $$

where

$$ | x | = \max \{ x , x ^ {-1} \} . $$
The positive cone of an Archimedean group consists of a single Archimedean class.
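For a concrete illustration, take the totally ordered group $ \Bbb{Z}^2 $ with the lexicographic order, written additively (so $ b ^ {n} $ becomes $ nb $). The sketch below searches for the integers $ m , n $ up to an arbitrary bound (an assumption made for the search); the element $(0,1)$ is infinitesimal relative to $(1,0)$, so the two lie in different Archimedean classes:

```python
def lex_lt(u, v):        # lexicographic order on Z^2
    return u < v         # Python tuples compare lexicographically

def gabs(u):             # |x| = max(x, x^{-1}); additively, max(x, -x)
    neg = (-u[0], -u[1])
    return u if lex_lt(neg, u) else neg

def arch_equiv(a, b, bound=1000):
    """a ~ b  iff  |a| < m|b| and |b| < n|a| for some positive m, n."""
    A, B = gabs(a), gabs(b)
    ex_m = any(lex_lt(A, (m * B[0], m * B[1])) for m in range(1, bound))
    ex_n = any(lex_lt(B, (n * A[0], n * A[1])) for n in range(1, bound))
    return ex_m and ex_n

print(arch_equiv((1, 0), (2, 5)))   # → True: the same Archimedean class
print(arch_equiv((0, 1), (1, 0)))   # → False: (0,1) is infinitesimal
```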
Archimedean class. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Archimedean_class&oldid=45211
This article was adapted from an original article by O.A. Ivanova (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
\begin{document}
\title{Re-thinking Spatial Confounding in Spatial Linear Mixed Models}
\begin{abstract} In the last two decades, considerable research has been devoted to a phenomenon known as spatial confounding. Spatial confounding is thought to occur when there is collinearity between a covariate and the random effect in a spatial regression model. This collinearity is considered highly problematic when the inferential goal is estimating regression coefficients, and various methodologies have been proposed to ``alleviate'' it. Recently, it has become apparent that many of these methodologies are flawed, yet the field continues to expand. In this paper, we offer the first attempt to synthesize work in the field of spatial confounding. We propose that there are at least two distinct phenomena currently conflated with the term spatial confounding. We refer to these as the analysis model and the data generation types of spatial confounding. We show that these two issues can lead to contradicting conclusions about whether spatial confounding exists and whether methods to alleviate it will improve inference. Our results also illustrate that in most cases, traditional spatial linear mixed models \textit{do} help to improve inference of regression coefficients. Drawing on the insights gained, we offer a path forward for research in spatial confounding. \end{abstract}
\section{Introduction} In myriad applications, the use of standard regression models for spatially referenced data can result in spatial dependence in the residuals. For the better part of a century, the solution to this problem was to use a spatial regression model. In these models, a spatial random effect is introduced to account for the residual spatial dependence and thereby (theoretically) improve inference, whether the inferential goal was associational or predictive.
This practice continued, unchallenged, until about two decades ago. At that time, a phenomenon now known as spatial confounding was introduced by \citet{Reich} and \citet{Hodges_fixedeffects} \citep[see also,][]{Pac_spatialconf}. Whereas, historically, spatial statisticians believed that incorporating spatial dependence with spatial regression models would improve inference, those interested in spatial confounding now suggest that incorporating spatial dependence with traditional models will distort inference. Originally focused on a setting where the estimation of individual covariate effects was important, interest in spatial confounding has since expanded to other inferential focuses \citep[e.g.,][]{page2017estimation,papadogeorgou2019adjusting}. Spatial confounding is typically described as occurring when there is multicollinearity between a spatially-referenced covariate and a spatial random effect. It is thought to be quite problematic. For example, \citet{marques2022mitigating} states spatial confounding can lead to ``severely biased'' regression coefficients, \citet{Reich} claims that it can lead to ``large changes'' in these estimates, and \citet{prates2019alleviating} argues that both the ``sign and relevance of covariates can change drastically'' in the face of spatial confounding.
Despite the fact that many of these claims are not empirically supported, research into spatial confounding and methods to alleviate it has exploded \citep{Hanks,Keller_spatialconf,adin2021alleviating,marques2022mitigating,Hughes,azevedo2021mspock,azevedo2022alleviating,Thaden,prates2019alleviating,chiou2019adjusted,dupont2020spatial+,Hefley,hui2021spatial,nobre2021effects}. A closer look at the body of work highlights inconsistencies in definitions of spatial confounding as well as the purported impact it can have on inference \citep{khan2020restricted,zimmerman2021deconfounding,nobre2021effects,Hanks}. Recently, many of the methods designed to alleviate spatial confounding have been shown to lead to counterintuitive results by \citet{khan2020restricted} and have even been classified as ``bad statistical practice'' \citep{zimmerman2021deconfounding}. Yet, efforts to study and alleviate spatial confounding continue without any attempt to address these observations, increasingly influencing new fields of study such as causal inference and even criminology \citep{reich2021review,kelling2021modeling}.
\sloppypar{ In this paper, we (1) synthesize the existing body of work in spatial confounding, reviewing it in the context of historical teachings from spatial statistics; (2) characterize two distinct albeit related phenomena currently conflated with the term spatial confounding; and (3) show, through theoretical and simulation results, that these two issues can lead to contradicting conclusions about whether spatial confounding exists and whether methods to alleviate it will actually improve inference. Importantly, by examining spatial confounding in this way, these three key understandings show how ignoring the nuances of ``spatial confounding" can lead to methodologies that distort inferences in the very settings for which they are designed to be used.}
The rest of this paper is organized as follows: In \cref{sec:background}, we introduce the analytical set-up for the rest of the paper. Using this set-up, we provide an overview of spatial confounding in the broader context of spatial statistics. \cref{sec:nomenclature} provides a framework for understanding the two types of spatial confounding and illustrates how current (and past) research fits into this scheme. It also explores how efforts to mitigate spatial confounding can be organized into this framework. \cref{sec:mainresults} introduces theoretical results assessing the impact of both sources of spatial confounding on bias for a regression coefficient. In \cref{sec:simstudies}, we use simulation studies to explore settings that have been identified in the literature as situations in which spatial confounding will lead to increased bias in regression coefficient settings. We illustrate that in these cases, traditional spatial analysis models often outperform both non-spatial models and models designed to alleviate spatial confounding. Finally, in \cref{sec:conclusion}, we propose a clear path towards resolving the contradictions explored in this paper.
\section{Background} \label{sec:background} We begin by introducing the analytical set-up that will be used throughout the rest of the paper. We then use it to provide a brief history of how spatial confounding became a topic of concern in spatial statistics research and explore where it has gone since.
\subsection{Analytical Set-Up} \label{analytical} Throughout this paper, we distinguish between a \textit{data generating} model and an \textit{analysis} model. The former is a model meant to approximate how the data likely arose; while the latter is a model used to analyze the observed data.
Spatial regression models are traditionally used when there is residual spatial dependence after accounting for measured variables. Residual spatial dependence is thought to be the result of either an unobserved, spatially varying variable or an unobserved spatial process \citep{Waller}. To define a data generating model, we focus on the former as this most closely matches the intuition motivating efforts to mitigate spatial confounding \citep[see e.g.,][]{Reich,Pac_spatialconf,dupont2020spatial+,page2017estimation}.
Specifically, we assume $y_i$ is observed at location $\bm{s}_i \in \mathbb{R}^2$ for $i= 1, \ldots, n$ and it can be modeled as follows: \begin{flalign} \label{eq:model_0}
\textrm{\textbf{Generating Model: }} y_i (\bm{s}_i) = \beta_0 + \beta_x x_i (\bm{s}_i) + \beta_z z_i (\bm{s}_i) + \epsilon_i, \end{flalign} where $\bm{x}\left( \bm{s} \right) = \left( x_1(\bm{s}_1), \ldots, x_n (\bm{s}_n) \right)^T$ and $\bm{z} \left( \bm{s} \right)= \left( z_1(\bm{s}_1), \ldots, z_n (\bm{s}_n) \right)^T$ are each vectors of univariate variables, $\bm{\epsilon} = \left( \epsilon_1, \ldots, \epsilon_n \right)^T$ is the vector of errors with mean $\bm{0}$ and variance-covariance matrix $\sigma^2 \bm{I}$, and the parameters $\bm{\phi} = \left( \beta_0, \beta_x, \beta_z, \sigma^2 \right)^T$ are unknown.
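To make the set-up concrete, the following minimal sketch simulates data from the generating model \eqref{eq:model_0}. All parameter values are hypothetical, and $\bm{x}$ and $\bm{z}$ are drawn independently here purely for illustration; their spatial structure is specified in \cref{sec:mainresults}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameter values, chosen only for illustration.
n = 100
beta0, beta_x, beta_z, sigma2 = 1.0, 2.0, -1.5, 0.25

s = rng.uniform(0.0, 1.0, size=(n, 2))           # locations s_i in the unit square
x = rng.normal(size=n)                           # observed covariate x(s)
z = rng.normal(size=n)                           # unobserved covariate z(s)
eps = rng.normal(scale=np.sqrt(sigma2), size=n)  # errors with variance sigma^2

# Generating model: y_i(s_i) = beta_0 + beta_x x_i(s_i) + beta_z z_i(s_i) + eps_i
y = beta0 + beta_x * x + beta_z * z + eps
```

In an analysis, only $\boldsymbol{y}$ and $\bm{x}$ (and the locations) would be available; $\bm{z}$ and $\bm{\epsilon}$ are unobserved.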
Throughout this paper, we assume that $\bm{x} \left( \bm{s} \right)$ and $\boldsymbol{y}\left( \bm{s} \right)$ are observed and $\bm{z}\left( \bm{s} \right)$ is unobserved. We also assume that the primary inferential interest is on $\beta_x$. We consider three possible approaches to modeling the relationship between $\boldsymbol{y} \left( \bm{s} \right)$ and $\bm{x} \left( \bm{s} \right)$: 1) A non-spatial linear approach, 2) a ``traditional'' spatial approach, and 3) an ``adjusted'' spatial approach. Each framework is associated with one or more analysis models that can be fit to the observed $\boldsymbol{y}\left( \bm{s} \right)$ and $\bm{x}\left( \bm{s} \right)$. \begin{eqnarray}
\textrm{ \small \textbf{Non-Spatial Analysis Model:}}&& y_i (\bm{s}_i)= \beta_0 + \betaxp{NS} x_i (\bm{s_i}) + \epsilon_i \label{eq:OLSmodel} \\ \textrm{\small \textbf{Spatial Analysis Model:}}&& y_i (\bm{s}_i)= \beta_0 + \betaxp{S} x_i (\bm{s}_i) + g(
\bm{s}_i) + \epsilon_i \label{eq:genericSpatial}\\ \textrm{ \small\textbf{Adj. Spatial Analysis Model:}}&& \tilde{y}_i (\bm{s}_i)= \beta_0 + \betaxp{AS} \tilde{x}_i (\bm{s}_i) + h(
\bm{s}_i) + \epsilon_i \label{eq:genericadjSpatial} \end{eqnarray} For \eqref{eq:OLSmodel}--\eqref{eq:genericadjSpatial}, $\epsilon_i$ are i.i.d.\ with mean $0$ and unknown variance $\sigma^2$. The regression coefficients $\beta_0$, $\betaxp{NS}$, $\betaxp{S}$, and $\betaxp{AS}$ are unknown. We note that $\sigma^2$ and $\beta_0$ will vary based on the analysis model chosen. In other words, to be precise, we would use notation such as $\beta_0^{NS}$, $\beta_0^{S}$, and $\beta_0^{AS}$. As our primary interest is $\beta_x$, we refrain from doing so for the sake of simplicity.
The spatial random effects $g(\bm{s})$ and $h(\bm{s})$ are assumed to have mean zero and unknown, positive-definite variance-covariance matrices. We note that models relying on Gaussian Markov Random Fields (GMRFs) can be considered as special cases of this if the variance-covariance matrices are defined to be pseudo-inverses of the singular precisions \citep{Pac_ICAR}. The tildes over $\boldsymbol{y}\left( \bm{s} \right)$ and $\bm{x}\left( \bm{s} \right)$ in \eqref{eq:genericadjSpatial} reflect that they may be functions of the originally observed $\boldsymbol{y}\left( \bm{s} \right)$ and $\bm{x}\left( \bm{s} \right)$ respectively. In future sections, we distinguish between a realization of the variables $\bm{x}\left( \bm{s} \right)$ and $\bm{z}\left( \bm{s} \right)$ and the stochastic processes that could have generated such realizations. We use capital letters (e.g., $\bm{X}(\bm{s})$ and $\bm{Z}(\bm{s})$ ) to refer to stochastic processes and lower case letters to indicate a realization of the variables (e.g., $\bm{x}(\bm{s})$ and $\bm{z}(\bm{s})$ ). After this, we drop notation indicating the dependence on spatial location unless it is needed for clarity.
\subsection{Spatial Models} \label{subsec:spatregmodels} When there is residual spatial dependence, the conventional wisdom in the spatial statistics literature is that a model which accounts for this spatial dependence will offer better inference than a model which does not account for it \citep{Cressie93,bivand2008applied,Waller}. Historically, this view first appeared in the context of geostatistics and interpolation efforts. There, the goal was to improve predictions for the values of a stochastic process at unobserved locations \citep[e.g.,][]{wikle2010low}. In other words, $\betaxp{S}$ in \eqref{eq:genericSpatial} was merely a tool to de-trend the data, and the primary interest was often estimating the variance-covariance matrix of the spatial random effect. The idea that accounting for spatial dependence improves inference later inspired many popular spatial models proposed for areal data. These models were often developed with the goal, either implicit or explicit, of ensuring that $\betaxp{S}$ in \eqref{eq:genericSpatial} was ``close'' to $\beta_x$ in \eqref{eq:model_0} \citep{besag1991bayesian,Hodges_fixedeffects,Clayton}. In recent decades, the lines delineating methods for geostatistical data and areal data have become blurred with advancements in computing and the popular class of models proposed by \citet{Diggle}. However, across analysis goals and types of data, the consensus continued to be that models accounting for spatial dependence should be preferred over models that did not account for spatial dependence.
Recently, however, this view has shifted. The challenge to the prevailing view arose in a line of research about a phenomenon now known as ``spatial confounding''.
\subsection{Spatial Confounding} \citet{Clayton} is often referenced as the first article to describe spatial confounding. These authors noticed what they referred to as ``confounding by location'': estimates for regression coefficients changed when a spatial random effect was added to the analysis model. \citet{Clayton} interpreted this as a favorable change - one in which the estimate of the association between a response and an observed covariate was adjusted to account for an unobserved spatially-varying confounder \citep[see also,][]{Hodges_fixedeffects}. The modern conceptualization of spatial confounding arose in work by \citet{Reich} and \citet{Hodges_fixedeffects}. These articles were the first to suggest that fitting spatial models could induce bias in the estimates of the regression coefficients and an ``over-inflation'' of the uncertainty associated with these estimates. These works have inspired a serious and active line of research into the phenomenon of spatial confounding \citep{Hughes,Pac_spatialconf,Thaden,Hefley,nobre2021effects,prates2019alleviating,azevedo2021mspock,dupont2020spatial+,yang2021estimation,marques2022mitigating}.
\sloppypar{Spatial confounding is almost always introduced as an issue of multicollinearity between a spatially varying covariate and a spatial random effect in a spatial analysis model \citep{Reich,Hodges_fixedeffects,Hefley,reich2021review,dupont2020spatial+,Thaden}. This statement is often deemed sufficient to identify the phenomenon of spatial confounding. However, there is no consensus on a formal definition for spatial confounding. While there have been two previous efforts to formalize spatial confounding, both were definitions considering special cases of a broader phenomenon \citep{Thaden,khan2020restricted}.}
Despite the ambiguity of spatial confounding as a concept, researchers using the term have developed shared expectations for the phenomenon. These expectations have, in turn, shaped multiple methods aimed at alleviating spatial confounding. Some researchers have noticed inconsistencies and contradictions arising in some of the conclusions reached by the spatial confounding literature. For example, \citet{Hanks} and \citet{nobre2021effects} have both observed that a distortion in inference for $\beta_x$ can occur in the absence of stochastic dependence between $\bm{x}$ and $\bm{z}$, contradicting some stated expectations for spatial confounding. These inconsistencies have largely remained unresolved even as research on spatial confounding has increasingly begun influencing other lines of work, such as causal inference \citep[e.g.,][]{reich2021review,papadogeorgou2019adjusting}.
We propose that some of these contradictions arise because at least two distinct categories of issues are being studied by researchers in spatial confounding. Loosely speaking, we can think of these categories as encompassing a data generation phenomenon and an analysis model phenomenon. Importantly, once teased apart, these two issues can lead to different conclusions about whether spatial confounding is present and whether spatial analysis models should be adjusted.
\section{Types of Spatial Confounding} \label{sec:nomenclature} As previously noted, spatial confounding is typically described as an issue of multicollinearity between a spatially varying covariate and a spatial random effect in a spatial analysis model. It appears, however, that researchers can disagree about the source of multicollinearity as well as what it means for a covariate to be spatially varying (in a problematic sense). In this section, we tease apart what we refer to as data generation spatial confounding and analysis model spatial confounding. In \cref{fig:organization}, we summarize how the problematic relationships that are thought to cause spatial confounding differ by type of spatial confounding, and we elaborate on these relationships shortly. We emphasize that this framework is not currently in use. Instead, it is a novel attempt meant to help organize some of the existing conceptualizations of spatial confounding in the literature. Importantly, many articles can have references to both types of spatial confounding within them. In the following discussion, we sort works based on the primary focus of the article.
\begin{center} \begin{minipage}{.7\textwidth} \includegraphics[width=\linewidth]{figures/organization.pdf} \end{minipage}
\captionof{figure}{Primary Source of Spatial Confounding by Type} \label{fig:organization} \end{center}
\subsection{Analysis Model Spatial Confounding} \label{subsec:analysis} \citet{Reich} and \citet{Hodges_fixedeffects} are the works that introduced the modern conceptualization of spatial confounding. These papers, and many of the works they inspired, focused on what we will refer to as analysis model spatial confounding. Research motivated by the analysis model issue often does not consider how $\boldsymbol{y}$ or $\bm{x}$ were generated. In other words, these works do not assume there is a missing $\bm{z}$ or a data generation model of the form \eqref{eq:model_0} \citep{Hodges_fixedeffects}. Instead, this conceptualization of spatial confounding focuses on the relationship between an observed $\bm{x}$ and the spatial random effect in a spatial analysis model \citep{Reich,Hodges_fixedeffects,Hughes,Hanks,Hefley,prates2019alleviating,azevedo2021mspock,hui2021spatial}.
In this line of work, identifying the problematic source of multicollinearity and defining what it means for $\bm{x}$ to be spatially varying both rely on the analysis model. More specifically, in the context of our analytical set-up, they typically rely on the eigenvectors of the estimated precision matrix $\hat{\bm{\Sigma}}_g^{-1}$ of the spatial random effect $g(\bm{s})$ in \eqref{eq:genericSpatial} \citep{Reich,Hanks,Hefley,prates2019alleviating,azevedo2021mspock}. For example, statistics developed to identify spatial confounding involve using both the observed $\bm{x}$ and the estimated precision matrix for a particular spatial analysis model \citep{Reich,Hefley,prates2019alleviating}. These statistics all identify, loosely, whether $\bm{x}$ is correlated with low-frequency eigenvectors of a decomposition of $\hat{\bm{\Sigma}}_g^{-1}$. Similarly, $\bm{x}$ is considered spatially varying (in a problematic sense) if it is highly correlated with such a low-frequency eigenvector of $\hat{\bm{\Sigma}}_g^{-1}$. We note that, in the spatial confounding literature, no one has precisely defined what it means for an eigenvector to be low-frequency \citep[but see,][]{Reich_var}. However, when displayed graphically, they tend to show spatial patterns in which nearby values are more similar than distant ones. Thus, the problematic relationship which causes analysis model spatial confounding is thought to be primarily between $\bm{x}$ and $\hat{\bm{\Sigma}}_g^{-1}$, as summarized in \cref{fig:organization}.
There are several common beliefs underlying work focused on this conceptualization of spatial confounding. First, spatial confounding occurs as a result of fitting a spatial analysis model. While a distortion to inference can be expected in any spatial analysis model \citep{Hodges_fixedeffects}, it is plausible that the degree of distortion may vary based on the particular analysis model chosen \citep[see e.g.,][]{Hefley}. Second, efforts should be taken to determine whether spatial confounding needs to be adjusted for in the analysis model. In this line of work, many authors acknowledge that it is not clear when spatial confounding needs to be accounted for \citep{prates2019alleviating,Hanks,hui2021spatial}. In other words, there is at least an implicit understanding that spatial analysis models may still be preferable to adjusted spatial analysis models at times. Finally, determining whether spatial confounding exists will involve studying characteristics of the observed data (in particular $\bm{x}$) along with properties of the chosen analysis model.
\subsection{Data Generation Spatial Confounding} \label{subsec:datagen} In work that focuses on data generation spatial confounding, researchers often do assume that $\boldsymbol{y}$ is generated from a model of the form \eqref{eq:model_0}. In the context of our analytical set-up, the interest is typically on how the relationship between $\bm{X}$ and $\bm{Z}$ (or alternatively $\bm{x}$ and $\bm{z}$) impacts inference on $\betaxp{S}$ when a spatial analysis model of the form \eqref{eq:genericSpatial} is used to fit the data \citep{Pac_spatialconf,Thaden,page2017estimation,dupont2020spatial+,nobre2021effects}.
In this line of work, spatial confounding is still often defined as an issue of multicollinearity \citep{dupont2020spatial+,Thaden}. However, the source of the multicollinearity and the definition of spatially varying (in the problematic sense) are not always clear. \citet{Pac_spatialconf} has shaped much of the current work focused on data generation spatial confounding, as well as many of the most recent methods designed to alleviate spatial confounding \citep[see e.g.,][]{dupont2020spatial+,Thaden,page2017estimation,keller2020selecting,marques2022mitigating}. In that article and the many that followed, researchers make assumptions about the variables ($\bm{x}$ and $\bm{z}$) or the stochastic processes that generated them ($\bm{X}$ and $\bm{Z}$).
Researchers who focus on $\bm{X}$ and $\bm{Z}$ often assume that these processes are generated from spatial random fields parameterized by some set of known parameters \citep{Pac_spatialconf,page2017estimation,nobre2021effects}. $\bm{X}$ and $\bm{Z}$ are typically assumed to be generated in such a way that $\bm{X}$ has two components of spatial structure: 1) one that is shared with $\bm{Z}$ (the confounded component), and 2) one that is not shared with $\bm{Z}$ (the unconfounded component). Based on characteristics of these assumed processes, theoretical results or observations have been used to identify when fitting a spatial analysis model of the form \eqref{eq:genericSpatial} will distort inference on $\beta_x$ \citep{Pac_spatialconf,nobre2021effects}. In other words, the problematic relationship is between $\bm{X}$ and $\bm{Z}$, as summarized in \cref{fig:organization}.
Most of the theoretical results related to the data generation source of spatial confounding focus on $\bm{X}$ and $\bm{Z}$. However, when it comes to methods designed to alleviate spatial confounding, there can be assumptions made about $\bm{x}$ and $\bm{z}$. For example, \citet{dupont2020spatial+} and \citet{Thaden} assume that $\bm{x}$ is a linear combination of $\bm{z}$ and Gaussian noise. In these cases, $\bm{z}$ is either chosen to have a fixed spatial structure or is generated from a spatial random field or process. The focus on the relationship between $\bm{x}$ and $\bm{z}$ suggests that the problematic multicollinearity is between $\bm{x}$ and $\bm{z}$, as summarized in \cref{fig:organization}. The fact that some methods designed to alleviate spatial confounding focus on situations where $\bm{x}$ and $\bm{z}$ are collinear lends support to this idea. However, the theoretical results in this line of work are usually not related to characteristics of the observed realization $\bm{x}$ (or $\bm{z}$), and the assumptions made in the theoretical results do not always ensure empirical collinearity between a given set of realizations $\bm{x}$ and $\bm{z}$. In a similar manner, the characteristics of a particular realization $\bm{x}$ are not assessed in determining whether it is spatially dependent in a problematic sense. It is possible that the underlying belief is that if $\bm{x}$ and $\bm{z}$ are collinear and ``spatial'', then there will be collinearity between $\bm{x}$ and a spatial random effect in an analysis model. However, papers in this line of work spend very little time discussing the impact of spatial analysis models. For example, \citet{Thaden} defines spatial confounding as occurring when: 1) $\bm{X}$ and $\bm{Z}$ are stochastically dependent, 2) $\textrm{E} \left( \bm{Y} | \bm{X}, \bm{Z} \right) \neq \textrm{E} \left( \bm{Y} | \bm{X} \right)$, and 3) $\bm{Z}$ has a ``spatial'' structure.
Notice that, in this definition, the emphasis is on the relationship between $\bm{X}$, $\bm{Z}$, and $\bm{Y}$, and it mirrors more general definitions of confounders in causal inference research. It is not entirely clear what it means for $\bm{Z}$ to have spatial structure or why it is problematic for $\bm{Z}$ to have such a structure. More importantly, by this definition, spatial confounding exists regardless of the analysis model chosen.
We note that not every paper completely ignores the analysis model. For example, \citet{dupont2020spatial+} explicitly stated they were viewing spatial confounding from the perspective of fitting spatial models via thin plate splines. While they stated the smoothing that comes from fitting a spatial model contributes to the problem, the emphasis seemed to still be on the relationship between $\bm{x}$ and $\bm{z}$. For example, the authors emphasized ``if the correlation between the covariate and the spatial confounder is high, the smoothing applied to the spatial term in the model can disproportionately affect the estimate of the covariate effect.'' In other words, it did not appear that the smoothing alone was problematic. It is for this reason that we group this work here, rather than with analysis model spatial confounding, although we note this work is clearly one with elements of both types of spatial confounding.
We take a moment to highlight several notable beliefs commonly found in the data generation spatial confounding line of work. First, the primary source of spatial confounding comes from the (potentially unknown) process that generated the data rather than the process of fitting a model. Second, fitting a spatial analysis model will lead to distortion in inferences when spatial confounding is present. However, here, spatial analysis models -- whether of the form \eqref{eq:genericSpatial}, a generalized additive model (GAM), or something else -- are often treated as interchangeable. There is often no exploration of the impact of a particular choice of spatial model on inference, and inferior inferences for one type of spatial model are assumed to hold for other spatial models. Finally, it seems researchers assume the observed data (i.e., $\boldsymbol{y}$ and $\bm{x}$) do not give insight into whether spatial confounding is present or should be accounted for in analyses.
\subsection{Approaches to Alleviating Spatial Confounding} \label{subsec:alleviating} There have been numerous methods designed to alleviate spatial confounding. In this sub-section, we take a moment to point out that most of them can be categorized as being motivated by either the analysis model or data generation type of spatial confounding.
The first methods to alleviate spatial confounding were motivated by the analysis model source of spatial confounding. For areal analyses, \citet{Reich} and \citet{Hodges_fixedeffects} first proposed a methodology sometimes known as restricted spatial regression. This method, in a sense, replaces the spatial random effect $g(\bm{s})$ in a spatial analysis model with a new spatial random effect $h(\bm{s})$ in an adjusted spatial analysis model. This new spatial random effect is projected onto the orthogonal complement of the column space of $\bm{x}$. By ``smoothing'' orthogonally to the fixed effects, this methodology aimed to alleviate collinearity between $\bm{x}$ and the estimated variance-covariance matrix of $h(\bm{s})$. In doing so, it directly addresses the analysis model source of confounding. This approach motivated and continues to motivate many further methodologies designed to alleviate spatial confounding \citep{Hughes,Hanks,prates2019alleviating,marques2022mitigating,chiou2019adjusted,hui2021spatial,azevedo2021mspock,adin2021alleviating}. Most of these methods continue to involve changing the spatial random effect (or analogue of it for other models) in the spatial analysis model. In other words, the adjustment from a model of the form \eqref{eq:genericSpatial} to \eqref{eq:genericadjSpatial} primarily involves replacing the spatial random effect and the data remains unaltered. As noted previously, these adjusted analysis models are typically offered with the caveat that there may be some situations when traditional analysis models would be more appropriate (although it is currently unclear when that is).
We do not explore methods influenced by analysis model spatial confounding in the rest of this paper. Most of these methods have been influenced by restricted spatial regression analysis models. Recently, these models have been shown to perform poorly. \citet{khan2020restricted} demonstrated that inference on $\beta_x$ is often worse with restricted spatial regression analysis models than with non-spatial analysis models. \citet{zimmerman2021deconfounding} subsequently offered a more in-depth, thorough review of restricted spatial regression analysis models. These authors showed that smoothing orthogonally to the fixed effects distorted inference for a variety of inferential goals and concluded that employing such analysis models was ``bad statistical practice.''
Researchers motivated by data generation spatial confounding rely heavily on assumptions about how the data arose when developing methodology to alleviate spatial confounding. Thus, there can be various formulations. We focus on two methodologies proposed by \citet{Thaden} and \citet{dupont2020spatial+} as illustrative examples of such approaches (described in more detail in \cref{subsec:biasadj}). In both these works, the authors assume that the observed data are truly from a model with a form similar to \eqref{eq:model_0} (in simulation studies \citet{dupont2020spatial+} introduced another unobserved spatial random effect to this model) and that $\bm{x} = \beta_z \bm{z} + \bm{\epsilon}_x$, where $\bm{\epsilon}_x$ is Gaussian noise. Based on these assumptions, the authors proposed methodologies to alleviate spatial confounding that replace (or are equivalent to replacing) either $\boldsymbol{y}$ or $\bm{x}$ in the analysis model. The details of these approaches are given in \cref{subsec:biasadj}.
Recall that \citet{Thaden} offered little discussion of the impact of the spatial analysis model on inference, and \citet{dupont2020spatial+} felt that their proposed methodology would work in settings beyond the thin plate splines setting they explored. Subsequent work has claimed both approaches are useful for other types of spatial models \citep{schmidt2021discussion,dupont2020spatial+}. As discussed in \cref{subsec:datagen}, this is characteristic of work motivated by data generation spatial confounding. The unspoken belief is that something must be known about how the data were generated to appropriately analyze it. If the data were truly generated in line with the assumptions made, the proposed methodologies should be superior to traditional spatial regression analysis models (and non-spatial analysis models).
In the rest of this paper, we give theoretical results that show that both the analysis model and data generation types of spatial confounding can impact inference, sometimes in competing ways. Importantly, we also show that methods designed to alleviate spatial confounding that focus on only one type of spatial confounding can, in some cases, distort inference more than a spatial regression model.
\section{Two Views of Spatial Confounding Bias} \label{sec:mainresults} In this section, we introduce theoretical results exploring the bias in estimates of $\beta_x$ for various analysis models. We compare and contrast results derived with an emphasis on data generation and analysis model spatial confounding.
Throughout all sub-sections, we assume that data are originally generated from a model of the form \eqref{eq:model_0}. We consider a non-spatial analysis model of the form \eqref{eq:OLSmodel}, spatial analysis models of the form \eqref{eq:genericSpatial}, and adjusted spatial analysis models of the form \eqref{eq:genericadjSpatial}. For the last category, we focus on the geoadditive structural equation modeling (GSEM) and Spatial+ approaches developed by \citet{Thaden} and \citet{dupont2020spatial+} respectively (previously referenced in \cref{subsec:alleviating}).
\subsection{Bias: Non-Spatial and Spatial Analysis Models} \label{subsec:nssp} In this sub-section, we consider how the data generation and analysis model types of spatial confounding may impact bias in the estimation of $\beta_x$. We consider this for the non-spatial analysis and spatial analysis models. To do so, we follow the set-up explored in \citet{Pac_spatialconf}. This article has shaped much of the current work focused on the data generation issue, as well as many of the most recent methods designed to alleviate spatial confounding \citep[see e.g.,][]{dupont2020spatial+,Thaden,page2017estimation,keller2020selecting,marques2022mitigating}.
Mirroring the work in \citet{Pac_spatialconf}, we begin by assuming that our response variable was generated from a model of the form \eqref{eq:model_0}. However, instead of a particular set of realizations for $\bm{x}$ and $\bm{z}$, we use the processes $\bm{X}$ and $\bm{Z}$: \begin{eqnarray} \label{stochastic}
\bm{Y} (\bm{s}_i) = \beta_0 + \beta_x \bm{X} (\bm{s}_i) + \beta_z \bm{Z} (\bm{s}_i) + \epsilon_i, \end{eqnarray} where $\epsilon_i$ is defined as in \eqref{eq:model_0}. We assume that $\bm{X}$ and $\bm{Z}$ are each generated from Gaussian random processes with positive-definite, symmetric covariance structures. In \citet{Pac_spatialconf}, the author considered two settings: one in which he stated there was no confounding in the data generation process and one in which he stated there was confounding in the data generation process. We restrict our attention to the situation where there is confounding in the data generation process.
Throughout this section, we assume $\bm{X}$ and $\bm{Z}$ are generated from Gaussian processes with Mat\'ern spatial correlations: \begin{eqnarray} \label{maternclass}
\bm{C}\left( h | \theta, \nu \right) = \frac{1}{\Gamma(\nu) 2^{\nu-1}} \left( \frac{ 2 \sqrt{\nu} h}{\theta} \right)^{\nu} K_{\nu} \left( \frac{ 2 \sqrt{\nu} h}{\theta} \right), \end{eqnarray} where $h$ is the Euclidean distance between two locations, $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$ (the smoothness parameter), and $\theta$ is the spatial range. We allow $\bm{X} = {\bm{X}}_c + {\bm{X}}_u$, where $\textrm{Cov} \left( \bm{X} \right) = \sigma_c^2 \Rtwo{c} + \sigma_u^2 \Rtwo{u} $, $\textrm{Cov} \left( \bm{Z} \right) = \sigma_z^2 \Rtwo{c} $, and $\textrm{Cov} \left( \bm{X}, \bm{Z} \right) = \rho \sigma_c \sigma_z \Rtwo{c}$. We assume that $\Rtwo{c}$ and $\Rtwo{u}$ are each members of \eqref{maternclass} with the same $\nu$ and potentially different spatial range parameters. We stress that the source of confounding here is $\rho$, and the spatial aspect of the confounding is the shared spatial correlation functions in $\Rtwo{c}$ and $\Rtwo{u}$. There is no guarantee that a particular set of realizations $\bm{x}$ and $\bm{z}$ will be collinear or share specific spatial patterns.
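One way to evaluate the Mat\'ern correlation \eqref{maternclass} numerically is via SciPy's modified Bessel function $K_\nu$; the sketch below is a minimal implementation of this parameterization, handling the $h = 0$ limit (where $C = 1$) explicitly.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_corr(h, theta, nu):
    """Matern correlation C(h | theta, nu) under the text's parameterization."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    t = 2.0 * np.sqrt(nu) * h / theta
    with np.errstate(invalid="ignore"):
        c = (t ** nu) * kv(nu, t) / (gamma(nu) * 2.0 ** (nu - 1.0))
    return np.where(h == 0.0, 1.0, c)  # C(h) -> 1 as h -> 0
```

As a sanity check, for $\nu = 1/2$ this parameterization reduces to the exponential correlation $\exp(-\sqrt{2}\,h/\theta)$.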
\subsubsection{Data Generation Confounding} \label{subsubsec:datagen} We first explore bias from the perspective of data generation spatial confounding. Work on data generation confounding tends to treat $\bm{X}$ and $\bm{Z}$ stochastically when deriving bias terms \citep{Pac_spatialconf,page2017estimation}. When considering bias for a spatial regression analysis model, generalized least squares estimators are used. We adopt this approach here.
In \cref{stoch_ols_bias} we calculate the bias terms $\bias{\betaxp{NS} | \bm{X}^* }= \beta_x - \textrm{E} \left( \hat{\beta}^{NS} | \bm{X}^* \right),$ for a non-spatial regression analysis model, and $\bias{\betaxp{S} | \bm{X}^* }= \beta_x - \textrm{E} \left( \hat{\beta}^{S} | \bm{X}^* \right),$ for a spatial regression analysis model of the form \eqref{eq:genericSpatial}. Here, $ \bm{X}^* = [\bm{1} ~ \bm{X} ]$.
\begin{rmk} \label{stoch_ols_bias}
Let the data generating model be of the form \eqref{stochastic} with $\bm{X} = {\bm{X}}_c + {\bm{X}}_u$ and $\bm{Z}$ having the following characteristics:
\begin{enumerate}
\item $\textrm{Cov} \left( \bm{X} \right) = \sigma_c^2 \Rtwo{c} + \sigma_u^2 \Rtwo{u} $
\item $\textrm{Cov} \left( \bm{Z} \right) = \sigma_z^2 \Rtwo{c} $, and
\item $\textrm{Cov} \left( \bm{X}, \bm{Z} \right) = \rho \sigma_c \sigma_z \Rtwo{c}$
\end{enumerate}
where $\Rtwo{c}$ and $\Rtwo{u}$ are of the form \eqref{maternclass} with the same $\nu$. If a non-spatial analysis model of the form \eqref{eq:OLSmodel} is employed with variance parameters assumed known, then $\bias{\beta_X^{NS} | \bm{X}^* }= \beta_x - \textrm{E} \left( \hat{\beta}_X^{NS} | \bm{X}^* \right)$ can be expressed as:
\begin{eqnarray} \label{olsstochbias}
\beta_z \rho \frac{\sigma_z}{\sigma_c} \left[ \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{K} \left( \bm{X} - \mu_x \bm{1} \right) \right]_2 \end{eqnarray}
If instead, a spatial analysis model of the form \eqref{eq:genericSpatial} is employed with variance parameters assumed known, then $\bias{\beta_X^S | \bm{X}^* }= \beta_x - \textrm{E} \left( \hat{\beta}_X^{S} | \bm{X}^* \right)$ can be expressed as: \begin{eqnarray} \label{glsstochbias}
\beta_z \rho \frac{\sigma_z}{\sigma_c} \left[ \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{K} \left( \bm{X} - \mu_x \bm{1} \right) \right]_2 \end{eqnarray} \noindent where $\bm{K}= p_c \left(p_c \bm{I} + (1-p_c) \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} $, $p_c=\frac{\sigma_c^2}{\sigma_c^2 + \sigma_u^2}$, $\bm{\Sigma} = \beta_z^2 \sigma_z^2 \Rtwo{c} + \sigma^2 \bm{I}$, and $[ ]_2$ indicates the second element of the vector.
\end{rmk}
\begin{proof} See \cref{app:stoch_ols_bias} and \cref{app:stoch_gls_bias} for derivations. \end{proof} We note that \eqref{glsstochbias} is equivalent to Equation (6) in \citet{Pac_spatialconf} when $\nu=2$. The bias terms \eqref{olsstochbias} and \eqref{glsstochbias} are very complicated. We take a moment to point out several things. First, for the spatial model the ``true'' precision of $\bm{Y}$ (conditional on $\bm{X}$), $\bm{\Sigma}^{-1}$, is used, effectively ignoring the impact of the particular analysis model chosen. As we have discussed, this is very common in explorations of bias influenced by data generation spatial confounding. However, we note that \citet{Pac_spatialconf} did include a brief description of the impact of analysis models in Section 2.1 of that paper. Second, it is difficult to derive insights from these forms of bias. They are heavily dependent not only on the spatial range parameters and the various other variance parameters, but also on the distributional assumptions on $\bm{X}$.
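The key algebraic step behind \cref{stoch_ols_bias} is that $\bm{K}$ equals the conditional-mean matrix $\sigma_c^2 \Rtwo{c} \left( \sigma_c^2 \Rtwo{c} + \sigma_u^2 \Rtwo{u} \right)^{-1}$ appearing in $\textrm{E}\left( \bm{Z} \,|\, \bm{X} \right)$. This identity can be checked numerically; the sketch below uses exponential (Mat\'ern with $\nu = 1/2$) correlation matrices at random locations and arbitrary hypothetical variance parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential (Matern nu = 1/2) correlation matrices at random locations,
# with hypothetical ranges theta_c = 0.4 and theta_u = 0.1.
n = 50
s = rng.uniform(0.0, 1.0, size=(n, 2))
d = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
R_c, R_u = np.exp(-d / 0.4), np.exp(-d / 0.1)

# Hypothetical variance parameters.
sig_c2, sig_u2 = 1.3, 0.7
p_c = sig_c2 / (sig_c2 + sig_u2)

# K as defined in the remark ...
K = p_c * np.linalg.inv(p_c * np.eye(n) + (1.0 - p_c) * R_u @ np.linalg.inv(R_c))

# ... versus the conditional-mean matrix sigma_c^2 R_c Cov(X)^{-1}, so that
# E(Z | X) = mu_z + rho (sigma_z / sigma_c) K (X - mu_x 1).
K_alt = sig_c2 * R_c @ np.linalg.inv(sig_c2 * R_c + sig_u2 * R_u)
```

With $\bm{K}$ written this way, the bias expressions follow by projecting $\textrm{E}\left( \bm{Z} \,|\, \bm{X} \right)$ onto the columns of $\bm{X}^*$ under the relevant (OLS or GLS) inner product.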
\citet{Pac_spatialconf} measured the bias due to spatial confounding with the term $c_S(\bm{X})= \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{K} \left( \bm{X} - \mu_x \bm{1} \right)$ from \cref{glsstochbias}; more specifically, he considered $\textrm{E}_{\bm{X}} \left( c_S(\bm{X}) \right)$. Here, we also introduce the non-spatial equivalent $c_{NS}(\bm{X})= \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{K} \left( \bm{X} - \mu_x \bm{1} \right)$ from \cref{olsstochbias}. To control for the influence of the marginal variance parameters and $\beta_z$, he calculated (via simulations) $\textrm{E}_{\bm{X}} \left( c_S(\bm{X}) \right)$ for various values of $p_c$ (defined after \eqref{glsstochbias}) and the term $p_z= \frac{\beta_z^2 \sigma_z^2}{\beta_z^2 \sigma_z^2 + \sigma^2}$. He did this for the case where $\Rtwo{c} $ and $\Rtwo{u}$ are members of \eqref{maternclass} with $\nu =2$.
The results, replicated from his code available at \url{https://www.stat.berkeley.edu/~paciorek/research/code/code.html}, suggested that a spatial regression analysis model could result in reduced bias relative to a non-spatial analysis when $\theta_u \ll \theta_c $. On the other hand, a spatial regression analysis could also increase bias relative to a non-spatial analysis when $\theta_u \gg \theta_c$. To see this, note that \citet{Pac_spatialconf} stated that $\textrm{E}_{\bm{X}} \left( c_{NS}(\bm{X}) \right) \approx p_c $. It can also be shown that $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right) \approx p_c $ when $\theta_c = \theta_u$. Figure \ref{fig:data_illustrationa} provides images of $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ for 100 locations on a grid of the unit square for different fixed values of $p_c$, $p_z$, $\theta_c$, and $\theta_u$ when $\nu = 2$. The upper left subplot of the image matrix provides a colored image of $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ when $p_c = p_z = 0.1$ and $\theta_c$ varies from 0 to 1 (x-axis) and $\theta_u$ varies from 0 to 1 (y-axis). As $\theta_c$ increases, holding all else constant, $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ decreases. In contrast, as $\theta_u$ increases, holding all else constant, $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ increases. Moving to the other subplots within the image matrix shows the same colored representation of $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$, but for different values of $p_c$ and $p_z$. As either $p_c$ or $p_z$ increases, $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ also increases. Notice, however, that for any given value of $p_c$ and $p_z$, we see the same behavior for $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ as $\theta_u$ and $\theta_c$ change that we saw in the first subplot considered. Namely, reduced bias is observed when $\theta_u \ll \theta_c$.
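The statement $\textrm{E}_{\bm{X}}(c_S(\bm{X})) \approx p_c$ when $\theta_c = \theta_u$ is easy to verify: in that case $\Rtwo{u}\Rtwo{c}^{-1} = \bm{I}$, so $\bm{K} = p_c\bm{I}$ and the second component of $c_S(\bm{X})$ equals $p_c$ exactly for every realization of $\bm{X}$, whatever $\bm{\Sigma}$ is. A minimal sketch (ours; it uses a 1-d exponential correlation rather than the $\nu = 2$ Mat\'ern of the original code, and the function names are assumptions):

```python
import numpy as np

def exp_corr(coords, theta):
    """Exponential (Matern nu = 1/2) correlation matrix on 1-d coordinates."""
    d = np.abs(coords[:, None] - coords[None, :])
    return np.exp(-d / theta)

def c_S_second(X, mu_x, p_c, R_c, R_u, Sigma):
    """Second component of the bias modification term c_S(X)."""
    n = X.shape[0]
    Xstar = np.column_stack([np.ones(n), X])
    K = p_c * np.linalg.inv(p_c * np.eye(n)
                            + (1 - p_c) * R_u @ np.linalg.inv(R_c))
    Si = np.linalg.inv(Sigma)
    target = K @ (X - mu_x * np.ones(n))
    return np.linalg.solve(Xstar.T @ Si @ Xstar, Xstar.T @ Si @ target)[1]
```

Because $\bm{K}(\bm{X} - \mu_x\bm{1}) = p_c(\bm{X} - \mu_x\bm{1})$ lies exactly in the column space of $\bm{X}^*$ when the two ranges agree, the projection recovers $p_c$ without error.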
\begin{center} \begin{minipage}{.6\textwidth} \includegraphics[width=\textwidth]{figures/pac_fig2.png} \end{minipage} \captionof{figure}{This image depicts $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ for 100 locations on a grid of the unit square when $\Rtwo{u}$ and $\Rtwo{c}$ belong to the Mat\'ern class with $\nu=2$. Recall, $\textrm{E}_{\bm{X}} \left( c_{NS}(\bm{X}) \right) \approx p_c $, and $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right) \approx p_c $ when $\theta_c = \theta_u$. Thus, terms lower than the diagonal represent a reduction in bias by modeling the residual spatial dependence. This image was created from Christopher Paciorek's code using the \texttt{fields} and \texttt{lattice} packages \citep{lattice,fields}.} \label{fig:data_illustrationa} \end{center}
\citet{Pac_spatialconf} explicitly acknowledged that the case where $\theta_u \gg \theta_c$ is likely of limited interest in real applications. However, this case has increasingly influenced further research in spatial confounding. Or rather, the fact that bias for a spatial analysis model can be increased relative to the bias for a non-spatial model has influenced further research. Other papers often use this observation to support statements suggesting that spatial confounding occurs when ``spatial range of the observed risk factors is larger than the unobserved counterpart'' \citep{marques2022mitigating}. However, it is rarely acknowledged that these simulations considered only a very specific case \citep[there are exceptions, see e.g.,][]{keller2020selecting}.
In the context of data generation spatial confounding, this can be problematic because the behavior of bias from spatial confounding is so dependent on the distributional assumptions for $\bm{X}$. To illustrate this issue, we now repeat the simulation study for the case when $\Rtwo{c} $ and $\Rtwo{u}$ are members of \eqref{maternclass} with $\nu =.5$. Here, the spatial process is less smooth than the case considered in \citet{Pac_spatialconf}. As in \citet{Pac_ICAR}, for this case $\textrm{E}_{\bm{X}} \left( c_{NS}(\bm{X}) \right) \approx p_c $, and $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right) \approx p_c $ when $\theta_c = \theta_u$. For small values of $\theta_u$ and $\theta_c$, it turns out that the images can look fairly flat. To better illustrate trends, we consider values of $\theta_u$ and $\theta_c$ up to 10. In \cref{fig:data_illustrationb}, we see the bias modification term is almost always equal to the non-spatial equivalent. It appears that bias reduction can occur when $\theta_u$ is less than 2, regardless of the value of $\theta_c$. Similarly, bias can be increased when $\theta_c$ is less than 2, across all values of $\theta_u$. In other words, there is no longer strong evidence to support statements that spatial confounding impacts bias when the ``spatial range of the observed risk factors is larger than the unobserved counterpart.''
\begin{center} \begin{minipage}{.6\textwidth} \includegraphics[width=\textwidth]{figures/exp_kk.png} \end{minipage} \captionof{figure}{This image depicts $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right)$ for locations on a grid of the unit square when $\Rtwo{u}$ and $\Rtwo{c}$ belong to the Mat\'ern class with $\nu=.5$. Again, $\textrm{E}_{\bm{X}} \left( c_{NS}(\bm{X}) \right) \approx p_c $, and $\textrm{E}_{\bm{X}} \left( c_{S}(\bm{X}) \right) \approx p_c $ when $\theta_c = \theta_u$. Thus, terms lower than the diagonal represent a reduction in bias by modeling the residual spatial dependence. This image was created from an adaptation of Christopher Paciorek's code using the \texttt{fields} and \texttt{lattice} packages \citep{lattice,fields}. } \label{fig:data_illustrationb} \end{center}
Importantly, these examples illustrate how sensitive our conclusions about the impact of spatial confounding are to the distributional assumptions we make about $\bm{X}$ and $\bm{Z}$.
\subsubsection{Analysis Model Source of Spatial Confounding} In this sub-section, we focus on the analysis model type of spatial confounding. In order to make our results comparable to the setting explored in \cref{subsubsec:datagen}, we assume that for a particular set of realizations $\bm{x}$ and $\bm{z}$, the response $\bm{y}$ is generated from a model of the form \cref{eq:model_0}. We can assume that the processes $\bm{X}$ and $\bm{Z}$ are generated as before. However, the results in this section do not depend on any distributional assumptions about $\bm{X}$ and $\bm{Z}$. Unlike in \cref{subsubsec:datagen}, we assume that all variance parameters are unknown. As we will see, this results in conceptualizing spatial confounding through the relationships that $\bm{x}$, $\boldsymbol{y}$, and $\bm{z}$ have with the eigenvectors of an estimated precision matrix $\hat{\bm{\Sigma}}^{-1}$.
We consider both the non-spatial analysis model and the class of spatial analysis models of the form \eqref{eq:genericSpatial}. First, in \cref{lemma:olsfixed}, we derive the bias term that results from fitting a non-spatial analysis model. \begin{lemma} \label{lemma:olsfixed} Let the data generating model be of the form \eqref{eq:model_0} with $\boldsymbol{y}$ and $\bm{x}$ known. If a non-spatial analysis model of the form \eqref{eq:OLSmodel} is fit, then $\bias{\hat{\beta}_{x}^{NS}} = \beta_x - E\left(\hat{\beta}_{x}^{NS} \right)$ can be expressed as:
\begin{eqnarray*} \label{eq:obsolsfixed}
\frac{\beta_z}{ \euclnorm{\bm{1}}^2 \euclnorm{\bm{x}}^2 - \left[ \euclmetric{\bm{x}}{\bm{1}} \right]^2 } \left( \euclnorm{\bm{1}}^2 \euclmetric{\bm{x}}{\bm{z}} - \euclmetric{\bm{x}}{\bm{1}} \euclmetric{\bm{z}}{\bm{1}}\right),
\end{eqnarray*}
where $ \langle \cdot, \cdot \rangle $ is the standard Euclidean inner product and $||\cdot||$ represents the norm induced by it. \end{lemma} Because this ends up being a special case of the bias for the GLS estimators discussed next, we delay a discussion of these terms. Now, we assume that we fit a spatial analysis model of the form \eqref{eq:genericSpatial}. \begin{lemma} \label{lemma:glsfixed} Let the data generating model be of the form \cref{eq:model_0} with $\boldsymbol{y}$ and $\bm{x}$ known. If a spatial analysis model of the form \eqref{eq:genericSpatial} is fit, and results in the positive definite estimate $\hat{\bm{\Sigma}}$, then the bias $\bias{\hat{\beta}_{x}^{S} }= \beta_x - E\left(\hat{\beta}_{x}^{S}\right)$ can be expressed as: \begin{eqnarray} \label{eq:obsglsfixed} \frac{\beta_z}{ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2 } \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \right). \end{eqnarray} \end{lemma} \begin{proof} See \cref{app:obsglsfixed} for the calculations. \end{proof}
Here, the estimate of precision matrix $\bm{\Sigma}^{-1}$ is $\hat{\bm{\Sigma}}^{-1}$. We define the inner product $\langle m, n \rangle_{\bm{\hat{\bm{\Sigma}}}^{-1}} = m^T \bm{\hat{\bm{\Sigma}}}^{-1} n$ for $m,n \in \mathbb{R}^n$, and we let $|| \cdot ||_{\bm{\hat{\bm{\Sigma}}^{-1}}}$ be the norm induced by it (see \cref{appa_diff_geometry} for more details). We do not make any assumptions of how the term $\hat{\bm{\Sigma}}^{-1}$ is estimated (e.g., Bayesian vs. residual maximum likelihood), but we acknowledge two different methods of fitting the same analysis model could result in different $\hat{\bm{\Sigma}}^{-1}$. Finally, we note in \cref{ols_special_gls} that the bias term in \cref{eq:obsolsfixed} is a special case of the bias term in \cref{eq:obsglsfixed}.
\begin{rmk} \label{ols_special_gls} When $\hat{\bm{\Sigma}}^{-1} = \bm{I}$, $\bias{\hat{\beta}_{x}^{NS}}$ in \cref{eq:obsolsfixed} is a special case of $\bias{\hat{\beta}_{x}^{S} }$ in \cref{eq:obsglsfixed}. \end{rmk}
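The closed form in \eqref{eq:obsglsfixed} is, by Cramer's rule on the $2 \times 2$ GLS normal equations, just $\beta_z$ times the slope obtained by regressing $\bm{z}$ on $(\bm{1}, \bm{x})$ under the working precision. The following sketch (ours; the function names are assumptions) checks that the inner-product form and the matrix form agree for an arbitrary positive-definite $\hat{\bm{\Sigma}}^{-1} = \bm{W}$:

```python
import numpy as np

def bias_closed_form(x, z, beta_z, W):
    """Inner-product form of the bias, with <a, b> = a' W b."""
    one = np.ones_like(x)
    ip = lambda a, b: a @ W @ b
    num = ip(one, one) * ip(x, z) - ip(x, one) * ip(z, one)
    den = ip(one, one) * ip(x, x) - ip(x, one) ** 2
    return beta_z * num / den

def bias_via_gls(x, z, beta_z, W):
    """beta_z times the GLS slope from regressing z on [1, x] under W."""
    Xstar = np.column_stack([np.ones_like(x), x])
    coef = np.linalg.solve(Xstar.T @ W @ Xstar, Xstar.T @ W @ z)
    return beta_z * coef[1]
```

Taking $\bm{W} = \bm{I}$ recovers the non-spatial form in \cref{eq:obsolsfixed}, matching \cref{ols_special_gls}.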
The bias term in \eqref{eq:obsglsfixed} is a function of $\beta_z$, which makes intuitive sense. Although this will, of course, not be known, we note that its impact on inference is the same across all analysis models belonging to the spatial analysis models as well as the non-spatial analysis model. For the moment, we focus on the other terms. We begin with the numerator of \eqref{eq:obsglsfixed} (ignoring $\beta_z$): \begin{eqnarray} \label{eq:numobsglsfixed}
\sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}}. \end{eqnarray} Broadly speaking, this term tends to get smaller when one of two things happens. The first situation occurs when the low frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$ are ``flat''. We say an eigenvector is ``flat'' if there is a small angle (with respect to the Euclidean inner product) between it and the column vector of ones, $\bm{1}$. We say an eigenvector is ``low-frequency'' if its associated eigenvalue is less than 1. When this occurs, all terms involving $\bm{1}$ (i.e., $ \sigmainvnorm{\bm{1}}^2$, $\sigmainvdot{\bm{x}}{\bm{1}}$, and $\sigmainvdot{\bm{z}}{\bm{1}}$) will become smaller in magnitude. To illustrate this, we randomly generate locations for 140 observations on a $[0,10] \times [0,10]$ window. Using these locations, we represent different potential $\hat{\bm{\Sigma}}^{-1}$'s by calculating the inverses of variance-covariance matrices for members of the Mat\'ern class. In \cref{fig:bias_illustration}, we use colors to denote three possible values of $\nu$: $\nu = .5$ (the exponential), $\nu =1$ (Whittle), and $\nu =2$. For fixed $\nu$, we then calculate various variance-covariance matrices by allowing $\theta$ to vary. For the inverse of each unique matrix, we calculate $\sigmainvnorm{\bm{1}}$. For fixed $\nu$, we can expect the lowest frequency eigenvectors of the eigendecomposition of the associated $\hat{\bm{\Sigma}}^{-1}$ to become flatter as $\theta$ increases. In \cref{fig:bias_illustration}a), we can see that $\sigmainvnorm{\bm{1}}$ decreases in magnitude as $\theta$ increases for all values of $\nu$. This trend will also be seen in cross-products involving $\bm{1}$, as can be seen in \cref{fig:bias_illustration}b). Note that in these plots, the black line denotes the Euclidean norm. Almost all of the values of $ \sigmainvnorm{\bm{1}}$ are less than this in magnitude.
In many practical situations where spatial covariance matrices are employed, the low frequency eigenvectors will indeed tend to be flat.
The second situation that will tend to decrease the magnitude of \eqref{eq:numobsglsfixed} occurs when there are small angles (again with respect to the Euclidean inner product) between either $\bm{x}$ or $\bm{z}$ and low frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$. When this occurs, we say that $\bm{x}$ (or $\bm{z}$) is spatially smooth with respect to $\hat{\bm{\Sigma}}^{-1}$. Recall, for us, low frequency eigenvectors are those with associated eigenvalues less than 1. We note that \eqref{eq:numobsglsfixed} is symmetric in $\bm{x}$ and $\bm{z}$. As just one of these variables becomes more correlated with a low frequency eigenvector, all terms involving it will tend to decrease in magnitude. Both variables being correlated with low frequency eigenvectors will tend to be associated with a further reduction in the magnitude of the bias. As an illustration, we again use the 140 locations just discussed. We generate a realization $\bm{x}$ from an exponential process (\eqref{maternclass} with $\nu=.5$) at these locations with $\theta=10$. In \cref{fig:bias_illustration}c), we illustrate how this realization appears spatially smooth. Because $\bm{x}$ is spatially smooth, it will often be correlated with low frequency eigenvectors. Unsurprisingly, $\sigmainvnorm{\bm{x}}$ is always smaller than the corresponding Euclidean norm. We note that if either $\bm{x}$ or $\bm{z}$ is linearly dependent with $\bm{1}$, then the numerator in \eqref{eq:numobsglsfixed} will be 0. Thus, the flatter $\bm{x}$ and $\bm{z}$ become, the smaller the bias.
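This second situation can likewise be made concrete through the eigendecomposition $\hat{\bm{\Sigma}}^{-1} = \bm{V}\bm{\Lambda}\bm{V}^T$: if $\bm{x}$ equals a low frequency eigenvector (unit Euclidean length), then $\sigmainvnorm{\bm{x}}^2$ is exactly the corresponding eigenvalue, which is below 1 by definition. A short check (ours, with a 1-d exponential correlation standing in for the working covariance):

```python
import numpy as np

coords = np.linspace(0.0, 10.0, 30)
# exponential correlation as the working covariance Sigma-hat
Sigma = np.exp(-np.abs(coords[:, None] - coords[None, :]) / 3.0)
Si = np.linalg.inv(Sigma)
evals, evecs = np.linalg.eigh(Si)   # eigenvalues of Sigma^{-1}, ascending
x_smooth = evecs[:, 0]              # lowest frequency eigenvector
x_rough = evecs[:, -1]              # highest frequency eigenvector
```

For `x_smooth`, the norm $\sigmainvnorm{\bm{x}}^2$ collapses to `evals[0] < 1`, while the rough direction is inflated past its Euclidean length, which is the mechanism behind panel d) of \cref{fig:bias_illustration}.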
\begin{center} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/1_norm.png} \captionof*{figure}{a) $\sigmainvnorm{\bm{1}}$} \end{minipage}
\begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/x_one_cross.png} \captionof*{figure}{b) $\sigmainvdot{\bm{x}}{\bm{1}}$ } \end{minipage} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/x.png} \captionof*{figure}{c) Illustration of $\bm{x}$} \end{minipage}
\begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/x_norm.png} \captionof*{figure}{d) $\sigmainvnorm{\bm{x}}$} \end{minipage} \captionof{figure}{Illustrations of components of \eqref{eq:numobsglsfixed}. All plots were made with \citet{ggplot2}. } \label{fig:bias_illustration} \end{center} The behavior of \eqref{eq:numobsglsfixed} supports the traditional view that fitting a spatial analysis model helps improve inference on $\beta_x$. When the low frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$ mirror the patterns of either $\bm{x}$ or $\bm{z}$, fitting a spatial analysis model will tend to result in better estimates of $\beta_x$ than a non-spatial model. It also highlights that what it means to be ``spatially smooth'' for the purposes of bias reduction depends on the analysis model chosen. To see this, note that in \cref{fig:bias_illustration} a), b), and d) the magnitudes can be quite different for different choices of $\hat{\bm{\Sigma}}^{-1}$, particularly when $\theta$ is small.
Recall from our discussion in \cref{subsec:datagen}, researchers are sometimes concerned with collinearity between $\bm{x}$ and $\bm{z}$ as a possible source of confounding bias. We note that when $\bm{z} = \alpha \bm{x}$, for $\alpha \neq 0$, the magnitude of \eqref{eq:numobsglsfixed} is always less than or equal to $|\alpha| \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2$. If $\bm{x}$ is correlated with low-frequency eigenvectors or the low-frequency eigenvectors are flat, this term will typically be smaller for a spatial analysis model than for a non-spatial analysis model. In other words, this suggests spatial analysis models can still reduce bias relative to a non-spatial analysis model when $\bm{x}$ and $\bm{z}$ are collinear so long as at least one of them is spatially smooth.
We now turn our attention to the denominator of \eqref{eq:obsglsfixed}: \begin{eqnarray*} \label{eq:demobsglsfixed} \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2 = \\
\sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 \sin^2{\sigmainvangle{\bm{x}}{\bm{1}}}, \end{eqnarray*} where $\sigmainvangle{\bm{x}}{\bm{1}}$ is the angle between $\bm{x}$ and $\bm{1}$ with respect to the Riemannian metric induced by $\hat{\bm{\Sigma}}^{-1}$ (see \cref{appa_diff_geometry} for more details). The term $\sin{\sigmainvangle{\bm{x}}{\bm{1}}}$ will be minimized when $\bm{x}$ is linearly dependent with $\bm{1}$, and it will be maximized when $\bm{x}$ is perpendicular (with respect to the Riemannian metric induced by $\hat{\bm{\Sigma}}^{-1}$) to $\bm{1}$. In other words, because we are considering the denominator of the bias, the flatter $\bm{x}$ becomes, the larger the bias. This behavior supports the insights from research into the analysis model source of spatial confounding: $\bm{x}$ which are ``too'' spatially smooth \textit{can} distort inference on $\beta_x$.
Pulling these insights together, we see that, generally speaking, bias will decrease with a spatial analysis model in settings where $\bm{x}$ and $\bm{z}$ are spatially smooth or in cases in which the low frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$ are flat. We emphasize again that what it means to be spatially smooth depends on the relationship of $\bm{x}$ and $\bm{z}$ with the low frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$. However, for cases when $\bm{x}$ is not only spatially smooth, but flat, the numerator and denominator of \eqref{eq:obsglsfixed} work in opposite directions. At the extreme, when $\bm{x}$ is collinear with $\bm{1}$, the bias will be 0 (where we use the mathematical convention that $\frac{0}{0} =0$). However, as $\bm{x}$ becomes flatter, it is possible that the denominator will shrink faster than the numerator in some settings. In this case, the flatness of $\bm{x}$ can effectively serve to increase the bias. This reinforces the observations made by researchers influenced by analysis model spatial confounding. Finally, we note that for the case of collinearity between $\bm{x}$ and $\bm{z}$ (i.e., returning to $\bm{z} = \alpha \bm{x}$, $\alpha \neq 0$), the overall bias term is $\alpha \beta_z$ for both spatial analysis models and non-spatial analysis models. This suggests, contrary to some research in data generation spatial confounding, that bias induced by collinearity between $\bm{x}$ and $\bm{z}$ is not exacerbated by fitting a spatial analysis model.
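The collinear case can be verified directly: substituting $\bm{z} = \alpha\bm{x}$ into the ratio of inner products makes the numerator exactly $\alpha$ times the denominator, so the ratio equals $\alpha$ under any working precision, spatial or not. A check of this cancellation (ours; the helper name is an assumption):

```python
import numpy as np

def bias_ratio(x, z, W):
    """(||1||^2 <x,z> - <x,1><z,1>) / (||1||^2 ||x||^2 - <x,1>^2) under W."""
    one = np.ones_like(x)
    ip = lambda a, b: a @ W @ b
    num = ip(one, one) * ip(x, z) - ip(x, one) * ip(z, one)
    den = ip(one, one) * ip(x, x) - ip(x, one) ** 2
    return num / den

rng = np.random.default_rng(3)
n = 15
x = rng.normal(size=n)
alpha = -0.7
z = alpha * x                        # collinear covariates
M = rng.normal(size=(n, n))
W = M @ M.T + np.eye(n)              # arbitrary positive-definite precision
```

The ratio is $\alpha$ whether `W` is the identity (non-spatial) or a spatial precision, which is the sense in which the collinearity-induced bias is unchanged by the choice of analysis model.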
\subsection{Bias: Adjusted Spatial Analysis Models} \label{subsec:biasadj} In this section, we consider the impact of the analysis model on inference on $\beta_x$ for the GSEM and Spatial+ approaches referenced in \cref{subsec:alleviating}. Recall, these models were developed to improve inference on $\beta_x$ when certain assumptions about the data generation process are assumed to be true. For the GSEM and Spatial+ methods, these assumptions include that $\bm{x}$ is a function of $\bm{z}$ plus noise, e.g., $\bm{x} = \beta \bm{z} + \bm{\epsilon}_x$. When this is the case, the data generation source of spatial confounding suggests that fitting GSEM or Spatial+ will reduce bias relative to a spatial analysis model or non-spatial analysis model.
We take a moment to give details on both approaches. The GSEM approach, summarized in \cref{gsem}, is equivalent to replacing $\boldsymbol{y}$ and $\bm{x}$ with $\bm{r}_{\y}$ and $\bm{r}_{\x}$ \citep{Thaden}. These latter variables are defined to be the residuals, respectively, from spatial analysis models using $\boldsymbol{y}$ and $\bm{x}$ as the response variable with no covariates. These residuals are then used to fit a non-spatial analysis model of the form \eqref{eq:OLSmodel}, and inference for $\beta_x$ is based on the outcome. We note that while \citet{Thaden} did claim that the GSEM approach is equivalent to these steps, their work did not explore this equivalence. \citet{dupont2020spatial+} utilized the approach for GSEM described in \cref{gsem} and found that the GSEM approach improved inference only when smoothing was used in Steps 1 and 2, and we adopt this convention from here on. The Spatial+ approach, summarized in \cref{spatialplus}, involves replacing $\bm{x}$ with $\bm{r}_{\x}$. The analysis model used for inference is then a spatial regression analysis model with response $\boldsymbol{y}$ and covariate $\bm{r}_{\x}$.
\begin{analysis}[GSEM] \label{gsem} The GSEM approach can be summarized as follows: \begin{enumerate}
\item Define $\bm{r}_{\x}$ to be the residuals from a spatial regression model with $\bm{x}$ as the response and only an intercept
\item Define $\bm{r}_{\y}$ to be the residuals from a spatial regression model with $\boldsymbol{y}$ as the response and only an intercept
\item Fit an analysis model of the form \eqref{eq:OLSmodel} with response $\bm{r}_{\y}$ and covariate $\bm{r}_{\x}$ \end{enumerate} \end{analysis}
\begin{analysis}[Spatial+] \label{spatialplus} The Spatial+ approach can be summarized as follows: \begin{enumerate}
\item Define $\bm{r}_{\x}$ to be the residuals from a spatial regression model with $\bm{x}$ as the response and only an intercept
\item Fit an analysis model of the form \eqref{eq:genericadjSpatial} with response $\boldsymbol{y}$ and covariate $\bm{r}_{\x}$ \end{enumerate} \end{analysis}
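Under a fixed working precision $\bm{W} = \hat{\bm{\Sigma}}^{-1}$, both procedures reduce to a few lines of linear algebra. The sketch below (ours; how $\bm{W}$ is estimated, by REML or a Bayesian fit, is abstracted away) also makes the translation point discussed later in this subsection immediate: since $\bm{r}_{\x}$ from an intercept-only GLS fit is a translation of $\bm{x}$, Step 2 of Spatial+ returns exactly the slope of the corresponding spatial model fit to the original $\bm{x}$ when the same $\bm{W}$ is reused.

```python
import numpy as np

def gls_coef(W, y, X):
    """GLS coefficients under working precision W."""
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def gsem_slope(y, x, W):
    """GSEM: residualize y and x with intercept-only spatial fits, then OLS."""
    n = len(y)
    one = np.ones((n, 1))
    r_x = x - gls_coef(W, x, one)[0]
    r_y = y - gls_coef(W, y, one)[0]
    D = np.column_stack([np.ones(n), r_x])
    return np.linalg.solve(D.T @ D, D.T @ r_y)[1]   # non-spatial final fit

def spatialplus_slope(y, x, W):
    """Spatial+: residualize x, then a spatial fit of y on [1, r_x]."""
    n = len(y)
    one = np.ones((n, 1))
    r_x = x - gls_coef(W, x, one)[0]
    D = np.column_stack([np.ones(n), r_x])
    return gls_coef(W, y, D)[1]                     # spatial final fit
```

The exact agreement of the Spatial+ slope with the plain spatial slope here holds because, with an intercept in the design, translating the covariate leaves the slope coefficient unchanged.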
Both GSEM and Spatial+ can be framed as special cases of adjusted spatial regression analysis models of the form \eqref{eq:genericadjSpatial}. To see this, we assume, unless otherwise stated, that every step of these methods uses spatial analysis models of the form \eqref{eq:genericSpatial} (i.e., we use models of the form \eqref{eq:genericSpatial} to find $\bm{r}_{\y}$ and $\bm{r}_{\x}$ in \cref{gsem} and \cref{spatialplus}, as outlined in Section \ref{subsec:biasadj}). In \cref{thm:adj_bias}, we consider the bias in estimating $\beta_x$ when $\bm{r}_{\x}$ replaces $\bm{x}$ in a final analysis model.
\begin{thm} \label{thm:adj_bias} Let the data generating model be of the form \cref{eq:model_0} with $\boldsymbol{y}$ and $\bm{x}$ known. We assume that $\boldsymbol{r}_x$ are the residuals from a spatial analysis model of the form \eqref{eq:genericSpatial} with response $\bm{x}$ and only an intercept.
If the final analysis model is a spatial analysis model of the form \eqref{eq:genericSpatial} with $\bm{x} = \bm{r}_{\x}$ and results in the estimate $\hat{\bm{\Sigma}}$, then the bias $\bias{\hat{\beta}_{x}^{AS} }= \beta_x - E\left(\hat{\beta}_{x}^{AS} \right)$ can be expressed as: \begin{eqnarray} \label{eq:obsadjglsfixed}
\frac{ \beta_z \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2} . \end{eqnarray} \end{thm} \begin{proof} See \cref{app:obsadjglsfixed} for the proof. \end{proof}
\begin{lemma} \label{ols_special_ols_adj} If a non-spatial final analysis model of the form \eqref{eq:OLSmodel} is used instead of a spatial analysis model in \cref{thm:adj_bias}, then $\bias{\hat{\beta}_{x}^{AS} }$ is: \begin{eqnarray} \label{eq:obsadjolsfixed} \frac{ \beta_z \left( \euclnorm{\bm{1}}^2 \euclmetric{\bm{x}}{\bm{z}} - \euclmetric{\bm{x}}{\bm{1}} \euclmetric{\bm{z}}{\bm{1}} \right) } { \euclnorm{\bm{1}}^2 \euclnorm{\bm{x}}^2 - \left[ \euclmetric{\bm{x}}{\bm{1}} \right]^2}. \end{eqnarray} \end{lemma} \begin{proof} Consider the case when $\hat{\bm{\Sigma}}^{-1} = \bm{I}.$ \end{proof} The bias for the GSEM method is equivalent to the bias for a non-spatial analysis model (given in \eqref{eq:obsadjolsfixed}). Thus, an immediate insight is that the GSEM approach will result in inference on $\beta_x$ equivalent to that of a non-spatial analysis using the originally observed $\boldsymbol{y}$ and $\bm{x}$. We note that \citet{dupont2020spatial+} showed, theoretically, that the bias from the GSEM methodology would be equivalent to the bias from a non-spatial model in the context of thin plate splines when no smoothing occurs. However, their simulations did not find this to be true when smoothing was used. There is a close connection between thin plate splines and mixed models \citep{ruppert2003semiparametric}. Here, our mixed model results are most akin to a thin plate spline model where smoothing is used, and thus our results are at odds with those in \citet{dupont2020spatial+}. Whereas \citet{dupont2020spatial+}'s simulations suggest that the GSEM methodology would improve inference relative to the spatial model when smoothing is used, our results suggest the opposite. As discussed previously, the non-spatial bias will tend to be larger than the bias from a spatial analysis model when $\bm{x}$ is spatially smooth. Thus, in cases where $\bm{x}$ is spatially smooth, the GSEM method can lead to inferior inference compared to a spatial analysis model.
On the other hand, the bias for the Spatial+ method is of the same form as the bias for a spatial analysis model (given in \eqref{eq:obsadjglsfixed}). All of the discussion involving the behavior of these terms in \cref{subsec:analysis} is relevant here. Interpreting the impact of performing the Spatial+ method relative to the performance of a spatial analysis model is difficult, however. For example, consider comparing the Spatial+ method to the spatial analysis model employed in Step 2 of \cref{spatialplus}. The difference between using $\bm{r}_{\x}$ and $\bm{x}$ in this spatial analysis model boils down to the estimated $\hat{\bm{\Sigma}}^{-1}$. If $\bm{r}_{\x}$ is defined to be the residuals from a spatial analysis model of the form \eqref{eq:genericSpatial}, then $\bm{r}_{\x} = \bm{x} - \delta \bm{1}$ for some scalar $\delta$. The fact that $\bm{r}_{\x}$ is a translation of $\bm{x}$ suggests that the estimated covariances will likely be similar when the analysis uses a positive-definite covariance structure (we would expect the largest difference to be in the estimation of $\beta_0$). This insight will not necessarily hold for models employing GMRF's, where the precision may be singular. Proving these insights theoretically is difficult. We rely on simulation studies in \cref{sec:simstudies} to explore these ideas more thoroughly. If these intuitions hold, however, then the Spatial+ method will yield almost equivalent inference to that of a traditional spatial analysis model with positive-definite covariance structures.
Importantly, we emphasize these results suggest adjusted spatial analysis models will not improve inference for regression coefficients even in the settings they are designed to be used in when spatial linear mixed models are used to fit them.
\section{Simulation Studies} \label{sec:simstudies} In all of the following simulation studies, we consider settings that have been identified in the literature as times when spatial confounding can distort inference for a regression coefficient. For each of these settings, we consider the absolute value of the bias for a regression coefficient for non-spatial, spatial, and adjusted spatial analysis models. Each simulation study is designed to explore whether insights from analysis model spatial confounding explored in \cref{subsec:analysis} or the data generation spatial confounding explored in \cref{subsec:datagen} have any relevance to the patterns of bias observed for estimates of regression coefficients.
The results of this paper have primarily focused on spatial linear mixed models that involve positive-definite covariance structures. However, the intrinsic conditional autoregressive (ICAR) model plays an important role in the spatial confounding literature. It was the model first considered in \citet{Hodges_fixedeffects} and \citet{Reich} in the modern introduction to the phenomenon of spatial confounding. As referenced previously, the Spatial+ methodology was originally developed for the thin plate spline setting, but the authors stated that the methodology should extend to the ICAR model \citep{besag1991bayesian}. The methodology in \citet{Thaden} was also originally proposed for areal data where the ICAR model is traditionally used, and the spatial model these authors considered is thought to be equivalent to the ICAR model. Thus, in these simulation studies, we consider both geostatistical data fit to the class of models considered in our results (referred to as the ``Geostatistical data setting'') as well as areal data fit to an ICAR model (referred to as the ``Areal data setting'' because the ICAR model employs a GMRF).
Because we have not previously defined the ICAR model, we take a moment to do so here. The ICAR model incorporates spatial dependence for areal data with the introduction of an underlying, undirected graph $G = (V, E)$. Non-overlapping spatial regions that partition the study area are represented by vertices, $V = \{ 1, \ldots, n\}$, with edges $E$ defined so that each pair $(i,j)$ represents the proximity between region $i$ and region $j$. We represent $G$ by its $n \times n$ binary adjacency matrix $\bm{A}$ with entries defined such that $\textrm{diag}(\bm{A}) = \bm{0}$ and $\bm{A}_{i,j} = \mathbbm{1}_{(i,j) \in E, i \neq j}$. The ICAR model can be considered a generalization of the spatial analysis model of the form \eqref{eq:genericSpatial}, obtained by stating that the spatial random effect has a distribution proportional to a multivariate normal distribution with mean $\bm{0}$ and precision matrix $\tau^2 \left( \textrm{diag}\left( \bm{A} \bm{1} \right) - \bm{A} \right) = \tau^2 \bm{Q}$, where $\tau^2$ controls the strength of the spatial dependence and $\bm{Q}$ is the graph Laplacian.
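The graph Laplacian precision annihilates $\bm{1}$, which is both why the matrix is rank deficient and why the intercept is implicit for a connected graph. A minimal construction (ours) for a small connected graph:

```python
import numpy as np

def icar_precision(A, tau2=1.0):
    """ICAR precision tau^2 * (diag(A @ 1) - A), a scaled graph Laplacian."""
    Q = np.diag(A.sum(axis=1)) - A
    return tau2 * Q

# 4-cycle: regions 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
Q = icar_precision(A, tau2=2.0)
```

Here `Q @ 1 = 0`, so the density is improper along the constant direction and a separate intercept would be unidentified.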
This precision matrix is not of full rank, so we use a Bayesian analysis to fit all the relevant spatial models. We note there is a close connection between certain types of Bayesian analysis in this setting and modeling spatial random effects through the use of a smoothing penalty, as is done in the thin plate spline setting \citep{dupont2020spatial+,Rue,kimeldorf1970spline}. Because the graphs we consider are connected, there is an implicit intercept present in the ICAR model \citep{Pac_ICAR}. Therefore, we omit an intercept from our spatial analysis models. For a Bayesian analysis, $\sigma^2$ and $\tau^2$ require priors. Here, we give them Inverse-Gamma priors with shape and rate parameters each equal to 0.01. Finally, to make the non-spatial model comparable, we also use a Bayesian analysis, giving the $\sigma^2$ parameter an Inverse-Gamma prior with the same hyperparameters as the spatial model. All models are fit using Markov chain Monte Carlo (MCMC) algorithms with Gibbs updates. All chains are run for 80,000 iterations with a burn-in of 20,000.
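The Gibbs updates for the variance parameters have closed forms because the Inverse-Gamma prior is conjugate to the Gaussian likelihood: with prior $\textrm{IG}(a, b)$ and a residual vector $\bm{e}$ of length $n$, the full conditional for $\sigma^2$ is $\textrm{IG}(a + n/2,\; b + \tfrac{1}{2}\bm{e}^T\bm{e})$. A sketch of that single update (ours; the rest of the sampler is omitted):

```python
import numpy as np

def update_sigma2(resid, a=0.01, b=0.01, rng=None):
    """One Gibbs draw of sigma^2 from its Inverse-Gamma full conditional."""
    rng = np.random.default_rng() if rng is None else rng
    shape = a + resid.size / 2.0
    rate = b + 0.5 * (resid @ resid)
    # if g ~ Gamma(shape, rate) then 1/g ~ Inverse-Gamma(shape, rate);
    # numpy's gamma is parameterized by scale = 1/rate
    return 1.0 / rng.gamma(shape, 1.0 / rate)
```

The update for $\tau^2$ is analogous, with the quadratic form taken in the (scaled) Laplacian precision rather than the identity.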
\subsection{Non-spatial and Spatial Analysis Models} \label{sim1} In this sub-section, we use simulation studies to compare a spatial and a non-spatial model. For the geostatistical data setting, we simulate data to ensure that spatial confounding from a data generation perspective is present. For the areal data setting, we simulate data to ensure that spatial confounding from an analysis model perspective is present.
\subsubsection{Geostatistical Data Setting} \label{sim1:slmm} In this subsection, we simulate data to replicate the setting explored in \cref{subsec:datagen}. The data are all generated from a model of the form \eqref{eq:model_0} as follows: \[ \boldsymbol{y}_i = 0.3 + \bm{x}_i + 2 \bm{z}_i + \bm{\epsilon}_i, \] where $\epsilon_i$ are independently simulated from a normal distribution with mean 0 and variance 0.1.
The 200 locations of the data are randomly generated on the $[0,10] \times [0,10]$ window once, and these locations are then held fixed. The realizations $\bm{x}$ and $\bm{z}$ are simulated from mean zero spatial processes, denoted respectively $\bm{X}$ and $\bm{Z}$, with spatial covariance structures defined by $\Rcovst{} = 0.1 \exp \{ \frac{-h}{\theta} \}$ for Euclidean distance $h$ (i.e., an exponential covariance).
We define $\bm{X} = {\bm{X}}_c + {\bm{X}}_u$ and $\bm{Z}$ as follows: $\textrm{Cov} \left( \bm{X} \right) = \Rtwo{c} + \Rtwo{u} $, $\textrm{Cov} \left( \bm{Z} \right) = \Rtwo{c} $, and
$\textrm{Cov} \left( \bm{X}, \bm{Z} \right) = \rho \Rtwo{c}$.
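One way to realize these covariance structures (a sketch under our own assumptions; the construction $\bm{x} = \rho \bm{w}_1 + \sqrt{1-\rho^2}\, \bm{w}_2 + \bm{w}_3$, the random seed, and the particular parameter values are illustrative choices) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta_c, theta_u, rho = 200, 5.0, 1.0, 0.6   # one grid point of the study

locs = rng.uniform(0, 10, size=(n, 2))           # fixed locations
h = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=2)

def expcov(h, theta, sill=0.1):
    """Exponential covariance 0.1 * exp(-h / theta) as in the text."""
    return sill * np.exp(-h / theta)

Rc, Ru = expcov(h, theta_c), expcov(h, theta_u)
jitter = 1e-8 * np.eye(n)                        # numerical stabilizer
Lc = np.linalg.cholesky(Rc + jitter)
Lu = np.linalg.cholesky(Ru + jitter)

# With w1, w2 ~ N(0, Rc) and w3 ~ N(0, Ru) independent,
#   z = w1 and x = rho*w1 + sqrt(1-rho^2)*w2 + w3
# gives Cov(z) = Rc, Cov(x) = Rc + Ru, Cov(x, z) = rho * Rc,
# with x_c = rho*w1 + sqrt(1-rho^2)*w2 and x_u = w3.
w1 = Lc @ rng.standard_normal(n)
w2 = Lc @ rng.standard_normal(n)
w3 = Lu @ rng.standard_normal(n)
z = w1
x = rho * w1 + np.sqrt(1 - rho**2) * w2 + w3

# Response from the data generation model y = 0.3 + x + 2z + eps
y = 0.3 + x + 2 * z + rng.normal(0, np.sqrt(0.1), n)
```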
We generate 1000 datasets for each $\left( \theta_u, \theta_c, \rho \right) \in \{1,5,10\} \times \{1,5,10\} \times \{-.9,-.6,-.3,0,.3, .6,.9\}$. For each dataset, we then fit a non-spatial analysis model of the form \eqref{eq:OLSmodel} and a spatial analysis model of the form \eqref{eq:genericSpatial}. For the latter, $g()$ is assumed to have spatial structure defined by $\Rcovst{} = \sigma_{s}^2 \exp \{ \frac{-h}{\theta} \} + \sigma_{\epsilon}^2 \bm{I}$, with unknown $\theta$, $\sigma_{s}^2$, and $\sigma_{\epsilon}^2$. Both analysis models are fit via residual maximum likelihood (REML).
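The contrast between the two analysis models can be seen in a simplified form (our illustration, not the paper's REML fit: here the covariance parameters are treated as known and plugged in, whereas the study estimates them): the non-spatial model estimates $\beta$ by ordinary least squares, while the spatial model amounts to generalized least squares under the fitted covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
locs = rng.uniform(0, 10, size=(n, 2))
h = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=2)

# Assumed-known covariance sigma_s^2 exp(-h/theta) + sigma_eps^2 I
Sigma = 0.1 * np.exp(-h / 5.0) + 0.1 * np.eye(n)

xcov = rng.standard_normal(n)
X = np.column_stack([np.ones(n), xcov])          # intercept + covariate
y = 0.3 + xcov + np.linalg.cholesky(Sigma) @ rng.standard_normal(n)

Si = np.linalg.inv(Sigma)
# GLS: beta_hat = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
# OLS: beta_hat = (X' X)^{-1} X' y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```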
We consider the absolute value of the bias for $\beta_x$ for both the non-spatial and spatial analysis models. Recall from \cref{subsec:datagen} that many researchers use results from \citet{Pac_spatialconf} (visualized in \cref{fig:data_illustrationa}) to support statements that fitting a spatial analysis model will lead to increased bias whenever $\theta_c \ll \theta_u$. In \cref{fig:bias_analysismodel}, we can see that the absolute bias tends to be larger for non-spatial models than for spatial models for all possible combinations of $\theta_c$ and $\theta_u$. These results support our findings in \cref{subsec:analysis} (as well as findings in Section 2.1 of \citet{Pac_ICAR} regarding a spatial model fit via REML) that spatial analysis models will tend to reduce bias relative to non-spatial models. The discrepancy may simply be due to the fact that \citet{Pac_ICAR}'s original observation was made about a different type of spatial structure. However, when we repeated that analysis for the exponential spatial structure used here (visualized in \cref{fig:data_illustrationb}), the data generation focus on spatial confounding suggested that bias for spatial analysis models would increase (relative to non-spatial models) when $\theta_c < 2$; this did not appear to be the case in these simulations. Importantly, this is evidence that focusing on the data generation source of spatial confounding alone may not suffice to explain bias in regression coefficients.
Of course, \cref{fig:bias_analysismodel} considers how bias behaved across all datasets. If we compare the absolute value of the bias from a non-spatial analysis model and a spatial analysis model for a fixed dataset, the spatial analysis model resulted in less bias approximately 72\% of the time. This remained true across all possible combinations of $\theta_c$ and $\theta_u$ (with the percentage of times the spatial model was preferable varying from 62\% to 77\%). There did not appear to be a strong pattern in the bias as a function of $\rho$.
\begin{center} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim511_absbias.png} \end{minipage}
\captionof{figure}{Boxplots of the absolute value of the observed bias, i.e., $|\textrm{Bias } (\hat{\beta_x}) |$. Plot made with the \texttt{ggplot2} package \citep{ggplot2}. } \label{fig:bias_analysismodel} \end{center}
\begin{comment} \begin{center} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim1_absbias.png}
\captionof*{figure}{a) $|\textrm{Bias } (\hat{\beta_x}) |$} \end{minipage}
\begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim1_absbiasbounds.png}
\captionof*{figure}{b) Bounds on $|\textrm{Bias } (\hat{\beta_x}) |$ } \end{minipage} \captionof{figure}{Boxplots of a) the absolute value of the observed bias, and b) the theoretical bound on the absolute value of bias. All plots were made with \citet{ggplot2}. } \label{fig:bias_illustration} \end{center} \end{comment}
Across all combinations of $\theta_c$ and $\theta_u$, the maximum absolute bias observed for the spatial model was 0.74. On the other hand, the maximum absolute bias for the non-spatial model was 1.6, and approximately 1.1\% of the time the non-spatial analysis model resulted in an absolute value of bias over 1. To get a feel for the more general trends, we consider cases in which the bias for an analysis model accounted for over a 25\% change in $\beta_x$ (i.e., when $\left| \frac{ \hat{\beta_x} - \beta_x }{\beta_x} \right| > .25$). Such cases occurred approximately 15\% of the time for the spatial analysis model and 42\% of the time for the non-spatial model. How often these more extreme cases of bias occurred for the spatial analysis model varied across particular combinations of $\theta_u$ and $\theta_c$. In particular, when $\theta_u = 1$, the bias from a spatial analysis model accounted for a 25\% change in the estimate of $\beta_x$ less than 1\% of the time, compared to over 20\% of the time when $\theta_u>1$. The insights from the data generation perspective of spatial confounding were not particularly helpful in predicting the performance (with respect to absolute bias) of the spatial analysis model.
In summary, these results suggest that, on average, in the presence of either a spatially smooth $\bm{x}$ or residual spatial dependence ($\bm{z}$), a spatial analysis model will result in less bias than a non-spatial analysis model. Perhaps more importantly, the magnitude of the bias when things go wrong is much larger for the non-spatial analysis model than the spatial analysis model. We emphasize that these results suggest that the spatial analysis model outperforms the non-spatial analysis model in settings where the data generation spatial confounding focus suggests the opposite should happen.
\subsubsection{Areal Data Setting} \label{sim1:icar} In the second setting, we work with areal data on an $11 \times 11$ grid on the unit square. Recall that work on analysis model spatial confounding suggests that a covariate which is collinear with low-frequency eigenvectors of the precision matrix of the spatial random effect could induce bias in the estimation of $\beta_x$. This is thought to be true regardless of whether there is a ``missing'' spatially dependent covariate. Here, we attempt to explore whether that is the case by simulating datasets with both a spatially-smooth covariate and with a covariate without much spatial structure.
For all simulated datasets, the response $\boldsymbol{y}$ is generated from a model of the form \eqref{eq:model_0} as follows: \[ \boldsymbol{y}_i = 0.3 + 3 \bm{x}_i + \bm{\epsilon}_i, \] where each $\epsilon_i$ is independently distributed from a normal distribution with mean 0 and variance 1. We explicitly leave out any residual spatial dependence from the data generation model in order to explore the impact of a covariate alone.
We consider two possible choices of $\bm{x}$: one in which $\bm{x}$ is spatially-smooth from an analysis model spatial confounding perspective and one in which it is not. For the latter category, we simply generate $\bm{x}$ from a normal distribution with mean 0 and variance $\sqrt{.06}$ once, as depicted in the left plot of \cref{fig:illustrations_covariate}. We hold this vector fixed and simulate the response variable 100 times. In order to generate the spatially-smooth covariate we use the eigenvectors of the graph Laplacian $\bm{Q}$. For the ICAR model, there is not a variance-covariance matrix, but rather the singular precision matrix. However, we can treat this as the pseudo-inverse of a variance-covariance matrix \citep{Pac_ICAR}. In this case, then, if $\bm{x}$ is strongly correlated with a low-frequency eigenvector of $\bm{Q}$, the spatial analysis model may perform more poorly than the non-spatial model. Thus, we let $\bm{x}$ be the eigenvector of $\bm{Q}$ associated with the smallest non-zero eigenvalue, depicted in the right plot of \cref{fig:illustrations_covariate}. As before, we hold this vector fixed for 100 simulated datasets.
For each of the 200 datasets, we consider 2 analysis models: 1) a non-spatial analysis model, and 2) a spatial analysis model. Here, the spatial analysis model is the ICAR model. We use a Bayesian approach for both the spatial and the non-spatial analysis models, as described in the introduction of this section.
When $\bm{x}$ is not spatially smooth (i.e., randomly generated from a normal distribution), the spatial analysis and non-spatial analysis models gave relatively similar inferences for $\beta_x$. In the left hand plot of \cref{fig:results_covariate}, we see that across datasets, the absolute bias was relatively similar for both analysis models. In the right hand plot of \cref{fig:results_covariate}, we see that for individual datasets, the inference was also fairly similar for the two analysis models. In this plot, the absolute bias for the spatial analysis model is on the x-axis and the absolute bias for the non-spatial analysis model is on the y-axis. Datasets for which the covariate is not spatially smooth are colored red. The black dashed line represents when the spatial and non-spatial analysis models had equivalent bias, while the gray dashed lines represent where the absolute bias differed by 1 between the analysis models. A triangular shape indicates that the spatial analysis model had a smaller absolute bias than the non-spatial analysis model. The spatial model resulted in less absolute bias 49\% of the time.
\begin{center} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/non-smooth-x-cropped.jpg} \end{minipage}
\begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/spatial-smooth-x-cropped.jpg} \end{minipage} \captionof{figure}{Visualization of a) $\bm{x}$ generated from a normal distribution (``random''), b) $\bm{x}$ defined as the low-frequency eigenvector of $\bm{Q}$ (``spatially smooth''). Plots made with \texttt{raster} package \citep{rasterpackage}. } \label{fig:illustrations_covariate} \end{center}
When $\bm{x}$ is spatially smooth (i.e., it is the low frequency eigenvector of $\bm{Q}$), the story does change a bit. In the left hand plot of \cref{fig:results_covariate}, we see that across datasets, the absolute bias of the spatial model has a slightly more right-skewed distribution. In the right hand plot of \cref{fig:results_covariate}, we see that for individual datasets, the inference was still fairly similar for the two analysis models. In this plot, recall that the absolute bias for the spatial analysis model is on the x-axis and the absolute bias for the non-spatial analysis model is on the y-axis. Datasets for which the covariate is spatially smooth are colored blue. The spatial model resulted in less absolute bias 39\% of the time.
\begin{center} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim512_absbias.png} \end{minipage}
\begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim512_absbiascompare.png} \end{minipage} \captionof{figure}{On the left, we see boxplots of the absolute bias for the spatial and non-spatial analysis models. On the right, the points are individual simulated datasets with the absolute bias from the spatial analysis model on the x-axis and absolute bias from the non-spatial analysis model on the y-axis. Plots made with the \texttt{ggplot2} package \citep{ggplot2}. } \label{fig:results_covariate} \end{center}
In sum, there is evidence that a spatially-smooth covariate (i.e., one correlated with a low frequency eigenvector of the graph Laplacian in this setting) may cause the spatial analysis model to have higher absolute bias than the non-spatial analysis model. However, the impact may not be particularly large in magnitude. Here we considered a covariate that was perfectly correlated with the eigenvector associated with the smallest (non-zero) eigenvalue. This is essentially a worst-case scenario, and the spatial analysis model and non-spatial analysis models still yielded similar inference. In fact, as seen in \cref{fig:results_covariate}, when the absolute bias for the non-spatial and spatial model differed by more than 1, the non-spatial model tended to have the higher absolute bias.
\subsection{Spatial Analysis and Adjusted Spatial Analysis Models} \label{sim2a} In this sub-section, we generate data to replicate the setting explored in \citet{Thaden} and \citet{dupont2020spatial+}. For the geostatistical data setting, we seek to explore whether a spatial analysis model of the form \eqref{eq:genericSpatial} induces more bias than two adjusted spatial analysis models of the form \eqref{eq:genericadjSpatial}. We restrict our attention to the Spatial Linear Mixed Model setting, as the impropriety of the ICAR model would require careful consideration of how to define a residual in a model with no covariates.
\subsubsection{Geostatistical Data Setting} In this setting, the 400 locations of the data are randomly generated on the $[0,10] \times [0,10]$ window once, and these locations are then held fixed throughout the subsequent simulations. \citet{Thaden} and \citet{dupont2020spatial+} studied similar set-ups. Both papers considered settings in which: \[ \bm{x} = 0.5 \bm{z} + \epsilon_x. \] \citet{Thaden} chose $\bm{z}$ to be fixed to three possible spatial patterns. \citet{dupont2020spatial+} generated $\bm{z}$ from an exponential process (i.e., a spatial structure of the form \eqref{maternclass} with $\nu=.5$) with $\theta =5$ and then replaced $\bm{z}$ with the fitted values of a spatial thin plate regression spline fitted to it. This approach was meant to ensure that both the response variable and the covariates can be described by thin plate splines, and therefore eliminate bias due to model misspecification. Their supplemental materials suggested that simply using the Gaussian processes yielded similar results (see Web Appendix F of \citet{dupont2020spatial+}). Because we find covariates defined to be the fitted values from a thin plate spline to be very restrictive, we adopt the convention of the supplemental material from \citet{dupont2020spatial+} here (i.e., a spatial structure of the form \eqref{maternclass} with $\nu=.5$ and $\theta =5$). We chose $\epsilon_x$ to be independently distributed from a normal distribution with mean 0 and variance $\sigma_x^2= 0.1$. The response $\boldsymbol{y}$ is generated from a model of the form \eqref{eq:model_0} as follows: \[ \boldsymbol{y}_i = 0.3 + 2 \bm{x}_i - \bm{z}_i + \bm{\epsilon}_i, \] where $\epsilon_i$ are independently generated from a normal distribution with mean 0 and variance $\sigma_y^2=0.1$.
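This data generation step can be sketched as follows (our illustration: the unit sill for the process generating $\bm{z}$, the seed, and the jitter term are assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
locs = rng.uniform(0, 10, size=(n, 2))       # fixed locations
h = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=2)

# z from an exponential process with theta = 5 (sill assumed to be 1)
Rz = np.exp(-h / 5.0)
z = np.linalg.cholesky(Rz + 1e-8 * np.eye(n)) @ rng.standard_normal(n)

# x = 0.5 z + eps_x with sigma_x^2 = 0.1
x = 0.5 * z + rng.normal(0, np.sqrt(0.1), n)

# y = 0.3 + 2x - z + eps with sigma_y^2 = 0.1
y = 0.3 + 2 * x - z + rng.normal(0, np.sqrt(0.1), n)
```

By construction $\bm{x}$ and $\bm{z}$ are highly collinear, which is the feature of this setting that the adjusted spatial analysis models are designed to address.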
We consider 4 analysis models: 1) a non-spatial analysis model of the form \eqref{eq:OLSmodel} (``NS''), 2) a spatial analysis model of the form \eqref{eq:genericSpatial} (``S''), 3) the GSEM adjusted spatial analysis model, and 4) a Spatial+ adjusted spatial analysis model. All models are fit with REML and the spatial random effects are all represented with a spatial structure defined by $\Rcovst{} = \sigma_{s}^2 \exp \{ \frac{-h}{\theta} \} + \sigma_{\epsilon}^2 \bm{I}$, with unknown $\theta$, $\sigma_{s}^2$, and $\sigma_{\epsilon}^2$.
As predicted in \cref{subsec:biasadj}, the GSEM model yields the same inference as the non-spatial analysis model for all simulated datasets. Similarly, the Spatial+ model yields essentially the same observed biases as the spatial analysis model. In \cref{fig:bias_illustration_sim2} we plot the absolute value of the observed bias across all analysis models. The spatial analysis model and Spatial+ model tend to result in significantly less bias than the GSEM and non-spatial models. We note that for a fixed dataset, the spatial analysis model always produced less bias than the non-spatial and GSEM models.
\begin{center} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim522_absbias.png}
\end{minipage} \captionof{figure}{Boxplots of the absolute value of the observed bias. Plot made with the \texttt{ggplot2} package \citep{ggplot2}. } \label{fig:bias_illustration_sim2} \end{center}
\begin{comment} \begin{center} \begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim2_absbias.png}
\captionof*{figure}{a) $|\textrm{Bias } (\hat{\beta_x}) |$} \end{minipage}
\begin{minipage}{0.45\textwidth} \includegraphics[width=\linewidth]{figures/sim2_absbiasbounds.png}
\captionof*{figure}{b) Bounds on $|\textrm{Bias } (\hat{\beta_x}) |$ } \end{minipage} \captionof{figure}{Boxplots of a) the absolute value of the observed bias, and b) the theoretical bound on the absolute value of bias. All plots were made with \citet{ggplot2}. } \label{fig:bias_illustration_sim2} \end{center} \end{comment} In summary, we find that even if there is high collinearity between $\bm{x}$ and $\bm{z}$, the spatial linear mixed model significantly improves inference on $\beta_x$ relative to a non-spatial model. Additionally, the GSEM and Spatial+ methodologies in this context do not improve inference for $\beta_x$. Importantly, this again illustrates that insights from a data generation perspective of spatial confounding may not be particularly useful in explaining the patterns of bias.
\subsection{Approaches for Fitting Adjusted Spatial Analysis Models} The Spatial+ and the GSEM models were proposed in papers that did not utilize spatial linear mixed models in simulation studies. Instead, both considered spatial analysis models fit with the R package \texttt{mgcv}. The Spatial+ method employed thin plate splines, while the GSEM model utilized a smoothing penalty that is intended to be equivalent to the ICAR model. In this subsection, we consider both the geostatistical data setting and the areal data setting, and we now fit our spatial models with the \texttt{mgcv} package, utilizing thin plate splines as in \citet{dupont2020spatial+}.
\subsubsection{Geostatistical Data Setting} \label{sim3:slmm} Here, we simulate data exactly as in \cref{sim2a}. We explore fitting 4 analysis models. The first three are: 1) a spatial analysis model (``S PS''), 2) A GSEM analysis model (``GSEM PS''), and 3) a Spatial+ model (``Spatial + PS''). We fit all associated spatial models with the default settings of the \texttt{mgcv} package as in \citet{Thaden} and \citet{dupont2020spatial+}. The default settings involve using penalized thin plate splines. We note that manual increases of the number of knots did not substantially change results for a simulated dataset, so we simply used the default selections. For comparison, we include the fourth analysis model (``S''). This is the same spatial analysis model considered in \cref{sim2a} fit via REML.
We plot the absolute value of the biases for all analysis models in \cref{fig:bias_illustration_sim3}. If we only considered models fit with the \texttt{mgcv} package, we note that both the Spatial+ and GSEM analysis models tend to have slightly smaller bias than the spatial model. However, the spatial analysis model fit via REML outperforms all models.
\begin{center} \begin{minipage}{.45\textwidth} \includegraphics[width=\linewidth]{figures/sim531_absbias.png} \end{minipage}
\captionof{figure}{Boxplots of the absolute value of the observed bias. Plot made with the \texttt{ggplot2} package \citep{ggplot2}. } \label{fig:bias_illustration_sim3} \end{center}
This suggests, as did our results in \cref{subsec:analysis} and \cref{subsec:alleviating}, that collinearity between $\bm{x}$ and $\bm{z}$ is not necessarily problematic in the spatial linear mixed model setting. In fact, relative to both a non-spatial model (explored in \cref{sim2a}) and the two adjusted spatial analysis models here, the spatial analysis model is the best at reducing absolute bias for estimates of $\beta_x$. In the context of the broader literature, we note that the finding that Spatial+ and GSEM improve inference relative to penalized thin plate splines agrees with the simulation results in the Spatial+ paper. In other words, it is possible that the smoothing in this setting, in combination with collinearity between $\bm{x}$ and $\bm{z}$, may increase absolute bias in the estimates for $\beta_x$. This problem may be mitigated by the Spatial+ and GSEM approaches; however, these approaches still result in increased bias relative to a spatial linear mixed model.
\subsubsection{Areal Data Setting} \label{sim3:grmf} In the second setting, we work with areal data on an $11 \times 11$ grid on the unit square as in \cref{sim1:icar}. Following the simulation studies in \citet{Thaden}, we hold $\bm{z}$ fixed for the generation of all data. We then define the rest of the model as in \citet{Thaden} as follows: \[ \bm{x} = 0.5 \bm{z} + \epsilon_x, \] where $\epsilon_x$ is independently distributed from a normal distribution with mean 0 and variance $\sigma_x^2$. The response $\boldsymbol{y}$ is generated from a model of the form \eqref{eq:model_0} as follows: \[ \boldsymbol{y}_i = 0.3 + 3 \bm{x}_i - \bm{z}_i + \bm{\epsilon}_i, \] where each $\epsilon_i$ is independently distributed from a normal distribution with mean 0 and variance $\sigma_y^2$. In \citet{Thaden}, the authors observed the largest differences between the GSEM analysis model and the spatial analysis model for $(\sigma_x, \sigma_y) \in \{ (0.15,1), (0.15,0.15) \}$. We consider these two settings, and choose $\bm{z}$ to be the eigenvector of $\bm{Q}$ associated with the smallest non-zero eigenvalue as in \cref{sim1:icar} (depicted in \cref{fig:illustrations_covariate}b). We choose this $\bm{z}$ to allow for some comparisons to our findings in \cref{sim1:icar}. Although the exact $\bm{x}$ and its sample variance will, of course, differ, $\bm{x}$ will still tend to be highly collinear with the lowest-frequency eigenvector of the graph Laplacian.
For both combinations of $(\sigma_x, \sigma_y)$, we simulate 100 datasets for analysis. We consider four analysis models. The first three are again: 1) a spatial analysis model (``S PS''), 2) a GSEM analysis model (``GSEM PS''), and 3) a Spatial+ model (``Spatial + PS''). All these models are again fit with the \texttt{mgcv} package, utilizing thin plate splines as in \cref{sim3:slmm}. For comparison, we also consider the ICAR model (``S''). In \cref{fig:bias_illustration_sim3b}, we summarize the results for the absolute bias. The absolute bias, unsurprisingly, increases for all analysis models as the ratio $\frac{\sigma_x}{\sigma_y}$ decreases. For both combinations of $(\sigma_x, \sigma_y)$, the ICAR analysis model tends to have the smallest absolute bias, followed by the penalized thin plate analysis model. The GSEM and Spatial+ analysis models gave similar inferences, and tended to have higher absolute bias than either of the unadjusted spatial models.
\begin{center} \begin{minipage}{.45\textwidth} \includegraphics[width=\linewidth]{figures/sim532_ICAR_absbias.png} \end{minipage}
\captionof{figure}{Boxplots of the absolute value of the observed bias. On the left hand side $(\sigma_x, \sigma_y) = (0.15,0.15)$, and on the right hand side $(\sigma_x, \sigma_y) = (0.15,1)$. Plot made with the \texttt{ggplot2} package \citep{ggplot2}. } \label{fig:bias_illustration_sim3b} \end{center}
We take a moment to compare our results from the ICAR analysis model here to the ICAR analysis model in \cref{sim1:icar}. Here, the setting most similar to that in the previous section is when $\sigma_y=1$, and in both simulation studies the bias we observe is very similar. For example, here the mean of the absolute biases was approximately 2.4 and the median of the absolute biases was approximately 1.9. In \cref{sim1:icar}, the mean of the absolute biases was approximately 2.1 and the median was approximately 1.9. Recall that in \cref{sim1:icar}, there was no missing confounder ($\bm{z}$) or residual spatial dependence in the data generation model. For the ICAR analysis model, the fact that we observe similar bias patterns in \cref{sim1:icar} and \cref{sim3:grmf} offers evidence that the collinearity between $\bm{x}$ and $\bm{z}$ is not the sole source of the bias we observe here. Instead, it would seem the collinearity between $\bm{x}$ and the low-frequency eigenvectors of the graph Laplacian is the primary driver of bias.
We note that the fact that the Spatial+ and GSEM approaches yield larger absolute biases (although the differences are not huge) than either the ICAR analysis model or the penalized thin plate splines directly contradicts the findings in \citet{Thaden} and \citet{dupont2020spatial+}. This offers, at least, some evidence that these approaches may not be entirely appropriate in the Bayesian context.
\section{Discussion} \label{sec:conclusion} In this paper, we have synthesized the broad, and often muddled, literature on spatial confounding. We have introduced two broad focuses in the spatial confounding literature: the analysis model focus and the data generation focus. Using the spatial linear mixed model, we have shown how papers focused on the former category often conceptualize the problem of spatial confounding as originating from the relationship between an observed covariate $\bm{x}$ and the estimated precision matrix $\hat{\bm{\Sigma}}^{-1}_g$ of a spatial random effect. We then showed how papers focused on the latter category typically identify the problem of spatial confounding as originating from the relationship between an observed covariate ($\bm{x}$) and a collinear, unobserved covariate ($\bm{z}$).
Our results highlight two important conclusions: 1) the original conceptualization of spatial confounding as ``problematic'' may not have been entirely correct, and 2) the analysis model and data generation perspectives of spatial confounding can lead to directly contradictory conclusions about whether spatial confounding exists and whether it adversely impacts inference on regression coefficients. With respect to the first point, the modern conceptualization of spatial confounding arose in work by \citet{Reich} and \citet{Hodges_fixedeffects}. In our proposed framework, these papers focused on an analysis model type of spatial confounding. In the context of an ICAR model, they argued that whenever $\bm{x}$ was collinear with a low-frequency eigenvector of the graph Laplacian $\bm{Q}$, the regression coefficients would be biased (relative to the regression coefficients obtained from a non-spatial model). Our results suggest that, in general, collinearity between $\bm{x}$ and low-frequency eigenvectors of the graph Laplacian helps to \emph{reduce} bias in regression coefficients. It is only in relatively extreme cases, where $\bm{x}$ is ``flat'' and there is no spatially smooth residual dependence, that bias for regression coefficients can increase. In our simulation study, we produced such a setting by generating $\bm{x}$ to be perfectly correlated with a low-frequency eigenvector of the graph Laplacian. Even in this extreme scenario, however, the bias seen in a spatial analysis model was not that much different from the bias seen in a non-spatial analysis model.
Turning our attention to the second point, the data generation perspective of spatial confounding often relies on very specific assumptions about the processes that generated $\bm{x}$ and $\bm{z}$ or on very specific assumptions about the relationship between these variables (i.e., $\bm{x}$ is a combination of $\bm{z}$ and some Gaussian noise). Our results suggested that many of the scenarios that are identified as problematic from a data generation perspective are not problematic (at all) from an analysis model perspective. This is potentially problematic because many of these papers propose methods to ``alleviate'' spatial confounding based on the perceived problem (the relationship between $\bm{x}$ and $\bm{z}$). In our simulation studies, we studied scenarios identified in the literature as being problematic from a data generation focus on spatial confounding. We considered two settings: a geostatistical data setting and an areal data setting. For the geostatistical data setting, a spatial analysis model fit with REML tended to outperform a non-spatial analysis model in all cases. Additionally, this spatial model either outperformed or was equivalent to the inference derived from adjusted spatial analysis models. For the areal data setting, we found that the adjusted spatial analysis models increased the absolute bias relative to two types of spatial analysis model. In other words, focusing on the relationship between $\bm{x}$ and $\bm{z}$ (or the processes that generated them) did not help identify settings where a spatial analysis model distorted inference. Using these insights to ``adjust'' for spatial confounding led to inferences on regression coefficients that were worse than those from a standard spatial analysis model.
Taken together, the results and simulation studies in this paper offer support for the conventional wisdom of spatial statistics: accounting for residual spatial dependence tends to improve inference on regression coefficients. However, spatial analysis models are not interchangeable: the analysis model \textit{and the method used to fit it} matter. For example, \citet{dupont2020spatial+} correctly identified settings in which collinearity between $\bm{x}$ and $\bm{z}$ could lead to bias when penalized thin plate splines were used. In those settings, the Spatial+ methodology does reduce bias relative to the spatial penalized thin plate splines model. However, a spatial linear mixed model fit via REML outperforms both of these models with respect to bias. Importantly, the Spatial+ methodology did not continue to improve inference for a Bayesian approach. In order to avoid the pitfalls that currently plague the field of spatial confounding, future work motivated by spatial confounding needs to be careful to precisely define both what is being studied and the analysis model being utilized.
\pagebreak \setcitestyle{numbers}
\appendix
\section{Useful Facts about Differential Geometry and Linear Algebra} \label{appa_diff_geometry}
\subsection{Notation for metrics and norms}
\begin{definition}[Standard Euclidean Inner Product and Norm] \label{euclidean_metric} We use the notation $\euclmetric{\cdot}{\cdot}$ to denote the standard Euclidean inner product on the vector space $\mathbb{R}^n$. The notation $\euclnorm{\cdot}$ is then used to refer to the norm induced by this inner product. Specifically, for $\boldsymbol{a},\boldsymbol{b} \in \mathbb{R}^n$: \begin{eqnarray*} \euclmetric{\boldsymbol{a}}{\boldsymbol{b}} &=& {\boldsymbol{a}}^T \boldsymbol{b} \\ \euclnorm{\boldsymbol{a}} &=& \sqrt{{\boldsymbol{a}}^T \boldsymbol{a} } \end{eqnarray*} \end{definition}
\begin{notation}[Angles with respect to the Standard Euclidean Inner Product] \label{euclidean_angle} Given $\boldsymbol{a},\boldsymbol{b} \in \mathbb{R}^n$, we use $\euclangle{\boldsymbol{a}}{\boldsymbol{b}}$ to refer to the angle between these two vectors with respect to the standard Euclidean inner product. Specifically: \begin{eqnarray*} \euclangle{\boldsymbol{a}}{\boldsymbol{b}} = \arccos \left( \frac{\euclmetric{\boldsymbol{a}}{\boldsymbol{b}}}{ \euclnorm{\boldsymbol{a}}\euclnorm{\boldsymbol{b}}} \right) \end{eqnarray*} \end{notation}
\begin{notation}[Spectral Decomposition of $\hat{\bm{\Sigma}}^{-1}$] \label{spectral_decomp} Let $\hat{\bm{\Sigma}}^{-1}$ be an $n\times n$ real, symmetric, positive definite matrix.
We define $\bm{U} \bm{D} \bm{U}^T = \hat{\bm{\Sigma}}^{-1}$ to be the spectral decomposition of $\hat{\bm{\Sigma}}^{-1}$ with $\bm{D}$ a diagonal matrix with diagonal $d_1 \geq \ldots \geq d_n > 0$. \end{notation}
\begin{notation}[Angles between Vector and Eigenvectors of $\hat{\bm{\Sigma}}^{-1}$] \label{coef_notation} Let $\hat{\bm{\Sigma}}^{-1}$ be an $n\times n$ real, symmetric, positive definite matrix, and $\boldsymbol{v}$ be an arbitrary vector in $\mathbb{R}^n$.
Let $\bm{U} \bm{D} \bm{U}^T = \hat{\bm{\Sigma}}^{-1}$ be the spectral decomposition of $\hat{\bm{\Sigma}}^{-1}$ as defined in \cref{spectral_decomp}. In this paper, we use the notation $\bm{\theta}_{\boldsymbol{v},\bm{U}}$ to denote an $n\times 1$ vector whose $i$th element is the angle $\euclangle{\boldsymbol{v}}{\boldsymbol{u}_i}$ (with respect to the Euclidean inner product as in \cref{euclidean_angle}) between $\boldsymbol{v}$ and the $i$th column $\boldsymbol{u}_i$ of $\bm{U}$. \end{notation}
\begin{definition}[Precision Matrix Induced Inner Product and Norm] \label{precmatrix_metric} Given an $n \times n$ real, symmetric, positive definite matrix $\hat{\bm{\Sigma}}^{-1}$, we use the notation $\sigmainvdot{\cdot}{\cdot}$ to denote the inner product defined by the matrix on the vector space $\mathbb{R}^n$. The notation $\sigmainvnorm{\cdot}$ is then used to refer to the norm induced by this inner product. More precisely, for $\boldsymbol{a},\boldsymbol{b} \in \mathbb{R}^n$: \begin{eqnarray*} \sigmainvdot{\boldsymbol{a}}{\boldsymbol{b}} &=& \boldsymbol{a}^T \hat{\bm{\Sigma}}^{-1} \boldsymbol{b} \\ \sigmainvnorm{\boldsymbol{a}} &=& \sqrt{\boldsymbol{a}^T \hat{\bm{\Sigma}}^{-1} \boldsymbol{a} } \end{eqnarray*} \end{definition}
\begin{notation}[Angles with respect to the Precision Matrix Induced Inner Product] \label{precmatrix_angle} Given $\boldsymbol{a},\boldsymbol{b} \in \mathbb{R}^n$, we use $\sigmainvangle{\boldsymbol{a}}{\boldsymbol{b}}$ to refer to the angle between them with respect to the precision matrix induced inner product. Technically, it would be more appropriate to write $\sigmainvangle{\boldsymbol{a}}{\boldsymbol{b}}^{\hat{\bm{\Sigma}}^{-1}}$; however, we suppress the dependence on $\hat{\bm{\Sigma}}^{-1}$ except where it is needed for clarity. Specifically: \begin{eqnarray*} \sigmainvangle{\boldsymbol{a}}{\boldsymbol{b}} = \arccos \left( \frac{\sigmainvdot{\boldsymbol{a}}{\boldsymbol{b}}}{ \sigmainvnorm{\boldsymbol{a}}\sigmainvnorm{\boldsymbol{b}}} \right) \end{eqnarray*} \end{notation}
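Both inner products, and the cosine re-expressions stated in the facts below, can be sketched numerically. The following is a minimal check (not part of the derivations) in which a randomly generated positive definite matrix stands in for $\hat{\bm{\Sigma}}^{-1}$; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
a, b = rng.normal(size=n), rng.normal(size=n)

# Arbitrary symmetric positive definite matrix standing in for Sigma-hat^{-1}.
A = rng.normal(size=(n, n))
prec = A @ A.T + n * np.eye(n)

def inner(u, v, M=None):
    """<u, v> in the Euclidean metric (M=None) or the M-induced metric."""
    return u @ v if M is None else u @ M @ v

def norm(u, M=None):
    return np.sqrt(inner(u, u, M))

def angle(u, v, M=None):
    return np.arccos(inner(u, v, M) / (norm(u, M) * norm(v, M)))

# Both metrics satisfy the cosine re-expression <a, b> = |a| |b| cos(angle).
for M in (None, prec):
    assert np.isclose(inner(a, b, M),
                      norm(a, M) * norm(b, M) * np.cos(angle(a, b, M)))
```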
\begin{comment} \begin{definition}[$n-1$ sphere] \label{nsphere}
Given an inner product $\euclmetric{\cdot}{\cdot}_*$ on the vector space $\mathbb{R}^n$, we define the unit $n-1$ sphere \[\mathbb{S}^{n-1}_* = \left \{ \boldsymbol{v} \in \mathbb{R}^n ~\big |~ \euclnorm{\boldsymbol{v}}_* = 1 \right \}, \] where $\euclnorm{\cdot}_*$ is the norm induced by $\euclmetric{\cdot}{\cdot}_*$. \end{definition} \end{comment}
\subsection{Useful Facts}
\begin{fact}[Re-expression of Standard Euclidean Inner Product] For $\boldsymbol{a},\boldsymbol{b} \in \mathbb{R}^n$, the standard Euclidean metric defined in \cref{euclidean_metric} can be re-expressed as $\euclmetric{\boldsymbol{a}}{\boldsymbol{b}} =\euclnorm{\boldsymbol{a}} \euclnorm{\boldsymbol{b}} \cos \left( \euclangle{\boldsymbol{a}}{\boldsymbol{b}} \right)$, where $\euclangle{\boldsymbol{a}}{\boldsymbol{b}}$ is the angle between $\boldsymbol{a}$ and $\boldsymbol{b}$ with respect to the standard Euclidean metric as defined in \cref{euclidean_angle}. \end{fact}
\begin{fact}[Re-expression of Precision Matrix Induced Inner Product] \label{precmatmetric_rexpression} For $\boldsymbol{a},\boldsymbol{b} \in \mathbb{R}^n$, the precision matrix induced inner product defined in \cref{precmatrix_metric} can be re-expressed as $\sigmainvdot{\boldsymbol{a}}{\boldsymbol{b}} =\sigmainvnorm{\boldsymbol{a}} \sigmainvnorm{\boldsymbol{b}} \cos \left( \sigmainvangle{\boldsymbol{a}}{\boldsymbol{b}} \right)$, where $\sigmainvangle{\boldsymbol{a}}{\boldsymbol{b}}$ is the angle between $\boldsymbol{a}$ and $\boldsymbol{b}$ with respect to the precision matrix induced metric as defined in \cref{precmatrix_angle}. \end{fact}
\begin{fact}[Preservation of Angles with Eigenvectors of $\hat{\bm{\Sigma}}^{-1}$] \label{uniquecoef} Suppose $\hat{\bm{\Sigma}}^{-1}$ is an $n \times n$ real, symmetric, positive definite matrix, and $\boldsymbol{v}^s$ is an arbitrary unit vector with respect to the norm $\sigmainvnorm{\cdot}$ as defined in \cref{precmatrix_metric} (i.e., $\sigmainvnorm{\boldsymbol{v}^s}=1$). Let $\boldsymbol{v}_{\alpha} = \alpha \boldsymbol{v}^s$ for $\alpha >0$. Define $\bm{\theta}_{\boldsymbol{v}^s,\bm{U}}$ and $\bm{\theta}_{\boldsymbol{v}_{\alpha},\bm{U}}$ as in \cref{coef_notation}.
Then $\bm{\theta}_{\boldsymbol{v}^s,\bm{U}} \equiv \bm{\theta}_{\boldsymbol{v}_{\alpha},\bm{U}}$. \end{fact}
\begin{fact}[Re-Expression of Precision Matrix Induced Norm] \label{rexpression_norm} For a given vector $\boldsymbol{v} \in \mathbb{R}^n$ and $n \times n$ real, symmetric, positive definite matrix $\hat{\bm{\Sigma}}^{-1}$, it is possible to re-express $\sigmainvnorm{\boldsymbol{v}}$ as a function of the sample mean $\bar{\boldsymbol{v}}$, sample variance $s_{\boldsymbol{v}}^2$, and a unique set of $n$ angles (defined with respect to the standard Euclidean norm).
Let $\bm{U} \bm{D} \bm{U}^T = \hat{\bm{\Sigma}}^{-1}$ be the spectral decomposition of $\hat{\bm{\Sigma}}^{-1}$ with $\bm{D}$ a diagonal matrix with diagonal $d_1 \geq \ldots \geq d_n > 0$. Define $\bm{\theta}_{\boldsymbol{v},\bm{U}}$ to be an $n\times 1$ vector whose $i$th element is the angle $\euclangle{\boldsymbol{v}}{\boldsymbol{u}_i}$ (with respect to the Euclidean norm as in \cref{euclidean_angle}) between $\boldsymbol{v}$ and the $i$th column $\boldsymbol{u}_i$ of $\bm{U}$. Then \begin{eqnarray*} \sigmainvnorm{\boldsymbol{v}} = \sqrt{ \left[ \left(n-1\right) s_{\boldsymbol{v}}^2 + n \bar{\boldsymbol{v}}^2 \right] \sum_{i=1}^n \cos \left( \euclangle{\boldsymbol{v}}{\boldsymbol{u}_i} \right)^2 d_i } \end{eqnarray*} \end{fact}
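A numerical sketch of this re-expression, again with a randomly generated positive definite matrix standing in for $\hat{\bm{\Sigma}}^{-1}$; the key observation is that $(n-1) s_{\boldsymbol{v}}^2 + n \bar{\boldsymbol{v}}^2 = \sum_i v_i^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
v = rng.normal(size=n)

# Arbitrary SPD matrix standing in for Sigma-hat^{-1}, and its spectral pieces.
A = rng.normal(size=(n, n))
prec = A @ A.T + np.eye(n)
d, U = np.linalg.eigh(prec)          # eigenvalues (ascending), orthonormal columns

# Cosines of the Euclidean angles between v and each eigenvector u_i.
cos_theta = (U.T @ v) / np.linalg.norm(v)

# Direct norm versus the re-expression via sample mean, variance, and angles.
direct = np.sqrt(v @ prec @ v)
rexpr = np.sqrt(((n - 1) * v.var(ddof=1) + n * v.mean() ** 2)
                * np.sum(cos_theta ** 2 * d))
assert np.isclose(direct, rexpr)
```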
\begin{fact}[Inner Products of Certain Kinds of Differences between Vectors] \label{shifted_vectors} Given the precision matrix induced inner product $\sigmainvdot{\cdot}{\cdot}$ of \cref{precmatrix_metric} (the identities below hold verbatim for any inner product on $\mathbb{R}^n$) and a vector $\boldsymbol{a} \in \mathbb{R}^n$, let $\boldsymbol{b} = \boldsymbol{a} - \alpha \bm{1}$ for $\alpha \in \mathbb{R}$, and let $\boldsymbol{c} \in \mathbb{R}^n$ be an arbitrary vector. The following identities hold: \begin{enumerate}
\item $\sigmainvdot{\boldsymbol{a}}{\boldsymbol{b}} = \sigmainvnorm{\boldsymbol{a}}^2 - \alpha \sigmainvdot{\boldsymbol{a}}{\bm{1}}$
\item $\sigmainvdot{\boldsymbol{b}}{\bm{1}}= \sigmainvdot{\boldsymbol{a}}{\bm{1}} - \alpha \sigmainvnorm{\bm{1}}^2$
\item $\sigmainvnorm{\boldsymbol{b}}^2 = \sigmainvnorm{\boldsymbol{a}}^2 - 2 \alpha \sigmainvdot{\boldsymbol{a}}{\bm{1}} + \alpha^2 \sigmainvnorm{\bm{1}}^2$
\item $\sigmainvdot{\boldsymbol{b}}{\boldsymbol{c}} = \sigmainvdot{\boldsymbol{a}}{\boldsymbol{c}} - \alpha \sigmainvdot{\boldsymbol{c}}{\bm{1}}$ \end{enumerate} \end{fact}
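The four identities follow from bilinearity of the inner product and can be checked numerically; the positive definite matrix below is an arbitrary stand-in for $\hat{\bm{\Sigma}}^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
a, c = rng.normal(size=n), rng.normal(size=n)
one = np.ones(n)
alpha = 0.7
b = a - alpha * one

# Arbitrary SPD matrix standing in for Sigma-hat^{-1}.
A = rng.normal(size=(n, n))
prec = A @ A.T + np.eye(n)
ip = lambda u, v: u @ prec @ v      # the induced inner product

# The four identities of the fact.
assert np.isclose(ip(a, b), ip(a, a) - alpha * ip(a, one))
assert np.isclose(ip(b, one), ip(a, one) - alpha * ip(one, one))
assert np.isclose(ip(b, b), ip(a, a) - 2 * alpha * ip(a, one) + alpha**2 * ip(one, one))
assert np.isclose(ip(b, c), ip(a, c) - alpha * ip(c, one))
```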
\section{Proofs and Derivations}
\subsection{Derivation of the Non-Spatial Bias in \cref{stoch_ols_bias}} \label{app:stoch_ols_bias} \begin{eqnarray*}
\textrm{E} \left( \hat{\bm{\beta}}_{NS} | \bm{X}^* \right)&=& \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \left( \beta_0 \bm{1} + \beta_x \bm{X} + \beta_z \textrm{E} \left( \bm{Z} | \bm{X} \right) \right) \\
&=& \bm{\beta} + \beta_z \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \left( \mu_z \bm{1} + \rho \sigma_c \sigma_z \Rtwo{c}^T \left(\sigma_c^2 \Rtwo{c} + \sigma_u^2 \Rtwo{u} \right)^{-1} \right. \\ & & \left. \left( \bm{X} - \mu_x \bm{1} \right) \right)\\
&=& \bm{\beta} + \beta_z \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \left[ \mu_z \bm{1} + \rho \sigma_c \sigma_z \left(\sigma_c^2 \bm{I} + \sigma_u^2 \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} \right. \\ & & \left. \left( \bm{X} - \mu_x \bm{1} \right) \right]\\
&=& \bm{\beta} + \beta_z \left[ \mu_z \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{1} + \rho \sigma_c \sigma_z \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \left(\sigma_c^2 \bm{I} + \sigma_u^2 \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} \right. \\ & & \left. \left( \bm{X} - \mu_x \bm{1} \right) \right]\\
&=& \bm{\beta} + \beta_z \left[ \mu_z \begin{bmatrix}
1 \\
0 \\
\end{bmatrix} + \rho \sigma_c \sigma_z \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \left(\sigma_c^2 \bm{I} + \sigma_u^2 \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} \right. \\ & & \left. \left( \bm{X} - \mu_x \bm{1} \right) \right]\\
&=& \bm{\beta} + \beta_z \left[ \mu_z \begin{bmatrix}
1 \\
0 \\
\end{bmatrix} + \rho \frac{\sigma_z}{\sigma_c} \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{K} \left( \bm{X} - \mu_x \bm{1} \right) \right]\\
\end{eqnarray*} where $\bm{K}= p_c \left(p_c \bm{I} + (1-p_c) \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} $ and $p_c=\frac{\sigma_c^2}{\sigma_c^2 + \sigma_u^2}$. We now restrict our attention to the second element: \begin{eqnarray*}
\textrm{E} \left( \hat{\beta}_x^{NS} | \bm{X}^* \right)&=& \beta_x + \beta_z \rho \frac{\sigma_z}{\sigma_c} \left[ \left( { \bm{X}^* }^T \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{K} \left( \bm{X} - \mu_x \bm{1} \right) \right]_2 \end{eqnarray*}
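The key simplification above, $\rho \sigma_c \sigma_z \Rtwo{c} \left(\sigma_c^2 \Rtwo{c} + \sigma_u^2 \Rtwo{u} \right)^{-1} = \rho \frac{\sigma_z}{\sigma_c} \bm{K}$, can be verified numerically. In the sketch below, the exponential correlation matrices and parameter values are arbitrary stand-ins for $\Rtwo{c}$, $\Rtwo{u}$, and the model parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
# Toy exponential correlation matrices standing in for R_c and R_u.
s = np.linspace(0, 1, n)
D = np.abs(s[:, None] - s[None, :])
Rc, Ru = np.exp(-D / 0.3), np.exp(-D / 0.1)

mu_x, mu_z, rho, sig_c, sig_u, sig_z = 1.0, 2.0, 0.5, 1.3, 0.8, 1.1
X = mu_x + rng.normal(size=n)
Xs = np.column_stack([np.ones(n), X])          # the design matrix X^*

# Left-hand side: projection of the unsimplified conditional mean of Z.
EZ = mu_z * np.ones(n) + rho * sig_c * sig_z * Rc @ np.linalg.solve(
    sig_c**2 * Rc + sig_u**2 * Ru, X - mu_x)
lhs = np.linalg.solve(Xs.T @ Xs, Xs.T @ EZ)

# Right-hand side: the simplification via K and p_c.
p_c = sig_c**2 / (sig_c**2 + sig_u**2)
K = p_c * np.linalg.inv(p_c * np.eye(n) + (1 - p_c) * Ru @ np.linalg.inv(Rc))
rhs = mu_z * np.array([1.0, 0.0]) + rho * (sig_z / sig_c) * np.linalg.solve(
    Xs.T @ Xs, Xs.T @ K @ (X - mu_x))
assert np.allclose(lhs, rhs)
```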
\subsection{Derivation of the Spatial Bias in \cref{stoch_ols_bias}} \label{app:stoch_gls_bias} \begin{eqnarray*}
\textrm{E} \left( \hat{\bm{\beta}}^{S} | \bm{X}^* \right)&=& \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \left( \beta_0 \bm{1} + \beta_x \bm{X} + \beta_z \textrm{E} \left( \bm{Z} | \bm{X} \right) \right) \\
&=& \bm{\beta} + \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \beta_z \left( \mu_z \bm{1} + \rho \sigma_c \sigma_z \Rtwo{c}^T \left(\sigma_c^2 \Rtwo{c} + \sigma_u^2 \Rtwo{u} \right)^{-1} \right. \\ & & \left. \left( \bm{X} - \mu_x \bm{1} \right) \right)\\
&=& \bm{\beta} + \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \beta_z \left[ \mu_z \bm{1} + \rho \sigma_c \sigma_z \left(\sigma_c^2 \bm{I} + \sigma_u^2 \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} \right. \\ & & \left. \left( \bm{X} - \mu_x \bm{1} \right) \right]\\
&=& \bm{\beta} + \beta_z \left[ \mu_z \begin{bmatrix}
1 \\
0 \\
\end{bmatrix} + \rho \sigma_c \sigma_z \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \left(\sigma_c^2 \bm{I} + \sigma_u^2 \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} \right. \\ & & \left. \left( \bm{X} - \mu_x \bm{1} \right) \right]\\
&=& \bm{\beta} + \beta_z \left[ \mu_z \begin{bmatrix}
1 \\
0 \\
\end{bmatrix} + \rho \frac{\sigma_z}{\sigma_c} \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{K} \left( \bm{X} - \mu_x \bm{1} \right) \right]\\
\end{eqnarray*} where $\bm{K}= p_c \left(p_c \bm{I} + (1-p_c) \Rtwo{u} \Rtwo{c}^{-1} \right)^{-1} $, $p_c=\frac{\sigma_c^2}{\sigma_c^2 + \sigma_u^2}$, and $\bm{\Sigma} = \beta_z^2 \sigma_z^2 \Rtwo{c} + \sigma^2 \bm{I}$. The variance $\bm{\Sigma}$ is computed conditional on $\bm{X}$ to mirror the results in \citet{Pac_spatialconf}.
We restrict our attention to the second element, and note: \begin{eqnarray*}
\textrm{E} \left( \hat{\beta}_x^{S} | \bm{X}^* \right) &=& \beta_x + \beta_z \rho \frac{\sigma_z}{\sigma_c} \left[ \left( { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{X}^* \right)^{-1} { \bm{X}^* }^T \bm{\Sigma}^{-1} \bm{K} \left( \bm{X} - \mu_x \bm{1} \right) \right]_2 \end{eqnarray*}
\subsection{Derivation of \cref{eq:obsolsfixed}} \label{app:obsolsfixed} \begin{eqnarray*}
\textrm{E} \left( \hat{\bm{\beta}}^{NS} \right)&=& \left( { \bm{x}^* }^T \bm{x}^* \right)^{-1} { \bm{x}^* }^T \left( \beta_0 \bm{1} + \beta_x \bm{x} + \beta_z \bm{z} \right) \\
&=& \bm{\beta} + \beta_z \left( { \bm{x}^* }^T \bm{x}^* \right)^{-1} { \bm{x}^* }^T \bm{z} \\
&=& \bm{\beta} + \beta_z \left( \begin{bmatrix}
{\bm{1}}^T \bm{1} & {\bm{1}}^T \bm{x}\\
{\bm{x}}^T \bm{1} & {\bm{x}}^T \bm{x} \\
\end{bmatrix} \right)^{-1} { \bm{x}^* }^T \bm{z} \\
&=& \bm{\beta} + \frac{\beta_z}{ {\bm{1}}^T \bm{1} {\bm{x}}^T \bm{x} - {\bm{1}}^T \bm{x} {\bm{x}}^T \bm{1} } \left( \begin{bmatrix}
{\bm{x}}^T \bm{x} {\bm{1}}^T \bm{z} - {\bm{1}}^T \bm{x} {\bm{x}}^T \bm{z}\\
{\bm{1}}^T \bm{1} {\bm{x}}^T \bm{z} - {\bm{x}}^T \bm{1} {\bm{1}}^T \bm{z}\\
\end{bmatrix} \right) \\
&=& \bm{\beta} + \frac{\beta_z}{ \euclnorm{\bm{1}}^2 \euclnorm{\bm{x}}^2 - \left[ \euclmetric{\bm{x}}{\bm{1}} \right]^2 } \begin{bmatrix}
\euclnorm{\bm{x}}^2 \euclmetric{\bm{z}}{\bm{1}} - \euclmetric{\bm{x}}{\bm{1}} \euclmetric{\bm{x}}{\bm{z}}\\
\euclnorm{\bm{1}}^2 \euclmetric{\bm{x}}{\bm{z}} - \euclmetric{\bm{x}}{\bm{1}} \euclmetric{\bm{z}}{\bm{1}}\\
\end{bmatrix} . \\ \end{eqnarray*}
Restricting our attention to the second element: \begin{eqnarray*}
\textrm{E} \left( \hat{\beta}_x^{NS} \right) &=& \beta_x +
\frac{\beta_z}{ \euclnorm{\bm{1}}^2 \euclnorm{\bm{x}}^2 - \left[ \euclmetric{\bm{x}}{\bm{1}} \right]^2 } \left( \euclnorm{\bm{1}}^2 \euclmetric{\bm{x}}{\bm{z}} - \euclmetric{\bm{x}}{\bm{1}} \euclmetric{\bm{z}}{\bm{1}} \right) \end{eqnarray*}
Thus, $\bias{\hat{\beta}_x^{NS}} = \frac{\beta_z}{ \euclnorm{\bm{1}}^2 \euclnorm{\bm{x}}^2 - \left[ \euclmetric{\bm{x}}{\bm{1}} \right]^2 } \left( \euclnorm{\bm{1}}^2 \euclmetric{\bm{x}}{\bm{z}} - \euclmetric{\bm{x}}{\bm{1}} \euclmetric{\bm{z}}{\bm{1}} \right)$.
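This closed form can be checked against the direct least-squares computation on simulated data (the data below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
x, z = rng.normal(size=n), rng.normal(size=n)
one = np.ones(n)
beta_z = 0.9
Xs = np.column_stack([one, x])

# Direct bias: second element of beta_z * (X*'X*)^{-1} X*' z.
direct = beta_z * np.linalg.solve(Xs.T @ Xs, Xs.T @ z)[1]

# Closed form in terms of Euclidean inner products.
closed = beta_z * ((one @ one) * (x @ z) - (x @ one) * (z @ one)) \
    / ((one @ one) * (x @ x) - (x @ one) ** 2)
assert np.isclose(direct, closed)
```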
\subsection{Derivation of \cref{eq:obsglsfixed}} \label{app:obsglsfixed} \begin{eqnarray*}
\textrm{E} \left( \hat{\bm{\beta}}^g \right)&=& \left( { \bm{x}^* }^T \hat{\bm{\Sigma}}^{-1} \bm{x}^* \right)^{-1} { \bm{x}^* }^T \hat{\bm{\Sigma}}^{-1} \left( \beta_0 \bm{1} + \beta_x \bm{x} + \beta_z \bm{z} \right) \\
&=& \bm{\beta} + \beta_z \left( { \bm{x}^* }^T \hat{\bm{\Sigma}}^{-1} \bm{x}^* \right)^{-1} { \bm{x}^* }^T \hat{\bm{\Sigma}}^{-1} \bm{z} \\
&=& \bm{\beta} + \beta_z \left( \begin{bmatrix}
{\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} & {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{x}\\
{\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} & {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} \\
\end{bmatrix} \right)^{-1} { \bm{x}^* }^T \hat{\bm{\Sigma}}^{-1} \bm{z} \\
&=& \bm{\beta} + \frac{\beta_z}{ {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} - {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} } \left( \begin{bmatrix}
{\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} & - {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{x}\\
- {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} & {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} \\
\end{bmatrix} \right) { \bm{x}^* }^T \hat{\bm{\Sigma}}^{-1} \bm{z} \\
&=& \bm{\beta} + \frac{\beta_z}{ {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} - {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} } \begin{bmatrix}
{\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} & - {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{x}\\
- {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} & {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} \\
\end{bmatrix} \begin{bmatrix}
{\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{z} \\
{\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{z} \\
\end{bmatrix} \\
&=& \bm{\beta} + \frac{\beta_z}{ {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} - {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} } \begin{bmatrix}
{\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{z} - {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{x} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{z}\\
{\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{z} - {\bm{x}}^T \hat{\bm{\Sigma}}^{-1} \bm{1} {\bm{1}}^T \hat{\bm{\Sigma}}^{-1} \bm{z} \\
\end{bmatrix} \\
&=& \bm{\beta} + \frac{\beta_z}{ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2 } \begin{bmatrix}
\sigmainvnorm{\bm{x}}^2 \sigmainvdot{\bm{z}}{\bm{1}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{x}}{\bm{z}}\\
\sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \\
\end{bmatrix}. \end{eqnarray*}
Restricting our attention to the second element: \begin{eqnarray*} \textrm{E} \left( \hat{\beta}_x^g \right) = \beta_x + \frac{\beta_z}{ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2 } \left(
\sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \right). \end{eqnarray*}
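The analogous numerical check for this precision-weighted (GLS) closed form, with a randomly generated positive definite matrix standing in for $\hat{\bm{\Sigma}}^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10
x, z = rng.normal(size=n), rng.normal(size=n)
one = np.ones(n)
beta_z = 0.9
Xs = np.column_stack([one, x])

# Arbitrary SPD matrix standing in for the estimated precision Sigma-hat^{-1}.
A = rng.normal(size=(n, n))
P = A @ A.T + np.eye(n)
ip = lambda u, v: u @ P @ v

# Direct bias: second element of beta_z * (X*' P X*)^{-1} X*' P z.
direct = beta_z * np.linalg.solve(Xs.T @ P @ Xs, Xs.T @ P @ z)[1]

# Closed form in terms of the precision-induced inner products.
closed = beta_z * (ip(one, one) * ip(x, z) - ip(x, one) * ip(z, one)) \
    / (ip(one, one) * ip(x, x) - ip(x, one) ** 2)
assert np.isclose(direct, closed)
```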
\subsection{Derivation of \cref{eq:obsadjglsfixed}} \label{app:obsadjglsfixed} We assume that $\bm{r}_{\x}$ are the known residuals from a model of the form \eqref{eq:genericSpatial} with $\bm{x}$ as the response and only an intercept. We assume that the model used to obtain these residuals gave an estimate of $\sigmahatgen{\bm{x}}$. We note that this means $\bm{r}_{\x} = \left[ \bm{I} - \bm{1} \left( \bm{1}^T \sigmahatgen{\bm{x}} \bm{1} \right)^{-1} \bm{1}^T \sigmahatgen{\bm{x}} \right] \bm{x}$. For the following, we denote $\bm{r}_{\x}^* = \left[ \bm{1} ~ \bm{r}_{\x} \right]$. \begin{eqnarray}
\textrm{E} \left( \hat{\bm{\beta}}^h \right)&=& \left( {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x}^* \right)^{-1} {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \left( \beta_0 \bm{1} + \beta_x \bm{x} + \beta_z \bm{z} \right) \nonumber \\
&=& \left( {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x}^* \right)^{-1} {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \left( \beta_0 \bm{1} + \beta_x \bm{x} \right) + \left( {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x}^* \right)^{-1} {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \left( \beta_z \bm{z} \right) \nonumber \\ &=&
\textrm{A} \left( \bm{r}_{\x}, \bm{x} \right) + \textrm{B} \left( \bm{r}_{\x}, \bm{x} \right) \label{eq:adj_bias} \end{eqnarray} The first term $\textrm{A} \left( \bm{r}_{\x}, \bm{x} \right)$ is no longer simply $\bm{\beta}$ as it was in \cref{app:obsglsfixed}. For clarity, we consider the two terms $\textrm{A} \left( \bm{r}_{\x}, \bm{x} \right)$ and $\textrm{B} \left( \bm{r}_{\x}, \bm{x} \right)$ separately.
\paragraph{Simplifying $\textrm{A} \left( \bm{r}_{\x}, \bm{x} \right)$} In this sub-section, we focus on the first term of \eqref{eq:adj_bias}. We employ the notation of \cref{precmatrix_metric} in this section.
\begin{eqnarray*} \textrm{A} \left( \bm{r}_{\x}, \bm{x} \right)= \left( {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x}^* \right)^{-1} {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \left( \beta_0 \bm{1} + \beta_x \bm{x} \right) &=& \left[ \left( \begin{bmatrix}
{\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} & {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} \\
{\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} & {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} \\
\end{bmatrix} \right)^{-1} {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} \beta_0 + \right. \\ & & \left. \left( \begin{bmatrix}
{\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} & {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} \\
{\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} & {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} \\ \end{bmatrix} \right)^{-1} {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{x} \beta_x \right] \nonumber \\ &=& \frac{1}{ {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} - {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} } \times \\ & & \left[ \begin{bmatrix}
{\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} - {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} \\
{\bm{1}}^T \bm{\hat{\bm{\Sigma}}} ^{-1} \bm{1} {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} - {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} \\ \end{bmatrix} \beta_0 + \right.\nonumber \\ & & \left. \begin{bmatrix}
{\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{x} - {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x} {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{x} \\
{\bm{1}}^T \bm{\hat{\bm{\Sigma}}} ^{-1} \bm{1} {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{x} - {\bm{r}_{\x}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{1} {\bm{1}}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{x} \\ \end{bmatrix} \beta_x \right] \nonumber \\ &=& \frac{1}{ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{r}_{\x}}^2 - \left[ \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \right]^2 } \times \\ & & \left[ \begin{bmatrix}
\sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{r}_{\x}}^2 - \left[ \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \right]^2 \\
0 \\ \end{bmatrix} \beta_0 + \right.\nonumber \\ & & \left. \begin{bmatrix}
\sigmainvnorm{\bm{r}_{\x}}^2 \sigmainvdot{\bm{x}}{\bm{1}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{r}_{\x}}{\bm{x}} \\
\sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{r}_{\x}}{\bm{x}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{x}}{\bm{1}} \\ \end{bmatrix} \beta_x \right] \end{eqnarray*}
Now we restrict our attention to the second component. To simplify further, we note that $\bm{r}_{\x} = \bm{x} - \alpha \bm{1}$ with $\alpha=\frac{ \sigmainvdotx{\bm{x}}{\bm{1}}}{\sigmainvnormx{\bm{1}}^2}$, so the identities enumerated in \cref{shifted_vectors} apply, first to the denominator and then to the numerator: \begin{eqnarray} \textrm{A}_2 \left( \bm{r}_{\x}, \bm{x} \right) &=& \frac{ \beta_x \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{r}_{\x}}{\bm{x}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{x}}{\bm{1}}\right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{r}_{\x}}^2 - \left[ \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \right]^2 } \nonumber \\ &=& \frac{ \beta_x \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{r}_{\x}}{\bm{x}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{x}}{\bm{1}}\right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2} \nonumber \\ &=& \frac{ \beta_x \left( \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2 \right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2} \nonumber \\ &=& \beta_x \label{adj_bias_a} \end{eqnarray}
\paragraph{Simplifying $\textrm{B} \left( \bm{r}_{\x}, \bm{x} \right)$} In this sub-section, we focus on the second term of \eqref{eq:adj_bias}. We again employ the notation of \cref{precmatrix_metric}. We note that this term is equivalent to the second term of \cref{app:obsglsfixed} with $\bm{r}_{\x}$ replacing $\bm{x}$. Therefore, we can borrow the result derived there:
\begin{eqnarray*} \textrm{B} \left( \bm{r}_{\x}, \bm{x} \right) &=& \left( {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \bm{r}_{\x}^* \right)^{-1} {\bm{r}_{\x}^*}^T \bm{\hat{\bm{\Sigma}}}^{-1} \left( \beta_z \bm{z} \right) \\ &=& \frac{\beta_z}{ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{r}_{\x}}^2 - \left[ \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \right]^2 } \begin{bmatrix}
\sigmainvnorm{\bm{r}_{\x}}^2 \sigmainvdot{\bm{z}}{\bm{1}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{r}_{\x}}{\bm{z}}\\
\sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{r}_{\x}}{\bm{z}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \\
\end{bmatrix} \end{eqnarray*}
Restricting our attention to the second component and again using the identities in \cref{shifted_vectors}, first for the denominator and then for the numerator, we can simplify further: \begin{eqnarray} \textrm{B}_2 \left( \bm{r}_{\x}, \bm{x} \right) &=& \frac{ \beta_z \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{r}_{\x}}{\bm{z}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{r}_{\x}}^2 - \left[ \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \right]^2} \nonumber \\ &=& \frac{ \beta_z \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{r}_{\x}}{\bm{z}} - \sigmainvdot{\bm{r}_{\x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2} \nonumber \\ &=& \frac{ \beta_z \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2} \label{adj_bias_b} \end{eqnarray}
\paragraph{Bias for $\beta_x$} Combining our results from \eqref{adj_bias_a} and \eqref{adj_bias_b}:
\begin{eqnarray*}
\textrm{E} \left( \hat{\beta}_x^{AS} \right) - \beta_x = \textrm{A}_2 \left( \bm{r}_{\x}, \bm{x} \right) - \beta_x + \textrm{B}_2 \left( \bm{r}_{\x}, \bm{x} \right) = \\
\frac{ \beta_z \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{x}}{\bm{1}} \sigmainvdot{\bm{z}}{\bm{1}} \right) } { \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{x}}{\bm{1}} \right]^2} \end{eqnarray*}
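This conclusion, that residualizing $\bm{x}$ on an intercept leaves the bias for $\beta_x$ unchanged from the unadjusted spatial fit, can be verified numerically. In the sketch below the two randomly generated precision matrices (one used to residualize $\bm{x}$, one used in the analysis model) and all parameter values are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10
x, z = rng.normal(size=n), rng.normal(size=n)
one = np.ones(n)
beta0, beta_x, beta_z = 0.3, 1.5, 0.9
Ey = beta0 * one + beta_x * x + beta_z * z   # mean of y under the generating model

def spd(seed):
    """Arbitrary SPD matrix standing in for an estimated precision matrix."""
    A = np.random.default_rng(seed).normal(size=(n, n))
    return A @ A.T + np.eye(n)

Px = spd(10)                         # precision used to residualize x
P = spd(11)                          # precision used in the analysis model

# GLS residual of x on an intercept: r = x - alpha * 1.
alpha = (one @ Px @ x) / (one @ Px @ one)
r = x - alpha * one
Rs = np.column_stack([one, r])

# Second coefficient of the adjusted spatial fit ...
coef = np.linalg.solve(Rs.T @ P @ Rs, Rs.T @ P @ Ey)[1]

# ... equals beta_x plus the same bias as the unadjusted spatial fit.
ip = lambda u, v: u @ P @ v
bias = beta_z * (ip(one, one) * ip(x, z) - ip(x, one) * ip(z, one)) \
    / (ip(one, one) * ip(x, x) - ip(x, one) ** 2)
assert np.isclose(coef, beta_x + bias)
```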
\begin{comment} \section{Bounds on Bias}
\subsection{Bounds on Bias} \label{subsec:bounds} As referenced previously, the impact of spatial confounding is often described as being capable of radically changing inference on $\beta_x$. To our knowledge, no one has previously attempted to quantify how large this bias can be. In this sub-section, we bound the absolute value of the bias for the non-spatial analysis model, spatial analysis models, and adjusted spatial analysis models of the forms considered in \cref{subsec:biasadj}.
It turns out the bound for non-spatial and the adjusted spatial analysis models can be considered as special cases of the bound for the spatial analysis model. We introduce the result for the spatial analysis model first.
\begin{thm} \label{thm:bounds_generic}
Let the data generating model be of the form \eqref{eq:model_0} with $\boldsymbol{y}$ and $\bm{x}$ known. If a spatial analysis model of the form \eqref{eq:genericSpatial} is fit, and results in the estimate $\hat{\bm{\Sigma}}$, then $|\bias{\hat{\beta}_{x}^{S } }|= |\beta_x - E\left(\hat{\beta}_{x}^{S} \right)|$ can be bound as follows:
\begin{eqnarray}
|\bias{\hat{\beta}_{x}^{S } }| \leq \frac{|\beta_z|}{|\sin\left(\sigmainvangle{\bm{x}}{\bm{1}} \right)|} \frac{\sigmainvnorm{\bm{z}}}{\sigmainvnorm{\bm{x}}},
\end{eqnarray} where $\sigmainvnorm{\bm{x}}$ is the angle between $\bm{x}$ and $\bm{1}$ with respect to the Reimannian metric induced by $\hat{\bm{\Sigma}}^{-1}$. \end{thm} \begin{proof} See \cref{app:proof_thm_bounds_generic} for the proof. \end{proof} Because the bias term for the adjusted spatial analysis models are functionally equivalent to those for the spatial analysis models, the bounds on the bias are also equivalent. This is stated more precisely in \cref{cor:bounds_adj}.
\begin{cor} \label{cor:bounds_adj}
Let the data generating model be of the form \cref{eq:model_0} with $\boldsymbol{y}$ and $\bm{x}$ known. We assume that $\boldsymbol{r}_x$ are the residuals from a spatial analysis model of the form \eqref{eq:genericadjSpatial} with response $\bm{x}$ and only an intercept. If a spatial analysis model of the form \eqref{eq:genericSpatial} with $\bm{x} = \boldsymbol{r}_x$ results in the estimate $\hat{\bm{\Sigma}}$, then $|\bias{\hat{\beta}_{x}^{AS} }|$ can be bound as follows:
\begin{eqnarray*}
|\bias{\hat{\beta}_{x}^{AS} }| \leq \frac{|\beta_z|}{|\sin\left(\sigmainvangle{\bm{x}}{\bm{1}} \right)|} \frac{\sigmainvnorm{\bm{z}}}{\sigmainvnorm{\bm{x}}},
\end{eqnarray*} with all terms defined as in \cref{thm:bounds_generic}. \end{cor} \begin{proof} This follows from observing \cref{thm:bounds_generic} applies to the bias term in \cref{thm:adj_bias}. \end{proof}
Finally, as with the bias terms themselves, the bounds for the bias induced by fitting a non-spatial analysis model are again a special case of those for the bounds for a spatial analysis model.
\begin{cor} \label{cor:bounds_ols}
Let the data generating model be of the form \cref{eq:model_0} with $\boldsymbol{y}$ and $\bm{x}$ known. If non-spatial analysis model the form \eqref{eq:OLSmodel}, then $|\bias{\hat{\beta}_{x}^{NS} }|$ can be bound as follows:
\begin{eqnarray*}
|\bias{\hat{\beta}_{x}^{NS} }| \leq \frac{|\beta_z|}{|\sin\left(\euclangle{\bm{x}}{\bm{1}} \right)|} \frac{\euclnorm{\bm{z}}}{\euclnorm{\bm{x}}},
\end{eqnarray*} \end{cor} \begin{proof} See \cref{app:proof_thm_bounds_ols}. \end{proof} We note these bounds need not be particularly tight, except in one special case that is likely not of practical interest. However, we take a moment to explore what they mean. We note that the larger the magnitude of $\beta_z$, the larger the potential ''worst-case'' scenario for bias. As before, this makes intuitive sense. The impact of $\beta_z$ is constant across all potential analysis models considered. The behavior of \cref{thm:bounds_generic} as a function of $\bm{x}$ is more complicated. At the extremes, if $\bm{x}$ is either too flat and/or too spatially smooth, this can increase the magnitude of the bound. To see this, note that as $\bm{x}$ gets more flat, the $\sin\left( \sigmainvangle{\bm{x}}{\bm{1}} \right)$ in the denominator of \cref{thm:bounds_generic} will decrease. Similarly, the more that spatially smooth $\bm{x}$ is (i.e., more correlated with low-frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$), the smaller the term $\sigmainvnorm{\bm{x}}$ in the denominator of \cref{thm:bounds_generic} becomes. Finally, as $\bm{z}$ becomes more spatially smooth, the bound on the bias will decrease. In other words, if the low-frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$ explain the spatial distribution of $\bm{z}$, the ``worst-case'' scenario of bias will decrease.
For an individual analysis model, the upper bound on the absolute bias is a function of the unknown $\bm{z}$. We can tease apart this dependence into two groups of factors: 1) the mean and variance of $\bm{z}$, 2) the relationship $\bm{z}$ has with the eigenvectors of $\Sigmainv$. Although we know $\bm{x}$, we can also tease apart this relationship for it in the same way. In \cref{rmk:restating}, we do this. Effectively, this re-expression further illustrates as $\bm{z}$ becomes more highly correlated with low-frequency eigenvectors of $\hat{\bm{\Sigma}}^{-1}$, the bound will decrease.
\begin{rmk} \label{rmk:restating} Assume $\hat{\bm{\Sigma}}^{-1}$ is given for some analysis model. Define $\bm{U} \bm{D} \bm{U} = \hat{\bm{\Sigma}}^{-1}$ be the spectral decomposition of $\hat{\bm{\Sigma}}^{-1}$, with $\bm{D}$ a diagonal matrix such that $d_1 \geq \cdots d_n > 0$. The bound defined in \cref{thm:bounds_generic} can be re-expressed as: \begin{eqnarray} \label{restatement}
\frac{|\beta_z|}{|\sin\left(\sigmainvangle{\bm{x}}{\bm{1}} \right)|} \frac{\sqrt{ \left( n-1 \right) s_z^2 + n \bar{z}^2 } \sqrt{ \sum_{i=1}^n \cos^2 \left( \euclangle{\bm{z}}{\boldsymbol{u}_i} \right) d_i}}{\sqrt{ \left( n-1 \right) s_x^2 + n \bar{x}^2 } \sqrt{ \sum_{i=1}^n \cos^2 \left( \euclangle{\bm{x}}{\boldsymbol{u}_i} \right) d_i}}, \end{eqnarray} where $\bar{x}$ and $s_x^2$ are the sample mean and sample variance of $\bm{x}$, and $\euclangle{\bm{x}}{\boldsymbol{u}_i}$ is the angle between $\bm{x}$ and the $i$th column of $\bm{U}$ with respect to the standard Euclidean norm. The terms involving $\bm{z}$ are defined analogously.
In addition, the bound defined in \cref{cor:bounds_ols} can be re-expressed as: \begin{eqnarray}
\frac{|\beta_z|}{|\sin\left(\euclangle{\bm{x}}{\bm{1}} \right)|} \frac{\sqrt{ \left( n-1 \right) s_z^2 + n \bar{z}^2 } }{\sqrt{ \left( n-1 \right) s_x^2 + n \bar{x}^2 }} \end{eqnarray} \end{rmk}
The bounds on absolute bias for all of the analysis models we have explored have the same form. This allows for the comparison of two models by taking ratios of the bounds. For example, a spatial analysis model has the potential for more extreme magnitudes of bias than a non-spatial analysis model when
\[ \sqrt{ \sum_{i=1}^n \cos^2 \left( \euclangle{\bm{z}}{\boldsymbol{u}_i} \right) d_i} > \sqrt{ \sum_{i=1}^n \cos^2 \left( \euclangle{\bm{x}}{\boldsymbol{u}_i} \right) d_i} \frac{|\sin\left(\euclangle{\bm{x}}{\bm{1}} \right)|} {|\sin\left(\sigmainvangle{\bm{x}}{\bm{1}} \right)|}. \] If both analysis models are fit, the right-hand side of this inequality is known. The eigenvalues and eigenvectors on the left-hand side are also known for a fixed model. \todo{Think to just take all of this out?} \cb{I think yes. I think there is valuable stuff here, but I think because the bounds are still dependent on unobservables and the specific estimated covariance matrix, we might be focusing on trees that aren't as useful to describing the forest as the other trees you've focused on.}
\begin{center}
\begin{tabular}{| m{2cm} | m{5cm} | m{5cm} | }
\rowcolor{tuftsblue}
\makecell{\textbf{\textcolor{white}{Analysis Model}}} & \makecell{\textbf{\textcolor{white}{Bias }}} &
\makecell{\textbf{\textcolor{white}{Bound }}} \\
\hline
Non-Spatial & & $\frac{|\beta_z|}{|\sin\left(\euclangle{\bm{x}}{\bm{1}} \right)|} \frac{\euclnorm{\bm{z}}}{\euclnorm{\bm{x}}}$ \\
\hline
GSEM & & $\frac{|\beta_z|}{|\sin\left(\euclangle{\bm{x}}{\bm{1}} \right)|} \frac{\euclnorm{\bm{z}}}{\euclnorm{\bm{x}}}$ \\
\hline
Spatial & & $\frac{|\beta_z|}{|\sin\left(\sigmainvangle{\bm{x}}{\bm{1}} \right)|} \frac{\sigmainvnorm{\bm{z}}}{\sigmainvnorm{\bm{x}}}$ \\
\hline
Spatial + & & $\frac{|\beta_z|}{|\sin\left(\sigmainvangle{\bm{x}}{\bm{1}} \right)|} \frac{\sigmainvnorm{\bm{z}}}{\sigmainvnorm{\bm{x}}}$ \\
\hline \end{tabular} \captionof{table}{Summary of Bias by Analysis Model \label{tab:bias}} \end{center} \end{comment}
\begin{comment} \section{Bounds on Bias}
\subsection{Proof of \cref{thm:bounds_generic}} \label{app:proof_thm_bounds_generic}
For clarity, we re-express the form of bias for $E\left(\hat{\beta}_{x}^{g} \right) - \beta_x$ given in \cref{eq:obsglsfixed}: \begin{equation}
\beta_z \frac{ \left( || \bm{1} ||_{\bm{\hat{\bm{\Sigma}}}^{-1}}^2 \langle \bm{x}, \bm{z} \rangle_{\bm{\hat{\bm{\Sigma}}}^{-1}} - \langle \bm{1}, \bm{x} \rangle_{\bm{\hat{\bm{\Sigma}}}^{-1}} \langle \bm{1}, \bm{z} \rangle_{\bm{\hat{\bm{\Sigma}}}^{-1}} \right)}{ || \bm{1} ||_{\bm{\hat{\bm{\Sigma}}}^{-1}} ^2 || \bm{x}||_{\bm{\hat{\bm{\Sigma}}}^{-1}}^2 - \left[ \langle \bm{1}, \bm{x} \rangle_{\bm{\hat{\bm{\Sigma}}}^{-1}} \right]^2} \label{eq:restate_biasforproof} \end{equation}
In this proof, we derive the closed-form expressions of the lower and upper bounds of \eqref{eq:restate_biasforproof}. We assume that we know $\bm{x}$ and $\hat{\bm{\Sigma}}^{-1}$, but that we do not know $\bm{z}$. Most of the proof involves employing relatively trivial facts about inner products to re-express \eqref{eq:restate_biasforproof} in terms of angles between vectors. The heavy-lifting part of this proof derives bounds on the numerator of \eqref{eq:restate_biasforproof}. To do so, it reduces finding extrema of a three-variable function of angles between vectors to finding extrema of a one-variable function. This reduction stems from a gauge-symmetry-type argument which utilizes facts from spherical trigonometry.
\paragraph{Bounding the Numerator of \eqref{eq:restate_biasforproof}}
We begin by focusing on the numerator of \eqref{eq:restate_biasforproof}. Using \cref{precmatmetric_rexpression}, we can re-express each of the three inner products and simplify to show that: \begin{eqnarray}
\sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{1}}{\bm{x}} \sigmainvdot{\bm{1}}{\bm{z}}= \nonumber \\ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}} \sigmainvnorm{\bm{z}} \left[ \cos \left( \sigmainvangle{\bm{x}}{\bm{z}} \right) - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \cos \left( \sigmainvangle{\bm{z}}{\bm{1}} \right) \right] \label{eq:reexpressed_numerator} \end{eqnarray} where $\sigmainvangle{\bm{x}}{\bm{z}}$, $\sigmainvangle{\bm{x}}{\bm{1}}$, and $\sigmainvangle{\bm{z}}{\bm{1}}$ are the angles (with respect to the metric induced by the precision matrix $\hat{\bm{\Sigma}}^{-1}$, as defined in \cref{precmatrix_angle}) between $\bm{x}$ and $\bm{z}$, $\bm{x}$ and $\bm{1}$, and $\bm{z}$ and $\bm{1}$, respectively.
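The identity above is just the definition of angles in the induced inner product. As an illustrative aside (not part of the proof), the following sketch confirms it numerically, with an arbitrary symmetric positive-definite matrix standing in for $\hat{\bm{\Sigma}}^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
Sinv = A @ A.T + n * np.eye(n)   # arbitrary SPD stand-in for the precision matrix

def dot(a, b):                   # <a, b> in the Sigma^{-1} inner product
    return a @ Sinv @ b

def ang(a, b):                   # induced angle between a and b
    return np.arccos(dot(a, b) / np.sqrt(dot(a, a) * dot(b, b)))

x, z = rng.standard_normal(n), rng.standard_normal(n)
one = np.ones(n)

lhs = dot(one, one) * dot(x, z) - dot(one, x) * dot(one, z)
rhs = (dot(one, one) * np.sqrt(dot(x, x)) * np.sqrt(dot(z, z))
       * (np.cos(ang(x, z)) - np.cos(ang(x, one)) * np.cos(ang(z, one))))
assert np.isclose(lhs, rhs)
```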
Much of this sub-section will focus on bounding the following portion in \eqref{eq:reexpressed_numerator}: \begin{eqnarray} \cos \left( \sigmainvangle{\bm{x}}{\bm{z}} \right) - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \cos \left( \sigmainvangle{\bm{z}}{\bm{1}} \right) \label{eq:thecoses} \end{eqnarray}
It is possible to find closed expressions for the lower and upper bounds for \eqref{eq:thecoses}. To do so, we consider the $n-1$ sphere defined by the inner product $\sigmainvdot{\cdot}{\cdot}$ as in \cref{nsphere}. We consider the unit vectors $\bm{x}^s = \frac{\bm{x}}{\sigmainvnorm{\bm{x}}}$, $\bm{1}^s = \frac{\bm{1}}{\sigmainvnorm{\bm{1}}}$, and $\bm{z}^s = \frac{\bm{z}}{\sigmainvnorm{\bm{z}}}$ on this sphere. We note that scaling two vectors will preserve the angles between them. In other words, $\sigmainvangle{\bm{x}^s}{\bm{z}^s}=\sigmainvangle{\bm{x}}{\bm{z}}$, $\sigmainvangle{\bm{x}^s}{\bm{1}^s}=\sigmainvangle{\bm{x}}{\bm{1}}$, and $\sigmainvangle{\bm{z}^s}{\bm{1}^s}=\sigmainvangle{\bm{z}}{\bm{1}}$. So, although we make the following arguments with respect to the scaled vectors for clarity, the results hold for \eqref{eq:reexpressed_numerator}.
Now, because we know $\bm{x}^s$ and $\bm{1}^s$, we also know $\sigmainvangle{\bm{x}^s}{\bm{1}^s}$. However, because $\bm{z}$ (and therefore $\bm{z}^s$) is unknown, this leaves two unknowns in \eqref{eq:thecoses}: $\sigmainvangle{\bm{z}^s}{\bm{1}^s}$ and $\sigmainvangle{\bm{x}^s}{\bm{z}^s}$. To facilitate maximizing and minimizing \eqref{eq:thecoses}, we first reduce this problem to just one unknown by considering an arbitrary known angle $\sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$.
Now, for any $\bm{z}^s$ on the $n-1$ sphere defined by $\sigmainvdot{\cdot}{\cdot}$ such that $\sigmainvangle{\bm{z}^s}{\bm{1}^s}= \sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$, we note that we can find a $2$-sphere that contains $\bm{1}^s$, $\bm{x}^s$, and $\bm{z}^s$. To see this, define $\textrm{O}\left( n \right)$ to be the rotation group that preserves $\sigmainvdot{\cdot}{\cdot}$ on $\mathbb{R}^n$. Given a $\bm{z}^s$ such that $\sigmainvangle{\bm{z}^s}{\bm{1}^s}= \sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$, we can always find a rotation, which will depend on $\bm{z}^s$, such that the three vectors are contained on a $3$-dimensional subspace of $\mathbb{R}^n$. On this subspace, the three vectors are on a $2$-sphere, as depicted in \cref{fig:2sphere}.
Therefore, for known $\bm{x}^s$ and $\bm{1}^s$ and arbitrary $\sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$, maximizing and minimizing \eqref{eq:thecoses} reduces to maximizing and minimizing: \[ \cos \left( \sigmainvangle{\bm{x}^s}{\bm{z}^s} \right) \] subject to the constraint that $\bm{z}^s$ lies on a circle on the surface of the $2$-sphere centered at $\bm{1}^s$ with radius $\sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$. To help visualize this, an illustration is shown in \cref{fig:2sphere}.
Subject to this constraint, the extrema of $\cos \left( \sigmainvangle{\bm{x}^s}{\bm{z}^s} \right)$ will always occur when $\bm{z}^s$ is contained in the plane spanned by $\bm{1}^s$ and $\bm{x}^s$. In other words, the extrema will occur on the $2$-sphere at the intersection of the plane spanned by $\bm{1}^s$ and $\bm{x}^s$ and the circle centered at $\bm{1}^s$ with radius $\sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$.
\begin{figure}\label{fig:2sphere}
\end{figure}
Recall, our goal is to reduce the problem of two unknowns in \eqref{eq:thecoses} to one unknown. The process of finding a $2$-sphere can be repeated for any chosen $\sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$. This means that if extrema exist for \eqref{eq:thecoses}, they will be on the $2$-sphere at the intersection of the plane spanned by $\bm{1}^s$ and $\bm{x}^s$ and the circle centered at $\bm{1}^s$ with radius $\sigmainvangle{\bm{z}^s}{\bm{1}^s}^*$. Because the $2$-sphere is compact and \eqref{eq:thecoses} is continuous on it, the extrema will exist. Effectively, this allows us to reduce finding the extrema of \eqref{eq:thecoses} to finding the extrema with respect to $e$ of: \[ \cos( e) - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} + e \right). \]
This can be done with Mathematica, and after doing so we find that the maximum value of $\cos \left( \sigmainvangle{\bm{x}}{\bm{z}} \right) - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \cos \left( \sigmainvangle{\bm{z}}{\bm{1}} \right)$ is: \begin{eqnarray} \label{eq:initialupperbound} & &\sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \sin \left[ \sigmainvangle{\bm{x}}{\bm{1}} + 2 \arctan \left( \sec \left[ \sigmainvangle{\bm{x}}{\bm{1}} \right] - \tan \left[ \sigmainvangle{\bm{x}}{\bm{1}} \right] \right) \right] = \nonumber \\ & & \sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \end{eqnarray} This maximum occurs when the angle between $\bm{x}$ and $\bm{z}$ is: \begin{eqnarray} \label{max_angle} \sigmainvangle{\bm{x}}{\bm{z}}^U = 2 \arctan \left[ \sec \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) - \tan \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \right]. \end{eqnarray} In the same way, we find that the minimum value of $\cos \left( \sigmainvangle{\bm{x}}{\bm{z}} \right) - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \cos \left( \sigmainvangle{\bm{z}}{\bm{1}} \right)$ is: \begin{eqnarray} \label{eq:initiallowerbound} & & \sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \sin \left[ \sigmainvangle{\bm{x}}{\bm{1}} - 2 \arctan \left( \sec \left[ \sigmainvangle{\bm{x}}{\bm{1}} \right] + \tan \left[ \sigmainvangle{\bm{x}}{\bm{1}} \right] \right) \right] \nonumber = \\ & & - \sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \end{eqnarray} which occurs when the angle between $\bm{x}$ and $\bm{z}$ is: \begin{eqnarray} \label{min_angle} \sigmainvangle{\bm{x}}{\bm{z}}^L = - 2 \arctan \left[ \sec \left( \sigmainvangle{\bm{x}}{\bm{1}}\right) + \tan \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \right]. \end{eqnarray} Using \eqref{eq:initialupperbound} and \eqref{eq:initiallowerbound}, we can derive a bound for the absolute value of \eqref{eq:reexpressed_numerator} in the following way: \begin{eqnarray}
\sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}} \sigmainvnorm{\bm{z}} | \left[ \cos \left( \sigmainvangle{\bm{x}}{\bm{z}} \right) - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \cos \left( \sigmainvangle{\bm{z}}{\bm{1}} \right) \right]| \leq \nonumber \\ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}} \sigmainvnorm{\bm{z}} \sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \end{eqnarray}
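The extremal values used above can be confirmed without Mathematica. Writing $f(e) = \cos(e) - \cos(a)\cos(a + e)$ with $a$ standing for $\sigmainvangle{\bm{x}}{\bm{1}}$, the function simplifies to $\sin(a)\sin(a + e)$, so its extrema over $e$ are $\pm\sin(a)$. A short numerical scan (an illustrative aside, with an arbitrary angle $a$) recovers both extrema and the maximizing angle:

```python
import numpy as np

# f(e) = cos(e) - cos(a) cos(a + e) simplifies to sin(a) sin(a + e),
# so its extrema over e are +/- sin(a); we confirm this on a fine grid.
a = 0.7                                   # arbitrary angle in (0, pi)
e = np.linspace(-np.pi, np.pi, 200001)
f = np.cos(e) - np.cos(a) * np.cos(a + e)

assert np.isclose(f.max(), np.sin(a), atol=1e-8)
assert np.isclose(f.min(), -np.sin(a), atol=1e-8)

# The maximizer 2 * arctan(sec(a) - tan(a)) equals pi/2 - a
e_max = 2 * np.arctan(1 / np.cos(a) - np.tan(a))
assert np.isclose(e_max, np.pi / 2 - a)
assert np.isclose(np.cos(e_max) - np.cos(a) * np.cos(a + e_max), np.sin(a))
```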
At this point, the bound is a function of $\sigmainvnorm{\bm{z}}$. We do not assume the norm of $\bm{z}$ is known. However, we can use \cref{rexpression_norm} to express the bound as a function of the unknown sample mean $\bar{z}$ and unknown sample variance $s_z^2$: \begin{eqnarray}
\sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}} \sigmainvnorm{\bm{z}} | \left[ \cos \left( \sigmainvangle{\bm{x}}{\bm{z}} \right) - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \cos \left( \sigmainvangle{\bm{z}}{\bm{1}} \right) \right]| \leq \nonumber \\ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}} \sqrt{ \left( n-1 \right) s_z^2 + n \bar{z}^2 } \sqrt{ \sum_{i=1}^n \cos \left( \euclangle{\bm{z}_U^s}{\boldsymbol{u}_i} \right)^2 d_i} \sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right) \label{eq:newupperbound} \end{eqnarray} where $\boldsymbol{u}_i$ and $d_i$ for $i=1,\ldots, n$ are defined in \cref{spectral_decomp}.
\paragraph{Re-Expressing Denominator of \eqref{eq:restate_biasforproof}}
We use \cref{precmatmetric_rexpression} to re-express the denominator of \eqref{eq:restate_biasforproof}:
\begin{eqnarray}
\sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{1}}{\bm{x}} \right]^2 &=& \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 \left[ 1 - \cos \left( \sigmainvangle{\bm{x}}{\bm{1}} \right)^2 \right] \\
&=& \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 \sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right)^2 \nonumber \label{eq:reexpressed_denominator} \end{eqnarray}
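The factorization of the denominator is the Cauchy--Schwarz identity $\sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \sigmainvdot{\bm{1}}{\bm{x}}^2 = \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 \sin^2\left(\sigmainvangle{\bm{x}}{\bm{1}}\right)$ in the induced inner product. A one-line numerical check (illustrative only, with an arbitrary symmetric positive-definite matrix) is:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
Sinv = A @ A.T + n * np.eye(n)          # arbitrary SPD stand-in

dot = lambda a, b: a @ Sinv @ b
x, one = rng.standard_normal(n), np.ones(n)
cos_t = dot(one, x) / np.sqrt(dot(one, one) * dot(x, x))

lhs = dot(one, one) * dot(x, x) - dot(one, x) ** 2
rhs = dot(one, one) * dot(x, x) * (1 - cos_t ** 2)   # = ||1||^2 ||x||^2 sin^2(theta)
assert np.isclose(lhs, rhs)
```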
\paragraph{Final Form of Bounds} We combine our results to find the final bound on the absolute value of \eqref{eq:restate_biasforproof}. To do so, we use \eqref{eq:newupperbound} and \eqref{eq:reexpressed_denominator} as follows:
\begin{eqnarray*}
\left| \beta_z \frac{ \left( \sigmainvnorm{\bm{1}}^2 \sigmainvdot{\bm{x}}{\bm{z}} - \sigmainvdot{\bm{1}}{\bm{x}} \sigmainvdot{\bm{1}}{\bm{z}} \right)}{ \sigmainvnorm{\bm{1}}^2 \sigmainvnorm{\bm{x}}^2 - \left[ \sigmainvdot{\bm{1}}{\bm{x}} \right]^2}\right| &\leq& \frac{| \beta_z| } { | \sin \left( \sigmainvangle{\bm{x}}{\bm{1}} \right)| } \frac{\sqrt{ \left( n-1 \right) s_z^2 + n \bar{z}^2 } \sqrt{ \sum_{i=1}^n \cos \left( \euclangle{\bm{z}_U^s}{\boldsymbol{u}_i} \right)^2 d_i}}{ \sigmainvnorm{\bm{x}} } \end{eqnarray*}
Finally, if the analysis model involves ... Note that the bias term in \cref{eq:obsadjglsfixed} is functionally equivalent to that in \cref{eq:obsglsfixed}. Thus, the same argument as in \cref{app:proof_thm_bounds_generic} can be used here.
\subsection{Proof of \cref{cor:bounds_ols}} \label{app:proof_thm_bounds_ols} Recall from \cref{ols_special_gls} that the functional form of the bias from using a non-spatial analysis model is a special case of the bias from using a spatial analysis model. Thus, the same argument as in \cref{app:proof_thm_bounds_generic} can be used here with $\hat{\bm{\Sigma}}^{-1}= \bm{I}$.
To respect the notation of \cref{euclidean_angle} and \cref{precmatrix_angle}, we re-write the bounds in \cref{generic_upper} and \cref{generic_lower} for this special case. We omit the $d_i$'s because the eigenvalues of $\bm{I}$ are all $1$. The bound on the absolute bias is thus:
\begin{equation*} \label{eucl_upper}
\frac{| \beta_z| } { | \sin \left( \euclangle{\bm{x}}{\bm{1}} \right)| } \frac{\sqrt{ \left( n-1 \right) s_z^2 + n \bar{z}^2 } \sqrt{ \sum_{i=1}^n \cos \left( \euclangle{\bm{z}_U^s}{\boldsymbol{u}_i} \right)^2 }}{ \euclnorm{\bm{x}} } \end{equation*} \end{comment}
\end{document}
\begin{definition}[Definition:Octagonal Number/Definition 1]
Octagonal numbers are defined as:
:$O_n = \begin{cases}
0 & : n = 0 \\
O_{n - 1} + 6 n - 5 & : n > 0 \end{cases}$
for $n = 0, 1, 2, \ldots$
\end{definition}
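A short script (not part of the definition) checks the recurrence against the closed form $O_n = n(3n - 2)$, which follows by summing the increments $\sum_{k=1}^n (6k - 5) = 3n^2 - 2n$:

```python
def octagonal(n):
    """O_n via the recurrence O_0 = 0, O_n = O_{n-1} + 6n - 5."""
    o = 0
    for k in range(1, n + 1):
        o += 6 * k - 5
    return o

# Closed form O_n = n (3n - 2), obtained by summing the increments
assert all(octagonal(n) == n * (3 * n - 2) for n in range(50))
print([octagonal(n) for n in range(1, 7)])   # [1, 8, 21, 40, 65, 96]
```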
\begin{document}
\nocite{*} \date{}
\title{Peridynamics and Material Interfaces} \author{Bacim Alali$^{\tiny a}$ \and Max Gunzburger$^{\tiny b}$} \maketitle \let\thefootnote\relax\footnote{$^{ a}$Department of Mathematics, Kansas State University, Manhattan, KS 66506.\\{\it [email protected].} }
\let\thefootnote\relax\footnote{$^{ b}$
Department of Scientific Computing, Florida State University,
Tallahassee, FL 32306.\\{\it [email protected].} }
\begin{abstract}
The convergence of a peridynamic model for solid mechanics inside heterogeneous media in the limit of vanishing nonlocality is analyzed. It is shown that the operator of linear peridynamics for an isotropic heterogeneous medium converges to the corresponding operator of linear elasticity when the material properties are sufficiently regular. On the other hand, when the material properties are discontinuous, i.e., when material interfaces are present, it is shown that the operator of linear peridynamics diverges, in the limit of vanishing nonlocality, at material interfaces.
Nonlocal interface conditions, whose local limit implies the classical interface conditions of elasticity, are then developed and discussed.
A peridynamics material interface model is introduced which generalizes the classical interface model of elasticity. The model consists of a new peridynamics operator along with nonlocal interface conditions.
The new peridynamics interface model converges
to the classical interface model of linear elasticity. \end{abstract}
\section{Introduction} \label{sec_intro}
Peridynamics \cite{Silling39,silling2007peridynamic} is a nonlocal theory for continuum mechanics. Material points interact through forces that act over a finite distance with the maximum interaction radius being called the peridynamics {\it horizon}.
Peridynamics is a generalization of elasticity theory in the sense that
peridynamics operators converge to corresponding elasticity operators in the limit of vanishing horizon.
These convergence results have been shown for different cases; see \cite{emmrich2007well,peridconvg,nonlocal_calc_peridy_2013,tadele_du_2013}. For example, in a linear isotropic homogeneous medium,
and under certain regularity assumptions on the vector field ${\bf v}$, it has been shown in \cite{emmrich2007well} that \begin{equation}
\lim_{\delta\rightarrow 0}\mathcal{L}^\delta_s{\bf v}=\mathcal{N}_s{\bf v} \;\;\;\; \mbox{ in } L^\infty(\Omega)^3 \end{equation} and in \cite{nonlocal_calc_peridy_2013} it has been shown that \begin{equation}
\lim_{\delta\rightarrow 0} \mathcal{L}^\delta{\bf v}=\mathcal{N}{\bf v} \;\;\;\; \mbox{ in } H^{-1}(\mathbb{R}^3), \end{equation}
where $\Omega$ is a bounded domain, $\mathcal{L}^\delta_s$ is the bond-based and $\mathcal{L}^\delta$ is the state-based linear peridynamics operators, and $\mathcal{N}_s$ and $\mathcal{N}$ are the corresponding linear elasticity operators, respectively (see Section \ref{overview_sec} for the definitions of these operators).
In this work, we study the behavior of linear peridynamics inside heterogeneous media in the limit of vanishing horizon. We focus on the linear peridynamics model for solids given in \cite{silling2007peridynamic,lin_Silling}. We note that other models for linear peridynamic solids have been proposed; see for example \cite{aguiar2013constitutive}. In Theorem~\ref{thm_convg} and Proposition~\ref{prop1} of this work we show that when the vector field ${\bf v}$ and the material properties are sufficiently differentiable, then
\begin{equation} \label{convg1}
\lim_{\delta\rightarrow 0} \mathcal{L}^\delta{\bf v}=\mathcal{N}{\bf v} \;\;\;\; \mbox{ in } L^p(\Omega)^3, \;\;\; 1\leq p<\infty, \end{equation} and \begin{equation} \label{convg2}
\lim_{\delta\rightarrow 0} \mathcal{L}^\delta_s{\bf v}=\mathcal{N}_s{\bf v} \;\;\;\; \mbox{ in } L^p(\Omega)^3, \;\;\; 1\leq p<\infty, \end{equation}
where $\mathcal{L}^\delta$ and $\mathcal{L}^\delta_s$ are the state-based and bond-based linear peridynamics operator for an isotropic heterogeneous medium and $\mathcal{N}$ and $\mathcal{N}_s$ are the corresponding operators of linear elasticity, respectively.
In addition, we show that continuity of the material properties is a necessary condition for the convergence of peridynamics to elasticity. Indeed, if the material properties have jump discontinuities, as for example in multi-phase composites, then
it is shown in Theorem~\ref{nonconvg_thm} and Lemma~\ref{nonconvg_1} that the local limits of the peridynamic operators do not exist. In particular, we find that for points ${\bf x}$ on the interface, \begin{equation} \label{nonconvg1}
\lim_{\delta\rightarrow 0} \left(\mathcal{L}^\delta{\bf v}\right)({\bf x}) \mbox{ does not exist}, \end{equation} and \begin{equation} \label{nonconvg2}
\lim_{\delta\rightarrow 0} \left(\mathcal{L}^\delta_s{\bf v}\right)({\bf x}) \mbox{ does not exist}.
\end{equation}
We consider the classical interface model in linear elasticity inside a two-phase composite. The strong form of the elastic equilibrium problem
is given by the following system of partial differential equations and interface conditions: \begin{numcases}{\label{interface_pde0}} \label{interface_pde0_1} \nabla\cdot\sigma({\bf x})&$\displaystyle={\bf b}({\bf x}),\;\;\; {\bf x}\in\Omega_+$ \\ \label{interface_pde0_2} \nabla\cdot\sigma({\bf x})&$\displaystyle={\bf b}({\bf x}),\;\;\; {\bf x}\in\Omega_-$\\[1.5ex]
\label{interface_pde0_3} \sigma{\bf n}({\bf x}^+) &$\displaystyle=\sigma{\bf n}({\bf x}^-),\;\; {\bf x}\in \Gamma$\\ \label{interface_pde0_4} {\bf u}({\bf x}^+) &$\displaystyle={\bf u}({\bf x}^-),\;\;\;\; {\bf x}\in \Gamma$, \end{numcases} where $\sigma$ is the stress tensor, ${\bf u}$ is the displacement field, ${\bf b}$ is a body force density, ${\bf n}$ is the unit normal to the interface, and $\Omega=\Omega_+\cup\Omega_-\cup\Gamma$,
with $\Gamma$ being the interface between the two phases $\Omega_+$ and $\Omega_-$. Equations \eqref{interface_pde0_3} and \eqref{interface_pde0_4} are the interface jump conditions, assuming continuity of the displacement field and traction across the interface.
Developing a material interface model which generalizes the classical interface model of elasticity to the nonlocal setting is an open problem in peridynamics.
The fact that interface conditions are necessary for a classical solution of \eqref{interface_pde0} to exist together with the fact that the peridynamics operator diverges, in the limit of vanishing horizon, at material interfaces strongly suggests that {\it nonlocal interface conditions} must be imposed in a peridynamics model for heterogeneous media in the presence of material interfaces.
Therefore, a peridynamics interface model which is locally consistent with the interface model of elasticity is required to satisfy the following three conditions: \begin{itemize}
\item[] \hspace{-.5cm}{\bf C(i)} \hspace{.35cm} Nonlocal interface conditions must be imposed such that the peridynamics operator converges, in the local limit, to the corresponding elasticity operator.
\item[] \hspace{-.5cm}{\bf C(ii)} \hspace{.1cm} The interface conditions in elasticity are recovered from the local limit of the nonlocal interface conditions in peridynamics. \item[] \hspace{-.5cm}{\bf C(iii)} \hspace{.1cm} The nonlocal interface conditions are integral equations that do not include spatial derivatives of the displacement field. \end{itemize} We note that condition C(i) implies that the peridynamics operator is required not to diverge, in the limit of vanishing horizon, at material interfaces. Condition C(iii) requires that the nonlocal interface conditions be compatible with the peridynamics model. Peridynamics is formulated with integral equations and oriented towards modeling discontinuities, and thus peridynamics equations do not include spatial derivatives of the displacement field.
We consider the following peridynamics model, under equilibrium conditions, for heterogeneous media in the presence of material interfaces
\begin{numcases}{\label{natural_interface_sys}} \label{natural_interface_sys_1}
\mathcal{L}^\delta{\bf u}({\bf x}) ={\bf b}({\bf x}),\;\;\; {\bf x}\in\Omega
\\ \label{natural_interface_sys_2} \displaystyle \mathcal{L}^\delta{\bf u}({\bf x})=0,\;\;\;\;\;\;\;\;\, {\bf x}\in \Gamma. \end{numcases}
Equation \eqref{natural_interface_sys_2} is a nonlocal interface condition.
By imposing \eqref{natural_interface_sys_2}, and under certain regularity assumptions on the material properties and the displacement field, we show that \[
\lim_{\delta\rightarrow 0} \mathcal{L}^\delta{\bf v}=\mathcal{N}{\bf v} \;\;\;\; \mbox{ in } L^p(\Omega)^3 \] for $1\leq p<\infty$. Thus, the peridynamics interface model \eqref{natural_interface_sys} satisfies conditions C(i) and C(iii). However, it is shown in Proposition \ref{prop_wrong_interface_conditions} that this model does not satisfy C(ii). Therefore, the interface model \eqref{natural_interface_sys} is not a valid generalization of the local interface model of elasticity.
We note that this result remains true if an inhomogeneity is introduced into \eqref{natural_interface_sys_2}. We also note that, in the nonlocal setting, the material interface remains sharp; however, because of the nonlocality of interactions, nonlocal interface conditions involve points on both sides of the material interface, and not just points on the material interface.
In Section \ref{sec_perid_interface_model}, we propose a solution to the interface problem in peridynamics.
We develop a peridynamics interface model which is locally consistent with elasticity's interface model \eqref{interface_pde0}. Our model is defined by \begin{numcases}{\label{interface_model_sys}} \label{interface_model_sys_1}
\mathcal{L}^\delta_*{\bf u}({\bf x}) ={\bf b}({\bf x}),\;\;\; {\bf x}\in\Omega
\\ \label{interface_model_sys_2} \displaystyle \mathcal{L}^\delta_*{\bf u}({\bf x})=0,\;\;\;\;\;\;\;\;\, {\bf x}\in \Gamma_\delta, \end{numcases} where $\Gamma_\delta$ is an extended interface, which is a three-dimensional set of thickness $2\delta$, and the operator $\mathcal{L}^\delta_*$ is of the form \begin{eqnarray} \label{L*_0_t} \mathcal{L}^\delta_*{\bf u}=\mathcal{L}^\delta{\bf u}+1_{\Gamma_\delta}\, \mathcal{L}^\delta_{\Gamma_\delta}{\bf u}, \end{eqnarray} with $1_{\Gamma_\delta}$ being the indicator function of the set $\Gamma_\delta$. The new operator $\mathcal{L}^\delta_{\Gamma_\delta}$ acts on the displacement field but only at points in the extended interface $\Gamma_\delta$. The set $\Gamma_\delta$ and the operator $\mathcal{L}^\delta_{\Gamma_\delta}$ are explicitly defined in Section \ref{sec_perid_interface_model}.
Equation \eqref{interface_model_sys_2} is the peridynamics nonlocal interface condition for our interface model. By imposing \eqref{interface_model_sys_2}, and under the assumptions that
the material properties are sufficiently differentiable in $\Omega\setminus\Gamma$ and have jump discontinuities at the interface $\Gamma$, and that the displacement field ${\bf u}$ is sufficiently differentiable in $\Omega\setminus\Gamma$ and continuous across $\Gamma$, we show in Theorem~\ref{thm_interface_model} that \[
\lim_{\delta\rightarrow 0} \mathcal{L}^\delta_*{\bf v}=\mathcal{N}{\bf v} \;\;\;\; \mbox{ in } L^p(\Omega)^3 \] for $1\leq p<\infty$. Moreover, we show that the local interface condition \eqref{interface_pde0_3} can be recovered from the local limit of the nonlocal interface condition \eqref{interface_model_sys_2}. Therefore, the peridynamics material interface model \eqref{interface_model_sys} satisfies the three conditions C(i)--C(iii) and hence serves as a peridynamics generalization of the classical elasticity interface model.
Here we discuss the mechanical interpretations and implications of the main results in this work.\\
The convergence results given by \eqref{convg1} and \eqref{convg2}, which are introduced in Theorem~\ref{thm_convg} and Proposition~\ref{prop1}, respectively, imply that peridynamics (bond-based or state-based) is a nonlocal generalization of the local continuum theory in the case of isotropic heterogeneous media with smoothly varying material properties. This extends the previous peridynamics convergence results for homogeneous media \cite{emmrich2007well,peridconvg,nonlocal_calc_peridy_2013}.
In the case of heterogeneous media with discontinuous material properties, our results given by \eqref{nonconvg1} and \eqref{nonconvg2}, which are introduced in Theorem~\ref{nonconvg_thm} and Lemma~\ref{nonconvg_1}, respectively, imply that the local limit of the peridynamic force is infinite at the material interface.
This divergence behavior can be explained mathematically through the fact that material interfaces break the inherent symmetry of the peridynamic operators. Mechanically, the divergence of peridynamics is due to the mismatch in the nonlocal tractions on each side of the interface. In fact, the divergence of the local limit of peridynamics at material interfaces is not surprising because in the local interface problem \eqref{interface_pde0} one must impose interface conditions to obtain a well-posed system. Therefore, when material interfaces are present, nonlocal interface conditions must be imposed in order for peridynamics to converge to a local theory. The goal of imposing nonlocal interface conditions is to fix the mismatch in the nonlocal tractions on each side of the interface. One way to achieve this is by imposing the nonlocal interface condition given by \eqref{natural_interface_sys_2}. Indeed, in Section~\ref{interf_model_sec_1} it is shown that when \eqref{natural_interface_sys_2} is imposed, then $\mathcal{L}^\delta{\bf u}$ converges in the limit as $\delta\rightarrow 0$. However, it is shown in Proposition~\ref{prop_wrong_interface_conditions}
that the peridynamic system given by \eqref{natural_interface_sys} does not converge to the local elastic interface model given by \eqref{interface_pde0}. We conclude in Section~\ref{interf_model_sec} that imposing nonlocal interface conditions alone is not sufficient to achieve a peridynamic interface model that recovers the classical interface model in the local limit. We therefore propose the peridynamic interface model given by \eqref{interface_model_sys} which consists of introducing a new peridynamic operator $\mathcal{L}^\delta_*$ together with imposing a nonlocal interface condition given by \eqref{interface_model_sys_2}. The new operator satisfies $\mathcal{L}^\delta_*{\bf u}({\bf x})=\mathcal{L}^\delta{\bf u}({\bf x})$ for points ${\bf x}\in\Omega\setminus\Gamma$. For points ${\bf x}$ on the interface $\Gamma$, the expression $\delta \mathcal{L}^\delta_*{\bf u}({\bf x})$ represents the jump in the nonlocal traction across the interface. This is justified in Section~\ref{sec_perid_interface_model} in which it is shown that \begin{equation} \label{nonlocal_traction} \lim_{\delta\rightarrow 0} \delta\mathcal{L}^\delta_* {\bf u}({\bf x})= \frac{45}{32} \jump{\sigma}{\bf n}. \end{equation} The operator $\mathcal{L}^\delta_{\Gamma_\delta}$ in \eqref{L*_0_t}, given explicitly by \eqref{L_Gamma}, which acts on points on the extended interface $\Gamma_\delta$, can be interpreted as the missing term in peridynamics which modifies the jump in the nonlocal traction such that \eqref{nonlocal_traction} holds true. It follows from \eqref{nonlocal_traction}, as described in Section~\ref{sec_perid_interface_model}, that the nonlocal interface condition \eqref{interface_model_sys_2} is the nonlocal analogue of the local interface condition \eqref{interface_pde0_3}. Theorem~\ref{thm_interface_model} implies that the peridynamic interface model given by \eqref{interface_model_sys} is the nonlocal analogue of the local interface model given by \eqref{interface_pde0}.
This article is organized as follows. Section \ref{overview_sec} provides an overview of linear peridynamics and linear elasticity inside isotropic heterogeneous media. The convergence of linear peridynamics operator to linear elasticity operator for the case of heterogeneous media is given in Section \ref{convg_sec}. The divergence of the peridynamics operator, in the local limit, at material interfaces is addressed in Section \ref{nonconvg_sec}. Finally, in Section \ref{interf_model_sec} nonlocal interface conditions are developed and discussed and our new peridynamics material interface model is introduced and justified.
\section{Overview} \label{overview_sec}
\subsection{The peridynamics model for solid mechanics} \label{peridyn} We consider the state-based peridynamics model introduced in \cite{silling2007peridynamic} for the dynamics of
deformable solids. To simplify the presentation, we provide a direct description of this model without adhering to the notation used in \cite{silling2007peridynamic}. Following the presentation of peridynamics given in \cite{Alali_Gunzburger1},
let $\Omega$ denote a domain in $\mathbb{R}^3$, ${\bf u}({\bf x},t)$ the displacement vector field, $\rho({\bf x})$ the mass density, and ${\bf b}({\bf x},t)$ a prescribed body force density. Let $B_\delta({\bf x})$ denote the ball centered at ${\bf x}$ having radius $\delta$; here, $\delta$ denotes the peridynamics horizon. Then the linear peridynamics equation of motion for an isotropic heterogeneous medium is given by \begin{equation} \label{pd1}
\rho({\bf x}) \ddot{{\bf u}}({\bf x},t) = (\mathcal{L}^\delta {\bf u})({\bf x}) + {\bf b}({\bf x},t),\;\;\;\;\; {\bf x}\in\Omega, \end{equation} where \begin{equation} \label{Ldel0} \mathcal{L}^\delta=\mathcal{L}^\delta_s+\mathcal{L}^\delta_d, \end{equation} and, for a vector field ${\bf v}$, the operators $\mathcal{L}^\delta_s$ and $\mathcal{L}^\delta_d$ are given by \small \begin{eqnarray} \label{Ldel_s_1} (\mathcal{L}^\delta_s {\bf v})({\bf x})&=& \int_{B_\delta({\bf x})}
\frac{15}{m}\big(\mu({\bf x})+\mu({\bf y})\big) w(|{\bf y}-{\bf x}|)\frac{({\bf y}-{\bf x})
\otimes({\bf y}-{\bf x})}{|{\bf y}-{\bf x}|^2}\big( {\bf v}({\bf y})-{\bf v}({\bf x})\big)
\,d{\bf y},\\
\nonumber\\
\nonumber
(\mathcal{L}^\delta_d {\bf v})({\bf x})&=& \int_{B_\delta({\bf x})}\int_{B_\delta({\bf x})}
\frac{9}{m^2}
\Big(\lambda({\bf x})-\mu({\bf x})\Big) w(|{\bf y}-{\bf x}|)w(|{\bf z}-{\bf x}|)({\bf y}-{\bf x}) \otimes({\bf z}-{\bf x})\big( {\bf v}({\bf z})-{\bf v}({\bf x})\big)
\,d{\bf z} d{\bf y}\\
\nonumber\\
\nonumber
&&+ \int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\frac{9}{m^2}
\Big(\lambda({\bf y})-\mu({\bf y})\Big) w(|{\bf y}-{\bf x}|)w(|{\bf z}-{\bf y}|)
({\bf y}-{\bf x}) \otimes({\bf z}-{\bf y})\big( {\bf v}({\bf z})-{\bf v}({\bf y})\big)
\,d{\bf z} d{\bf y}.\\
\label{Ldel_d_1} \end{eqnarray}
\normalsize Here $\lambda$ and $\mu$ are the Lam\'{e} parameters, with $\mu$ denoting the shear modulus, $w$ is a weighting function, and $m$ denotes the scalar weight given by $m=\int_\Omega w(|{\bf y}-{\bf x}|) |{\bf y}-{\bf x}|^2 d{\bf y}$. Since $w$ in (\ref{Ldel_s_1}) and (\ref{Ldel_d_1}) is a radial function, the material is isotropic, and $w$ can be taken to be of the form (see, for example, \cite{Silling39}) \begin{equation}
w(|\boldsymbol\xi|) = \left\{\begin{aligned}
\frac{1}{|\boldsymbol\xi|^r}\;, \qquad& \mbox{if $|\boldsymbol\xi|<\delta$} \\
0\;, \qquad& \text{otherwise}.
\end{aligned} \right. \end{equation} In this case \begin{equation}
m = \int_{B_{\delta}(0)} |\boldsymbol\xi|^{2-r} d\boldsymbol\xi = 4\pi\frac{\delta^{5-r}}{5-r}. \end{equation}
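As an illustrative numerical sanity check, not part of the model itself, the normalization $m = 4\pi\delta^{5-r}/(5-r)$ can be verified by Monte Carlo integration over $B_\delta(0)$; the values of $\delta$, $r$, and the sample size below are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of m = \int_{B_delta(0)} |xi|^{2-r} dxi = 4*pi*delta^(5-r)/(5-r).
# Illustrative sketch only; delta, r, and N are arbitrary choices.
rng = np.random.default_rng(0)
N = 200_000
delta = 0.7

# Uniform samples in the ball B_delta(0): random direction times delta*U^(1/3)
directions = rng.standard_normal((N, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = delta * rng.random(N) ** (1.0 / 3.0)
xi = directions * radii[:, None]

volume = 4.0 / 3.0 * np.pi * delta**3
for r in (1.0, 2.0):
    mc = volume * np.mean(np.linalg.norm(xi, axis=1) ** (2.0 - r))
    exact = 4.0 * np.pi * delta ** (5.0 - r) / (5.0 - r)
    assert abs(mc - exact) / exact < 0.01
```

For $r=2$ the integrand is identically $1$ and the estimate reduces to the ball volume $\frac{4}{3}\pi\delta^3$, matching the value of $m$ used in the sequel.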
Note that when $r<5$, $m$ is finite. To simplify the presentation, and without loss of generality, we assume that $r=2$; consequently, $m = \frac{4}{3}\pi\delta^3 = |B_{\delta}|$ and \begin{eqnarray} \label{Ldel_s_2}
(\mathcal{L}^\delta_s {\bf v})({\bf x})&=& \frac{15}{|B_\delta|}\int_{B_\delta({\bf x})}
\big(\mu({\bf x})+\mu({\bf y})\big) \frac{({\bf y}-{\bf x})
\otimes({\bf y}-{\bf x})}{|{\bf y}-{\bf x}|^4}\big( {\bf v}({\bf y})-{\bf v}({\bf x})\big)
\,d{\bf y},\\
\nonumber\\
\nonumber
(\mathcal{L}^\delta_d {\bf v})({\bf x})&=& \frac{9}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf x})}
\Big(\lambda({\bf x})-\mu({\bf x})\Big)
\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\otimes\frac{{\bf z}-{\bf x}}{|{\bf z}-{\bf x}|^2}\big( {\bf v}({\bf z})-{\bf v}({\bf x})\big)
\,d{\bf z} d{\bf y}\\
\nonumber\\
\nonumber
& & + \frac{9}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\Big(\lambda({\bf y})-\mu({\bf y})\Big) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\otimes\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\big( {\bf v}({\bf z})-{\bf v}({\bf y})\big)
\,d{\bf z} d{\bf y}.\\
\label{Ldel_d_2} \end{eqnarray} Due to symmetry we have the following identity \begin{equation} \label{sym}
\int_{B_\delta({\bf p})}\frac{{\bf q}-{\bf p}}{|{\bf q}-{\bf p}|^2}\,d{\bf q}=0. \end{equation} For points ${\bf x}\in\Omega$ with a distance of at least $2\delta$ from the boundary $\partial \Omega$, the operator $\mathcal{L}^\delta_d$ in \eqref{Ldel_d_2} reduces to \begin{equation} \label{Ldel_d_3}
(\mathcal{L}^\delta_d {\bf v})({\bf x})=\frac{9}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\Big(\lambda({\bf y})-\mu({\bf y})\Big) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\otimes\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf v}({\bf z})
\,d{\bf z} d{\bf y}, \end{equation} where we have applied (\ref{sym}). Throughout this article, we will use the notation $A\colon B$ to denote the inner product of the same-order tensors $A$ and $B$. For example, if $A$ and $B$ are third-order tensors then \[ A\colon B = \sum_{i,j,k} A_{i j k} B_{i j k}. \]
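A minimal sketch of this full contraction in code; the two third-order tensors below are arbitrary random examples.

```python
import numpy as np

# Full contraction A : B = sum_{i,j,k} A_{ijk} B_{ijk} of two third-order tensors.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))
B = rng.standard_normal((3, 3, 3))

contraction = np.einsum('ijk,ijk->', A, B)

# Equivalent elementwise form of the same sum
assert np.isclose(contraction, (A * B).sum())
```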
\subsection{Linear elasticity} \label{sec_elasticity}
In linear elasticity, the stress tensor for an isotropic heterogeneous medium is given by \begin{equation} \label{sigma} \sigma({\bf x})=\lambda({\bf x}) \nabla\cdot{\bf u}({\bf x})\, I+\mu({\bf x})(\nabla{\bf u}({\bf x})+\nabla{\bf u}({\bf x})^T), \end{equation} where ${\bf u}$ is the displacement field, $I$ is the identity tensor, and $\lambda$ and $\mu$ are Lam\'{e} parameters. The equation of motion in this case is given by \begin{equation} \label{lin_elast}
\rho({\bf x}) \ddot{{\bf u}}({\bf x},t) = (\mathcal{N} {\bf u})({\bf x}) + {\bf b}({\bf x},t),\;\;\;\;\; {\bf x}\in\Omega, \end{equation}
where $\mathcal{N}$ is the Navier operator of linear elasticity which is given by \begin{eqnarray} \label{N} \nonumber \mathcal{N}{\bf u} &=& \nabla\cdot\sigma\\ &=& \nabla(\lambda \nabla\cdot{\bf u}) + \nabla\cdot(\mu (\nabla{\bf u}+\nabla{\bf u}^T)). \end{eqnarray} We decompose the operator of linear elasticity as \begin{equation} \label{N_decomp} \mathcal{N}=\mathcal{N}_s+\mathcal{N}_d, \end{equation} where the operators $\mathcal{N}_s$ and $\mathcal{N}_d$ are defined by \begin{eqnarray} \label{Ns} \mathcal{N}_s{\bf v} &=& \nabla(\mu \nabla\cdot{\bf v}) + \nabla\cdot(\mu (\nabla{\bf v}+\nabla{\bf v}^T)),\\ \mathcal{N}_d{\bf v} &=& \nabla((\lambda-\mu) \nabla\cdot{\bf v}), \end{eqnarray} for sufficiently regular vector field ${\bf v}$.
We note that the above decomposition of $\mathcal{N}$ is not a standard one; however, this decomposition will be useful for studying the relationship between the nonlocal operator of peridynamics $\mathcal{L}^\delta$, defined in Section \ref{peridyn}, and the local operator of elasticity $\mathcal{N}$; see Section \ref{convg_sec}. \begin{remark} It is easy to see that $\mathcal{N}=\mathcal{N}_s$ for materials in which $\lambda=\mu$ or, equivalently, materials with Poisson ratio $\nu=\frac{1}{4}$. \end{remark}
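For concreteness, the remark can be checked against the standard isotropic relation $\nu=\lambda/(2(\lambda+\mu))$; the numerical values below are arbitrary.

```python
# Standard isotropic relation nu = lambda / (2 (lambda + mu)).
# For lambda = mu this gives nu = 1/4, the case in which N = N_s,
# since N_d v = grad((lambda - mu) div v) = 0.
def poisson_ratio(lam, mu):
    return lam / (2.0 * (lam + mu))

assert poisson_ratio(3.7, 3.7) == 0.25            # lambda = mu  =>  nu = 1/4
assert abs(poisson_ratio(1.0, 1.5) - 0.2) < 1e-12  # arbitrary other values
```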
\section{Convergence of Linear Peridynamics to Linear Elasticity Inside Heterogeneous Media} \label{convg_sec}
In this section we show that in a heterogeneous medium and under certain regularity assumptions on the material properties and the vector field ${\bf v}$, the linear peridynamics operator $\mathcal{L}^\delta$ converges to the linear elasticity operator $\mathcal{N}$ in the limit of vanishing horizon. This is given by Theorem \ref{thm_convg} in the last part of this section.
We start by defining an operator $\mathcal{L}^\delta_{0}$, which is independent of material properties, \begin{eqnarray} \label{Lz} \nonumber
(\mathcal{L}^\delta_{0} {\bf v})({\bf x})&=& \frac{30}{|B_\delta|}\int_{B_\delta({\bf x})}
\frac{({\bf y}-{\bf x})
\otimes({\bf y}-{\bf x})}{|{\bf y}-{\bf x}|^4}\big( {\bf v}({\bf y})-{\bf v}({\bf x})\big)
\,d{\bf y}\\
&=& \frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\,\big( {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})\big)
\,d{\bf z}, \end{eqnarray} where ${\bf v}$ is a vector field and ${\bf x}\in \mathbb{R}^3$. We note that $\mathcal{L}^\delta_{0}$ is a bounded linear operator on $L^p(\Omega)^3$ for $1\leq p\leq \infty$; see, for example, \cite{ALperid1}.
\begin{lem} \label{lemma1} If ${\bf v}\in C^3(\Omega)^3$ then \begin{equation} \label{lim_Lz} \lim_{\delta\rightarrow 0} \mathcal{L}^\delta_{0}{\bf v}=2 \nabla(\nabla\cdot{\bf v}) +\Delta{\bf v},\;\;\;\; \mbox{ in } L^p(\Omega)^3\end{equation}
for $1\leq p<\infty$. \end{lem}
\begin{proof} The Taylor expansion of ${\bf v}$ about the point ${\bf x}$ is given by \begin{equation} \label{taylor1} {\bf v}({\bf x}+{\bf z})={\bf v}({\bf x})+\nabla{\bf v}({\bf x}) {\bf z}+\frac{1}{2} \nabla\nabla{\bf v}({\bf x})\, ({\bf z}\otimes{\bf z})+{\bf r}({\bf v};{\bf x},{\bf z}), \end{equation} where \begin{equation} \label{remainder1} {\bf r}({\bf v};{\bf x},{\bf z})=\frac{1}{3!} \nabla\nabla\nabla{\bf v}({\bf x}+ t {\bf z})\, ({\bf z}\otimes{\bf z}\otimes{\bf z}) \end{equation} for some $t\in(0,1)$. By inserting \eqref{taylor1} in \eqref{Lz}, expanding the integral, and then rearranging the tensor products, we obtain \begin{eqnarray} \label{Lz2} \nonumber
(\mathcal{L}^\delta_{0} {\bf v})({\bf x})&=& \frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}\otimes{\bf z}\otimes {\bf z}}{|{\bf z}|^4}\,d{\bf z}\, \nabla{\bf v}({\bf x})+
\frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}\otimes{\bf z}\otimes {\bf z}\otimes{\bf z}}{|{\bf z}|^4}\,d{\bf z}\, \frac{1}{2} \nabla\nabla{\bf v}({\bf x})\\
& & + \frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\,{\bf r}({\bf v};{\bf x},{\bf z})\,d{\bf z}. \end{eqnarray} We note that, due to symmetry, the integral \begin{equation} \label{zzz0} \int_{B_\delta(0)}
\frac{{\bf z}\otimes{\bf z}\otimes {\bf z}}{|{\bf z}|^4}\,d{\bf z}=0, \end{equation} with the obvious notation that $0$ in the right hand side of \eqref{zzz0} denotes the third-order zero tensor. Thus the first term in \eqref{Lz2} vanishes. We note that the third term in \eqref{Lz2} vanishes in the limit as $\delta\rightarrow 0$ because \begin{eqnarray} \label{O_delta}
\left|\frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\,{\bf r}({\bf v};{\bf x},{\bf z})\,d{\bf z}\; \right| &\leq&
\frac{M}{\delta^3} \int_{B_\delta(0)} |{\bf z}|\,d\,{\bf z}=\BigO{\delta}, \end{eqnarray} for some $M>0$. For the second term in \eqref{Lz2}, a straightforward calculation, using spherical coordinates, shows that the following fourth-order tensor
satisfies \begin{equation} \label{zzzz1}
\frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{z_i z_j z_k z_l}{|{\bf z}|^4}\,d{\bf z}= \left\{
\begin{array}{ll}
6, & \mbox{ if } i=j=k=l,\\
\\
2, & \mbox{ if } (i=j,k=l, \mbox{ and } i\neq k) \\
& \;\; \mbox{ or } (i=k,j=l, \mbox{ and } i\neq j)\\
& \;\; \mbox{ or } (i=l,j=k, \mbox{ and } i\neq j),\\
\\
0, & \mbox{otherwise}.
\end{array} \right. \end{equation} Using \eqref{zzzz1} the $i$-th component of the second term in \eqref{Lz2} becomes \begin{eqnarray} \label{T2} \nonumber
\sum_{j,k,l} \frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{z_i z_j z_k z_l}{|{\bf z}|^4}\,d{\bf z}\, \frac{1}{2} \frac{\partial^2 v_j}{\partial x_l\partial x_k}
&=& \frac{1}{2} \left( 6 \frac{\partial^2 v_i}{\partial x_i^2}+
2 \sum_{k\neq i}\frac{\partial^2 v_i}{\partial x_k^2}+
4 \sum_{j\neq i}\frac{\partial^2 v_j}{\partial x_i\partial x_j}
\right)\\
\nonumber
&=& \sum_{k} \frac{\partial^2 v_i}{\partial x_k^2}
+ 2 \sum_{j} \frac{\partial^2 v_j}{\partial x_i\partial x_j} \\
&=& \Delta v_i + 2 \left(\nabla(\nabla\cdot {\bf v})\right)_i. \end{eqnarray} By combining \eqref{Lz2} with \eqref{zzz0}, \eqref{O_delta}, and \eqref{T2}, we conclude that \begin{equation} \label{lim_Lz_pt} \lim_{\delta\rightarrow 0} (\mathcal{L}^\delta_{0} {\bf v})({\bf x})= 2 \nabla(\nabla\cdot{\bf v}({\bf x})) +\Delta{\bf v}({\bf x}) \end{equation} for all ${\bf x}$ in $\mathbb{R}^3$. Equation \eqref{lim_Lz} follows from the point-wise convergence result \eqref{lim_Lz_pt} and Lebesgue's dominated convergence theorem, completing the proof.
\end{proof}
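Lemma \ref{lemma1} can be illustrated numerically: for a quadratic field the remainder \eqref{remainder1} vanishes identically, so $(\mathcal{L}^\delta_{0} {\bf v})({\bf x})$ equals $2 \nabla(\nabla\cdot{\bf v})+\Delta{\bf v}$ for every $\delta$, up to sampling error. The test field, evaluation point, horizon, and sample size in the sketch below are our arbitrary choices; the antithetic pairing $\pm{\bf z}$ cancels the odd first-order term sample-by-sample.

```python
import numpy as np

# Quadratic test field v(x) = (x1^2, x1*x2, x3^2):
# div v = 3 x1 + 2 x3, so 2 grad(div v) + Laplacian v = (8, 0, 6).
def v(p):
    x1, x2, x3 = p[..., 0], p[..., 1], p[..., 2]
    return np.stack([x1**2, x1 * x2, x3**2], axis=-1)

target = np.array([8.0, 0.0, 6.0])

rng = np.random.default_rng(1)
N = 200_000
delta = 0.5
x = np.array([0.3, 0.2, 0.1])

# Uniform samples z in B_delta(0)
n = rng.standard_normal((N, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
r = delta * rng.random(N) ** (1.0 / 3.0)
z = n * r[:, None]

# Antithetic average of v(x+z)-v(x) and v(x-z)-v(x) removes the odd term
dv = 0.5 * ((v(x + z) - v(x)) + (v(x - z) - v(x)))

# Kernel (z (x) z / |z|^4) applied to dv, i.e. z (z . dv) / |z|^4
integrand = (np.einsum('ni,ni->n', z, dv) / r**4)[:, None] * z
estimate = 30.0 * integrand.mean(axis=0)   # (30/|B_delta|) * |B_delta| * mean
assert np.all(np.abs(estimate - target) < 0.3)
```

The estimate is independent of $\delta$ here precisely because the third-order remainder, which carries the $\BigO{\delta}$ error in the proof, is zero for a quadratic field.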
The operator $\mathcal{L}^\delta_{0}: L^p(\Omega)^3\rightarrow L^p(\Omega)^3$ can also be defined to act on scalar fields \[
(\mathcal{L}^\delta_{0} f)({\bf x}):=\frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\,\big( f({\bf x}+{\bf z})-f({\bf x})\big)
\,d{\bf z}, \] in which case $f\in L^p(\Omega)\mapsto \mathcal{L}^\delta_{0} f\in L^p(\Omega)^{3\times 3}$. The convergence result in this case is given by the following lemma, whose proof is similar to that of Lemma \ref{lemma1}. \begin{lem} \label{lemma2} If $f\in C^3(\Omega)$ then \begin{equation} \label{lim_Lz_scalar} \lim_{\delta\rightarrow 0} \mathcal{L}^\delta_{0} f =2 \nabla\nabla f +\Delta f \;I,\;\;\;\; \mbox{ in } L^p(\Omega)^{3\times 3}\end{equation}
for $1\leq p<\infty$. \end{lem} We use Lemma~\ref{lemma1} and Lemma~\ref{lemma2} to show the following convergence result for the operator $\mathcal{L}^\delta_s$ defined in \eqref{Ldel_s_2}. \begin{prop} \label{prop1} Assume that the vector field ${\bf v}$ is in $C^3(\Omega)^3$ and the shear modulus $\mu$ is in $C^3(\Omega)$. Then as $\delta \rightarrow 0$, \begin{equation} \label{lim_Ls} \mathcal{L}^\delta_s {\bf v}\longrightarrow \mathcal{N}_s{\bf v},\;\;\;\; \mbox{ in } L^p(\Omega)^3 \end{equation}
for $1\leq p<\infty$. \end{prop} \begin{proof} The operator $\mathcal{L}^\delta_s$ in \eqref{Ldel_s_2}, after the change of variables ${\bf z}={\bf y}-{\bf x}$, becomes \begin{eqnarray} \label{Ls3}
(\mathcal{L}^\delta_s{\bf v})({\bf x})=\frac{15}{|B_\delta|}\int_{B_\delta(0)}
\left(\mu({\bf x})+\mu({\bf x}+{\bf z})\right) \frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\,\big( {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})\big)\,d{\bf z}. \end{eqnarray} We decompose the operator $\mathcal{L}^\delta_s$ as $\mathcal{L}^\delta_s=\mathcal{L}^\delta_{s1}+\mathcal{L}^\delta_{s2}$, where \begin{eqnarray} \label{Ls1}
(\mathcal{L}^\delta_{s1}{\bf v})({\bf x})&=&\frac{1}{2}\mu({\bf x})\;\frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\,\big( {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})\big)\,d{\bf z},\\ \label{Ls2}
(\mathcal{L}^\delta_{s2}{\bf v})({\bf x})&=&\frac{1}{2}\;\frac{30}{|B_\delta|}\int_{B_\delta(0)}
\mu({\bf x}+{\bf z})\; \frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\,\big( {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})\big)\,d{\bf z}. \end{eqnarray} Using Lemma~\ref{lemma1} we find that, as $\delta\rightarrow 0$, \begin{equation} \label{Ls1_2} \mathcal{L}^\delta_{s1}{\bf v}\longrightarrow \frac{1}{2}\, \mu \left( \frac{}{} 2 \nabla(\nabla\cdot{\bf v}) +\Delta{\bf v}\right),\;\;\;\; \mbox{ in } L^p(\Omega)^3. \end{equation} The integral in \eqref{Ls2} can be written as \begin{eqnarray} \label{Ls2_2} \nonumber
(\mathcal{L}^\delta_{s2}{\bf v})({\bf x})&=&\frac{1}{2}\;\frac{30}{|B_\delta|}\int_{B_\delta(0)}
\frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\,\big(\mu({\bf x}+{\bf z}) {\bf v}({\bf x}+{\bf z})-\mu({\bf x}) {\bf v}({\bf x})\big)\,d{\bf z}\\
&& - \frac{1}{2}\;\frac{30}{|B_\delta|} {\bf v}({\bf x}) \int_{B_\delta(0)}
\frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\,\big(\mu({\bf x}+{\bf z})-\mu({\bf x}) \big)\,d{\bf z}. \end{eqnarray} By applying Lemma~\ref{lemma1} to the first term on the right hand side of \eqref{Ls2_2} and Lemma~\ref{lemma2} to the second term, we find that, as $\delta\rightarrow 0$, \begin{equation} \label{Ls2_convg} \mathcal{L}^\delta_{s2}{\bf v}\longrightarrow \frac{1}{2} \left( \frac{}{} 2 \nabla(\nabla\cdot(\mu{\bf v})) +\Delta(\mu{\bf v})\right) - \frac{1}{2} \left( \frac{}{} 2 \nabla\nabla \mu +(\Delta\mu) \,I\right){\bf v}, \end{equation} in $L^p(\Omega)^3$. From \eqref{Ls1_2} and \eqref{Ls2_convg}, we obtain the following convergence in $L^p(\Omega)^3$ \begin{eqnarray} \label{Ls_convg}
\lim_{\delta\rightarrow 0}\mathcal{L}^\delta_{s}{\bf v} &=&\frac{1}{2}\left( \frac{}{} 2\mu\nabla(\nabla\cdot{\bf v})+\mu\Delta{\bf v}+ 2 \nabla(\nabla\cdot(\mu{\bf v})) +\Delta(\mu{\bf v}) - 2 (\nabla\nabla \mu){\bf v} -(\Delta\mu){\bf v} \right). \end{eqnarray} Expanding $\Delta(\mu{\bf v})$ and $\nabla(\nabla\cdot(\mu{\bf v}))$ in the right hand side of \eqref{Ls_convg}, using the identities \begin{eqnarray} \label{calc_identities} \nonumber \Delta(\mu{\bf v}) &=& \mu \Delta{\bf v}+2\nabla{\bf v}\nabla\mu+(\Delta\mu)\;{\bf v},\\ \nonumber \nabla(\nabla\cdot(\mu{\bf v})) &=& (\nabla\nabla\mu)^T{\bf v}+(\nabla{\bf v})^T \nabla\mu+\mu\nabla(\nabla\cdot{\bf v})+(\nabla\cdot{\bf v})\nabla\mu, \end{eqnarray} and then simplifying, one finds that \begin{eqnarray} \label{Ls_convg_2} \nonumber &&\frac{1}{2}\left( \frac{}{} 2\mu\nabla(\nabla\cdot{\bf v})+\mu\Delta{\bf v}+ 2 \nabla(\nabla\cdot(\mu{\bf v})) +\Delta(\mu{\bf v}) - 2 (\nabla\nabla \mu){\bf v} -(\Delta\mu){\bf v} \right)\\ \nonumber &&\;\;\;\;\;\;\;= (\nabla\cdot{\bf v})\nabla\mu+2\mu\nabla(\nabla\cdot{\bf v})+\mu\Delta{\bf v}+\nabla{\bf v}\nabla\mu+(\nabla{\bf v})^T\nabla\mu\\
&&\;\;\;\;\;\;\;= \nabla\left(\mu\nabla\cdot{\bf v}\right) + \nabla\cdot\left(\mu\left(\nabla{\bf v}+(\nabla{\bf v})^T\right)\right).
\end{eqnarray} Finally, equation \eqref{lim_Ls} follows from \eqref{Ns}, \eqref{Ls_convg}, and \eqref{Ls_convg_2}.
\end{proof} In the next result we consider the convergence of the operator $\mathcal{L}^\delta_d$ defined in \eqref{Ldel_d_3}. \begin{prop} \label{prop2} Assume that the vector field ${\bf v}$ is in $C^3(\Omega)^3$ and that the material properties $\mu$ and $\lambda$ are in $C^2(\Omega)$. Then as $\delta \rightarrow 0$, \begin{equation} \label{lim_Ld} \mathcal{L}^\delta_d {\bf v}\longrightarrow \mathcal{N}_d{\bf v},\;\;\;\; \mbox{ in } L^p(\Omega)^3 \end{equation}
for $1\leq p<\infty$. \end{prop} \begin{proof} Let ${\bf x}$ be a point in the interior of $\Omega$ and $c=\lambda-\mu$. Then by changing variables (${\bf w}={\bf z}-{\bf y}$ then ${\bf h}={\bf y}-{\bf x}$) in \eqref{Ldel_d_3}, $\mathcal{L}^\delta_d{\bf v}$ can be written as \begin{eqnarray} \label{Ld} \nonumber
(\mathcal{L}^\delta_d {\bf v})({\bf x})&=&\frac{9}{|B_\delta|^2}\int_{B_\delta(0)}\int_{B_\delta(0)}
c({\bf x}+{\bf h}) \;\frac{{\bf h}\otimes{\bf w}}{|{\bf h}|^2 |{\bf w}|^2} \,{\bf v}({\bf x}+{\bf h}+{\bf w})\,d{\bf w}\; d{\bf h}\\
&=&\frac{9}{|B_\delta|^2}\int_{B_\delta(0)}
c({\bf x}+{\bf h}) \;\frac{{\bf h}}{|{\bf h}|^2}\int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2} \cdot \,{\bf v}({\bf x}+{\bf h}+{\bf w})\,d{\bf w} \,d{\bf h}.
\end{eqnarray} The Taylor expansion of ${\bf v}({\bf x}+{\bf h}+{\bf w})$ about the point ${\bf x}+{\bf h}$ is given by \begin{equation} \label{Ld_taylor1} {\bf v}({\bf x}+{\bf h}+{\bf w})={\bf v}({\bf x}+{\bf h})+\nabla{\bf v}({\bf x}+{\bf h}) {\bf w}+\frac{1}{2} \nabla\nabla{\bf v}({\bf x}+{\bf h})\, ({\bf w}\otimes{\bf w})+{\bf r}_1({\bf v};{\bf x}+{\bf h},{\bf w}), \end{equation} where \begin{equation} \label{Ld_remainder1} {\bf r}_1({\bf v};{\bf x}+{\bf h},{\bf w})=\frac{1}{3!} \nabla\nabla\nabla{\bf v}(\boldsymbol\xi)\, ({\bf w}\otimes{\bf w}\otimes{\bf w}) \end{equation} for some $\boldsymbol\xi$ on the line segment joining ${\bf x}+{\bf h}$ and ${\bf x}+{\bf h}+{\bf w}$. By inserting \eqref{Ld_taylor1} in the inner integral of \eqref{Ld}, expanding the integral, and then rearranging the tensor products, we find \begin{eqnarray} \label{Ld_inner_1} \nonumber
\int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2}\cdot\,{\bf v}({\bf x}+{\bf h}+{\bf w})\,d{\bf w} &=&
\int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2}\,d{\bf w}\cdot {\bf v}({\bf x}+{\bf h})+
\int_{B_\delta(0)}\frac{{\bf w}\otimes{\bf w}}{|{\bf w}|^2}\,d{\bf w}\colon \nabla{\bf v}({\bf x}+{\bf h})\\ \nonumber &&\hspace{-2cm} +
\int_{B_\delta(0)}\frac{{\bf w}\otimes{\bf w}\otimes{\bf w}}{|{\bf w}|^2}\,d{\bf w}\colon \frac{1}{2}\nabla\nabla{\bf v}({\bf x}+{\bf h})
+ \int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2}\cdot\,{\bf r}_1({\bf v};{\bf x}+{\bf h},{\bf w})\,d{\bf w}.\\ \end{eqnarray} We note that, due to symmetry, the integrals in the first and third terms of the right hand side of \eqref{Ld_inner_1} are identically zero. A straightforward calculation shows that \begin{equation} \label{identity_identity}
\int_{B_\delta(0)}\frac{{\bf w}\otimes{\bf w}}{|{\bf w}|^2}\,d{\bf w} = \frac{|B_\delta|}{3} \,I, \end{equation} and thus \eqref{Ld_inner_1} is equivalent to \begin{eqnarray} \label{Ld_inner_2} \nonumber
\int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2}\cdot\,{\bf v}({\bf x}+{\bf h}+{\bf w})\,d{\bf w} &=&
\frac{|B_\delta|}{3} \; \nabla\cdot{\bf v}({\bf x}+{\bf h})
+ \int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2}\cdot\,{\bf r}_1({\bf v};{\bf x}+{\bf h},{\bf w})\,d{\bf w}.\\ \end{eqnarray} Substituting \eqref{Ld_inner_2} in \eqref{Ld} one finds that \begin{eqnarray} \label{Ld_taylor2} \nonumber (\mathcal{L}^\delta_d {\bf v})({\bf x})&=&
\frac{3}{|B_\delta|}\int_{B_\delta(0)}c({\bf x}+{\bf h})\nabla\cdot{\bf v}({\bf x}+{\bf h})\frac{{\bf h}}{|{\bf h}|^2}\,d{\bf h}\\
&& +\frac{9}{|B_\delta|^2}\int_{B_\delta(0)}c({\bf x}+{\bf h})\;\frac{{\bf h}}{|{\bf h}|^2}
\int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2} \cdot \,{\bf r}_1({\bf v};{\bf x}+{\bf h},{\bf w})\,d{\bf w} \,d{\bf h}. \end{eqnarray} Using \eqref{Ld_remainder1} we obtain that for some $K>0$, \begin{eqnarray} \label{Ld_estimate} \nonumber
\left|
\frac{9}{|B_\delta|^2}\int_{B_\delta(0)}c({\bf x}+{\bf h})\;\frac{{\bf h}}{|{\bf h}|^2}
\int_{B_\delta(0)}\frac{{\bf w}}{|{\bf w}|^2} \cdot \,{\bf r}_1({\bf v};{\bf x}+{\bf h},{\bf w})\,d{\bf w} \,d{\bf h}
\right| &\leq& K \frac{9}{|B_\delta|^2} \int_{B_\delta(0)}\frac{1}{|{\bf h}|}\,d{\bf h} \int_{B_\delta(0)}|{\bf w}|^2 \,d{\bf w}\\ &=& \BigO{\delta}, \end{eqnarray}
where we used the facts that $\int_{B_\delta(0)}\frac{1}{|{\bf h}|}\,d{\bf h}=\BigO{\delta^2}$ and
$\int_{B_\delta(0)}|{\bf w}|^2 \,d{\bf w} =\BigO{\delta^5}$. Therefore, the second term in the right hand side of \eqref{Ld_taylor2} vanishes in the limit as $\delta\rightarrow 0$.
For the first term in the right hand side of \eqref{Ld_taylor2}, we first expand $c\nabla\cdot{\bf v}$ in a Taylor series about the point ${\bf x}$ \begin{equation} \label{Ld_taylor3} \left(c \nabla\cdot{\bf v}\right)({\bf x}+{\bf h})=\left(c \nabla\cdot{\bf v}\right)({\bf x})+\nabla\left(c \nabla\cdot{\bf v}\right)({\bf x})\cdot {\bf h}+{\bf r}_2({\bf v};{\bf x},{\bf h}), \end{equation} where \begin{equation} \label{Ld_remainder2} {\bf r}_2({\bf v};{\bf x},{\bf h})=\frac{1}{2} \nabla\nabla\left(c \nabla\cdot{\bf v}\right)(\boldsymbol\xi)\colon ({\bf h}\otimes{\bf h}) \end{equation} for some $\boldsymbol\xi$ on the line segment joining ${\bf x}$ and ${\bf x}+{\bf h}$. Then by substituting \eqref{Ld_taylor3} in the first term in the right hand side of \eqref{Ld_taylor2}, we find \begin{eqnarray} \label{Ld_taylor2_first} \nonumber
\frac{3}{|B_\delta|}\int_{B_\delta(0)}c({\bf x}+{\bf h}) \nabla\cdot{\bf v}({\bf x}+{\bf h})\; \frac{{\bf h}}{|{\bf h}|^2}\,d{\bf h} &=&
\frac{3}{|B_\delta|}\int_{B_\delta(0)} \frac{{\bf h}}{|{\bf h}|^2}\,d{\bf h}\,\, c({\bf x}) \nabla\cdot{\bf v}({\bf x})\\ \nonumber &&
\nonumber +
\frac{3}{|B_\delta|}\int_{B_\delta(0)} \frac{{\bf h}\otimes{\bf h}}{|{\bf h}|^2}\,d{\bf h}\, \nabla(c\nabla\cdot{\bf v})({\bf x})
+\frac{3}{|B_\delta|}\int_{B_\delta(0)} {\bf r}_2({\bf v};{\bf x},{\bf h})\,\frac{{\bf h}}{|{\bf h}|^2}\,d{\bf h} \\ \nonumber \\ &=& \nabla(c\nabla\cdot{\bf v})({\bf x}) + \BigO{\delta}. \end{eqnarray} We note that, in order to obtain \eqref{Ld_taylor2_first}, we used \eqref{identity_identity}, the identity \begin{equation} \label{zero_integral_1}
\int_{B_\delta(0)}\frac{{\bf h}}{|{\bf h}|^2}\,d{\bf h} = 0, \end{equation} and the estimate \begin{eqnarray} \label{Ld_estimate2} \nonumber
\left|
\frac{3}{|B_\delta|}\int_{B_\delta(0)}{\bf r}_2({\bf v};{\bf x},{\bf h})\;\frac{{\bf h}}{|{\bf h}|^2} \,d{\bf h}\right|
&\leq& M \frac{3}{|B_\delta|}\int_{B_\delta(0)} |{\bf h}|\,d{\bf h} \\ &=& \BigO{\delta} \end{eqnarray} for some $M>0$. By substituting \eqref{Ld_taylor2_first} in \eqref{Ld_taylor2} and using \eqref{Ld_estimate}, one finds \begin{equation} \label{Ld_lim2} \lim_{\delta\rightarrow 0} (\mathcal{L}^\delta_d {\bf v})({\bf x}) = \nabla(c\nabla\cdot{\bf v})({\bf x}). \end{equation} The result follows from \eqref{Ld_lim2} and Lebesgue's dominated convergence theorem.
\end{proof} We conclude this section by the following result which follows from combining Propositions \ref{prop1} and \ref{prop2}.
\begin{thm} \label{thm_convg} Assume that the vector field ${\bf v}$ is in $C^3(\Omega)^3$ and that the material properties $\mu$ and $\lambda$ are in $C^3(\Omega)$. Then \begin{equation} \label{lim_Ldel_2} \lim_{\delta\rightarrow 0}(\mathcal{L}^\delta {\bf v})({\bf x})= (\mathcal{N}{\bf v})({\bf x}),\;\;\;\; {\bf x}\in\mathring{\Omega}. \end{equation} Moreover, as $\delta \rightarrow 0$, \begin{equation} \label{lim_Ldel} \mathcal{L}^\delta {\bf v}\longrightarrow \mathcal{N}{\bf v}\;\;\;\; \mbox{ in } L^p(\Omega)^3, \end{equation}
for $1\leq p<\infty$. \end{thm} \begin{remark} The regularity assumptions on the vector field ${\bf v}$ and the material properties $\mu$ and $\lambda$ in Theorem~\ref{thm_convg} as well as in the other results in this section can be relaxed. However, in Section~\ref{nonconvg_sec}, we show that the material properties must at least be continuous for the convergence of peridynamics to elasticity results to hold. \end{remark}
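As a sanity check on the moment identities used in this section, \eqref{identity_identity} and \eqref{zzzz1} can be verified by Monte Carlo sampling of directions, since the integrands $\frac{{\bf w}\otimes{\bf w}}{|{\bf w}|^2}$ and $\frac{z_i z_j z_k z_l}{|{\bf z}|^4}$ depend only on the unit direction ${\bf n}={\bf z}/|{\bf z}|$. The sketch below is ours and purely illustrative; the sample size is an arbitrary choice.

```python
import numpy as np

# Uniform random directions on the unit sphere
rng = np.random.default_rng(2)
N = 500_000
n = rng.standard_normal((N, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)

# Second moment: w (x) w / |w|^2 = n (x) n, so the integral over B_delta equals
# |B_delta| * mean(n (x) n) = (|B_delta|/3) I; we check 3 * mean(n (x) n) = I.
second = 3.0 * np.einsum('ni,nj->ij', n, n) / N
assert np.allclose(second, np.eye(3), atol=0.01)

# Fourth moment: z_i z_j z_k z_l / |z|^4 = n_i n_j n_k n_l, so the tensor in
# the lemma's proof equals 30 * mean(n_i n_j n_k n_l): pattern 6 / 2 / 0.
fourth = 30.0 * np.einsum('ni,nj,nk,nl->ijkl', n, n, n, n) / N
assert abs(fourth[0, 0, 0, 0] - 6.0) < 0.1   # i = j = k = l
assert abs(fourth[0, 0, 1, 1] - 2.0) < 0.1   # indices paired, i != k
assert abs(fourth[0, 1, 0, 1] - 2.0) < 0.1
assert abs(fourth[0, 0, 0, 1]) < 0.1         # any unpaired index gives 0
```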
\section{Non-Convergence of Peridynamics at Interfaces} \label{nonconvg_sec} In this section we show that continuity of the material properties is a necessary condition for the convergence of linear peridynamics to linear elasticity as described in Theorem \ref{thm_convg}.
Let $\Gamma$ be an interface separating different phases inside a heterogeneous medium occupying the region $\Omega$, as illustrated in Figure \ref{fig_gamma}. We assume that the surface $\Gamma$ is $C^1$. In this case, the material properties $\lambda$ and $\mu$ have jump discontinuities at the interface. To simplify the presentation, we assume that the medium is a two-phase composite with $\Omega=\Omega_+\cup\Omega_-\cup\Gamma$, where $\Omega_+$ and $\Omega_-$ are two open disjoint sets, and that the material properties are piecewise constant, given by \begin{equation} \label{lambda} \lambda({\bf x})= \left\{
\begin{array}{ll}
\lambda_+,\; {\bf x}\in\Omega_+\cup\Gamma\\
\lambda_-,\; {\bf x}\in\Omega_-
\end{array} \right.,\;\;\;\;\;\;\;\;\;\;\; \mu({\bf x})= \left\{
\begin{array}{ll}
\mu_+,\; {\bf x}\in\Omega_+\cup\Gamma\\
\mu_-,\; {\bf x}\in\Omega_-
\end{array} \right. \end{equation}
In the remaining part of this article, we will use the following notation. Given a point ${\bf x}_0\in\Gamma$, let ${\bf n}({\bf x}_0)$ be the unit normal to the interface at ${\bf x}_0$. We suppose that ${\bf n}$ is directed outward from the $-$ side of the interface, pointing toward the $+$ side, as illustrated in Figure \ref{fig_gamma}. For a scalar, vector, or tensor field $F$, we define \begin{eqnarray*} F({\bf x}_0^+)&:=& \lim_{{\bf y}\rightarrow {\bf x}_0, {\bf y}\in\Omega_+} F({\bf y}), \\ F({\bf x}_0^-)&:=& \lim_{{\bf y}\rightarrow {\bf x}_0, {\bf y}\in\Omega_-} F({\bf y}).
\end{eqnarray*}
In addition, we define \begin{eqnarray*} \label{Bpm} \Bp{{\bf x}_0}&:=& B_\delta({\bf x}_0)\cap\left(\Omega_+\cup\Gamma\right),\\ \Bm{{\bf x}_0}&:=& B_\delta({\bf x}_0)\cap\Omega_-. \end{eqnarray*} Note that the sets $\Bp{{\bf x}_0}$ and $\Bm{{\bf x}_0}$ depend on the normal ${\bf n}$. Furthermore, we use the following notation to denote the jump in $F$ across the interface \[
\left[F\right]^{+}_{-}:=F({\bf x}^+)-F({\bf x}^-),\;\;\; {\bf x}\in\Gamma. \]
\begin{figure}
\caption{ The interface $\Gamma$ separates the two phases $\Omega_+$ and $\Omega_-$.}
\label{fig_gamma}
\end{figure}
The behavior of the operator $\mathcal{L}^\delta_s$ at material interfaces, in the local limit, is described by the following result. \begin{lem} \label{nonconvg_1}
Assume that the shear modulus $\mu$ is given by \eqref{lambda} and that the vector field ${\bf v}$ is continuous on $\Omega$ and smooth on $\Omega\setminus\Gamma$.
Then for ${\bf x}\in\Gamma$, \[
\lim_{\delta\rightarrow 0} \left(\mathcal{L}^\delta_s{\bf v}\right)({\bf x}) \mbox{ does not exist}. \] Moreover, the sequence $\left(\mathcal{L}^\delta_s{\bf v}\right)_\delta$ is unbounded in $L^p(\Omega)^3$, with $1\leq p<\infty$. \end{lem} \begin{remark} This result holds in the more general case when $\mu$ is differentiable on $\Omega\setminus\Gamma$ and has a jump discontinuity across the interface $\Gamma$, rather than being piecewise constant. \end{remark} \begin{proof}
Let ${\bf x}$ be a point on the interface $\Gamma$ at a distance of at least $\delta$ from $\partial\Omega$.
Then $(\mathcal{L}^\delta_s{\bf v})({\bf x})$ in \eqref{Ldel_s_2}, after a change of variables and using the fact that $B_\delta({\bf x})=\Bp{{\bf x}}\cup\Bm{{\bf x}}$, can be written as \begin{eqnarray} \label{Ldel_s_3} \nonumber
(\mathcal{L}^\delta_s {\bf v})({\bf x})&=& \frac{15}{|B_\delta|}\int_{\Bp{{\bf 0}}}
\big(\mu_+ +\mu_+\big) \frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\big( {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})\big)
\,d{\bf z}\\
&& + \frac{15}{|B_\delta|}\int_{\Bm{{\bf 0}}}
\big(\mu_+ +\mu_-\big) \frac{{\bf z}
\otimes{\bf z}}{|{\bf z}|^4}\big( {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})\big)
\,d{\bf z}, \end{eqnarray} where $\Bp{{\bf 0}}=\Bp{{\bf x}}-{\bf x}$ and $\Bm{{\bf 0}}=\Bm{{\bf x}}-{\bf x}$. Note that in \eqref{Ldel_s_3} we have used the facts that, for ${\bf x}\in\Gamma$,
$\mu({\bf x})=\mu_+$, $\mu({\bf x}+{\bf z})=\mu_+$ for ${\bf z}\in\Bp{{\bf 0}}$, and $\mu({\bf x}+{\bf z})=\mu_-$ for ${\bf z}\in\Bm{{\bf 0}}$. Since ${\bf v}$ is smooth on each side of $\Gamma$, for ${\bf z}$ in the $+$ side of $\Gamma$ (i.e., ${\bf z}\in\Bp{{\bf 0}}$), ${\bf v}$ can be expanded as \begin{eqnarray} \label{Ls_taylor4_1} {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})=\nabla{\bf v}({\bf x}^+){\bf z}+{\bf r}_+({\bf v};{\bf x},{\bf z}), \end{eqnarray} where \begin{eqnarray}
\label{Ls_remainder4_1} {\bf r}_+({\bf v};{\bf x},{\bf z})&=&\frac{1}{2} \nabla\nabla{\bf v}(\boldsymbol\xi_+)\;{\bf z}\otimes{\bf z} \end{eqnarray} for some $\boldsymbol\xi_+$ on the line segment joining ${\bf x}$ and ${\bf x}+{\bf z}$. Similarly, ${\bf v}$ can be expanded in a Taylor series on the $-$ side of $\Gamma$. For ${\bf z}\in\Bm{{\bf 0}}$, \begin{eqnarray} \label{Ls_taylor4_2} {\bf v}({\bf x}+{\bf z})-{\bf v}({\bf x})=\nabla{\bf v}({\bf x}^-){\bf z}+{\bf r}_-({\bf v};{\bf x},{\bf z}), \end{eqnarray} where \begin{eqnarray}
\label{Ls_remainder4_2} {\bf r}_-({\bf v};{\bf x},{\bf z})&=&\frac{1}{2} \nabla\nabla{\bf v}(\boldsymbol\xi_-)\;{\bf z}\otimes{\bf z} \end{eqnarray} for some $\boldsymbol\xi_-$ on the line segment joining ${\bf x}$ and ${\bf x}+{\bf z}$. Substituting \eqref{Ls_taylor4_1} and \eqref{Ls_taylor4_2} in \eqref{Ldel_s_3}, expanding the integrals, and rearranging the tensor products, we find \begin{eqnarray} \label{Ldel_s_4} \nonumber
(\mathcal{L}^\delta_s {\bf v})({\bf x})&=& \frac{15}{|B_\delta|}(2\mu_+)\int_{\Bp{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4} \,d{\bf z}\;\nabla{\bf v}({\bf x}^+) \nonumber
+ \frac{15}{|B_\delta|}(2\mu_+)\int_{\Bp{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\;{\bf r}_+({\bf v};{\bf x},{\bf z})\,d{\bf z}\\ \nonumber
&+& \frac{15}{|B_\delta|}(\mu_++\mu_-)\int_{\Bm{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4} \,d{\bf z}\;\nabla{\bf v}({\bf x}^-)
+ \frac{15}{|B_\delta|}(\mu_++\mu_-)\int_{\Bm{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\;{\bf r}_-({\bf v};{\bf x},{\bf z})\,d{\bf z}.\\ \end{eqnarray} Using \eqref{Ls_remainder4_1}, we obtain the following bound \begin{eqnarray} \label{Ls_estimate3_1} \nonumber
\left|
\frac{15}{|B_\delta|}(2\mu_+)\int_{\Bp{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\;{\bf r}_+({\bf v};{\bf x},{\bf z})\,d{\bf z}\right|
&\leq& \frac{15}{|B_\delta|}(2\mu_+)\int_{\Bp{{\bf 0}}}
\frac{\left|{\bf z}\otimes{\bf z}\otimes{\bf z}\otimes{\bf z}\right|}{|{\bf z}|^4}\,d{\bf z} \left|\frac{1}{2}\nabla\nabla{\bf v}(\boldsymbol\xi_+)\right|\\ &=& \BigO{1}. \end{eqnarray} Similarly, one finds \begin{eqnarray} \label{Ls_estimate3_2}
\frac{15}{|B_\delta|}(\mu_++\mu_-)\int_{\Bm{{\bf 0}}}\frac{{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\;{\bf r}_-({\bf v};{\bf x},{\bf z})\,d{\bf z}=\BigO{1}, \end{eqnarray} and hence the second and fourth terms in \eqref{Ldel_s_4} remain bounded in the limit as $\delta\rightarrow 0$. Using \eqref{Ls_estimate3_1}, \eqref{Ls_estimate3_2}, and the fact that \[ 0=\int_{B_\delta({\bf 0})}
\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4} \,d{\bf z}= \int_{\Bp{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4} \,d{\bf z}+\int_{\Bm{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4} \,d{\bf z}, \] equation \eqref{Ldel_s_4} becomes \begin{eqnarray} \label{Ldel_s_5} \nonumber
(\mathcal{L}^\delta_s {\bf v})({\bf x})&=& \frac{15}{|B_\delta|}\int_{\Bp{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4} \,d{\bf z}\;\left(2\mu_+\nabla{\bf v}({\bf x}^+)-(\mu_++\mu_-)\nabla{\bf v}({\bf x}^-)\right)+\BigO{1}.\\ \end{eqnarray} From Lemma \ref{lem_K_delta} (see Section \ref{interf_model_sec}), the third-order tensor \begin{equation} \label{K_delta}
\mathbb{K}_\delta := \frac{1}{|B_\delta|}\int_{\Bp{{\bf 0}}}\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\,d{\bf z} \end{equation} behaves, in the limit as $\delta\rightarrow 0$, as
\begin{equation} \label{K_delta_2}
\mathbb{K}_\delta \approx \frac{1}{\delta}\; \mathbb{K} \end{equation} for a constant third-order tensor $ \mathbb{K}$. Thus, equations \eqref{Ldel_s_5}, \eqref{K_delta}, and \eqref{K_delta_2} imply that \begin{equation} \label{lim_Ls_interface}
\lim_{\delta\rightarrow 0} \left(\mathcal{L}^\delta_s{\bf v}\right)({\bf x})=\infty\;\;\;\; \mbox{ for } {\bf x}\in\Gamma, \end{equation} and, consequently, that the sequence $\left(\mathcal{L}^\delta_s{\bf v}\right)_\delta$ is unbounded in $L^p(\Omega)$.
\end{proof} \begin{remark} If ${\bf v}$ is smooth at the interface then \eqref{Ldel_s_5}, in the proof above, becomes \begin{equation}
\label{Ls_5} \left(\mathcal{L}^\delta_s{\bf v}\right)({\bf x})=
\frac{15}{|B_\delta|}\int_{\Bp{{\bf 0}}}\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\,d{\bf z}\;\left(\mu_+-\mu_-\right)\nabla{\bf v}({\bf x}) +\BigO{1}, \end{equation} and hence \eqref{lim_Ls_interface} still holds in this case. \end{remark}
The behavior of the operator $\mathcal{L}^\delta_d$ in \eqref{Ldel_d_3} at material interfaces, in the local limit, is given by the following result. \begin{lem} \label{nonconvg_2}
Assume that $\mu$ and $\lambda$ are given by \eqref{lambda} and that the vector field ${\bf v}$ is continuous on $\Omega$ and smooth on $\Omega\setminus\Gamma$.
Then for ${\bf x}\in\Gamma$, \[
\lim_{\delta\rightarrow 0} \left(\mathcal{L}^\delta_d{\bf v}\right)({\bf x}) \mbox{ does not exist}. \] Moreover, the sequence $\left(\mathcal{L}^\delta_d{\bf v}\right)_\delta$ is unbounded in $L^p(\Omega)$, with $1\leq p<\infty$. \end{lem} The proof of this lemma is similar to that of Lemma \ref{nonconvg_1} and thus will not be presented here. However, we note that for ${\bf x}\in\Gamma$, it can be shown that \begin{equation}
\label{Ld_5} \left(\mathcal{L}^\delta_d{\bf v}\right)({\bf x})= \left((\lambda_+ -\mu_+)(\nabla\cdot{\bf v})({\bf x}^+)-(\lambda_- -\mu_-)(\nabla\cdot{\bf v})({\bf x}^-)\right)\;
\frac{3}{|B_\delta|}\int_{\Bp{{\bf x}}}\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}\,d{\bf y}\; +\BigO{1}, \end{equation} and \begin{equation}
\label{Ld_6}
\lim_{\delta\rightarrow 0}\frac{3\delta}{|B_\delta|}\int_{\Bp{{\bf x}}}\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}\,d{\bf y}\;=\frac{9}{8}{\bf n}. \end{equation}
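The constant $\frac{9}{8}$ in \eqref{Ld_6} can be checked numerically. The following Python sketch (an illustrative aside, not part of the paper) assumes a flat interface with normal ${\bf n}=\hat{{\bf z}}_3$ and sets $\delta=1$, which suffices because the left-hand side of \eqref{Ld_6} is scale invariant; the half-ball integral is estimated by Monte Carlo.

```python
import numpy as np

# Monte Carlo check of the limit in (Ld_6) for a flat interface with
# normal n = e_3; delta = 1 is enough since the quantity is scale invariant.
rng = np.random.default_rng(0)
N = 2_000_000
v = rng.normal(size=(N, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
z = v * (rng.random(N) ** (1.0 / 3.0))[:, None]      # uniform in the unit ball

n = np.array([0.0, 0.0, 1.0])
half = z @ n > 0                                     # restrict to B_delta^+
integrand = np.where(half[:, None], z / np.sum(z * z, axis=1, keepdims=True), 0.0)
# int_{B^+} f dz  ~  |B| * mean(f * 1_{B^+}), so the 1/|B| prefactor cancels
estimate = 3.0 * integrand.mean(axis=0)
print(estimate)                                      # approximately (0, 0, 9/8)
```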
The following result summarizes the behavior of the linear peridynamics operator $\mathcal{L}^\delta$, in the limit as $\delta\rightarrow 0$, in the presence of material interfaces. \begin{thm} \label{nonconvg_thm} Assume that the material properties $\mu$ and $\lambda$ are smooth on $\Omega\setminus\Gamma$ and have jump discontinuities across the interface $\Gamma$. Assume further that the vector field ${\bf v}$ is continuous on $\Omega$ and smooth on $\Omega\setminus\Gamma$.
Then \begin{enumerate} \item[(i)] for ${\bf x}\in\Omega\setminus\Gamma$, \[
\lim_{\delta\rightarrow 0} \left(\mathcal{L}^\delta{\bf v}\right)({\bf x}) = \mathcal{N}{\bf v}({\bf x}), \] where $\mathcal{N}$ is the operator of linear elasticity given by \eqref{N}, and
\item[(ii)] for ${\bf x}\in\Gamma$, \[
\lim_{\delta\rightarrow 0} \left(\mathcal{L}^\delta{\bf v}\right)({\bf x}) \mbox{ does not exist}. \] Moreover, the sequence $\left(\mathcal{L}^\delta{\bf v}\right)_\delta$ is unbounded in $L^p(\Omega)$, with $1\leq p<\infty$. \end{enumerate} \end{thm} \begin{proof} Part (ii) follows from Lemma \ref{nonconvg_1} and Lemma \ref{nonconvg_2}. For part (i), let ${\bf x}\in\Omega\setminus\Gamma$. Then for sufficiently small $\delta$, the ball $B_\delta({\bf x})$ is away from the interface. Thus, Theorem \ref{thm_convg} applies and part (i) follows.
\end{proof}
\section{A New Peridynamics Model for Material Interfaces} \label{interf_model_sec} In this section we introduce a peridynamics model for heterogeneous media in the presence of material interfaces. Our model consists of a modified version of the linear peridynamics operator $\mathcal{L}^\delta$, given by \eqref{Ldel0}, \eqref{Ldel_s_2}, and \eqref{Ldel_d_3}, together with a {\it nonlocal interface condition}. This new model is shown, in Theorem \ref{thm_interface_model}, to converge to the classical interface model of linear elasticity.
\subsection{Peridynamics interface conditions} \label{interf_model_sec_1} The divergence of peridynamics at interfaces in the limit of vanishing nonlocality (see Theorem \ref{nonconvg_thm}) is, in fact, not surprising. Indeed, let us consider the corresponding interface problem in linear elasticity inside a two-phase composite, with $\Omega=\Omega_+\cup\Omega_-\cup\Gamma$ as described in Section \ref{nonconvg_sec}. The strong form of the elastic equilibrium interface problem is given by the following system of partial differential equations \begin{equation} \label{interface_pde}
\left\{ \begin{array}{rll}
\displaystyle \nabla\cdot\sigma({\bf x}) &={\bf b}({\bf x}),\;\;\; &{\bf x}\in\Omega_+ \\
\displaystyle \nabla\cdot\sigma({\bf x}) &={\bf b}({\bf x}),\;\;\; &{\bf x}\in\Omega_- \\ \\ \displaystyle\jump{\sigma{\bf n}}&=0,\;\;\; &{\bf x}\in \Gamma\\ \displaystyle\jump{{\bf u}}&=0,\;\;\; &{\bf x}\in \Gamma
\end{array} \right. \end{equation} where $\sigma$ is the stress tensor given by \eqref{sigma}. We emphasize that imposing interface jump conditions (the last two equations of \eqref{interface_pde}) is necessary for a classical solution ${\bf u}$ defined on $\Omega$ to exist.
Therefore, in order to recover the interface problem in elasticity, given by \eqref{interface_pde} as the local limit of peridynamics inside heterogeneous media in the presence of material interfaces, we need to introduce a peridynamics interface model and impose nonlocal interface conditions such that the model satisfies the conditions C(i)-C(iii), introduced in Section \ref{sec_intro} (Introduction).
We note that Theorem \ref{thm_convg} and Theorem \ref{nonconvg_thm}
imply that if we assume \begin{equation}
\label{natural} (\mathcal{L}^\delta{\bf u})({\bf x})=0,\;\;\; {\bf x}\in\Gamma, \end{equation} then as $\delta\rightarrow 0$, \begin{equation} \label{natural_convg} \mathcal{L}^\delta {\bf u}\longrightarrow \mathcal{N}{\bf u}\;\;\;\; \mbox{ in } L^p(\Omega)^3 \end{equation}
for $1\leq p<\infty$. This means that the following system \begin{equation*} \label{natural_interface}
\left\{ \begin{array}{rll}
\displaystyle \mathcal{L}^\delta{\bf u}({\bf x}) &={\bf b}({\bf x}),\;\;\; &{\bf x}\in\Omega\\
\\
\displaystyle \mathcal{L}^\delta{\bf u}({\bf x}) &=0,\;\;\; &{\bf x}\in \Gamma
\end{array} \right.\tag{\ref{natural_interface_sys}} \end{equation*}
satisfies conditions C(i) and C(iii), and, since the peridynamics operator $\mathcal{L}^\delta$ has been kept unmodified, we call \eqref{natural} the {\it peridynamics natural interface condition}.
However, we show in Proposition \ref{prop_wrong_interface_conditions} that \eqref{natural} does not satisfy condition C(ii), since the local limit of \eqref{natural} is different from the local interface condition \begin{equation} \label{traction_cond}
\sigma{\bf n}({\bf x}^+) =\sigma{\bf n}({\bf x}^-),\;\;\; {\bf x}\in \Gamma. \end{equation}
By applying a coordinate translation, we may assume that the unit vector ${\bf n}$ is the normal to the interface at the origin (i.e., ${\bf n}={\bf n}({\bf 0})$). \begin{lem} \label{lem_K_delta} Let $\mathbb{K}_\delta$ be given by \eqref{K_delta}. Then \begin{equation}
\label{K_delta_limit} \lim_{\delta\rightarrow 0} \delta\;\mathbb{K}_\delta=\mathbb{K}, \end{equation} where the third-order tensor $\mathbb{K}$ satisfies \begin{equation}
\label{K_A} \mathbb{K} A = \frac{3}{32} \left(\left(A+A^T\right){\bf n} + \left(\mbox{\normalfont{tr}}(A)-A{\bf n}\cdot{\bf n}\right){\bf n}\right) \end{equation} for any second-order tensor $A$. \end{lem} \begin{proof}
To emphasize the dependence of the set $\Bp{{\bf 0}}$ on the normal ${\bf n}$, we denote this set by $B_\delta^{{\bf n}+}({\bf 0})$. Using spherical coordinates the unit normal ${\bf n}$ can be represented by \[ {\bf n}=\left( \begin{array}{c} \cos{\phi}\sin{\theta}\\ \sin{\phi}\sin{\theta}\\ \cos{\theta} \end{array} \right), \] where $0\leq\phi\leq 2\pi$ and $0\leq\theta\leq \pi$. Define the rotation matrix \begin{equation} \label{R} R=\left( \begin{array}{ccc} \cos{\phi} \cos{\theta}& \sin{\phi}\cos{\theta} & -\sin{\theta} \\ -\sin{\phi} & \cos{\phi} & 0 \\ \cos{\phi} \sin{\theta}& \sin{\phi}\sin{\theta} & \cos{\theta} \end{array} \right), \end{equation} and notice that \[
R\;{\bf n}=\hat{{\bf z}}_3 =\left( \begin{array}{c} 0\\ 0\\ 1 \end{array} \right). \] Then, by applying the change of coordinates ${\bf z}=R {\bf w}$, we find \begin{eqnarray}
\label{K_delta_chng} \nonumber
\delta\;\mathbb{K}_\delta &=& \frac{\delta}{|B_\delta|}\int_{B_\delta^{{\bf n}+}({\bf 0})}\frac{{\bf w}\otimes{\bf w}\otimes{\bf w}}{|{\bf w}|^4}\,d{\bf w}\\
&=& \frac{\delta}{|B_\delta|}\int_{B_\delta^{\hat{{\bf z}}_3 +}({\bf 0})}\frac{R^{-1}{\bf z}\otimes R^{-1}{\bf z}\otimes R^{-1}{\bf z}}{|R^{-1}{\bf z}|^4} \det(R^{-1})\,d{\bf z}\\ \nonumber
&=& \frac{\delta}{|B_\delta|}\int_{B_\delta^{\hat{{\bf z}}_3 +}({\bf 0})}\frac{R^{T}{\bf z}\otimes R^{T}{\bf z}\otimes R^{T}{\bf z}}{|{\bf z}|^4} \,d{\bf z}, \end{eqnarray}
where in the last step we have used the facts that $R^{-1}=R^T$, $|R^T {\bf z}|=|{\bf z}|$, and $\det(R^{-1})=1$. Since the interface $\Gamma$ is smooth, we may assume that, in the limit as $\delta\rightarrow 0$, the set $B_\delta^{\hat{{\bf z}}_3 +}$ is a half ball. Thus, a straightforward calculation using spherical coordinates shows that \[ \lim_{\delta\rightarrow 0} \delta\;\mathbb{K}_\delta=\mathbb{K}, \] where the entries $\mathbb{K}_{i j k}$ of the third-order tensor $\mathbb{K}$ are given by \begin{equation} \label{K_entries} \left\{ \begin{array}{ccl}
\mathbb{K}_{1 1 1} &=& \frac{3}{32} \cos{\phi} \;\sin{\theta}\left(3-\cos^2{\phi} \;\sin^2{\theta}\right),\\
\mathbb{K}_{1 1 2} &=& \frac{3}{32} \sin{\phi} \;\sin{\theta}\left(1-\cos^2{\phi} \;\sin^2{\theta}\right) =\mathbb{K}_{1 2 1}=\mathbb{K}_{2 1 1},\\
\mathbb{K}_{1 1 3} &=& \frac{3}{32} \cos{\theta}\left(1-\cos^2{\phi} \;\sin^2{\theta}\right) =\mathbb{K}_{1 3 1}=\mathbb{K}_{3 1 1},\\
\mathbb{K}_{1 2 2} &=& \frac{3}{32} \cos{\phi} \;\sin{\theta}\left(1-\sin^2{\phi} \;\sin^2{\theta}\right) =\mathbb{K}_{2 1 2}=\mathbb{K}_{2 2 1},\\
\mathbb{K}_{1 2 3} &=& -\frac{3}{32} \sin{\phi}\;\cos{\phi} \;\sin^2{\theta}\;\cos{\theta} =\mathbb{K}_{1 3 2}=\mathbb{K}_{2 1 3}=\mathbb{K}_{2 3 1}=\mathbb{K}_{3 1 2}=\mathbb{K}_{3 2 1},\\
\mathbb{K}_{1 3 3} &=& \frac{3}{32} \cos{\phi} \;\sin^3{\theta} =\mathbb{K}_{3 1 3}=\mathbb{K}_{3 3 1},\\
\mathbb{K}_{2 2 3} &=& \frac{3}{32} \cos{\theta} \left(1-\sin^2{\phi} \;\sin^2{\theta}\right) =\mathbb{K}_{2 3 2}=\mathbb{K}_{3 2 2},\\
\mathbb{K}_{2 3 3} &=& \frac{3}{32} \sin{\phi} \;\sin^3{\theta} =\mathbb{K}_{3 2 3}=\mathbb{K}_{3 3 2},\\
\mathbb{K}_{2 2 2} &=& \frac{3}{32} \sin{\phi}\;\sin{\theta} \left(3-\sin^2{\phi} \;\sin^2{\theta}\right),\\
\mathbb{K}_{3 3 3} &=& \frac{3}{32} \cos{\theta} \left(3-\cos^2{\theta}\right). \end{array} \right. \end{equation}
By calculating $\mathbb{K} A$, using \eqref{K_entries}, and comparing it with $\frac{3}{32} \left(\left(A+A^T\right){\bf n} + \left(\mbox{\normalfont{tr}}(A)-A{\bf n}\cdot{\bf n}\right){\bf n}\right)$, one finds that \eqref{K_A} holds true.
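The comparison in this last step can also be verified numerically. The Python sketch below (an illustrative aside, not part of the proof; the unit normal ${\bf n}$ and the tensor $A$ are arbitrary choices) estimates $\mathbb{K}$ by Monte Carlo integration over the half ball at $\delta=1$, which is sufficient since $\delta\,\mathbb{K}_\delta$ is scale invariant for a flat interface, and compares $\mathbb{K}A$ with the closed form \eqref{K_A}.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000
v = rng.normal(size=(N, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
z = v * (rng.random(N) ** (1.0 / 3.0))[:, None]      # uniform in the unit ball

n = np.array([1.0, 2.0, 2.0]) / 3.0                  # an arbitrary unit normal
mask = (z @ n > 0).astype(float)
w = mask / np.sum(z * z, axis=1) ** 2                # 1/|z|^4 on the half ball
# K_ijk = lim delta*K_delta = (1/|B_1|) int_{B_1^+} z_i z_j z_k / |z|^4 dz,
# estimated as the mean over the full unit ball of the masked integrand
K = np.einsum('p,pi,pj,pk->ijk', w, z, z, z) / N

A = np.array([[1.0, 2.0, 0.5],
              [0.0, -1.0, 3.0],
              [1.5, 0.2, 2.0]])                      # an arbitrary test tensor
lhs = np.einsum('ijk,jk->i', K, A)
rhs = (3.0 / 32.0) * ((A + A.T) @ n + (np.trace(A) - n @ A @ n) * n)
print(np.max(np.abs(lhs - rhs)))                     # small Monte Carlo error
```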
\end{proof} Applying Lemma \ref{lem_K_delta} with $A=\nabla{\bf v}$ we obtain the following result. \begin{cor} \label{cor_K_delta} Assume that ${\bf v}$ is a differentiable vector-field. Then \begin{equation}
\label{K_delta_grad_v}
\lim_{\delta\rightarrow 0} \frac{\delta}{|B_\delta|}\int_{\Bp{{\bf 0}}}\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4}\,d{\bf z}\;\nabla{\bf v}= \frac{3}{32} \left( \left(\nabla{\bf v}+\nabla{\bf v}^T\right){\bf n} + (\nabla\cdot{\bf v})\;{\bf n}-\left(\nabla{\bf v}\;{\bf n}\cdot{\bf n}\right){\bf n}\right). \end{equation} \end{cor} The local limit of peridynamics' natural interface condition \eqref{natural} is given by the following result.
\begin{prop} \label{prop_wrong_interface_conditions}
Assume that the material properties $\mu$ and $\lambda$ are given by \eqref{lambda}. Assume further that ${\bf u}$ is continuous on $\Omega$ and smooth on $\Omega\setminus\Gamma$. Then for ${\bf x}\in\Gamma$ \begin{eqnarray}
\label{local_of_perid_natural} \nonumber \lim_{\delta\rightarrow 0} \delta(\mathcal{L}^\delta {\bf u})({\bf x}) &=& \frac{45}{32} \bigg(\jump{(\mu_++\mu)(\nabla{\bf u}+\nabla{\bf u}^T)}{\bf n}+ \jump{(\mu_++\mu)\nabla\cdot{\bf u}}{\bf n} \\ &&\;\;\;\;\;\;\;\;\;\; - \jump{(\mu_++\mu)\nabla{\bf u}}{\bf n}\cdot{\bf n}\;{\bf n} + \frac{4}{5}\jump{(\lambda-\mu)\nabla\cdot{\bf u}}{\bf n}\bigg). \end{eqnarray}
\end{prop} \begin{proof} Let ${\bf x}\in\Gamma$. Using Corollary \ref{cor_K_delta} and \eqref{Ldel_s_5}, one finds that \begin{eqnarray} \label{L_s_6} \nonumber \lim_{\delta\rightarrow 0}\delta(\mathcal{L}^\delta_s {\bf u})({\bf x})&=& \frac{45}{32} \bigg(\jump{(\mu_++\mu)(\nabla{\bf u}+\nabla{\bf u}^T)}{\bf n}+ \jump{(\mu_++\mu)\nabla\cdot{\bf u}}{\bf n}
- \jump{(\mu_++\mu)\nabla{\bf u}}{\bf n}\cdot{\bf n}\;{\bf n}\bigg).\\ \end{eqnarray} And from \eqref{Ld_5} and \eqref{Ld_6}, one finds that \begin{eqnarray}
\label{Ld_7} \lim_{\delta\rightarrow 0}\delta\left(\mathcal{L}^\delta_d{\bf u}\right)({\bf x})=
\frac{9}{8}\jump{(\lambda-\mu)\nabla\cdot{\bf u}}{\bf n}. \end{eqnarray} Equation \eqref{local_of_perid_natural} follows from \eqref{L_s_6} and \eqref{Ld_7}.
\end{proof} \begin{remark} Note that for ${\bf x}\in\Gamma$, with $\sigma$ given by \eqref{sigma}, we have \begin{eqnarray} \label{sigma_jump}
\jump{\sigma}{\bf n}=\jump{\lambda\nabla\cdot{\bf u}}{\bf n}+\jump{\mu(\nabla{\bf u}+\nabla{\bf u}^T)}{\bf n}. \end{eqnarray} Comparing \eqref{sigma_jump} and \eqref{local_of_perid_natural} we conclude that the local interface condition \eqref{traction_cond} is not recoverable from the nonlocal interface condition \eqref{natural}. \end{remark}
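To make the remark concrete, the following Python sketch (illustrative only; the values of $\mu_\pm$, $\lambda_\pm$, and $\nabla{\bf u}$ are arbitrary choices, not data from the paper) evaluates the right-hand side of \eqref{local_of_perid_natural} together with $\frac{45}{32}\jump{\sigma}{\bf n}$ for one set of interface data and shows that the two expressions disagree.

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])
mu_p, mu_m = 2.0, 1.0           # arbitrary jump in the shear modulus
lam_p, lam_m = 1.0, 1.0
Gp = np.eye(3)                  # grad u on the + side (arbitrary choice)
Gm = np.eye(3)                  # grad u on the - side

def jump(fp, fm):               # [[f]] = f(x+) - f(x-)
    return fp - fm

# right-hand side of the local limit of the natural interface condition
sym = lambda G: G + G.T
t1 = jump(2 * mu_p * sym(Gp), (mu_p + mu_m) * sym(Gm)) @ n
t2 = jump(2 * mu_p * np.trace(Gp), (mu_p + mu_m) * np.trace(Gm)) * n
t3 = (jump(2 * mu_p * Gp, (mu_p + mu_m) * Gm) @ n @ n) * n
t4 = 0.8 * jump((lam_p - mu_p) * np.trace(Gp), (lam_m - mu_m) * np.trace(Gm)) * n
natural = (45.0 / 32.0) * (t1 + t2 - t3 + t4)

sigma = lambda lam, mu, G: lam * np.trace(G) * np.eye(3) + mu * sym(G)
traction_jump = (45.0 / 32.0) * jump(sigma(lam_p, mu_p, Gp), sigma(lam_m, mu_m, Gm)) @ n

print(natural[2], traction_jump[2])   # 2.25 vs 2.8125 -> the conditions differ
```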
\begin{figure}
\caption{The extended interface $\Gamma_\delta$.}
\label{fig_gamma_delta}
\end{figure}
\subsection{A peridynamic interface model} \label{sec_perid_interface_model} Let $\Gamma_\delta$ be the set defined by \[
\Gamma_\delta=\{{\bf x}\in\Omega:|{\bf x}-\Gamma|<\delta\}, \]
where $|{\bf x}-\Gamma|$ denotes the distance between the point ${\bf x}$ and the interface. We refer to this three-dimensional set as the {\it extended interface}. An illustration of this set is shown in Figure \ref{fig_gamma_delta}.
The peridynamics material interface model, under conditions of equilibrium, is given by \begin{equation} \label{interface_model}
\left\{ \begin{array}{rll}
\displaystyle \mathcal{L}^\delta_*{\bf u}({\bf x}) &={\bf b}({\bf x}),\;\;\; &{\bf x}\in\Omega\\
\\
\displaystyle \mathcal{L}^\delta_*{\bf u}({\bf x}) &=0,\;\;\; &{\bf x}\in \Gamma_\delta
\end{array}, \right. \end{equation} where \begin{eqnarray} \label{L*} \mathcal{L}^\delta_*{\bf u}=\mathcal{L}^\delta{\bf u}+1_{\Gamma_\delta}\, \mathcal{L}^\delta_{\Gamma_\delta}{\bf u}. \end{eqnarray} Here $\mathcal{L}^\delta$ is given by \eqref{Ldel0}, \eqref{Ldel_s_2}, and \eqref{Ldel_d_3}, $1_{\Gamma_\delta}$ is the indicator function \begin{equation*}
1_{\Gamma_\delta}({\bf x})=\left\{ \begin{array}{ll}
\displaystyle 1,& {\bf x}\in\Gamma_\delta\\
\displaystyle 0,& {\bf x}\not\in\Gamma_\delta
\end{array}, \right. \end{equation*} and the operator $\mathcal{L}^\delta_{\Gamma_\delta}$ is defined by \begin{eqnarray} \label{L_Gamma} \nonumber
\mathcal{L}^\delta_{\Gamma_\delta}{\bf u}({\bf x})&=& -\frac{15}{|B_\delta|}\int_{B_\delta({\bf x})}
\mu({\bf x}) \frac{({\bf y}-{\bf x})
\otimes({\bf y}-{\bf x})}{|{\bf y}-{\bf x}|^4}\big( {\bf u}({\bf y})-{\bf u}({\bf x})\big)
\,d{\bf y},\\
\nonumber
&& +\frac{1}{4}\frac{9}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\left(\lambda({\bf y})-\mu({\bf y})\right) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\otimes\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf u}({\bf z})
\,d{\bf z} d{\bf y}\\
& & +\frac{5}{4} \frac{9}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\cdot\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf u}({\bf z})
\,d{\bf z} d{\bf y} \cdot {\bf n}({\bf x})\;\;{\bf n}({\bf x}). \end{eqnarray}
Similarly, the general peridynamics material interface model is given by
\begin{equation} \label{interface_model_dynamics}
\left\{ \begin{array}{rll}
\displaystyle \rho({\bf x}) \ddot{{\bf u}}({\bf x},t) &= \mathcal{L}^\delta_* {\bf u}({\bf x}) + {\bf b}({\bf x},t),
\;\;\; &{\bf x}\in\Omega\\
\\
\displaystyle \mathcal{L}^\delta_*{\bf u}({\bf x}) &=0,\;\;\; &{\bf x}\in \Gamma_\delta
\end{array}. \right. \end{equation}
The convergence of the peridynamics interface model \eqref{interface_model} to the local interface model \eqref{interface_pde} is given by the next result. \begin{thm} \label{thm_interface_model}
Assume that $\mu$ and $\lambda$ are given by \eqref{lambda} and that the vector field ${\bf u}$ is continuous on $\Omega$ and smooth on $\Omega\setminus\Gamma$. If \begin{equation} \label{nonlocal_inteface_cond} \mathcal{L}^\delta_*{\bf u}({\bf x}) =0,\;\;\; \mbox{for all } \;{\bf x}\in \Gamma_\delta, \end{equation} then, as $\delta \rightarrow 0$, \begin{enumerate} \item \begin{equation} \label{lim_Ldel*} \mathcal{L}^\delta_* {\bf u}\longrightarrow \mathcal{N}{\bf u},\;\;\;\; \mbox{ in } L^p(\Omega)^3,\;\;\;\mbox{ for } 1\leq p<\infty, \mbox{ and } \end{equation}
\item
\begin{equation} \label{jump_cond_1} \jump{\sigma}{\bf n}=0, \;\;\;\mbox{ for } {\bf x}\in\Gamma. \end{equation} \end{enumerate}
\end{thm} \begin{remark} \begin{itemize}
\item The first part of Theorem \ref{thm_interface_model} shows that imposing the nonlocal interface condition \eqref{nonlocal_inteface_cond} implies that the Navier operator $\mathcal{N}$ is the local limit of the operator $\mathcal{L}^\delta_*$ and, consequently, the model \eqref{interface_model} satisfies C(i). \item The second part of Theorem \ref{thm_interface_model} shows that the local interface condition \eqref{jump_cond_1} can be recovered from the local limit of the nonlocal interface condition \eqref{nonlocal_inteface_cond} and, consequently, the model \eqref{interface_model} satisfies C(ii). \item The peridynamics interface model \eqref{interface_model} satisfies C(iii). \end{itemize}
\end{remark}
\begin{proof} \noindent {\bf Part (1)}. We show that \eqref{nonlocal_inteface_cond} implies \eqref{lim_Ldel*}.
Let ${\bf x}\in\Omega$ be at a distance of at least $2\delta$ from $\partial \Omega$. Then, for ${\bf x}\not\in\Gamma_\delta$, \eqref{L*} gives \begin{equation} \label{L*isL} \mathcal{L}^\delta_*{\bf u}({\bf x})=\mathcal{L}^\delta{\bf u}({\bf x}). \end{equation} From \eqref{L*isL} and Theorem \ref{thm_convg}, one finds \begin{equation} \label{limL*1} \lim_{\delta\rightarrow 0}\mathcal{L}^\delta_*{\bf u}({\bf x})=(\mathcal{N}{\bf u})({\bf x}). \end{equation} On the other hand, for ${\bf x}\in\Gamma_\delta$, the assumption \eqref{nonlocal_inteface_cond} yields \begin{equation} \label{limL*2} \lim_{\delta\rightarrow 0} \mathcal{L}^\delta_*{\bf u}({\bf x})=0. \end{equation} Since $\Gamma_\delta\rightarrow \Gamma$ as $\delta\rightarrow 0$ and
$|\Gamma|=0$, it follows from \eqref{limL*1} and \eqref{limL*2} that \begin{equation} \label{limL*3} \lim_{\delta\rightarrow 0} \mathcal{L}^\delta_*{\bf u}({\bf x})=(\mathcal{N}{\bf u})({\bf x}), \mbox{ for {\it almost every} } {\bf x}\in \Omega. \end{equation} Using \eqref{limL*3} and Lebesgue's dominated convergence theorem, \eqref{lim_Ldel*} follows.\\
\noindent {\bf Part (2)}. We show that \eqref{nonlocal_inteface_cond} implies \eqref{jump_cond_1}.
Let ${\bf x}\in\Gamma$. Then, by multiplying both sides of \eqref{nonlocal_inteface_cond} by $\delta$ and taking the limit, one obtains \begin{equation} \label{limdL*1} \lim_{\delta\rightarrow 0} \delta\mathcal{L}^\delta_*{\bf u}({\bf x})=0. \end{equation} Next, we show that \begin{eqnarray}
\label{limdL*2} \lim_{\delta\rightarrow 0} \delta\mathcal{L}^\delta_* {\bf u}({\bf x}) &=& \frac{45}{32} \bigg(\jump{\lambda\nabla\cdot{\bf u}}{\bf n}+\jump{\mu(\nabla{\bf u}+\nabla{\bf u}^T)}{\bf n}\bigg)\\ \label{limdL*3}
&=& \frac{45}{32} \jump{\sigma}{\bf n}. \end{eqnarray} Equation \eqref{jump_cond_1} follows from \eqref{limdL*1} and \eqref{limdL*3}. Thus, it remains to prove \eqref{limdL*2} to complete the proof.
From \eqref{L*} and since ${\bf x}\in\Gamma$, $\mathcal{L}^\delta_*{\bf u}({\bf x})$ can be written as \begin{eqnarray} \label{L*decompose} \nonumber \mathcal{L}^\delta_*{\bf u}({\bf x})&=& \mathcal{L}^\delta{\bf u}({\bf x})+\mathcal{L}^\delta_{\Gamma_\delta}{\bf u}({\bf x})\\ \nonumber &=& \left(\mathcal{L}^\delta_s{\bf u}({\bf x})+\mathcal{L}^\delta_d{\bf u}({\bf x})\right)+\left(\mathcal{L}^\delta_1{\bf u}({\bf x})+\frac{1}{4}\mathcal{L}^\delta_d{\bf u}({\bf x})+\mathcal{L}^\delta_2{\bf u}({\bf x})\right)\\ &=& \mathcal{L}^\delta_s{\bf u}({\bf x})+\frac{5}{4}\mathcal{L}^\delta_d{\bf u}({\bf x})+\mathcal{L}^\delta_1{\bf u}({\bf x})+\mathcal{L}^\delta_2{\bf u}({\bf x}), \end{eqnarray} where \begin{eqnarray} \label{L*decompose2}
\mathcal{L}^\delta_1{\bf u}({\bf x})&=& -\frac{15}{|B_\delta|}\int_{B_\delta({\bf x})}
\mu({\bf x}) \frac{({\bf y}-{\bf x})
\otimes({\bf y}-{\bf x})}{|{\bf y}-{\bf x}|^4}\big( {\bf u}({\bf y})-{\bf u}({\bf x})\big)
\,d{\bf y},\\ \nonumber\\ \label{L*decompose3} \mathcal{L}^\delta_2{\bf u}({\bf x})&=&
\frac{5}{4} \frac{9}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\cdot\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf u}({\bf z})
\,d{\bf z} d{\bf y} \cdot{\bf n}({\bf x})\;\;{\bf n}({\bf x}). \end{eqnarray} We note that the definition of $\mathcal{L}^\delta_1$ is similar to that of $\mathcal{L}^\delta_s$. Thus, and since $\mu$ is given by \eqref{lambda}, an argument similar to the derivation of \eqref{Ldel_s_5} in Lemma \ref{nonconvg_1} yields \begin{eqnarray} \label{Ldel_1_1}
(\mathcal{L}^\delta_1 {\bf u})({\bf x})&=& -\frac{15}{|B_\delta|}\int_{\Bp{{\bf 0}}}
\frac{{\bf z}\otimes{\bf z}\otimes{\bf z}}{|{\bf z}|^4} \,d{\bf z}\;\;\mu_+\left(\nabla{\bf u}({\bf x}^+)-\nabla{\bf u}({\bf x}^-)\right)+\BigO{1}. \end{eqnarray} From \eqref{Ldel_1_1} and \eqref{K_delta_grad_v}, one finds that \begin{eqnarray} \label{L_1_2}
\lim_{\delta\rightarrow 0}\delta(\mathcal{L}^\delta_1 {\bf u})({\bf x})&=& -\frac{45}{32} \;\mu_+\;\bigg(\jump{\nabla{\bf u}+\nabla{\bf u}^T}{\bf n}+ \jump{\nabla\cdot{\bf u}}{\bf n}
- \jump{\nabla{\bf u}}{\bf n}\cdot{\bf n}\;{\bf n}\bigg). \end{eqnarray} Combining \eqref{L_1_2}, \eqref{L_s_6}, and \eqref{Ld_7}, we obtain \begin{eqnarray} \label{L_sd1} \nonumber \lim_{\delta\rightarrow 0}\delta\left(\mathcal{L}^\delta_s+\frac{5}{4}\mathcal{L}^\delta_d+\mathcal{L}^\delta_1\right)({\bf u})({\bf x})&=& \frac{45}{32} \bigg(\jump{\lambda\nabla\cdot{\bf u}}{\bf n}+\jump{\mu(\nabla{\bf u}+\nabla{\bf u}^T)}{\bf n} - \jump{\mu\nabla{\bf u}}{\bf n}\cdot{\bf n}\;{\bf n}\bigg).\\ \end{eqnarray} Equation \eqref{limdL*2} follows from \eqref{L_sd1}, \eqref{L*decompose3}, \eqref{L*decompose}, and Lemma \ref{nnn_lemma}, completing the proof of Part (2).
\end{proof}
\begin{lem} \label{nnn_lemma} Let ${\bf x}$ be a point on the interface $\Gamma$ and ${\bf n}={\bf n}({\bf x})$ be the unit normal to the interface. Assume that the vector field ${\bf v}$ is smooth on $\Omega\setminus\Gamma$. Then \begin{eqnarray} \label{nnn} \lim_{\delta\rightarrow 0}\delta(\mathcal{L}^\delta_2 {\bf v})({\bf x})&=& \frac{45}{32}\jump{\mu\nabla{\bf v}}{\bf n}\cdot{\bf n}\;{\bf n}. \end{eqnarray} \end{lem}
\begin{proof} It is sufficient to show that \begin{eqnarray} \label{n_1}
\lim_{\delta\rightarrow 0}\frac{3\delta}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\cdot\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf v}({\bf z})
\,d{\bf z} d{\bf y}&=& \frac{3}{8}\jump{\mu\nabla{\bf v}}{\bf n}. \end{eqnarray} The Taylor expansion of ${\bf v}$ about ${\bf z}={\bf y}$ is given by \begin{equation} \label{L2_taylor1} {\bf v}({\bf z})={\bf v}({\bf y})+\nabla{\bf v}({\bf y}) \;({\bf z}-{\bf y})+{\bf r}({\bf v};{\bf z},{\bf y}), \end{equation} where \begin{equation} \label{L2_remainder1} {\bf r}({\bf v};{\bf z},{\bf y})=\frac{1}{2} \nabla\nabla{\bf v}(\boldsymbol\xi)\, \left(({\bf z}-{\bf y})\otimes({\bf z}-{\bf y})\right) \end{equation} for some $\boldsymbol\xi$ on the line segment joining ${\bf z}$ and ${\bf y}$. Using \eqref{L2_taylor1}, the integral on the left hand side of \eqref{n_1} becomes \begin{eqnarray} \label{n_2} \nonumber
\frac{3\delta}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\cdot\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf v}({\bf z})
\,d{\bf z} d{\bf y}&&\\
\nonumber \\
\nonumber
&&\hspace*{-3cm}= \frac{3\delta}{|B_\delta|^2}
\int_{B_\delta({\bf x})}\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}\cdot\int_{B_\delta({\bf y})}
\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,d{\bf z} \;{\bf v}({\bf y})
d{\bf y}\\
\nonumber \\
\nonumber
&&\hspace*{-3cm}+ \frac{3\delta}{|B_\delta|^2}
\int_{B_\delta({\bf x})}\mu({\bf y}) \nabla{\bf v}({\bf y})\int_{B_\delta({\bf y})}
\frac{({\bf z}-{\bf y})\otimes({\bf z}-{\bf y})}{|{\bf z}-{\bf y}|^2}\,d{\bf z} \;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}\\
\nonumber \\
\nonumber
&&\hspace*{-3cm}+ \frac{3\delta}{|B_\delta|^2}
\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}\cdot
\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf r}({\bf v};{\bf z},{\bf y})\,d{\bf z}
d{\bf y}.\\ \end{eqnarray} We note that by using \eqref{sym}, the first term on the right hand side of \eqref{n_2} is equal to zero and it is straightforward to show that the third term on the right hand side of \eqref{n_2} is $\BigO{\delta}$. Thus, by using \eqref{identity_identity} in the second term, equation \eqref{n_2} becomes
\begin{eqnarray} \label{n_3} \nonumber
\hspace*{-.5cm}\frac{3\delta}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\cdot\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf v}({\bf z})
\,d{\bf z} d{\bf y}
&=& \frac{\delta}{|B_\delta|}
\int_{B_\delta({\bf x})}\mu({\bf y}) \nabla{\bf v}({\bf y})\;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}
+ \BigO{\delta}\\ \nonumber
&=& \frac{\delta}{|B_\delta|}
\int_{\Bp{{\bf x}}}\mu({\bf y}) \nabla{\bf v}({\bf y})\;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}\\
\nonumber
&&+\frac{\delta}{|B_\delta|}
\int_{\Bm{{\bf x}}}\mu({\bf y}) \nabla{\bf v}({\bf y})\;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y} + \BigO{\delta}.\\ \end{eqnarray} Since ${\bf v}$ is smooth on each side of $\Gamma$, then for ${\bf y}$ on the $+$side of $\Gamma$ (i.e., ${\bf y}\in\Bp{{\bf x}}$), $\nabla{\bf v}$ can be expanded as \begin{eqnarray} \label{L2_taylor_1} \nabla{\bf v}({\bf y})=\nabla{\bf v}({\bf x})+{\bf R}_+(\nabla{\bf v};{\bf x},{\bf y}), \end{eqnarray} where \begin{eqnarray}
\label{L2_remainder_1} {\bf R}_+(\nabla{\bf v};{\bf x},{\bf y})&=&\nabla\nabla{\bf v}(\boldsymbol\xi_+)\;({\bf y}-{\bf x}) \end{eqnarray} for some $\boldsymbol\xi_+$ on the line segment joining ${\bf x}$ and ${\bf y}$. Similarly, $\nabla{\bf v}$ can be expanded on the $-$side of $\Gamma$. For ${\bf y}\in\Bm{{\bf x}}$, \begin{eqnarray} \label{L2_taylor_2} \nabla{\bf v}({\bf y})=\nabla{\bf v}({\bf x})+{\bf R}_-(\nabla{\bf v};{\bf x},{\bf y}), \end{eqnarray} where \begin{eqnarray}
\label{L2_remainder_2} {\bf R}_-(\nabla{\bf v};{\bf x},{\bf y})&=&\nabla\nabla{\bf v}(\boldsymbol\xi_-)\;({\bf y}-{\bf x}) \end{eqnarray} for some $\boldsymbol\xi_-$ on the line segment joining ${\bf x}$ and ${\bf y}$. By substituting \eqref{L2_taylor_1} and \eqref{L2_taylor_2} on the right hand side of \eqref{n_3} and expanding the integrals, we find \begin{eqnarray} \label{n_4} \nonumber
\frac{3\delta}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\cdot\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf v}({\bf z})
\,d{\bf z} d{\bf y}
&=&\frac{\delta}{|B_\delta|}
\int_{\Bp{{\bf x}}}\mu_+ \nabla{\bf v}({\bf x}^+)\;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}\\
\nonumber
&&+\frac{\delta}{|B_\delta|}
\int_{\Bm{{\bf x}}}\mu_- \nabla{\bf v}({\bf x}^-)\;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}\\
\nonumber
&&+\frac{\delta}{|B_\delta|}
\int_{\Bp{{\bf x}}}\mu_+ {\bf R}_+(\nabla{\bf v};{\bf x},{\bf y})\;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}\\
\nonumber
&& +\frac{\delta}{|B_\delta|}
\int_{\Bm{{\bf x}}}\mu_- {\bf R}_-(\nabla{\bf v};{\bf x},{\bf y})\;\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}\\
\nonumber
&&+\BigO{\delta}.\\ \end{eqnarray} Using \eqref{L2_remainder_1} and \eqref{L2_remainder_2} one can easily show that the third and the fourth terms on the right hand side of \eqref{n_4} are $\BigO{\delta}$, and using \eqref{sym} for the first and second terms on the right hand side of \eqref{n_4}, we find \begin{eqnarray} \label{n_5} \nonumber
\frac{3\delta}{|B_\delta|^2}\int_{B_\delta({\bf x})}\int_{B_\delta({\bf y})}
\mu({\bf y}) \frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
\cdot\frac{{\bf z}-{\bf y}}{|{\bf z}-{\bf y}|^2}\,{\bf v}({\bf z})
\,d{\bf z} d{\bf y} &&
\\
\nonumber \\ \nonumber
&&\hspace*{-3cm} =
(\mu_+ \nabla{\bf v}({\bf x}^+)-\mu_- \nabla{\bf v}({\bf x}^-))\;\frac{\delta}{|B_\delta|}\;\int_{\Bp{{\bf x}}}\frac{{\bf y}-{\bf x}}{|{\bf y}-{\bf x}|^2}
d{\bf y}+\BigO{\delta}.\\ \end{eqnarray} Equation \eqref{n_1} follows from \eqref{n_5} and \eqref{Ld_6}, completing the proof.
\end{proof}
We conclude this section by providing a mechanical interpretation of \eqref{L*} and \eqref{L_Gamma}. Equation \eqref{limdL*3} in the proof of Theorem~\ref{thm_interface_model} provides an important relationship between the jump in the local traction across the interface and the nonlocal operator $\mathcal{L}^\delta_*$. This implies that, for points ${\bf x}\in\Gamma$, the expression $\frac{32}{45}\delta \mathcal{L}^\delta_*{\bf u}({\bf x})$ represents the nonlocal analogue of $\jump{\sigma}{\bf n}$. Therefore, we can interpret $\frac{32}{45}\delta \mathcal{L}^\delta_*{\bf u}({\bf x})$ as the jump in the nonlocal traction across the interface. Moreover, the operator $\mathcal{L}^\delta_{\Gamma_\delta}$, given by \eqref{L_Gamma}, can be interpreted as the missing term in peridynamics which modifies the jump in the nonlocal traction so that \eqref{limdL*3} holds true. Furthermore, Theorem~\ref{thm_interface_model} and \eqref{limdL*3} imply that the nonlocal interface condition \eqref{interface_model_sys_2} is the nonlocal analogue of the local interface condition \eqref{interface_pde0_3} and that the peridynamic interface model given by \eqref{interface_model_sys} is the nonlocal analogue of the local interface model given by \eqref{interface_pde0}.
\end{document}
We investigate the spectrum of the two-dimensional model for a thin plate with a sharp edge. The model yields an elliptic $3\times3$ Agmon–Douglis–Nirenberg system on a planar domain with coefficients degenerating at the boundary. We prove that in the case of a degeneration rate $\alpha<2$, the spectrum is discrete, but, for $\alpha\geq2$, there appears a nontrivial essential spectrum. A first result for the degenerating scalar fourth order plate equation is due to Mikhlin. We also study the positive definiteness of the quadratic energy form and the necessity to impose stable boundary conditions. These results differ from the ones that Mikhlin published.
\begin{definition}[Definition:Piecewise Continuous Function/Bounded]
Let $f$ be a real function defined on a closed interval $\closedint a b$.
$f$ is a '''bounded piecewise continuous function''' {{iff}}:
:there exists a finite subdivision $\set {x_0, x_1, \ldots, x_n}$ of $\closedint a b$, where $x_0 = a$ and $x_n = b$, such that:
::$(1): \quad$ for all $i \in \set {1, 2, \ldots, n}$, $f$ is continuous on $\openint {x_{i - 1} } {x_i}$
::$(2): \quad$ $f$ is bounded on $\closedint a b$.
\end{definition}
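As a concrete illustration (our own example, not part of the definition), the following Python snippet builds a bounded piecewise continuous function on $\closedint {-1} 1$ with subdivision $\set {-1, 0, 1}$: it is continuous on each open subinterval, has a jump discontinuity at $0$, and is bounded on the whole interval. By contrast, $x \mapsto 1/x$ on $\openint 0 1$ is continuous on the open subinterval but unbounded, hence piecewise continuous without being bounded.

```python
import math

def f(x):
    # continuous on (-1, 0) and on (0, 1), with a jump discontinuity at x = 0
    return math.sin(x) if x < 0 else 2.0 + x

subdivision = [-1.0, 0.0, 1.0]          # x_0 = a = -1, x_n = b = 1
samples = [-1 + 2 * k / 10_000 for k in range(10_001)]
print(max(abs(f(x)) for x in samples))  # 3.0, so f is bounded by 3 on [-1, 1]
```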
Munn semigroup
In mathematics, the Munn semigroup is the inverse semigroup of isomorphisms between principal ideals of a semilattice (a commutative semigroup of idempotents). Munn semigroups are named for the Scottish mathematician Walter Douglas Munn (1929–2008).[1]
Construction
Let $E$ be a semilattice.
1) For all e in E, we define Ee := {i ∈ E : i ≤ e}, which is a principal ideal of E.
2) For all e, f in E, we define Te,f as the set of isomorphisms of Ee onto Ef.
3) The Munn semigroup of the semilattice E is defined as: TE := $\bigcup$ { Te,f : (e, f) ∈ U }, where U := {(e, f) ∈ E × E : Ee is isomorphic to Ef}.
The semigroup's operation is composition of partial mappings. In fact, we can observe that TE ⊆ IE where IE is the symmetric inverse semigroup because all isomorphisms are partial one-one maps from subsets of E onto subsets of E.
The idempotents of the Munn semigroup are the identity maps 1Ee.
Theorem
For every semilattice $E$, the semilattice of idempotents of $T_{E}$ is isomorphic to E.
Example
Let $E=\{0,1,2,...\}$. Then $E$ is a semilattice under the usual ordering of the natural numbers ($0<1<2<...$). The principal ideals of $E$ are then $En=\{0,1,2,...,n\}$ for all $n$. So, the principal ideals $Em$ and $En$ are isomorphic if and only if $m=n$.
Thus $T_{n,n}$ = {$1_{En}$} where $1_{En}$ is the identity map from En to itself, and $T_{m,n}=\emptyset $ if $m\not =n$. The semigroup product of $1_{Em}$ and $1_{En}$ is $1_{E\operatorname {min} \{m,n\}}$. In this example, $T_{E}=\{1_{E0},1_{E1},1_{E2},\ldots \}\cong E.$
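For a finite semilattice this computation can be replicated mechanically. The following Python sketch (helper names are our own; for ideals of a finite meet-semilattice, order isomorphisms found by brute force coincide with semilattice isomorphisms) builds $T_E$ for the finite chain $0<1<2$:

```python
from itertools import permutations

def principal_ideal(E, leq, e):
    """Ee = {i in E : i <= e}, returned as a sorted tuple."""
    return tuple(sorted(i for i in E if leq(i, e)))

def order_isomorphisms(I, J, leq):
    """All order isomorphisms of the ideal I onto the ideal J,
    found by brute force over bijections (fine for small examples)."""
    if len(I) != len(J):
        return []
    isos = []
    for image in permutations(J):
        f = dict(zip(I, image))
        if all(leq(a, b) == leq(f[a], f[b]) for a in I for b in I):
            isos.append(frozenset(f.items()))
    return isos

def munn_semigroup(E, leq):
    """Elements of T_E, each represented as a frozenset of (input, output) pairs."""
    elems = set()
    for e in E:
        for f in E:
            Ie, If = principal_ideal(E, leq, e), principal_ideal(E, leq, f)
            elems.update(order_isomorphisms(Ie, If, leq))
    return elems

# Chain 0 < 1 < 2: ideals of different sizes are never isomorphic, and the
# only order automorphism of a finite chain is the identity, so T_E ≅ E.
E = [0, 1, 2]
TE = munn_semigroup(E, lambda a, b: a <= b)
```

Running this yields exactly the three identity maps $1_{E0}, 1_{E1}, 1_{E2}$, in agreement with the example above.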
References
1. O'Connor, John J.; Robertson, Edmund F., "Walter Douglas Munn", MacTutor History of Mathematics Archive, University of St Andrews
• Howie, John M. (1995), Introduction to semigroup theory, Oxford: Oxford science publication.
• Mitchell, James D. (2011), Munn semigroups of semilattices of size at most 7.
Functions of a complex variable, theory of
In the broad sense of the term, the theory of functions defined on some set of points $ z $ in the complex plane $ \mathbf C = \mathbf C ^ {1} $ (functions of a single complex variable) or on a set of points $ z = ( z _ {1} \dots z _ {n} ) $ of a complex Euclidean space $ \mathbf C ^ {n} $, $ n > 1 $ (functions of several complex variables). In the narrow sense of the term, the theory of functions of a complex variable is the theory of analytic functions (cf. Analytic function) of one or several complex variables.
As an independent discipline, the theory of functions of a complex variable took shape in about the middle of the 19th century as the theory of analytic functions. The fundamental work here was that of A.L. Cauchy, K. Weierstrass and B. Riemann, who approached the development of the theory from different points of view.
According to Weierstrass, a function $ w = f ( z) $ is called analytic (or holomorphic) in a domain $ D \subset \mathbf C $ if it can be expanded in a power series in a neighbourhood of each point $ z _ {0} \in D $:
$$ \tag{1 } w = f ( z) = \ \sum _ {k = 0 } ^ \infty c _ {k} ( z - z _ {0} ) ^ {k} ; $$
in the case of several complex variables, when $ D \subset \mathbf C ^ {n} $, $ n > 1 $, the series (1) is interpreted as a multiple power series. To define an analytic function it is even sufficient that the convergent series (1) be given in a neighbourhood of a single point $ z _ {0} $, for its values at another point $ z _ {1} $ and the corresponding series can be determined by the process of analytic continuation along various paths in the complex plane $ \mathbf C $ (or in $ \mathbf C ^ {n} $, $ n > 1 $) joining $ z _ {0} $ and $ z _ {1} $.
In the course of analytic continuation one may come across singular points (cf. Singular point), to which it is impossible to carry out analytic continuation along any path. These singular points determine the general behaviour of an analytic function in the sense that if two paths $ L _ {1} $ and $ L _ {2} $ joining the same fixed points $ z _ {0} $ and $ z _ {1} $ are not homotopic, that is, if it is impossible to deform $ L _ {2} $ continuously into $ L _ {1} $ without thereby passing through any singular point, then the values of the function $ f ( z _ {1} ) $ obtained by analytic continuation along $ L _ {1} $ and $ L _ {2} $ may turn out to be different. Consequently, the complete analytic function $ w = f ( z) $ obtained by analytic continuation of an initial element (1) along all possible paths may turn out to be multiple-valued in its natural domain of definition in $ \mathbf C $ (or in $ \mathbf C ^ {n} $, $ n > 1 $). Examples of this are the functions $ w = z ^ {1/2} $ or $ w = \mathop{\rm ln} z $. One can escape from this multiple-valuedness by forbidding analytic continuation along certain paths, by constructing so-called cuts in the complex plane, and by distinguishing single-valued branches of an analytic function (cf. Branch of an analytic function). But the most perfect method of converting a multiple-valued function into a single-valued one consists in regarding it not as a function of a point of the complex plane, but as a function of a point of a Riemann surface, consisting of several sheets that cover the complex plane, and joined to one another in a certain way. In the case of several variables, instead of a Riemann surface there arises a Riemannian domain, a multiple-sheeted cover of $ \mathbf C ^ {n} $, $ n > 1 $.
In his construction of the theory of analytic functions, Cauchy started from the concept of monogeneity. He called a function $ w = f ( z) $, $ z \in D \subset \mathbf C $, monogenic if it has a monodromic (that is, single-valued and continuous, except for poles) derivative everywhere in $ D $. Extending this concept somewhat, by a monogenic function $ w = f ( z) $ on a subset $ E \subset D $ one usually means a (single-valued) function for which there exists at all points $ z _ {0} \in E $ a derivative with respect to $ E $,
$$ \tag{2 } f _ {E} ^ { \prime } ( z _ {0} ) = \ \lim\limits _ {\begin{array}{c} z \rightarrow z _ {0} , \\ z \in E \end{array} } \ \frac{f ( z) - f ( z _ {0} ) }{z - z _ {0} } . $$
Monogeneity in the sense of Cauchy is the same as analyticity when $ E = D $. Cauchy developed the theory of integration of analytic functions, proved the important theorem on residues (cf. Residue of an analytic function), the Cauchy integral theorem, and introduced the concept of the Cauchy integral:
$$ \tag{3 } f ( z) = \ { \frac{1}{2 \pi i } } \int\limits _ \Gamma \frac{f ( \zeta ) d \zeta }{\zeta - z } , $$
which expresses the value of an analytic function $ f ( z) $ in terms of its values on any closed contour $ \Gamma $ surrounding $ z $ and not containing any singular points of $ f ( z) $ inside or on $ \Gamma $. As the simplest integral representation of analytic functions, the concept of the Cauchy integral can also be retained for functions of several variables.
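As a quick numerical sanity check (an illustration of ours, not part of the article), formula (3) can be verified by quadrature: for an entire function such as $\exp$, integrating over the unit circle reproduces the value at an interior point. The helper name below is hypothetical.

```python
import cmath

def cauchy_integral(f, z, radius=1.0, n=2000):
    """Approximate (1/(2*pi*i)) * integral of f(zeta)/(zeta - z) d(zeta)
    over the circle |zeta| = radius, using the trapezoid rule with the
    parametrization zeta = radius*exp(i*theta), d(zeta) = i*zeta d(theta).
    Requires |z| < radius and f analytic on and inside the contour."""
    h = 2 * cmath.pi / n
    total = 0j
    for k in range(n):
        zeta = radius * cmath.exp(1j * k * h)
        total += f(zeta) / (zeta - z) * 1j * zeta * h
    return total / (2j * cmath.pi)
```

For analytic integrands the trapezoid rule on a closed contour converges very rapidly, so a modest number of nodes already gives near machine precision.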
If one introduces complex variables $ z = x + iy $, $ \overline{z} = x - iy $, one can describe any function of two variables $ x $ and $ y $, $ w = f ( x, y) = u ( x, y) + iv ( x, y) $, as a function of $ z $ and $ \overline{z} $. The Cauchy-Riemann equations, which pick out those among such functions that are analytic, demand that the functions $ w = u ( x, y) + iv ( x, y) $ be differentiable with respect to both variables $ ( x, y) $, while everywhere in $ D $ the equation
$$ \tag{4 } \frac{\partial w }{\partial \overline{z} } = 0 $$
must hold, or, in full, $ u _ {x} = v _ {y} $, $ u _ {y} = - v _ {x} $.
The conditions (4) mean that the real and imaginary parts $ u ( x, y) $ and $ v ( x, y) $ of an analytic function must be conjugate harmonic functions. In the case of analytic functions of several complex variables, the conditions (4) must be satisfied with respect to all the variables $ \overline{z} _ \nu $, $ \nu = 1 \dots n $.
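Condition (4) can also be tested numerically via the Wirtinger derivative $\partial w/\partial \overline{z} = \tfrac12(\partial w/\partial x + i\,\partial w/\partial y)$. A finite-difference sketch (an illustration of ours, not from the article):

```python
def dbar(f, z, h=1e-6):
    """Central-difference approximation of the Wirtinger derivative
    df/d(conj z) = (df/dx + i*df/dy)/2; for a differentiable f it
    vanishes at z exactly when the Cauchy-Riemann equations hold there."""
    fx = (f(z + h) - f(z - h)) / (2 * h)          # partial w.r.t. x
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # partial w.r.t. y
    return 0.5 * (fx + 1j * fy)
```

For instance, $f(z) = z^2$ gives a value numerically zero, while $f(z) = \overline{z}$ gives $1$.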
For Riemann, the most important thing was the circumstance that an analytic function $ w = f ( z) $, as picked out by the conditions (4), effects, under certain conditions, a conformal mapping of $ D $ onto some other domain in the plane of the complex variable $ w $. The connection between analytic functions and conformal mappings opens a way to solving a number of problems in mathematical physics.
The subsequent development of the theory of functions of a complex variable has been and still is above all a deepening and extension of the theory of analytic functions (see, for example, Boundary value problems of analytic function theory; Boundary properties of analytic functions; Uniqueness properties of analytic functions; Integral representation of an analytic function; Meromorphic function; Multivalent function; Univalent function; Entire function). Problems, related to analytic functions, of approximation and interpolation of functions have an important significance. In these it turns out that in the theory of analytic functions of several variables the specific nature and difficulty of the problems are such that they only yield a solution when one invokes the most modern methods of algebra, topology and analysis.
The boundary properties of holomorphic functions, in particular of the integral of Cauchy type (see Cauchy integral) obtained from (3) when the values of $ f ( \zeta ) $ on the contour $ \Gamma $ are given totally arbitrarily, are of great theoretical and practical significance, as are multi-dimensional analogues of this and other integral representations.
Generalized analytic functions (cf. Generalized analytic function), which are important for applications, are obtained in their simplest form as solutions of an equation generalizing (4):
$$ \frac{\partial w }{\partial \overline{z}} + A ( z) w + B ( z) \overline{w} = F ( z). $$
Their main properties (in the case of a single variable) have been investigated in fair detail.
The study of quasi-conformal mappings (cf. Quasi-conformal mapping) is of great significance for the theory of analytic functions itself (in particular, for the theory of Riemann surfaces) and for its applications.
A theory of abstract analytic functions (cf. Abstract analytic function) with values in various vector spaces has also been developed.
This article was adapted from an original article by E.D. Solomentsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
\begin{document}
\title[Two-party Bell inequalities via triangular elimination]
{Two-party Bell inequalities derived from combinatorics via
triangular elimination}
\author{David Avis$\dag$, Hiroshi Imai$\ddag\S$,
Tsuyoshi Ito$\ddag$ and Yuuya Sasaki$\ddag$}
\address{\dag\ School of Computer Science, McGill University,
3480 University, Montreal, Quebec, Canada H3A 2A7}
\address{\ddag\ Department of Computer Science, University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan}
\address{\S\ ERATO Quantum Computation and Information Project,
5-28-3 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan}
\eads{\mailto{[email protected]},
\mailto{\symbol{`\{}imai,tsuyoshi,y\_sasaki\symbol{`\}}@is.s.u-tokyo.ac.jp}}
\begin{abstract}
We establish a relation between the two-party Bell
inequalities for two-valued measurements and a high-dimensional
convex polytope called the cut polytope in polyhedral combinatorics.
Using this relation, we propose a method, \emph{triangular
elimination}, to derive tight Bell inequalities from facets of the cut
polytope.
This method gives two hundred million inequivalent tight Bell
inequalities from
currently known results on the cut polytope.
In addition, this method gives general formulas which represent
families of infinitely many Bell inequalities.
These results can be used to examine general properties of Bell
inequalities. \end{abstract}
\pacs{03.65.Ud, 02.10.Ud}
\maketitle
\newcommand{\bm}{\bm}
\newcommand{{\mathbb{R}}}{{\mathbb{R}}} \newcommand{{\mathrm{A}}}{{\mathrm{A}}} \newcommand{{\mathrm{B}}}{{\mathrm{B}}} \newcommand{{\mathrm{X}}}{{\mathrm{X}}} \newcommand{{\mathrm{K}}}{{\mathrm{K}}} \newcommand{{\mathrm{T}}}{{\mathrm{T}}} \newcommand{\mathrm{CUT}^\square}{\mathrm{CUT}^\square} \newcommand{\mathrm{CUT}}{\mathrm{CUT}} \newcommand{\mathop{\mbox{Pr}}}{\mathop{\mbox{Pr}}}
\makeatletter \newcommand{\revddots}
{\mathinner{\mkern1mu\raise\p@
\vbox{\kern7\p@\hbox{.}}\mkern2mu
\raise4\p@\hbox{.}\mkern2mu\raise7\p@\hbox{.}\mkern1mu}} \makeatother
\newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{observation}[theorem]{Observation} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{fact}[theorem]{Fact} \newtheorem{claim}{Claim}[section]
\newtheorem{definition}{Definition}[section] \newtheorem{example}{Example}[section]
\newtheorem{remark}{Remark}[section]
\section{Introduction}
Bell inequalities have been intensively studied in quantum theory~\cite{WerWol-QIC01,KruWer-0504166}, and it is known that they can be obtained from the structure of certain convex polytopes~\cite{Pit-JMP86,Pit-MP91,Per:all99}. Bell inequalities are not the only example of the use of convex polytopes in quantum theory. In a pioneering paper, McRae and Davidson~\cite{McrDav-JMP72} used the theory of convex polytopes to obtain inequalities bounding the range of possible solutions to some problems in quantum mechanics. Their method is summarized as follows: First they prove that the possible solutions form a convex polytope and obtain the set of vertices of the polytope. Then they obtain a minimum set of inequalities that describe the polytope using a convex hull algorithm. Interestingly, one of the polytopes McRae and Davidson considered coincides with the \emph{correlation polytope} Pitowsky introduced in \cite{Pit-JMP86}, in connection with Bell inequalities. This polytope arises in many fields under different names, and a comprehensive source for results on this polytope and the related cut polytope (described later) is the book by Deza and Laurent~\cite{DezLau:cut97}.
In this paper we consider the results of correlation experiments between two parties, where one party has $m_{\mathrm{A}}$ choices of possible two-valued measurements and the other party has $m_{\mathrm{B}}$ choices. The relevant polytope can be described as follows. The results of a series of such correlation experiments are represented as a vector of $m_{\mathrm{A}}+m_{\mathrm{B}}+m_{\mathrm{A}} m_{\mathrm{B}}$ probabilities. In classical mechanics, the set of vectors which are possible results of a correlation experiment forms an $(m_{\mathrm{A}}+m_{\mathrm{B}}+m_{\mathrm{A}} m_{\mathrm{B}})$-dimensional convex polytope which is a projection of the correlation polytope onto the complete bipartite graph $K_{m_{\mathrm{A}} , m_{\mathrm{B}}}$. A Bell inequality is nothing but a linear inequality satisfied by all the points in such a polytope. A tight Bell inequality is a Bell inequality which cannot be represented as a positive weighted sum of other Bell inequalities and defines a facet of the polytope. Two examples of these facet defining inequalities are the nonnegativity inequality and the Clauser-Horne-Shimony-Holt (CHSH) inequality~\cite{ClaHorShiHol-PRL69}.
By considering these polytopes, Bell's original inequality~\cite{Bel-Phys64}, the CHSH and many other known Bell inequalities can be understood in a unified manner. Fine's necessary and sufficient conditions~\cite{Fin-PRL82} for $m_{\mathrm{A}}=m_{\mathrm{B}}=2$ can be seen as the complete inequality representation of the correlation polytope of the complete bipartite graph ${\mathrm{K}}_{2,2}$. Pitowsky and Svozil~\cite{PitSvo-PRA01} and Collins and Gisin~\cite{ColGis-JPA04} apply convex hull algorithms to obtain a complete list of tight Bell inequalities in other experimental settings. As a result, we know the complete list of Bell inequalities in the cases $m_{\mathrm{A}}=2$~\cite{ColGis-JPA04}, $(m_{\mathrm{A}},m_{\mathrm{B}})=(3,3)$~\cite{PitSvo-PRA01} and $(m_{\mathrm{A}},m_{\mathrm{B}})=(3,4)$~\cite{ColGis-JPA04}. Several software packages for convex hull computation such as cdd~\cite{Fuk:cdd} and lrs~\cite{Avi:lrs} are readily available. It is unlikely, however, that there exists a compact representation of the complete set of Bell inequalities in arbitrarily large settings. This follows from the fact that testing whether a vector of correlations lies in the correlation polytope of the bipartite graph $K_{m_{\mathrm{A}},m_{\mathrm{B}}}$ is NP-complete~\cite{AviImaItoSas:0404014}. Therefore it is natural to look for families of Bell inequalities, especially those that are facet producing. In this direction, Collins and Gisin~\cite{ColGis-JPA04} give a family $I_{mm22}$ of Bell inequalities in the case $m_{\mathrm{A}}=m_{\mathrm{B}}=m$ for general $m$. In addition there are several extensions~\cite{ColGisLinMasPop-PRL02,Mas-QIC03,ColGis-JPA04} of the CHSH inequality for multi-valued measurements.
In the field of polyhedral combinatorics a polytope isomorphic to the correlation polytope, called the \emph{cut polytope}, has been studied in great detail~\cite{DezLau:cut97}. The correlation and cut polytopes are isomorphic via a linear mapping~\cite{Ham-OR65} and so the inequalities representing them correspond one-to-one. This relationship enables us to apply results for the cut polytope to the study of Bell inequalities. Related to this, Pironio~\cite{Pir-JMP05} uses lifting, which is a common approach in combinatorial optimization, to generate tight Bell inequalities for a larger system from those for a smaller system. Since the mathematical description of the facet structure of cut polytopes is simpler than that for correlation polytopes, the former are preferred in polyhedral combinatorics. Large classes of facets for the cut polytope $\mathrm{CUT}^\square_n$ of the complete graph ${\mathrm{K}}_n$ are known for general $n$ \cite{DezLau:cut97}, and a complete or conjectured complete list of all facets for $\mathrm{CUT}^\square_n$ is known for $n\le9$~\cite{SMAPO}. We make use of these results in this paper.
The cut polytope of the complete graph has been the most extensively studied. However, the case we are interested in corresponds to the correlation polytope of the complete bipartite graph ${\mathrm{K}}_{m_{\mathrm{A}},m_{\mathrm{B}}}$, which maps to the cut polytope of the complete tripartite graph ${\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}}$. To overcome this gap, we introduce a method called triangular elimination to convert an inequality valid for $\mathrm{CUT}^\square_n$ to another inequality valid for $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$, which is then converted to a Bell inequality via the isomorphism. The CHSH inequality and some of the other previously known inequalities can be explained in this manner. More importantly, triangular elimination converts a facet inequality of $\mathrm{CUT}^\square_n$ to a facet inequality of $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$, which corresponds to a tight Bell inequality.
A complete list of facets of $\mathrm{CUT}^\square_n$ for $n \leq 7$ and a conjectured complete list for $n=8,9$ are known. We apply triangular elimination to these facets to obtain 201,374,783 tight Bell inequalities. On the other hand, several formulas which represent many different inequalities valid for $\mathrm{CUT}^\square_n$ are known. We apply triangular elimination to these formulas to obtain new families of Bell inequalities. We discuss their properties such as tightness and inclusion of the CHSH inequality.
The rest of this paper is organized as follows. In Section~\ref{sect:elimination}, we introduce triangular elimination to derive tight Bell inequalities from facets of the cut polytope of the complete graph, and show its properties. We also give a computational result on the number of Bell inequalities obtained by triangular elimination. In Section~\ref{sect:families} we apply triangular elimination to some of the known classes of facets of the cut polytope of the complete graph to obtain general formulas representing many Bell inequalities. Section~\ref{sect:concluding} concludes the paper by giving the relation of our result to some of the open problems posed in~\cite{KruWer-0504166}.
\section{Triangular elimination} \label{sect:elimination}
\subsection{Bell inequalities and facets of cut polytopes}
Consider a system composed of subsystems ${\mathrm{A}}$ (Alice) and ${\mathrm{B}}$ (Bob). Suppose that on both subsystems, one of $m_{\mathrm{A}}$ observables for Alice and one of $m_{\mathrm{B}}$ observables for Bob are measured. For each observable, the outcome is one of two values (in the rest of the paper, we label the outcomes as $0$ or $1$). The experiment is repeated a large number of times. The result of such a correlation experiment consists of the probability distribution of the $m_{\mathrm{A}} m_{\mathrm{B}}$ joint measurements by both parties. Throughout this paper, we represent the experimental result as a vector $\bm{q}$ in $m_{\mathrm{A}} + m_{\mathrm{B}} + m_{\mathrm{A}} m_{\mathrm{B}}$ dimensional space in the following manner: $q_{A_{i}}$, $q_{B_{j}}$ and $q_{A_{i}B_{j}}$ correspond to the probabilities $\mathop{\mbox{Pr}}[A_{i} = 1]$, $\mathop{\mbox{Pr}}[B_{j} = 1]$ and $\mathop{\mbox{Pr}}[A_{i} = 1 \wedge B_{j} = 1]$
respectively.
In classical mechanics, the result of a correlation experiment must correspond to a probability distribution over all \emph{classical configurations}, where a classical configuration is an assignment of the outcomes $\{0,1\}$ to each of the $m_{\mathrm{A}} + m_{\mathrm{B}}$ observables. The experimental result has a \emph{local hidden variable model} if and only if a given experimental result can be interpreted as a result of such a classical correlation experiment.
\emph{Bell inequalities} are valid linear inequalities for every experimental result which has a local hidden variable model. Specifically, using the above formulation, we represent a Bell inequality in the form \[ \sum_{ 1 \leq i \leq m_{\mathrm{A}}}b_{A_{i}}q_{A_{i}} + \sum_{ 1 \leq j \leq m_{\mathrm{B}}}b_{B_{j}}q_{B_{j}} + \sum_{ 1 \leq i \leq m_{\mathrm{A}}, 1 \leq j \leq m_{\mathrm{B}}}b_{A_{i}B_{j}}q_{A_{i}B_{j}} \leq b_{0} \] for suitably chosen constants $b_{x}$.
For example, Clauser, Horne, Shimony and Holt~\cite{ClaHorShiHol-PRL69} have shown that the following CHSH inequality is a valid Bell inequality: \[ -q_{A_{1}} -q_{B_{1}} + q_{A_{1}B_{1}} + q_{A_{1}B_{2}} + q_{A_{2}B_{1}} - q_{A_{2}B_{2}} \leq 0. \]
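Since the classical configurations are the extreme points of the polytope described below, validity of a Bell inequality can be confirmed by evaluating it on every deterministic assignment, where $q_{A_i} = a_i$, $q_{B_j} = b_j$ and $q_{A_iB_j} = a_i b_j$. A short brute-force sketch (an illustration of ours, not part of the paper):

```python
from itertools import product

def chsh_lhs(a1, a2, b1, b2):
    """Left-hand side of the CHSH inequality evaluated on a deterministic
    classical configuration (each observable fixed to outcome 0 or 1)."""
    return -a1 - b1 + a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2

# All 16 deterministic assignments of outcomes to A1, A2, B1, B2.
values = [chsh_lhs(*cfg) for cfg in product((0, 1), repeat=4)]
```

All 16 values lie in $\{-1, 0\}$, so the inequality holds on every extreme point and hence, by convexity, on every local hidden variable model; the value $0$ is attained, so the bound is sharp.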
In general, the set of all experimental results with a local hidden variable model forms a convex polytope with extreme points corresponding to the classical configurations. If the results of the experiment are in the above form, the polytope is called a \emph{correlation polytope}, a name introduced by Pitowsky~\cite{Pit:prob89}. (Such polyhedra have been discovered and rediscovered several times, see for instance Deza and Laurent~\cite{DezLau:cut97}.) From such a viewpoint, Bell inequalities can be considered as the boundary, or face inequalities, of that polytope. Since every polytope is the intersection of finitely many half spaces represented by linear inequalities, every Bell inequality can be represented by a convex combination of finitely many extremal inequalities. Such extremal inequalities are called \emph{tight} Bell inequalities. Non-extremal inequalities are called \emph{redundant}.
In polytopal theory, the maximal extremal faces of a polytope are called \emph{facets}. Therefore, tight Bell inequalities are facet inequalities of the polytope formed by experimental results with a local hidden variable model. Note that for a given linear inequality $\bm{b}^{{\mathrm{T}}} \bm{q} \leq b_{0}$ and $d$ dimensional polytope, the face represented by the inequality is a facet of that polytope if and only if the dimension of the convex hull of the extreme points for which the equality holds is $d-1$.
\subsubsection{Cut polytope of complete tripartite graph} We introduce a simple representation of an experimental setting as a graph. Consider a graph which consists of vertices corresponding to observables $A_{i}$ or $B_{j}$ and edges corresponding to joint measurements between $A_{i}$ and $B_{j}$. In addition, to represent probabilities which are the results of single (not joint) measurements, we introduce a vertex $X$ (which represents the trace out operation of the other party) and edges between $X$ and $A_{i}$ for every
$1 \leq i \leq m_{\mathrm{A}}$, and between $X$ and $B_{j}$ for every $1 \leq j \leq m_{\mathrm{B}}$. This graph is a complete
tripartite graph since there exist edges between every two of the three vertex classes (observables) $\{ X\}$,
$\{ A_{i}\}$ and $\{ B_{j}\}$. Using this graph, we can conveniently represent either the result probabilities or the coefficients of a Bell inequality as edge labels. We denote this graph by ${\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}}$.
In polyhedral combinatorics, a polytope affinely isomorphic to the correlation polytope has been well studied. Specifically, if we consider the probabilities $x_{A_{i}B_{j}} = \mathop{\mbox{Pr}}[A_{i} \neq B_{j}]$ instead of $q_{A_{i}B_{j}} = \mathop{\mbox{Pr}}[A_{i} = 1 \wedge B_{j} = 1]$ for each edge, the probabilities form a polytope called the \emph{cut polytope}. Thus, the cut polytope is another formulation of the polytope formed by Bell inequalities.
A \emph{cut} in a graph is an assignment of $\{0,1\}$ to each vertex, $1$ to an edge between vertices with different values assigned, and $0$ to an edge between vertices with the same values assigned. In the above formulation, each cut corresponds to a classical configuration. Note that since the $0,1$ exchange of all values of vertices yields the same edge cut, we can without loss of generality assume that the vertex $X$ is always assigned the label $0$.
Let the \emph{cut vector} $\bm{\delta}'(S') \in {\mathbb{R}}^{ \{XA_{i}\} \cup \{XB_{j}\} \cup \{A_{i}B_{j}\} }$ for some cut $S'$ be $\delta'_{uv}(S') = 1$ if vertices $u$ and $v$ are assigned different values, and $0$ if assigned the same values. Then, the convex combination of all the cut vectors $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}}) = \left\{ \bm{x} = \sum_{S'\colon\text{cut}} \lambda_{S'}\bm{\delta}'(S')\mid
\sum_{S'\colon\text{cut}} \lambda_{S'} = 1 \text{ and } \lambda_{S'} \geq 0 \right\}$ is called the cut polytope of the
complete tripartite graph. The cut polytope has full dimension. Therefore, $\dim(\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})) = m_{\mathrm{A}} + m_{\mathrm{B}} + m_{\mathrm{A}} m_{\mathrm{B}}$.
In this formulation, a tight Bell inequality $\bm{b}^{{\mathrm{T}}} \bm{q} \leq b_{0}$ corresponds to a facet inequality $\bm{a}^{\prime{\mathrm{T}}} \bm{x} \leq a_{0}$ of the cut polytope. The affine isomorphisms between them are: \begin{equation}
\fl
\left\{ \begin{array}{l}
x_{XA_{i}}=q_{A_{i}},\\
x_{XB_{j}}=q_{B_{j}},\\
x_{A_{i}B_{j}}= q_{A_{i}} + q_{B_{j}} - 2q_{A_{i}B_{j}},
\end{array} \right. \qquad \left\{
\begin{array}{l}
q_{A_{i}}=x_{XA_{i}},\\
q_{B_{j}}=x_{XB_{j}},\\
q_{A_{i}B_{j}}= \frac{1}{2}(x_{XA_{i}} + x_{XB_{j}} - x_{A_{i}B_{j}}).
\end{array} \right.
\label{eq:isomorphism} \end{equation}
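The two maps in \eqref{eq:isomorphism} are mutually inverse, which is easy to confirm mechanically. A sketch (helper names are our own; measurement indices are used as dictionary keys):

```python
def q_to_x(qA, qB, qAB):
    """Correlation-polytope coordinates -> cut-polytope coordinates."""
    x_AB = {(i, j): qA[i] + qB[j] - 2 * qAB[i, j] for (i, j) in qAB}
    return dict(qA), dict(qB), x_AB

def x_to_q(x_XA, x_XB, x_AB):
    """Cut-polytope coordinates -> correlation-polytope coordinates."""
    qAB = {(i, j): (x_XA[i] + x_XB[j] - x_AB[i, j]) / 2 for (i, j) in x_AB}
    return dict(x_XA), dict(x_XB), qAB
```

On a deterministic configuration, e.g. $a = (1, 0)$ and $b = (1, 1)$, the image coordinates satisfy $x_{A_iB_j} = a_i \oplus b_j$, as expected for a cut vector, and applying the second map recovers the original probabilities.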
Actually, because cut polytopes are symmetric under the switching operation (explained in Section~\ref{symmetry-of-cut-polytope}) we can assume that the right hand side of a facet inequality of the cut polytope is always $0$. This means that a given Bell inequality is tight if and only if for the corresponding facet inequality $\bm{a}^{{\mathrm{T}}} \bm{x} \leq 0$ of the cut polytope, there exist $m_{\mathrm{A}} + m_{\mathrm{B}} + m_{\mathrm{A}} m_{\mathrm{B}} - 1$ linearly independent cut vectors $\bm{\delta}'(S')$ for which $\bm{a}^{\prime{\mathrm{T}}} \bm{\delta}'(S') = 0$.
For example, there exists a facet inequality $-x_{A_{1}B_{1}} -x_{A_{1}B_{2}} -x_{A_{2}B_{1}} +x_{A_{2}B_{2}} \leq 0$ for $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}}, m_{\mathrm{B}}})$, $2 \leq m_{\mathrm{A}}, m_{\mathrm{B}}$, which corresponds to the CHSH inequality. Therefore, the CHSH inequality is tight in addition to being valid.
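This tightness criterion can be verified computationally: enumerate the cuts of ${\mathrm{K}}_{1,2,2}$ whose cut vectors satisfy the inequality with equality, and check that they have linear rank $m_{\mathrm{A}} + m_{\mathrm{B}} + m_{\mathrm{A}} m_{\mathrm{B}} - 1 = 7$. A brute-force sketch (an illustration of ours, not from the paper; the edge ordering and helper names are our own):

```python
from fractions import Fraction
from itertools import product

def rank(rows):
    """Rank of a matrix over the rationals by exact Gaussian elimination."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c] / m[r][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Edge order: XA1, XA2, XB1, XB2, A1B1, A1B2, A2B1, A2B2 (X is assigned 0).
def cut_vector(a1, a2, b1, b2):
    return [a1, a2, b1, b2, a1 ^ b1, a1 ^ b2, a2 ^ b1, a2 ^ b2]

# CHSH facet in cut-polytope form: -x_A1B1 - x_A1B2 - x_A2B1 + x_A2B2 <= 0.
coeff = [0, 0, 0, 0, -1, -1, -1, 1]
roots = [cut_vector(*s) for s in product((0, 1), repeat=4)
         if sum(c * v for c, v in zip(coeff, cut_vector(*s))) == 0]
```

Of the 16 cuts, 8 satisfy the equality, and their cut vectors have rank $7 = \dim(\mathrm{CUT}^\square({\mathrm{K}}_{1,2,2})) - 1$, confirming that the CHSH inequality defines a facet.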
A consequence of the above affine isomorphisms is that any theorem concerning facets of the cut polytope can be immediately translated to give a corresponding theorem for tight Bell inequalities. Recently, Collins and Gisin~\cite{ColGis-JPA04} gave the following conjecture about the tightness of Bell inequalities: if a Bell inequality $\bm{b}^{{\mathrm{T}}} \bm{q} \leq b_{0}$ is tight in a given setting $m_{\mathrm{A}}, m_{\mathrm{B}}$, then for each $ m^{\prime}_{A} \geq m_{\mathrm{A}}$ and $m^{\prime}_{B} \geq m_{\mathrm{B}}$, the inequality $\bm{b}^{\prime{\mathrm{T}}} \bm{q}^{\prime} \leq b_{0}$ is also tight. Here $\bm{b}^{\prime}$ is the vector $b_{uv}^{\prime} = b_{uv}$ if the vertices (observables) $u,v$ appear in $\bm{b}$ and is zero otherwise. They gave empirical evidence for this conjecture based on numerical experiments. In fact, a special case of the \emph{zero-lifting theorem} by De~Simone~\cite{Des-ORL90} gives a proof of their conjecture.
\subsection{Triangular elimination}
\subsubsection{Cut polytope of complete graph} In the previous section we saw that the problem of enumerating tight Bell inequalities is equivalent to that of enumerating facet inequalities of the cut polytope of a corresponding complete tripartite graph. The properties of facet inequalities of the cut polytope of the complete graph ${\mathrm{K}}_n$ are well studied and there are rich results. For example, several general classes of facet inequalities with relatively simple representations are known.
For $n \le 7$ the complete list of facets is known~\cite{Gri-EJC90}, and for $n=8,9$ a conjectured complete list is known~\cite{ChrRei-IJCGA01,SMAPO}. In addition, the symmetry of the polytope is also well-understood. We show how to apply such results to our complete tripartite graph case.
First, we introduce the cut polytope of the complete graph. The complete graph, denoted by ${\mathrm{K}}_n$, has $n$ vertices and an edge between each pair of vertices. As before, a cut is an assignment of $\{ 0,1\}$ to each vertex, and an edge is labeled by $1$ if the endpoints of the edge are labeled differently or $0$ if they are labeled the same. The cut vectors $\bm{\delta}(S)$ of the complete graph are defined in the same manner as before. The set of all convex combinations of cut vectors $\mathrm{CUT}^\square({\mathrm{K}}_{n}) = \left\{ \bm{x} = \sum_{S\colon\text{cut}} \lambda_{S} \bm{\delta}(S)\mid \sum_{S\colon\text{cut}} \lambda_{S} = 1 \text{ and } \lambda_{S} \geq 0\right\}$ is called the cut polytope of the complete graph. $\mathrm{CUT}^\square({\mathrm{K}}_n)$ is also written as $\mathrm{CUT}^\square_n$.
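To make the definition concrete, here is a minimal Python sketch (the function names \texttt{cut\_vector} and \texttt{all\_cut\_vectors} are our own, not from the text) that enumerates the cut vectors of ${\mathrm{K}}_n$. Since a cut $S$ and its complement induce the same vector, ${\mathrm{K}}_n$ has exactly $2^{n-1}$ distinct cut vectors.

```python
from itertools import combinations, product

def cut_vector(n, labels):
    # edge {u, v} of K_n gets 1 iff its endpoints are labelled differently
    return tuple(labels[u] ^ labels[v] for u, v in combinations(range(n), 2))

def all_cut_vectors(n):
    # a cut S and its complement induce the same cut vector,
    # so K_n has exactly 2**(n-1) distinct cut vectors
    return {cut_vector(n, labels) for labels in product((0, 1), repeat=n)}
```

The cut polytope $\mathrm{CUT}^\square_n$ is then the convex hull of these $2^{n-1}$ points in ${\mathbb{R}}^{\binom{n}{2}}$.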
In contrast to the complete tripartite graph, the space on which the cut polytope of the complete graph exists has elements corresponding to probabilities of joint measurement by the same party. Because of the restrictions of quantum mechanics, such joint measurements are prohibited. Therefore, if we want to generate tight Bell inequalities from the known facet inequalities of the cut polytope of the complete graph, we must transform the inequalities to eliminate joint measurement terms. In polyhedral terms, $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}}, m_{\mathrm{B}}})$ is a projection of $\mathrm{CUT}^\square({\mathrm{K}}_{n})$ onto a lower dimensional space.
\subsubsection{Definition of triangular elimination}
\begin{figure}
\caption{
The simplest case of triangular elimination:
The sum of two triangle inequalities is the CHSH inequality.}
\label{fig:chsh}
\end{figure}
A well known method for projecting a polytope is called Fourier-Motzkin elimination. This is essentially the summation of two facet inequalities to cancel out the target term. For example, it is well known that the triangle inequality $x_{uv} - x_{uw} - x_{wv} \leq 0$, for any three vertices $u,v,w$, is valid for the cut polytope of the complete graph. In fact, Bell's original inequality~\cite{Bel-Phys64} is essentially this inequality. The CHSH inequality $-x_{A_{1}B_{1}} -x_{A_{1}B_{2}} -x_{A_{2}B_{1}} +x_{A_{2}B_{2}} \leq 0$ is the sum of $x_{A_{1}A_{2}} - x_{A_{1}B_{1}} - x_{A_{2}B_{1}} \leq 0$ and $x_{A_{2}B_{2}} -x_{A_{1}B_{2}} - x_{A_{1}A_{2}} \leq 0$ (see \fref{fig:chsh}).
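The summation above can be checked mechanically. The following Python sketch (the vertex names and the dict encoding of an inequality are our own) forms the two triangle inequalities, sums them so that the $x_{A_{1}A_{2}}$ term cancels, and brute-forces validity of the resulting CHSH inequality over all cuts of the five vertices.

```python
from itertools import combinations, product

V = ["X", "A1", "A2", "B1", "B2"]
E = list(combinations(V, 2))

def ineq(coeffs):
    # coefficient vector over the edges of K_5; edge keys in either order
    return [coeffs.get(e, coeffs.get((e[1], e[0]), 0)) for e in E]

tri1 = ineq({("A1", "A2"): 1, ("A1", "B1"): -1, ("A2", "B1"): -1})
tri2 = ineq({("A2", "B2"): 1, ("A1", "B2"): -1, ("A1", "A2"): -1})
chsh = [a + b for a, b in zip(tri1, tri2)]  # the x_{A1A2} terms cancel

# validity of the resulting CHSH inequality: a^T delta(S) <= 0 for all cuts
for labels in product((0, 1), repeat=len(V)):
    lab = dict(zip(V, labels))
    delta = [lab[u] ^ lab[v] for u, v in E]
    assert sum(c * d for c, d in zip(chsh, delta)) <= 0
```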
In general, the result of Fourier-Motzkin elimination is not necessarily a facet. For example, it is known that the pentagonal inequality \begin{equation}
\fl
x_{XA_{1}} + x_{XA_{2}} - x_{XB_{1}} - x_{XB_{2}} + x_{A_{1}A_{2}} - x_{A_{1}B_{1}} - x_{A_{1}B_{2}} - x_{A_{2}B_{1}} - x_{A_{2}B_{2}} + x_{B_{1}B_{2}} \leq 0
\label{eq:pent} \end{equation} is a facet inequality of $\mathrm{CUT}^\square({\mathrm{K}}_{5})$. If we eliminate joint measurement terms $x_{A_{1}A_{2}}$ and $x_{B_{1}B_{2}}$ by adding triangle inequalities $x_{A_{1}B_{2}} - x_{A_{1}A_{2}} - x_{A_{2}B_{2}} \leq 0$ and $x_{A_{2}B_{1}} - x_{B_{1}B_{2}} - x_{A_{2}B_{2}} \leq 0$, the result is $x_{XA_{1}} + x_{XA_{2}} - x_{XB_{1}} - x_{XB_{2}} - x_{A_{1}B_{1}} - 3x_{A_{2}B_{2}} \leq 0$. Therefore, this inequality is a valid inequality for $\mathrm{CUT}^\square({\mathrm{K}}_{1,3,3})$. However, the inequality is a summation of four valid triangle inequalities for $\mathrm{CUT}^\square({\mathrm{K}}_{1,3,3})$, namely $x_{XA_{1}} - x_{XB_{1}} - x_{A_{1}B_{1}} \leq 0$, $x_{XA_{2}} - x_{XB_{2}} - x_{A_{2}B_{2}} \leq 0$, $x_{XA_{2}} - x_{XB_{2}} - x_{A_{2}B_{2}} \leq 0$ and $- x_{XA_{2}} + x_{XB_{2}} - x_{A_{2}B_{2}} \leq 0$. This means that the inequality with eliminated terms is redundant.
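The redundancy claim can likewise be verified by summing coefficient dictionaries. In this sketch (the dict encoding is ours; note that the second triangle inequality is used twice, matching the decomposition in the text), the four triangle inequalities add up exactly to the eliminated inequality.

```python
from collections import Counter

def add(*ineqs):
    # sum of left-hand sides, each represented as an {edge: coefficient} dict
    total = Counter()
    for q in ineqs:
        total.update(q)
    return {e: c for e, c in total.items() if c != 0}

t1 = {("X", "A1"): 1, ("X", "B1"): -1, ("A1", "B1"): -1}
t2 = {("X", "A2"): 1, ("X", "B2"): -1, ("A2", "B2"): -1}
t4 = {("X", "A2"): -1, ("X", "B2"): 1, ("A2", "B2"): -1}
eliminated = {("X", "A1"): 1, ("X", "A2"): 1, ("X", "B1"): -1,
              ("X", "B2"): -1, ("A1", "B1"): -1, ("A2", "B2"): -3}

# t2 appears twice, as in the decomposition given in the text
assert add(t1, t2, t2, t4) == eliminated
```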
Fourier-Motzkin elimination often produces large numbers of redundant inequalities, causing the algorithm to be computationally intractable when iterated many times. Therefore, it is important to find situations where the new inequalities found are guaranteed to be tight.
The difference between the two examples is that in the CHSH case, the second triangle inequality introduced a new vertex $B_{2}$, where ``new'' means that the first triangle inequality had no term with subscript labeled ${B_{2}}$. Generalizing this operation, we will show that Fourier-Motzkin elimination by triangle inequalities which introduce new vertices is almost always guaranteed to yield non-redundant inequalities. We call this operation \emph{triangular elimination}.
\begin{definition}[triangular elimination]
For a given valid inequality for $\mathrm{CUT}^\square({\mathrm{K}}_{1 + n_{\mathrm{A}} + n_{\mathrm{B}}})$ \begin{eqnarray} \sum_{ 1 \leq i \leq n_{\mathrm{A}}}a_{XA_{i}}x_{XA_{i}} + \sum_{ 1 \leq j \leq n_{\mathrm{B}}}a_{XB_{j}}x_{XB_{j}} + \sum_{ 1 \leq i \leq n_{\mathrm{A}}, 1 \leq j \leq n_{\mathrm{B}}}a_{A_{i}B_{j}}x_{A_{i}B_{j}} \nonumber\\ +\sum_{ 1 \leq i < i^{\prime} \leq n_{\mathrm{A}}}a_{A_{i}A_{i^{\prime}}}x_{A_{i}A_{i^{\prime}}} +\sum_{ 1 \leq j < j^{\prime} \leq n_{\mathrm{B}}}a_{B_{j}B_{j^{\prime}}}x_{B_{j}B_{j^{\prime}}}
\leq a_{0}, \label{eq:before-elimination} \end{eqnarray} the triangular elimination is defined as follows: \begin{eqnarray}
\sum_{ 1 \leq i \leq n_{\mathrm{A}}}a_{XA_{i}}x_{XA_{i}}
+ \sum_{ 1 \leq j \leq n_{\mathrm{B}}}a_{XB_{j}}x_{XB_{j}}
+ \sum_{ 1 \leq i \leq n_{\mathrm{A}}, 1 \leq j \leq n_{\mathrm{B}}}
a_{A_{i}B_{j}}x_{A_{i}B_{j}} \nonumber\\
+ \sum_{ 1 \leq i < i^{\prime} \leq n_{\mathrm{A}}}
( a_{A_{i}A_{i^{\prime}}}x_{A_{i}B'_{A_{i}A_{i^{\prime}}}}
- |a_{A_{i}A_{i^{\prime}}}| x_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}}
) \nonumber\\
+\sum_{ 1 \leq j < j^{\prime} \leq n_{\mathrm{B}}}
( a_{B_{j}B_{j^{\prime}}}x_{A'_{B_{j}B_{j^{\prime}}}B_{j}}
- |a_{B_{j}B_{j^{\prime}}}|x_{A'_{B_{j}B_{j^{\prime}}}B_{j^{\prime}}}
)
\leq a_{0}. \label{eq:after-elimination} \end{eqnarray} This is an inequality for $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$, where $m_{\mathrm{A}} = n_{\mathrm{A}} + \frac{n_{\mathrm{B}}(n_{\mathrm{B}}-1)}{2},m_{\mathrm{B}} = n_{\mathrm{B}} + \frac{n_{\mathrm{A}}(n_{\mathrm{A}}-1)}{2}$. We denote (\ref{eq:before-elimination}) by $\bm{a}^{{\mathrm{T}}}\bm{x} \leq 0$, $\bm{a},\bm{x} \in{\mathbb{R}}^{\frac{(n_{\mathrm{A}} + n_{\mathrm{B}})(n_{\mathrm{A}} + n_{\mathrm{B}} + 1)}{2}} $ and (\ref{eq:after-elimination}) by $\bm{a}^{\prime{\mathrm{T}}}\bm{x}^{\prime} \leq 0, \bm{a}^{\prime},\bm{x}^{\prime} \in{\mathbb{R}}^{m_{\mathrm{A}} + m_{\mathrm{B}} + m_{\mathrm{A}} m_{\mathrm{B}}}$, respectively. \end{definition}
Note that forbidden terms of the form $x_{A_{i}A_{i^{\prime}}}$ and $x_{B_{j}B_{j^{\prime}}}$ do not appear in (\ref{eq:after-elimination}).
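As an illustration of the definition, the following Python sketch performs triangular elimination on a coefficient dictionary. It assumes (our encoding, not the paper's) that edges are keyed by ordered pairs of vertex names whose first character identifies the party (\texttt{"X"}, \texttt{"A"} or \texttt{"B"}), and the primed-vertex naming such as \texttt{"B'\_A1A2"} is our shorthand for $B'_{A_{1}A_{2}}$.

```python
def triangular_elimination(a):
    # a: {(u, v): coeff} over the edges of K_{1+nA+nB}; vertex names
    # start with their party letter ("X", "A" or "B"), e.g. "A1", "B2"
    out = {}
    def acc(e, c):
        out[e] = out.get(e, 0) + c
    for (u, v), c in a.items():
        if u[0] == v[0] == "A":        # forbidden A-A term
            w = "B'_" + u + v          # new vertex on Bob's side
            acc((u, w), c)             # a_{AiAi'} x_{Ai B'}
            acc((v, w), -abs(c))       # -|a_{AiAi'}| x_{Ai' B'}
        elif u[0] == v[0] == "B":      # forbidden B-B term
            w = "A'_" + u + v          # new vertex on Alice's side
            acc((w, u), c)
            acc((w, v), -abs(c))
        else:                          # allowed term, kept unchanged
            acc((u, v), c)
    return out

# the pentagonal inequality (eq:pent) as input
pent = {("X", "A1"): 1, ("X", "A2"): 1, ("X", "B1"): -1, ("X", "B2"): -1,
        ("A1", "A2"): 1, ("A1", "B1"): -1, ("A1", "B2"): -1,
        ("A2", "B1"): -1, ("A2", "B2"): -1, ("B1", "B2"): 1}
out = triangular_elimination(pent)
```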
\begin{figure}
\caption{
The $I_{3322}$ inequality is generated by triangular
elimination from the pentagonal inequality of $\mathrm{CUT}^\square_5$.}
\label{fig:i3322}
\end{figure}
As an example, let us see how the $I_{3322}$ inequality is generated by triangular elimination (see \fref{fig:i3322}) of the pentagonal inequality (\ref{eq:pent}). This inequality has two terms $x_{{\mathrm{A}}_1{\mathrm{A}}_2}$ and $x_{{\mathrm{B}}_1{\mathrm{B}}_2}$ which correspond to joint measurements of two observables in one subsystem and are not allowed. Therefore, we eliminate these terms by adding two new nodes ${\mathrm{A}}'_{{\mathrm{B}}_1{\mathrm{B}}_2}$ and ${\mathrm{B}}'_{{\mathrm{A}}_1{\mathrm{A}}_2}$ and adding two triangle inequalities $-x_{{\mathrm{A}}_1{\mathrm{A}}_2}+x_{{\mathrm{A}}_1{\mathrm{B}}'_{{\mathrm{A}}_1{\mathrm{A}}_2}}-x_{{\mathrm{A}}_2{\mathrm{B}}'_{{\mathrm{A}}_1{\mathrm{A}}_2}}\le0$ and $-x_{{\mathrm{B}}_1{\mathrm{B}}_2}+x_{{\mathrm{A}}'_{{\mathrm{B}}_1{\mathrm{B}}_2}{\mathrm{B}}_1}-x_{{\mathrm{A}}'_{{\mathrm{B}}_1{\mathrm{B}}_2}{\mathrm{B}}_2}\le0$. If we rewrite the resulting inequality in terms of the vector $\bm{q}$ instead of the vector $\bm{x}$ by using the isomorphism~(\ref{eq:isomorphism}), this inequality becomes the $I_{3322}$ inequality. As we will see in the next subsection, this gives a proof of the tightness of the $I_{3322}$ inequality that does not require directly checking the dimension of the face computationally.
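The validity of the eliminated inequality over the enlarged vertex set can be brute-forced. In the sketch below (our encoding; \texttt{Ap} and \texttt{Bp} are shorthand for ${\mathrm{A}}'_{{\mathrm{B}}_1{\mathrm{B}}_2}$ and ${\mathrm{B}}'_{{\mathrm{A}}_1{\mathrm{A}}_2}$), the left-hand side is evaluated on all $2^7$ cuts of the seven vertices and never exceeds zero.

```python
from itertools import product

# "Ap" abbreviates A'_{B1B2} and "Bp" abbreviates B'_{A1A2}
V = ["X", "A1", "A2", "Ap", "B1", "B2", "Bp"]
coef = {("X", "A1"): 1, ("X", "A2"): 1, ("X", "B1"): -1, ("X", "B2"): -1,
        ("A1", "B1"): -1, ("A1", "B2"): -1, ("A2", "B1"): -1, ("A2", "B2"): -1,
        ("A1", "Bp"): 1, ("A2", "Bp"): -1,   # replaces +x_{A1A2}
        ("Ap", "B1"): 1, ("Ap", "B2"): -1}   # replaces +x_{B1B2}

def lhs(labels):
    # evaluate the left-hand side on the cut vector of this labelling
    lab = dict(zip(V, labels))
    return sum(c * (lab[u] ^ lab[v]) for (u, v), c in coef.items())

max_lhs = max(lhs(labels) for labels in product((0, 1), repeat=len(V)))
assert max_lhs == 0   # valid, and attained with equality by some cuts
```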
\subsection{Triangular elimination and facet} In this subsection, we show the main theorem of this paper: under a very mild condition, the triangular elimination of a facet is a facet.
\begin{theorem} \label{thrm:kn-k1mm}
The triangular elimination of a facet inequality
$\bm{a}^{{\mathrm{T}}}\bm{x}\le0$ of $\mathrm{CUT}^\square({\mathrm{K}}_{1+n_{\mathrm{A}}+n_{\mathrm{B}}})$ is
facet inducing for $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$ except for the cases
that the inequality $\bm{a}^{{\mathrm{T}}}\bm{x}\le0$ is a triangle
inequality labelled as either
$-x_{XA_1}-x_{XA_2}+x_{A_1A_2}\le0$ or
$-x_{A_1A_2}-x_{A_1A_3}+x_{A_2A_3}\le0$. \end{theorem}
For example, as we saw, the CHSH inequality is the triangular elimination of Bell's original inequality, which is a triangle inequality. The $I_{3322}$ inequality, found by Pitowsky and Svozil~\cite{PitSvo-PRA01} and Collins and Gisin~\cite{ColGis-JPA04}, is the triangular elimination of a pentagonal inequality.
\begin{proof}
Let $r_{F}$ be the set of cut vectors on the hyperplane $\bm{a}^{\prime{\mathrm{T}}} \bm{x}^{\prime} = 0$:
$r_{F} =\left\{ \bm{\delta}^{\prime}(S^{\prime}) \mid \bm{a}^{\prime{\mathrm{T}}} \bm{\delta}^{\prime}(S^{\prime}) = 0, S^{\prime}:\text{cut} \right\} $ for $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$. We prove the theorem by exhibiting a linearly independent subset of these cut vectors with cardinality $m_{\mathrm{A}} + m_{\mathrm{B}} + m_{\mathrm{A}} m_{\mathrm{B}} -1$.
In the following proof, we first consider the simple case $n_{\mathrm{B}}=1$, and treat the general case later. In addition, we assume that $a_{A_{i}A_{i^{\prime}}} \leq 0$ for all eliminated terms; for the other cases, the proof is similar.
By the above restriction, $m_{\mathrm{A}} + m_{\mathrm{B}} + m_{\mathrm{A}} m_{\mathrm{B}} -1 = (n_{\mathrm{A}}^3+3n_{\mathrm{A}})/2$.
A sketch of proof is as follows: first, we restrict $r_F$ and decompose the whole space of
$\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$ into two subspaces. For each subspace, we can pick a set of cut vectors which are linearly independent in that subspace. Next, we show that these sets of cut vectors are linearly independent in the whole space.
First, let the subset $r^{\prime}_{F}$ of $r_F$ be those cuts such that, for any $1 \leq i < i^{\prime} \leq n_{\mathrm{A}}$, the two vertices $A_{i^{\prime}}$ and $B'_{A_{i}A_{i^{\prime}}}$ are assigned the same value. Then, consider the intersection of the space spanned by $\bm{\delta}^{\prime}(S^{\prime}) \in r^{\prime}_{F}$ and the subspace \[W = \left\{ ( x_{XA_{i}}, x_{XB_{j}}, x_{A_{i}B_{j}}, x_{A_{i}B'_{A_{i}A_{i^{\prime}}}})^{\mathrm{T}}
_{1 \leq i < i^{\prime} \leq n_{\mathrm{A}}, 1 \leq j \leq n_{\mathrm{B}}}
\right\}. \] From the definition of $r^{\prime}_{F}$, $\delta^{\prime}_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}}(S^{\prime}) = 0$. Therefore, \begin{eqnarray*}
\bm{a}^{\prime{\mathrm{T}}} \bm{\delta}^{\prime}(S^{\prime})
= \sum_{1 \leq i \leq n_{\mathrm{A}}} a_{XA_{i}} \delta_{XA_{i}} (S^{\prime}) +
\sum_{1 \leq j \leq n_{\mathrm{B}}} a_{XB_{j}} \delta_{XB_{j}} (S^{\prime}) \\
+ \sum_{1 \leq i \leq n_{\mathrm{A}},1 \leq j \leq n_{\mathrm{B}}} a_{A_{i}B_{j}} \delta_{A_{i}B_{j}} (S^{\prime})
+ \sum_{1 \leq i < i^{\prime} \leq n_{\mathrm{A}}} a_{A_{i}A_{i^{\prime}}} \delta^{\prime}_{A_{i}B'_{A_{i}A_{i^{\prime}}}} (S^{\prime}) = 0. \end{eqnarray*} This means that the intersection of space spanned by $\bm{\delta}^{\prime}(S^{\prime}) \in r^{\prime}_{F}$ and $W$ is equivalent to the space spanned by the cut vectors $r_{f} = \left\{ \bm{\delta}(S) \mid \bm{a}^{\mathrm{T}} \bm{\delta} = 0, S:\text{cut} \right\} $ of $\mathrm{CUT}^\square({\mathrm{K}}_{1 + n_{\mathrm{A}} + n_{\mathrm{B}}})$. Therefore, from the assumption that the inequality $\bm{a}^{{\mathrm{T}}} \bm{x} \leq 0$ is facet supporting, we can pick $(n_{\mathrm{A}}^2 + 3n_{\mathrm{A}})/2$ linearly independent cut vectors and transform the cut vectors of $\mathrm{CUT}^\square({\mathrm{K}}_{1 + n_{\mathrm{A}} + n_{\mathrm{B}}})$ into corresponding cut vectors of $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$. Let this set of linearly independent cut vectors be $D_{0}$.
The remaining subspace of $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$ is \[V = \bigoplus_{i < i^{\prime}} V_{A_{i}A_{i^{\prime}}} = \bigoplus_{i < i^{\prime} } \left\{ \left( x_{XB'_{A_{i}A_{i^{\prime}}}}, x_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}},
x_{A_{i^{\prime\prime}}B'_{A_{i}A_{i^{\prime}}}} \right)_{i^{\prime\prime} \neq i, i^{\prime}}^{{\mathrm{T}}} \right\} \] for each eliminated term $A_{i}A_{i^{\prime}}, 1 \leq i < i^{\prime} \leq n_{\mathrm{A}}$.
Instead of $V$, we consider the space \[
\fl
V^{\prime} = \bigoplus_{i < i^{\prime}} V^{\prime}_{A_{i}A_{i^{\prime}}}
= \bigoplus_{i < i^{\prime}}
\left\{\left(
x_{XB'_{A_{i}A_{i^{\prime}}}}-x_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}},
x_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}},
x_{\alpha_{A_{i},A_{i^{\prime}},A_{i^{\prime\prime}} }}
\right)_{i^{\prime\prime} \neq i, i^{\prime}}^{{\mathrm{T}}} \right\} \] where \[
\fl
x_{\alpha_{A_{i},A_{i^{\prime}},A_{i^{\prime\prime}}}}
= \left\{ \begin{array}{ll}
\frac{1}{2} ( x_{A_{i^{\prime\prime}} B'_{A_{i}A_{i^{\prime}} } }
- x_{A_{i^{\prime}} B'_{A_{i^{\prime}}A_{i^{\prime\prime}} } }
- x_{A_{i^{\prime}} B'_{A_{i}A_{i^{\prime}} } }
+ 3x_{A_{i^{\prime\prime}} B'_{A_{i^{\prime}}A_{i^{\prime\prime}} } } )
& (i^{\prime} < i^{\prime\prime} ) \\
\frac{1}{2}(x_{A_{i^{\prime\prime}} B'_{A_{i}A_{i^{\prime}} } }
- x_{A_{i^{\prime\prime}} B'_{A_{i^{\prime\prime}}A_{i^{\prime}} } }
- x_{A_{i^{\prime}} B'_{A_{i}A_{i^{\prime}} } }
- x_{A_{i^{\prime}} B'_{A_{i^{\prime\prime}}A_{i^{\prime}} } } )
& (i^{\prime\prime} < i^{\prime} )
\end{array} \right. \] in the following. Since the transformation from $V$ to $V^{\prime}$ is linear, the linear independence of vectors in $V$ is equivalent to that in $V^{\prime}$.
Then, we consider the subset $r^{\prime\prime}_{F,A_{i}A_{i^{\prime}} }$ of $r_F$ for each $A_{i}A_{i^{\prime}}$ restricted as follows: $A_{i^{\prime}}$ must be assigned $0$ and both $B'_{A_{i}A_{i^{\prime}}}$ and $A_{i}$ must be assigned $1$. For other terms $A_{i^{\prime\prime\prime}}A_{i^{\prime\prime\prime\prime}}(1 \leq i^{\prime\prime\prime} < i^{\prime\prime\prime\prime} \leq n_{\mathrm{A}})$, vertices $A_{i^{\prime\prime\prime\prime}}$ and $B'_{A_{i^{\prime\prime\prime}}A_{i^{\prime\prime\prime\prime}}}$ must be assigned the same value. From that restriction, the equations \begin{eqnarray*} \delta^{\prime}_{XB'_{A_{i}A_{i^{\prime}}}}(S^{\prime\prime}) - \delta^{\prime}_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}}(S^{\prime\prime}) = -\delta_{XA_{i^{\prime}}}(S), \\ \delta^{\prime}_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}}(S^{\prime\prime}) = 1, \\ \delta^{\prime}_{\alpha_{A_{i}, A_{i^{\prime}}, A_{i^{\prime\prime}} }}(S^{\prime\prime}) = -\delta_{A_{i^{\prime\prime}}A_{i^{\prime}}}(S) \end{eqnarray*} hold for $\bm{\delta}^{\prime}(S^{\prime\prime}) \in r^{\prime\prime}_{F,A_{i}A_{i^{\prime}}} $. This means that the intersection of the space spanned by $\bm{\delta}^{\prime}(S^{\prime\prime})$ and the subspace $V^{\prime}_{A_{i}A_{i^{\prime}}}$ is equivalent to that of the space spanned by $\bm{\delta}(S) \in r_{f}$ and the subspace \[U_{A_{i}A_{i^{\prime}}}= \left\{ \left( x_{XA_{i^{\prime}}}, 1, x_{A_{i^{\prime\prime}}A_{i^{\prime}}} \right)_{i^{\prime\prime} \neq i, i^{\prime}}^{{\mathrm{T}}} \right\}. \]
Now, because $r_f$ is on the hyperplane $\bm{a}^{{\mathrm{T}}} \bm{x} = 0$, the above intersection has dimension $n_{\mathrm{A}}$ or $n_{\mathrm{A}}-1$. However, from the condition on the inequality $\bm{a}^{\mathrm{T}}\bm{x}\le0$, the space spanned by $r_f$ is not parallel to $U_{A_{i}A_{i^{\prime}}}$. Therefore, the dimension is $n_{\mathrm{A}}$ and we can extract $n_{\mathrm{A}}$ cut vectors which are linearly independent in the subspace $V^{\prime}$ using the cut vectors from $r_{f}$. Let this set of cut vectors be
$D_{A_{i}A_{i^{\prime}}}$.
Finally, we show that $D_0 \cup \bigcup_{1\le i<i'\le n_{\mathrm{A}}}D_{A_{i}A_{i^{\prime}}}$ is a linearly independent set of cut vectors. Suppose that the linear combination \[ \sum_{\bm{\delta}^{\prime{\mathrm{T}}}(S^{\prime}) \in D_{0} } \kappa_{S^{\prime}}\bm{\delta}^{\prime{\mathrm{T}}}(S^{\prime}) + \sum_{1 \leq i < i^{\prime} \leq n_{\mathrm{A}}} \sum_{\bm{\delta}^{\prime{\mathrm{T}}}(S^{\prime\prime}) \in D_{A_{i}A_{i^{\prime}}} } \lambda^{A_{i}A_{i^{\prime}}}_{S^{\prime\prime}}\bm{\delta}^{\prime{\mathrm{T}}}(S^{\prime\prime}) = 0\] holds. Consider the subspace $V^{\prime}_{A_{i}A_{i^{\prime}}}$ of the above linear combination. From the construction, for $D_0$ and $ D_{A_{i^{\prime\prime}}A_{i^{\prime\prime\prime}}} $, the elements of cut vectors in that subspace are all zero. Therefore, for the linear combination to hold, it must be that $ \sum_{\bm{\delta}^{\prime{\mathrm{T}}}(S^{\prime\prime}) \in D_{A_{i}A_{i^{\prime}}} } \lambda^{A_{i}A_{i^{\prime}}}_{S^{\prime\prime}}\bm{\delta}^{\prime{\mathrm{T}}}(S^{\prime\prime}) = 0$. However, the linear independence of $D_{A_{i}A_{i^{\prime}}}$ means that the coefficients are all zero. By repeating this argument, we can conclude that the coefficient $ \lambda^{A_{i}A_{i^{\prime}}}_{S^{\prime\prime}}$ must be zero. So, from the linear independence of $D_0$, the coefficients $ \kappa_{S^{\prime}}$ are also zero. This completes the proof for the case $n_{\mathrm{B}}=1$.
Now we describe the outline of the proof for the general case. The idea of the proof is to perform triangular elimination in two steps: eliminate the edges ${\mathrm{A}}_i{\mathrm{A}}_{i'}$ for $1\le i<i'\le n_{\mathrm{A}}$ in one step and then the edges ${\mathrm{B}}_j{\mathrm{B}}_{j'}$ for $1\le j<j'\le n_{\mathrm{B}}$ in the other. To do this, we need the notion of the cut polytope $\mathrm{CUT}^\square(G)\subseteq{\mathbb{R}}^E$ of a general graph $G=(V,E)$, which is obtained from the cut polytope of the complete graph on node set $V$ by removing the coordinates corresponding to the edges missing in $E$. In particular, we consider the cut polytopes of the following two intermediate graphs: the graph $G_1(n_{\mathrm{A}},n_{\mathrm{B}})$ obtained from ${\mathrm{K}}_{1,n_{\mathrm{A}},m_{\mathrm{B}}}$ by adding edges ${\mathrm{B}}_j{\mathrm{B}}_{j'}$ and ${\mathrm{B}}_j{\mathrm{B}}'_{{\mathrm{A}}_i{\mathrm{A}}_{i'}}$, and the graph $G_2(n_{\mathrm{A}},n_{\mathrm{B}})$ obtained from ${\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}}$ by adding edges ${\mathrm{B}}_j{\mathrm{B}}'_{{\mathrm{A}}_i{\mathrm{A}}_{i'}}$.
The next lemma is a basic fact from polytope theory (see Lemma~26.5.2~(ii) in~\cite{DezLau:cut97}; though the statement there restricts $G$ to be a complete graph, that restriction is not necessary).
\begin{lemma} \label{lemm:zero-proj}
Let $G$ be a graph and $G'$ be a subgraph of $G$.
If $\bm{a}^{\mathrm{T}}\bm{x}\le0$ is facet inducing for $\mathrm{CUT}^\square(G)$ and
$a_e=0$ for all edges $e$ belonging to $G$ but not to $G'$, then
$\bm{a}^{\mathrm{T}}\bm{x}\le0$ is facet inducing also for $\mathrm{CUT}^\square(G')$. \end{lemma}
The inequality after the first step of triangular elimination is as follows: \begin{eqnarray}
\fl
\sum_{ 1 \leq i \leq n_{\mathrm{A}}}a_{XA_{i}}x_{XA_{i}}
+ \sum_{ 1 \leq j \leq n_{\mathrm{B}}}a_{XB_{j}}x_{XB_{j}}
+ \sum_{ 1 \leq i \leq n_{\mathrm{A}}, 1 \leq j \leq n_{\mathrm{B}}}
a_{A_{i}B_{j}}x_{A_{i}B_{j}} \nonumber\\
\fl
+ \sum_{ 1 \leq i < i^{\prime} \leq n_{\mathrm{A}}}
( a_{A_{i}A_{i^{\prime}}}x_{A_{i}B'_{A_{i}A_{i^{\prime}}}}
- |a_{A_{i}A_{i^{\prime}}}| x_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}}
)
+\sum_{ 1 \leq j < j^{\prime} \leq n_{\mathrm{B}}}
a_{B_{j}B_{j^{\prime}}}x_{B_{j}B_{j^{\prime}}}
\leq a_0. \label{eq:elimination-alice} \end{eqnarray}
For the case $n_{\mathrm{B}}=1$, the inequality (\ref{eq:elimination-alice}) is exactly the same as (\ref{eq:after-elimination}). We proved above for the case $n_{\mathrm{B}}=1$ that the inequality (\ref{eq:elimination-alice}) is facet inducing for $\mathrm{CUT}^\square({\mathrm{K}}_{1,n_{\mathrm{A}},m_{\mathrm{B}}})$. Except for when the original inequality is a triangle inequality, we can extend this argument to prove that the inequality (\ref{eq:elimination-alice}) is facet inducing also for $\mathrm{CUT}^\square(G_1(n_{\mathrm{A}},n_{\mathrm{B}}))$. This can be generalized for the case $n_{\mathrm{B}}>1$: the inequality (\ref{eq:elimination-alice}) is facet inducing for $\mathrm{CUT}^\square(G_1(n_{\mathrm{A}},n_{\mathrm{B}}))$. Then we can repeat a similar argument to prove the final inequality (\ref{eq:after-elimination}) is facet inducing for $\mathrm{CUT}^\square(G_2(n_{\mathrm{A}},n_{\mathrm{B}}))$. Since $G_2(n_{\mathrm{A}},n_{\mathrm{B}})$ is a supergraph of the desired graph ${\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}}$, the inequality (\ref{eq:after-elimination}) is facet inducing also for $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$ from Lemma~\ref{lemm:zero-proj}. \end{proof}
\subsection{Triangular elimination and symmetry}\label{symmetry-of-cut-polytope}
Many Bell inequalities are equivalent to each other due to the arbitrariness in the labelling of the party, observable and value identifiers. This corresponds to symmetries of the underlying polytope. We consider ways of representing nonequivalent Bell inequalities in this section.
The nonequivalence of Bell inequalities can be translated into two questions about facet inequalities $f$ and $f^{\prime}$ of a given cut polytope of a complete graph, and their triangular eliminations $F$ and $F^{\prime}$, respectively: \begin{enumerate}
\item does the equivalence of $f$ and $f^{\prime}$ imply the equivalence of $F$ and $F^{\prime}$?
\item does the equivalence of $F$ and $F^{\prime}$ imply the equivalence of $f$ and $f^{\prime}$? \end{enumerate} The answers are both affirmative if we define equivalence appropriately, so equivalence before triangular elimination is logically equivalent to equivalence after triangular elimination. This means that, for example, to enumerate the nonequivalent Bell inequalities, we need only enumerate the facet inequalities of the cut polytope of the complete graph up to symmetry by party, observable and value exchange.
In $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$, the relabelling of all vertices of Alice to that of Bob and vice versa corresponds to a party exchange. On the other hand, the local relabelling of some vertices of Alice (or Bob) corresponds to an observable exchange. Thus by the observable exchange of Alice represented by the permutation $\sigma$ over $\{A_{1}, \ldots, A_{m_{\mathrm{A}}}\}$, an inequality $\bm{a}^{{\mathrm{T}}} \bm{x} \leq a_{0}$ is transformed into $\bm{a}^{\prime{\mathrm{T}}} \bm{x} \leq a_{0}$ where $a^{\prime}_{\sigma(A_{i})V} = a_{A_{i}V}$ for any vertex $V$.
In addition, there is an operation which corresponds to a value exchange of some observables, called a \emph{switching} in the theory of cut polytopes. By the switching corresponding to the value exchange of one of Alice's observables $A_{i_{0}}$, an inequality $\bm{a}^{{\mathrm{T}}} \bm{x} \leq a_{0}$ is transformed into $\bm{a}^{\prime{\mathrm{T}}} \bm{x} \leq a_{0} - \sum_{V}a_{A_{i_{0}}V }$ where $a^{\prime}_{A_{i_{0}}V} = -a_{A_{i_{0}}V}$, and $a^{\prime}_{A_{i}V} =a_{A_{i}V}$ for any $i \neq i_{0}$ and any vertex $V \neq A_{i_{0}}$ (definitions for Bob's exchange are similar).
It is well known, and easily shown, that by repeated application of the switching operation we may reduce the right hand side of any facet inequality to zero.
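The switching operation defined above can be sketched in a few lines of Python (the representation of an inequality as a coefficient dictionary and the name \texttt{switch} are our own). Note that switching at the same vertex twice restores the original inequality.

```python
def switch(a, a0, v):
    # switching at vertex v: negate every coefficient on an edge incident
    # to v and lower the right-hand side by the sum of those coefficients
    shift = sum(c for e, c in a.items() if v in e)
    return {e: (-c if v in e else c) for e, c in a.items()}, a0 - shift

chsh = {("A1", "B1"): -1, ("A1", "B2"): -1, ("A2", "B1"): -1, ("A2", "B2"): 1}
switched, rhs = switch(chsh, 0, "A1")

# switching is an involution: applying it at the same vertex twice
# restores the original inequality
assert switch(switched, rhs, "A1") == (chsh, 0)
```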
Let $n_{\mathrm{A}}\le n_{\mathrm{B}}$ and $n=1+n_{\mathrm{A}}+n_{\mathrm{B}}$. Let $f$ and $f'$ be facets of $\mathrm{CUT}^\square_n$ where the $n$ nodes of ${\mathrm{K}}_n$ are labelled by $V=\{A_1,\dots,A_{n_{\mathrm{A}}},\allowbreak B_1,\dots,B_{n_{\mathrm{B}}},\allowbreak X\}$. The two facets $f$ and $f'$ are said to be \emph{equivalent} and denoted $f\sim f'$ if $f$ can be transformed to $f'$ by applying zero or more of the following operations: (1) (only applicable in the case $n_{\mathrm{A}}=n_{\mathrm{B}}$) swapping labels of nodes
$A_i$ and $B_i$ for all $1\le i\le n_{\mathrm{A}}$, (2) relabelling the nodes within $A_1,\dots,A_{n_{\mathrm{A}}}$, (3) relabelling the nodes within $B_1,\dots,B_{n_{\mathrm{B}}}$, and (4) switching. \footnote{ The two facets $f$ and $f'$ are said to be \emph{equivalent} and denoted $f\sim f'$ if $f$ can be transformed to $f'$ by permutation and switching where the permutation $\tau$ on $V$ satisfies: (1) $\tau(X)=X$ and (2) $\tau$ either fixes two sets $\{A_1,\dots,A_{n_{\mathrm{A}}}\}$ and $\{B_1,\dots,B_{n_{\mathrm{B}}}\}$ setwise or (in the case $n_{\mathrm{A}}=n_{\mathrm{B}}$) swaps these two sets. }
Two facets $F$ and $F'$ of $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$ are said to be \emph{equivalent} and denoted $F\sim F'$ if $F$ can be transformed to $F'$ by applying a permutation which fixes the node $X$, a switching, or both. This notion of equivalence of facets of $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$ corresponds to equivalence of tight Bell inequalities up to party, observable and value exchange.
\begin{theorem} \label{thrm:sw-perm-n-even} Let the triangular elimination of facet inequalities $f$ and $f^{\prime}$ be $F$ and $F^{\prime}$, respectively. Then, $f \sim f^{\prime}$ $\iff$ $F \sim F^{\prime}$. \end{theorem} \begin{proof} A sketch of the proof is as follows. Since the permutation and switching operations are commutative, it is sufficient to prove the proposition under each operation separately. Because the $\Rightarrow$ direction is straightforward for both permutation and switching, we concentrate on the proof of the $\Leftarrow$ direction.
First, consider switching. Suppose $F$ is obtained from a switching of $F^{\prime}$. The switching could involve either (i) a new observable introduced by the triangular elimination, or (ii) an observable which had a joint measurement term eliminated. Since a switching of type (i) has no effect on $f$ and $f^{\prime}$, we need only consider type (ii). We can view the triangular elimination of the term $A_{i}A_{i^{\prime}}$ as the addition of the triangle inequality $x_{A_{i}A_{i^{\prime}}} - x_{A_{i}B'_{A_{i}A_{i^{\prime}}}} - x_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}} \leq 0$ or its switching-equivalent inequality $- x_{A_{i}A_{i^{\prime}}} - x_{A_{i}B'_{A_{i}A_{i^{\prime}}}} + x_{A_{i^{\prime}}B'_{A_{i}A_{i^{\prime}}}} \leq 0$ according to the sign of the coefficient $a_{A_{i}A_{i^{\prime}}}$. Thus, if $F$ is a switching of $F^{\prime}$ at the vertices $A_{i}$ and $B'_{A_{i}A_{i^{\prime}}}$, then $f$ is a switching of $f^{\prime}$ at $A_{i}$.
Next, consider the permutation corresponding to an observable exchange. Observe that for any vertex $A_{i} (1 \leq i \leq n_{\mathrm{A}})$, triangular elimination does not change the number of terms $A_{i}V$ with non-zero coefficient. In addition, it can be shown that for any facet inequality $f$ of the cut polytope of the complete graph other than the triangle inequality, there is no vertex satisfying the following conditions: (a) there are exactly two terms $A_{i}V$,
with non-zero coefficients, and (b)
for those non-zero coefficients $a_{A_{i}W}$ and $a_{A_{i}U}$, $|a_{A_{i}W}| = |a_{A_{i}U}|$~\cite{AviImaItoSas:0404014}. This means that if $F \sim F^{\prime}$, then the corresponding permutation $\sigma$ is always in the following form: for permutations $\tau_{A}$ over $\{ A_{1}, \ldots, A_{n_{\mathrm{A}}}\}$ and $\tau_{B}$ over $\{ B_{1}, \ldots, B_{n_{\mathrm{B}}}\}$, $\sigma(A_{i}) = \tau_{A}(A_{i})$ and $\sigma(B'_{A_{i}A_{i^{\prime}}}) = B'_{\tau_{A}(A_{i})\tau_{A}(A_{i^{\prime}})}$. The situation is the same for Bob.
Therefore, $f$ and $f^{\prime}$ are equivalent under the permutations $\tau_{A}$ and $\tau_{B}$. \end{proof}
\subsection{Computational results}
\begin{table}
\caption{\label{table:count}
The number of inequivalent facets of $\mathrm{CUT}^\square_n$ and
the number of inequivalent tight Bell inequalities
obtained as the triangular eliminations of the facets of
$\mathrm{CUT}^\square_n$.
Asterisk (*) indicates the value is a lower bound.}
\begin{indented} \item
\begin{tabular}{@{}ccc} \br
$n$ & Facets of $\mathrm{CUT}^\square_n$ &
Tight Bell ineqs.\ via triangular elimination \\ \mr
3 & 1 & 2 \\
4 & 1 & 2 \\
5 & 2 & 8 \\
6 & 3 & 22 \\
7 & 11 & 323 \\
8 & 147* & 40,399* \\
9 & 164,506* & 201,374,783* \\ \br
\end{tabular}
\end{indented} \end{table}
By Theorem~\ref{thrm:sw-perm-n-even}, we can compute the number of equivalence classes of facets of $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_A,m_B})$ obtained by applying triangular elimination to non-triangular facets of $\mathrm{CUT}^\square_n$. We consulted De~Simone, Deza and Laurent~\cite{DesDezLau-DM94} for the H-representation of $\mathrm{CUT}^\square_7$, and the ``conjectured complete description'' of $\mathrm{CUT}^\square_8$ and the ``description possibly complete'' of $\mathrm{CUT}^\square_9$ in SMAPO~\cite{SMAPO}. The result is summarized in \tref{table:count}. For $n=8$ and $9$, the number is a lower bound since the known list of the facets of $\mathrm{CUT}^\square_n$ is not proved to be complete. A program to generate Bell inequalities from the list in \cite{SMAPO} is available from an author's webpage at \url{http://www-imai.is.s.u-tokyo.ac.jp/~tsuyoshi/bell/}. The list of the generated Bell inequalities for $n=8$ is also available.
\section{Families of Bell inequalities} \label{sect:families}
While a large list of individual tight Bell inequalities is useful in some applications, a few formulas which give many different Bell inequalities for different values of parameters are easier to treat theoretically. The cut polytope of the complete graph has several classes of valid inequalities whose subclasses of facet-inducing inequalities are partially known (see \cite[Chapters~27--30]{DezLau:cut97} for details). In this section, we apply triangular elimination to two typical examples of such classes to obtain two general formulas for Bell inequalities. In addition, we prove sufficient conditions for these formulas to give a tight Bell inequality.
In this section, terms of the left-hand side of an inequality are arrayed in the format introduced by Collins and Gisin~\cite{ColGis-JPA04}; each row corresponds to the coefficients of an observable of party $A$ and each column to those of party $B$. Because of switching equivalence, we can assume that the right-hand side of an inequality is always zero. For example, the CHSH inequality $ -q_{A_{1}} -q_{B_{1}} + q_{A_{1}B_{1}} + q_{A_{1}B_{2}} + q_{A_{2}B_{1}} - q_{A_{2}B_{2}} \leq 0$ is arrayed as follows: \[
\left( \begin{array}{c||cc}
&-1 & 0\\ \hline \\[-14pt] \hline
-1& 1 & 1\\
0& 1 &-1
\end{array} \right) \le0. \]
\subsection{Bell inequalities derived from hypermetric inequalities}
\emph{Hypermetric inequalities} are a fundamental class of inequalities valid for the cut polytope of the complete graph. Here we derive a new family of Bell inequalities by applying triangular elimination to the hypermetric inequalities.
A special case of this
family, namely the triangular eliminations of pure hypermetric inequalities,
contains four previously known Bell inequalities: the trivial
inequalities like $q_{A_{1}} \leq 1$, the well known CHSH inequality found by Clauser, Horne, Shimony and Holt~\cite{ClaHorShiHol-PRL69}, the inequality named $I_{3322}$ by Collins and Gisin~\cite{ColGis-JPA04}, originally found by Pitowsky and Svozil~\cite{PitSvo-PRA01}, and the $I_{3422}^2$ inequality by Collins and Gisin~\cite{ColGis-JPA04}.
Let $s$ and $t$ be nonnegative integers and $b_{{\mathrm{A}}_1},\dots,b_{{\mathrm{A}}_s},\allowbreak
b_{{\mathrm{B}}_1},\dots,b_{{\mathrm{B}}_t}$ be integers. We define $b_{{\mathrm{X}}}=1-\sum_{i=1}^s b_{{\mathrm{A}}_i}-\sum_{j=1}^t b_{{\mathrm{B}}_j}$. Then it is known that $\sum_{uv} b_u b_v x_{uv}\le0$, where the sum is taken over the $\binom{s+t+1}{2}$ edges of the complete graph on nodes ${\mathrm{X}},{\mathrm{A}}_1,\dots,{\mathrm{A}}_s,\allowbreak {\mathrm{B}}_1,\dots,{\mathrm{B}}_t$, is valid for $\mathrm{CUT}^\square_{s+t+1}$. This inequality is called the \emph{hypermetric inequality} defined by the weight vector $\bm{b}=(b_{{\mathrm{X}}},\allowbreak b_{{\mathrm{A}}_1},\dots,b_{{\mathrm{A}}_s},\allowbreak b_{{\mathrm{B}}_1},\dots,b_{{\mathrm{B}}_t})$.
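The validity claim can be checked by brute force for any concrete weight vector. In the following sketch (function names ours; the first entry of \texttt{b} plays the role of $b_{{\mathrm{X}}}$, so the entries must sum to $1$), the weight vector $(1,1,1,-1,-1)$ recovers the pentagonal inequality (\ref{eq:pent}).

```python
from itertools import combinations, product

def hypermetric_lhs(b, labels):
    # sum_{u<v} b_u b_v delta_{uv}(S) for the cut S = {u : labels[u] = 1}
    return sum(b[u] * b[v] * (labels[u] ^ labels[v])
               for u, v in combinations(range(len(b)), 2))

def is_valid_hypermetric(b):
    # brute-force check over all cuts; b must sum to 1 as in the text
    assert sum(b) == 1
    return all(hypermetric_lhs(b, labels) <= 0
               for labels in product((0, 1), repeat=len(b)))

# the weights (b_X, b_A1, b_A2, b_B1, b_B2) = (1, 1, 1, -1, -1)
# give the pentagonal inequality, which is valid for CUT_5
assert is_valid_hypermetric((1, 1, 1, -1, -1))
```

Validity follows because the left-hand side equals $b(S)\,(1-b(S))$ for the integer $b(S)=\sum_{u\in S}b_u$, which is never positive.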
We apply triangular elimination to this hypermetric inequality. Let $s_+$ and $t_+$ be the number of positive entries of the form $b_{{\mathrm{A}}_i}$ and of the form $b_{{\mathrm{B}}_j}$, respectively. Without loss of generality, we assume that $b_{{\mathrm{A}}_1},\dots,b_{{\mathrm{A}}_{s_+}},\allowbreak
b_{{\mathrm{B}}_1},\dots,b_{{\mathrm{B}}_{t_+}}>0$, and $b_{{\mathrm{A}}_{s_++1}},\dots,b_{{\mathrm{A}}_s},\allowbreak
b_{{\mathrm{B}}_{t_++1}},\dots,b_{{\mathrm{B}}_t}\le0$. By assigning $a_{uv}=b_ub_v$ in the formula~(\ref{eq:after-elimination}), the Bell inequality obtained by triangular elimination is: \begin{eqnarray}
\sum_{i=1}^{s_+}b_{{\mathrm{A}}_i}\biggl(\frac{1-b_{{\mathrm{A}}_i}}{2}-\sum_{i'=1}^{i-1}b_{{\mathrm{A}}_{i'}}\biggr)q_{{\mathrm{A}}_i}
+\sum_{i=s_++1}^s b_{{\mathrm{A}}_i}\biggl(\frac{1-b_{{\mathrm{A}}_i}}{2}-\sum_{i'=s_++1}^{i-1}b_{{\mathrm{A}}_{i'}}\biggr)q_{{\mathrm{A}}_i} \nonumber\\
+\sum_{j=1}^{t_+}b_{{\mathrm{B}}_j}\biggl(\frac{1-b_{{\mathrm{B}}_j}}{2}-\sum_{j'=1}^{j-1}b_{{\mathrm{B}}_{j'}}\biggr)q_{{\mathrm{B}}_j}
+\sum_{j=t_++1}^t b_{{\mathrm{B}}_j}\biggl(\frac{1-b_{{\mathrm{B}}_j}}{2}-\sum_{j'=t_++1}^{j-1}b_{{\mathrm{B}}_{j'}}\biggr)q_{{\mathrm{B}}_j} \nonumber\\
+\sum_{j=1}^{t_+}\sum_{j'=t_++1}^t b_{{\mathrm{B}}_j}b_{{\mathrm{B}}_{j'}}q_{{\mathrm{A}}'_{jj'}}
+\sum_{i=1}^{s_+}\sum_{i'=s_++1}^s b_{{\mathrm{A}}_i}b_{{\mathrm{A}}_{i'}}q_{{\mathrm{B}}'_{ii'}}
-\sum_{i=1}^s\sum_{j=1}^t b_{{\mathrm{A}}_i}b_{{\mathrm{B}}_j}q_{{\mathrm{A}}_i{\mathrm{B}}_j} \nonumber\\
-\sum_{1\le i<i'\le s}b_{{\mathrm{A}}_i}b_{{\mathrm{A}}_{i'}}q_{{\mathrm{A}}_i{\mathrm{B}}'_{ii'}}
+\sum_{1\le i<i'\le s}| b_{{\mathrm{A}}_i}b_{{\mathrm{A}}_{i'}}| q_{{\mathrm{A}}_{i'}{\mathrm{B}}'_{ii'}}
\nonumber\\
-\sum_{1\le j<j'\le t}b_{{\mathrm{B}}_j}b_{{\mathrm{B}}_{j'}}q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_j}
+\sum_{1\le j<j'\le t}| b_{{\mathrm{B}}_j}b_{{\mathrm{B}}_{j'}}| q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_{j'}}
\le0.
\label{eq:hypermetric-bell} \end{eqnarray}
Though the formula~(\ref{eq:hypermetric-bell}) represents a Bell inequality for any choice of weight vector $\bm{b}$, this Bell inequality is not always tight. Many sufficient conditions for a hypermetric inequality to be facet-inducing are known in the study of cut polytopes. By Theorem~\ref{thrm:kn-k1mm}, these give sufficient conditions for the Bell inequality~(\ref{eq:hypermetric-bell}) to be tight. The sufficient conditions stated in \cite[Corollary~27.2.5]{DezLau:cut97} give the following theorem.
\begin{theorem} \label{thrm:hypermetric-bell-facet}
The Bell inequality~(\ref{eq:hypermetric-bell}) is tight if one of
the following conditions is satisfied.
\begin{enumerate}[(i)]
\item \label{enum:pure-hypermetric}
For some $l>1$, the integers
$b_{{\mathrm{A}}_1},\dots,b_{{\mathrm{A}}_s},\allowbreak b_{{\mathrm{B}}_1},\dots,b_{{\mathrm{B}}_t}$
and $b_{{\mathrm{X}}}$ contain $l+1$ entries equal to $1$ and $l$ entries
equal to $-1$, and the other entries (if any) are equal to $0$.
\item
At least $3$ and at most $n-3$ entries in
$b_{{\mathrm{A}}_1},\dots,b_{{\mathrm{A}}_s},\allowbreak b_{{\mathrm{B}}_1},\dots,b_{{\mathrm{B}}_t}$
and $b_{{\mathrm{X}}}$ are positive, and all the other entries are equal
to $-1$.
\end{enumerate} \end{theorem}
Now we consider some concrete cases in which the formula~(\ref{eq:hypermetric-bell}) represents a tight Bell inequality. If we let $s+t=2l$, $s\le l$, $l>1$, $b_{{\mathrm{A}}_1}=\dots=b_{{\mathrm{A}}_s}=b_{{\mathrm{B}}_1}=\dots=b_{{\mathrm{B}}_{l-s}}=1$, and $b_{{\mathrm{B}}_{l-s+1}}=\dots=b_{{\mathrm{B}}_t}=-1$, then $b_{{\mathrm{X}}}=1$ and, by case~(\ref{enum:pure-hypermetric}) of Theorem~\ref{thrm:hypermetric-bell-facet}, the Bell inequality~(\ref{eq:hypermetric-bell}) is tight. In this case, it takes the following form. \begin{eqnarray}
\fl
-\sum_{i=1}^s(i-1)q_{{\mathrm{A}}_i}
-\sum_{j=1}^{l-s}\sum_{j'=l-s+1}^t q_{{\mathrm{A}}'_{jj'}}
-\sum_{j=1}^{l-s}(j-1)q_{{\mathrm{B}}_j}
-\sum_{j=l-s+1}^t(j-(l-s))q_{{\mathrm{B}}_j} \nonumber\\
\fl
-\sum_{i=1}^s\sum_{j=1}^{l-s} q_{{\mathrm{A}}_i{\mathrm{B}}_j}
+\sum_{i=1}^s\sum_{j=l-s+1}^t q_{{\mathrm{A}}_i{\mathrm{B}}_j}
-\sum_{1\le i<i'\le s}q_{{\mathrm{A}}_i{\mathrm{B}}'_{ii'}}
+\sum_{1\le i<i'\le s}q_{{\mathrm{A}}_{i'}{\mathrm{B}}'_{ii'}} \nonumber\\
\fl
-\sum_{1\le j<j'\le l-s}q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_j}
-\sum_{l-s+1\le j<j'\le t}q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_j}
+\sum_{j=1}^{l-s}\sum_{j'=l-s+1}^t q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_j}
+\sum_{1\le j<j'\le t}q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_{j'}} \le0.
\label{eq:pure-hypermetric-bell-1} \end{eqnarray}
Examples of tight Bell inequalities of the form~(\ref{eq:pure-hypermetric-bell-1}) are the $I_{3322}$ and $I_{3422}^2$ inequalities~\cite{ColGis-JPA04}.
In the case $l=1$, Theorem~\ref{thrm:hypermetric-bell-facet} does not guarantee that the Bell inequality~(\ref{eq:pure-hypermetric-bell-1}) is tight. However, for $(l,s,t)=(1,1,1)$ and $(1,1,2)$, the Bell inequality~(\ref{eq:pure-hypermetric-bell-1}) becomes the trivial and the CHSH inequality, respectively, both of which are tight.
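For instance, substituting $(l,s,t)=(1,1,2)$ into (\ref{eq:pure-hypermetric-bell-1}) gives \[ -q_{{\mathrm{B}}_1}-2q_{{\mathrm{B}}_2}+q_{{\mathrm{A}}_1{\mathrm{B}}_1}+q_{{\mathrm{A}}_1{\mathrm{B}}_2}-q_{{\mathrm{A}}'_{12}{\mathrm{B}}_1}+q_{{\mathrm{A}}'_{12}{\mathrm{B}}_2}\le0, \] which is equivalent to the CHSH inequality after relabelling the observables and switching values.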
Letting $(l,s,t)=(2,2,2)$ in (\ref{eq:pure-hypermetric-bell-1}) gives: \begin{equation}
\fl
-q_{{\mathrm{A}}_2}-q_{{\mathrm{B}}_1}-2q_{{\mathrm{B}}_2}
+q_{{\mathrm{A}}_1{\mathrm{B}}_1}+q_{{\mathrm{A}}_1{\mathrm{B}}_2}+q_{{\mathrm{A}}_2{\mathrm{B}}_1}+q_{{\mathrm{A}}_2{\mathrm{B}}_2}
-q_{{\mathrm{A}}_1{\mathrm{B}}'_{12}}+q_{{\mathrm{A}}_2{\mathrm{B}}'_{12}}-q_{{\mathrm{A}}'_{12}{\mathrm{B}}_1}+q_{{\mathrm{A}}'_{12}{\mathrm{B}}_2} \le0.
\label{eq:i3322-1} \end{equation} Following the notation in~\cite{ColGis-JPA04}, we write the inequality~(\ref{eq:i3322-1}) by arraying its coefficients: \[
\left(\begin{array}{cc||ccc}
& & ({\mathrm{A}}_2) & ({\mathrm{A}}_1) & ({\mathrm{A}}'_{12}) \\
& & -1 & 0 & 0 \\ \hline\\[-14pt]\hline
({\mathrm{B}}_2) & -2 & 1 & 1 & 1 \\
({\mathrm{B}}_1) & -1 & 1 & 1 & -1 \\
({\mathrm{B}}'_{12}) & 0 & 1 & -1 & 0
\end{array}\right) \le 0. \] Now it is clear that the Bell inequality~(\ref{eq:i3322-1}) is the $I_{3322}$ inequality.
Letting $(l,s,t)=(2,1,3)$ in (\ref{eq:pure-hypermetric-bell-1}) gives: \begin{equation}
\left(\begin{array}{cc||ccc}
& & ({\mathrm{B}}_2) & ({\mathrm{B}}_3) & ({\mathrm{B}}_1) \\
& & -1 & -2 & 0 \\ \hline\\[-14pt]\hline
({\mathrm{A}}_1) & 0 & 1 & 1 & -1 \\
({\mathrm{A}}'_{13}) & -1 & 0 & 1 & 1 \\
({\mathrm{A}}'_{12}) & -1 & 1 & 0 & 1 \\
({\mathrm{A}}'_{23}) & 0 & -1 & 1 & 0
\end{array}\right) \le 0.
\label{eq:i34222-1} \end{equation} After exchanging the two values $1$ and $0$ of the observable ${\mathrm{A}}_1$, and doing the same for the two values of the observable ${\mathrm{B}}_3$, the Bell inequality~(\ref{eq:i34222-1}) becomes: \[
\left(\begin{array}{cc||ccc}
& & ({\mathrm{B}}_2) & (\overline{{\mathrm{B}}_3})
& ({\mathrm{B}}_1) \\
& & 0 & 1 & -1 \\ \hline\\[-14pt]\hline
(\overline{{\mathrm{A}}_1})
& -1 & -1 & 1 & 1 \\
({\mathrm{A}}'_{13}) & 0 & 0 & -1 & 1 \\
({\mathrm{A}}'_{12}) & -1 & 1 & 0 & 1 \\
({\mathrm{A}}'_{23}) & 1 & -1 & -1 & 0
\end{array}\right) \le 1, \] which is the $I_{3422}^2$ inequality~\cite{ColGis-JPA04}. This means that the Bell inequality~(\ref{eq:i34222-1}) is equivalent to the $I_{3422}^2$ inequality.
\subsection{Bell inequalities derived from pure clique-web inequalities}
Clique-web inequalities~\cite[Chapter~29]{DezLau:cut97} are a generalization of hypermetric inequalities. One important subclass is that of the pure clique-web inequalities, which are always facet-inducing. Here we give an example of Bell inequalities derived from pure clique-web inequalities.
For nonnegative integers $s$, $t$ and $r$ with $s\ge t\ge2$ and $s-t=2r$, we consider the pure clique-web inequality with parameters $n=s+t+1$, $p=s+1$, $q=t$ and $r$. After relabelling the $n$ vertices of ${\mathrm{K}}_n$ by ${\mathrm{A}}_1,\dots,{\mathrm{A}}_s,{\mathrm{X}},{\mathrm{B}}_1,\dots,{\mathrm{B}}_t$ in this order, the Bell inequality~(\ref{eq:after-elimination}) corresponding to the clique-web inequality is: \begin{eqnarray}
-\sum_{i=r+1}^{s-r}(i-r-1)q_{{\mathrm{A}}_i}-2t\sum_{i=s-r+1}^s q_{{\mathrm{A}}_i}
-\sum_{j=1}^t (j-r)q_{{\mathrm{B}}_j}
+\sum_{i=1}^s\sum_{j=1}^t q_{{\mathrm{A}}_i{\mathrm{B}}_j} \nonumber\\
+\sum_{\substack{1\le i<i'\le s \cr r+1\le i'-i\le s-r}}
(-q_{{\mathrm{A}}_i{\mathrm{B}}'_{ii'}}+q_{{\mathrm{A}}_{i'}{\mathrm{B}}'_{ii'}})
+\sum_{1\le j<j'\le t}(-q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_j}+q_{{\mathrm{A}}'_{jj'}{\mathrm{B}}_{j'}})\le0.
\label{eq:pure-cw-bell-1} \end{eqnarray}
The next theorem is a direct consequence of Theorem~\ref{thrm:kn-k1mm}.
\begin{theorem} \label{thrm:pure-cw-bell-facet}
For any nonnegative integers $s$, $t$ and $r$ with $s\ge t\ge2$ and
$s-t=2r$, the Bell inequality~(\ref{eq:pure-cw-bell-1}) is tight. \end{theorem}
\subsection{Inclusion relation}
Collins and Gisin~\cite{ColGis-JPA04} pointed out that the following $I_{3322}$ inequality becomes the CHSH inequality if we fix the two measurements ${\mathrm{A}}_3$ and ${\mathrm{B}}_1$ to a deterministic measurement whose result is always $0$. \begin{eqnarray*}
\text{$I_{3322}$:} \quad &
\left(\begin{array}{cc||ccc}
& & ({\mathrm{A}}_1) & ({\mathrm{A}}_2) & ({\mathrm{A}}_3) \\
& & -1 & 0 & 0 \\ \hline\\[-14pt]\hline
({\mathrm{B}}_1) & -2 & 1 & 1 & 1 \\
({\mathrm{B}}_2) & -1 & 1 & 1 & -1 \\
({\mathrm{B}}_3) & 0 & 1 & -1 & 0
\end{array} \right)\le0, \\
\text{CHSH:} \quad &
\left(\begin{array}{cc||cc}
& & ({\mathrm{A}}_1) & ({\mathrm{A}}_2) \\
& & -1 & 0 \\ \hline\\[-14pt]\hline
({\mathrm{B}}_2) & -1 & 1 & 1 \\
({\mathrm{B}}_3) & 0 & 1 & -1
\end{array} \right)\le0. \end{eqnarray*} As stated in \cite{ColGis-JPA04}, this fact implies that the CHSH inequality is irrelevant once the $I_{3322}$ inequality is given. In other words, if a quantum state satisfies the $I_{3322}$ inequality with every set of measurements, then it also satisfies the CHSH inequality with every set of measurements.
We generalize this argument and define an \emph{inclusion relation} between two Bell inequalities: a Bell inequality $\bm{a}^{\mathrm{T}}\bm{q}\le0$ \emph{includes} another Bell inequality $\bm{b}^{\mathrm{T}}\bm{q}\le0$ if we can obtain the inequality $\bm{b}^{\mathrm{T}}\bm{q}\le0$ by fixing some measurements in the inequality $\bm{a}^{\mathrm{T}}\bm{q}\le0$ to deterministic ones.
We do not know whether all Bell inequalities (except the positive probability inequalities) include the CHSH inequality. However, we can prove that many Bell inequalities represented by (\ref{eq:hypermetric-bell}) or (\ref{eq:pure-cw-bell-1}) include the CHSH inequality.
\begin{theorem} \label{thrm:hypermetric-bell-facet-chsh}
If $b_{{\mathrm{A}}_1}=b_{{\mathrm{A}}_2}=1$ and $b_{{\mathrm{B}}_{t_++1}}=-1$, then the Bell
inequality represented by~(\ref{eq:hypermetric-bell}) includes the
CHSH inequality. \end{theorem}
\begin{proof}
The Bell inequality~(\ref{eq:hypermetric-bell}) contains
$s+\binom{t}{2}$ observables of Alice and $t+\binom{s}{2}$
observables of Bob.
By fixing all but the four observables ${\mathrm{A}}_1$, ${\mathrm{A}}_2$, ${\mathrm{B}}_{t_++1}$ and
${\mathrm{B}}'_{12}$ to deterministic ones whose value is always $0$, we obtain the
following CHSH inequality:
$
-q_{{\mathrm{A}}_2}-q_{{\mathrm{B}}_{t_++1}}
+q_{{\mathrm{A}}_1{\mathrm{B}}_{t_++1}}+q_{{\mathrm{A}}_2{\mathrm{B}}_{t_++1}}
-q_{{\mathrm{A}}_1{\mathrm{B}}'_{12}}+q_{{\mathrm{A}}_2{\mathrm{B}}'_{12}}\le0
$. \end{proof}
\begin{theorem} \label{thrm:pure-cw-bell-facet-chsh}
All the Bell inequalities in the form~(\ref{eq:pure-cw-bell-1})
include the CHSH inequality. \end{theorem}
\begin{proof}
By fixing all but the four observables ${\mathrm{A}}_{r+1}$, ${\mathrm{A}}_{r+2}$, ${\mathrm{B}}_{r+1}$
and ${\mathrm{B}}'_{r{+}1,r{+}2}$ to deterministic ones whose value is always $0$, the
Bell inequality~(\ref{eq:pure-cw-bell-1}) becomes the following CHSH
inequality:
$
-q_{{\mathrm{A}}_{r+2}}-q_{{\mathrm{B}}_{r+1}}
+q_{{\mathrm{A}}_{r+1}{\mathrm{B}}_{r+1}}+q_{{\mathrm{A}}_{r+2}{\mathrm{B}}_{r+1}}
-q_{{\mathrm{A}}_{r+1}{\mathrm{B}}'_{r{+}1,r{+}2}}+q_{{\mathrm{A}}_{r+2}{\mathrm{B}}'_{r{+}1,r{+}2}}\le0
$. \end{proof}
\subsection{Relationship between $I_{mm22}$ and triangular eliminated Bell inequalities} Collins and Gisin~\cite{ColGis-JPA04} proposed a family of tight Bell inequalities, the $I_{mm22}$ family, obtained by extending the CHSH and $I_{3322}$ inequalities, and conjectured that $I_{mm22}$ is always facet supporting (they also confirmed by computation that $I_{mm22}$ is facet supporting for $m \leq 7$). Therefore, whether the $I_{mm22}$ inequality can be obtained by triangular elimination of some facet class of $\mathrm{CUT}^\square({\mathrm{K}}_{n})$ is an interesting question.
The $I_{mm22}$ family has the following structure: \[
\left( \begin{array}{c||cccccc}
& -1 & 0 & \cdots & 0 & 0 & 0\\ \hline\\[-14pt]\hline -(m-1)& 1 & 1 & \cdots & 1 & 1 & 1\\ -(m-2)& 1 & 1 & \cdots & 1 & 1 &-1\\ -(m-3)& 1 & 1 & \cdots & 1 & -1 & 0\\ \vdots&\vdots&\vdots&\revddots&\revddots&\revddots&\vdots\\
-1 & 1 & 1 & -1 & 0 & \cdots & 0\\
0 & 1 & -1 & 0 & 0 & \cdots & 0
\end{array} \right)\le0. \]
From this structure, it is straightforward that if $I_{mm22}$ can be obtained by triangular elimination of some facet class of $\mathrm{CUT}^\square_{n}$, then only ${\mathrm{A}}_{m}$ and ${\mathrm{B}}_{m}$ can be new vertices introduced by triangular elimination, since the other vertices have degree more than $2$. For $m=2,3,4$, the $I_{mm22}$ inequality is the triangular elimination of the triangle, pentagonal and Grishukhin inequality $\sum_{1 \leq i < j \leq 4}x_{ij} +x_{56} +x_{57} -x_{67} -x_{16} -x_{36} -x_{27} -x_{47} - 2 \sum_{1 \leq i \leq 4}x_{i5} \leq 0$, respectively. In general, the $I_{mm22}$ inequality is the triangular elimination of a facet-inducing inequality of $\mathrm{CUT}^\square_{2m-1}$, and it is tight~\cite{AviIto-JH05}.
\subsection{Known tight Bell inequalities other than the triangular
elimination of $\mathrm{CUT}^\square({\mathrm{K}}_{n})$}
Since we have obtained a large number of tight Bell inequalities by triangular elimination of $\mathrm{CUT}^\square({\mathrm{K}}_{n})$, the next question is whether they are complete, i.e., whether all these families and their equivalents form the whole set of facets of $\mathrm{CUT}^\square({\mathrm{K}}_{1,m_{\mathrm{A}},m_{\mathrm{B}}})$.
For the case $m_{\mathrm{A}}=m_{\mathrm{B}}=3$, the answer is affirmative. Both \'{S}liwa~\cite{Sli-PLA03} and Collins and Gisin~\cite{ColGis-JPA04} showed that there are only three kinds of inequivalent facets: positive probabilities, CHSH and $I_{3322}$, corresponding to the triangle facet, the triangular elimination of the triangle facet and the triangular elimination of the pentagonal facet of $\mathrm{CUT}^\square({\mathrm{K}}_{n})$, respectively.
On the other hand, in the case $m_{\mathrm{A}}=3$ and $m_{\mathrm{B}}=4$, the answer is negative. Collins and Gisin enumerated all of the tight Bell inequalities and classified them into 6 families of equivalent inequalities~\cite{ColGis-JPA04}. While positive probabilities, CHSH, $I_{3322}$ and $I_{3422}^2$ inequalities are either facets of $\mathrm{CUT}^\square({\mathrm{K}}_{n})$ or their triangular eliminations, the other two are not: \[ \fl I^{1}_{3422} =
\left( \begin{array}{c||ccc}
& 1& 1&-2\\ \hline\\[-14pt]\hline
1&-1&-1& 1\\
0&-1& 1& 1\\
0& 1&-1& 1\\
1&-1&-1&-1
\end{array}\right)\le2, \qquad I^{3}_{3422} =
\left( \begin{array}{c||ccc}
& 1& 0&-1\\ \hline\\[-14pt]\hline
0&-2& 1& 1\\
0& 0&-1& 1\\ -1& 1& 1& 1\\
2&-1&-1&-1
\end{array}\right)\le2. \]
\section{Concluding remarks} \label{sect:concluding}
We introduced triangular elimination to derive tight Bell inequalities from the facet inequalities of the cut polytope of the complete graph. Though it does not give the complete list of Bell inequalities, this method derives not only many individual tight Bell inequalities from known facet inequalities of the cut polytope, but also several families of Bell inequalities. This gives a partial answer to the $N=K=2$ case of the problem posed by Werner~\cite[Problem~1]{KruWer-0504166}.
Gill poses the following problem in~\cite[Problem~26.B]{KruWer-0504166}: is there any Bell inequality that holds for all quantum states, other than the inequalities representing nonnegativity of probabilities? Theorems~\ref{thrm:hypermetric-bell-facet-chsh} and \ref{thrm:pure-cw-bell-facet-chsh} give a partial answer to this problem. If a Bell inequality $\bm{a}^{\mathrm{T}}\bm{q}\le0$ includes the CHSH inequality, then the Bell inequality $\bm{a}^{\mathrm{T}}\bm{q}\le0$ is necessarily violated by any quantum state violating the CHSH inequality.
Further investigation of the inclusion relation and of families of Bell inequalities may be useful for understanding the structure of Bell inequalities and, in particular, for answering Gill's problem.
\section*{References}
\end{document} | arXiv |
List of dimensionless quantities
This is a list of well-known dimensionless quantities, illustrating their variety of forms and applications. The tables also include pure numbers, dimensionless ratios and dimensionless physical constants.
Biology and medicine
Name | Standard symbol | Definition | Field of application
Basic reproduction number | $R_{0}$ | number of infections caused on average by an infectious individual over the entire infectious period | epidemiology
Body fat percentage | – | total mass of fat divided by total body mass, multiplied by 100 | biology
Kt/V | Kt/V | – | medicine (hemodialysis and peritoneal dialysis treatment; dimensionless time)
Waist–hip ratio | – | waist circumference divided by hip circumference | biology
Waist-to-chest ratio | – | waist circumference divided by chest circumference | biology
Waist-to-height ratio | – | waist circumference divided by height | biology
Chemistry
Name | Standard symbol | Definition | Field of application
Activity coefficient | $\gamma $ | $\gamma ={\frac {a}{x}}$ | chemistry (proportion of "active" molecules or atoms)
Arrhenius number | $\alpha $ | $\alpha ={\frac {E_{a}}{RT}}$ | chemistry (ratio of activation energy to thermal energy)[1]
Atomic weight | M | – | chemistry (mass of one atom divided by the atomic mass constant, 1 Da)
Bodenstein number | Bo or Bd | $\mathrm {Bo} =vL/{\mathcal {D}}=\mathrm {Re} \,\mathrm {Sc} $ | chemistry (residence-time distribution; similar to the axial mass transfer Peclet number)[2]
Damköhler number | Da | $\mathrm {Da} =k\tau $ | chemistry (reaction time scale vs. residence time)
Hatta number | Ha | $\mathrm {Ha} ={\frac {N_{\mathrm {A} 0}}{N_{\mathrm {A} 0}^{\mathrm {phys} }}}$ | chemical engineering (adsorption enhancement due to chemical reaction)
Jakob number | Ja | $\mathrm {Ja} ={\frac {c_{p}(T_{\mathrm {s} }-T_{\mathrm {sat} })}{\Delta H_{\mathrm {f} }}}$ | chemistry (ratio of sensible to latent energy absorbed during liquid–vapor phase change)[3]
pH | $\mathrm {pH} $ | $\mathrm {pH} =-\log _{10}(a_{{\textrm {H}}^{+}})$ | chemistry (measure of the acidity or basicity of an aqueous solution)
van 't Hoff factor | i | $i=1+\alpha (n-1)$ | quantitative analysis (Kf and Kb)
Wagner number | Wa | $\mathrm {Wa} ={\frac {\kappa }{l}}{\frac {\mathrm {d} \eta }{\mathrm {d} i}}$ | electrochemistry (ratio of kinetic polarization resistance to solution ohmic resistance in an electrochemical cell)[4]
Weaver flame speed number | Wea | $\mathrm {Wea} ={\frac {w}{w_{\mathrm {H} }}}100$ | combustion (laminar burning velocity relative to hydrogen gas)[5]
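As a quick numerical check of two of the definitions in the table, here is a short sketch in Python (the function and variable names are ours, not from the table; only the standard library is used):

```python
import math

def ph(h_activity):
    """pH = -log10(a_H+): the acidity measure of an aqueous solution."""
    return -math.log10(h_activity)

def vant_hoff_factor(alpha, n):
    """i = 1 + alpha*(n - 1), with degree of dissociation alpha into n particles."""
    return 1 + alpha * (n - 1)

# Neutral water at 25 degrees C has a hydrogen-ion activity near 1e-7, so pH is about 7.
assert abs(ph(1e-7) - 7.0) < 1e-9
# A salt that fully dissociates (alpha = 1) into two ions has i = 2.
assert vant_hoff_factor(1.0, 2) == 2.0
```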
Physics
Fluids and heat transfer
Name | Standard symbol | Definition | Field of application
Archimedes number | Ar | $\mathrm {Ar} ={\frac {gL^{3}\rho _{\ell }(\rho -\rho _{\ell })}{\mu ^{2}}}$ | fluid mechanics (motion of fluids due to density differences)
Asakuma number | As | $\mathrm {As} ={\frac {W}{\alpha \rho d_{p}H}}$ | heat transfer (ratio of heat generation of microwave dielectric heating to thermal diffusion)[6]
Atwood number | A | $\mathrm {A} ={\frac {\rho _{1}-\rho _{2}}{\rho _{1}+\rho _{2}}}$ | fluid mechanics (onset of instabilities in fluid mixtures due to density differences)
Bagnold number | Ba | $\mathrm {Ba} ={\frac {\rho d^{2}\lambda ^{1/2}{\dot {\gamma }}}{\mu }}$ | fluid mechanics, geology (ratio of grain collision stresses to viscous fluid stresses in flow of a granular material such as grain and sand)[7]
Bejan number (fluid mechanics) | Be | $\mathrm {Be} ={\frac {\Delta PL^{2}}{\mu \alpha }}$ | fluid mechanics (dimensionless pressure drop along a channel)[8]
Bejan number (thermodynamics) | Be | $\mathrm {Be} ={\frac {{\dot {S}}'_{\mathrm {gen} ,\,\Delta T}}{{\dot {S}}'_{\mathrm {gen} ,\,\Delta T}+{\dot {S}}'_{\mathrm {gen} ,\,\Delta p}}}$ | thermodynamics (ratio of heat transfer irreversibility to total irreversibility due to heat transfer and fluid friction)[9]
Bingham number | Bm | $\mathrm {Bm} ={\frac {\tau _{y}L}{\mu V}}$ | fluid mechanics, rheology (ratio of yield stress to viscous stress)[1]
Biot number | Bi | $\mathrm {Bi} ={\frac {hL_{C}}{k_{b}}}$ | heat transfer (surface vs. volume conductivity of solids)
Blake number | Bl or B | $\mathrm {B} ={\frac {u\rho }{\mu (1-\epsilon )D}}$ | geology, fluid mechanics, porous media (inertial over viscous forces in fluid flow through porous media)
Bond number | Bo | $\mathrm {Bo} ={\frac {\rho aL^{2}}{\gamma }}$ | geology, fluid mechanics, porous media (buoyant versus capillary forces, similar to the Eötvös number)[10]
Brinkman number | Br | $\mathrm {Br} ={\frac {\mu U^{2}}{\kappa (T_{w}-T_{0})}}$ | heat transfer, fluid mechanics (conduction from a wall to a viscous fluid)
Brownell–Katz number | NBK | $\mathrm {N} _{\mathrm {BK} }={\frac {u\mu }{k_{\mathrm {rw} }\sigma }}$ | fluid mechanics (combination of capillary number and Bond number)[11]
Capillary number | Ca | $\mathrm {Ca} ={\frac {\mu V}{\gamma }}$ | porous media, fluid mechanics (viscous forces versus surface tension)
Chandrasekhar number | Q | $\mathrm {Q} ={\frac {{B_{0}}^{2}d^{2}}{\mu _{0}\rho \nu \lambda }}$ | magnetohydrodynamics (ratio of the Lorentz force to the viscosity in magnetic convection)
Colburn J factors | JM, JH, JD | – | turbulence; heat, mass, and momentum transfer (dimensionless transfer coefficients)
Darcy friction factor | Cf or fD | – | fluid mechanics (fraction of pressure losses due to friction in a pipe; four times the Fanning friction factor)
Dean number | D | $\mathrm {D} ={\frac {\rho Vd}{\mu }}\left({\frac {d}{2R}}\right)^{1/2}$ | turbulent flow (vortices in curved ducts)
Deborah number | De | $\mathrm {De} ={\frac {t_{\mathrm {c} }}{t_{\mathrm {p} }}}$ | rheology (viscoelastic fluids)
Drag coefficient | cd | $c_{\mathrm {d} }={\dfrac {2F_{\mathrm {d} }}{\rho v^{2}A}}$ | aeronautics, fluid dynamics (resistance to fluid motion)
Eckert number | Ec | $\mathrm {Ec} ={\frac {V^{2}}{c_{p}\Delta T}}$ | convective heat transfer (characterizes dissipation of energy; ratio of kinetic energy to enthalpy)
Ekman number | Ek | $\mathrm {Ek} ={\frac {\nu }{2D^{2}\Omega \sin \varphi }}$ | geophysics (viscous versus Coriolis forces)
Eötvös number | Eo | $\mathrm {Eo} ={\frac {\Delta \rho \,g\,L^{2}}{\sigma }}$ | fluid mechanics (shape of bubbles or drops)
Ericksen number | Er | $\mathrm {Er} ={\frac {\mu vL}{K}}$ | fluid dynamics (liquid crystal flow behavior; viscous over elastic forces)
Euler number | Eu | $\mathrm {Eu} ={\frac {\Delta {}p}{\rho V^{2}}}$ | hydrodynamics (stream pressure versus inertia forces)
Excess temperature coefficient | $\Theta _{r}$ | $\Theta _{r}={\frac {c_{p}(T-T_{e})}{U_{e}^{2}/2}}$ | heat transfer, fluid dynamics (change in internal energy versus kinetic energy)[12]
Fanning friction factor | f | – | fluid mechanics (fraction of pressure losses due to friction in a pipe; one quarter of the Darcy friction factor)[13]
Fourier number | Fo | $\mathrm {Fo} ={\frac {\alpha t}{L^{2}}}$ | heat transfer, mass transfer (ratio of diffusive rate versus storage rate)
Froude number | Fr | $\mathrm {Fr} ={\frac {v}{\sqrt {g\ell }}}$ | fluid mechanics (wave and surface behaviour; ratio of a body's inertia to gravitational forces)
Galilei number | Ga | $\mathrm {Ga} ={\frac {g\,L^{3}}{\nu ^{2}}}$ | fluid mechanics (gravitational over viscous forces)
Görtler number | G | $\mathrm {G} ={\frac {U_{e}\theta }{\nu }}\left({\frac {\theta }{R}}\right)^{1/2}$ | fluid dynamics (boundary layer flow along a concave wall)
Graetz number | Gz | $\mathrm {Gz} ={D_{H} \over L}\mathrm {Re} \,\mathrm {Pr} $ | heat transfer, fluid mechanics (laminar flow through a conduit; also used in mass transfer)
Grashof number | Gr | $\mathrm {Gr} _{L}={\frac {g\beta (T_{s}-T_{\infty })L^{3}}{\nu ^{2}}}$ | heat transfer, natural convection (ratio of the buoyancy to viscous force)
Hagen number | Hg | $\mathrm {Hg} =-{\frac {1}{\rho }}{\frac {\mathrm {d} p}{\mathrm {d} x}}{\frac {L^{3}}{\nu ^{2}}}$ | heat transfer (ratio of the buoyancy to viscous force in forced convection)
Hydraulic gradient | i | $i={\frac {\mathrm {d} h}{\mathrm {d} l}}={\frac {h_{2}-h_{1}}{\mathrm {length} }}$ | fluid mechanics, groundwater flow (pressure head over distance)
Karlovitz number | Ka | $\mathrm {Ka} ={\frac {t_{F}}{t_{\eta }}}$ | turbulent combustion (characteristic chemical time scale to Kolmogorov time scale)
Keulegan–Carpenter number | KC | $\mathrm {K_{C}} ={\frac {V\,T}{L}}$ | fluid dynamics (ratio of drag force to inertia for a bluff object in oscillatory fluid flow)
Knudsen number | Kn | $\mathrm {Kn} ={\frac {\lambda }{L}}$ | gas dynamics (ratio of the molecular mean free path length to a representative physical length scale)
Kutateladze number | Ku | $\mathrm {Ku} ={\frac {U_{h}\rho _{g}^{1/2}}{\left({\sigma g(\rho _{l}-\rho _{g})}\right)^{1/4}}}$ | fluid mechanics (counter-current two-phase flow)[14]
Laplace number | La | $\mathrm {La} ={\frac {\sigma \rho L}{\mu ^{2}}}$ | fluid dynamics (free convection within immiscible fluids; ratio of surface tension to momentum transport)
Lewis number | Le | $\mathrm {Le} ={\frac {\alpha }{D}}={\frac {\mathrm {Sc} }{\mathrm {Pr} }}$ | heat and mass transfer (ratio of thermal to mass diffusivity)
Lift coefficient | CL | $C_{\mathrm {L} }={\frac {L}{q\,S}}$ | aerodynamics (lift available from an airfoil at a given angle of attack)
Lockhart–Martinelli parameter | $\chi $ | $\chi ={\frac {m_{\ell }}{m_{g}}}{\sqrt {\frac {\rho _{g}}{\rho _{\ell }}}}$ | two-phase flow (flow of wet gases; liquid fraction)[15]
Mach number | M or Ma | $\mathrm {M} ={\frac {v}{v_{\mathrm {sound} }}}$ | gas dynamics (compressible flow; dimensionless velocity)
Magnetic Reynolds number | Rm | $\mathrm {R} _{\mathrm {m} }={\frac {UL}{\eta }}$ | magnetohydrodynamics (ratio of magnetic advection to magnetic diffusion)
Manning roughness coefficient | n | – | open channel flow (flow driven by gravity)[16]
Marangoni number | Mg | $\mathrm {Mg} =-{\frac {\mathrm {d} \sigma }{\mathrm {d} T}}{\frac {L\Delta T}{\eta \alpha }}$ | fluid mechanics (Marangoni flow; thermal surface tension forces over viscous forces)
Markstein number | ${\mathcal {M}}$ | ${\mathcal {M}}={\frac {{\mathcal {L}}_{b}}{\delta _{L}}}$ | fluid dynamics, combustion (turbulent combustion flames)
Morton number | Mo | $\mathrm {Mo} ={\frac {g\mu _{c}^{4}\,\Delta \rho }{\rho _{c}^{2}\sigma ^{3}}}$ | fluid dynamics (determination of bubble/drop shape)
Nusselt number | Nu | $\mathrm {Nu} _{d}={\frac {hd}{k}}$ | heat transfer (forced convection; ratio of convective to conductive heat transfer)
Ohnesorge number | Oh | $\mathrm {Oh} ={\frac {\mu }{\sqrt {\rho \sigma L}}}={\frac {\sqrt {\mathrm {We} }}{\mathrm {Re} }}$ | fluid dynamics (atomization of liquids, Marangoni flow)
Péclet number (heat transfer) | Pe | $\mathrm {Pe} _{d}={\frac {du\rho c_{p}}{k}}=\mathrm {Re} _{d}\,\mathrm {Pr} $ | heat transfer (advection–diffusion problems; total momentum transfer to molecular heat transfer)
Péclet number (mass transfer) | Pe | $\mathrm {Pe} _{d}={\frac {du}{D}}=\mathrm {Re} _{d}\,\mathrm {Sc} $ | mass transfer (advection–diffusion problems; total momentum transfer to diffusive mass transfer)
Prandtl number | Pr | $\mathrm {Pr} ={\frac {\nu }{\alpha }}={\frac {c_{p}\mu }{k}}$ | heat transfer (ratio of viscous diffusion rate over thermal diffusion rate)
Pressure coefficient | CP | $C_{p}={p-p_{\infty } \over {\frac {1}{2}}\rho _{\infty }V_{\infty }^{2}}$ | aerodynamics, hydrodynamics (pressure experienced at a point on an airfoil; dimensionless pressure variable)
Rayleigh number | Ra | $\mathrm {Ra} _{x}={\frac {g\beta }{\nu \alpha }}(T_{s}-T_{\infty })x^{3}$ | heat transfer (buoyancy versus viscous forces in free convection)
Reynolds number | Re | $\mathrm {Re} _{L}={\frac {vL\rho }{\mu }}$ | fluid mechanics (ratio of fluid inertial and viscous forces)[1]
Richardson number | Ri | $\mathrm {Ri} ={\frac {gh}{u^{2}}}={\frac {1}{\mathrm {Fr} ^{2}}}$ | fluid dynamics (effect of buoyancy on flow stability; ratio of potential over kinetic energy)[17]
Roshko number | Ro | $\mathrm {Ro} ={fL^{2} \over \nu }=\mathrm {St} \,\mathrm {Re} $ | fluid dynamics (oscillating flow, vortex shedding)
Schmidt number | Sc | $\mathrm {Sc} _{D}={\frac {\nu }{D}}$ | mass transfer (viscous over molecular diffusion rate)[18]
Shape factor | H | $H={\frac {\delta ^{*}}{\theta }}$ | boundary layer flow (ratio of displacement thickness to momentum thickness)
Sherwood number | Sh | $\mathrm {Sh} _{D}={\frac {KL}{D}}$ | mass transfer (forced convection; ratio of convective to diffusive mass transport)
Sommerfeld number | S | $\mathrm {S} =\left({\frac {r}{c}}\right)^{2}{\frac {\mu N}{P}}$ | hydrodynamic lubrication (boundary lubrication)[19]
Stanton number | St | $\mathrm {St} ={\frac {h}{c_{p}\rho V}}={\frac {\mathrm {Nu} }{\mathrm {Re} \,\mathrm {Pr} }}$ | heat transfer and fluid dynamics (forced convection)
Stokes number | Stk or Sk | $\mathrm {Stk} ={\frac {\tau U_{o}}{d_{c}}}$ | particle suspensions (ratio of characteristic time of particle to time of flow)
Strouhal number | St or Sr | $\mathrm {St} ={\omega L \over v}$ | fluid dynamics (continuous and pulsating flow; nondimensional frequency)[20]
Stuart number | N | $\mathrm {N} ={\frac {B^{2}L_{c}\sigma }{\rho U}}={\frac {\mathrm {Ha} ^{2}}{\mathrm {Re} }}$ | magnetohydrodynamics (ratio of electromagnetic to inertial forces)
Taylor number | Ta | $\mathrm {Ta} ={\frac {4\Omega ^{2}R^{4}}{\nu ^{2}}}$ | fluid dynamics (rotating fluid flows; inertial forces due to rotation of a fluid versus viscous forces)
Ursell number | U | $\mathrm {U} ={\frac {H\,\lambda ^{2}}{h^{3}}}$ | wave mechanics (nonlinearity of surface gravity waves on a shallow fluid layer)
Vadasz number | Va | $\mathrm {Va} ={\frac {\phi \,\mathrm {Pr} }{\mathrm {Da} }}$ | porous media (governs the effects of porosity $\phi $, the Prandtl number and the Darcy number on flow in a porous medium)[21]
Wallis parameter | j* | $j^{*}=R\left({\frac {\omega \rho }{\mu }}\right)^{\frac {1}{2}}$ | multiphase flows (nondimensional superficial velocity)[22]
Weber number | We | $\mathrm {We} ={\frac {\rho v^{2}l}{\sigma }}$ | multiphase flow (strongly curved surfaces; ratio of inertia to surface tension)
Weissenberg number | Wi | $\mathrm {Wi} ={\dot {\gamma }}\lambda $ | viscoelastic flows (shear rate times the relaxation time)[23]
Womersley number | $\alpha $ | $\alpha =R\left({\frac {\omega \rho }{\mu }}\right)^{\frac {1}{2}}$ | biofluid mechanics (continuous and pulsating flows; ratio of pulsatile flow frequency to viscous effects)[24]
Zel'dovich number | $\beta $ | $\beta ={\frac {E}{RT_{f}}}{\frac {T_{f}-T_{o}}{T_{f}}}$ | fluid dynamics, combustion (measure of activation energy)
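Several of these groups are related by simple products; for instance, the heat-transfer Péclet number factors as Re·Pr. The sketch below checks this numerically (the property values are illustrative water-like numbers chosen by us, and the function names are ours):

```python
def reynolds(v, length, rho, mu):
    """Re = v*L*rho/mu: ratio of fluid inertial to viscous forces."""
    return v * length * rho / mu

def prandtl(cp, mu, k):
    """Pr = cp*mu/k: viscous over thermal diffusion rate."""
    return cp * mu / k

def peclet_heat(v, length, rho, cp, k):
    """Pe = L*v*rho*cp/k, which factors as Re * Pr."""
    return length * v * rho * cp / k

# Illustrative water-like properties in SI units (v, L, rho, mu, cp, k).
v, L, rho, mu, cp, k = 1.0, 0.05, 998.0, 1.0e-3, 4182.0, 0.6
re, pr = reynolds(v, L, rho, mu), prandtl(cp, mu, k)
pe = peclet_heat(v, L, rho, cp, k)
assert abs(pe - re * pr) < 1e-9 * pe  # Pe = Re * Pr
```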
Solids
Name | Standard symbol | Definition | Field of application
Coefficient of kinetic friction | $\mu _{k}$ | – | mechanics (friction of solid bodies in translational motion)
Coefficient of static friction | $\mu _{s}$ | – | mechanics (friction of solid bodies at rest)
Dieterich–Ruina–Rice number | $\mathrm {R_{u}} $ | $\mathrm {R_{u}} ={\frac {W}{L}}{\frac {(b-a){\bar {\sigma }}}{G}}$ | mechanics, friction, rheology, geophysics (stiffness ratio for frictional contacts)[25]
Föppl–von Kármán number | $\gamma $ | $\gamma ={\frac {Yr^{2}}{\kappa }}$ | virology, solid mechanics (thin-shell buckling)
Rockwell scale | – | – | mechanical hardness (indentation hardness of a material)
Rolling resistance coefficient | Crr | $C_{rr}={\frac {F}{N_{f}}}$ | vehicle dynamics (ratio of force needed for motion of a wheel over the normal force)
Optics
Name | Standard symbol | Definition | Field of application
Abbe number | V | $V={\frac {n_{d}-1}{n_{F}-n_{C}}}$ | optics (dispersion in optical materials)
f-number | N | $N={\frac {f}{D}}$ | optics, photography (ratio of focal length to diameter of aperture)
Fresnel number | F | ${\mathit {F}}={\frac {a^{2}}{L\lambda }}$ | optics (slit diffraction)[26]
Refractive index | n | $n={\frac {c}{v}}$ | electromagnetism, optics (speed of light in vacuum over speed of light in a material)
Transmittance | T | $T={\frac {I}{I_{0}}}$ | optics, spectroscopy (ratio of the intensities of radiation exiting through and incident on a sample)
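Two of these definitions are easy to sanity-check numerically. A short sketch (the example values and names are ours, not from the table):

```python
def f_number(focal_length, aperture_diameter):
    """N = f/D: focal length over aperture diameter."""
    return focal_length / aperture_diameter

def fresnel_number(a, distance, wavelength):
    """F = a^2/(L*lambda) for an aperture of characteristic size a at distance L."""
    return a**2 / (distance * wavelength)

# A 50 mm lens stopped to a 25 mm aperture is an f/2 lens.
assert f_number(0.050, 0.025) == 2.0
# Green light (500 nm) through a 1 mm aperture observed at 1 m gives F = 2.
assert abs(fresnel_number(1.0e-3, 1.0, 500e-9) - 2.0) < 1e-12
```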
Mathematics and statistics
Name | Standard symbol | Definition | Field of application
Coefficient of determination | $R^{2}$ | – | statistics (proportion of variance explained by a statistical model)
Coefficient of variation | $\frac{\sigma}{\mu}$ | $\frac{\sigma}{\mu}$ | statistics (ratio of standard deviation to expectation)
Correlation | ρ or r | $\frac{\operatorname{E}[(X - \mu_{X})(Y - \mu_{Y})]}{\sigma_{X} \sigma_{Y}}$ | statistics (measure of linear dependence)
Courant–Friedrich–Levy number | C or 𝜈 | $C = \frac{u\,\Delta t}{\Delta x}$ | mathematics (numerical solutions of hyperbolic PDEs)[27]
Euler's number | e | $e = \sum_{n=0}^{\infty} \frac{1}{n!} \approx 2.71828$ | mathematics (base of the natural logarithm)
Feigenbaum constants | $\alpha$, $\delta$ | $\alpha \approx 2.50290$, $\delta \approx 4.66920$ | chaos theory (period doubling)[28]
Golden ratio | $\varphi$ | $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.61803$ | mathematics, aesthetics (long side length of self-similar rectangle)
Pi | $\pi$ | $\pi = \frac{C}{D} \approx 3.14159$ | mathematics (ratio of a circle's circumference to its diameter)
Radian measure | rad | $\text{arc length}/\text{radius}$ | mathematics (measurement of planar angles, 1 radian = 180/π degrees)
Steradian measure | sr | – | measurement of solid angles
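As a quick numerical check (illustrative, not part of the table), the series definition of Euler's number and the closed form of the golden ratio reproduce the rounded values listed above:

```python
import math

# Euler's number via its series definition e = sum(1/n!)
e_approx = sum(1 / math.factorial(n) for n in range(20))

# Golden ratio via its closed form (1 + sqrt(5)) / 2
phi = (1 + math.sqrt(5)) / 2

print(round(e_approx, 5))  # 2.71828
print(round(phi, 5))       # 1.61803
```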
Geography, geology and geophysics
Name | Standard symbol | Definition | Field of application
Albedo | $\alpha$ | $\alpha = (1 - D)\bar{\alpha}(\theta_{i}) + D\bar{\bar{\alpha}}$ | climatology, astronomy (reflectivity of surfaces or bodies)
Love numbers | h, k, l | – | geophysics (solidity of earth and other planets)
Porosity | $\phi$ | $\phi = \frac{V_{\mathrm{V}}}{V_{\mathrm{T}}}$ | geology, porous media (void fraction of the medium)
Rossby number | Ro | $\mathrm{Ro} = \frac{U}{L f}$ | geophysics (ratio of inertial to Coriolis force)
Sport
Name | Standard symbol | Definition | Field of application
Blondeau number | $B_{\kappa}$ | $\mathrm{B_{\kappa}} = \frac{t_{g} v_{f}}{l_{mf}}$ | sport science, team sports[29]
Gain ratio | – | – | bicycling (system of representing gearing; length traveled over length pedaled)[30]
Goal average | – | $\text{Goal average} = \frac{\text{goals scored}}{\text{goals conceded}}$ | Association football[31]
Runs Per Wicket Ratio | RpW ratio | $\text{RpW ratio} = \frac{\text{runs scored}}{\text{wickets lost}} \div \frac{\text{runs conceded}}{\text{wickets taken}}$ | cricket[32]
Winning percentage | – | various, e.g. $\frac{\text{Games won}}{\text{Games played}}$ or $\frac{\text{Points won}}{\text{Points contested}}$ | various sports
Other fields
Name | Standard symbol | Definition | Field of application
Capacity factor | – | $\frac{\text{actual electrical energy output}}{\text{maximum possible electrical energy output}}$ | energy
Cohesion number | Coh | $Coh = \frac{1}{\rho g} \left(\frac{\Gamma^{5}}{{E^{*}}^{2} {R^{*}}^{8}}\right)^{\frac{1}{3}}$ | chemical engineering, material science, mechanics (a scale to show the energy needed for detaching two solid particles)[33][34]
Cost of transport | COT | $\mathrm{COT} = \frac{E}{mgd}$ | energy efficiency, economics (ratio of energy input to kinetic motion)
Damping ratio | $\zeta$ | $\zeta = \frac{c}{2\sqrt{km}}$ | mechanics, electrical engineering (the level of damping in a system)
Darcy number | Da | $\mathrm{Da} = \frac{K}{d^{2}}$ | porous media (ratio of permeability to cross-sectional area)
Decibel | dB | – | acoustics, electronics, control theory (ratio of two intensities or powers of a wave)
Dukhin number | Du | $\mathrm{Du} = \frac{\kappa^{\sigma}}{\mathrm{K}_{m} a}$ | colloid science (ratio of electric surface conductivity to the electric bulk conductivity in heterogeneous systems)
Elasticity (economics) | E | $E_{x,y} = \frac{\partial \ln(x)}{\partial \ln(y)} = \frac{\partial x}{\partial y} \frac{y}{x}$ | economics (response of demand or supply to price changes)
Fine-structure constant | $\alpha$ | $\alpha = \frac{e^{2}}{4\pi \varepsilon_{0} \hbar c}$ | quantum electrodynamics (QED) (coupling constant characterizing the strength of the electromagnetic interaction)
Gain | – | – | electronics (signal output to signal input)
Havnes parameter | $P_{H}$ | $P_{H} = \frac{Z_{d} n_{d}}{n_{i}}$ | dusty plasma physics (ratio of the total charge $Z_{d}$ carried by the dust particles $d$ to the charge carried by the ions $i$, with $n$ the number density of particles)
Helmholtz number | $He$ | $He = \frac{\omega a}{c_{0}} = k_{0} a$ | duct acoustics (the most important parameter in duct acoustics: if $\omega$ is the dimensional frequency, then $k_{0}$ is the corresponding free-field wavenumber and $He$ the corresponding dimensionless frequency)[35]
Iribarren number | Ir | $\mathrm{Ir} = \frac{\tan \alpha}{\sqrt{H/L_{0}}}$ | wave mechanics (breaking surface gravity waves on a slope)
Load factor | – | $\frac{\text{average load}}{\text{peak load}}$ | energy
Lundquist number | S | $S = \frac{\mu_{0} L V_{A}}{\eta}$ | plasma physics (ratio of a resistive time to an Alfvén wave crossing time in a plasma)
Peel number | $N_{\mathrm{P}}$ | $N_{\mathrm{P}} = \frac{\text{Restoring force}}{\text{Adhesive force}}$ | coating (adhesion of microstructures with substrate)[36]
Perveance | K | $K = \frac{I}{I_{0}}\,\frac{2}{\beta^{3}\gamma^{3}}(1 - \gamma^{2} f_{e})$ | charged particle transport (measure of the strength of space charge in a charged particle beam)
Pierce parameter | $C$ | $C^{3} = \frac{Z_{c} I_{K}}{4 V_{K}}$ | traveling wave tubes
Pixel | px | – | digital imaging (smallest addressable unit)
Beta (plasma physics) | $\beta$ | $\beta = \frac{n k_{B} T}{B^{2}/2\mu_{0}}$ | plasma physics and fusion power (ratio of plasma thermal pressure to magnetic pressure, controlling the level of turbulence in a magnetised plasma)
Poisson's ratio | $\nu$ | $\nu = -\frac{\mathrm{d}\varepsilon_{\mathrm{trans}}}{\mathrm{d}\varepsilon_{\mathrm{axial}}}$ | elasticity (strain in transverse and longitudinal direction)
Power factor | pf | $pf = \frac{P}{S}$ | electrical (real power to apparent power)
Power number | Np | $N_{p} = \frac{P}{\rho n^{3} d^{5}}$ | fluid mechanics (power consumption by rotary agitators; resistance force versus inertia force)
Prater number | β | $\beta = \frac{-\Delta H_{r} D_{TA}^{e} C_{AS}}{\lambda^{e} T_{s}}$ | reaction engineering (ratio of heat evolution to heat conduction within a catalyst pellet)[37]
Q factor | Q | $Q = 2\pi f_{r} \frac{\text{Energy Stored}}{\text{Power Loss}}$ | physics, engineering (damping ratio of oscillator or resonator; energy stored versus energy lost)
Relative density | RD | $RD = \frac{\rho_{\mathrm{substance}}}{\rho_{\mathrm{reference}}}$ | hydrometers, material comparisons (ratio of density of a material to a reference material, usually water)
Relative permeability | $\mu_{r}$ | $\mu_{r} = \frac{\mu}{\mu_{0}}$ | magnetostatics (ratio of the permeability of a specific medium to free space)
Relative permittivity | $\varepsilon_{r}$ | $\varepsilon_{r} = \frac{C_{x}}{C_{0}}$ | electrostatics (ratio of capacitance of test capacitor with dielectric material versus vacuum)
Rouse number | P or Z | $\mathrm{P} = \frac{w_{s}}{\kappa u_{*}}$ | sediment transport (ratio of the sediment fall velocity and the upwards velocity of grain)
Shields parameter | $\tau_{*}$ or $\theta$ | $\tau_{*} = \frac{\tau}{(\rho_{s} - \rho) g D}$ | sediment transport (threshold of sediment movement due to fluid motion; dimensionless shear stress)
Specific gravity | SG | – | (same as Relative density)
Stefan number | Ste | $\mathrm{Ste} = \frac{c_{p} \Delta T}{L}$ | phase change, thermodynamics (ratio of sensible heat to latent heat)
Strain | $\epsilon$ | $\epsilon = \frac{\partial F}{\partial X} - 1$ | materials science, elasticity (displacement between particles in the body relative to a reference length)
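Many of the entries above are simple ratios that can be evaluated directly from their definitions. The sketch below computes two of them, the damping ratio and the relative density; the numeric inputs are illustrative assumptions, not values from any source:

```python
import math

# Damping ratio: zeta = c / (2 * sqrt(k * m))  (definition from the table)
c, k, m = 4.0, 100.0, 1.0       # assumed damping coefficient, stiffness, mass
zeta = c / (2 * math.sqrt(k * m))

# Relative density: RD = rho_substance / rho_reference (water as reference)
rho_substance, rho_reference = 789.0, 1000.0   # roughly ethanol vs. water
RD = rho_substance / rho_reference

print(zeta)  # 0.2 (zeta < 1: underdamped)
print(RD)    # 0.789
```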
References
1. "Table of Dimensionless Numbers" (PDF). Retrieved 2009-11-05.
2. Becker, A.; Hüttinger, K. J. (1998). "Chemistry and kinetics of chemical vapor deposition of pyrocarbon—II pyrocarbon deposition from ethylene, acetylene and 1,3-butadiene in the low temperature regime". Carbon. 36 (3): 177. doi:10.1016/S0008-6223(97)00175-9.
3. Incropera, Frank P. (2007). Fundamentals of heat and mass transfer. John Wiley & Sons, Inc. p. 376. ISBN 9780470055540.
4. Popov, Konstantin I.; Djokić, Stojan S.; Grgur, Branimir N. (2002). Fundamental Aspects of Electrometallurgy. Boston, MA: Springer. pp. 101–102. ISBN 978-0-306-47564-1.
5. Kuneš, J. (2012). "Technology and Mechanical Engineering". Dimensionless Physical Quantities in Science and Engineering. pp. 353–390. doi:10.1016/B978-0-12-416013-2.00008-7. ISBN 978-0-12-416013-2.
6. Asakuma, Y. (2020). "A dimensionless number for microwave non-equilibrium local heating through surfactant desorption". Colloids and Surfaces A: Physicochemical and Engineering Aspects. Vol. 591. p. 124560.
7. Bagnold number Archived 2005-05-10 at the Wayback Machine
8. Bhattacharjee S.; Grosshandler W.L. (1988). "The formation of wall jet near a high temperature wall under microgravity environment". ASME MTD. 96: 711–6. Bibcode:1988nht.....1..711B.
9. Paoletti S.; Rispoli F.; Sciubba E. (1989). "Calculation of exergetic losses in compact heat exchanger passager". ASME AES. 10 (2): 21–9.
10. Bond number Archived 2012-03-05 at the Wayback Machine
11. "Home". OnePetro. 2015-05-04. Retrieved 2015-05-08.
12. Schetz, Joseph A. (1993). Boundary Layer Analysis. Englewood Cliffs, NJ: Prentice-Hall, Inc. pp. 132–134. ISBN 0-13-086885-X.
13. "Fanning friction factor". Archived from the original on 2013-12-20. Retrieved 2015-10-07.
14. Tan, R. B. H.; Sundar, R. (2001). "On the froth–spray transition at multiple orifices". Chemical Engineering Science. 56 (21–22): 6337. Bibcode:2001ChEnS..56.6337T. doi:10.1016/S0009-2509(01)00247-0.
15. Lockhart–Martinelli parameter
16. "Manning coefficient" (PDF). 10 June 2013. (109 KB)
17. Richardson number Archived 2015-03-02 at the Wayback Machine
18. Schmidt number Archived 2010-01-24 at the Wayback Machine
19. Sommerfeld number
20. Strouhal number, Engineering Toolbox
21. Straughan, B. (2001). "A sharp nonlinear stability threshold in rotating porous convection". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 457 (2005): 87–88. Bibcode:2001RSPSA.457...87S. doi:10.1098/rspa.2000.0657. S2CID 122753376.
22. Petritsch, G.; Mewes, D. (1999). "Experimental investigations of the flow patterns in the hot leg of a pressurized water reactor". Nuclear Engineering and Design. 188: 75–84. doi:10.1016/S0029-5493(99)00005-9.
23. Weissenberg number Archived 2006-11-01 at the Wayback Machine
24. Womersley number Archived 2009-03-25 at the Wayback Machine
25. Barbot, S. (2019). "Slow-slip, slow earthquakes, period-two cycles, full and partial ruptures, and deterministic chaos in a single asperity fault". Tectonophysics. 768: 228171. Bibcode:2019Tectp.76828171B. doi:10.1016/j.tecto.2019.228171.
26. Fresnel number Archived 2011-10-01 at the Wayback Machine
27. Courant–Friedrich–Levy number Archived 2008-06-05 at the Wayback Machine
28. Feigenbaum constants
29. Blondeau, J. (2021). "The influence of field size, goal size and number of players on the average number of goals scored per game in variants of football and hockey: the Pi-theorem applied to team sports". Journal of Quantitative Analysis in Sports. 17 (2): 145–154. doi:10.1515/jqas-2020-0009. S2CID 224929098.
30. Gain Ratio – Sheldon Brown
31. "goal average". Cambridge Dictionary. Retrieved 11 August 2021.
32. "World Test Championship Playing Conditions: What's different?" (PDF). International Cricket Council. Retrieved 11 August 2021.
33. Behjani, Mohammadreza Alizadeh; Rahmanian, Nejat; Ghani, Nur Fardina bt Abdul; Hassanpour, Ali (2017). "An investigation on process of seeded granulation in a continuous drum granulator using DEM" (PDF). Advanced Powder Technology. 28 (10): 2456–2464. doi:10.1016/j.apt.2017.02.011.
34. Alizadeh Behjani, Mohammadreza; Hassanpour, Ali; Ghadiri, Mojtaba; Bayly, Andrew (2017). "Numerical Analysis of the Effect of Particle Shape and Adhesion on the Segregation of Powder Mixtures". EPJ Web of Conferences. 140: 06024. Bibcode:2017EPJWC.14006024A. doi:10.1051/epjconf/201714006024. ISSN 2100-014X.
35. Rienstra, S. W. (2015). Fundamentals of Duct Acoustics. Von Karman Institute Lecture Notes.
36. Van Spengen, W. M.; Puers, R.; De Wolf, I. (2003). "The prediction of stiction failures in MEMS". IEEE Transactions on Device and Materials Reliability. 3 (4): 167. doi:10.1109/TDMR.2003.820295.
37. Davis, Mark E.; Davis, Robert J. (2012). Fundamentals of Chemical Reaction Engineering. Dover. p. 215. ISBN 978-0-486-48855-4.
What is the sum of the odd integers from 11 through 39, inclusive?
We want to sum the arithmetic series $11 + 13 + \cdots + 39$, which has common difference 2. Suppose the series has $n$ terms. Since 39 is the $n$th term, $39 = 11 + (n-1)\cdot 2$; solving, we get $n = 15$. The sum of an arithmetic series is equal to the average of the first and last term, multiplied by the number of terms, so the sum is $(11 + 39)/2 \cdot 15 = \boxed{375}$.
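A direct enumeration confirms the count of terms and the total (an illustrative check, not part of the original solution):

```python
# Odd integers from 11 through 39, inclusive
odds = list(range(11, 40, 2))
assert len(odds) == 15   # matches n = 15 above

total = sum(odds)
print(total)  # 375
```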
\begin{document}
\title{\LARGE \bf Revisiting Normalized Gradient Descent:\\ Fast Evasion of Saddle Points}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{These authors contributed equally.} \footnotetext[2]{Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA ([email protected], [email protected]).} \footnotetext[3]{Department of Mathematics, Pennsylvania State University, State College, PA, USA ([email protected]).}
\renewcommand{\thefootnote}{\arabic{footnote}}
\thispagestyle{empty} \begin{abstract} The note considers \emph{normalized gradient descent} (NGD), a natural modification of classical gradient descent (GD) in optimization problems. A serious shortcoming of GD in non-convex problems is that GD may take arbitrarily long to escape from the neighborhood of a saddle point. This issue can make the convergence of GD arbitrarily slow, particularly in high-dimensional non-convex problems where the relative number of saddle points is often large. The paper focuses on continuous-time descent. It is shown that, contrary to standard GD, NGD escapes saddle points ``quickly.'' In particular, it is shown that (i) NGD ``almost never'' converges to saddle points and (ii) the time required for NGD to escape from a ball of radius $r$ about a saddle point $x^*$ is at most $5\sqrt{\kappa}r$, where $\kappa$ is the condition number of the Hessian of $f$ at $x^*$. As an application of this result, a global convergence-time bound is established for NGD under mild assumptions. \end{abstract}
\section{Introduction} Given a differentiable function $f:\mathbb{R}^d\to\mathbb{R}$, the canonical first-order optimization procedure is the method of gradient descent (GD). In continuous-time, GD is defined by the differential equation \begin{equation} \label{eqn:grad-dynamics} \dot \vx = -\nabla f(\vx) \end{equation} and in discrete-time, GD is defined by the difference equation \begin{equation} \label{eq_GD_DE} x_{n+1} = x_n - \alpha_n\nabla f(x_n), \end{equation} where $\{\alpha_n\}_{n\geq 1}$ is some step-size sequence. The discrete-time GD process \eqref{eq_GD_DE} is merely a \emph{sample and hold} (or \emph{Euler}) discretization of the differential equation \eqref{eqn:grad-dynamics}, and the properties of solutions of \eqref{eqn:grad-dynamics} and \eqref{eq_GD_DE} are closely related \cite{khalil1996noninear,benaim1996dynamical,stoer2013introduction}. Owing to their simplicity and ease of implementation, GD and related first-order optimization procedures are popular in practice, particularly in large-scale problems where second-order information such as the Hessian can be costly to compute \cite{boyd2004convex}. When the objective function $f$ is convex, GD can be both practical and effective as an optimization procedure. However, when $f$ is non-convex, GD can perform poorly in practice, even when the goal is merely to find a local minimum.
The underlying issue is the presence of saddle points in non-convex functions; the gradient $\nabla f(x)$ vanishes near saddle points, which causes GD to ``stall'' in neighboring regions \cite{dauphin2014identifying} (see also Section \ref{sec:saddle-points-GD}). This both slows the overall convergence rate and makes detection of local minima difficult. The detrimental effects of this issue become particularly severe in high-dimensional problems where the number of saddle points may proliferate. Recent work \cite{dauphin2014identifying} showed that in some high-dimensional problems of interest, the number of saddle points increases exponentially relative to the number of local minima, which can dramatically increase the time required for GD to find even a local minimum.
Since first-order dynamics such as GD tend to be relatively simple to implement in large-scale applications, there has been growing interest in understanding the issue of saddle-point slowdown in non-convex problems and how to overcome it \cite{dauphin2014identifying,ge2015escaping,JordanStableManifold,JordanPNAS,AartiSlowdown,reddi2017saddles}. For example, there has been a surge of recent research on this topic in the machine learning community where large-scale non-convex optimization and first-order methods are of growing importance in many applications \cite{choromanska2015loss,sun2016geometric,dauphin2014identifying,sun2015complete,ge2015escaping}.
One intuitively simple method that has been proposed to mitigate this issue is to consider \emph{normalized} gradient descent (NGD). In continuous time, NGD (originally introduced in \cite{cortes2006}) is defined by the differential equation \begin{equation}\label{eqn:normalized-dynamics}
\dot\vx = -\frac{\nabla f(\vx)}{\|\nabla f(\vx)\|} \end{equation} and in discrete time, NGD (originally introduced in \cite{Nesterov1984NGD}) is defined by the difference equation \begin{equation}\label{eq_NGD_DE}
x_{n+1} = x_{n} - \alpha_n\frac{\nabla f(x_n)}{\|\nabla f(x_n)\|}, \end{equation} where $\{\alpha_n\}_{n\geq 1}$ is some step-size sequence. As with GD, discrete-time NGD \eqref{eq_NGD_DE} is merely a sample and hold discretization of its continuous-time counterpart \eqref{eqn:normalized-dynamics}.
The normalized gradient $\frac{\nabla f(\vx)}{\|\nabla f(\vx)\|}$ preserves the direction of the gradient but ignores magnitude. Because $\frac{\nabla f(\vx)}{\|\nabla f(\vx)\|}$ does not vanish near saddle points, the intuitive expectation (corroborated by evidence \cite{Levy}) is that NGD should not slow down in the neighborhood of saddle points and should therefore escape ``quickly.''
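To make the contrast concrete, here is a minimal numerical sketch (an illustration of the discrete updates above, not code from the paper) on the saddle $f(x) = \tfrac{1}{2}(x_1^2 - x_2^2)$: the GD step vanishes with the gradient near the saddle, while the NGD step always has length $\alpha_n$.

```python
import numpy as np

def grad(x):
    # gradient of f(x) = 0.5 * (x1^2 - x2^2); the origin is a saddle point
    return np.array([x[0], -x[1]])

def gd_step(x, alpha):
    # classical gradient descent update
    return x - alpha * grad(x)

def ngd_step(x, alpha):
    # normalized gradient descent update
    g = grad(x)
    return x - alpha * g / np.linalg.norm(g)

x = np.array([0.0, 1e-6])  # very close to the saddle
print(np.linalg.norm(gd_step(x, 0.1) - x))   # ~1e-7: GD barely moves
print(np.linalg.norm(ngd_step(x, 0.1) - x))  # 0.1: NGD moves a full step
```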
In this note, our goal is to elucidate the key differences between GD and NGD and, more importantly, give rigorous theoretical justification to the intuition that NGD ``escapes saddle points quickly.'' We will focus, in this work, on continuous-time descent. From the control perspective this may be seen as extending the seminal work of \cite{cortes2006} by characterizing saddle-point behavior of NGD. From the optimization perspective, focusing on continuous-time dynamics allows us to more easily characterize the fundamental properties of NGD using a wealth of available analysis tools and follows in the spirit of recent works studying optimization processes through the lens of differential equations \cite{su2014differential,krichene2015accelerated}.
We have three main results, which we state informally here:
\textbf{Main Result 1} (Theorem \ref{prop:stable-manifold}): Our first main result is to show that NGD can only converge to saddle points from a set of initial conditions with measure zero.
We note that this result implies that, generically, NGD only converges to minima of $f$.\footnote{When we say that a property holds generically for an ODE, we mean that it holds from all initial conditions except, possibly, some set with Lebesgue measure zero.} However, it provides no guarantees about convergence time or saddle-point escape time. (Indeed, this same result is known to hold for GD, which performs poorly in practice due to saddle-point slowdown.)
This result follows as a relatively straightforward application of the stable-manifold theorem from classical ODE theory (see Proposition \ref{prop:arc-length}, Theorem \ref{prop:stable-manifold} and proofs thereof).
\textbf{Main Result 2} (Theorem \ref{thm:main-thm}): Our second
main result is to show that NGD always escapes from saddle points ``quickly.'' More precisely, we show that the maximum amount of time a trajectory of NGD can spend in a ball of radius $r>0$ about a (non-degenerate) saddle point $x^*$ is $5\sqrt{\kappa}r$, where $\kappa$ is the condition number of the Hessian of $f$ at $x^*$ (see Theorem \ref{thm:main-thm}).\footnote{In Theorem \ref{thm:main-thm} we show a slightly more refined result than this; namely, we show that the time spent in the $r$-ball can be upper bounded by $C\sqrt{\kappa}r$, where $C$ is any constant strictly greater than 4. For clarity of presentation we simply fix the constant to be 5 here. See Remark \ref{remark:constant1} for more details.}
We note that this result is independent of the dimension of the problem.
In contrast to this, the saddle-point escape time of GD (i.e., the maximum amount of time a trajectory of GD may take to leave a ball of radius $r$ about a saddle point) is always \emph{infinite}, independent of the function $f$, the particular saddle point $x^*$, or the dimension of the problem. (See Theorem \ref{thm:main-thm} for a precise definition of saddle-point escape time and Remark \ref{remark:GD-escape-time} for a discussion of GD saddle-point escape time.) This is precisely the issue which causes GD to perform poorly in high-dimensional problems with many saddle points.
While a characterization of saddle-point escape time such as Theorem \ref{thm:main-thm} is essential in understanding how NGD can mitigate the problem of saddle-point slowdown in high dimensional optimization \cite{dauphin2014identifying}, the issue is challenging to study due to the discontinuity in the right-hand side of \eqref{eqn:normalized-dynamics}. In particular, the system is not amenable to classical analytical techniques.
We prove Theorem \ref{thm:main-thm} by studying the rate of ``potential energy dissipation'' (to use an analogy from physics) of NGD near saddle points. The methods used are flexible and can be applied to a variety of discontinuous dynamical systems (see Remark \ref{remark-proof-techniques} and proof of Proposition \ref{prop:time-bound}).
\textbf{Main Result 3} (Corollary \ref{cor-finite-time}): As our final main result, using the local saddle-point analysis noted above (Theorem \ref{thm:main-thm}) we provide a simple global bound on the convergence time of NGD under mild assumptions on $f$.\footnote{We note that classical first-order methods converge to a local minimum over an infinite time horizon. Continuous-time NGD, on the other hand, converges in finite time (cf. \cite{cortes2006}). Hence the bound we provide concerns the convergence time rather than the convergence rate.}
\textbf{Literature Review}: Continuous-time NGD dynamics were first introduced by Cortes \cite{cortes2006} in the context of distributed multi-agent coordination. In \cite{cortes2006} it was shown that NGD converges to critical points of $f$ in finite time and this result was used to develop distributed gradient coordination algorithms that achieve a desired task in finite time. Our results differ from \cite{cortes2006} primarily in that we characterize the saddle-point behavior of NGD, including demonstrating non-convergence to saddle points and providing a strong characterization of saddle-point escape time. Furthermore, our results differ from \cite{cortes2006} in that (i) our results show that NGD almost always converges to local minima rather than just the set of critical points of $f$ and
(ii) \cite{cortes2006} considered only local bounds on the convergence time of NGD to local minima. Because we characterize the saddle point behavior of NGD, our results enable \emph{global} bounds on the convergence time of NGD to minima of non-convex functions (see Corollary \ref{cor-finite-time}).
Discrete-time NGD was first introduced by Nesterov \cite{Nesterov1984NGD} and variants have received increasing attention in the optimization and machine learning communities \cite{kiwiel2001convergence,konnov2003convergence,hazan2015beyond,Levy}. The problem of coping with saddle points in non-convex optimization has received significant recent attention (see \cite{dauphin2014identifying,ge2015escaping,JordanStableManifold,JordanPNAS,AartiSlowdown,reddi2017saddles} and references therein). Of particular relevance to the present work are results dealing with first-order methods. Recent work along these lines includes the following. The work \cite{JordanStableManifold} shows that the classical stable manifold theorem implies that gradient descent only converges to minima. The work \cite{AartiSlowdown} shows that, even with random initialization, discrete-time GD can take exponential time to escape saddle points. The work \cite{ge2015escaping} showed that noisy discrete-time GD converges to a local minimum in a polynomial number of iterations. Our work differs from \cite{ge2015escaping} primarily in that we investigate the role of normalization of the dynamics (rather than noise injection) as a means of accelerating escape from saddle points.
The use of normalization in GD has also been studied in \cite{Levy} where it was shown that discrete-time NGD with noise injection can outperform GD with noise injection \cite{ge2015escaping} in terms of dimensional dependence and the number of iterations required to reach the basin of a local minimum. Numerical simulations of discrete-time noisy NGD and comparisons with discrete-time noisy GD in several problems of interest were also performed in \cite{Levy}. Our work differs from \cite{Levy} in that we study the continuous-time deterministic NGD dynamics \eqref{eqn:normalized-dynamics} (which may be viewed as the mean dynamics of the noise-injected discrete-time NGD \cite{Levy} as the step size is brought to zero), we characterize the stable-manifold for these dynamics near saddle points, and we explicitly characterize the saddle-point escape time.
The work \cite{jin2017escape} improved on the dimensional dependence of the results of \cite{ge2015escaping} and \cite{Levy}, showing that GD with noise injection can reach the basin of a local minimum in a number of iterations with only polylog dependence on dimension. Our work differs from \cite{jin2017escape} in that we again study the underlying continuous dynamics and perform an explicit local analysis of the dynamics near saddle points. We demonstrate that the local saddle point escape time of NGD can be bounded independent of dimension (Theorem \ref{thm:main-thm}). Moreover, because we show that NGD is a path-length reparametrization of GD, our results also have implications for classical GD. In particular, Theorem \ref{thm:main-thm} together with Proposition \ref{prop:arc-length} shows that a classical GD trajectory can have at most length $5\sqrt{\kappa} r$ (where $\kappa$ is the condition number of the Hessian of $f$ at $x^*$) before it must exit a ball of radius $r$ about a saddle point.
\textbf{Organization}: Section \ref{sec:notation} sets up notation. Section \ref{sec:examples} presents a simple example illustrating the salient features of GD and NGD near saddle points. Section \ref{sec:struct_properties} studies the structural relationship between GD and NGD and presents Theorem \ref{prop:stable-manifold} which shows generic non-convergence to saddle points. Section \ref{sec:main_result} presents Theorem \ref{thm:main-thm} which gives the saddle-point escape-time bound for NGD. Section \ref{sec:global-bound} presents a simple global convergence-time bound for NGD (Corollary \ref{cor-finite-time}). The proofs of all results are deferred to Section \ref{sec:proofs}.
\section{Preliminaries} \label{sec:notation} Suppose $f:\mathbb{R}^d\to \mathbb{R}$ is a twice differentiable function. We use the following notation. \begin{itemize} \item $\nabla f(x)$ denotes the gradient of $f$ at $x$ \item $D^2f(x)$ denotes the Hessian of $f$ at $x$ \item Given a set $S\subset \mathbb{R}^d$, the closure of $S$ is given by $\textup{cl}(S)$ and the boundary of $S$ is given by $\partial S$ \item $\calL^d$, $d\geq 1$ denotes the $d$-dimensional Lebesgue measure \item $B_r(x)$ denotes the ball of radius $r$ about $x\in \mathbb{R}^d$
\item $\| \cdot \|$ denotes the Euclidean norm \item $d(\cdot,\cdot)$ denotes Euclidean distance \item $\dot \vx$ is shorthand for $\ddt \vx(t)$
\item Given $C>0$, $|D^3 f(x)|<C$ means that $\vert\frac{\partial ^3 f(x)}{\partial x_i \partial x_j \partial x_k}\vert < C$, $i,j,k=1,\ldots,d$ \item For $A\in \mathbb{R}^{n\times n}$, $\sigma(A)$ denotes the spectrum of $A$
\item $|\lambda|_{\textup{min}}(A):= \min\{|\lambda|:\lambda\in\sigma(A)\}$
\item $|\lambda|_{\textup{max}}(A):= \max\{|\lambda|:\lambda\in\sigma(A)\}$ \item The \emph{condition number} of $A$ is given by $\frac{|\lambda|_{\textup{max}}(A)}{|\lambda|_{\textup{min}}(A)}$ \item $\textup{diag}(\lambda_1,\ldots,\lambda_d)$ gives a $d\times d$ matrix with $\lambda_1,\ldots,\lambda_d$ on the diagonal \end{itemize}
We say that a saddle point $x^*$ of $f$ is \emph{non-degenerate} if $D^2f(x^*)$ is non-singular.
For $k\geq 1$, let $C^k$ denote the set of all functions from $\mathbb{R}^d$ to $\mathbb{R}$ that are $k$-times continuously differentiable. Unless otherwise specified, we will assume the following throughout the paper. \begin{assumption} \label{a:twice-differentiable} The objective function $f$ is of class $C^2$. \end{assumption}
We say that a continuous mapping $\vx: I \to \mathbb{R}^d$, over some interval $I =[0,T)$, $0 < T \leq \infty$, is a solution to an ODE with initial condition $x_0$ if $\vx \in C^1$, $\vx$ satisfies the ODE for all $t\in I$, and $\vx(0) = x_0$.
Under Assumption \ref{a:twice-differentiable}, there exists a unique solution to \eqref{eqn:grad-dynamics} which exists on the interval $I = \mathbb{R}$ for every initial condition. A solution $\vx$ to \eqref{eqn:normalized-dynamics} with initial condition $x_0$ satisfying $\nabla f(x_0) \not = 0$, will have a unique solution on some \emph{maximal interval of existence} $[0,T)$, where $T$ is dependent on $x_0$ (see \cite{Perko_ODE} for a formal definition of the maximal interval of existence). Practically, for solutions of \eqref{eqn:normalized-dynamics} the maximal interval of existence is the maximal time interval for which a solution $\vx$ does not intersect with a critical point of $f$. When we refer to a solution of \eqref{eqn:normalized-dynamics} we mean the solution defined over its maximal interval of existence.
\begin{remark}[Filippov solutions] We note that one can handle the discontinuity in the right-hand side of \eqref{eqn:normalized-dynamics} by considering solutions of the associated Filippov differential inclusion \cite{Filippov,cortes2006}. In order to keep the presentation simple and broadly accessible we have elected to avoid this approach and instead consider solutions only on intervals on which they are classically defined. Practically, the main differences between the two approaches are that (i) solutions in the classical sense cease to exist when they reach a saddle point or local minimum whereas Filippov solutions remain well defined at these points, and (ii) Filippov solutions may not be differentiable at times when solutions reach or depart from critical points. In particular, Filippov solutions to \eqref{eqn:normalized-dynamics} may sojourn indefinitely at saddle points (and local maxima) of $f$ and remain at non-degenerate minima of $f$ once reached.
Our results and analysis extend readily to solutions in this sense modulo minor technical modifications. \end{remark}
The following two definitions are standard from classical ODE theory. \begin{definition}[Orbit of an ODE] \label{def:orbit}
Let $\vx(t)$ be the solution of some ODE on the interval $[0,T)$. Assume that $\vx(0) = x_0$ and that $[0,T)$ is the maximal interval on which $\vx(t)$ is the unique solution of the ODE with initial value $x_0$ (here $T=\infty$ is permitted). Then the \emph{orbit} corresponding to the initial condition $x_0$ is defined to be the set $\gamma_{x_0}^+ := \{x\in \mathbb{R}^d:~ \vx(t)= x \mbox{ for some } t \in [0,T)\}$.
Given a differentiable curve $\vx: [0,T)\to \mathbb{R}^d$, the \emph{arc length} of $\vx$ at time $t< T$ is given by $L(t):= \int_{0}^t |\dot \vx(s)|\, ds$, and we let $L(T) := \lim_{t\to T} L(t)$. \begin{definition} [Arc-Length Reparametrization] \label{def:arc-reparam}
Suppose $\vx:[0,T) \to\mathbb{R}^d$ is a differentiable curve in $\mathbb{R}^d$ with arc length at time $t$ given by $L(t)$. We say that $\tilde \vx: I \to \mathbb{R}^d$, $I=[0,L(T))$ is an \emph{arc-length reparametrization} of $\vx(t)$ if there holds $\vx(t) = \tilde \vx(L(t))$ for all $t\in [0,T)$. \end{definition}
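As a concrete illustration of Definition \ref{def:arc-reparam}, the following minimal sketch (our code, not part of the paper; all names are ours) pairs samples of a curve with their cumulative chord lengths, a discrete stand-in for $L(t)$. For a unit-speed curve the reparametrization is (approximately) the identity in the time variable.

```python
import math

def arc_length_reparam(samples):
    """samples: list of points x(t_0), x(t_1), ... along a curve.
    Returns pairs (L_i, x_i) with L_i the cumulative chord length up to x_i,
    a discrete approximation of the arc length L(t_i)."""
    lengths = [0.0]
    for prev, cur in zip(samples, samples[1:]):
        lengths.append(lengths[-1] + math.dist(prev, cur))
    return list(zip(lengths, samples))

# Example: x(t) = (cos t, sin t) has unit speed, so L(t) = t and the
# arc-length index should closely track the time index.
ts = [i / 1000 for i in range(3142)]
samples = [(math.cos(t), math.sin(t)) for t in ts]
repar = arc_length_reparam(samples)
```

Refining the sample spacing shrinks the gap between the chord-length sum and the true arc length.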
We say that a property holds for \emph{almost every} element in a set $A\subseteq \mathbb{R}^d$, $d\geq 1$, if the subset of $A$ where the property fails to hold has $\calL^d$-measure zero. Likewise, we say that a property holds for almost every solution of an ODE if the property holds for solutions starting from almost every initial condition.
\section{Saddle-Point Behavior of GD and NGD: Examples and Intuition} \label{sec:examples}
\subsection{Saddle Points and GD} \label{sec:saddle-points-GD}
The following simple example illustrates the behavior of GD near saddle points. \begin{example} \label{example-GD} Suppose the objective function is given by \begin{equation} \label{eqn:example-f}
f(x) = \frac{1}{2} x^T A x, \quad\quad A = \begin{pmatrix}
1 & 0\\
0 & -1 \end{pmatrix} \end{equation} and note that the origin is a saddle point of $f$. The associated GD dynamics \eqref{eqn:grad-dynamics} reduce to a simple linear system of the form \begin{equation} \label{eqn:lin-system} \ddt \vx(t) = -A\vx(t) \end{equation}
with solution $\vx(t) = e^{-At}x_0$, for initial condition $\vx(0) = x_0 \in \mathbb{R}^2$.
By classical linear systems theory we see that solutions of this system will only converge to the origin if they start with initial conditions in the stable eigenspace of $-A$, which is given by $E_s := \{x=(x^1,x^2)\in\mathbb{R}^2: x^2 = 0\}$. Note that this is a set of initial conditions with Lebesgue measure zero.
Let $r>0$ and consider the following question: What is the maximum amount of time that a solution of \eqref{eqn:lin-system} may spend in a ball of radius $r>0$ about the origin?
It is straightforward to verify that trajectories not converging to $0$ may take arbitrarily long to leave $B_r(0)$, and so the time it could potentially take to escape saddle points is unbounded. Indeed, since $\|A x\| = \|x\|$, solutions of \eqref{eqn:lin-system} satisfy $\|\vx(t)\| \geq \|\vx(0)\|e^{-t}$, and hence for $\varepsilon \in (0,r)$, a trajectory of \eqref{eqn:lin-system} starting on $\partial B_r(0)$ which enters $B_\varepsilon(0)$ must spend at least time $\log(r/\varepsilon)$ inside the $r$-ball before it may enter the $\varepsilon$-ball. \end{example}
These same basic properties generalize to GD in higher dimensional systems: Solutions of GD may only converge to a saddle point from a set of initial conditions with measure zero, but the worst-case time required to escape a neighborhood of the saddle is infinite. This is made precise in the following remark. \begin{remark}[Saddle-Point Escape Time of GD] \label{remark:GD-escape-time} Informally, given a function $f$, a saddle point $x^*$ of $f$, and an $r>0$, we refer to the ``saddle-point escape time'' of an optimization process as the maximum amount of time a trajectory which does not converge to $x^*$ may spend in a ball of radius $r$ about $x^*$. In GD, the saddle-point escape time is always infinite. That is, for an arbitrary objective function $f$, saddle point $x^*$, and radius $r>0$ there holds
\begin{equation}
\sup_{\substack{x_0\in \partial B_r(x^*)\\ x^* \notin \textup{cl}(\gamma_{x_0}^+)}} \calL^1\bigg( \Big\{ t\in [0,\infty):~ \vx_{x_0}(t)\in B_r(x^*) \Big\} \bigg) = \infty, \end{equation} where $\vx_{x_0}$ is the solution of \eqref{eqn:grad-dynamics} with initial condition $x_0$.
\end{remark} This is precisely the issue which causes GD to perform poorly in high-dimensional problems with many saddle points. In this paper we will see that NGD significantly mitigates this issue---rather than having an infinite saddle-point escape time, the saddle-point escape time of NGD is at most $5\sqrt{\kappa}r$, where $\kappa$ is the condition number of $D^2 f(x^*)$.
\subsection{Saddle Points and NGD} \label{sec:examples-NGD} We will now consider the behavior of NGD near the saddle point in the above example.
In order to better understand this issue, it is helpful to characterize the relationship between GD and NGD. In Section \ref{sec:struct_properties} we will see that GD and NGD are closely linked---the two systems are ``topologically equivalent'' \cite{Perko_ODE} and solutions of NGD are merely arc-length reparametrizations of GD solutions (see Definition \ref{def:arc-reparam}). In practical terms this means that if one considers orbits of NGD and GD starting from the same initial condition $x_0\in\mathbb{R}^d$, the orbits generated by the two systems are identical (see Definition \ref{def:orbit}). The solutions of each system only vary in how quickly they move along the common orbit. In particular, since NGD always ``moves with speed 1'' (i.e., $\|\dot \vx(t)\| = 1, ~\forall t\geq 0$) the length of the arc generated by NGD up to time $t$ is precisely $t$ (this is what it means to be an arc-length reparametrization). As an important consequence of this characterization, we will see that NGD ``almost never'' converges to saddle points (see Theorem \ref{prop:stable-manifold}).
While a solution of GD may move arbitrarily slowly as it passes near a saddle point, a solution of NGD starting at the same initial condition will move along the same orbit with constant speed, not slowing near the saddle point. This is illustrated in Fig. \ref{fig:NGD_v_GD}.
Consider NGD with $f$ as defined in Example \ref{example-GD} (see \eqref{eqn:example-f}). Given the simple linear structure of the corresponding GD ODE \eqref{eqn:lin-system} it is straightforward to verify that the arc-length of any trajectory of GD (or equivalently NGD) intersecting $B_r(0)$ is upper bounded by $2r$ and hence the maximum time a trajectory of NGD may spend in $B_r(0)$ is $2r$ (see Fig. \ref{fig:NGD_v_GD}).
This simple example may be generalized to higher dimensions. Let $f:\mathbb{R}^d\to \mathbb{R}$, $d\geq 2$ be given by $f(x) = x^T Ax$, where $A = \textup{diag}(\lambda_1,\ldots,\lambda_d)$ with $|\lambda_i|=1$ for all $i=1,\ldots,d$, at least one $\lambda_i>0$, and at least one $\lambda_j<0$. Given the simple structure of the corresponding GD ODE $\dot \vx = -A\vx$, it is straightforward to show that the arc-length of any trajectory of GD intersecting $B_r(0)$ (and hence the amount of time spent by NGD in $B_r(0)$) is upper bounded by $2r$, independent of the dimension $d$.
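The $2r$ bound above can be checked numerically. The sketch below (our code, not part of the paper) runs an explicit-Euler discretization of NGD for $d=3$ with $A = \textup{diag}(1,1,-1)$; since NGD moves with unit speed, the time spent in $B_r(0)$ equals the arc length of the path inside the ball.

```python
import math

# Explicit-Euler sketch of NGD, dx/dt = -Ax/|Ax| (the gradient of x^T A x
# is 2Ax; normalization removes the factor 2). Unit speed means time spent
# in B_r(0) equals path length inside the ball.
def ngd_time_in_ball(x0, A_diag, r=1.0, dt=1e-4, t_max=10.0):
    x = list(x0)
    inside = 0.0
    for _ in range(int(t_max / dt)):
        g = [a * xi for a, xi in zip(A_diag, x)]   # direction of the gradient
        norm = math.hypot(*g)
        if norm < 1e-9:                            # hit the critical point; stop
            break
        x = [xi - dt * gi / norm for xi, gi in zip(x, g)]
        if math.hypot(*x) < r:
            inside += dt
    return inside

# d = 3, eigenvalues of unit magnitude, start near the stable eigenspace:
x0 = (0.8, 0.599999, 1e-3)
print(ngd_time_in_ball(x0, (1.0, 1.0, -1.0)))   # should not exceed 2r = 2
```

Even for initial conditions very close to the stable eigenspace, the measured dwell time stays just below $2$.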
Note that in this example, the condition number of $D^2 f(0)$ is 1. In general, as the condition number increases, the time spent by NGD in $B_r(0)$ may increase. Theorem \ref{thm:main-thm} captures this relationship for general $f$ (satisfying Assumption \ref{a:twice-differentiable}).
\begin{remark} We note that the bound that will be established in Theorem \ref{thm:main-thm} is conservative. In particular, suppose $f:\mathbb{R}^d\to\mathbb{R}$ is quadratic of the form $f(x) = x^TAx$, with $A\in\mathbb{R}^{d\times d}$ diagonal and non-singular. Then one can show that time spent by a trajectory of NGD in $B_r(0)$ is at most $2\sqrt{d}r$. This bound holds even as the condition number of $D^2 f(0)$ is brought to $\infty$.\footnote{This is shown by bounding the arc length of the corresponding linear GD ODE $\dot \vx = -A\vx$. Intuitively, if $A$ is well conditioned, then trajectories of the ODE passing near 0 travel along a ``direct route'' to and away from 0. If $A$ is ill conditioned, then trajectories of the ODE travel a ``Manhattan route'' to and away from 0, with movement tangential to the stable eigenspace of $A$ occurring along only one stable eigenvector at a time.} Thus, while an ill-conditioned saddle point can slow the escape time of NGD, this example suggests that in the worst case as the condition number is brought to $\infty$, the time spent by NGD in $B_r(x^*)$ about a saddle point $x^*$ can be bounded by $C\sqrt{d}r$, where $C>0$ is some universal constant independent of dimension and condition number. An in-depth investigation of this issue is outside the scope of this note.
\end{remark}
\begin{figure}
\caption{\small (a) Common orbit shared by the solutions of GD and NGD starting at the same initial condition $x_0$ with the objective function given by \eqref{eqn:example-f}. At time $t$, the trajectory of GD (given by $\vx(t)$) stalls near the saddle point while the trajectory of NGD (given by $\tilde \vx(t)$) moves along the same orbit with constant speed, without slowing down near the saddle point. (b) As $x_0$ approaches the stable eigenspace (the horizontal axis) the length of the orbit inside the ball approaches $2r$.}
\label{fig:NGD_v_GD}
\end{figure}
\section{NGD: Structural Properties and Generic Convergence to Local Minima} \label{sec:struct_properties}
The following proposition establishes the basic structural relationship between GD and NGD. \begin{proposition}\label{prop:arc-length}
Let $\vx(t)$ and $\tilde \vx(t)$ be solutions of \eqref{eqn:grad-dynamics} and \eqref{eqn:normalized-dynamics} respectively, with the same initial condition $x_0$, over maximal intervals $[0,T)$ and $[0,\tilde T)$ respectively. Then $\tilde \vx(t)$ is an arc length reparametrization of $\vx(t)$, and $\tilde \vx(t) = \vx(s(t))$ for some strictly increasing function $s:[0,\tilde T) \to [0,T)$, with $s(0) = 0$ and $\lim_{t \to \tilde T} s(t) = T$. \end{proposition}
This result means that (classical) solutions of \eqref{eqn:grad-dynamics} and \eqref{eqn:normalized-dynamics} starting at the same initial condition have identical orbits (see Definition \ref{def:orbit}); the solutions only differ in the speed with which they move along the common orbit.\footnote{In other words, the dynamical systems defined by \eqref{eqn:grad-dynamics} and \eqref{eqn:normalized-dynamics} are topologically equivalent \cite{Perko_ODE} with the concomitant homeomorphism given by the identity.}
The following result shows that NGD may only converge to non-degenerate saddle points from a measure-zero set of initial conditions. Part (i) of the proposition considers a slightly weaker condition than non-degeneracy as discussed in Section \ref{sec:notation}. In particular, we will require that at least one eigenvalue of $D^2 f(x^*)$ be negative. Saddle points satisfying this condition are sometimes referred to in the literature as \emph{rideable} or \emph{strict} saddle points \cite{ge2015escaping,sun2015nonconvex}.
\begin{theorem} [Non-Convergence to Saddle Points] \label{prop:stable-manifold} $~$\\
\noindent (i) Let $x^*$ be a saddle point of $f$ such that there exists a $\lambda \in \sigma(D^2 f(x^*))$ with $\lambda < 0$. Then solutions to \eqref{eqn:normalized-dynamics} can only reach or converge to $x^*$ from a set of initial conditions with Lebesgue measure zero.\\
\noindent (ii) Suppose that each saddle point of $f$ is non-degenerate. Then the set of initial conditions from which solutions to \eqref{eqn:normalized-dynamics} reach or converge to a saddle point has Lebesgue measure zero. \end{theorem}
Since a non-degenerate saddle point $x^*$ necessarily has at least one strictly negative eigenvalue in $\sigma(D^2 f(x^*))$, Theorem \ref{prop:stable-manifold} immediately implies that solutions to \eqref{eqn:normalized-dynamics} may only converge to non-degenerate saddle points from a measure zero set of initial conditions.
\begin{remark} [Uniqueness of Fillipov Solutions] While we do not deal explicitly with Fillipov solutions to \eqref{eqn:normalized-dynamics} in this note, we note that Fillipov solutions of \eqref{eqn:normalized-dynamics} are classical so long as they do not intersect with critical points of $f$. In particular, for functions in which all critical points are non-degenerate, Fillipov solutions are classical until they intersect with critical points, and unique so long as they do not intersect with saddle points or maxima. Theorem \ref{prop:stable-manifold} shows that Fillipov solutions are unique from almost all initial conditions in functions with non-degenerate critical points. \end{remark}
It follows from Proposition \ref{prop:arc-length} and Theorem \ref{prop:stable-manifold} that solutions of NGD exist and are unique for almost every initial condition. We note that both of these results follow as elementary applications of classical ODE theory (see Section \ref{sec:proofs}).
We also note that this issue (generic non-convergence to saddle points, as in Theorem \ref{prop:stable-manifold}) was considered for discrete-time GD \eqref{eq_GD_DE} in the recent work \cite{JordanStableManifold}. Addressing the question of ``stable manifold'' theorems for the discrete analog of \eqref{eqn:normalized-dynamics} will be a subject of future work.
\section{Fast Escape From Saddle Points} \label{sec:main_result} The following theorem gives our main result regarding fast escape from saddle points. The theorem provides a simple estimate on the amount of time that trajectories of NGD can spend near saddle points. \begin{theorem} [Saddle-Point Escape Time] \label{thm:main-thm}
Let $C>4$ and suppose $x^*$ is a non-degenerate saddle point of $f$. Then for all $r>0$ sufficiently small, any trajectory of \eqref{eqn:normalized-dynamics}
that does not reach or converge to $x^*$
can spend at most time $C\sqrt{\kappa}r$ in the ball $B_r(x^*)$, where $\kappa$ is the condition number of $D^2 f(x^*)$. That is, if $\vx_{x_0}$ is a solution to \eqref{eqn:normalized-dynamics} with initial condition $x_0$ and maximal interval of existence $[0,T_{x_0})$, $T_{x_0}\leq \infty$, and $x^* \notin \textup{cl}(\gamma_{x_0}^+)$, then
$$
\calL^1\bigg(\Big\{t \in [0,T_{x_0}): \vx_{x_0}(t) \in B_r(x^*) \Big\} \bigg) \leq C\sqrt{\kappa}r.
$$
\end{theorem}
We recall that by Theorem \ref{prop:stable-manifold}, solutions of \eqref{eqn:normalized-dynamics} can only reach or converge to saddle points from a set of initial conditions with measure zero, hence the theorem holds for solutions starting from almost every initial condition.
In order to underscore the significance of this result, we recall that the saddle-point escape time of GD (i.e., the time required to escape a ball of radius $r>0$ about a saddle point) is infinite, independent of $f$, $d$, $x^*$, and $r$ (see Remark \ref{remark:GD-escape-time}), which causes GD to perform poorly in problems with many saddle points. In contrast to this, Theorem \ref{thm:main-thm} shows that trajectories of NGD always escape a ball of radius $r$ within time $5\sqrt{\kappa}r$.\footnote{As in the introduction, to emphasize the key features of this result we fix the constant $C$ to be 5 here. Of course, the theorem holds for the constant $C$ fixed to any value strictly greater than 4. See Remark \ref{remark:constant1} for more details.}
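The bound $5\sqrt{\kappa}r$ can be probed on an ill-conditioned quadratic saddle. The sketch below (our code, not from the paper) runs an explicit-Euler discretization of NGD on $f(x) = \tfrac{1}{2}x^T H x$ with $H = \textup{diag}(1, -1/\kappa)$, so that $D^2 f(0)$ has condition number $\kappa$, and compares the measured dwell time in $B_1(0)$ against the theorem's bound.

```python
import math

# Euler sketch of NGD on f(x) = x^T H x / 2 with H = diag(1, -1/kappa);
# D^2 f(0) then has condition number kappa. Theorem bound: <= 5*sqrt(kappa)*r.
def saddle_dwell_time(x0, hessian_diag, r=1.0, dt=1e-4, t_max=20.0):
    x = list(x0)
    inside = 0.0
    for _ in range(int(t_max / dt)):
        g = [h * xi for h, xi in zip(hessian_diag, x)]   # gradient H x
        norm = math.hypot(*g)
        if norm < 1e-9:                                  # reached the critical point
            break
        x = [xi - dt * gi / norm for xi, gi in zip(x, g)]
        if math.hypot(*x) < r:
            inside += dt
    return inside

kappa = 100.0
t_in = saddle_dwell_time((math.sqrt(1 - 1e-6), 1e-3), (1.0, -1.0 / kappa))
print(t_in, "vs. theorem bound", 5 * math.sqrt(kappa))
```

For this diagonal example the measured time stays near $2r$, well inside the worst-case bound, consistent with the conservatism discussed in the remark above.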
Furthermore, we recall that Proposition \ref{prop:arc-length} showed that orbits of GD and NGD coincide. Thus, away from saddle points (where GD is generally ``well behaved'') GD and NGD behave in an essentially identical manner in that they follow identical trajectories and the velocity of each can be bounded from below.
A few remarks are now in order. \begin{remark}[Values of constant $C$] \label{remark:constant1} The above theorem holds with the constant $C$ set to any value strictly greater than 4. The proof of the estimate in the theorem utilizes several Taylor series approximations. There is a tradeoff inherent in this proof technique---as $C$ approaches 4, the range of permissible values of $r>0$ where the Taylor approximation (and hence, the theorem) is applicable shrinks to zero. For clarity of presentation and to emphasize the key features of this result we find it convenient to simply fix the constant to be 5 in the abstract and introduction. See Proposition \ref{prop:time-bound} and proof thereof for more details. \end{remark} \begin{remark}[Permissible values of $r$] \label{remark:values-of-r}
The range of values of $r>0$ where Theorem \ref{thm:main-thm} holds depends both on the constant $C$ and the magnitude of higher order derivatives near the saddle point $x^*$. In particular, the result holds so long as the Taylor estimates \eqref{inequality2}, \eqref{eqn:taylor-estimate2} used in the proof are valid. If one assumes that $f$ is more than twice differentiable and assumes bounds on the magnitude of the higher order derivatives near $x^*$, then the radius where these estimates hold can be bounded, and a more precise statement can be made about the permissible values of $r$ in Theorem \ref{thm:main-thm}. For example, if one assumes that $|D^3 f(x)| < \hat C$ is uniformly bounded for some $\hat C>0$
then Theorem \ref{thm:main-thm} holds for all $r\in (0,\bar r)$, where $\bar r = 6\kappa^{-1/2}\hat C^{-1}|\lambda|_{\textup{max}}(D^2 f(x^*))\left(\frac{C(3\kappa+2)}{6C\kappa+16} - \frac{1}{2} \right)$, and where $\kappa$ is the condition number of $D^2 f(x^*)$. This is verified by confirming that the Taylor estimates \eqref{inequality2}, \eqref{eqn:taylor-estimate2} used in the proof of Proposition \ref{prop:time-bound} are valid in the ball $B_{\hat r}(0)$, $\hat r = \kappa^{1/2} r$, for values of $r$ in this range. \end{remark}
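The formula for $\bar r$ can be evaluated directly. The sketch below (our code; the constants are illustrative assumptions) shows that the permissible radius is positive for $C>4$ and shrinks to zero as $C \to 4$, in line with Remark \ref{remark:constant1}.

```python
import math

# \bar r = 6 kappa^{-1/2} C_hat^{-1} lam_max * ( C(3 kappa + 2)/(6 C kappa + 16) - 1/2 )
def r_bar(C, kappa, C_hat, lam_max):
    return 6.0 / (math.sqrt(kappa) * C_hat) * lam_max * (
        C * (3.0 * kappa + 2.0) / (6.0 * C * kappa + 16.0) - 0.5)

# Illustrative constants (assumed): kappa = 4, C_hat = 1, |lambda|_max = 2.
for C in (5.0, 4.1, 4.001):
    print(C, r_bar(C, kappa=4.0, C_hat=1.0, lam_max=2.0))
```

Note that at $C=4$ the bracketed factor vanishes identically (for every $\kappa$), so the admissible range of $r$ collapses exactly as $C$ approaches $4$.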
\begin{remark}[Non-Applicability of the Hartman-Grobman Theorem]\label{remark:hart-grob} The Hartman-Grobman theorem from classical differential equations states that near non-degenerate saddle points one can construct a homeomorphism mapping the trajectories of a non-linear ODE to trajectories of the associated linearized system \cite{Perko_ODE}. It is simple to show that Theorem \ref{thm:main-thm} holds when $f$ is quadratic (and hence the associated NGD system is topologically identical to a linear system); see Section \ref{sec:saddle-points-GD}. Thus, one might expect Theorem \ref{thm:main-thm} to hold for general (non-quadratic) $f$ by the Hartman-Grobman theorem. However, the homeomorphisms constructed in the Hartman-Grobman theorem are in general not smooth, and so will not preserve trajectory length, and cannot be used to prove a bound such as Theorem \ref{thm:main-thm}. Instead one must resort to more analytical techniques to study path length; see the proof of Proposition \ref{prop:time-bound} below. \end{remark} \begin{remark}[Theorem \ref{thm:main-thm} Proof Technique] \label{remark-proof-techniques} Here, the key idea of the proof of Theorem \ref{thm:main-thm} relies on establishing a differential inequality between the ``potential'' $f$ and the ``potential dissipation rate'' $\frac{d}{dt} f({ \bf x}(t))$. The methods are flexible, and may be applicable to other non-smooth settings. In a previous work \cite{swenson2017fictitious} the authors utilized similar techniques to study non-smooth dynamics in game-theoretical problems. \end{remark}
\section{A Global Convergence-Time Bound}\label{sec:global-bound} We will now use the above results to prove a simple corollary bounding the maximum amount of time that trajectories can take to reach local minima under \eqref{eqn:normalized-dynamics}.
We will make the following assumptions. \begin{assumption}
\label{a:bdd-3rd-derivative}
The function $f$ is of class $C^3$ and $|D^3 f(x)| \leq \hat C$ uniformly for all $x\in \mathbb{R}^d$, for some $\hat C>0$. \end{assumption} This assumption ensures that there exists a single $r>0$ such that Proposition \ref{prop:time-bound} holds within a ball of radius $r$ about \emph{every} critical point (see Remark \ref{remark:values-of-r}).
Next we assume a uniform bound on the magnitude of eigenvalues of the Hessian at critical points. \begin{assumption}
\label{a:uniform-eig-bd} There exist constants $|\lambda|_{\textup{max}}, |\lambda|_{\textup{min}} >0$
such that for every critical point $x^*$ of $f$ there holds $|\lambda|_{\textup{min}} \leq |\lambda| \leq |\lambda|_{\textup{max}}$ for all $\lambda \in \sigma(D^2 f(x^*))$. \end{assumption}
The next assumption ensures that at any point $x\in \mathbb{R}^d$, either the gradient of $f$ at $x$ is large (guaranteeing fast local improvement of descent techniques), or $x$ is close to a critical point. \begin{assumption} \label{a:strict-saddle} Fix $C>4$. Assuming Assumptions \ref{a:bdd-3rd-derivative} and \ref{a:uniform-eig-bd} hold, let $r>0$ be chosen so that Theorem \ref{thm:main-thm} (or equivalently, Proposition \ref{prop:time-bound}) holds with constant $C$ at every critical point and so that $r \leq \frac{|\lambda|_{\textup{min}}}{\hat C}$.
Furthermore, assume that there exists a constant $\nu>0$ such that for all $x\in \mathbb{R}^d$ either \begin{displaymath}
\|x-x^*\| < r \text{ for some critical point } x^* \text{ of } f, \quad \textbf{ or } \quad \|\nabla f(x)\| > \nu. \end{displaymath} \end{assumption}
Assumptions \ref{a:uniform-eig-bd} and \ref{a:strict-saddle} together are similar to the strict saddle property assumed in \cite{ge2015escaping},\cite{Levy}. The main difference is that here we assume a uniform (lower) bound on the minimum-magnitude eigenvalue of the Hessian at all critical points rather than just saddle points, and we assume a uniform (upper) bound on the maximum-magnitude eigenvalue of the Hessian at all critical points. The final assumption ensures that a descent process will eventually converge to some point rather than expanding out infinitely. This assumption is naturally satisfied, for example, if $f$ is coercive (i.e., $f(x)\to \infty$ as $\|x\|\to\infty$). \begin{assumption} \label{a:invariant-set} There exists an $R>0$ such that trajectories of \eqref{eqn:normalized-dynamics} that begin in $B_R(0)$ remain in $B_R(0)$ for all $t\geq 0$. \end{assumption}
Let $R>0$ be as in Assumption \ref{a:invariant-set} and let \begin{equation}\label{def-M}
M:= \sup_{x\in B_R(0)} |f(x)|. \end{equation} Note that since $f$ is continuous, $M<\infty$.
The following result gives a simple estimate on the amount of time the dynamics \eqref{eqn:normalized-dynamics} will take to reach a local minimum. \begin{corollary}\label{cor-finite-time}
Suppose that every saddle point $x^*$ of $f$ is non-degenerate and that Assumptions \ref{a:bdd-3rd-derivative}--\ref{a:invariant-set} hold. Then for almost every initial condition inside $B_R(0)$, solutions of \eqref{eqn:normalized-dynamics} will converge to a local minimum in at most time $2M\nu^{-1} + C\sqrt{\frac{|\lambda|_{\textup{max}}}{|\lambda|_{\textup{min}}}}\frac{(R+r)^d}{r^{d-1}}$, where $C>4$ is the constant in Assumption \ref{a:strict-saddle}. \end{corollary}
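To give a sense of scale, the corollary's bound can be evaluated for sample constants. The sketch below (our code; all constants are illustrative assumptions, not values from the paper) shows that the saddle-escape term $(R+r)^d/r^{d-1}$ dominates as the dimension grows.

```python
import math

# 2 M / nu + C sqrt(lam_max / lam_min) (R + r)^d / r^{d-1}
def convergence_time_bound(M, nu, C, lam_max, lam_min, R, r, d):
    return 2.0 * M / nu + C * math.sqrt(lam_max / lam_min) * (R + r) ** d / r ** (d - 1)

# Illustrative (assumed) constants; the bound grows geometrically in d.
for d in (2, 5, 10):
    print(d, convergence_time_bound(M=10.0, nu=0.1, C=5.0,
                                    lam_max=4.0, lam_min=1.0, R=1.0, r=0.1, d=d))
```

The rapid growth in $d$ reflects the volume-packing count of critical points used in the proof, not the behavior of any single trajectory.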
\section{Proofs of Main Results} \label{sec:proofs} We now present the proofs of the results found in Sections \ref{sec:struct_properties} -- \ref{sec:main_result}.
We begin by presenting the proofs of Propositions \ref{prop:arc-length} and Theorem \ref{prop:stable-manifold}, which follow from elementary applications of classical ODE theory.
\begin{proof}[Proof of Proposition \ref{prop:arc-length}]
Given a solution $\vx$ to \eqref{eqn:grad-dynamics}, one can reparametrize the trajectory by arc length, i.e., $\hat { \bf x} (t) = { \bf x}(L(t))$, and $\|\ddt \hat { \bf x}(t)\| = 1$. Using the chain rule we find that $\frac{d}{dt} \hat { \bf x}(t) = -\frac{\nabla f(\hat { \bf x}(t))}{\|\nabla f(\hat { \bf x}(t))\|}$. Since the solutions are classical, uniqueness of solutions for ODE gives us that $\tilde { \bf x}$ and $\hat { \bf x}$ must be equal. \end{proof}
\begin{proof}[Proof of Theorem \ref{prop:stable-manifold}] We begin by proving part (i) of the theorem. Solutions to \eqref{eqn:grad-dynamics} which converge to such a saddle point are contained within a \emph{stable manifold}, i.e., a smooth surface of dimension at most $d-1$. Such a surface is a set with Lebesgue measure zero. A proof and details of such a result may be found in \cite{Perko_ODE}. The result then follows from Proposition \ref{prop:arc-length}.
Part (ii) of the theorem follows from the fact that if all saddle points are non-degenerate, then all saddle points are isolated. Hence, the set of saddle points is countable. By part (i) of the theorem, the union of the stable manifolds for all saddle points is a set with Lebesgue measure zero. \end{proof}
The following proposition proves Theorem \ref{thm:main-thm}. The proposition is stated in slightly more general terms than Theorem \ref{thm:main-thm} in order to account for the behavior of NGD near minima as well as saddle points.
\begin{proposition}\label{prop:time-bound} Let $C>4$, let $x^*\in\mathbb{R}^d$ be a non-degenerate critical point of $f$, and let $\vx(t)$ be a solution of \eqref{eqn:normalized-dynamics} with arbitrary initial condition $x_0\not = x^*$ and maximal interval of existence $[0,T_{x_0})$. For all $r>0$ sufficiently small, the time spent by $\vx(t)$ in $B_r(x^*)\backslash \{x^*\}$ is bounded according to $$ \calL^1\big(\big\{t\in [0,T_{x_0}): \vx(t) \in B_r(x^*)\backslash \{x^*\} \big\}\big) \leq C\sqrt{\kappa} r, $$ where $\kappa = \frac{|\lambda|_{\textup{max}}(D^2 f(x^*))}{|\lambda|_{\textup{min}}(D^2 f(x^*))}$. \end{proposition}
\begin{proof}
Without loss of generality, assume $x^*=0$ and let $H := D^2 f(0)$. For $x\in \mathbb{R}^d$ define $\tilde d(x) := \sqrt{x^T |H| x}$, where $|B| := \sqrt{B^T B}$ for a square matrix $B$. The function $\tilde d$ will be a convenient modified distance for the proof. For convenience in notation, throughout the proof we use the shorthand $|\lambda|_{\textup{max}} := |\lambda|_{\textup{max}}(D^2 f(x^*))$ and $|\lambda|_{\textup{min}} := |\lambda|_{\textup{min}}(D^2 f(x^*))$.
Note that for $a\geq 0$ we have the following relationships \begin{align} \label{eq:inclusion-ineq1}
\|x\| \leq \frac{a}{\sqrt{|\lambda|_{\textup{max}}}} & \implies \tilde d(x) \leq a\\
\label{eq:inclusion-ineq2} \tilde d(x) \leq a & \implies \|x\| \leq \frac{a}{\sqrt{|\lambda|_{\textup{min}}}}. \end{align}
By Taylor's theorem and the non-degeneracy of $x^*$, for any $C_1 > \frac{1}{2}$ there exists a neighborhood of $0$ such that \begin{equation} \label{inequality2}
|f(x) - f(0)| \leq C_1\tilde d(x)^2. \end{equation} Using the chain rule we see that along the path $\vx(t)$, the potential changes as $$
\frac{d}{dt} f(\vx(t)) = -\|\nabla f(\vx(t))\|. $$ Let $C_2< 1$ be arbitrary. Again using Taylor's theorem and the non-degeneracy of $H$, for $\vx(t)$ in a neighborhood of $0$ we have that \begin{align}
\nonumber \|\nabla f(\vx(t))\| &\geq C_2 \| H \vx(t)\| \\
\nonumber &= C_2\| |H|^{1/2} |H|^{1/2} \vx(t)\| \\
\nonumber &\geq C_2\sqrt{|\lambda|_{\textup{min}}}\||H|^{1/2} \vx(t)\| \\
&= C_2\sqrt{|\lambda|_{\textup{min}}} \tilde d(\vx(t)), \label{eqn:taylor-estimate2} \end{align} where $|\lambda|_{\textup{min}}$ denotes the magnitude of the smallest-magnitude eigenvalue of $H$. In turn \begin{equation}\label{inequality1}
-\frac{d}{dt} f(\vx(t)) \geq C_2 \sqrt{|\lambda|_{\textup{min}}} \tilde d(\vx(t)). \end{equation} Let $\hat r>0$ be such that the estimates \eqref{inequality2} and \eqref{eqn:taylor-estimate2} hold inside the closed ball $B_{\hat r}(0)$.
Suppose that $\vx(t)\in B_{\hat r}(0)$ for $t\in[t_1,t_2]$. Letting $e(t) := \tilde d(\vx(t))$ and integrating \eqref{inequality1} gives $$ f(\vx(t_1)) - f(\vx(t_2)) \geq C_2\sqrt{|\lambda|_{\textup{min}}} \int_{t_1}^{t_2} e(s)ds. $$ Let $r := \kappa^{-\frac{1}{2}}\hat r$. Suppose $\eta \leq \sqrt{|\lambda|_{\textup{max}}} r$ and note that by \eqref{eq:inclusion-ineq2}, $\tilde d(x) \leq \eta$ implies that $x\in B_{\hat r}(0)$.
Furthermore, suppose $e(t) \leq \eta$ for some $t\geq 0$, and let $t_0$ be the first time where $e(t) \leq \eta$. Let $t_3$ be the last time when $e(t) = \eta$; i.e., $t_3 = \sup\{t\in [0,\infty):~ e(t) \leq \eta\}$. If $t_3 = \infty$, then in an abuse of notation we let $f(\vx(\infty)) = \lim_{t\to\infty} f(\vx(t))$, where we note that the limit exists since $f(\vx(t))$ is monotone non-increasing in $t$.
It follows that \begin{align}
f(\vx(t_0)) - f(\vx(t_3)) & = \int_{t_0}^{t_3} -\frac{d}{ds} f(\vx(s))\,ds \\
& \geq \int_{e(s) \leq \eta} -\frac{d}{ds} f(\vx(s))\,ds\\
& \geq C_2 \sqrt{|\lambda|_{\textup{min}}}\int_{e(s) \leq \eta} e(s) \,ds, \end{align} where we use the fact that $\frac{d}{dt} f(\vx(t)) \leq 0$, and the previous inequality on subintervals where $e(\cdot) \leq \eta$. Adding and subtracting $f(0)$ to the left hand side above and using \eqref{inequality2} we obtain $$ \frac{2C_1}{C_2\sqrt{|\lambda|_{\textup{min}}}} \eta^2 \geq \int_{e(s) \leq \eta} e(s)ds. $$
Markov's inequality \cite{federer2014geometric} then gives \begin{align} \calL^1\left(\{s: \eta \geq e(s) \geq \frac{\eta}{2} \}\right) & \leq \frac{2}{\eta} \int_{e(s) \leq \eta} e(s)ds \\ &\leq \frac{4C_1}{\eta C_2\sqrt{|\lambda|_{\textup{min}}}} \eta^2\\ & = \frac{4C_1}{C_2\sqrt{|\lambda|_{\textup{min}}}}\eta. \end{align} We can iteratively apply this inequality to obtain \begin{align}
& \calL^1\left(\{s: \eta \geq e(s) >0 \}\right)\\
& = \sum_{i=0}^\infty \calL^1\left(\{s: \frac{\eta}{2^i} \geq e(s) \geq \frac{\eta}{2^{i+1}} \} \right) \\
& \leq \sum_{i=0}^{\infty} \frac{4 C_1\eta}{C_2\sqrt{|\lambda|_{\textup{min}}}2^{i}}\\
& \leq \frac{8 C_1}{C_2\sqrt{ |\lambda|_{\textup{min}}}}\eta. \label{eqn:time-bound} \end{align} By \eqref{eq:inclusion-ineq1} we see that
$
\{s:0<\|\vx(s)\|\leq r\} \subset \{s: 0<\tilde d(\vx(s)) \leq \sqrt{|\lambda|_{\textup{max}}}r\}. $ Letting $\eta = \sqrt{|\lambda|_{\textup{max}}} r$ in \eqref{eqn:time-bound}, and letting $C:= \frac{8C_1}{C_2}$, we get \begin{align}
& \calL^1\Big(\{s: 0< \|\vx(s)\| \leq r \} \Big)\\
& \leq \calL^1\left(\{s: 0< \tilde d(\vx(s)) \leq \sqrt{|\lambda|_{\textup{max}}}r \}\right) \leq C\frac{\sqrt{|\lambda|_{\textup{max}}}}{\sqrt{|\lambda|_{\textup{min}}}}r, \end{align} where we recall that $r = \kappa^{-1/2}\hat r$ and $\hat r$ is the radius of the ball where \eqref{inequality2} and \eqref{eqn:taylor-estimate2} hold and is dependent on $C_1$ and $C_2$. Since $C_1>\frac{1}{2}$ and $C_2 < 1$ were arbitrary, the constant $C$ may be brought arbitrarily close to $4$ with the range of permissible values of $r$ changing accordingly with the choice of $C_1$ and $C_2$. This proves the desired result. \end{proof}
\begin{proof}[Proof of Corollary \ref{cor-finite-time}] First, we claim that critical points must be separated by a distance of at least $2r$. Let $x^*$ be a critical point. Then \begin{align*}
&\nabla f(x) = \int_0^1 D^2 f( (1-s)x^* + s x) (x-x^*) \,ds\\
&= \int_0^{1} D^2 f(x^*)(x-x^*)\, ds\\
&+ \int_0^1\int_0^s D^3_{x-x^*} f( (1-\tau)x^* + \tau x)(x-x^*) \,d\tau\, ds, \end{align*}
\noindent where by $D^3_{x-x^*}$ we mean the matrix representing the third derivative evaluated in the direction $x-x^*$. We can then bound \begin{displaymath}
|\nabla f(x)| \geq |\lambda|_{\textup{min}} \|x-x^*\| - \frac{\hat C}{2}\|x-x^*\|^2 \end{displaymath}
\noindent where $\hat C$ is the bound on our third derivatives. Note that by Assumption \ref{a:strict-saddle} we have $2r \leq \frac{2|\lambda|_{\textup{min}}}{\hat C}$. Thus we see that for any $x\in B_{2r}(x^*)\setminus\{x^*\} \subset B_{\frac{2|\lambda|_{\textup{min}}}{\hat C}}(x^*)$ we have $\nabla f(x) \neq 0$. Hence critical points must be separated by a distance of at least $2r$.
Now, let $\vx(t)$ be a classical solution of \eqref{eqn:normalized-dynamics} (which, by Theorem \ref{prop:stable-manifold}, holds for a.e. solution of \eqref{eqn:normalized-dynamics}). Let $[t_1,t_2] = I$ be the maximal interval of existence for this classical solution. Our goal is to prove that $(t_2-t_1)$ can be bounded uniformly.
To this end, we divide $I$ into two subsets, $I_c,I_0$, where $I_c$ are the times where $\|\vx(t) - x^*\|\leq r$ for some critical point $x^*$, and $I_0$ are points where $\|\nabla f(\vx(t))\| \geq \nu$.
Using the chain rule we see that $\frac{d}{dt} f(\vx(t)) = -\|\nabla f(\vx(t))\|$. By Assumption \ref{a:invariant-set} and \eqref{def-M} we have $|f(\vx(t))| < M$ along any trajectory of \eqref{eqn:normalized-dynamics} starting in $B_R(0)$. Thus, we immediately have that $|I_0| < 2M\nu^{-1}$.
Let $\kappa = \frac{|\lambda|_{\textup{max}}}{|\lambda|_{\textup{min}}}$. By Proposition \ref{prop:time-bound} we can spend at most time $C\sqrt{\kappa}r$ near any particular critical point. Since critical points are separated by at least distance $2r$, we can cover all the critical points with disjoint balls of radius $r$. By then estimating the volume, the total number of critical points within distance $R$ of the origin is at most $\frac{(R+r)^d}{r^d}$. Thus we find that $|I_c| < C\sqrt{\kappa}r \frac{(R+r)^d}{r^d}$.
In summary, we find that $|I| \leq 2M\nu^{-1} + C\sqrt{\kappa} \frac{(R+r)^d}{r^{d-1}}.$
This implies that classical trajectories can last at most $2M\nu^{-1} + C\sqrt{\kappa}\frac{(R+r)^d}{r^{d-1}}$ time. Since a.e. initial condition does not reach any saddle point, almost every initial condition will converge to a local minimizer of $f$ in $2M\nu^{-1} + C\sqrt{\kappa}\frac{(R+r)^d}{r^{d-1}}$ time. This concludes the proof. \end{proof}
\end{document}
Research by CS Undergrad Published in Cell
Payal Chandak (CC '21) developed a machine learning model, AwareDX, that helps detect adverse drug effects specific to women patients. AwareDX mitigates sex biases in a drug safety dataset maintained by the FDA.
Below, Chandak talks about how her internship under the guidance of Nicholas Tatonetti, associate professor of biomedical informatics and a member of the Data Science Institute, inspired her to develop a machine learning tool to improve healthcare for women.
Payal Chandak
How did the project come about?
I initiated this project during my internship at the Tatonetti Lab (T-lab) the summer after my first year. T-lab uses data science to study the side effects of drugs. I did some background research and learned that women face a two-fold greater risk of adverse events compared to men. While knowledge of sex differences in drug response is critical to drug prescription, there currently isn't a comprehensive understanding of these differences. Dr. Tatonetti and I felt that we could use machine learning to tackle this problem and that's how the project was born.
How many hours did you work on the project? How long did it last?
The project lasted about two years. We refined our machine learning (ML) model, AwareDX, over many iterations to make it less susceptible to biases in the data. I probably spent a ridiculous number of hours developing it but the journey has been well worth it.
Were you prepared to work on it or did you learn as the project progressed?
As a first-year student, I definitely didn't know much when I started. Learning on the go became the norm. I understood some things by taking relevant CS classes and through reading Medium blogs and GitHub repositories –– this ability to learn independently might be one of the most valuable skills I have gained. I am very fortunate that Dr. Tatonetti guided me through this process and invested his time in developing my knowledge.
What were the things you already knew and what were the things you had to learn while working on the project?
While I was familiar with biology and mathematics, computer science was totally new! In fact, T-Lab launched my journey to exploring computer science. This project exposed me to the great potential of artificial intelligence (AI) for revolutionizing healthcare, which in turn inspired me to explore the discipline academically. I went back and forth between taking classes relevant to my research and applying what I learned in class to my research. As I took increasingly technical classes like ML and probabilistic modelling, I was able to advance my abilities.
Looking back, what were the skills that you wished you had before the project?
Having some experience with implementing real-world machine learning projects on giant datasets with millions of observations would have been very valuable.
Was this your first project to collaborate on? How was it?
This was my first project and I worked under the guidance of Dr. Tatonetti. I thought it was a wonderful experience – not only has it been extremely rewarding to see my work come to fruition, but the journey itself has been so valuable. And Dr. Tatonetti has been the best mentor that I could have asked for!
Did working on this project make you change your research interests?
I actually started off as pre-med. I was fascinated by the idea that "intelligent machines" could be used to improve medicine, and so I joined T-Lab. Over time, I've realized that recent advances in machine learning could redefine how doctors interact with their patients. These technologies have an incredible potential to assist with diagnosis, identify medical errors, and even recommend treatments. My perspective on how I could contribute to healthcare shifted completely, and I decided that bioinformatics has more potential to change the practice of medicine than a single doctor will ever have. This is why I'm now hoping to pursue a PhD in Biomedical Informatics.
Do you think your skills were enhanced by working on the project?
Both my knowledge of ML and statistics and my ability to implement my ideas have grown immensely as a result of working on this project. Also, I failed about seven times over two years. We were designing the algorithm and it was an iterative process – the initial versions of the algorithm had many flaws and we started from scratch multiple times. The entire process required a lot of patience and persistence since it took over 2 years! So, I guess it has taught me immense patience and persistence.
Why did you decide to intern at the T-Lab?
I was curious to learn more about the intersection of artificial intelligence and healthcare. I'm endlessly fascinated by the idea of improving the standards of healthcare by using machine learning models to assist doctors.
Would you recommend volunteering or seeking projects out to other students?
Absolutely. I think everyone should explore research. We have incredible labs here at Columbia with the world's best minds leading them. Research opens the doors to work closely with them. It creates an environment for students to learn about a niche discipline and to apply the knowledge they gain in class.
New Machine Learning Tool Predicts Devastating Intestinal Disease in Premature Infants
CS researchers develop a new machine learning approach that shows promise in predicting necrotizing enterocolitis; could lead to improved medical decision-making in neonatal ICUs.
Can AI Help Doctors Predict and Prevent Preterm Birth?
Almost 400,000 babies were born prematurely—before 37 weeks gestation—in 2018 in the United States. One of the leading causes of newborn deaths and long-term disabilities, preterm birth (PTB) is considered a public health problem with deep emotional and challenging financial consequences to families and society. If doctors were able to use data and artificial intelligence (AI) to predict which pregnant women might be at risk, many of these premature births might be avoided.
21 papers from CS researchers accepted to NeurIPS 2019
The 33rd Conference on Neural Information Processing Systems (NeurIPS 2019) fosters the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects.
The annual meeting is one of the premier gatherings in artificial intelligence and machine learning, featuring talks, tutorials, and demos from industry partners. Professor Vishal Misra, with colleagues from the Massachusetts Institute of Technology (MIT), held a tutorial on synthetic control.
At this year's NeurIPS, 21 papers from the department were accepted to the conference. Computer science professors and students worked with researchers from the statistics department and the Data Science Institute.
Noise-tolerant Fair Classification
Alex Lamy Columbia University, Ziyuan Zhong Columbia University, Aditya Menon Google, Nakul Verma Columbia University
Fairness-aware learning involves designing algorithms that do not discriminate with respect to some sensitive feature (e.g., race or gender) and is usually done under the assumption that the sensitive feature available in a training sample is perfectly reliable.
This assumption may be violated in many real-world cases: for example, respondents to a survey may choose to conceal or obfuscate their group identity out of fear of potential discrimination. In the paper, the researchers show that fair classifiers can still be used given noisy sensitive features by simply changing the desired fairness-tolerance. Their procedure is empirically effective on two relevant real-world case-studies involving sensitive feature censoring.
Poisson-randomized Gamma Dynamical Systems
Aaron Schein UMass Amherst, Scott Linderman Columbia University, Mingyuan Zhou University of Texas at Austin, David Blei Columbia University, Hanna Wallach MSR NYC
This paper presents a new class of state space models for count data. It derives new properties of the Poisson-randomized gamma distribution for efficient posterior inference.
Using Embeddings to Correct for Unobserved Confounding in Networks
Victor Veitch Columbia University, Yixin Wang Columbia University, David Blei Columbia University
This paper addresses causal inference in the presence of unobserved confounders when a proxy for the confounders is available in the form of a network connecting the units. For example, the link structure of friendships in a social network reveals information about the latent preferences of people in that network. The researchers show how modern network embedding methods can be exploited to harness the network for efficient causal adjustment.
Variational Bayes Under Model Misspecification
Yixin Wang Columbia University, David Blei Columbia University
The paper characterizes the theoretical properties of a popular machine learning algorithm, variational Bayes (VB). The researchers studied VB under model misspecification, the setting most aligned with practice, and showed that the VB posterior is asymptotically normal and centers at the value that minimizes the Kullback-Leibler (KL) divergence to the true data-generating distribution.
As a consequence, they found that the model misspecification error dominates the variational approximation error in VB posterior predictive distributions. In other words, VB pays a negligible price in producing posterior predictive distributions. It explains the widely observed phenomenon that VB achieves comparable predictive accuracy with MCMC even though VB uses an approximating family.
Poincaré Recurrence, Cycles and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games
Emmanouil-Vasileios Vlatakis-Gkaragkounis Columbia University, Lampros Flokas Columbia University, Georgios Piliouras Singapore University of Technology and Design
The paper introduces a model that captures a min-max competition over complex error landscapes and shows that even a simplified model can provably replicate some of the most commonly reported failure modes of GANs (non-convergence, deadlock in suboptimal states, etc).
Moreover, the researchers were able to understand the hidden structure in these systems — the min-max competition can lead to system behavior that is similar to that of energy preserving systems in physics (e.g. connected pendulums, many-body problems, etc). This makes it easier to understand why these systems can fail and gives new tools in the design of algorithms for training GANs.
Near-Optimal Reinforcement Learning in Dynamic Treatment Regimes
Junzhe Zhang Columbia University, Elias Bareinboim Columbia University
Dynamic Treatment Regimes (DTRs) are particularly effective for managing chronic disorders and are arguably one of the key steps towards more personalized decision-making. The researchers developed the first adaptive algorithm that achieves near-optimal regret in DTRs in online settings, while leveraging the abundant, yet imperfect confounded observations. Applications are given to personalized medicine and treatment recommendation in clinical decision support.
Paraphrase Generation with Latent Bag of Words
Yao Fu Columbia University, Yansong Feng Peking University, John Cunningham Columbia University
The paper proposes a latent bag of words model for differentiable content planning and surface realization in text generation. This model generates paraphrases with clear steps, adding interpretability and controllability of existing neural text generation models.
Adapting Neural Networks for the Estimation of Treatment Effects
Claudia Shi Columbia University, David Blei Columbia University, Victor Veitch Columbia University
This paper addresses how to design neural networks to get very accurate estimates of causal effects from observational data. The researchers propose two methods based on insights from the statistical literature on the estimation of treatment effects.
The first is a new architecture, the Dragonnet, that exploits the sufficiency of the propensity score for estimation adjustment. The second is a regularization procedure, targeted regularization, that induces a bias towards models that have non-parametrically optimal asymptotic properties "out-of-the-box". Studies on benchmark datasets for causal inference show these adaptations outperform existing methods.
Efficiently Avoiding Saddle Points with Zero Order Methods: No Gradients Required
The researchers prove that properly tailored zero-order methods are as effective as their first-order counterparts. This analysis requires a combination of tools from optimization theory, probability theory and dynamical systems to show that even without perfect knowledge of the shape of the error landscape, effective optimization is possible.
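The summary does not spell out the paper's exact oracle, but the basic ingredient of zero-order optimization can be illustrated with a generic two-point gradient estimator, a sketch rather than the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient_estimate(f, x, delta=1e-4, n_samples=64):
    # Two-point zero-order estimator: probe f along random Gaussian
    # directions u and average the finite differences. In expectation
    # this recovers the true gradient without ever evaluating it.
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.size)
        g += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return g / n_samples

# Usage: on f(x) = ||x||^2 the true gradient at x is 2x, so the
# estimate should point in roughly the same direction.
x0 = np.array([1.0, -0.5, 2.0])
g_hat = zo_gradient_estimate(lambda x: np.dot(x, x), x0)
```

With enough samples such an estimator can be plugged into a first-order method, which is the sense in which zero-order methods inherit first-order behavior.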
Metric Learning for Adversarial Robustness
Chengzhi Mao Columbia University, Ziyuan Zhong Columbia University, Junfeng Yang Columbia University, Carl Vondrick Columbia University, Baishakhi Ray Columbia University
Deep networks are well-known to be fragile to adversarial attacks. The paper introduces a novel Triplet Loss Adversarial (TLA) regulation that is the first method that leverages metric learning to improve the robustness of deep networks. This method is inspired by the evidence that deep networks suffer from distorted feature space under adversarial attacks. The method increases the model robustness and efficiency for the detection of adversarial attacks significantly.
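The TLA regularizer has its own specific form given in the paper; purely as an illustration of the metric-learning ingredient it builds on, the standard triplet loss can be sketched as:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull the anchor embedding toward the positive example and push it
    # at least `margin` farther from the negative, in squared distance.
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))
```

Training with such a loss encourages embeddings of the same class to cluster and those of different classes to stay separated, which is the feature-space property the paper exploits for robustness.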
Efficient Symmetric Norm Regression via Linear Sketching
Zhao Song University of Washington, Ruosong Wang Carnegie Mellon University, Lin Yang Johns Hopkins University, Hongyang Zhang TTIC, Peilin Zhong Columbia University
The paper studies linear regression problems with general symmetric norm loss and gives efficient algorithms for solving such linear regression problems via sketching techniques.
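The paper's algorithms target general symmetric norms; the flavor of sketch-and-solve can be seen in the familiar l2 special case (a generic illustration with a Gaussian sketch, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def sketched_least_squares(A, b, sketch_rows):
    # Approximate argmin_x ||Ax - b||_2 by solving the much smaller
    # sketched problem argmin_x ||SAx - Sb||_2, where S compresses the
    # m rows of A down to `sketch_rows` random linear combinations.
    m = A.shape[0]
    S = rng.normal(size=(sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: a tall consistent system whose exact solution is x = (1, 2).
A = rng.normal(size=(2000, 2))
b = A @ np.array([1.0, 2.0])
x_hat = sketched_least_squares(A, b, sketch_rows=200)
```

The point of sketching is that the reduced problem has far fewer rows, so it can be solved much faster while preserving the solution up to small distortion.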
Rethinking Generative Coverage: A Pointwise Guaranteed Approach
Peilin Zhong Columbia University, Yuchen Mo Columbia University, Chang Xiao Columbia University, Pengyu Chen Columbia University, Changxi Zheng Columbia University
The paper presents a novel and formal definition of mode coverage for generative models. It also gives a boosting algorithm to achieve this mode coverage guarantee.
How Many Variables Should Be Entered in a Principal Component Regression Equation?
Ji Xu Columbia University, Daniel Hsu Columbia University
The researchers studied least-squares linear regression over $N$ uncorrelated Gaussian features that are selected in order of decreasing variance, where the number of selected features $p$ can be either smaller or greater than the sample size $n$. They give an average-case analysis of the out-of-sample prediction error as $p,n,N \to \infty$ with $p/N \to \alpha$ and $n/N \to \beta$, for constants $\alpha \in [0,1]$ and $\beta \in (0,1)$. In this average-case setting, the prediction error exhibits a "double descent" shape as a function of $p$. The analysis also establishes conditions under which the minimum risk is achieved in the interpolating ($p>n$) regime.
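As a toy illustration of the double-descent phenomenon (a simplification using iid features rather than the paper's variance-ordered setting), one can trace the out-of-sample error of min-norm least squares as the number of used features $p$ crosses the sample size $n$:

```python
import numpy as np

rng = np.random.default_rng(1)

def risk_curve(n=40, N=80, n_test=1000, ps=(10, 20, 40, 70)):
    # Min-norm least squares using only the first p of N Gaussian features.
    # The omitted features act as noise, so the test error typically spikes
    # near the interpolation threshold p = n and descends again for p > n.
    beta = rng.normal(size=N) / np.sqrt(N)
    X, Xt = rng.normal(size=(n, N)), rng.normal(size=(n_test, N))
    y, yt = X @ beta, Xt @ beta
    return {p: float(np.mean((Xt[:, :p] @ (np.linalg.pinv(X[:, :p]) @ y) - yt) ** 2))
            for p in ps}
```

Plotting the returned dictionary over a fine grid of $p$ values produces the characteristic peak at $p \approx n$ followed by a second descent.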
Adaptive Influence Maximization with Myopic Feedback
Binghui Peng Columbia University, Wei Chen Microsoft Research
The paper investigates the adaptive influence maximization problem and provides upper and lower bounds for the adaptivity gaps under the myopic feedback model. The results confirm a long-standing open conjecture by Golovin and Krause (2011).
Towards a Zero-One Law for Column Subset Selection
Zhao Song University of Washington, David Woodruff Carnegie Mellon University, Peilin Zhong Columbia University
The researchers studied low-rank matrix approximation with general loss function and showed that if the loss function has several good properties, then there is an efficient way to compute a good low-rank approximation. Otherwise, it could be hard to compute a good low-rank approximation efficiently.
Average Case Column Subset Selection for Entrywise l1-Norm Loss
The researchers studied how to compute a low-rank approximation to a given matrix under the entrywise l1-norm loss. They showed that if the given matrix can be decomposed into a low-rank matrix and a noise matrix satisfying a mild distributional assumption, a (1+ε) approximation to the optimal solution can be obtained.
A New Distribution on the Simplex with Auto-Encoding Applications
Andrew Stirn Columbia University, Tony Jebara Spotify, David Knowles Columbia University
The researchers developed a surrogate distribution for the Dirichlet that offers explicit, tractable reparameterization, the ability to capture sparsity, and has barycentric symmetry properties (i.e. exchangeability) equivalent to the Dirichlet. Previous works have used the Kumaraswamy distribution in a stick-breaking process to create a non-exchangeable distribution on the simplex. The method was improved by restoring exchangeability and demonstrating that approximate exchangeability is efficiently achievable. Lastly, the method was showcased in a variety of VAE semi-supervised learning tasks.
Discrete Flows: Invertible Generative Models of Discrete Data
Dustin Tran Google Brain, Keyon Vafa Columbia University, Kumar Agrawal Google AI Resident, Laurent Dinh Google Brain, Ben Poole Google Brain
While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. The researchers extend normalizing flows to discrete events, using a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Empirically, they find that discrete flows obtain competitive performance with or outperform autoregressive baselines on various tasks, including addition, Potts models, and language models.
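The change-of-variables formula is especially simple in the discrete case: a bijection merely permutes probability mass, so no Jacobian term appears. A minimal sketch with a modular-shift bijection (an illustrative toy, not the paper's parameterized flows):

```python
import numpy as np

def discrete_flow_forward(x, shift, K):
    # An invertible transform on {0, ..., K-1}: modular addition.
    return (x + shift) % K

def pushforward_pmf(p_x, shift, K):
    # Discrete change of variables: p_y(y) = p_x(f^{-1}(y)).
    # A bijection on a finite set just relabels outcomes, so the
    # probability mass is permuted and no Jacobian is needed.
    y = discrete_flow_forward(np.arange(K), shift, K)
    p_y = np.zeros(K)
    p_y[y] = p_x
    return p_y
```

Stacking several such invertible transforms, each with learnable parameters, is the basic recipe for a discrete flow.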
Characterization and Learning of Causal Graphs with Latent Variables from Soft Interventions
Murat Kocaoglu MIT-IBM Watson AI Lab IBM Research, Amin Jaber Purdue University, Karthikeyan Shanmugam MIT-IBM Watson AI Lab IBM Research NY, Elias Bareinboim Columbia University
This work is all about learning causal relationships – the classic aim of which is to characterize all possible sets that could produce the observed data. In the paper, the researchers provide a complete characterization of all possible causal graphs with observational and interventional data involving so-called 'soft interventions' on variables when the targets of soft interventions are known.
This work potentially could lead to discovery of other novel learning algorithms that are both sound and complete.
Identification of Conditional Causal Effects Under Markov Equivalence
Amin Jaber Purdue University, Jiji Zhang Lingnan University, Elias Bareinboim Columbia University
Causal identification is the problem of deciding whether a causal distribution is computable from a combination of qualitative knowledge about the underlying data-generating process, which is usually encoded in the form of a causal graph, and an observational distribution. Despite the obvious need for identifying causal effects throughout the data-driven sciences, in practice, finding the causal graph is a notoriously challenging task.
In this work, the researchers provide a relaxation of the requirement of having to specify the causal graph (based on substantive knowledge) and allow the input of the inference to be an equivalence class of causal graphs, which can be inferred from data. Specifically, they propose the first general algorithm to learn conditional causal effects entirely from data. This result is particularly useful for evaluating the impact of conditional plans and stochastic policies, which appear both in AI (in the context of reinforcement learning) and in the data-driven sciences.
Efficient Identification in Linear Structural Causal Models with Instrumental Cutsets
Daniel Kumor Purdue University, Bryant Chen Brex Inc., Elias Bareinboim Columbia University
Regression analysis is one of the most common tools used in modern data science. While there is a great understanding and powerful technology to perform regression analysis in high dimensional spaces, the output of such a method is purely associational and devoid of any causal interpretation.
The researchers studied the problem of identification of structural (causal) coefficients in linear systems (deciding whether regression coefficients are amenable to causal interpretation, etc). Building on a technique called instrumental variables, they developed a new method called Instrumental Cutset, which partitions the systems into tractable components such that identification can be decided more efficiently. The resulting algorithm was efficient and strictly more powerful than the current state-of-the-art methods.
\begin{document}
\begin{abstract}
In this paper, we prove a conditional H\"older stability estimate for the inverse spectral problem of the biharmonic operator. The proof employs the resolvent estimate and a Weyl-type law for the biharmonic operator which were obtained by the authors in \cite{LYZ}. This work extends nontrivially the result in \cite{stefanov} from the second order Schr\"{o}dinger operator to the fourth order biharmonic operator.
\end{abstract}
\maketitle
\section{Introduction}
The topic of meromorphic continuation of the outgoing resolvent and related resolvent estimates for elliptic operators is central in scattering theory (see e.g. \cite{DJ18, Dy15a, Zw17}). Physically, the poles of the meromorphic continuation are closely related to the scattering resonances, which appear in many research areas of mathematics, physics, and engineering. We refer to the monograph \cite{Dyatlov} for a comprehensive introduction to mathematical theory of this subject. Recently, the stability estimates for the inverse source problems were obtained in \cite{LYZ, LZZ} by using the holomorphic domain and an upper bound for the resolvent of the elliptic operator. Another application can be found in \cite{Cakoni} for a study on the duality between scattering poles and transmission eigenvalues in scattering theory. To further explore the applications of the scattering theory to other topics in the field of inverse problems, in this paper, we intend to study an inverse spectral problem for the biharmonic operator. The inverse spectral problem may be considered as an inverse boundary value problem. As a representative example, a fundamental work can be found in \cite{Uhlmann} on the Calder\'{o}n problem where the scattering theory played a crucial role.
\vskip0.15cm
We briefly review the existing literature on the inverse spectral problem for the Schr\"{o}dinger operator. The classical one-dimensional inverse spectral problem was studied in \cite{borg, levinson}. A uniqueness result was established in \cite{NSU} for the multi-dimensional problem by representing the Dirichlet-to-Neumann (DtN) map in terms of the spectral data. The uniqueness of the inverse spectral problem with partial spectral data was discussed in \cite{Isozaki}. For inverse spectral problems on Riemannian manifolds and in a periodic waveguide, we refer the reader to \cite{BK, Kurylev, KKL, Kian}. Stability of the inverse spectral problems was addressed in \cite{AS, stefanov}. Recent developments on numerical methods can be found in \cite{BXZ, XZ} for the one-dimensional inverse spectral problems.
\vskip0.15cm
Since there is already a vast amount of literature on the inverse spectral problems for the Schr\"{o}dinger operator, we wish to extend the results to higher order elliptic operators. The inverse problems of biharmonic operators have significant applications in various areas including the theory of vibration of beams, the hinged plate configurations and the scattering by grating stacks \cite{GGS, MMMP}. We refer the reader to \cite{ikehata, isakov} for some uniqueness results of the inverse problems of higher order elliptic operators. In \cite{katya}, the uniqueness with full or incomplete spectral data was studied for the elliptic operators of higher order with constant coefficients. However, to the best of our knowledge, there is no stability estimate so far for the inverse spectral problem of the elliptic operators of higher order.
\vskip0.15cm
This work is motivated by \cite{AS, Isozaki, stefanov}, which were concerned with the inverse spectral problem of determining the potential function of the Schr\"odinger operator from the spectral data consisting of the eigenvalues and normal derivatives of the eigenfunctions on the boundary. In \cite{Isozaki}, the author showed that even if a finite number of spectral data is unavailable, the potential can still be uniquely determined. The proof utilized an idea of the Born approximation in scattering theory. A stability theorem for the inverse spectral problem was obtained in \cite{AS} by using partial spectral data. The approach was to connect the hyperbolic DtN map associated with a hyperbolic equation with the DtN map of the stationary Schr\"odinger operator. The proof of the stability estimate was built upon \cite{Rakesh}, which studied an inverse problem for the wave equation by hyperbolic DtN map. Based on \cite{AS, Isozaki}, the authors proved in \cite{stefanov} the uniqueness result \cite[Theorem 2.1]{stefanov} by assuming that the spectral data are only known asymptotically for the Schr\"odinger operator. Moreover, a H\"older stability estimate was obtained in \cite[Theorem 2.2]{stefanov}, which assumes that a finite number of spectral data is not available. The proof of \cite[Theorem 2.2]{stefanov} combines the crucial integral identity introduced in \cite[Lemma 2.2]{Isozaki} and the method used in \cite{AS}. We also point out that the proofs in \cite{Isozaki, stefanov} rely on the resolvent estimate for the Schr\"odinger operator and a Weyl-type law is crucial in the proof of the stability estimate.
\vskip0.15cm
Recently, we proved an increasing stability estimate for the inverse source problem of the biharmonic operator \cite{LYZ}. Meanwhile, we obtained the resolvent estimate and a Weyl-type inequality for the biharmonic operator. As a consequence, we hope to extend the results in \cite{AS, Isozaki, stefanov} from the Schr\"odinger operator to the biharmonic operator. Clearly, the extension is nontrivial. Compared with the elliptic operators of second order, the biharmonic operator is more sophisticated. For instance, it is required to investigate two sets of the DtN maps and use more spectral data in order to study the inverse problems of the biharmonic operator. Moreover, the resolvent set and resolvent estimate of the biharmonic operator differ significantly from those of the Schr\"odinger operator. As pointed out in \cite{May}, the methods for the second order equations may not work for higher-order equations. The solutions of higher-order equations have more complicated properties. In this work, we prove a conditional H\"{o}lder stability for the inverse spectral problem of the biharmonic operator. The proof is based on a combination of Isozaki's representation formula (cf. Lemma \ref{Isozaki_bi}) and a Weyl-type law of the Dirichlet eigenvalue problem for the biharmonic operator with a potential (cf. Lemma \ref{eigenfunction_est1}).
\vskip0.15cm
Next we introduce some notations and state the main result of this paper.
\vskip0.15cm
Let $B_R = \{x\in\mathbb R^n ~:~ |x|< R\}$, where $n\geq 3$ is odd and $R>0$ is a constant. Denote by $\partial B_R$ the boundary of $B_R$. We consider the eigenvalue problem with the Navier boundary condition \[ \begin{cases} (\Delta^2 + V) \phi_k=\lambda_k\phi_k&\quad\text{in }B_R,\\ \Delta \phi_k = \phi_k=0&\quad\text{on }\partial B_R, \end{cases} \] where $\{\lambda_j, \phi_j\}_{j=1}^\infty$ denotes the positive increasing eigenvalues and orthonormal eigenfunctions.
\vskip0.15cm
Hereafter, the notation $a\lesssim b$ stands for $a\leq Cb,$ where $C>0$ is a generic constant which may change step by step in the proofs. The following Weyl-type law for the biharmonic operator with a potential given in Lemma \ref{eigenfunction_est1} is crucial in the proof of the stability: \begin{align}\label{weyl}
|\lambda_k| \sim k^{4/n}, \quad \|\partial_\nu\phi_k\|_{L^2(\partial B_R)}\lesssim k^{2/n}, \quad \|\partial_\nu(\Delta\phi_k)\|_{L^2(\partial B_R)}\lesssim k^{4/n}, \end{align} where $\nu$ is the unit outward normal vector to $\partial B_R$. We mention that the Weyl-type law \eqref{weyl} for the biharmonic operator was proved in \cite{LYZ} by using an argument of commutator, which would yield a sharper result than using only the standard elliptic regularity theory for the Schr\"odinger operator \cite[Lemma 2.5]{AS}. Consequently, this sharper Weyl-type law \eqref{weyl} leads to a better stability estimate for the inverse spectral problem of the biharmonic operator.
Consider an integer $m$ such that \[ m>n/4 + 1. \] It follows from \eqref{weyl} that both of the series \[
\sum_{k\geq 1}k^{-4m/n} \partial_\nu\phi_k \quad \text{and}\quad \sum_{k\geq 1}k^{-4m/n} \partial_\nu(\Delta\phi_k) \] converge absolutely in $L^2(\partial B_R)$.
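Indeed, the convergence follows from \eqref{weyl} by comparison with a $p$-series:
\[
\sum_{k\geq 1} k^{-4m/n}\,\big\|\partial_\nu(\Delta\phi_k)\big\|_{L^2(\partial B_R)}
\;\lesssim\; \sum_{k\geq 1} k^{-4m/n}\, k^{4/n}
\;=\; \sum_{k\geq 1} k^{-4(m-1)/n}\;<\;\infty,
\]
since $m>n/4+1$ is equivalent to $4(m-1)/n>1$; the series involving $\partial_\nu\phi_k$ is handled in the same way with the bound $k^{2/n}$.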
\vskip0.15cm
For two potential functions $V_1, V_2\in L^\infty(B_R)$, we denote the positive increasing eigenvalues and orthonormal eigenfunctions of $V_1$ and $V_2$ by $\{\lambda^{(1)}_j, \phi^{(1)}_j\}_{j=1}^\infty$ and $\{\lambda^{(2)}_j, \phi^{(2)}_j\}_{j=1}^\infty$, respectively. Let $E\geq 0$ be any fixed integer and define the spectral data discrepancy by \begin{align*}
\varepsilon_0 &= \max_{k\geq 1} |\lambda^{(1)}_{k + E} - \lambda^{(2)}_{k + E}|,\\
\varepsilon_1 &= \sum_{k\geq 1} k^{-4m/n} \|\partial_\nu\phi^{(1)}_{k + E} - \partial_\nu\phi^{(2)}_{k + E}\|_{L^2(\partial B_R)},\\
\varepsilon_2 &= \sum_{k\geq 1}k^{-4m/n} \|\partial_\nu(\Delta\phi^{(1)}_{k + E}) - \partial_\nu(\Delta\phi^{(2)}_{k + E})\|_{L^2(\partial B_R)}. \end{align*}
\vskip0.15cm
The following theorem concerns the stability of the inverse problem and is the main result of the paper.
\begin{theorem}\label{main} For $V_1, V_2\in L^\infty(B_R)$ satisfying $V:= V_1 - V_2\in H_0^1(B_R)$ and \[
\|V_1\|_{L^\infty(B_R)} + \|V_2\|_{L^\infty(B_R)} + \|V\|_{H_0^1(B_R)} \leq Q, \] there exist two constants $C = C(m, Q, n)$ and $0<\delta<1$ such that \begin{align}\label{stability}
\|V_1 - V_2\|_{L^2(B_R)}\leq C\varepsilon^\delta, \end{align} where $\varepsilon = \varepsilon_0 + \varepsilon_1+ \varepsilon_2$. \end{theorem}
The assumption $V:= V_1 - V_2\in H_0^1(B_R)$ will be used to control the high frequency tail of the Fourier transform of $V$. This is a commonly used argument in the study of the inverse problems (cf. \cite[Proof of Proposition 1]{Ales}, \cite[$(4.3)$]{Isakov1}, \cite{LYZ}).
\vskip0.15cm
The above result extends \cite[Theorem 2.2]{stefanov} from the Schr\"odinger operator to the biharmonic operator. It can be seen from \eqref{stability} that even if a finite number of spectral data is not available, the conditional H\"{o}lder stability can still be obtained, which clearly implies the uniqueness of the inverse spectral problem. Compared with \cite[Theorem 2.2]{stefanov}, the analysis of the biharmonic operator is more involved. Specifically, it is required to investigate two sets of the DtN maps and use more spectral data in order to study the inverse problems of the biharmonic operator. As a result, we must extend the crucial integral identity presented in \cite[Lemma 2.2]{Isozaki} and several important lemmas proved in \cite{AS} from the Schr\"odinger operator to the biharmonic operator. The extensions require the Weyl-type inequality \eqref{weyl} and the resolvent estimate for the biharmonic operator which were proved in \cite{LYZ}.
\vskip0.15cm
The paper is organized as follows. The two sets of DtN maps are introduced in Section \ref{DtN maps}. Section \ref{proof} is devoted to the proof of the stability. In the Appendix, we present the estimates of the resolvent and a Weyl-type law for the biharmonic operator.
\section{The DtN maps}\label{DtN maps}
In this section, we consider two families of the DtN maps and study their mapping properties. Let $V\in L^\infty(B_R)$ and $\lambda\notin \{\lambda_k\}_{k=1}^\infty$. Given any $f\in H^{3/2}(\partial B_R)$ and $g\in H^{-1/2}(\partial B_R)$, consider the boundary value problem \begin{align}\label{eqn} \begin{cases} H_V u - \lambda u = 0 &\quad \text{in}\, B_R,\\ u = f &\quad \text{on}\,\partial B_R,\\ \Delta u = g &\quad \text{on}\,\partial B_R, \end{cases} \end{align} where $H_V=\Delta^2+V$. Since $\lambda$ is not an eigenvalue, the problem has a unique weak solution $u\in H^2(B_R)$. We introduce two DtN maps \begin{align*} \Lambda_1(\lambda)&: f \rightarrow \partial_\nu u\vert_{\partial B_R},\\ \Lambda_2(\lambda)&: g\rightarrow \partial_\nu (\Delta u)\vert_{\partial B_R}, \end{align*} where $\Lambda_1(\lambda)$ and $\Lambda_2(\lambda)$ define bounded operators from $H^{3/2}(\partial B_R)$ to $H^{1/2}(\partial B_R)$ and from $H^{-1/2}(\partial B_R)$ to $H^{-3/2}(\partial B_R)$, respectively.
\vskip0.15cm
Next, we derive formal representations of $\Lambda_1(\lambda)$ and $\Lambda_2(\lambda)$ by using the spectral data. Multiplying both sides of \eqref{eqn} by $\phi_k$ and integrating by parts, we have \[ \int_{B_R} u \phi_k {\rm d}x = \frac{1}{\lambda_k - \lambda} \Big(\int_{\partial B_R} \partial_\nu\phi_k f {\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi_k)g{\rm d}s(y)\Big), \] which formally gives \[ u(x, \lambda) = \sum_{k=1}^\infty \phi_k \frac{1}{\lambda_k - \lambda} \Big(\int_{\partial B_R} \partial_\nu\phi_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi_k)g{\rm d}s(y)\Big), \quad x\in B_R. \] Thus, for $\lambda\notin \{\lambda_k\}_{k=1}^\infty$, the DtN maps can be represented by \begin{align*} \Lambda_1(\lambda)(f, g) &= \sum_{k = 1}^\infty \partial_\nu\phi_k\Big\vert_{\partial B_R} \frac{1}{\lambda_k - \lambda} \Big(\int_{\partial B_R} \partial_\nu\phi_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi_k)g{\rm d}s(y)\Big) \end{align*} and \begin{align*} \Lambda_2 (\lambda)(f, g) &= \sum_{k = 1}^\infty \partial_\nu(\Delta\phi_k)\Big\vert_{\partial B_R} \frac{1}{\lambda_k - \lambda} \Big( \int_{\partial B_R} \partial_\nu\phi_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi_k)g{\rm d}s(y)\Big).
\end{align*}
\vskip0.15cm
However, the series on the right hand side may not converge absolutely. It was shown in \cite[Lemma 2.6]{AS} that some higher order formal derivatives converge absolutely. Let
\begin{align*}
\Lambda_1^{(m)}(\lambda):= \frac{{\rm d}^m}{{\rm d}\lambda^m} \Lambda_1(\lambda),\quad
\Lambda_2^{(m)}(\lambda):= \frac{{\rm d}^m}{{\rm d}\lambda^m} \Lambda_2(\lambda).
\end{align*}
By the Weyl-type law \eqref{weyl}, for $m\gg 1$, the above two series of derivatives converge absolutely. More precisely, we have the following lemma.
\begin{lemma}\label{derivative}
For $m>n/4 + 1$ and $\lambda\notin \{\lambda_k\}_{k=1}^\infty$, the series
\begin{align*}
\Lambda^{(m)}_1(\lambda)(f,g)&= m!\sum_{k = 1}^\infty \partial_\nu\phi_k\Big\vert_{\partial B_R} \frac{1}{(\lambda_k - \lambda)^{m + 1}} \Big(\int_{\partial B_R} \partial_\nu\phi_k f{\rm d}s(y) \\ &\quad + \int_{\partial B_R} \partial_\nu(\Delta\phi_k)g{\rm d}s(y)\Big) \end{align*} and \begin{align*} \Lambda^{(m)}_2(\lambda)(f,g)&= m!\sum_{k = 1}^\infty \partial_\nu(\Delta\phi_k)\Big\vert_{\partial B_R} \frac{1}{(\lambda_k - \lambda)^{m + 1}} \Big( \int_{\partial B_R} \partial_\nu\phi_k f{\rm d}s(y) \\ &\quad+ \int_{\partial B_R} \partial_\nu(\Delta\phi_k )g{\rm d}s(y)\Big),
\end{align*} converge absolutely in $H^{1/2}(\partial B_R)$ and $H^{-3/2}(\partial B_R)$, respectively. Moreover, $\Lambda^{(m)}_1(\lambda)$ and $\Lambda^{(m)}_2(\lambda)$ can be extended to meromorphic families with poles at the eigenvalues.
\end{lemma}
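To see where the threshold $m > n/4 + 1$ comes from, we sketch the term-by-term bound; the estimates used are those of Lemma \ref{eigenfunction_est1} in the Appendix. For fixed $\lambda$ one has $|\lambda_k - \lambda| \gtrsim \lambda_k$ for large $k$, and the Weyl-type inequality $\lambda_k \sim k^{4/n}$ together with $\|\partial_\nu\phi_k\|_{L^2(\partial B_R)} \lesssim \lambda_k^{1/2} \sim k^{2/n}$ and $\|\partial_\nu(\Delta\phi_k)\|_{L^2(\partial B_R)} \lesssim \lambda_k \sim k^{4/n}$ shows that, for $f, g\in L^2(\partial B_R)$ say, the $k$-th term of the series for $\Lambda^{(m)}_2(\lambda)$ is bounded by
\[
\frac{\|\partial_\nu(\Delta\phi_k)\|^2_{L^2(\partial B_R)}}{|\lambda_k - \lambda|^{m + 1}} \big(\|f\|_{L^2(\partial B_R)} + \|g\|_{L^2(\partial B_R)}\big) \lesssim k^{\frac{8}{n} - \frac{4(m + 1)}{n}} = k^{-\frac{4(m - 1)}{n}},
\]
which is summable precisely when $4(m - 1) > n$, i.e. $m > n/4 + 1$; the series for $\Lambda^{(m)}_1(\lambda)$, whose terms carry the smaller factor $\|\partial_\nu\phi_k\|^2_{L^2(\partial B_R)} \lesssim k^{4/n}$, converges under the weaker condition $m > n/4$.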
Denote the DtN maps of $V_\alpha$ by $\Lambda_{\alpha, 1}, \Lambda_{\alpha, 2}, \alpha = 1, 2,$ respectively. The following lemma gives the mapping properties of the derivatives of the DtN maps. The proof is motivated by \cite[Lemma 2.32]{choulli}, which dates back to \cite[Lemma 2.3]{AS}. The lemma extends the result from the Laplace operator to the biharmonic operator.
\begin{lemma}\label{ddtn}
Assume that $\lambda\notin \{\lambda^{(1)}_k\}_{k=1}^\infty \cup \{\lambda^{(2)}_k\}_{k=1}^\infty$ and let $l$ be a positive integer. The following estimates hold: \begin{align*}
\|\Lambda_{1,1}^{(j)}(\lambda) - \Lambda_{2,1}^{(j)}(\lambda)\|_{\mathcal{L}(H^{\frac{3}{2}}(\partial B_R), \, H^{t_1}(\partial B_R))} &\lesssim \frac{1}{|\lambda|^{j + \sigma_1}},\\
\|\Lambda_{1, 2}^{(j)}(\lambda) - \Lambda_{2, 2}^{(j)}(\lambda)\|_{\mathcal{L}(H^{-\frac{1}{2}}(\partial B_R), \, H^{t_2}(\partial B_R))} &\lesssim \frac{1}{|\lambda|^{j + \sigma_2}}, \end{align*}
where $0\leq j\leq l$, $|\lambda|\geq 2Q$, and \begin{align*} \sigma_1 = \frac{1 - 2t_1}{4}, \quad - \frac{3}{2}\leq t_1 \leq \frac{1}{2}, \quad \sigma_2 = \frac{-3 - 2t_2}{4}, \quad -\frac{7}{2}\leq t_2 \leq -\frac{3}{2}. \end{align*} \end{lemma}
\begin{proof}
Let $f\in H^{3/2}(\partial B_R)$, $g\in H^{-1/2}(\partial B_R)$, and let $u_j$, $j = 1, 2$, be the solutions to the boundary value problems \begin{align*} \begin{cases} \Delta^2 u_j + V_j u_j - \lambda u_j = 0 &\quad \text{in}\, B_R,\\ u_j = f &\quad \text{on}\, \partial B_R,\\ \Delta u_j = g &\quad \text{on}\, \partial B_R. \end{cases} \end{align*} Let $u := u_1 - u_2$. A simple calculation yields \begin{align*} \begin{cases} \Delta^2 u + V_1 u - \lambda u = (V_2 - V_1)u_2 &\quad \text{in}\, B_R,\\ u = 0 &\quad \text{on}\, \partial B_R,\\ \Delta u = 0 &\quad \text{on}\, \partial B_R. \end{cases} \end{align*}
For $|\lambda|\geq 2Q$, multiplying both sides of the above equation by $u$ and integrating by parts, we obtain \begin{align}\label{2.61}
\|u\|_{L^2(B_R)}\lesssim \frac{1}{|\lambda|} \|u_2\|_{L^2(B_R)}. \end{align} It follows from Theorem \ref{regularity} that \[
\|u_2\|_{L^2(B_R)} \lesssim \|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}, \] which gives \begin{align}\label{2.62}
\|u\|_{L^2(B_R)}\lesssim \frac{1}{|\lambda|} \big(\|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}\big). \end{align}
Denote by $u^\prime(\lambda)$ and $u_j^\prime(\lambda)$ the derivatives of $u(\lambda)$ and $u_j(\lambda)$ with respect to $\lambda$. It can be verified that $u_2^\prime(\lambda)$
satisfies
\begin{align*} \begin{cases} \Delta^2 u_2^\prime(\lambda) + V_2 u_2^\prime(\lambda) - \lambda u_2^\prime(\lambda) = u_2 &\quad \text{in}\, B_R,\\ u_2^\prime(\lambda) = 0 &\quad \text{on}\, \partial B_R,\\ \Delta u_2^\prime(\lambda) = 0 &\quad \text{on}\, \partial B_R. \end{cases} \end{align*} Using similar arguments as \eqref{2.61}, we get \begin{align}\label{2.63}
\|u_2^\prime(\lambda)\|_{L^2(B_R)}\lesssim \frac{1}{|\lambda|} \big(\|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}\big). \end{align} Since $u^\prime(\lambda)$ satisfies \begin{align*} \begin{cases} \Delta^2 u^\prime(\lambda) + V_1 u^\prime(\lambda) - \lambda u^\prime(\lambda) = u(\lambda) + (V_2 - V_1)u^\prime_2(\lambda) &\quad \text{in}\, B_R,\\ u^\prime(\lambda) = 0 &\quad \text{on}\,\partial B_R,\\ \Delta u^\prime(\lambda) = 0 &\quad \text{on}\,\partial B_R, \end{cases} \end{align*} we have \[
\|u^\prime(\lambda)\|_{L^2(B_R)} \lesssim \frac{1}{|\lambda|} \| u(\lambda) + (V_2 - V_1)u^\prime_2(\lambda)\|_{L^2(B_R)}. \] Combining \eqref{2.62} and \eqref{2.63} leads to \begin{align}\label{2.64}
\|u^\prime(\lambda)\|_{L^2(B_R)} \lesssim \frac{1}{|\lambda|^2} (\|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}). \end{align}
On the other hand, it follows from the standard regularity results of elliptic equations that \begin{align*}
\|u^\prime(\lambda)\|_{H^4(B_R)} \lesssim |\lambda| \|u^\prime(\lambda)\|_{L^2(B_R)} + \|u(\lambda)\|_{L^2(B_R)}
+ \|u_2^\prime(\lambda)\|_{L^2(B_R)}, \end{align*} which gives \begin{align}\label{2.65}
\|u^\prime(\lambda)\|_{H^2(B_R)} \lesssim \frac{1}{|\lambda|} \big(\|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}\big). \end{align} Recalling the interpolation inequality \[
\|w\|_{H^s(B_R)} \lesssim \|w\|^{1 - s/2}_{L^2(B_R)} \|w\|^{s/2}_{H^2(B_R)}, \quad 0\leq s\leq 2, \, w\in H^2_0(B_R), \] we obtain from \eqref{2.64}--\eqref{2.65} that \begin{align*}
\|u^\prime(\lambda)\|_{H^s(B_R)} \lesssim \frac{1}{|\lambda|^{2 - s/2}}\big(\|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}\big), \quad 0\leq s\leq 2. \end{align*} Therefore, we have \begin{align*}
\|\partial_\nu u^\prime(\lambda)\|_{H^{s - 3/2}(\partial B_R)} &\lesssim \frac{1}{|\lambda|^{2 - s/2}}\big(\|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}\big), \quad 0\leq s\leq 2, \end{align*} and \begin{align*}
\|\partial_\nu (\Delta u^\prime(\lambda))\|_{H^{s - 7/2}(\partial B_R)} &\lesssim \frac{1}{|\lambda|^{2 - s/2}}\big(\|f\|_{H^{3/2}(\partial B_R)} + \|g\|_{H^{-1/2}(\partial B_R)}\big), \quad 0\leq s\leq 2, \end{align*} which completes the proof by letting $t_1 = s - 3/2$, $t_2 = s - 7/2$ and applying induction on $j$. \end{proof}
\section{Proof of the main result}\label{proof}
First we establish a representation formula of Isozaki type which links the potential function and the spectral data. A similar formula may be found in \cite[Lemma 2.2]{Isozaki} for the Schr\"{o}dinger operator. The result is closely related to scattering theory. Specifically, let $\varphi_{\omega}(x) = e^{{\rm i} \sqrt[4]{\lambda}\omega\cdot x}$ for $\lambda\in\mathbb{C}\backslash (-\infty, 0)$ with $\Im \sqrt[4]{\lambda}\geq 0,$ which may be considered as an incident plane wave with direction $\omega$ and wavenumber $\sqrt[4]{\lambda}$. Denote by $R_V(\lambda) = (H_V - \lambda)^{-1}$ the resolvent of $H_V$. Let $\Omega_\delta$ be the holomorphic domain of the resolvent $R_V(\lambda)$ obtained in Theorem \ref{bound_2}.
Define
\[
S(\omega, \theta) = -\sqrt{\lambda} \int_{\partial B_R} \Lambda_1(\lambda)(\varphi_{\omega})\varphi_{-\theta}
+ \Lambda_2(\lambda) (\varphi_{\omega}) \varphi_{-\theta}{\rm d}s(x), \quad \omega, \theta\in\mathbb S^{n - 1},
\] which may be regarded as the scattering matrix for the case of the biharmonic operator.
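We record a simple but useful observation: since $|\omega| = 1$, the plane wave $\varphi_{\omega}$ solves the free biharmonic equation exactly. Indeed, with the branch of $\sqrt[4]{\lambda}$ fixed above,
\[
\Delta\varphi_{\omega} = ({\rm i}\sqrt[4]{\lambda})^2\,\varphi_{\omega} = -\sqrt{\lambda}\,\varphi_{\omega}, \qquad \Delta^2\varphi_{\omega} = \lambda\,\varphi_{\omega},
\]
so that $(H_V - \lambda)\varphi_{\omega} = V\varphi_{\omega}$, and the same relations hold for $\varphi_{-\theta}$. This is the reason why $-V\varphi_{\omega}$ appears as the source term in the proof below.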
\begin{lemma}\label{Isozaki_bi} Assume that $\lambda\notin \{\lambda_k\}_{k=1}^\infty$. For $\lambda \in \Omega_\delta$ it holds that \begin{align*} S(\omega, \theta) = -\int_{B_R} V e^{{\rm i} \sqrt[4]{\lambda}(\omega - \theta)\cdot x} {\rm d}x - \int_{B_R} R_V(\lambda) (-V \varphi_{\omega}) V\varphi_{-\theta}{\rm d}x - 2\sqrt{\lambda} \int_{\partial B_R} \varphi_{\omega} \partial_\nu (\varphi_{-\theta}) {\rm d}s(x). \end{align*} \end{lemma}
\begin{proof} Consider the boundary value problem \begin{align*} \begin{cases} H_V u - \lambda u=0 &\quad \text{in}\, B_R,\\ u = 0 &\quad \text{on}\, \partial B_R,\\ \Delta u = 0 &\quad \text{on}\, \partial B_R, \end{cases} \end{align*} which admits only the trivial solution $u = 0$. Decompose $u$ as $u = \tilde{u} + \varphi_{\omega}$. Then we have $\tilde{u} = R_V(\lambda)(-V\varphi_{\omega})$. Moreover, $\tilde{u}$ satisfies the boundary value problem \begin{align*} \begin{cases} H_V\tilde{u} - \lambda \tilde{u} = -V\varphi_{\omega} &\quad \text{in}\,B_R,\\ \tilde{u} = -\varphi_{\omega} &\quad \text{on}\,\partial B_R,\\ \Delta \tilde{u} = -\Delta \varphi_{\omega} &\quad \text{on}\,\partial B_R. \end{cases} \end{align*}
Multiplying both sides of the above equation by $\varphi_{-\theta}$ and integrating by parts yield
\begin{align*}
&\int_{\partial B_R} \partial_\nu(\Delta\tilde{u}) \varphi_{-\theta} {\rm d}s(x)+ \int_{\partial B_R} \partial_\nu\tilde{u} \Delta\varphi_{-\theta}{\rm d}s(x)\\
&= -\int_{B_R}\big( V \tilde{u} \varphi_{-\theta} + V \varphi_{\omega} \varphi_{-\theta}\big) {\rm d}x + \int_{\partial B_R} \big(\Delta\tilde{u} \partial_\nu(\varphi_{-\theta}) + \tilde{u} \partial_\nu(\Delta(\varphi_{-\theta}))\big) {\rm d}s(x),
\end{align*}
The desired identity then follows from the definition of $S(\omega, \theta)$, the boundary values of $\tilde{u}$ and $\Delta\tilde{u}$, and the relations $\tilde{u} = R_V(\lambda)(-V\varphi_{\omega})$ and $\Delta\varphi_{-\theta} = -\sqrt{\lambda}\,\varphi_{-\theta}$, which completes the proof. \end{proof}
Define \begin{align*} \theta = c\eta + \frac{1}{2\zeta}\xi,\quad \omega =c \eta - \frac{1}{2\zeta}\xi, \quad \sqrt[4]{\lambda} = \zeta + {\rm i}, \end{align*} where the constant $c$ is chosen such that $\theta , \omega\in\mathbb S^{n - 1}$. Compared with \cite{Isozaki}, the difference comes from the fourth root of $\lambda$ instead of the square root of $\lambda$ due to the nature of the biharmonic operator. Denote by $S_\alpha(\omega, \theta)$ the above defined function $S$ corresponding to $V_\alpha$, where $ \alpha= 1, 2$. Using the resolvent estimate in Theorem \ref{bound_2} \begin{align}\label{resolvent}
\|R_V(\lambda)\|_{\mathcal{L}(L^2(B_R))} \lesssim \frac{1}{\sqrt{|\lambda|}}, \quad \lambda\in\Omega_\delta, \end{align} and a common technique to control the high frequency tail, we obtain the following lemma, which is useful in the proof of the main theorem. A similar procedure is also used in \cite{stefanov} for the Schr\"{o}dinger operator.
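Before stating the lemma, let us make explicit how the above parametrization encodes the Fourier transform of the potential. Here we assume, as is standard in Isozaki's approach, that $\eta$ is a unit vector orthogonal to $\xi$, so that the constant $c$ can indeed be chosen with $\omega, \theta\in\mathbb S^{n - 1}$. Then
\[
\omega - \theta = -\frac{\xi}{\zeta}, \qquad \sqrt[4]{\lambda}\,(\omega - \theta) = -(\zeta + {\rm i})\,\frac{\xi}{\zeta} = -\Big(\xi + \frac{{\rm i}}{\zeta}\xi\Big),
\]
so that, with the convention $\hat{V}(\xi) = \int_{B_R} V(x) e^{-{\rm i}\xi\cdot x}\,{\rm d}x$, the first term on the right-hand side of the identity in Lemma \ref{Isozaki_bi} is exactly $-\hat{V}(\xi + \frac{{\rm i}}{\zeta}\xi)$. This explains the left-hand side of \eqref{relation} below.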
\begin{lemma}\label{control} There exists $\zeta_0>1$ sufficiently large such that for $\zeta\geq\zeta_0$ \begin{align*}
\|V_1 - V_2\|^2_{L^2(B_R)}\lesssim \frac{1}{\zeta^{\frac{1}{n}}} + |\zeta|^{1/2} |S_1(\omega, \theta) - S_2(\omega, \theta)|^2. \end{align*} \end{lemma}
\begin{proof}
Denote the difference of the two unknown potentials by $V = V_1 - V_2$. Recall that $\sqrt[4]{\lambda} = \zeta + {\rm i}$ with $\zeta\geq 1$. By the integral identity in Lemma \ref{Isozaki_bi} and the resolvent estimate \eqref{resolvent}, we obtain
\begin{align}\label{relation}
|\hat{V} (\xi + \frac{\rm i}{\zeta}\xi)| \lesssim \frac{1}{\zeta^2} + |S_1(\omega, \theta) - S_2(\omega, \theta)|. \end{align} Let $f(t) = \hat{V} (\xi + \frac{{\rm i} t}{\zeta}\xi)$. A simple calculation yields \begin{align}\label{fv}
|\hat{V} (\xi)| &= \Big| \int_0^1 f^\prime(t){\rm d}t - f(1)\Big|\notag\\
&\leq |\hat{V}(\xi + \frac{\rm i}{\zeta}\xi)| + \frac{1}{\zeta} \sup_{0\leq t\leq 1} |\nabla \hat{V}(\xi + \frac{{\rm i }t}{\zeta}\xi)\cdot\xi|. \end{align} It follows from Fourier transform that \[
|\partial_{\xi_i} \hat{V}(\xi + \frac{{\rm i }t}{\zeta}\xi)| = |\widehat{x_i V}(\xi + \frac{{\rm i }t}{\zeta}\xi)|\lesssim e^{\frac{|\xi|}{\zeta}}\|V\|_{L^\infty(B_R)}, \quad 0<t<1, \] which, along with \eqref{fv}, gives \begin{align}\label{5.1}
|\hat{V}(\xi)| \lesssim |\hat{V}(\xi + \frac{\rm i}{\zeta}\xi)| + \frac{|\xi|}{\zeta} e^{\frac{|\xi|}{\zeta}} \|V\|_{L^\infty(B_R)}. \end{align}
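For clarity, we spell out the elementary computation behind the bound on $\widehat{x_i V}$ at the complexified frequency: for $x\in B_R$ and $0 < t < 1$,
\[
\Big|\widehat{x_i V}\Big(\xi + \frac{{\rm i}t}{\zeta}\xi\Big)\Big| = \Big|\int_{B_R} x_i V(x)\, e^{-{\rm i}\xi\cdot x}\, e^{\frac{t}{\zeta}\xi\cdot x}\,{\rm d}x\Big| \leq R\,|B_R|\, e^{\frac{tR|\xi|}{\zeta}}\, \|V\|_{L^\infty(B_R)},
\]
where $|B_R|$ denotes the volume of $B_R$. The factor $R$ in the exponent is harmless for the subsequent estimates, since only frequencies with $|\xi|/\zeta \leq \zeta^{1/(2n) - 1} \rightarrow 0$ are used below.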
Combining \eqref{relation} and \eqref{5.1}, we have \[
|\hat{V}(\xi)| \lesssim \frac{1}{\zeta^2} + \frac{|\xi|}{\zeta} e^{\frac{|\xi|}{\zeta}} + |S_1(\omega, \theta) - S_2(\omega, \theta)|. \]
Squaring the above inequality and integrating over the region $|\xi|\leq \zeta^{1/(2n)}$, we obtain \begin{align}\label{5.2}
\int_{|\xi|\leq\zeta^{1/(2n)}} |\hat{V}(\xi)|^2 {\rm d}\xi \lesssim \frac{1}{\zeta^{7/2}} + \frac{\zeta^{(2 + n)/(2n)}}{\zeta^2} e^{\zeta^{1/(2n) - 1}}
+ |\zeta|^{1/2} |S_1(\omega, \theta) - S_2(\omega, \theta)|^2. \end{align} On the other hand, since $V\in H_0^1(B_R)$, we have the following inequality where the high frequency tail of $\hat{V}(\xi)$ is bounded by the $H^1$ norm of $V$: \begin{align*}
\|\hat{V}\|^2_{L^2(\mathbb R^n)} &= \int_{\mathbb R^n} |\hat{V}(\xi)|^2 {\rm d}\xi\\
&= \int_{|\xi|\leq\zeta^{1/(2n)}} |\hat{V}(\xi)|^2 {\rm d}\xi + \int_{|\xi|>\zeta^{1/(2n)}} |\hat{V}(\xi)|^2 {\rm d}\xi\\
&\leq \int_{|\xi|\leq\zeta^{1/(2n)}} |\hat{V}(\xi)|^2 {\rm d}\xi + \frac{1}{\zeta^{1/n}}\int_{|\xi|>\zeta^{1/(2n)}} |\xi|^2 |\hat{V}(\xi)|^2 {\rm d}\xi\\
&\leq \int_{|\xi|\leq\zeta^{1/(2n)}} |\hat{V}(\xi)|^2 {\rm d}\xi + \frac{1}{\zeta^{1/n}}\int_{\mathbb R^n} (1 + |\xi|^2) |\hat{V}(\xi)|^2 {\rm d}\xi\\
&\leq \int_{|\xi|\leq\zeta^{1/(2n)}} |\hat{V}(\xi)|^2 {\rm d}\xi + \frac{1}{\zeta^{1/n}} \|V\|^2_{H^1(\mathbb R^n)}. \end{align*}
Since $\|V\|_{H^1(\mathbb R^n)}\leq Q$, by \eqref{5.2} and $\|V\|^2_{L^2(B_R)} = \|\hat{V}\|^2_{L^2(\mathbb R^n)}$, we obtain \begin{align*}
\|V\|^2_{L^2(B_R)}\lesssim \frac{1}{\zeta^{\frac{1}{n}}} + |\zeta|^{1/2} |S_1(\omega, \theta) - S_2(\omega, \theta)|^2, \end{align*} which completes the proof. \end{proof}
Below we prove the main theorem. Motivated by \cite[Theorem 2.2]{stefanov}, the proof employs Taylor's formula and a truncation of the DtN maps, techniques which were introduced in \cite[Proof of Proposition 2.1]{AS}. It is worth mentioning that our result is not a direct consequence of \cite[Theorem 2.2]{stefanov} and the proof is more involved, since we have to deal with the more sophisticated biharmonic operator and consider two sets of DtN maps and spectral data. Moreover, the resolvent and the Weyl-type law for the biharmonic operator differ significantly from those for the Schr\"odinger operator.
\vskip0.15cm
\begin{proof} Throughout the proof, we assume that $\lambda\in\Omega^{(1)}_\delta\cap\Omega^{(2)}_\delta$, where $\Omega^{(\alpha)}_\delta$ denotes the holomorphic domain obtained in Theorem \ref{bound_2} for the resolvent of $H_{V_\alpha}, \alpha = 1, 2,$ $\Re\lambda\notin \{\lambda^{(1)}_k\}_{k=1}^\infty \cup \{\lambda^{(2)}_k\}_{k=1}^\infty$ with $\Re\lambda>0$, and $\Im\lambda\geq 1$. This assumption is allowed due to the definition of $\Omega_\delta$ in Theorem \ref{bound_2}, which contains the first quadrant of the complex plane.
Let $V = V_1 - V_2$. By Lemma \ref{control}, for $\zeta\geq\zeta_0$ where $\zeta_0>1$ is sufficiently large, we have \begin{align*}
\|V\|^2_{L^2(B_R)}\lesssim \frac{1}{\zeta^{\frac{1}{n}}} + |\zeta|^{1/2} |S_1(\omega, \theta) - S_2(\omega, \theta)|^2. \end{align*}
Next we estimate $|S_1(\omega, \theta) - S_2(\omega, \theta)|^2$ by the two sets of DtN maps $\|\Lambda_{1, 1}(\lambda) - \Lambda_{2, 1}( \lambda)\|_1$ and $\|\Lambda_{1, 2}(\lambda) - \Lambda_{2, 2}(\lambda)\|_2$, where $\|\cdot\|_1$ and $\|\cdot\|_2$ stand for the norms in $\mathcal{L}(H^{3/2}(\partial B_R), L^2(\partial B_R))$ and $\mathcal{L}(H^{-1/2}(\partial B_R), H^{-3/2}(\partial B_R))$, respectively, by choosing $t_1 = 0 $ and $t_2 = -\frac{3}{2}$ in Lemma \ref{ddtn}. Using the estimates \[
\|\varphi_{\omega}\|_{H^{3/2}(\partial B_R)}\lesssim \zeta^{3/2}, \quad \|\varphi_{\omega}\|_{H^{-1/2}(\partial B_R)}\leq C,
\quad \|\varphi_{-\theta}\|_{H^{3/2}(\partial B_R)}\leq \zeta^{3/2}, \] one has \begin{align*}
|S_1(\omega, \theta) - S_2(\omega, \theta)|\lesssim \zeta^{7/2} \Big(\|\Lambda_{1,1}(\lambda) - \Lambda_{2,1}(\lambda)\|_1 + \|\Lambda_{1,2}(\lambda) -
\Lambda_{2,2}(\lambda)\|_2 \Big). \end{align*} Then we get from Lemma \ref{control} that \begin{align}\label{5.4}
\|V\|^2_{L^2(B_R)} \lesssim \frac{1}{\zeta^{\frac{1}{n}}} + \zeta^{15/2} \Big(\|\Lambda_{1,1}(\lambda) - \Lambda_{2,1}(\lambda)\|^2_1
+ \|\Lambda_{1,2}(\lambda) - \Lambda_{2,2}(\lambda)\|^2_2\Big). \end{align}
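The exponent $15/2$ in \eqref{5.4} is pure bookkeeping: Lemma \ref{control} contributes the factor $|\zeta|^{1/2}$ in front of $|S_1 - S_2|^2$, while each factor $|S_1 - S_2|$ contributes $\zeta^{7/2}$ by the preceding estimate, so that
\[
|\zeta|^{1/2}\,|S_1(\omega, \theta) - S_2(\omega, \theta)|^2 \lesssim \zeta^{\frac{1}{2} + 7}\Big(\|\Lambda_{1,1}(\lambda) - \Lambda_{2,1}(\lambda)\|_1 + \|\Lambda_{1,2}(\lambda) - \Lambda_{2,2}(\lambda)\|_2\Big)^2,
\]
and $\frac{1}{2} + 7 = \frac{15}{2}$.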
In what follows we study $\Lambda_{\alpha, 1}( \lambda), \alpha= 1, 2$. We fix a positive integer $E$ and decompose $\Lambda_{\alpha, 1}(\lambda)$ and $\Lambda_{\alpha, 2}(\lambda)$ into a sum of a finite series and an infinite one as follows: \begin{align*} \Lambda_{\alpha, 1} (\lambda) = \tilde{\Lambda}_{\alpha, 1} (\lambda) + \hat{\Lambda}_{\alpha, 1} (\lambda),\\ \Lambda_{\alpha, 2}( \lambda) = \tilde{\Lambda}_{\alpha, 2} (\lambda) + \hat{\Lambda}_{\alpha, 2}( \lambda), \end{align*} where \begin{align*} \tilde{\Lambda}_{\alpha, 1} (\lambda)(f, g) &= \sum_{k>E} \partial_\nu\phi^{(\alpha)}_k\Big\vert_{\partial B_R} \frac{1}{\lambda^{(\alpha)}_k - \lambda} \Big(\int_{\partial B_R} \partial_\nu\phi^{(\alpha)}_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(\alpha)}_k)g{\rm d}s(y)\Big),\\ \hat{\Lambda}_{\alpha, 1} (\lambda)(f, g) &= \sum_{k\leq E} \partial_\nu\phi^{(\alpha)}_k\Big\vert_{\partial B_R} \frac{1}{\lambda^{(\alpha)}_k - \lambda} \Big(\int_{\partial B_R} \partial_\nu\phi^{(\alpha)}_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(\alpha)}_k)g{\rm d}s(y)\Big), \end{align*} and \begin{align*} \tilde{\Lambda}_{\alpha, 2} (\lambda)(f, g) &= \sum_{k>E} \partial_\nu(\Delta\phi^{(\alpha)}_k)\Big\vert_{\partial B_R} \frac{1}{\lambda^{(\alpha)}_k - \lambda} \Big(\int_{\partial B_R} \partial_\nu\phi^{(\alpha)}_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(\alpha)}_k)g{\rm d}s(y)\Big),\\ \hat{\Lambda}_{\alpha, 2} (\lambda) (f, g)&= \sum_{k\leq E} \partial_\nu(\Delta\phi^{(\alpha)}_k)\Big\vert_{\partial B_R} \frac{1}{\lambda^{(\alpha)}_k - \lambda} \Big(\int_{\partial B_R} \partial_\nu\phi^{(\alpha)}_kf{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(\alpha)}_k)g{\rm d}s(y)\Big). \end{align*}
First let us consider the derivatives $\hat{\Lambda}^{(j)}_{\alpha, d}(\lambda)$ for $d = 1, 2$. Since $\lambda_k\lesssim k^{4/n}$ for all $k\geq 1$, we have the following estimate when $E^{4/n}\lesssim \Re \lambda$: \begin{align}\label{5.7}
\|\hat{\Lambda}^{(j)}_{\alpha, d}(\lambda)\|_d\lesssim\frac{1}{(\Re \lambda)^{j +1}}, \quad j\geq 0. \end{align} In particular, for some sufficiently large $\zeta_0\geq 1$ depending on $E$, we obtain from \eqref{5.7} that when $\Re\lambda = \mathcal{O}(\zeta^4)$ \begin{align}\label{5.8}
\|\hat{\Lambda}_{\alpha, d} (\lambda)\|_d\lesssim \frac{1}{\zeta^4}, \quad \zeta\geq\zeta_0. \end{align} Combining \eqref{5.4} and \eqref{5.8} gives \begin{align}\label{5.9}
\|V\|^2_{L^2(B_R)} &\lesssim \frac{1}{\zeta^{\frac{1}{n}}} + \frac{1}{\zeta^{\frac{1}{n}}} \notag\\
& \quad + \zeta^{15/2} \Big(\|\tilde{\Lambda}_{1,1}(\lambda) - \tilde{\Lambda}_{2,1}(\lambda)\|^2_1
+ \|\tilde{\Lambda}_{1,2}(\lambda) - \tilde{\Lambda}_{2,2}(\lambda)\|^2_2\Big)\notag\\
&\lesssim \frac{1}{\zeta^{\frac{1}{n}}} + \zeta^{15/2} \Big(\|\tilde{\Lambda}_{1,1}(\lambda) - \tilde{\Lambda}_{2,1}(\lambda)\|^2_1
+ \|\tilde{\Lambda}_{1,2}(\lambda) - \tilde{\Lambda}_{2,2}(\lambda)\|^2_2\Big). \end{align} Using Lemma \ref{ddtn} with $t_1 = 0, t_2 = -2$ and \eqref{5.7}, we have for $d = 1, 2$ that \begin{align}\label{5.11}
\|\tilde{\Lambda}_{1,d}^{(j)}(\lambda) - \tilde{\Lambda}_{2,d}^{(j)}(\lambda)\|_d \lesssim \frac{1}{(\Re \lambda)^{j + \sigma}}, \end{align} where $\lambda\in\mathbb{C}, \Re \lambda\geq 2Q, m> 1 + \frac{n}{4}, 0\leq j \leq m$ and $\sigma = \min\{\sigma_1, \sigma_2\}$. Here the constants $\sigma_1$ and $\sigma_2$ are given in Lemma \ref{ddtn}.
Hereafter we assume $\zeta\gg 1$ and $\Re\lambda\geq 2Q$. Let $\tilde{\lambda} = \lambda + T$, where $T>0$ is a real parameter to be chosen later. Following \cite[Proof of Proposition 2.1]{AS}, by Taylor's formula, we have for $d = 1, 2$ that \begin{align}\label{5.12} \tilde{\Lambda}_{\alpha, d}(\lambda) &= \sum_{k = 0}^{m -1} \frac{(\lambda - \tilde{\lambda})^k}{k!} \tilde{\Lambda}^{(k)}_{\alpha, d}(\tilde{\lambda}) + \int_0^1 \frac{(1 - s)^m(\lambda - \tilde{\lambda})^m}{(m - 1)!} \tilde{\Lambda}^{(m)}_{\alpha, d} (\tilde{\lambda} + s(\lambda - \tilde{\lambda})) {\rm d}s\notag\\ &:= I_{\alpha, d}(\lambda) + R_{\alpha, d}( \lambda). \end{align} Since $\Re\tilde{\lambda} = \Re\lambda + T > T$, an application of \eqref{5.11} leads to \begin{align}\label{5.13}
\|I_{1, d}(\lambda) - I_{2, d}(\lambda)\|_d\lesssim \frac{1}{T^\sigma}. \end{align}
Next we study $R_{\alpha, 1}( \lambda), \alpha = 1, 2$. We start with $\tilde{\Lambda}^{(m)}_{\alpha, 1}(\lambda)$ appearing in the integral of $R_{\alpha, 1}(\lambda)$. We know from Lemma \ref{derivative} that, up to a constant multiple, \begin{align*} \tilde{\Lambda}^{(m)}_{\alpha, 1}(\lambda)f&= \sum_{k>E} \partial_\nu\phi^{(\alpha)}_k\Big\vert_{\partial B_R} \frac{1}{(\lambda^{(\alpha)}_k - \lambda)^{m + 1}} \\ &\quad \times\Big(\int_{\partial B_R} \partial_\nu\phi^{(\alpha)}_kf{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(\alpha)}_k)g{\rm d}s(y)\Big). \end{align*}
For simplicity we denote $\tilde{\lambda} + s(\lambda - \tilde{\lambda}) = \lambda + (1-s)T$ appearing in $\tilde{\Lambda}^{(m)}_{\alpha, d} (\tilde{\lambda} + s(\lambda - \tilde{\lambda})) $ by $\beta = \beta(s)$. We further let \[ E_\alpha(\lambda) = \max \{j\geq E ; c\lambda_{j + 1}^{(\alpha)}< \Re\lambda\}, \] where $c$ is any positive constant satisfying $0<c<1$. Let $E(\lambda) = \max\{E_1(\lambda), E_2(\lambda)\}$. For sufficiently large $\Re\lambda$, we decompose the series in $\tilde{\Lambda}^{(m)}_{\alpha, 1}(\beta)f$ as a sum of a finite one and an infinite one in the following way: \[ \tilde{\Lambda}^{(m)}_{\alpha, 1}(\beta)f = \tilde{\Lambda}^{(m)}_{\alpha, 1, 1}(\beta)f + \tilde{\Lambda}^{(m)}_{\alpha, 1, 2}(\beta)f, \] where \begin{align*} \tilde{\Lambda}^{(m)}_{\alpha, 1, 1}( \beta)& = \sum_{k=E+1}^{E(\lambda)} \partial_\nu\phi^{(\alpha)}_k\Big\vert_{\partial B_R} \frac{1}{(\lambda^{(\alpha)}_k - \beta)^{m + 1}} \\ &\quad \times\Big(\int_{\partial B_R} \partial_\nu\phi^{(\alpha)}_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(\alpha)}_k)g{\rm d}s(y)\Big) \end{align*} and \begin{align*} \tilde{\Lambda}^{(m)}_{\alpha, 1, 2}(\beta) &= \sum_{k>E(\lambda)} \partial_\nu\phi^{(\alpha)}_k\Big\vert_{\partial B_R} \frac{1}{(\lambda^{(\alpha)}_k - \beta)^{m + 1}} \\ &\quad\times \Big(\int_{\partial B_R} \partial_\nu\phi^{(\alpha)}_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(\alpha)}_k)g{\rm d}s(y)\Big). 
\end{align*} Following \cite[Proof of Proposition 2.1]{AS}, we further make the decomposition \[ \tilde{\Lambda}^{(m)}_{1, 1, 1}(\beta) - \tilde{\Lambda}^{(m)}_{2, 1, 1}(\beta) = L_1 + L_2 + L_3, \] where \begin{align*} L_1f &= \sum_{k=E+1}^{E(\lambda)} \partial_\nu\phi^{(1)}_k\Big(\frac{1}{(\lambda^{(1)}_k - \beta)^{m + 1}} - \frac{1}{(\lambda^{(2)}_k - \beta)^{m + 1}} \Big)\\ &\quad \times\Big(\int_{\partial B_R} \partial_\nu\phi^{(1)}_k f{\rm d}s(y) + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(1)}_k)g{\rm d}s(y)\Big),\\ L_2f&= \sum_{k=E+1}^{E(\lambda)} \frac{ \partial_\nu\phi^{(1)}_k}{(\lambda^{(2)}_k - \beta)^{m + 1}} \Big(\int_{\partial B_R} (\partial_\nu\phi^{(1)}_k - \partial_\nu\phi^{(2)}_k) f{\rm d}s(y) \\ &\quad + \int_{\partial B_R} ( \partial_\nu(\Delta\phi^{(1)}_k) - \partial_\nu(\Delta\phi^{(2)}_k)) g{\rm d}s(y) \Big),\\ L_3f&= \sum_{k=E+1}^{E(\lambda)} \frac{1}{(\lambda^{(2)}_k - \beta)^{m + 1}} \Big(\int_{\partial B_R} \partial_\nu\phi^{(2)}_kf{\rm d}s(y) \\ &\quad + \int_{\partial B_R} \partial_\nu(\Delta\phi^{(2)}_k)g{\rm d}s(y)\Big) (\partial_\nu\phi^{(1)}_k - \partial_\nu\phi^{(2)}_k). \end{align*}
When $\varrho>\frac{8}{n} + 1$, we have from a simple calculation that \begin{align*}
\|L_1\|&\lesssim \frac{E(\lambda)^\varrho}{|\Im\lambda|^{m + 2}}\varepsilon_0\Big( \sum_{k=E+1}^{E(\lambda)} k^{-\varrho}\|\partial_\nu\phi^{(1)}_k\|^2_{L^2(\partial B_R)}\\
&\quad+ \sum_{k=E+1}^{E(\lambda)} k^{-\varrho}\|\partial_\nu\phi^{(1)}_k\|_{L^2(\partial B_R)}
\|\partial_\nu(\Delta\phi^{(1)}_k)\|_{L^2(\partial B_R)}\Big). \end{align*} We point out that the assumption $\Im\lambda\geq 1$ is useful in deriving the above estimate, since there may not be a uniform gap between adjacent eigenvalues. Using the following Weyl-type inequality \cite{LYZ}: \begin{align*}
\|\partial_\nu\phi^{(\alpha)}_k\|_{L^2(\partial B_R)}\lesssim k^{2/n}, \quad \|\partial_\nu(\Delta\phi^{(\alpha)}_k)\|_{L^2(\partial B_R)}\lesssim k^{4/n}, \end{align*} we get \begin{align*}
& \sum_{k=E+1}^{E(\lambda)} k^{-\varrho}\|\partial_\nu\phi^{(1)}_k\|^2_{L^2(\partial B_R)}+ \sum_{k=E+1}^{E(\lambda)} k^{-\varrho}\|\partial_\nu\phi^{(1)}_k\|_{L^2(\partial B_R)}
\|\partial_\nu(\Delta\phi^{(1)}_k)\|_{L^2(\partial B_R)}\\ &\lesssim \sum_{k\geq 1} k^{-\varrho + 6/n}. \end{align*} Thus, we obtain the estimate \[
\|L_1\|\lesssim \frac{E(\lambda)^\varrho}{|\Im\lambda|^{m + 2}} \varepsilon_0. \]
Denote the discrepancies of the two sets of spectral data by \begin{align*}
\varepsilon_1 &= \sum_{k\geq 1} k^{-4m/n} \|\partial_\nu\phi^{(1)}_{k + E} - \partial_\nu\phi^{(2)}_{k + E}\|_{L^2(\partial B_R)},\\
\varepsilon_2 &= \sum_{k\geq 1} k^{-4m/n} \|\partial_\nu(\Delta\phi^{(1)}_{k + E}) - \partial_\nu(\Delta\phi^{(2)}_{k + E})\|_{L^2(\partial B_R)}. \end{align*} Similarly, we may show that \begin{align*}
\|L_2\| &\lesssim \frac{E(\lambda)^{4m/n + 2/n}}{|\Im\lambda|^{m + 1}} ( \varepsilon_1 + \varepsilon_2),\\
\|L_3\| &\lesssim \frac{E(\lambda)^{4m/n + 4/n}}{|\Im\lambda|^{m + 1}} \varepsilon_1. \end{align*} Letting $ \varepsilon = \varepsilon_0 + \varepsilon_1 + \varepsilon_2$, we have \[
\|L_1\| + \|L_2\| + \|L_3\| \lesssim \frac{E(\lambda)^\varrho + E(\lambda)^{4(m + 1)/n}}{|\Im\lambda|^{m + 1}} \varepsilon. \] Choosing $\varrho = 4(m + 1)/n$ and recalling that $m> 1 + \frac{n}{4}$ we have $\varrho>\frac{8}{n} + 1$, which gives \begin{align}\label{5.14}
\|\tilde{\Lambda}^{(m)}_{1, 1, 1}(\beta) - \tilde{\Lambda}^{(m)}_{2, 1, 1}(\beta)\|_1\lesssim \frac{E(\lambda)^{4(m + 1)/n}}{|\Im\lambda|^{m + 1}}\varepsilon. \end{align}
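The numerical condition on $\varrho$ used above is easily checked: with the choice $\varrho = 4(m + 1)/n$ and the assumption $m > 1 + n/4$,
\[
\varrho = \frac{4(m + 1)}{n} > \frac{4\big(1 + \frac{n}{4}\big) + 4}{n} = \frac{8 + n}{n} = \frac{8}{n} + 1.
\]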
From the inequality \[
|\lambda^{(\alpha)}_k - \beta|\geq \lambda^{(\alpha)}_k - \Re\lambda + (1 - s)T \geq \lambda^{(\alpha)}_k - \Re\lambda \geq (1 - c)\lambda^{(\alpha)}_k, \] we obtain \[
|\lambda^{(\alpha)}_k - \beta|^{-(m + 1)} \lesssim \frac{1}{({\lambda^{(\alpha)}_k})^{m + 1}}\lesssim \frac{1}{(k^{\frac{4}{n}})^{m + 1}}\lesssim k^{-4m/n}. \] Therefore, using similar arguments above by decomposing $\tilde{\Lambda}^{(m)}_{1, 1, 2}(\lambda) - \tilde{\Lambda}^{(m)}_{2, 1, 2}(\lambda)$ into three parts, we can obtain \[
\|\tilde{\Lambda}^{(m)}_{1, 1, 2}(\lambda) - \tilde{\Lambda}^{(m)}_{2, 1, 2}(\lambda)\|_1\lesssim\varepsilon, \] which, together with \eqref{5.14}, implies \[
\|\tilde{\Lambda}^{(m)}_{1,1}(\lambda) - \tilde{\Lambda}^{(m)}_{2,1}(\lambda)\|_1\lesssim E(\lambda)^{4(m + 1)/n}\varepsilon. \] From the definition of $E(\lambda)$ we obtain \[ E(\lambda)^{4/n}\lesssim\lambda^{(\alpha)}_{E(\lambda)}\lesssim \frac{1}{c}\Re\lambda. \] As a consequence, it holds that \[
\|\tilde{\Lambda}^{(m)}_{1, 1}(\lambda) - \tilde{\Lambda}^{(m)}_{2,1}( \lambda)\|_1\lesssim (\Re\lambda)^{m + 1}\varepsilon, \] which, together with $\Re\lambda = \mathcal{O}(\zeta^4)$, gives \[
\|\tilde{\Lambda}^{(m)}_{1,1}(\lambda) - \tilde{\Lambda}^{(m)}_{2,1}(\lambda)\|_1\lesssim \zeta^{4(m + 1)}\varepsilon. \] Then \begin{align}\label{5.15}
\|R_{1,1}(\lambda) - R_{2,1}(\lambda)\|_1 \lesssim T^m\zeta^{4(m + 1)}\varepsilon. \end{align}
Combining \eqref{5.12}, \eqref{5.13} and \eqref{5.15} leads to \[
\|\tilde{\Lambda}_{1,1}(\lambda) - \tilde{\Lambda}_{2,1}(\lambda)\|_1\lesssim \frac{1}{T^\sigma} + T^m\zeta^{4(m + 1)}\varepsilon. \] Using similar arguments, we obtain \begin{align*}
\|R_{1,2}(\lambda) - R_{2,2}(\lambda)\|_2 \lesssim T^m\zeta^{4(m + 1)}\varepsilon, \end{align*} which gives \[
\|\tilde{\Lambda}_{1,2}(\lambda) - \tilde{\Lambda}_{2,2}(\lambda)\|_2\lesssim \frac{1}{T^\sigma} + T^m\zeta^{4(m + 1)}\varepsilon. \] Substituting the above estimates into \eqref{5.9} yields \begin{align*}
\|V\|^2_{L^2(B_R)} \lesssim \frac{1}{\zeta^{\frac{1}{n}}} + \zeta^{15/2}\Big(\frac{1}{T^{2\sigma}} + T^{2m}\zeta^{8(m + 1)}\varepsilon^2\Big). \end{align*} Taking $T = (\Re\lambda)^\varsigma$ where $\varsigma = 1/\sigma$ gives \begin{align*}
\|V\|^2_{L^2(B_R)} \lesssim \frac{1}{\zeta^{\frac{1}{n}}} + \zeta^{15/2 + 8\varsigma m + 8(m + 1)}\varepsilon^2. \end{align*} Using the standard minimization with respect to $\zeta$ (cf. \cite[Proof of Proposition 2.1]{AS}), we obtain the stability estimate \[
\|V\|^2_{L^2(B_R)} \lesssim \varepsilon^{2\delta},\quad \delta = \frac{1}{16n(2 + \varsigma m + m )}, \] which completes the proof of Theorem \ref{main}. \end{proof}
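We conclude with a remark on the final minimization step. Writing the last bound as $\|V\|^2_{L^2(B_R)} \lesssim \zeta^{-1/n} + \zeta^K\varepsilon^2$ with $K = 15/2 + 8\varsigma m + 8(m + 1)$, the two terms are balanced by choosing
\[
\zeta = \varepsilon^{-\frac{2n}{nK + 1}}, \qquad \text{which gives} \qquad \|V\|^2_{L^2(B_R)} \lesssim \varepsilon^{\frac{2}{nK + 1}},
\]
provided $\varepsilon$ is small enough that $\zeta \geq \zeta_0$; for larger $\varepsilon$ the estimate \eqref{stability} holds trivially after enlarging the constant $C$. The exponent $\delta = \frac{1}{16n(2 + \varsigma m + m)}$ is a convenient lower bound for $\frac{1}{nK + 1}$, so that $\varepsilon^{\frac{2}{nK + 1}} \leq \varepsilon^{2\delta}$ for $\varepsilon \leq 1$.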
\appendix
\section{Useful estimates}
\begin{theorem}\label{regularity} Let $u\in H^2(B_R)$ be a weak solution of the following boundary value problem with the Navier boundary condition: \begin{align*} \begin{cases} H_V u = F \quad &\text{in}\,B_R,\\ u = f \quad \quad & \text{on}\,\partial B_R,\\ \Delta u = g \quad & \text{on}\,\partial B_R, \end{cases} \end{align*} where $H_V = \Delta^2 + V$ and $0$ is not an eigenvalue of $H_V$. Then \[
\|u\|_{H^2(B_R)} \lesssim \|F\|_{L^2(B_R)} + \|f\|_{H^{\frac{3}{2}}(\partial B_R)} + \|g\|_{H^{-\frac{1}{2}}(\partial B_R)}. \] \end{theorem}
The following lemma gives an estimate for the normal derivatives of the eigenfunctions on $\partial B_R$ and a Weyl-type inequality for the eigenvalues.
\begin{lemma}\label{eigenfunction_est1} The following estimate holds in $\mathbb R^n$: \begin{align}\label{boundary_estimate_2}
\|\partial_\nu \phi_k\|_{L^2(\partial B_R)}\leq C\lambda_k^{\frac{1}{2}},\quad \|\partial_\nu (\Delta\phi_k)\|_{L^2(\partial B_R)}\leq C\lambda_k, \end{align} where the positive constant $C$ is independent of $k$. Moreover, the following Weyl-type inequality holds for the eigenvalues $\{\lambda_k\}_{k=1}^\infty$: \begin{align}\label{weyl_1} E_1 k^{4/n}\leq \lambda_k\leq E_2 k^{4/n}, \end{align} where $E_1$ and $E_2$ are two positive constants independent of $k$. \end{lemma}
\begin{proof} We begin with the estimate \eqref{boundary_estimate_2} for the eigenfunctions on the boundary. Let $u$ be an eigenfunction with eigenvalue $\lambda$ such that \begin{align*} \begin{cases} H_V u = \lambda u\quad &\text{in}\, B_R,\\ u = \Delta u = 0 \quad &\text{on}\, \partial B_R. \end{cases} \end{align*} Define a differential operator \[
A = \frac{1}{2}(x\cdot \nabla+ \nabla\cdot x) = x\cdot\nabla + \frac{n}{2} = |x|\partial_\nu + \frac{n}{2}. \] Denote the commutator of two differential operators by $[\cdot, \, \cdot]$ such that $[O_1, O_2] = O_1 O_2 - O_2 O_1$ for two differential operators $O_1$ and $O_2$. Then we have \begin{align}\label{com} [\Delta^k, A] = 2k\Delta^k, \quad k\in\mathbb N^+. \end{align} Denote $B = A\Delta$. A simple calculation gives \begin{align*} &\int_{B_R} u [H_V, B] u {\rm d}x = \int_{B_R} \left(u (\Delta^2 + V) (Bu) - u B (\Delta^2 + V) u \right){\rm d}x\\ &= \int_{B_R} (\Delta^2 u + Vu - \lambda u) Bu {\rm d}x+ \int_{\partial B_R}\left( u\partial_\nu(\Delta Bu) - \partial_\nu u\Delta(Bu) \right) {\rm d}s\\ &\quad + \int_{\partial B_R}\left( \Delta u\partial_\nu(Bu) - \partial_\nu (\Delta u)Bu \right) {\rm d}s\\ &= - \int_{\partial B_R} \left(\partial_\nu u\Delta(Bu) + \partial_\nu (\Delta u)Bu\right) {\rm d}s\\
&= - \int_{\partial B_R}\left( \partial_\nu u\Delta(Bu) + R|\partial_\nu (\Delta u)|^2\right){\rm d}s, \end{align*} where we have used $u = \Delta u = 0$ on $\partial B_R$ and Green's formula. By \eqref{com}, we have \[ \Delta (Bu) = \Delta A\Delta u = (A\Delta + 2\Delta)\Delta u = A\Delta^2 u + 2\Delta^2 u. \] It holds that \begin{align*} &\int_{\partial B_R} \partial_\nu u\Delta(Bu){\rm d}s = \int_{\partial B_R} \partial_\nu u (A\Delta^2 + 2\Delta^2)u {\rm d}s \\ &= \int_{\partial B_R} \Big(\partial_\nu u \big(\big(R\partial_\nu + \frac{n}{2}\big)\Delta^2 u\big) + 2\partial_\nu u\,\Delta^2u \Big){\rm d}s\\ &= R\int_{\partial B_R} \partial_\nu u \,\partial_\nu(\Delta^2 u){\rm d}s = R\int_{\partial B_R} \partial_\nu u \,\partial_\nu(\lambda u - Vu){\rm d}s, \end{align*} where we have used that $u = 0$ on $\partial B_R$, so that $\Delta^2u = \lambda u - Vu = 0$ there. Hence \begin{align}\label{I}
\Big|\int_{\partial B_R} \partial_\nu u\Delta(Bu){\rm d}s\Big| \geq (\lambda - \|V\|_{L^\infty(B_R)}) \int_{\partial B_R} |\partial_\nu u|^2{\rm d}s. \end{align}
On the other hand we have \begin{align}\label{II}
\int_{\partial B_R} \partial_\nu (\Delta u)Bu {\rm d}s= \int_{\partial B_R} \partial_\nu (\Delta u)\, R\,\partial_\nu (\Delta u) {\rm d}s = R \int_{\partial B_R} |\partial_\nu (\Delta u)|^2 {\rm d}s. \end{align} Here we used that $\Delta u = 0$ on $\partial B_R$, so that $Bu = A\Delta u = R\,\partial_\nu(\Delta u)$ there. Moreover, it follows from \eqref{com} that $[H_V, B] = 4\Delta^3 + [V, A\Delta]$, which gives \begin{align}\label{III}
&\Big|\int_{B_R} u [H_V, B] u {\rm d}x\Big|= \Big|\int_{B_R} \left(4u\Delta^3u + u[V, A\Delta]u\right) {\rm d}x\Big|\notag\\
&= \Big|\int_{B_R}\left( 4u\Delta (-Vu + \lambda u) + u[V, A\Delta]u \right){\rm d}x\Big|\notag\\
&\leq C\lambda \|u\|^2_{H^2(B_R)}\leq C\lambda^2. \end{align} Here we have used the fact that the commutator $[V, A\Delta]$ has order at most $2$. Using \eqref{I}--\eqref{III} we obtain \[
\|\partial_\nu u\|^2_{L^2(\partial B_R)}\leq C\lambda,\quad \|\partial_\nu (\Delta u)\|^2_{L^2(\partial B_R)}\leq C\lambda^2, \] which completes the proof of \eqref{boundary_estimate_2}.
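The commutator identity \eqref{com} used above can be sanity-checked on polynomials in one dimension, where $\Delta = d^2/dx^2$ and $A = x\,d/dx + 1/2$. The following exact-arithmetic sketch is ours and only illustrates the identity; it is not part of the proof:

```python
from fractions import Fraction

def deriv(p):
    # d/dx of a polynomial given by its coefficient list [c0, c1, ...]
    return [Fraction(k) * p[k] for k in range(1, len(p))] or [Fraction(0)]

def lap(p, k=1):
    # Delta^k = (d^2/dx^2)^k in one dimension
    for _ in range(2 * k):
        p = deriv(p)
    return p

def opA(p):
    # A = x d/dx + n/2, here in dimension n = 1
    xp = [Fraction(0)] + deriv(p)                 # x * p'
    m_len = max(len(xp), len(p))
    xp = xp + [Fraction(0)] * (m_len - len(xp))
    hp = [Fraction(1, 2) * c for c in p] + [Fraction(0)] * (m_len - len(p))
    return [a + b for a, b in zip(xp, hp)]

def padd(p, s, q):
    # p + s*q with zero padding
    m_len = max(len(p), len(q))
    p = p + [Fraction(0)] * (m_len - len(p))
    q = q + [Fraction(0)] * (m_len - len(q))
    return [a + s * b for a, b in zip(p, q)]

# test polynomial p(x) = 3 + 2x + x^5 + 7x^8
p = [Fraction(c) for c in (3, 2, 0, 0, 0, 1, 0, 0, 7)]

def commutator_defect(k):
    # [Delta^k, A]p - 2k Delta^k p, which should vanish identically
    d = padd(lap(opA(p), k), Fraction(-1), opA(lap(p, k)))
    return padd(d, Fraction(-2 * k), lap(p, k))
```

Running `commutator_defect(k)` for several $k$ returns the zero polynomial, in agreement with $[\Delta^k, A] = 2k\Delta^k$.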
Next, we prove the Weyl-type inequality \eqref{weyl_1}. Assume $\lambda_1<\lambda_2<\cdots$ are the eigenvalues of the operator $H_V$. Denote the functional space \[ H_\vartheta^{2}(B_R)=\{\psi\in H^2(B_R);\, \Delta\psi = \psi =0\text{ on }\partial B_R\}. \]
Then we have the following min-max principle: \[ \lambda_k=\max_{\phi_1,\cdots,\phi_{k-1}}\min_{\psi\in[\phi_1,\cdots,
\phi_{k-1}]^\perp\atop \psi\in H_\vartheta^{2}(B_R)}\frac{\int_{B_R} |\Delta
\psi|^2 + V|\psi|^2{\rm d}x}{\int_{B_R}\psi^2{\rm d}x}. \] Assume that $\lambda_1^{(1)}<\lambda_2^{(1)}<\cdots$ are the eigenvalues for the operator $\Delta^2$. By the min-max principle, we have \[ C_1\lambda_k^{(1)}<\lambda_k<C_2\lambda_k^{(1)}, \quad k=1, 2, \dots, \]
where $C_1$ and $C_2$ are two positive constants depending on $\|V\|_{L^\infty(B_R)}$. We have from Weyl's law \cite{Weyl} for $\Delta^2$ that \[ \lim_{k\rightarrow+\infty}\frac{\lambda_k^{(1)}}{k^{4/n}}=D, \] where $D$ is a constant. Therefore there exist two constants $E_1$ and $E_2$ such that \[ E_1 k^{4/n}\leq \lambda_k\leq E_2 k^{4/n}, \] which completes the proof. \end{proof}
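As an illustrative one-dimensional analogue (our sketch, not part of the argument): on $(0,\pi)$ with Navier conditions $u = u'' = 0$, the eigenfunctions of $\Delta^2$ are $\sin(kx)$ with eigenvalues $k^4 = k^{4/n}$ for $n = 1$, and a finite-difference computation reproduces this Weyl-type scaling. The grid size below is an assumed illustrative value:

```python
import numpy as np

# 1D Navier bilaplacian on (0, pi): with u = u'' = 0 on the boundary, the
# discrete bilaplacian is the square of the Dirichlet second-difference matrix.
N = 800                       # number of interior grid points (assumed value)
h = np.pi / (N + 1)
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
eigs = np.sort(np.linalg.eigvalsh(L @ L))

# Weyl-type scaling: lambda_k ~ k^4 (= k^{4/n} with n = 1)
ratios = [eigs[k - 1] / k**4 for k in range(1, 11)]
```

The ratios $\lambda_k / k^4$ stay close to $1$ for the low eigenvalues, consistent with \eqref{weyl_1}.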
Denote the resolvent by $R_V(\lambda) = (\Delta^2 + V - \lambda)^{-1}, \lambda\in\mathbb C$. The following theorem gives a resonance-free region and a resolvent estimate of $\rho R_V(\lambda)\rho: L^2(\mathbb R^n)\rightarrow H^4(\mathbb R^n)$ for a given $\rho\in C_0^\infty(\mathbb R^n)$ when $n\geq 3$ is odd.
\begin{theorem}\label{bound_2} Let $V(x)\in L^\infty_{\rm comp}(\mathbb R^n, \mathbb C)$ and $n\geq 3$ be odd. Then for any given $\rho\in C_0^\infty(\mathbb R^n)$ satisfying $\rho V = V$, i.e., $\text{supp} (V)\subset \text{supp} (\rho)\subset\subset B_R$, there exists a positive constant $C$ depending on $\rho$ and $V$ such that \begin{align}\label{bound_3}
\|\rho R_V(\lambda)\rho\|_{L^2(B_R)\rightarrow H^j(B_R)}\leq C|\lambda|^{\frac{-2 + j}{4}} \big(e^{2R(\Re\sqrt[4]{\lambda})_-}+ e^{2R(\Im\sqrt[4]{\lambda})_-}\big),\quad j = 0, 1, 2, 3, 4, \end{align} where $\lambda\in \Omega_\delta$. Here $\Omega_\delta$ denotes the resonance-free region defined as \begin{align*}
\Omega_\delta:=\Big\{\lambda: {\Im}\sqrt[4]{\lambda}&\geq - A - \delta {\rm log}(1 + |\lambda|^{1/4}), \,{\Re}\sqrt[4]{\lambda} \geq - A - \delta {\rm log}(1 + |\lambda|^{1/4}), |\lambda|^{1/4}\geq C_0\Big\}, \end{align*} where $A$ and $C_0$ are two positive constants and $\delta$ satisfies $0<\delta<\frac{1}{2R}$. \end{theorem}
\begin{proof}
Denote the free resolvent by $R_0(\lambda) = (\Delta^2 - \lambda)^{-1}, \lambda\in\mathbb C.$ Using the following identity \begin{align}\label{decom} R_0(\lambda) = (\Delta^2 - \lambda)^{-1} = \frac{1}{2\sqrt{\lambda}}[ (-\Delta - \sqrt{\lambda})^{-1} - (-\Delta + \sqrt{\lambda})^{-1} ] \end{align} and \cite[Theorem 3.1]{Dyatlov}, we can prove that when $n\geq 3$ is odd, for each $\rho\in C_0^\infty(\mathbb R^n)$ with ${\rm supp}(\rho)\subset B_R$ and $\lambda\neq 0$ \begin{align}\label{free}
\|\rho R_0(\lambda) \rho\|_{L^2(B_R)\rightarrow L^2(B_R)}\lesssim\frac{1}{\sqrt{|\lambda|}}\big(e^{2R (\Im\sqrt[4]{\lambda})_-} + e^{2R (\Re\sqrt[4]{\lambda})_-}\big), \end{align} where $t_{-}:=\max\{-t,0\}$. Consequently, using \eqref{free} and arguments similar to those in the proof of \cite[Theorem 3.3]{LYZ}, we can prove the estimate \eqref{bound_3}. \end{proof}
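On the Fourier side, the decomposition \eqref{decom} is the scalar partial-fraction identity $\frac{1}{s^2-\lambda} = \frac{1}{2\sqrt{\lambda}}\big(\frac{1}{s-\sqrt{\lambda}} - \frac{1}{s+\sqrt{\lambda}}\big)$ applied to the symbol $s = |\xi|^2$ of $-\Delta$. A quick numerical sanity check of this identity (our sketch, away from the poles):

```python
import cmath
import random

def lhs(s, lam):
    # symbol of (Delta^2 - lambda)^{-1}; s = |xi|^2 is the symbol of -Delta
    return 1 / (s**2 - lam)

def rhs(s, lam):
    # partial-fraction form behind the decomposition of the free resolvent
    r = cmath.sqrt(lam)
    return (1 / (2 * r)) * (1 / (s - r) - 1 / (s + r))

random.seed(0)
errors = []
for _ in range(100):
    s = random.uniform(0.1, 10.0)
    lam = complex(random.uniform(-5, 5), random.uniform(0.1, 5))  # off the real axis
    errors.append(abs(lhs(s, lam) - rhs(s, lam)))
```

The two expressions agree to floating-point accuracy for all sampled $(s,\lambda)$.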
\begin{remark} We discuss the resolvent estimates in even dimensions $n\geq 2$. The free resolvent $G_0(\lambda) = (-\Delta - \lambda)^{-1}$ in even dimensions is a convolution operator with the kernel (see e.g. \cite{FY06}) \begin{align*}
G_0(\lambda) = \frac{c_n e^{{\rm i} \sqrt{\lambda} |x|}}{|x|^{n - 2}} \int_0^\infty e^{-t} t^{\frac{n - 3}{2}} \Big( \frac{t}{2} - {\rm i}\sqrt{\lambda} |x| \Big)^{\frac{n - 3}{2}} {\rm d}t, \end{align*} where $c_n$ is a positive constant depending on the dimension $n$. Then by \eqref{decom} and a direct calculation we have \begin{align*}
|G_0(\lambda)| \lesssim \frac{|\lambda|^{\frac{n - 3}{4}}}{|\lambda|} \big(e^{(\Im\sqrt{\lambda})_-|x|} + e^{(\Re\sqrt{\lambda})_-|x|}\big) \lesssim \frac{1}{|\lambda|^{1 - \frac{n - 3}{4}}} e^{(\Im\sqrt{\lambda})_-|x|}, \end{align*} which implies that only when $1 - \frac{n - 3}{4}>0$ can we repeat the above arguments to obtain similar resolvent estimates in the even-dimensional case. \end{remark}
\end{document} | arXiv |
A photon counting and a squeezing measurement method by the exact absorption and dispersion spectrum of Λ-type Atoms
Ghasem Naeimi, Samira Alipour and Siamak Khademi
SpringerPlus 2016 5:1402
© The Author(s) 2016
Received: 27 April 2016
Accepted: 5 August 2016
Recently, the master equations for the interaction of two-mode photons with a three-level Λ-type atom have been exactly solved for the coherence terms. In this paper the exact absorption spectrum is applied to present a non-demolition photon counting method for a small number of coupling photons, and its benefits are discussed. The exact scheme is then applied to the case where the coupling photons are squeezed, and the photon counting method is extended to the measurement of the squeezing parameter of the coupling photons.
Keywords: Electromagnetically induced transparency; Exact method; Non-demolition; Photon counting method; Measurement of the squeezing parameter
Electromagnetically induced transparency (EIT) was theoretically introduced by Kocharovskaya and Khanin (1988) and experimentally observed by Harris et al. (1990) and Harris (1997). Recently, many authors have been interested in studying EIT and its applications (Sargsyan et al. 2012; Hong-Wei and Xian-Wu 2012; Marangos 1998; Deng and Payne 2005; Chenguang and Zhang 2008; Jafari et al. 2011; Sahrai et al. 2011; Rabiei et al. 2011; Sahrai et al. 2010a, b). EIT has been widely studied for different systems, e.g. V, Λ and cascade three-level atoms (Olson and Mayer 2009; Fleischhauer et al. 2005; Lazoudis et al. 2010, 2011) and many other atoms with more levels (Bai et al. 2013; Joshi and Xiao 2003). Many alkali atoms, e.g., the Rydberg rubidium atom, have also been used experimentally (Petrosyan et al. 2011; Wang et al. 2004). Properties of the electromagnetic fields interacting with a three-level Λ-type atom were studied in the semi-classical (Kocharovskaya and Khanin 1988; Harris et al. 1990; Harris 1997; Scully and Zubairy 1997; Dantan et al. 2012) and full-quantum (Wang et al. 1992; Akamatsu et al. 2004; Johnsson and Fleischhauer 2002) models by a weak field approximation (WFA) method. In the WFA, regimes with a low-intensity coupling field cannot be studied, because the approximation requires the coupling field to be much more intense than the probe field. The authors of this paper presented an exact analytical solution for multilevel systems interacting with the probe and quantized coupling fields, which also applies to a low-intensity coupling field (Khademi et al. 2015). EIT with quantized fields in opto-cavity mechanics is another example of the full-quantum approach, studied by Huang and Agarwal (2011). The destructive detection of photons has been investigated theoretically and experimentally.
Non-demolition detection of photons (Braginsky and Khalili 1996), by contrast, has until now been an interesting ultimate goal of some optical measurement methods (Grangier et al. 1998). In 2012, Serge Haroche and coworkers showed (Sayrin et al. 2012) that the interaction of microwave photons, trapped in a superconducting cavity, with Rydberg atoms crossing the cavity realizes a non-demolition photon counting. In 2013, Andreas Raiser (Raiser et al. 2013) presented another method for the non-demolition detection of photons passing through a superconducting cavity resonator containing rubidium atoms. Haroche et al. (Sayrin et al. 2012) and Naeimi et al. (2013) investigated photon counting and squeezing parameter measurement (for photons trapped in a quantum cavity) by measuring the properties of a beam of atoms interacting with an array of cavities. However, photon counting by measuring the properties of other photons (or fields) passing through the cavity has, to the best of our knowledge, never been investigated. In this paper, we present an exact analytical non-demolition photon counting method (for photons inside a cavity) based on the absorption profile of the probe field. A full-quantum model of EIT is investigated for an ensemble of Λ-type three-level atoms, in which the probe and coupling fields are quantized. The interaction of a Λ-type three-level atom with the quantized electromagnetic fields is investigated using the Jaynes–Cummings model (Khademi et al. 2015). The Jaynes–Cummings interaction Hamiltonian is applied for each of the coupled levels. In this case, the exact master equations are derived and solved in the steady state without any WFA (Khademi et al. 2015). Exact forms of the absorption and dispersion spectra are obtained for probe fields which are not necessarily weaker than the coupling field. It is shown that EIT is obtained for probe fields either weaker or stronger than the coupling field.
Moreover, the profiles of the absorption and dispersion spectra are shown to depend on the number of coupling photons, so that the number of coupling photons can be measured from the absorption spectrum of the probe photons. This scheme is applied to present a non-demolition photon counting method. The method is then applied to squeezed coupling photons, and it is shown in a straightforward way that the exact absorption and dispersion spectra depend strongly on the squeezing parameter of the coupling photons. This scheme is also applied to present a measurement of the squeezing parameter.
In "A review on the exact model" section, a review on the exact model of the full-quantum interaction of quantized electromagnetic fields with a Λ-type three-level atom will be presented. More details are found in reference (Khademi et al. 2015). The master equations in the steady-state, their exact solutions, a schematic experimental setup and notations are also introduced. "Photon counting by an ensemble of Λ-type three-level atom" section is devoted to a photon counting method in terms of the measurement of absorption spectrum. In "Measuring squeezing of trapped coupling photons" section, the exact probe coherence term is obtained where the coupling photons are squeezed. It is shown that the squeezing parameter is also measurable by the measurement of absorption and dispersion spectrum. The last section is devoted to the "Conclusions".
In this section a review of the exact model of a three-level Λ-type atom interacting with two quantized electromagnetic fields (Khademi et al. 2015) is presented. The master equations, notations, experimental setup and some solutions and results are used in the next sections.
Suppose that, in cavity quantum electrodynamics, the quantized probe and coupling fields (photons) interact with a three-level Λ-type atom (see Fig. 1a). The interaction Hamiltonian of this system in the interaction picture is given by:
$$V = - \hbar g_{1} \left[ \sigma_{ab} a_{1} e^{i\Delta_{1} t} + a_{1}^{\dagger } \sigma_{ba} e^{ - i\Delta_{1} t} \right] - \hbar g_{2} \left[ \sigma_{ac} a_{2} e^{i\Delta_{2} t} + a_{2}^{\dagger } \sigma_{ca} e^{ - i\Delta_{2} t} \right],\tag{1}$$
where \(g_{1} = \wp_{ab} \cdot \,\hat{\varepsilon }_{1} E_{1} /\hbar\) and \(g_{2} = \wp_{ac} \,\hat{\varepsilon }_{2} E_{2} /\hbar\) are interaction strength of the probe and coupling fields, respectively, and \(E_{i} = (\hbar \nu_{i} /2\varepsilon_{0} {\text{v}})^{1/2}\). In this case, \({\text{v}}\) is cavity volume and \(\wp_{ab} = {\text{e}}\left\langle {\text{a}} \right|{\text{r}}\left| {\text{b}} \right\rangle\) and \(\wp_{ac} = {\text{e}}\left\langle {\text{a}} \right|{\text{r}}\left| {\text{c}} \right\rangle\) are matrix elements of atomic dipole moments, induced by the electromagnetic fields. \(\hat{a}_{1} \left( {\hat{a}_{1}^{{{\dag }}} } \right)\) and \(\hat{a}_{2} \left( {\hat{a}_{2}^{{{\dag }}} } \right)\) are annihilation (creation) operators for the probe and coupling photons, respectively. \(\sigma_{ij} = \left| i \right\rangle \left\langle j \right|\) is atomic transition operator from \(\left| j \right\rangle \to \left| i \right\rangle\). In Eq. (1), \(\Delta_{1} = \omega_{ab} - \nu_{1} (\Delta_{2} = \omega_{ac} - \nu_{2} )\) is detuning between the frequency of probe (coupling) and the atomic transition frequency \(\left| a \right\rangle \to \left| b \right\rangle \, (\left| a \right\rangle \to \left| c \right\rangle )\).
Fig. 1 (Color online) (a) A Λ-type three-level atom interacting with two electromagnetic fields with frequencies \(\nu_{1}\) and \(\nu_{2}\). The red spot is an ensemble of atoms trapped and strongly coupled with the quantum cavity. The quantized probe photons are passed through the cavity and counted by D1 after interaction with the ensemble of atoms (Khademi et al. 2015)
Assume the system is initially in the ground state \(\left| b \right\rangle\) and the probe and coupling fields are in the states \(\left| {n_{1} } \right\rangle\) and \(\left| {n_{2} } \right\rangle\), respectively. Therefore, the initial state of the total system is given by \(\left| {b,n_{1} ,n_{2} } \right\rangle\). After an atom–field interaction, one photon with frequency \(\nu_{1}\) is absorbed, the atom is transited into the higher level \(\left| a \right\rangle\) and the state of the total system changes to \(\left| {a,n_{1} - 1,n_{2} } \right\rangle\). Due to spontaneous or induced emission, the atom in the state \(\left| a \right\rangle\) is transited into another level \(\left| c \right\rangle\), one photon with frequency \(\nu_{2}\) is emitted and the state of the total system changes to \(\left| {c,n_{1} - 1,n_{2} + 1} \right\rangle\). The master equations are obtained as:
$$\dot{\tilde{\rho }}_{aa} = - \left( {\gamma_{1} + \gamma_{2} } \right)\tilde{\rho }_{aa} + ig_{1} \sqrt {n_{1} } \left( {\tilde{\rho }_{ba} - \tilde{\rho }_{ab} } \right) + ig_{2} \sqrt {n_{2} + 1} \left( {\tilde{\rho }_{ca} - \tilde{\rho }_{ac} } \right)\tag{2}$$
$$\dot{\tilde{\rho }}_{bb} = \gamma_{1} \tilde{\rho }_{aa} + \gamma_{3} \tilde{\rho }_{cc} + ig_{1} \sqrt {n_{1} } \left( {\tilde{\rho }_{ab} - \tilde{\rho }_{ba} } \right)\tag{3}$$
$$\dot{\tilde{\rho }}_{cc} = \gamma_{2} \tilde{\rho }_{aa} - \gamma_{3} \tilde{\rho }_{cc} + ig_{2} \sqrt {n_{2} + 1} \left( {\tilde{\rho }_{ac} - \tilde{\rho }_{ca} } \right)\tag{4}$$
$$\dot{\tilde{\rho }}_{ab} = - \tfrac{1}{2}\left( {\gamma_{1} + 2i\Delta_{1} } \right)\tilde{\rho }_{ab} + ig_{1} \sqrt {n_{1} } \left( {\tilde{\rho }_{bb} - \tilde{\rho }_{aa} } \right) + ig_{2} \sqrt {n_{2} + 1} \tilde{\rho }_{cb}\tag{5}$$
$$\dot{\tilde{\rho }}_{ac} = - \tfrac{1}{2}\left( {\gamma_{2} + 2i\Delta_{2} } \right)\tilde{\rho }_{ac} + ig_{1} \sqrt {n_{1} } \tilde{\rho }_{bc} + ig_{2} \sqrt {n_{2} + 1} \left( {\tilde{\rho }_{cc} - \tilde{\rho }_{aa} } \right)\tag{6}$$
$$\dot{\tilde{\rho }}_{bc} = - \tfrac{1}{2}\left( {\gamma_{3} - 2i\left( {\Delta_{2} - \Delta_{1} } \right)} \right)\tilde{\rho }_{bc} + ig_{1} \sqrt {n_{1} } \tilde{\rho }_{ac} - ig_{2} \sqrt {n_{2} + 1} \tilde{\rho }_{ba}\tag{7}$$
where \(\rho_{ij} = \rho_{ji}^{*}\), \(\gamma_{1} = \varGamma_{ab} ,\gamma_{2} = \varGamma_{ac}\) and \(\gamma_{3} = \varGamma_{cb}\) are spontaneous decay rates. To obtain Eqs. (2)–(7), the rotating frame transformations \(\tilde{\rho }_{ab} = \rho_{ab} \exp \{ - i\Delta_{1} t\}\), \(\tilde{\rho }_{ac} = \rho_{ac} \exp \{ - i\Delta_{2} t\}\) and \(\tilde{\rho }_{cb} = \rho_{cb} \exp \{ i(\Delta_{2} - \Delta_{1} )t\}\) are applied.
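As an illustrative numerical check (our sketch; the paper derives the closed form instead), the steady state of Eqs. (2)–(7) and their conjugates can be obtained by setting the time derivatives to zero and solving the resulting linear system together with the trace condition \(\tilde{\rho}_{aa}+\tilde{\rho}_{bb}+\tilde{\rho}_{cc}=1\); the parameter values below follow Fig. 2:

```python
import numpy as np

def steady_rho_ab(delta1, n2, n1=1, g1=1.0, g2=1.0,
                  gam1=0.1, gam2=0.1, gam3=0.001, delta2=0.0):
    """Steady-state coherence rho_ab from Eqs. (2)-(7) and their conjugates."""
    G1 = g1 * np.sqrt(n1)            # g1*sqrt(n1)
    G2 = g2 * np.sqrt(n2 + 1)        # g2*sqrt(n2 + 1)
    keys = ['aa', 'bb', 'cc', 'ab', 'ba', 'ac', 'ca', 'bc', 'cb']
    idx = {k: i for i, k in enumerate(keys)}
    M = np.zeros((9, 9), dtype=complex)
    rhs = np.zeros(9, dtype=complex)

    def row(r, terms):
        for k, c in terms.items():
            M[r, idx[k]] += c

    row(0, {'aa': -(gam1 + gam2), 'ba': 1j*G1, 'ab': -1j*G1,
            'ca': 1j*G2, 'ac': -1j*G2})                                   # Eq. (2)
    row(1, {'aa': gam2, 'cc': -gam3, 'ac': 1j*G2, 'ca': -1j*G2})          # Eq. (4)
    row(2, {'aa': 1, 'bb': 1, 'cc': 1}); rhs[2] = 1                       # trace = 1
    row(3, {'ab': -(gam1/2 + 1j*delta1), 'bb': 1j*G1, 'aa': -1j*G1,
            'cb': 1j*G2})                                                 # Eq. (5)
    row(4, {'ba': -(gam1/2 - 1j*delta1), 'bb': -1j*G1, 'aa': 1j*G1,
            'bc': -1j*G2})                                                # conj. of (5)
    row(5, {'ac': -(gam2/2 + 1j*delta2), 'bc': 1j*G1, 'cc': 1j*G2,
            'aa': -1j*G2})                                                # Eq. (6)
    row(6, {'ca': -(gam2/2 - 1j*delta2), 'cb': -1j*G1, 'cc': -1j*G2,
            'aa': 1j*G2})                                                 # conj. of (6)
    row(7, {'bc': -(gam3/2 - 1j*(delta2 - delta1)), 'ac': 1j*G1,
            'ba': -1j*G2})                                                # Eq. (7)
    row(8, {'cb': -(gam3/2 + 1j*(delta2 - delta1)), 'ca': -1j*G1,
            'ab': 1j*G2})                                                 # conj. of (7)
    return np.linalg.solve(M, rhs)[idx['ab']]

def dap(n2):
    # detuning of the absorption peak: grid search over Delta_1 > 0
    grid = np.linspace(0.05, 6.0, 1200)
    absorb = [abs(steady_rho_ab(d, n2).imag) for d in grid]
    return grid[int(np.argmax(absorb))]
```

The population equation for \(\tilde{\rho}_{bb}\) is redundant (the three population equations sum to zero), so it is replaced by the trace condition. The resulting spectra reproduce the qualitative features of Fig. 2: a transparency dip near \(\Delta_1 = 0\) and absorption peaks that move outward as \(n_2\) grows.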
An ensemble of cold three-level atoms, initially in the state \(\left| b \right\rangle\), is prepared by optical pumping. The quantum cavity is filled with the cold three-level atoms as well as \(n_{2}\) coupling photons. The coupling photons are strongly coupled with the quantum cavity. The probe photons are individually injected into the cavity and interact with the absorptive atoms. Absorption of the probe photons is controlled by the number of coupling photons \(n_{2}\) and measured by the detector D1. This experiment is repeated many times for a specific number of coupling photons trapped in the cavity. Absorption spectra for different numbers of coupling photons are plotted in Fig. 2b, d.
Fig. 2 (Color online) a, b are the dispersion and absorption of the probe field in terms of its detuning for a large number of coupling photons (\(n_{2} = 50,100,150\)); c, d are the dispersion and absorption of the probe field for a small number of coupling photons (\(n_{2} = 0,1,2,3,4,5\)). They are plotted versus detuning of probe field \(\Delta_{1}\) where other parameters are \(\gamma_{1} = 0.1,\gamma_{2} = 0.1,\gamma_{3} = 0.001,n_{1} = 1,\Delta_{2} = 0,g_{1} = g_{2} = 1\)
The master Eqs. (2)–(7) are exactly solved in the steady state to obtain the exact coherence term \(\tilde{\rho }_{ab}\) (Khademi et al. 2015). The result is arranged in terms of different orders of the probe detuning in the numerator and denominator of the exact \(\tilde{\rho }_{ab}\). The compact result can be written as:
$$\tilde{\rho }_{ab} = \frac{{2g_{1} \sqrt {n_{1} } \left( {iZ_{0} + Z_{1} \Delta_{1} + iZ_{2} \Delta_{1}^{2} + Z_{3} \Delta_{1}^{3} } \right)}}{{K_{0} + K_{2} \Delta_{1}^{2} + K_{4} \Delta_{1}^{4} }},\tag{8}$$
where
$$ \begin{aligned} Z_{0} & = \gamma_{3} \left( {4g_{1}^{2} n_{1} \gamma_{1} + 4g_{2}^{2} \gamma_{2} \left( {n_{2} + 1} \right) + \gamma_{1} \gamma_{2} \gamma_{3} } \right)\bigg(4g_{2}^{2} (\gamma_{1} + \gamma_{3} ) \\ & \quad \times \left( {n_{2} + 1} \right) + \left( {\gamma_{1} + \gamma_{2} } \right)\left( {4g_{1}^{2} n_{1} + \gamma_{2} \gamma_{3} } \right)\bigg), \\ \end{aligned} $$
$$\begin{aligned} Z_{1} & = \left( { - 32g_{2}^{4} \left( {n_{2} + 1} \right)^{2} \gamma_{2} \left( {\gamma_{1} + \gamma_{3} } \right) + 2\gamma_{3} \left( {\gamma_{1} + \gamma_{2} } \right)\left( {4g_{1}^{2} n_{1} + \gamma_{1} \gamma_{3} } \right)^{2} } \right. \\ & \quad - 8g_{2}^{2} \left( {n_{2} + 1} \right)\left( {\gamma_{2} \left( {\gamma_{2} + \gamma_{3} } \right)\gamma_{3} \left( {\gamma_{1} + \gamma_{2} + \gamma_{3} } \right)} \right. \\ & \quad - 4g_{1}^{2} n_{1} \left. {\left. {\left( { - \gamma_{2}^{2} + \gamma_{3} \left( {\gamma_{1} + \gamma_{2} } \right) + \gamma_{3}^{2} } \right)} \right)} \right), \\ \end{aligned}$$
$$Z_{2} = 4\gamma_{1} \gamma_{2} (\gamma_{2} \gamma_{3} (\gamma_{1} + \gamma_{2} ) + 4g_{2}^{2} (n_{2} + 1)(\gamma_{1} + \gamma_{3} )),$$
$$Z_{3} = 8\gamma_{2} \left( { - \gamma_{2} \gamma_{3} \left( {\gamma_{1} + \gamma_{2} } \right) - 4g_{2}^{2} \left( {n_{2} + 1} \right)\left( {\gamma_{1} + \gamma_{3} } \right)} \right),$$
$$\begin{aligned} K_{0} & = \left( {4g_{1}^{2} n_{1} \gamma_{1} + 4g_{2}^{2} \gamma_{2} \left( {n_{2} + 1} \right) + \gamma_{1} \gamma_{2} \gamma_{3} } \right)\left( {16g_{2}^{4} \left( {n_{2} + 1} \right)^{2} \left( {\gamma_{1} + \gamma_{3} } \right)} \right. \\ & \quad + \left( {4g_{1}^{2} n_{1} + \gamma_{2} \gamma_{3} } \right)\left( {\gamma_{1} \gamma_{3} \left( {\gamma_{1} + \gamma_{2} } \right) + 4g_{1}^{2} n_{1} \left( {\gamma_{2} + 2\gamma_{3} } \right)} \right) \\ & \quad \left. { + 4g_{2}^{2} \left( {n_{2} + 1} \right)\left( {4g_{1}^{2} n_{1} \left( {\gamma_{1} + \gamma_{2} } \right) + \gamma_{3} \left( {\gamma_{1}^{2} + \gamma_{2}^{2} + \gamma_{1} \left( {\gamma_{2} + \gamma_{3} } \right)} \right)} \right)} \right) \\ \end{aligned}$$
$$\begin{aligned} K_{2} & = 4\left( {16g_{1}^{4} n_{1}^{2} \gamma_{3} \left( {\gamma_{1} + \gamma_{2} } \right) - 32g_{2}^{4} \left( {n_{2} + 1} \right)^{2} \gamma_{2} \left( {\gamma_{1} + \gamma_{3} } \right)} \right. \\ & \quad + \gamma_{2}^{2} \left( {\gamma_{1} + \gamma_{2} } \right)\gamma_{3} \left( {\gamma_{1}^{2} + \gamma_{3}^{2} } \right) + 4g_{1}^{2} n_{1} \gamma_{2} \left( {\gamma_{2}^{2} \gamma_{1} + 2\gamma_{1} \gamma_{2} \gamma_{3} } \right. \\ & \quad + 2\left( {\gamma_{1} + \gamma_{2} \gamma_{3}^{2} } \right) + 4g_{2}^{2} \left( {n_{2} + 1} \right)\left( {\gamma_{2} \left( {\gamma_{1}^{3} + \gamma_{1}^{2} \gamma_{3} - 2\gamma_{2}^{2} \gamma_{3} + \gamma_{3}^{3} } \right)} \right. \\ & \quad + \gamma_{1} \gamma_{3} \left. {\left( { - 2\gamma_{2} + \gamma_{3} } \right)} \right) + 4g_{1}^{2} n_{1} \left( {\gamma_{2}^{2} + 3\gamma_{2} \gamma_{3} + \gamma_{3}^{2} } \right. \\ & \quad \left. {\left. {\left. { + \gamma_{1} \left( {2\gamma_{2} + \gamma_{3} } \right)} \right)} \right)} \right), \\ \end{aligned}$$
$$K_{4} = 16\gamma_{2} \left( {\gamma_{2} \gamma_{3} \left( {\gamma_{1} + \gamma_{2} } \right) + 4g_{2}^{2} \left( {n_{2} + 1} \right)\left( {\gamma_{1} + \gamma_{3} } \right)} \right),$$
are the real parameters (Khademi et al. 2015). Dispersion and absorption of the coherence term (8) are proportional to
$$\text{Re} [\tilde{\rho }_{ab} ] = \frac{{2g_{1} \sqrt {n_{1} } (Z_{1} \Delta_{1} + Z_{3} \Delta_{1}^{3} )}}{{K_{0} + K_{2} \Delta_{1}^{2} + K_{4} \Delta_{1}^{4} }},\tag{16}$$
$$\text{Im} [\tilde{\rho }_{ab} ] = \frac{{2g_{1} \sqrt {n_{1} } (Z_{0} + Z_{2} \Delta_{1}^{2} )}}{{K_{0} + K_{2} \Delta_{1}^{2} + K_{4} \Delta_{1}^{4} }}.\tag{17}$$
The real and imaginary parts of \(\tilde{\rho }_{ab}\) are plotted in Fig. 2a–d for large and small numbers of coupling photons. It is shown that the detuning of the absorption peaks (DAPs), \(\Delta_{1}\), increases with the number of coupling photons.
In the next section, a photon counting method based on the exact form of the absorption spectrum of the probe field, which is derived from Eq. (17), is presented.
In this section, a non-demolition photon counting method is presented for measuring the number of coupling photons which are trapped in a quantum cavity and interact with an ensemble of three-level Λ-type atoms.
It is worthwhile to estimate the number of probe photons required to determine the probe absorption peak. As an example, the cavity field decay rate can be estimated as \(\kappa = 5\pi \;{\text{MHz}}\) (Raiser et al. 2013), so the coupling photons inside a high Q-factor cavity do not decay for about 0.1 μs. The traveling time for a probe photon passing through a cavity with dimensions of a few millimeters is about 3 ps. Consequently, approximately \(2 \times 10^{4}\) probe photons pass through the cavity while the coupling photons remain trapped. This number of photons is sufficient to determine the absorption and dispersion curves at different detunings with good precision.
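The photon-budget estimate above is a simple ratio; as a sanity check of the order of magnitude (our arithmetic, using the quoted figures as assumptions):

```python
# order-of-magnitude check of the probe-photon budget quoted above
cavity_lifetime = 0.1e-6   # coupling photons survive roughly 0.1 microseconds
transit_time = 3e-12       # roughly 3 ps for a probe photon to cross the cavity
n_probe = cavity_lifetime / transit_time   # number of probe passes available
```

This gives roughly 3 × 10⁴ passes, the same order of magnitude as the 2 × 10⁴ quoted above.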
It is clear in Eqs. (8)–(17) and Fig. 2 that the profile and the probe DAP in the absorption spectrum depend on the number of coupling photons. The derivative of the imaginary part of the coherence term, Eq. (17), with respect to \(\Delta_{1}\) can be set to zero:
$$\frac{d}{{d\Delta_{1} }}\text{Im} [\tilde{\rho }_{ab} ] = 0,\tag{18}$$
to obtain the DAP \(\delta_{1} (n_{2} ) = \Delta_{1,\mathrm{Max.Abs.}}\). The DAPs increase nonlinearly with the number of coupling photons \(n_{2}\), as plotted in Fig. 3a–c for large and small numbers of coupling photons. Figure 3a presents the relation between the measurable DAPs and a large number of coupling photons.
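The paper inverts the exact peak condition above; as a simplified hedged sketch (ours, not the paper's exact relation \(\delta_1(n_2)\)), one can use the approximate strong-coupling peak position \(\delta_1 \approx g_2\sqrt{n_2 + 1}\), half the Autler–Townes splitting, and invert it for the photon number:

```python
import math

def dap_approx(n2, g2=1.0):
    # approximate peak detuning: half of the Autler-Townes splitting
    return g2 * math.sqrt(n2 + 1)

def n2_from_dap(dap, g2=1.0):
    # invert the approximate relation and round to the nearest photon number
    return round((dap / g2) ** 2 - 1)

recovered = [n2_from_dap(dap_approx(n2)) for n2 in range(6)]
```

Because the peak position grows only like \(\sqrt{n_2 + 1}\), the inversion is most sensitive for small photon numbers, in line with the discussion of Fig. 3c.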
Fig. 3 (Color online) The DAPs versus the number of coupling photons for the WFA (solid blue line) and exact (dashed red line) methods; a for a large number of coupling photons, the WFA and exact methods are very similar, b at a finer scale, the WFA and exact methods show a slight difference for a large number of coupling photons, c for a small number of coupling photons, the WFA and exact methods give different results
Although Fig. 3a shows the same behaviour of the exact and WFA methods for a large number of coupling photons, there is a fine difference, which is shown in Fig. 3b. Figure 3c shows the difference between the WFA and exact methods for a small number of coupling photons. In this regime, the exact method has more benefits than the WFA method.
A considerable difference between the plots in Fig. 3b, c indicates that the exact method in the full-quantum model provides more accurate photon numbers, even for a small number of coupling photons. Furthermore, measurement of the absorption spectrum versus the detuning of the probe field is a simpler method compared with other photon counting schemes. It is also a non-demolition method for weak probe fields.
Another application of the full-quantum interaction of two-mode photons with three-level Λ-type atoms is the measurement of the squeezing parameter of the coupling photons. Suppose the trapped coupling photons are squeezed, \(\left| {n_{2} ,\xi } \right\rangle = \hat{S}(\xi )\left| {n_{2} } \right\rangle\), with the squeezing operator \(\hat{S}(\xi ) = \exp (\tfrac{1}{2}\xi^{ * } \hat{a}^{2} - \tfrac{1}{2}\xi \hat{a}^{\dag 2} )\), where \(\xi = r\exp \{ i\beta \}\); \(r\) and \(\beta\) are the squeezing parameter and squeezing phase, respectively.
In this case, the interaction Hamiltonian in the interaction picture is given by ("Appendix")
$$V = - \hbar g_{1} \left[ \sigma_{ab} a_{1} e^{i\Delta_{1} t} + a_{1}^{\dagger } \sigma_{ba} e^{ - i\Delta_{1} t} \right] - \hbar g_{2} \cosh (r)\left[ \sigma_{ac} a_{2} e^{i\Delta_{2} t} + a_{2}^{\dagger } \sigma_{ca} e^{ - i\Delta_{2} t} \right],\tag{19}$$
Equation (19) is similar to Eq. (1) with \(g_{2} \to g_{2}\cosh(r)\). Substituting \(g_{2}\cosh(r)\) for \(g_{2}\) in Eqs. (2)–(7), after some calculations the exact probe coherence term is given in terms of the squeezing parameter \(r\) as:
$$\tilde{\rho }_{ab} = \frac{{2g_{1} \sqrt {n_{1} } \left( {iL_{0} + L_{2} g_{2}^{2} \cosh^{2} \left( r \right) + L_{4} g_{2}^{4} \cosh^{4} \left( r \right)} \right)}}{{M_{0} + M_{2} g_{2}^{2} \cosh^{2} \left( r \right) + M_{4} g_{2}^{4} \cosh^{4} \left( r \right) + M_{6} g_{2}^{6} \cosh^{6} \left( r \right)}},\tag{20}$$
where
$$L_{0} = \gamma_{3} \left( {\gamma_{1} + \gamma_{2} } \right)\left( {i\gamma_{1} + 2\Delta_{1} } \right)\left( {\left( {4g_{1}^{2} n_{1} + \gamma_{1} \gamma_{3} } \right)^{2} + 4\gamma_{2}^{2} \Delta_{1}^{2} } \right),$$
$$\begin{aligned} L_{2} & = 4\left( {n_{2} + 1} \right)\left( {i\gamma_{3} \left( {4g_{2}^{2} n_{1} + \gamma_{2} \gamma_{3} } \right)\left( {\gamma_{1}^{2} + \gamma_{2}^{2} + \gamma_{1} \left( {\gamma_{2} + \gamma_{3} } \right)} \right)} \right. \\ & \quad + 2\left( { - \gamma_{2} \gamma_{3} \left( {\gamma_{2} - \gamma_{3} } \right)\left( {\gamma_{1} + \gamma_{2} + \gamma_{3} } \right) - 4g_{1}^{2} n_{1} } \right)\left( { - \gamma_{2}^{2} + \gamma_{2} \gamma_{3} } \right. \\ & \quad \left.\left.+ \gamma_{3} \left( {\gamma_{1} + \gamma_{3} } \right)\right)\right)\Delta_{1} + 4i\gamma_{3} \gamma_{1} \left( {\gamma_{1} + \gamma_{3} } \right)\Delta_{1}^{2} + 8\gamma_{2} \left( {\gamma_{1} + \gamma_{3} } \right)\left. {\Delta_{1}^{3} } \right), \\ \end{aligned}$$
$$L_{4} = 16i\left( {n_{2} + 1} \right)^{2} \gamma_{2} \left( {\gamma_{1} + \gamma_{3} } \right)\left( {2i\Delta_{1} + \gamma_{3} } \right),$$
$$\begin{aligned} M_{0} & = \left( {\left( {4g_{1}^{2} n_{1} + \gamma_{2} \gamma_{3} } \right)^{2} + 4\gamma_{2}^{2} \Delta_{1}^{2} } \right)\left( {4g_{1}^{2} n_{1} \gamma_{1} \left( {\gamma_{2} + 2\gamma_{3} } \right)} \right. \\ & \quad + \gamma_{3} \left.\left( {\gamma_{1} + \gamma_{2} } \right)\left( {\gamma_{1}^{2} + 4\Delta_{1}^{2} } \right)\right), \end{aligned}$$
$$\begin{aligned} M_{2} & = 4\left( {n_{2} + 1} \right)\left( {16g_{1}^{4} n_{1}^{2} \left( {\gamma_{1}^{2} + \gamma_{1} \gamma_{2} + \gamma_{2} \left( {\gamma_{2} + 2\gamma_{3} } \right)} \right) + 4g_{1}^{2} n_{1} \gamma_{3} ((\gamma_{1} + \gamma_{2} )^{2} } \right. \\ & \quad \left. { + \left( {\gamma_{1}^{2} + 2\gamma_{2}^{2} } \right)} \right) + 4\left( { - \gamma_{2}^{2} + 3\gamma_{2} \gamma_{3} + \gamma_{3}^{2} + \gamma_{1} \left( {2\gamma_{2} + \gamma_{3} } \right)} \right)\left. {\Delta_{1}^{2} } \right) \\ & \quad + \gamma_{2} \left( {\gamma_{1} \gamma_{3}^{2} \left( {\gamma_{1}^{2} + 2\gamma_{2}^{2} } \right) + \gamma_{1} \left( {2\gamma_{2} + \gamma_{3} } \right)} \right) + 4\left( {\gamma_{1}^{3} + \gamma_{1}^{2} \gamma_{3} - 2\gamma_{2}^{2} \gamma_{3} } \right. \\ & \quad \left. { + \gamma_{3}^{3} + \gamma_{1} \gamma_{3} \left( { - 2\gamma_{2} + \gamma_{3} } \right)} \right)\left. { + 16\left( {\gamma_{1} + \gamma_{3} } \right)\left. {\Delta_{1}^{2} } \right)} \right), \\ \end{aligned}$$
$$ \begin{aligned} M_{4} & = 16\left( {n_{2} + 1} \right)^{2} \left( {4g_{1}^{2} n_{1} \left( {\gamma_{1}^{2} + \gamma_{2}^{2} + \gamma_{1} \left( {\gamma_{2} + \gamma_{3} } \right)} \right)} \right. \\ & \quad + \;\gamma_{2} \gamma_{3} \left( {2\gamma_{1}^{2} + \gamma_{2}^{2} + \gamma_{1} \left( {\gamma_{2} + 2\gamma_{3} } \right)} \right)\left. { - 8\gamma_{2} \left( {\gamma_{1} + \gamma_{3} } \right)\Delta_{1}^{2} } \right), \\ \end{aligned} $$
$$M_{6} = 64\left( {n_{2} + 1} \right)^{3} \gamma_{2} \left( {\gamma_{1} + \gamma_{3} } \right).$$
The real and imaginary parts of the probe coherence term (20) correspond to the dispersion and absorption of the probe photons; they are plotted in Fig. 5a, b (Fig. 5c, d) for large (small) numbers of squeezed coupling photons and different squeezing parameters. The dispersion and absorption spectra depend strongly on the squeezing parameter and on the number of coupling photons, but are independent of the squeezing phase \(\beta\). Figure 4a, c shows that the DAPs and the detunings of the dispersion peaks (DDPs) increase nonlinearly with the squeezing parameter r, so the photon counting method can also be applied to a squeezing measurement by measuring the DAPs or DDPs.
Fig. 5 (Color online) (a) Dispersion and (b) absorption for different squeezing parameters with n2 = 100 coupling photons. (c) Dispersion and (d) absorption for different squeezing parameters with a vacuum coupling field (squeezing parameters r = 0, 0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8, 2)
By taking the derivative of the imaginary and real parts of Eq. (20) and setting the results to zero, the DAPs and DDPs are obtained in terms of the number of coupling photons and their squeezing parameter. The DAPs and DDPs are plotted as functions of the squeezing parameter for different numbers of coupling photons in Fig. 4a, c. The DAPs and DDPs are more sensitive for larger squeezing parameters r; furthermore, they are more sensitive to the squeezing parameter for a larger number of photons. Note that \(n_{2}\) is not the average number of coupling photons, but the latter is easily derived as \(\bar{n}_{2} = \left\langle {n,\xi } \right|\hat{n}_{2} \left| {n,\xi } \right\rangle\). Figure 4b, d demonstrates the DAPs and DDPs in terms of the number states for different squeezing parameters. The behavior is similar to Fig. 3a, which is useful for photon counting and shows that the DAPs and DDPs are most sensitive to small number states when \(r = 0\); this sensitivity extends to larger number states as the squeezing parameter r increases. Therefore, the number of coupling photons and their squeezing parameter can be obtained simultaneously by measuring the DAP and DDP of the absorption and dispersion spectra. Some typical values of the DAP and DDP are shown in Table 1 for different numbers of coupling photons and squeezing parameters. To measure a small number of photons or a small squeezing parameter, the accuracy of the DAP and DDP measurements should, according to the data in Table 1, be about \(0.2g\). For the range of atomic transition frequencies 10 MHz < g < 1 THz, the required accuracy is at least 2 MHz, which is larger than the resolution of recent electro-optical modulators (about 1 MHz) (Veisi et al. 2015).
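Since the full expression of Eq. (20) is lengthy, the peak-finding procedure can be illustrated on a schematic stand-in: the sketch below locates the absorption peak of a generic EIT-type susceptibility numerically and shows that its detuning grows with the coupling photon number. The profile `chi` and the parameters `g`, `gamma1`, `gamma2` are hypothetical illustrative choices, not the paper's exact coherence term.

```python
import numpy as np

# Hypothetical stand-in for the probe coherence: a generic EIT-type
# susceptibility chi(delta); g, gamma1 and gamma2 are illustrative
# parameters, NOT the values appearing in Eq. (20).
def chi(delta, n2, g=1.0, gamma1=0.1, gamma2=0.01):
    return 1j / (gamma1 - 1j * delta + g**2 * (n2 + 1) / (gamma2 - 1j * delta))

delta = np.linspace(-30.0, 30.0, 200001)   # probe detuning grid

def dap(n2):
    """Detuning of the absorption peak: argmax of Im(chi) over delta > 0."""
    absorption = chi(delta, n2).imag
    pos = delta > 0
    return delta[pos][np.argmax(absorption[pos])]

# The peak detuning tracks g*sqrt(n2 + 1) (an Autler-Townes doublet),
# so measuring the DAP counts the coupling photons.
daps = [dap(n2) for n2 in (0, 10, 100)]
```

In this toy model the DAP increases monotonically with the photon number, which is the qualitative behavior exploited by the photon counting scheme.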
Fig. 4 (Color online) (a) Variation of the DAPs with the squeezing parameter for different numbers of coupling photons. (b) Variation of the DAPs with the number of coupling photons for different squeezing parameters. (c) Variation of the DDPs with the squeezing parameter for different numbers of coupling photons. (d) Variation of the DDPs with the number of coupling photons for different squeezing parameters
Table 1 Different values of the DAP (\(\delta_{1}\)) and DDP (\(\delta_{2}\)) for some typical numbers of coupling photons (rows) and squeezing parameters (columns); representative entry pairs \((\delta_{1}, \delta_{2})\) include (1.666, 2.749), (10.123, 11.148) and (9.628, 10.654).
In this paper, the master equations of a Λ-type three-level atom interacting with a two-mode quantized electromagnetic field, together with the exact coherence term, were applied to characterize squeezed and non-squeezed coupling photons. The following results were obtained: (1) The method was applied to present a photon counting and a squeezing measurement scheme based on measuring the absorption spectrum of the probe photons. (2) The difference between the exact and WFA photon counting methods, and the benefits of the exact method (especially for weak coupling photons), were demonstrated; this sensitivity increased for larger numbers of coupling photons as the squeezing parameter increased. (3) It was shown that the photon counting method is more sensitive for smaller numbers of coupling photons. (4) The present method for the measurement of squeezing is more sensitive for larger values of the squeezing parameter. (5) The number of coupling photons and their squeezing parameter can be obtained simultaneously by measuring the DAP and DDP. (6) The photon counting method is non-demolition for strong coupling photons.
This article is part of the postgraduate research of GN and SA. It was also written by their supervisor, Dr. SK, who is the corresponding author. All authors read and approved the final manuscript.
We would like to thank Prof. M. Mahmoudi for his useful discussions, comments and help.
The article was mainly written at the University of Zanjan and was not supported by any other organization.
The Jaynes–Cummings Hamiltonian of a three-level Λ-type atom interacting with two mode quantized light in the interaction picture is given by Scully and Zubairy (1997):
$$\begin{aligned} V & = - \hbar g_{1} \left( {\sigma_{ab} e^{{i\omega_{ab} t}} + \sigma_{ba} e^{{ - i\omega_{ab} t}} } \right)\left( {a_{1} e^{{ - i\nu_{1} t}} + a_{1}^{ + } e^{{i\nu_{1} t}} } \right) \\ &\quad - \hbar g_{2} \left( {\sigma_{ac} e^{{i\omega_{ac} t}} + \sigma_{ca} e^{{ - i\omega_{ca} t}} } \right)\left( {a_{2} e^{{ - i\nu_{2} t}} + a_{2}^{ + } e^{{i\nu_{2} t}} } \right). \\ \end{aligned}$$
If one of the quantized modes of the coupling photons is squeezed, the interaction Hamiltonian is transformed by a squeezing operator as:
$$\begin{aligned} V_{Squeezed} &= S_{ac}^{ + } VS_{ac}^{{}} = S_{ac}^{ + } ( - \hbar g_{1} (\sigma_{ab} e^{{i\omega_{ab} t}} + \sigma_{ba} e^{{ - i\omega_{ab} t}} )(a_{1}^{{}} e^{{ - i\nu_{1} t}} + a_{1}^{ + } e^{{i\nu_{1} t}} ))S_{ac}^{{}} \hfill \\& \quad + S_{ac}^{ + } ( - \hbar g_{2} (\sigma_{ac} e^{{i\omega_{ac} t}} + \sigma_{ca} e^{{ - i\omega_{ca} t}} )(a_{2}^{{}} e^{{ - i\nu_{2} t}} + a_{2}^{ + } e^{{i\nu_{2} t}} ))S_{ac}^{{}} , \hfill \\ \end{aligned}$$
where the squeezing operator is \(\hat{S}(r) = \exp (\tfrac{1}{2}r\hat{a}_{2}^{2} - \tfrac{1}{2}r\hat{a}_{2}^{\dag 2} )\) and \(r\) is the squeezing parameter. The squeezing operator acts on the creation and annihilation operators of the second mode.
$$\begin{aligned} V_{Squeezed} &= - \hbar g_{1} (\sigma_{ab} e^{{i\omega_{ab} t}} + \sigma_{ba} e^{{ - i\omega_{ab} t}} )(a_{1}^{{}} e^{{ - i\nu_{1} t}} + a_{1}^{ + } e^{{i\nu_{1} t}} ) \hfill \\ &\quad - \hbar g_{2} (\sigma_{ac} e^{{i\omega_{ac} t}} + \sigma_{ca} e^{{ - i\omega_{ca} t}} )(S_{ac}^{ + } a_{2}^{{}} S_{ac}^{{}} e^{{ - i\nu_{2} t}} + S_{ac}^{ + } a_{2}^{ + } S_{ac}^{{}} e^{{i\nu_{2} t}} ). \hfill \\ \end{aligned}$$
By applying the identities \(S_{ac}^{ + } a_{2} S_{ac} = a_{2} \text{Cosh} [r] - a_{2}^{ + } \text{Sinh} [r]\) and \(S_{ac}^{ + } a_{2}^{ + } S_{ac} = a_{2}^{ + } \text{Cosh} [r] - a_{2} \text{Sinh} [r]\), and after some straightforward calculation and rearrangement, one obtains
$$\begin{aligned} V_{Squeezed} & = - \hbar g_{1} \left( {\sigma_{ab} a_{1} e^{{i(\omega_{ab} - \nu_{1} )t}} + \sigma_{ba} a_{1}^{{}} e^{{ - i(\omega_{ab} + \nu_{1} )t}} + \sigma_{ab} a_{1}^{ + } e^{{i(\omega_{ab} + \nu_{1} )t}} + \sigma_{ba} a_{1}^{ + } e^{{ - i(\omega_{ab} - \nu_{1} )t}} } \right) \\ & \quad -\, \hbar g_{2} \left[ {a_{2}^{{}} \sigma_{ac} \left( {\text{Cosh} [r]e^{{i(\omega_{ac} - \nu_{2} )t}} - \text{Sinh} [r]e^{{i(\omega_{ac} + \nu_{2} )t}} } \right)} \right. \\ & \quad +\, a_{2}^{ + } \sigma_{ac} \left( {\text{Cosh} [r]e^{{i(\omega_{ac} + \nu_{2} )t}} - \text{Sinh} [r]e^{{i(\omega_{ac} - \nu_{2} )t}} } \right) \\ & \quad +\, a_{2}^{{}} \sigma_{ca} \left( {\text{Cosh} [r]e^{{ - i(\omega_{ca} + \nu_{2} )t}} - \text{Sinh} [r]e^{{ - i(\omega_{ca} - \nu_{2} )t}} } \right) \\ & \quad \left. { +\, a_{2}^{ + } \sigma_{ca} \left( {\text{Cosh} [r]e^{{ - i(\omega_{ca} - \nu_{2} )t}} - \text{Sinh} [r]e^{{ - i(\omega_{ca} + \nu_{2} )t}} } \right)} \right]. \\ \end{aligned}$$
Because of the conservation of energy, the non-conservative terms, which contain \(a\sigma^{ - }\), \(a^{ + } \sigma^{ + }\) or oscillate as \(e^{ \pm i\left( {\omega + \nu } \right)t}\), should be removed to obtain Eq. (19), where \(\Delta_{1} = \omega_{ab} - \nu_{1}\) and \(\Delta_{2} = \omega_{ca} - \nu_{2}\).
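The squeezing identities used above can be checked numerically in a truncated Fock space. The sketch below uses an arbitrary truncation size N and squeezing parameter r, and compares the transformed operators on a low-lying block where truncation effects are negligible.

```python
import numpy as np
from scipy.linalg import expm

# Check S^+ a S = a Cosh[r] - a^+ Sinh[r] (and its conjugate) in a
# truncated Fock space; N and r are arbitrary illustrative choices.
N, r = 80, 0.5
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.conj().T                              # creation operator

# Squeezing operator S(r) = exp(r a^2 / 2 - r a^+^2 / 2)
S = expm(0.5 * r * (a @ a) - 0.5 * r * (ad @ ad))

k = 15  # compare well away from the truncation edge
err1 = np.max(np.abs((S.conj().T @ a @ S
                      - (np.cosh(r) * a - np.sinh(r) * ad))[:k, :k]))
err2 = np.max(np.abs((S.conj().T @ ad @ S
                      - (np.cosh(r) * ad - np.sinh(r) * a))[:k, :k]))
```

Both residuals are limited only by the Fock-space truncation, confirming the Bogoliubov transformation of the second mode.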
Physics Groups, Qazvin Branch, Islamic Azad University, Qazvin, Iran
Department of Physics, University of Zanjan, 6th Km of Tabriz Road, Zanjan, P.O. Box 38791-45371, Iran
Akamatsu D, Akiba K, Kozuma M (2004) Electromagnetically induced transparency with squeezed vacuum. Phys Rev Lett 92:203602
Bai Z, Hang C, Huang G (2013) Classical analogs of double electromagnetically induced transparency. Opt Commun 291:253–258
Braginsky VB, Khalili FY (1996) Quantum nondemolition measurements: the route from toys to tools. Rev Mod Phys 68:1
Chenguang Y, Zhang J (2008) Electromagnetically induced transparency-like effect in the degenerate triple-resonant optical parametric amplifier. Opt Lett 33:1911–1913
Dantan A, Albert M, Drewsen M (2012) All-cavity electromagnetically induced transparency and optical switching: semiclassical theory. Phys Rev A 85:013840
Deng L, Payne MG (2005) Achieving induced transparency with one- and three-photon destructive interference in a two-mode, three-level, double-Λ system. Phys Rev A 71:011803
Fleischhauer M, Imamoglu A, Marangos JP (2005) Electromagnetically induced transparency: optics in coherent media. Rev Mod Phys 77:633–673
Grangier P, Levenson JA, Poizat J-P (1998) Quantum non-demolition measurements in optics. Nature 396:537–542
Harris SE (1997) Electromagnetically induced transparency. Phys Today 50:36
Harris SE, Field JE, Imamoglu A (1990) Nonlinear optical processes using electromagnetically induced transparency. Phys Rev Lett 64:1107
Hong-Wei W, Xian-Wu Mi (2012) Optical absorption and electromagnetically induced transparency in semiconductor quantum well driven by intense terahertz field. Chin Phys B 21:107102
Huang S, Agarwal GS (2011) Electromagnetically induced transparency with quantized fields in optocavity mechanics. Phys Rev A 83:043826
Jafari D, Sahrai M, Motavali H, Mahmoudi M (2011) Phase control of group velocity in a dielectric slab doped with three-level ladder-type atoms. Phys Rev A 84:063811
Johnsson MT, Fleischhauer M (2002) Quantum theory of resonantly enhanced four-wave mixing: mean-field and exact numerical solutions. Phys Rev A 66:043808
Joshi A, Xiao M (2003) Electromagnetically induced transparency and its dispersion properties in a four-level inverted-Y atomic system. Phys Lett A 317:370–377
Khademi S, Naeimi Gh, Alipour S, Mirzaei Sh (2015) An exact scheme for the EIT for a three-level Λ-type atom in a quantum cavity. Appl Math Inf Sci 9(3):1225–1229
Kocharovskaya O, Khanin YaI (1988) Coherent amplification of an ultrashort pulse in a three-level medium without a population inversion. JETP Lett 48:630–634
Lazoudis A et al (2010) Electromagnetically induced transparency in an open Λ-type molecular lithium system. Phys Rev A 82:023812
Lazoudis A et al (2011) Electromagnetically induced transparency in an open V-type molecular system. Phys Rev A 83:063419
Marangos JP (1998) Electromagnetically induced transparency. J Mod Opt 45:471–503
Naeimi G, Khademi S, Heibati O (2013) A method for the measurement of photons number and squeezing parameter in a quantum cavity. ISRN Opt 2013:271951
Olson AJ, Mayer SK (2009) Electromagnetically induced transparency in rubidium. Am J Phys 77:116
Petrosyan D, Otterbach J, Fleischhauer M (2011) Electromagnetically induced transparency with Rydberg atoms. Phys Rev Lett 107:213601
Rabiei SW, Saaidi Kh, Ruzbahani B, Mahmoudi M (2011) Absorption-free superluminal light propagation in a V-type system. Armen J Phys 4:38
Reiserer A, Ritter S, Rempe G (2013) Nondestructive detection of an optical photon. Science 342:1349–1351
Sahrai M, Etemadpour R, Mahmoudi M (2010a) Dispersive and absorptive properties of a Λ-type atomic system with two-fold lower-levels. Eur Phys J B 59:463
Sahrai M, Aas S, Mahmoudi M (2010b) Subluminal to superluminal pulse propagation through one-dimensional photonic crystals with a three-level atomic defect layer. Eur Phys J B 78:51–58
Sahrai M, Aas S, Aas M, Mahmoudi M (2011) Hartman effect in one-dimensional photonic crystals with a three-level atomic defect layer. Eur Phys J B 83:337
Sargsyan A, Leroy C, Pashayan Y, Sarkisian D, Slavov D, Cartaleva S (2012) Electromagnetically induced transparency and optical pumping processes formed in Cs sub-micron thin cell. Opt Commun 285:2090–2095
Sayrin C, Dotsenko I, Gleyzes S, Brune M, Raimond JM, Haroche S (2012) Optimal time-resolved photon number distribution reconstruction of a cavity field by maximum likelihood. New J Phys 14:115007
Scully MO, Zubairy MS (1997) Quantum optics. Cambridge University Press, Cambridge
Veisi M, Vafafard A, Mahmoudi M (2015) Phase-controlled optical Faraday rotation in a closed-loop atomic system. JOSA B 32(1):167–172
Wang Z, Guo W, Zheng S (1992) Quantum theory of optical multistability in a two-photon three-level Λ-configuration medium. Phys Rev A 46:7235–7241
Wang J, Kong LB, Tu XH, Jiang KJ, Li K, Xiong HW, Zhu Y, Zhan MS (2004) Electromagnetically induced transparency in multi-level cascade scheme of cold rubidium atoms. Phys Lett A 328:437–443
\begin{document}
\title[Matrix concentration via semigroup methods]{Nonlinear matrix concentration via semigroup methods} \author[D. Huang and J. A. Tropp]{De Huang$^*$ and Joel A. Tropp$^\dagger$} \thanks{$^*$California Institute of Technology, USA. E-mail: [email protected]} \thanks{$^\dagger$California Institute of Technology, USA. E-mail: [email protected]}
\begin{abstract} Matrix concentration inequalities provide information about the probability that a random matrix is close to its expectation with respect to the $\ell_2$ operator norm. This paper uses semigroup methods to derive sharp nonlinear matrix inequalities. In particular, it is shown that the classic Bakry--{\'E}mery curvature criterion implies subgaussian concentration for ``matrix Lipschitz'' functions. This argument circumvents the need to develop a matrix version of the log-Sobolev inequality, a technical obstacle that has blocked previous attempts to derive matrix concentration inequalities in this setting. The approach unifies and extends much of the previous work on matrix concentration. When applied to a product measure, the theory reproduces the matrix Efron--Stein inequalities due to Paulin et al. It also handles matrix-valued functions on a Riemannian manifold with uniformly positive Ricci curvature. \end{abstract}
\subjclass[2010]{Primary: 60B20, 46N30. Secondary: 60J25, 46L53.} \keywords{Bakry--{\'E}mery criterion; concentration inequality; functional inequality; Markov process; matrix concentration; local Poincar{\'e} inequality; semigroup.}
\maketitle
\section{Motivation}
Matrix concentration inequalities describe the probability that a random matrix is close to its expectation, with deviations measured in the $\ell_2$ operator norm. The basic models---sums of independent random matrices and matrix-valued martingales---have been studied extensively, and they admit a wide spectrum of applications~\cite{tropp2015introduction}. Nevertheless, we lack a complete understanding of more general random matrix models. The purpose of this paper is to develop a systematic approach for deriving ``nonlinear'' matrix concentration inequalities.
In the scalar setting, functional inequalities offer a powerful framework for studying nonlinear concentration. For example, consider a real-valued Lipschitz function $f(Z)$ of a real random variable $Z$ with distribution $\mu$. If the measure $\mu$ satisfies a Poincar{\'e} inequality, then the variance of $f(Z)$ is controlled by the squared Lipschitz constant of $f$. If the measure satisfies a log-Sobolev inequality, then $f(Z)$ enjoys subgaussian concentration on the scale of the Lipschitz constant.
Now, suppose that we can construct a semigroup, acting on real-valued functions, with stationary distribution $\mu$. Functional inequalities for the measure $\mu$ are intimately related to the convergence of the semigroup. In particular, the measure admits a Poincar{\'e} inequality if and only if the semigroup rapidly tends to equilibrium (in the sense that the variance is exponentially ergodic). Meanwhile, log-Sobolev inequalities are associated with finer types of ergodicity.
In recent years, researchers have attempted to use functional inequalities and semigroup tools to prove matrix concentration results. So far, these arguments have met some success, but they are not strong enough to reproduce the results that are already available for the simplest random matrix models. The main obstacle has been the lack of a suitable extension of the log-Sobolev inequality to the matrix setting. See Section~\ref{sec:concentration_history} for an account of prior work.
The purpose of this paper is to advance the theory of semigroups acting on matrix-valued functions and to apply these methods to obtain matrix concentration inequalities for nonlinear random matrix models. To do so, we argue that the classic Bakry--{\'E}mery curvature criterion for a semigroup acting on real-valued functions ensures that an associated matrix semigroup also satisfies a curvature condition. This property further implies local ergodicity of the matrix semigroup, which we can use to prove strong bounds on the trace moments of nonlinear random matrix models.
The power of this approach is that the Bakry--{\'E}mery condition has already been verified for a large number of semigroups. We can exploit these results to identify many new settings where matrix concentration is in force. This program entirely evades the question about the proper way to extend log-Sobolev inequalities to matrices.
Our approach reproduces many existing results from the theory of matrix concentration, such as the matrix Efron--Stein inequalities~\cite{paulin2016efron}. Among other new results, we can achieve subgaussian concentration for a matrix-valued ``Lipschitz'' function on a positively curved Riemannian manifold. Here is a simplified formulation of this fact.
\begin{theorem}[Euclidean submanifold: Subgaussian concentration] \label{thm:riemann-simple} Let $M$ be a compact $n$-dimensional Riemannian submanifold of a Euclidean space, and let $\mu$ be the uniform measure on $M$. Suppose that the eigenvalues of the Ricci curvature tensor of $M$ are uniformly bounded below by $\rho$. Let $\mtx{f} : M \to \mathbb{H}_d$ be a differentiable function. For all $t \geq 0$, $$ \mathbb{P}_{\mu} \big\{ \norm{ \smash{\mtx{f} - \Expect_{\mu} \mtx{f}} } \geq t \big\}
\leq 2d \, \exp\left( \frac{-\rho t^2}{2 v_{\mtx{f}}} \right)
\quad\text{where}\quad
v_{\mtx{f}} := \sup\nolimits_{x \in M} \norm{ \sum_{i=1}^n (\partial_i \mtx{f}(x))^2 }. $$ Furthermore, for $q = 2$ and $q\geq 3$, $$ \left[ \operatorname{\mathbbm{E}}_{\mu} \operatorname{tr} ( \mtx{f} - \operatorname{\mathbbm{E}}_{\mu} \mtx{f} )^q \right]^{1/q}
\leq \rho^{-1/2} \sqrt{q - 1} \left[
\operatorname{\mathbbm{E}}_{\mu} \operatorname{tr} \left( \sum_{i=1}^n (\partial_i \mtx{f})^2 \right)^{q/2} \right]^{1/q}. $$ The real-linear space $\mathbb{H}_d$ contains all $d \times d$ Hermitian matrices, and $\norm{\cdot}$ is the $\ell_2$ operator norm. The operators $\partial_i$ compute partial derivatives in local (normal) coordinates. \end{theorem}
Theorem~\ref{thm:riemann-simple} follows from abstract concentration inequalities (Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_concentration}) and the classic fact that the Brownian motion on a positively curved Riemannian manifold satisfies the Bakry--{\'E}mery criterion~\cite[Sec.~1.16]{bakry2013analysis}. See Section~\ref{sec:concentration_results_Riemannian} for details.
Particular settings where the theorem is valid include the unit Euclidean sphere and the special orthogonal group. The variance proxy $v_{\mtx{f}}$ is analogous with the squared Lipschitz constant that appears in scalar concentration results. We emphasize that $\partial_i \mtx{f}$ is an Hermitian matrix, and the variance proxy involves a sum of the matrix squares. Thus, the ``Lipschitz constant'' is tailored to the matrix setting.
As a concrete example, consider the $n$-dimensional sphere $\mathbb{S}^n \subset \mathbbm{R}^{n+1}$, with uniform measure $\sigma_n$ and curvature $\rho = n - 1$. Let $\mtx{A}_1, \dots, \mtx{A}_{n+1} \in \mathbb{H}_d$ be fixed matrices. Construct the random matrix $$ \mtx{f}(x) = \sum_{i=1}^{n+1} x_i \mtx{A}_i \quad\text{where $x \sim \sigma_n$.} $$ By symmetry, $\Expect_{\sigma_n} \mtx{f} = \mtx{0}$. Moreover, the variance proxy $v_{\mtx{f}} \leq \norm{ \sum_{i=1}^{n+1} \mtx{A}_i^2 }$. Thus, Theorem~\ref{thm:riemann-simple} delivers the bound $$ \mathbb{P}_{\sigma_n} \big\{ \norm{ \mtx{f} } \geq t \big\}
\leq 2d \exp\left( \frac{-(n-1) t^2}{2 v_{\mtx{f}}} \right). $$ See Section~\ref{sec:riemann-exp} for more instances of Theorem~\ref{thm:riemann-simple} in action.
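As a numerical sanity check (separate from the proof), one can sample this sphere model and compare the empirical tail of $\norm{\mtx{f}}$ with the stated bound. The dimensions, coefficient matrices, sample size, and threshold in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 3                                  # sphere S^n, d x d Hermitian coefficients
A = rng.standard_normal((n + 1, d, d))
A = (A + A.transpose(0, 2, 1)) / 2            # fixed (real symmetric) matrices A_1..A_{n+1}

# variance proxy: v_f <= || sum_i A_i^2 || (spectral norm)
v = np.linalg.norm(np.einsum('iab,ibc->ac', A, A), 2)

# uniform samples on S^n via normalized Gaussians
x = rng.standard_normal((5000, n + 1))
x /= np.linalg.norm(x, axis=1, keepdims=True)
norms = np.array([np.linalg.norm(np.einsum('i,iab->ab', xi, A), 2) for xi in x])

t = 2.0
empirical = np.mean(norms >= t)                        # empirical tail P{ ||f|| >= t }
bound = 2 * d * np.exp(-(n - 1) * t**2 / (2 * v))      # subgaussian bound from the theorem
```

The empirical tail never exceeds the theoretical bound; for these generic coefficients the bound is loose, as expected from the dimensional factor $2d$.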
\begin{remark}[Noncommutative moment inequalities] After this paper was complete, we learned that Junge \& Zeng~\cite{junge2015noncommutative} have developed a similar method, based on a noncommutative Bakry--{\'E}mery criterion, to obtain moment inequalities in the setting of a von Neumann algebra equipped with a noncommutative diffusion semigroup. Their results are not fully comparable with ours, so we will elaborate on the relationship as we go along. \end{remark}
\section{Matrix Markov semigroups: Foundations} \label{sec:matrix_Markov_semigroups}
To start, we develop some basic facts about an important class of Markov semigroups that acts on matrix-valued functions. Given a Markov process, we define the associated matrix Markov semigroup and its infinitesimal generator. Then we construct the matrix carr\'e du champ operator and the Dirichlet form. Afterward, we outline the connection between convergence properties of the semigroup and Poincar{\'e} inequalities. Parts of our treatment are adapted from~\cite{cheng2017exponential,ABY20:Matrix-Poincare}, but some elements appear to be new.
\subsection{Notation}
Let $\mathbb{M}_d$ be the algebra of all $d \times d$ complex matrices. The real-linear subspace $\mathbb{H}_d$ contains all Hermitian matrices, and $\mathbb{H}_d^+$ is the cone of all positive-semidefinite matrices. Matrices are written in boldface. In particular, $\mathbf{I}_d$ is the $d$-dimensional identity matrix, while $\mtx{f}$, $\mtx{g}$ and $\mtx{h}$ refer to matrix-valued functions. We use the symbol $\preccurlyeq$ for the semidefinite partial order on Hermitian matrices: For matrices $\mtx{A}, \mtx{B} \in \mathbb{H}_d$, the inequality $\mtx{A}\preccurlyeq \mtx{B}$ means that $\mtx{B}-\mtx{A}\in \mathbb{H}_d^+$.
For a matrix $\mtx{A}\in\mathbb{M}_d$, we write $\|\mtx{A}\|$ for the $\ell_2$ operator norm, $\|\mtx{A}\|_\mathrm{HS}$ for the Hilbert--Schmidt norm, and $\operatorname{tr} \mtx{A}$ for the trace. The normalized trace is defined as $\operatorname{\bar{\trace}} \mtx{A} := d^{-1} \operatorname{tr} \mtx{A}$. Nonlinear functions bind before the trace. Given a scalar function $\varphi:\mathbb{R}\rightarrow \mathbb{R}$, we construct the \emph{standard matrix function} $\varphi : \mathbb{H}_d \to \mathbb{H}_d$ using the eigenvalue decomposition: \[\varphi(\mtx{A}) := \sum_{i=1}^d \varphi(\lambda_i) \, \vct{u}_i\vct{u}_i^* \quad \text{where}\quad \mtx{A} = \sum_{i=1}^d \lambda_i \,\vct{u}_i\vct{u}_i^*. \] We constantly rely on basic tools from matrix theory; see~\cite{carlen2010trace}.
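The standard matrix function can be implemented directly from this definition; a minimal numerical sketch (the test matrix is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

def standard_matrix_function(A, phi):
    """phi(A) = sum_i phi(lambda_i) u_i u_i^*, via the eigendecomposition
    of the Hermitian matrix A."""
    lam, U = np.linalg.eigh(A)
    return (U * phi(lam)) @ U.conj().T

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2                      # a Hermitian test matrix

# Agrees with the matrix exponential and with the literal matrix square.
err_exp = np.max(np.abs(standard_matrix_function(A, np.exp) - expm(A)))
err_sq = np.max(np.abs(standard_matrix_function(A, np.square) - A @ A))
```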
Let $\Omega$ be a Polish space equipped with a probability measure $\mu$. Define $\Expect_\mu$ and $\mathrm{Var}_{\mu}$ to be the expectation and variance of a real-valued function with respect to the measure $\mu$. When applied to a random matrix, $\Expect_\mu$ computes the entrywise expectation. Nonlinear functions bind before the expectation.
\subsection{Markov semigroups acting on matrices}
This paper focuses on a special class of Markov semigroups acting on matrices. In this model, a classical Markov process drives the evolution of a matrix-valued function. Remark~\ref{rem:nc-semigroup} mentions some generalizations.
Suppose that $(Z_t)_{t\geq0} \subset \Omega$ is a time-homogeneous Markov process on the state space $\Omega$ with stationary measure $\mu$. For each matrix dimension $d \in \mathbbm{N}$, we can construct a Markov semigroup $(P_t)_{t\geq0}$ that acts on a (bounded) measurable matrix-valued function $\mtx{f} : \Omega\rightarrow\mathbb{H}_d$ according to \begin{equation} \label{eqn:semigroup}
(P_t\mtx{f})(z) := \Expect[\mtx{f}(Z_t)\,|\,Z_0 = z]\quad \text{for all $t\geq 0$ and all $z\in \Omega$}. \end{equation} The semigroup property $P_{t+s} = P_{t}P_{s} = P_{s}P_{t}$ holds for all $s, t \geq 0$ because $(Z_t)_{t\geq 0}$ is a homogeneous Markov process.
Note that the operator $P_0$ is the identity map: $P_0 \mtx{f} = \mtx{f}$. For a fixed $\mtx{A} \in \mathbb{H}_d$, regarded as a constant function on $\Omega$, the semigroup also acts as the identity: $P_t \mtx{A} = \mtx{A}$ for all $t\geq0$. Furthermore, $\operatorname{\mathbbm{E}}_{\mu}[ P_t \mtx{f} ] = \operatorname{\mathbbm{E}}_{\mu}[ \mtx{f} ]$ because $Z_0 \sim \mu$ implies that $Z_t \sim \mu$ for all $t \geq 0$. We use these facts without comment.
Although~\eqref{eqn:semigroup} defines a family of semigroups indexed by the matrix dimension $d$, we will abuse terminology and speak of this collection as if it were a single semigroup. A major theme of this paper is that facts about the action of the semigroup~\eqref{eqn:semigroup} on real-valued functions ($d = 1$) imply parallel facts about the action on matrix-valued functions ($d \in \mathbbm{N}$).
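For a finite state space, the semigroup~\eqref{eqn:semigroup} is concrete: if $Q$ is the generator of the chain, then $(P_t\mtx{f})(z) = \sum_j (\mathrm{e}^{tQ})_{zj}\,\mtx{f}(j)$. The sketch below (the chain, measure, and matrix-valued function are arbitrary illustrative choices) checks the semigroup property, stationarity of the mean, and ergodicity numerically.

```python
import numpy as np
from scipy.linalg import expm

# Generator of a 3-state Markov chain (rows sum to zero) with
# stationary measure mu = (0.2, 0.4, 0.4).
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
mu = np.array([0.2, 0.4, 0.4])
assert np.allclose(mu @ Q, 0)                 # stationarity of mu

# A matrix-valued function f : {0,1,2} -> H_2 (one 2x2 Hermitian matrix per state).
f = np.array([[[1.0, 0.5], [0.5, -1.0]],
              [[0.0, 1.0], [1.0,  2.0]],
              [[2.0, 0.0], [0.0,  0.0]]])

def P(t, g):
    """(P_t g)(z) = E[g(Z_t) | Z_0 = z] = sum_j (e^{tQ})_{zj} g(j)."""
    return np.einsum('zj,jab->zab', expm(t * Q), g)

Ef = np.einsum('z,zab->ab', mu, f)            # E_mu f
stat_err = np.max(np.abs(np.einsum('z,zab->ab', mu, P(1.0, f)) - Ef))
semi_err = np.max(np.abs(P(1.5, f) - P(1.0, P(0.5, f))))   # P_{t+s} = P_t P_s
ergo_err = np.max(np.abs(P(50.0, f) - Ef))                 # P_t f -> E_mu f
```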
\begin{remark}[Noncommutative semigroups] \label{rem:nc-semigroup} There is a very general class of noncommutative semigroups acting on a von Neumann algebra where the action is determined by a family of completely positive unital maps~\cite{junge2015noncommutative}. This framework includes~\eqref{eqn:semigroup} as a special case; it covers quantum semigroups~\cite{cheng2017exponential} acting on $\mathbb{H}_d$ with a fixed matrix dimension $d$; it also includes more exotic examples. We will not study these models, but we will discuss the relationship between our results and prior work. \end{remark}
\subsection{Ergodicity and reversibility}
We say that the semigroup $(P_t)_{t\geq0}$ defined in~\eqref{eqn:semigroup} is \emph{ergodic} if \begin{equation*}\label{eqn:ergodicity_scalar} P_tf \rightarrow \Expect_\mu f\quad \text{in $L_2(\mu)$}\quad \text{as}\quad t\rightarrow+\infty \quad \text{for all $f:\Omega\rightarrow\mathbb{R}$}. \end{equation*} Furthermore, $(P_t)_{t\geq0}$ is \emph{reversible} if each operator $P_t$ is a \emph{symmetric} operator on $L_2(\mu)$. That is, \begin{equation}\label{eqn:reversibility_scalar} \Expect_\mu [(P_tf) \, g] = \Expect_\mu [f \, (P_tg)] \quad \text{for all $t\geq0$ and all $f,g:\Omega\rightarrow\mathbb{R}$}. \end{equation} Note that these definitions involve only real-valued functions $(d = 1)$.
In parallel, we say that the Markov process $(Z_t)_{t\geq0}$ is reversible (resp.~ergodic) if the associated Markov semigroup $(P_t)_{t\geq0}$ is reversible (resp.~ergodic). The reversibility of the process $(Z_t)_{t\geq0}$ implies that, when $Z_0 \sim \mu$, the pair $(Z_t, Z_0)$ is \emph{exchangeable} for all $t \geq 0$. That is, $(Z_t,Z_0)$ and $(Z_0,Z_t)$ follow the same distribution for all $t\geq0$.
Our matrix concentration results require ergodicity and reversibility of the semigroup action on matrix-valued functions. These properties are actually a consequence of the analogous properties for real-valued functions. Evidently, the ergodicity of $(P_t)_{t\geq0}$ is equivalent to the statement \begin{equation}\label{eqn:ergodicity} P_t\mtx{f} \rightarrow \Expect_\mu \mtx{f}\quad \text{in $L_2(\mu)$}\quad \text{as}\quad t\rightarrow+\infty \quad \text{for all $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$ and each $d \in \mathbbm{N}$.} \end{equation} Note that the $L_2(\mu)$ convergence in the matrix setting means $\lim_{t\rightarrow \infty}\Expect_\mu(P_t\mtx{f}-\Expect_\mu \mtx{f})^2= \mtx{0}$, which is readily implied by the $L_2(\mu)$ convergence of all entries of $P_t\mtx{f}-\Expect_\mu \mtx{f}$. As for reversibility, we have the following result.
\begin{proposition}[Reversibility] \label{prop:reversibility} Let $(P_t)_{t \geq 0}$ be the family of semigroups defined in~\eqref{eqn:semigroup}. The following are equivalent.
\begin{enumerate} \item The semigroup acting on real-valued functions is symmetric, as in~\eqref{eqn:reversibility_scalar}.
\item The semigroup acting on matrix-valued functions is symmetric. That is, for each $d \in \mathbbm{N}$, \begin{equation}\label{eqn:reversibility_1} \Expect_\mu [(P_t\mtx{f}) \, \mtx{g}] = \Expect_\mu [\mtx{f} \, (P_t\mtx{g})] \quad \text{for all $t\geq0$ and all $\mtx{f},\mtx{g}:\Omega\rightarrow\mathbb{H}_d$}. \end{equation} \end{enumerate} \end{proposition}
\noindent Let us emphasize that~\eqref{eqn:reversibility_1} now involves matrix products. The proof of Proposition~\ref{prop:reversibility} appears below in Section~\ref{sec:reversibility-pf}.
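For a finite reversible chain, the matrix symmetry~\eqref{eqn:reversibility_1} can be verified directly. In the sketch below, the generator and measure are arbitrary choices satisfying detailed balance $\mu_i Q_{ij} = \mu_j Q_{ji}$, and $\mtx{f}, \mtx{g}$ are random Hermitian-valued functions.

```python
import numpy as np
from scipy.linalg import expm

# Reversible 3-state chain: detailed balance mu_i Q_{ij} = mu_j Q_{ji}.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
mu = np.array([0.2, 0.4, 0.4])
db_err = np.max(np.abs(mu[:, None] * Q - (mu[:, None] * Q).T))

# Symmetry of P_t on L_2(mu) for matrix-valued f, g (note the matrix products).
rng = np.random.default_rng(0)
f = rng.standard_normal((3, 2, 2)); f = (f + f.transpose(0, 2, 1)) / 2
g = rng.standard_normal((3, 2, 2)); g = (g + g.transpose(0, 2, 1)) / 2
Pt = expm(0.7 * Q)
Ptf = np.einsum('zj,jab->zab', Pt, f)
Ptg = np.einsum('zj,jab->zab', Pt, g)
lhs = np.einsum('z,zab,zbc->ac', mu, Ptf, g)   # E_mu[(P_t f) g]
rhs = np.einsum('z,zab,zbc->ac', mu, f, Ptg)   # E_mu[f (P_t g)]
sym_err = np.max(np.abs(lhs - rhs))
```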
\subsection{Convexity}
Given a convex function $\Phi:\mathbb{H}_d\rightarrow\mathbb{R}$ that is bounded below, the semigroup satisfies a Jensen inequality of the form \begin{equation*}\label{eqn:semigroup_Jensen_1}
\Phi(P_t\mtx{f}(z)) = \Phi(\Expect[\mtx{f}(Z_t)\,|\,Z_0 = z]) \leq \Expect[\Phi(\mtx{f}(Z_t))\,|\,Z_0 = z]\quad \text{for all $z\in \Omega$}. \end{equation*} This is an easy consequence of the definition~\eqref{eqn:semigroup}. In particular, \begin{equation}\label{eqn:semigroup_Jensen_2}
\Expect_\mu \Phi(P_t\mtx{f}) \leq \Expect_{Z\sim\mu} \Expect[\Phi(\mtx{f}(Z_t))\,|\,Z_0 = Z] = \Expect_{Z_0\sim\mu}[\Phi(\mtx{f}(Z_t))] = \Expect_\mu \Phi(\mtx{f}) . \end{equation} A typical choice of $\Phi$ is the trace function $\operatorname{tr} \varphi$, where $\varphi : \mathbb{H}_d \to \mathbb{H}_d$ is a standard matrix function.
\subsection{Infinitesimal generator}
The \emph{infinitesimal generator} $\mathcal{L}$ of the semigroup~\eqref{eqn:semigroup} acts on a (nice) measurable function $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$ via the formula \begin{equation}\label{eqn:Markov_generator} (\mathcal{L}\mtx{f})(z) := \lim_{t\downarrow 0}\frac{(P_t\mtx{f})(z) -\mtx{f}(z)}{t} \quad\text{for all $z \in \Omega$.} \end{equation} Because $(P_t)_{t \geq 0}$ is a semigroup, it follows immediately that \begin{equation}\label{eqn:derivative_relation} \frac{\diff{} }{\diff t}P_t = \mathcal{L} P_t = P_t\mathcal{L}\quad \text{for all}\ t\geq0. \end{equation} The null space of $\mathcal{L}$ contains all constant functions: $\mathcal{L} \mtx{A} = \mtx{0}$ for each fixed $\mtx{A} \in \mathbb{H}_d$. Moreover, \begin{equation}\label{eqn:mean_zero} \Expect_\mu[\mathcal{L} \mtx{f} ] = \mtx{0}\quad \text{for all $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$}. \end{equation} That is, the infinitesimal generator converts an arbitrary function into a zero-mean function.
We say that the infinitesimal generator $\mathcal{L}$ is \emph{symmetric} on $L_2(\mu)$ when its action on real-valued functions is symmetric: \[ \Expect_\mu [(\mathcal{L} f)\,g] = \Expect_\mu [f \, (\mathcal{L} g)] \quad \text{for all $f,g:\Omega\rightarrow\mathbb{R}$}. \] The generator $\mathcal{L}$ is symmetric if and only if the semigroup $(P_t)_{t \geq 0}$ is symmetric (i.e., reversible). In this case, the action of $\mathcal{L}$ on matrix-valued functions is also symmetric: \begin{equation}\label{eqn:reversibility_2} \Expect_\mu [(\mathcal{L} \mtx{f})\,\mtx{g}] = \Expect_\mu [\mtx{f} \, (\mathcal{L}\mtx{g})] \quad \text{for all $\mtx{f},\mtx{g}:\Omega\rightarrow\mathbb{H}_d$}. \end{equation} This point follows from Proposition~\ref{prop:reversibility}.
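On a finite state space, detailed balance $\mu_i Q_{ij} = \mu_j Q_{ji}$ is exactly the symmetry of the generator on $L_2(\mu)$. The following sketch confirms this on a toy three-state chain (an arbitrary illustration, not taken from the text):

```python
import numpy as np

# Detailed balance mu_i Q[i,j] = mu_j Q[j,i] <=> diag(mu) @ Q is symmetric,
# which makes the generator symmetric on L2(mu): E_mu[(Lf) g] = E_mu[f (Lg)].
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
mu = np.array([0.2, 0.4, 0.4])

balance = np.diag(mu) @ Q                   # symmetric iff the chain is reversible

rng = np.random.default_rng(1)
f, g = rng.standard_normal(3), rng.standard_normal(3)
lhs = np.sum(mu * (Q @ f) * g)              # E_mu[(Lf) g]
rhs = np.sum(mu * f * (Q @ g))              # E_mu[f (Lg)]
```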
As we have alluded, the limit in \eqref{eqn:Markov_generator} need not exist for all functions. The set of functions $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$ for which $\mathcal{L}\mtx{f}$ is defined $\mu$-almost everywhere is called the \emph{domain} of the generator. It is highly technical, but usually unimportant, to characterize the domain of the generator and related operators.
For our purposes, we may restrict attention to an unspecified algebra of \emph{suitable} functions (say, smooth and compactly supported) where all operations involving limits, derivatives, and integrals are justified. By approximation, we can extend the main results to the entire class of functions where the statements make sense. We refer the reader to the monograph~\cite{bakry2013analysis} for an extensive discussion about how to make these arguments airtight.
\subsection{Carr\'e du champ operator and Dirichlet form}
For each $d \in \mathbbm{N}$, given the infinitesimal generator $\mathcal{L}$, the matrix \textit{carr\'e du champ operator} is the bilinear form \begin{equation}\label{eqn:definition_Gamma} \Gamma(\mtx{f},\mtx{g}) := \frac{1}{2}\left[ \mathcal{L}(\mtx{f}\mtx{g}) - \mtx{f}\mathcal{L}(\mtx{g}) - \mathcal{L}(\mtx{f})\mtx{g} \right] \in \mathbb{M}_d \quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}. \end{equation} The matrix \emph{Dirichlet form} is the bilinear form obtained by integrating the carr{\'e} du champ: \begin{equation} \label{eqn:Dirichlet_form} \mathcal{E}(\mtx{f},\mtx{g}) := \Expect_\mu \Gamma(\mtx{f},\mtx{g}) \in \mathbb{M}_d \quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}. \end{equation} We abbreviate the associated quadratic forms as $\Gamma(\mtx{f}):=\Gamma(\mtx{f},\mtx{f})$ and $\mathcal{E}(\mtx{f}):=\mathcal{E}(\mtx{f},\mtx{f})$. Proposition~\ref{prop:Gamma_property} states that both these quadratic forms are positive operators in the sense that they take values in the cone of positive-semidefinite Hermitian matrices. In many instances, the carr\'e du champ $\Gamma(\mtx{f})$ has a natural interpretation as the squared magnitude of the derivative of $\mtx{f}$, while the Dirichlet form $\mathcal{E}(\mtx{f})$ reflects the total energy of the function $\mtx{f}$.
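For a jump process, such as the toy chain used above for illustration, the definition collapses to $\Gamma(\mtx{f})(i) = \tfrac{1}{2}\sum_j Q_{ij}\,(\mtx{f}(j)-\mtx{f}(i))^2$, which makes positivity evident. The sketch below (arbitrary data) checks this identity and the positivity numerically:

```python
import numpy as np

# Carre du champ on a 3-state chain: the definition via the generator agrees
# with the jump formula Gamma(f)(i) = (1/2) sum_j Q[i,j] (f(j) - f(i))^2,
# which is a positive-semidefinite matrix at every state.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
rng = np.random.default_rng(0)
d = 2
A = rng.standard_normal((3, d, d))
f = (A + A.transpose(0, 2, 1)) / 2

L = lambda g: np.einsum('ij,j...->i...', Q, g)       # (Lg)(i) = sum_j Q[i,j] g(j)
mul = lambda a, b: np.einsum('ikl,ilm->ikm', a, b)   # pointwise matrix product

Gamma = 0.5 * (L(mul(f, f)) - mul(f, L(f)) - mul(L(f), f))

# Jump-process formula for the same object.
diff = f[None, :] - f[:, None]                       # diff[i, j] = f(j) - f(i)
sq = np.einsum('ijkl,ijlm->ijkm', diff, diff)
Gamma_jump = 0.5 * np.einsum('ij,ijkm->ikm', Q, sq)

eigs = np.linalg.eigvalsh(Gamma)                     # eigenvalues at each state
```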
Using~\eqref{eqn:mean_zero}, we can rewrite the Dirichlet form as \begin{align}\label{eqn:Dirichlet_expression_1} \mathcal{E}(\mtx{f},\mtx{g}) = \Expect_\mu\Gamma(\mtx{f},\mtx{g}) = -\frac{1}{2} \Expect_\mu \left[\mtx{f}\mathcal{L}(\mtx{g}) + \mathcal{L}(\mtx{f}) \mtx{g}\right]. \end{align} When the semigroup $(P_t)_{t \geq 0}$ is reversible, then~\eqref{eqn:reversibility_2} and~\eqref{eqn:Dirichlet_expression_1} indicate that \begin{align}\label{eqn:Dirichlet_expression_2} \mathcal{E}(\mtx{f},\mtx{g}) = -\Expect_\mu [\mtx{f}\mathcal{L}(\mtx{g})] = -\Expect_\mu [\mathcal{L}(\mtx{f})\mtx{g}]. \end{align} These alternative expressions are very useful for calculations.
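These identities can be confirmed on a small reversible chain (a toy model introduced here for illustration; the rate matrix and function values are arbitrary). All three expressions for $\mathcal{E}(\mtx{f},\mtx{g})$ agree to machine precision:

```python
import numpy as np

# Dirichlet form on a reversible 3-state chain:
# E(f,g) = E_mu Gamma(f,g) = -E_mu[f L(g)] = -E_mu[L(f) g].
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
mu = np.array([0.2, 0.4, 0.4])
rng = np.random.default_rng(2)
d = 2

def herm(A):
    return (A + A.transpose(0, 2, 1)) / 2

f = herm(rng.standard_normal((3, d, d)))
g = herm(rng.standard_normal((3, d, d)))

L = lambda h: np.einsum('ij,j...->i...', Q, h)       # generator
mul = lambda a, b: np.einsum('ikl,ilm->ikm', a, b)   # pointwise matrix product
avg = lambda h: np.einsum('i,ikl->kl', mu, h)        # expectation E_mu

Gamma_fg = 0.5 * (L(mul(f, g)) - mul(f, L(g)) - mul(L(f), g))
E1 = avg(Gamma_fg)
E2 = -avg(mul(f, L(g)))
E3 = -avg(mul(L(f), g))
```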
\subsection{The matrix Poincar\'e inequality}
For each function $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$, the \textit{matrix variance} with respect to the distribution $\mu$ is defined as \begin{equation} \label{eqn:matrix_variance} \mVar_\mu[\mtx{f}] := \Expect_\mu\big[(\mtx{f}-\Expect_\mu\mtx{f})^2\big] = \Expect_\mu[\mtx{f}^2] - (\Expect_\mu\mtx{f})^2\in \mathbb{H}_d^+. \end{equation} We say that the Markov process satisfies a \textit{matrix Poincar\'e inequality} with constant $\alpha>0$ if \begin{equation}\label{eqn:matrix_Poincare} \mVar_\mu[\mtx{f}]\preccurlyeq \alpha \cdot \mathcal{E}(\mtx{f})\quad \text{for all suitable $\mtx{f} : \Omega \to \mathbb{H}_d$}. \end{equation} This definition seems to be due to Cheng et al.~\cite{cheng2017exponential}; see also Aoun et al.~\cite{ABY20:Matrix-Poincare}.
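On a finite reversible chain the sharpest constant is $\alpha = 1/(\text{spectral gap of } -\mathcal{L})$. The sketch below (a toy model with arbitrary data) computes the gap and verifies the matrix inequality for a random Hermitian-valued function:

```python
import numpy as np

# Matrix Poincare inequality mVar_mu[f] <<= alpha * E(f) on a reversible chain,
# with alpha the inverse spectral gap of -L.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
mu = np.array([0.2, 0.4, 0.4])

S = np.diag(mu**0.5) @ Q @ np.diag(mu**-0.5)       # symmetric for a reversible chain
gap = -np.sort(np.linalg.eigvalsh(S))[-2]          # spectral gap of -L
alpha = 1.0 / gap

rng = np.random.default_rng(3)
d = 2
A = rng.standard_normal((3, d, d))
f = (A + A.transpose(0, 2, 1)) / 2

Lf = np.einsum('ij,jkl->ikl', Q, f)
Ef = np.einsum('i,ikl->kl', mu, f)
mVar = np.einsum('i,ikl,ilm->km', mu, f - Ef, f - Ef)

# Dirichlet form via E(f) = -(1/2) E_mu[f Lf + (Lf) f]
Dir = -0.5 * (np.einsum('i,ikl,ilm->km', mu, f, Lf)
              + np.einsum('i,ikl,ilm->km', mu, Lf, f))

slack = np.linalg.eigvalsh(alpha * Dir - mVar)     # nonnegative iff Poincare holds
```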
When the matrix dimension $d = 1$, the inequality~\eqref{eqn:matrix_Poincare} reduces to the usual scalar Poincar{\'e} inequality for the semigroup. For the semigroup~\eqref{eqn:semigroup}, the scalar Poincar{\'e} inequality ($d = 1$) already implies the matrix Poincar{\'e} inequality (for all $d \in \mathbbm{N}$). Therefore, to check the validity of~\eqref{eqn:matrix_Poincare}, it suffices to consider real-valued functions.
\begin{proposition}[Poincar{\'e} inequalities: Equivalence] \label{prop:poincare_equiv} For each $d \in \mathbbm{N}$, let $(P_t)_{t\geq 0}$ be the semigroup defined in~\eqref{eqn:semigroup}. The following are equivalent:
\begin{enumerate} \item \label{Poincare_inequality_scalar} \textbf{Scalar Poincar\'e inequality.} $\Var_\mu[f]\leq \alpha \cdot \mathcal{E}(f)$ for all suitable $f:\Omega\to \mathbb{R}$. \item \label{Poincare_inequality_matrix} \textbf{Matrix Poincar\'e inequality.} $\mVar_\mu[\mtx{f}]\preccurlyeq \alpha \cdot \mathcal{E}(\mtx{f})$ for all suitable $\mtx{f}: \Omega \to \mathbb{H}_d$ and all $d \in \mathbbm{N}$. \end{enumerate} \end{proposition}
\noindent The proof of Proposition~\ref{prop:poincare_equiv} appears in Section~\ref{sec:scalar_matrix}. We are grateful to Ramon van Handel for this observation.
\subsection{Poincar{\'e} inequalities and ergodicity}
As in the scalar case, the matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare} is a powerful tool for understanding the action of a semigroup on matrix-valued functions. Assuming ergodicity, the Poincar{\'e} inequality is equivalent to the exponential convergence of the Markov semigroup $(P_t)_{t \geq 0}$ to the expectation operator $\Expect_{\mu}$. The constant $\alpha$ determines the rate of convergence. The following result makes this principle precise.
\begin{proposition}[Poincar\'e inequality: Consequences] \label{prop:matrix_poincare} Consider a Markov semigroup $(P_t)_{t \geq 0}$ with stationary measure $\mu$ acting on suitable functions $\mtx{f} : \Omega \to \mathbb{H}_d$ for a fixed $d \in \mathbbm{N}$, as defined in~\eqref{eqn:semigroup}. The following are equivalent: \begin{enumerate} \item \label{Poincare_inequality} \textbf{Poincar\'e inequality.} $\mVar_\mu[\mtx{f}]\preccurlyeq \alpha \cdot \mathcal{E}(\mtx{f})$ for all suitable $\mtx{f}: \Omega \to \mathbb{H}_d$. \item \label{variance_convergence} \textbf{Exponential ergodicity of variance.} $\mVar_\mu[P_t\mtx{f}]\preccurlyeq \mathrm{e}^{-2t/\alpha} \cdot \mVar_\mu[\mtx{f}]$ for all $t\geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$. \end{enumerate} Moreover, if the semigroup $(P_t)_{t\geq 0}$ is reversible and ergodic, then the statements above are also equivalent to the following: \begin{enumerate}[resume] \item \label{energy_convergence} \textbf{Exponential ergodicity of energy.} $\mathcal{E}(P_t\mtx{f})\preccurlyeq \mathrm{e}^{-2t/\alpha} \cdot \mathcal{E}(\mtx{f})$ for all $t \geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$. \end{enumerate} \end{proposition}
\noindent Section~\ref{sec:equivalence_Poincare} contains the proof of Proposition~\ref{prop:matrix_poincare}, which is essentially the same as in the scalar case~\cite[Theorem 2.18]{van550probability}.
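The exponential ergodicity of variance can be observed on a finite reversible chain, where $P_t = \mathrm{e}^{tQ}$ is computable in closed form through the symmetrized generator. The sketch below (toy model, arbitrary data) checks the scalar instance of statement (2) with $\alpha$ the inverse spectral gap:

```python
import numpy as np

# Exponential decay of variance: Var_mu(P_t f) <= exp(-2 t / alpha) Var_mu(f),
# with alpha = 1/gap, on a reversible 3-state chain.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
mu = np.array([0.2, 0.4, 0.4])

S = np.diag(mu**0.5) @ Q @ np.diag(mu**-0.5)
w, U = np.linalg.eigh(S)
gap = -np.sort(w)[-2]

def P(t):
    # P_t = expm(t Q) = D^{-1/2} U exp(t w) U^T D^{1/2}, D = diag(mu)
    return np.diag(mu**-0.5) @ U @ np.diag(np.exp(t * w)) @ U.T @ np.diag(mu**0.5)

var = lambda g: float(np.sum(mu * g * g) - np.sum(mu * g)**2)

f = np.array([1.0, -0.3, 2.0])                 # an arbitrary test function
t = 0.7
lhs = var(P(t) @ f)
rhs = np.exp(-2.0 * gap * t) * var(f)
```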
\begin{remark}[Quantum semigroups] Proposition~\ref{prop:matrix_poincare} only concerns the action of a semigroup on matrices of fixed dimension $d$. As such, the result can be adapted to quantum Markov semigroups. A partial version of the result for this general setting already appears in \cite[Remark IV.2]{cheng2017exponential}. \end{remark}
\subsection{Iterated carr{\'e} du champ operator}
To better understand how quickly a Markov semigroup converges to equilibrium, it is valuable to consider the \textit{iterated carr\'e du champ operator}. In the matrix setting, this operator is defined as \begin{equation}\label{eqn:definition_Gamma2} \Gamma_2(\mtx{f},\mtx{g}) := \frac{1}{2}\left[ \mathcal{L}\Gamma(\mtx{f},\mtx{g}) - \Gamma(\mtx{f},\mathcal{L}(\mtx{g})) - \Gamma(\mathcal{L}(\mtx{f}),\mtx{g}) \right] \in \mathbb{M}_d \quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}. \end{equation} As with the carr\'e du champ, we abbreviate the quadratic form $\Gamma_2(\mtx{f}) := \Gamma_2(\mtx{f},\mtx{f})$. We remark that this quadratic form is not necessarily a positive operator. Rather, $\Gamma_2(\mtx{f})$ reflects the ``magnitude'' of the squared Hessian of $\mtx{f}$ plus a correction factor that reflects the ``curvature'' of the matrix semigroup.
When the underlying Markov semigroup $(P_t)_{t \geq 0}$ is reversible, it holds that \[\Expect_\mu \Gamma_2(\mtx{f},\mtx{g}) = \Expect_\mu\left[\mathcal{L}(\mtx{f}) \, \mathcal{L}(\mtx{g})\right]\quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}.\] Thus, for a reversible semigroup, the average value $\Expect_\mu \Gamma_2(\mtx{f})$ is a positive-semidefinite matrix.
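This identity can be checked directly from the definitions on a reversible chain (a toy model used for illustration; the scalar case is shown):

```python
import numpy as np

# On a reversible chain, E_mu Gamma_2(f) = E_mu[(Lf)^2], computed from the
# definitions of Gamma and Gamma_2.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
mu = np.array([0.2, 0.4, 0.4])

L = lambda g: Q @ g
Gamma = lambda a, b: 0.5 * (L(a * b) - a * L(b) - b * L(a))
Gamma2 = lambda a, b: 0.5 * (L(Gamma(a, b)) - Gamma(a, L(b)) - Gamma(L(a), b))

rng = np.random.default_rng(4)
f = rng.standard_normal(3)
lhs = float(np.sum(mu * Gamma2(f, f)))     # E_mu Gamma_2(f)
rhs = float(np.sum(mu * L(f)**2))          # E_mu[(Lf)^2]
```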
\subsection{Bakry--{\'E}mery criterion} \label{sec:local_matrix_Poincare_inequality}
When the iterated carr{\'e} du champ is comparable with the carr{\'e} du champ, we can obtain more information about the convergence of the Markov semigroup. We say the semigroup satisfies the \textit{matrix Bakry--\'Emery criterion} with constant $c>0$ if \begin{equation}\label{Bakry-Emery} \Gamma(\mtx{f}) \preccurlyeq c \cdot \Gamma_2(\mtx{f}) \quad \text{for all suitable $\mtx{f} : \Omega \to \mathbb{H}_d$}. \end{equation} Since $\Gamma(\mtx{f})$ and $\Gamma_2(\mtx{f})$ are functions, one interprets this condition as a pointwise inequality that holds $\mu$-almost everywhere in $\Omega$. It reflects uniform positive curvature of the semigroup.
When the matrix dimension $d = 1$, the condition~\eqref{Bakry-Emery} reduces to the classic Bakry--{\'E}mery criterion~\cite[Sec.~1.16]{bakry2013analysis}. For a semigroup of the form~\eqref{eqn:semigroup}, the scalar result actually implies the matrix result for all $d \in \mathbbm{N}$.
\begin{proposition}[Bakry--{\'E}mery: Equivalence] \label{prop:BE_equiv} Let $(P_t)_{t\geq 0}$ be the family of semigroups defined in~\eqref{eqn:semigroup}. The following statements are equivalent:
\begin{enumerate} \item \label{Bakry-Emery_criterion_scalar} \textbf{Scalar Bakry--\'Emery criterion.} $\Gamma(f)\leq c \cdot \Gamma_2(f)$ for all suitable $f:\Omega\to \mathbb{R}$.
\item \label{Bakry-Emery_criterion_matrix} \textbf{Matrix Bakry--\'Emery criterion.} $\Gamma(\mtx{f})\preccurlyeq c \cdot \Gamma_2(\mtx{f})$ for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$ and all $d \in \mathbbm{N}$. \end{enumerate} \end{proposition}
\noindent See Section~\ref{sec:scalar_matrix} for the proof of Proposition~\ref{prop:BE_equiv}.
Proposition~\ref{prop:BE_equiv} is a very powerful tool, and it is a key part of our method. Indeed, it is already known~\cite{bakry2013analysis} that many kinds of Markov processes satisfy the scalar Bakry--{\'E}mery criterion~\eqref{Bakry-Emery_criterion_scalar}. When contemplating novel settings, we only need to check the scalar criterion, rather than worrying about matrix-valued functions. In all these cases, we obtain the matrix extension for free.
\begin{remark}[Curvature]\label{rmk:curvature} The scalar Bakry--\'Emery criterion, Proposition~\ref{prop:BE_equiv}\eqref{Bakry-Emery_criterion_scalar}, is also known as the curvature condition $CD(\rho,\infty)$ with $\rho=c^{-1}$. In the scenario where the infinitesimal generator $\mathcal{L}$ is the Laplace--Beltrami operator $\Delta_{\mathfrak{g}}$ on a Riemannian manifold $(M,\mathfrak{g})$ with co-metric $\mathfrak{g}$, the Bakry--\'Emery criterion holds if and only if the Ricci curvature tensor is everywhere positive definite, with eigenvalues bounded from below by $\rho>0$. See~\cite[Section 1.16]{bakry2013analysis} for a discussion. We will return to this example in Section~\ref{sec:Riemannin_intro}. \end{remark}
\subsection{Bakry--{\'E}mery and ergodicity}
The scalar Bakry--\'Emery criterion, Proposition~\ref{prop:BE_equiv}\eqref{Bakry-Emery_criterion_scalar}, is equivalent to a local Poincar\'e inequality, which is strictly stronger than the scalar Poincar\'e inequality, Proposition~\ref{prop:poincare_equiv}\eqref{Poincare_inequality_scalar}. It is also equivalent to a powerful local ergodicity property~\cite[Theorem 2.35]{van550probability}. The next result states that the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} implies counterparts of these facts.
\begin{proposition}[Bakry--{\'Emery}: Consequences] \label{prop:local_Poincare} Let $(P_t)_{t \geq 0}$ be a Markov semigroup acting on suitable functions $\mtx{f} : \Omega \to \mathbb{H}_d$ for fixed $d \in \mathbbm{N}$, as defined in~\eqref{eqn:semigroup}. The following are equivalent: \begin{enumerate}
\item \label{Bakry-Emery_criterion} \textbf{Bakry--\'Emery criterion.} $\Gamma(\mtx{f})\preccurlyeq c \cdot \Gamma_2(\mtx{f})$ for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$.
\item \label{local_ergodicity} \textbf{Local ergodicity.} $\Gamma(P_t\mtx{f})\preccurlyeq \mathrm{e}^{-2t/c} \cdot P_t\Gamma(\mtx{f})$ for all $t \geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$. \item \label{local_Poincare} \textbf{Local Poincar\'e inequality.} $P_t(\mtx{f}^2) - (P_t\mtx{f})^2 \preccurlyeq c \,(1-\mathrm{e}^{-2t/c}) \cdot P_t\Gamma(\mtx{f})$ for all $t \geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$. \end{enumerate} \end{proposition}
\noindent The proof of Proposition~\ref{prop:local_Poincare} appears in Section~\ref{sec:equivalence_local_Poincare}. It follows along the same lines as the scalar result~\cite[Theorem 2.36]{van550probability}.
Proposition~\ref{prop:local_Poincare} plays a central role in this paper. With the aid of Proposition~\ref{prop:BE_equiv}, we can verify the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery_criterion} for many particular Markov semigroups. Meanwhile, the local ergodicity property~\eqref{local_ergodicity} supports short derivations of trace moment inequalities for random matrices.
The results in Proposition~\ref{prop:local_Poincare} refine the statements in Proposition~\ref{prop:matrix_poincare}. Indeed, the carr\'e du champ operator $\Gamma(\mtx{f})$ measures the local fluctuation of a function $\mtx{f}$, so the local ergodicity condition~\eqref{local_ergodicity} means that the fluctuation of $P_t\mtx{f}$ at every point $z\in \Omega$ is decreasing exponentially fast. By applying $\Expect_\mu$ to both sides of the local ergodicity inequality, we obtain the ergodicity of energy, Proposition~\ref{prop:matrix_poincare}\eqref{energy_convergence}.
If $(P_t)_{t\geq 0}$ is ergodic, applying the expectation $\Expect_\mu$ to the local Poincar\'e inequality~\eqref{local_Poincare} and then taking $t\rightarrow +\infty$
yields the matrix Poincar\'e inequality, Proposition~\ref{prop:matrix_poincare}\eqref{Poincare_inequality}
with constant $\alpha = c$. In fact, a standard method for establishing a Poincar\'e inequality
is to check the Bakry--\'Emery criterion.
\begin{remark}[Noncommutative semigroups] Junge \& Zeng have investigated the implications of the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} for noncommutative diffusion processes on a von Neumann algebra. For this setting, a partial version of Proposition~\ref{prop:local_Poincare} appears in~\cite[Lemma 4.6]{junge2015noncommutative}. \end{remark}
\subsection{Basic examples}\label{sec:examples} This section contains some examples of Markov semigroups that satisfy the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}. In Section~\ref{sec:main_results}, we will use these semigroups to derive matrix concentration results for several random matrix models.
\subsubsection{Product measures}\label{sec:product_measure_all_intro} Consider a product space $\Omega = \Omega_1\times \Omega_2\times \cdots\times \Omega_n$ equipped with a product measure $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$. In Section~\ref{sec:product_measure_all}, we present the standard construction of the associated Markov semigroup, adapted to the matrix setting. This semigroup is ergodic and reversible, and its carr\'e du champ operator takes the form of a discrete squared derivative: \begin{equation}\label{eqn:variance_proxy} \Gamma(\mtx{f})(z) = \mtx{V}(\mtx{f})(z) := \frac{1}{2}\sum_{i=1}^n\Expect_Z \left[ (\mtx{f}(z) - \mtx{f}((z;Z)_i))^2 \right] \quad \text{for all $z\in \Omega$}. \end{equation} In this expression, $Z = (Z^1,\dots,Z^n)\sim\mu$ and $(z;Z)_i = (z^1,\dots,z^{i-1},Z^i,z^{i+1},\dots,z^n)$ for each $i=1,\dots,n$. Superscripts denote the coordinate index.
Aoun et al.~\cite{ABY20:Matrix-Poincare} have shown that this Markov semigroup satisfies the matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare} with constant $\alpha = 1$. In Section~\ref{sec:product_measure_all}, we will show that the semigroup also satisfies the Bakry--\'Emery criterion~\eqref{Bakry-Emery} with constant $c = 2$.
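For a small product space, the inequality with $\alpha = 1$ can be verified by exact enumeration. The sketch below (an arbitrary illustration on $\{0,1\}^3$ with Bernoulli marginals) computes $\mVar_\mu[\mtx{f}]$ and $\Expect_\mu \mtx{V}(\mtx{f})$ exactly and checks the semidefinite ordering:

```python
import numpy as np
from itertools import product

# Exact check of mVar_mu[f] <<= E_mu V(f) (matrix Poincare with alpha = 1) for
# a product of Bernoulli measures on {0,1}^3, with an arbitrary Hermitian f.
rng = np.random.default_rng(0)
p = np.array([0.3, 0.5, 0.8])                   # P(coordinate i = 1)
states = list(product([0, 1], repeat=3))
prob = {s: float(np.prod([p[i] if s[i] else 1 - p[i] for i in range(3)]))
        for s in states}

d = 2
F = {}
for s in states:
    A = rng.standard_normal((d, d))
    F[s] = (A + A.T) / 2

Ef = sum(prob[s] * F[s] for s in states)
mVar = sum(prob[s] * (F[s] - Ef) @ (F[s] - Ef) for s in states)

def V(s):
    # Discrete squared derivative: (1/2) sum_i E[(f(z) - f((z;Z)_i))^2]
    tot = np.zeros((d, d))
    for i in range(3):
        for zi, w in [(0, 1 - p[i]), (1, p[i])]:
            s2 = list(s); s2[i] = zi
            diff = F[s] - F[tuple(s2)]
            tot += 0.5 * w * diff @ diff
    return tot

EV = sum(prob[s] * V(s) for s in states)
slack = np.linalg.eigvalsh(EV - mVar)           # nonnegative iff the bound holds
```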
\subsubsection{Log-concave measures}\label{sec:log-concave_intro} Log-concave distributions~\cite{Pre73:Logarithmic-Concave,ambrosio2009existence,saumard2014log} are a fundamental class of probability measures on $\Omega = \mathbb{R}^n$ that are closely related to diffusion processes. A log-concave measure takes the form $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ where the potential $W:\mathbb{R}^n\rightarrow \mathbb{R}$ is a convex function, so it captures a form of negative dependence. The associated diffusion process naturally induces a semigroup whose carr{\'e} du champ operator takes the form of the squared ``magnitude'' of the gradient: \[\Gamma(\mtx{f})(z) = \sum_{i=1}^n(\partial_i\mtx{f}(z))^2\quad \text{for all $z\in \mathbb{R}^n$}.\] As usual, $\partial_i := \partial/\partial z_i$ for $i = 1, \dots, n$.
Many interesting results follow from the condition that the potential $W$ is uniformly strongly convex on $\mathbb{R}^n$. In other words, for a constant $\eta > 0$, we assume that the Hessian matrix satisfies \begin{equation} \label{eqn:hess-sc-intro} (\operatorname{Hess} W)(z) := \big[ (\partial_{ij} W)(z) \big]_{i,j=1}^n \succcurlyeq \eta \cdot \mathbf{I}_n \quad\text{for all $z \in \mathbb{R}^n$.} \end{equation} The partial derivative $\partial_{ij} := \partial^2/(\partial z_i \partial z_j)$ for $i,j=1,\dots,n$. It is a standard result~\cite[Sec. 4.8]{bakry2013analysis} that the strong convexity condition~\eqref{eqn:hess-sc-intro} implies the scalar Bakry--\'Emery criterion with constant $c = \eta^{-1}$. Therefore, according to Proposition~\ref{prop:BE_equiv}, the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} is valid for every $d \in \mathbbm{N}$.
One of the core examples of a log-concave measure is the standard Gaussian measure on $\mathbb{R}^n$, which is given by the potential $W(z) = z^\mathsf{T} z/2$. The associated diffusion process induces the Ornstein--Uhlenbeck semigroup, which satisfies the Bakry--\'Emery criterion~\eqref{Bakry-Emery} with constant $c = 1$.
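In one dimension, the Ornstein--Uhlenbeck generator is $\mathcal{L}f = f'' - z f'$, and the operators can be computed exactly on polynomials: $\Gamma(f) = (f')^2$ and $\Gamma_2(f) = (f'')^2 + (f')^2$, so $\Gamma(f) \preccurlyeq \Gamma_2(f)$ pointwise with $c = 1$. The following symbolic sketch (the test polynomial is an arbitrary choice) confirms this with exact polynomial arithmetic:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

x = P([0.0, 1.0])                      # the coordinate function z

def L(f):                              # Ornstein-Uhlenbeck generator: Lf = f'' - z f'
    return f.deriv(2) - x * f.deriv(1)

def Gamma(f, g):                       # carre du champ, from its definition
    return 0.5 * (L(f * g) - f * L(g) - g * L(f))

def Gamma2(f, g):                      # iterated carre du champ, from its definition
    return 0.5 * (L(Gamma(f, g)) - Gamma(f, L(g)) - Gamma(L(f), g))

f = P([1.0, -2.0, 0.5, 3.0])           # an arbitrary test polynomial
g1 = Gamma(f, f)                       # equals (f')^2
g2 = Gamma2(f, f)                      # equals (f'')^2 + (f')^2

# Bakry-Emery with c = 1: Gamma_2 - Gamma = (f'')^2 >= 0 pointwise.
slack = (g2 - g1)(np.linspace(-3.0, 3.0, 101))
```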
A more detailed discussion on log-concave measures is presented in Section~\ref{sec:log-concave}.
\subsection{Measures on Riemannian manifolds} \label{sec:Riemannin_intro}
The theory of diffusion processes on Euclidean spaces can be generalized to the setting of Riemannian manifolds. Although this exercise may seem abstract, it allows us to treat some interesting and important examples in a unified way. We refer to~\cite{bakry2013analysis} for more background on this subject, and we instate their conventions.
Consider an $n$-dimensional compact Riemannian manifold $(M,\mathfrak{g})$. Let $\mathfrak{g}(x) = (g^{ij}(x) : 1 \leq i,j \leq n )$ be the matrix representation of the co-metric tensor $\mathfrak{g}$ in local coordinates, which is a symmetric and positive-definite matrix defined for every $x \in M$. The manifold is equipped with a canonical Riemannian probability measure $\mu_\mathfrak{g}$ that has local density $\diff \mu_\mathfrak{g} \propto \det(\mathfrak{g}(x))^{-1/2} \idiff{x}$ with respect to the Lebesgue measure in local coordinates. This measure $\mu_\mathfrak{g}$ is the stationary measure of the diffusion process on $M$ whose infinitesimal generator $\mathcal{L}$ is the Laplace--Beltrami operator $\Delta_\mathfrak{g}$. This diffusion process is called the \emph{Riemannian Brownian motion}.\footnote{Many authors use the convention that Riemannian Brownian motion has infinitesimal generator $\tfrac{1}{2} \Delta_{\mathfrak{g}}$.} The associated matrix carr{\'e} du champ operator coincides with the squared ``magnitude'' of the differential: \begin{equation}\label{eqn:gamma_Riemannian_0} \Gamma(\mtx{f})(x) = \sum_{i,j=1}^ng^{ij}(x) \,\partial_i\mtx{f}(x)\, \partial_j\mtx{f}(x)\quad \text{for suitable $\mtx{f} : M \to \mathbb{H}_d$.} \end{equation} Here, $\partial_i$ for $i=1,\dots,n$ are the components of the differential, computed in local coordinates. We emphasize that the matrix carr{\'e} du champ operator is intrinsic; expressions for the carr{\'e} du champ resulting from different choices of local coordinates are equivalent under change of variables. See Section~\ref{sec:extension_Riemannian_manifold} for a more detailed discussion.
As mentioned in Remark~\ref{rmk:curvature}, the scalar Bakry--\'{E}mery criterion holds with $c=\rho^{-1}$ if and only if the Ricci curvature tensor of $(M, \mathfrak{g})$ is everywhere positive, with eigenvalues bounded from below by $\rho>0$. In other words, for Brownian motion on a manifold, the Bakry--{\'E}mery criterion is equivalent to the uniform positive curvature of the manifold. Proposition~\ref{prop:BE_equiv} ensures that the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} holds with $c = \rho^{-1}$ under precisely the same circumstances.
Many examples of positively curved Riemannian manifolds are discussed in \cite{ledoux2001concentration,gromov2007metric,cheeger2008comparison,bakry2013analysis}. We highlight two particularly interesting cases.
\begin{example}[Unit sphere] Consider the $n$-dimensional unit sphere $\mathbb{S}^{n} \subset \mathbbm{R}^{n+1}$ for $n \geq 2$. The sphere is equipped with the Riemannian manifold structure induced by $\mathbbm{R}^{n+1}$. The canonical Riemannian measure on the sphere is simply the uniform probability measure. The sphere has a constant Ricci curvature tensor, whose eigenvalues all equal $n - 1$. Therefore, the Brownian motion on $\mathbb{S}^n$ satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} with $c = (n-1)^{-1}$. See~\cite[Sec.~2.2]{bakry2013analysis}. \end{example}
\begin{example}[Special orthogonal group] The special orthogonal group $\mathrm{SO}(n)$ can be regarded as a Riemannian submanifold of $\mathbbm{R}^{n \times n}$. The canonical Riemannian measure is the Haar probability measure on $\mathrm{SO}(n)$. It is known that the eigenvalues of the Ricci curvature tensor are uniformly bounded below by $(n-1)/4$. Therefore, the Brownian motion on $\mathrm{SO}(n)$ satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} with $c = 4/(n-1)$. See~\cite[pp.~26ff]{ledoux2001concentration}. \end{example}
The lower bound on Ricci curvature is stable under (Riemannian) products of manifolds, so similar results are valid for products of spheres or products of the orthogonal group; cf.~\cite[p.~27]{ledoux2001concentration}.
\subsection{History}
In the scalar setting, much of the classic research on Markov processes concerns the behavior of diffusion processes on Riemannian manifolds. Functional inequalities connect the convergence of these Markov processes to the geometry of the manifold. The rate of convergence to equilibrium of a Markov process plays a core role in developing concentration properties for the measure. The treatise \cite{bakry2013analysis} contains a comprehensive discussion. Other references include \cite{ledoux2001concentration,boucheron2013concentration,van550probability}.
Matrix-valued Markov processes were originally introduced to model the evolution of quantum systems \cite{davies1969quantum,lindblad1976generators,accardi1982quantum}. In recent years, the long-term behavior of quantum Markov processes has received significant attention in the field of quantum information. A general approach to exponential convergence of a quantum system is to establish quantum log-Sobolev inequalities for density operators \cite{majewski1998dissipative,olkiewicz1999hypercontractivity,kastoryano2013quantum}.
In this paper, we consider a mixed classical-quantum setting, where a classical Markov process drives a matrix-valued function. The papers~\cite{cheng2017exponential,cheng2019matrix,ABY20:Matrix-Poincare} contain some foundational results for this model. Our work provides a more detailed understanding of the connections between the ergodicity of the semigroup and matrix functional inequalities. The companion paper~\cite{HT20:Trace-Poincare} contains further results on trace Poincar{\'e} inequalities, which are equivalent to the Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare}.
A general framework for noncommutative diffusion processes on von Neumann algebras can be found in \cite{junge2006h,junge2015noncommutative}. In particular, the paper~\cite{junge2015noncommutative} shows that a noncommutative Bakry--{\'E}mery criterion implies local ergodicity of a noncommutative diffusion process.
In spite of its generality, the presentation in~\cite{junge2015noncommutative} does not fully contain our treatment. On the one hand, the noncommutative semigroup model includes the mixed classical-quantum model~\eqref{eqn:semigroup} as a special case. On the other hand, we do not need the underlying Markov process to be a diffusion (with continuous sample paths), while Junge \& Zeng impose a diffusion assumption.
\section{Nonlinear Matrix Concentration: Main Results} \label{sec:main_results}
The matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare} has been associated with subexponential concentration inequalities for random matrices~\cite{ABY20:Matrix-Poincare,HT20:Trace-Poincare}. The central purpose of this paper is to establish that the (scalar) Bakry--{\'E}mery criterion leads to matrix concentration inequalities via a straightforward semigroup method. This section outlines our main results; the proofs appear in Section~\ref{sec:trace_to_moment}.
\begin{remark}[Noncommutative setting] After this paper was written, we learned that Junge \& Zeng~\cite{junge2015noncommutative} have used the (noncommutative) Bakry--{\'E}mery criterion to obtain subgaussian moment bounds for elements of a von Neumann algebra using a martingale approach. Their setting is more general (if we ignore the diffusion assumptions), but we will see that their results are weaker in several respects. \end{remark}
\subsection{Markov processes and random matrices}
Let $Z$ be a random variable, taking values in the state space $\Omega$, with the distribution $\mu$. For a matrix-valued function $\mtx{f} : \Omega \to \mathbb{H}_d$, we can define the random matrix $\mtx{f}(Z)$, whose distribution is the push-forward of $\mu$ by the function $\mtx{f}$. Our goal is to understand how well the random matrix $\mtx{f}(Z)$ concentrates around its expectation $\Expect \mtx{f}(Z) = \Expect_{\mu} \mtx{f}$.
To do so, suppose that we can construct a reversible, ergodic Markov process $(Z_t)_{t \geq 0} \subset \Omega$ whose stationary distribution is $\mu$. We have the intuition that the faster the process $(Z_t)_{t \geq 0}$ converges to equilibrium, the more sharply the random matrix $\mtx{f}(Z)$ concentrates around its expectation.
To quantify the rate of convergence of the matrix Markov process, we use the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} to obtain local ergodicity of the semigroup. This property allows us to prove strong bounds on the trace moments of the random matrix. Using standard arguments (Appendix~\ref{apdx:matrix_moments}), these moment bounds imply nonlinear matrix concentration inequalities.
\subsection{Polynomial concentration}
We begin with a general estimate on the polynomial trace moments of a random matrix under a Bakry--\'Emery criterion.
\begin{theorem}[Polynomial moments]\label{thm:polynomial_moment} Let $\Omega$ be a Polish space equipped with a probability measure $\mu$. Consider a reversible, ergodic Markov semigroup~\eqref{eqn:semigroup} with stationary measure $\mu$ that acts on (suitable) functions $\mtx{f} : \Omega \to \mathbb{H}_d$. Assume that the Bakry--\'Emery criterion \eqref{Bakry-Emery} holds for a constant $c>0$. Then, for $q=1$ and $q\geq1.5$, \begin{equation}\label{eqn:polynomial_moment_1}
\left[ \Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq \sqrt{c\,(2q-1)}\left[ \Expect_\mu\operatorname{tr}\Gamma(\mtx{f})^q\right]^{1/(2q)}. \end{equation}
If the variance proxy $v_{\mtx{f}} := \norm{ \|\Gamma(\mtx{f})\| }_{L_{\infty}(\mu)} <+\infty$, then \begin{equation}\label{eqn:polynomial_moment_2}
\left[ \Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq d^{1/(2q)}\sqrt{c\,(2q-1) \,\smash{v_{\mtx{f}}}} . \end{equation} \end{theorem}
\noindent We establish this theorem in Section~\ref{sec:trace_to_moment}.
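As a sanity check in the simplest instance (our own observation, not part of the proof): for the Ornstein--Uhlenbeck semigroup ($c = 1$) and the coordinate function $f(z) = z$ with $d = 1$, we have $\Gamma(f) \equiv 1$, so~\eqref{eqn:polynomial_moment_1} reads $\Expect|z|^{2q} \leq (2q-1)^q$ for a standard normal $z$, whose exact even moment is $(2q-1)!!$:

```python
from math import prod

# The exact Gaussian moment E|z|^{2q} = (2q-1)!! against the bound (2q-1)^q,
# for a few integer values of q (q = 1 and q >= 1.5 are covered by the theorem).
qs = [1, 2, 3, 5, 10]
exact = [prod(range(1, 2 * q, 2)) for q in qs]   # (2q-1)!!
bound = [(2 * q - 1)**q for q in qs]
```

The bound holds because $(2q-1)!!$ is a product of $q$ odd factors, each at most $2q-1$.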
For noncommutative diffusion semigroups, Junge \& Zeng~\cite{junge2015noncommutative} have developed polynomial moment bounds similar to Theorem~\ref{thm:polynomial_moment}, but they only obtain moment growth of $O(q)$ in the inequality~\eqref{eqn:polynomial_moment_1}. We can trace this discrepancy to the fact that they use a martingale argument based on the noncommutative Burkholder--Davis--Gundy inequality. At present, our proof only applies to the mixed classical-quantum semigroup~\eqref{eqn:semigroup}, but it seems plausible that our approach can be generalized.
For now, let us present some concrete results that follow when we apply Theorem~\ref{thm:polynomial_moment} to the semigroups discussed in Section~\ref{sec:examples}. In each of these cases, we can derive bounds for the expectation and tails of $\norm{ \smash{\mtx{f} - \Expect_{\mu} \mtx{f}} }$ using the matrix Chebyshev inequality (Proposition~\ref{prop:matrix_Chebyshev}). In particular, when $v_{\mtx{f}} < + \infty$, we obtain subgaussian concentration.
\subsubsection{Polynomial Efron--Stein inequality for product measures}
The first consequence of Theorem~\ref{thm:polynomial_moment} is a polynomial moment inequality for product measures. This result exactly reproduces the matrix polynomial Efron--Stein inequalities established by Paulin et al.~\cite[Theorem 4.2]{paulin2016efron}.
\begin{corollary}[Product measure: Polynomial moments]\label{cor:product_measure_Efron--Stein} Let $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$ be a product measure on a product space $\Omega = \Omega_1\times \Omega_2\times \cdots\times \Omega_n$. Let $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$ be a suitable function. Then, for $q= 1$ and $q\geq1.5$, \begin{equation}\label{eqn:product_measure_Efron--Stein}
\left[ \Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq \sqrt{2(2q-1)}\left[\Expect_\mu\operatorname{tr}\mtx{V}(\mtx{f})^q\right]^{1/(2q)}. \end{equation} The matrix variance proxy $\mtx{V}(\mtx{f})$ is defined in \eqref{eqn:variance_proxy}. \end{corollary}
\noindent The details appear in Section~\ref{sec:concentration_results_product}.
\subsubsection{Log-concave measures}
The second result is a new polynomial moment inequality for matrix-valued functions of a log-concave measure. To avoid domain issues, we restrict our attention to the Sobolev space \begin{equation}\label{def:H2_function}
\mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d) := \left\{\mtx{f} : \mathbb{R}^n\rightarrow\mathbb{H}_d:\Expect_\mu \|\mtx{f}\|_\mathrm{HS}^2+\sum_{i=1}^n\Expect_\mu \|\partial_i\mtx{f}\|_\mathrm{HS}^2 + \sum_{i,j=1}^n\Expect_\mu \|\partial_{ij}\mtx{f}\|_\mathrm{HS}^2 <\infty\right\}. \end{equation} For these functions, we have the following matrix concentration inequality.
\begin{corollary}[Log-concave measure: Polynomial moments]\label{cor:log-concave_polynomial_inequality} Let $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ be a log-concave measure on $\mathbb{R}^n$ whose potential $W:\mathbb{R}^n\rightarrow\mathbb{R}$ satisfies a uniform strong convexity condition: $\operatorname{Hess} W \succcurlyeq \eta \cdot \mathbf{I}_n$ with constant $\eta > 0$. Let $\mtx{f}\in \mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$. Then, for $q=1$ and $q\geq 1.5$, \begin{equation*}\label{eqn:log-concave_polynomial_inequality}
\left[\Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq \sqrt{\frac{2q-1}{\eta}}\left[\Expect_\mu\operatorname{tr}\left(\sum_{i=1}^n(\partial_i\mtx{f})^2\right)^q\right]^{1/(2q)}. \end{equation*} \end{corollary}
\noindent The details appear in Section~\ref{sec:concentration_results_log-concave}.
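To illustrate Corollary~\ref{cor:log-concave_polynomial_inequality}, consider the standard normal measure (potential $W(z) = |z|^2/2$, so $\eta = 1$) and the linear function $\mtx{f}(z) = \sum_i z_i \mtx{A}_i$, for which $\partial_i \mtx{f} = \mtx{A}_i$ and the right-hand side is deterministic. The following Monte Carlo sketch checks the $q = 2$ case numerically; it is an illustration, not part of any proof.

```python
import numpy as np

# Monte Carlo check of the log-concave polynomial moment inequality for the
# standard Gaussian (eta = 1) and the linear function f(z) = sum_i z_i A_i,
# for which each partial derivative is the constant matrix A_i.
rng = np.random.default_rng(1)
n, d, q, N = 3, 2, 2, 200_000
sym = lambda M: (M + M.T) / 2
A = np.stack([sym(rng.standard_normal((d, d))) for _ in range(n)])

Z = rng.standard_normal((N, n))
F = np.tensordot(Z, A, axes=(1, 0))          # F[k] = sum_i Z[k, i] A[i]; E F = 0
F2 = np.einsum('kab,kbc->kac', F, F)
lhs = np.einsum('kab,kba->k', F2, F2).mean() ** (1 / (2 * q))  # [E tr F^4]^(1/4)

S = np.einsum('iab,ibc->ac', A, A)           # S = sum_i A_i^2 = sum_i (d_i f)^2
rhs = np.sqrt((2 * q - 1) / 1.0) * np.trace(S @ S) ** (1 / (2 * q))
```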
\subsection{Exponential concentration}
As a consequence of the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}, we can also derive exponential matrix concentration inequalities. In principle, polynomial moment inequalities are stronger, but the exponential inequalities often lead to better constants and more detailed information about tail decay.
\begin{theorem}[Exponential concentration]\label{thm:exponential_concentration} Let $\Omega$ be a Polish space equipped with a probability measure $\mu$. Consider a reversible, ergodic Markov semigroup \eqref{eqn:semigroup} with stationary measure $\mu$ that acts on (suitable) functions $\mtx{f} : \Omega \to \mathbb{H}_d$. Assume that the Bakry--\'Emery criterion \eqref{Bakry-Emery} holds for a constant $c>0$. Then \begin{align}\label{eqn:tail_bound_1} \mathbb{P}_{\mu}\left\{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t \right\} \leq&\ d\cdot \inf_{\beta>0} \exp \left(\frac{-t^2}{2cr_{\mtx{f}}(\beta) + 2t\sqrt{c/\beta} }\right) \quad\text{for all $t \geq 0$.} \end{align} The function $r_{\mtx{f}}$ computes an exponential mean of the carr{\'e} du champ: \[ r_{\mtx{f}}(\beta):=\frac{1}{\beta}\log \Expect_\mu\operatorname{\bar{\trace}} \mathrm{e}^{ \beta\Gamma(\mtx{f}) } \quad\text{for $\beta > 0$.} \]
In addition, suppose that the variance proxy $v_{\mtx{f}} := \norm{ \|\Gamma(\mtx{f}) \| }_{L_{\infty}(\mu)} <+\infty$. Then \begin{equation*}\label{eqn:tail_bound_2} \mathbb{P}_{\mu}\left\{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t \right\} \leq d\cdot \exp \left(\frac{-t^2}{2cv_{\mtx{f}}}\right) \quad\text{for all $t \geq 0$.} \end{equation*} Furthermore, \begin{equation*}\label{eqn:expectation_bound} \Expect_\mu\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f}) \leq \sqrt{2cv_{\mtx{f}}\log d}. \end{equation*} Parallel inequalities hold for the minimum eigenvalue $\lambda_{\min}$. \end{theorem}
\noindent We establish Theorem~\ref{thm:exponential_concentration} in Section~\ref{sec:exponential_concentration_proof} as a consequence of an exponential moment inequality, Theorem~\ref{thm:exponential_moment}, for random matrices. By combining Theorem~\ref{thm:exponential_concentration} with the examples in Section~\ref{sec:examples}, we obtain concentration results for concrete random matrix models.
A partial version of Theorem~\ref{thm:exponential_concentration} with slightly worse constants appears in \cite[Corollary 4.13]{junge2015noncommutative}. When comparing these results, note that the probability measure in \cite{junge2015noncommutative} is normalized to absorb the dimensional factor $d$.
\subsubsection{Exponential Efron--Stein inequality for product measures}
We can reproduce the matrix exponential Efron--Stein inequalities of Paulin et al.~\cite[Theorem 4.3]{paulin2016efron} by applying Theorem~\ref{thm:exponential_moment} to a product measure (Section~\ref{sec:product_measure_all_intro}). For instance, we obtain the following subgaussian inequality.
\begin{corollary}[Product measure: Subgaussian concentration]\label{cor:product_measure_tailbound} Let $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$ be a product measure on a product space $\Omega = \Omega_1\otimes \Omega_2\otimes \cdots\otimes \Omega_n $. Let $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$ be a suitable function. Define the variance proxy $v_{\mtx{f}} := \norm{ \|\mtx{V}(\mtx{f}) \| }_{L_{\infty}(\mu)}$, where $\mtx{V}(\mtx{f})$ is given by \eqref{eqn:variance_proxy}. Then \begin{align*}\label{eqn:product_measure_tailbound} \Prob{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t } \leq d\cdot\exp\left(-\frac{t^2}{4v_{\mtx{f}}}\right) \quad\text{for all $t \geq 0$.} \end{align*} Furthermore, \begin{equation*}\label{eqn:product_measure_expectation} \Expect_\mu \lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f}) \leq 2\sqrt{v_{\mtx{f}}\log d}. \end{equation*} Parallel results hold for the minimum eigenvalue $\lambda_{\min}$. \end{corollary}
\noindent We defer the proof to Section~\ref{sec:concentration_results_product}.
\subsubsection{Log-concave measures}
We can also obtain exponential concentration for a matrix-valued function of a log-concave measure by combining Theorem~\ref{thm:exponential_concentration} with the results in Section~\ref{sec:log-concave_intro}.
\begin{corollary}[Log-concave measure: Subgaussian concentration]\label{cor:log-concave_concentration} Let $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ be a log-concave probability measure on $\mathbb{R}^n$ whose potential $W : \mathbb{R}^n \to \mathbb{R}$ satisfies a uniform strong convexity condition: $\operatorname{Hess} W \succcurlyeq \eta \cdot \mathbf{I}_n$ where $\eta > 0$. Let $\mtx{f} \in \mathrm{H}_{2,\mu}(\mathbbm{R}^n; \mathbb{H}_d)$, and define the variance proxy \[ v_{\mtx{f}} := \sup\nolimits_{z \in \mathbbm{R}^n} \norm{ \sum_{i=1}^n(\partial_i\mtx{f}(z))^2 }. \] Then \[\mathbb{P}_{\mu}\left\{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t \right\} \leq d\cdot\exp\left(\frac{-\eta t^2}{2v_{\mtx{f}}}\right) \quad\text{for all $t \geq 0$.} \] Furthermore, \[\Expect_\mu \lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f}) \leq \sqrt{2 \eta^{-1} v_{\mtx{f}}\log d }.\] Parallel results hold for the minimum eigenvalue $\lambda_{\min}$. \end{corollary}
\noindent See Section~\ref{sec:concentration_results_log-concave} for the proof.
\begin{example}[Matrix Gaussian series] \label{ex:matrix-gauss} Consider the standard normal measure $\gamma_n$ on $\mathbbm{R}^n$. Its potential, $W(z) = z^\mathsf{T} z / 2$, is uniformly strongly convex with parameter $\eta = 1$. Therefore, Corollary~\ref{cor:log-concave_concentration} gives subgaussian concentration for matrix-valued functions of a Gaussian random vector. To make a comparison with familiar results, we construct the matrix Gaussian series \begin{equation*} \label{eqn:gauss-series_1} \mtx{f}(z) = \sum_{i=1}^n Z_i \mtx{A}_i \quad\text{where $z = (Z_1, \dots, Z_n) \sim \gamma_n$ and $\mtx{A}_i \in \mathbb{H}_d$ are fixed.} \end{equation*} In this case, the carr{\'e} du champ is simply $$ \Gamma(\mtx{f})(z) = \sum_{i=1}^n \mtx{A}_i^2. $$ Thus, the expectation bound states that \begin{equation*} \label{eqn:gauss-series_2} \operatorname{\mathbbm{E}}_{\gamma_n} \lambda_{\max}(\mtx{f}(z)) \leq \sqrt{2 v_{\mtx{f}} \log d} \quad\text{where}\quad v_{\mtx{f}} = \norm{ \sum_{i=1}^n \mtx{A}_i^2 }. \end{equation*} Up to and including the constants, this matches the sharp bound that follows from ``linear'' matrix concentration techniques~\cite[Chapter 4]{tropp2015introduction}. \end{example}
Van Handel (private communication) has outlined an alternative proof of Corollary~\ref{cor:log-concave_concentration} with slightly worse constants. His approach uses Pisier's method~\cite[Thm.~2.2]{pisier1986probabilistic} and the noncommutative Khintchine inequality~\cite{buchholz2001operator} to obtain the statement for the standard normal measure. Then Caffarelli's contraction theorem~\cite{Caf00:Monotonicity-Properties} implies that the same bound holds for every log-concave measure whose potential is uniformly strongly convex with $\eta \geq 1$. This approach is short and conceptual, but it is more limited in scope.
\subsection{Riemannian measures} \label{sec:riemann-exp}
As discussed in Section~\ref{sec:Riemannin_intro}, the Brownian motion on a Riemannian manifold with uniformly positive curvature satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}. Therefore, we can apply both Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_concentration} in this setting. Let us give a few concrete examples of the kind of results that can be derived with these methods.
\subsubsection{The sphere}
Consider the uniform distribution $\sigma_n$ on the $n$-dimensional unit sphere $\mathbb{S}^n \subset \mathbbm{R}^{n+1}$ for $n \geq 2$. The Brownian motion on the sphere satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} with $c = (n-1)^{-1}$. Therefore, Theorem~\ref{thm:polynomial_moment} implies that, for any suitable function $\mtx{f} : \mathbb{S}^n \to \mathbb{H}_d$, \[
\left[ \Expect_{\sigma_n} \operatorname{tr}|\mtx{f}-\Expect_{\sigma_n} \mtx{f}|^{2q}\right]^{1/(2q)} \leq \sqrt{\frac{2q-1}{n-1}}\left[ \Expect_{\sigma_n} \operatorname{tr}\Gamma(\mtx{f})^q\right]^{1/(2q)}, \] where the carr{\'e} du champ $\Gamma(\mtx{f})$ is defined by~\eqref{eqn:gamma_Riemannian_0}. We can also obtain subgaussian tail bounds in terms of the variance proxy $ v_{\mtx{f}} := \norm{ \norm{ \Gamma(\mtx{f}) } }_{L_{\infty}(\sigma_n)}. $ Indeed, Theorem~\ref{thm:exponential_concentration} yields the bound \[ \mathbb{P}_{\sigma_n}\left\{\lambda_{\max}(\mtx{f}-\Expect_{\sigma_n}\mtx{f})\geq t \right\}
\leq d\cdot \exp \left(\frac{-(n-1)t^2}{2v_{\mtx{f}}} \right) \quad\text{for all $t \geq 0$.} \] To use these concentration inequalities, we need to compute the carr{\'e} du champ $\Gamma(\mtx{f})$ and bound the variance proxy $v_{\mtx{f}}$ for particular functions $\mtx{f}$.
We give two illustrations, postponing the detailed calculations to Section~\ref{sec:Riemannian_gamma}. In each case, let $x = (x_1, \dots, x_{n+1}) \in \mathbb{S}^n$ be a random vector drawn from the uniform probability measure $\sigma_n$. Suppose that $(\mtx{A}_1, \dots, \mtx{A}_{n+1}) \subset \mathbb{H}_d$ is a list of deterministic Hermitian matrices.
\begin{example}[Sphere I]\label{example:sphere_I} Consider the random matrix $\mtx{f}(x) = \sum_{i=1}^{n+1}x_i\mtx{A}_i$. We can compute the carr{\'e} du champ as \begin{equation}\label{eqn:gamma_sphere_I} \Gamma(\mtx{f})(x) = \sum_{i=1}^{n+1}\mtx{A}_i^2 - \left(\sum_{i=1}^{n+1} x_i\mtx{A}_i\right)^2
\succcurlyeq \mtx{0}. \end{equation} It is obvious that $\Gamma(\mtx{f})(x) \preccurlyeq \sum_{i=1}^{n+1}\mtx{A}_i^2$ for all $x\in \mathbb{S}^n$, so the variance proxy satisfies $v_{\mtx{f}}\leq \norm{ \sum_{i=1}^{n+1}\mtx{A}_i^2 }$.
Compare this calculation with Example~\ref{ex:matrix-gauss}, where the coefficients follow the standard normal distribution. For the sphere, the carr{\'e} du champ operator is smaller because a finite-dimensional sphere has slightly more curvature than the Gauss space. \end{example}
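Both claims in Example~\ref{example:sphere_I}, positivity of \eqref{eqn:gamma_sphere_I} and the domination $\Gamma(\mtx{f})(x) \preccurlyeq \sum_i \mtx{A}_i^2$, can be spot-checked numerically by sampling directions on the sphere:

```python
import numpy as np

# Numerical spot check for Sphere I: Gamma(f)(x) = S - M(x)^2, with
# S = sum_i A_i^2 and M(x) = sum_i x_i A_i, is PSD whenever |x| = 1,
# and it is dominated by S.
rng = np.random.default_rng(4)
n, d = 5, 3
sym = lambda M: (M + M.T) / 2
A = np.stack([sym(rng.standard_normal((d, d))) for _ in range(n + 1)])
S = np.einsum('iab,ibc->ac', A, A)

min_eig, max_excess = np.inf, -np.inf
for _ in range(500):
    x = rng.standard_normal(n + 1)
    x /= np.linalg.norm(x)                     # uniform direction on the sphere
    M = np.tensordot(x, A, axes=(0, 0))
    G = S - M @ M                              # carre du champ at x
    min_eig = min(min_eig, np.linalg.eigvalsh(G).min())
    max_excess = max(max_excess, np.linalg.eigvalsh(G - S).max())
```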
\begin{example}[Sphere II]\label{example:sphere_II} Consider the random matrix $\mtx{f}(x) = \sum_{i=1}^{n+1}x_i^2\mtx{A}_i$. The carr{\'e} du champ admits the expression \begin{equation}\label{eqn:gamma_sphere_II} \Gamma(\mtx{f})(x) = 2\sum_{i,j=1}^{n+1}x_i^2x_j^2(\mtx{A}_i-\mtx{A}_j)^2. \end{equation} A simple bound shows that the variance proxy $v_{\mtx{f}} \leq 2 \max_{i, j} \norm{ \smash{\mtx{A}_i - \mtx{A}_j} }^2$. It is possible to make further improvements in some cases. \end{example}
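The variance proxy bound for Sphere II follows from \eqref{eqn:gamma_sphere_II} together with $\sum_i x_i^2 = 1$, which yields $\|\Gamma(\mtx{f})(x)\| \leq 2\max_{i,j}\|\mtx{A}_i - \mtx{A}_j\|^2$. A numerical spot check:

```python
import numpy as np

# Spot check for Sphere II: ||Gamma(f)(x)|| <= 2 * max_{i,j} ||A_i - A_j||^2,
# using Gamma(f)(x) = 2 * sum_{i,j} x_i^2 x_j^2 (A_i - A_j)^2 and sum_i x_i^2 = 1.
rng = np.random.default_rng(6)
n, d = 4, 3
sym = lambda M: (M + M.T) / 2
A = [sym(rng.standard_normal((d, d))) for _ in range(n + 1)]
bound = 2 * max(np.linalg.norm(Ai - Aj, 2) ** 2 for Ai in A for Aj in A)

worst = 0.0
for _ in range(500):
    x = rng.standard_normal(n + 1)
    x /= np.linalg.norm(x)                     # uniform direction on the sphere
    G = 2 * sum(x[i] ** 2 * x[j] ** 2 * (A[i] - A[j]) @ (A[i] - A[j])
                for i in range(n + 1) for j in range(n + 1))
    worst = max(worst, np.linalg.norm(G, 2))
```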
\subsubsection{The special orthogonal group}
The Riemannian manifold framework also encompasses matrix-valued functions of random orthogonal matrices. For instance, suppose that $\mtx{O}_1, \dots, \mtx{O}_n \in \mathrm{SO}(d)$ are drawn independently and uniformly from the Haar measure $\mu$ on the special orthogonal group $\mathrm{SO}(d)$. As discussed in Section~\ref{sec:Riemannin_intro}, the Brownian motion on the product space satisfies the Bakry--{\'E}mery criterion with constant $c = 4/(d-1)$. In particular, if $\mtx{f} : \mathrm{SO}(d)^{\otimes n} \to \mathbb{H}_d$ is a suitable function, then $$ \mathbb{P}_{\mu^{\otimes n}}\left\{ \lambda_{\max}(\mtx{f} - \Expect_{\mu^{\otimes n}} \mtx{f}) \geq t \right\}
\leq d \cdot \exp\left( \frac{-(d-1) t^2}{8 v_{\mtx{f}}} \right)
\quad\text{for all $t \geq 0$.} $$ Here is a particular example where we can bound the variance proxy.
\begin{example}[Special orthogonal group]\label{example:SO_d} Let $(\mtx{A}_1, \dots, \mtx{A}_n) \subset \mathbb{H}_d(\mathbb{R})$ be a fixed list of real, symmetric matrices. Consider the random matrix $\mtx{f}(\mtx{O}_1, \dots, \mtx{O}_n) = \sum_{i=1}^n \mtx{O}_i \mtx{A}_i \mtx{O}_i^\mathsf{T}$. The carr{\'e} du champ is \begin{equation}\label{eqn:gamma_SO_d} \Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n) = \frac{1}{2}\sum_{i=1}^n\mtx{O}_i\left[ \left(\operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2\right)\cdot\mathbf{I}_d + d\left(\mtx{A}_i-d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d \right)^2\right] \mtx{O}_i^\mathsf{T}. \end{equation} Each matrix $\mtx{O}_i$ is orthogonal, so the variance proxy satisfies \[ v_{\mtx{f}} \leq \frac{1}{2}\sum_{i=1}^n \left[ \operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2 + d\cdot \norm{\mtx{A}_i - d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d }^2 \right]. \] The details of the calculation appear in Section~\ref{sec:Riemannian_gamma}. \end{example}
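Since conjugation by an orthogonal matrix preserves the spectral norm, the variance proxy bound in Example~\ref{example:SO_d} follows from \eqref{eqn:gamma_SO_d} by the triangle inequality. A numerical spot check, with Haar samples generated by QR factorization:

```python
import numpy as np

# Spot check for the special orthogonal group: evaluate the carre du champ
# formula at random O_i in SO(d) and compare ||Gamma(f)|| with the claimed
# variance proxy bound.
rng = np.random.default_rng(8)
d, n = 4, 3
sym = lambda M: (M + M.T) / 2
A = [sym(rng.standard_normal((d, d))) for _ in range(n)]

def haar_so(d):
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q = Q * np.sign(np.diag(R))        # Haar-distributed on O(d)
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]             # reflect into SO(d)
    return Q

def gamma(Os):                          # formula (gamma_SO_d) from the text
    G = np.zeros((d, d))
    for O, Ai in zip(Os, A):
        scalar = np.trace(Ai @ Ai) - np.trace(Ai) ** 2 / d
        B = Ai - (np.trace(Ai) / d) * np.eye(d)
        G += 0.5 * O @ (scalar * np.eye(d) + d * (B @ B)) @ O.T
    return G

bound = 0.5 * sum(np.trace(Ai @ Ai) - np.trace(Ai) ** 2 / d
                  + d * np.linalg.norm(Ai - (np.trace(Ai) / d) * np.eye(d), 2) ** 2
                  for Ai in A)
worst = max(np.linalg.norm(gamma([haar_so(d) for _ in range(n)]), 2)
            for _ in range(200))
```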
\subsection{Extension to general rectangular matrices}
By a standard formal argument, we can extend the results in this section to a function $\mtx{h}:\Omega\rightarrow \mathbb{M}^{d_1\times d_2}$ that takes rectangular matrix values. To do so, we simply apply the theorems to the self-adjoint dilation \[\mtx{f}(z) = \left[\begin{array}{cc} \mtx{0} & \mtx{h}(z)\\ \mtx{h}(z)^*& \mtx{0} \end{array}\right] \in \mathbb{H}_{d_1+d_2}.\] See~\cite{tropp2015introduction} for many examples of this methodology.
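The dilation is Hermitian, its spectrum is symmetric about zero, and $\lambda_{\max}(\mtx{f}(z)) = \norm{\mtx{h}(z)}$; this is why eigenvalue bounds for $\mtx{f}$ transfer to the spectral norm of $\mtx{h}$. A quick numerical confirmation:

```python
import numpy as np

# The self-adjoint dilation of a rectangular matrix H is Hermitian, its
# spectrum is symmetric about zero, and its largest eigenvalue equals ||H||.
rng = np.random.default_rng(9)
d1, d2 = 3, 5
H = rng.standard_normal((d1, d2)) + 1j * rng.standard_normal((d1, d2))
F = np.block([[np.zeros((d1, d1)), H],
              [H.conj().T, np.zeros((d2, d2))]])
eigs = np.linalg.eigvalsh(F)            # ascending eigenvalues of the dilation
spectral_norm = np.linalg.norm(H, 2)    # largest singular value of H
```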
\subsection{History}\label{sec:concentration_history}
Matrix concentration inequalities are noncommutative extensions of their scalar counterparts. They have been studied extensively, and they have had a profound impact on a wide range of areas in computational mathematics and statistics. The models for which the most complete results are available include a sum of independent random matrices~\cite{lust1986inegalites,rudelson1999random,oliveira2010sums,tropp2012user,huang2019generalized} and a matrix-valued martingale sequence~\cite{pisier1997non,oliveira2009concentration,tropp2011freedman,junge2015noncommutative,howard2018exponential}. We refer to the monograph \cite{tropp2015introduction} for an introduction and an extensive bibliography. Very recently, some concentration results for products of random matrices have also been established~\cite{henriksen2020concentration,huang2020matrix}.
In recent years, many authors have sought concentration results for more general random matrix models. One natural idea is to develop matrix versions of scalar concentration techniques based on functional inequalities or based on Markov processes.
In the scalar setting, the subadditivity of the entropy plays a basic role in obtaining modified log-Sobolev inequalities for product spaces, a core ingredient in proving subgaussian concentration results. Chen and Tropp \cite{chen2014subadditivity} established the subadditivity of matrix trace entropy quantities. Unfortunately, the approach in \cite{chen2014subadditivity} requires awkward additional assumptions to derive matrix concentration from modified log-Sobolev inequalities. Cheng et al.~\cite{cheng2016characterizations,cheng2017exponential,cheng2019matrix} have extended this line of research.
Mackey et al.~\cite{mackey2014,paulin2016efron} observed that the method of exchangeable pairs~\cite{stein1972,stein1986approximate,chatterjee2005concentration} leads to more satisfactory matrix concentration inequalities, including matrix generalizations of the Efron--Stein--Steele inequality. The argument in~\cite{paulin2016efron} can be viewed as a discrete version of the semigroup approach that we use in this paper; see Appendix~\ref{apdx:Stein_method} for more discussion.
Very recently, Aoun et al.~\cite{ABY20:Matrix-Poincare} showed how to derive exponential matrix concentration inequalities from the matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare}. Their approach is based on the classic iterative argument, due to Aida \& Stroock~\cite{aida1994moment}, that operates in the scalar setting. For matrices, it takes serious effort to implement this technique. In our companion paper~\cite{HT20:Trace-Poincare}, we have shown that a trace Poincar{\'e} inequality leads to stronger exponential concentration results via an easier argument.
Another appealing contribution of the paper~\cite{ABY20:Matrix-Poincare} is to establish the validity of a matrix Poincar\'e inequality for particular matrix-valued Markov processes. Unfortunately, Poincar{\'e} inequalities are apparently not strong enough to capture subgaussian concentration. In the scalar case, log-Sobolev inequalities lead to subgaussian concentration inequalities. At present, it is not clear how to extend the theory of log-Sobolev inequalities to matrices, and this obstacle has delayed progress on studying matrix concentration via functional inequalities.
In the scalar setting, one common technique for establishing a log-Sobolev inequality is to prove that the Bakry--{\'E}mery criterion holds~\cite[Problem 3.19]{van550probability}. Inspired by this observation, we have chosen to investigate the implications of the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} for Markov semigroups acting on matrix-valued functions. Our work demonstrates that this type of curvature condition allows us to establish matrix moment bounds directly, without the intermediation of a log-Sobolev inequality. As a consequence, we can obtain subgaussian and subgamma concentration for nonlinear random matrix models.
After establishing the results in this paper, we discovered that Junge \& Zeng~\cite{junge2015noncommutative} have also derived subgaussian matrix concentration inequalities from the (noncommutative) Bakry--{\'E}mery criterion. Their approach is based on a noncommutative version of the Burkholder--Davis--Gundy inequality and a martingale argument that applies to a wider class of noncommutative diffusion semigroups acting on von Neumann algebras. As a consequence, their results apply to a larger family of examples, but the moment growth bounds are somewhat worse.
In contrast, our paper develops a direct argument for the mixed classical-quantum semigroup~\eqref{eqn:semigroup} that does not require any sophisticated tools from operator theory or noncommutative probability. Instead, we establish a new trace inequality (Lemma~\ref{lem:key_Gamma}) that mimics the chain rule for a scalar diffusion semigroup.
\section{Matrix Markov semigroups: Properties and proofs} \label{sec:matrix_Markov_semigroups_more}
This section presents some other fundamental facts about matrix Markov semigroups. We also provide proofs of the propositions from Section~\ref{sec:matrix_Markov_semigroups}.
\subsection{Properties of the carr\'e du champ operator}
Our first proposition gives the matrix extension of some classic facts about the carr\'e du champ operator $\Gamma$. Parts of this result are adapted from~\cite[Prop.~2.2]{ABY20:Matrix-Poincare}.
\begin{proposition}[Matrix carr{\'e} du champ] \label{prop:Gamma_property} Let $(Z_t)_{t \geq 0}$ be a Markov process. The associated matrix bilinear form $\Gamma$ has the following properties: \begin{enumerate} \item \label{limit_formula} For all suitable $\mtx{f},\mtx{g}: \Omega \rightarrow \mathbb{H}_d$ and all $z \in \Omega$, \begin{equation}\label{eqn:limit_formula_Gamma}
\Gamma(\mtx{f},\mtx{g})(z) = \lim_{t\downarrow 0} \frac{1}{2t} \Expect\big[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\,\big|\,Z_0=z\big]. \end{equation}
\item\label{gamma_psd} In particular, the quadratic form $\mtx{f} \mapsto \Gamma(\mtx{f})$ is positive: $\Gamma(\mtx{f})\succcurlyeq \mtx{0}$.
\item\label{gamma_young} For all suitable $\mtx{f},\mtx{g}: \Omega \rightarrow \mathbb{H}_d$ and all $s > 0$, \[\Gamma(\mtx{f},\mtx{g}) + \Gamma(\mtx{g},\mtx{f})\preccurlyeq s \,\Gamma(\mtx{f}) + s^{-1}\,\Gamma(\mtx{g}).\]
\item \label{convexity} The quadratic form induced by $\Gamma$ is operator convex: \[\Gamma\big(\tau\mtx{f}+(1-\tau)\mtx{g}\big)\preccurlyeq \tau \,\Gamma(\mtx{f}) + (1-\tau)\,\Gamma(\mtx{g})\quad \text{for each}\ \tau\in[0,1].\] \end{enumerate} Similar results hold for the matrix Dirichlet form, owing to the definition~\eqref{eqn:Dirichlet_form}. \end{proposition}
\begin{proof} \textit{Proof of \eqref{limit_formula}.} The limit form of the carr\'e du champ can be verified with a short calculation: \begin{align*}
\Gamma(\mtx{f},\mtx{g})(z) =&\ \lim_{t\downarrow 0}\frac{1}{2t} \big[ \Expect[\mtx{f}(Z_t)\mtx{g}(Z_t)\,|\,Z_0=z]-\mtx{f}(z)\mtx{g}(z) \big] \\
&\quad - \lim_{t\downarrow 0}\frac{1}{2t} \big[\mtx{f}(z)\big(\Expect[\mtx{g}(Z_t)\,|\,Z_0=z]-\mtx{g}(z)\big) \big] - \lim_{t\downarrow 0}\frac{1}{2t}\big[\big(\Expect[\mtx{f}(Z_t)\,|\,Z_0=z]-\mtx{f}(z)\big)\mtx{g}(z)\big] \\
=&\ \lim_{t\downarrow 0}\frac{1}{2t} \Expect\big[\mtx{f}(Z_t)\mtx{g}(Z_t) - \mtx{f}(z)\mtx{g}(Z_t) -\mtx{f}(Z_t)\mtx{g}(z) + \mtx{f}(z)\mtx{g}(z)\,|\,Z_0=z\big]\\ =&\ \lim_{t\downarrow 0}\frac{1}{2t} \Expect
\big[(\mtx{f}(Z_t)-\mtx{f}(Z_0))(\mtx{g}(Z_t)-\mtx{g}(Z_0))\,|\,Z_0=z\big]. \end{align*} The first relation depends on the definition~\eqref{eqn:definition_Gamma} of $\Gamma$ and the definition~\eqref{eqn:Markov_generator} of $\mathcal{L}$.
\textit{Proof of \eqref{gamma_psd}.} The fact that $\mtx{f} \mapsto \Gamma(\mtx{f})$ is positive follows from~\eqref{limit_formula} because the square of a matrix is positive-semidefinite and the expectation preserves positivity.
\textit{Proof of \eqref{gamma_young}.} The Young inequality for the carr{\'e} du champ follows from the fact that $\Gamma$ is positive: \[\mtx{0}\preccurlyeq \Gamma(s^{1/2}\mtx{f} - s^{-1/2}\mtx{g}) = s \, \Gamma(\mtx{f}) + s^{-1} \,\Gamma(\mtx{g}) - \Gamma(\mtx{f},\mtx{g}) - \Gamma(\mtx{g},\mtx{f}). \] The second relation holds because $\Gamma$ is a bilinear form.
\textit{Proof of \eqref{convexity}.} To establish operator convexity, we use bilinearity again: \begin{align*} \Gamma(\tau\mtx{f}+(1-\tau)\mtx{g}) &= \tau^2\,\Gamma(\mtx{f}) + (1-\tau)^2\,\Gamma(\mtx{g}) + \tau(1-\tau)\left(\Gamma(\mtx{f},\mtx{g}) + \Gamma(\mtx{g},\mtx{f})\right) \\ &\preccurlyeq \tau^2\,\Gamma(\mtx{f}) + (1-\tau)^2\,\Gamma(\mtx{g}) + \tau(1-\tau)\left(\Gamma(\mtx{f}) + \Gamma(\mtx{g})\right) =\tau \,\Gamma(\mtx{f}) + (1-\tau)\,\Gamma(\mtx{g}). \end{align*} The first semidefinite inequality follows from~\eqref{gamma_young} with $s = 1$. \end{proof}
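On a finite state space with a symmetric generator $Q$ (so the uniform measure is stationary and the chain is reversible), the generator acts entrywise as $(\mathcal{L}\mtx{h})(z) = \sum_w Q[z,w]\,\mtx{h}(w)$, and the conclusions of Proposition~\ref{prop:Gamma_property} can be tested directly. The sketch below checks positivity \eqref{gamma_psd} and the Young inequality \eqref{gamma_young}:

```python
import numpy as np

# Finite-state check of the matrix carre du champ: for a symmetric generator Q
# (reversible w.r.t. the uniform measure), Gamma(f) is PSD at every state and
# the Young inequality holds.
rng = np.random.default_rng(3)
m, d = 5, 3                                   # states, matrix dimension
W = rng.uniform(size=(m, m)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
Q = W - np.diag(W.sum(axis=1))                # rows sum to zero

sym = lambda M: (M + M.T) / 2
f = np.array([sym(rng.standard_normal((d, d))) for _ in range(m)])
g = np.array([sym(rng.standard_normal((d, d))) for _ in range(m)])

L = lambda h: np.einsum('zw,wab->zab', Q, h)  # (L h)(z) = sum_w Q[z,w] h(w)
mul = lambda h1, h2: np.einsum('zab,zbc->zac', h1, h2)
Gamma = lambda h1, h2: 0.5 * (L(mul(h1, h2)) - mul(h1, L(h2)) - mul(L(h1), h2))

min_eig = min(np.linalg.eigvalsh(Gamma(f, f)[z]).min() for z in range(m))
s = 2.0                                       # Young inequality with s = 2
young = s * Gamma(f, f) + Gamma(g, g) / s - Gamma(f, g) - Gamma(g, f)
young_min = min(np.linalg.eigvalsh(young[z]).min() for z in range(m))
```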
The next lemma is an extension of Proposition~\ref{prop:Gamma_property}\eqref{limit_formula}. We use this result to establish the all-important chain rule inequality in Section~\ref{sec:trace_to_moment}.
\begin{lemma}[Triple product] \label{lem:three_limit} Let $(Z_t)_{t\geq0}$ be a reversible Markov process with a stationary measure $\mu$ and infinitesimal generator $\mathcal{L}$. For all suitable $\mtx{f},\mtx{g},\mtx{h}:\Omega \rightarrow \mathbb{H}_d$ and all $z \in \Omega$, \begin{align*}
& \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\big[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\big(\mtx{h}(Z_t)-\mtx{h}(Z_0)\big)\,\big|\,Z_0=z\big] \\ &\qquad= \operatorname{tr}\big[ \mathcal{L}(\mtx{f}\mtx{g}\mtx{h}) - \mathcal{L}(\mtx{f}\mtx{g})\mtx{h} - \mathcal{L}(\mtx{h}\mtx{f})\mtx{g} - \mathcal{L}(\mtx{g}\mtx{h})\mtx{f} + \mathcal{L}(\mtx{f})\mtx{g}\mtx{h}+ \mathcal{L}(\mtx{g})\mtx{h}\mtx{f} + \mathcal{L}(\mtx{h})\mtx{f}\mtx{g}\big](z). \end{align*} In particular,
\[\Expect_{Z\sim\mu} \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\big[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\big(\mtx{h}(Z_t)-\mtx{h}(Z_0)\big)\,\big|\,Z_0=Z\big] = 0.\] \end{lemma}
\begin{proof} For simplicity, we abbreviate \[\mtx{f}_t = \mtx{f}(Z_t),\quad \mtx{g}_t = \mtx{g}(Z_t),\quad \mtx{h}_t = \mtx{h}(Z_t)\quad\text{and}\quad \mtx{f}_0 = \mtx{f}(Z_0),\quad \mtx{g}_0 = \mtx{g}(Z_0),\quad \mtx{h}_0 = \mtx{h}(Z_0).\] Direct calculation gives \begin{align*}
& \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\left[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\big(\mtx{h}(Z_t)-\mtx{h}(Z_0)\big) \,\big|\,Z_0=z\right] \\
&\quad= \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\left[ \mtx{f}_t\mtx{g}_t\mtx{h}_t - \mtx{f}_t\mtx{g}_t\mtx{h}_0 -\mtx{f}_t\mtx{g}_0\mtx{h}_t + \mtx{f}_t\mtx{g}_0\mtx{h}_0 -\mtx{f}_0\mtx{g}_t\mtx{h}_t + \mtx{f}_0\mtx{g}_t\mtx{h}_0 + \mtx{f}_0\mtx{g}_0\mtx{h}_t - \mtx{f}_0\mtx{g}_0\mtx{h}_0 \,\big|\, Z_0 = z\right]\\ &\quad = \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\big[ \big(\mtx{f}_t\mtx{g}_t\mtx{h}_t - \mtx{f}_0\mtx{g}_0\mtx{h}_0\big) - \big((\mtx{f}_t\mtx{g}_t - \mtx{f}_0\mtx{g}_0)\mtx{h}_0\big) - \big((\mtx{h}_t\mtx{f}_t - \mtx{h}_0\mtx{f}_0)\mtx{g}_0\big) + \big((\mtx{f}_t - \mtx{f}_0)\mtx{g}_0\mtx{h}_0\big) \\
&\qquad\qquad\qquad\qquad -\ \big((\mtx{g}_t\mtx{h}_t - \mtx{g}_0\mtx{h}_0)\mtx{f}_0\big) + \big((\mtx{g}_t - \mtx{g}_0)\mtx{h}_0\mtx{f}_0\big) + \big((\mtx{h}_t - \mtx{h}_0)\mtx{f}_0\mtx{g}_0\big) \,\big|\,Z_0=z \big]\\ &\quad= \operatorname{tr}\big[ \mathcal{L}(\mtx{f}\mtx{g}\mtx{h})(z) - \mathcal{L}(\mtx{f}\mtx{g})(z)\mtx{h}(z) - \mathcal{L}(\mtx{h}\mtx{f})(z)\mtx{g}(z) - \mathcal{L}(\mtx{g}\mtx{h})(z)\mtx{f}(z) \\ & \qquad\qquad\quad +\ \mathcal{L}(\mtx{f})(z)\mtx{g}(z)\mtx{h}(z) + \mathcal{L}(\mtx{g})(z)\mtx{h}(z)\mtx{f}(z) + \mathcal{L}(\mtx{h})(z)\mtx{f}(z)\mtx{g}(z)\big]. \end{align*} We have applied the cyclic property of the trace. Using the reversibility~\eqref{eqn:reversibility_2} of the Markov process and the zero-mean property~\eqref{eqn:mean_zero} of the infinitesimal generator, we have \begin{align*} & \Expect_\mu \operatorname{tr}\left[ \mathcal{L}(\mtx{f}\mtx{g}\mtx{h}) - \mathcal{L}(\mtx{f}\mtx{g})\mtx{h} - \mathcal{L}(\mtx{h}\mtx{f})\mtx{g} - \mathcal{L}(\mtx{g}\mtx{h})\mtx{f} + \mathcal{L}(\mtx{f})\mtx{g}\mtx{h}+ \mathcal{L}(\mtx{g})\mtx{h}\mtx{f} + \mathcal{L}(\mtx{h})\mtx{f}\mtx{g}\right]\\ &\quad= \operatorname{tr}\left[ \Expect_\mu[\mathcal{L}(\mtx{f}\mtx{g}\mtx{h})] - \Expect_\mu[\mathcal{L}(\mtx{f}\mtx{g})\mtx{h} - \mtx{f}\mtx{g}\mathcal{L}(\mtx{h})] -\Expect_\mu[\mathcal{L}(\mtx{h}\mtx{f})\mtx{g} - \mtx{h}\mtx{f}\mathcal{L}(\mtx{g})] - \Expect_\mu[\mathcal{L}(\mtx{g}\mtx{h})\mtx{f} - \mtx{g}\mtx{h}\mathcal{L}(\mtx{f})] \right] \\ &\quad= 0. \end{align*} This concludes the second part of the lemma. \end{proof}
\subsection{Reversibility} \label{sec:reversibility-pf}
In this section, we establish Proposition~\ref{prop:reversibility}, which states that reversibility of the semigroup~\eqref{eqn:semigroup}
on real-valued functions is equivalent to the reversibility of the semigroup on matrix-valued functions. The pattern of argument was suggested to us by Ramon van Handel, and it will be repeated below in the proofs that certain functional inequalities for real-valued functions are equivalent to functional inequalities for matrix-valued functions.
\begin{proof}[Proof of Proposition~\ref{prop:reversibility}] The implication that matrix reversibility~\eqref{eqn:reversibility_1} for all $d \in \mathbbm{N}$ implies scalar reversibility is obvious: just take $d = 1$. To check the converse, we require an elementary identity. For all vectors $\vct{u},\vct{v}\in \mathbb{C}^d$ and all matrices $\mtx{A},\mtx{B}\in \mathbb{H}_d$, \begin{align} \vct{u}^*(\mtx{A}\mtx{B})\vct{v} &= \sum_{j=1}^d (\vct{u}^*\mtx{A}\mathbf{e}_j)(\mathbf{e}_j^*\mtx{B}\vct{v}) =: \sum_{j=1}^d a_j\bar{b}_j\\ &= \sum_{j=1}^d\left[\operatorname{Re}(a_j)\operatorname{Re}(b_j) + \operatorname{Im}(a_j)\operatorname{Im}(b_j) - \mathrm{i}\operatorname{Re}(a_j)\operatorname{Im}(b_j) + \mathrm{i}\operatorname{Im}(a_j)\operatorname{Re}(b_j)\right]. \label{eqn:Ramon} \end{align} We have defined $a_j:= \vct{u}^*\mtx{A}\mathbf{e}_j$ and $b_j:= \vct{v}^*\mtx{B}\mathbf{e}_j$ for each $j=1,\dots,d$; since $\mtx{B}$ is Hermitian, $\mathbf{e}_j^*\mtx{B}\vct{v} = \bar{b}_j$. As usual, $(\mathbf{e}_j : 1 \leq j \leq d)$ is the standard basis for $\mathbbm{C}^d$.
Now, consider two matrix-valued functions $\mtx{f},\mtx{g}:\Omega\rightarrow\mathbb{H}_d$. Introduce the scalar functions $f_j := \vct{u}^*\mtx{f}\mathbf{e}_j$ and $g_j := \vct{v}^*\mtx{g}\mathbf{e}_j$ for each $j=1,\dots,d$. The definition~\eqref{eqn:semigroup} of the semigroup $(P_t)_{t\geq0}$ as an expectation ensures that \[\vct{u}^*(P_t\mtx{f})\mathbf{e}_j = P_tf_j = P_t(\operatorname{Re}(f_j)) + \mathrm{i}\,P_t(\operatorname{Im}(f_j)) = \operatorname{Re}(P_t f_j) + \mathrm{i} \operatorname{Im}(P_tf_j).\] The parallel statement holds for $\vct{v}^*(P_t\mtx{g})\mathbf{e}_j$. Therefore, we can use formula \eqref{eqn:Ramon} to compute that \begin{align*} & \vct{u}^*\Expect_\mu [(P_t\mtx{f}) \, \mtx{g}]\vct{v} \\ &\quad = \sum_{j=1}^d \Expect_\mu [\vct{u}^*(P_t\mtx{f})\mathbf{e}_j\mathbf{e}_j^*\mtx{g}\vct{v}] = \sum_{j=1}^d \Expect_\mu [(P_tf_j)\,\bar{g}_j]\\ &\quad = \sum_{j=1}^d\Expect_\mu \left[(P_t\operatorname{Re}(f_j))\operatorname{Re}(g_j) + (P_t\operatorname{Im}(f_j))\operatorname{Im}(g_j) - \mathrm{i}(P_t\operatorname{Re}(f_j))\operatorname{Im}(g_j) + \mathrm{i}(P_t\operatorname{Im}(f_j))\operatorname{Re}(g_j)\right]\\ &\quad = \sum_{j=1}^d\Expect_\mu \left[\operatorname{Re}(f_j)(P_t\operatorname{Re}(g_j)) + \operatorname{Im}(f_j)(P_t\operatorname{Im}(g_j)) - \mathrm{i}\operatorname{Re}(f_j)(P_t\operatorname{Im}(g_j)) + \mathrm{i}\operatorname{Im}(f_j)(P_t\operatorname{Re}(g_j))\right]\\ &\quad= \sum_{j=1}^d \Expect_\mu [f_j\,(P_t\bar{g}_j)] = \sum_{j=1}^d \Expect_\mu [\vct{u}^*\mtx{f}\mathbf{e}_j\mathbf{e}_j^*(P_t\mtx{g})\vct{v}] = \vct{u}^*\Expect_\mu [\mtx{f} \, (P_t\mtx{g})]\vct{v}. \end{align*} The matrix identity \eqref{eqn:reversibility_1} follows immediately because $\vct{u},\vct{v}\in \mathbb{C}^d$ are arbitrary. \end{proof}
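Identity \eqref{eqn:Ramon} is elementary, but the whole proof pivots on it, so here is a direct numerical check (with $b_j = \vct{v}^*\mtx{B}\mathbf{e}_j$, using that $\mtx{B}$ is Hermitian so $\mathbf{e}_j^*\mtx{B}\vct{v} = \bar{b}_j$):

```python
import numpy as np

# Direct check of the decomposition u*(AB)v = sum_j a_j * conj(b_j) with
# a_j = u* A e_j and b_j = v* B e_j (B Hermitian), split into real and
# imaginary parts as in the proof.
rng = np.random.default_rng(12)
d = 4
herm = lambda M: (M + M.conj().T) / 2
A = herm(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
B = herm(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
u = rng.standard_normal(d) + 1j * rng.standard_normal(d)
v = rng.standard_normal(d) + 1j * rng.standard_normal(d)

lhs = u.conj() @ (A @ B) @ v
a = u.conj() @ A                   # a[j] = u* A e_j
b = v.conj() @ B                   # b[j] = v* B e_j
rhs = np.sum(a.real * b.real + a.imag * b.imag
             - 1j * a.real * b.imag + 1j * a.imag * b.real)
```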
\subsection{Dimension reduction}
The following lemma explains how to relate the carr{\'e} du champ operator of a matrix-valued function to the carr{\'e} du champ operators of some scalar functions. It will help us transform the scalar Poincar{\'e} inequality and the scalar Bakry--{\'E}mery criterion to their matrix equivalents.
\begin{lemma}[Dimension reduction of carr{\'e} du champ]\label{lem:dimension_reduction} Let $(P_t)_{t \geq 0}$ be the semigroup defined in~\eqref{eqn:semigroup}. The carr{\'e} du champ operator $\Gamma$ and the iterated carr{\'e} du champ operator $\Gamma_2$ satisfy \begin{align} \vct{u}^*\Gamma(\mtx{f})\vct{u} &= \sum_{j=1}^d\left(\Gamma\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right); \label{eqn:scalar_gamma}\\ \vct{u}^*\Gamma_2(\mtx{f})\vct{u} &= \sum_{j=1}^d\left(\Gamma_2\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma_2\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right). \label{eqn:scalar_gamma2} \end{align} These formulae hold for all $d \in \mathbbm{N}$, for all suitable functions $\mtx{f}:\Omega\to \mathbb{H}_d$, and for all vectors $\vct{u}\in\mathbb{C}^d$. \end{lemma}
\begin{proof} The definition~\eqref{eqn:Markov_generator} of $\mathcal{L}$ implies that \[\vct{u}^*\mathcal{L}(\mtx{f})\vct{v} = \mathcal{L}(\vct{u}^*\mtx{f}\vct{v}) = \mathcal{L}(\operatorname{Re}(\vct{u}^*\mtx{f}\vct{v})) + \mathrm{i}\cdot \mathcal{L}(\operatorname{Im}(\vct{u}^*\mtx{f}\vct{v})).\] Introduce the scalar functions $f_j := \vct{u}^*\mtx{f}\mathbf{e}_j$ for each $j=1,\dots,d$. Then we can use the definition~\eqref{eqn:definition_Gamma} of $\Gamma$ and formula \eqref{eqn:Ramon} to compute that \begin{align*} \vct{u}^*\Gamma(\mtx{f})\vct{u} &= \frac{1}{2}\left(\vct{u}^*\mathcal{L}(\mtx{f}^2)\vct{u} - \vct{u}^*\mtx{f}\mathcal{L}(\mtx{f})\vct{u} - \vct{u}^*\mathcal{L}(\mtx{f})\mtx{f}\vct{u}\right)\\ &= \frac{1}{2}\sum_{j=1}^d\left(\vct{u}^*\mathcal{L}(\mtx{f}\mathbf{e}_j\mathbf{e}_j^*\mtx{f})\vct{u} - \vct{u}^*\mtx{f}\mathbf{e}_j\mathbf{e}_j^*\mathcal{L}(\mtx{f})\vct{u} - \vct{u}^*\mathcal{L}(\mtx{f})\mathbf{e}_j\mathbf{e}_j^*\mtx{f}\vct{u}\right)\\ &= \frac{1}{2}\sum_{j=1}^d\left(\mathcal{L}(f_j\,\bar{f}_j) - f_j\,\mathcal{L}(\bar{f}_j) - \mathcal{L}(f_j)\,\bar{f}_j\right)\\ &= \frac{1}{2}\sum_{j=1}^d\left(\mathcal{L}(\operatorname{Re}(f_j)^2) + \mathcal{L}(\operatorname{Im}(f_j)^2) - 2\operatorname{Re}(f_j)\,\mathcal{L}(\operatorname{Re}(f_j)) - 2\operatorname{Im}(f_j)\,\mathcal{L}(\operatorname{Im}(f_j)) \right)\\ &= \sum_{j=1}^d\left(\Gamma(\operatorname{Re}(f_j)) + \Gamma(\operatorname{Im}(f_j))\right). \end{align*} This is the first identity~\eqref{eqn:scalar_gamma}. The second identity \eqref{eqn:scalar_gamma2} follows from a similar argument based on the definition~\eqref{eqn:definition_Gamma2} of $\Gamma_2$ and the relation \eqref{eqn:scalar_gamma}. \end{proof}
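The identity \eqref{eqn:scalar_gamma} can be confirmed numerically on a finite-state reversible chain, computing the scalar and matrix carr{\'e} du champ operators from the definition \eqref{eqn:definition_Gamma}:

```python
import numpy as np

# Finite-state check of the dimension reduction identity:
#   u* Gamma(f) u = sum_j [ Gamma(Re u*f e_j) + Gamma(Im u*f e_j) ],
# where the generator acts entrywise, (L h)(z) = sum_w Q[z,w] h(w).
rng = np.random.default_rng(13)
m, d = 5, 3
W = rng.uniform(size=(m, m)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
Q = W - np.diag(W.sum(axis=1))

herm = lambda M: (M + M.conj().T) / 2
f = np.array([herm(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
              for _ in range(m)])
u = rng.standard_normal(d) + 1j * rng.standard_normal(d)

L = lambda h: np.einsum('zw,w...->z...', Q, h)
G_f = 0.5 * (L(np.einsum('zab,zbc->zac', f, f))
             - np.einsum('zab,zbc->zac', f, L(f))
             - np.einsum('zab,zbc->zac', L(f), f))
lhs = np.einsum('a,zab,b->z', u.conj(), G_f, u).real   # u* Gamma(f)(z) u

fj = np.einsum('a,zaj->zj', u.conj(), f)               # f_j(z) = u* f(z) e_j
Gs = lambda p: 0.5 * (L(p * p) - 2 * p * L(p))         # scalar carre du champ
rhs = (Gs(fj.real) + Gs(fj.imag)).sum(axis=1)
gap = np.abs(lhs - rhs).max()
```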
\subsection{Equivalence of scalar and matrix inequalities} \label{sec:scalar_matrix}
In this section, we verify Proposition~\ref{prop:poincare_equiv} and Proposition~\ref{prop:BE_equiv}. These results state that functional inequalities for the action of the semigroup~\eqref{eqn:semigroup} on real-valued functions induce functional inequalities for its action on matrix-valued functions.
\begin{proof}[Proof of Proposition~\ref{prop:poincare_equiv}] It is evident that the validity of the matrix Poincar\'e inequality \eqref{Poincare_inequality_matrix} for all $d \in \mathbbm{N}$ implies the scalar Poincar\'e inequality \eqref{Poincare_inequality_scalar}, which is simply the $d = 1$ case. For the reverse implication, we invoke formula \eqref{eqn:Ramon} to learn that \[ \vct{u}^*\mVar_\mu[\mtx{f}]\vct{u} = \sum_{j=1}^d\left(\Var_\mu\big[\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big]+\Var_\mu\big[\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big]\right). \] Moreover, we can take the expectation $\Expect_\mu$ of formula \eqref{eqn:scalar_gamma} to obtain \[ \vct{u}^*\mathcal{E}(\mtx{f})\vct{u} = \sum_{j=1}^d\left(\mathcal{E}\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)+\mathcal{E}\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right). \] Applying the scalar Poincar{\'e} inequality \eqref{Poincare_inequality_scalar} to the real scalar functions $\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)$ and $\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)$, we obtain \[\vct{u}^*\mVar_\mu[\mtx{f}]\vct{u} \leq \alpha\cdot \vct{u}^*\mathcal{E}(\mtx{f})\vct{u}\quad \text{for all $\vct{u}\in \mathbb{C}^d$}.\] This immediately implies the matrix Poincar{\'e} inequality \eqref{Poincare_inequality_matrix}. \end{proof}
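\noindent For the record, the variance identity invoked in the preceding proof admits a direct verification; it uses only that $\mtx{h} := \mtx{f}-\Expect_\mu\mtx{f}$ is Hermitian with $\Expect_\mu\mtx{h} = \mtx{0}$:
\[
\vct{u}^*\mVar_\mu[\mtx{f}]\vct{u} = \Expect_\mu\big[\vct{u}^*\mtx{h}^2\vct{u}\big] = \Expect_\mu \norm{\mtx{h}\vct{u}}^2 = \sum_{j=1}^d \Expect_\mu\abs{\vct{u}^*\mtx{h}\mathbf{e}_j}^2 = \sum_{j=1}^d\left(\Var_\mu\big[\operatorname{Re}(\vct{u}^*\mtx{h}\mathbf{e}_j)\big]+\Var_\mu\big[\operatorname{Im}(\vct{u}^*\mtx{h}\mathbf{e}_j)\big]\right).
\]
The third equality holds because $\abs{\mathbf{e}_j^*\mtx{h}\vct{u}} = \abs{\vct{u}^*\mtx{h}\mathbf{e}_j}$ for Hermitian $\mtx{h}$, and the scalar variances are unchanged when $\mtx{h}$ is replaced by $\mtx{f}$.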
\begin{proof}[Proof of Proposition~\ref{prop:BE_equiv}] It is evident that the validity of the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery_criterion_matrix} for all $d \in \mathbbm{N}$ implies the validity of the scalar criterion~\eqref{Bakry-Emery_criterion_scalar}, as we only need to set $d = 1$. To develop the reverse implication, we use Lemma~\ref{lem:dimension_reduction} to compute that \begin{align*} \vct{u}^*\Gamma(\mtx{f})\vct{u} &= \sum_{j=1}^d\left(\Gamma\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right)\\ &\leq c \sum_{j=1}^d\left(\Gamma_2\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma_2\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right)\\ &= c\cdot \vct{u}^*\Gamma_2(\mtx{f})\vct{u}. \end{align*} The inequality follows from applying the scalar criterion \eqref{Bakry-Emery_criterion_scalar} to the real scalar functions $\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)$ and $\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)$ for each $j=1,\dots,d$. Since $\vct{u} \in \mathbbm{C}^d$ is arbitrary, we immediately obtain \eqref{Bakry-Emery_criterion_matrix}. \end{proof}
\subsection{Derivative formulas}
A standard way to establish the equivalence between the Poincar\'e inequality and the exponential ergodicity property is by studying derivatives with respect to the time parameter $t$. The following result, extending~\cite[Lemma 2.3]{ABY20:Matrix-Poincare}, calculates the derivatives of the matrix variance and the Dirichlet form along the semigroup $(P_t)_{t\geq0}$. The result parallels the scalar case.
\begin{lemma}[Dissipation of variance and energy] \label{lem:derivative_formula} Let $(P_t)_{t\geq 0}$ be a Markov semigroup with stationary measure $\mu$, infinitesimal generator $\mathcal{L}$, and Dirichlet form $\mathcal{E}$. For all suitable $\mtx{f}:\Omega\rightarrow \mathbb{H}_d$, \begin{equation}\label{eqn:variance_derivative} \frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = -2\mathcal{E}(P_t\mtx{f})\quad \text{for all $t>0$}. \end{equation} Moreover, if the semigroup is reversible, \begin{equation}\label{eqn:energy_derivative} \frac{\diff{} }{\diff t}\mathcal{E}(P_t\mtx{f}) = -2\Expect_\mu\big[(\mathcal{L}(P_t\mtx{f}))^2\big]\quad \text{for all $t>0$}. \end{equation} \end{lemma}
\begin{proof} By the definition~\eqref{eqn:matrix_variance} of the matrix variance and the stationarity property $\operatorname{\mathbbm{E}}_{\mu} P_t = \operatorname{\mathbbm{E}}_\mu$, we can calculate that \begin{align*} \frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = \frac{\diff{} }{\diff t}\big[\Expect_\mu (P_t\mtx{f})^2 - (\Expect_\mu\mtx{f})^2\big] =\Expect_\mu\big[\mathcal{L}(P_t\mtx{f})(P_t\mtx{f}) + (P_t\mtx{f})\mathcal{L}(P_t\mtx{f})\big] = -2\mathcal{E}(P_t\mtx{f}). \end{align*} The second equality above uses the derivative relation \eqref{eqn:derivative_relation} for the generator, and the third equality is the expression~\eqref{eqn:Dirichlet_expression_1} for the Dirichlet form. Similarly, we can calculate that \begin{align*} \frac{\diff{} }{\diff t} \mathcal{E}(P_t\mtx{f}) &= - \frac{\diff{} }{\diff t} \Expect_\mu\big[(P_t\mtx{f})\mathcal{L}(P_t\mtx{f})\big]\\ &= - \Expect_\mu\big[\mathcal{L}(P_t\mtx{f})\mathcal{L}(P_t\mtx{f}) + (P_t\mtx{f})\mathcal{L}(\mathcal{L}(P_t\mtx{f}))\big] = - 2\Expect_\mu\big[(\mathcal{L}(P_t\mtx{f}))^2\big]. \end{align*} The first equality is \eqref{eqn:Dirichlet_expression_2}. The last equality holds because $\mathcal{L}$ is symmetric. \end{proof}
The matrix Poincar\'e inequality \eqref{eqn:matrix_Poincare} allows us to convert the derivative formulas in Lemma~\ref{lem:derivative_formula} into differential inequalities for matrix-valued functions. The next lemma gives the solution to these differential inequalities.
\begin{lemma}[Differential matrix inequality] \label{lem:matrix_differential_inequality} Assume that $\mtx{A}:[0,+\infty) \rightarrow \mathbb{H}_d$ is a differentiable matrix-valued function that satisfies the differential inequality \[\frac{\diff{} }{\diff t}\mtx{A}(t) \preccurlyeq \nu \cdot \mtx{A}(t) \quad \text{for all $t > 0$,}\] where $\nu \in \mathbb{R}$ is a constant. Then \[\mtx{A}(t) \preccurlyeq \mathrm{e}^{\nu t}\cdot\mtx{A}(0)\quad \text{for all $t\geq 0$}. \] \end{lemma}
\begin{proof} Consider the matrix-valued function $\mtx{B}(t):= \mathrm{e}^{-\nu t} \mtx{A}(t)$ for $t \geq 0$. Then $\mtx{B}(0) = \mtx{A}(0)$, and \[\frac{\diff{} }{\diff t}\mtx{B}(t) = \mathrm{e}^{-\nu t}\frac{\diff{} }{\diff t}\mtx{A}(t) - \nu \mathrm{e}^{-\nu t} \mtx{A}(t)\preccurlyeq \mtx{0}. \] Since integration preserves the semidefinite order, \[\mathrm{e}^{-\nu t}\mtx{A}(t) = \mtx{B}(t) \preccurlyeq \mtx{B}(0) = \mtx{A}(0).\] Multiply by $\mathrm{e}^{\nu t}$ to arrive at the stated result. \end{proof}
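\noindent The order-preservation step in the preceding proof can be spelled out scalar-wise: for every fixed $\vct{u}\in\mathbb{C}^d$,
\[
\vct{u}^*\big(\mtx{B}(t)-\mtx{B}(0)\big)\vct{u} = \int_0^t \vct{u}^*\bigg(\frac{\diff{} }{\diff s}\mtx{B}(s)\bigg)\vct{u} \idiff s \leq 0,
\]
because the integrand is a nonpositive real-valued function of $s$. Since $\vct{u}$ is arbitrary, $\mtx{B}(t)\preccurlyeq\mtx{B}(0)$.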
\subsection{Consequences of the Poincar\'e inequality} \label{sec:equivalence_Poincare}
This section contains the proof of Proposition~\ref{prop:matrix_poincare}, the equivalence between the matrix Poincar\'e inequality and exponential ergodicity properties. This proof is adapted from its scalar analog~\cite[Theorem 2.18]{van550probability}.
\begin{proof}[Proof of Proposition~\ref{prop:matrix_poincare}] \textit{Proof that \eqref{Poincare_inequality} $\Rightarrow$ \eqref{variance_convergence}.} To see that the matrix Poincar{\'e} inequality~\eqref{Poincare_inequality} implies exponential ergodicity~\eqref{variance_convergence} of the variance, combine Lemma~\ref{lem:derivative_formula} with the matrix Poincar{\'e} inequality to obtain a differential inequality: \[\frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = -2\mathcal{E}(P_t\mtx{f}) \preccurlyeq -\frac{2}{\alpha} \mVar_\mu[P_t\mtx{f}].\] Lemma~\ref{lem:matrix_differential_inequality} gives the solution: \[\mVar_\mu[P_t\mtx{f}] \preccurlyeq \mathrm{e}^{-2t/\alpha} \mVar_\mu[P_0\mtx{f}] = \mathrm{e}^{-2t/\alpha} \mVar_\mu[\mtx{f}].\] This is the ergodicity of the variance.
\textit{Proof that \eqref{variance_convergence} $\Rightarrow$ \eqref{Poincare_inequality}.} To obtain the matrix Poincar{\'e} inequality~\eqref{Poincare_inequality} from exponential ergodicity~\eqref{variance_convergence} of the variance, use the derivative~\eqref{eqn:variance_derivative} of the variance and the fact that $P_0$ is the identity map to see that \[\mathcal{E}(\mtx{f}) = \lim_{t\downarrow0} \frac{\mVar_\mu[\mtx{f}]- \mVar_\mu[P_t\mtx{f}]}{2t} \succcurlyeq \lim_{t\downarrow0} \frac{1-\mathrm{e}^{-2t/\alpha}}{2t} \cdot \mVar_\mu[\mtx{f}] = \frac{1}{\alpha} \mVar_\mu[\mtx{f}].\] The inequality follows from \eqref{variance_convergence}.
\textit{Proof that \eqref{Poincare_inequality} $\Rightarrow$ \eqref{energy_convergence} under reversibility.} Next, we argue that the matrix Poincar\'e inequality \eqref{Poincare_inequality} implies exponential ergodicity~\eqref{energy_convergence} of the energy, assuming that the semigroup is reversible. In this case, the zero-mean property \eqref{eqn:mean_zero} implies that $\Expect_\mu[\mtx{g}\mathcal{L}(\mtx{f})] = \Expect_\mu[(\mtx{g}-\Expect_\mu\mtx{g})\mathcal{L}(\mtx{f})]$ and $\Expect_\mu[\mathcal{L}(\mtx{f})\mtx{g}] = \Expect_\mu[\mathcal{L}(\mtx{f})(\mtx{g}-\Expect_\mu\mtx{g})]$ for all suitable $\mtx{f},\mtx{g}$. Therefore, \begin{align*} \mathcal{E}(\mtx{f}) &= - \frac{1}{2}\Expect_\mu\left[\mtx{f}\mathcal{L}(\mtx{f})+\mathcal{L}(\mtx{f})\mtx{f}\right] = - \frac{1}{2}\Expect_\mu\left[(\mtx{f}-\Expect_\mu\mtx{f})\mathcal{L}(\mtx{f}) + \mathcal{L}(\mtx{f})(\mtx{f}-\Expect_\mu\mtx{f})\right]\\ &\preccurlyeq \frac{1}{2\alpha}\Expect_\mu \big[(\mtx{f}-\Expect_\mu\mtx{f})^2\big] + \frac{\alpha}{2}\Expect_\mu \big[\mathcal{L}(\mtx{f})^2\big] \preccurlyeq \frac{1}{2} \mathcal{E}(\mtx{f}) + \frac{\alpha}{2}\Expect_\mu \big[\mathcal{L}(\mtx{f})^2\big]. \end{align*} The first inequality holds because $-(\mtx{A}\mtx{B} + \mtx{B}\mtx{A})\preccurlyeq s\,\mtx{A}^2+s^{-1}\,\mtx{B}^2$ for all $\mtx{A},\mtx{B}\in \mathbb{H}_d$ and all $s>0$, applied here with $s = 1/\alpha$; the second follows from the matrix Poincar{\'e} inequality~\eqref{Poincare_inequality}. Rearranging, we obtain the relation $\mathcal{E}(\mtx{f})\preccurlyeq \alpha \Expect_\mu [\mathcal{L}(\mtx{f})^2]$ for all suitable $\mtx{f}$.
Combine this fact with the derivative formula \eqref{eqn:energy_derivative} to reach \[\frac{\diff{} }{\diff t} \mathcal{E}(P_t\mtx{f}) = - 2\Expect_\mu\big[\mathcal{L}(P_t\mtx{f})^2\big] \preccurlyeq - \frac{2}{\alpha} \mathcal{E}(P_t\mtx{f}).\] Lemma~\ref{lem:matrix_differential_inequality} gives the solution to the differential inequality: \[\mathcal{E}(P_t\mtx{f})\preccurlyeq \mathrm{e}^{-2t/\alpha} \mathcal{E}(P_0\mtx{f}) = \mathrm{e}^{-2t/\alpha} \mathcal{E}(\mtx{f}).\] This is the ergodicity of energy.
\textit{Proof that \eqref{energy_convergence} $\Rightarrow$ \eqref{Poincare_inequality} under ergodicity.} To see that exponential ergodicity~\eqref{energy_convergence} of the energy implies the matrix Poincar\'e inequality \eqref{Poincare_inequality} when the semigroup is ergodic, we combine \eqref{energy_convergence} with the derivative~\eqref{eqn:variance_derivative} of the variance to obtain \[\frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = -2\mathcal{E}(P_t\mtx{f}) \succcurlyeq -2\mathrm{e}^{-2t/\alpha}\mathcal{E}(\mtx{f}).\] Using the ergodicity assumption~\eqref{eqn:ergodicity} on the semigroup, we have \begin{align*} \mVar_\mu[\mtx{f}] &= \mVar_\mu[P_0\mtx{f}] - \lim_{t\rightarrow\infty}\mVar_\mu[P_t\mtx{f}] = -\int_0^\infty \frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] \idiff t \\ &\preccurlyeq 2\int_0^\infty\mathrm{e}^{-2t/\alpha} \idiff t \cdot \mathcal{E}(\mtx{f}) = \alpha \mathcal{E}(\mtx{f}). \end{align*} The first equality follows from the ergodicity relation \[\lim_{t\rightarrow\infty}\mVar_\mu[P_t\mtx{f}] = \lim_{t\rightarrow\infty}\Expect_\mu(P_t\mtx{f}-\Expect_\mu\mtx{f})^2 = \mtx{0}.\] This completes the proof of Proposition~\ref{prop:matrix_poincare}. \end{proof}
\subsection{Equivalence result for local Poincar\'e inequality} \label{sec:equivalence_local_Poincare}
Proposition~\ref{prop:local_Poincare} states that the matrix Bakry--\'Emery criterion, the local Poincar\'e inequality, and the local ergodicity of the carr\'e du champ operator are equivalent to one another. This section is dedicated to the proof, which is modeled on the scalar argument~\cite[Theorem 2.36]{van550probability}.
\begin{proof}[Proof of Proposition~\ref{prop:local_Poincare}] \textit{Proof that \eqref{Bakry-Emery_criterion} $\Rightarrow$ \eqref{local_ergodicity}.} Let us show that the matrix Bakry--\'Emery criterion \eqref{Bakry-Emery_criterion} implies local ergodicity~\eqref{local_ergodicity} of the carr\'e du champ operator. Given any suitable $\mtx{f}$ and any $t\geq 0$, construct the function $\mtx{A}(s) := P_{t-s}\Gamma(P_s\mtx{f})$ for $s\in[0,t]$. Then we have \begin{align*} \frac{\diff{} }{\diff s} \mtx{A}(s) &= - \mathcal{L} P_{t-s} \Gamma(P_s\mtx{f}) + P_{t-s}\Gamma(\mathcal{L} P_s\mtx{f},P_s\mtx{f}) + P_{t-s}\Gamma(P_s\mtx{f},\mathcal{L} P_s\mtx{f}) \\ &= - P_{t-s}\big( \mathcal{L} \Gamma(P_s\mtx{f}) - \Gamma(\mathcal{L} P_s\mtx{f},P_s\mtx{f}) -\Gamma(P_s\mtx{f},\mathcal{L} P_s\mtx{f})\big) \\ &= -2 P_{t-s} \Gamma_2(P_s\mtx{f})\\ &\preccurlyeq -2c^{-1} P_{t-s}\Gamma(P_s\mtx{f})\\ &= -2c^{-1} \mtx{A}(s). \end{align*} The inequality follows from \eqref{Bakry-Emery_criterion} and the fact that $P_{t-s}$ preserves the semidefinite order. Apply Lemma~\ref{lem:matrix_differential_inequality} to reach the bound $\mtx{A}(t)\preccurlyeq \mathrm{e}^{-2t/c} \mtx{A}(0)$. This yields \eqref{local_ergodicity} because $\mtx{A}(t) = \Gamma(P_t\mtx{f})$ and $\mtx{A}(0) = P_t\Gamma(\mtx{f})$.
\textit{Proof that \eqref{local_ergodicity} $\Rightarrow$ \eqref{local_Poincare}.} Next, we argue that local ergodicity of the carr\'e du champ operator \eqref{local_ergodicity} implies the local matrix Poincar\'e inequality \eqref{local_Poincare}. Construct the function $\mtx{B}(s) := P_{t-s}((P_s\mtx{f})^2)$ for $s\in[0,t]$. Taking the derivative with respect to $s$ gives \begin{align*} \frac{\diff{} }{\diff s} \mtx{B}(s) =&\ - \mathcal{L} P_{t-s} ((P_s\mtx{f})^2) + P_{t-s}(\mathcal{L}(P_s\mtx{f})P_s\mtx{f}) + P_{t-s}(P_s\mtx{f}\mathcal{L}(P_s\mtx{f})) \\ =&\ - P_{t-s}\left( \mathcal{L} ((P_s\mtx{f})^2) - \mathcal{L}(P_s\mtx{f})P_s\mtx{f} - P_s\mtx{f}\mathcal{L}(P_s\mtx{f})\right) \\ =&\ -2 P_{t-s} \Gamma(P_s\mtx{f})\\ \succcurlyeq&\ -2\mathrm{e}^{-2s/c} P_{t-s}P_s\Gamma(\mtx{f})\\ =&\ -2\mathrm{e}^{-2s/c} P_t\Gamma(\mtx{f}). \end{align*} The inequality applies the local ergodicity property~\eqref{local_ergodicity}, using that $P_{t-s}$ preserves the semidefinite order. Therefore, \[P_t(\mtx{f}^2) - (P_t\mtx{f})^2 = \mtx{B}(0) - \mtx{B}(t) \preccurlyeq 2\int_0^t\mathrm{e}^{-2s/c}\idiff s \cdot P_t\Gamma(\mtx{f}) = c \, (1-\mathrm{e}^{-2t/c})\,P_t\Gamma(\mtx{f}). \] This is the local Poincar\'e inequality~\eqref{local_Poincare}.
\textit{Proof that \eqref{local_Poincare} $\Rightarrow$ \eqref{Bakry-Emery_criterion}.} Last, we show that the local matrix Poincar\'e inequality \eqref{local_Poincare} implies the matrix Bakry--\'Emery criterion \eqref{Bakry-Emery_criterion}. Construct the function $\mtx{C}(t) := P_t(\mtx{f}^2) - (P_t\mtx{f})^2 - c\,(1-\mathrm{e}^{-2t/c})\,P_t\Gamma(\mtx{f})$. Evidently, $\mtx{C}(0) = \mtx{0}$, and the local Poincar{\'e} inequality \eqref{local_Poincare} implies that $\mtx{C}(t)\preccurlyeq \mtx{0}$ for all $t\geq 0$. Now, the first derivative satisfies \begin{align*}
\frac{\diff{} }{\diff t}\bigg|_{t=0} \mtx{C}(t)
= \mathcal{L}(\mtx{f}^2) -\mathcal{L}(\mtx{f})\mtx{f} - \mtx{f}\mathcal{L}(\mtx{f}) - 2\Gamma(\mtx{f}) = \mtx{0}. \end{align*} The second derivative takes the form \begin{align*}
\frac{\diff{^2}}{\diff t^2}\bigg|_{t=0} \mtx{C}(t) &= \mathcal{L}^2(\mtx{f}^2)-\mathcal{L}^2(\mtx{f})\mtx{f} - \mtx{f}\mathcal{L}^2(\mtx{f}) - 2(\mathcal{L}\mtx{f})^2 + 4c^{-1}\Gamma(\mtx{f}) - 4\mathcal{L}\Gamma(\mtx{f}) \\ &= 4c^{-1}\left(\Gamma(\mtx{f}) - c\Gamma_2(\mtx{f})\right). \end{align*} Therefore,
\[\Gamma(\mtx{f}) - c\Gamma_2(\mtx{f}) = \frac{c}{4}\frac{\diff{^2}}{\diff t^2} \bigg|_{t=0} \mtx{C}(t) = \frac{c}{2}\lim_{t\rightarrow 0}\frac{\mtx{C}(t)}{t^2} \preccurlyeq \mtx{0}.\] This verifies the validity of the matrix Bakry--\'Emery criterion with constant $c$. \end{proof}
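\noindent The limit relation in the final display is the second-order Taylor expansion: since $\mtx{C}(0) = \mtx{0}$ and the first derivative of $\mtx{C}$ vanishes at $t=0$,
\[
\mtx{C}(t) = \frac{t^2}{2}\,\frac{\diff{^2}}{\diff t^2}\bigg|_{t=0} \mtx{C}(t) + \mathrm{o}(t^2)\quad\text{as $t\rightarrow 0$,}\qquad\text{so}\qquad \lim_{t\rightarrow 0}\frac{\mtx{C}(t)}{t^2} = \frac{1}{2}\,\frac{\diff{^2}}{\diff t^2}\bigg|_{t=0} \mtx{C}(t).
\]
The limit is negative semidefinite because each matrix $\mtx{C}(t)/t^2$ is negative semidefinite and the cone of negative-semidefinite matrices is closed.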
\section{From curvature conditions to matrix moment inequalities} \label{sec:trace_to_moment}
The main results of this paper, Theorems~\ref{thm:polynomial_moment} and~\ref{thm:exponential_moment}, demonstrate that the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} leads to trace moment inequalities for random matrices. This section is dedicated to the proofs of these theorems. These arguments appear to be new, even in the scalar setting, but see~\cite{Led92:Heat-Semigroup,Sch99:Curvature-Nonlocal} for some precedents.
\subsection{Overview}
Let $(P_t)_{t \geq 0}$ be a reversible, ergodic semigroup acting on matrix-valued functions. Assume that the semigroup satisfies a Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}, so Proposition~\ref{prop:local_Poincare} implies that it is locally ergodic. Without loss of generality, we may assume that the matrix-valued function $\mtx{f}$ is zero-mean: $\Expect_\mu\mtx{f}=\mtx{0}$.
For a standard matrix function $\varphi$, the basic idea is to estimate a trace moment of the form $\Expect_\mu\operatorname{tr}[\mtx{f}\,\varphi(\mtx{f})]$ via a classic semigroup argument: \[\Expect_\mu\operatorname{tr}[\mtx{f}\,\varphi(\mtx{f})] =\Expect_\mu\operatorname{tr}[P_0(\mtx{f})\,\varphi(\mtx{f})] = \lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})] - \int_0^\infty\frac{\diff{}}{\diff t}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})]\idiff t.\] By ergodicity \eqref{eqn:ergodicity}, $\lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})] = \Expect_\mu\operatorname{tr}[(\Expect_\mu\mtx{f})\,\varphi(\mtx{f})] = 0$. In the second term on the right-hand side, the time derivative places the infinitesimal generator $\mathcal{L}$ in the integrand, which then becomes \begin{equation} \label{eqn:overview_gamma} -\Expect_\mu\operatorname{tr}[\mathcal{L}(P_t\mtx{f})\,\varphi(\mtx{f})] = \Expect_\mu\operatorname{tr} \Gamma(P_t\mtx{f},\varphi(\mtx{f})). \end{equation} This familiar formula is the starting point for our method.
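Here is one way to justify the identity \eqref{eqn:overview_gamma}, using only invariance and reversibility, both of which are in force in this section. Abbreviate $\mtx{g} := P_t\mtx{f}$ and $\mtx{h} := \varphi(\mtx{f})$. Invariance of the stationary measure gives $\Expect_\mu \mathcal{L}(\mtx{g}\mtx{h}) = \mtx{0}$, while reversibility and the cyclicity of the trace give $\Expect_\mu\operatorname{tr}[\mtx{g}\,\mathcal{L}(\mtx{h})] = \Expect_\mu\operatorname{tr}[\mathcal{L}(\mtx{g})\,\mtx{h}]$. Therefore, for the polarized carr\'e du champ,
\[
\Expect_\mu\operatorname{tr}\Gamma(\mtx{g},\mtx{h}) = -\frac{1}{2}\,\Expect_\mu\operatorname{tr}\big[\mtx{g}\,\mathcal{L}(\mtx{h}) + \mathcal{L}(\mtx{g})\,\mtx{h}\big] = -\Expect_\mu\operatorname{tr}\big[\mathcal{L}(\mtx{g})\,\mtx{h}\big],
\]
which is \eqref{eqn:overview_gamma}.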
To control the trace of the carr{\'e} du champ, we employ the following fundamental lemma, which is related to the Stroock--Varopoulos inequality~\cite{Str84:Introduction-Theory,Var85:Hardy-Littlewood-Theory}.
\begin{lemma}[Chain rule inequality]\label{lem:key_Gamma}
Let $\varphi:\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $\psi := |\varphi'|$ is convex. For all suitable $\mtx{f},\mtx{g}:\Omega\rightarrow \mathbb{H}_d$, \begin{equation*}\label{eqn:general_Gamma} \Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f}))\leq \Big(\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{f})\,\psi(\mtx{f})\right]\cdot\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{g})\,\psi(\mtx{f})\right]\Big)^{1/2}. \end{equation*} \end{lemma}
\noindent The proof of this lemma appears below in Section~\ref{sec:key_lemma}.
Lemma~\ref{lem:key_Gamma} isolates the contributions from the matrix $P_t \mtx{f}$ and the matrix $\varphi(\mtx{f})$ in the formula~\eqref{eqn:overview_gamma}. To estimate $\Gamma(P_t \mtx{f})$, we invoke the local ergodicity property, Proposition~\ref{prop:local_Poincare}\eqref{local_ergodicity}. Last, we apply matrix decoupling techniques, based on H{\"o}lder and Young trace inequalities, to bound $\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{f})\,\psi(\mtx{f})\right]$ and $\Expect_\mu\operatorname{tr} \left[\Gamma(P_t\mtx{f})\,\psi(\mtx{f})\right]$ in terms of the original quantity of interest $\Expect_{\mu}\operatorname{tr}[\mtx{f} \, \varphi(\mtx{f})]$. The following sections supply full details.
Our approach incorporates some techniques and ideas from~\cite[Theorems 4.2 and 4.3]{paulin2016efron}, but the argument is distinct. Appendix~\ref{apdx:Stein_method} gives more details about the connection.
\subsection{Proof of chain rule inequality} \label{sec:key_lemma}
To prove Lemma~\ref{lem:key_Gamma}, we require a novel trace inequality.
\begin{lemma}[Mean value trace inequality]\label{lem:mean_value_inequality} Let $\varphi:\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $\psi:= |\varphi'|$ is convex. For all $\mtx{A},\mtx{B},\mtx{C}\in\mathbb{H}_d$, \[\operatorname{tr}\left[\mtx{C} \, \big(\varphi(\mtx{A})-\varphi(\mtx{B})\big)\right]\leq \inf_{s>0} \frac{1}{4}\operatorname{tr}\left[\left(s\,(\mtx{A}-\mtx{B})^2+s^{-1}\,\mtx{C}^2\right)\big(\psi(\mtx{A})+\psi(\mtx{B})\big)\right].\] \end{lemma}
Lemma~\ref{lem:mean_value_inequality} is a common generalization of \cite[Lemmas 9.2 and 12.2]{paulin2016efron}. Roughly speaking, it exploits convexity to bound the difference $\varphi(\mtx{A})-\varphi(\mtx{B})$ in the spirit of the mean value theorem. We defer the proof of Lemma~\ref{lem:mean_value_inequality} to Appendix~\ref{apdx:mean_value}.
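In the scalar case $d=1$, the statement is easy to verify, and the calculation indicates why convexity of $\psi$ enters. For real numbers $a,b,c$ and differentiable $\varphi$,
\[
\abs{\varphi(a)-\varphi(b)} \leq \abs{a-b}\int_0^1 \psi\big(b+\tau(a-b)\big)\idiff \tau \leq \abs{a-b}\cdot\frac{\psi(a)+\psi(b)}{2},
\]
where the last step integrates the convexity bound $\psi(b+\tau(a-b))\leq (1-\tau)\,\psi(b)+\tau\,\psi(a)$. Combining this estimate with the inequality $\abs{c}\cdot\abs{a-b}\leq \frac{1}{2}\big(s\,(a-b)^2+s^{-1}c^2\big)$, valid for each $s>0$, yields the scalar form of the lemma. The matrix case is more delicate because $\mtx{A}$, $\mtx{B}$, and $\mtx{C}$ need not commute.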
\begin{proof}[Proof of Lemma~\ref{lem:key_Gamma} from Lemma~\ref{lem:mean_value_inequality}] For simplicity, we abbreviate \[\mtx{f}_t = \mtx{f}(Z_t),\quad \mtx{g}_t = \mtx{g}(Z_t) \quad\text{and}\quad \mtx{f}_0 = \mtx{f}(Z_0), \quad \mtx{g}_0 = \mtx{g}(Z_0).\] By Proposition~\ref{prop:Gamma_property}\eqref{limit_formula}, \begin{equation}\label{step:key_lemma_1} \begin{split}
\Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f})) =&\ \Expect_{Z\sim\mu}\operatorname{tr}\lim_{t\downarrow 0}\frac{1}{2t} \Expect\left[\left(\mtx{g}_t-\mtx{g}_0\right)\left(\varphi(\mtx{f}_t)-\varphi(\mtx{f}_0)\right)\,\big|\,Z_0=Z\right]\\
=&\ \Expect_{Z\sim\mu}\lim_{t\downarrow 0}\frac{1}{2t} \Expect \left[\operatorname{tr}\left[\left(\mtx{g}_t-\mtx{g}_0\right)\left(\varphi(\mtx{f}_t)-\varphi(\mtx{f}_0)\right)\right]\,\big|\,Z_0=Z\right]. \end{split} \end{equation} Fix a parameter $s > 0$. For each $t > 0$, the mean value trace inequality, Lemma~\ref{lem:mean_value_inequality}, yields \begin{equation}\label{step:key_lemma_2} \begin{split} \operatorname{tr} \left[\left(\mtx{g}_t-\mtx{g}_0\right)\big(\varphi(\mtx{f}_t)-\varphi(\mtx{f}_0)\big)\right] &\leq \frac{1}{4}\operatorname{tr}\left[\left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\big(\psi(\mtx{f}_t)+\psi(\mtx{f}_0)\big)\right]\\ &= \frac{1}{2} \operatorname{tr}\left[\left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\psi(\mtx{f}_0)\right]\\ &\qquad + \frac{1}{4} \operatorname{tr}\left[\left(s(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\big(\psi(\mtx{f}_t)-\psi(\mtx{f}_0)\big)\right]. \end{split} \end{equation} It follows from the triple product result, Lemma~\ref{lem:three_limit}, that the second term satisfies \begin{equation}\label{step:key_lemma_3}
\Expect_{Z\sim\mu}\lim_{t\downarrow 0}\frac{1}{t} \operatorname{tr}\Expect \left[ \left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\big(\psi(\mtx{f}_t)-\psi(\mtx{f}_0)\big) \,\big|\,Z_0=Z\right] =0. \end{equation} Sequence the displays \eqref{step:key_lemma_1}, \eqref{step:key_lemma_2}, and \eqref{step:key_lemma_3} to reach \begin{align*}
\Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f}))&\leq \frac{1}{2}\Expect_{Z\sim\mu}\lim_{t\downarrow 0} \frac{1}{2t} \operatorname{tr} \Expect\left[\left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\psi(\mtx{f}_0) \,\big|\,Z_0=Z\right] \\
&= \frac{1}{2}\Expect_{Z\sim\mu}\operatorname{tr}\Big[\Big(s\,\lim_{t\downarrow0}\frac{1}{2t} \Expect[(\mtx{f}_t-\mtx{f}_0)^2\,|\,Z_0=Z] \\
&\qquad\qquad\qquad\ + s^{-1}\,\lim_{t\downarrow0}\frac{1}{2t} \Expect[(\mtx{g}_t-\mtx{g}_0)^2\,|\,Z_0=Z] \Big)\psi(\mtx{f}(Z))\Big] \\ &= \frac{1}{2} \Expect_\mu\operatorname{tr} \left[\left(s\,\Gamma(\mtx{f}) +s^{-1}\,\Gamma(\mtx{g})\right)\,\psi(\mtx{f})\right]. \end{align*} The last relation is Proposition~\ref{prop:Gamma_property}\eqref{limit_formula}. Minimize the right-hand side over $s\in(0,\infty)$ to arrive at \[\Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f}))\leq \big(\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{f})\,\psi(\mtx{f})\right]\big)^{1/2}\cdot\big(\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{g})\,\psi(\mtx{f})\right]\big)^{1/2}.\] This completes the proof of Lemma~\ref{lem:key_Gamma}. \end{proof}
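\noindent The final minimization rests on the elementary fact that
\[
\inf_{s>0}\ \frac{1}{2}\big(sX+s^{-1}Y\big) = \sqrt{XY} \quad\text{for } X,Y\geq 0,
\]
with the infimum attained at $s=\sqrt{Y/X}$ when $X>0$.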
\subsection{Polynomial moments} \label{sec:polynomial_moments_proof}
This section is dedicated to the proof of Theorem~\ref{thm:polynomial_moment}, which states that the Bakry--{\'E}mery criterion implies matrix polynomial moment bounds.
\subsubsection{Setup}
Consider a reversible, ergodic Markov semigroup $(P_t)_{t \geq 0}$ with stationary measure $\mu$. Assume that the semigroup satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} with constant $c > 0$. By Proposition~\ref{prop:local_Poincare}, this is equivalent to local ergodicity.
Fix a suitable function $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$. Proposition~\ref{prop:Gamma_property}\eqref{limit_formula} implies that the carr{\'e} du champ is shift invariant. In particular, $\Gamma(\mtx{f}) = \Gamma(\mtx{f}-\Expect_\mu\mtx{f})$. Therefore, we may assume that $\Expect_\mu\mtx{f}=\mtx{0}$.
The quantity of interest is \[
\Expect_\mu\operatorname{tr} |\mtx{f}|^{2q}
= \Expect_\mu\operatorname{tr} \left[\mtx{f}\cdot \sgn(\mtx{f})\cdot |\mtx{f}|^{2q-1}\right]
=: \Expect_{\mu} \operatorname{tr} \left[ \mtx{f} \, \varphi(\mtx{f}) \right]. \] We have introduced the signed moment function $\varphi: x\mapsto \sgn(x)\cdot \abs{x}^{2q-1}$ for $x \in \mathbbm{R}$. Note that the absolute derivative $\psi(x) := \abs{\varphi'(x)} = (2q-1)\abs{x}^{2q-2}$ is convex when $q= 1$ or when $q\geq 1.5$.
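To check the convexity claim, note that $\psi$ is constant when $q=1$, while for $q\geq 1.5$ and $x\neq 0$,
\[
\psi''(x) = (2q-1)(2q-2)(2q-3)\,\abs{x}^{2q-4}\geq 0,
\]
and, since the exponent $2q-2\geq 1$, the function $x\mapsto\abs{x}^{2q-2}$ is convex on all of $\mathbbm{R}$. For $q\in(1,1.5)$, the exponent $2q-2$ lies in $(0,1)$, and $\psi$ fails to be convex.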
\begin{remark}[Missing powers] A similar argument holds when $q \in (1, 1.5)$. It requires a variant of Lemma~\ref{lem:key_Gamma} that holds for monotone $\psi$, but has an extra factor of $2$ on the right-hand side. \end{remark}
\subsubsection{A Markov semigroup argument}
By the ergodicity assumption \eqref{eqn:ergodicity}, it holds that \[\lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})] = \Expect_\mu\operatorname{tr}[(\Expect_\mu\mtx{f})\,\varphi(\mtx{f})] = 0.\] Therefore, \begin{equation}\label{step:polynomial_1} \begin{split} \Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q} &= \Expect_\mu\operatorname{tr} \left[P_0\mtx{f}\, \varphi(\mtx{f})\right] - \lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})]\\ &= -\int_0^\infty\frac{\diff{} }{\diff t}\Expect_\mu\operatorname{tr}\left[(P_t\mtx{f}) \, \varphi(\mtx{f})\right]\idiff t = -\int_0^\infty\Expect_\mu\operatorname{tr}\left[\mathcal{L}(P_t\mtx{f}) \, \varphi(\mtx{f})\right]\idiff t. \end{split} \end{equation} By convexity of $\psi$, we can invoke the chain rule inequality, Lemma~\ref{lem:key_Gamma}, to obtain \begin{equation}\label{step:polynomial_2} \begin{split} - \Expect_\mu\operatorname{tr}\left[\mathcal{L}(P_t\mtx{f})\, \varphi(\mtx{f})\right] =&\ \Expect_\mu\operatorname{tr}\Gamma(P_t\mtx{f},\varphi(\mtx{f}))\\ \leq&\ \left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})\,\psi(\mtx{f}) \right]\cdot \Expect_\mu\operatorname{tr}\left[\Gamma(P_t\mtx{f})\,\psi(\mtx{f})\right]\right)^{1/2} \\ =&\ (2q-1)\left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})\abs{\mtx{f}}^{2q-2} \right]\cdot \Expect_\mu\operatorname{tr}\left[\Gamma(P_t\mtx{f})\abs{\mtx{f}}^{2q-2}\right]\right)^{1/2}\\ \leq&\ (2q-1)\,\mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})\abs{\mtx{f}}^{2q-2} \right]\cdot \Expect_\mu\operatorname{tr}\left[(P_t\Gamma(\mtx{f}))\abs{\mtx{f}}^{2q-2}\right]\right)^{1/2}. \end{split} \end{equation} The last inequality is the local ergodicity condition, Proposition~\ref{prop:local_Poincare}\eqref{local_ergodicity}.
\subsubsection{Decoupling}
Apply H\"older's inequality for the trace followed by H\"older's inequality for the expectation to obtain \begin{equation} \label{step:polynomial_2.5} \begin{aligned}
\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f}) \abs{\mtx{f}}^{2q-2} \right] &\leq \left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \right)^{1/q}\cdot \left(\Expect_\mu\operatorname{tr}|\mtx{f}|^{2q}\right)^{(q-1)/q} \quad\text{and} \\ \Expect_\mu\operatorname{tr}\left[(P_t\Gamma(\mtx{f})) \abs{\mtx{f}}^{2q-2}\right] &\leq \left(\Expect_\mu\operatorname{tr}{} (P_t\Gamma(\mtx{f}))^q \right)^{1/q}\cdot \big(\Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q}\big)^{(q-1)/q}. \end{aligned} \end{equation} Introduce the bounds~\eqref{step:polynomial_2.5} into \eqref{step:polynomial_2} to find that \begin{equation}\label{step:polynomial_3} \begin{split} & - \Expect_\mu\operatorname{tr}\left[\mathcal{L}(P_t\mtx{f}) \,\varphi(\mtx{f})\right] \\ &\qquad\qquad \leq (2q-1)\,\mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \cdot \Expect_\mu\operatorname{tr}{}(P_t\Gamma(\mtx{f}))^q \right)^{1/(2q)} \big(\Expect_\mu\operatorname{tr}\abs{\mtx{f}}^{2q}\big)^{(q-1)/q}. \end{split} \end{equation} Substitute \eqref{step:polynomial_3} into \eqref{step:polynomial_1} and rearrange the expression to reach \begin{equation}\label{step:polynomial_4} \big(\Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q}\big)^{1/q}\leq (2q-1)\left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \right)^{1/(2q)} \int_0^{\infty} \mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{tr}{} (P_t\Gamma(\mtx{f}))^q\right)^{1/(2q)}\idiff t. \end{equation} It remains to remove the semigroup from the integral.
\subsubsection{Endgame}
The trace power $\operatorname{tr}[ (\cdot)^q ]$ is convex on $\mathbb{H}_d$ for $q\geq1$; see~\cite[Theorem 2.10]{carlen2010trace}. Therefore, the Jensen inequality \eqref{eqn:semigroup_Jensen_2} for the semigroup implies that \begin{equation}\label{step:polynomial_5} \Expect_\mu\operatorname{tr}{} (P_t\Gamma(\mtx{f}))^q \leq \Expect_\mu \operatorname{tr} \Gamma(\mtx{f})^q. \end{equation} Substituting \eqref{step:polynomial_5} into \eqref{step:polynomial_4} yields \[\big(\Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q}\big)^{1/q}\leq (2q-1) \left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \right)^{1/q} \int_0^\infty \mathrm{e}^{-t/c} \idiff t = c \, (2q-1)\left(\Expect_\mu\operatorname{tr}\Gamma(\mtx{f})^q\right)^{1/q}.\] This establishes \eqref{eqn:polynomial_moment_1}.
Define the uniform bound $v_{\mtx{f}} := \norm{ \norm{ \Gamma(\mtx{f}) } }_{L_{\infty}(\mu)}$. We have the further estimate \[\left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})^q\right]\right)^{1/(2q)}\leq d^{1/(2q)} \sqrt{v_{\mtx{f}}}.\] The statement \eqref{eqn:polynomial_moment_2} now follows from \eqref{eqn:polynomial_moment_1}. This completes the proof of Theorem~\ref{thm:polynomial_moment}.
\subsection{Exponential moments} \label{sec:exponential_moments_proof}
In this section, we establish Theorem~\ref{thm:exponential_concentration}, the exponential matrix concentration inequality. The main technical ingredient is a bound on exponential moments:
\begin{theorem}[Exponential moments]\label{thm:exponential_moment} Instate the hypotheses of Theorem~\ref{thm:exponential_concentration}. For all $\theta\in(-\sqrt{\beta/c},\sqrt{\beta/c})$, \begin{equation}\label{eqn:exponential_moment_1} \log\Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta(\mtx{f}-\Expect_\mu\mtx{f})} \leq \frac{c\theta^2 r_{\mtx{f}}(\beta)}{2(1-c\theta^2/\beta)}. \end{equation} Moreover, if $v_{\mtx{f}} <+\infty$, then \begin{equation}\label{eqn:exponential_moment_2} \log\Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta(\mtx{f}-\Expect_\mu\mtx{f})} \leq \frac{cv_{\mtx{f}}\theta^2}{2} \quad\text{for all $\theta \in \mathbbm{R}$.} \end{equation} \end{theorem}
\noindent The proof of Theorem~\ref{thm:exponential_moment} occupies the rest of this subsection. Afterward, in Section~\ref{sec:exponential_concentration_proof}, we derive Theorem~\ref{thm:exponential_concentration}.
\subsubsection{Setup}
As usual, we consider a reversible, ergodic Markov semigroup $(P_t)_{t \geq 0}$ with stationary measure $\mu$. Assume that the semigroup satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} for a constant $c > 0$, so it is locally ergodic.
Choose a suitable function $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$. We may assume that $\Expect_\mu\mtx{f}=\mtx{0}$. Furthermore, we only need to consider the case $\theta \geq 0$. The results for $\theta < 0$ follow formally under the change of variables $\theta \mapsto - \theta$ and $\mtx{f} \mapsto - \mtx{f}$.
The quantity of interest is the normalized trace mgf: \[m(\theta) := \Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta\mtx{f}}\quad \text{for}\ \theta\geq0.\] We will bound the derivative of this function: \[ m'(\theta) = \Expect_\mu \operatorname{\bar{\trace}} \left[\mtx{f} \, \mathrm{e}^{\theta\mtx{f}}\right]
=: \Expect_{\mu} \operatorname{\bar{\trace}} [ \mtx{f} \, \varphi(\mtx{f}) ]. \] We have introduced the function $\varphi : x \mapsto \mathrm{e}^{\theta x}$ for $x \in \mathbbm{R}$. Note that its absolute derivative $\psi(x) := \abs{ \varphi'(x) } = \theta \mathrm{e}^{\theta x}$ is a convex function, since $\theta \geq 0$. Here and elsewhere, we use the properties of the trace mgf that are collected in Proposition~\ref{prop:trace_mgf}.
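The derivative formula for $m$ is a term-by-term computation with the power series of the matrix exponential; since all powers of $\mtx{f}$ commute, no ordering issues arise: \[\frac{\diff{}}{\diff\theta}\operatorname{\bar{\trace}} \mathrm{e}^{\theta\mtx{f}} = \operatorname{\bar{\trace}} \sum_{k\geq 1}\frac{k\,\theta^{k-1}\mtx{f}^k}{k!} = \operatorname{\bar{\trace}} \bigl[\mtx{f}\,\mathrm{e}^{\theta\mtx{f}}\bigr].\]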
\subsubsection{A Markov semigroup argument}
By the ergodicity assumption \eqref{eqn:ergodicity}, we have \begin{equation}\label{step:exponential_1} \begin{split} m'(\theta) &= \Expect_\mu \operatorname{\bar{\trace}} \left[P_0\mtx{f}\,\mathrm{e}^{\theta\mtx{f}}\right] - \lim_{t\rightarrow\infty}\Expect_\mu\operatorname{\bar{\trace}}\left[P_t(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] \\ &= -\int_0^\infty \frac{\diff{}}{\diff{t}}\Expect_\mu \operatorname{\bar{\trace}} \left[P_t(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\diff t = -\int_0^\infty \Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\diff t. \end{split} \end{equation} Invoke the chain rule inequality, Lemma~\ref{lem:key_Gamma}, to obtain \begin{equation}\label{step:exponential_2} \begin{split} -\Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] &= \Expect_\mu \operatorname{\bar{\trace}} \Gamma(P_t\mtx{f},\mathrm{e}^{\theta\mtx{f}})\\ &\leq \theta \left(\Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\cdot \Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\right)^{1/2} \\ &\leq \theta \mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\cdot \Expect_\mu\operatorname{\bar{\trace}} \left[(P_t\Gamma(\mtx{f}))\, \mathrm{e}^{\theta\mtx{f}}\right]\right)^{1/2}. \end{split} \end{equation} The second inequality is the local ergodicity condition, Proposition~\ref{prop:local_Poincare}\eqref{local_ergodicity}.
\subsubsection{Decoupling}
The next step is to use an entropy inequality to separate the carr\'e du champ operator in \eqref{step:exponential_2} from the matrix exponential. The following trace inequality appears as \cite[Proposition A.3]{mackey2014}; see also \cite[Theorem 2.13]{carlen2010trace}.
\begin{fact}[Young's inequality for matrix entropy]\label{lem:Young_inequality} Let $\mtx{X}$ be a random matrix in $\mathbb{H}_d$, and let $\mtx{Y}$ be a random matrix in $\mathbb{H}_d^+$ such that $\Expect\operatorname{\bar{\trace}} \mtx{Y} = 1$. Then \begin{equation*}\label{eqn:Young_inequality} \Expect \operatorname{\bar{\trace}}\left[\mtx{X}\mtx{Y}\right] \leq \log\Expect\operatorname{\bar{\trace}}\mathrm{e}^{\mtx{X}} + \Expect\operatorname{\bar{\trace}}\left[\mtx{Y}\log \mtx{Y}\right]. \end{equation*} \end{fact}
Apply Fact~\ref{lem:Young_inequality} to see that, for any $\beta>0$, \begin{equation}\label{step:exponential_3} \begin{split} \Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(\mtx{f}) \, \mathrm{e}^{\theta\mtx{f}}\right] &= \frac{m(\theta)}{\beta} \Expect_\mu\operatorname{\bar{\trace}} \left[\beta \Gamma(\mtx{f})\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]\\ &\leq \frac{m(\theta)}{\beta} \left(\log\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta\Gamma(\mtx{f})\right) + \Expect_\mu\operatorname{\bar{\trace}}\left[\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]\right) \\ &= m(\theta)\, r(\beta) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]. \end{split} \end{equation} We have identified the exponential mean $r(\beta) := \beta^{-1}\log\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta\Gamma(\mtx{f}) \right)$; since $\mtx{f}$ is fixed throughout the proof, we suppress it from the notation $r_{\mtx{f}}(\beta)$ of Theorem~\ref{thm:exponential_concentration}.
Likewise, \begin{equation*} \Expect_\mu\operatorname{\bar{\trace}} \left[(P_t\Gamma(\mtx{f}))\,\mathrm{e}^{\theta\mtx{f}}\right]\leq \frac{m(\theta)}{\beta} \log\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta P_t\Gamma(\mtx{f})\right) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]. \end{equation*} The trace exponential $\operatorname{\bar{\trace}} \exp(\cdot)$ is a convex function on $\mathbb{H}_d$; see \cite[Theorem 2.10]{carlen2010trace}. The Jensen inequality \eqref{eqn:semigroup_Jensen_2} for the semigroup implies that \begin{equation*}\label{step:exponential_5} \Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta P_t\Gamma(\mtx{f})\right) \leq \Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta\Gamma(\mtx{f})\right)
= \exp \left(\beta r(\beta)\right). \end{equation*} Combine the last two displays to obtain \begin{equation}\label{step:exponential_4} \Expect_\mu\operatorname{\bar{\trace}} \left[(P_t\Gamma(\mtx{f}))\,\mathrm{e}^{\theta\mtx{f}}\right]
\leq m(\theta)\, r(\beta) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]. \end{equation} Thus, the two expectation factors on the right-hand side of~\eqref{step:exponential_2} admit matching bounds.
Sequence the displays \eqref{step:exponential_2}, \eqref{step:exponential_3}, and \eqref{step:exponential_4} to reach \begin{equation}\label{step:exponential_6} -\Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] \leq \mathrm{e}^{-t/c}\theta \left( m(\theta)\, r(\beta) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right] \right). \end{equation} This is the integrand in \eqref{step:exponential_1}. Next, we simplify this expression to arrive at a differential inequality.
\subsubsection{A differential inequality}
In view of Proposition~\ref{prop:trace_mgf}\eqref{eqn:m.g.f_Property_1}, we have $\log m(\theta)\geq0$ and hence \[\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)} = \theta \mtx{f} - \log m(\theta) \cdot \mathbf{I} \preccurlyeq \theta \mtx{f}.\] It follows that \begin{equation}\label{step:exponential_7} \Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]\leq \theta \Expect_\mu \operatorname{\bar{\trace}}\left[\mtx{f}\,\mathrm{e}^{\theta\mtx{f}}\right] = \theta\, m'(\theta). \end{equation} Combine \eqref{step:exponential_6} and \eqref{step:exponential_7} to reach \[- \Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] \leq \mathrm{e}^{-t/c}\theta \left(m(\theta)\,r(\beta) + \frac{\theta}{\beta} m'(\theta) \right).\] Substitute this bound into \eqref{step:exponential_1} and compute the integral to arrive at the differential inequality \begin{equation}\label{eqn:differential_inequality} m'(\theta)\leq c\theta \,m(\theta)\,r(\beta) + \frac{c\theta^2}{\beta} m'(\theta)\quad\text{for $\theta\geq 0$.} \end{equation} Finally, we need to solve for the trace mgf.
\subsubsection{Solving the differential inequality} Fix parameters $\theta$ and $\beta$ where $0\leq \theta <\sqrt{\beta/c}$. By rearranging the expression \eqref{eqn:differential_inequality}, we find that \[\frac{\diff{} }{\diff \zeta}\log m(\zeta) \leq \frac{c\zeta \, r(\beta)}{1-c\zeta^2/\beta}\leq \frac{c\zeta\,r(\beta)}{1-c\theta^2/\beta} \quad\text{for $\zeta \in (0, \theta]$.} \] Since $\log m(0) = 0$, we can integrate this bound over $[0,\theta]$ to obtain \[\log m(\theta) \leq \frac{c\theta^2 r(\beta)}{2(1-c\theta^2/\beta)}.\] This is the first claim \eqref{eqn:exponential_moment_1}.
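Spelled out, the final integration is elementary because the bound on the integrand is linear in $\zeta$: \[\log m(\theta) = \int_0^\theta \frac{\diff{}}{\diff\zeta}\log m(\zeta)\idiff \zeta \leq \frac{c\, r(\beta)}{1-c\theta^2/\beta}\int_0^\theta \zeta \idiff \zeta = \frac{c\theta^2 r(\beta)}{2(1-c\theta^2/\beta)}.\]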
Moreover, it is easy to check that $r(\beta)\leq v_{\mtx{f}}$. Since this bound is independent of $\beta$, we can take $\beta\rightarrow +\infty$ in \eqref{eqn:exponential_moment_1} to achieve \eqref{eqn:exponential_moment_2}. This completes the proof of Theorem~\ref{thm:exponential_moment}.
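For completeness, the bound $r(\beta)\leq v_{\mtx{f}}$ is a one-line verification, assuming (as in the definition of the variance proxy) that $\Gamma(\mtx{f})\preccurlyeq v_{\mtx{f}}\,\mathbf{I}$ almost surely. Monotonicity of the trace exponential gives \[r(\beta) = \frac{1}{\beta}\log \Expect_\mu \operatorname{\bar{\trace}} \exp\bigl(\beta\Gamma(\mtx{f})\bigr) \leq \frac{1}{\beta}\log \operatorname{\bar{\trace}} \exp\bigl(\beta v_{\mtx{f}}\,\mathbf{I}\bigr) = \frac{1}{\beta}\log \mathrm{e}^{\beta v_{\mtx{f}}} = v_{\mtx{f}}.\]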
\subsection{Exponential matrix concentration} \label{sec:exponential_concentration_proof}
We are now ready to prove Theorem~\ref{thm:exponential_concentration}, the exponential matrix concentration inequality, as a consequence of the moment bounds of Theorem~\ref{thm:exponential_moment}. To do so, we use the standard matrix Laplace transform method, summarized in Appendix~\ref{apdx:matrix_moments}.
\begin{proof}[Proof of Theorem~\ref{thm:exponential_concentration} from Theorem~\ref{thm:exponential_moment}] To obtain inequalities for the maximum eigenvalue $\lambda_{\max}$, we apply Proposition~\ref{prop:matrix_exponential_concentration} to the random matrix $\mtx{X} = \mtx{f}(Z) -\Expect_\mu\mtx{f}$ where $Z \sim \mu$. To do so, we first need to weaken the moment bound \eqref{eqn:exponential_moment_1}: \[\log\Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta(\mtx{f}-\Expect_\mu\mtx{f})} \leq \frac{c\theta^2r(\beta)}{2(1-c\theta^2/\beta)}\leq \frac{c\theta^2r(\beta)}{2(1-\theta\sqrt{c/\beta})}\quad \text{for $0\leq \theta < \sqrt{\beta/c}$}.\] Then substitute $c_1=c r(\beta)$ and $c_2 = \sqrt{c/\beta}$ into Proposition~\ref{prop:matrix_exponential_concentration} to achieve the results stated in Theorem~\ref{thm:exponential_concentration}.
To obtain bounds for the minimum eigenvalue $\lambda_{\min}$, we apply Proposition~\ref{prop:matrix_exponential_concentration} instead to the random matrix $\mtx{X} = -(\mtx{f}(Z) -\Expect_\mu\mtx{f})$ where $Z \sim \mu$. \end{proof}
\section{Bakry--{\'E}mery criterion for product measures} \label{sec:product_measure_all}
In this section, we introduce the classic Markov process for a product measure. We check the Bakry--{\'E}mery criterion for this Markov process, which leads to matrix concentration results for product measures.
\subsection{Product measures and Markov processes}
Consider a product space $\Omega = \Omega_1\times \Omega_2\times \cdots\times \Omega_n $ equipped with a product measure $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$. We can construct a Markov process $(Z_t)_{t\geq0} = (Z^1_t,Z^2_t,\dots,Z^n_t)_{t\geq 0}$ on $\Omega$ whose stationary measure is $\mu$. Let $\{N_t^i\}_{i=1}^n$ be a sequence of independent unit-rate Poisson processes. Whenever $N_t^i$ increases for some $i$, we replace the value of $Z_t^i$ in $Z_t$ by an independent sample from $\mu_i$ while keeping the remaining coordinates fixed.
To describe the Markov semigroup associated with this Markov process, we need some notation. For each subset $I\subseteq \{1,\dots,n\}$ and all $z,w\in\Omega$, define the interlacing operation \[(z;w)_I := (\eta^1,\eta^2,\dots,\eta^n)\quad \text{where}\quad \begin{cases} \eta^i = w^i, & i\in I; \\ \eta^i = z^i, & i\notin I. \end{cases} \] In particular, $(z;w)_{\emptyset} = z$, and we abbreviate $(z;w)_i = (z^1,\dots,z^{i-1},w^i,z^{i+1},\dots,z^n)$. In this section, the superscript stands for the index of the coordinate.
Let $Z = (Z^1,Z^2,\dots,Z^n)\in\Omega$ be a random vector drawn from the measure $\mu$; that is, each coordinate $Z^{i}\in\Omega_i$ is drawn independently from the measure $\mu_i$. Throughout this section, we write $\Expect_Z := \Expect_{Z\sim\mu}$. The Markov semigroup $(P_t)_{t\geq 0}$ induced by the Markov process is given by \begin{equation}\label{eqn:tensor_Markov}
P_t\mtx{f}(z) = \sum_{I\subseteq \{1,\dots,n\}}(1-\mathrm{e}^{-t})^{|I|}\mathrm{e}^{-t(n-|I|)} \cdot \Expect_Z \mtx{f}\big((z;Z)_I\big) \quad\text{for all $z\in\Omega$.} \end{equation} This formula is valid for every $\mu$-integrable function $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$. The ergodicity \eqref{eqn:ergodicity} of the semigroup follows immediately from~\eqref{eqn:tensor_Markov}
because $\lim_{t\rightarrow\infty}(1-\mathrm{e}^{-t})^{|I|}\mathrm{e}^{-t(n-|I|)}=0$ whenever $|I|<n$.
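For instance, when $n=1$ the sum in \eqref{eqn:tensor_Markov} has only the terms $I=\emptyset$ and $I=\{1\}$, so \[P_t\mtx{f}(z) = \mathrm{e}^{-t}\,\mtx{f}(z) + (1-\mathrm{e}^{-t})\,\Expect_Z \mtx{f}(Z) \longrightarrow \Expect_\mu \mtx{f} \quad\text{as $t\rightarrow\infty$,}\] which makes the ergodicity transparent in this special case.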
The infinitesimal generator $\mathcal{L}$ of the semigroup admits the explicit form \begin{equation}\label{eqn:tensor_L} \mathcal{L}\mtx{f} = \lim_{t\downarrow 0}\frac{P_t\mtx{f}-\mtx{f}}{t} = -\sum_{i=1}^n\delta_i\mtx{f}. \end{equation} The difference operator $\delta_i$ is given by \[\delta_i\mtx{f}(z) := \mtx{f}(z) - \Expect_Z \mtx{f}\big((z;Z)_i\big)\quad \text{for all $z\in\Omega$}.\] This infinitesimal generator $\mathcal{L}$ is well defined for all integrable functions, so the class of suitable functions contains $L_1(\mu)$. It follows from the definition of $\delta_i$ that \[\Expect_\mu[\mtx{f} \,\delta_i(\mtx{g})] = \Expect_\mu[\delta_i(\mtx{f}) \,\delta_i(\mtx{g})] = \Expect_\mu[\delta_i(\mtx{f}) \, \mtx{g}]\quad \text{for each $1\leq i\leq n$}.\] Thus, the infinitesimal generator $\mathcal{L}$ is symmetric on $L_2(\mu)$. As a consequence, the semigroup is reversible, and the Dirichlet form is given by \[\mathcal{E}(\mtx{f},\mtx{g}) = \Expect_\mu\left[\sum_{i=1}^n\delta_i(\mtx{f})\delta_i(\mtx{g})\right]=\sum_{i=1}^n\Expect_Z\left[\left(\mtx{f}(Z)-\Expect_{\tilde{Z}}\mtx{f}((Z;\tilde{Z})_i)\right)\left(\mtx{g}(Z)-\Expect_{\tilde{Z}}\mtx{g}((Z;\tilde{Z})_i)\right)\right]\] for any $\mtx{f},\mtx{g}:\Omega \rightarrow \mathbb{H}_d$, where $\tilde{Z}$ is an independent copy of $Z$. All the results above and their proofs can be found in \cite{van550probability,ABY20:Matrix-Poincare}.
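The displayed symmetry identities become transparent once we write $\delta_i = \mathrm{Id} - \mathrm{E}_i$, where $\mathrm{E}_i$ (our notation) averages out the $i$th coordinate: \[\mathrm{E}_i\mtx{f}(z) := \Expect_Z \mtx{f}\big((z;Z)_i\big), \qquad \mathrm{E}_i^2 = \mathrm{E}_i, \qquad \Expect_\mu[\mtx{f}\,\mathrm{E}_i(\mtx{g})] = \Expect_\mu[\mathrm{E}_i(\mtx{f})\,\mathrm{E}_i(\mtx{g})].\] Indeed, $\mathrm{E}_i$ is a self-adjoint projection on $L_2(\mu)$, so its complement $\delta_i = \mathrm{Id}-\mathrm{E}_i$ is as well.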
\subsection{Carr\'e du champ operators} The following lemma gives the formulas for the matrix carr\'e du champ operator and the iterated matrix carr\'e du champ operator.
\begin{lemma}[Product measure: Carr{\'e} du champs] \label{lem:tensor_Gamma} The matrix carr\'e du champ operator $\Gamma$ and the iterated matrix carr\'e du champ operator $\Gamma_2$ of the semigroup~\eqref{eqn:tensor_Markov} are given by the formulas \begin{align}\label{eqn:tensor_Gamma} \Gamma(\mtx{f},\mtx{g})(z) &= \frac{1}{2}\sum_{i=1}^n \Expect_Z\left[\big(\mtx{f}(z)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_i)\big)\right] \intertext{and} \Gamma_2(\mtx{f},\mtx{g})(z) &= \frac{1}{4}\sum_{i=1}^n \Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_i)\big)\\ & \qquad\qquad\qquad\qquad\qquad + \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}((z;Z)_i)\big)\Big] \\ & + \frac{1}{4}\sum_{i\neq j}\Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;\tilde{Z})_i) - \mtx{f}((z;Z)_j) + \mtx{f}(((z;\tilde{Z})_i;Z)_j) \big)\\ &\qquad\qquad\qquad\qquad\qquad \times\big(\mtx{g}(z)-\mtx{g}((z;\tilde{Z})_i) - \mtx{g}((z;Z)_j) + \mtx{g}(((z;\tilde{Z})_i;Z)_j) \big) \Big]. \label{eqn:tensor_Gamma2} \end{align} These expressions are valid for all suitable $\mtx{f},\mtx{g}:\Omega \rightarrow \mathbb{H}_d$ and all $z\in \Omega$. The random variables $Z$ and $\tilde{Z}$ are independent draws from the measure $\mu$. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:tensor_Gamma}] The expression \eqref{eqn:tensor_Gamma} is a consequence of the form \eqref{eqn:tensor_L} of the infinitesimal generator and the definition \eqref{eqn:definition_Gamma} of the carr\'e du champ operator $\Gamma$. Further, the following displays are consequences of \eqref{eqn:tensor_L} and \eqref{eqn:tensor_Gamma}. \begin{align*} & \mathcal{L}\Gamma(\mtx{f},\mtx{g})(z) \\ &\qquad = -\sum_{i=1}^n\delta_i\Gamma(\mtx{f},\mtx{g})(z)\\ &\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z} \Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big) \\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad - \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}(((z;\tilde{Z})_i;Z)_j)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}(((z;\tilde{Z})_i;Z)_j)\big)\Big].\\ & \Gamma(\mtx{f},\mathcal{L}\mtx{g})(z)\\ &\qquad = -\sum_{i=1}^n\Gamma(\mtx{f},\delta_i\mtx{g})(z)\\ &\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\Expect_{\tilde{Z}}\big[\mtx{g}((z;\tilde{Z})_i)\big] - \mtx{g}((z;Z)_j) + \Expect_{\tilde{Z}}\big[\mtx{g}(((z;Z)_j;\tilde{Z})_i)\big]\big)\Big]\\ &\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\\ &\qquad \qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;\tilde{Z})_i) - \mtx{g}((z;Z)_j) + \mtx{g}(((z;Z)_j;\tilde{Z})_i)\big)\Big].\\ &\Gamma(\mathcal{L}\mtx{f},\mtx{g})(z)\\ &\qquad = -\sum_{i=1}^n\Gamma(\delta_i\mtx{f},\mtx{g})(z)\\ &\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{Z}\Big[\big(\mtx{f}(z)-\Expect_{\tilde{Z}}\big[\mtx{f}((z;\tilde{Z})_i)\big] - \mtx{f}((z;Z)_j) + \Expect_{\tilde{Z}}\big[\mtx{f}(((z;Z)_j;\tilde{Z})_i)\big]\big)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big)\Big]\\ &\qquad = 
-\frac{1}{2}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;\tilde{Z})_i) - \mtx{f}((z;Z)_j) + \mtx{f}(((z;Z)_j;\tilde{Z})_i)\big)\\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big)\Big]. \end{align*} If $j=i$, then $((z;\tilde{Z})_i;Z)_j = (z;Z)_i$ and $((z;Z)_j;\tilde{Z})_i = (z;\tilde{Z})_i$. But if $j\neq i$, then $((z;Z)_j;\tilde{Z})_i = ((z;\tilde{Z})_i;Z)_j$. Therefore, by the definition \eqref{eqn:definition_Gamma2} of iterated carr\'e du champ operator $\Gamma_2$, we can compute that \begin{align*} &\Gamma_2(\mtx{f},\mtx{g})(z)\\ &\qquad = \frac{1}{4}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big) \\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad + \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}(((z;\tilde{Z})_i;Z)_j)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}(((z;\tilde{Z})_i;Z)_j)\big)\\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad - \big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i) - \mtx{g}(((z;Z)_j;\tilde{Z})_i)\big)\\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad - \big(\mtx{f}((z;\tilde{Z})_i) - \mtx{f}(((z;Z)_j;\tilde{Z})_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big)\Big]\\ &\qquad = \frac{1}{4}\sum_{i=1}^n\Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_i)\big) \\ & \qquad\qquad\qquad\qquad\qquad\qquad\quad + \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}((z;Z)_i)\big)\Big] \\ &\qquad\qquad + \frac{1}{4}\sum_{i\neq j}\Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;\tilde{Z})_i) - \mtx{f}((z;Z)_j) + \mtx{f}(((z;\tilde{Z})_i;Z)_j) \big)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;\tilde{Z})_i) - \mtx{g}((z;Z)_j) + \mtx{g}(((z;\tilde{Z})_i;Z)_j) \big) \Big]. \end{align*} This gives the expression \eqref{eqn:tensor_Gamma2}. \end{proof}
\subsection{Bakry--\'Emery criterion}
It is clear from Lemma~\ref{lem:tensor_Gamma} that the formula~\eqref{eqn:tensor_Gamma} for $\Gamma$ appears within the formula~\eqref{eqn:tensor_Gamma2} for $\Gamma_2$. We immediately conclude that the Bakry--{\'E}mery criterion holds.
\begin{theorem}[Product measure: Bakry--{\'E}mery] \label{thm:product_measure_localPoincare} For the semigroup~\eqref{eqn:tensor_Markov}, the Bakry--\'Emery criterion \eqref{Bakry-Emery} holds with $c = 2$. That is, for any suitable function $f:\Omega \rightarrow \mathbb{R}$, \begin{equation*}\label{eqn:product_measure_localPoincare} \Gamma(f)\leq 2\Gamma_2(f). \end{equation*} \end{theorem}
\begin{proof} Comparing the two expressions in Lemma~\ref{lem:tensor_Gamma} with $f=g$ gives \begin{align*} \Gamma_2(f)(z) &= \frac{1}{4}\sum_{i=1}^n \Expect_{\tilde{Z}}\Expect_Z \Big[\big(f(z)-f((z;Z)_i)\big)^2 + \left(f((z;\tilde{Z})_i)-f((z;Z)_i)\right)^2\Big] \\ &\qquad + \frac{1}{4}\sum_{i\neq j} \Expect_{\tilde{Z}}\Expect_Z \Big[\Big(f(z)-f((z;\tilde{Z})_i) - f((z;Z)_j) + f(((z;\tilde{Z})_i;Z)_j) \Big)^2\Big]\\ &\geq \frac{1}{4}\sum_{i=1}^n \Expect_Z \left[\big(f(z)-f((z;Z)_i)\big)^2\right]\\ &= \frac{1}{2}\Gamma(f)(z), \end{align*} which is the stated inequality. \end{proof}
After completing this paper, we learned that Theorem~\ref{thm:product_measure_localPoincare} appears in \cite[Example 6.6]{junge2015noncommutative} with a different style of proof.
\begin{remark}[Matrix Poincar\'e inequality: Constants] Following the discussion in Section~\ref{sec:local_matrix_Poincare_inequality}, Theorem~\ref{thm:product_measure_localPoincare} implies the matrix Poincar\'e inequality~\eqref{eqn:matrix_Poincare} with $\alpha = 2$. However, Aoun et al.~\cite{ABY20:Matrix-Poincare} proved that the Markov process~\eqref{eqn:tensor_Markov} actually satisfies the matrix Poincar\'e inequality with $\alpha = 1$; see also \cite[Theorem 5.1]{cheng2016characterizations}. This gap is not surprising because the averaging operation that is missing in the local Poincar\'e inequality contributes to the global convergence of the Markov semigroup. \end{remark}
\subsection{Matrix concentration results} \label{sec:concentration_results_product} In this subsection, we complete the proofs of the matrix concentration results for product measures stated in Section~\ref{sec:main_results}.
For a product measure $\mu = \mu_1\otimes\mu_2\otimes\cdots\otimes\mu_n$, Theorem~\ref{thm:product_measure_localPoincare} shows that there is a reversible ergodic Markov semigroup whose stationary measure is $\mu$ and which satisfies the Bakry--\'Emery criterion \eqref{Bakry-Emery} with constant $c=2$. We then apply Theorem~\ref{thm:polynomial_moment} with $c=2$ to obtain the polynomial moment bounds in Corollary~\ref{cor:product_measure_Efron--Stein}. Similarly, we apply Theorem~\ref{thm:exponential_concentration} with $c=2$ to obtain the subgaussian concentration inequalities in Corollary~\ref{cor:product_measure_tailbound}.
\section{Bakry--{\'E}mery criterion for log-concave measures} \label{sec:log-concave}
In this section, we study a class of log-concave measures; the most important example in this class is the standard Gaussian measure. First, we introduce the standard diffusion process associated with a log-concave measure. We verify that the associated semigroup is reversible and ergodic via standard arguments. Then we introduce the Bakry--{\'E}mery criterion which follows from the uniform strong convexity of the potential.
\subsection{Log-concave measures and Markov processes}
Consider the Markov process $(Z_t)_{t\geq 0}$ on $\mathbb{R}^n$ generated by the stochastic differential equation: \begin{equation}\label{eqn:SDE} \diff Z_t = -\nabla W(Z_t)\idiff{t} + \sqrt{2}\idiff{B_t}, \end{equation} where $B_t$ is the standard $n$-dimensional Brownian motion and $W:\mathbb{R}^n\rightarrow \mathbb{R}$ is a smooth convex function. The stationary measure $\mu$ of this process has the density $\diff \mu = \rho^\infty(z)\idiff{z} = M^{-1}\mathrm{e}^{-W(z)}\idiff{z}$, where $M := \int_{\mathbb{R}^n}\mathrm{e}^{-W(z)}\idiff z$ is a normalization constant. The infinitesimal generator $\mathcal{L}$ is given by \begin{equation}\label{eqn:log-concave_L} \mathcal{L}\mtx{f}(z) = -\sum_{i=1}^n\partial_iW(z)\cdot\partial_i\mtx{f}(z) + \sum_{i=1}^n\partial_i^2\mtx{f}(z) \quad\text{for all $z=(z_1,\dots,z_n)\in \mathbb{R}^n$.} \end{equation} The class of suitable functions is the Sobolev space $\mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$, defined in~\eqref{def:H2_function}. Here and elsewhere, $\partial_i$ means $\partial/\partial z_i$ and $\partial_{ij}$ means $\partial^2/(\partial z_i\partial z_j)$ for all $i,j=1,\dots,n$.
\subsubsection{Reversibility} The reversibility of this Markov process $(Z_t)_{t\geq0}$ can be verified with a standard calculation. We restrict our attention to functions in $\mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$. Integration by parts yields \begin{align*} \Expect_\mu[\mathcal{L}(\mtx{f})\mtx{g}] =&\ \int_{\mathbb{R}^n}\left(-\sum_{i=1}^n\partial_iW(z)\cdot\partial_i\mtx{f}(z) + \sum_{i=1}^n\partial_i^2\mtx{f}(z)\right)\mtx{g}(z)\rho^\infty(z)\idiff z\\ =&\ -\sum_{i=1}^n\int_{\mathbb{R}^n}\partial_i\mtx{f}(z)\cdot \partial_i\mtx{g}(z)\cdot\rho^\infty(z)\idiff z\\ =&\ \int_{\mathbb{R}^n}\mtx{f}(z)\left(-\sum_{i=1}^n\partial_iW(z)\cdot\partial_i\mtx{g}(z) + \sum_{i=1}^n\partial_i^2\mtx{g}(z)\right)\rho^\infty(z)\idiff z\\ =&\ \Expect_\mu[\mtx{f}\mathcal{L}(\mtx{g})]. \end{align*} This shows that $\mathcal{L}$ is symmetric on $L_2(\mu)$ and thus $(Z_t)_{t\geq 0}$ is reversible. From the calculation above, we also obtain a simple formula for the associated Dirichlet form: \[\mathcal{E}(\mtx{f},\mtx{g}) = \sum_{i=1}^n\Expect_\mu\left[\partial_i\mtx{f}\cdot \partial_i\mtx{g}\right]\quad \text{for all $\mtx{f},\mtx{g}\in \mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$}.\] These results parallel the scalar case, but the partial derivatives are matrix-valued.
\subsubsection{Ergodicity} We now turn to the ergodicity of the Markov process given by \eqref{eqn:SDE}, which generally reduces to studying the convergence of the corresponding Fokker--Planck equation: \begin{equation}\label{eqn:Fokker-Planck} \begin{cases} \frac{\partial}{\partial t}\rho_{x}(z,t) = \mathcal{L}^*\rho_{x}(z,t) := \sum_{i=1}^n\partial_i(\partial_iW(z)\rho_{x}(z,t)) + \sum_{i=1}^n\partial_i^2\rho_{x}(z,t); \\[3pt] \rho_{x}(z,0) = \delta(z-x). \end{cases} \end{equation} We define $\rho_{x}(z,t)$ to be the density of $Z_t$, conditional on $Z_0 = x\in\mathbb{R}^n$. As usual, $\delta(z-x)$ is the Dirac distribution centered at $x$. The associated Markov semigroup $(P_t)_{t\geq 0}$ can be recognized as \begin{equation}\label{eqn:log-concave_semigroup}
P_t \mtx{f}(x) = \Expect_\mu\left[\mtx{f}(Z_t) \,|\, Z_0 = x \right] = \int_{\mathbb{R}^n} \mtx{f}(z) \rho_{x}(z,t) \idiff z \quad \text{for all $t\geq0$ and all $x\in \mathbb{R}^n$ }. \end{equation} The semigroup $(P_t)_{t\geq 0}$ is ergodic in the sense of \eqref{eqn:ergodicity} if and only if $\rho_{x}(\cdot,t)$ converges weakly to $\rho^\infty$ for all $x\in \mathbb{R}^n$.
A fundamental way to prove the convergence of \eqref{eqn:Fokker-Planck} to the stationary density $\rho^\infty$ is through the method of Lyapunov functions~\cite{hairer2010convergence,ji2019convergence}. However, ergodicity in the weak sense follows more easily from the assumption that the function $W$ is uniformly strongly convex. That is, \[ (\operatorname{Hess} W)(z) := \big[\partial_{ij} W(z)\big]_{i,j=1}^n
\succcurlyeq \eta \cdot \mathbf{I}_n
\quad\text{for all $z \in \mathbb{R}^n$.} \] To see this, recall the Brascamp--Lieb inequality~\cite[Theorem 4.1]{BRASCAMP1976366}, which states that the (ordinary) variance of a scalar function $h:\mathbb{R}^n\rightarrow \mathbb{R}$ is bounded as \[\Var_\mu[h]\leq \int_{\mathbb{R}^n} (\nabla h(z))^\mathsf{T}\big((\operatorname{Hess} W)(z)\big)^{-1}\nabla h(z) \idiff \mu(z).\] Combine the last two displays to arrive at the Poincar\'e inequality $\Var_\mu[h]\leq \eta^{-1}\mathcal{E}(h)$.
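In detail, uniform strong convexity implies $\big((\operatorname{Hess} W)(z)\big)^{-1}\preccurlyeq \eta^{-1}\,\mathbf{I}_n$ for every $z$, so the Brascamp--Lieb bound collapses to \[\Var_\mu[h] \leq \int_{\mathbb{R}^n} (\nabla h(z))^\mathsf{T}\big((\operatorname{Hess} W)(z)\big)^{-1}\nabla h(z) \idiff \mu(z) \leq \eta^{-1}\int_{\mathbb{R}^n} \abs{\nabla h(z)}^2 \idiff \mu(z) = \eta^{-1}\,\mathcal{E}(h).\]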
Next, consider the scalar function $\varphi_{x}(z,t) := (\rho_{x}(z,t) - \rho^\infty(z))/\rho^\infty(z)$. Let us check that its variance $\Var_\mu[\varphi_{x}(\cdot,t)]$ converges to $0$ exponentially fast. Indeed, it is not hard to verify that $\varphi_{x}(z,t)$ satisfies the partial differential equation \[\frac{\partial}{\partial t} \varphi_{x}(z,t) = \mathcal{L}\varphi_{x}(z,t) \quad \text{for $t\geq0$ and $z\in \mathbb{R}^n$}.\] Along with the Poincar\'e inequality and the fact that $\Expect_\mu\varphi_{x}(\cdot,t) = 0$, this implies \[\frac{\diff{} }{\diff t} \Var_\mu[\varphi_{x}(\cdot,t)] = - 2\mathcal{E}(\varphi_{x}(\cdot,t))\leq - 2\eta \Var_\mu[\varphi_{x}(\cdot,t)]. \] Therefore, the quantity $\Var_\mu[\varphi_{x}(\cdot,t)]$ converges to $0$ exponentially fast because \[\Var_\mu[\varphi_{x}(\cdot,t)] \leq \mathrm{e}^{-2\eta (t-t_0)} \Var_\mu[\varphi_{x}(\cdot,t_0)]\quad \text{for}\ t\geq t_0>0.\] As a consequence, for any $f\in \mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{R})$ and any $x\in \mathbb{R}^n$, \begin{align*}
\left|P_tf(x) - \Expect_\mu f\right| &= \left|\int_{\mathbb{R}^n} f(z)(\rho_{x}(z,t)-\rho^\infty(z))\idiff z \right| = \left|\int_{\mathbb{R}^n} f(z)\rho^\infty(z)\varphi_{x}(z,t)\idiff z\right| \\
&\leq \int_{\mathbb{R}^n} |f(z)|\cdot\rho^\infty(z)\cdot|\varphi_{x}(z,t)|\idiff z \leq \left(\Expect_\mu |f|^2\right)^{1/2}\Var_\mu[\varphi_{x}(\cdot,t)]^{1/2} \rightarrow 0. \end{align*} This justifies the pointwise convergence of $P_tf$ for scalar functions; applied entrywise, the same argument covers matrix-valued functions. Pointwise convergence is stronger than the $L_2(\mu)$ ergodicity \eqref{eqn:ergodicity} of the semigroup $(P_t)_{t\geq0}$.
\subsection{Carr\'e du champ operators} After checking reversibility and ergodicity, we now turn to the derivation of the matrix carr\'e du champ operator and the iterated matrix carr\'e du champ operator. Their explicit forms are given in the next lemma.
\begin{lemma}[Log-concave measure: Carr{\'e} du champs] \label{lem:log-concave_Gamma} The matrix carr\'e du champ operator $\Gamma$ and the iterated matrix carr\'e du champ operator $\Gamma_2$ of the Markov process defined by~\eqref{eqn:SDE} are given by the formulas \begin{equation}\label{eqn:log-concave_Gamma} \Gamma(\mtx{f},\mtx{g})= \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g} \end{equation} and \begin{equation}\label{eqn:log-concave_Gamma2} \Gamma_2(\mtx{f},\mtx{g}) = \sum_{i,j=1}^n\partial_{ij}W\cdot \partial_i\mtx{f} \cdot \partial_j\mtx{g} + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g} \end{equation} for all suitable $\mtx{f},\mtx{g}:\mathbb{R}^n\rightarrow\mathbb{H}_d$. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:log-concave_Gamma}] Knowing the explicit form \eqref{eqn:log-concave_L} of the Markov generator $\mathcal{L}$, we can compute the carr\'e du champ operator $\Gamma$ as \begin{align*} \Gamma(\mtx{f},\mtx{g})=&\ \frac{1}{2}\sum_{i=1}^n\left(-\partial_i W\cdot \partial_i(\mtx{f}\mtx{g}) + \partial_i^2(\mtx{f}\mtx{g}) - \big(-\partial_i W\cdot\partial_i \mtx{f} + \partial_i^2\mtx{f}\big)\mtx{g} - \mtx{f}\big(-\partial_i W\cdot \partial_i\mtx{g} + \partial_i^2\mtx{g}\big)\right)\\ =&\ \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g}. \end{align*} Moreover, combining the expressions \eqref{eqn:log-concave_L} and \eqref{eqn:log-concave_Gamma} yields the following: \begin{align*} \mathcal{L}\Gamma(\mtx{f},\mtx{g}) =&\ -\sum_{i=1}^n\partial_iW\cdot \partial_i\left(\sum_{j=1}^n\partial_j\mtx{f}\cdot \partial_j\mtx{g}\right) + \sum_{i=1}^n\partial_i^2\left(\sum_{j=1}^n\partial_j\mtx{f}\cdot \partial_j\mtx{g}\right)\\ =&\ \sum_{i,j=1}^n\left(-\partial_iW\cdot \partial_{ij}\mtx{f}\cdot \partial_j\mtx{g} - \partial_iW\cdot \partial_j\mtx{f}\cdot \partial_{ij}\mtx{g} + \partial_i^2(\partial_j\mtx{f})\cdot \partial_j\mtx{g}+2\partial_{ij}\mtx{f}\cdot\partial_{ij}\mtx{g}+ \partial_j\mtx{f}\cdot \partial_i^2(\partial_j\mtx{g})\right).\\ \Gamma(\mathcal{L}\mtx{f},\mtx{g}) =&\ \sum_{j=1}^n\partial_j\left(\sum_{i=1}^n \big(- \partial_iW\cdot \partial_i\mtx{f} + \partial_i^2\mtx{f}\big)\right)\cdot \partial_j\mtx{g}\\ =&\ \sum_{i,j=1}^n\left(-\partial_{ij}W\cdot \partial_i\mtx{f}\cdot \partial_j\mtx{g} - \partial_iW\cdot \partial_{ij}\mtx{f}\cdot \partial_j\mtx{g} + \partial_i^2(\partial_j\mtx{f})\cdot \partial_j\mtx{g}\right).\\ \Gamma(\mtx{f},\mathcal{L}\mtx{g}) =&\ \sum_{j=1}^n\partial_j\mtx{f}\cdot \partial_j\left(\sum_{i=1}^n \big(- \partial_iW \cdot\partial_i\mtx{g} + \partial_i^2\mtx{g}\big)\right)\\ =&\ \sum_{i,j=1}^n\left(-\partial_{ij}W\cdot \partial_j\mtx{f}\cdot \partial_i\mtx{g} - \partial_iW\cdot \partial_j\mtx{f}\cdot \partial_{ij}\mtx{g} + 
\partial_j\mtx{f}\cdot \partial_i^2(\partial_j\mtx{g})\right). \end{align*} Then we can compute that \begin{align*} \Gamma_2(\mtx{f},\mtx{g}) =&\ \frac{1}{2}\left(\mathcal{L}\Gamma(\mtx{f},\mtx{g}) -\Gamma(\mathcal{L}\mtx{f},\mtx{g}) -\Gamma(\mtx{f},\mathcal{L}\mtx{g})\right)\\ =&\ \frac{1}{2}\sum_{i,j=1}^n\left(\partial_{ij}W\cdot \partial_i\mtx{f}\cdot \partial_j\mtx{g}+ \partial_{ij}W\cdot \partial_j\mtx{f}\cdot \partial_i\mtx{g}\right) + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g}\\ =&\ \sum_{i,j=1}^n\partial_{ij}W\cdot \partial_i\mtx{f}\cdot \partial_j\mtx{g} + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g}. \end{align*} This gives the expression \eqref{eqn:log-concave_Gamma2}. \end{proof}
\subsection{Bakry--\'Emery criterion} It is a well-known result that a Bakry--\'Emery criterion follows from the uniform strong convexity of $W$. For example, see the discussion in \cite[Sec. 4.8]{bakry2013analysis}. Nevertheless, we provide a short proof here for the sake of completeness.
\begin{fact}[Log-concave measure: Matrix Bakry--{\'E}mery] \label{fact:log-concave_localPoincare} Consider the Markov process defined by \eqref{eqn:SDE}. If the potential $W:\mathbb{R}^n\rightarrow\mathbb{R}$ satisfies $(\operatorname{Hess} W)(z)\succcurlyeq \eta \cdot \mathbf{I}_n $ for all $z\in \mathbb{R}^n$ for some constant $\eta>0$, then the Bakry--\'Emery criterion \eqref{Bakry-Emery} holds with $c = \eta^{-1}$. That is, for any suitable function $\mtx{f}:\mathbb{R}^n\rightarrow\mathbb{H}_d$, \begin{equation*}\label{eqn:log-concave_localPoincare} \Gamma(\mtx{f})\preccurlyeq \eta^{-1}\Gamma_2(\mtx{f}). \end{equation*} \end{fact}
\begin{proof} Comparing the two expressions in Lemma~\ref{lem:log-concave_Gamma} with a scalar function $f=g$ gives that \begin{align*} \Gamma_2(f) &= \sum_{i,j=1}^n\partial_{ij}W\cdot \partial_if\cdot \partial_jf + \sum_{i,j=1}^n(\partial_{ij}f)^2 \\ &\geq (\nabla f)^\mathsf{T}(\operatorname{Hess} W)\nabla f \geq \eta \sum_{i=1}^n(\partial_if)^2 = \eta\cdot\Gamma(f). \end{align*} The first inequality holds because we discard the nonnegative term $\sum_{i,j=1}^n(\partial_{ij}f)^2$; the second follows from the uniform strong convexity of $W$. Proposition~\ref{prop:BE_equiv} extends the scalar Bakry--{\'E}mery criterion to matrices. \end{proof}
\subsection{Standard normal distribution}\label{sec:Gaussian}
The most important example of a strongly log-concave measure occurs for the potential \[W(z) = \frac{1}{2}z^\mathsf{T} z\quad\text{for all $z\in \mathbb{R}^n$.}\] In this case, the corresponding log-concave measure $\mu$ coincides with the $n$-dimensional standard Gaussian distribution $N(\vct{0},\mathbf{I}_n)$, whose density is \[\diff \mu = \frac{1}{\sqrt{(2\pi)^n}}\exp\left(-\frac{1}{2}z^\mathsf{T} z\right) \idiff{z}\quad \text{for all $z\in \mathbb{R}^n$.}\] The associated Markov process is known as the Ornstein--Uhlenbeck process. The semigroup $(P_t)_{t\geq 0}$ has a simple form, given by the Mehler formula: \[P_t\mtx{f}(z) = \Expect \mtx{f}\left(\mathrm{e}^{-t}z + \sqrt{1-\mathrm{e}^{-2t}}\xi\right)\quad\text{where $\xi\sim N(\vct{0},\mathbf{I}_n)$.} \] The ergodicity of this Markov semigroup is evident from this formula because $\mathrm{e}^{-t}\rightarrow 0$ as $t\rightarrow +\infty$. Lemma~\ref{lem:log-concave_Gamma} gives the matrix carr\'e du champ operator $\Gamma$ and the iterated matrix carr\'e du champ operator $\Gamma_2$ for the Ornstein--Uhlenbeck process: \[\Gamma(\mtx{f},\mtx{g}) = \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g}\quad \text{and}\quad \Gamma_2(\mtx{f},\mtx{g}) = \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g} + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g}.\] Clearly, $\Gamma(\mtx{f})\preccurlyeq \Gamma_2(\mtx{f})$. Therefore, the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} holds with $c = 1$.
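The Mehler formula can be sanity-checked on a simple test function. In one dimension, applying $P_t$ to $f(z)=z^2$ gives $P_tf(z) = \mathrm{e}^{-2t}z^2 + (1-\mathrm{e}^{-2t})$, since $\Expect\xi = 0$ and $\Expect\xi^2 = 1$. The following numerical sketch (our own illustration, with hypothetical names) verifies the semigroup property $P_s P_t = P_{s+t}$ and the ergodic limit $P_tf \to \Expect f = 1$:

```python
import math

# Sanity check (our own illustration): for f(z) = a*z^2 + b in one dimension,
# the Mehler formula gives P_t f(z) = a*e^{-2t}*z^2 + a*(1 - e^{-2t}) + b.
# We verify the semigroup property and ergodicity on these coefficients.

def mehler_quadratic(t, a, b):
    """Apply P_t to the function z -> a*z^2 + b; return new coefficients (a', b')."""
    e = math.exp(-2 * t)
    return a * e, a * (1 - e) + b

s, t = 0.3, 1.1
a1, b1 = mehler_quadratic(s, *mehler_quadratic(t, 1.0, 0.0))
a2, b2 = mehler_quadratic(s + t, 1.0, 0.0)
assert abs(a1 - a2) < 1e-12 and abs(b1 - b2) < 1e-12  # semigroup property

a_inf, b_inf = mehler_quadratic(50.0, 1.0, 0.0)
assert abs(a_inf) < 1e-12 and abs(b_inf - 1.0) < 1e-12  # P_t f -> E f = 1
```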
\subsection{Matrix concentration results} \label{sec:concentration_results_log-concave}
Finally, we prove the matrix concentration results for log-concave measures stated in Section~\ref{sec:main_results}.
Consider a log-concave probability measure $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ on $\mathbb{R}^n$, where the potential satisfies the strong convexity condition $\operatorname{Hess} W\succcurlyeq \eta\mathbf{I}_n$ for $\eta > 0$. Fact~\ref{fact:log-concave_localPoincare} states that the associated semigroup \eqref{eqn:log-concave_semigroup} satisfies the Bakry--\'Emery criterion with constant $c=\eta^{-1}$. We then apply Theorem~\ref{thm:polynomial_moment} with $c=\eta^{-1}$ to obtain the polynomial moment bounds in Corollary~\ref{cor:log-concave_polynomial_inequality}. Similarly, we apply Theorem~\ref{thm:exponential_concentration} with $c=\eta^{-1}$ to obtain the subgaussian concentration inequalities in Corollary~\ref{cor:log-concave_concentration}.
\section{Extension to Riemannian manifolds}\label{sec:extension_Riemannian_manifold}
In this section, we give a high-level discussion about diffusion processes on Riemannian manifolds. The book \cite{bakry2013analysis} contains a comprehensive treatment of the subject. For an introduction to calculus on Riemannian manifolds, references include \cite{petersen2016riemannian,lee2018introduction}.
\subsection{Measures on Riemannian manifolds}
Let $(M, \mathfrak{g})$ be an $n$-dimensional Riemannian manifold whose co-metric tensor $\mathfrak{g}(x) = (g^{ij}(x) : 1 \leq i,j \leq n)$ is symmetric and positive definite for every $x \in M$. We write $\mtx{G}(x) = (g_{ij}(x) : 1 \leq i, j \leq n)$ for the metric tensor, which satisfies the relation $\mtx{G}(x) = \mathfrak{g}(x)^{-1}$.
The Riemannian measure $\mu_\mathfrak{g}$ on the manifold $(M,\mathfrak{g})$ has density $\diff \mu_\mathfrak{g} \propto w_\mathfrak{g}(x(z)) \idiff{z}$ with respect to the Lebesgue measure in local coordinates, where the weight is $w_\mathfrak{g} := \det(\mathfrak{g})^{-1/2}$. Whenever this measure is finite, we normalize it to obtain a probability measure. In particular, a compact Riemannian manifold always admits a Riemannian probability measure.
The matrix Laplace--Beltrami operator $\Delta_{\mathfrak{g}}$ on the manifold is defined as \[\Delta_\mathfrak{g}\mtx{f}(x) := \frac{1}{w_\mathfrak{g}} \sum_{i,j=1}^n\partial_i\left(w_\mathfrak{g}g^{ij}\partial_j\mtx{f}(x)\right)\quad \text{for suitable $\mtx{f} : M \to \mathbb{H}_d$ and $x\in M$.}\] Here, $\partial_i$ and the like represent the components of the differential with respect to local coordinates. The diffusion process on $M$ whose infinitesimal generator is $\Delta_{\mathfrak{g}}$ is called the Riemannian Brownian motion. The measure $\mu_\mathfrak{g}$ is the stationary measure for the Brownian motion.
To generalize, one may consider a weighted measure $\diff \mu \propto \mathrm{e}^{-W} \diff \mu_\mathfrak{g}$ where the potential $W:M\rightarrow \mathbb{R}$ is sufficiently smooth. The associated infinitesimal generator is then the Laplace--Beltrami operator plus a drift term: \begin{equation} \label{eqn:mL-drift} \mathcal{L}\mtx{f}(x) := -\sum_{i,j=1}^n g^{ij}\,\partial_iW\, \partial_j\mtx{f} + \frac{1}{w_\mathfrak{g}} \sum_{i,j=1}^n\partial_i\left(w_\mathfrak{g}g^{ij}\,\partial_j\mtx{f}(x)\right)\quad \text{for suitable $\mtx{f} : M \to \mathbb{H}_d$.} \end{equation} It is not hard to check that $\mathcal{L}$ is symmetric with respect to $\mu$, and hence the induced diffusion process with drift is reversible.
\subsection{Carr{\'e} du champ operators}
Next, we present expressions for the matrix carr{\'e} du champ operators associated with the infinitesimal generator $\mathcal{L}$ defined in~\eqref{eqn:mL-drift}. The derivation follows from a standard symbol calculation, as in the scalar setting.
\subsubsection{Carr\'{e} du champ operator} The carr{\'e} du champ operator coincides with the squared ``magnitude'' of the differential: \begin{equation}\label{eqn:gamma_Riemannian} \Gamma(\mtx{f}) = \sum_{i,j=1}^ng^{ij}\,\partial_i\mtx{f}\, \partial_j\mtx{f}\quad \text{for suitable $\mtx{f}:M\rightarrow\mathbb{H}_d$}. \end{equation} Note that this expression contains a matrix product. Although the calculation of the carr{\'e} du champ requires a choice of local coordinates, the expressions obtained in different coordinate systems agree under change of variables.
Another way to calculate the carr{\'e} du champ $\Gamma(\mtx{f})$ is by relating it to the tangential gradient of $\mtx{f}$ on the manifold. For a point $x\in M$, let $T_xM$ denote the tangent space at $x$. The tangential gradient $\nabla_M\mtx{f}(x)$ of a matrix-valued function $\mtx{f}:M\rightarrow\mathbb{H}_d$ can be written as \[\nabla_M\mtx{f}(x) = \sum_{i=1}^N \vct{v}_i\otimes \mtx{A}_i\] for some vectors $\{\vct{v}_i\}_{i=1}^N\subset T_xM$ and some matrices $\{\mtx{A}_i\}_{i=1}^N\subset \mathbb{H}_d$ that depend on the representation of the manifold $M$. The integer $N$ is not necessarily the dimension of $M$. When $d=1$, the tangential gradient $\nabla_M\mtx{f}(x)$ is also a vector in $T_xM$. Now, the carr{\'e} du champ at the point $x$ is given by an equivalent expression: \begin{equation}\label{eqn:gamma_Riemannian_alternative} \Gamma(\mtx{f})(x) = \langle \nabla_M\mtx{f}(x),\nabla_M\mtx{f}(x)\rangle_{\mtx{G}} := \sum_{i,j=1}^N\langle \vct{v}_i,\vct{v}_j\rangle_{\mtx{G}}\cdot \mtx{A}_i\mtx{A}_j \end{equation} where $\langle \cdot ,\cdot\rangle_{\mtx{G}}$ is the inner product on $T_xM$ associated with the metric tensor $\mtx{G}$.
The expression \eqref{eqn:gamma_Riemannian_alternative} coincides with \eqref{eqn:gamma_Riemannian} if we choose $(\vct{v}_i : 1 \leq i \leq n)$ to be the coordinate frame associated with the local coordinates, so that $N = n$. In this case, $\langle \vct{v}_i(x),\vct{v}_j(x)\rangle_{\mtx{G}} = g_{ij}(x)$ for $i,j=1,\dots,n$. Moreover, the tangential gradient can be written as \[\nabla_M\mtx{f}(x) = \sum_{i=1}^n \vct{v}_i(x)\otimes \nabla_M^i\mtx{f}(x),\] where $\nabla_M^i\mtx{f}(x):= \sum_{j=1}^ng^{ij}\partial_j\mtx{f}$ for $i=1,\dots,n$. Then one can rewrite the expression \eqref{eqn:gamma_Riemannian_alternative} in the form \eqref{eqn:gamma_Riemannian} by recalling that $\mtx{G}=\mathfrak{g}^{-1}$.
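The equivalence of the two expressions can be confirmed numerically in a small scalar example (our own illustration, with a constant co-metric chosen arbitrarily): the frame expression $\sum_{i,j} g_{ij}\,\nabla^i f\,\nabla^j f$ with $\nabla^i f = \sum_j g^{ij}\partial_j f$ reduces to $\sum_{i,j} g^{ij}\,\partial_i f\,\partial_j f$ because $\mtx{G}=\mathfrak{g}^{-1}$:

```python
# Numerical sketch (our own illustration): with co-metric g and metric G = g^{-1},
# the frame expression sum_{ij} G_ij * grad^i f * grad^j f, where
# grad^i f = sum_j g^{ij} d_j f, equals sum_{ij} g^{ij} d_i f d_j f.

g = [[2.0, 0.5], [0.5, 1.0]]                # co-metric tensor (positive definite)
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
G = [[g[1][1] / det, -g[0][1] / det],       # metric tensor G = g^{-1}
     [-g[1][0] / det, g[0][0] / det]]
df = [3.0, -1.0]                            # partial derivatives d_i f at a point

grad = [sum(g[i][j] * df[j] for j in range(2)) for i in range(2)]
lhs = sum(G[i][j] * grad[i] * grad[j] for i in range(2) for j in range(2))
rhs = sum(g[i][j] * df[i] * df[j] for i in range(2) for j in range(2))
assert abs(lhs - rhs) < 1e-12
```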
The expression \eqref{eqn:gamma_Riemannian_alternative} is especially useful when the Riemannian manifold $M$ is embedded into a higher-dimensional Euclidean space $\mathbb{R}^N$ with the metric tensor $\mtx{G}$ induced by the Euclidean metric. That is, $M$ is a Riemannian submanifold of $\mathbb{R}^N$. In this case, for a function $\mtx{f} : \mathbbm{R}^{N} \to \mathbb{H}_d$, the tangential gradient $\nabla_M\mtx{f}(x)$ is simply the projection of $\nabla_{\mathbb{R}^N}\mtx{f}(x)$ onto the tangent space $T_xM$, where $\nabla_{\mathbb{R}^N}\mtx{f}$ is the ordinary gradient of $\mtx{f}$ in the embedding space $\mathbbm{R}^N$. Let us elaborate. Suppose that $x = (x_1,\dots,x_N)$ is the representation of a point $x\in M$ with respect to the standard basis $\{\mathbf{e}_i\}_{i=1}^N$ of $\mathbb{R}^N$. Define the orthogonal projection $\mathrm{Proj}_x$ onto the tangent space $T_xM$. Then the tangential gradient satisfies \[\nabla_M\mtx{f}(x) = (\mathrm{Proj}_x \otimes \mathbf{I})\left(\sum_{i=1}^N \mathbf{e}_i\otimes \frac{\partial \mtx{f}(x)}{\partial x_i}\right) = \sum_{i=1}^N (\mathrm{Proj}_x\mathbf{e}_i)\otimes \frac{\partial \mtx{f}(x)}{\partial x_i}.\] This expression of the tangential gradient helps simplify the calculation of the carr{\'e} du champ operator in many interesting examples.
\subsubsection{Iterated carr{\'e} du champ operator} To introduce the iterated matrix carr{\'e} du champ operator, we first define the Hessian $\nabla^2 \mtx{f} := (\nabla^2_{ij} \mtx{f} : 1 \leq i,j \leq n)$ of a matrix-valued function $\mtx{f}:M\rightarrow \mathbb{H}_d$, where \[\nabla^2_{ij} \mtx{f} := \partial_{ij}\mtx{f} - \sum_{k=1}^n\gamma_{ij}^k\partial_k\mtx{f} \quad \text{for $i,j=1,\dots,n$}.\] The Christoffel symbols $\gamma_{ij}^k$ are the quantities \[ \gamma_{ij}^k := \frac{1}{2}\sum_{l=1}^ng^{kl}(\partial_{j}g_{il} + \partial_{i}g_{jl} - \partial_{l}g_{ij})
\quad \text{for $i,j,k=1,2,\dots,n$}.\] When the matrix dimension $d > 1$, the Hessian $\nabla^2 \mtx{f}$ is a 4-tensor.
Now, the iterated matrix carr{\'e} du champ operator $\Gamma_2$ admits the formula \begin{equation} \label{eqn:gamma2-riemann} \Gamma_2(\mtx{f}) = \sum_{i,j,k,l=1}^ng^{ij}g^{kl} \, \nabla^2_{ik}\mtx{f}\, \nabla^2_{jl}\mtx{f} + \sum_{i,j,k,l=1}^ng^{ik}g^{jl}\left(\operatorname{Ric}_{kl} + \nabla^2_{kl}W\right) \partial_i\mtx{f}\, \partial_j\mtx{f}. \end{equation} Again, this expression involves matrix products. The Ricci tensor $\operatorname{\mtx{Ric}} = (\operatorname{Ric}_{ij} : 1 \leq i,j \leq n)$ is given by \[\operatorname{Ric}_{ij} := \sum_{k=1}^n \left(\partial_k\gamma_{ij}^k - \partial_i\gamma_{kj}^k\right) + \sum_{k,l=1}^n\left(\gamma_{kl}^k\gamma_{ij}^l - \gamma_{il}^k\gamma_{jk}^l\right).\] The Ricci tensor expresses the curvature of the manifold.
\subsection{Bakry--\'Emery criterion}\label{sec:BE_Riemannian} Since the first sum in the expression~\eqref{eqn:gamma2-riemann} for $\Gamma_2(\mtx{f})$ is a positive-semidefinite matrix, we have the inequality \begin{equation}\label{eqn:gamma_2_Riemannian} \Gamma_2(\mtx{f}) \succcurlyeq \sum_{i,j,k,l=1}^ng^{ik}g^{jl}\left(\operatorname{Ric}_{kl} + \nabla^2_{kl}W\right) \partial_i\mtx{f}\, \partial_j\mtx{f}. \end{equation} In a Euclidean space, the Ricci tensor is everywhere zero, so the Bakry--\'Emery criterion~\eqref{Bakry-Emery} relies on the strong convexity of the potential $W$, as we have seen in Section~\ref{sec:log-concave}. In contrast, on a Riemannian manifold, the Ricci tensor plays an important role.
Let us now assume that the Riemannian manifold is unweighted; that is, the potential $W = 0$ identically. By comparing the displays \eqref{eqn:gamma_Riemannian} and \eqref{eqn:gamma_2_Riemannian} for a scalar function $f:M\to\mathbb{R}$, we can see that the scalar Bakry--\'Emery criterion holds with constant $c=\rho^{-1}$, provided that \[ \mathfrak{g}(x)\operatorname{\mtx{Ric}}(x)\mathfrak{g}(x)\succcurlyeq \rho \mathfrak{g}(x)\quad \text{or equivalently}\quad \operatorname{\mtx{Ric}}(x) \succcurlyeq \rho \mtx{G}(x) \quad \text{for all $x\in M$}. \] That is, the eigenvalues of $\operatorname{\mtx{Ric}}$ relative to the metric $\mtx{G}$ are bounded from below by $\rho$. This is often referred to as the curvature condition $CD(\rho,\infty)$. Proposition~\ref{prop:BE_equiv} allows us to lift the scalar Bakry--{\'E}mery criterion to matrix-valued functions; we can also achieve this goal by a direct argument.
We remark that the uniform positivity of the Ricci curvature tensor also leads to a Poincar\'e inequality for the diffusion process on the manifold; see \cite[Section 4.8]{bakry2013analysis}. Therefore, Proposition~\ref{prop:matrix_poincare} implies that the associated Markov semigroup is ergodic in the sense of \eqref{eqn:ergodicity}.
As a typical example, consider the $n$-dimensional unit sphere $\mathbb{S}^n \subset \mathbbm{R}^{n+1}$, equipped with the induced Riemannian structure. The associated Riemannian measure is the uniform distribution. For the sphere, the Ricci curvature tensor is constant: $\operatorname{\mtx{Ric}} = (n-1)\mtx{G}$; see \cite[Section 2.2]{bakry2013analysis}. Therefore, the Brownian motion on $\mathbb{S}^n$ satisfies a Bakry--\'Emery criterion \eqref{Bakry-Emery} with $c = (n-1)^{-1}$ for $n\geq 2$.
Next, consider the special orthogonal group $\mathrm{SO}(n) \subset \mathbbm{R}^{n \times n}$ with the induced Riemannian structure. The canonical measure is the Haar probability measure. For this manifold, it is known that the eigenvalues of the Ricci tensor are bounded below by $\rho = (n-1)/4$; see~\cite[p.~27]{ledoux2001concentration}. Therefore, the special orthogonal group $\mathrm{SO}(n)$ satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} with $c = 4/(n-1)$.
There are many other Riemannian manifolds where a lower bound on the Ricci curvature is available. We refer the reader to~\cite[Sec.~2.2.1]{ledoux2001concentration} for more examples and references.
\subsection{Calculations of carr{\'e} du champ operators}\label{sec:Riemannian_gamma} In this section, we provide calculations of carr{\'e} du champ operators for the concrete examples in Section~\ref{sec:riemann-exp}.
\subsubsection{Example~\ref{example:sphere_I}: Sphere I}
In this example, we consider the unit sphere $\mathbb{S}^n \subset \mathbbm{R}^{n+1}$ as a Riemannian submanifold of $\mathbbm{R}^{n+1}$ for $n \geq 2$. The canonical Riemannian measure is the uniform probability measure $\sigma_n$ on the sphere.
Let $(\mtx{A}_1, \dots, \mtx{A}_{n+1}) \subset \mathbb{H}_d$ be a fixed collection of Hermitian matrices. Draw a random vector $\vct{x} = (x_1, \dots, x_{n+1}) \in \mathbb{S}^n$ from the uniform measure; we use boldface to emphasize that $\vct{x}$ is a vector in the embedding space. Consider the matrix-valued function $$ \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1} x_i \mtx{A}_i. $$ We can use the expression~\eqref{eqn:gamma_Riemannian_alternative} to compute the carr{\'e} du champ of $\mtx{f}$.
Indeed, the ordinary gradient of $\mtx{f}$ as a function on $\mathbb{R}^{n+1}$ is given by \[\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}\mathbf{e}_i\otimes \frac{\partial \mtx{f}(\vct{x})}{\partial x_i} = \sum_{i=1}^{n+1}\mathbf{e}_i\otimes \mtx{A}_i\quad \text{for all $\vct{x}\in \mathbb{R}^{n+1}$}.\] As usual, $\{\mathbf{e}_i\}_{i=1}^{n+1}$ is the standard basis of $\mathbb{R}^{n+1}$. Define the orthogonal projection $\mathrm{Proj}_{\vct{x}} = \mathbf{I} - \vct{x}\vct{x}^\mathsf{T}$ onto the tangent space $T_{\vct{x}}\mathbb{S}^n = \{\vct{y}\in \mathbb{R}^{n+1}: \vct{y}^\mathsf{T}\vct{x}=0 \}$. Thus, the tangential gradient is the projection of the ordinary gradient onto the tangent space: \[\nabla_{\mathbb{S}^n}\mtx{f}(\vct{x}) = (\mathrm{Proj}_{\vct{x}} \otimes \mathbf{I})\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}(\mathbf{e}_i - x_i\vct{x})\otimes \mtx{A}_i.\] By the expression \eqref{eqn:gamma_Riemannian_alternative}, we can compute the carr{\'e} du champ at each point $\vct{x}\in \mathbb{S}^{n}$ as \begin{align*} \Gamma(\mtx{f})(\vct{x}) &= \sum_{i,j=1}^{n+1}(\mathbf{e}_i - x_i\vct{x})^\mathsf{T}(\mathbf{e}_j - x_j\vct{x})\cdot \mtx{A}_i\mtx{A}_j = \sum_{i,j=1}^{n+1}(\delta_{ij} - x_ix_j)\,\mtx{A}_i\mtx{A}_j \\ &= \sum_{i=1}^{n+1}\mtx{A}_i^2 - \sum_{i,j=1}^{n+1}x_ix_j\,\mtx{A}_i\mtx{A}_j = \sum_{i=1}^{n+1}\mtx{A}_i^2 - \left(\sum_{i=1}^{n+1}x_i\mtx{A}_i\right)^2. \end{align*} This calculation verifies the formula \eqref{eqn:gamma_sphere_I}. It is now evident that $$ \mtx{0} \preccurlyeq \Gamma(\mtx{f})(\vct{x}) \preccurlyeq \sum_{i=1}^{n+1}\mtx{A}_i^2 \quad\text{for all $\vct{x} \in \mathbb{S}^n$.} $$ Therefore, the variance proxy $v_{\mtx{f}} \leq \norm{ \sum_{i=1}^{n+1} \mtx{A}_i^2 }$.
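The agreement between the projection route and the closed form can be verified numerically. The sketch below (our own illustration, with arbitrarily chosen symmetric $2\times 2$ matrices on $\mathbb{S}^2\subset\mathbb{R}^3$) computes $\Gamma(\mtx{f})$ both from the tangential gradient and from the formula $\sum_i \mtx{A}_i^2 - \big(\sum_i x_i\mtx{A}_i\big)^2$:

```python
# Numerical check (our own illustration) of the Sphere I carre du champ formula
# on S^2 in R^3, with d = 2 and arbitrary symmetric 2x2 matrices A_i.
import math

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

A = [[[1.0, 2.0], [2.0, 0.0]], [[0.0, 1.0], [1.0, 3.0]], [[2.0, -1.0], [-1.0, 1.0]]]
x = [1.0 / math.sqrt(3)] * 3                # a point on the unit sphere S^2

# Tangential-gradient route: v_i = e_i - x_i * x, Gamma = sum_{ij} <v_i, v_j> A_i A_j.
v = [[(1.0 if i == j else 0.0) - x[i] * x[j] for j in range(3)] for i in range(3)]
gamma = [[0.0, 0.0], [0.0, 0.0]]
for i in range(3):
    for j in range(3):
        dot = sum(v[i][k] * v[j][k] for k in range(3))
        gamma = add(gamma, scale(dot, mul(A[i], A[j])))

# Closed form: sum_i A_i^2 - (sum_i x_i A_i)^2.
S = [[0.0, 0.0], [0.0, 0.0]]
closed = [[0.0, 0.0], [0.0, 0.0]]
for i in range(3):
    closed = add(closed, mul(A[i], A[i]))
    S = add(S, scale(x[i], A[i]))
closed = add(closed, scale(-1.0, mul(S, S)))

assert all(abs(gamma[i][j] - closed[i][j]) < 1e-9 for i in range(2) for j in range(2))
```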
\subsubsection{Example~\ref{example:sphere_II}: Sphere II}
We maintain the setup and notation from the last subsection, and we consider the matrix-valued function $$ \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}x_i^2\mtx{A}_i \quad\text{where $\vct{x} \sim \sigma_n$ on $\mathbb{S}^n$.} $$ Treating $\mtx{f}$ as a function on the embedding space $\mathbb{R}^{n+1}$, the ordinary gradient is given by \[\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}\mathbf{e}_i\otimes \frac{\partial \mtx{f}(\vct{x})}{\partial x_i} = 2\sum_{i=1}^{n+1}x_i\mathbf{e}_i\otimes \mtx{A}_i\quad \text{for all $\vct{x}\in \mathbb{R}^{n+1}$}.\] Thus, the tangential gradient of $\mtx{f}$ at a point $\vct{x}\in \mathbb{S}^{n}$ can be computed as \[\nabla_{\mathbb{S}^n}\mtx{f}(\vct{x}) = (\mathrm{Proj}_{\vct{x}} \otimes \mathbf{I})\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = 2\sum_{i=1}^{n+1}(x_i\mathbf{e}_i - x_i^2\vct{x})\otimes \mtx{A}_i.\] By the expression \eqref{eqn:gamma_Riemannian_alternative} of the carr{\'e} du champ operator, we can compute that \begin{align*} \Gamma(\mtx{f})(\vct{x}) &= 4\sum_{i,j=1}^{n+1}(x_i\mathbf{e}_i - x_i^2\vct{x})^\mathsf{T}(x_j\mathbf{e}_j - x_j^2\vct{x})\cdot \mtx{A}_i\mtx{A}_j = 4\sum_{i=1}^{n+1}x_i^2\mtx{A}_i^2 - 4\sum_{i,j=1}^{n+1}x_i^2x_j^2\,\mtx{A}_i\mtx{A}_j\\ &= 4\sum_{i,j=1}^{n+1}x_i^2x_j^2\mtx{A}_i^2 - 4\sum_{i,j=1}^{n+1}x_i^2x_j^2\,\mtx{A}_i\mtx{A}_j = 2\sum_{i,j=1}^{n+1}x_i^2x_j^2(\mtx{A}_i - \mtx{A}_j)^2. \end{align*} This establishes the formula \eqref{eqn:gamma_sphere_II}.
Using this result, we can obtain some bounds for the variance proxy. First, introduce the maximum norm difference $a:= \max_{i,j}\norm{\smash{\mtx{A}_i-\mtx{A}_j}}$. Then the carr{\'e} du champ satisfies \[\Gamma(\mtx{f})(\vct{x}) \preccurlyeq 2\sum_{i,j=1}^{n+1}x_i^2x_j^2\norm{\smash{\mtx{A}_i-\mtx{A}_j}}^2\cdot \mathbf{I}_d \preccurlyeq 2a^2\sum_{i,j=1}^{n+1}x_i^2x_j^2\cdot \mathbf{I}_d = 2a^2\mathbf{I}_d.\] Thus, $v_{\mtx{f}} \leq 2 a^2$. Here is an alternative approach. For an arbitrary matrix $\mtx{B} \in \mathbb{H}_d$, we can write \begin{align*} \Gamma(\mtx{f})(\vct{x}) &= 2\sum_{i,j=1}^{n+1}x_i^2x_j^2(\mtx{A}_i - \mtx{B} + \mtx{B} - \mtx{A}_j)^2\\ &= 4\sum_{i=1}^{n+1}x_i^2(\mtx{A}_i - \mtx{B})^2 - 4\left(\sum_{i=1}^{n+1}x_i^2\mtx{A}_i - \mtx{B}\right)^2\\ &\preccurlyeq 4\sum_{i=1}^{n+1}x_i^2(\mtx{A}_i - \mtx{B})^2 \end{align*} Defining $b:=\min_{\mtx{B}\in \mathbb{H}_d} \max_i \norm{\mtx{A}_i-\mtx{B}}$, we see that the variance proxy $v_{\mtx{f}} \leq 4 b^2$. Modulo an extra factor of two, the second bound represents a qualitative improvement over the first.
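The recentering identity behind the second bound can be confirmed numerically even for noncommuting matrices. The following sketch (our own illustration, with arbitrarily chosen symmetric $2\times 2$ matrices and a point satisfying $\sum_i x_i^2 = 1$) checks that $2\sum_{i,j} x_i^2x_j^2(\mtx{C}_i-\mtx{C}_j)^2 = 4\sum_i x_i^2\mtx{C}_i^2 - 4\big(\sum_i x_i^2\mtx{C}_i\big)^2$ with $\mtx{C}_i = \mtx{A}_i - \mtx{B}$:

```python
# Numerical sketch (our own illustration): verify the recentering identity
# used in the variance-proxy bound, for noncommuting 2x2 symmetric matrices.

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

A = [[[1.0, 2.0], [2.0, 0.0]], [[0.0, 1.0], [1.0, 3.0]], [[2.0, -1.0], [-1.0, 1.0]]]
B = [[1.0, 0.5], [0.5, -1.0]]
x = [1.0 / 3.0, 2.0 / 3.0, 2.0 / 3.0]       # squares sum to 1

C = [add(A[i], scale(-1.0, B)) for i in range(3)]
lhs = [[0.0, 0.0], [0.0, 0.0]]
for i in range(3):
    for j in range(3):
        D = add(C[i], scale(-1.0, C[j]))
        lhs = add(lhs, scale(2.0 * x[i] ** 2 * x[j] ** 2, mul(D, D)))

S = [[0.0, 0.0], [0.0, 0.0]]
quad = [[0.0, 0.0], [0.0, 0.0]]
for i in range(3):
    S = add(S, scale(x[i] ** 2, C[i]))
    quad = add(quad, scale(4.0 * x[i] ** 2, mul(C[i], C[i])))
rhs = add(quad, scale(-4.0, mul(S, S)))

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
```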
\subsubsection{Example~\ref{example:SO_d}: Special orthogonal group}
Let $(\mtx{A}_1, \dots, \mtx{A}_n) \subset \mathbb{H}_d(\mathbbm{R})$ be fixed, real, symmetric matrices. Draw $(\mtx{O}_1, \dots, \mtx{O}_n) \subset \mathrm{SO}(d)$ independently and uniformly from the Haar measure on the special orthogonal group $\mathrm{SO}(d)$. Consider the random matrix $$ \mtx{f}(\mtx{O}_1, \dots, \mtx{O}_n) = \sum_{i=1}^n \mtx{O}_i \mtx{A}_i \mtx{O}_i^\mathsf{T}. $$ To study this random matrix model, we will use local geodesic/normal coordinates on the product manifold $\mathrm{SO}(d)^{\otimes n}$ to compute the carr{\'e} du champ; for example, see~\cite[Sec. 5]{lee2018introduction} \& \cite[Sec. 3]{hall2015lie}. Since $\mathrm{SO}(d)^{\otimes n}$ is a Lie group, we only need to consider the geodesic frame of the tangent space at the identity element $(\mathbf{I}_d, \dots, \mathbf{I}_d)$.
For each $1\leq k<l\leq d$, let $\mtx{S}_{kl}\in \mathbb{M}_d$ be the unit skew-symmetric matrix: \[(\mtx{S}_{kl})_{kl} = 1/\sqrt{2}\quad\text{and}\quad (\mtx{S}_{kl})_{lk} = -1/\sqrt{2}\quad \text{and all other entries of $\mtx{S}_{kl}$ are zero}.\] Define the tangent vectors \[\mtx{V}_{kl}^i = \underbrace{(\mtx{0},\dots,\mtx{S}_{kl},\dots,\mtx{0})}_{\text{The $i$th coordinate is $\mtx{S}_{kl}$}} \quad \text{for $i=1,\dots,n$ and $1\leq k<l\leq d$}.\] Then $( \mtx{V}_{kl}^i : 1\leq i\leq n \text{ and } 1\leq k<l\leq d )$ forms an orthonormal basis for the tangent space at the identity element of the Lie group $\mathrm{SO}(d)^{\otimes n}$, with respect to the Hilbert--Schmidt inner product: \[\langle (\mtx{P}_1,\dots,\mtx{P}_n), (\mtx{Q}_1,\dots,\mtx{Q}_n)\rangle_\mathrm{HS} = \sum_{i=1}^n \operatorname{tr}[\mtx{P}_i^*\mtx{Q}_i]\quad \text{for $\mtx{P}_1,\dots,\mtx{P}_n,\mtx{Q}_1,\dots,\mtx{Q}_n\in\mathbb{M}_d$}.\] This basis $\{\mtx{V}_{kl}^i\}_{1\leq i\leq n,1\leq k<l\leq d}$ can be translated to an orthonormal basis of the tangent space at another point $(\mtx{O}_1, \dots, \mtx{O}_n)$ by the group operation: $(\mtx{0},\dots,\mtx{S}_{kl},\dots,\mtx{0})\mapsto (\mtx{0},\dots,\mtx{S}_{kl}\mtx{O}_i,\dots,\mtx{0})$.
Now, for each $(\mtx{O}_1, \dots, \mtx{O}_n)\in\mathrm{SO}(d)^{\otimes n}$, consider the local geodesic map corresponding to the direction $\mtx{V}_{kl}^i$: \[(\mtx{O}_1, \dots, \mtx{O}_i, \dots, \mtx{O}_n) \mapsto (\mtx{O}_1, \dots, \mathrm{e}^{\varepsilon\mtx{S}_{kl}}\mtx{O}_i, \dots, \mtx{O}_n)\quad \text{for some small $\varepsilon\geq0$}.\] Then the directional derivative of $\mtx{f}$ in local geodesic coordinates, evaluated at the point $(\mtx{O}_1,\dots,\mtx{O}_n)$ where $\varepsilon = 0$, is given by \[\frac{\partial \mtx{f}}{ \partial \mtx{V}_{kl}^i}(\mtx{O}_1, \dots, \mtx{O}_n) = \mtx{S}_{kl}\mtx{O}_i\mtx{A}_i\mtx{O}_i^\mathsf{T} - \mtx{O}_i\mtx{A}_i\mtx{O}_i^\mathsf{T} \mtx{S}_{kl} =: \mtx{S}_{kl}\mtx{B}_i - \mtx{B}_i\mtx{S}_{kl},\] where $\mtx{B}_i := \mtx{O}_i\mtx{A}_i\mtx{O}_i^\mathsf{T}$. In local geodesic coordinates, the co-metric tensor $\mathfrak{g}$ at the origin equals the identity. Using the formula \eqref{eqn:gamma_Riemannian}, we can compute the carr{\'e} du champ as \begin{align*} \Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n) &=\sum_{i=1}^n\sum_{1\leq k<l\leq d} \left(\frac{\partial \mtx{f}}{ \partial \mtx{V}_{kl}^i}\right)^2 = \sum_{i=1}^n\sum_{1\leq k<l\leq d}\left(\mtx{S}_{kl}\mtx{B}_i - \mtx{B}_i\mtx{S}_{kl}\right)^2\\ &= \sum_{i=1}^n\sum_{1\leq k<l\leq d}\left(-\mtx{S}_{kl}\mtx{B}_i^2\mtx{S}_{kl} - \mtx{B}_i\mtx{S}_{kl}^2\mtx{B}_i + \mtx{S}_{kl}\mtx{B}_i\mtx{S}_{kl}\mtx{B}_i + \mtx{B}_i\mtx{S}_{kl}\mtx{B}_i\mtx{S}_{kl}\right). 
\end{align*} It is not hard to check that, for any real matrix $\mtx{M}\in \mathbb{M}_d(\mathbb{R})$, \[\sum_{1\leq k<l\leq d} \mtx{S}_{kl}\mtx{M}\mtx{S}_{kl} = -\frac{1}{2}(\operatorname{tr}[\mtx{M}] \cdot \mathbf{I}_d - \mtx{M}^\mathsf{T}).\] Therefore, we can obtain that \begin{align*} \Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n) &= \frac{1}{2}\sum_{i=1}^n\left( \operatorname{tr}[\mtx{B}_i^2] \cdot \mathbf{I}_d - \mtx{B}_i^2 + (d-1)\mtx{B}_i^2 - (\operatorname{tr}[\mtx{B}_i]\cdot \mathbf{I}_d - \mtx{B}_i)\mtx{B}_i - \mtx{B}_i(\operatorname{tr}[\mtx{B}_i]\cdot \mathbf{I}_d - \mtx{B}_i) \right) \\ &= \frac{1}{2}\sum_{i=1}^n\left( \operatorname{tr}[\mtx{B}_i^2]\cdot \mathbf{I}_d + d\cdot \mtx{B}_i^2 - 2\operatorname{tr}[\mtx{B}_i]\cdot \mtx{B}_i\right)\\ &= \frac{1}{2}\sum_{i=1}^n\mtx{O}_i\left( \operatorname{tr}[\mtx{A}_i^2]\cdot \mathbf{I}_d + d\cdot \mtx{A}_i^2 - 2\operatorname{tr}[\mtx{A}_i]\cdot \mtx{A}_i\right)\mtx{O}_i^\mathsf{T}\\ &= \frac{1}{2}\sum_{i=1}^n\mtx{O}_i\left\{ \left(\operatorname{tr}[\mtx{A}_i^2]-\frac{\operatorname{tr}[\mtx{A}_i]^2}{d}\right)\cdot \mathbf{I}_d + d\cdot \left(\mtx{A}_i - \frac{\operatorname{tr}[\mtx{A}_i]}{d}\cdot \mathbf{I}_d \right)^2\right\}\mtx{O}_i^\mathsf{T}. \end{align*} This justifies the formula \eqref{eqn:gamma_SO_d}. Since each $\mtx{O}_i$ is an orthogonal matrix, the variance proxy satisfies \begin{align*} v_{\mtx{f}} &= \max\nolimits_{\mtx{O}_i} \norm{\Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n)} \\ &\leq \frac{1}{2}\sum_{i=1}^n\norm{ \left(\operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2\right)\cdot \mathbf{I}_d + d\cdot \left(\mtx{A}_i - d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d \right)^2 }\\ &= \frac{1}{2}\sum_{i=1}^n \left( \operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2 + d\cdot \norm{\mtx{A}_i - d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d }^2 \right). 
\end{align*} Note that this bound is sharp because we can always choose a particular point $(\mtx{O}_1, \dots, \mtx{O}_n)$ to achieve equality. \subsection{Matrix concentration results}\label{sec:concentration_results_Riemannian} Finally, we derive Theorem~\ref{thm:riemann-simple} from Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_concentration}.
Consider a compact $n$-dimensional Riemannian submanifold $M$ of a Euclidean space. The uniform measure $\mu$ on $M$ is the stationary measure of the associated Brownian motion on $M$. As discussed in Section~\ref{sec:BE_Riemannian}, the Brownian motion satisfies a Bakry--\'Emery criterion with constant $c=\rho^{-1}$ if the eigenvalues of the Ricci curvature tensor are bounded below by $\rho$. We then apply Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_concentration} with $c=\rho^{-1}$ to obtain the matrix concentration inequalities in Theorem~\ref{thm:riemann-simple}.
For any point $x\in M$, we can compute the carr\'e du champ $\Gamma(\mtx{f})(x)$ in local normal coordinates centered at $x$. In this case, the co-metric tensor $\mathfrak{g}$ is the identity matrix $\mathbf{I}_n$ when evaluated at $x$. The expression of the variance proxy $v_{\mtx{f}}$ in Theorem~\ref{thm:riemann-simple} then follows from formula \eqref{eqn:gamma_Riemannian} of the carr\'e du champ operator.
\appendix
\section{Matrix moments and concentration} \label{apdx:matrix_moments}
For reference, this appendix summarizes a few standard results on matrix moments and concentration. Proposition~\ref{prop:matrix_Chebyshev} explains how to transfer the polynomial moment bounds in Theorem~\ref{thm:polynomial_moment} into matrix concentration inequalities. Proposition~\ref{prop:trace_mgf} states some properties of the trace mgf that are used in the proof of Theorem~\ref{thm:exponential_moment}. Proposition~\ref{prop:matrix_exponential_concentration} allows us to derive the exponential concentration inequalities in Theorem~\ref{thm:exponential_concentration} from the exponential moment bounds in Theorem~\ref{thm:exponential_moment}.
\subsection{The matrix Chebyshev inequality} We can obtain concentration inequalities for a random matrix given bounds for the polynomial trace moments. This result extends Chebyshev's probability inequality. For instance, see \cite[Proposition 6.2]{mackey2014}.
\begin{proposition}[Matrix Chebyshev inequality]\label{prop:matrix_Chebyshev} Let $\mtx{X}\in\mathbb{H}_d$ be a random matrix. For all $t\geq 0$,
\[\Prob{\|\mtx{X}\|\geq t}\leq \inf_{q\geq 1}t^{-q}\cdot \Expect\operatorname{tr}|\mtx{X}|^q.\] Furthermore,
\[\Expect\|\mtx{X}\|\leq \inf_{q\geq 1}\left(\Expect\operatorname{tr}|\mtx{X}|^q\right)^{1/q}.\] \end{proposition}
As mentioned in Section~\ref{sec:main_results}, Proposition~\ref{prop:matrix_Chebyshev} can be applied to the polynomial moment bounds in Theorem~\ref{thm:polynomial_moment} to yield subgaussian concentration inequalities.
\subsection{The matrix Laplace transform method} We can also obtain exponential concentration inequalities via the matrix Laplace transform. Let $\mtx{X}\in\mathbb{H}_d$ be a random matrix. The normalized trace moment generating function (mgf) of $\mtx{X}$ is defined as \[m(\theta):= \Expect\operatorname{\bar{\trace}} \mathrm{e}^{\theta\mtx{X}},\quad \text{for}\ \theta\in \mathbb{R}.\] This definition is due to Ahlswede and Winter \cite{ahlswede2002strong}. In the proof of Theorem~\ref{thm:exponential_moment}, we have used some properties of the trace mgf given in the following proposition, which restates \cite[Lemma 12.3]{paulin2016efron}.
\begin{proposition}[Properties of the trace mgf]\label{prop:trace_mgf} Assume that $\mtx{X}\in\mathbb{H}_d$ is a zero-mean random matrix that is bounded in norm. Define the normalized trace mgf $m(\theta) := \Expect \operatorname{\bar{\trace}} \mathrm{e}^{\theta\mtx{X}}$ for $\theta\in\mathbb{R}$. Then \begin{equation}\label{eqn:m.g.f_Property_1} \log m(\theta) \geq 0 \quad \text{and} \quad \log m(0) = 0. \end{equation} The derivative of the trace mgf satisfies \begin{equation*}\label{eqn:m.g.f_Property_2} m'(\theta) = \Expect \operatorname{\bar{\trace}}\left[\mtx{X}\,\mathrm{e}^{\theta\mtx{X}}\right] \quad \text{and} \quad m'(0) = 0. \end{equation*} The trace mgf is a convex function; in particular \begin{equation*}\label{eqn:m.g.f_Property_3} m'(\theta) \leq 0\quad \text{for}\quad \theta\leq 0 \quad \text{and} \quad m'(\theta) \geq 0\quad \text{for}\quad \theta\geq 0. \end{equation*} \end{proposition}
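These properties are easy to observe on a toy example. The sketch below (our own illustration) takes $\mtx{X} = \pm\mtx{A}$ with equal probability for $\mtx{A} = \operatorname{diag}(1,-2)$, so that $\Expect\mtx{X} = \mtx{0}$, and checks the stated properties of $m$ numerically:

```python
import math

# Numerical sketch (our own illustration): trace-mgf properties for the
# zero-mean random matrix X = +A or -A with probability 1/2 each, where
# A = diag(1, -2). Then m(theta) = (1/2) tr-bar e^{theta A} + (1/2) tr-bar e^{-theta A}.

def m(theta):
    tr_bar_exp = lambda s: 0.5 * (math.exp(s * 1.0) + math.exp(s * -2.0))
    return 0.5 * tr_bar_exp(theta) + 0.5 * tr_bar_exp(-theta)

# log m >= 0 everywhere and log m(0) = 0:
assert all(math.log(m(t)) >= -1e-15 for t in (-1.0, -0.3, 0.0, 0.4, 2.0))
assert abs(m(0.0) - 1.0) < 1e-15

# m'(0) = 0 and m' has the sign of theta (central finite differences):
h = 1e-6
deriv = lambda t: (m(t + h) - m(t - h)) / (2 * h)
assert abs(deriv(0.0)) < 1e-5
assert deriv(-0.5) < 0 < deriv(0.5)
```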
Using the matrix Laplace transform method, one can convert estimates on the trace mgf into bounds on the extreme eigenvalues of a random matrix. For example, see~\cite[Proposition 3.3]{mackey2014}. In particular, having an explicit bound on the trace mgf, we can obtain concrete estimates on the maximum eigenvalue. See \cite[Section 4.2.4]{mackey2014} for a proof.
\begin{proposition}\label{prop:matrix_exponential_concentration} Let $\mtx{X}\in\mathbb{H}_d$ be a random matrix with normalized trace mgf $m(\theta):= \Expect\operatorname{\bar{\trace}} \mathrm{e}^{\theta\mtx{X}}$. Assume that there are constants $c_1,c_2 \geq 0$ for which \[\log m(\theta) \leq \frac{c_1\theta^2}{2(1-c_2\theta)}\quad \text{when}\ 0\leq \theta<\frac{1}{c_2}.\] Then for all $t\geq0$, \[\Prob{\lambda_{\max}(\mtx{X})\geq t} \leq d\cdot \exp\left(\frac{-t^2}{2c_1+2c_2t}\right).\] Furthermore, \[\Expect\lambda_{\max}(\mtx{X})\leq \sqrt{2c_1\log d} + c_2\log d.\] \end{proposition}
We have applied Proposition~\ref{prop:matrix_exponential_concentration} to the trace mgf bounds in Theorem~\ref{thm:exponential_moment} to derive exponential concentration inequalities such as those in Theorem~\ref{thm:exponential_concentration}.
\section{Mean value trace inequality} \label{apdx:mean_value}
In this section, we establish the mean value trace inequality, Lemma~\ref{lem:mean_value_inequality}. This result is a generalization of~\cite[Lemmas 9.2 and 12.2]{paulin2016efron}. The proof is similar in spirit, but it uses some additional ingredients from matrix analysis.
The key idea is to use tensorization to lift a pair of noncommuting matrices to a pair of commuting tensors. This step gives us access to tools that are not available for general matrices. For any two Hermitian matrices $\mtx{X},\mtx{Y}\in \mathbb{H}_d$, define a linear operator $\mtx{X}\otimes \mtx{Y} : \mathbb{M}_d \to \mathbb{M}_d$ whose action is given by \[(\mtx{X}\otimes \mtx{Y})(\mtx{Z}) = \mtx{X}\mtx{Z}\mtx{Y}\quad \text{for all}\ \mtx{Z}\in \mathbb{M}_d.\] The linear operator $\mtx{X}\otimes \mtx{Y}$ is self-adjoint with respect to the standard inner product on $\mathbb{M}_d$: \[\langle(\mtx{X}\otimes \mtx{Y})(\mtx{Z}_1),\mtx{Z}_2\rangle_{\mathbb{M}_d} = \operatorname{tr}\left[\mtx{Y}\mtx{Z}_1^*\mtx{X}\mtx{Z}_2\right] = \operatorname{tr}\left[\mtx{Z}_1^*\mtx{X}\mtx{Z}_2\mtx{Y}\right] = \langle\mtx{Z}_1,(\mtx{X}\otimes \mtx{Y})(\mtx{Z}_2)\rangle_{\mathbb{M}_d}\quad \text{for all}\ \mtx{Z}_1,\mtx{Z}_2\in \mathbb{M}_d.\] Therefore, for any function $\varphi:\mathbb{R}\rightarrow \mathbb{R}$, we can define the tensor function $\varphi(\mtx{X}\otimes \mtx{Y})$ using the spectral resolution of $\mtx{X} \otimes \mtx{Y}$. It is not hard to check that \[\varphi(\mtx{X}\otimes \mathbf{I}) = \varphi(\mtx{X})\otimes \mathbf{I}\quad \text{and}\quad \varphi(\mathbf{I}\otimes \mtx{Y}) = \mathbf{I}\otimes \varphi(\mtx{Y}).\] Note that the tensors $\mtx{X}\otimes\mathbf{I}$ and $\mathbf{I}\otimes\mtx{Y}$ commute with each other, regardless of whether $\mtx{X}$ and $\mtx{Y}$ commute.
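The commutativity can be checked directly from the definition of the tensor action; applying both compositions to an arbitrary matrix gives the same result:

```latex
\[
\big( (\mtx{X}\otimes \mathbf{I})(\mathbf{I}\otimes \mtx{Y}) \big)(\mtx{Z})
  = (\mtx{X}\otimes \mathbf{I})(\mtx{Z}\mtx{Y})
  = \mtx{X}\mtx{Z}\mtx{Y}
  = (\mathbf{I}\otimes \mtx{Y})(\mtx{X}\mtx{Z})
  = \big( (\mathbf{I}\otimes \mtx{Y})(\mtx{X}\otimes \mathbf{I}) \big)(\mtx{Z}),
\]
% valid for every Z: left multiplication by X and right multiplication by Y
% always commute, even when XY differs from YX.
```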
\begin{proof}[Proof of Lemma~\ref{lem:mean_value_inequality}] We can write \begin{align*} \varphi(\mtx{A})-\varphi(\mtx{B}) &= \big(\varphi(\mtx{A})\otimes \mathbf{I} -\mathbf{I}\otimes \varphi(\mtx{B})\big) (\mathbf{I})\\ &= \big(\varphi(\mtx{A}\otimes \mathbf{I}) -\varphi(\mathbf{I}\otimes \mtx{B})\big) (\mathbf{I}) = \int_0^1\frac{\diff{} }{\diff \tau}\varphi\big(\tau \mtx{A}\otimes \mathbf{I} + (1-\tau) \mathbf{I}\otimes \mtx{B}\big) (\mathbf{I}) \idiff \tau. \end{align*} Since $\mtx{A}\otimes \mathbf{I}$ commutes with $\mathbf{I} \otimes \mtx{B}$, we have \[\frac{\diff{} }{\diff \tau}\varphi\big(\tau \mtx{A}\otimes \mathbf{I} + (1-\tau) \mathbf{I}\otimes \mtx{B}\big) = \varphi'\big(\tau \mtx{A}\otimes \mathbf{I} + (1-\tau) \mathbf{I}\otimes \mtx{B}\big) (\mtx{A}\otimes \mathbf{I}- \mathbf{I}\otimes \mtx{B}).\] As a consequence, \begin{align*} \varphi(\mtx{A})-\varphi(\mtx{B}) &= \int_0^1\varphi'\big(\tau \mtx{A}\otimes \mathbf{I} + (1-\tau) \mathbf{I}\otimes \mtx{B}\big) (\mtx{A}\otimes \mathbf{I}- \mathbf{I}\otimes \mtx{B})(\mathbf{I}) \idiff \tau \\ &= \int_0^1\varphi'\big(\tau \mtx{A}\otimes \mathbf{I} + (1-\tau) \mathbf{I}\otimes \mtx{B}\big)(\mtx{A}- \mtx{B}) \idiff \tau =: \int_0^1\mathcal{M}_{\tau}(\mtx{A},\mtx{B})(\mtx{A}- \mtx{B})\idiff \tau. \end{align*} Since $\mathcal{M}_{\tau}(\mtx{A},\mtx{B})$ is a self-adjoint linear operator on the Hilbert space $\mathbb{M}_d$, we can apply the operator Cauchy--Schwarz inequality~\cite[Lemma A.2]{paulin2016efron}. 
For any $s>0$, \begin{multline}\label{step:mean_value_1} \operatorname{tr}\left[\mtx{C} \,\big(\varphi(\mtx{A})-\varphi(\mtx{B})\big)\right] = \ip{ \mtx{C} }{ \varphi(\mtx{A})-\varphi(\mtx{B})}_{\mathbb{M}_d} = \int_0^1 \ip{ \mtx{C} }{ \mathcal{M}_{\tau}(\mtx{A},\mtx{B})(\mtx{A}- \mtx{B}) }_{\mathbb{M}_d} \idiff \tau \\ \leq \int_0^1\left[ \frac{s}{2} \ip{ \mtx{A}-\mtx{B} }{ \abs{\mathcal{M}_{\tau}(\mtx{A},\mtx{B})}(\mtx{A}-\mtx{B}) }_{\mathbb{M}_d} + \frac{s^{-1}}{2} \ip{ \mtx{C} }{ \abs{\mathcal{M}_{\tau}(\mtx{A},\mtx{B})}(\mtx{C})}_{\mathbb{M}_d}\right] \idiff \tau. \end{multline} By assumption, $\psi := \abs{\varphi'}$ is convex. Thus, for all $\tau\in[0,1]$, \begin{align*} \abs{\mathcal{M}_{\tau}(\mtx{A},\mtx{B})} &= \abs{ \varphi'\big(\tau \mtx{A}\otimes \mathbf{I} + (1-\tau) \mathbf{I}\otimes \mtx{B}\big)} = \psi\big(\tau \mtx{A}\otimes \mathbf{I} + (1-\tau) \mathbf{I}\otimes \mtx{B}\big)\\ &\preccurlyeq \tau \cdot \psi\left(\mtx{A}\otimes \mathbf{I}\right) + (1-\tau)\cdot \psi\left(\mathbf{I}\otimes \mtx{B}\right) = \tau\cdot \psi(\mtx{A})\otimes \mathbf{I} + (1-\tau)\cdot \mathbf{I}\otimes \psi(\mtx{B}). \end{align*} The argument above depends on the commutativity of $\mtx{A}\otimes \mathbf{I}$ and $\mathbf{I}\otimes \mtx{B}$, which means that we do not need $\psi$ to be operator convex. Hence, for any $\mtx{Z}\in\mathbb{M}_d$, \begin{multline}\label{step:mean_value_2} \int_0^1\ip{ \mtx{Z} }{ \abs{\mathcal{M}_{\tau}(\mtx{A},\mtx{B})}(\mtx{Z}) }_{\mathbb{M}_d} \idiff \tau \leq \int_0^1 \ip{ \mtx{Z} }{ \big(\tau\cdot \psi(\mtx{A})\otimes \mathbf{I} + (1-\tau)\cdot \mathbf{I}\otimes \psi(\mtx{B})\big)(\mtx{Z})}_{\mathbb{M}_d} \idiff \tau\\ = \frac{1}{2}\big( \ip{ \mtx{Z} }{ \psi(\mtx{A}) \,\mtx{Z} }_{\mathbb{M}_d} + \ip{ \mtx{Z} }{ \mtx{Z}\,\psi(\mtx{B}) }_{\mathbb{M}_d} \big) = \frac{1}{2}\big(\operatorname{tr}\left[\mtx{Z}\mtx{Z}^*\, \psi(\mtx{A}) \right] + \operatorname{tr}\left[\mtx{Z}^*\mtx{Z} \,\psi(\mtx{B}) \right]\big). 
\end{multline} Applying \eqref{step:mean_value_2} to \eqref{step:mean_value_1}, substituting $\mtx{Z} = \mtx{A}-\mtx{B}$ and $\mtx{Z} = \mtx{C}$ in turn, we arrive at \[\operatorname{tr}\left[\mtx{C} \, (\varphi(\mtx{A})-\varphi(\mtx{B}))\right]\leq \frac{1}{4} \operatorname{tr}\left[\left(s\,(\mtx{A}-\mtx{B})^2+ s^{-1}\,\mtx{C}^2\right)\big(\psi(\mtx{A}) + \psi(\mtx{B}) \big) \right].\] Optimize over $s>0$ to achieve the stated result. \end{proof}
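The closing optimization is elementary. Writing $a$ and $b$ for the two traces (these abbreviations are ours; we have not restated Lemma~\ref{lem:mean_value_inequality}, but its conclusion is presumably this geometric-mean form), the bound has the shape $\tfrac14(s\,a + s^{-1}b)$ and AM--GM gives its infimum:

```latex
% With a := tr[(A-B)^2 (psi(A)+psi(B))] >= 0 and b := tr[C^2 (psi(A)+psi(B))] >= 0:
\[
\inf_{s>0}\ \frac{1}{4}\left( s\,a + s^{-1} b \right)
  \;=\; \frac{1}{2}\sqrt{ab},
\qquad \text{attained at } s = \sqrt{b/a} \ \text{ when } a > 0 .
\]
```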
\section{Connection with Stein's method} \label{apdx:Stein_method}
There is an established approach to proving matrix concentration inequalities using the method of exchangeable pairs; see~\cite{chatterjee2005concentration} for the scalar setting and \cite{mackey2014,paulin2016efron} for matrix extensions. As mentioned in Section~\ref{sec:concentration_history}, the approach in \cite[Sections 10--11]{paulin2016efron} implicitly relies on a discrete version of the local ergodicity condition. A limiting version of this argument can also be used to derive the results in our paper. This appendix details the connection.
Given a reversible, exponentially ergodic Markov process $(Z_t)_{t\geq 0}$ with a stationary measure $\mu$, one can construct an exchangeable pair as follows. Fix a time $t > 0$. Let $Z$ be drawn from the measure $\mu$, and let $\tilde{Z} = Z_t$ where $Z_0=Z$. By reversibility, it is easy to check that $(Z,\tilde{Z})$ is an exchangeable pair; that is, $(Z,\tilde{Z})$ has the same distribution as $(\tilde{Z},Z)$.
For a zero-mean function $\mtx{f}:\Omega\rightarrow \mathbb{H}_d$, define the function $\mtx{g}_t: \Omega\rightarrow \mathbb{H}_d$ by \[\mtx{g}_t = \left(\frac{P_0-P_t}{t}\right)^{-1}\mtx{f} = t\sum_{k=0}^\infty P_{kt}\mtx{f}.\] Then $(\mtx{f}(Z),\mtx{f}(\tilde{Z}))$ is a \emph{kernel Stein pair} associated with the kernel \[\mtx{K}_t(z,\tilde{z}) = \frac{\mtx{g}_t(z) - \mtx{g}_t(\tilde{z})}{t}\quad\text{for all $z,\tilde{z}\in \Omega$.} \] By construction, for all $z, \tilde{z} \in \Omega$, \begin{gather}\label{eqn:kernel_property_1} \mtx{K}_t(z,\tilde{z}) = -\mtx{K}_t(\tilde{z},z); \\ \label{eqn:kernel_property_2}
\Expect\left[\mtx{K}_t(Z,\tilde{Z})\,|\,Z = z\right] = \mtx{f}(z). \end{gather} This construction is inspired by Stein's work~\cite{stein1986approximate}; see Chatterjee's PhD thesis~\cite[Section 4.1]{chatterjee2005concentration}. One consequence of the properties \eqref{eqn:kernel_property_1} and \eqref{eqn:kernel_property_2} is the identity \begin{equation}\label{eqn:kernel_identity} \Expect[\mtx{f}(Z) \, \varphi(\mtx{f}(Z))] = \frac{1}{2}\Expect\left[\mtx{K}_t(Z,\tilde{Z})\left(\varphi(\mtx{f}(Z))-\varphi(\mtx{f}(\tilde{Z}))\right)\right], \end{equation}
which holds for any measurable function $\varphi:\mathbb{H}_d\rightarrow\mathbb{H}_d$ that satisfies the regularity condition $\|\mtx{K}_t(Z,\tilde{Z})\,\varphi(\mtx{f}(Z))\|<+\infty$ almost surely. Paulin et al.~\cite{paulin2016efron} use~\eqref{eqn:kernel_identity} to establish matrix Efron--Stein inequalities, much in the same way that we derive Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_moment}.
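For completeness, here is a two-line sketch (ours) of how \eqref{eqn:kernel_identity} follows from the kernel properties and exchangeability:

```latex
\begin{align*}
\Expect\left[ \mtx{f}(Z)\,\varphi(\mtx{f}(Z)) \right]
  &= \Expect\left[ \mtx{K}_t(Z,\tilde{Z})\,\varphi(\mtx{f}(Z)) \right]
    && \text{by \eqref{eqn:kernel_property_2} and the tower rule;} \\
\Expect\left[ \mtx{f}(Z)\,\varphi(\mtx{f}(Z)) \right]
  &= -\Expect\left[ \mtx{K}_t(Z,\tilde{Z})\,\varphi(\mtx{f}(\tilde{Z})) \right]
    && \text{by exchangeability of $(Z,\tilde{Z})$ and \eqref{eqn:kernel_property_1}.}
\end{align*}
% Averaging the two displayed identities yields the kernel identity.
```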
The approach we undertake in this paper is not exactly parallel with the approach in Paulin et al.~\cite{paulin2016efron}. Let us elaborate. Take the limit of $\mtx{g}_t$ as $t \downarrow 0$, using $\mathcal{L} = \lim_{t\downarrow0}(P_t-P_0)/t$. We get \begin{equation}\label{eqn:inverse_L} \mtx{g}_0 = (-\mathcal{L})^{-1}\mtx{f} = \int_{0}^\infty P_t \mtx{f} \idiff t. \end{equation} Indeed, by ergodicity, one can check that \[\mtx{f} = \mtx{f}-\Expect_{\mu}\mtx{f} = P_0\mtx{f}-P_\infty\mtx{f} = -\int_{0}^\infty \frac{\diff{} }{\diff t}P_t\mtx{f} \idiff t =-\mathcal{L} \int_{0}^\infty P_t\mtx{f} \idiff t = -\mathcal{L} \mtx{g}_0\quad \text{in $L_2(\mu)$}.\] Consequently, we have \begin{equation}\label{eqn:limit_kernel_identity} \Expect_{\mu} [\mtx{f}\,\varphi(\mtx{f})] = -\Expect_{\mu} \left[\mathcal{L}(\mtx{g}_0)\, \varphi(\mtx{f})\right] = \Expect_{\mu} \Gamma(\mtx{g}_0,\varphi(\mtx{f})). \end{equation} The identity \eqref{eqn:kernel_identity} is just a discrete version of the formula~\eqref{eqn:limit_kernel_identity}. In contrast, the argument in this paper is based on the identity \[ \Expect_{\mu} [\mtx{f}\,\varphi(\mtx{f})]
= \int_0^{\infty} \operatorname{\mathbbm{E}}_{\mu} \Gamma( P_t \mtx{f}, \varphi(\mtx{f}) ) \idiff{t}. \] The integral is not in the same place! Our approach is technically a bit simpler because it does not require us to justify the convergence of the integral~\eqref{eqn:inverse_L}. Nevertheless, our work is strongly inspired by the tools and techniques developed by Paulin et al.~\cite{paulin2016efron} in the discrete setting.
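The identity used in this paper can itself be obtained by a semigroup interpolation; assuming enough regularity to differentiate under the expectation, a sketch (ours) reads:

```latex
\[
\Expect_{\mu}\left[ \mtx{f}\,\varphi(\mtx{f}) \right]
  = -\int_0^{\infty} \frac{\diff{}}{\diff t}\, \Expect_{\mu}\left[ (P_t\mtx{f})\,\varphi(\mtx{f}) \right] \idiff t
  = -\int_0^{\infty} \Expect_{\mu}\left[ \mathcal{L}(P_t\mtx{f})\,\varphi(\mtx{f}) \right] \idiff t
  = \int_0^{\infty} \Expect_{\mu}\, \Gamma\big( P_t\mtx{f}, \varphi(\mtx{f}) \big) \idiff t ,
\]
% using P_0 f = f and P_\infty f = E_mu[f] = 0 in the first step, the forward equation
% d/dt P_t = L P_t in the second, and the integration-by-parts property of the
% carre du champ Gamma in the third.
```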
\section*{Acknowledgments}
We thank Ramon van Handel for his feedback on an early version of this manuscript. He is responsible for the observation and proof that matrix Poincar{\'e} inequalities are equivalent to scalar Poincar{\'e} inequalities, and we are grateful to him for allowing us to incorporate these ideas.
DH was funded by NSF grants DMS-1907977 and DMS-1912654. JAT gratefully acknowledges funding from ONR awards N00014-17-12146 and N00014-18-12363, and he would like to thank his family for their support in these difficult times.
\def\footnotesize{\footnotesize}
\newcommand{\etalchar}[1]{$^{#1}$}
\end{document}
\begin{document}
\title[Matrix factorizations and intermediate Jacobians]{Matrix factorizations and intermediate Jacobians of cubic threefolds}
\author[B\"ohning]{Christian B\"ohning}\thanks{The first author was supported by the EPSRC New Horizons Grant EP/V047299/1.} \address{Christian B\"ohning, Mathematics Institute, University of Warwick\\ Coventry CV4 7AL, England} \email{[email protected]}
\author[von Bothmer]{Hans-Christian Graf von Bothmer} \address{Hans-Christian Graf von Bothmer, Fachbereich Mathematik der Universit\"at Hamburg\\ Bundesstra\ss e 55\\ 20146 Hamburg, Germany} \email{[email protected]}
\author[Buhr]{Lukas Buhr} \address{Lukas Buhr, Institut f\"ur Mathematik\\ Johannes Gutenberg-Universit\"at Mainz\\ Staudingerweg 9\\ 55128 Mainz, Germany} \email{[email protected]}
\date{\today}
\begin{abstract} Results due to Druel and Beauville show that the blowup of the intermediate Jacobian of a smooth cubic threefold $X$ in the Fano surface of lines can be identified with a moduli space of semistable sheaves of Chern classes $c_1=0, c_2=2, c_3=0$ on $X$. Here we further identify this space with a space of matrix factorizations. This has the advantage that this description naturally generalizes to singular and even reducible cubic threefolds. In this way, given a degeneration of $X$ to a reducible cubic threefold $X_0$, we obtain an associated degeneration of the above moduli spaces of semistable sheaves. \end{abstract}
\maketitle
\section{Introduction and recollection about moduli spaces of skew-symmetric $6\times 6$ matrices of linear forms}\label{sClassificationSkewSemistable}
Pfaffian representations of cubic threefolds are a classical subject, cf. e.g. Beauville \cite{Beau00}, \cite{Beau02}, Comaschi \cite{Co20}, \cite{Co21}, Manivel-Mezzetti \cite{MaMe05}, Iliev-Markushevich \cite{IM00}; in \cite{BB22-2} we introduced a fibration of algebraic varieties $\widetilde{\MM} \to \mathcal{C}$, where $\mathcal{C} =\mathbb P\bigl(H^0\bigl(\OO_{\mathbb P^4}(3)\bigr)\bigr)$ is the space of cubic hypersurfaces in $\mathbb P^4$, which may be viewed as the \emph{universal family of $G$-equivalence classes of Pfaffian representations of cubic threefolds}. Moreover, the fibres are projective. Let us recall how to obtain $\widetilde{\MM}$: we let $R$ be the graded polynomial ring over $\mathbb C$ in variables $x_0, \dots, x_4$ of weight $1$, and let $\mathcal{S} = \mathbb P \bigl ( (\mathbb C^5)^{\vee} \otimes \Lambda^2 \mathbb C^6 \bigr)$ be the projective space of skew-symmetric $6\times 6$-matrices with entries linear forms on $\mathbb P^4$. This has a natural action by $G=\GL_6 (\C )$ given by $M\mapsto AMA^t$ for $A\in \GL_6 (\C )$ and $M\in (\mathbb C^5)^{\vee} \otimes \Lambda^2 \mathbb C^6$. Then \[ \sheaf{M} := \sk^{ss} //G \] is the good projective categorical quotient of the locus of semistable points in $\mathcal{S}$ by this action. Let \[ \pi \colon \sk^{ss} \to \sheaf{M} \] be the canonical projection. We define the subset $\sk^{ps} \subset \sk^{ss}$ to be the subset of those points whose orbits in $\sk^{ss}$ are closed. Then look at the incidence correspondence \[ \mathcal{T}=\bigl\{ ([M], [F]) \mid \mathrm{Pf} (M) \in (F)\bigr\} \subset \sk^{ss} \times \mathcal{C} \] with its two projections \[ \pi_1 \colon \mathcal{T} \to \sk^{ss}, \quad \pi_2 \colon \mathcal{T} \to \mathcal{C} . \]
We also denote by $\sk^{ss}_0 \subset \sk^{ss}$ the subset consisting of matrices with Pfaffian zero, and let \[ \mathcal{T}_0= \pi_1^{-1}(\sk^{ss}_0) =\sk^{ss}_0 \times \mathcal{C} . \] Clearly, $\pi_1$ is one-to-one onto its image outside of the subset $\mathcal{T}_0 \subset \mathcal{T}$ and we denote by $\widetilde{\sk}^{ss}$ the closure of $\pi_1^{-1} (\sk^{ss} - \sk^{ss}_0)$ in $\mathcal{T}$.
The group $G$ acts on $\mathcal{S} \times \mathcal{C}$ if we let it act trivially on $\mathcal{C}$; the locus of semistable points for this action is $\sk^{ss} \times \mathcal{C}$, the set $\widetilde{\sk}^{ss}$ is a $G$-invariant irreducible closed subset of $\sk^{ss} \times \mathcal{C}$, and the good categorical quotient $(\sk^{ss} \times \mathcal{C})//G$ is nothing but $\sheaf{M} \times \mathcal{C}$. Hence $\widetilde{\sk}^{ss}$ maps to an irreducible closed subset of $\sheaf{M} \times \mathcal{C}$, which we denote by $\widetilde{\MM}$. It comes with a natural projection $\widetilde{\MM} \to \mathcal{C}$.
The main results we will show in this article are the following.
\begin{enumerate} \item We define a space $\mathcal{MF}$ of certain matrix factorisations, which we call \emph{matrix factorisations of intermediate Jacobian type}. This has an action by a non-reductive group $\Gamma$. Since we were not able to apply the general methods provided by currently available versions of non-reductive GIT to this example, we define ad hoc the loci of semistable and polystable points, $\mathcal{MF}^{ss}, \mathcal{MF}^{ps}$ in $\mathcal{MF}$, and show that there is a natural morphism $\mathcal{MF}^{ps} \to \widetilde{\MM}$, each nonempty fibre of which is a single $\Gamma$-orbit. Indeed, in a forthcoming paper we will show this morphism is also surjective. In this sense, $\widetilde{\MM}$ can be interpreted as a moduli space of matrix factorisations of intermediate Jacobian type. \item For smooth $X$ in $\mathcal{C}$, we will show that the fibre $\widetilde{\MM}_X$ of $\widetilde{\MM}\to\mathcal{C}$ over $X$ admits a natural bijective morphism onto Druel's compactification $\overline{\mathcal{M}}_X$ of the moduli space of stable rank 2 vector bundles with $c_1=0$ and $c_2=2$ on $X$ \cite{Dru00}, \cite{Beau02}, the Maruyama-Druel-Beauville moduli space of equivalence classes of semistable sheaves on $X$ with Chern classes $c_1=0, c_2=2, c_3=0$. Since it is known that $\overline{\mathcal{M}}_X$ is smooth, hence in particular normal, it then follows from Zariski's main theorem that the morphism from $\widetilde{\MM}_X$ is an isomorphism. We mention that $\overline{\mathcal{M}}_X$ is also isomorphic to the intermediate Jacobian of $X$ blown up in the Fano surface of lines \cite{Beau02}. Thus our constructions here allow us to study degenerations of these (birational models of) intermediate Jacobians along with the cubic.
Our initial motivation for the present work is that this may help to shed some light on unsolved cycle-theoretic questions about these intermediate Jacobians, such as the representability by algebraic cycles of certain minimal cohomology classes which is intimately linked to the question of stable rationality for smooth cubic threefolds \cite{Voi17}. \end{enumerate}
\
Lastly, for the reader's convenience, we restate here two results already appearing in \cite{BB22-2}, which we will make repeated use of below.
\begin{table}
\begin{tabular}{|c|c|c|c| c|} \hline & $M$ & $S$ & $Y$ & \\ \hline
(a)
&
$\left(
\begin{smallmatrix}
&{l}_{3}&&0&{l}_{0}&{l}_{1}\\
{-{l}_{3}}&&&{-{l}_{0}}&0&{l}_{2}\\
&&&{-{l}_{1}}&{-{l}_{2}}&0\\
0&{l}_{0}&{l}_{1}&&{l}_{4}&\\
{-{l}_{0}}&0&{l}_{2}&{-{l}_{4}}&&\\
{-{l}_{1}}&{-{l}_{2}}&0&&& \end{smallmatrix} \right)$ & $\left(
\begin{smallmatrix}
{l}_{2}&\\
{-{l}_{1}}&\\
{l}_{0}&{l}_{4}\\
&{l}_{2}\\
&{-{l}_{1}}\\
{l}_{3}&{l}_{0} \end{smallmatrix} \right)$ & a smooth conic & stable \\ \hline
(b)
& $\left( \begin{smallmatrix}
0&{l}_{0}&{l}_{1}\\
{-{l}_{0}}&0&{l}_{2}\\
{-{l}_{1}}&{-{l}_{2}}&0\\
&&&0&{l}_{2}&{l}_{3}\\
&&&{-{l}_{2}}&0&{l}_{4}\\
&&&{-{l}_{3}}&{-{l}_{4}}&0 \end{smallmatrix} \right)$ & $\left( \begin{smallmatrix}
{l}_{2}&\\
{-{l}_{1}}&\\
{l}_{0}&\\
&{l}_{4}\\
&{-{l}_{3}}\\
&{l}_{2} \end{smallmatrix} \right)$
&
two skew lines
&
stable \\ \hline
(c)
& $\left( \begin{smallmatrix}
0&{l}_{0}&{l}_{1}\\
{-{l}_{0}}&0&{l}_{2}\\
{-{l}_{1}}&{-{l}_{2}}& 0\\
&&&0&{l}_{1}&{l}_{2}\\
&&&{-{l}_{1}}&0&{l}_{3}\\
&&&{-{l}_{2}}&{-{l}_{3}}&0 \end{smallmatrix} \right)$ & $\left( \begin{smallmatrix}
{l}_{2}&\\
{-{l}_{1}}&\\
{l}_{0}&\\
&{l}_{3}\\
&{-{l}_{2}}\\
&{l}_{1} \end{smallmatrix} \right)$ &
\begin{tabular}{c} two distinct \\ intersecting lines\\ with an embedded point\\ at the intersection \\ spanning the ambient $\mathbb P^4$ \end{tabular} & stable \\ \hline
(d)
& $\left( \begin{smallmatrix} & & & 0 &l _0 &l_1 \\
& & & -l_0 & 0 & l_2 \\
& & & -l_1 & -l_2 & 0 \\ 0 &l _0 &l_1 & & l_3 & l_4\\ -l_0 & 0 & l_2 & -l_3 & & \\ -l_1 & -l_2 & 0 & -l_4 & & \end{smallmatrix} \right)$ & $\left( \begin{smallmatrix}
{l}_{2}&\\
{-{l}_{1}}& -l_4\\
{l}_{0}&{l}_{3}\\
&{l}_{2}\\
&{-{l}_{1}}\\
&{l}_{0} \end{smallmatrix} \right)$
&
\begin{tabular}{c}
a double line lying on \\ a smooth quadric surface\\ \end{tabular} & \begin{tabular}{c} strictly semistable,\\ but not polystable \end{tabular} \\ \hline
(e)
& $\left( \begin{smallmatrix} & & & 0 &l _0 &l_1 \\
& & & -l_0 & 0 & l_2 \\
& & & -l_1 & -l_2 & 0 \\ 0 &l _0 &l_1 & & l_3 & \\ -l_0 & 0 & l_2 & -l_3 & & \\ -l_1 & -l_2 & 0 & & & \end{smallmatrix} \right)$ & $\left( \begin{smallmatrix}
{l}_{2}&\\
{-{l}_{1}}& \\
{l}_{0}&{l}_{3}\\
&{l}_{2}\\
&{-{l}_{1}}\\
&{l}_{0} \end{smallmatrix} \right)$ &
\begin{tabular}{c} a plane double line \\ with an embedded point,\\ spanning the ambient $\mathbb P^4$\\ \end{tabular} & \begin{tabular}{c} strictly semistable,\\ but not polystable \end{tabular} \\ \hline
(f) & $\left( \begin{smallmatrix} & & & 0 &l _0 &l_1 \\
& & & -l_0 & 0 & l_2 \\
& & & -l_1 & -l_2 & 0 \\ 0 &l _0 &l_1 & & & \\ -l_0 & 0 & l_2 & & & \\ -l_1 & -l_2 & 0 & & & \end{smallmatrix} \right)$ & $\left( \begin{smallmatrix}
{l}_{2}&\\
{-{l}_{1}}& \\
{l}_{0}&\\
&{l}_{2}\\
&{-{l}_{1}}\\
&{l}_{0} \end{smallmatrix} \right)$ & \begin{tabular}{c}
a line \\ together with its \\ full first order\\ infinitesimal \\ neighbourhood \end{tabular} & polystable \\ \hline \end{tabular}
\caption{Semi-stable matrices $M$ with vanishing Pfaffian} \label{tPfaffZero} \end{table}
\begin{theorem}\label{tGeometryM0} Let $[M]\in \sk^{ss}$ have vanishing Pfaffian. View $M$ as a map of graded $R$-modules \[ R (-1)^{6} \xrightarrow{M} R^6. \] Let $S$ be a matrix with columns representing a minimal system of generators of the kernel of this map $M$. Let $Y$ be the rank at most two locus of $M$ with its scheme structure defined by the $4\times 4$ sub-Pfaffians. Then there exist independent linear forms $l_0,\dots,l_4$ and matrices $B \in \mathrm{GL_6}(\mathbb C)$ and $B' \in \mathrm{GL_2}(\mathbb C)$ such that after making the replacements \begin{align*}
M & \mapsto B^t M B \\
S & \mapsto B^{-1} S B' \end{align*} we have one of the cases in Table \ref{tPfaffZero}. Moreover, the stability type of $M$ is as described in the last column of Table \ref{tPfaffZero}. \end{theorem}
\begin{proposition}\label{pSameInformation} Let $M$ and $S$ be matrices as in Table \ref{tPfaffZero}. Then $M$ represents the syzygy module of $S^t$. If $M'$ is another skew-symmetric $6 \times 6$ matrix with linear forms representing the syzygy module of $S^t$, then $[M']$ is in the $G$-orbit of $[M]$. Furthermore, the ideal generated by the $2\times 2$ minors of $S$ is equal to the one generated by the $4 \times 4$ Pfaffians of $M$. \end{proposition}
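To illustrate Theorem~\ref{tGeometryM0} and Proposition~\ref{pSameInformation}, consider case (b) of Table~\ref{tPfaffZero}; the following hand computation is ours and is not needed elsewhere. There $M$ is block diagonal:

```latex
\[
M = \begin{pmatrix} M' & 0 \\ 0 & M'' \end{pmatrix},
\qquad
M' = \begin{pmatrix} 0 & l_0 & l_1 \\ -l_0 & 0 & l_2 \\ -l_1 & -l_2 & 0 \end{pmatrix},
\qquad
M'' = \begin{pmatrix} 0 & l_2 & l_3 \\ -l_2 & 0 & l_4 \\ -l_3 & -l_4 & 0 \end{pmatrix}.
\]
```

A skew-symmetric $3\times 3$ matrix has rank $\leq 2$ everywhere and rank $0$ exactly where its entries vanish, so the rank $\leq 2$ locus of $M$ is $\{l_0=l_1=l_2=0\} \cup \{l_2=l_3=l_4=0\}$, two skew lines in $\mathbb P^4$, as stated in the table. One checks directly that $M'\,(l_2,-l_1,l_0)^t = 0$ and $M''\,(l_4,-l_3,l_2)^t = 0$, so the columns of $S$ are syzygies of $M$; and since the two columns of $S$ are supported on complementary sets of rows, the $2\times 2$ minors of $S$ are exactly the products $\pm\, l_i l_j$ with $i \in \{0,1,2\}$ and $j \in \{2,3,4\}$, generating the product ideal $(l_0,l_1,l_2)\cdot(l_2,l_3,l_4)$, whose vanishing locus is precisely the union of the two skew lines.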
\section{Matrix factorizations of intermediate Jacobian type}\label{sMatrixFactorizations}
In this section we identify $\widetilde{\MM}$ with a space of matrix factorizations.
Consider the following three projective spaces: \begin{enumerate} \item the projective space $\mathbb P_f= \mathcal{C}$ parametrising cubics in $\mathbb P^4$; for a nonzero homogeneous cubic polynomial $F$ in $x_0, \dots , x_4$, we will denote by $[F]$ the corresponding element of $\mathbb P_f$. \item the projective space $\mathbb P_a$ parametrising skew-symmetric maps \[ 2 \sheaf{O}_{\mathbb P^4}(1) \oplus 6 \sheaf{O}_{\mathbb P^4} (2)
\xrightarrow{A}
2 \sheaf{O}_{\mathbb P^4} (4)\oplus 6\sheaf{O}_{\mathbb P^4}(3) \] up to homotheties. We will write $[A]$ for the class of such a map $A$ in $\mathbb P_a$. We use the notation \[
A = \begin{pmatrix}
A_3 & A_2 \\
-A_2^t & A_1
\end{pmatrix} \] where the subscript denotes the homogeneous degree of the entries of each matrix; \item the projective space $\mathbb P_b$ parametrising skew-symmetric maps \[
2 \sheaf{O}_{\mathbb P^4} (1) \oplus 6 \sheaf{O}_{\mathbb P^4}
\xrightarrow{B}
2 \sheaf{O}_{\mathbb P^4}(1) \oplus 6 \sheaf{O}_{\mathbb P^4} (2) \] up to homotheties. Again we write $[B]$ for the class of such a map $B$ in $\mathbb P_b$ and \[ B = \begin{pmatrix}
B_0 & B_1^t\\
-B_1 & B_2
\end{pmatrix}. \] \end{enumerate}
\begin{definition}\label{dMFIntermediateJacobian} Let $\mathcal{MF}$ be the space of triples $([A],[B],[F])\in \mathbb P_a\times \mathbb P_b\times\mathbb P_f$ such that the following hold: \begin{enumerate} \item $AB\neq 0$, $BA \neq 0$, $\mathrm{Pf}(A)\neq 0$, $\mathrm{Pf} (B)\neq 0$. \item $[AB] =[F\cdot \mathrm{id}]$ and $[BA]= [F\cdot \mathrm{id}]$, where the equalities (and identity maps) are to be understood in the appropriate projective spaces. \ \item $[\mathrm{Pf} (A)]=[\mathrm{Pf}(B)] =[F^2]$. \end{enumerate} Note that condition $a)$ defines an open subvariety of $\mathbb P_a\times \mathbb P_b\times\mathbb P_f$, and $b)$ and $c)$ define a closed subvariety inside this. Hence $\mathcal{MF}$ is a locally closed subvariety of $\mathbb P_a\times \mathbb P_b\times\mathbb P_f$. We call elements in $\mathcal{MF}$ {\sl matrix factorizations of intermediate Jacobian type}. \end{definition}
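As a guiding example (ours; compare the normal form obtained in Lemma~\ref{lMFNormal} below): if $A_1$ is a skew-symmetric $6\times 6$ matrix of linear forms with $\mathrm{Pf}(A_1) = F \neq 0$, one obtains a matrix factorization of intermediate Jacobian type by padding $A_1$ with a trivial $2\times 2$ block:

```latex
\[
A = \begin{pmatrix} 0 & F & \\ -F & 0 & \\ & & A_1 \end{pmatrix},
\qquad
B = \begin{pmatrix} 0 & 1 & \\ -1 & 0 & \\ & & \mathrm{adj}^{\mathrm{Pf}} A_1 \end{pmatrix}.
\]
% Then AB = BA = F * id, using A_1 adj^{Pf}(A_1) = adj^{Pf}(A_1) A_1 = Pf(A_1) id = F id;
% moreover Pf(A) = F * Pf(A_1) = F^2 and, up to the standard sign convention,
% Pf(adj^{Pf} A_1) = Pf(A_1)^2, so Pf(B) = F^2 as well: conditions a)-c) hold.
```

The degrees match: $F$ is cubic, the entries of $A_1$ are linear, and the entries of $\mathrm{adj}^{\mathrm{Pf}} A_1$, being $4\times 4$ sub-Pfaffians, are quadratic.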
\begin{definition}\label{dGroupActionMF} Consider the (non-reductive) group of automorphisms $\Gamma$ of the graded free bundle $2 \sheaf{O}_{\mathbb P^4}(1) \oplus 6 \sheaf{O}_{\mathbb P^4}$. $\Gamma$ acts on $\mathcal{MF}$ by the rule \[ \gamma \cdot ([A],[B],[F]) = \bigl( [\gamma A \gamma^t], [(\gamma^t)^{-1}B \gamma^{-1}], [F] \bigr). \]
The situation is summarised in the following commutative diagram \xycenter{
2 \sheaf{O}_{\mathbb P^4} (1) \oplus 6 \sheaf{O}_{\mathbb P^4}
\ar[r]^-B \ar[d]^{\gamma}
&
2 \sheaf{O}_{\mathbb P^4}(1) \oplus 6 \sheaf{O}_{\mathbb P^4} (2)
\ar[r]^-A
\ar[d]^{(\gamma^{-1})^t}
&
2 \sheaf{O}_{\mathbb P^4} (4) \oplus 6\sheaf{O}_{\mathbb P^4}(3)
\ar[d]^{\gamma}
\\
2 \sheaf{O}_{\mathbb P^4} (1) \oplus 6 \sheaf{O}_{\mathbb P^4}
\ar[r]^-{B'}
&
2 \sheaf{O}_{\mathbb P^4}(1) \oplus 6 \sheaf{O}_{\mathbb P^4} (2)
\ar[r]^-{A'}
&
2 \sheaf{O}_{\mathbb P^4} (4) \oplus 6\sheaf{O}_{\mathbb P^4}(3)
} \end{definition}
\begin{definition}\label{dMFStabilityProp} We denote by $\mathcal{MF}^{ss}$ and $\mathcal{MF}^{ps}$ the loci inside $\mathcal{MF}$ where $[A_1]$ is semistable and polystable, respectively. \end{definition}
\begin{lemma}\label{lMFNormal} Let $([A], [B], [F])\in \mathcal{MF}$. Then \[
\mathrm{Pf} (A_1)\in (F). \] More precisely: \begin{enumerate} \item[(A)]
If $B_0 \not= 0$, then $([A],[B],[F])$ is in the same $\Gamma$-orbit as a matrix factorization $([A'],[B'],[F])$ with \[
A' = \begin{pmatrix}
0 & F & \\
-F & 0 & \\
&& A'_1
\end{pmatrix} \text{ and }
B' = \begin{pmatrix}
0 & 1 & \\
-1 & 0 & \\
&& \mathrm{adj}^{\mathrm{Pf}} {A'_1}
\end{pmatrix} \] where $[A_1']$ is in the same $G$-orbit as $[A_1]$. In this case, $(\mathrm{Pf} (A_1))= (F)$. \item[(B)] If $B_0=0$, then $\mathrm{Pf} (A_1)=0$. Moreover, if $A_1$ is semistable, then $([A],[B],[F])$ is in the same $\Gamma$-orbit as a matrix factorization $([A'],[B'],[F'])$ with $A'_1 = M$ and $B'_1 = S$ where $M$ and $S$ are as in one of the cases in Table \ref{tPfaffZero}.
\end{enumerate} In particular, $\mathrm{Pf} (A_1) =0$ if and only if $B_0=0$. \end{lemma}
\begin{proof}
\fbox{Case (A):} If $B_0$ is nonzero, it is invertible, being a nonzero skew-symmetric $2\times 2$ matrix. Therefore there exists a $\Gamma$-translate $([A'], [B'], [F])\in \mathcal{MF}$ such that $A_1=A_1'$ and \[
B_0' = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \] and $B_1' = 0$. But then \[
[F \cdot \mathrm{id} ]
= [A' B']
= \left[ \begin{pmatrix}
A_3' & A_2' \\
-(A_2')^t & A_1'
\end{pmatrix}
\begin{pmatrix}
B_0' & 0\\
0 & B_2'
\end{pmatrix} \right]. \] In particular $(A_2')^t B_0' = 0$. Since $B_0'$ is invertible, it follows that $A_2' = 0$. So \[
[F \cdot \mathrm{id}] = [A' B' ]
= \left[ \begin{pmatrix}
A_3'B_0'& 0\\
0 & A_1'B_2'
\end{pmatrix}\right]. \] This implies \[
[A_3' ]= \left[\begin{pmatrix}0 & F \\ -F & 0 \end{pmatrix}\right] \] and since $[\mathrm{Pf} (A')]= [F^2]$ by assumption, we must consequently have $[\mathrm{Pf} (A_1)]= [\mathrm{Pf} (A_1')]= [F]$ in this case.
We have $[A_1'B_2']= [F\cdot \mathrm{id}_{6\times 6}]$. Moreover, $[\mathrm{Pf} (A_1')] = [F]$, $[\mathrm{Pf} (B_2')] = [\mathrm{Pf} (B)] = [F^2]$ and then $B_2'$ must be proportional to the Pfaffian adjoint of $A_1'$.
In summary, what we have achieved at this point is reduction of $A$ and $B$ to \[
A'= \begin{pmatrix}
0 & \lambda F & \\
-\lambda F & 0 & \\
&& A'_1
\end{pmatrix} \text{ and }
B'= \begin{pmatrix}
0 & 1 & \\
-1 & 0 & \\
&& \mu\mathrm{adj}^{\mathrm{Pf}} {A'_1}
\end{pmatrix} \] with $\lambda, \mu \in \mathbb C^*$. Since $A'B'$ is a multiple of $F$, we have $\lambda=\mu$. Now acting with the element $\gamma \in \Gamma$ given by \[ \gamma =\mathrm{diag} \left( (\sqrt{\lambda})^{-1}, (\sqrt{\lambda})^{-1}, 1, 1, 1, 1, 1, 1 \right) \] and noting that $[B'] = [\lambda B']$, we obtain $A'$ and $B'$ as claimed in part (A) of the Lemma (treating $A'$ and $B'$ as dynamical variables in this last step).
\fbox{Case (B):} If $B_0=0$, we have $A_1B_1 = 0$ since $[AB] =[F\cdot \mathrm{id}]$. Since $B_1 \not=0$ (otherwise we would have $\mathrm{Pf}(B) = 0$ contradicting $[\mathrm{Pf} (B)] =[F^2]$) the skew symmetric $6 \times 6$ matrix $A_1$ can not have generic rank $6$. Therefore $\mathrm{Pf}(A_1) = 0$.
If we assume in addition that $A_1$ is semi-stable, we can appeal to the classification in Table \ref{tPfaffZero} and conclude that, after acting by an element in $\Gamma$, we can assume $A_1 = M$ with $M$ as in that table. Let $S$ be the matrix associated to $M$ in Table \ref{tPfaffZero}. Since the syzygy module of $M=A_1$ is generated by the columns of $S$ and since $A_1B_1 = 0$ the columns of $B_1$ must be generated by the columns of $S$. The columns of $B_1$ cannot be dependent since $\mathrm{Pf}(B) \not=0$. Since the entries of $B_1$ and $S$ are linear and both matrices have two columns this implies that we may assume $B_1 = S$ after acting with another element in $\Gamma$. \end{proof}
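The proof of case (A) rests on the Pfaffian adjoint identity $X\,\mathrm{adj}^{\mathrm{Pf}}X = \mathrm{Pf}(X)\cdot\mathrm{id}$ for skew-symmetric matrices of even size. As a minimal sanity check (ours), consider a block-diagonal $4\times 4$ example, with signs fixed so that the identity holds:

```latex
\[
X = \begin{pmatrix} 0 & a & & \\ -a & 0 & & \\ & & 0 & b \\ & & -b & 0 \end{pmatrix},
\qquad
\mathrm{adj}^{\mathrm{Pf}} X = \begin{pmatrix} 0 & -b & & \\ b & 0 & & \\ & & 0 & -a \\ & & a & 0 \end{pmatrix},
\qquad
X \cdot \mathrm{adj}^{\mathrm{Pf}} X = ab \cdot \mathrm{id} = \mathrm{Pf}(X)\cdot \mathrm{id}.
\]
% Each entry of adj^{Pf} X is, up to sign, a complementary sub-Pfaffian of X; in the
% 6x6 case of the Lemma, the entries of adj^{Pf} A_1' are the quadratic 4x4
% sub-Pfaffians, which is why B_2' has entries of degree 2.
```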
\begin{lemma}\label{lWellDefined} Suppose that $([A],[B],[F]) \in \mathcal{MF}^{ss}$ and $\mathrm{Pf}(A_1) = 0$. \begin{enumerate} \item We have that $X=V(F)$ contains the rank $\le 2$ locus of $A_1$ set-theoretically. \item If $A_1$ is of type (d) or (e) in Table \ref{tPfaffZero} and $Y$ is the subscheme defined by the $4\times 4$ sub-Pfaffians of $A_1$, we can assert something stronger: in that case $X=V(F)$ scheme-theoretically contains the unique degree $2$ curve contained in $Y$. \end{enumerate} \end{lemma}
\begin{proof} For $a)$, suppose $A_1$ has rank at most $2$ at a point $P\in \mathbb P^4$. Observe quite generally that if $A_1$ has rank $r$, then $A$ has rank at most $r+4$ because $A$ can be obtained from $A_1$ by first expanding $A_1$ by two length $6$ columns to a $6\times 8$ matrix, and afterwards, adding two length $8$ rows to get $A$; and for each added row or column the rank can increase by at most $1$. Hence if $A_1$ has rank at most $2$, $A$ can never have full rank, hence its determinant, which is $F^2$, is zero at $P$, hence $X=V(F)$ contains $P$.
Now suppose we are under the assumptions in $b)$. Then, by passing to another element in the same $\Gamma$-orbit, by Lemma \ref{lMFNormal} we can assume that $A_1$ is one of the matrices $M$ in types (d) or (e) in Table \ref{tPfaffZero}, $B_0=0$ and $B_1=S$. Since $[AB]=[F]$, we obtain \[ [F\mathrm{id}_{2\times 2}]= [ A_2B_1] =[A_2 S]. \] Denoting \[ A_2 = (Q_{ij})_{1\le i \le 2, 1\le j \le 6} \] we get in case (d) \begin{align*} (Q_{11}, \dots, Q_{16}) \cdot (l_2, -l_1, l_0, 0, 0, 0)^t &= F\\ (Q_{11}, \dots, Q_{16}) \cdot (0, -l_4, l_3, l_2, -l_1, l_0)^t &= 0 \end{align*} and in case (e) \begin{align} (Q_{11}, \dots, Q_{16}) \cdot (l_2, -l_1, l_0, 0, 0, 0)^t &= F \label{fCaseE1} \\ (Q_{11}, \dots, Q_{16} )\cdot (0, 0, l_3, l_2, -l_1, l_0)^t &= 0. \label{fCaseE2} \end{align}
The argument in both cases can now be finished similarly, for simplicity we only give details for case (e). Relation (\ref{fCaseE2}) above shows that $(Q_{11}, \dots, Q_{16} )$ is a degree $2$ syzygy of $(0, 0, l_3, l_2, -l_1, l_0)$, and the module of all syzygies is generated by the columns of \[ \begin{pmatrix}
1&0&0&0&0&0&0&0\\
0&1&0&0&0&0&0&0\\
0&0&{l}_{2}&{l}_{1}&0&{l}_{0}&0&0\\
0&0&{-{l}_{3}}&0&{l}_{1}&0&{l}_{0}&0\\
0&0&0&{l}_{3}&{l}_{2}&0&0&{l}_{0}\\
0&0&0&0&0&{-{l}_{3}}&{-{l}_{2}}&{l}_{1}\end{pmatrix}. \] Therefore there exist linear forms $m_0, m_1, m_2$ such that \[ Q_{13}= l_2 m_2 +l_1m_1 +l_0m_0. \] Substituting this into formula (\ref{fCaseE1}) gives \[ F= Q_{11}l_2 - Q_{12}l_1 + l_0 (l_2 m_2 +l_1m_1 +l_0m_0) \] and hence $F \in (l_1, l_2, l_0^2)$, which is the ideal of the unique double line contained in the subscheme defined by the $4\times 4$ sub-Pfaffians of $A_1$. The computation in case (d) is similar. This finishes the proof. \end{proof}
\begin{proposition}\label{pWellDefinedMap} The map \[ \psi \colon \mathcal{MF}^{ss} \to \widetilde{\sk}^{ss} \] sending $([A],[B],[F])$ to $([A_1], [F])$ is well-defined. Hence one obtains a map \[ \overline{\psi}\colon \mathcal{MF}^{ss} \to \widetilde{\MM} \] by post-composing with the map to the quotient. Moreover, $\overline{\psi}$ is constant on $\Gamma$-orbits. \end{proposition}
\begin{proof} By \cite[Theorem 2.1]{BB22-2} it suffices to show the following two assertions to prove that $\psi$ is well-defined: \begin{enumerate} \item If $([A],[B],[F]) \in \mathcal{MF}^{ss}$ and $\mathrm{Pf}(A_1) \neq 0$, then the ideal generated by $F$ coincides with the ideal generated by $\mathrm{Pf}(A_1)$. \item If $([A],[B],[F]) \in \mathcal{MF}^{ss}$ and $\mathrm{Pf}(A_1) = 0$, then the subscheme $\overline{Y}$ defined by the $4\times4$ sub-Pfaffians of $A_1$ and $F$ contains a degree $2$ curve. In other words, in cases (a)--(e) of Table \ref{tPfaffZero} we need to show that $X= V(F)$ contains the unique degree $2$ curve contained in the scheme $Y$, and in case (f) we need to show that $X$ contains the rank $2$ locus of $A_1$. \end{enumerate}
Notice that under the assumptions of (1), Lemma \ref{lMFNormal} says we are in case (A) of that lemma, and the conclusion that the ideal generated by $F$ coincides with the ideal generated by $\mathrm{Pf}(A_1)$ is true. Hence it suffices to prove (2), but this is nothing but the assertion of Lemma \ref{lWellDefined}. The remaining statements of the proposition are clear. \end{proof}
\begin{theorem}\label{tFibresMF} Each fibre of the map \[ \overline{\psi}\mid_{\mathcal{MF}^{ps}} \colon \mathcal{MF}^{ps} \to \widetilde{\MM} \] is a single $\Gamma$-orbit. \end{theorem}
\begin{proof} Let $x\in \widetilde{\MM}$ be an element represented by a pair $([A_1], [F])$ with $A_1$ polystable. We show that if $([A], [B], [F])$ and $([A'], [B'], [F])$ are two elements in $\mathcal{MF}^{ps}$ mapping down to $x$, then they are in the same $\Gamma$-orbit. We distinguish two cases: $\mathrm{Pf} (A_1)\neq 0$ and $\mathrm{Pf} (A_1)=0$.
\fbox{Case 1: $\mathrm{Pf} (A_1)\neq 0$}: In this case, both $([A], [B], [F])$ and $([A'], [B'], [F])$ can be brought to the normal form in Lemma \ref{lMFNormal}, (A), and hence lie in the same $\Gamma$-orbit.
\fbox{Case 2: $\mathrm{Pf} (A_1) = 0$}: Here we can assume $A_1=A_1'=M$ and $B_1=B_1'=S$ as in (a), (b), (c) or (f) in Table \ref{tPfaffZero} by Lemma \ref{lMFNormal}, (B). Furthermore we can assume \[
AB = A'B' = F \cdot \mathrm{id}. \] Indeed, the definition of matrix factorization now gives \[
[AB] = [A'B'] = [F \cdot \mathrm{id}] \] and we can assume \[
AB = F \cdot \mathrm{id} \quad \text{and} \quad A'B' = \lambda F \cdot \mathrm{id} \] for some $\lambda \in \mathbb C^*$. Replacing $B'$ by $\lambda^{-1} B'$ and operating with \[
L = \begin{pmatrix} \lambda^{-1} \mathrm{id}_2 & 0 \\ 0 &\mathrm{id}_4 \end{pmatrix} \] on the primed matrix factorization, we keep $A_1 = A_1'= M$, $B_1 = B_1'= S$ and get in addition \[
AB = A'B' = F \cdot \mathrm{id}. \quad \quad (\ast) \]
The operation of the unipotent radical of $\Gamma$ is now \begin{align*} \begin{pmatrix}
\mathrm{id} & u \\
0 & \mathrm{id} \end{pmatrix}\begin{pmatrix}
A_3 & A_2 \\
-A_2^t & M \end{pmatrix}\begin{pmatrix}
\mathrm{id} & 0 \\
u^t & \mathrm{id} \end{pmatrix} &= \begin{pmatrix}
A_3-uA_2^t+A_2u^t+uMu^t & A_2+uM \\
-A_2^t + Mu^t & M \end{pmatrix}\\ \begin{pmatrix}
\mathrm{id} & 0 \\
-u^t & \mathrm{id} \end{pmatrix}\begin{pmatrix}
0 & S^t \\
-S & B_2 \end{pmatrix}\begin{pmatrix}
\mathrm{id} & -u \\
0 & \mathrm{id} \end{pmatrix} &= \begin{pmatrix}
0 & S^t \\
-S & B_2+Su-u^tS^t \end{pmatrix} \end{align*} Writing out the condition $(*)$ we get \[
\begin{pmatrix}
-A_2S & A_3S^t+A_2B_2\\
0 & -A_2^tS^t + MB_2
\end{pmatrix}
=
\begin{pmatrix}
-A'_2S & A_3'S^t+A_2'B_2' \\
0 & -(A'_2)^tS^t + MB'_2
\end{pmatrix}
=
\begin{pmatrix}
\mathrm{id} \cdot F & 0 \\
0 & \mathrm{id} \cdot F
\end{pmatrix} \] In particular $( A_2- A_2')S=0$, i.e.\ $A_2-A_2'$ is a quadratic syzygy of $S$. Since all such syzygies are generated by the rows of $M$ by Proposition \ref{pSameInformation}, we can find a $2\times 6$ matrix $u$ of linear forms such that $$A_2'-A_2=uM \iff A_2' = A_2+uM.$$ Operating with $$ U = \begin{pmatrix}
\mathrm{id} & u \\
0 & \mathrm{id} \end{pmatrix} $$ on $A$ and $B$ we can assume in addition $A_2 = A_2'$.
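The block conjugation formulas displayed above are routine but easy to get signs wrong in. As a sketch (ours, not from the paper), the first formula can be confirmed symbolically with generic entries in SymPy; the identity holds for arbitrary blocks of the given sizes, not only skew ones:

```python
import sympy as sp

def blocks(tl, tr, bl, br):
    # assemble a 2x2 block matrix from explicit blocks
    return sp.Matrix.vstack(sp.Matrix.hstack(tl, tr),
                            sp.Matrix.hstack(bl, br))

# generic blocks: u, A2 are 2x6, A3 is 2x2, M is 6x6
u  = sp.Matrix(2, 6, sp.symbols('u0:12'))
A2 = sp.Matrix(2, 6, sp.symbols('a0:12'))
A3 = sp.Matrix(2, 2, sp.symbols('c0:4'))
M  = sp.Matrix(6, 6, sp.symbols('m0:36'))

L = blocks(sp.eye(2), u, sp.zeros(6, 2), sp.eye(6))
A = blocks(A3, A2, -A2.T, M)
R = blocks(sp.eye(2), sp.zeros(2, 6), u.T, sp.eye(6))

# the right-hand side of the displayed conjugation formula
expected = blocks(A3 - u * A2.T + A2 * u.T + u * M * u.T, A2 + u * M,
                  -A2.T + M * u.T, M)
assert sp.expand(L * A * R - expected) == sp.zeros(8, 8)
```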
Now we argue similarly for $B_2$. By $(\ast)$ we have $$ - A_2^tS^t+MB_2 = F\cdot \mathrm{id} = -A_2^tS^t + MB_2' $$ and hence $M(B_2'- B_2)=0$. Thus we can choose a $2 \times 6$ matrix of linear forms $v$ such that $Sv=B_2'- B_2$. By skew-symmetry we find $$
B_2'=B_2+\frac{1}{2}(-v^tS^t+Sv). $$ Acting with $$ V = \begin{pmatrix} \mathrm{id} & \frac{1}{2}v\\ 0 & \mathrm{id} \end{pmatrix} $$ on $A$ and $B$ we obtain $B_2=B_2'$ but possibly lose the equation $A_2 = A_2'$. Still $(*)$ now gives $$ -A_2^tS^t+MB_2 = F\cdot \mathrm{id} = -(A_2')^tS^t + MB_2 $$ and hence $S(A_2-A_2')=0$. But $S$ does not have any syzygies on this side, so $A_2 = A_2'$.
Finally $(*)$ now gives $$ SA_3 + B_2A_2^t =0=SA_3'+B_2A_2^t $$ i.e. $S(A_3-A_3')=0$ and $S$ has no syzygies on this side. Hence also $A_3 = A_3'$. \end{proof}
\begin{remark}\label{rSurjectivityMF} It can be shown that the map $\overline{\psi}\mid_{\mathcal{MF}^{ps}} \colon \mathcal{MF}^{ps} \to \widetilde{\MM}$ is surjective, but the proof uses a variant of the Shamash construction, which we will discuss in its proper context in a different article. \end{remark}
\section{The connection to the intermediate Jacobian}\label{sIntermediateJacobians}
In this section we fix a smooth cubic threefold $X \subset \mathbb P^4$ and a cubic polynomial $F \in \mathbb C[x_0,\dots,x_4]$ defining $X$. We denote by $J_X$ the intermediate Jacobian, and consider \[
\widetilde{\MM}_X := \{ ([M], [F]) \, \mid \, V(F) = X\} \subset \widetilde{\MM} \] with its reduced structure.
Given $([M],[F]) \in \widetilde{\MM}_X$, we consider the image $\sheaf{F}_M$ of the map given by the Pfaffian adjoint matrix: \[
6 \sheaf{O}_X(-1) \xrightarrow{\mathrm{adj}^{\mathrm{Pf}}(M)} 6 \sheaf{O}_X(1) . \] Here $\mathrm{adj}^{\mathrm{Pf}}(M)$ is the skew matrix whose $(i,j)$-entry for $i<j$ is \[ (-1)^{i+j} \mathrm{Pf} (M_{ij}) \] where $M_{ij}$ is the matrix obtained from $M$ by deleting rows and columns with indices $i, j$.
We now want to prove that there is an isomorphism from $\widetilde{\MM}_X$ to Druel's compactification $\overline{\mathcal{M}}_X$ of the moduli space of stable rank 2 vector bundles with $c_1=0$ and $c_2=2$ on $X$ sending $([M],[F])$ to $\sheaf{F}_M$. For this we recall some properties of $\overline{\mathcal{M}}_X$ from \cite{Dru00} following \cite{Beau02}.
$\overline{\mathcal{M}}_X$ is smooth and connected, and contains a nonempty open subset $\mathcal{M}_X$ corresponding to stable rank 2 vector bundles $\sheaf{E}$ with $c_1=0$ and $c_2=2$ on $X$. Every such $\sheaf{E}$ sits in an exact sequence \[ 0 \to 6 \sheaf{O}_{\mathbb P^4}(-2) \xrightarrow{M} 6 \sheaf{O}_{\mathbb P^4}(-1) \to \sheaf{E} \to 0 \] with $M$ a skew $6\times 6$ matrix with $X = V\bigl( \mathrm{Pf} (M) \bigr)$, and conversely, every cokernel $\sheaf{E}_M$ of a skew map $M$ with $X = V\bigl( \mathrm{Pf} (M) \bigr)$ as in the above short exact sequence is a vector bundle of this type. From this one sees that $\mathcal{M}_X$ is isomorphic to the GIT quotient of the space of such skew matrices up to congruence. Points in $\overline{\mathcal{M}}_X - \mathcal{M}_X$ correspond to isomorphism classes of polystable sheaves with $c_1=0, c_2=2, c_3=0$ on $X$. Moreover, \[ \overline{\mathcal{M}}_X - \mathcal{M}_X = \mathcal{A}\cup \mathcal{B} \] with $\mathcal{A}$ an irreducible locally closed codimension $1$ subvariety, and $\mathcal{B}$ an irreducible closed codimension $1$ subvariety containing all points of $\overline{\mathcal{A}}-\mathcal{A}$. Representatives of points in $\mathcal{A}$ and $\mathcal{B}$ can be described as follows.
\begin{itemize}
\item[($\alpha$)] Let $C$ be a smooth conic in $X$, $L$ the positive generator of $\Pic(C)$ (so that $\sheaf{O}_X(1)|_C \simeq L^{\otimes 2}$). Let $\sheaf{E}$ be the kernel of the canonical evaluation map $H^0(C, L) \otimes \sheaf{O}_X \to L $. Then $\sheaf{E}$ is a torsion free sheaf, with $c_1(\sheaf{E}) = c_3(\sheaf{E}) = 0$ and $c_2(\sheaf{E}) = [C]$. \item[($\beta$)] Let $l,l'$ be two lines in $X$ (possibly equal), and let $I_l,I_{l'}$ be their ideal sheaves. Then $\sheaf{E} = I_l \oplus I_{l'}$ is a torsion free sheaf with $c_1(\sheaf{E}) = c_3(\sheaf{E}) = 0$ and $c_2(\sheaf{E}) = [l] + [l']$. \end{itemize}
We will now show that there is a morphism \[
\widetilde{\MM}_X \xrightarrow{\mathrm{im}\, \mathrm{adj}^{\mathrm{Pf}}} \overline{\mathcal{M}}_X \] sending $([M],[F])$ to $\sheaf{F}_M$. We then consider the morphism $\widetilde{c}_2$ defined by the diagram \xycenter{
\widetilde{\MM}_X \ar[r]^{\mathrm{im}\, \mathrm{adj}^{\mathrm{Pf}}} \ar[dr]_{\widetilde{c}_2} & \overline{\mathcal{M}}_X \ar[d]^{c_2}\\
& J_X^{(2)} } Here we think of the intermediate Jacobian $J_X$ as being parametrised, using the Abel-Jacobi map, by the group of $1$-cycles on $X$ that are homologically equivalent to zero, modulo those that are rationally equivalent to zero, and of $J_X^{(2)}$ as the translate of $J_X$ by the class of twice a line. See \cite{Gri84} for details concerning these definitions. Moreover, the Chern classes are understood to take values in the Chow groups of cycles modulo rational equivalence, in the sense of Grothendieck.
\begin{lemma}\label{lDruelNonZero} If $\mathrm{Pf}(M) \not=0$ then $\sheaf{E}_M \simeq \sheaf{F}_M$. \end{lemma}
\begin{proof} We have
\[
\mathrm{adj}^{\mathrm{Pf}}(M)\cdot M =M \cdot \mathrm{adj}^{\mathrm{Pf}}(M) =\mathrm{Pf} (M) \cdot\mathrm{Id},
\]
hence on $X$ we get a $2$-periodic exact sequence
\[ \dots \xrightarrow{} 6 \sheaf{O}_X(-2)\xrightarrow{M} 6 \sheaf{O}_X(-1) \xrightarrow{\mathrm{adj}^{\mathrm{Pf}}(M)} 6 \sheaf{O}_X(1) \xrightarrow{M} 6\sheaf{O}_X (2) \xrightarrow{} \dots
\]
hence
\begin{gather*}
\sheaf{E}_M=\mathrm{coker}\left(6 \sheaf{O}_X(-2)\xrightarrow{M} 6 \sheaf{O}_X(-1) \right) \simeq \ker \left( 6 \sheaf{O}_X(1) \xrightarrow{M} 6\sheaf{O}_X (2) \right)\\
\simeq \mathrm{im} \left( 6 \sheaf{O}_X(-1) \xrightarrow{\mathrm{adj}^{\mathrm{Pf}}(M)} 6 \sheaf{O}_X(1) \right) = \sheaf{F}_M .
\end{gather*}
\end{proof}
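The identity $\mathrm{adj}^{\mathrm{Pf}}(M)\cdot M = M \cdot \mathrm{adj}^{\mathrm{Pf}}(M) = \mathrm{Pf}(M)\cdot\mathrm{Id}$ on which this proof rests can be checked symbolically for a generic skew $6\times 6$ matrix. The following SymPy sketch (ours, not part of the original text) implements the sign convention for $\mathrm{adj}^{\mathrm{Pf}}$ stated above and verifies the identity:

```python
import sympy as sp
from itertools import combinations

def pfaffian(M):
    # Pfaffian of a skew matrix, by expansion along the first row
    n = M.shape[0]
    if n == 0:
        return sp.Integer(1)
    total = sp.Integer(0)
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * M[0, j] * pfaffian(M[keep, keep])
    return total

def pf_adjoint(M):
    # skew matrix with (i,j)-entry (-1)^{i+j} Pf(M_{ij}) for i < j,
    # as in the text (the parity of i+j is the same in 0- or 1-based indexing)
    n = M.shape[0]
    A = sp.zeros(n, n)
    for i, j in combinations(range(n), 2):
        keep = [k for k in range(n) if k not in (i, j)]
        val = (-1) ** (i + j) * pfaffian(M[keep, keep])
        A[i, j], A[j, i] = val, -val
    return A

# a generic skew 6x6 matrix in 15 independent variables
a = sp.symbols('a0:15')
M = sp.zeros(6, 6)
for idx, (i, j) in enumerate(combinations(range(6), 2)):
    M[i, j], M[j, i] = a[idx], -a[idx]

# adj^Pf(M) * M = M * adj^Pf(M) = Pf(M) * Id
assert sp.expand(pf_adjoint(M) * M - pfaffian(M) * sp.eye(6)) == sp.zeros(6, 6)
assert sp.expand(M * pf_adjoint(M) - pfaffian(M) * sp.eye(6)) == sp.zeros(6, 6)
```

Since the entries are independent variables, this verifies the identity for all skew $6\times 6$ matrices over any commutative ring, in particular over the polynomial ring used in the text.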
\noindent For the case $\mathrm{Pf}(M) = 0$, we need some preparations.
\begin{lemma}\label{lDruelPrep} Let $M$ and $S$ be as in Table \ref{tPfaffZero}. Then \[
\mathrm{adj}^{\mathrm{Pf}}(M) = S I S^t \] with $I = \left(\begin{smallmatrix} 0 & 1 \\ -1 & 0\end{smallmatrix}\right)$ in cases (a), (d), (e), (f) and $I = \left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)$ in cases (b) and (c).
\noindent Moreover, letting \[ \sheaf{F}_{S} = \mathrm{im} \left( 6 \sheaf{O}_X(-1) \xrightarrow{S^t} 2 \sheaf{O}_X \right) . \] we have \[
\sheaf{F}_M \simeq \sheaf{F}_S. \] \end{lemma}
\begin{proof} The first assertion is a direct computation \cite[SISt.m2]{BB-M2}. Moreover, it can be checked \cite[SISt.m2]{BB-M2} that $S$ is injective on $\mathbb P^4$ and thus in particular on $X$.
We have a commutative diagram
\xycenter{ 6 \sheaf{O}_X(-1) \ar[rr]^{\mathrm{adj}^{\mathrm{Pf}}(M)} \ar[d]^{S^t} & & 6 \sheaf{O}_X(1) \\ 2 \sheaf{O}_X \ar[rr]^{I} & & 2 \sheaf{O}_X \ar[u]^S . } Since $S\circ I$ is injective, we see that $\sheaf{F}_M \simeq \sheaf{F}_S$. \end{proof}
We now go through the polystable cases (a), (b), (c) and (f) in Table \ref{tPfaffZero}.
\begin{lemma}\label{lConic} Let $([M],[F])\in \widetilde{\MM}_X$ where $M$ is of type $(a)$ in Table \ref{tPfaffZero}. Let $S$ be the corresponding syzygy matrix. Then $\sheaf{F}_S$ is isomorphic to a sheaf $\sheaf{E}$ given by the construction in case $(\alpha)$ above. Moreover, any such sheaf $\sheaf{E}$ arises from a unique pair $([M],[F])\in \widetilde{\MM}_X$ in this way. \end{lemma}
\begin{proof}
In case $(a)$ we have \[ S^t = \begin{pmatrix}
l_2 & -l_1 & l_0 &&& l_3 \\
&& l_4 & l_2 & -l_1 & l_0 \end{pmatrix} \] with the $l_i$ independent linear forms. $S^t$ has rank $1$ on a smooth conic $C$ in $X$, defined by $l_2 = l_1 = l_3l_4-l_0^2 = 0$, and nowhere rank $0$. Therefore \[ \coker \left( 6 \sheaf{O}_{\mathbb P^4}(-1) \xrightarrow{S^t} 2 \sheaf{O}_{\mathbb P^4} \right) \] is a line bundle $L$ supported on $C$. Restricting to the $\mathbb P^2$ defined by $l_2=l_1=0$ we obtain an exact sequence \[
0 \to 2 \sheaf{O}_{\mathbb P^2}(-1)
\xrightarrow{
\left(
\begin{smallmatrix}
l_3 & l_0 \\
l_0 & l_4
\end{smallmatrix}
\right)
}
2 \sheaf{O}_{\mathbb P^2}
\to
L
\to
0. \] By \cite[Proposition 3.1 (b)]{Beau00}, for example, applied to $L(-1)$, the degree of $L$ must be $1$. It is therefore the positive generator of $\Pic(C)$. We now consider the sequence on $X$: \[
6 \sheaf{O}_{X}(-1) \xrightarrow{S^t} 2 \sheaf{O}_{X} \xrightarrow{} L \rightarrow 0. \] The map $2 \sheaf{O}_{X} \to L$ gives linearly independent global sections of $L$; since $\deg L=1$, $H^0(C, L)$ has dimension $2$ and thus the map can be identified with the evaluation map $H^0(C, L)\otimes\sheaf{O}_X \to L$. Therefore, $\sheaf{F}_S$ is isomorphic to the sheaf $\sheaf{E}$ given by the construction in case $(\alpha)$ for the conic $C$.
Any sheaf $\sheaf{E}$ of this type arises in this way because all conics in $\mathbb P^4$ form one $\mathrm{GL}_5 (\mathbb C)$-orbit, therefore all such sheaves $L$ as in $(\alpha)$ arise as cokernels of matrices $S^t$ as in (a). Then let $M$ be the corresponding matrix as in Table \ref{tPfaffZero}, (a).
For the uniqueness statement notice that $S^t$ represents the map $\varphi_1$ in the minimal free resolution \[ \dots \rightarrow F_2 \xrightarrow{\varphi_2} F_1 \xrightarrow{\varphi_1} F_0 \rightarrow \Gamma_* (L) \rightarrow 0 \]
of the associated graded module \[ \Gamma_* (L) =\bigoplus_{n\ge 0} H^0 \bigl( L(n) \bigr) =\bigoplus_{n \ge 0} H^0 \bigl( \sheaf{O}_{\mathbb P^1}(1+2n)\bigr). \] Then one uses Proposition \ref{pSameInformation} to conclude that this determines $M$ up to the action of $G$. \end{proof}
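The rank behaviour of $S^t$ claimed in this proof can be spot-checked symbolically, treating the independent linear forms $l_0,\dots,l_4$ as coordinates and parametrising the conic $l_1 = l_2 = 0$, $l_3l_4 = l_0^2$ by $l_0 = ts$, $l_3 = t^2$, $l_4 = s^2$ (a SymPy sketch, not part of the original argument):

```python
import sympy as sp
from itertools import combinations

l0, l1, l2, l3, l4, t, s = sp.symbols('l0 l1 l2 l3 l4 t s')

# the matrix S^t of case (a)
St = sp.Matrix([
    [l2, -l1, l0,  0,   0, l3],
    [ 0,   0, l4, l2, -l1, l0],
])
assert St.rank() == 2  # generic rank is 2

# on the conic l1 = l2 = 0, l3*l4 = l0^2, parametrised by
# l0 = t*s, l3 = t^2, l4 = s^2, every 2x2 minor vanishes
Sc = St.subs({l1: 0, l2: 0, l0: t * s, l3: t**2, l4: s**2})
for c1, c2 in combinations(range(6), 2):
    assert sp.expand(Sc[:, [c1, c2]].det()) == 0

# but S^t does not vanish identically there, so the rank drops exactly to 1
assert Sc.subs({t: 1, s: 1}) != sp.zeros(2, 6)
```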
\begin{lemma}\label{lTwoLines} Let $S$ be as in Table \ref{tPfaffZero}, cases $(b), (c)$ or $(f)$, and let $l,l'$ be the two lines (possibly equal) where $S$ drops rank. Then $\sheaf{F}_S = I_{l} \oplus I_{l'}$ is the sheaf associated to $l,l'$ as in case $(\beta)$ above. Conversely, any sheaf as in case $(\beta )$ arises from a unique pair $([M],[F])\in \widetilde{\MM}_X$ in this way. \end{lemma}
\begin{proof} In cases $(b)$, $(c)$ and $(f)$ we have \[ S^t = \begin{pmatrix}
l_2 & -l_1 & l_0 &&& \\
&& & m_2 & -m_1 & m_0 \end{pmatrix} \] with $l = V(l_0,l_1,l_2)$ and $l' = V(m_0,m_1,m_2)$ lines in $X \subset \mathbb P^4$. The first claim follows.
For the converse first notice that there are three $\mathrm{GL}_5 (\mathbb C )$-orbits of pairs of lines in $\mathbb P^4$ corresponding to the cases $(b), (c), (f)$. Therefore every sheaf of type $(\beta)$ arises in this way. For the uniqueness notice that $\sheaf{F}_S$ in this case determines $S$ up to row and column operations, hence $M$ up to an action of $G$ by Proposition \ref{pSameInformation}. \end{proof}
\begin{lemma}\label{lNotPolystable} Let $S$ be as in Table \ref{tPfaffZero}, cases $(d)$ or $(e)$, and let $l$ be the line where $S$ drops rank. Then $\sheaf{F}_S$ is a nontrivial extension of the ideal sheaf $I_l$ by itself. \end{lemma}
\begin{proof} In cases $(d)$ and $(e)$ we have \[ S^t = \begin{pmatrix}
l_2 & -l_1 & l_0 &&& \\
m_2 & -m_1 & m_0 & l_2 & -l_1 & l_0 \end{pmatrix} \] with $l = V(l_0,l_1,l_2)$ and $m_0, m_1, m_2$ certain linear forms not all equal to zero.
Let $A_X =\mathbb C [x_0,\dots , x_4]/(F)$ be the homogeneous coordinate ring and consider $S^t$ as a map \[
6 A_X (-1) \xrightarrow{S^t} A_X \oplus A_X . \] Here the first copy of $A_X$ in $A_X \oplus A_X$ corresponds to the first row of $S^t$ above, the second copy of $A_X$ to the second row. Projecting onto the first copy of $A_X$ we see that $\mathrm{im}\, S^t$ sits in an extension \[ 0 \to K \to \mathrm{im}\, S^t \to I_l \to 0 \] where $I_l= (l_0,l_1,l_2)$ and $K$ contains $I_l$. We need to show that in cases (d) and (e) we indeed have $K=I_l$. Denoting by $e_1, e_2$ a basis of $A_X \oplus A_X$, this amounts to proving that whenever an $A_X$-linear combination \[ \alpha (l_2e_1 +m_2e_2 ) + \beta ( l_1e_1 +m_1e_2 ) + \gamma ( l_0 e_1 + m_0 e_2) \] is such that \[ \alpha l_2 + \beta l_1 + \gamma l_0 = 0 \] in $A_X$, then \[ \alpha m_2 + \beta m_1 + \gamma m_0 \] is already in the ideal generated by $(l_0, l_1, l_2)$ in $A_X$. For this we will use that $([M],[F])\in \widetilde{\MM}_X$, and that, by the main theorem \cite[Theorem 2.1]{BB22-2}, this means that the scheme-theoretic intersection of $Y$, defined by the $2\times2$ minors of $S$, and $X$ contains a degree $2$ curve. This will give us enough of a connection between $F$ and the possible relations between $l_0, l_1, l_2$ modulo $F$ to conclude.
First a preliminary observation: suppose that \[ \alpha l_2 + \beta l_1 + \gamma l_0 = g \cdot F \] for $\alpha, \beta, \gamma, g \in \mathbb C [x_0,\dots , x_4]$, and write \[ F = q_2 l_2 + q_1l_1 +q_0l_0 \] with $q_j$ some quadrics, which is always possible since $X$ in particular contains $l$. Then \[ (\alpha -gq_2) l_2 + ( \beta-gq_1) l_1 + (\gamma -gq_0) l_0 = 0 \] and in particular, all relations modulo $F$ between $l_2, l_1, l_0$ are generated by Koszul relations and the one relation $q_2 l_2 + q_1l_1 +q_0l_0 \equiv 0 \mod F$. Whenever $(\alpha, \beta , \gamma)$ gives a Koszul relation between $l_2, l_1, l_0$, clearly $\alpha m_2 + \beta m_1 + \gamma m_0$ is in the ideal generated by $(l_0, l_1, l_2)$. So it suffices to check that also \[ q_2 m_2 + q_1m_1 +q_0m_0 \in (l_0, l_1, l_2) \] when we write $F = q_2 l_2 + q_1l_1 +q_0l_0$. We distinguish cases (d) and (e), starting with (e), which is a little simpler. In that case, the matrix $S^t$ takes the form \[ S^t = \begin{pmatrix}
l_2 & -l_1 & l_0 & 0 & 0 & 0 \\
0 & 0 & l_3 & l_2 & -l_1 & l_0 \end{pmatrix}. \] The unique pure-dimensional degree $2$ subscheme contained in $Y$ in this case is the plane double line with saturated ideal $(l_1, l_2, l_0^2)$. If $F=q_2 l_2 + q_1l_1 +q_0l_0$ is in that ideal, we must have $q_0l_0 \in (l_1, l_2, l_0^2)$, meaning $q_0$ is in $(l_1, l_2, l_0)$. Therefore, $q_2 m_2 + q_1m_1 +q_0m_0 = q_0l_3 \in (l_0, l_1, l_2)$ in this case.
In case (d) \[ S^t = \begin{pmatrix}
l_2 & -l_1 & l_0 & 0 & 0 & 0 \\
0 & -l_4 & l_3 & l_2 & -l_1 & l_0 \end{pmatrix}, \] and the saturation of the ideal of $2\times 2$ minors in that case is the ideal of a double line on the quadric $l_2=0, l_0l_4-l_1l_3=0$. It is generated by $l_2, l_0l_4-l_1l_3, l_1^2, l_0l_1, l_0^2$. Again $F=q_2 l_2 + q_1l_1 +q_0l_0$. The linear forms $l_0, \dots , l_4$ are independent, and putting $l_2=0$, which amounts to working in the polynomial ring in $l_0, \dots , l_3$, we see that $\overline{F} = \overline{q}_1 l_1 +\overline{q}_0 l_0$ is in the ideal generated by \[ (-l_3)l_1 + l_4 l_0, \qquad l_1^2, \qquad l_0^2, \qquad l_0l_1, \] where the overlines mean ``put $l_2=0$'', or equivalently, work in the polynomial ring in $l_0, \dots , l_3$. Subtracting a multiple of $(-l_3)l_1 + l_4 l_0$ we can assume that $\overline{F} = \overline{q}_1 l_1 +\overline{q}_0 l_0$ is even in the ideal generated by $l_1^2, l_0l_1, l_0^2$, meaning that the new $\overline{q}_1, \overline{q}_0$ are in $(l_1,l_0)$. Summarising, this means that we can write the original $q_1, q_0$ as \begin{align*} q_1 =& \lambda_1 l_2 - \lambda l_3 + q_1' \\ q_0 =& \lambda_2 l_2 + \lambda l_4 + q_0' \end{align*} with $\lambda_1, \lambda_2, \lambda$ some linear forms and $q_1', q_0' \in (l_1,l_0)$. Therefore also in this case, \[ q_2 m_2 + q_1m_1 +q_0m_0 = q_1 l_4 + q_0l_3 \in (l_0, l_1, l_2). \] \end{proof}
\begin{theorem}\label{tComparisonDruel} There is an isomorphism \[ \mathrm{im}\, \mathrm{adj}^{\mathrm{Pf}} \colon \widetilde{\MM}_X \to \overline{\mathcal{M}}_X . \] sending $([M], [F])$ to $\sheaf{F}_M$. \end{theorem}
\begin{proof} Lemmata \ref{lDruelNonZero}, \ref{lConic}, \ref{lTwoLines}, \ref{lNotPolystable} show that $\mathrm{im}\, \mathrm{adj}^{\mathrm{Pf}}$ defines a morphism from $\widetilde{\sk}^{ss}$ to $\overline{\mathcal{M}}_X$ that is constant on $G$-orbits, giving a morphism $\widetilde{\MM}_X \to \overline{\mathcal{M}}_X$.
To show that $\mathrm{im}\, \mathrm{adj}^{\mathrm{Pf}}$ is injective on closed points, we distinguish two cases. If $\sheaf{E} \in \mathcal{M}_X\subset \overline{\mathcal{M}}_X$, then $\sheaf{E}$ fits into a short exact sequence \[ 0 \to 6 \sheaf{O}_{\mathbb P^4}(-2) \xrightarrow{M} 6 \sheaf{O}_{\mathbb P^4}(-1) \to \sheaf{E} \to 0 \] where $M$ is a skew $6\times 6$ matrix whose Pfaffian defines $X$. If \[ 0 \to 6 \sheaf{O}_{\mathbb P^4}(-2) \xrightarrow{M'} 6 \sheaf{O}_{\mathbb P^4}(-1) \to \sheaf{E} \to 0 \] is a second such sequence, then there exist invertible matrices $S,T \in \mathrm{GL_6}(\mathbb C)$ such that \[
M' = S^{-1}M T. \] By Lemma \ref{lAppendixGeneralSkew} this implies that there exists an invertible matrix $U \in \mathrm{GL_6}(\mathbb C)$ such that \[ M' = S^{-1} M T = U^t M U. \] Therefore $M$ and $M'$ are in the same $G$-orbit.
If $\sheaf{E}\in \overline{\mathcal{M}}_X - \mathcal{M}_X = \mathcal{A}\cup \mathcal{B}$, then the assertion follows from the uniqueness statements in Lemmata \ref{lTwoLines} and \ref{lConic}.
Moreover, $\mathrm{im}\, \mathrm{adj}^{\mathrm{Pf}}$ is dominant since all of $\mathcal{M}_X$ is in its image, hence bijective on closed points since $\widetilde{\MM}_X$ is projective. Since $\overline{\mathcal{M}}_X$ is known to be smooth by Druel's results, the morphism $\mathrm{im}\, \mathrm{adj}^{\mathrm{Pf}} $ must be an isomorphism by Zariski's main theorem. \end{proof}
\
\appendix
\section{A linear algebra lemma}\label{sAppendixSkew}
\noindent We want to prove the following
\begin{lemma} \label{lAppendixGeneralSkew} Let $M$ be a skew-symmetric $n \times n$ matrix with entries in a $\mathbb C$-algebra $R$ and $A,B \in \mathrm{GL_n}(\mathbb C)$ invertible matrices such that $M'=A^{-1}MB$ is also skew. Then there exists an invertible matrix $S \in \mathrm{GL_n}(\mathbb C)$ such that \[
A^{-1}MB = S^t M S. \] \end{lemma}
\noindent Since \[
A^{-1}MB = S^t M S \iff M(BA^t) = (AS^t) M (SA^t) \] we can reduce to the case where $A = \mathrm{id}$. Furthermore, we observe that for any $T \in \mathrm{GL_n}(\mathbb C)$ \[
M' = MB \iff T^tM'T = (T^tMT)(T^{-1}BT). \] We can therefore assume that $B$ has Jordan normal form.
By the following proposition we can reduce to the case where all Jordan blocks of $B$ have the same eigenvalue:
\begin{proposition} Let $M$ be skew and assume that $B$ has Jordan blocks for pairwise different eigenvalues $\lambda_1, \dots , \lambda_n \in \mathbb C^*$. We write in block form \[
MB = \begin{pmatrix}
M_{11} & \dots & M_{1n} \\
\vdots & \ddots & \vdots \\
-M_{1n}^t & \dots & M_{nn}
\end{pmatrix}
\begin{pmatrix}
B_{\lambda_1} & & \\
& \ddots & \\
& & B_{\lambda_n}
\end{pmatrix}
= M' \] with $B_{\lambda_i}$ a square matrix containing all the Jordan blocks for the eigenvalue $\lambda_i$ on the diagonal. If $M'$ is also skew, then $M_{ij}=0$ for $i\neq j$. \end{proposition}
\begin{proof} Choose indices $i< j$. If $M'$ is skew then \[
M'_{ij}= M_{ij}B_{\lambda_j} = -(M_{ji}')^t= - \bigl( -(M_{ij})^tB_{\lambda_i}\bigr)^t = B_{\lambda_i}^t M_{ij}. \] We want to prove $M_{ij} = 0$ by induction on the number of rows of $M_{ij}$.
\noindent If $M_{ij}$ has only one row, then \[
M_{ij}B_{\lambda_j} = B_{\lambda_i}^t M_{ij} = \lambda_i M_{ij} = M_{ij} (\lambda_i \cdot\mathrm{id}) \] which implies \[
M_{ij}(B_{\lambda_j}-\lambda_i\cdot\mathrm{id}) = 0. \] Since $\lambda_i \not=\lambda_j$ we get that $M_{ij}=0$.
\noindent If $M_{ij}$ has more than one row, we write the equation above as \[
B_{\lambda_i}^t M_{ij}= \begin{pmatrix}
(\widetilde{B}_{\lambda_i})^t & 0 \\
\begin{pmatrix} 0 & \cdots & 0 & \epsilon \end{pmatrix}& \lambda_i
\end{pmatrix}
\begin{pmatrix}
\widetilde{M}_{ij} \\
m
\end{pmatrix}
= \begin{pmatrix}
\widetilde{M}_{ij} \\
m
\end{pmatrix}
B_{\lambda_j} \] with $\epsilon = 0$ or $1$. We see that $\widetilde{M}_{ij}$ satisfies the induction hypothesis and therefore $\widetilde{M}_{ij}=0$. The equation above then reduces to \[
\lambda_i m = m B_{\lambda_j} \] which as in the one-row case implies $m=0$. \end{proof}
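The proposition is an instance of the standard fact that a Sylvester-type equation $M_{ij}B_{\lambda_j} = B_{\lambda_i}^t M_{ij}$ has only the trivial solution when the two matrices have disjoint spectra. A small SymPy illustration (ours, with hypothetical Jordan blocks of sizes $2$ and $3$), which also shows that the hypothesis $\lambda_i \neq \lambda_j$ is genuinely needed:

```python
import sympy as sp

def jordan_block(lam, k):
    # a single k x k Jordan block with eigenvalue lam
    J = lam * sp.eye(k)
    for i in range(k - 1):
        J[i, i + 1] = 1
    return J

xs = sp.symbols('x0:6')
X = sp.Matrix(2, 3, xs)

# distinct eigenvalues (2 vs 3): X*B_{lam_j} = B_{lam_i}^t * X forces X = 0
E = X * jordan_block(3, 3) - jordan_block(2, 2).T * X
A, _ = sp.linear_eq_to_matrix(list(E), list(xs))
assert A.rank() == 6  # full rank: only the trivial solution

# equal eigenvalues: the linear system degenerates and nonzero solutions exist
E2 = X * jordan_block(2, 3) - jordan_block(2, 2).T * X
A2, _ = sp.linear_eq_to_matrix(list(E2), list(xs))
assert A2.rank() < 6
```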
\noindent The case where $B$ has only one eigenvalue is treated by
\begin{proposition} Let $M$ be skew and $B_\lambda$ a matrix consisting of Jordan blocks with eigenvalue $\lambda \in \mathbb C^*$. If \[
M' = MB_\lambda \] is again skew, then there exists an invertible matrix $S$ such that $S^2=B_\lambda$ and \[
M' = S^t M S. \] \end{proposition}
\begin{proof} Since $M$ and $M'$ are skew, we have \[
MB_{\lambda} = M' = -(M')^t = -B_{\lambda}^tM^t = B_{\lambda}^tM. \] We can now write \[
B_\lambda = \lambda \cdot \mathrm{id} + N \] with $N$ nilpotent. Plugging this into the above equation we get \[
M (\lambda \cdot \mathrm{id} + N) = (\lambda \cdot \mathrm{id} + N^t)M
\quad \iff \quad
MN = N^tM. \] For $s_i \in \mathbb C$ we consider the matrix \[
S = \sum_i s_i N^i . \] Since the $s_i$ are in $\mathbb C$ we can successively solve the equation $S^2 = B_{\lambda}$ for the $s_i$, computing $s_0, s_1, s_2 \dots$ in this order. There are then two solutions corresponding to the two solutions of $s_0^2 = \lambda$ (after which $s_1, \dots$ are uniquely determined using $\lambda\neq 0$). Now since $MN = N^tM$ we also have $MS = S^tM$. With this we get \[
M'=MB_{\lambda} = MS^2 = (MS) S = S^tMS \] as claimed. \end{proof}
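The construction of $S$ can be made concrete in a small example. The following SymPy sketch (our illustration, not from the text) takes $B_\lambda$ consisting of two Jordan blocks of size $2$ with $\lambda = 4$ and a hand-picked nonzero skew $M$ satisfying $MN = N^tM$, and verifies both $S^2 = B_\lambda$ and $M' = S^tMS$:

```python
import sympy as sp

lam = sp.Integer(4)
N = sp.zeros(4, 4)
N[0, 1] = N[2, 3] = 1          # nilpotent part: two Jordan blocks of size 2
B = lam * sp.eye(4) + N

# a hand-picked nonzero skew M with M*N = N^T*M, so that M' = M*B is skew
M = sp.Matrix([
    [ 0,  0, 0, 1],
    [ 0,  0, 1, 0],
    [ 0, -1, 0, 0],
    [-1,  0, 0, 0],
])
assert M.T == -M
assert sp.expand(M * N - N.T * M) == sp.zeros(4, 4)
assert (M * B).T == -(M * B)   # M' = M*B is indeed skew

# S = s0*id + s1*N with s0^2 = lam and 2*s0*s1 = 1, as in the proof
s0 = sp.sqrt(lam)
S = s0 * sp.eye(4) + N / (2 * s0)
assert sp.expand(S**2 - B) == sp.zeros(4, 4)             # S^2 = B_lambda
assert sp.expand(S.T * M * S - M * B) == sp.zeros(4, 4)  # M' = S^t M S
```

Here $N^2 = 0$, so the power series for $S$ terminates after the linear term; for larger Jordan blocks one solves for $s_2, s_3, \dots$ successively as described above.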
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\href}[2]{#2}
\end{document}