\begin{document}
\title{Lipschitz extensions of definable $p$-adic functions}
\abstract{In this paper, we prove a definable version of Kirszbraun's theorem in a non-Archimedean setting for definable families of functions in one variable. More precisely, we prove that every definable function $f:X\times Y\to\mathbb{Q}_p^s$, where $X\subset \mathbb{Q}_p$ and $Y\subset \mathbb{Q}_p^r$, that is $\lambda$-Lipschitz in the first variable, extends to a definable function $\tilde{f}:\mathbb{Q}_p\times Y\to\mathbb{Q}_p^s$ that is $\lambda$-Lipschitz in the first variable.}
\section{Introduction}
In 1934, Kirszbraun proved that every $\lambda$-Lipschitz function $f:S\subset \mathbb{R}^r\to\mathbb{R}^s$ extends to a $\lambda$-Lipschitz function $\tilde{f}:\mathbb{R}^r\to\mathbb{R}^s$ (see \cite{kirszbraun}). In 1983, Bhaskaran proved that a version of Kirszbraun's theorem still holds in a non-Archimedean setting, more precisely, for all spherically complete fields (see \cite{bhaskaran}). Recently, in 2010, Aschenbrenner and Fischer proved a definable version of Kirszbraun's theorem. In particular, they proved that every $\lambda$-Lipschitz function $f:S\subset\mathbb{R}^r\to\mathbb{R}^s$, that is definable in an expansion of the ordered field of real numbers, extends to a $\lambda$-Lipschitz function $\tilde{f}:\mathbb{R}^r\to\mathbb{R}^s$ that is definable in the same structure (see \cite{aschenbrenner}).
The proof of Bhaskaran relies in an essential way on Zorn's Lemma, which makes it far from being applicable to a definable setting. Therefore Aschenbrenner posed the question whether there could be a \emph{definable} version of Kirszbraun's theorem in a non-Archimedean setting. In this paper we partially answer that question and prove a definable version of Kirszbraun's theorem in a non-Archimedean setting for definable families of functions in one variable. More precisely, we prove that every definable function $f:X\times Y\to\mathbb{Q}_p^s$, where $X\subset \mathbb{Q}_p$ and $Y\subset \mathbb{Q}_p^r$, that is $\lambda$-Lipschitz in the first variable, extends to a definable function $\tilde{f}:\mathbb{Q}_p\times Y\to\mathbb{Q}_p^s$ that is $\lambda$-Lipschitz in the first variable. By \emph{definable}, we mean definable in either a semi-algebraic or a subanalytic structure on $\mathbb{Q}_p$. Working with these languages will allow us to use a cell decomposition result (see Theorem \ref{thm_preparation}) that is essential for the construction of Lipschitz extensions.
In a first approach we use an easier construction to obtain a $\Lambda$-Lipschitz extension, where $\Lambda$ is possibly larger than $\lambda$. In a second and more involved approach, we show that one can take $\Lambda$ equal to $\lambda$. More generally, we prove our results for finite field extensions of $\mathbb{Q}_p$.
\subsection*{Acknowledgments}
The author would like to thank Raf Cluckers for proposing the idea for this paper, for the many fruitful discussions, and for his constant optimism during its preparation. The author is also grateful to Matthias Aschenbrenner, for it was his question that formed the inspiration for this research project.
\section{Preliminary definitions and facts}
Let $p$ be a prime number, and $\mathbb{Q}_p$ the field of $p$-adic numbers. Let $K$ be a finite field extension of $\mathbb{Q}_p$. Denote by $\mathrm{ord}:K^\times \to \mathbb{Z}$ the valuation. Denote by $\mathcal{O}_K$ the valuation ring, by $\mathcal{M}_K$ the maximal ideal of $\mathcal{O}_K$ and by $\pi_K$ a fixed generator of $\mathcal{M}_K$. Let $q$ denote the cardinality of the residue field. Finally, let $\overline{\mathrm{ac}}_m: K\to \mathcal{O}_K/(\pi_K^m)$ be the angular component map of depth $m$, sending every nonzero $x$ to $x\pi_K^{-\mathrm{ord}(x)} \mod (\pi_K^m)$ and 0 to 0.
The valuation induces a non-Archimedean norm on $K$ by setting $\abs{x}=q^{-\mathrm{ord}(x)}$ for nonzero $x$, and $\abs{0}=0$. This extends to a non-Archimedean norm on $K^s$ by setting $\abs{(x_1,\ldots,x_s)} = \max_i \{\abs{x_i}\}$. A function $f:K^r\to K^s$ is said to be $\lambda$-Lipschitz, with $\lambda\in\mathbb{R}$, if $\abs{f(x)-f(y)}\leq \lambda \abs{x-y}$ for all $x,y\in K^r$. One calls $\lambda$ the \emph{Lipschitz constant} of $f$.
Say a set $X\subset K^r$ is \emph{definable} if it is definable in either a semi-algebraic or a subanalytic structure on $K$. This means that $X$ is given by a first-order formula, possibly with parameters from $K$, in the semi-algebraic or subanalytic language (see \cite{ccl} for more details). For the convenience of the reader, we recall these languages. The semi-algebraic (or \emph{Macintyre}) language is the language $\mathcal{L}_{\text{Mac}}=(+,-,\cdot,\{P_n\}_{n>0},0,1)$, where the predicates $P_n$ stand for the $n$-th powers in $K$. The subanalytic language is the language $\mathcal{L}_{\text{an}}=\mathcal{L}_{\text{Mac}} \cup (^{-1},\cup_{m>0}K\{x_1,\ldots,x_m\})$, where $^{-1}$ is interpreted as the multiplicative inverse extended by $0^{-1}=0$, and where every function symbol from $K\{x_1,\ldots,x_m\}$ is interpreted as the restricted analytic function $K^m\to K$ given by
\[x\mapsto \begin{cases} f(x) &\text{if }x\in \mathcal{O}_K^m,\\0&\text{otherwise},\end{cases}\]
where $f$ is a formal power series converging on $\mathcal{O}_K^m$. If $X\subset K^r$ is a definable set, then a function $f:X\to K^s$ is called definable if its graph is a definable subset of $K^{r+s}$.
We work with the notion of $p$-adic cells as given in \cite{ch}. We recall the main definitions and properties. For integers $m,n>0$, let $Q_{m,n}$ be the (definable) set
\[Q_{m,n} = \{x\in K^\times \mid \mathrm{ord}(x)\in n\mathbb{Z},\ \overline{\mathrm{ac}}_m(x)=1\}.\]
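For instance, for $K=\mathbb{Q}_p$ with $\pi_K=p$, one has $Q_{m,n}=\bigcup_{k\in\mathbb{Z}}p^{nk}(1+p^m\mathbb{Z}_p)$; in particular, $Q_{1,1}$ consists of the nonzero $p$-adic numbers whose leading digit equals $1$.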
\begin{definition}
Let $Y$ be a definable set. A \emph{cell} $C\subset K\times Y$ over $Y$ is a (nonempty) set of the form
\begin{equation*}
\resizebox{.85\hsize}{!}{$C = \{(x,y)\in K\times Y\mid y\in Y',\ \abs{\alpha(y)}\mathrel{\square_1}\abs{x-c(y)}\mathrel{\square_2} \abs{\beta(y)},\ x-c(y)\in\xi Q_{m,n}\}$,}
\end{equation*}
where $Y'\subset Y$ is a definable set, $\xi\in K$, $\alpha, \beta: Y'\to K^\times$ and $c:Y'\to K$ are definable functions, $\square_i$ is either $<$ or ``no condition'', and such that $C$ projects surjectively onto $Y'$. We call $c$ and $\xi Q_{m,n}$ the \emph{center} and the \emph{coset} of the cell $C$, respectively. If $\xi = 0$ we call $C$ a \emph{0-cell}, otherwise we call $C$ a \emph{1-cell}. We call $Y'$ the \emph{base} of the cell $C$.
\end{definition}
\begin{definition}
Let $Y$ be a definable set. Let $C\subset K\times Y$ be a 1-cell over $Y$ with center $c$ and coset $\xi Q_{m,n}$. Then, for each $(t,y) \in C$ with $y\in Y$, there exists a unique maximal ball $B$ containing $t$ and satisfying $B\times \{y\}\subset C$, where maximality is under inclusion. If $\mathrm{ord}(t-c(y))=l$, this ball is of the form
\[B = B_{l,c(y),m,\xi} = \{x\in K\mid \mathrm{ord}(x-c(y)) = l,\, \overline{\mathrm{ac}}_m(x-c(y)) = \overline{\mathrm{ac}}_m(\xi)\}.\]
We call the collection of all these maximal balls the \emph{balls of the cell $C$}. For fixed $y_0\in Y$, we call the collection of balls $\{B_{l,c(y_0),m,\xi}\mid B_{l,c(y_0),m,\xi}\times \{y_0\}\subset C\}$ the balls of the cell $C$ \emph{above $y_0$}. If $C\subset K\times Y$ is a $0$-cell, we define the collection of balls of $C$ to be the empty collection.
\end{definition}
Notice that $B_{l,c(y),m,\xi}$ is a ball of diameter $q^{-(l+m)}$, in particular, for every $x_1,x_2\in B_{l,c(y),m,\xi}$ it holds that $\abs{x_1-x_2}\leq q^{-(l+m)}$.
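Indeed, if $x_1,x_2\in B_{l,c(y),m,\xi}$, then $x_i-c(y)=\pi_K^l u_i$ with $u_i\in\mathcal{O}_K^\times$ and $u_1\equiv u_2 \bmod (\pi_K^m)$, so that $\mathrm{ord}(x_1-x_2)=l+\mathrm{ord}(u_1-u_2)\geq l+m$; conversely, every $x\in K$ with $\abs{x-x_1}\leq q^{-(l+m)}$ again lies in $B_{l,c(y),m,\xi}$.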
\begin{definition}[Jacobian property]
Let $f:B\to B'$ be a function, where $B,B'\subset K$ are balls. Say that $f$ has the \emph{Jacobian property} if the following conditions hold:
\begin{enumerate}
\item $f$ is a bijection;
\item $f$ is continuously differentiable on $B$, with derivative $\frac{\mathrm{d}f}{\mathrm{d}x}$;
\item $\mathrm{ord} (\frac{\mathrm{d}f}{\mathrm{d}x})$ is constant (and finite) on $B$;
\item for all $x,y\in B$ with $x\neq y$, one has:
\[\mathrm{ord}(f(x)-f(y)) = \mathrm{ord}\left(\frac{\mathrm{d}f}{\mathrm{d}x}\right) + \mathrm{ord}(x-y).\]
\end{enumerate}
\end{definition}
\begin{definition}
Let $f:S\subset K\times Y\to K$ be a function. Then we define
\[f\times \mathrm{id}: S\to K\times Y: (x,y)\mapsto (f(x,y),y),\]
and we denote with $S_f$ the image of $f\times\mathrm{id}$.
\end{definition}
\begin{definition}
Let $f:S\subset K\times Y\to K^s$ be a function. Then we define for every $y\in Y$
\[f_y: S_y\to K^s: x\mapsto f(x,y),\]
where $S_y$ denotes the fiber $S_y = \{x\in K\mid (x,y)\in S\}$.
\end{definition}
\begin{definition}
Let $Y$ be a definable set, let $C\subset K\times Y$ be a 1-cell over $Y$, and let $f:C\to K$ be a definable function. Say that $f$ is \emph{compatible} with the cell $C$ if either $C_f$ is a 0-cell over $Y$, or the following holds: $C_f$ is a 1-cell over $Y$ and for each $y\in Y$ and each ball $B$ of $C$ above $y$ and each ball $B'$ of $C_f$ above $y$, the functions $\restr{f_y}{B}$ and $\restr{f_y^{-1}}{B'}$ have the Jacobian property.
If $g:C\to K$ is a second definable function which is compatible with the cell $C$ and if we have $C_f = C_g$ and $\mathrm{ord}(\frac{\partial f(x,y)}{\partial{x}})= \mathrm{ord} (\frac{\partial g(x,y)}{\partial x})$ for every $(x,y)\in C$, then we say that $f$ and $g$ are \emph{equicompatible} with $C$.
If $C'\subset K\times Y$ is a 0-cell over $Y$, any definable function $h:C'\to K$ is said to be compatible with $C'$, and $h$ and $k:C'\to K$ are equicompatible with $C'$ if and only if $h=k$.
\end{definition}
The following theorem is based on Theorem 3.3 of \cite{ch}. This theorem is the result of successive refinements of the concept of $p$-adic cell decomposition for semi-algebraic and subanalytic structures. Earlier versions are due to Cohen \cite{cohen}, Denef \cite{denef-84,denef}, and Cluckers \cite{cluckers}, and relate to the quantifier elimination results of Macintyre \cite{mac} and Denef--van den Dries \cite{denef-vdd}.
\begin{theorem}\label{thm_preparation}
Let $S\subset K\times Y$ and $f:S\to K$ be definable. Then there exists a finite partition of $S$ into cells $C$ over $Y$ such that the restriction $\restr{f}{C}$ is compatible with $C$ for each cell $C$. Moreover, for each cell $C$ there exists a definable function $m:C\to K$, a definable function $e:Y\to K$ and coprime integers $a$ and $b$ with $b>0$, such that for all $(x,y)\in C$
\[m(x,y)^b = e(y)(x-c(y))^a,\]
where $c$ is the center of $C$, and such that if one writes $c'$ for the center of $C_f$, one has that $g = m+c'$ and $f$ are equicompatible with $C$ (we use the conventions that $b=1$ whenever $a=0$, that $a=0$ whenever $C$ is a 0-cell, and that $0^0=1$).
Furthermore, if $C$ and $C_f$ are 1-cells, then for every $y\in Y$ one has that $f_y(B) = g_y(B)$ for every ball $B$ of $C$ above $y$, and the formula
\begin{equation}\label{eq_formula_order}
\mathrm{ord}\left(\frac{\partial f(x,y)}{\partial{x}}\right) = \mathrm{ord}(e(y)^{1/b}q) + (q-1)\mathrm{ord}(x-c(y))
\end{equation}
holds for all $(x,y)\in C$, where $q=a/b$ and where we use the convenient notation $\mathrm{ord}(t^{1/b}) = \mathrm{ord}(t)/b$ for $t\in K$ and $b$ a positive integer.
\end{theorem}
\begin{proof}
The existence of a finite partition of $S$ into cells $C$ over $Y$, and for every such cell $C$ the existence of $g=m+c'$ such that $f$ and $g$ are equicompatible with $C$, follows immediately from Theorem 3.3 in \cite{ch}.
Now assume that $C$ and $C_f$ are 1-cells. It is easy to see that $f_y(B) = g_y(B)$ for every $y\in Y$ and every ball $B$ of $C$ above $y$.
We now prove \eqref{eq_formula_order}. Fix $(x,y)\in C$. Since $f$ and $g$ are equicompatible, we have $\mathrm{ord}(\frac{\partial f(x,y)}{\partial{x}}) = \mathrm{ord}(\frac{\partial g(x,y)}{\partial{x}})$, so we only need to prove that \eqref{eq_formula_order} holds for $f$ replaced by $g$. For this, we first note that
\begin{equation}\label{eq_ord_g}
\mathrm{ord}(g(x,y)-c'(y)) = [\mathrm{ord}(e(y))+a\cdot\mathrm{ord}(x-c(y))]/b.
\end{equation}
It is also immediate that
\begin{equation}\label{eq_g_rechts}
\mathrm{ord}\left(\frac{\partial ([g(x,y)-c'(y)]^b)}{\partial{x}}\right) = \mathrm{ord}(e(y)a(x-c(y))^{a-1}).
\end{equation}
On the other hand, by the chain rule, the left hand side of \eqref{eq_g_rechts} also equals
\begin{equation}\label{eq_g_links}
\mathrm{ord}\left(\frac{\partial ([g(x,y)-c'(y)]^b)}{\partial{x}}\right) =\mathrm{ord}\left(b[g(x,y)-c'(y)]^{b-1}\frac{\partial g(x,y)}{\partial{x}}\right).
\end{equation}
Equating the right hand sides of \eqref{eq_g_rechts} and \eqref{eq_g_links}, and using \eqref{eq_ord_g}, one easily finds the required formula.
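Explicitly, equating the right hand sides of \eqref{eq_g_rechts} and \eqref{eq_g_links} gives
\[
\mathrm{ord}\left(\frac{\partial g(x,y)}{\partial{x}}\right) = \mathrm{ord}(e(y))+\mathrm{ord}(a/b)+(a-1)\mathrm{ord}(x-c(y))-(b-1)\mathrm{ord}(g(x,y)-c'(y)),
\]
and substituting \eqref{eq_ord_g} in the last term yields
\[
\mathrm{ord}\left(\frac{\partial g(x,y)}{\partial{x}}\right) = \frac{\mathrm{ord}(e(y))}{b}+\mathrm{ord}(q)+\left(\frac{a}{b}-1\right)\mathrm{ord}(x-c(y)) = \mathrm{ord}(e(y)^{1/b}q)+(q-1)\mathrm{ord}(x-c(y)).
\]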
\end{proof}
Comparing sizes of balls between which there is a function with the Jacobian property, we obtain the following useful formula.
\begin{lemma}\label{lemma_l'm'ordf'lm}
Let $f:B_{l,c(y),m,\xi}\to B_{l',c'(y),m',\xi'}$ be a function with the Jacobian property. Then $l'+m' = \mathrm{ord}(df/dx)+l+m$.
\end{lemma}
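Indeed, by the Jacobian property, $\mathrm{ord}(f(x_1)-f(x_2)) = \mathrm{ord}(df/dx)+\mathrm{ord}(x_1-x_2)$ for all distinct $x_1,x_2\in B_{l,c(y),m,\xi}$; since $f$ is a bijection onto $B_{l',c'(y),m',\xi'}$ and the minimal valuation of a difference of two distinct points of $B_{l,c(y),m,\xi}$ (resp.\ $B_{l',c'(y),m',\xi'}$) equals $l+m$ (resp.\ $l'+m'$), taking minima over all such pairs gives the stated formula.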
\section{Existence of Lipschitz extensions}
We now proceed towards proving the existence of definable Lipschitz extensions of definable families of functions in one variable. Let us first formulate the main theorem of this paper:
\begin{theorem}\label{thm_main_param_const1}
Let $Y\subset K^r$ and $X\subset K$ be definable sets and let $f:X\times Y\to K^s$ be a definable function that is $\lambda$-Lipschitz in the first variable. Then $f$ extends to a definable function $\tilde{f}:K\times Y\to K^s$ that is $\lambda$-Lipschitz in the first variable, i.e. $\tilde{f}_y$ is $\lambda$-Lipschitz for every $y\in Y$.
\end{theorem}
\begin{remark}\label{remark_lipschitz_constant}
By rescaling, it suffices to prove the theorem for $\lambda=1$. Also, since we use the max-norm on $K^s$, it is enough to prove the theorem for $s=1$.
\end{remark}
Firstly, we present a very general way of \emph{gluing} Lipschitz extensions of a given function to obtain a Lipschitz extension with a larger domain (this is Lemma \ref{lemma_glue}).
Secondly, given a definable function that is $\lambda$-Lipschitz in the first variable, we give an easier construction to obtain a definable extension that is $\Lambda$-Lipschitz in the first variable, where $\Lambda$ is possibly larger than $\lambda$ (this is Theorem \ref{thm_main_param}).
Thirdly and lastly, using a more involved argument, we show that one can take $\Lambda$ equal to $\lambda$ (this is Theorem \ref{thm_main_param_const1}).
\begin{lemma}[Gluing extensions]\label{lemma_glue}
Let $X\subset K^r$ be a definable set and let $f:X\to K$ be a definable and $\lambda$-Lipschitz function. Let $X = \cup_{i=1}^k X_i$ be a finite covering of $X$ by definable subsets $X_i$. Call $f_i = \restr{f}{X_i}: X_i\to K$.
If every $f_i$ extends to a definable and $\Lambda_i$-Lipschitz map $\tilde{f_i}:K^r\to K$, with $\Lambda_i\geq\lambda$, then $f$ extends to a definable and $\Lambda$-Lipschitz map $\tilde{f}:K^r\to K$, where $\Lambda = \max_i \{\Lambda_i\}$.
\end{lemma}
\begin{proof}
We prove the lemma first for $k=2$. Define $T_1 = \{ x\in K^r\mid \mathrm{d}(x,X_1)\leq \mathrm{d}(x,X_2)\}$, where $\mathrm{d}(x,A)$ denotes the distance from $x$ to the set $A$, i.e. $\mathrm{d}(x,A) = \inf\{\abs{x-a}\mid a\in A\}$. Define $T_2 = K^r\setminus T_1$, and let
\[ \tilde{f}:K^r\to K: x\mapsto \begin{cases} \tilde{f_1}(x)&\text{if }x\in T_1,\\\tilde{f_2}(x)&\text{if }x\in T_2.\end{cases}\]
Clearly $\tilde{f}$ is a definable extension of $f$. We prove that $\tilde{f}$ is $\Lambda$-Lipschitz, where $\Lambda = \max\{\Lambda_1,\Lambda_2\}$. The only nontrivial fact to verify is that for $t_1\in T_1$ and $t_2\in T_2$, we have $\abs{\tilde{f}(t_1)-\tilde{f}(t_2)}\leq \Lambda\abs{t_1-t_2}$.
Since every definable and $\lambda$-Lipschitz function extends uniquely to a definable and $\lambda$-Lipschitz function on the topological closure of its domain, we may assume that $X$, $X_1$ and $X_2$ are topologically closed.
Fix elements $a_i\in X_i$ such that $\abs{t_i-a_i} = \mathrm{d}(t_i,X_i)$, for $i=1,2$.
\begin{comment}
$a_1\in X_1$ and $a_2\in X_2$ according to the following case distinction. If $t_1\in\overline{X_1}$ and $t_2\not\in\overline{X_2}$, choose $a_2\in X_2$ such that $\abs{t_2-a_2}= \mathrm{d}(t_2,X_2)$ and $a_1\in X_1$ such that $\abs{t_1-a_1}<\abs{a_1-a_2}$. If $t_1\not\in \overline{X_1}$ and $t_2\in\overline{X_2}$, choose $a_1\in X_1$ such that $\abs{t_1-a_1} = \mathrm{d}(t_1,X_1)$ and $a_2\in X_2$ such that $\abs{t_2-a_2}<\abs{a_1-a_2}$. If $t_i\not\in \overline{X_i}$ for $i=1,2$, choose $a_i\in X_i$ such that $\abs{t_i-a_i} = \mathrm{d}(t_i,X_i)$.
\end{comment}
Since $t_2\in T_2$, we have $\abs{t_2-a_2}=\mathrm{d}(t_2,X_2)<\mathrm{d}(t_2,X_1)\leq\abs{t_2-a_1}$, so by the non-Archimedean property it always holds that
\begin{equation}\label{eq_a2x2<a1a2}
\abs{t_2-a_2}<\abs{a_1-a_2}.
\end{equation}
We can now calculate as follows:
\begin{align*}
\abs{\tilde{f}(t_1)-\tilde{f}(t_2)} &= \abs{\tilde{f_1}(t_1)-f(a_1)+f(a_2)-\tilde{f_2}(t_2)+f(a_1)-f(a_2)}\\
&\leq\max\{\abs{\tilde{f_1}(t_1)-f(a_1)},\abs{f(a_2)-\tilde{f_2}(t_2)},\abs{f(a_1)-f(a_2)}\}\\
&\leq\max\{\Lambda_1\abs{t_1-a_1},\Lambda_2\abs{a_2-t_2},\lambda\abs{a_1-a_2}\}\\
&\leq\max\{\Lambda_1,\Lambda_2\}\max\{\abs{t_1-a_1},\abs{a_2-t_2},\abs{a_1-a_2}\}\\
&\stackrel{\eqref{eq_a2x2<a1a2}}{=}\Lambda\max\{\abs{t_1-a_1},\abs{a_1-a_2}\}\\
&\leq\Lambda\abs{t_1-t_2},
\end{align*}
where we only have to verify the last inequality. For this we prove that
\begin{equation}\label{eq_preparatory}\max\{\abs{t_1-a_1},\abs{a_1-a_2}\}\leq \abs{t_1-t_2},\end{equation}
by considering two cases.
\begin{description}
\item[Case 1: $\abs{t_1-a_1}<\abs{a_1-a_2}$.]
It then holds that
\begin{equation}\label{eq_a1a2=x1a2N}
\abs{a_1-a_2}=\abs{t_1-a_2}.
\end{equation}
So
\begin{equation}\label{eq_lange}
\abs{t_2-a_2}\stackrel{\eqref{eq_a2x2<a1a2}}{<}\abs{a_1-a_2}\stackrel{\eqref{eq_a1a2=x1a2N}}{=}\abs{t_1-a_2},
\end{equation}
hence
\begin{equation}
\abs{a_1-a_2}\stackrel{\eqref{eq_a1a2=x1a2N}}{=}\abs{t_1-a_2}\stackrel{\eqref{eq_lange}}{=}\abs{t_1-t_2}.
\end{equation}
\begin{comment}
Suppose, for the sake of deriving a contradiction, that
\begin{equation}\label{eq_x1x2<a1a2}\abs{t_1-t_2}<\abs{a_1-a_2}.\end{equation}
By the case we are in, it holds that
\begin{equation}\label{eq_a1a2=x1a2}\abs{a_1-a_2}=\abs{t_1-a_2},\end{equation}
and combined with \eqref{eq_x1x2<a1a2} we find
\begin{equation}\label{eq_x1x2<x1a2} \abs{t_1-t_2}<\abs{t_1-a_2}.\end{equation}
Therefore
\[\abs{t_2-a_2} \stackrel{\eqref{eq_x1x2<x1a2}}{=}\abs{t_1-a_2} \stackrel{\eqref{eq_a1a2=x1a2}}{=} \abs{a_1-a_2} \stackrel{\eqref{eq_a2x2<a1a2}}{>} \abs{t_2-a_2},\]
which is a contradiction.
\end{comment}
\item[Case 2: $\abs{a_1-a_2}\leq\abs{t_1-a_1}$.] Suppose that
\begin{equation}\label{eq_x1x2<x1a1} \abs{t_1-t_2}<\abs{t_1-a_1},\end{equation}
then
\begin{equation}\label{eq_x1a1=x2a1=a1a2}\abs{t_1-a_1}\stackrel{\eqref{eq_x1x2<x1a1}}{=}\abs{t_2-a_1} \stackrel{\eqref{eq_a2x2<a1a2}}{=} \abs{a_1-a_2}.\end{equation}
By the choice of $a_1$ and the fact that $t_1\in T_1$, we know $\abs{t_1-a_2}\geq \abs{t_1-a_1}$, so by \eqref{eq_x1a1=x2a1=a1a2} equality holds:
\begin{equation}\label{eq_x1a1=x1a2}\abs{t_1-a_1}=\abs{t_1-a_2}.\end{equation}
Together with \eqref{eq_x1x2<x1a1}, this implies
\begin{equation}\label{eq_x1x2<x1a2twee}\abs{t_1-t_2}<\abs{t_1-a_2}.\end{equation}
So finally,
\begin{equation*}
\abs{a_1-a_2}\stackrel{\eqref{eq_x1a1=x2a1=a1a2}}{=}\abs{t_1-a_1}\stackrel{\eqref{eq_x1a1=x1a2}}{=}\abs{t_1-a_2}\stackrel{\eqref{eq_x1x2<x1a2twee}}{=}\abs{t_2-a_2} \stackrel{\eqref{eq_a2x2<a1a2}}{<}\abs{a_1-a_2},
\end{equation*}
which is a contradiction.
\end{description}
This proves \eqref{eq_preparatory}, and therefore the lemma is proved for $k=2$. An easy induction argument then proves the lemma for general $k$.
\end{proof}
\begin{remark} Lemma \ref{lemma_glue} remains true if one replaces every instance of the word ``Lipschitz'' by ``Lipschitz in the first variable''.
\end{remark}
\begin{theorem}\label{thm_main_param}
Let $Y\subset K^r$ and $X\subset K$ be definable sets and let $f:S=X\times Y\to K^s$ be a definable function that is $\lambda$-Lipschitz in the first variable. Then there exists $\Lambda\geq \lambda$ such that $f$ extends to a definable function $\tilde{f}:K\times Y\to K^s$ that is $\Lambda$-Lipschitz in the first variable, i.e. $\tilde{f}_y$ is $\Lambda$-Lipschitz for every $y\in Y$.
\end{theorem}
\begin{proof}
By Remark \ref{remark_lipschitz_constant}, we may assume that $\lambda=1$ and $s=1$. By Theorem \ref{thm_preparation} and (the remark after) Lemma \ref{lemma_glue}, we may assume that $S$ is a cell over $Y$ with which $f$ is compatible. Furthermore, we may assume that the base of $S$ is $Y$.
If $S_f$ is a $0$-cell over $Y$ with center $c'$, we define
\[\tilde{f}:K\times Y\to K: (x,y)\mapsto c'(y).\]
Clearly, $\tilde{f}$ is a definable extension of $f$ and for all $y\in Y$, $\tilde{f}_y$ is $1$-Lipschitz.
Assume from now on that $S$ and $S_f$ are 1-cells over $Y$, with center $c$ and $c'$, and coset $\xi Q_{m,n}$ and $\xi' Q_{m',n'}$, respectively.
We define $\tilde{f}$ as follows:
\[\tilde{f}:K\times Y\to K: (x,y)\mapsto\begin{cases} f(x,y) & \text{if } (x,y)\in S,\\c'(y)&\text{if }(x,y)\not\in S.\end{cases}\]
Clearly, $\tilde{f}$ is a definable extension of $f$. We prove that $\tilde{f}_y$ is $q^{m'}$-Lipschitz for every $y\in Y$.
Fix $y\in Y$. Let $t_1\in X$ and $t_2\not \in X$. Let $l$ and $l'$ be such that $t_1\in B_{l,c(y),m,\xi}$ and $f(t_1,y)\in B_{l',c'(y),m',\xi'}$. Then
\begin{align} \abs{f(t_1,y)-c'(y)} &= q^{-l'}\notag\\
&= q^{-\mathrm{ord}(\partial f(t_1,y)/\partial x)}q^{m'-m}q^{-\mathrm{ord}(t_1-c(y))}\notag\\
&\leq q^{m'-m}\abs{t_1-c(y)},\label{eq_mm'xc}
\end{align}
where the second equality follows from Lemma \ref{lemma_l'm'ordf'lm} and the last inequality holds because $f$ is $1$-Lipschitz in the first variable, and therefore $\abs{\partial f(t_1,y)/\partial x}\leq 1$.
There are two cases to consider.
\begin{description}
\item[Case 1: $\abs{t_1-c(y)} = \abs{t_2-c(y)}$.] Because $B_{l,c(y),m,\xi}$ is a ball of diameter $q^{-m-l}$, it holds that $\abs{t_1-t_2}> q^{-m-l}$, or put differently:
\begin{equation}\label{eq_l<mx1x2}
q^{-m}\abs{t_1-c(y)}<\abs{t_1-t_2}.
\end{equation}
Therefore
\begin{align*}
\abs{\tilde{f}_y(t_1)-\tilde{f}_y(t_2)} &= \abs{\tilde{f}(t_1,y)-\tilde{f}(t_2,y)}\\
&= \abs{f(t_1,y)-c'(y)}\\
&\stackrel{\eqref{eq_mm'xc}}{\leq} q^{m'-m}\abs{t_1-c(y)}\\
&\stackrel{\eqref{eq_l<mx1x2}}{<} q^{m'}\abs{t_1-t_2}.
\end{align*}
\item[Case 2: $\abs{t_1-c(y)} \neq \abs{t_2-c(y)}$.] From the non-Archimedean property it then follows that
\begin{equation}\label{eq_x1c<x1x2}
\abs{t_1-c(y)}\leq\abs{t_1-t_2},
\end{equation}
so we find
\begin{align*}
\abs{\tilde{f}_y(t_1)-\tilde{f}_y(t_2)} &= \abs{\tilde{f}(t_1,y)-\tilde{f}(t_2,y)}\\
&=\abs{f(t_1,y)-c'(y)} \\
&\stackrel{\eqref{eq_mm'xc}}{\leq} q^{m'-m}\abs{t_1-c(y)}\\
&\stackrel{\eqref{eq_x1c<x1x2}}{\leq}q^{m'-m}\abs{t_1-t_2}.\qedhere
\end{align*}
\end{description}
\end{proof}
\begin{remark}\label{remark_M}
Analyzing the proof of Theorem \ref{thm_main_param}, we find that one can take $\Lambda = \lambda\max_i\{q^{m_i'}\}$, where $\lambda$ is the Lipschitz constant of $f$ (in the first variable), and the $m_i'$ correspond to the 1-cells in the cell decomposition of $S_f$.
\end{remark}
\begin{remark}\label{rem_phi}
We can even improve (i.e. decrease) $\Lambda$ from Remark \ref{remark_M} as follows. In the proof, the worst Lipschitz constant occurs in \textbf{Case 1}. We can get around this case in the following way (as in the beginning of Theorem \ref{thm_main_param}, we assume that $S$ and $S_f$ are $1$-cells over $Y$ with center $c$ and coset $\xi Q_{m,n}$).
For every nonzero $a\in \mathcal{O}_K^\times/(\pi_K^m)$, choose $\xi_m(a)\in \mathcal{O}_K^\times$ to be a class representative of $a$. Since we only need to make a finite number of representative choices, $\xi_m:\mathcal{O}_K^\times/(\pi_K^m)\to K$ is a definable map. Let $\varphi:K\times Y\to K$ be the definable map rescaling the angular component as follows:
\begin{align*}
\varphi:K\times Y&\to K:\\
(x,y)&\mapsto \begin{cases}(x-c(y))\xi_m(\overline{\mathrm{ac}}_m(x-c(y))^{-1}\overline{\mathrm{ac}}_m(\xi))+c(y) & \text{if }x\neq c(y),\\ c(y) & \text{if }x=c(y).\end{cases}
\end{align*}
It is not difficult to see that for every $y\in Y$, $\varphi_y:K\to K$ is $1$-Lipschitz. Now let $\hat{f}$ be the extension described in the proof of Theorem \ref{thm_main_param} in the case that $S$ and $S_f$ are $1$-cells over $Y$ (remark that in Theorem \ref{thm_main_param}, this extension is denoted with $\tilde{f}$). Then $\tilde{f}:K\times Y\to K: (x,y)\mapsto \hat{f}(\varphi(x,y),y)$ is a definable extension of $f$ that is $q^{m'-m}$-Lipschitz in the first variable. One can therefore take $\Lambda = \lambda\max_i\{q^{m_i'-m_i}\}$, where $\lambda$ is the Lipschitz constant of $f$ (in the first variable), and the $m_i$ and $m_i'$ correspond to the 1-cells in the cell decomposition of $S$ and $S_f$, respectively.
\end{remark}
Note that in the proof of Theorem \ref{thm_main_param} we did not use the full generality of Theorem \ref{thm_preparation}. We will now prove Theorem \ref{thm_main_param_const1}, the main theorem of this paper, which uses a more involved extension for which the Lipschitz constant does not grow. For this, the full power of Theorem \ref{thm_preparation} is used. Again, the result is formulated for definable families of functions. For clarity, we repeat the formulation of Theorem \ref{thm_main_param_const1}.
\begin{theorem*}
Let $Y\subset K^r$ and $X\subset K$ be definable sets and let $S=X\times Y$. Let $f:S\to K^s$ be a definable function that is $\lambda$-Lipschitz in the first variable. Then $f$ extends to a definable function $\tilde{f}:K\times Y\to K^s$ that is $\lambda$-Lipschitz in the first variable, i.e. $\tilde{f}_y$ is $\lambda$-Lipschitz for every $y\in Y$.
\end{theorem*}
\begin{proof}
By Remark \ref{remark_lipschitz_constant}, we may assume that $\lambda=1$ and $s=1$. By Theorem \ref{thm_preparation} and (the remark after) Lemma \ref{lemma_glue}, we may assume that $S$ is a cell over $Y$ with which $f$ is compatible. Furthermore, we may assume that the base of $S$ is $Y$.
If $S_f$ is a $0$-cell, extend $f$ as in Theorem \ref{thm_main_param}.
Assume from now on that $S$ and $S_f$ are 1-cells over $Y$, with center $c$ and $c'$, and coset $\xi Q_{m,n}$ and $\xi' Q_{m',n'}$, respectively. Let $g$ be as in Theorem \ref{thm_preparation}, in particular $f$ and $g$ are equicompatible with $S$, and $(g(x,y)-c'(y))^b = e(y)(x-c(y))^a$ for every $(x,y)\in S$.
Fix $y\in Y$ and let $B_{l,c(y),m,\xi}$ be a ball of $S$ above $y$. By Theorem \ref{thm_preparation} we can write $f_y(B_{l,c(y),m,\xi})=B_{l',c'(y),m',\xi'}=g_y(B_{l,c(y),m,\xi})$, where $B_{l',c',m',\xi'}$ is a ball of $S_f$ above $y$. Also, we have that $\mathrm{ord}(\partial f/\partial x) = \mathrm{ord}(\partial g/\partial x)$. Let $q=a/b$, then there are three different cases to consider, depending on whether $q=1$, $q<1$ or $q>1$.
\begin{description}
\item[Case 1: $q=1$.] From equation \eqref{eq_formula_order} we have $\mathrm{ord}(\partial f(x,y)/\partial x) = \mathrm{ord}(e(y))$ for all $(x,y)\in S$. So for $x\in B_{l,c(y),m,\xi}$ we have
\begin{align*}
l' &= \mathrm{ord}(e(y)(x-c(y))+c'(y)-c'(y))\\
&=\mathrm{ord}(e(y))+\mathrm{ord}(x-c(y))\\
&=\mathrm{ord}(\partial f(x,y)/\partial x) + l,
\end{align*}
which implies $l'\geq l$, since $f$ is 1-Lipschitz in the first variable. In particular note that in this case $m=m'$, by Lemma \ref{lemma_l'm'ordf'lm}. This allows us to use the same extension as described in Remark \ref{rem_phi}, namely $\tilde{f}:K\times Y\to K: (x,y)\mapsto \hat{f}(\varphi_y(x),y)$, where $\hat{f}$ is as in the proof of Theorem \ref{thm_main_param} in the case that $S$ and $S_f$ are $1$-cells over $Y$, and $\varphi_y$ is as in Remark \ref{rem_phi} (again, remark that in Theorem \ref{thm_main_param} this extension is denoted with $\tilde{f}$). We prove that $\tilde{f}_y$ is 1-Lipschitz. Let $t_1\in\cup_l D_{l,c(y)}\cap X$ and $t_2\not\in\cup_l D_{l,c(y)}\cap X$, where $D_{l,c(y)} = \{x\in K\mid \mathrm{ord}(x-c(y))=l\}$. Then
\begin{align*}
\abs{\tilde{f}_y(t_1)-\tilde{f}_y(t_2)} &= \abs{f(t_1,y)-c'(y)}\\
&\leq \abs{t_1-c(y)}\\
&\leq \abs{t_1-t_2},
\end{align*}
where the first inequality follows from $l'\geq l$ and the second from the non-Archimedean property.
\item[Case 2: $q>1$.] Because $f$ is 1-Lipschitz in the first variable, we have $ \mathrm{ord}(\partial f/\partial x) \geq 0$, and together with \eqref{eq_formula_order} this gives the following lower bound:
\[l\geq -\mathrm{ord}(e(y)^{1/b}q)/(q-1).\]
Recall that $\mathrm{ord}(e(y)^{1/b}q)$ is short for $\mathrm{ord}(e(y))/b+\mathrm{ord}(q)$. On the other hand, as soon as $l\geq (m'-m-\mathrm{ord}(e(y)^{1/b}q))/(q-1)$, we have $l'\geq l$. Indeed, this follows immediately from Lemma \ref{lemma_l'm'ordf'lm} and from \eqref{eq_formula_order}. So up to partitioning $S$ into two cells over $Y$, we may assume that either $l'\geq l$ for all balls of $S$ above $y$, for every $y\in Y$, or that $S$ has at most $N$ balls above $y$, for every $y\in Y$, where $N$ does not depend on $y$. In the former case we can extend $f$ as we did in \textbf{Case 1}. In the latter case we can, after partitioning $Y$ into a finite number of definable sets, assume that there are \emph{exactly} $N$ balls of $S$ above $y$, for every $y\in Y$. Using (the remark after) Lemma \ref{lemma_glue} we may assume that there is exactly one ball of $S$ above $y$, for every $y\in Y$. By definable selection (see \cite{denef-vdd} and \cite{Dries1984-DRIATW}) there is a definable function $h:Y\to K$ such that $(h(y),y)\in S$ for each $y\in Y$. We then extend $f$ as follows:
\[\tilde{f}:K\times Y\to K: (x,y)\mapsto\begin{cases}f(x,y) &\text{if }(x,y)\in S,\\ f(h(y),y)&\text{if }(x,y)\not\in S.\end{cases}\]
Fix $y\in Y$, we show that $\tilde{f}_y$ is 1-Lipschitz. Recall that by the argument given above, $S_y$ is a ball in $K$. The only nontrivial case to consider is the following. Let $t_1\in S_y$ and $t_2\not\in S_y$, then
\begin{align*}
\abs{\tilde{f}_y(t_1)-\tilde{f}_y(t_2)} &= \abs{f(t_1,y)-f(h(y),y)}\\
&\leq \abs{t_1-h(y)}\\
&<\abs{t_1-t_2},
\end{align*}
where the last inequality holds because of the non-Archimedean property and the fact that $t_1$ and $h(y)$ are both contained in the ball $S_y$, and $t_2$ is not.
\item[Case 3: $q<1$.] This case is similar to \textbf{Case 2}, where now one finds an upper bound for $l$ instead of a lower bound. The proof is omitted. \qedhere
\end{description}
\end{proof}
\begin{remark}
Note that we proved the main theorem for semi-algebraic and subanalytic structures on $K$. It is, for now, unclear whether the main theorem could also hold in other structures on $K$, such as, for example, $P$-minimal structures, as defined by Haskell and Macpherson in \cite{has-mac-97}. Also, it is unclear whether the extension that we constructed could be used to extend a definable function $f:X\subset K^r\to K^s$ that is $\lambda$-Lipschitz in \emph{all} variables to a definable function $\tilde{f}:K^r\to K^s$ that is $\lambda$-Lipschitz in all variables. For now, there is no evidence towards either a positive or a negative answer to this question.
\end{remark}
\vspace*{10pt}
\textsc{Tristan Kuijpers\\ KU Leuven, Department of Mathematics, Celestijnenlaan 200B, 3001 Leuven, Belgium}\\
\textit{E-mail:} [email protected]
\end{document}
\begin{document}
\title[A scheme for time fractional equations]{\protect{On a discrete scheme for time fractional \\
fully nonlinear evolution equations}}
\author[Y. GIGA, Q. LIU, H. MITAKE]
{Yoshikazu Giga, Qing Liu, Hiroyoshi Mitake}
\thanks{
The work of YG was partially supported by Japan Society for the Promotion of Science (JSPS) through grants KAKENHI \#26220702, \#16H03948, \#18H05323, \#17H01091.
The work of QL was partially supported by the JSPS grant KAKENHI \#16K17635 and the grant \#177102 from Central Research Institute of Fukuoka University.
The work of HM was partially supported by the JSPS grant KAKENHI \#16H03948.
}
\address[Y. Giga]{
Graduate School of Mathematical Sciences,
University of Tokyo
3-8-1 Komaba, Meguro-ku, Tokyo, 153-8914, Japan}
\email{[email protected]}
\address[Q. Liu]{Department of Applied Mathematics, Faculty of Science, Fukuoka University, Fukuoka 814-0180, Japan.
}
\email{[email protected]}
\address[H. Mitake]{
Graduate School of Mathematical Sciences,
University of Tokyo
3-8-1 Komaba, Meguro-ku, Tokyo, 153-8914, Japan}
\email{[email protected]}
\keywords{Approximation to solutions; Caputo's time fractional derivatives; Second order fully nonlinear equations; Viscosity solutions.}
\subjclass[2010]{
35R11,
35A35,
35D40.
}
\maketitle
\begin{abstract}
We introduce a discrete scheme for second order fully nonlinear parabolic PDEs with Caputo's time fractional derivatives. We prove the convergence of the scheme in the framework of the theory of viscosity solutions.
The discrete scheme can be viewed as a resolvent-type approximation.
\end{abstract}
\section{Introduction}
In this paper, we are concerned with the second order fully nonlinear PDEs with Caputo's time fractional derivatives:
\begin{numcases}{}
\partial_t^{\alpha}u(x,t)+F(x, t, Du, D^2u)=0 &\qquad\text{for all $x\in\mathbb{R}^n, t>0,$} \label{eq:1}\\
u(x,0)=u_0(x) &\qquad\text{for all $x\in\mathbb{R}^n$}, \label{eq:ini}
\end{numcases}
where $\alpha\in(0,1)$ is a given constant, $u:\mathbb{R}^n\times[0,\infty)\to\mathbb{R}$ is an unknown function, and
$Du$ and $D^2 u$ denote, respectively, the spatial gradient and the Hessian of $u$.
We \textit{always} assume that $u_0\in BUC(\mathbb{R}^n)$,
which denotes the space of all bounded uniformly continuous functions in $\mathbb{R}^n$. We denote
\textit{Caputo's time fractional derivative} by $\partial_t^{\alpha}u$, i.e.,
\[
\partial_t^{\alpha}u(x,t):=\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-s)^{-\alpha}\partial_{s}u(x,s)\,ds,
\]
where $\Gamma$ is the Gamma function.
We assume that $F$ is a continuous \textit{degenerate elliptic} operator, that is,
\[
F(x, t, p, X_1) \leq F(x, t, p, X_2)
\]
for all $x\in\mathbb{R}^n, t\geq 0, p \in \mathbb{R}^n$ and $X_1, X_2 \in \mathbb{S}^n \text{ with } X_1 \geq X_2$, where $\mathbb{S}^n$ denotes the space of $n \times n$ real symmetric matrices.
Moreover, throughout this work we assume that $F$ is locally bounded in the sense that
\begin{equation}\label{eq:op bound}
M_R:=\sup_{\substack{(x, t)\in \mathbb{R}^n\times [0, \infty)\\ |p|, |X|\leq R}}|F(x, t, p, X)|<\infty\qquad \text{for any $R>0$}.
\end{equation}
Studying differential equations with fractional derivatives is motivated by mathematical models that
describe diffusion phenomena in complex media like fractals, which is sometimes called \textit{anomalous diffusion}
(see \cite{MK} for instance).
It has inspired further research on numerous related topics.
We refer to a non-exhaustive list of references \cite{L, SY, ACV, C, GN, TY, A, N, KY, CKKW} and the references therein.
Among these results, the authors of \cite{ACV, A} mainly study regularity of solutions to a space-time nonlocal equation with
Caputo's time fractional derivative in the framework of viscosity solutions.
More recently, unique existence of a viscosity solution to the initial value problem with Caputo's time fractional derivatives has been established in the thesis of Namba \cite{N-thesis} and independently and concurrently by Topp and Yangari \cite{TY}. The main part of \cite{N-thesis} on this subject has been published in \cite{GN, N}. For example, a comparison principle, Perron's method, and stability results for \eqref{eq:1} in bounded domains with various boundary conditions have been established in \cite{GN, N}. Similar results on the whole space have been established in \cite{TY} for nonlocal parabolic equations.
Motivated by these works, in this paper we introduce a discrete scheme for \eqref{eq:1}--\eqref{eq:ini}, which will be explained in detail in the subsection below.
\subsection{The discrete scheme}\langlebel{subsec:scheme}
Our scheme is naturally derived from the definitions of Riemann integral and Caputo's time fractional derivative.
We first observe that
\begin{align*}
\partial_t^{\alpha}u(\cdot, mh)&
=\frac{1}{\Gamma(1-\alpha)}\int_0^{mh}(mh-s)^{-\alpha}\partial_{s}u(x,s)\,ds\\
&=\frac{1}{\Gamma(1-\alpha)}\sum_{k=0}^{m-1} \int_{kh}^{(k+1)h}(mh-s)^{-\alpha}\partial_{s}u(x,s)\,ds
\end{align*}
for $m\in\mathbb{N}$ and $h>0$.
If $u$ is smooth in $\mathbb{R}^n\times [0, \infty)$ and $h$ is small, then we may use the approximation
\[
\int_{kh}^{(k+1)h}(mh-s)^{-\alpha}\partial_{s}u(x,s)\,ds
\approx
\int_{kh}^{(k+1)h}(mh-s)^{-\alpha}\frac{u(x,(k+1)h)-u(x,kh)}{h}\,ds.
\]
Note that $z\Gamma(z)=\Gamma(z+1)$ and
\begin{align*}
\int_{kh}^{(k+1)h}(mh-s)^{-\alpha}\,ds
=&\,
\frac{1}{1-\alpha}\left(((m-k)h)^{1-\alpha}-((m-k-1)h)^{1-\alpha}\right)\\
=&\,
\frac{1}{1-\alpha}f(m-k)h^{1-\alpha},
\end{align*}
where we set
\begin{equation}\label{func:f}
f(r):=r^{1-\alpha}-(r-1)^{1-\alpha}\quad\text{for} \ r\ge1.
\end{equation}
Thus,
\begin{align*}
\partial_t^{\alpha}u(\cdot, mh)\approx
&\,
\frac{1}{\Gamma(2-\alpha)h^\alpha}\sum_{k=0}^{m-1}
f(m-k)\left(u(x,(k+1)h)-u(x,kh)\right)\\
&\,
=
\frac{1}{\Gamma(2-\alpha)h^\alpha}\left\{
u(x,mh)
-\sum_{k=0}^{m-1} C_{m, k}u(x,kh)\right\},
\end{align*}
where we set
\[
C_{m,0}:=f(m), \quad C_{m,k}:=f(m-k)-f(m-(k-1)) \quad\text{for} \ k=1,\ldots, m-1.
\]
Since $f$ is a non-increasing function, we easily see that
\begin{equation}\label{positive}
C_{m,k}\ge 0 \quad\text{for} \ k=0,\ldots, m-1,
\end{equation}
which implies monotonicity of the scheme
(see Proposition \ref{prop:monotone}).
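As a quick numerical check (the sample values $\alpha=0.5$ and $m=20$ below are arbitrary), the weights $C_{m,k}$ can be computed directly; the following Python sketch confirms \eqref{positive} and the telescoping identity $\sum_{k=0}^{m-1}C_{m,k}=f(1)=1$, which is used in Section \ref{sec:main}:
\begin{verbatim}
# Minimal numerical check of the weights C_{m,k} (illustration only).
def f(r, alpha):
    # f(r) = r^{1-alpha} - (r-1)^{1-alpha} for r >= 1
    return r**(1 - alpha) - (r - 1)**(1 - alpha)

def weights(m, alpha):
    # C_{m,0} = f(m);  C_{m,k} = f(m-k) - f(m-k+1) for k = 1, ..., m-1
    return [f(m, alpha)] + [f(m - k, alpha) - f(m - k + 1, alpha)
                            for k in range(1, m)]

alpha, m = 0.5, 20                       # sample values
C = weights(m, alpha)
assert all(c >= 0 for c in C)            # positivity of the weights
assert abs(sum(C) - 1.0) < 1e-12         # telescoping sum equals f(1) = 1
\end{verbatim}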
Inspired by this observation, for any fixed $h>0$,
we define below a family of functions
$\{U^h(\cdot, mh)\}_{m\in\mathbb{N}\cup\{0\}}\subset BUC(\mathbb{R}^n)$ by induction.
Set $U^h(\cdot, 0):=u_0^h$,
where $u_0^h\in BUC(\mathbb{R}^n)$ satisfies
\begin{equation}\label{initial approx}
\sup_{\mathbb{R}^n} \left|u_0^h-u_0\right|\to 0\quad \text{as $h\to 0$.}
\end{equation}
Let $U^h(\cdot, mh)\in C(\mathbb{R}^n)$ for $m\geq 1$ be the viscosity solution of
\begin{equation}\label{eq:m}
\frac{1}{\Gamma(2-\alpha)h^\alpha}\left\{
u(x)
-\sum_{k=0}^{m-1} C_{m,k}U^h(x,kh)\right\}
+F\left(x, t, D u, D^2u\right)=0 \quad \text{in $\mathbb{R}^n$}.
\end{equation}
Let us emphasize here that the equation \eqref{eq:m} is a (degenerate) elliptic problem with the elliptic operator strictly monotone in $u$.
In fact, for any $m\geq 1$ the elliptic equation is of the form
\begin{equation}\label{eq:elliptic}
\lambda u(x)+F\left(x, t, Du, D^2u\right)=g(x)\quad \text{in $\mathbb{R}^n$,}
\end{equation}
where $\lambda>0$ and $g\in BUC(\mathbb{R}^n)$. We can obtain such a unique viscosity solution $U^h(\cdot, mh)\in BUC(\mathbb{R}^n)$ to \eqref{eq:m} with $t=mh$ for any $m\in \mathbb{N}$ under appropriate assumptions on $F$.
Define the function $u^{h}:\mathbb{R}^n\times[0,\infty)\to\mathbb{R}$ by
\begin{equation}\label{def:Uh}
u^h(x,t):=U^h(x, mh)\quad\text{for each}\ x\in \mathbb{R}^n, \ t\in[mh,(m+1)h), \ m\in\mathbb{N}\cup\{0\}.
\end{equation}
Our main result of this paper is to show the convergence of $u^h$ to the unique viscosity solution of \eqref{eq:1}--\eqref{eq:ini}.
We remark that our scheme can be regarded as a resolvent-type approximation. Recall the implicit Euler scheme for the differential equation:
\[
u_t + F[u]:=u_t+F\left(x, t, Du,D^2u\right) = 0,
\]
which is given by
\[
u^h(\cdot, mh) - u^h(\cdot, (m-1)h) + hF[u^h(\cdot, mh)]=0\quad (m\in \mathbb{N}).
\]
This is a typical scheme, obtained by approximating $u$ by a function $u^h$ that is piecewise linear in time with time grid length $h$. The resulting equation is a resolvent-type equation for $u^h(\cdot, mh)$ if $u^h(\cdot, (m-1)h)$ is given. It is elliptic if the original equation is parabolic.
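To make the resolvent analogy concrete, consider the space-independent model problem $\partial_t^\alpha u+u=0$, $u(0)=1$; for this problem each step of our scheme reduces to an explicit algebraic update, and the following Python sketch (with sample parameters chosen only for illustration) approximates the exact solution, the Mittag-Leffler function $E_\alpha(-t^\alpha)$:
\begin{verbatim}
# Discrete scheme for the scalar model problem  d_t^alpha u + u = 0,  u(0) = 1.
# At each step the "elliptic" problem is the algebraic resolvent-type equation
#   (u_m - sum_k C_{m,k} u_k) / (Gamma(2-alpha) h^alpha) + u_m = 0.
import math

def f(r, alpha):
    return r**(1 - alpha) - (r - 1)**(1 - alpha)

def solve(alpha=0.5, h=0.01, T=1.0):     # sample parameters
    gamma = math.gamma(2 - alpha)
    u = [1.0]                            # u(0) = u_0 = 1
    for m in range(1, int(T / h) + 1):
        C = [f(m, alpha)] + [f(m - k, alpha) - f(m - k + 1, alpha)
                             for k in range(1, m)]
        s = sum(c * uk for c, uk in zip(C, u))      # sum_{k<m} C_{m,k} u_k
        u.append(s / (1.0 + gamma * h**alpha))      # resolvent-type update
    return u

print(solve()[-1])   # approximation of u(1) = E_alpha(-1)
\end{verbatim}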
\subsection{Main Results}
We first give an abstract framework on the convergence of $u^h$.
\begin{thm}[Scheme convergence]\label{thm:main}
Assume that \eqref{eq:op bound} and the following two conditions hold.
\begin{enumerate}
\item[{\rm(H1)}] For any $g\in BUC(\mathbb{R}^n)$, there exists a viscosity solution $u\in BUC(\mathbb{R}^n)$ to \eqref{eq:elliptic} for any $t>0$. Moreover, if $u, v\in BUC(\mathbb{R}^n)$ are, respectively, a subsolution and a supersolution of \eqref{eq:elliptic} with any fixed $t>0$, then $u\leq v$ in $\mathbb{R}^n$.
\item[{\rm(H2)}] Let $u\in USC(\mathbb{R}^n\times [0, \infty))$ and $v\in LSC(\mathbb{R}^n\times [0, \infty))$ be, respectively, a sub- and a supersolution of \eqref{eq:1}. Assume $u$ and $v$ are bounded in $\mathbb{R}^n\times [0, T)$ for any $T>0$.
If $u(\cdot, 0)\leq v(\cdot, 0)$ in $\mathbb{R}^n$,
then $u\leq v$ in $\mathbb{R}^n\times [0, \infty)$.
\end{enumerate}
Let $u^h$ be given by \eqref{def:Uh} for any $h>0$, where initial data $u^h_0$ is assumed to fulfill \eqref{initial approx}.
Then, $u^h\to u$ locally uniformly in $\mathbb{R}^n\times [0, \infty)$ as $h\to 0$, where $u$ is the unique viscosity solution to \eqref{eq:1}--\eqref{eq:ini}.
\end{thm}
We obtain the following corollary of Theorem \ref{thm:main} under more explicit sufficient conditions of (H1) and (H2).
\begin{comment}
\begin{cor}\langlebel{cor:periodic}
Assume that $x\mapsto u_0(x), F(x, t, p, X)$ are periodic for all $t>0, p\in\mathbb{R}^n, X\in\mathbb{S}^n$, \eqref{eq:op bound} and
\begin{enumerate}
\item[{\rm(F1)}] There exists a modulus of continuity $\omegaega: [0, \infty)\to [0, \infty)$ such that
\[
F\left(x, t, \mu(x-y),Y\right)-F\left(y, t, \mu(x-y),X\right)\le \omegaega\left(|x-y|(\mu|x-y|+1)\right)
\]
for all $\mu>0$, $x, p\in\mathbb{R}^n$, $t\geq 0$ and $X,Y\in\mathbb{S}^n$ satisfying
\[
\left(
\begin{array}{cc}
X & 0 \\
0 & -Y
\end{array}
\right)
\le
\mu
\left(
\begin{array}{cc}
I & -I \\
-I & I
\end{array}
\right).
\]
\end{enumerate}
Then, \eqref{conv} holds.
\end{cor}
\end{comment}
\begin{cor}\label{cor:non-periodic}
Assume that \eqref{eq:op bound} and the following two conditions hold.
\begin{enumerate}
\item[{\rm(F1)}] There exists a modulus of continuity $\omega: [0, \infty)\to [0, \infty)$ such that
\[
F\left(x, t, \mu(x-y),Y\right)-F\left(y, t, \mu(x-y),X\right)\le \omega\left(|x-y|(\mu|x-y|+1)\right)
\]
for all $\mu>0$, $x, y\in\mathbb{R}^n$, $t\geq 0$ and $X,Y\in\mathbb{S}^n$ satisfying
\[
\left(
\begin{array}{cc}
X & 0 \\
0 & -Y
\end{array}
\right)
\le
\mu
\left(
\begin{array}{cc}
I & -I \\
-I & I
\end{array}
\right).
\]
\item[{\rm(F2)}] There exists a modulus of continuity $\tilde{\omega}: [0, \infty)\to [0, \infty)$ such that
\[
|F(x, t, p, X)-F(x, t, q, Y)|\leq \tilde{\omega}(|p-q|+|X-Y|)
\]
for all $x\in \mathbb{R}^n$, $t\geq 0$, $p, q\in \mathbb{R}^n$ and $X, Y\in \mathbb{S}^n$.
\end{enumerate}
Then, the conclusion of Theorem \ref{thm:main} holds.
\end{cor}
\begin{rem}
The assumption (F2) can be removed in the presence of the periodic boundary condition, that is, when $x\mapsto u_0(x)$ and $x\mapsto F(x, t, p, X)$
are periodic with the same period.
Recall that in a bounded domain or with the periodic boundary condition, (H1) is established in \cite{CIL}
and (H2)
is available in \cite[Theorem 3.1]{GN} and \cite[Theorem 3.4]{N} under (F1).
\end{rem}
The comparison result in (H1) under (F1), (F2) in an unbounded domain is due to \cite{JLS}. Existence of solutions in this case can be obtained by Perron's method. In fact, thanks to \eqref{eq:op bound} with $R=0$, we can take $C>0$ large such that $C$ and $-C$ are, respectively, a supersolution and a subsolution of \eqref{eq:elliptic}. We then can prove the existence of solutions by adopting the standard argument in \cite{CIL, G}.
In addition, as shown in \cite{TY}, (H2) is also guaranteed by (F1) and (F2).
Our results above apply to a general class of nonlinear parabolic equations. We refer the reader to \cite[Example 3.6]{CIL} for concrete examples of $F$ that satisfy our assumptions, especially the condition (F1).
Finally, it is worthwhile to mention that the idea for a discrete scheme in this paper can be adapted to handle more general types of time fractional derivatives as in \cite{C, CKKW}, provided that the corresponding comparison theorems can be obtained.
In this paper, we choose Caputo's time fractional derivatives to simplify the presentation.
This paper is organized as follows. In Section \ref{sec:pre}, we establish the monotonicity and boundedness of the discrete scheme. Section \ref{sec:main} is devoted to the proof of Theorem \ref{thm:main}.
\section{Preparations}\label{sec:pre}
We first recall the definition of viscosity solutions to \eqref{eq:1}.
\begin{defn}[Definition of viscosity solutions]\label{defn vis}
For any $T>0$, a function $u\in USC(\mathbb{R}^n\times[0,T))$ {\rm(}resp., $u\in LSC(\mathbb{R}^n\times[0,T))${\rm)}
is called a viscosity subsolution {\rm(}resp., supersolution{\rm)} of \eqref{eq:1}
if for any $\phi\in C^{2}(\mathbb{R}^n\times [0,T))$
one has
\begin{equation}\label{def:vis}
\partial_t^\alpha \phi(x_0, t_0)+F(x_0, t_0, D \phi(x_0, t_0), D^2 \phi(x_0, t_0))
\le {\rm(resp.,}\ge{\rm)}\ 0
\end{equation}
whenever $u-\phi$ attains a local maximum {\rm(}resp., minimum{\rm)} at $(x_0, t_0)\in \mathbb{R}^n\times (0,T)$.
We call $u\in C(\mathbb{R}^n\times[0,T))$ a viscosity solution of \eqref{eq:1} if
$u$ is both a viscosity subsolution and a supersolution of \eqref{eq:1}.
\end{defn}
\begin{rem}
Our definition essentially follows \cite[Definition 2.2]{N}. In fact, since
\begin{equation}\label{equiv time der}
\partial_t^{\alpha}\phi(x, t)
=
\frac{1}{\Gamma(1-\alpha)}\left(\frac{\phi(x, t)-\phi(x, 0)}{t^\alpha}+\alpha\int_0^t\frac{\phi(x, t)-\phi(x, s)}{(t-s)^{1+\alpha}}\,ds \right)
\end{equation}
for any $\phi\in C^{1}(\mathbb{R}^n\times [0, \infty))$, our definition is thus the same as \cite[Definition 2.2]{N}. A similar definition of viscosity solutions was concurrently and independently proposed in \cite[Definition 2.1]{TY} for general space-time nonlocal parabolic problems.
Another possible way to define sub- or supersolutions is to separate the term $\partial_t^{\alpha}\phi(x, t)$ in \eqref{def:vis} into two parts like \eqref{equiv time der} and replace $\phi$ in one or both of the parts by $u$.
See \cite[Definition 2.1]{TY} and \cite[Definition 2.5]{GN}.
Such definitions are proved to be equivalent to Definition \ref{defn vis}. We refer to \cite[Lemma 2.3]{TY} and \cite[Proposition 2.5]{N} for proofs.
Note that the original definition of viscosity solutions in \cite{N-thesis, GN} looks stronger, but it turns out to be equivalent; see \cite[Lemma 2.9, Proposition 3.6]{N-thesis}.
\end{rem}
For any $h>0$, define $\partial_{t}^{\alpha,h}: L^\infty_
{loc}(\mathbb{R}^n\times [0, \infty))\to L^\infty_{loc}(\mathbb{R}^n\times [0, \infty))$ to be
\begin{equation}\label{eq:op discrete}
\partial_{t}^{\alpha,h}u(x, t):=\frac{1}{\Gamma(2-\alpha)h^\alpha}\left\{
u(x, mh)
-\sum_{k=0}^{m-1} C_{m,k}u(x,kh)\right\}
\end{equation}
for $(x, t)\in \mathbb{R}^n\times [0, \infty)$, and $m\in \mathbb{N}\cup\{0\}$ satisfying $m=\lfloor t/h\rfloor$,
where $\lfloor s\rfloor$ denotes the greatest integer less than or equal to $s\geq 0$.
A locally bounded function $u: \mathbb{R}^n\times [0, \infty)\to \mathbb{R}$ is said to be a subsolution (resp., supersolution) of
\begin{equation}\label{eq:discrete}
\partial_t^{\alpha, h} u+F\left(x, t, Du, D^2u\right)=0 \quad \text{in $\mathbb{R}^n\times (0, \infty)$}
\end{equation}
if for any $m\in \mathbb{N}$, $U=u(\cdot, mh)\in USC(\mathbb{R}^n)$ (resp., $U=u(\cdot, mh)\in LSC(\mathbb{R}^n)$) is a viscosity subsolution (resp., supersolution) of
\[
\frac{1}{\Gamma(2-\alpha)h^\alpha}\left\{
U-\sum_{k=0}^{m-1} C_{m,k}u(\cdot, kh)\right\}
+F\left(x, mh, D U, D^2U\right)=0 \quad \text{in $\mathbb{R}^n$.}
\]
By definition, it is clear that $u^h$ given by \eqref{def:Uh} is a solution of \eqref{eq:discrete}.
\begin{prop}[Monotonicity]\label{prop:monotone}
Fix $h>0$. Assume that {\rm(H1)} holds. Let $U^h(\cdot, t)$, $V^h(\cdot, t)\in BUC(\mathbb{R}^n)$ for all $t\geq 0$ be, respectively, a subsolution and supersolution to \eqref{eq:discrete}.
Then, $U^h(\cdot, mh)\leq V^h(\cdot, mh)$ in $\mathbb{R}^n$ for all $m\in \mathbb{N}$.
\end{prop}
\begin{proof}
Due to the positivity \eqref{positive} of $C_{m,k}$, one can easily see that the scheme is monotone by iterating the comparison principle in (H1) for elliptic problems.
\end{proof}
We next discuss the boundedness of the scheme.
\begin{lem}[Barrier]\label{lem:est1}
For any $h>0$, let $V^h(x, t):=(mh)^\alpha$ for all $x\in \mathbb{R}^n$ and $t\geq 0$ with $m=\lfloor t/h\rfloor$.
Then,
\[
\partial_t^{\alpha, h} V^h(x, t)\geq {(1-\alpha)\alpha\over \Gamma(2-\alpha)}\qquad \text{for all $x\in \mathbb{R}^n$ and $t\geq h$}.
\]
\end{lem}
\begin{proof}
We have
\begin{equation}\label{eq:barrier1}
\partial_t^{\alpha,h} V^h(x, t)=\frac{1}{\Gamma(2-\alpha)}\sum_{k=0}^{m-1}f(m-k)\big((k+1)^\alpha-k^\alpha\big)
\end{equation}
for all $x\in \mathbb{R}^n$ and $t\geq h$. Noting that
\begin{align*}
& f(m-k)\ge (1-\alpha)/(m-k)^\alpha\ge (1-\alpha)/m^\alpha, \\
& (k+1)^\alpha-k^\alpha\ge \alpha/(k+1)^{1-\alpha}\ge\alpha/m^{1-\alpha},
\end{align*}
we can plug these estimates into \eqref{eq:barrier1} and sum the resulting $m$ terms, each bounded below by $(1-\alpha)\alpha/m$, to deduce the desired lower bound.
\end{proof}
\begin{lem}[Uniform boundedness]\label{lem:bound}
Assume that \eqref{eq:op bound} and {\rm(H1)} hold.
Let $u^h$ be given by \eqref{def:Uh} for any fixed $h>0$. Then,
\[
|u^h(x, t)|\leq \sup_{\mathbb{R}^n}\left|u_0^h\right|+\frac{\Gamma(2-\alpha)M_0}{(1-\alpha)\alpha}t^\alpha
\quad\text{for all} \
h>0, x\in \mathbb{R}^n, t\geq 0.
\]
\end{lem}
\begin{proof}
We define
\[
W^h(x, t):=\sup_{\mathbb{R}^n}\left|u_0^h\right|+\frac{\Gamma(2-\alpha)M_0}{(1-\alpha)\alpha}V^h(x, mh)
\]
for any $(x, t)\in \mathbb{R}^n\times [0, \infty)$, where $m=\lfloor t/h\rfloor$ and $V^h$ is given in Lemma \ref{lem:est1}.
In light of Lemma \ref{lem:est1}, we have
\[
\partial_t^{\alpha, h} W^h(x,mh)+F\left(x, t, DW^h(x,mh), D^2W^h(x,mh)\right)
\ge M_0+F(x, t, 0, 0) \ge 0
\]
for all $m\in\mathbb{N}$.
Combining this with $U^h(\cdot,0)\le W^h(\cdot,0)$ on $\mathbb{R}^n$, by Proposition \ref{prop:monotone} we get $U^h(\cdot,mh)\le W^h(\cdot,mh)$ for all $m\in\mathbb{N}$.
Symmetrically, we get $U^h(\cdot, mh)\ge -W^h(\cdot,mh)$ for all $m\in\mathbb{N}\cup\{0\}$,
which implies the conclusion.
\end{proof}
\section{Convergence of discrete schemes}\label{sec:main}
Let $u^h$ be the function defined by \eqref{def:Uh}. By Lemma \ref{lem:bound} and \eqref{initial approx}, we can define the half-relaxed limit of $u^h$ as follows:
\begin{equation}\label{half-relax}
\begin{aligned}
\overline{u}(x,t)&:=
\lim_{\delta\to 0}\sup\left\{u^h(y,s): |x-y|+|t-s|\le \delta,\ s\geq 0,\ 0<h\le \delta\right\},\\
\underline{u}(x,t)&:=
\lim_{\delta \to 0}\inf\left\{u^h(y,s): |x-y|+|t-s|\le \delta,\ s\geq 0,\ 0<h\le \delta\right\}
\end{aligned}
\end{equation}
for all $(x,t)\in\mathbb{R}^n\times[0,\infty)$.
By the definition of Riemann integral and the operator $\partial_{t}^{\alpha,h}$, we have the following.
\begin{lem}\label{lem:limit}
Let $\partial_t^{\alpha, h}$ be given by \eqref{eq:op discrete}. Then for any $\psi\in C^{1}(\mathbb{R}^n\times[0,\infty))$, we have
\[
\partial_{t}^{\alpha, h}\psi\to \partial_t^{\alpha}\psi
\quad
\text{locally uniformly in} \ \mathbb{R}^n\times(0,\infty) \ \text{as}\ h\to0.
\]
\end{lem}
\begin{prop}[Sub- and supersolution property]\label{prop:half}
Let $\overline{u}$ and $\underline{u}$ be the functions defined by \eqref{half-relax}.
Then $\overline{u}$ and $\underline{u}$ are, respectively, a subsolution and supersolution to \eqref{eq:1}.
\end{prop}
\begin{proof}
We only prove that $\overline{u}$ is a subsolution to \eqref{eq:1} as we can similarly prove that
$\underline{u}$ is a supersolution to \eqref{eq:1}.
Take a test function $\varphi\in C^2(\mathbb{R}^n\times[0,\infty))$ and
$(\hat{x},\hat{t})\in \mathbb{R}^n\times(0,\infty)$ so that $\overline{u}-\varphi$ takes
a strict maximum at $(\hat{x},\hat{t})$ with $(\overline{u}-\varphi)(\hat{x},\hat{t})=0$.
By adding $|x-\hat{x}|^4$ to $\varphi$ (we still denote it by $\varphi$),
we may assume that $\varphi(x,t)\to \infty$ as $|x|\to\infty$, uniformly for all $t\geq 0$.
We first claim that there exists $(x_{j},t_j)\in\mathbb{R}^n\times(0,\infty)$, $h_j>0$ so that
$(x_j,t_j)\to(\hat{x},\hat{t})$ and $h_j\to0$ as $j\to\infty$,
\begin{align}
&u^{h_j}(\cdot, t_j)-\varphi(\cdot,t_j) \ \text{takes a maximum at} \ x_{j},
\label{claim2}\\
&
\sup_{(x,t)\in\mathbb{R}^n\times(0,\infty)}(u^{h_j}-\varphi)(x,t)
<(u^{h_j}-\varphi)(x_j,t_j)+h_j. \label{claim3}
\end{align}
Indeed, by definition of $\overline{u}$, there exists $(y_j,s_j)\in\mathbb{R}^n\times(0,\infty)$, and $h_j>0$ so that
\begin{align*}
&(y_j,s_j)\to(\hat{x},\hat{t}), \ h_j\to0,
\ \text{and} \
u^{h_j}(y_j,s_j)\to\overline{u}(\hat{x},\hat{t}) \quad\text{as} \ j\to\infty.
\end{align*}
We next take $t_j>0$ such that
\[
\sup_{(x,t)\in\mathbb{R}^n\times(0,\infty)}(u^{h_j}-\varphi)(x,t)
<
\sup_{x\in\mathbb{R}^n}(u^{h_j}-\varphi)(x,t_j)+h_j.
\]
Also, by Lemma \ref{lem:bound} again, there exists $x_j\in\mathbb{R}^n$ so that
\[
\sup_{x\in\mathbb{R}^n}(u^{h_j}-\varphi)(x,t_j)
=\max_{x\in\mathbb{R}^n}(u^{h_j}-\varphi)(x,t_j)
=(u^{h_j}-\varphi)(x_j,t_j).
\]
Then, we can also easily check that
$(x_j,t_j)\to(\hat{x},\hat{t})$ as $j\to\infty$.
Set $N_j:=\lfloor t_j/h_j\rfloor$.
Then we have $u^{h_j}(\cdot ,t_j)=U^{h_j}(\cdot, N_jh_{j})$ in $\mathbb{R}^n$.
Since $U^{h_j}(\cdot, N_jh_{j})$ is a viscosity solution to \eqref{eq:m} with $m=N_j$ and $h=h_j$,
in light of \eqref{claim2},
we obtain
\[
\partial_{t}^{\alpha, h_j}u^{h_j}(x_{j}, t_j)+F\left(x_j, t_j, D\varphi(x_{j},t_{j}),D^2\varphi(x_{j},t_{j})\right)\le 0.
\]
Set
$\sigma_j:=\max_{x\in\mathbb{R}^n}(u^{h_j}-\varphi)(x,t_j)
=u^{h_j}(x_j,t_j)-\varphi(x_j, t_j)$.
In light of \eqref{claim3}, we have
\[
(u^{h_j}-\varphi)(x_j,kh_j)
\le h_j+\sigma_j
\]
for all $k=0,\ldots,N_j-1$.
Hence,
\begin{align*}
& \Gamma(2-\alpha)(h_j)^\alpha\partial_{t}^{\alpha, h_j}u^{h_j}(x_j, t_j)=
u^{h_j}(x_j,N_jh_j)
-\sum_{k=0}^{N_j-1} C_{N_j,k}u^{h_j}(x_j, kh_j)\\
\ge&\,
\varphi(x_j,N_jh_j)+\sigma_j
-\sum_{k=0}^{N_j-1} C_{N_j, k}\left(\varphi(x_j, kh_j)+h_j+\sigma_j\right).
\end{align*}
Noting that
\begin{equation*}\label{eq:sum}
\sum_{k=0}^{N_j-1} C_{N_j, k}=f(1)=1,
\end{equation*}
we obtain
\begin{align*}
\partial_{t}^{\alpha,h_j}u^{h_j}(x_j,N_jh_j)
\ge&\, \frac{1}{\Gamma(2-\alpha)(h_j)^\alpha}\left\{
\varphi(x_j,N_jh_j)-\sum_{k=0}^{N_j-1} C_{N_j, k}\varphi(x_j, kh_j)-h_j\right\}\\
=&\,
\partial_{t}^{\alpha,h_j}\varphi(x_j,N_jh_j)+O(h_j^{1-\alpha}).
\end{align*}
We therefore obtain
\[
\partial_{t}^{\alpha,h_j}\varphi(x_{j},N_jh_j)
+F\left(x_j, t_j, D\varphi(x_{j},t_{j}),D^2\varphi(x_{j},t_{j})\right)
\le O(h_j^{1-\alpha}).
\]
By Lemma \ref{lem:limit} and the continuity of $F$, sending $j\to\infty$ yields
\[
\partial_{t}^{\alpha}\varphi(\hat{x},\hat{t})+
F\left(\hat{x}, \hat{t}, D\varphi(\hat{x},\hat{t}),D^2\varphi(\hat{x},\hat{t})\right)
\le 0.
\qedhere
\]
\end{proof}
\begin{prop}[Initial consistency]\label{prop:ini}
Assume that \eqref{eq:op bound} and {\rm(H1)} hold. Let $\overline{u}$ and $\underline{u}$ be the functions defined by \eqref{half-relax}.
Then $\overline{u}(\cdot, 0)\leq u_0\leq \underline{u}(\cdot, 0)$ in $\mathbb{R}^n$.
\end{prop}
\begin{proof}
Fix any $x_0\in \mathbb{R}^n$. Since $u_0\in BUC(\mathbb{R}^n)$ and \eqref{initial approx} holds, for any $\sigma>0$ we can find a bounded smooth function $\phi_\sigma$ such that
$\phi_\sigma(x_0)\leq u_0(x_0)+\sigma$ and
$u_0^h(x)\leq \phi_\sigma(x)$ for all $x\in \mathbb{R}^n$ and all $h>0$ small.
We claim that
\[
\phi^h(x, t)=\phi_\sigma(x)+{M_{R_\sigma}\Gamma(2-\alpha)\over (1-\alpha)\alpha}t^\alpha
\]
is a supersolution of \eqref{eq:discrete} with $h>0$ small, where $M_{R_\sigma}$ is given in \eqref{eq:op bound} with
\[
R_\sigma=\sup_{\mathbb{R}^n}\left(|D \phi_\sigma|+|D^2 \phi_\sigma|\right).
\]
Indeed, applying Lemma \ref{lem:est1},
we deduce that for all $m\in\mathbb{N}$,
\[
\partial_t^{\alpha, h}\phi^h(x, mh)\geq M_{R_\sigma}\geq -F\left(x, mh, D \phi_\sigma(x), D^2 \phi_\sigma(x)\right)
\]
for all $x\in \mathbb{R}^n$.
We can thus apply Proposition \ref{prop:monotone} to obtain that
$u^h(x, Nh)\leq \phi^h(x, Nh)$ for all $x\in \mathbb{R}^n$ and $t\geq 0$ with $N=\lfloor t/h\rfloor$,
which implies that
\[
u^h(x, t)\leq \phi_\sigma(x)+{M_{R_\sigma}\Gamma(2-\alpha)\over (1-\alpha)\alpha}t^\alpha
\]
for all $x\in \mathbb{R}^n$ and $t\geq 0$. We thus have
\[
\overline{u}(x_0, 0)\leq \phi_\sigma(x_0),
\]
which implies, by letting $\sigma\to 0$, that
$\overline{u}(x_0, 0)\leq u_0(x_0)$.
The proof for the part on $\underline{u}$ is symmetric and therefore omitted here.
\end{proof}
\begin{proof}[Proof of Theorem {\rm\ref{thm:main}}]
If (H2) holds, then the conclusion of the theorem is a straightforward result of Propositions \ref{prop:half} and \ref{prop:ini}.
\end{proof}
\begin{comment}
\appendix
\section{A comparison principle in $\mathbb{R}^n$}\label{sec:appendix}
We give a comparison principle for \eqref{eq:1} under (F1) and (F2) for completeness.
\begin{thm}\label{thm:comparison unbounded}
Assume that $F$ is a continuous elliptic operator satisfying {\rm(F1)} and {\rm(F2)}.
Then for any $\alpha\in (0, 1)$, the comparison principle {\rm(H2)} holds.
\end{thm}
\begin{proof}
Assume by contradiction that there exists $(x_0, t_0)\in \mathbb{R}^n\times (0, \infty)$ such that $u(x_0, t_0)-v(x_0, t_0)\geq \theta$ for some $\theta>0$.
For $\mu, \beta>0$ and $T>t_0$, set
\[
\Phi(x, y, t)=u(x, t)-v(x, t)-{\mu\over T-t}-\beta f(x)-\beta f(y).
\]
where $f(x)=(1+|x|^2)^{1/2}$ for $x\in \mathbb{R}^n$. By letting $\mu, \beta$ small, we get
$
\max_{\mathbb{R}^n\times [0, T)} \Phi\geq {\theta/ 2}.
$
Let $(\hat{x}, \hat{t})$ be any maximizer of $\Phi$.
We can easily see that $\hat{t}>0$.
The penalty near space infinity essentially enables us to pursue our argument in the same manner as in the case for a bounded domain (\cite[Theorem 3.4]{N}). Note that
\[
\Phi_{\sigma}(x, y, t)=u(x, t)-v(y, t)-{|x-y|^2\over \sigma}-{\mu\over T-t}-\beta f(x)-\beta f(y)
\]
attains a maximum at $(x_{\sigma}, y_{\sigma}, t_{\sigma})\in \mathbb{R}^{2n}\times (0, \infty)$.
Since $(x_\sigma, y_\sigma)$ is uniformly bounded in $\sigma$ due to the penalty terms $\beta f(x)$ and $\beta f(y)$, we see that $(x_{\sigma}, y_{\sigma}, t_{\sigma})$ converges to a maximizer of $\Phi$, denoted again by $(\hat{x}, \hat{x}, \hat{t})$, via a subsequence as $\sigma\to 0$. A standard argument also yields that $|x_{\sigma}-y_{\sigma}|^2/\sigma\to 0$ as $\sigma\to 0$.
Applying
an equivalent definition of solutions involving semijets (\cite[Proposition 2.7]{N}), we have $p_{\sigma}, q_{\sigma}\in \mathbb{R}^n$, $X_{\sigma}, Y_{\sigma}\in \mathbb{S}^n$ satisfying
\begin{equation}\label{eq:ishii1}
p_{\sigma}={2(x_\sigma-y_\sigma)\over \sigma}+\beta Df(x_\sigma), \quad q_\sigma={2(x_\sigma-y_\sigma)\over \sigma}-\beta Df(y_\sigma),
\end{equation}
\begin{equation}\label{eq:ishii2}
\left(
\begin{array}{cc}
X_{\sigma}-\beta D^2 f(x_{\sigma}) & 0 \\
0 & -Y_{\sigma}-\beta D^2 f(y_{\sigma})
\end{array}
\right) \le
\frac{2}{\sigma}
\left(
\begin{array}{cc}
I & -I \\
-I & I
\end{array}
\right)
\end{equation}
such that
\[
\partial_t^\alpha u(x_{\sigma}, t_{\sigma})+F(x_{\sigma}, t_{\sigma}, p_{\sigma}, X_{\sigma})\leq 0,
\quad
\partial_t^\alpha v(y_{\sigma}, t_{\sigma})+F(y_{\sigma}, t_{\sigma}, q_{\sigma}, Y_{\sigma})\geq 0.
\]
Taking the difference of both inequalities above, we have
\begin{equation}\label{vis ineq}
\partial_t^\alpha u(x_{\sigma}, t_{\sigma})-\partial_t^\alpha v(y_{\sigma}, t_{\sigma})\leq F(y_{\sigma}, t_{\sigma}, q_{\sigma}, Y_{\sigma})-F(x_{\sigma}, t_{\sigma}, p_{\sigma}, X_{\sigma}).
\end{equation}
We next use \eqref{eq:ishii1}, \eqref{eq:ishii2} and
(F1), (F2) to estimate the right hand side of \eqref{vis ineq} as below.
\[
F(y_{\sigma}, t_{\sigma}, q_{\sigma}, Y_{\sigma})-F(x_{\sigma}, t_{\sigma}, p_{\sigma}, X_{\sigma})
\leq \omega\left({2|x_{\sigma}-y_{\sigma}|^2\over\sigma}+|x_{\sigma}-y_{\sigma}|\right)+2\tilde{\omega}(C\beta)
\]
for some $C>0$ independent of $\sigma$ and $\beta$.
On the other hand, we can use the same argument as in the proof of \cite[Theorem 3.4]{N} to get
\[
\partial_t^\alpha u(x_{\sigma}, t_{\sigma})-\partial_t^\alpha v(y_{\sigma}, t_{\sigma})\geq \frac{\theta-u(x_{\sigma}, 0)+v(y_{\sigma}, 0)}{t_{\sigma}^\alpha\Gamma(1-\alpha)}\geq \frac{\theta}{t_{\sigma}^\alpha\Gamma(1-\alpha)}
\]
by using \eqref{equiv time der}.
Combining these estimates with \eqref{vis ineq} and passing to the limit as $\sigma\to 0$,
we get
\[
{\theta\over T^\alpha \Gamma(1-\alpha)}\leq {\theta\over \hat{t}^\alpha \Gamma(1-\alpha)}\leq 2\tilde{\omega}(C\beta).
\]
We reach a contradiction by letting $\beta\to 0$.
\end{proof}
\end{comment}
\begin{comment}
We first adopt (i) to get
\[
|F_\varepsilon(x_{\sigma, \varepsilon}, t_{\sigma, \varepsilon}, p_{\sigma, \varepsilon}, X_{\sigma, \varepsilon})-F(x_{\sigma, \varepsilon}, t_{\sigma, \varepsilon}, p_{\sigma, \varepsilon}, X_{\sigma, \varepsilon})|\leq \omega(M\varepsilon^{1/2}),
\]
\[
|F^\varepsilon(y_{\sigma, \varepsilon}, t_{\sigma, \varepsilon}, q_{\sigma, \varepsilon}, Y_{\sigma, \varepsilon})-F(y_{\sigma, \varepsilon}, t_{\sigma, \varepsilon}, q_{\sigma, \varepsilon}, Y_{\sigma, \varepsilon})|\leq \omega(M\varepsilon^{1/2}).
\]
We next use \eqref{eq:ishii1}, \eqref{eq:ishii2} and (ii), (iii) to obtain
\[
\begin{aligned}
&F(y_{\sigma, \varepsilon}, t_{\sigma, \varepsilon}, q_{\sigma, \varepsilon}, Y_{\sigma, \varepsilon})-F(x_{\sigma, \varepsilon}, t_{\sigma, \varepsilon}, p_{\sigma, \varepsilon}, X_{\sigma, \varepsilon})\\
&\leq \omega\left({2|x_{\sigma, \varepsilon}-y_{\sigma, \varepsilon}|^2\over\sigma}+|x_{\sigma, \varepsilon}-y_{\sigma, \varepsilon}|\right)+\tilde{\omega}(\beta)+2\omega_R(\beta).
\end{aligned}
\]
where $R>0$ depends only on $\varepsilon$. Hence, we have
\end{comment}
\begin{thebibliography}{30}
\bibitem{A}
M. Allen,
\emph{A nondivergence parabolic problem with a fractional time derivative},
Differential Integral Equations 31 (2018), no. 3-4, 215--230.
\bibitem{ACV}
M. Allen, L. Caffarelli, A. Vasseur,
\emph{A parabolic problem with a fractional time derivative},
Arch. Ration. Mech. Anal. 221 (2016), no. 2, 603--630.
\bibitem{C}
Z.-Q. Chen,
\emph{Time fractional equations and probabilistic representation},
Chaos Solitons Fractals 102 (2017), 168--174.
\bibitem{CKKW}
Z.-Q. Chen, P. Kim, T. Kumagai, J. Wang,
\emph{Heat kernel estimates for time fractional equations},
preprint.
\bibitem{CIL}
M. G. Crandall, H. Ishii, P.-L. Lions,
\emph{User's guide to viscosity solutions of second order partial differential equations},
Bull. Amer. Math. Soc. (N.S.) 27 (1992), no. 1, 1--67.
\bibitem{G}
Y. Giga, \emph{Surface evolution equations, a level set approach}, volume 99 of Monographs in Mathematics, Birkh\"auser Verlag, Basel, 2006.
\bibitem{GN}
Y. Giga, T. Namba,
\emph{Well-posedness of Hamilton-Jacobi equations with Caputo's time fractional derivative},
Comm. Partial Differential Equations 42 (2017), no. 7, 1088--1120.
\bibitem{JLS}
R. Jensen, P.-L. Lions, P. E. Souganidis,
\emph{
A uniqueness result for viscosity solutions of second order fully nonlinear partial differential equations}, Proc. Amer. Math. Soc. 102, (1988), no. 4, 975--978.
\bibitem{KY}
A. Kubica, M. Yamamoto,
\emph{Initial-boundary value problems for fractional diffusion equations with time-dependent coefficients},
Fract. Calc. Appl. Anal. 21 (2018), no. 2, 276--311.
\bibitem{L}
Y. Luchko,
\emph{Maximum principle for the generalized time-fractional diffusion equation},
J. Math. Anal. Appl. 351 (2009), no. 1, 218--223.
\bibitem{MK}
R. Metzler, J. Klafter,
\emph{The random walk's guide to anomalous diffusion: a fractional dynamics approach},
Phys. Rep. 339 (2000), no. 1, 77 pp.
\bibitem{N-thesis}
T. Namba,
\emph{Analysis for viscosity solutions with special emphasis on anomalous effects},
January 2017, thesis, the University of Tokyo.
\bibitem{N}
T. Namba,
\emph{On existence and uniqueness of viscosity solutions for second order fully nonlinear PDEs with Caputo time fractional derivatives},
NoDEA Nonlinear Differential Equations Appl. 25 (2018), no. 3, Art. 23, 39 pp.
\bibitem{SY}
K. Sakamoto, M. Yamamoto,
\emph{Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems},
J. Math. Anal. Appl. 382 (2011), no. 1, 426--447.
\bibitem{TY}
E. Topp, M. Yangari,
\emph{Existence and uniqueness for parabolic problems with Caputo time derivative},
J. Differential Equations 262 (2017), no. 12, 6018--6046.
\bibitem{Z}
R. Zacher,
\emph{Weak solutions of abstract evolutionary integro-differential equations in Hilbert spaces},
Funkcial. Ekvac. 52 (2009), no. 1, 1--18.
\end{thebibliography}
\end{document}
\begin{document}
\title{\PSPACE-completeness of Pulling Blocks to Reach a Goal}
\begin{abstract}
We prove \textsc{PSPACE}\xspace-completeness of all but one problem in a large space of pulling-block problems where the goal is for the agent to reach a target destination. The problems are parameterized by whether pulling is optional, the number of blocks which can be pulled simultaneously, whether there are fixed blocks or thin walls, and whether there is gravity. We show \textsc{NP}\xspace-hardness for the remaining problem, \PullkFG[1][?] (optional pulling, strength 1, fixed blocks, with gravity).
\end{abstract}
\section{Introduction}
\label{sec:intro}
In the broad field of \emph{motion planning}, we seek algorithms for actuating
or moving mobile agents (e.g., robots) to achieve certain goals.
In general settings, this problem is PSPACE-complete
\cite{Canny-1988-pspace,Reif-1979-mover},
but much attention has been given to finding simple variants near the threshold
between polynomial time and PSPACE-complete;
see, e.g., \cite{hearn2009games}.
One interesting and well-studied case, arising in warehouse maintenance,
is when a single robot with $O(1)$ degrees of freedom navigates an environment
with obstacles, some of which can be moved by the robot (but which cannot move
on their own).
Research in this direction was initiated in 1988 \cite{Wilfong-1991}.
A series of problems in this space arise from computer puzzle games,
where the robot is the agent controlled by the player,
and the movable obstacles are \emph{blocks}.
The earliest and most famous such puzzle game is \emph{Sokoban},
first released in 1982 \cite{sokoban-wiki}.
Much later, this game was proved PSPACE-complete
\cite{sokoban,hearn2009games}.
In Sokoban, the agent can \emph{push} movable $1 \times 1$ blocks
on a square grid, and the goal is to bring those blocks to target locations.
Later research in \emph{pushing-block puzzles} considered the simpler
goal of simply getting the robot to a target location,
proving various versions NP-hard, NP-complete, or PSPACE-complete
\cite{demainepush,demaine2003pushing,demaine2004pushpush}.
In this paper, we study the \Pull series of motion-planning problems
\cite{Ritt10,PRB16}, where the agent can \emph{pull} (instead of push)
movable $1 \times 1$ blocks on a square grid.
Figure~\ref{fig:example} shows a simple example.
This type of block-pulling mechanic (sometimes together with a block-pushing
mechanic) appears in many real-world video games,
such as Legend of Zelda, Tomb Raider, Portal, and Baba Is You.
\begin{figure}
\caption{A pulling-block problem. The robot is the agent, the flag is the goal square, the light gray blocks can be moved, and the bricks are fixed in place.
{\sl Robot icon: \href{https://fontawesome.com/icons/robot?style=solid}{Font Awesome}.}}
\label{fig:example}
\end{figure}
We study several different variants of \Pull, which can be combined in arbitrary combination:
\begin{enumerate}
\setlength\itemsep{0pt}
\setlength\parskip{0pt}
\item \textbf{Optional/forced pulls:} In \textsc{Pull!}, every agent motion that can also pull blocks must pull as many as possible (as in many video games where the player input is just a direction). In \textsc{Pull?}, the agent can choose whether and how many blocks to pull. Only the latter has been studied in the literature, where it is traditionally called \textsc{Pull}; we use the explicit ``?''\ to indicate optionality and distinguish from \textsc{Pull!}.
\item \textbf{Strength:} In \Pullk[$k$][], the agent can pull an unbroken horizontal or vertical line of up to $k$ pullable blocks at once. In \Pullk[$\ast$][], the agent can pull any number of blocks at once.
\item \textbf{Fixed blocks/walls:} In \PullkF[][], the board may have fixed $1 \times 1$ blocks that cannot be traversed or pulled. In \PullkW[][], the board may have fixed thin ($1 \times 0$) walls; this is more general because a square of thin walls is equivalent to a fixed block. Thin walls were introduced in \cite{demaine2017push}.
\item \textbf{Gravity:} In \textsc{Pull-G}, all movable blocks fall downward after each agent move. Gravity does not affect the agent's movement.
\end{enumerate}
Table~\ref{tab:results} summarizes our results: for all variants that include fixed blocks or
walls, we prove \textsc{PSPACE}\xspace-completeness for any strength, with optional or forced pulls, and with or
without gravity, with the exception of \PullkFG[1][?] for which we only show \textsc{NP}\xspace-hardness.
\definecolor{header}{rgb}{0.29,0,0.51}
\definecolor{gray}{rgb}{0.85,0.85,0.85}
\def\header#1{\multicolumn{1}{c}{\textcolor{white}{\textbf{#1}}}}
\def\tableref#1{[\S\ref{#1}]}
\begin{table}
\centering
\tabcolsep=0.5\tabcolsep
\begin{tabular}{l c c c c l l}
\rowcolor{header}
\header{Problem} & \header{\hspace{-0.3em}Forced} & \header{Strength\hspace{-0.2em}} & \header{Features} & \header{\hspace{-0.2em}Gravity\hspace{-0.2em}} & \header{Our result} & \header{\hspace{-0.3em}Previous best} \\
\Pull?-$k$F & no & $k \ge 1$ & fixed blocks & no & \textsc{PSPACE}\xspace-complete \tableref{sec:no gravity} & \textsc{NP}\xspace-hard \cite{Ritt10} \\
\rowcolor{gray}
\Pull?-$\ast$F & no & $\infty$ & fixed blocks & no & \textsc{PSPACE}\xspace-complete \tableref{sec:no gravity} & \textsc{NP}\xspace-hard \cite{Ritt10} \\
\Pull!-$k$F & yes & $k \ge 1$ & fixed blocks & no & \textsc{PSPACE}\xspace-complete \tableref{sec:no gravity} & \\
\rowcolor{gray}
\Pull!-$\ast$F & yes & $\infty$ & fixed blocks & no & \textsc{PSPACE}\xspace-complete \tableref{sec:no gravity} & \\
\Pull?-1FG & no & $k = 1$ & fixed blocks & yes & \textsc{NP}\xspace-hard \tableref{sec:Pull1FG NP} & \\
\rowcolor{gray}
\Pull?-1WG & no & $k = 1$ & thin walls & yes & \textsc{PSPACE}\xspace-complete \tableref{sec:optional pull} & \\
\Pull?-$k$FG & no & $k \ge 2$ & fixed blocks & yes & \textsc{PSPACE}\xspace-complete \tableref{sec:optional pull} & \\
\rowcolor{gray}
\Pull?-$\ast$FG & no & $\infty$ & fixed blocks & yes & \textsc{PSPACE}\xspace-complete \tableref{sec:optional pull} & \\
\Pull!-$k$FG & yes & $k \ge 1$ & fixed blocks & yes & \textsc{PSPACE}\xspace-complete \tableref{sec:mandatory gravity} & \\
\rowcolor{gray}
\Pull!-$\ast$FG & yes & $\infty$ & fixed blocks & yes & \textsc{PSPACE}\xspace-complete \tableref{sec:mandatory gravity} & \\
\end{tabular}
\caption[]{Summary of our results.}
\label{tab:results}
\end{table}
The only previously known hardness result for this family of problems is NP-hardness for both \PullkF and \PullkF[$*$] \cite{Ritt10}.
In some cases, our results are stronger than the best known results for the corresponding \textsc{Push} (pushing-block) problem; see \cite{PRB16}.
More complex variants \PullPull (where pulled blocks slide maximally), \PushPull (where blocks can be pushed and pulled), and \textsc{Storage Pull} (where the goal is to place multiple blocks into desired locations) are also known to be PSPACE-complete \cite{demaine2017push,PRB16}.
Our reductions are from Asynchronous Nondeterministic Constraint Logic (NCL)
\cite{hearn2009games, DBLP:conf/cccg/Viglietta13} and
planar 1-player motion planning \cite{demaine2018general, doors}.
In Section~\ref{sec:no gravity}, we reduce from NCL to prove \textsc{PSPACE}\xspace-hardness of all nongravity variants.
In Section~\ref{sec:gravity pspace}, we use the motion-planning-through-gadgets framework \cite{demaine2018general} to prove \textsc{PSPACE}\xspace-completeness of most variants with gravity, including all variants with forced pulling and variants with optional pulling and either thin walls or fixed blocks with $k\ge2$.
These reductions use two particular gadgets for 1-player motion planning,
the newly introduced \emph{nondeterministic locking 2-toggle\xspace}
(a variant of the locking 2-toggle from \cite{demaine2018general})
and the \emph{3-port self-closing door} (one of the self-closing doors from
\cite{doors}).
Although the latter gadget is proved hard in \cite{doors}, for completeness,
we give a more succinct proof in Appendix~\ref{app:self-closing door}.
In Section~\ref{sec:Pull1FG NP}, we prove \textsc{NP}\xspace-hardness for the one remaining case of \PullkFG[1][?],
again reducing from 1-player planar motion planning,
this time with an NP-hard gadget called the crossing NAND gadget \cite{doors}.
\section{Pulling Blocks with Fixed Blocks is \textsc{PSPACE}\xspace-complete}
\label{sec:no gravity}
In this section, we show the PSPACE-completeness of all variants of pulling-block problems we have defined without gravity, namely \PullkF, \PullkW, \PullkF[$k$][!], and \PullkW[$k$][!] for $k \ge 1$, and \PullkF[$\ast$], \PullkW[$\ast$], \PullkF[$*$][!], and \PullkW[$*$][!]. We do this through a reduction from Nondeterministic Constraint Logic \cite{hearn2009games}, which we describe briefly before moving on to the main proof.
\subsection{Asynchronous Nondeterministic Constraint Logic}
\label{ssec:NCL}
\textsc{Nondeterministic Constraint Logic}\xspace (NCL) takes place on \emph{constraint graphs}: a directed graph where each edge has weight 1 or 2. Weight-1 edges are called \emph{red}; weight-2 edges are called \emph{blue}. The ``constraint'' in NCL is that each vertex must maintain in-weight at least 2. A \emph{move} in NCL is a reversal of the direction of one edge, while maintaining compliance with the constraint.
In \emph{asynchronous} NCL, the process of switching the orientation of an edge does not happen instantaneously, but instead it takes a positive amount of time, and it is possible to be in the process of switching several edges simultaneously. When an edge is in the process of being reversed, it is not oriented towards either vertex. Viglietta \cite{DBLP:conf/cccg/Viglietta13} showed that this model is equivalent to the regular (synchronous) model, because there is no benefit to having an edge in the intermediate unoriented state. In this work, we only use the asynchronous NCL model; any mention of NCL should be understood to mean asynchronous NCL.
An instance of \textsc{Nondeterministic Constraint Logic}\xspace consists of a constraint graph $G$ and an edge $e$ of $G$, called the \emph{target edge}.
The output is \textsc{yes}\xspace if there is a sequence of moves on $G$ that reverses the direction of $e$, and \textsc{no}\xspace otherwise. \textsc{Nondeterministic Constraint Logic}\xspace is \textsc{PSPACE}\xspace-complete, even for planar constraint graphs that have only two types of vertices: AND (two red edges, one blue edge) and OR (three blue edges).
We will reduce from the planar, AND/OR, asynchronous version of NCL to show pulling-block problems without gravity \textsc{PSPACE}\xspace-hard.
For more description of NCL, including a proof of \textsc{PSPACE}\xspace-completeness, the reader is referred to \cite{hearn2009games}.
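Since our gadgets manipulate NCL configurations directly, it may help to make the move rule concrete. The following sketch is our own illustration (not taken from \cite{hearn2009games}): it checks whether reversing one edge of a constraint graph is a legal synchronous NCL move, using the weights above (red $=1$, blue $=2$) and the constraint that every vertex keeps in-weight at least $2$; the two-vertex example data at the end is hypothetical.
\begin{verbatim}
# Each edge is a triple (tail, head, weight); reversing an edge swaps tail and head.
# A reversal is legal iff every vertex still has in-weight >= 2 afterwards.

def in_weight(edges, v):
    return sum(w for (_, head, w) in edges if head == v)

def legal_reversal(edges, i):
    tail, head, w = edges[i]
    new_edges = list(edges)
    new_edges[i] = (head, tail, w)      # reverse edge i
    vertices = {t for (t, _, _) in new_edges} | {h for (_, h, _) in new_edges}
    return all(in_weight(new_edges, v) >= 2 for v in vertices)

# Hypothetical instance: two vertices joined by three blue edges.
edges = [("u", "v", 2), ("u", "v", 2), ("v", "u", 2)]
print(legal_reversal(edges, 0))   # True:  v still receives weight 2 from the second edge
print(legal_reversal(edges, 2))   # False: u would be left with in-weight 0
\end{verbatim}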
\subsection{NCL Gadgets in Pulling Blocks}
In order to embed an NCL constraint graph into \PullkF, we need three
components, corresponding to NCL edges (which can attach to AND and OR gadgets
in all necessary orientations, and that allows the player to win if the
winning edge is flipped), AND vertices, and OR vertices.
In each of these gadgets, we will show that if the underlying NCL constraint is violated, then the agent will be ``trapped'', meaning that the state is in an \emph{unrecoverable configuration}, a concept used in several previous blocks games \cite{sokoban,hearn2009games}. This occurs when the agent makes a pull move after which no set of moves will lead to a solution, generally because the agent has trapped itself in a way that no pull can be made \textit{at all} (or only a few more pull moves may be made, and all of them lead to a state such that there are no more pull moves).
\textbf{Diode Gadget.} Before describing the three main gadgets, we describe a helper gadget, the \emph{diode}, shown in Figure~\ref{fig:diode}. The diode can be repeatedly traversed in one direction but never the other. It was introduced in \cite{Ritt10}.
\begin{figure}
\caption{Diode gadget, which can be repeatedly traversed from left to right but never from right to left.
In diagrams to follow it will be represented by the diode symbol.}
\label{fig:diode}
\end{figure}
In the next three sections, we describe the three main gadgets in turn.
\input{Gadgets/NCLWire}
\input{Gadgets/NCLor.tex}
\input{Gadgets/NCLand.tex}
\subsection{Proof of \textsc{PSPACE}\xspace-completeness}
We first observe that every pulling-block problem we consider is in \textsc{PSPACE}\xspace.
\begin{lemma}
\label{lem:pull-in-PSPACE}
Every pulling-block problem defined in Section~\ref{sec:intro} is in \textsc{PSPACE}\xspace.
\end{lemma}
\begin{proof}
The entire configuration while playing an instance of a pulling-block problem can be stored in polynomial space (e.g., as a matrix recording whether each cell is empty, a fixed block, a movable block, the agent's location, or the finish tile). There is a simple nondeterministic algorithm which guesses each move and keeps track of the configuration using only polynomial space, accepting if the agent reaches the goal square.
Thus the problem is in \textsc{NPSPACE}\xspace, so by Savitch's Theorem \cite{savitch1970relationships} it is also in \textsc{PSPACE}\xspace.
\end{proof}
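To make the configuration encoding in the proof above concrete, the following sketch (our own simplified illustration, not a formalization used elsewhere in this paper) stores a board as a grid of characters and computes the result of a single agent step with an optional pull of up to $k$ blocks; it models the optional-pulling variants without gravity, and all cell symbols and function names are ours.
\begin{verbatim}
# Cells: '.' empty, '#' fixed block, 'B' movable block, 'F' finish tile.
# The agent's position is stored separately; no gravity in this sketch.

def step(grid, agent, direction, pull, k):
    # One agent step in `direction`, pulling `pull` <= k blocks from the unbroken
    # line of movable blocks directly behind it; returns the new (grid, agent)
    # pair, or None if the move is illegal.
    rows, cols = len(grid), len(grid[0])
    (r, c), (dr, dc) = agent, direction
    nr, nc = r + dr, c + dc                      # cell the agent steps into
    if not (0 <= pull <= k):
        return None
    if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] not in ".F":
        return None                              # blocked or off the board
    g = [list(row) for row in grid]
    for i in range(1, pull + 1):                 # require an unbroken line of movable blocks
        br, bc = r - i * dr, c - i * dc
        if not (0 <= br < rows and 0 <= bc < cols) or g[br][bc] != "B":
            return None
    for i in range(1, pull + 1):                 # shift each pulled block one cell forward
        br, bc = r - i * dr, c - i * dc
        g[br][bc] = "."
        g[br + dr][bc + dc] = "B"
    return ["".join(row) for row in g], (nr, nc)

board = ["######",
         "#BB..#",
         "######"]
print(step(board, (1, 3), (0, 1), pull=2, k=2))  # (['######', '#.BB.#', '######'], (1, 4))
\end{verbatim}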
\begin{theorem}
\label{thm:pull-kF-PSPACE-complete}
\PullkF and \PullkF[$k$][!] are \textsc{PSPACE}\xspace-complete for $k \ge 1$ and $k=*$.
\end{theorem}
\begin{proof}
Lemma~\ref{lem:pull-in-PSPACE} gives us containment in \textsc{PSPACE}\xspace.
For \textsc{PSPACE}\xspace-hardness, we reduce from asynchronous NCL
(as defined in Section~\ref{ssec:NCL}).
Given a planar AND/OR NCL graph, we construct an instance of \PullkF or \PullkF[$k$][!] as follows. First, embed the graph in a grid graph. Scale this grid graph by a factor large enough to fit our gadgets; $20\times20$ suffices. At each vertex, place the appropriate AND or OR vertex gadget. Place edge gadgets in the appropriate configuration along each edge, using corner gadgets on turns. Adjust the vertex gadgets to accommodate the alignment of the edge gadgets incident to them. Finally, place the goal tile in the edge gadget corresponding to the target edge so that it is accessible only if the target edge is flipped, and place the agent on any empty tile.
The agent can walk through edge gadgets to visit any NCL edge or vertex, and by
Lemmas~\ref{lem:NCL-vertices-OR} and~\ref{lem:NCL-vertices-AND}, flip edges in accordance with the rules of NCL. Ultimately, it can reach the goal tile if and only if the target edge of the NCL instance can be reversed.
In our construction, the agent never has the opportunity to pull more than 1 block at a time. Thus the reduction works for \PullkF for any $k\geq1$, including $k=*$. In addition, the agent never has to choose not to pull a block when taking a step, so the reduction works for \PullkF[$k$][!] as well as \PullkF.
\end{proof}
\begin{corollary}
\label{cor:pull-W-PSPACE-complete}
\PullkW and \PullkW[$k$][!] are \textsc{PSPACE}\xspace-complete for $k\geq1$ and $k=*$.
\end{corollary}
\begin{proof}
A fixed block can be simulated using four thin walls drawn around a single tile, so our constructions can be built using thin walls instead of fixed blocks. Formally, this is a reduction from \PullkF to \PullkW and a reduction from \PullkF[$k$][!] to \PullkW[$k$][!].
\end{proof}
\section{\texorpdfstring{\PullkFG}{Pull?-kFG} is \textsc{PSPACE}\xspace-complete for \texorpdfstring{$k \ge 2$}{k ≥ 2} and \texorpdfstring{\PullkFG[$k$][!]}{Pull!-kFG} is \textsc{PSPACE}\xspace-complete for \texorpdfstring{$k \ge 1$}{k ≥ 1}}
\label{sec:gravity pspace}
In this section, we show \textsc{PSPACE}\xspace-completeness results for most of the
pulling-block variants with gravity.
In Section~\ref{sec:gadgets}, we introduce and prove results about
\emph{1-player motion planning} from the motion-planning-through-gadgets
framework introduced in \cite{demaine2018computational}, which will be the
basis for the later proofs.
In Section~\ref{sec:optional pull},
we show \textsc{PSPACE}\xspace-completeness for \PullkFG with $k \ge 2$, for
\PullkFG[$\ast$], for \PullkWG with $k \ge 1$, and for \PullkWG[$\ast$].
In Section~\ref{sec:mandatory gravity}, we show \textsc{PSPACE}\xspace-completeness for
\PullkFG[$k$][!] with $k \ge 1$, and for \PullkFG[$\ast$][!].
The one case missing from this collection is \PullkFG[1], which we prove
NP-hard later in Section~\ref{sec:Pull1FG NP}.
\subsection{1-player Motion Planning}
\label{sec:gadgets}
\emph{1-player motion planning} refers to the general problem of planning an agent's motion to complete a path through a series of gadgets whose state and traversability can change when the agent interacts with them. In particular, a \emph{gadget} is a constant-size set of locations, states, and traversals, where each traversal indicates that the agent can move from one location to another while changing the state of the gadget from one state to another. A system of gadgets is constructed by connecting the locations of several gadgets with a graph, which is sometimes restricted to be planar. The decision problem for 1-player motion planning is whether the agent, starting from a specified starting location, can follow edges in the graph and transitions within gadgets to reach some goal location.
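For intuition, the decision problem just described is plain reachability over configurations of the form (agent location, tuple of gadget states). The sketch below is our own illustration of this model (the gadget in the example data is a hypothetical one-tunnel toggle, not one of the gadgets studied below); it is exponential in the number of gadgets and is meant only to clarify the definitions, not to suggest an efficient algorithm.
\begin{verbatim}
from collections import deque

# A gadget is a dict mapping (state, entry_location) to a list of
# (new_state, exit_location) traversals.  Connections are undirected edges
# between locations, along which the agent may walk freely.

def reachable(gadgets, initial_states, connections, start, goal):
    adj = {}
    for a, b in connections:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    start_cfg = (start, tuple(initial_states))
    seen, queue = {start_cfg}, deque([start_cfg])
    while queue:
        loc, states = queue.popleft()
        if loc == goal:
            return True
        successors = [(l, states) for l in adj.get(loc, ())]
        for i, (gadget, st) in enumerate(zip(gadgets, states)):
            for new_st, out in gadget.get((st, loc), ()):
                successors.append((out, states[:i] + (new_st,) + states[i + 1:]))
        for cfg in successors:
            if cfg not in seen:
                seen.add(cfg)
                queue.append(cfg)
    return False

# Hypothetical gadget: locations "L", "R"; state 0 allows L->R (entering state 1),
# state 1 allows only R->L (returning to state 0).
toggle = {(0, "L"): [(1, "R")], (1, "R"): [(0, "L")]}
print(reachable([toggle], [0], [("start", "L"), ("R", "goal")], "start", "goal"))  # True
\end{verbatim}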
Our results use that 1-player planar motion planning is \textsc{PSPACE}\xspace-complete
for the following gadgets:
\begin{enumerate}
\item The \emph{locking 2-toggle}, shown in Figure~\ref{fig:l2t}, is a three-state two-tunnel reversible deterministic gadget. In the \emph{middle state}, both tunnels can be traversed in one direction, switching to one of two \emph{leaf states}. Each leaf state only allows the transition back across that tunnel in the opposite direction, returning the gadget to the middle state. Traversing one tunnel ``locks'' the other side from being used until the prior traversal is reversed.
1-player planar motion planning with locking 2-toggles was shown
\textsc{PSPACE}\xspace-complete in \cite{demaine2018general}.
In Section~\ref{sec:leaf2toggle}, we strengthen the result in \cite{demaine2018general} by showing that 1-player motion planning with locking 2-toggle remains hard even if the initial configuration of the system has all gadgets in leaf (locked) states.
\begin{figure}
\caption{State space of the locking 2-toggle.}
\label{fig:l2t}
\caption{State space of the nondeterministic locking 2-toggle.}
\label{fig:nl2t}
\end{figure}
\item The \emph{nondeterministic locking 2-toggle\xspace}, shown in Figure~\ref{fig:nl2t}, is a four-state gadget where each state has two transitions, each across the same tunnel. The top pair of states each allow a single traversal downward, and allow the agent to choose either of the two bottom states for the gadget. Similarly, the bottom pair of states each allow a single traversal upward to one of the top states. We can imagine this as being similar to the locking 2-toggle if the tunnel to be taken next is guessed ahead of time: the bottom state of the locking 2-toggle is split into two states which together allow the same traversals, but only if the agent picks the correct one ahead of time.
In Section~\ref{sec:nondet2toggle}, we show that 1-player motion planning with the nondeterministic locking 2-toggle\xspace is \textsc{PSPACE}\xspace-complete.
\item The \emph{door gadget} has three directed tunnels called \emph{open}, \emph{close}, and \emph{traverse}. The traverse tunnel is open or closed depending on the state of the gadget and does not change the state. Traversing the open or close tunnel opens or closes the traverse tunnel, respectively.
1-player motion planning with door gadgets was shown
\textsc{PSPACE}\xspace-complete in \cite{nintendoor} and explored more thoroughly
(in particular, proved hard for most planar cases) in \cite{doors}.
\item The \emph{3-port self-closing door}, shown in Figure~\ref{fig:statespace-scd}, is a gadget with a tunnel that becomes closed when the agent traverses it and a location that the agent can visit to reopen the tunnel.
It has an \emph{opening port}, which opens the gadget,
and a \emph{self-closing tunnel}, which is the tunnel that closes when traversed.
In Appendix~\ref{app:self-closing door}, we prove that 1-player planar motion planning with the \emph{3-port self-closing door} is \textsc{PSPACE}\xspace-complete.
A more general result on self-closing doors can be found in \cite{doors}, but we include this more succinct proof for completeness and conciseness.
\begin{figure}
\caption{State space of the 3-port self-closing door, used in the \PullkFG[$k$][!] reduction.}
\label{fig:statespace-scd}
\end{figure}
\end{enumerate}
\subsubsection{Nondeterministic Locking 2-toggle}
\label{sec:nondet2toggle}
\label{sec:leaf2toggle}
In this section, we prove that 1-player motion planning with the nondeterministic locking 2-toggle\xspace is \textsc{PSPACE}\xspace-complete. We also show that 1-player motion planning with the locking 2-toggle remains \textsc{PSPACE}\xspace-complete when the gadgets are restricted to start in leaf states.
We use the construction shown in Figure~\ref{fig:nl2t-to-l2t} to show simultaneously that locking 2-toggles starting in leaf states can simulate a locking 2-toggle starting in a nonleaf state, and nondeterministic locking 2-toggles can simulate a locking 2-toggle. This construction consists of two nondeterministic locking 2-toggles and a 1-toggle. A \emph{1-toggle} is a two-state, two-location, reversible, deterministic gadget where each state admits a single (opposite) transition between the locations and these transitions flip the state. It can be trivially simulated by taking a single tunnel of a locking 2-toggle or nondeterministic locking 2-toggle.
\begin{theorem} \label{thm:nlt}
1-player planar motion planning with the nondeterministic locking 2-toggle\xspace is \textsc{PSPACE}\xspace-complete.
\end{theorem}
\begin{proof}
In the construction shown in Figure~\ref{fig:nl2t-to-l2t}, the agent can enter through either of the top lines; suppose they enter on the left. Other than backtracking, the agent's only path is across the bottom 1-toggle, then up the leftmost tunnel, having chosen the state of the nondeterministic locking 2-toggle\xspace which makes that tunnel traversable.
Now the only place the agent can usefully enter the construction is the leftmost line. The agent can only go down the leftmost tunnel, up the 1-toggle, and out the top right entrance, again making the appropriate nondeterministic choice when traversing the left gadget.
Symmetrically, if (from the unlocked state) the agent enters the top right, they must exit the bottom right, and the next traversal must go from the bottom right to the top right and return the construction to the unlocked state. Thus this construction simulates a locking 2-toggle.
\end{proof}
\begin{figure}
\caption{Constructing a locking 2-toggle from a nondeterministic locking 2-toggle. It is currently in the unlocked state. The nondeterministic locking 2-toggles are in leaf states (top states in Figure~\ref{fig:nl2t}).}
\label{fig:nl2t-to-l2t}
\end{figure}
If we instead build the above construction with locking 2-toggles in leaf states, then all three of the locking 2-toggles used are in leaf states (the 1-toggle is one tunnel of a locking 2-toggle). An argument very similar to the one for the nondeterministic locking 2-toggle\xspace construction shows that this gadget also simulates a locking 2-toggle. Thus, given a 1-player motion planning problem with locking 2-toggles, we can replace all of the locking 2-toggles in nonleaf states with this gadget to obtain an instance where all starting gadgets are in leaf states.
\begin{corollary}
1-player motion planning with the locking 2-toggle where all of the locking 2-toggles start in leaf states is \textsc{PSPACE}\xspace-complete.
\end{corollary}
\later{
\section{3-port Self-Closing Door}
\label{app:self-closing door}
Ani et al.~\cite{doors} proved \textsc{PSPACE}\xspace-completeness of 1-player planar motion planning with many types of self-closing door gadgets and all of their planar variations.
For completeness, we give a proof specific to the 3-port self-closing door gadget in this section.
Our proof is more succinct because it does not consider other variants of the gadget.
The reduction is from 1-player motion planning with the door gadget from \cite{nintendoor}.
\begin{theorem}\label{thm:scd}
1-player planar motion planning with the 3-port self-closing door is \textsc{PSPACE}\xspace-hard.
\end{theorem}
\begin{proof}
We will show that the 3-port self-closing door planarly simulates a crossover, which lets us ignore planarity.
We will then show that the 3-port self-closing door simulates the door gadget. Because 1-player motion planning with the door gadget is \textsc{PSPACE}\xspace-hard \cite{nintendoor}, so is 1-player motion planning with the 3-port self-closing door, and because it simulates a crossover, so is 1-player planar motion planning with the 3-port self-closing door. Along the way, we will construct a self-closing door with multiple door and opening ports as well as a diode.
\paragraph{Diode.}We can simulate a diode (one-way tunnel which is always traversable)
by connecting the opening port to the input of the self-closing tunnel. The agent can always go to the opening port
and then through the self-closing tunnel, but can never go the other way because the self-closing tunnel is directed.
\paragraph{Port Duplicator.} The construction shown in Figure~\ref{fig:scd-port-duplicator} simulates a self-closing door with two equivalent opening ports. If the agent enters from the top, it can
open only one of the upper gadgets, then open the lower gadget, and then must exit the same way it came. Note that this same idea can be used to construct more than two ports, which will be needed later.
\begin{figure}
\caption{3-port self-closing door simulating a version of it that has 2 opening ports. Opening ports are shown in green.
A dotted self-closing tunnel is closed, and a solid self-closing tunnel is open.}
\label{fig:scd-port-duplicator}
\end{figure}
We use these to simulate an intermediate gadget composed of two self-closing doors, each connected to two opening ports in a particular arrangement, shown in Figure~\ref{fig:planar-scd-crossoverish}. If the agent enters from port 1 or 4,
it will open door E or F, respectively, and then leave. If the agent enters from port 2, it can open doors A, B, and C. If it then traverses door B and opens door E, it will get stuck because both B and D are closed. So the agent cannot open door E and exit.
Instead, it can traverse doors B and A, ending up back at port 2 with no change except that door C is open. Entering
port 2 or 3 always gives the agent an opportunity to open door C, so leaving door C open does not help.
So the only useful path after entering port 2 is to traverse door C. The agent is then forced to go right and can open door F. Then
it is forced to traverse door B. Again if the agent opens door E, it will be stuck, so the agent traverses door A instead and
returns to port 2, leaving door F open.
Similarly, if the agent enters from port 3, the only useful thing it can do is open
door E and return to port 3.
\begin{figure}
\caption{3-port self-closing door simulating the gadget on the right, where each port opens the door of the same color (the top and third-from-top open the top door, and the others open the bottom door).}
\label{fig:planar-scd-crossoverish}
\end{figure}
\paragraph{Crossover.} This intermediate gadget can simulate a directed crossover, shown in Figure~\ref{fig:planar-scd-crossover}. If the agent enters at the top left, it can open the left door on the top gadget, open both doors on the bottom gadget, and then exit the bottom right while closing all three opened doors. If the agent opens both doors on the top gadget it will get stuck. Similarly if the agent enters the bottom left, all it can do is exit the top right.
The directed crossover can simulate an undirected crossover, as in
Figure~\ref{fig:dir-crossover} and shown in \cite{Push100}.
\begin{figure}
\caption{3-port self-closing door simulating a crossover.}
\label{fig:planar-scd-crossover}
\end{figure}
\begin{figure}
\caption{Directed crossover simulating an undirected crossover.}
\label{fig:dir-crossover}
\end{figure}
\paragraph{Door Duplicator.} Now, we use this crossover to simulate a gadget with two self-closing doors controlled by the same opening port, as shown in Figure~\ref{fig:scd-tunnel-duplicator}. This gadget has two states, open and closed. Both doors are either open or closed and going through either door closes both of them.
The construction is similar to the construction for the port duplicator, but goes through a tunnel instead.
\begin{figure}
\caption{3-port self-closing door simulating a gadget with 2 self-closing tunnels.}
\label{fig:scd-tunnel-duplicator}
\end{figure}
\paragraph{Door Gadget.} Finally, we triplicate the opening port by adding a third entrance to the construction in Figure~\ref{fig:scd-port-duplicator} similar to the other two, and use these ports to simulate a door gadget as shown
in Figure~\ref{fig:scd-otc}. Recall the whole three-port two-door gadget has only two states, open and closed. The agent can open both doors from any of the open ports and going across either self-closing door will close both doors. If the agent enters from port $O$, it can open the doors and leave.
If the agent enters from port $T_0$ and the gadget is open, the agent can traverse the door and then reopen it using the third port. The agent then leaves at port $T_1$. If the agent enters from port $C_0$, it can open the gadget and then must traverse the bottom tunnel and leave at port $C_1$, closing the
gadget.
\end{proof}
\begin{figure}
\caption{Simulation of the door gadget in \cite{nintendoor}.}
\label{fig:scd-otc}
\end{figure}
}
\subsection{\texorpdfstring{\PullkFG}{Pull?-kFG}}
\label{sec:optional pull}
In this section, we show that several versions of pulling-block problems with optional pulling and gravity are \textsc{PSPACE}\xspace-complete by a reduction from 1-player motion planning with nondeterministic locking 2-toggles, shown \textsc{PSPACE}\xspace-hard in Section~\ref{sec:nondet2toggle}.
We begin with a construction of a 1-toggle, and then use those and an intermediate construction to build a nondeterministic 2 toggle.
\paragraph{1-toggle.}
A \emph{1-toggle} is a gadget with a single tunnel, traversable in one direction. When the agent traverses it, the direction that it can be traversed is flipped, meaning that the agent must backtrack and return the way it came in order to be able to traverse it the first way again.
Our 1-toggle construction in \PullkFG for $k\geq2$ is shown in Figure~\ref{fig:1toggle}. In the state shown, it can only be traversed from left to right by pulling both blocks to the left. This traversal flips the direction that the gadget can be traversed---it can now only be traversed from right to left.
\begin{figure}
\caption{1-toggle in \PullkFG[2].}
\label{fig:1toggle}
\end{figure}
\paragraph{Nondeterministic Locking 2-toggle.}
Our construction of a nondeterministic locking 2-toggle\xspace, shown in Figure~\ref{fig:locking2toggle}, uses two 1-toggles plus a connecting section at the top.
\begin{figure}
\caption{Locking 2-toggle in \PullkFG[2].}
\label{fig:locking2toggle}
\caption{Locking 2-toggle in \PullkWG[1].}
\label{fig:locking2toggle-W}
\end{figure}
The configuration shown in Figure~\ref{fig:locking2toggle} is a leaf state. The right tunnel is traversable from top right to bottom right. If the agent traverses that tunnel, it can choose whether to pull the top pair of blocks to the right (because pulling is optional), corresponding to the nondeterministic choice in the nondeterministic locking 2-toggle\xspace. Both 1-toggles will be in the state where they can be traversed from bottom (outside) to top (inside). One of these paths will be blocked by the top pair of blocks and the other will be traversable, depending on whether the agent chose to pull those blocks. Traversing the traversable path then puts the gadget in a leaf state, either the one shown or its reflection.
It is possible for the agent to pull only one block instead of two, but this can only prevent future traversals, so never benefits the agent.
\begin{theorem}
\label{thm:pull-kFG-PSPACE-complete}
\PullkFG is \textsc{PSPACE}\xspace-complete for $k \ge 2$ and $k=*$.
\end{theorem}
\begin{proof}
Lemma~\ref{lem:pull-in-PSPACE} gives containment in \textsc{PSPACE}\xspace.
For hardness, we reduce from 1-player planar motion planning with the nondeterministic locking 2-toggle\xspace, shown \textsc{PSPACE}\xspace-hard in Theorem~\ref{thm:nlt}. We embed any planar network of gadgets in a grid, and replace each nondeterministic locking 2-toggle\xspace with the construction described above in the appropriate state. The resulting pulling-block problem is solvable if and only if the motion planning problem is.
This reduction works for \PullkFG for any $k \ge 2$ including $k=*$, because the player only ever has the opportunity to pull 2 blocks at a time. This proof requires optional pulling because the player must choose whether to pull blocks while traversing a nondeterministic locking 2-toggle\xspace.
\end{proof}
\begin{corollary}
\PullkWG is \textsc{PSPACE}\xspace-complete for $k\ge1$ and $k=*$.
\end{corollary}
\begin{proof}
With thin walls, the tunnels can be separated by a thin wall instead of a fixed block, which means that only one block is required in each of the toggles. This is shown in Figure~\ref{fig:locking2toggle-W}. The rest of the proof follows in the same manner, demonstrating \textsc{PSPACE}\xspace-completeness of \PullkWG for $k \ge 1$.
\end{proof}
\subsection{\texorpdfstring{\PullkFG[$k$][!]}{Pull!-kFG}}
\label{sec:mandatory gravity}
In this section, we show \textsc{PSPACE}\xspace-completeness for pulling-block problems with forced pulling and gravity, using a reduction from 1-player planar motion planning with the 3-port self-closing door, shown \textsc{PSPACE}\xspace-hard in Theorem~\ref{thm:scd}.
\begin{theorem}
\label{thm:pull!-kFG-PSPACE-complete}
\PullkFG[$k$][!] is \textsc{PSPACE}\xspace-complete for $k \ge 1$ and $k=*$.
\end{theorem}
\begin{proof}
Lemma~\ref{lem:pull-in-PSPACE} gives containment in \textsc{PSPACE}\xspace.
We show \textsc{PSPACE}\xspace-hardness by a reduction from 1-player planar motion planning with the 3-port self-closing door. It suffices to construct a 3-port self-closing door in \PullkFG[$k$][!].
First, we construct
a diode, shown in Figure~\ref{fig:pull!-kFG-diode}. The agent cannot enter from the right. If the agent enters from the left,
it must pull the left block to the left to advance. If it pulls the left block left and then exits, it still cannot enter from the right,
so doing so is useless. The agent then advances and is forced to pull the left block back to its original position. The agent then must
pull the right block left to advance, and must actually advance because the way back is blocked. As the agent exits the gadget, it
is forced to pull the right block back to its original position. Therefore, the agent can always cross the gadget from left to right
and never from right to left, simulating a diode.
\begin{figure}
\caption{A diode in \PullkFG[$k$][!].}
\label{fig:pull!-kFG-diode}
\end{figure}
Using this diode, we then construct a 3-port self-closing door, shown in Figure~\ref{fig:pull!-kFG-scd}; the diode icons indicate the diode shown in Figure~\ref{fig:pull!-kFG-diode}. The bottom is exit-only. In the closed
state, the agent should not enter from the top because it would become trapped between a block and the wrong end of a diode. The
agent can enter from the right, pull the block 1 tile right, and leave, opening the gadget. In the open state, the agent can enter
from the top and exit out the bottom, and is forced to pull the block back to its original position, closing the gadget. So this construction
simulates a 3-port self-closing door.
\begin{figure}
\caption{A 3-port self-closing door in \PullkFG[$k$][!].}
\label{fig:pull!-kFG-scd}
\end{figure}
Because the player never has the opportunity to pull multiple blocks, this reduction works for all $k\ge1$ including $k=*$.
\end{proof}
\section{\texorpdfstring{\PullkFG[1]}{Pull?-1FG} is NP-hard}
\label{sec:Pull1FG NP}
In this section, we show \textsc{NP}\xspace-hardness for \PullkFG[1] by reducing from
1-player planar motion planning with the crossing NAND gadget from \cite{doors}.
A \emph{crossing NAND gadget} is a three-state gadget with two crossing
tunnels, where traversing either tunnel permanently closes the other tunnel.
1-player planar motion planning with the crossing NAND gadget was shown \textsc{NP}\xspace-hard
in \cite[Lemma~4.9]{doors}, based on the constructions in
\cite{demaine2003pushing,friedman2002pushing}, which originally reduce from \textsc{Planar 3-Coloring}\xspace.
\begin{theorem}
\label{thm:pull-1FG-NP-hard}
\PullkFG[1] is \textsc{NP}\xspace-hard.
\end{theorem}
\begin{proof}
We reduce from 1-player planar motion planning with the crossing NAND gadget
\cite[Lemma~4.9]{doors}.
First we construct a ``single-use'' one-way gadget, shown in
Figure~\ref{fig:single-one-way}.
This gadget can initially be crossed in one direction, but then becomes
impassable in both directions.
\begin{figure}
\caption{Single-use one-way gadget that initially allows traversal from left-to-right and then
prevents traversal in both directions.}
\label{fig:single-one-way}
\end{figure}
Figure~\ref{fig:crossover} shows our construction of the crossing NAND gadget.
Single-use one-way gadgets enforce that the agent must enter
through one of the top paths.
The agent must pull two blocks to enter the gadget;
these blocks end up stacked in the vertical tunnel on top of the block below.
The agent cannot exit via the bottom tunnel underneath its entry tunnel:
the agent can pull one block into the slot on the bottom, and then can pull
one block one square, but that still leaves the third block of the stack
blocking off the exit path.
The agent cannot exit via the other top path, because it is blocked by the
single-use one-way gadget.
The only path remaining is for the agent to cross diagonally by pulling the
single block in the lower layer into the slot, revealing a path to the exit
opposite where the agent entered.
After leaving, both the entry tunnel and exit tunnel are impassable
because the single-use one-way gadgets have become impassable.
If the agent later enters via the other entry tunnel, the agent will be trapped,
because it will not be able to leave via the tunnel that was ``collapsed''
in the initial entry.
\end{proof}
\begin{figure}
\caption{Crossing NAND gadget allowing traversal either from the top-left to
the bottom-right, or from the top-right to the bottom-left. After being
traversed once, the entire gadget becomes impassable in any direction.}
\label{fig:crossover}
\end{figure}
We leave open the question of whether \PullkFG[1] is in \textsc{NP}\xspace or \textsc{PSPACE}\xspace-hard.
\section{Open Problems}
\label{sec:Open Problems}
There are several open problems remaining related to the pulling-block problems considered in this paper.
\begin{enumerate}
\item What is the complexity of \PullkFG[1] (the last remaining problem in Table~\ref{tab:results})? We leave a gap between \textsc{NP}\xspace-hardness and containment in \textsc{PSPACE}\xspace.
\item What is the complexity of pulling-block puzzles without fixed blocks (say, on a rectangular board)? With block pushing, one can generally construct effectively fixed blocks by putting enough blocks together. This technique no longer works in the block-pulling context.
\item Do all of these variants remain \textsc{PSPACE}\xspace-hard when we ask about storage (can the player place blocks covering some set of squares?)\ or reconfiguration (where blocks are distinguishable and must reach a desired configuration) instead of reachability? The storage question for \PullkFG[$k$][?] for $k \geq 1$ and \PullkFG[$*$][?] has been proved PSPACE-hard \cite{PRB16}.
\item What about the studied variants applied to \PushPull (where blocks can be pushed and pulled) and \PullPull (where blocks must be pulled maximally until the robot backs against another block)? Standard versions are proved PSPACE-complete in \cite{demaine2017push,PRB16}, but variations with mandatory pulling, gravity, and/or no fixed blocks all remain open.
\end{enumerate}
\appendix
\latertrue \the\magicAppendix
\end{document}
\begin{document}
\title{Circuit Quantum Electrodynamics: A New Look Toward Developing Full-Wave Numerical Models}
\author{Thomas E. Roth~\IEEEmembership{Member,~IEEE},
and Weng C. Chew~\IEEEmembership{Life Fellow,~IEEE}
\thanks{This work was supported by NSF ECCS 169195, a startup fund at Purdue University, and the Distinguished Professorship Grant at Purdue University.
Thomas E. Roth is with the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907 USA. Weng C. Chew is with the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA and the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907 USA (contact e-mail: [email protected]).
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.}
}
\maketitle
\begin{abstract}
Devices built using circuit quantum electrodynamics architectures are one of the most popular approaches currently being pursued to develop quantum information processing hardware. Although significant progress has been made over the previous two decades, there remain many technical issues limiting the performance of fabricated systems. Addressing these issues is made difficult by the absence of rigorous numerical modeling approaches. This work begins to address this issue by providing a new mathematical description of one of the most commonly used circuit quantum electrodynamics systems, a transmon qubit coupled to microwave transmission lines. Expressed in terms of three-dimensional vector fields, our new model is better suited to developing numerical solvers than the circuit element descriptions commonly used in the literature. We present details on the quantization of our new model, and derive quantum equations of motion for the coupled field-transmon system. These results can be used in developing full-wave numerical solvers in the future. To make this work more accessible to the engineering community, we assume only a limited amount of training in quantum physics and provide many background details throughout derivations.
\end{abstract}
\begin{IEEEkeywords}
Circuit quantum electrodynamics, computational electromagnetics, quantum mechanics.
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{C}{ircuit} quantum electrodynamics (QED) architectures are one of the leading candidates currently being pursued to develop quantum information processing devices \cite{blais2004cavity,blais2007quantum,gu2017microwave}, such as quantum simulators, gate-based quantum computers, and single photon sources \cite{ma2019dissipatively,arute2019quantum,zhou2020tunable,houck2007generating}. These circuit QED devices are most often formed by embedding superconducting Josephson junctions into planar microwave circuitry (also made from superconductors), such as coplanar waveguides \cite{blais2004cavity}. Doing this, planar ``on-chip'' realizations of many cavity QED and quantum optics concepts can be implemented at microwave frequencies. This provides a pathway to leveraging the fundamental light-matter interactions necessary to create and process quantum information in these new architectures \cite{gu2017microwave}. To achieve this, circuit QED systems implement ``artificial atoms'' (also frequently called qubits) through various configurations of Josephson junctions that are then coupled to coplanar waveguide resonators \cite{blais2004cavity,blais2007quantum,gu2017microwave}.
Circuit QED systems have garnered a high degree of interest in large part because of the achievable strength of light-matter coupling and the engineering control that is possible with these systems. The unique aspects of using artificial atoms formed by Josephson junctions allows for them to achieve substantially higher coupling strengths to electromagnetic fields than what is possible with natural atomic systems \cite{devoret2007circuit}. As a result, circuit QED systems have been able to achieve some of the highest levels of light-matter coupling strengths seen in any physical system to date, providing an avenue to explore and harness untapped areas of physics \cite{kockum2019ultrastrong}.
In addition to the strong coupling, circuit QED systems also provide a much higher degree of engineering control than is typically possible with natural atoms or ions. This is because many artificial atoms can be designed to have different desirable features by assembling Josephson junctions in various topologies \cite{gu2017microwave,vion2002manipulating,manucharyan2009fluxonium,koch2007charge}. Further, the operating characteristics of these artificial atoms can be tuned \textit{in situ} by applying various biases, such as voltages, currents, or magnetic fluxes \cite{gu2017microwave}. This allows dynamic reconfiguration of the artificial atom, opening possibilities for device designs that are not feasible using fixed systems like natural atoms or ions \cite{gu2017microwave}. Finally, many aspects of the fabrication processes for these systems are mature due to their overlap with established semiconductor fabrication technologies \cite{gu2017microwave}.
Although there have been many successes with circuit QED systems to date (e.g., achieving ``quantum supremacy'' \cite{arute2019quantum}), a substantial amount of progress is still needed to truly unlock the potential of these systems. One area that could help accelerate the maturation of these technologies is the development of rigorous numerical modeling methods. Current models used in the physics community incorporate numerous approximations to simplify them to the point that they can be solved using semi-analytical approaches to build intuition about the physics \cite{blais2004cavity,blais2007quantum,koch2007charge,gu2017microwave,vool2017introduction,langford2013circuit,girvin2011circuit}. For instance, in almost all circuit QED studies, the electromagnetic aspects of the system are represented as simple combinations of lumped element LC circuits. Although this is appropriate and useful for building intuition, performing the engineering design and optimization of a practical device requires a level of precision that is not possible with this kind of \textit{circuit-based} description.
Instead, models that retain the full three-dimensional vector representation of the electromagnetic aspects of these circuit QED devices are needed. With these \textit{field-based} models, accurate full-wave numerical methods can begin to be formulated. These numerical methods can then be used to enable studies on the engineering optimization of circuit QED devices. Unfortunately, to the authors' best knowledge, this kind of detailed field-based description is not available in the literature.
To address this issue, we present details in this work on the desired field-based framework for circuit QED systems and show how it can be used to derive the more commonly used circuit-based models found in the physics literature. To make the discussion more concrete, we focus on developing a field-based model for the transmon qubit \cite{koch2007charge}, which is one of the most widely used qubits in modern circuit QED systems \cite{ma2019dissipatively,arute2019quantum,zhou2020tunable,houck2007generating}. Similar procedures to those shown in this work can be applied to develop field-based models for other commonly used artificial atoms.
In an effort to make this work accessible to the engineering community, we provide many details in the derivations. We also assume only a limited amount of background knowledge in quantum physics at the level of \cite{chew2016quantum,chew2016quantum2,chew2021qme-made-simple}. Although circuit QED uses superconducting qubits, only a minimal knowledge of superconductivity is needed to understand the general physics of these systems. Introductions to superconductivity in the context of circuit QED can be found in \cite{girvin2011circuit,langford2013circuit,vool2017introduction}.
The remainder of this work is organized in the following way. In Section \ref{sec:transmon-background}, we review basic details of circuit QED systems using transmon qubits and introduce the field-based Hamiltonian developed in this work. Following this, we discuss quantization procedures in Section \ref{sec:field-quantization} for the electromagnetic fields that are devised specifically for developing numerical methods. Next, Section \ref{sec:field-to-circuit} presents details on how field-based descriptions can be converted into a transmission line formalism. Using these details, Section \ref{sec:field-transmon-hamiltonian} shows how the field-based Hamiltonian for circuit QED systems is consistent with the circuit-based descriptions found in the literature. With an appropriate Hamiltonian developed, Section \ref{sec:eom} derives the field-based quantum equations of motion that can be used for formulating new numerical modeling strategies. Finally, we present conclusions on this work in Section \ref{sec:conclusion}.
\section{Circuit QED Preliminaries}
\label{sec:transmon-background}
To support the development of the field-based description of circuit QED systems, it will first be necessary to review a few properties of these systems. We begin this by discussing the basic physical properties of the transmon qubit in Section \ref{subsec:transmon-basic-physics}. Following this, we discuss in Section \ref{subsec:transmon-coupling} how the coupling of a transmon qubit to a transmission line is typically handled using a circuit theory description. Finally, in Section \ref{subsec:field-based} we briefly introduce the field-based description of the transmon qubit coupled to a transmission line structure. We will demonstrate the consistency of the field- and circuit-based descriptions of this system in Section \ref{sec:field-transmon-hamiltonian} after developing the necessary tools in the intervening sections.
For readers interested in a more complete description of the transmon qubit, we refer them to the seminal work of \cite{koch2007charge} that presents an in-depth theoretical analysis. More details on the derivation of the typically used Hamiltonians introduced in this section can be found in \cite{vool2017introduction,girvin2011circuit,langford2013circuit,kockum2019quantum}. We focus our discussions on the basic physics of the Hamiltonians to provide an intuitive understanding only.
\subsection{Basic Physical Properties of a Transmon}
\label{subsec:transmon-basic-physics}
Typically, quantum effects are only observable at microscopic levels due to the fragility of individual quantum states. To observe quantum behavior at a macroscopic level (e.g., on the size of circuit components), a strong degree of coherence between the individual microscopic quantum systems must be achieved \cite{girvin2011circuit}. One avenue for this to occur is in superconductors cooled to extremely low temperatures (on the order of 10 mK) \cite{kockum2019quantum}. At these temperatures, electrons in the superconductor tend to become bound to each other as Cooper pairs \cite{vool2017introduction,girvin2011circuit,langford2013circuit}. These Cooper pairs exhibit bosonic properties and become the charge carriers of the superconducting system. Importantly, they have the necessary degree of coherence over large length scales to make observing macroscopic quantum behavior possible. Circuit QED systems interact with these macroscopic quantum states using microwave photons and other circuitry \cite{gu2017microwave}.
One of the earliest qubits used to observe macroscopic quantum behavior in circuit QED systems was the Cooper pair box (CPB) \cite{nakamura1999coherent}, which can be viewed as a predecessor to the transmon. The traditional CPB is formed by a thin insulative gap (on the order of a nm thick) that connects a superconducting ``island'' and a superconducting ``reservoir'' \cite{kockum2019quantum}. The superconductor-insulator-superconductor ``sandwich'' formed between the island and reservoir is known as a Josephson junction, and has the property that Cooper pairs may tunnel through the junction without requiring an applied voltage \cite{tafuri2019introductory}. For the basic CPB, the island is not directly connected to other circuitry, while the reservoir can be in contact with external circuit components (if desired). Since the superconducting island is isolated from other circuitry, the CPB is very sensitive to the number of Cooper pairs that have tunneled through the Josephson junction. Due to this sensitivity, the CPB is also often referred to as a charge qubit \cite{kockum2019quantum}.
As is common in quantum physics, it is desirable to consider a Hamiltonian mechanics description of the CPB \cite{chew2016quantum,chew2016quantum2,chew2021qme-made-simple}. For an isolated system, this amounts to expressing the total energy of the Josephson junction in terms of \textit{canonical conjugate variables}. These conjugate variables vary with respect to each other in a manner to ensure that the total energy of the system is conserved \cite{chew2021qme-made-simple}. For the CPB system, the canonical conjugate variables are the Cooper pair density difference $n$ and the Josephson phase $\varphi$. Initially considering the classical case, these variables are real-valued deterministic numbers.
For this case, $n$ is the net density of Cooper pairs that have tunneled through the Josephson junction relative to an equilibrium level \cite{vool2017introduction,girvin2011circuit,langford2013circuit}. Due to its relationship to a microscopic theory of superconductivity, $\varphi$ is more challenging to interpret \cite{langford2013circuit}. Briefly, there exists a long-range phase coherence for a collective description of all the Cooper pairs in a superconductor. As a result, the phase of the collective description becomes a meaningful variable to characterize the momentum of all Cooper pairs. This phase can then be related to a current flowing in the superconductor \cite{langford2013circuit}. For a Josephson junction, the phase difference of the two superconductors is important, and is denoted as $\varphi$ \cite{vool2017introduction,girvin2011circuit,langford2013circuit}.
The Hamiltonian of the CPB can be found by considering the total energy of the junction in terms of an effective capacitance and inductance expressed with $n$ and $\varphi$. The capacitance is due to the ``parallel plate'' configuration of the junction. The energy is found by first noting that $2en = Q$, where $Q$ is the total charge ``stored'' in the junction capacitance and $2e$ is the charge of a Cooper pair. Now, considering that the single electron charging energy of a capacitor is $E_C = e^2/2C$, the total capacitive energy of the junction capacitance, $Q^2/2C$, can be written as $4 E_C n^2$ \cite{girvin2011circuit}.
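Spelled out, this identification follows in one line from $Q = 2en$ and $E_C = e^2/2C$:
\begin{align}
\frac{Q^2}{2C} = \frac{(2en)^2}{2C} = 4 \bigg( \frac{e^2}{2C} \bigg) n^2 = 4E_C n^2.
\end{align}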
The inductance of the Josephson junction is more complicated to understand because it involves the tunneling physics. However, for the discussion here it only needs to be known that the supercurrent flowing through the junction is $I=I_c \sin{\varphi}$, where $I_c$ is the critical current of the junction. Since $\partial_t \varphi$ can be related to the voltage drop over the junction, this current-phase relationship can be used to derive an effective Josephson inductance that is proportional to $1/\cos{\varphi}$ \cite{girvin2011circuit}. Overall, the energy associated with the inductance is $-E_J\cos{\varphi}$, where $E_J$ is the Josephson energy.
Combining the results for the effective capacitive and inductive energy, the Hamiltonian for the CPB is
\begin{align}
H_T = 4E_C n^2 - E_J \cos\varphi.
\label{eq:classical-JJ-Hamiltonian}
\end{align}
This Hamiltonian can be viewed as being equivalent to a linear capacitor in parallel with a nonlinear inductor. It is this nonlinear inductance that allows Josephson junctions to form qubits. Without the nonlinearity, the energy levels of a quantized form of (\ref{eq:classical-JJ-Hamiltonian}) would be evenly spaced, making it impossible to selectively target a single pair of energy levels to perform qubit operations. If needed, the equations of motion for the CPB can be derived from (\ref{eq:classical-JJ-Hamiltonian}) using Hamilton's equations \cite{chew2016quantum,chew2016quantum2,chew2021qme-made-simple}.
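As a brief illustration, taking $\varphi$ and $\hbar n$ as the canonically conjugate pair (so that the dimensions work out), Hamilton's equations applied to (\ref{eq:classical-JJ-Hamiltonian}) give
\begin{align}
\partial_t \varphi = \frac{1}{\hbar}\frac{\partial H_T}{\partial n} = \frac{8E_C}{\hbar} n, \quad \partial_t n = -\frac{1}{\hbar}\frac{\partial H_T}{\partial \varphi} = -\frac{E_J}{\hbar}\sin\varphi,
\end{align}
which reproduce the standard Josephson voltage-phase and current-phase relations up to the sign conventions adopted for $n$ and $\varphi$.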
Typically, it is desirable to be able to control the operating point of the CPB system. This can be done using a voltage source capacitively coupled to the superconducting island. This induces a ``background'' Cooper pair density that has tunneled onto the island, denoted by $n_g$ (this is also often referred to as the offset charge) \cite{kockum2019quantum}. The basic circuit diagram of this qubit is shown in Fig. \ref{subfig:basic_cpb}, where $V_g$ is an applied voltage bias capacitively coupled to the CPB through $C_g$. For this system, the Hamiltonian is modified to be
\begin{align}
H_T = 4E_C (n-n_g)^2 - E_J \cos\varphi.
\label{eq:classical-JJ-Hamiltonian2}
\end{align}
\begin{figure}
\caption{Circuit schematics for (a) a traditional CPB and (b) a transmon. A Josephson junction consists of a pure tunneling element (symbolized as a box with an ``X'' through it) in parallel with a small junction capacitance $C_J$.}
\label{fig:cpb_schematics}
\end{figure}
The system can now be quantized by elevating the canonical conjugate variables to be non-commuting quantum operators. In particular, we now characterize the CPB with the quantum operators $\hat{n}$ and $\hat{\varphi}$ that have commutation relation \cite{vool2017introduction}
\begin{align}
[\hat{\varphi},\hat{n}] = i.
\end{align}
Combined with a complex-valued quantum state function, these quantum operators take on a statistical interpretation and share an uncertainty principle relationship \cite{chew2021qme-made-simple}. As a result, measurements of observables associated with these quantum operators (e.g., the number of Cooper pairs that have tunneled through the junction) become random variables with means and variances dictated by the laws of quantum mechanics \cite{chew2021qme-made-simple}. The combination of these properties allows for the non-classical interference between possible states of a system, which is key to quantum information processing.
Now, one advantage of determining the classical Hamiltonian of the CPB system is that the quantum Hamiltonian follows easily from it \cite{chew2021qme-made-simple}. In particular, the quantum Hamiltonian is \cite{kockum2019quantum,koch2007charge}
\begin{align}
\hat{H}_T = 4 E_C (\hat{n}-n_g)^2 - E_J \cos \hat{\varphi}.
\label{eq:isolated_cpb_hamiltonian}
\end{align}
Note that $n_g$ remains a classical variable that describes the offset charge induced by the applied DC voltage.
The different terms in (\ref{eq:isolated_cpb_hamiltonian}) take on similar physical meaning to the classical case of (\ref{eq:classical-JJ-Hamiltonian2}). However, the second term can be expressed in the charge basis as
\begin{align}
-E_J\cos\hat{\varphi} = \frac{E_J}{2} \sum_N \big[ |N\rangle\langle N+1 | + | N+1\rangle\langle N | \big],
\label{eq:tunneling-hamiltonian}
\end{align}
where $|N \rangle$ is an eigenstate of $\hat{n}$ with eigenvalue $N$ \cite{vool2017introduction}. This eigenvalue is a discrete number that counts how many Cooper pairs have tunneled through the junction. Considering this, we see that the form of the effective inductive energy given in (\ref{eq:tunneling-hamiltonian}) clearly shows the tunneling physics.
Unfortunately, the CPB is very sensitive to charge fluctuations (i.e., noise) that appear from a variety of sources in $n_g$ \cite{koch2007charge}. This sensitivity prevents the CPB from being applicable to scalable quantum information processing systems. To address this issue, the transmon qubit was introduced as an ``optimized CPB''. The differences between these qubits are most easily understood in terms of the ratio of $E_J$ to $E_C$ where the devices are designed to operate. In traditional CPBs, $E_J/E_C \ll 1$, due to the small capacitance $C_J$ naturally provided by the Josephson junction. The transmon qubit is designed to operate like a CPB, but with $E_J/E_C \gg 1$.
This is achieved by dramatically decreasing $E_C$ by adding a large shunting capacitance $C_B$ around the CPB, as shown in Fig. \ref{subfig:transmon}. This capacitance is often implemented by forming interdigital capacitors between two superconducting islands that are connected by a Josephson junction, as shown in Fig. \ref{fig:physical_transmon}. Although the physical implementation of the transmon is different from the CPB, the same Hamiltonian of (\ref{eq:isolated_cpb_hamiltonian}) describes its behavior \cite{koch2007charge}. However, because $E_C \ll E_J$ the transmon becomes insensitive to fluctuations in $n_g$, making it useful for scalable quantum information processing systems \cite{koch2007charge}.
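To make this insensitivity to $n_g$ concrete, the following minimal numerical sketch (not tied to any particular device) builds the Hamiltonian of (\ref{eq:isolated_cpb_hamiltonian}) in a truncated charge basis, using the charge-basis form of the tunneling term in (\ref{eq:tunneling-hamiltonian}), and compares the charge dispersion of the $0$--$1$ transition for a CPB-like and a transmon-like ratio $E_J/E_C$. The truncation size and all parameter values are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def transmon_levels(EJ, EC, ng, ncut=20):
    # H = 4*EC*(n - ng)^2 on the diagonal plus the charge-basis
    # tunneling term (EJ/2)(|N><N+1| + |N+1><N|), truncated to |n| <= ncut.
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * (n - ng) ** 2)
    off = 0.5 * EJ * np.ones(2 * ncut)
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)

# Charge dispersion of the 0-1 transition (all values illustrative).
EC = 1.0
for EJ_over_EC in (1.0, 50.0):
    E01 = [np.diff(transmon_levels(EJ_over_EC * EC, EC, ng))[0]
           for ng in np.linspace(0.0, 1.0, 51)]
    print(EJ_over_EC, max(E01) - min(E01))
\end{verbatim}
For $E_J/E_C = 50$ the spread of the $0$--$1$ transition frequency over $n_g$ is orders of magnitude smaller than for $E_J/E_C = 1$, which is the flattening of the charge dispersion described above.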
\begin{figure}
\caption{Schematic of a transmon qubit coupled to a coplanar waveguide transmission line. Dimensions (in $\mu$m) are based on the transmon qubit used in \cite{houck2007generating}.}
\label{fig:physical_transmon}
\end{figure}
\subsection{Coupling the Transmon to a Transmission Line Resonator}
\label{subsec:transmon-coupling}
A completely isolated qubit cannot be controlled, and so is of little use. Circuit QED systems address this by coupling a qubit to transmission line structures that can be used to apply microwave drive pulses to control and read out the qubit's state, as well as to interface separated qubits to implement qubit-qubit interactions \cite{blais2004cavity,blais2007quantum,gu2017microwave}. For transmons, capacitive coupling to transmission lines is typical \cite{koch2007charge}. One early strategy using a coplanar waveguide is shown in Fig. \ref{fig:physical_transmon}. More recently, other strategies have emerged to improve the interconnectivity to the transmon \cite{barends2014superconducting}. However, analyzing these newer implementations follows the analysis of the coupling shown in Fig. \ref{fig:physical_transmon}, so only the simpler case will be considered here.
The interaction between the transmon and transmission line can be described quantum mechanically in a number of ways. However, it is usually convenient to express the interaction in terms of $\hat{n}$ and a transmission line voltage operator \cite{koch2007charge}. Often, the transmission line the transmon is coupled to is a resonator. As a result, the voltage operator can be conveniently written in terms of the modes of the resonator \cite{blais2004cavity}. These modes are the one-dimensional sinusoidal functions used to describe the spatial dependence of a resonator's voltage and current in microwave engineering \cite{pozar2009microwave}. This spatial dependence is integrated out of the Hamiltonian to arrive at a circuit-based (i.e., lumped element) description of the interaction.
Following this process, the resulting Hamiltonian that describes a single transmon qubit coupled to a single mode of a transmission line resonator is
\begin{align}
\hat{H} = \hat{H}_T + \hat{H}_R + 2e\beta V_g \hat{V}_r \hat{n},
\label{eq:coupled-transmon}
\end{align}
\begin{align}
\hat{H}_R = \frac{1}{2}\big[ L_r \hat{I}_r^2 + C_r \hat{V}_r^2 \big],
\label{eq:free-resonator}
\end{align}
where $\hat{I}_r$ and $\hat{V}_r$ are the \textit{integrated} resonator voltage and current (i.e., the spatial variation of the mode has been integrated out) \cite{blais2004cavity,koch2007charge}. The Hamiltonian in (\ref{eq:free-resonator}) is the \textit{free resonator Hamiltonian}, i.e., it describes the total energy of the uncoupled resonator mode (and can be viewed as being equivalent to an LC tank circuit). The final term in (\ref{eq:coupled-transmon}) represents the interaction between the transmon and resonator, in terms of the resonator voltage and transmon charge operators.
Other terms in (\ref{eq:coupled-transmon}) and (\ref{eq:free-resonator}) include the magnitude of the resonator voltage at the location of the transmon $V_g$ and $\beta = C_g/(C_g+C_B)$. The latter term represents a voltage divider to capture the portion of the resonator voltage applied to the transmon within a lumped element approximation (note $C_J$ has been absorbed into the much larger capacitance $C_B$ in $\beta$). Further, $L_r = \ell L$ and $C_r = \ell C$, where $\ell$ is the length of the resonator and $L$ and $C$ are the per-unit-length inductance and capacitance of the transmission line. These terms arise from the normalization and subsequent spatial integration of the resonator voltage and current modes. Extending this model to contain multiple resonator modes is in principle quite simple, and will be considered later in this work.
Depending on the numerical model being developed, it may or may not be desirable to keep the interdigital capacitor in the geometric description of the system. When it is desirable to explicitly model the interdigital capacitor, $\beta$ should be omitted from equations since the ``voltage divider'' it represents would already be accounted for. To allow the equations in this work to be used in either situation, we keep $\beta$ in all equations.
Most circuit QED studies work with the transmission line operators expressed in terms of bosonic ladder operators. In particular, each mode of the transmission line is viewed as an independent quantum harmonic oscillator characterized by bosonic ladder operators \cite{chew2016quantum2}. The integrated resonator voltage and current operators in terms of the ladder operators are
\begin{align}
\hat{V}_r = \sqrt{\frac{\hbar \omega_r}{2C_r}} (\hat{a}+\hat{a}^\dagger),
\label{eq:total-v}
\end{align}
\begin{align}
\hat{I}_r = -i\sqrt{\frac{\hbar \omega_r}{2L_r}} (\hat{a}-\hat{a}^\dagger),
\label{eq:total-i}
\end{align}
where $\omega_r$ is the resonant frequency of the mode \cite{blais2004cavity,koch2007charge}. The ladder operators satisfy the bosonic commutation relation,
\begin{align}
[\hat{a},\hat{a}^\dagger] = 1.
\end{align}
Using properties of the ladder operators, the Hamiltonian in (\ref{eq:coupled-transmon}) can be expressed as
\begin{align}
\hat{H} = \hat{H}_T + \hbar\omega_r \hat{a}^\dagger\hat{a} + 2e\beta V^\mathrm{rms}_g \hat{n} (\hat{a}+\hat{a}^\dagger),
\label{eq:coupled-transmon2}
\end{align}
where $V^\mathrm{rms}_g = V_g \sqrt{\hbar\omega_r/2C_r}$ and the zero point energy of the transmission line resonator has been adjusted to remove constant terms.
The isolated transmon Hamiltonian of (\ref{eq:isolated_cpb_hamiltonian}) can be diagonalized exactly \cite{koch2007charge}. Using these eigenstates (denoted as $|j\rangle$), the complete system Hamiltonian of (\ref{eq:coupled-transmon2}) can be rewritten as
\begin{multline}
\hat{H} = \sum_j \hbar\omega_j |j\rangle\langle j | + \hbar\omega_r \hat{a}^\dagger\hat{a} \\ + 2e\beta V_g^\mathrm{rms} \sum_{i,j} \langle i | \hat{n} | j \rangle \, |i\rangle \langle j | (\hat{a}+\hat{a}^\dagger) ,
\label{eq:coupled-transmon3}
\end{multline}
where $\hbar\omega_j$ is the energy eigenvalue associated with the $j$th transmon eigenstate. A more complete description of these eigenstates can be found in \cite{koch2007charge}. For this work, the essential property is that a transition between two transmon eigenstates corresponds to a high probability event of some number of Cooper pairs tunneling through the Josephson junction.
In the transmon operating regime, (\ref{eq:coupled-transmon3}) can be further simplified because it is safe to assume that the $\hat{n}$ operator only couples nearest neighbor transmon eigenstates. Hence, (\ref{eq:coupled-transmon3}) can be rewritten as
\begin{multline}
\hat{H} = \sum_j \hbar\omega_j |j\rangle\langle j | + \hbar\omega_r \hat{a}^\dagger\hat{a} + 2e\beta V_g^\mathrm{rms} \times \\ \sum_{i} \langle i | \hat{n} | i+1 \rangle \, \big(|i\rangle \langle i+1 | + |i+1\rangle \langle i | \big) \big(\hat{a}+\hat{a}^\dagger \big) ,
\label{eq:coupled-transmon4}
\end{multline}
where the fact that $\langle i | \hat{n} | i+1 \rangle = \langle i+1 | \hat{n} | i \rangle $ has also been used \cite{koch2007charge}. This is the circuit-based Hamiltonian that is typically used as a starting point for many circuit QED studies.
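For readers who prefer to see (\ref{eq:coupled-transmon4}) as a concrete matrix, the sketch below assembles it in a truncated product basis $|j\rangle \otimes |m\rangle$ of transmon eigenstates and resonator Fock states. The transmon frequencies $\omega_j$ and charge matrix elements $\langle j|\hat{n}|j+1\rangle$ are assumed to be precomputed (e.g., by diagonalizing (\ref{eq:isolated_cpb_hamiltonian})), and all numerical values, truncation sizes, and the shorthand $g_0 = 2e\beta V_g^\mathrm{rms}/\hbar$ are illustrative assumptions rather than parameters used elsewhere in this work.
\begin{verbatim}
import numpy as np

def coupled_hamiltonian(w_t, n_elem, w_r, g0, n_fock=10):
    # Matrix of the coupled transmon-resonator Hamiltonian in the
    # product basis |transmon level j> (x) |Fock state m>, with hbar = 1.
    nt = len(w_t)
    a = np.diag(np.sqrt(np.arange(1, n_fock)), 1)   # resonator annihilation op
    It, Ir = np.eye(nt), np.eye(n_fock)

    Ht = np.kron(np.diag(w_t), Ir)                  # transmon term
    Hr = np.kron(It, w_r * (a.T @ a))               # free resonator term
    sig = np.zeros((nt, nt))                        # nearest-neighbor couplings
    for j in range(nt - 1):
        sig[j, j + 1] = sig[j + 1, j] = n_elem[j]   # <j|n|j+1> = <j+1|n|j>
    Hc = g0 * np.kron(sig, a + a.T)                 # interaction term
    return Ht + Hr + Hc

# Illustrative two-level transmon coupled to one mode (arbitrary units).
H = coupled_hamiltonian(w_t=np.array([0.0, 5.0]), n_elem=np.array([1.0]),
                        w_r=5.2, g0=0.1, n_fock=5)
print(np.linalg.eigvalsh(H)[:4])
\end{verbatim}
Keeping only two transmon levels and making the rotating wave approximation would reduce this matrix to the familiar Jaynes-Cummings model.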
\subsection{Field-Based Description of the Transmon System}
\label{subsec:field-based}
With the physics of the transmon understood, we can now briefly introduce the field-based description of a circuit QED system using a transmon. In particular, the postulated field-transmon system Hamiltonian is
\begin{align}
\hat{H} = \hat{H}_T + \hat{H}_F - \iiint \hat{\mathbf{E}} \cdot \partial_t^{-1} \hat{\mathbf{J}}_t d\mathbf{r}.
\label{eq:field-transmon-hamiltonian1}
\end{align}
In (\ref{eq:field-transmon-hamiltonian1}), $\hat{H}_T$ is the transmon Hamiltonian given in (\ref{eq:isolated_cpb_hamiltonian}),
\begin{align}
\hat{H}_F = \frac{1}{2} \iiint \big( \epsilon \hat{\mathbf{E}}^2 + \mu \hat{\mathbf{H}}^2 \big) d\mathbf{r}
\end{align}
is the free field Hamiltonian consisting of the electric and magnetic field operators $\hat{\mathbf{E}}$ and $\hat{\mathbf{H}}$, and the final term is the interaction between $\hat{\mathbf{E} }$ and a transmon current density operator $\hat{\mathbf{J}}_t$. The transmon current density operator is
\begin{align}
\hat{\mathbf{J}}_t = -2e \beta \mathbf{d} \delta(z-z_0) \partial_t\hat{n} ,
\label{eq:transmon-current-operator1}
\end{align}
where $\hat{n}$ is the standard Josephson junction charge operator. We use the awkward notation of $\partial_t^{-1}\hat{\mathbf{J}}_t$ in (\ref{eq:field-transmon-hamiltonian1}) since it will be convenient to use a current density operator when deriving equations of motion in Section \ref{sec:eom}. In (\ref{eq:transmon-current-operator1}), $\mathbf{d}$ is a vector parameterizing the integration path taken to define the voltage of the transmission line at the location of the transmon. Further, the $z$-axis has been identified as the longitudinal direction of the transmission line in the region local to the transmon for notational simplicity.
Before continuing, it is worth commenting on the interpretation of $\hat{\mathbf{J}}_t$. Within the transmon basis, (\ref{eq:transmon-current-operator1}) becomes
\begin{multline}
\hat{\mathbf{J}}_t = -2e\beta\mathbf{d} \delta(z-z_0) \\ \times\sum_j \langle j | \hat{n} | j + 1\rangle \partial_t\big[ |j\rangle\langle j+1| + |j+1\rangle\langle j| \big],
\label{eq:transmon-current-operator3}
\end{multline}
after applying the result that only nearest neighbor states couple for a transmon \cite{koch2007charge}. From this, we see that $\hat{\mathbf{J}}_t$ involves transitions between different transmon eigenstates. These transitions correspond to a high probability event of Cooper pairs tunneling through the Josephson junction. This tunneling produces a current, making the designation of (\ref{eq:transmon-current-operator1}) as a current density reasonable.
Although the physics of (\ref{eq:field-transmon-hamiltonian1}) is fairly intuitive, we will need to develop a number of tools in the following sections to demonstrate its consistency with the circuit-based description of (\ref{eq:coupled-transmon}). This will first require a careful look at the quantization of electromagnetic fields for circuit QED systems in Section \ref{sec:field-quantization}, followed by establishing a correspondence between field and transmission line representations of quantum operators in Section \ref{sec:field-to-circuit}. We will then demonstrate the consistency of (\ref{eq:field-transmon-hamiltonian1}) and (\ref{eq:coupled-transmon}) in Section \ref{sec:field-transmon-hamiltonian}.
\section{Field Quantization for Circuit QED Systems}
\label{sec:field-quantization}
We present two approaches for quantizing the electromagnetic field in systems containing inhomogeneous, lossless, and non-dispersive dielectric and perfectly conducting regions. To simplify the notation, we assume there are no magnetic materials present. The two quantization approaches are relevant for developing different numerical methods. The first approach, discussed in Section \ref{subsec:modes-of-the-universe}, follows a standard mode decomposition quantization process \cite{chew2016quantum2,walls2007quantum,gerry2005introductory,viviescas2003field}. We will refer to this as the \textit{modes-of-the-universe} quantization approach, since the spatial modes used extend across all space \cite{viviescas2003field}. For many numerical modeling approaches, it is convenient to consider a finite-sized \textit{simulation domain} with a number of semi-infinite \textit{port regions} attached to it, as illustrated in Fig. \ref{fig:region-illustration}. This is not compatible with the modes-of-the-universe approach, and so, a different quantization approach will be discussed in Section \ref{subsec:projector-quantization} for this case.
\begin{figure}
\caption{Example of region definitions for a simple circuit QED system composed of a transmon qubit (not to scale) coupled to a coplanar waveguide resonator of length $\ell$. The red regions are fictitious boundaries for the port regions. The dielectric substrate of the system is not shown.}
\label{fig:region-illustration}
\end{figure}
Both quantization methods discussed are performed within the framework of macroscopic QED \cite{scheel2008macroscopic}. The key aspect of this is that a microscopic description of a lossless, non-dispersive dielectric medium is unnecessary. As a result, macroscopic permittivities and permeabilities can be directly used in a quantum description of the electromagnetic fields. Accounting for dispersive or lossy media is more complicated, and is outside the scope of this work \cite{wei2018dissipative,milonni1995field,gruner1996green}.
In some situations, requiring a mode decomposition description can be inconvenient. For instance, this can occur when dealing with certain kinds of nonlinearities. If this is the case, an energy conservation argument can be applied to quantize the electromagnetic field directly in coordinate space \cite{chew2021qme-made-simple}.
\subsection{Modes-of-the-Universe Quantization}
\label{subsec:modes-of-the-universe}
Quantizing the electromagnetic field using a modes-of-the-universe framework is one of the most common quantization approaches. Introductory reviews can be found in \cite{chew2016quantum2,gerry2005introductory,walls2007quantum}. This technique is often simply referred to as a mode decomposition approach. The longer terminology will be used in this work to differentiate it from the quantization approach discussed in Section \ref{subsec:projector-quantization}.
The first step of the modes-of-the-universe approach is to use separation of variables to write the electric field as
\begin{align}
\mathbf{E}(\mathbf{r},t) = \sum_k \sqrt{\frac{\omega_k}{2\epsilon_0}} \big( q_k(t) \mathbf{E}_k(\mathbf{r}) + q_k^*(t) \mathbf{E}^*_k(\mathbf{r}) \big).
\end{align}
For initial simplicity, a discrete summation of modes is assumed. A continuum of modes will be considered as part of the quantization procedure discussed in Section \ref{subsec:projector-quantization}. Inserting this representation into
\begin{align}
\nabla\times\nabla\times\mathbf{E} + \mu\epsilon \partial_t^2 \mathbf{E} = 0
\end{align}
yields two separated equations for each mode, given by
\begin{align}
\partial_t^2 q_k(t) = -\omega_k^2 q_k(t),
\label{eq:mode_time}
\end{align}
\begin{align}
\nabla\times\nabla\times\mathbf{E}_k(\mathbf{r}) - \mu\epsilon \omega_k^2 \mathbf{E}_k(\mathbf{r}) = 0.
\label{eq:field-eig-def}
\end{align}
The complex conjugates of $q_k$ and $\mathbf{E}_k$ also obey (\ref{eq:mode_time}) and (\ref{eq:field-eig-def}), respectively, since $\omega_k$, $\epsilon$, and $\mu$ are all real. Considering (\ref{eq:mode_time}), it is easily seen that the time dependence of these modes will be $\exp(\pm i \omega_k t)$.
To simplify the analysis, it is further required that the field modes are orthonormal such that
\begin{align}
\iiint \epsilon_r(\mathbf{r}) \mathbf{E}^*_{k_1}(\mathbf{r}) \cdot \mathbf{E}_{k_2}(\mathbf{r}) d\mathbf{r} = \delta_{k_1,k_2},
\label{eq:orthonormal-def}
\end{align}
where $\delta_{k_1,k_2}$ is the Kronecker delta function. Later, it will be useful to explicitly consider the normalization of the field modes. Hence, we will often write the modes as
\begin{align}
\mathbf{E}_k = \frac{1}{\sqrt{N_{u,k}}} \mathbf{u}_k(\mathbf{r}),
\end{align}
where
\begin{align}
N_{u,k} = \iiint \epsilon_r(\mathbf{r}) \mathbf{u}_k^*(\mathbf{r}) \cdot \mathbf{u}_k(\mathbf{r}) dV.
\label{eq:original-enorm}
\end{align}
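To illustrate how (\ref{eq:field-eig-def}) and the normalization (\ref{eq:original-enorm}) translate into a computation, the sketch below solves a one-dimensional analogue of the eigenvalue problem (a piecewise dielectric between two PEC walls) with second-order finite differences and evaluates the corresponding normalization integrals. The discretization, material profile, and boundary conditions are illustrative assumptions and are not tied to any structure considered in this work.
\begin{verbatim}
import numpy as np

# 1D analogue of the mode problem: -d^2 u/dz^2 = (omega/c0)^2 eps_r(z) u,
# with u = 0 at both PEC walls (Dirichlet), discretized on interior nodes.
c0, L, N = 3.0e8, 0.01, 400
z = np.linspace(0.0, L, N + 2)[1:-1]
h = z[1] - z[0]
eps_r = np.where(z < L / 2, 1.0, 4.0)   # assumed piecewise dielectric profile

A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
vals, vecs = np.linalg.eig(np.diag(1.0 / eps_r) @ A)
order = np.argsort(vals.real)
omega = c0 * np.sqrt(vals.real[order])  # mode frequencies omega_k
modes = vecs.real[:, order]             # unnormalized mode profiles u_k(z)

# 1D version of the normalization integral N_{u,k}.
N_u = np.array([h * np.sum(eps_r * modes[:, k] ** 2) for k in range(3)])
print(omega[:3] / (2.0 * np.pi), N_u)
\end{verbatim}
The same structure (a permittivity-weighted generalized eigenvalue problem followed by an explicit evaluation of the normalization) carries over directly to full three-dimensional vector discretizations.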
Similarly, the magnetic field can be written as
\begin{align}
\mathbf{H}(\mathbf{r},t) = \sum_k \sqrt{\frac{\omega_k}{2\mu_0}} \big( p_k(t) \mathbf{H}_k(\mathbf{r}) + p_k^*(t) \mathbf{H}^*_k(\mathbf{r}) \big),
\end{align}
where
\begin{align}
\mathbf{H}_k = \frac{1}{\sqrt{N_{v,k}} } \mathbf{v}_k(\mathbf{r})
\end{align}
and the normalization for the magnetic field modes takes on a similar form to (\ref{eq:original-enorm}). Although the magnetic field modes have been denoted by seemingly independent variables, i.e., $\mathbf{v}_k$ and $p_k$, they are related to the electric field variables $\mathbf{u}_k$ and $q_k$ according to Maxwell's equations.
It should be noted that these expansions are valid for complex-valued spatial modes. In a closed region (e.g., a cavity), it is often advantageous to use real-valued spatial modes. For this situation, the field expansions become
\begin{align}
\mathbf{E}(\mathbf{r},t) = \sum_k \sqrt{\frac{\omega_k}{2\epsilon_0}} \big( q_k(t) + q_k^*(t) \big) \mathbf{E}_k(\mathbf{r}),
\end{align}
\begin{align}
\mathbf{H}(\mathbf{r},t) = -i\sum_k \sqrt{\frac{\omega_k}{2\mu_0}} \big( p_k(t) - p_k^*(t) \big) \mathbf{H}_k(\mathbf{r}).
\end{align}
It will be useful to use both real- and complex-valued spatial mode functions in this work.
With the mode expansion defined, the Hamiltonian for the electromagnetic field system can be expanded in terms of these modes. The electromagnetic field Hamiltonian is equivalent to the total electromagnetic energy in a system, which is
\begin{align}
H_F = \iiint \frac{1}{2}\big( \epsilon |\mathbf{E}(\mathbf{r},t)|^2 + \mu |\mathbf{H}(\mathbf{r},t)|^2 \big) d\mathbf{r}.
\label{eq:free-field-cH}
\end{align}
Substituting in either the real- or complex-valued mode expansions and performing the spatial integrations, the Hamiltonian simplifies to
\begin{align}
H_F = \sum_k \frac{\omega_k}{2} \big( |q_k(t)|^2 + |p_k(t)|^2 \big).
\end{align}
This can be readily identified as a summation of Hamiltonians for uncoupled harmonic oscillators, or equivalently uncoupled LC resonant circuits \cite{chew2016quantum2}.
Hence, a canonical quantization process can now be performed by elevating the conjugate variables of each harmonic oscillator (i.e., the $q_k$ and $p_k$) to be quantum operators \cite{chew2016quantum2}. These operators obey the canonical commutation relation
\begin{align}
[\hat{q}_{k_1},\hat{p}_{k_2}] = i\hbar \delta_{k_1,k_2} .
\end{align}
These operators may be combined to form bosonic ladder operators for each quantum harmonic oscillator. This gives the annihilation operator as
\begin{align}
\hat{a}_k = \frac{1}{\sqrt{2\hbar}}(\hat{q}_k + i \hat{p}_k)
\label{eq:annihilation}
\end{align}
and the creation operator as
\begin{align}
\hat{a}^\dagger_k = \frac{1}{\sqrt{2\hbar}}(\hat{q}_k - i \hat{p}_k).
\end{align}
These operators satisfy the bosonic commutation relation
\begin{align}
[\hat{a}_{k_1},\hat{a}^\dagger_{k_2}] = \delta_{k_1,k_2} .
\label{eq:boson-commutation}
\end{align}
In terms of ladder operators, the field operators become
\begin{align}
\hat{\mathbf{E}}(\mathbf{r},t) = \sum_k N_{E,k} \big( \hat{a}_k(t)\mathbf{u}_k(\mathbf{r}) + \hat{a}_k^\dagger(t)\mathbf{u}^*_k(\mathbf{r}) \big)
\label{eq:c-q-efield}
\end{align}
\begin{align}
\hat{\mathbf{H}}(\mathbf{r},t) = \sum_k N_{H,k} \big( \hat{a}_k(t)\mathbf{v}_k(\mathbf{r}) + \hat{a}_k^\dagger(t)\mathbf{v}^*_k(\mathbf{r}) \big)
\label{eq:c-q-hfield}
\end{align}
for complex-valued spatial modes. In (\ref{eq:c-q-efield}) and (\ref{eq:c-q-hfield}),
\begin{align}
N_{E,k} = \sqrt{\frac{\hbar \omega_k}{2\epsilon_0 N_{u,k} }}, \,\, N_{H,k} = \sqrt{\frac{\hbar \omega_k}{2\mu_0 N_{v,k} }}.
\label{eq:e-norm}
\end{align}
Similarly, the real-valued spatial mode expansions give
\begin{align}
\hat{\mathbf{E}}(\mathbf{r},t) = \sum_k N_{E,k} \big( \hat{a}_k(t) + \hat{a}_k^\dagger(t) \big) \mathbf{u}_k(\mathbf{r}),
\label{eq:r-q-efield}
\end{align}
\begin{align}
\hat{\mathbf{H}}(\mathbf{r},t) = -i\sum_k N_{H,k} \big( \hat{a}_k(t) - \hat{a}_k^\dagger(t) \big) \mathbf{v}_k(\mathbf{r}).
\label{eq:r-q-hfield}
\end{align}
The quantum field Hamiltonian becomes
\begin{align}
\hat{H}_{F} = \iiint \frac{1}{2} \big( \epsilon \hat{\mathbf{E}}^2 + \mu \hat{\mathbf{H}}^2 \big) d\mathbf{r},
\label{eq:free-field-qH}
\end{align}
which after spatial integration and adjusting the zero point energy can be written in a ``diagonalized'' form in terms of the ladder operators as
\begin{align}
\hat{H}_{F} = \sum_k \hbar \omega_k \hat{a}_k^\dagger \hat{a}_k.
\label{eq:universe-hamiltonian}
\end{align}
As expected, the final Hamiltonian closely matches the isolated circuit portion of the Hamiltonian given in (\ref{eq:coupled-transmon4}).
\subsection{Projector-Based Quantization}
\label{subsec:projector-quantization}
The quantization process in Section \ref{subsec:modes-of-the-universe} is not desirable when port regions like those shown in Fig. \ref{fig:region-illustration} are needed to model a circuit QED system. The issue is that the modes-of-the-universe approach makes no distinction between modes that are associated with ``internal'' dynamics (e.g., that of a transmon coupled to a resonator) and ``external'' modes leaving the device (e.g., modes entering or exiting a device via a port). As a result, a different quantization procedure that allows for the modes in the various regions to be independently worked with is of more interest.
One approach to do this is the Feshbach projector technique as applied to quantum optics \cite{viviescas2003field,viviescas2004quantum}. This approach defines a set of projection operators to isolate the behavior in the various regions of the problem. The eigenvalue problem from the modes-of-the-universe approach is then projected into a set of eigenvalue problems for the various regions being considered. The hermiticity of the projected eigenvalue problems can be maintained by selecting complementary boundary conditions to apply at the interfaces between regions \cite{viviescas2003field}. As a result, a complete set of orthogonal eigenmodes can be found for each region. These various modes can then be quantized and coupled to each other.
For modeling a circuit QED system, a natural decomposition would have one projector cover the simulation domain and another set of complementary projectors cover the infinitely long transmission lines that model ports, as illustrated in Fig. \ref{fig:region-illustration}. For clarity, the region of the simulation domain will be denoted by $\mathcal{Q}$ and the various port regions by $\mathcal{P}_p$. The set of all ports will be denoted by $\mathcal{P}$. The surface at the interface between the simulation domain and port $p$ will be denoted by $\partial \mathcal{Q} \cap \partial\mathcal{P}_p$.
We now present a physically-motivated development of this quantization approach based on an analysis method for open cavities \cite[Ch. 2.9]{haus2012electromagnetic}. This approach also has similarities with using waveports in various computational electromagnetics methods \cite{wang2015higher}. To help guide the discussion, an illustration of the problem setup for this quantization approach is shown in Fig. \ref{fig:projector-quantization-setup}. In Fig. \ref{subfig:projector-quantization-setup1}, the original problem is shown with two reference planes for ports identified. The true fields in all regions of the problem are $\mathbf{E}_T$ and $\mathbf{H}_T$. Now, the regions of the problem are separated by introducing perfect electric conductor (PEC) or perfect magnetic conductor (PMC) boundary conditions in the simulation domain at all port interfaces, as shown in Fig. \ref{subfig:projector-quantization-setup2}. To maintain the hermiticity of the entire problem, complementary conditions are used to close the port region problems \cite{viviescas2003field}. That is, if a PMC condition closes the simulation domain the corresponding port is closed with a PEC condition, as shown in Fig. \ref{subfig:projector-quantization-setup2}.
\begin{figure}
\caption{Illustration of the projector-based quantization setup. In (a) an example two-port problem is shown, while in (b) artificial boundaries and equivalent currents are introduced to separate the regions of the problem.}
\label{fig:projector-quantization-setup}
\end{figure}
The artificial ``closing'' surfaces lead to discontinuities in the electric or magnetic fields at these locations that should not be present from the original problem shown in Fig. \ref{subfig:projector-quantization-setup1}. To produce the correct fields within the simulation domain, equivalent electric or magnetic current densities are introduced at the closing surfaces. For instance, if a PMC condition is applied at a region of the simulation domain (c.f. $\partial\mathcal{Q}\cap\partial\mathcal{P}_1$ in Fig. \ref{subfig:projector-quantization-setup2}), the resulting discontinuity in the magnetic field is compensated with an equivalent electric current density given by $\mathbf{J}_{eq,p} = \hat{n}_p\times \mathbf{H}_T$. Here, $\hat{n}_p$ points into the simulation domain and $\mathbf{H}_T$ should be expanded in terms of the port modes to tie the two problems together \cite{haus2012electromagnetic}. Similarly, to produce the correct fields within the corresponding port region, an equivalent magnetic current density given by $\mathbf{M}_{eq,p} = \hat{n}_p\times \mathbf{E}_T$ must be introduced in the port region (c.f. $\partial\mathcal{Q}\cap\partial\mathcal{P}_1$ in Fig. \ref{subfig:projector-quantization-setup2}). Here, $\mathbf{E}_T$ should be expanded in terms of the simulation domain modes and the unusual sign in the definition of $\mathbf{M}_{eq,p}$ is due to the fixed polarity of the unit normal vector $\hat{n}_p$.
From this physical picture, we see that the interaction between the simulation domain and port regions can be achieved by introducing equivalent current densities. Considering, for now, only closing the simulation domain with PMC conditions, a $\mathbf{J}_{eq,p}$ will need to be introduced at each port in the simulation domain. In Lagrangian/Hamiltonian treatments of electromagnetics, the interaction between a current $\mathbf{J}$ and the field is typically given in terms of the vector potential as $\mathbf{A}\cdot\mathbf{J}$ \cite{chew2016quantum2}. Hence, our resulting Hamiltonian should be
\begin{multline}
H_F = \frac{1}{2}\iiint \big( \epsilon |\mathbf{E}_q|^2 + \mu | \mathbf{H}_q|^2 + \sum_{p \in \mathcal{P}} \big[ \epsilon |\mathbf{E}_p|^2 \\ + \mu | \mathbf{H}_p|^2 \big] - \sum_{p \in \mathcal{P}} 2 \mathbf{A}_q \cdot (\hat{n}_p\times \mathbf{H}_p ) \big) d\mathbf{r},
\label{eq:interacting-system-hamiltonian}
\end{multline}
where a subscript of $q$ ($p$) denotes that the term is associated with the simulation domain (ports). Further, the term $\hat{n}_p \times \mathbf{H}_p$ is an equivalent electric current density (with $\hat{n}_p$ pointing into the simulation domain). The more general case involving both artificial PEC and PMC conditions will be handled in Section \ref{sec:eom}, where it will also be shown that this Hamiltonian produces the correct equations of motion (i.e., Maxwell's equations fed by electric and magnetic current sources).
Inspecting the coupling term in (\ref{eq:interacting-system-hamiltonian}), we see that it has been written from the perspective of treating the port fields as a source to the simulation domain. It is of course possible to also look at the Hamiltonian from the alternative viewpoint that the ports are being fed by a current density. This is done by rearranging the coupling term to be $\mathbf{H}_p\cdot(\mathbf{A}_q\times\hat{n}_p)$, which shows the magnetic field coupling to a term that is proportional to a $\mathbf{M}_{eq,p}$. Although difficult to see at this point, this coupling term will produce the correct form of Maxwell's equations with a $\mathbf{M}_{eq,p}$ acting as a source to the port field equations. This will be shown in Section \ref{sec:eom}.
With the Hamiltonian formulated, the modal expansion of the fields needs to be revisited. By construction of the problem, a complete set of modes can be found in each region to expand the fields in a piecewise manner. Hence, we have that the simulation domain electric field is
\begin{align}
\mathbf{E}_q(\mathbf{r},t) = \sum_k \sqrt{\frac{\omega_k}{2\epsilon_0}} \big( q_k(t) \mathbf{E}_k(\mathbf{r}) + q^*_k(t) \mathbf{E}^*_k(\mathbf{r}) \big)
\label{eq:sim-e-field}
\end{align}
and the port region electric fields are
\begin{multline}
\mathbf{E}_p(\mathbf{r},t) = \sum_\lambda \int_0^\infty d\omega_{\lambda p}\, \sqrt{\frac{\omega_{\lambda p}}{2\epsilon_0}} \big( q_{\lambda p}(\omega_{\lambda p},t) \mathbf{E}_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \\ + q^*_{\lambda p}(\omega_{\lambda p},t) \mathbf{E}^*_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \big).
\label{eq:port-e-field}
\end{multline}
In (\ref{eq:sim-e-field}), the summation over $k$ represents a discrete spectrum of modes with eigenvalue $\omega_k$ for the region $\mathcal{Q}$. In (\ref{eq:port-e-field}), the index $p$ is used to differentiate the different ports in the set $\mathcal{P}$. Each port can support different transverse modes (e.g., transverse electromagnetic or transverse electric), which are differentiated by the discrete index $\lambda$. Due to the semi-infinite length of the port regions, each transverse mode will also support a continuous spectrum. Hence, the integration over the eigenvalue $\omega_{\lambda p}$ can be interpreted as ``continuously summing'' over the one-dimensional continuum of modes for each transverse mode. The overall fields are then the summation of the various mode expansions, i.e., $\mathbf{E} = \mathbf{E}_q + \sum_{p\in\mathcal{P}} \mathbf{E}_p$.
A similar expansion also holds for the magnetic field, i.e., $\mathbf{H} = \mathbf{H}_q + \sum_{p\in\mathcal{P}} \mathbf{H}_p$ where
\begin{align}
\mathbf{H}_q(\mathbf{r},t) = \sum_k \sqrt{\frac{\omega_k}{2\mu_0}} \big( p_k(t) \mathbf{H}_k(\mathbf{r}) + p^*_k(t) \mathbf{H}^*_k(\mathbf{r}) \big),
\end{align}
\begin{multline}
\mathbf{H}_p(\mathbf{r},t) = \sum_\lambda \int_0^\infty d\omega_{\lambda p}\, \sqrt{\frac{\omega_{\lambda p}}{2\mu_0}} \big( p_{\lambda p}(\omega_{\lambda p},t) \mathbf{H}_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \\ + p^*_{\lambda p}(\omega_{\lambda p},t) \mathbf{H}^*_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \big).
\end{multline}
As suggested by the Hamiltonian, an expansion for $\mathbf{A}$ is also necessary. To be consistent with the expansions of $\mathbf{E}$ and $\mathbf{H}$, the modal expansions for $\mathbf{A}$ are
\begin{align}
\mathbf{A}_q(\mathbf{r},t) = -i\sum_k \sqrt{\frac{1}{2\omega_k\epsilon_0}} \big( p_k(t) \mathbf{E}_k(\mathbf{r}) - p^*_k(t) \mathbf{E}^*_k(\mathbf{r}) \big),
\end{align}
\begin{multline}
\mathbf{A}_p(\mathbf{r},t) = -i\sum_\lambda \int_0^\infty d\omega_{\lambda p}\, \sqrt{\frac{1}{2\omega_{\lambda p}\epsilon_0}} \times \\ \big( p_{\lambda p}(\omega_{\lambda p},t) \mathbf{E}_{\lambda p}(\mathbf{r}, \omega_{\lambda p}) - p^*_{\lambda p}(\omega_{\lambda p},t) \mathbf{E}^*_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \big),
\end{multline}
with $\mathbf{A} = \mathbf{A}_q + \sum_{p\in\mathcal{P}} \mathbf{A}_p$. To simplify the derivation, we use the radiation gauge defined by $\nabla\cdot\epsilon\mathbf{A} = 0, \Phi = 0$ in this work. This gauge is valid either when there are no near-field sources or when only transverse currents are considered, i.e., $\nabla\cdot\mathbf{J}=0$ \cite{jackson1999classical}.
These modal expansions can now be substituted into the Hamiltonian given in (\ref{eq:interacting-system-hamiltonian}). This gives
\begin{align}
H_F = H_\mathcal{Q} + H_\mathcal{P} + H_{\mathcal{QP}},
\label{eq:proj-hamiltonian-sho}
\end{align}
where
\begin{align}
H_\mathcal{Q} = \sum_k \frac{\omega_k}{2}\big( |q_k(t)|^2 + |p_k(t)|^2 \big),
\end{align}
\begin{multline}
H_\mathcal{P} = \sum_{p\in\mathcal{P}} \sum_\lambda \int_0^\infty d\omega_{\lambda p} \, \frac{\omega_{\lambda p} }{2} \big( |q_{\lambda p}(\omega_{\lambda p},t)|^2 \\ + |p_{\lambda p}(\omega_{\lambda p},t)|^2 \big),
\end{multline}
\begin{multline}
H_{\mathcal{QP}} = \sum_{p\in\mathcal{P}} \sum_{\lambda, k} \int_0^\infty d\omega_{\lambda p} \, \big( \mathcal{W}_{k,\lambda p}(\omega_{\lambda p}) p^*_k(t) p_{\lambda p}(\omega_{\lambda p},t) \\ + \mathcal{V}_{k, \lambda p}(\omega_{\lambda p}) p_k(t) p_{\lambda p}(\omega_{\lambda p},t) + \mathrm{H.c.} \big),
\label{eq:proj-hamiltonian-sho2}
\end{multline}
\begin{align}
\mathcal{W}_{k,\lambda p}(\omega_{\lambda p}) = - \mathcal{K}_{k,\lambda p} \int \mathbf{E}^*_k(\mathbf{r}) \cdot \hat{n}_p\!\times\!\nabla\!\times\!\mathbf{E}_{ \lambda p}(\mathbf{r}, \omega_{\lambda p}) dS ,
\label{eq:coupling1}
\end{align}
\begin{align}
\mathcal{V}_{k,\lambda p}(\omega_{\lambda p}) = \mathcal{K}_{k, \lambda p} \int \mathbf{E}_k(\mathbf{r}) \cdot \hat{n}_p\!\times\!\nabla\!\times\!\mathbf{E}_{\lambda p}(\mathbf{r},\omega_{\lambda p}) dS,
\label{eq:coupling2}
\end{align}
\begin{align}
\mathcal{K}_{k,\lambda p} = \frac{c_0^2}{2}\sqrt{\frac{1}{\omega_k\omega_{\lambda p}}}.
\end{align}
In (\ref{eq:coupling1}) and (\ref{eq:coupling2}), the integration surface is $\partial \mathcal{Q} \cap \partial\mathcal{P}_p$ and the magnetic field has been rewritten in terms of the vector potential to more closely match the formulas given in \cite{viviescas2003field}. Inspecting (\ref{eq:proj-hamiltonian-sho}) to (\ref{eq:proj-hamiltonian-sho2}), $H_\mathcal{Q}$ and $H_\mathcal{P}$ represent summations of simple harmonic oscillators for each mode of the fields, while $H_{\mathcal{QP}}$ represents coupling between these oscillators. Further, considering that $\hat{n}_p\!\times\!\nabla\!\times\!\mathbf{E}_{\lambda p}$ is proportional to electric field modes \cite[Ch. 2]{haus2012electromagnetic}, the coupling terms given in (\ref{eq:coupling1}) and (\ref{eq:coupling2}) are proportional to overlap integrals of different spatial mode profiles. These overlap integrals will weight how strongly the harmonic oscillators from the different regions interact, which is physically intuitive from a mode matching perspective.
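As a rough illustration of what (\ref{eq:coupling1}) evaluates to in practice, consider a one-dimensional toy reduction in which the simulation domain occupies $0 \le z \le \ell$ and is closed with a PMC at the port interface $z=0$ (so its modes have vanishing slope there), while the semi-infinite port is closed with the complementary PEC (so its modes vanish there). The surface integral then collapses, up to sign and the transverse and continuum normalization factors omitted here, to an evaluation of $E_k^*(0)\,\partial_z E_{\lambda p}(0)$. The sketch below is only meant to convey this scaling; all values are illustrative assumptions.
\begin{verbatim}
import numpy as np

c0, l = 3.0e8, 0.01
k_idx = np.arange(1, 4)
omega_k = (k_idx - 0.5) * np.pi * c0 / l       # PMC at z=0, PEC at z=l
E_k_at_0 = np.sqrt(2.0 / l) * np.ones(3)       # cos((k-1/2) pi z / l) at z=0

omega_p = 2.0 * np.pi * 6.0e9                  # one port-mode frequency
dEp_dz_at_0 = omega_p / c0                     # slope of sin(omega z / c0) at z=0

K = c0**2 / (2.0 * np.sqrt(omega_k * omega_p))
W = K * E_k_at_0 * dEp_dz_at_0                 # |W_{k,p}| up to omitted factors
print(W)
\end{verbatim}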
The form of the Hamiltonian given in (\ref{eq:proj-hamiltonian-sho}) suggests the quantization process \cite{viviescas2003field}. Similar to the modes-of-the-universe case, each mode can be quantized by elevating the harmonic oscillator variables to be quantum operators with equal-time commutation relations
\begin{align}
[\hat{q}_{k_1}(t),\hat{p}_{k_2}(t)] = i\hbar \delta_{k_1,k_2},
\end{align}
\begin{multline}
[\hat{q}_{\lambda_1 p_1}(\omega_{\lambda_1 p_1},t),\hat{p}_{\lambda_2 p_2}(\omega_{\lambda_2 p_2}',t)] = i\hbar \delta_{\lambda_1, \lambda_2} \delta_{p_1,p_2} \\ \times \delta(\omega_{\lambda_1 p_1}-\omega_{\lambda_2 p_2}').
\end{multline}
In addition to these commutation relations, the operators from different regions commute with each other.
Now, bosonic ladder operators can be introduced for each mode similar to (\ref{eq:annihilation}) to (\ref{eq:boson-commutation}). Using these, the total electric field operator is $\hat{\mathbf{E}} = \hat{\mathbf{E}}_q + \sum_{p\in\mathcal{P}}\hat{\mathbf{E}}_p$ where
\begin{align}
\hat{\mathbf{E}}_q(\mathbf{r},t) = \sum_k N_{E,k} \big( \hat{a}_k(t) \mathbf{u}_k(\mathbf{r}) + \hat{a}^\dagger_k(t) \mathbf{u}^*_k(\mathbf{r}) \big),
\label{eq:q-sim-e-field}
\end{align}
\begin{multline}
\hat{\mathbf{E}}_p(\mathbf{r},t) = \sum_\lambda \int_0^\infty d\omega_{\lambda p} \, N_{E,\lambda p} \big( \hat{a}_{\lambda p}(\omega_{\lambda p},t) \mathbf{u}_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \\ + \hat{a}^\dagger_{\lambda p}(\omega_{\lambda p},t) \mathbf{u}^*_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \big).
\label{eq:q-port-e-field}
\end{multline}
A similar expansion holds for the magnetic field operator, i.e., $\hat{\mathbf{H}} = \hat{\mathbf{H}}_q + \sum_{p\in\mathcal{P}}\hat{\mathbf{H}}_p$ where
\begin{align}
\hat{\mathbf{H}}_q(\mathbf{r},t) = \sum_k N_{H,k} \big( \hat{a}_k(t) \mathbf{v}_k(\mathbf{r}) + \hat{a}^\dagger_k(t) \mathbf{v}^*_k(\mathbf{r}) \big),
\label{eq:q-sim-h-field}
\end{align}
\begin{multline}
\hat{\mathbf{H}}_p(\mathbf{r},t) = \sum_\lambda \int_0^\infty d\omega_p \, N_{H,\lambda p} \big( \hat{a}_{\lambda p}(\omega_{\lambda p},t) \mathbf{v}_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \\ + \hat{a}^\dagger_{\lambda p}(\omega_{\lambda p},t) \mathbf{v}^*_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \big) .
\label{eq:q-port-h-field}
\end{multline}
Further, we have that $\hat{\mathbf{A}} = \hat{\mathbf{A}}_q + \sum_{p\in\mathcal{P}}\hat{\mathbf{A}}_p$ where
\begin{align}
\hat{\mathbf{A}}_q(\mathbf{r},t) = -i\sum_k N_{A,k} \big( \hat{a}_k(t) \mathbf{u}_k(\mathbf{r}) - \hat{a}^\dagger_k(t) \mathbf{u}^*_k(\mathbf{r}) \big),
\label{eq:q-sim-a-field}
\end{align}
\begin{multline}
\hat{\mathbf{A}}_p(\mathbf{r},t)\! = \!-i\sum_\lambda \int_0^\infty d\omega_{\lambda p } \, N_{A,\lambda p} \big( \hat{a}_{\lambda p}(\omega_{\lambda p},t) \mathbf{u}_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \\ - \hat{a}^\dagger_{\lambda p}(\omega_{\lambda p},t) \mathbf{u}^*_{\lambda p}(\mathbf{r},\omega_{\lambda p}) \big),
\label{eq:q-port-a-field}
\end{multline}
\begin{align}
N_{A,k(\lambda p)} = \sqrt{\frac{\hbar }{2\epsilon_0 \omega_{k(\lambda p)} N_{u,k(\lambda p)} }}.
\end{align}
In (\ref{eq:q-sim-e-field}) to (\ref{eq:q-port-a-field}), the modal representations of $\mathbf{u}_k$, $\mathbf{u}_{\lambda p}$, $\mathbf{v}_k$, and $\mathbf{v}_{\lambda p}$ have been used to allow the modal normalizations to be explicitly included in the field operators. This is useful when establishing a correspondence between Hamiltonians written in terms of field and transmission line operators.
The corresponding Hamiltonian is
\begin{multline}
\hat{H}_F = \frac{1}{2}\iiint \big( \epsilon \hat{\mathbf{E}}^2_q + \mu \hat{\mathbf{H}}^2_q + \sum_{p \in \mathcal{P}} \big[ \epsilon \hat{\mathbf{E}}^2_p + \mu \hat{\mathbf{H}}^2_p \big] \\ - \sum_{p \in \mathcal{P}} 2 \hat{\mathbf{A}}_q \cdot ( \hat{n}_p \times \hat{\mathbf{H}}_p ) \big) d\mathbf{r}.
\label{eq:q-interacting-system-hamiltonian}
\end{multline}
This can be expressed using bosonic ladder operators as
\begin{align}
\hat{H}_F = \hat{H}_\mathcal{Q} + \hat{H}_\mathcal{P} + \hat{H}_{\mathcal{Q}\mathcal{P}},
\label{eq:q-interacting-system-hamiltonian2}
\end{align}
where
\begin{align}
\hat{H}_\mathcal{Q} = \sum_k \hbar \omega_k \hat{a}^\dagger_k (t) \hat{a}_k(t)
\end{align}
\begin{align}
\hat{H}_\mathcal{P} = \sum_{p\in \mathcal{P}} \sum_\lambda \int_0^\infty d\omega_{\lambda p} \, \hbar \omega_{\lambda p} \hat{a}^\dagger_{\lambda p}(\omega_{\lambda p},t) \hat{a}_{\lambda p}(\omega_{\lambda p},t)
\end{align}
\begin{multline}
\hat{H}_{\mathcal{Q}\mathcal{P}} = \sum_{p\in \mathcal{P}} \sum_{\lambda, k} \int_0^\infty d\omega_{\lambda p} \big( \mathcal{W}_{k,\lambda p}(\omega_{\lambda p}) \hat{a}^\dagger_k(t) \hat{a}_{\lambda p}(\omega_{\lambda p},t) \\ + \mathcal{V}_{k,\lambda p}(\omega_{\lambda p}) \hat{a}_k(t) \hat{a}_{\lambda p}(\omega_{\lambda p},t) + \mathrm{H.c.} \big).
\end{multline}
The Hamiltonian given in (\ref{eq:q-interacting-system-hamiltonian2}) can be recognized as the system-and-bath Hamiltonian that is commonly used in quantum optics \cite{viviescas2003field,gardiner2004quantum}. This is a satisfying result, since this Hamiltonian is often used to study the input-output relationship of optical cavities \cite{walls2007quantum}.
\section{Correspondence Between Field and Transmission Line Hamiltonians}
\label{sec:field-to-circuit}
With an appropriate quantization procedure now in place, we can continue the process of developing a field-based description of circuit QED systems. To assist in this, it will be useful to determine a correspondence between the field-based Hamiltonian of (\ref{eq:free-field-qH}) or (\ref{eq:q-interacting-system-hamiltonian}) and a Hamiltonian consisting of transmission line voltages and currents. To simplify this process, the classical case from the modes-of-the-universe approach, i.e., (\ref{eq:free-field-cH}), is considered first.
Since our goal is to reduce our expressions to a form like the circuit-based description given in (\ref{eq:coupled-transmon}), we need to introduce some assumptions implicit in (\ref{eq:coupled-transmon}) for the manipulations in this section and Section \ref{sec:field-transmon-hamiltonian} to work. However, we emphasize that these approximations are only needed to show consistency with the approximate expressions used in the literature. They are not needed in the construction of our general field-based equations provided up to this point.
Now, the first assumption is that our transmission line geometry and operating frequencies are such that only quasi-TEM modes are excited. For simplicity, these modes will be treated as pure TEM modes for the purposes of defining transmission line parameters such as voltages, currents, and per-unit-length impedances. Second, we will assume in this section that we are only considering a finite length transmission line with a constant cross-section and that fringing effects at the end of the transmission line can be accounted for separately (e.g., by shifting mode frequencies). For notational simplicity, the longitudinal direction of the transmission line will be aligned with the $z$-axis. Considering these simplifications, we can arrive at an ``exact'' correspondence between field-based and transmission line-based descriptions in this section.
To begin, we need to revisit the expansion of the electric and magnetic fields in terms of modes. Since we are dealing with TEM waves, we can decompose the electric field as
\begin{multline}
\mathbf{E}(\mathbf{r},t) = \sum_{k,l} \sqrt{\frac{\omega_{k,l}}{2\epsilon_0 N_{E_{T,k}} N_{E_{L,l}} }} \\ \times \big( q_{k,l}(t) \mathbf{u}_{k,l}(\mathbf{r}) + q^*_{k,l}(t) \mathbf{u}^*_{k,l}(\mathbf{r}) \big)
\label{eq:emode}
\end{multline}
where the mode functions $\mathbf{u}_{k,l}$ are split into a transverse vector function and a longitudinal scalar function as
\begin{align}
\mathbf{u}_{k,l}(\mathbf{r}) = \mathbf{u}_{T,k}(x,y) u_{L,l}(z).
\end{align}
These mode functions are orthogonal in the sense that
\begin{align}
\iint \epsilon_r\mathbf{u}_{T,k_1}^* \cdot \mathbf{u}_{T,k_2} dxdy = \delta_{k_1,k_2} N_{E_{T,k_1}},
\label{eq:norm1}
\end{align}
\begin{align}
\int u_{L,l_1}^*(z) u_{L,l_2}(z) dz = \delta_{l_1,l_2} N_{E_{L,l_1}}.
\label{eq:norm2}
\end{align}
Similarly, for the magnetic field we have that
\begin{multline}
\mathbf{H}(\mathbf{r},t) = \sum_{k,l} \sqrt{\frac{\omega_{k,l}}{2\mu_0 N_{H_{T,k}} N_{H_{L,l}} }} \\ \times \big( p_{k,l}(t) \mathbf{v}_{k,l}(\mathbf{r}) + p^*_{k,l}(t) \mathbf{v}^*_{k,l}(\mathbf{r}) \big)
\end{multline}
where the mode functions $\mathbf{v}_{k,l}$ are
\begin{align}
\mathbf{v}_{k,l}(\mathbf{r}) = \mathbf{v}_{T,k}(x,y) v_{L,l}(z).
\end{align}
These mode functions are orthogonal in a similar sense to (\ref{eq:norm1}) and (\ref{eq:norm2}).
The conversion between fields and transmission line quantities can be performed by adopting definitions for the transmission line parameters so that the power and energy densities expressed in terms of field and circuit parameters agree \cite{pozar2009microwave}. Since we consider lossless lines here, only the per-unit-length capacitance and inductance of the line are needed. For a particular transmission line mode, these are denoted as $C_k$ and $L_k$, respectively. Within the notation of this work, the definitions for these become $C_k = \epsilon_0 N_{E_{T,k}}$ and $L_k = \mu_0 N_{H_{T,k}}$ \cite{pozar2009microwave}.
With these definitions, the field-Hamiltonian of (\ref{eq:free-field-cH}) can now be converted into a transmission line form. The electric field term will be considered first. Substituting in the modal expansion, this term becomes
\begin{multline}
\epsilon |\mathbf{E}(\mathbf{r},t)|^2 = \epsilon \! \sum_{k_1,l_1,k_2,l_2}\!\!\sqrt{\frac{\omega_{k_1,l_1}\omega_{k_2,l_2}}{\epsilon^2_0 N_{E_{T,k_1}} N_{E_{L,l_1}} N_{E_{T,k_2}} N_{E_{L,l_2}} } } \\ \times \big( q^*_{k_2,l_2}(t) q_{k_1,l_1}(t) \mathbf{u}^*_{k_2,l_2}(\mathbf{r}) \cdot \mathbf{u}_{k_1,l_1}(\mathbf{r}) \\ + \frac{1}{2} q_{k_2,l_2}(t)q_{k_1,l_1}(t) \mathbf{u}_{k_2,l_2}(\mathbf{r}) \cdot \mathbf{u}_{k_1,l_1}(\mathbf{r}) \\ + \frac{1}{2} q^*_{k_2,l_2}(t)q^*_{k_1,l_1}(t) \mathbf{u}^*_{k_2,l_2}(\mathbf{r}) \cdot \mathbf{u}^*_{k_1,l_1}(\mathbf{r}) \big) .
\label{eq:e1}
\end{multline}
Recalling that the Hamiltonian involves the volume integral of these terms, we can simplify our expression by noting that the terms proportional to $\mathbf{u}_{k_2,l_2}\cdot\mathbf{u}_{k_1,l_1}$ and $(\mathbf{u}_{k_2,l_2}\cdot\mathbf{u}_{k_1,l_1})^*$ will average to zero unless $k_1=k_2$ and $l_1=l_2$. We then also have that for harmonic oscillators, terms of the form $q_{k,l}^2$ and $(q_{k,l}^*)^2$ vanish \cite{haken1976quantum}. Hence, we have that
\begin{multline}
\!\!\iiint \epsilon |\mathbf{E}(\mathbf{r},t)|^2 d\mathbf{r} = \\ \!\!\! \sum_{k_1,l_1,k_2,l_2} \sqrt{\frac{\omega_{k_1,l_1}\omega_{k_2,l_2}}{\epsilon^2_0 N_{E_{T,k_1}} N_{E_{L,l_1}} N_{E_{T,k_2}} N_{E_{L,l_2}} } } \\ \times q^*_{k_2,l_2}(t) q_{k_1,l_1}(t) \iiint \epsilon \, \mathbf{u}^*_{k_2,l_2}(\mathbf{r}) \cdot \mathbf{u}_{k_1,l_1}(\mathbf{r}) \, d\mathbf{r}.
\end{multline}
We can expand the $\mathbf{u}_{k,l}$ functions and use the orthogonality relationships given in (\ref{eq:norm1}) and (\ref{eq:norm2}) to get
\begin{multline}
\iiint \epsilon |\mathbf{E}(\mathbf{r},t)|^2d\mathbf{r} \\ = \int \sum_{k,l} C_k \frac{\omega_{k,l}}{C_k N_{E_{L,l}}} |q_{k,l}(t)|^2 |u_{L,l}(z)|^2 dz,
\label{eq:int1}
\end{multline}
where we have also noted that $C_k = \epsilon_0 N_{E_{T,k}}$. We do not cancel the $C_k$ terms in (\ref{eq:int1}) because keeping them suggests the correct form of the voltage mode needed to convert between the electric field and transmission line voltage descriptions of the system.
In particular, we can define a voltage mode to be
\begin{align}
V_{k,l}(z,t) = \sqrt{\frac{\omega_{k,l}}{ C_k N_{E_{L,l}}}} \big(q_{k,l}(t) u_{L,l}(z) + q_{k,l}^*(t) u_{L,l}^*(z) \big).
\end{align}
Following a similar set of steps to those shown in (\ref{eq:e1}) to (\ref{eq:int1}), we can see that
\begin{align}
\iiint \epsilon |\mathbf{E}(\mathbf{r},t)|^2d\mathbf{r} = \int \sum_{k,l} C_k |V_{k,l}(z,t)|^2 dz.
\label{eq:eint}
\end{align}
Similarly, we can define a current mode as
\begin{align}
I_{k,l}(z,t) = \sqrt{\frac{\omega_{k,l}}{ L_k N_{H_{L,l}}}} \big(p_{k,l}(t) v_{L,l}(z) + p_{k,l}^*(t) v_{L,l}^*(z) \big)
\end{align}
to see that
\begin{align}
\iiint \mu |\mathbf{H}(\mathbf{r},t)|^2d\mathbf{r} = \int \sum_{k,l} L_k |I_{k,l}(z,t)|^2 dz.
\label{eq:hint}
\end{align}
Combining the results in (\ref{eq:eint}) and (\ref{eq:hint}), the transmission line Hamiltonian can be written as
\begin{align}
H_{TR} = \int \frac{1}{2} \sum_{k,l} \big( C_k |V_{k,l}(z,t)|^2 + L_k|I_{k,l}(z,t)|^2 \big)dz,
\end{align}
which is equivalent to the field-based Hamiltonian. This equality is of course only valid when considering a portion of a system with a constant transmission line cross-section, as mentioned at the beginning of this section. However, this is exactly the part of a system used in writing a circuit-based Hamiltonian like (\ref{eq:coupled-transmon}). Hence, we now have the tools to relate a field-based Hamiltonian to the simpler circuit descriptions often used in the literature.
Moving now to the quantum case, it is important to note that all the operations used in the classical case still apply. Hence, the process easily generalizes to the quantum case, giving
\begin{align}
\hat{H}_{F} = \hat{H}_{TR} = \int \frac{1}{2} \sum_{k,l} \big( C_k \hat{V}_{k,l}^2 + L_k \hat{I}_{k,l}^2 \big) dz,
\end{align}
where
\begin{align}
\hat{V}_{k,l}(z,t) = N_{V_{k,l}} \big( \hat{a}_{k,l}(t) u_{L,l}(z) + \hat{a}_{k,l}^\dagger(t)u^*_{L,l}(z) \big),
\label{eq:c-q-voltage}
\end{align}
\begin{align}
\hat{I}_{k,l}(z,t) = N_{I_{k,l}} \big( \hat{a}_{k,l}(t)v_{L,l}(z) + \hat{a}_{k,l}^\dagger(t)v^*_{L,l}(z) \big) ,
\label{eq:c-q-current}
\end{align}
\begin{align}
N_{V_{k,l}} = \sqrt{ \frac{\hbar \omega_{k,l}}{2 C_k N_{E_{L,l}}} }, \,\, N_{I_{k,l}} = \sqrt{ \frac{\hbar \omega_{k,l}}{2 L_k N_{H_{L,l}}} }.
\label{eq:sim-domain-mode-norm}
\end{align}
These definitions for $\hat{V}_{k,l}$ and $\hat{I}_{k,l}$ are not immediately seen to be consistent with those given in (\ref{eq:total-v}) and (\ref{eq:total-i}). The reason is that complex-valued spatial modes have been used in (\ref{eq:c-q-voltage}) and (\ref{eq:c-q-current}), while real-valued modes were used in (\ref{eq:total-v}) and (\ref{eq:total-i}). Further, the spatial variation of the voltage and current modes has been completely integrated out in (\ref{eq:total-v}) and (\ref{eq:total-i}).
To see that the expressions derived in this section are consistent with the literature, the steps outlined in this section can be repeated for the real-valued spatial mode expansions like those given in (\ref{eq:r-q-efield}) and (\ref{eq:r-q-hfield}). For this case, the voltage and current operators become
\begin{align}
\hat{V}_{k,l}(z,t) = N_{V_{k,l}} \big( \hat{a}_{k,l}(t) + \hat{a}_{k,l}^\dagger(t) \big) u_{L,l}(z)
\label{eq:r-q-voltage}
\end{align}
\begin{align}
\hat{I}_{k,l}(z,t) = -i N_{I_{k,l}} \big( \hat{a}_{k,l}(t) - \hat{a}_{k,l}^\dagger(t) \big) v_{L,l}(z).
\label{eq:r-q-current}
\end{align}
These can be seen to be consistent with (\ref{eq:total-v}) and (\ref{eq:total-i}) by recalling that $C_r = C\ell$ and $L_r = L\ell$, where $\ell$ is the length of the resonator \cite{blais2004cavity}. Restricting (\ref{eq:r-q-voltage}) and (\ref{eq:r-q-current}) to be a resonator mode, it can be seen that the longitudinal normalizations $N_{E_{L,k}}$ and $N_{H_{L,k}}$ would become $\ell$. The additional factors that occur after integrating out the remaining spatial variation in (\ref{eq:r-q-voltage}) and (\ref{eq:r-q-current}) are grouped with other terms in the overall Hamiltonian to be consistent with \cite{blais2004cavity}.
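As a rough numerical illustration of the normalization in (\ref{eq:sim-domain-mode-norm}), the following sketch evaluates the zero-point voltage scale $\sqrt{\hbar\omega_{k,l}/(2C_r)}$ obtained when $N_{E_{L,l}}=\ell$ is absorbed into the total resonator capacitance $C_r = C\ell$; the frequency and capacitance values are assumed, order-of-magnitude numbers rather than parameters of a specific device.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34   # J s
omega = 2 * np.pi * 6e9  # assumed resonator angular frequency (6 GHz mode)
C_r = 0.4e-12            # assumed total resonator capacitance C*l, in farads

# Zero-point voltage scale N_V with N_{E_L} = l absorbed into C_r = C*l
V_rms = np.sqrt(hbar * omega / (2 * C_r))
print(V_rms)             # ~2e-6 V, i.e., a couple of microvolts
\end{verbatim}
For these assumed values the vacuum voltage is of order a microvolt, the scale typically quoted for coplanar waveguide resonators \cite{blais2004cavity}.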
With the basic process developed, the expressions from the projector-based quantization approach, e.g., (\ref{eq:q-interacting-system-hamiltonian}), can now be converted into a transmission line form as well. This will not be performed here for brevity. However, it should be noted that the interacting part of the Hamiltonian given in (\ref{eq:q-interacting-system-hamiltonian}) cannot be written in a simpler transmission line form. This is because the transverse integrations cannot be concisely written when allowing for complex-valued mode functions. If real-valued mode functions are used, the spatial integrals can be shown to be proportional to overlap integrals of the electric field modes for the different regions of the problem, which is an intuitively satisfying result.
\section{Hamiltonian for the Field-Transmon System}
\label{sec:field-transmon-hamiltonian}
With the ability to convert between field and transmission line representations, it is now possible to show the consistency of the postulated field-based Hamiltonian of (\ref{eq:field-transmon-hamiltonian1}) with the more common circuit-based description given in (\ref{eq:coupled-transmon}). To do this, we must match the assumptions implicit in (\ref{eq:coupled-transmon}), which correspond to only considering the unperturbed TEM modes of a transmission line resonator that interact with the transmon qubit. This approximately corresponds to only considering the portion of Fig. \ref{fig:region-illustration} marked with length $\ell$. To simplify the expressions, only fields from the simulation domain will be considered as these are what interact with the transmon. The subscript of $q$ will be dropped from these fields in this section. Further, all of the simulation domain mode functions will be assumed to be real-valued. Since the simulation domain can typically be made a closed system, this choice does not amount to a loss of generality \cite{chew2016quantum2}. The complex-valued mode function case can be handled easily, but leads to unnecessarily long expressions that are omitted for brevity.
Considering these points, the postulated field-transmon system Hamiltonian of (\ref{eq:field-transmon-hamiltonian1}) is reproduced here in full for ease of reference as
\begin{multline}
\hat{H} = 4E_C (\hat{n}-n_g)^2 - E_J \cos \hat{\varphi} \\ + \iiint \frac{1}{2} \big( \epsilon \hat{\mathbf{E}}^2 + \mu \hat{\mathbf{H}}^2 - 2 \hat{\mathbf{E}} \cdot \partial_t^{-1} \hat{\mathbf{J}}_t \big)d\mathbf{r},
\label{eq:field-transmon-hamiltonian}
\end{multline}
where the transmon current density operator was
\begin{align}
\hat{\mathbf{J}}_t = -2e \beta \mathbf{d} \delta(z-z_0) \partial_t\hat{n} .
\label{eq:transmon-current-operator}
\end{align}
The field-based Hamiltonian of (\ref{eq:field-transmon-hamiltonian}) can be shown to be equivalent to circuit-based Hamiltonians by evaluating the spatial integration in (\ref{eq:field-transmon-hamiltonian}). The first two terms can be simply evaluated following the steps in Section \ref{sec:field-to-circuit}, yielding
\begin{multline}
\iiint \frac{1}{2} \big( \epsilon \hat{\mathbf{E}}^2 + \mu \hat{\mathbf{H}}^2 \big) d\mathbf{r} \\ = \frac{1}{2}\sum_{k,l} \big( C_k N_{E_{L,l}} \hat{V}^2_{I_{k,l}} + L_k N_{H_{L,l}} \hat{I}^2_{I_{k,l}} \big),
\label{eq:f-to-tr1}
\end{multline}
where the integrated voltage and current operators are
\begin{align}
\hat{V}_{I_{k,l}} = N_{V_{k,l}} \big( \hat{a}_{k,l}(t) + \hat{a}_{k,l}^\dagger(t) \big)
\label{eq:q-total-voltage-mode}
\end{align}
\begin{align}
\hat{I}_{I_{k,l}} = -i N_{I_{k,l}} \big( \hat{a}_{k,l}(t) - \hat{a}_{k,l}^\dagger(t) \big).
\label{eq:q-total-current-mode}
\end{align}
The remaining term to consider is the coupling term. To carry out the spatial integration, the expression is expanded in terms of the modes for $\hat{\mathbf{E}}$ given in (\ref{eq:emode}). This gives
\begin{multline}
-\iiint \hat{\mathbf{E}}(\mathbf{r},t) \cdot \partial_t^{-1} \hat{\mathbf{J}}_t(\mathbf{r},t) d\mathbf{r} =
\sum_{k,l} N_{E_{k,l}} \big( \hat{a}_{k,l}(t) + \hat{a}^\dagger_{k,l}(t) \big) \\ \times 2e \beta \hat{n}(t) \iiint \mathbf{u}_{k,l}(\mathbf{r}) \cdot \mathbf{d}(x,y) \delta(z-z_0) d\mathbf{r},
\end{multline}
where
\begin{align}
N_{E_{k,l}} = \sqrt{ \frac{\hbar \omega_{k,l}}{2 \epsilon_0 N_{E_{T,k}} N_{E_{L,l}}} }.
\label{eq:enorm}
\end{align}
Now, the spatial integral along the $z$-axis can be evaluated easily and the remaining transverse integral can be identified as the definition of a voltage, i.e.,
\begin{align}
V_{k,l}(z_0) = \int_{a}^{b} \mathbf{u}_{k,l}(x,y,z_0) \cdot d \mathbf{d}(x,y),
\end{align}
where $a$ and $b$ are the initial and final points of the integration path defined by $\mathbf{d}$ \cite{pozar2009microwave}. Hence, we have that
\begin{multline}
-\iiint \hat{\mathbf{E}}(\mathbf{r},t) \cdot \partial_t^{-1} \hat{\mathbf{J}}_t(\mathbf{r},t) d\mathbf{r} \\ =
2e \beta \hat{n}(t) \sum_{k,l} N_{V_{k,l}} \big( \hat{a}_{k,l}(t) + \hat{a}^\dagger_{k,l}(t) \big) V_{k,l}(z_0) ,
\label{eq:int3}
\end{multline}
where we have rewritten $N_{E_{k,l}}$ as $N_{V_{k,l}}$ by noting the relationship between the transverse normalization in (\ref{eq:enorm}) and the per-unit-length modal capacitance in (\ref{eq:sim-domain-mode-norm}). Finally, we can use (\ref{eq:q-total-voltage-mode}) to get
\begin{multline}
-\iiint \hat{\mathbf{E}}(\mathbf{r},t) \!\cdot\! \partial_t^{-1} \hat{\mathbf{J}}_t(\mathbf{r},t) d\mathbf{r} \\ = \sum_{k,l} 2e \beta V_{k,l}(z_0) \hat{V}_{I_{k,l}}(t) \hat{n}(t).
\label{eq:coupling-derivation}
\end{multline}
Putting the results of (\ref{eq:f-to-tr1}) and (\ref{eq:coupling-derivation}) together, the field-transmon system Hamiltonian of (\ref{eq:field-transmon-hamiltonian}), can now be written as
\begin{multline}
\hat{H} = 4E_C (\hat{n}-n_g)^2 - E_J \cos \hat{\varphi} + \frac{1}{2}\sum_{k,l} \big( C_k N_{E_{L,l}} \hat{V}^2_{I_{k,l}} \\ + L_k N_{H_{L,l}} \hat{I}^2_{I_{k,l}} \big) + \sum_{k,l} 2e \beta V_{k,l}(z_0) \hat{V}_{I_{k,l}} \hat{n}.
\end{multline}
Restricting this Hamiltonian to only consider a single mode of a resonator coupled to the transmon recovers (\ref{eq:coupled-transmon}). Hence, the postulated field-transmon system Hamiltonian can be seen to be consistent with the circuit-based descriptions of circuit QED systems typically used in the literature.
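To make the single-mode restriction concrete, the following minimal numerical sketch builds the restricted Hamiltonian in the transmon charge basis, where $\cos\hat{\varphi}$ couples neighboring charge states, with the resonator contribution written as $\omega_r\hat{a}^\dagger\hat{a}$ and a single coupling constant $g$ standing in for $2e\beta V_{k,l}(z_0)N_{V_{k,l}}$. All parameter values and truncation sizes are assumed for illustration.
\begin{verbatim}
import numpy as np

# Assumed illustrative parameters in consistent frequency units (hbar = 1)
EC, EJ, ng = 0.3, 15.0, 0.0   # charging/Josephson energies, offset charge
wr, g = 6.0, 0.1              # resonator frequency; g ~ 2*e*beta*V(z0)*N_V

nc, nf = 10, 5                          # charge truncation -nc..nc, Fock truncation
nvals = np.arange(-nc, nc + 1)

# Transmon in the charge basis: cos(phi) couples neighboring charge states
Ht = np.diag(4 * EC * (nvals - ng) ** 2) \
     - 0.5 * EJ * (np.eye(2 * nc + 1, k=1) + np.eye(2 * nc + 1, k=-1))
n_op = np.diag(nvals.astype(float))

# Single resonator mode and its "voltage" quadrature (a + a^dagger)
a = np.diag(np.sqrt(np.arange(1, nf)), k=1)
Hr = wr * a.T @ a
VI = a + a.T

# Restricted Hamiltonian: H_transmon + H_mode + g * n * (a + a^dagger)
It, Ir = np.eye(2 * nc + 1), np.eye(nf)
H = np.kron(Ht, Ir) + np.kron(It, Hr) + g * np.kron(n_op, VI)
print(np.linalg.eigvalsh(H)[:4])        # lowest dressed energies
\end{verbatim}
Diagonalizing $H$ gives the dressed transmon--resonator spectrum of the restricted model.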
\section{Equations of Motion}
\label{sec:eom}
Now that an appropriate Hamiltonian has been found for the field-transmon system, the quantum equations of motion can be derived using Hamilton's equations \cite{chew2016quantum}. We will consider the full system composed of the transmon qubit, the simulation domain fields, and the port region fields. The Hamiltonian is also generalized by allowing for the termination of the port regions in the simulation domain with either PEC or PMC conditions.
With this understood, the complete Hamiltonian becomes
\begin{align}
\hat{H} = \hat{H}_T + \hat{H}_F + \hat{H}_I ,
\label{eq:quantum-hamiltonian}
\end{align}
where
\begin{align}
\hat{H}_T = 4E_C (\hat{n}-n_g)^2 - E_J \cos \hat{\varphi},
\label{eq:transmon-hamiltonian}
\end{align}
\begin{align}
\hat{H}_F = \frac{1}{2}\iiint \big( \epsilon \hat{\mathbf{E}}^2_q + \mu \hat{\mathbf{H}}^2_q + \sum_{p \in \mathcal{P}} \big[ \epsilon \hat{\mathbf{E}}^2_p + \mu \hat{\mathbf{H}}^2_p \big] \big) d\mathbf{r} ,
\label{eq:field-hamiltonian}
\end{align}
\begin{multline}
\hat{H}_I = - \iiint \big( \hat{\mathbf{E}}_q \cdot \partial_t^{-1}\hat{\mathbf{J}}_t + \sum_{p \in \mathcal{P}_M} \hat{\mathbf{A}}_q \cdot (\hat{n}_p\times \hat{\mathbf{H}}_p) \\ + \sum_{p \in \mathcal{P}_E} \hat{\mathbf{F}}_q \cdot (\hat{\mathbf{E}}_p \times \hat{n}_p) \big) d\mathbf{r}.
\label{eq:interaction-hamiltonian}
\end{multline}
In (\ref{eq:interaction-hamiltonian}), $\mathcal{P}_E$ ($\mathcal{P}_M$) denotes the set of ports terminated in a PEC (PMC) \textit{in the simulation domain}. The union of these sets is all of the ports $\mathcal{P}$. Note that $\hat{n}_p$ is the unit normal vector to the port surface, and it points into the simulation domain. The terms in (\ref{eq:interaction-hamiltonian}) quantify the interactions between the different parts of the total system. The first term is the coupling of the simulation domain field and transmon system, while the next two terms represent coupling between the simulation domain and port region fields.
In (\ref{eq:interaction-hamiltonian}), the electric vector potential, $\hat{\mathbf{F}}_q$, has been introduced into the Hamiltonian. This is necessary because of the set of ports $\mathcal{P}_E$ that introduce equivalent magnetic current densities as sources to the simulation domain. The simplest way to account for the presence of magnetic sources is to introduce another set of auxiliary potentials, as is commonly done in classical electromagnetics \cite{jin2011theory}.
To derive equations of motion, the Hamiltonian needs to be expressed in terms of canonical conjugate operators \cite{chew2016quantum}. The transmon operators $\hat{n}$ and $\hat{\varphi}$ are already in this form. However, the electric and magnetic fields are not canonical conjugate operators in a Hamiltonian mechanics formalism \cite{chew2016quantum}. Instead, the electromagnetic field portions of the Hamiltonian need to be rewritten in terms of the electric and magnetic vector potentials and their conjugate momenta.
To support this, the electric and magnetic fields are first decomposed into the set of fields produced by electric or magnetic sources. Under this decomposition, the fields produced by electric (magnetic) sources are completely specified by the magnetic (electric) vector potential \cite{jin2011theory}. For this to hold, a radiation gauge is being used for both the magnetic and electric vector potentials. Considering this, the field portions of the Hamiltonian are first rewritten as
\begin{multline}
\hat{H}_F = \frac{1}{2} \iiint \big( \epsilon\hat{\mathbf{E}}_{qe}^2 + \mu\hat{\mathbf{H}}_{qe}^2 + \epsilon\hat{\mathbf{E}}_{qm}^2 + \mu\hat{\mathbf{H}}_{qm}^2 \\ + \sum_{p \in \mathcal{P}} \big[ \epsilon \hat{\mathbf{E}}^2_{pe} + \mu \hat{\mathbf{H}}^2_{pe} + \epsilon \hat{\mathbf{E}}^2_{pm} + \mu \hat{\mathbf{H}}^2_{pm}\big] \big) d\mathbf{r} ,
\end{multline}
\begin{multline}
\hat{H}_I = -\iiint \big( \hat{\mathbf{E}}_{qe} \cdot \partial_t^{-1} \hat{\mathbf{J}}_t + \sum_{p \in \mathcal{P}_M} \hat{\mathbf{A}}_{qe} \cdot (\hat{n}_p\times \hat{\mathbf{H}}_{pm}) \\ + \sum_{p \in \mathcal{P}_E} \hat{\mathbf{F}}_{qm} \cdot (\hat{\mathbf{E}}_{pe} \times \hat{n}_{p}) \big) d\mathbf{r},
\end{multline}
where a subscript $e$ ($m$) denotes that this quantity is due to electric (magnetic) sources. The structure of the coupling terms between the fields in different regions reflects the difference in boundary conditions and corresponding equivalent source densities at the interfaces between regions.
With the Hamiltonian decomposed into portions due to electric and magnetic sources, it can now be written in terms of the electric and magnetic vector potentials and their conjugate momenta. This gives
\begin{multline}
\hat{H}_F = \frac{1}{2} \iiint \big( \epsilon^{-1}\hat{\mathbf{\Pi}}_{qe}^2 + \mu^{-1}(\nabla\times\hat{\mathbf{A}}_{qe})^2 + \mu^{-1}\hat{\mathbf{\Pi}}_{qm}^2 \\ + \epsilon^{-1}(\nabla\times\hat{\mathbf{F}}_{qm})^2 + \sum_{p \in \mathcal{P}} \big[ \epsilon^{-1}\hat{\mathbf{\Pi}}_{pe}^2 + \mu^{-1}(\nabla\times\hat{\mathbf{A}}_{pe})^2 \\ + \mu^{-1}\hat{\mathbf{\Pi}}_{pm}^2 + \epsilon^{-1}(\nabla\times\hat{\mathbf{F}}_{pm})^2\big] \big) d\mathbf{r},
\end{multline}
\begin{multline}
\hat{H}_I = \iiint \big( \epsilon^{-1}\hat{\mathbf{\Pi}}_{qe} \cdot \partial_t^{-1} \hat{\mathbf{J}}_t - \! \sum_{p \in \mathcal{P}_M} \!\!\mu^{-1} \hat{\mathbf{A}}_{qe} \!\cdot\! (\hat{n}_p \! \times \! \hat{\mathbf{\Pi}}_{pm}) \\ - \sum_{p \in \mathcal{P}_E} \epsilon^{-1} \hat{\mathbf{F}}_{qm} \cdot (\hat{\mathbf{\Pi}}_{pe} \times \hat{n}_{p}) \big) d\mathbf{r},
\end{multline}
where $\hat{\mathbf{\Pi}}_{qe} = \epsilon\partial_t \hat{\mathbf{A}}_{qe}$ is the conjugate momentum for the vector potential in the simulation domain. Similarly, $\hat{\mathbf{\Pi}}_{pe} = \epsilon\partial_t \hat{\mathbf{A}}_{pe}$ is the conjugate momentum for the vector potential in the port regions. The conjugate momenta for the electric vector potentials are $\hat{\mathbf{\Pi}}_{qm} = \mu \partial_t \hat{\mathbf{F}}_{qm}$ and $\hat{\mathbf{\Pi}}_{pm} = \mu \partial_t \hat{\mathbf{F}}_{pm}$ for the simulation domain and port regions, respectively.
With the Hamiltonian now written completely in terms of canonical conjugate operators, equations of motion can be derived using Hamilton's equations \cite{chew2016quantum}. Equations of motion will first be derived for the transmon operators. For these operators, Hamilton's equations are
\begin{align}
\frac{\partial \hat{\varphi}}{\partial t} = \frac{\partial \hat{H}}{\partial \hat{n}} , \,\, \frac{\partial \hat{n}}{\partial t} = -\frac{\partial \hat{H}}{\partial \hat{\varphi}} .
\end{align}
Evaluating the necessary derivatives gives
\begin{multline}
\frac{\partial \hat{\varphi}}{\partial t} = 8 E_C (\hat{n}-n_g) \\ + 2e\beta \iiint \hat{\mathbf{E}}_{qe} (\mathbf{r},t) \cdot \mathbf{d} \delta(z-z_0) d\mathbf{r},
\end{multline}
\begin{align}
\frac{\partial \hat{n}}{\partial t} = - E_J \sin\hat{\varphi},
\end{align}
where the field-transmon interaction term has been rewritten in terms of the electric field operator.
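As a quick sanity check on these equations, one can integrate their classical analogue with the field term dropped; the sketch below (with assumed parameter values and $\hbar = 1$) shows small phase oscillations at the plasma frequency $\sqrt{8E_CE_J}$, as expected for the transmon.
\begin{verbatim}
import numpy as np

# Classical analogue of the transmon equations of motion, with the field
# (drive) term dropped; parameter values are assumed for illustration.
EC, EJ, ng = 0.3, 15.0, 0.0
dt, steps = 1e-3, 20_000

phi, n = 0.05, 0.0                 # small initial phase displacement
trace = np.empty(steps)
for i in range(steps):
    phi += dt * 8 * EC * (n - ng)  # d(phi)/dt =  dH/dn
    n   -= dt * EJ * np.sin(phi)   # d(n)/dt   = -dH/dphi
    trace[i] = phi

# Small oscillations occur at the plasma frequency sqrt(8*EC*EJ) (here = 6)
print(np.sqrt(8 * EC * EJ))
\end{verbatim}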
Equations of motion for the potentials require taking functional derivatives of $\hat{H}$ with respect to the conjugate operators \cite{chew2016quantum}. These can be easily performed, and will be done in stages for the different sets of potentials. Beginning with the simulation domain magnetic vector potential, we have
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{A}}_{qe}} = \mu^{-1} \nabla\times\nabla\times\hat{\mathbf{A}}_{qe} + \sum_{p \in \mathcal{P}_M}\mu^{-1} \hat{n}_p\times \hat{\mathbf{\Pi}}_{pm},
\end{align}
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{\Pi}}_{qe}} = \epsilon^{-1} \hat{\mathbf{\Pi}}_{qe} + \epsilon^{-1} \partial_t^{-1} \hat{\mathbf{J}}_t .
\end{align}
Hamilton's equations of motion for the magnetic vector potential system are \cite{chew2016quantum}
\begin{align}
\frac{\partial \hat{\mathbf{A}}_{qe}}{\partial t} = \frac{\delta\hat{H}}{\delta \hat{\mathbf{\Pi}}_{qe}}, \,\, \frac{\partial \hat{\mathbf{\Pi}}_{qe}}{\partial t} = -\frac{\delta\hat{H}}{\delta \hat{\mathbf{A}}_{qe}}.
\label{eq:ham1}
\end{align}
These can be combined to give
\begin{align}
\nabla\times\nabla\times\hat{\mathbf{A}}_{qe} + \mu\epsilon \partial_t^2 \hat{\mathbf{A}}_{qe} = \mu \hat{\mathbf{J}}_t - \sum_{p \in \mathcal{P}_M} \hat{n}_p \times \hat{\mathbf{\Pi}}_{pm}.
\label{eq:sim-A-wave}
\end{align}
This inhomogeneous wave equation can be seen to take the expected form for the magnetic vector potential by noting that $-\hat{n}_p\times\hat{\mathbf{\Pi}}_{pm} = \mu \hat{n}_p\times\hat{\mathbf{H}}_{pm}$ is an equivalent electric current density times the permeability.
A similar process can be done for the magnetic vector potential in the port regions. For a particular port, we have
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{A}}_{pe}} = \mu^{-1} \nabla\times\nabla\times\hat{\mathbf{A}}_{pe}
\end{align}
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{\Pi}}_{pe}} = \epsilon^{-1} \hat{\mathbf{\Pi}}_{pe} + \sum_{p' \in \mathcal{P}_E} \delta_{p,p'}\epsilon^{-1} \hat{n}_{p'} \times\hat{\mathbf{F}}_{qm}.
\end{align}
The Kronecker delta function is used to only include a source term if the particular port $p \in \mathcal{P}_E$. Similar functional derivatives can be evaluated for each of the individual port regions. Hamilton's equations can then be used to derive a wave equation. This yields
\begin{align}
\nabla\times\nabla\times\hat{\mathbf{A}}_{pe} + \mu\epsilon \partial^2_t \hat{\mathbf{A}}_{pe} = \!\!\sum_{p' \in \mathcal{P}_E} \delta_{p,p'} \mu \hat{n}_{p'} \! \times\!\partial_t\hat{\mathbf{F}}_{qm}.
\label{eq:port-A-wave}
\end{align}
Note that due to the fixed orientation of $\hat{n}_p$ pointing into the simulation domain, $\hat{n}_p \times\partial_t \hat{\mathbf{F}}_{qm}$ is an equivalent electric current density with a positive sign.
To finish the derivation, equations of motion for the electric vector potential need to be established. As expected, these follow a very similar process to that for the magnetic vector potential. Beginning with the equations for the simulation domain, the necessary functional derivatives are
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{F}}_{qm}} = \nabla\times\epsilon^{-1}\nabla\times\hat{\mathbf{F}}_{qm} - \sum_{p \in \mathcal{P}_E}\epsilon^{-1} \hat{n}_p\times \hat{\mathbf{\Pi}}_{pm}
\end{align}
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{\Pi}}_{qm}} = \mu^{-1} \hat{\mathbf{\Pi}}_{qm}.
\end{align}
Hamilton's equations of motion for the electric vector potential system are
\begin{align}
\frac{\partial \hat{\mathbf{F}}_{qm}}{\partial t} = \frac{\delta\hat{H}}{\delta \hat{\mathbf{\Pi}}_{qm}}, \,\, \frac{\partial \hat{\mathbf{\Pi}}_{qm}}{\partial t} = -\frac{\delta\hat{H}}{\delta \hat{\mathbf{F}}_{qm}}.
\label{eq:ham3}
\end{align}
These can be combined to give
\begin{align}
\epsilon\nabla\times\epsilon^{-1}\nabla\times\hat{\mathbf{F}}_{qm} + \mu\epsilon \partial_t^2 \hat{\mathbf{F}}_{qm} = \sum_{p \in \mathcal{P}_M} \hat{n}_p \times \hat{\mathbf{\Pi}}_{pm}.
\label{eq:sim-F-wave}
\end{align}
By recalling that $\hat{n}_p \times\hat{\mathbf{\Pi}}_{pm} = \epsilon \hat{\mathbf{E}}_{pm} \times\hat{n}_p$, it is seen that the source term for this inhomogeneous wave equation has the form of an equivalent magnetic current density times the permittivity. Hence, this is the expected wave equation for an electric vector potential in the radiation gauge.
The final set of equations is for the electric vector potential in the port regions. The functional derivatives are
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{F}}_{pm}} = \nabla\times\epsilon^{-1}\nabla\times\hat{\mathbf{F}}_{pm}
\end{align}
\begin{align}
\frac{\delta\hat{H}}{\delta \hat{\mathbf{\Pi}}_{pm}} = \mu^{-1} \hat{\mathbf{\Pi}}_{pm} - \sum_{p' \in \mathcal{P}_M} \delta_{p,p'}\mu^{-1} \hat{n}_{p'} \times\hat{\mathbf{A}}_{qe}.
\end{align}
Using Hamilton's equations, the results of these functional derivatives can be combined to give
\begin{multline}
\epsilon\nabla\times\epsilon^{-1}\nabla\times\hat{\mathbf{F}}_{pm} + \mu\epsilon \partial^2_t \hat{\mathbf{F}}_{pm} \\ = -\sum_{p' \in \mathcal{P}_M} \delta_{p,p'} \epsilon \hat{n}_{p'} \times\partial_t\hat{\mathbf{A}}_{qe}.
\label{eq:port-F-wave}
\end{multline}
Similar to (\ref{eq:port-A-wave}), the fixed polarity of $\hat{n}_p$ means that $-\hat{n}_p\times\partial_t\hat{\mathbf{A}}_{qe}$ is equal to an equivalent magnetic current density with a positive sign.
Noting that (\ref{eq:sim-A-wave}), (\ref{eq:port-A-wave}), (\ref{eq:sim-F-wave}), and (\ref{eq:port-F-wave}) are the expected wave equations in each region for the radiation gauge, it can be concluded that the equations of motion for the electromagnetic fields are simply the quantum Maxwell's equations for each region with the necessary sources added \cite{chew2016quantum2}. For the simulation domain, this gives
\begin{multline}
\nabla\times\hat{\mathbf{H}}_q(\mathbf{r},t) - \partial_t \hat{\mathbf{D}}_q(\mathbf{r},t) \\ = \hat{\mathbf{J}}_t(\mathbf{r},t) + \sum_{p \in \mathcal{P}_M} \hat{\mathbf{J}}_{p}(\mathbf{r},t)
\end{multline}
\begin{align}
\nabla\times\hat{\mathbf{E}}_q(\mathbf{r},t) + \partial_t \hat{\mathbf{B}}_q(\mathbf{r},t) = - \sum_{p \in \mathcal{P}_E} \hat{\mathbf{M}}_{p}(\mathbf{r},t)
\end{align}
\begin{align}
\nabla \cdot \hat{\mathbf{D}}_q(\mathbf{r},t) = 0
\end{align}
\begin{align}
\nabla \cdot \hat{\mathbf{B}}_q(\mathbf{r},t) = 0
\end{align}
where the port current densities are
\begin{align}
\hat{\mathbf{J}}_p(\mathbf{r},t) = \hat{n}_p \times \hat{\mathbf{H}}_p(\mathbf{r},t), \,\, \,\,\,\,\, p \in \mathcal{P}_M,
\end{align}
\begin{align}
\hat{\mathbf{M}}_p(\mathbf{r},t) = -\hat{n}_p \times \hat{\mathbf{E}}_p(\mathbf{r},t), \,\, \,\,\,\,\, p \in \mathcal{P}_E.
\end{align}
Similarly, the quantum Maxwell's equations for a single port region are
\begin{align}
\nabla\times\hat{\mathbf{H}}_p(\mathbf{r},t) - \partial_t \hat{\mathbf{D}}_p(\mathbf{r},t) = \hat{\mathbf{J}}_{q}(\mathbf{r},t)
\end{align}
\begin{align}
\nabla\times\hat{\mathbf{E}}_p(\mathbf{r},t) + \partial_t \hat{\mathbf{B}}_p(\mathbf{r},t) = - \hat{\mathbf{M}}_{q}(\mathbf{r},t)
\end{align}
\begin{align}
\nabla \cdot \hat{\mathbf{D}}_p(\mathbf{r},t) = 0
\end{align}
\begin{align}
\nabla \cdot \hat{\mathbf{B}}_p(\mathbf{r},t) = 0
\end{align}
where the port current densities are
\begin{align}
\hat{\mathbf{J}}_q(\mathbf{r},t) = -\hat{n}_p \times \hat{\mathbf{H}}_q(\mathbf{r},t), \,\, \,\,\,\,\, p \in \mathcal{P}_E,
\end{align}
\begin{align}
\hat{\mathbf{M}}_q(\mathbf{r},t) = \hat{n}_p \times \hat{\mathbf{E}}_q(\mathbf{r},t), \,\, \,\,\,\,\, p \in \mathcal{P}_M.
\end{align}
Note that due to the radiation gauge used in this work, the port current densities are all solenoidal and as a result there are no quantum charge densities associated with these currents.
Maxwell's equations can be used to form wave equations for $\hat{\mathbf{E}}$. In the simulation domain, this gives
\begin{multline}
\nabla\times\nabla\times\hat{\mathbf{E}}_q(\mathbf{r},t) + \mu\epsilon\partial_t^2\hat{\mathbf{E}}_q(\mathbf{r},t) = -\mu \partial_t \hat{\mathbf{J}}_t(\mathbf{r},t) \\ -\mu \sum_{p\in\mathcal{P}_M} \partial_t \hat{\mathbf{J}}_p(\mathbf{r},t) - \sum_{p\in\mathcal{P}_E} \nabla\times \hat{\mathbf{M}}_p(\mathbf{r},t),
\label{eq:q-Sim-wave}
\end{multline}
while in a particular port region we have
\begin{multline}
\nabla\times\nabla\times\hat{\mathbf{E}}_p(\mathbf{r},t) + \mu\epsilon\partial_t^2\hat{\mathbf{E}}_p(\mathbf{r},t) \\ = -\mu \partial_t \hat{\mathbf{J}}_q(\mathbf{r},t) - \nabla\times \hat{\mathbf{M}}_q(\mathbf{r},t).
\label{eq:q-Port-wave}
\end{multline}
In general, only one set of sources will be present in (\ref{eq:q-Port-wave}) depending on whether $p\in \mathcal{P}_E$ or $p\in\mathcal{P}_M$.
With appropriate wave equations developed, different modeling strategies can be devised to solve them. For instance, a quantum finite-difference time-domain solver could be used \cite{na2020quantum2}. Alternatively, eigenmodes of the electromagnetic system can be found numerically and used in a quantum information preserving numerical framework \cite{na2020quantum}. In certain cases, the current densities can be treated as impressed sources so that a dyadic Green's function approach can be used to propagate the quantum information \cite{chew2016quantum2}. We will demonstrate this process for a circuit QED system in our future work.
Although full-wave numerical models are of primary interest for practical applications, the development of simpler problems that can be solved using analytical methods is also important. The solutions to these test problems can help validate different full-wave modeling techniques and serve as a useful pedagogical tool for learning how this new formalism can be applied to real-world problems.
\section{Conclusion}
\label{sec:conclusion}
In this work, we have provided a new look at how circuit QED systems using transmon qubits can be described mathematically. Expressed in terms of three-dimensional vector fields, this new approach is well-suited to developing numerical models that can leverage the latest developments in computational electromagnetics research. We have also demonstrated how our new model is consistent with the simpler circuit-based descriptions often used in the literature. Using our new model, we derived the quantum equations of motion applicable to the coupled field-transmon system. Developing solution strategies for this kind of coupled quantum system is an area of active research interest. Numerical methods in this area have the potential to greatly benefit the overall field of circuit QED, and correspondingly, the development of new kinds of quantum information processing hardware.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document}
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
\if11
{
\title{\bf Transformed-linear prediction for extremes}
\author{Jeongjin Lee\thanks{
Jeongjin Lee and Daniel Cooley were partially supported by US National Science Foundation Grant DMS-1811657.}
\hspace{.2cm}\\
Department of Statistics, Colorado State University\\
and \\
Daniel Cooley \\
Department of Statistics, Colorado State University}
\maketitle
} \fi
\if01
{
\begin{center}
{\LARGE\bf Transformed-linear prediction for extremes}
\end{center}
} \fi
\begin{abstract}
We consider the problem of performing prediction when observed values are at their highest levels.
We construct an inner product space of nonnegative random variables from transformed-linear combinations of independent regularly varying random variables.
Under a reasonable modeling assumption, the matrix of inner products corresponds to the tail pairwise dependence matrix, which summarizes tail dependence.
The projection theorem yields the optimal transformed-linear predictor, which has the same form as the best linear unbiased predictor in non-extreme prediction.
We also construct prediction intervals based on the geometry of regular variation.
We show that these intervals have good coverage in a simulation study as well as in two applications: prediction of high pollution levels, and prediction of large financial losses.
\end{abstract}
\noindent
{\it Keywords: }Multivariate Regular Variation, Projection Theorem, Tail Pairwise Dependence Matrix, Air Pollution, Financial Risk
\spacingset{1.9}
\section{Introduction}
\label{sec:intro}
Prediction of unobserved quantities is a common objective of statistical analyses.
Figure \ref{fig: washingtonDC} shows the one-hour maximum measurements of the air pollutant nitrogen dioxide (NO$_2$) in parts per billion for four monitoring stations in the Washington DC area on January 23, 2020.
Given these measurements, it is natural to ask what the predicted level would be at a nearby unmonitored location such as Alexandria VA, which is marked ``Alx'' in Figure \ref{fig: washingtonDC} and which had NO$_2$ monitoring prior to 2015. What makes this particular day interesting is that measurements are at very high levels; each measurement exceeds its station's empirical 0.98 quantile for the year, and the Arlington station (Arl) is recording its highest measurement for the year.
We propose a linear prediction method which is designed specifically for when observed values are at extreme levels and which is based on a framework from extreme value analysis.
If the joint distribution of all variates were known, the conditional distribution would provide complete information about the variate of interest given the observed values.
The air pollution data's distribution is not known, is clearly non-Gaussian, and there is no clear choice for a candidate joint distribution.
Further, extreme value analysis would caution against using a model that had been fit to the entire data set to describe joint tail behavior.
Linear methods, such as kriging in spatial statistics, offer a straightforward predictor by simply applying weights to each of the observations.
Linear prediction methods do not require specification of the joint distribution and instead provide the best (in terms of mean square prediction error, MSPE) linear unbiased prediction (BLUP) weights given only the covariance structure between the observed and unobserved measurements.
Uncertainty is often summarized by MSPE and prediction intervals are commonly based on Gaussian assumptions.
However, covariance could be a poor descriptor of tail dependence, and Gaussian assumptions may be poorly suited to describe uncertainty in the tail.
\begin{figure}
\caption{Maximum NO$_2$ measurements for January 23, 2020. All observations are above the empirical .98 quantile for each location.}
\label{fig: washingtonDC}
\end{figure}
In this work, we propose an extremal prediction method which is similar in spirit to familiar linear prediction.
We will analyze only data which are extreme.
To provide a framework for modeling dependence in the upper tail, we rely on regular variation on the positive orthant.
Modeling in the positive orthant allows our method to focus only on the upper tail, which is assumed to be the direction of interest; in this example we are interested in predicting when pollution levels are high.
On the way to developing our prediction method, we will construct a vector space of non-negative regularly-varying random vectors arising from transformed-linear operations.
We summarize pairwise tail dependencies in a matrix which has properties analogous to a covariance matrix.
Our transformed-linear predictor has a similar form to the BLUP in non-extreme linear prediction.
Rather than being based on the elliptical geometry underlying standard linear prediction, uncertainty quantification is based on the polar geometry of regular variation.
We will show that our method has good coverage when applied to the Washington air pollution data and also when applied to a higher dimensional financial data set.
\section{Background}
\label{sec:background}
\subsection{Regular variation on the positive orthant}
Informally, a multivariate regularly varying random variable has a distribution which is jointly heavy tailed.
Regular variation is closely tied to classical extreme value analysis \citep[][Appendix B]{deHaan2007}, and \cite{resnick2007} gives a comprehensive treatment.
Let $\bm{X}$ be a $p$-dimensional random vector that takes values in $\mathbb{R}_{+}^{p}=[0,\bm{\infty})^p$.
$\bm{X}$ is regularly varying (denoted $RV_+^p(\alpha)$) if there exists a function $b(s) \rightarrow \infty$ as $s \rightarrow \infty$ and a non-degenerate limit measure $\nu_{\bm X}$ for sets in $[0,\infty)^{p} \setminus \{\bm{0}\}$ such that
\begin{equation}
\label{eq: regVar1}
s\operatorname{P}(b(s)^{-1}\bm{X}\in \cdot)\xrightarrow{v} \nu_{\bm{X}}(\cdot)
\end{equation}
as $s\rightarrow\infty$, where $\xrightarrow{v}$ indicates vague convergence
in the space of non-negative Radon measures on $[0,\bm{\infty}]^{p}\setminus\{\bm{0}\}$.
The normalizing function is of the form $b(s)=U(s) s^{1/\alpha}$
where $U(s)$ is a slowly varying function, and $\alpha$ is termed the tail index.
For any set $C \subset [0,\bm{\infty}]^{p}\setminus\{\bm{0}\}$ and $k > 0$, the measure has the scaling property $\nu_{\bm{X}}(kC)=k^{-\alpha}\nu_{\bm{X}}(C)$.
This scaling property implies regular variation can be more easily understood in a polar geometry.
Given any norm, $r>0$, and Borel set $B\subset \Theta_{+}^{p-1}=\{\bm{x}\in \mathbb{R}_{+}^{p}:||\bm{x}||=1\}$, the set $C(r,B)=\{\bm{x}\in \mathbb{R}_{+}^{p}:||\bm{x}||>r, \bm{x}/||\bm{x}|| \in B\}$ has measure $\nu_{\bm{X}}{(C(r,B))}=r^{-\alpha}H_{\bm{X}}(B)$, where $H_{\bm X}$ is a measure on $\Theta_{+}^{p-1}$.
The angular measure $H_{\bm X}$ fully describes tail dependence in the limit; however, modeling $H_{\bm X}$ even in moderate dimensions is difficult.
The measure's intensity function in terms of polar coordinates is
\begin{equation}
\label{eq:nu}
\nu_{\bm{X}}(\mathrm{d} r\times \mathrm{d}\bm w)=\alpha r^{-\alpha-1}\mathrm{d} r\mathrm{d} H_{\bm{X}}(\bm w).
\end{equation}
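As a simple illustration of (\ref{eq:nu}) and of the scaling property $\nu_{\bm X}(kC)=k^{-\alpha}\nu_{\bm X}(C)$, the following sketch (a toy example with an assumed uniform angular density, not a model used later in the paper) simulates a bivariate vector in polar form and checks that $r^{\alpha}\operatorname{P}(\|\bm X\|_2>r,\,\bm X/\|\bm X\|_2\in B)$ is roughly constant in $r$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, n = 2.0, 200_000

R = rng.uniform(size=n) ** (-1 / alpha)         # P(R > r) = r^{-alpha}, r >= 1
theta = rng.uniform(0, np.pi / 2, size=n)       # assumed uniform angular density
W = np.column_stack([np.cos(theta), np.sin(theta)])   # ||W||_2 = 1
X = R[:, None] * W                              # polar construction in RV_+^2(alpha)

B = theta < np.pi / 4                           # an angular set with mass 1/2 here
for r in (2.0, 4.0, 8.0):
    print(r, np.mean((R > r) & B) * r ** alpha) # approximately constant (about 0.5)
\end{verbatim}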
\subsection{Transformed linear operations}
In order to perform linear-like operations for vectors in the positive orthant, \cite{cooley2019decompositions} defined transformed linear operations.
Consider $\bm x \in \mathbb{R}_{+}^{p}=[0,\infty)^{p}$, and let $t$ be a monotone bijection mapping from $\mathbb{R}$ to $\mathbb{R}_{+}$, with $t^{-1}$ its inverse.
For $\bm y \in \mathbb{R}^{p}$, $t(\bm y)$ applies the transform componentwise.
For $\bm{x}_1$ and $\bm{x}_2 \in \mathbb{R}_{+}^{p}=[0,\infty)^{p}$, define vector addition as $\bm{x}_1 \oplus \bm{x}_2 = t\{t^{-1}(\bm{x}_1)+t^{-1}(\bm{x}_2)\}$ and define scalar multiplication as $a\circ \bm{x}_1=t\{at^{-1}(\bm{x}_1)\}$ for $a\in \mathbb{R}$.
It is straightforward to show that $\mathbb{R}_{+}^{p}$ with these transformed-linear operations is a vector space as it is isomorphic to $\mathbb{R}^{p}$ with standard operations.
To apply transformed linear operations to non-negative regularly-varying random vectors, \cite{cooley2019decompositions} consider the softplus function $t(y)=\log\{1+\exp(y)\}$,
whose important property is $\lim\limits_{y\to\infty} t(y)/y=\lim\limits_{x\to\infty} t^{-1}(x)/x=1$.
Because $t$ negligibly affects large values, regular variation in the upper tail is preserved when $t$ is used to define transformed-linear operations on regularly-varying random vectors.
More precisely,
assume $\bm X_i$ is regularly varying as in (\ref{eq: regVar1}) with limit measure $\nu_{\bm X_i}(\cdot)$, $i=1,2$.
Further assume that the marginals meet the lower tail condition
$s\operatorname{P}\{X_{i,j} \le \exp(-kb(s))\}\to 0, \mbox{ as } s\rightarrow\infty$,
$j=1,\cdots,p$, for all $k > 0$.
This lower tail condition is specific to $t$ and is required to guarantee that $\operatorname{P}(X_{i,j} < x) \rightarrow 0$ as $x \rightarrow 0$ fast enough so that when $a < 0$, $a \circ \bm X_i$ does not affect the upper tail; it is met by common regularly varying distributions like the Fr\'echet and Pareto.
Applying transformed linear operations, if $\bm X_1, \bm X_2$ are independent,
\begin{eqnarray}
\label{eq:transPlus}
s\operatorname{P}(b(s)^{-1}(\bm X_1 \oplus \bm X_2)\in \cdot) &\xrightarrow{\nu}& \nu_{\bm X_1}(\cdot)+\nu_{\bm X_2}(\cdot)\mbox{; and}\\
\label{eq:transMult}
s\operatorname{P}(b(s)^{-1}(a\circ \bm X_1)\in \cdot) &\xrightarrow{v}&
\left \{
\begin{array}{l}
a^{\alpha}\nu_{\bm X_1}(\cdot) \mbox{ if } a>0\mbox{, and}\\
0 \mbox{ if } a \le 0.
\end{array}
\right.
\end{eqnarray}
Other transforms with the same limiting properties and with appropriately adjusted lower tail condition could be used in place of $t$.
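For concreteness, a direct implementation of the transformed-linear operations with the softplus transform is sketched below; the example vectors are arbitrary, and $t$ and $t^{-1}$ are written in numerically stable forms.
\begin{verbatim}
import numpy as np

def t(y):
    """Softplus transform t(y) = log(1 + exp(y)), written stably."""
    return np.logaddexp(0.0, y)

def t_inv(x):
    """Inverse softplus t^{-1}(x) = log(exp(x) - 1), for x > 0."""
    return x + np.log1p(-np.exp(-x))

def oplus(x1, x2):
    """Transformed vector addition: t(t^{-1}(x1) + t^{-1}(x2))."""
    return t(t_inv(x1) + t_inv(x2))

def circ(a, x):
    """Transformed scalar multiplication: t(a * t^{-1}(x))."""
    return t(a * t_inv(x))

x1, x2 = np.array([5.0, 0.2]), np.array([1.0, 8.0])
print(oplus(x1, x2))   # large components add approximately as usual
print(circ(2.0, x1))   # large components roughly double; small ones shrink toward 0
\end{verbatim}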
\cite{cooley2019decompositions} go on to construct $\bm X \in RV_+^p(\alpha)$ via transformed linear combinations of independent regularly varying random variables.
Let $A = (\bm a_1, \ldots, \bm a_q)$, where $\bm a_j \in \mathbb{R}^p$ and hence $A \in \mathbb{R}^{p \times q}$.
Let
\begin{equation}
\label{eq:linConst}
\bm X = A \circ \bm Z = t(A t^{-1}(\bm Z)),
\end{equation}
where
$\bm Z = (Z_1, \ldots Z_q)^\top$ is a vector of independent regularly varying random variables where $s\operatorname{P}(b(s)^{-1} Z_{j} > z) \to z^{-\alpha}$ and $Z_j$ meets the aforementioned lower tail condition.
$\bm X$ is regularly varying
with angular measure
\begin{equation}
\label{eq: discreteAngMsr}
H_{\bm X}(\cdot) = \sum_{j = 1}^q \| \bm a^{(0)}_{j} \|^\alpha \delta_{ \bm a^{(0)}_{j} / \| \bm a^{(0)}_{j} \|}(\cdot),
\end{equation}
where $\delta$ is the Dirac mass function.
The zero operation $a^{(0)} := \max(a, 0)$ will be important throughout, and is understood to be componentwise when applied to vectors or matrices.
As $q \rightarrow \infty$ the class of angular measures resulting from this construction method is dense in the class of possible angular measures \citep{cooley2019decompositions}, and one only needs to consider nonnegative matrices $A$ to construct the dense class.
\subsection{Tail Pairwise Dependence Matrix}
For a general $\bm X \in RV_+^p(\alpha)$, if $p$ is even moderately large, it is challenging to describe the angular measure $H_{\bm X}$.
Rather than fully characterize $H_{\bm X}$, we will summarize tail dependence via the tail pairwise dependence matrix (TPDM), a matrix of pairwise summary measures.
Let $\alpha = 2$ and let $\bm X \in RV_+^p(2)$ have angular measure $H_{\bm X}$.
Let $\Sigma_{\bm X} =\{ \sigma_{\bm X_{ij}}\}_{i,j=1,\cdots,p}$ be the $p \times p$ matrix where
\begin{equation}
\label{eq: TPDM}
\sigma_{\bm X_{ik}}=\int_{\Theta_{+}^{p-1}} {w_{i}w_{k}} \mathrm{d} H_{\bm{X}}(w),
\end{equation}
and $\Theta_{+}^{p-1}=\{\bm{x}\in \mathbb{R}_{+}^{p}:\|\bm{x}\|_{2}=1\}$.
Each element $\sigma_{\bm X_{ij}}$ is an extremal dependence measure of \cite{larsson2012}; however, we do not require $H_{\bm X}$ to be a probability measure.
As (\ref{eq: TPDM}) resembles a second moment, it is not surprising that it has some properties similar to a covariance matrix.
Most importantly, $\Sigma_{\bm X}$ can be shown to be positive semi-definite \citep{cooley2019decompositions}.
Also, the diagonal elements ${\sigma_{\bm X}}_{ii}$ reflect the relative magnitudes of the respective elements $X_i$, as (\ref{eq:nu}) implies
$\lim_{s \rightarrow\infty}s\operatorname{P}(b(s)^{-1} X_i >c)
=\int_{\Theta_{+}^{p-1}}\int_{c/w_i}^{\infty}2r^{-3}\mathrm{d} r\,\mathrm{d} H_{\bm{X}}(w)=c^{-2}\int_{\Theta_{+}^{p-1}}w_{i}^{2}\mathrm{d} H_{\bm{X}}(w)=c^{-2}{\sigma_{\bm X}}_{ii}.
$
Letting $x = cU(s)s^{1/2}$, there is a corresponding slowly varying function $L$ such that the relation can be rewritten as
\begin{equation}
\label{eq:tailRatio}
\lim_{x \rightarrow \infty} \frac{\operatorname{P}(X_i > x)}{x^{-2} L(x)} = {\sigma_{\bm X}}_{ii}.
\end{equation}
So the `magnitude' of the elements of $\bm X$ described by the diagonal elements of the TPDM is in terms of suitably-normalized tail probabilities rather than variance.
The presence of the slowly varying function $L(x)$ in the denominator means it is ambiguous to discuss the `scale' of a regularly varying random variable, as scale information is in both the normalizing sequence and the angular measure (and consequently, TPDM).
Because the notion of `scale' is inherent in principal component analysis, \cite{cooley2019decompositions} further assumed that $\bm X$ was Pareto-tailed, making $L(x)$ a constant that was pushed into the angular measure $H_{\bm X}$ and subsequently into $\Sigma_{\bm X}$.
Here, we will not require a Pareto tail, and the random variables we will construct in Section \ref{sec:innerProductSpace}
will have a natural normalizing function.
\cite{cooley2019decompositions} choose $\alpha = 2$ because the TPDM has a convenient form for random vectors defined as in (\ref{eq:linConst}).
With the angular measure in (\ref{eq: discreteAngMsr}), $\sigma_{\bm X_{ik}} = \sum_{j = 1}^q a_{ij}^{(0)}a_{kj}^{(0)}$ and $\Sigma_{\bm A \circ \bm Z} = A^{(0)} {A^{(0)}}^\top.$
\cite{kiriliouk2022} recently generalized the TPDM for any $\alpha > 0$ by allowing the integrand to depend on $\alpha$.
For the inner product space we introduce in Section \ref{sec:innerProductSpace}, we will continue to assume $\alpha = 2$.
Additionally, for any $\bm X \in RV_+^p(2)$, $\Sigma_{\bm X}$ is completely positive; that is, there exists $q_* < \infty$ and nonnegative $p \times q_*$ matrix $A$ such that $\Sigma_{\bm X} = A A^T$ \citep[][Proposition 5]{cooley2019decompositions}.
The value of $q_*$ is not known, and $A$ is not unique.
This property implies that given any TPDM, one can find a nonnegative matrix $A$ such that $A \circ \bm Z$ has that TPDM, and in Section \ref{sec:UQ}, we will use this completely positive decomposition to create prediction intervals.
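The following sketch illustrates the construction (\ref{eq:linConst}) and the identity $\Sigma_{A\circ\bm Z}=A^{(0)}{A^{(0)}}^\top$ for an assumed $3\times 2$ coefficient matrix, together with an empirical check of the tail ratio (\ref{eq:tailRatio}); the matrix and the Pareto noise are chosen purely for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
t = lambda y: np.logaddexp(0.0, y)               # softplus
t_inv = lambda x: x + np.log1p(-np.exp(-x))      # its inverse (x > 0)

# Assumed illustrative nonnegative coefficient matrix (p = 3, q = 2)
A = np.array([[1.0, 0.0],
              [0.7, 0.7],
              [0.0, 1.0]])

# Theoretical TPDM of X = A o Z for alpha = 2: Sigma = A^(0) A^(0)^T
Sigma = np.clip(A, 0, None) @ np.clip(A, 0, None).T
print(Sigma)

# Construct X = t(A t^{-1}(Z)) with iid Pareto(2) noise (so L(x) -> 1)
n = 200_000
Z = rng.uniform(size=(2, n)) ** (-1 / 2)         # P(Z > z) = z^{-2}, z >= 1
X = t(A @ t_inv(Z))

x0 = 50.0                                        # empirical tail-ratio check
print(np.mean(X[1] > x0) * x0 ** 2, Sigma[1, 1]) # roughly equal for large x0
\end{verbatim}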
\section{Inner product space and prediction}
\label{sec:innerProductSpace}
\subsection{Inner product space $\mathcal{V}^q$}
We consider a space of regularly varying random variables constructed from transformed-linear combinations.
We assume $\alpha = 2$ to obtain an inner product space.
Let $\bm Z = (Z_{1}, \ldots, Z_{q})^\top$ be a vector of independent $Z_{j} \in RV_{+}^{1}(2)$ meeting the lower tail condition,
$s\operatorname{P}(Z_{j}\le \exp(-kb(s)))\rightarrow 0$ as $s\rightarrow \infty$ for all $k>0$.
Define $L(z)$ such that $\lim_{z \rightarrow \infty} \frac{\operatorname{P}(Z_j > z)}{z^{-2} L(z)} = 1$ for all $j = 1, \ldots, q$.
For $\bm a \in \mathbb{R}^q$, consider the subspace of $RV_+^1(2)$
\begin{equation}
\label{eq:V}
\mathcal{V}^q = \big\{ X ; X =
\bm a^\top \circ \bm Z = a_{1} \circ Z_{1} \oplus \cdots \oplus a_{q} \circ Z_{q} \big\}.
\end{equation}
If $X_1 = \bm a_1^\top \circ \bm Z$ and $X_2 = \bm a_2^\top \circ \bm Z$, then $X_1 \oplus X_2 = (\bm a_1 + \bm a_2)^\top \circ \bm Z$.
Also, $c \circ X_1 = c \bm a_1^\top \circ \bm Z$ for $c \in \mathbb{R}$.
$\mathcal{V}^q$ is isomorphic to $\mathbb{R}^q$ as any $X \in \mathcal{V}^q$ is uniquely identifiable by its vector of coefficients $\bm a$.
Like $\mathbb{R}^{q},$ $\mathcal{V}^{q}$ is complete and thus is a Hilbert space \citep{lee2022phd}.
$\mathcal{V}^q$ differs from the vector space in \cite{cooley2019decompositions} which was non-stochastic.
We define the inner product of $X_{1} = \bm a_1^\top \circ \bm Z$ and $X_2=\bm a_2^\top \circ \bm Z$ as
\begin{equation*}
\langle X_{1}, X_{2} \rangle := \bm a_1^\top \bm a_2 = \sum_{i=1}^{q}a_{1i}a_{2i}.
\end{equation*}
We say $X_1, X_2 \in \mathcal{V}^q$ are orthogonal if $\langle X_1, X_2 \rangle = 0$.
The norm is defined as $\| X \|_{\mathcal{V}^{q}} = \sqrt{ \langle X, X \rangle}$, whose subscript $\mathcal{V}^{q}$ distinguishes this norm based on the random variable's coefficients from the usual Euclidean norm.
The norm
defines a metric
$
d(X_1, X_2) = \| X_1 \ominus X_2 \|_{\mathcal{V}^{q}}=\sqrt{\sum_{i=1}^{q}(a_{1i}-a_{2i})^2},
$
which we will further interpret in Section \ref{sec:Vplus}.
Considering vectors $\bm X = (X_1, \ldots, X_p)^\top$ where $X_i = \bm a_i^\top \circ \bm Z \in \mathcal{V}^q$ for $i = 1, \ldots, p$,
we have $\bm X \in RV_+^p(2)$ of the form $A \circ \bm Z$ in (\ref{eq:linConst}).
We denote the matrix of inner products
\begin{equation}
\label{eq:ipMatrix}
\Gamma_{\bm X} = \langle X_i, X_j \rangle_{i,j = 1, \ldots p} = A A^\top.
\end{equation}
We will relate $\Gamma_{\bm X}$ for $X_i$ in $\mathcal{V}^q$ to the TPDM $\Sigma_{\bm X}$ for general $\bm X \in RV_+^p(2)$ in Section \ref{sec:Vplus}.
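The operations and the geometry above can be sketched in a few lines of numpy. The sketch below is our own illustration; it assumes that the transform $t$ underlying the transformed-linear operations is the softplus $t(y)=\log(1+e^y)$ of \cite{cooley2019decompositions}, and the coefficient vectors are arbitrary.
\begin{verbatim}
import numpy as np

def t(y):       # softplus transform (assumed form of t)
    return np.logaddexp(0.0, y)

def t_inv(x):   # inverse transform, valid for x > 0
    return x + np.log1p(-np.exp(-x))

def circ(a, x):       # a o X, applied to realizations x of X
    return t(a * t_inv(x))

def oplus(x1, x2):    # X1 (+) X2, applied to realizations
    return t(t_inv(x1) + t_inv(x2))

# elements of V^q are identified with their coefficient vectors,
# so the inner product, norm, and metric act on the coefficients
def inner(a1, a2):
    return float(a1 @ a2)

def dist(a1, a2):
    return float(np.linalg.norm(a1 - a2))

a1 = np.array([1.0, 0.5, 0.0])
a2 = np.array([0.2, 0.5, 1.0])
z = np.array([1.5, 2.0, 3.0])                 # one realization of Z
print(circ(2.0, z[0]), oplus(z[0], z[1]))     # transformed-linear operations
print(inner(a1, a2), dist(a1, a2))            # geometry of V^q
\end{verbatim}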
\subsection{Transformed-linear prediction}
\label{sec:transLinearPred}
As $\mathcal{V}^q$ is isomorphic to the Hilbert space $\mathbb{R}^q$, the best transformed-linear predictor is obtained in the same way as in classical linear prediction.
Assume $X_i = \bm a_i^\top \circ \bm Z \in \mathcal{V}^q$ for $i = 1, \ldots, p+1$. Let $\bm X_p = (X_1, \ldots, X_p)^\top$.
We aim to find $\bm b \in \mathbb{R}^p$ such that $d(\bm b^\top \circ \bm X_p, X_{p+1})$ is minimized.
Writing in matrix form
\begin{equation*}
\begin{bmatrix}
\bm{X}_{p}\\
X_{p+1}
\end{bmatrix}
=
\begin{bmatrix}
A_p\\
\bm{a}_{p+1}^\top
\end{bmatrix}
\circ \bm{Z},
\end{equation*}
where $A_{p}= (\bm a_1^\top, \ldots, \bm a_p^\top)^\top$.
The matrix of inner products of $(\bm{X}_{p}^\top, X_{p+1})^\top$ is
\begin{equation}\label{eq:GammaMatrices}
\Gamma_{(\bm{X}_{p}^\top, X_{p+1})^\top}
=
\begin{bmatrix}
A_{p}A_{p}^\top & A_{p}\bm{a}_{p+1}\\
\bm{a}_{p+1}^\top A_{p}^\top & \bm{a}_{p+1}^\top\bm{a}_{p+1}
\end{bmatrix}
:=
\begin{bmatrix}
\Gamma_{11} & \Gamma_{12}\\
\Gamma_{21} & \Gamma_{22}
\end{bmatrix}.
\end{equation}
Minimizing $d(\bm b^\top \circ \bm X_p, X_{p+1})$ is equivalent to minimizing $\|A_{p}^\top\bm{b}-\bm{a}_{p+1}\|_{2}^{2}$.
Taking derivatives with respect to $\bm b$ and setting equal to zero, the minimizer $\hat {\bm b}$ solves
$(A_{p}A_{p}^\top)\hat{\bm b}=A_{p}\bm{a}_{p+1}$.
If $A_{p}A_{p}^\top$ is invertible, then the solution $\hat{\bm{b}}$ is,
\begin{equation}\label{eq: bHat}
\hat{\bm{b}}=(A_{p}A_{p}^\top)^{-1}A_{p} \bm{a}_{p+1}=\Gamma_{11}^{-1}\Gamma_{12}.
\end{equation}
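A minimal sketch of this computation (our own illustration; the coefficient matrix is random and hypothetical) partitions the inner product matrix and solves the normal equations for $\hat{\bm b}$.
\begin{verbatim}
import numpy as np

def transformed_linear_weights(Gamma):
    """Return b_hat = Gamma_11^{-1} Gamma_12, predicting the last
    component from the others, as in (eq: bHat)."""
    G11, G12 = Gamma[:-1, :-1], Gamma[:-1, -1]
    return np.linalg.solve(G11, G12)

rng = np.random.default_rng(1)
A = rng.uniform(0, 1, size=(5, 8))   # rows are a_1, ..., a_{p+1}
Gamma = A @ A.T                      # matrix of inner products
print(transformed_linear_weights(Gamma))
\end{verbatim}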
An equivalent way to think of the best transformed-linear prediction is through the projection theorem.
$\hat{X}_{p+1}$ is
such that $X_{p+1}\ominus\hat{X}_{p+1}$ is orthogonal to the plane spanned by $X_1,\ldots, X_p$.
The orthogonality condition can be stated as $\langle X_{p+1}\ominus\hat{X}_{p+1},X_i\rangle=0$, for $i=1,\ldots,p$. By linearity of inner products, this can equivalently be expressed as
\begin{align}\label{eq:4}
\begin{bmatrix}
\langle X_{p+1},X_{i}\rangle
\end{bmatrix}_{i=1}^{p}
&=
\begin{bmatrix}
\langle X_{i},X_{j}\rangle
\end{bmatrix}_{i,j=1}^{p}
\begin{bmatrix}
b_{i}
\end{bmatrix}_{i=1}^{p}
=
\begin{bmatrix}
\sum_{k=1}^{q}a_{ik}a_{jk}
\end{bmatrix}_{i,j=1}^{p}
\begin{bmatrix}
b_{i}
\end{bmatrix}_{i=1}^{p}.
\end{align}
By (\ref{eq:GammaMatrices}), $\hat {\bm b}$ satisfies $A_{p}\bm{a}_{p+1}=A_{p}A_{p}^\top\hat{\bm b}$ as above.
\section{Modeling and Subset $\mathcal{V}_+^q$}
\label{sec:Vplus}
At this point we can employ transformed-linear operations to construct regularly-varying random vectors $\bm X = A \circ \bm Z$ that take values in the positive orthant and whose elements are in the vector space $\mathcal{V}^q$.
While it is essential that the elements of the coefficient vectors $\bm a$ are allowed to be negative for $\mathcal{V}^q$ to be a vector space, these negative elements can feel largely academic as they do not influence tail behavior.
The magnitude (as in \eqref{eq:tailRatio}) of $X \in \mathcal{V}^q$ can be understood in terms of the generating $Z_j$'s.
Using the fact that $\operatorname{P}(Z_1 + Z_2 > z) \sim \operatorname{P}(Z_1 > z) + \operatorname{P}(Z_2 > z)$ as $z \rightarrow \infty$ when $Z_1, Z_2$ are independent \citep[cf.][Lemma 3.1]{jessen2006},
we call
$$
TR(X) := \lim_{z \rightarrow \infty} \frac{\operatorname{P}(X > z)}{\operatorname{P}(Z_1 > z)} = \sum_{j = 1}^q {(a_j^{(0)})}^2
$$
the tail ratio of $X$; only the positive elements of $\bm a$ contribute to it.
The random variables $X = \bm a^\top \circ \bm Z$ and $X_+ = \bm a^{(0)^\top} \circ \bm Z$ have the same tail ratio.
Furthermore, if $\bm X = A \circ \bm Z$, both it and $\bm X_+ = A^{(0)} \circ \bm Z$ have the same angular measure: $H_{\bm X} = H_{\bm X_+} = \sum_{j = 1}^q \| a^{(0)}_{j} \|^2 \delta_{ a^{(0)}_{j} / \| a^{(0)}_{j} \|}(\cdot)$.
$\bm X$ and $\bm X_+$ are indistinguishable in terms of their tail behavior.
In terms of modeling, it seems reasonable to restrict our attention to the subset $\mathcal{V}^q_+ = \big\{ X ; X = \bm a^\top \circ \bm Z = a_{1} \circ Z_{1} \oplus \cdots \oplus a_{q} \circ Z_{q} \big\},$ where $a_j \in [0, \infty)$, and $\bm Z = (Z_{1}, \ldots, Z_{q})^\top$ as in (\ref{eq:V}).
Considering inference for a random vector $\bm X \in RV_+^p$, we assume that $\bm X = A \circ \bm Z$ for some unknown $p \times q$ nonnegative matrix $A$.
Recall such constructions are dense in $RV_+^p$.
Furthermore, we will assume that $p$ is large enough that estimating $H_{\bm X}$ is intractable, so we instead summarize dependence via the TPDM, which is estimable from $\bm X$'s pairwise tail behavior.
Since $X_i \in \mathcal{V}^q_+$, $\Sigma_{\bm X} = \Gamma_{\bm X} = A A^\top$, and we are able to apply the results from Section \ref{sec:innerProductSpace}.
Furthermore, the underlying dimension $q$ is latent and not needed for inference.
Assuming the elements of $\bm X$ are in $\mathcal{V}^q_+$ is not only reasonable, but the results of Section \ref{sec:innerProductSpace} are probably useful only if this assumption is made.
Consider the simple example where
\begin{equation}
\label{eq:reviewerExample}
\bm X =
\left(
\begin{array}{c}
X_1\\
X_2
\end{array}
\right)
=
\left(
\begin{array}{c c}
1 & -10\\
1 & -1
\end{array}
\right)
\circ
\left(
\begin{array}{c}
Z_1\\
Z_2
\end{array}
\right)
= A \circ \bm Z,
\end{equation}
and the $Z_i$ are iid with $\operatorname{P}(Z_i \leq z) = 1 - z^{-2}$ for $z \geq 1$.
The left panel of Figure \ref{fig:reviewerExample} shows realizations $\bm x_t, t = 1, \ldots, 20,000$ from (\ref{eq:reviewerExample}).
Here, the angular measure is given by $H_{\bm X}(\cdot) = 2 \delta_{(1/\sqrt{2}, 1/\sqrt{2})}(\cdot)$, and $X_1$ and $X_2$ exhibit perfect tail dependence in the limit as shown in Figure \ref{fig:reviewerExample}.
However, $\| X_1 \ominus X_2 \|_{\mathcal{V}^q} = 9$, and the non-zero distance between these random variables is hard to reconcile with their perfect tail dependence.
This distance arises from the negative elements in $A$ whose influence is not evident in realizations of $\bm X$, but which can be seen in the preimage $\bm Y = t^{-1}(\bm X)$
shown in the right panel of Figure \ref{fig:reviewerExample}.
Furthermore, applying \eqref{eq: bHat} gives $\hat X_1 = 5.5 \circ X_2$, where the weight 5.5 is the best `average' of the two possible ways that the preimages can be large.
Thus both the norm and the predictor arising from $\mathcal{V}^q$ seem more applicable to the latent preimage space rather than the one observed.
However, given only the data in the left panel of Figure \ref{fig:reviewerExample}, the information in the negative coefficients of $A$ would not be visible.
The TPDM of (\ref{eq:reviewerExample}) is $\Sigma_{\bm X} = {\tiny \begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}}$.
If we assume $X_i \in \mathcal{V}^q_+$ and use the TPDM as the inner product matrix in (\ref{eq: bHat}), $\hat X_1 = 1 \circ X_2$.
Further, note that if $\bm X = A^{(0)}\circ \bm Z$, then $\| X_1 \ominus X_2 \|_{\mathcal{V}^q} = 0$.
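The numbers quoted for this example can be reproduced directly from the coefficient matrix; the short sketch below (our own check) computes the $\Gamma$-based weight 5.5, the TPDM-based weight 1, and the distance 9.
\begin{verbatim}
import numpy as np

A = np.array([[1.0, -10.0],
              [1.0,  -1.0]])        # coefficients of (X1, X2) in (eq:reviewerExample)

Gamma = A @ A.T                     # inner product matrix on V^q
Sigma = np.maximum(A, 0) @ np.maximum(A, 0).T   # TPDM

print(Gamma[0, 1] / Gamma[1, 1])    # 5.5: Gamma-based weight predicting X1 from X2
print(Sigma[0, 1] / Sigma[1, 1])    # 1.0: TPDM-based weight
print(np.linalg.norm(A[0] - A[1]))  # 9.0: d(X1, X2) in V^q
\end{verbatim}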
\begin{figure}
\caption{Left panel: realizations $\bm x_t$ from the model in (\ref{eq:reviewerExample}). Right panel: the corresponding preimages $\bm y_t = t^{-1}(\bm x_t)$.}
\label{fig:reviewerExample}
\end{figure}
Applying transformed-linear prediction in practice, we propose assuming that the elements of $(\bm X_p^\top, X_{p+1})^\top$ are in $\mathcal{V}^q_+$, and using the (estimated) TPDM for prediction: $\hat X_{p+1} = \hat {\bm b}^\top \circ \bm X_p$ where $\hat {\bm b} = \Sigma_{11}^{-1} \Sigma_{12}$.
Although $X_{p+1}$ is assumed to be in $\mathcal{V}^q_+$, the predictor $\hat X_{p+1}$ may not be in this subset as $\hat {\bm b}$ may have negative elements.
We do not see this as a detriment as the transformed-linear operations guarantee $\hat X_{p+1} > 0$ almost surely and the coefficients defining $\hat X_{p+1}$ in $\mathcal{V}^q$ are latent.
The tail ratio allows us to better discuss the meaning of the metric $d(X_1, X_2) = \| X_1 \ominus X_2 \|_{\mathcal{V}^{q}}$.
$TR(X_1 \ominus X_2)$ does not equal $TR(X_2 \ominus X_1)$, except under the unusual circumstance where $\sum_{j = 1}^q \left( (a_{1j} - a_{2j})^{(0)} \right)^2 = \sum_{j = 1}^q \left( (a_{2j} - a_{1j})^{(0)} \right)^2$.
Consider
\begin{equation}
\label{eq:trMax}
TR\left( \max ( X_1 \ominus X_2, X_2 \ominus X_1) \right) =
\lim_{z \rightarrow \infty}
\left (
\frac{
\operatorname{P}(X_1 \ominus X_2 > z)+
\operatorname{P}(X_2 \ominus X_1 > z)-
\operatorname{P}(X_1 \ominus X_2 > z, X_2 \ominus X_1 > z) }
{\operatorname{P}(Z > z)}
\right ).
\end{equation}
Let $Q = \{j \in \{1, \ldots, q\} \mid (a_{1j} - a_{2j})t^{-1}(Z_j)>0 \}$ be the set of indices where the sign of $(a_{1j}-a_{2j})$ is aligned with the sign of $t^{-1}(Z_j)\in RV_1(2)$, and let $Q^{\complement}=\{1,\ldots,q\}\setminus Q$ be its complement; then the numerator's third term can be rewritten as
\begin{eqnarray*}
\lim_{z \rightarrow \infty}
\frac{P(X_1 \ominus X_2 > z, X_2 \ominus X_1 > z)}{P(Z>z)} &=&
\lim_{z \rightarrow \infty}
\frac{P\left(\bigoplus\limits_{j \in Q} (a_{1j} - a_{2j})\circ Z_j > z, \bigoplus\limits_{j \in Q^{c}} (a_{2j} - a_{1j})\circ Z_j > z\right)}{P(Z > z)}.
\end{eqnarray*}
Since $Q \cap Q^{\complement} = \emptyset$, the independence of the $Z_j$'s implies that this limit is zero and
\begin{equation}
\label{eq:tailRatioMetric}
TR\left( \max ( X_1 \ominus X_2, X_2 \ominus X_1) \right) = \sum_{j = 1}^q (a_{1j} - a_{2j})^2 = d^2(X_1, X_2).
\end{equation}
In Section \ref{sec:innerProductSpace}, the metric for $\mathcal{V}^q$ was defined in terms of the random variables' defining coefficients, but that definition is unsatisfying as these coefficients are not visible given realizations of the random variables.
The relationship in (\ref{eq:tailRatioMetric}) explains the metric in terms of a tail ratio, which can be estimated from realizations.
Further, $TR\left( \max ( (X_{p+1} \ominus \hat X_{p+1}), (\hat X_{p+1} \ominus X_{p+1})) \right)$ can be viewed as the risk function which $\hat {\bm b}$ minimizes.
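The interpretation in (\ref{eq:tailRatioMetric}) can be checked by crude Monte Carlo. The sketch below is our own illustration; it again assumes the softplus transform $t(y)=\log(1+e^y)$, unit-Pareto generators with $\operatorname{P}(Z>z)=z^{-2}$, and an arbitrary high threshold, so the tail-ratio estimate is only approximate.
\begin{verbatim}
import numpy as np

def t(y):     return np.logaddexp(0.0, y)
def t_inv(x): return x + np.log1p(-np.exp(-x))

rng = np.random.default_rng(2)
q, n = 3, 1_000_000
a1 = np.array([1.0, 0.5, 0.0])
a2 = np.array([0.2, 0.5, 1.0])

Z = (1.0 - rng.uniform(size=(q, n))) ** -0.5   # iid Pareto: P(Z > z) = z^{-2}, z >= 1
Y = t_inv(Z)                                   # preimages
diff = (a1 - a2) @ Y                           # t^{-1}(X1) - t^{-1}(X2)
D = np.maximum(t(diff), t(-diff))              # max(X1 (-) X2, X2 (-) X1)

z0 = 50.0                                      # high threshold (arbitrary choice)
print(np.mean(D > z0) * z0**2)                 # Monte Carlo tail-ratio estimate
print(np.sum((a1 - a2) ** 2))                  # d^2(X1, X2) = 1.64
\end{verbatim}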
\section{Prediction Error}
\label{sec:UQ}
\subsection{Analogue to Mean Square Prediction Error}
\label{sec:MSPE}
In the non-extreme setting, linear prediction minimizes MSPE.
As MSPE corresponds to the conditional variance under a Gaussian assumption, it is used to generate Gaussian-based prediction intervals.
Similarly, our transformed linear predictor $\hat {\bm b}$ minimizes
\begin{equation}\label{eq:7}
\begin{split}
||\hat{X}_{p+1} \ominus X_{p+1}||^{2}_{\mathcal{V}^{q}}
&=(\hat {\bm{b}}^\top A_{p}-\bm{a}_{p+1}^\top)(\hat {\bm{b}}^\top A_{p}-\bm{a}_{p+1})^\top\\
&=\Sigma_{22}-\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}:=K.\\
\end{split}
\end{equation}
Unlike MSPE, $K$ is not understood via expectation, but instead via tail probabilities as
$
K = TR \left( \max ( (X_{p+1} \ominus \hat X_{p+1}), (\hat X_{p+1} \ominus X_{p+1}) ) \right).
$
However, despite its similarity to MSPE, $K$ on its own is of limited use for constructing prediction intervals.
To illustrate, we simulate $n = 20,000$ four-dimensional vectors $\bm X$ and obtain $\hat X_4$ predicted from $(X_1, X_2, X_3)^\top$.
$\bm X$ is generated from a $4 \times 10$ matrix $A$ applied to a vector $\bm Z$ comprised of 10 independent $RV_+^1(2)$ random variables; the elements of $A$ are drawn from a uniform distribution, then normalized so that each row has norm 1.
Using the known TPDM we obtain $K = 0.224$, and from the known tail behavior of the $Z_j$'s we calculate $\operatorname{P} \left( D \leq 2.99 \right) \approx 0.95$, where $D = \max ( (X_{p+1} \ominus \hat X_{p+1}), (\hat X_{p+1} \ominus X_{p+1}) )$.
We observe 0.952 of the simulated $D$ values are in fact below this bound.
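For reference, the $K$ computation for a setup like the one just described can be sketched as follows (our own illustration; the seed and draws here are arbitrary, so the printed value will not equal 0.224).
\begin{verbatim}
import numpy as np

def schur_K(Sigma):
    """K = Sigma_22 - Sigma_21 Sigma_11^{-1} Sigma_12 for the last component."""
    S11, S12, S22 = Sigma[:-1, :-1], Sigma[:-1, -1], Sigma[-1, -1]
    return S22 - S12 @ np.linalg.solve(S11, S12)

rng = np.random.default_rng(3)
A = rng.uniform(0, 1, size=(4, 10))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # rows normalized to norm 1
print(schur_K(A @ A.T))
\end{verbatim}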
However, Figure \ref{fig:mspe} shows that knowledge of $K$ is not useful for constructing prediction intervals.
Unlike the Gaussian case where the variance of the conditional distribution does not depend on the predicted value $\hat X_{p+1}$,
in the polar geometry of regular variation, the magnitude of the error is related to the size of the predicted value.
In the next sections we use the polar geometry of regular variation to construct meaningful prediction intervals when $\hat X_{p+1}$ is large.
\begin{figure}
\caption{The plot of $D=\max(\hat{X}_{p+1} \ominus X_{p+1}, X_{p+1}\ominus\hat{X}_{p+1})$ versus the predicted value $\hat{X}_{p+1}$ for the simulation in Section \ref{sec:MSPE}.}
\label{fig:mspe}
\end{figure}
\subsection{Prediction inner product matrix and completely positive decomposition}
\label{sec:predTPDM}
To quantify prediction error, we first aim to describe the tail dependence between the predictor $\hat{X}_{p+1}$ and predictand $X_{p+1}$.
The vector $(\hat{X}_{p+1}, X_{p+1})^\top \in RV_+^2(2)$, and this vector's tail dependence is characterized by $H_{(\hat{X}_{p+1}, X_{p+1})^\top}$.
While this angular measure is not readily available, the $2 \times 2$ `prediction' inner product matrix
\begin{equation}\label{eq:predTPDM}
\begin{split}
\Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}
&=
\begin{bmatrix}
\hat{\bm{b}}^\top{A}_p\\
\bm{a}_{p+1}^\top
\end{bmatrix}
\begin{bmatrix}
{A}_{p}^\top\hat{\bm{b}} &
\bm{a}_{p+1}
\end{bmatrix}
=
\begin{bmatrix}
\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12} & \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}\\
\Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12} & \Sigma_{22}
\end{bmatrix}
\end{split},
\end{equation}
can be obtained from the partitioned TPDM,
as we have assumed $X_1, \ldots, X_{p+1} \in \mathcal{V}^q_+$.
We then use complete positivity to find an angular measure constrained by knowledge of $\Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}$.
Although the entries of $\hat{\bm{b}}^\top{A}_p$ are not guaranteed to be nonnegative, the Cholesky decomposition of the $2 \times 2$ prediction inner product matrix yields nonnegative entries and thus $\Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}$ is completely positive.
Given a $q_* \geq 2$, there exist procedures \citep{groetzner2020}
to obtain nonnegative $2 \times q_*$ matrices $B$ such that $B B^\top = \Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}$, and we can then use (\ref{eq: discreteAngMsr}) to construct an angular measure consisting of $q_*$ discrete point masses.
Since the completely positive decomposition is not unique,
there would seem to be incentive to set $q_*$ large, thereby distributing the total mass of the angular measure $H_{B \circ \bm Z}$ into many point masses.
On the other hand, as $q_*$ grows, the procedures for obtaining $B$ require more computation.
We take a practical approach.
We choose $q_*$ to be of moderate size, but apply the procedure repeatedly, obtaining nonnegative $B^{(k)}, k = 1, \ldots, n_{decomp},$ such that $B^{(k)} {B^{(k)}}^\top = \Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}$ for all $k$.
We then set $\hat H_{(\hat{X}_{p+1}, X_{p+1})^\top} = n_{decomp}^{-1} \sum_{k = 1}^{n_{decomp}} H_{B^{(k)} \circ \bm Z}$, and $n_{decomp}^{-1} \sum_{k = 1}^{n_{decomp}} B^{(k)} {B^{(k)}}^\top = \Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}$ as desired.
$\hat H_{(\hat{X}_{p+1}, X_{p+1})^\top}$ consists of $n_{decomp}\times q_*$ point masses.
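One simple way to generate such decompositions, sketched below, is not the procedure of \cite{groetzner2020} but a randomized scheme of our own for the $2\times 2$ case: candidate angles are drawn at random, and nonnegative masses matching the three entries of the prediction inner product matrix are found by nonnegative least squares; the columns of $B$ are then the mass-weighted unit vectors. The matrix in the usage lines is illustrative only.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def cp_factor(Gamma, q_star, rng, tol=1e-6, max_tries=500):
    """Return a nonnegative 2 x q_star matrix B with B B^T = Gamma,
    for a 2 x 2 positive semidefinite Gamma with nonnegative entries."""
    target = np.array([Gamma[0, 0], Gamma[0, 1], Gamma[1, 1]])
    for _ in range(max_tries):
        theta = rng.uniform(0.0, np.pi / 2, size=q_star)
        w = np.vstack([np.cos(theta), np.sin(theta)])      # 2 x q_star unit vectors
        M = np.vstack([w[0]**2, w[0] * w[1], w[1]**2])     # maps masses to Gamma entries
        mass, resid = nnls(M, target)                      # nonnegative masses
        if resid < tol:
            return w * np.sqrt(mass)                       # columns sqrt(m_j) w_j
    raise RuntimeError("no decomposition found; increase q_star or max_tries")

rng = np.random.default_rng(4)
Gamma = np.array([[0.8, 0.8], [0.8, 1.0]])                 # illustrative values only
Bs = [cp_factor(Gamma, q_star=9, rng=rng) for _ in range(5)]
print(np.allclose(Bs[0] @ Bs[0].T, Gamma, atol=1e-5))
\end{verbatim}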
We use a simulation study to illustrate.
We again begin by generating a matrix $A$ whose elements are drawn from a uniform distribution; however this time the dimension of $A$ is $7 \times 400$ and the true angular measure consists of 400 point masses.
We draw 60,000 random realizations of $\bm X = A \circ \bm Z$, and use the first 40,000 as a training set.
The largest 1\% of this training set is used to estimate the seven-dimensional TPDM, from which we obtain $\hat {\bm b}$ and additionally $\hat \Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}$.
We then use the completely positive decomposition to obtain $2 \times 9$ matrices $B^{(k)}, k = 1, \ldots, 51$, resulting in an estimated angular measure $\hat H_{(\hat{X}_{p+1}, X_{p+1})^\top}$ consisting of 459 point masses.
We obtain a 95\% `joint polar region' by drawing bounds at $\bm w_{0.025}$ and $\bm w_{0.975}$, the 0.025
and 0.975 empirical quantiles of the univariate distribution of angles provided by the normalized estimated angular measure.
The left panel of Figure \ref{fig:simStudy} shows the scatterplot of the 20,000 remaining test points $\hat{X}_{p+1}$ and $X_{p+1}$ and the 95\% joint region.
Thresholding at the 0.95 quantile of $\| (\hat{X}_{p+1}, X_{p+1}) \|_{2}$,
we find that 96.3\% of the large values in the test set fall within the joint region.
To informally assess the variability of these quantiles,
we perform the completely positive decomposition under different scenarios on the same data set.
To speak in terms of angles, let $\theta(\bm w) = \arctan(w_2/w_1)$.
For the scenario above, our bounds were $(\theta(\bm w_{0.025}), \theta(\bm w_{0.975}))$ = (0.30, 1.40).
A second completely positive decomposition where $q_* = 6$ and consisting of 510 point masses yielded bounds of (0.29, 1.40), and a third decomposition where $q_* = 7$ and consisting of 560 point masses yielded bounds of (0.33, 1.41).
It seems that constraining $\hat H_{(\hat{X}_{p+1}, X_{p+1})^\top}$ by $\hat \Gamma_{(\hat{X}_{p+1}, X_{p+1})^\top}$ and requiring it to consist of a large enough number of point masses result in bounds with low variability.
If a continuous angular measure is desired, we propose performing a kernel density estimate of the angular masses obtained from the completely positive decomposition.
We use the adjusted boundary bias approach of \cite{marron1994transformations} for the kernel density estimation since the support of $H_{(\hat X_{p+1}, X_{p+1})^\top}$ is bounded.
The bounds obtained by automatically choosing the bandwidth and applying to the three decompositions above are
(0.28, 1.42), (0.32, 1.43), and (0.28, 1.41).
\subsection{Prediction intervals for $X_{p+1}$ given large $\hat{X}_{p+1}$}
\label{sec:condtlInterval}
The region obtained in the previous section describes the joint behavior of $\hat{X}_{p+1}$ and $X_{p+1}$, but the quantity of interest is the conditional behavior of $X_{p+1}$ given a specific large value $\hat{X}_{p+1} = x$.
In $(p+1)$-dimensional space, \cite{cooley2012approximating} fit a parametric model for angular density $h_{(\bm X_p^\top, X_{p+1})^\top}$, and use the limiting intensity function of regular variation to get an approximate density of $X_{p+1}$ given large $\bm X_p = \bm x_p$.
Following their approach with $\alpha = 2$ and the $L_2$ norm, and letting $\bm x = (\bm x_p^\top, x_{p+1})^\top$, transforming (\ref{eq:nu}) from polar to Cartesian coordinates has Jacobian $|J|=\|\bm{x}\|^{-(p+1)}x_{p+1}$ \citep{song1997} and yields a limiting measure of $\nu_{(\bm X_p^\top, X_{p+1})^\top}(\bm x)\mathrm{d}\bm x = 2 \|\bm x \|^{-(p+4)} x_{p+1} h( \bm x \|\bm x\|^{-1}) \mathrm{d} \bm x$.
The approximate conditional density is $f_{X_{p+1}|\bm X_p}(x_{p+1} | \bm x_p) \approx c^{-1} \nu_{(\bm X_p^\top, X_{p+1})^\top}( \bm x_p, x_{p+1} )$, where $c = \int_0^\infty \nu_{(\bm X_p^\top, X_{p+1})^\top}(\bm x) \mathrm{d} x_{p+1}.$
\cite{cooley2012approximating} applied their method in moderate dimension ($p = 4$); applying the approach for larger $p$ would require a high dimensional angular measure model.
We adapt the method of \cite{cooley2012approximating} to model the relationship between $X_{p+1}$ and $\hat X_{p+1}$.
Regardless of $p$, we only need to describe this bivariate relationship.
In two dimensions, the problem simplifies.
To find the bound of the prediction interval for a given value $\hat x_{p+1}$, we wish to find $d$
such that
$$
\rho
= \int_0^d f_{X_{p+1}|\hat X_{p+1}}(x_{p+1}|\hat x_{p+1})\mathrm{d} x_{p+1}
=\frac
{\int_0^d 2 \|\bm x\|^{-5} x_{p+1} h(\bm x \|\bm x\|^{-1}) \mathrm{d} x_{p+1}}
{\int_0^\infty 2 \|\bm x\|^{-5} x_{p+1} h(\bm x \|\bm x\|^{-1}) \mathrm{d} x_{p+1}},
$$
where $\bm x = (\hat x_{p+1}, x_{p+1})^\top$, and where $\rho$ sets the prediction level; below we set $\rho$ to 0.025 and 0.975 to yield a 95\% prediction interval of $x_{p+1}$ given $\hat x_{p+1}$.
Letting $\theta$
be such that $x_{p+1} = \hat x_{p+1} \tan \theta$, simple substitution and cancellation of $\hat x_{p+1}$ yield the equivalent problem
$$
\rho
=\frac
{\int_0^{\theta^*} 2 \tan\theta (1 + \tan^2 \theta)^{-5/2} h(\cos \theta, \sin \theta) \sec^2\theta \,\mathrm{d} \theta}
{\int_0^{\pi/2} 2 \tan\theta (1 + \tan^2 \theta)^{-5/2} h(\cos \theta, \sin \theta) \sec^2\theta \,\mathrm{d} \theta}.
$$
With $\rho$ specified, $\theta^*$ can be solved independently of the value of $\hat x_{p+1}$, and given this value, the bound is $d = \hat x_{p+1} \tan (\theta^*)$.
We use the kernel density estimated in Section \ref{sec:predTPDM} in place of $h$ and numerically integrate to solve for $\theta^*$.
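The numerical step can be sketched as follows (our own illustration). The angular density $h$ below is a hypothetical stand-in; in practice the boundary-corrected kernel estimate from Section \ref{sec:predTPDM} would be used. Note that the integrand $2\tan\theta(1+\tan^2\theta)^{-5/2}h(\cos\theta,\sin\theta)\sec^2\theta$ simplifies to $2\sin\theta\cos^2\theta\, h(\cos\theta,\sin\theta)$, which is what the code evaluates.
\begin{verbatim}
import numpy as np

def theta_star(h, rho, n_grid=20_000):
    """Solve for theta* such that the ratio of integrals equals rho."""
    th = np.linspace(0.0, np.pi / 2, n_grid)
    g = 2.0 * np.sin(th) * np.cos(th)**2 * h(np.cos(th), np.sin(th))
    cdf = np.cumsum((g[1:] + g[:-1]) / 2.0 * np.diff(th))   # trapezoidal CDF
    cdf /= cdf[-1]
    return np.interp(rho, cdf, th[1:])

h = lambda w1, w2: (w1 * w2) ** 2        # hypothetical angular density (illustration)

for rho in (0.025, 0.975):
    ts = theta_star(h, rho)
    print(rho, ts, np.tan(ts))           # bound is d = x_hat * tan(theta*)
\end{verbatim}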
The center panel of Figure \ref{fig:simStudy} illustrates the conditional density for a particular realization from the aforementioned simulation study where $\hat x_{p+1}$ = 33.17 and with actual value $x_{p+1}$ = 48.15 denoted by the star.
The right panel shows a scatterplot of the largest 5\% (by $\hat x_{p+1}$) of the test set from the aforementioned simulation along with the upper and lower bounds from the conditional density approximation.
Scatterplots of realizations from regularly-varying random vectors can be difficult to interpret, because weak dependence implies that large points occur near the axes.
The fact that the points occur in the interior implies that there is a strong relationship between $\hat X_{p+1}$ and $X_{p+1}$, and clearly the width of the prediction interval needs to increase with $\hat x_{p+1}$.
The coverage rate of these intervals is 0.947.
\begin{figure}
\caption{(Left) The estimated 95\% joint prediction region based on the approximated angular measure $\hat H_{(\hat{X}_{p+1}, X_{p+1})^\top}$, with the test-set realizations of $(\hat X_{p+1}, X_{p+1})$. (Center) Approximate conditional density of $X_{p+1}$ given $\hat x_{p+1} = 33.17$; the star marks the realized value $x_{p+1} = 48.15$. (Right) The largest 5\% (by $\hat x_{p+1}$) of the test set with the upper and lower bounds from the conditional density approximation.}
\label{fig:simStudy}
\end{figure}
\section{Applications}
\label{sec:applications}
\subsection{Nitrogen dioxide air pollution.}
NO$_2$ is one of six air pollutants for which the US Environmental Protection Agency (EPA) has national air quality standards.
We analyze daily EPA NO$_2$ data\footnote{https://www.epa.gov/outdoor-air-quality-data/download-daily-data} from five locations in the Washington DC metropolitan area (see Figure \ref{fig: washingtonDC}).
The first four stations (McMillan 11-001-0043, River Terrace 11-001-0041, Takoma 11-001-0025, Arlington 51-013-0020) have long data records spanning 1995-2020.
Alexandria does not have observations after 2016.
We will perform prediction at Alexandria given data at the other four locations.
Observations in Alexandria actually come from two different stations: 51-510-0009, which has measurements from January 1995 to August 2012, and 51-510-00210, from August 2012 to April 2016.
Exploratory analysis did not indicate any detectable change point in the Alexandria data either with respect to the marginal distribution or with dependence with other stations, so we treat this data as coming from a single station.
There are 5163 days between 1995 and 2016 where all five locations have measurements.
Because NO$_2$ levels have decreased over the study period, we detrend at each location using a moving average mean and standard deviation with window of 901 days to center and scale.
Our inner product space assumes each $X_i \in RV_+^1(\alpha = 2)$, and the detrended NO$_2$ data must be transformed to meet this assumption.
In fact, it is unclear whether the NO$_2$ data are even heavy tailed.
Nevertheless, we believe the regular variation framework is useful for describing the tail dependence for this data after marginal transformation.
Characterizing dependence after marginal transformation is justified by Sklar's theorem (\citet{sklar1959}, see also \citet[Proposition 5.15]{resnick1987}), and such transformations are regularly used in multivariate extremes studies.
After viewing standard diagnostic plots, we fit a generalized Pareto distribution above each location's 0.95 quantile and obtain the marginal estimated cdf's $\hat F_i$ which are the empirical cdf below the 0.95 quantile and the fitted generalized Pareto above.
Letting $X_i^{(orig)}$ denote the random variable for detrended NO$_2$ at location $i$, we define $X_i = 1/\sqrt{(1-\hat{F_i}(X_i^{(orig)}))}-\delta$ obtaining a `shifted' Pareto distribution for $i = 1, \ldots, 5$.
Each $X_i \in RV_+(\alpha = 2)$ and the shift $\delta = 0.9352$ is such that $\operatorname{E}[t^{-1}(X_i)]$ = 0.
This shift makes the preimages of the transformed data centered which we found reduced bias in the estimation of the TPDM.
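The marginal transformation can be sketched with the empirical cdf alone (our own simplified illustration; the paper replaces the empirical cdf by a fitted generalized Pareto above the 0.95 quantile, and the stand-in data below are arbitrary).
\begin{verbatim}
import numpy as np

def to_shifted_pareto(x, delta=0.9352):
    """X = 1/sqrt(1 - F(X_orig)) - delta with an empirical cdf."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1
    F = ranks / (n + 1.0)                    # keeps F strictly below 1
    return 1.0 / np.sqrt(1.0 - F) - delta

x_orig = np.random.default_rng(5).gamma(2.0, size=1000)   # stand-in for detrended NO2
print(to_shifted_pareto(x_orig)[:5])
\end{verbatim}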
We assume $\bm X = (X_1, \ldots, X_5)^\top \in RV_+^5(\alpha = 2)$.
Further, we let $\bm X_t$ denote the random vector of observations on day $t$, which we assume to be iid copies of $\bm X$.
This is a simplifying assumption as there is temporal dependence in the NO$_2$ data, but the temporal dependence seems less informative than the spatial dependence exhibited by each day's observations.
We first predict during the period prior to 2015 in order that we can use the observed data at Alexandria to assess performance.
Indices are randomly drawn to divide the data set into training and test sets consisting of 3442 and 1721 observations respectively, and both sets cover the entire observational period.
Using the training set, the five-dimensional TPDM $\hat \Sigma_{\bm X}$ is estimated as follows.
Let $\bm x_t$ denote the observed measurements on day $t$.
For each $i \neq j$ in $1, \ldots, 5$, let $r_{t,ij} = \| (x_{t,i}, x_{t,j}) \|_2$ and $(w_{t,i},w_{t,j})=(x_{t,i},x_{t,j})/r_{t,ij}$.
We let $\hat{\sigma}_{ij}=2 n_{exc}^{-1}\sum_{t=1}^{n}{w_{t,i}w_{t,j}\mathbb{I}(r_{t,ij}>r^*_{ij})}$, where $n_{exc} = \sum_{t=1}^{n} \mathbb{I}(r_{t,ij}>r^*_{ij})$.
We choose $r^*_{ij}$ to correspond to the 0.95 quantile.
The constant 2 arises from knowledge that the tail ratio of each $X_i$ is one due to the marginal transformation.
This pairwise estimation of the TPDM differs from the method in \cite{cooley2019decompositions} who used the entire vector norm as the radial component.
\cite{mhatre2021transformedlinear} show that the TPDM is equivalent whether it is defined in terms of the angular measure of the entire vector or the angular measure corresponding to the two-dimensional marginals.
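A sketch of this pairwise estimator is given below (our own implementation of the displayed formula; the toy data are arbitrary, and the diagonal is set to one because the marginal transformation fixes each tail ratio at one).
\begin{verbatim}
import numpy as np

def pairwise_tpdm(X, q=0.95):
    """sigma_ij = 2 * mean over exceedances of w_i w_j, thresholding the
    pairwise radius at its empirical q-quantile."""
    n, p = X.shape
    S = np.eye(p)                   # diagonal = tail ratio = 1 after transformation
    for i in range(p):
        for j in range(i + 1, p):
            r = np.hypot(X[:, i], X[:, j])
            keep = r > np.quantile(r, q)
            wi, wj = X[keep, i] / r[keep], X[keep, j] / r[keep]
            S[i, j] = S[j, i] = 2.0 * np.mean(wi * wj)
    return S

X = np.random.default_rng(6).pareto(2.0, size=(5000, 5)) + 1.0   # toy heavy-tailed data
print(np.round(pairwise_tpdm(X), 2))
\end{verbatim}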
From $\hat \Sigma_{\bm X}$, we obtain $\hat X_{t,5} = \hat {\bm b}^\top \circ \bm X_{t,4}$, where $\hat {\bm b} = (-0.047, 0.177, 0.192, 0.482)^\top$.
We note that the largest weighted component is Arlington, which is closest to Alexandria.
Interestingly, McMillan has a slightly negative weight.
We calculate $\hat X_{t,5}$ for all $t$, but only consider those for which $\hat X_{t,5}$ exceeds the 0.95 quantile.
The left panel of Figure \ref{fig:no2} shows the scatterplot of the values $x_{t,5}$ versus $\hat x_{t,5}$.
By taking the inverse of the marginal transformation, multiplying by the moving average standard deviation and adding the moving average mean, the predicted value can be put on the scale of the original data.
The center panel of Figure \ref{fig:no2} shows the scatterplot on the original scale.
We use the method described in Section \ref{sec:predTPDM} to approximate $H_{(\hat X_{p+1}, X_{p+1})}$ and use the method described in Section \ref{sec:condtlInterval} to create 95\% prediction intervals for each large predicted value $\hat x_{t,5}$.
We chose the matrix $B$ arising from the completely positive decomposition to be $2 \times 9$.
Prediction intervals on the Pareto scale are shown in the left panel of Figure \ref{fig:no2} and the coverage rate of these intervals is 0.965.
The intervals can similarly be back-transformed to be on the original scale as shown in the center panel of Figure \ref{fig:no2}.
The lack of monotonicity in these intervals with respect to the predicted value is due to the trend in the data over the observation period.
For comparison to standard linear prediction, we find the BLUP based on the estimated covariance matrix from the entire data set, and create Gaussian-based 95\% prediction intervals from the estimated MSPE.
When done on the original data, we obtain a coverage rate of 0.88, and when done on square-root transformed data to account for the skewness, we obtain a coverage rate of 0.78.
We also compare our prediction method to the extremes-based method of \cite{cooley2012approximating}, which approximated the conditional distribution of the large values of a regularly varying variate via a parametric model for the angular measure.
The method of \cite{cooley2012approximating} can be done due to this application's relatively low dimension.
As done in \cite{cooley2012approximating}, the pairwise beta model \citep{cooley2010pairwise} is fit by maximum likelihood to the preprocessed training data set.
The 95\% prediction intervals are based on the approximated conditional density of $X_5$ given $x_1, \ldots, x_4$, and the achieved coverage rate for the test set is 0.965.
Because the fitted angular measure model would seemingly contain more information than the estimated TPDM, we were surprised that the widths of the prediction intervals were very similar for the two methods.
The ratio of the average interval width of \cite{cooley2012approximating} to that of our TPDM-based approach was 1.04.
We then apply our prediction method to five dates in 2019 and 2020 (including January 23, 2020 in Figure \ref{fig: washingtonDC}) when observed values at the four recording stations were large and no observation was taken at Alexandria.
Here, we use the entire period from 1995-2016 to estimate the TPDM, and we obtain a slightly different estimate $\hat {\bm b} = (0.026, 0.153, 0.118, 0.461)^\top$.
The right panel of Figure \ref{fig:no2} shows the point estimate and 95\% prediction intervals from our transformed-linear approach (after back transformation to original scale).
The trend at Arlington was used for the unobserved trend at Alexandria.
For comparison, covariance matrix-based BLUP's and MSPE-based 95\% prediction intervals for these dates are shown with a dashed line.
\begin{figure}
\caption{(Left) Scatterplot of $\hat{X}_{t,5}$ versus $X_{t,5}$ on the Pareto scale, with 95\% prediction intervals for the large predicted values. (Center) The same scatterplot and intervals back-transformed to the original scale. (Right) Point predictions and 95\% prediction intervals (after back-transformation) for five dates in 2019 and 2020 at Alexandria; dashed lines show the covariance-based BLUPs and MSPE-based 95\% intervals.}
\label{fig:no2}
\end{figure}
\subsection{Industry portfolios.}
We apply the transformed-linear prediction method to a higher dimensional financial data set.
The data set obtained from the Kenneth French Data Library\footnote{https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data\_library.html} contains the value-averaged daily returns of 30 industry portfolios.
We analyze data for 1950-2020, consisting of $n=17911$ observations.
Since our interest is in extreme losses, we negate the returns and set the resulting negative values to zero so that the data lie in the positive orthant.
Although these data appear to be heavy tailed, they still require marginal transformation so that $\alpha = 2$ can be assumed.
Let $\bm{X}^{(orig)}$ denote the random vector of the value-averaged daily returns.
For simplicity we use the empirical CDF to perform the marginal transformation $X_i =1/\sqrt{(1-\hat{F_i}(X_i^{(orig)}))}-\delta$, which is applied to each industry's data so that $X_i$ follows the same shifted Pareto distribution as before.
We again assume $\bm X_t$, the random vector denoting the observations on day $t$, are iid copies of $\bm X$.
A training set consisting of two-thirds of the data ($n_{train}=11940)$ is randomly selected and used to estimate the TPDM and obtain the vector $\hat {\bm b}.$
The test set consists of the remaining one-third of the data ($n_{test}=5970)$ to assess coverage rates.
Following steps similar to those in the previous application, the $30\times 30$ TPDM $\Sigma_{\bm{X}}$ is first estimated from the training set. We focus on performing transformed-linear prediction for extreme losses of coal, beer, and paper.
The three largest coefficients in $\hat {\bm b}_{coal}$ are $(0.42, 0.36, 0.20)$ and correspond to fabricated products and machinery, steel, and oil respectively.
The three largest coefficients in $\hat {\bm b}_{beer}$ are $(0.52, 0.24, 0.12)$ and correspond to food products, retail, and consumer goods (household).
The three largest coefficients for $\hat {\bm b}_{paper}$ are $(0.21, 0.11, 0.08)$ and correspond to chemicals, consumer goods (household), and construction materials.
The assessed coverage rates of our transformed linear $95\%$ prediction intervals for coal, beer, and paper are $97.9\%$, $96.3\%$, and $98\%$, respectively.
For the purpose of comparison, we also assessed coverage rates of the MSPE-based $95\%$ prediction intervals.
Because the data are strongly non-Gaussian, we use the empirical CDF to transform the marginals to be standard normal before estimating the covariance matrix.
The coverage rates of MSPE-based 95\% prediction intervals are $79.3\%$, $66.6\%$, and $51.2\%$ for coal, beer, and paper respectively.
\section{Summary and Discussion}
\label{sec:summary}
We have proposed a method for performing linear prediction when observations are large.
To do so, we constructed an inner product space of nonnegative random variables arising from transformed linear combinations of independent regularly varying random variables.
The elements of the TPDM correspond to these inner products if one is willing to assume that these random variables are in $\mathcal{V}^q_+$.
The projection theorem yields the optimal transformed linear predictor.
Our method for obtaining prediction intervals shows very good performance both in a simulation study and in two applications.
The method is simple and is based only on the TPDM which is estimable in high dimensions.
We restrict to nonnegative regularly varying random variables to focus on the upper tail.
Relaxing this restriction could allow one to use standard linear operations.
Even when the data can be negative, we believe there is value in focusing on one direction.
In the financial application, tail dependence for extreme losses is different than for gains, and this information is lost when dependence is summarized with a single number as in the TPDM.
The random vectors $\bm X = A \circ \bm Z$ comprised of elements of our vector space have a simple angular measure consisting of $q$ point masses where $q$ is the number of columns of $A$.
Previous models with angular measures consisting of discrete point masses have been criticized as being overly simple.
A difference here is that we do not have to specify $q$ to use this framework to perform prediction, or more generally, we do not have to really believe that our data arise from such a simple model.
Rather, if we are comfortable with the information contained in the TPDM, then we can use its information to easily obtain a point prediction and sensible prediction intervals that reflect the information contained.
In many applications, dependence cannot be measured between the observed values and the value to be predicted.
In kriging for example, a spatial process model is first fit so that covariance between any two locations is quantified.
One can imagine modeling the extremal pairwise dependence as a function of distance before applying the methods here to perform prediction for extreme levels.
\end{document}
\begin{document}
\title{Average diagonal entropy in non-equilibrium isolated quantum systems}
\author{Olivier Giraud}
\affiliation{LPTMS, CNRS, Univ.~Paris-Sud, Universit\'e Paris-Saclay, 91405 Orsay, France}
\author{Ignacio Garc\'ia-Mata} \affiliation{Instituto de Investigaciones F\'isicas de Mar del Plata (CONICET-UNMdP), B7602AYL Mar del Plata, Argentina} \affiliation{Consejo Nacional de Investigaciones Cient\'ificas y Tecnol\'ogicas (CONICET), C1425FQB C.A.B.A, Argentina}
\begin{abstract}
The diagonal entropy was introduced as a good entropy candidate especially for isolated quantum systems out of equilibrium.
Here we present an analytical calculation of the average diagonal entropy for systems undergoing unitary evolution and an external perturbation in the form of a cyclic quench. We compare our analytical findings with numerical simulations of various many-body quantum systems.
Our calculations elucidate various heuristic relations proposed recently in the literature.
\end{abstract}
\maketitle
The precision for manipulating quantum systems attained to date
has led to the next big question of how basic thermodynamic principles operate at very small scales.
Motivated by this question,
an ever-growing effort has surged to describe the quantum thermodynamics of small isolated quantum systems, and their approach to equilibrium and subsequent thermalization. The incredible advance in experimental techniques allowing one to follow the time evolution of closed quantum systems
\cite{Paredes2004,*Kinoshita,*Hofferbert} has been the main driving force behind these endeavors. The relevance of this subject is
(at least) twofold. On the one hand, quantum technologies (e.g.~for quantum information
\cite{BlattRMP} and quantum simulation \cite{Gerri,*Blatt2012,*Serwane,*Korenblit}) tend to be based on systems with negligible interaction with the environment. On the other hand, a complete microscopic thermodynamical description of the nonequilibrium statistical mechanics of such quantum systems has remained elusive, in part
due to the lack of a suitable definition of entropy.
Although the von Neumann entropy $S_{\rm vN}=-\mathrm{tr} \rho \ln \rho$ (with $k_{\rm B}=1$) is a natural tool to measure the entropy of a quantum state $\rho$, it cannot be used to describe the approach to equilibrium of isolated quantum systems, since they undergo unitary dynamics. As an alternative the diagonal entropy (DE)
\begin{equation}
\label{defDE}
S_{\rm D}=-\sum_n \rho_{nn}\ln\rho_{nn}
\end{equation}
was proposed \cite{Polkov2011}, where $\rho_{nn}$ are the diagonal elements of the density matrix in the energy eigenbasis.
The DE possesses most of the expected features of a thermodynamic entropy, such as additivity, or increase when a system at equilibrium undergoes an external perturbation~\cite{Polkov2011, Ikeda2015}. The DE appears to be a fundamental quantity, which can describe the behavior of a very broad class of out-of-equilibrium systems. Clarifying the universality or the specificities of its properties is therefore an important goal.
Two universal properties of DE have been proposed recently. In the case where a system is perturbed by an external operation during a time $\tau$, one can study the DE as a function of the duration $\tau$ of the perturbation. In \cite{Ikeda2015} a conjecture was made introducing bounds on the difference between the time averaged DE, $\overline{S_{\rm D}(\tau)}$, and the DE of the time averaged state $\overline{\rho(\tau)}$ (denoted as $S_{_{\overline{\rho(\tau)}}}$), namely
\begin{equation}
\label{ikeda}
0 \le \Delta S\le 1- \gamma, \ \text{ where }\ \Delta S \stackrel{\rm def}{=}S_{_{\overline{\rho(\tau)}}}-\overline{S_{\rm D}(\tau)}
\end{equation}
and $\gamma=0.5772\ldots$ is Euler's constant. A second property was numerically uncovered in \cite{GarciaMata2015}, where a universal relation was found between $\Delta S$ and a quantity
measuring localization of the initial state.
The aim of this Letter is to provide analytical support to both of these observations and to determine to which extent they are universal. To this end, we derive an analytical expression for $\Delta S$ as an expansion in terms of average generalized participation ratios (PR), which characterize the localization properties of the vector of transition probabilities between the initial and the perturbed eigenstates. Using perturbation theory, we analyze the behavior of the first terms of this expansion in the two extreme regimes of localized and delocalized states.
To support our analytical findings we performed numerical simulations using two representative physical models displaying chaotic and integrable regimes.
We show that our truncated expansion of $\Delta S$ is accurate independently of the physical model and of the localization
properties of the eigenfunctions.
The importance of our results is twofold.
They provide a precise analytical value for the time average of $S_{\rm D}$ and $\Delta S$, improving \cite{Ikeda2015}, and giving a tighter bound.
Moreover, they relate the DE to localization properties of eigenstates of the perturbed system by an explicit and accurate expression. This should lead to a better understanding of the deep connections between Anderson-type transitions
and equilibration characterized by the DE.
It is noteworthy that the DE is uniquely related to the energy distribution \cite{Polkov2011} and it is thus (in principle) a measurable quantity. Therefore, it could be relevant in the experimental description of equilibration and thermalization processes which have gained so much attention due to a flurry of recent breakthroughs (see \cite{Jensen85,Deutsch91, Srednicki1994,Calabrese2006,Rigol2008,Rigol2009,Linden2009,GogolinMullerEisert2011} to name but a few).
For simplicity we consider a cyclic external operation, where a system, described by a Hamiltonian $H=H(\lambda)$ depending on a fixed parameter $\lambda$, undergoes a sudden quench $H\rightarrow H'=H(\lambda+\delta\lambda)$ at time $t=0$ and is then reverted to the Hamiltonian $H$ at time $\tau$. If $H$ is time-independent, the DE $S_D(t)$ of the state $\rho(t)$ is constant for $t>\tau$. It can thus be studied as a function of the perturbation duration $\tau$, and denoted $S_D(\tau)$. For large enough $\tau$, the system evolving under Hamiltonian $H'$ will have the time to equilibrate, so that $S_D(\tau)$ goes to a constant value that can be estimated by considering the average $\overline{S_{\rm D}(\tau)}$.
Let us label by $\ket{n}$ the basis of normalized eigenvectors of $H$ (with eigenvalues $E_n$), and by $\ket{m}$ the basis of eigenvectors of $H'$ (with eigenvalues $E'_m$). We may consider finite systems of size $N$, or truncate our matrices to
a Hilbert space of dimension $N$. We assume that at $t=0$ the system is in an eigenstate $\ket{n_0}\bra{n_0}$ of $H$ with energy $E_{n_0}$. Let $U=e^{-i H'\tau}$ ( $\hbar\stackrel{\rm def}{=} 1$) be the evolution operator from time 0 to $\tau$. At time $\tau$ the state is $\rho(\tau)=U\ket{n_0}\bra{n_0}U^{\dagger}$. Its DE can be expressed as
\begin{equation}
\label{dent}
S_D(\tau)=-\sum_n h_n\ln h_n,\quad h_n=|\bra{n}U\ket{n_0}|^2,
\end{equation}
where $h_n$ is the probability to observe the system in an eigenstate $\ket{n}$ at time $\tau$ (we have the normalization $\sum_n h_n=1$).
Our aim is to calculate the quantity
\begin{equation}
\label{DeltaS}
\Delta S=-\sum_n \overline{h_n}\ln\overline{h_n}+\sum_n \overline{h_n\ln h_n}.
\end{equation}
Since $h_n\in [0,1]$ has finite support, the knowledge of all its moments uniquely defines its distribution $P(h_n)$, from which it is possible to calculate $\Delta S$. Using
\begin{equation}
\label{Unn0}
\bra{n}U\ket{n_0}=\sum_m e^{-iE'_m \tau}\langle n | m\rangle\langle m | n_0\rangle,
\end{equation}
the probabilities $h_n$ can be expressed as
\begin{equation}
\label{hnbis}
h_n=\!\!\sum_{m_1,m_2}\!e^{i(E'_{m_1}-E'_{m_2}) \tau}\langle n | m_2\rangle\langle m_2 | n_0\rangle\langle n_0 | m_1\rangle\langle m_1 | n\rangle.
\end{equation}
Moments of $h_n$ are obtained by calculating the averages $\overline{h_n^k}$. From Eq.~\eqref{hnbis}, these averages involve averages of quantities $\exp \left[i\left(\sum_{i=1}^k E'_{m_{2i-1}}-\sum_{i=1}^{k} E'_{m_{2i}}\right)\tau\right]$. We make, as in \cite{Ikeda2015}, the assumption that these quantities average to 1 if the sets $\{m_{2i-1},1\leq i\leq k\}$ and $\{m_{2i},1\leq i\leq k\}$ are permutations of each other, and to 0 otherwise. From Eq.~\eqref{hnbis}, keeping only terms with $m_1=m_2$ we have the first moment
\begin{equation}
\label{hnbar}
\overline{h_n}=\sum_m c_{mn},\quad c_{mn}=|\langle m|n_0\rangle|^2|\langle m|n\rangle|^2.
\end{equation}
The $\overline{h_n}$ are the average probabilities of transition from $\ket{n_0}$ to $\ket{n}$. For the second moment $\overline{h_n^2}$, keeping only terms for which the sets $\{m_1,m_3\}$ and $\{m_2,m_4\}$ are the same, we get
\begin{equation}
\label{hn2}
\overline{h_n^2}=2\left(\sum_m c_{mn}\right)^2-\sum_m c_{mn}^2.
\end{equation}
For integer $q$, we introduce the PR
\begin{equation}
\label{xiqn}
\xi_{q,n}\equiv\frac{\sum_m c_{mn}^q}{(\sum_m c_{mn})^q},
\end{equation}
which characterize the localization properties of the vectors $(c_{mn})_{m}$. We can then express Eq.~\eqref{hn2} as $\overline{h_n^2}/\overline{h_n}^2=2-\xi_{2,n}$. Higher-order moments can be expressed in the same way. For instance we have $\overline{h_n^3}/\overline{h_n}^3=6-9\xi_{2,n}+4\xi_{3,n}$ and $\overline{h_n^4}/\overline{h_n}^4=24-72\xi_{2,n}+18(\xi_{2,n})^2+64\xi_{3,n}-33\xi_{4,n}$. Keeping only the first two terms in these expressions we get the general formula (see Supplemental material for a detailed proof)
\begin{equation}
\label{hnkordre2}
\overline{h_n^k}=\overline{h_n}^k\, k!\,\left(1-\frac{k(k-1)}{4}\xi_{2,n}\right).
\end{equation}
As $\xi_{2,n}$ does not depend on $k$, the moment generating function of $h_n$ can then be resummed as
\begin{equation}
\label{sumM}
M_n(t)=\sum_{k=0}^{\infty}\frac{\overline{h_n^k}}{k!}t^k=\frac{1}{1-\overline{h_n}t}-\frac{(\overline{h_n}t)^2}{2(1-\overline{h_n}t)^3}\xi_{2,n}.
\end{equation}
The probability distribution for $h_n$ is then obtained by inverse Laplace transform of $M_n(t)$, which gives
\begin{equation}
\label{phnext}
P(h_n)=\frac{1}{\,\overline{h_n}\, }e^{-\tfrac{h_n}{\,\overline{h_n}\,}}\left(1-\left[\frac{h_n^2}{4\overline{h_n}^2}-\frac{h_n}{\,\overline{h_n}\, }+\frac12\right]\xi_{2,n}\right).
\end{equation}
Using this distribution, the calculation of the averages in Eq.~\eqref{DeltaS} is then straightforward and direct integration yields
\begin{equation}
\label{conjecturenext}
\Delta S= 1-\gamma-\frac14\overline{\xi}_{2},\quad \overline{\xi}_{2}\equiv\sum_n\overline{h_n}\,\xi_{2,n}.
\end{equation}
Recalling that $\sum_n\overline{h_n}=1$, the quantity $\overline{\xi}_{2}$ appears as the average over PRs \eqref{xiqn} weighted by the mean transition probability $\overline{h_n}$ to go from $\ket{n_0}$ to $\ket{n}$ during the quench. This first result, Eq.~\eqref{conjecturenext}, has two consequences. First, it makes a more precise statement than the conjecture $\Delta S\leq 1-\gamma$ of \eqref{ikeda} by showing that for finite dimension $N$, since $\overline{\xi}_2$ is strictly positive, the strict inequality $\Delta S<1-\gamma$ is fulfilled. Moreover, while the equality $\Delta S = 1-\gamma$ was proved in \cite{Ikeda2015} under some assumptions on the localization properties and in the limit $N\to\infty$, our result provides a value for the first correction to the difference between $\Delta S$ and its upper bound in the finite $N$ case.
The second consequence of Eq.~(\ref{conjecturenext}) is that it relates the DE to the average PRs of the overlaps $c_{mn}$, substantiating the observations of \cite{GarciaMata2015}.
In Eq.~\eqref{hnkordre2} we only kept the first two terms in the expression of the moments $\overline{h_n^k}/\overline{h_n}^k$. It is in fact possible to systematically carry out the calculation by keeping successive terms in the expression of the moments, as we show in the Supplemental material. This yields an expansion of the form
\begin{equation}
\label{bigsum}
\Delta S= 1-\gamma+\sum_{\mu}\frac{a_{\mu}\overline{\xi}_{\mu}}{|\mu|(|\mu|-1)},\quad \overline{\xi}_{\mu}\equiv\sum_n\overline{h_n}\,\xi_{\mu_1,n}\xi_{\mu_2,n}\ldots,
\end{equation}
where $a_{\mu}$ are rational numbers and the sum over $\mu$ runs over all finite integer sequences $\mu=(\mu_1,\mu_2,\ldots)$ such that $\mu_i\geq 2$, and $|\mu|=\sum_i\mu_i$. For instance, keeping the few next lowest-order terms in the expression of moments, \eqref{conjecturenext} can be corrected to
\begin{eqnarray}
\label{conjecture3}
\Delta S&=& 1-\gamma-\frac14\overline{\xi}_2+\left(\frac19\overline{\xi}_3+\frac{1}{16}\overline{\xi}_{22}\right)\nonumber\\
&&-\left(\frac{11}{96} \overline{\xi}_4+\frac{1}{6} \overline{\xi}_{32}+\frac{1}{16} \overline{\xi}_{222}\right).
\end{eqnarray}
This expression is the main result of this work. It provides a connection between the average DE and the structure of the eigenfunctions -- localization on the perturbed basis -- through the generalized PR \eqref{xiqn}. It is all the more accurate when the higher-order terms are negligible (which typically is the case in the delocalized regime, as we discuss below). To distinguish these different orders it will be useful to introduce the quantities $O_1=1-\gamma-\overline{\xi}_2/4$, $O_2=O_1+(\overline{\xi}_3/9+\overline{\xi}_{22}/16)$ and $O_3=O_2-(11\overline{\xi}_4/96+ \overline{\xi}_{32}/6+ \overline{\xi}_{222}/16)$.
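For completeness, the truncations $O_1$, $O_2$, $O_3$ can be evaluated directly from the two eigenbases. The sketch below is our own illustration (not the authors' code): \texttt{V} and \texttt{Vp} are assumed to be the matrices whose columns are the eigenvectors of $H$ and $H'$, and \texttt{n0} indexes the initial eigenstate.
\begin{verbatim}
import numpy as np

def truncations(V, Vp, n0):
    """O1, O2, O3 from eigenvector matrices V (of H) and Vp (of H')."""
    gamma = 0.5772156649015329
    M = np.abs(Vp.conj().T @ V) ** 2        # M[m, n] = |<m|n>|^2
    C = M * M[:, [n0]]                      # c_mn = |<m|n0>|^2 |<m|n>|^2
    hbar = C.sum(axis=0)                    # average transition probabilities
    xi = lambda q: (C ** q).sum(axis=0) / hbar ** q
    xb = lambda *qs: np.sum(hbar * np.prod([xi(q) for q in qs], axis=0))
    O1 = 1 - gamma - xb(2) / 4
    O2 = O1 + xb(3) / 9 + xb(2, 2) / 16
    O3 = O2 - 11 * xb(4) / 96 - xb(3, 2) / 6 - xb(2, 2, 2) / 16
    return O1, O2, O3
\end{verbatim}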
\begin{figure}
\caption{$\Delta S$ (solid black line) as a function of the coupling strength $\lambda$ for the DM with $j=20$, $N_t=250$, $\tau=10^7$, $\Delta\tau=250$, and quench strength $\delta \lambda=0.1$. The initial state is $\ket{n_0}$, an eigenstate of $H(\lambda)$; results are shown for two initial states at different energies, together with the truncations $O_1$, $O_2$, and $O_3$. Inset: $S_{_{\overline{\rho(\tau)}}}$ and $\overline{S_{\rm D}(\tau)}$ as functions of $\tau$.}
\label{fig:Dickeorder}
\end{figure}
To test the consistency of our analytical results we study two different models.
Both of them undergo a localization-delocalization transition when varying one parameter.
The first model is the Dicke model (DM) \cite{Dicke54} describing the dipole interaction of a single mode of a bosonic field, of frequency $\omega$, with $n_s$ two-level particles, with level splitting $\omega_0$. The corresponding Hamiltonian is
\begin{equation}
H(\lambda)=\omega_0 J_z +\omega a^\dagger a+\frac{\lambda}{\sqrt{2 j}}(a^\dagger + a)(J_{-}+J_{+}).
\end{equation}
The collective angular momentum operators $J_z$, $J_\pm$ correspond to a pseudospin $j=n_s/2$, and $a^\dagger$ ($a$) are
creation (annihilation) operators of the field.
The DM undergoes a superradiant quantum phase transition in the thermodynamic limit ($n_s\to \infty$) at $\lambda_c=\sqrt{\omega_0\omega}/2$ \cite{dicke}. For finite $n_s$, there is a transition from Poissonian to Wigner-Dyson level spacing statistics at $\lambda\approx \lambda_c$. This marks a transition from quasi-integrability at small $\lambda$ to quantum chaos at large $\lambda$, as is verified using a semiclassical model in \cite{EmaryBrandes2003}. In our calculations we consider $\omega=\omega_0=1$ ($\lambda_c=0.5$) and the quench is implemented by changing $\lambda\to \lambda+\delta\lambda$.
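A dense construction of the truncated Dicke Hamiltonian is straightforward; the sketch below is our own illustration with a much smaller truncation ($j=5$, $N_t=60$) than used for the figures, purely to keep it light.
\begin{verbatim}
import numpy as np

def dicke_hamiltonian(lam, j=5, n_t=60, omega=1.0, omega0=1.0):
    """H(lambda) on the truncated basis (boson number 0..n_t-1) x (m = j..-j)."""
    a = np.diag(np.sqrt(np.arange(1, n_t)), k=1)     # annihilation operator
    num = a.T @ a                                    # a^dagger a
    m = np.arange(j, -j - 1, -1)
    jz = np.diag(m)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    jm = jp.T
    eye_b, eye_s = np.eye(n_t), np.eye(len(m))
    return (omega0 * np.kron(eye_b, jz) + omega * np.kron(num, eye_s)
            + lam / np.sqrt(2 * j) * np.kron(a + a.T, jp + jm))

E, V = np.linalg.eigh(dicke_hamiltonian(0.5))
print(E[:5])
\end{verbatim}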
\begin{figure}
\caption{$\Delta S$ (solid black line) for the SW model with $p=0.06$, $\delta W=0.3$, $N=2^9$, $\tau=10^6$, $\Delta \tau=2500$ (top) and $\Delta \tau=3500$ (bottom), for one realization of disorder and of the shortcut links. Initial states are $\ket{n_0}$, eigenstates of $H$; the truncations $O_1$, $O_2$, and $O_3$ are shown for comparison.}
\label{fig:SWorder}
\end{figure}
The second model is a quantum smallworld (SW) system with disorder \cite{PhysRevB.62.14780, Giraud2005}. This is a one-dimensional tight-binding Anderson model having $N=2^{n_r}$ sites, with nearest-neighbor interaction and periodic boundary conditions, to which $p\,N$ shortcut links between sites are added, connecting $p\,N$ random pairs of vertices. The Hamiltonian which describes the quantum version of this system is
\begin{equation}
H=\sum_{i}\varepsilon_i\op{i}{i}+\sum_{\langle i,j\rangle}V \op{i}{j}
+\sum_{k=1}^{\lfloor pN \rfloor} V(\op{i_k}{j_k}+\op{j_k}{i_k}),
\end{equation}
where $\varepsilon_i$ are Gaussian random variables with zero mean and width $W$, and $V=1$. The second sum runs over nearest-neighbors, while the third term describes the shortcut links of smallworld type, connecting random pairs $(i_k,j_k)$. When $p=0$ the model coincides with the usual one-dimensional Anderson model where all states are known to be localized with localization length $l\sim1/W^2$ for small $W$ \cite{Kramer1993}. The presence of long-range interacting pairs for $p>0$ induces a delocalization transition from localized states for large $W$ to delocalized states at small $W$ \cite{Giraud2005, NOUSunpub}.
The quench in this case is implemented by keeping the shortcut links $(i_k,j_k)$ fixed and changing $W\to W-\delta W$.
To compute $\Delta S$ we fully diagonalize $H$ and $H'$ to calculate the $h_n$ and perform the average \eqref{DeltaS} over a window $[\tau,\tau+\Delta \tau]$ for a very large $\tau$. Examples of $S_{_{\overline{\rho(\tau)}}}$ and $\overline{S_{\rm D}(\tau)}$ are plotted in the inset of Fig.~\ref{fig:Dickeorder}. In the case of the DM, we take into account the parity symmetry, and we truncate the phonon basis to a finite size $N_t$ (we only consider energies well inside the converged part of the spectrum).
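Our own brute-force sketch of this procedure is given below; it ignores the parity symmetry and truncation checks, and the window parameters, stand-in Hamiltonians, and initial-state index are placeholders. The Hamiltonians from the Dicke sketch above could be passed instead.
\begin{verbatim}
import numpy as np

def delta_s(H, Hp, n0, tau0=1e6, dtau=250.0, n_tau=200):
    """Brute-force Delta S: diagonalize H and H', evaluate h_n(tau) on a grid
    of tau in [tau0, tau0 + n_tau*dtau], and average the diagonal entropy."""
    E, V = np.linalg.eigh(H)
    Ep, Vp = np.linalg.eigh(Hp)
    O = Vp.conj().T @ V                             # O[m, n] = <m|n>
    taus = tau0 + dtau * np.arange(n_tau)
    amp = (O[:, [n0]] * O.conj()).T @ np.exp(-1j * np.outer(Ep, taus))
    h = np.clip(np.abs(amp) ** 2, 1e-300, 1.0)      # h[n, k] = h_n(tau_k)
    S_tau = -(h * np.log(h)).sum(axis=0)            # S_D(tau) on the grid
    hbar = h.mean(axis=1)                           # time-averaged populations
    return -(hbar * np.log(hbar)).sum() - S_tau.mean()

# GOE-like stand-in Hamiltonians (replace by, e.g., dicke_hamiltonian(0.4), (0.5))
rng = np.random.default_rng(7)
W = rng.normal(size=(300, 300)); H = (W + W.T) / 2
Wp = rng.normal(size=(300, 300)); Hp = H + 0.05 * (Wp + Wp.T) / 2
print(delta_s(H, Hp, n0=10))
\end{verbatim}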
In Fig.~\ref{fig:Dickeorder} we show $\Delta S$ for the DM as a function of the coupling strength $\lambda$ for two different initial states.
At large $\lambda$ (delocalized regime), order $O_1$ already approximates $\Delta S$ rather well. At small $\lambda$ (localized regime), $O_1$ and $O_2$ tend to a constant value corresponding to the fully localized case $\overline{\xi}_\mu\to 1$, while $O_3$ matches $\Delta S$ very well. The higher the energy, the smaller the localized plateau is, as illustrated in Fig.~\ref{fig:Dickeorder}; this can be understood by the fact that the quench induces more transitions at higher energies (see also \cite{GarciaMata2015}). In Fig.~\ref{fig:SWorder} we present similar results for the SW model: in the delocalized (small-$W$) regime, order $O_1$ gives again a good approximation of $\Delta S$, while in the localized regime $O_3$ gives a very good approximation for $\Delta S$. Note that Fig.~\ref{fig:SWorder} corresponds to a single disorder and shortcut link realization (we have checked that the results are equivalent for any realization with sufficiently large $\Delta \tau$).
In order to understand these features, we consider two limiting situations. When equilibrium is reached `ideally', the eigenvectors $\ket{m}$ of $H'$ are uniformly spread (i.~e.~delocalized) in the $\ket{n}$ basis, thus each overlap is $|\langle m\ket{n}|^2\sim 1/N$, so that $c_{mn}\sim 1/N^2$ for all $m,n$. For the PR this implies $\xi_{q,n}\sim N^{1-q}$, and thus $\overline{\xi}_{\mu}\sim N^{\textrm{lg}(\mu)-|\mu|}$, with lg$(\mu)$ the number of terms in the sequence $\mu$. In the opposite case where eigenvectors $\ket{m}$ and $\ket{n}$ coincide, all $\overline{\xi}_{\mu}$ are equal to 1. These distinct features reflect in the behavior of the moments $\overline{h_n^k}$ (for instance, the sum of mean return probabilities $\sum_{n_0}\overline{h_{n_0}}$ was proposed as a tool to measure the degree of equilibration of an isolated quantum system \cite{Luck2015}).
In these two extreme regimes $\Delta S$ has very distinct behaviors. In the delocalized case where $\overline{\xi}_{\mu}\sim N^{\textrm{lg}(\mu)-|\mu|}$, we have for instance $\overline{\xi}_2\sim 1/N$, while the two last brackets in Eq.~\eqref{conjecture3} correspond to terms scaling as $1/N^2$ and $1/N^3$, respectively. One can thus truncate the sum \eqref{bigsum} to any order by keeping terms with the same power in $N$. Order 0 is simply given by the constant $1-\gamma$, which coincides with the result of \cite{Ikeda2015}. Keeping the term in $1/N$ yields our first main result Eq.~\eqref{conjecturenext}, and as already mentioned explains the conjecture of \cite{Ikeda2015}.
In the localized case, on the other hand, the expansion \eqref{bigsum} is no longer valid, as each term $\overline{\xi}_{\mu}$ is of order 1. However, the truncation \eqref{conjecture3} happens to yield a very good approximation for $\Delta S$, as shown on the physical models in Figs.~\ref{fig:Dickeorder} and \ref{fig:SWorder}, where at small $\lambda$ or large $W$ the numerically computed $\Delta S$ is almost indistinguishable from the expression for $O_3$. This can be understood via the following perturbation-theory approach. Let us consider the simplest case of a localized model, where $H'=H+\epsilon V$ and $V$ is a symmetric matrix with elements of order 1. Standard perturbation theory for $\epsilon\ll 1$ yields
\begin{equation}
\label{overlap_perturb}
\langle n|m\rangle=\delta_{nm}\left(1-\frac{1}{2}\sum_{k\neq m}v_{km}^2\right)+(1-\delta_{nm})\, v_{nm},
\end{equation}
with $v_{nm}=\epsilon V_{nm}/(E_m-E_n)$ and $\delta_{nm}$ the Kronecker delta. Inserting this expression into \eqref{Unn0}, we get, at lowest order in $\epsilon$,
\begin{equation}
\label{hnperturb}
h_n=\left\{
\begin{array}{cc}4\sin^2[(E'_n-E'_{n_0})\tau/2]v_{nn_0}^2\quad&n\neq n_0\\
&\\
1-2\sum_{k\neq n_0}v_{n_0k}^2\quad&n=n_0\,.
\end{array}
\right.
\end{equation}
Upon averaging over $\tau$ in \eqref{DeltaS}, the term $n=n_0$ does not contribute (as it does not depend on $\tau$), while terms $n\neq n_0$ yield a contribution involving the average $\overline{z\log z}-\overline{z}\log(\overline{z})=\frac12(1-\log 2)$ (where $z\stackrel{\rm def}{=}\sin^2x$).
At order $\epsilon^2$ the average \eqref{DeltaS} is thus given by
\begin{equation}
\label{deltaSperturb1}
\Delta S\simeq2(1-\log 2)\sum_{n\neq n_0}v_{nn_0}^2.
\end{equation}
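For completeness, the numerical prefactor can be checked directly: with $x$ uniformly distributed, $\overline{z}=\overline{\sin^2 x}=1/2$ and
\begin{equation}
\overline{z\log z}=\frac{2}{\pi}\int_0^{\pi/2}\sin^2 x\,\log(\sin^2 x)\,dx=\frac{1}{2}\left(1-2\log 2\right),
\end{equation}
so that $\overline{z\log z}-\overline{z}\log\overline{z}=\frac12(1-2\log 2)+\frac12\log 2=\frac12(1-\log 2)$; each term $n\neq n_0$ in Eq.~\eqref{hnperturb} thus contributes $4v_{nn_0}^2\times\frac12(1-\log 2)=2(1-\log 2)\,v_{nn_0}^2$ to Eq.~\eqref{deltaSperturb1}.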
One can similarly calculate a perturbation expansion for the $\overline{\xi}_{\mu}$, by injecting \eqref{overlap_perturb} into $c_{mn}$ given by \eqref{hnbar} and calculating the PR defined by \eqref{xiqn}. At order $\epsilon^2$ one gets
\begin{eqnarray}
\label{xiperturb}
\overline{\xi}_2&\simeq&1-\sum_{n\neq n_0}v_{nn_0}^2,\quad
\overline{\xi}_3\simeq\overline{\xi}_{22}\simeq1-\frac32\sum_{n\neq n_0}v_{nn_0}^2,\nonumber\\
\overline{\xi}_4&\simeq&\overline{\xi}_{32}\simeq\overline{\xi}_{222}\simeq1-\frac74\sum_{n\neq n_0}v_{nn_0}^2.
\end{eqnarray}
Using \eqref{xiperturb} to calculate the successive orders of $\Delta S$ given in \eqref{conjecture3} we obtain
\begin{eqnarray}
\label{deltaSperturb2}
O_1&\simeq&\frac34-\gamma+\frac14\sum_{n\neq n_0}v_{nn_0}^2,\\
\label{deltaSperturb3}
O_2&\simeq&\frac{133}{144}-\gamma-\frac{1}{96}\sum_{n\neq n_0}v_{nn_0}^2,\\
\label{deltaSperturb4}
O_3&\simeq&\frac{167}{288}-\gamma+\frac{227}{384}\sum_{n\neq n_0}v_{nn_0}^2.
\end{eqnarray}
As the constant in front of $\epsilon^2$ in \eqref{deltaSperturb3} is $1/96$, at this order $\Delta S$ is essentially equal to $133/144-\gamma\simeq 0.346395$, as can be seen in the figures in the localized region. At the next order $O_3$ on the other hand, the constant $167/288-\gamma\simeq 0.00264545$ almost vanishes, while $227/384\simeq 0.591146$ is numerically very close to $2(1-\log 2)\simeq 0.613706$. Thus the expression \eqref{deltaSperturb4} almost coincides with \eqref{deltaSperturb1}, which explains why the truncation at $O_3$ works so well. Truncation at order 4 (which can be obtained from the general result given explicitly in the Supplemental material) would yield $\Delta S\simeq 1.2184 - 1.68839 \sum_{n\neq n_0}v_{nn_0}^2$, so that this approximation is worse than order 3 (and the same happens at higher orders). $O_3$ thus appears as the optimal truncation in the localized regime. As was noted above, this truncation also works quite well in the delocalized regime.
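These constants can be checked in a few lines (a minimal sketch):
\begin{verbatim}
import math
g = 0.5772156649015329          # Euler's constant gamma
print(133/144 - g)              # ~ 0.346395   (constant term of O_2)
print(167/288 - g)              # ~ 0.00264545 (constant term of O_3)
print(227/384)                  # ~ 0.591146   (coefficient in O_3)
print(2*(1 - math.log(2)))      # ~ 0.613706   (perturbative prefactor)
\end{verbatim}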
One sees from the numerical results that Eq.~\eqref{conjecture3} is in fact a good approximation for $\Delta S$ over the whole range of parameters.
To summarize, the search for a quantum entropy with all the required properties is still open.
The DE is a good candidate, and here we have given an analytical expression for its time average that depends only on eigenvector properties, holds to a very good approximation, and is independent of the dynamical regime, in particular of the localization-delocalization or chaotic-integrable character of the system.
Our results establish the validity of the conjecture of Ref.~\cite{Ikeda2015} and extend its accuracy.
These results are relevant to the many-body localization transition and to the study of equilibration in non-equilibrium isolated quantum systems.
\acknowledgments
IGM and OG received partial funding from a binational project from CONICET (grant no.~1158/14) and CNRS (grant no.~PICS06303).
IGM also received funding from Universit\'e Toulouse III Paul Sabatier as an invited professor.
IGM thanks D. A. Wisniacki for fruitful discussions.
\begin{thebibliography}{27}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Paredes}\ \emph {et~al.}(2004)\citenamefont
{Paredes}, \citenamefont {Widera}, \citenamefont {Murg}, \citenamefont
{Mandel}, \citenamefont {F\"olling}, \citenamefont {Cirac}, \citenamefont
{Shlyapnikov}, \citenamefont {H\"ansch},\ and\ \citenamefont
{Bloch}}]{Paredes2004}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Paredes}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Widera}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Murg}}, \bibinfo {author}
{\bibfnamefont {O.}~\bibnamefont {Mandel}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {F\"olling}}, \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Cirac}}, \bibinfo {author} {\bibfnamefont {G.~V.}\
\bibnamefont {Shlyapnikov}}, \bibinfo {author} {\bibfnamefont {T.~W.}\
\bibnamefont {H\"ansch}}, \ and\ \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Bloch}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {429}},\ \bibinfo {pages}
{277} (\bibinfo {year} {2004})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kinoshita}\ \emph {et~al.}(2006)\citenamefont
{Kinoshita}, \citenamefont {Wenger},\ and\ \citenamefont
{Weiss}}]{Kinoshita}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Kinoshita}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Wenger}}, \
and\ \bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont {Weiss}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf
{\bibinfo {volume} {440}},\ \bibinfo {pages} {900} (\bibinfo {year}
{2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hofferberth}\ \emph {et~al.}(2007)\citenamefont
{Hofferberth}, \citenamefont {Lesanovsky}, \citenamefont {Fischer},
\citenamefont {Schumm},\ and\ \citenamefont {Schmiedmayer}}]{Hofferbert}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Hofferberth}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Lesanovsky}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Fischer}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schumm}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Schmiedmayer}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {449}},\ \bibinfo {pages} {324} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Leibfried}\ \emph {et~al.}(2003)\citenamefont
{Leibfried}, \citenamefont {Blatt}, \citenamefont {Monroe},\ and\
\citenamefont {Wineland}}]{BlattRMP}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Leibfried}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Wineland}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf
{\bibinfo {volume} {75}},\ \bibinfo {pages} {281} (\bibinfo {year}
{2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gerritsma}\ \emph {et~al.}(2010)\citenamefont
{Gerritsma}, \citenamefont {Kirchmair}, \citenamefont {Z\"ahringer},
\citenamefont {Solano}, \citenamefont {Blatt},\ and\ \citenamefont
{Roos}}]{Gerri}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Gerritsma}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Kirchmair}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Z\"ahringer}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.~F.}\ \bibnamefont {Roos}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {463}},\ \bibinfo {pages} {68} (\bibinfo {year} {2010})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Blatt}\ and\ \citenamefont {Roos}(2012)}]{Blatt2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Blatt}}\ and\ \bibinfo {author} {\bibfnamefont {C.~F.}\ \bibnamefont
{Roos}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature
Physics}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {277} (\bibinfo
{year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Serwane}\ \emph {et~al.}(2011)\citenamefont
{Serwane}, \citenamefont {Z\"urn}, \citenamefont {Lompe}, \citenamefont
{Ottenstein}, \citenamefont {Wenz},\ and\ \citenamefont {Jochim}}]{Serwane}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Serwane}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Z\"urn}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Lompe}}, \bibinfo
{author} {\bibfnamefont {T.~B.}\ \bibnamefont {Ottenstein}}, \bibinfo
{author} {\bibfnamefont {A.~N.}\ \bibnamefont {Wenz}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Jochim}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {332}},\ \bibinfo {pages} {336} (\bibinfo {year}
{2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Korenblit}\ \emph {et~al.}(2012)\citenamefont
{Korenblit}, \citenamefont {Kafri}, \citenamefont {Campbell}, \citenamefont
{Islam}, \citenamefont {Edwards}, \citenamefont {Gong}, \citenamefont {Lin},
\citenamefont {Duan}, \citenamefont {Kim}, \citenamefont {Kim},\ and\
\citenamefont {Monroe}}]{Korenblit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Korenblit}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kafri}},
\bibinfo {author} {\bibfnamefont {W.~C.}\ \bibnamefont {Campbell}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Islam}}, \bibinfo {author}
{\bibfnamefont {E.~E.}\ \bibnamefont {Edwards}}, \bibinfo {author}
{\bibfnamefont {Z.-X.}\ \bibnamefont {Gong}}, \bibinfo {author}
{\bibfnamefont {G.-D.}\ \bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont
{L.-M.}\ \bibnamefont {Duan}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Kim}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\
}\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {095024} (\bibinfo
{year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Polkovnikov}(2011)}]{Polkov2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Polkovnikov}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Ann. Phys.}\ }\textbf {\bibinfo {volume} {326}},\ \bibinfo {pages} {486}
(\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ikeda}\ \emph {et~al.}(2015)\citenamefont {Ikeda},
\citenamefont {Sakumichi}, \citenamefont {Polkovnikov},\ and\ \citenamefont
{Ueda}}]{Ikeda2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~N.}\ \bibnamefont
{Ikeda}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Sakumichi}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Polkovnikov}}, \ and\
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ueda}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Ann. Phys.}\ }\textbf {\bibinfo
{volume} {354}},\ \bibinfo {pages} {338} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Garc{\'\i}a-Mata}\ \emph {et~al.}(2015)\citenamefont
{Garc{\'\i}a-Mata}, \citenamefont {Roncaglia},\ and\ \citenamefont
{Wisniacki}}]{GarciaMata2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Garc{\'\i}a-Mata}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Roncaglia}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Wisniacki}},\ }\href@noop {} {\ \textbf {\bibinfo {volume} {91}},\ \bibinfo
{pages} {010902} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Jensen}\ and\ \citenamefont
{Shankar}(1985)}]{Jensen85}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~V.}\ \bibnamefont
{Jensen}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Shankar}},\ }\href {\doibase 10.1103/PhysRevLett.54.1879} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {54}},\ \bibinfo {pages} {1879} (\bibinfo {year}
{1985})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Deutsch}(1991)}]{Deutsch91}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Deutsch}},\ }\href {\doibase 10.1103/PhysRevA.43.2046} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {43}},\
\bibinfo {pages} {2046} (\bibinfo {year} {1991})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Srednicki}(1994)}]{Srednicki1994}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Srednicki}},\ }\href {\doibase 10.1103/PhysRevE.50.888} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume}
{50}},\ \bibinfo {pages} {888} (\bibinfo {year} {1994})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Calabrese}\ and\ \citenamefont
{Cardy}(2006)}]{Calabrese2006}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Calabrese}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Cardy}},\ }\href {\doibase 10.1103/PhysRevLett.96.136801} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {96}},\ \bibinfo {pages} {136801} (\bibinfo {year}
{2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rigol}\ \emph {et~al.}(2008)\citenamefont {Rigol},
\citenamefont {Dunjko},\ and\ \citenamefont {Olshanii}}]{Rigol2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Rigol}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Dunjko}}, \
and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Olshanii}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf
{\bibinfo {volume} {452}},\ \bibinfo {pages} {854} (\bibinfo {year}
{2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rigol}(2009)}]{Rigol2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Rigol}},\ }\href {\doibase 10.1103/PhysRevLett.103.100403} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {103}},\ \bibinfo {pages} {100403} (\bibinfo {year}
{2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Linden}\ \emph {et~al.}(2009)\citenamefont {Linden},
\citenamefont {Popescu}, \citenamefont {Short},\ and\ \citenamefont
{Winter}}]{Linden2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Linden}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}},
\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Winter}},\ }\href
{\doibase 10.1103/PhysRevE.79.061103} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo
{pages} {061103} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gogolin}\ \emph {et~al.}(2011)\citenamefont
{Gogolin}, \citenamefont {M\"uller},\ and\ \citenamefont
{Eisert}}]{GogolinMullerEisert2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Gogolin}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont
{M\"uller}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Eisert}},\ }\href {\doibase 10.1103/PhysRevLett.106.040401} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {106}},\ \bibinfo {pages} {040401} (\bibinfo {year}
{2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dicke}(1954)}]{Dicke54}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~H.}\ \bibnamefont
{Dicke}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {99} (\bibinfo
{year} {1954})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hepp}\ and\ \citenamefont {Lieb}(1973)}]{dicke}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Hepp}}\ and\ \bibinfo {author} {\bibfnamefont {E.~H.}\ \bibnamefont
{Lieb}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Ann.
Phys. (NY)}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {360}
(\bibinfo {year} {1973})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Emary}\ and\ \citenamefont
{Brandes}(2003)}]{EmaryBrandes2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Emary}}\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Brandes}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. Lett.}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {044101}
(\bibinfo {year} {2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Zhu}\ and\ \citenamefont
{Xiong}(2000)}]{PhysRevB.62.14780}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.-P.}\ \bibnamefont
{Zhu}}\ and\ \bibinfo {author} {\bibfnamefont {S.-J.}\ \bibnamefont
{Xiong}},\ }\href {\doibase 10.1103/PhysRevB.62.14780} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {62}},\
\bibinfo {pages} {14780} (\bibinfo {year} {2000})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Giraud}\ \emph {et~al.}(2005)\citenamefont {Giraud},
\citenamefont {Georgeot},\ and\ \citenamefont {Shepelyansky}}]{Giraud2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Giraud}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Georgeot}}, \
and\ \bibinfo {author} {\bibfnamefont {D.~L.}\ \bibnamefont {Shepelyansky}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\
}\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {036203} (\bibinfo
{year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kramer}\ and\ \citenamefont
{MacKinnon}(1993)}]{Kramer1993}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Kramer}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{MacKinnon}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Rep. Prog. Phys.}\ }\textbf {\bibinfo {volume} {56}},\ \bibinfo {pages}
{1469} (\bibinfo {year} {1993})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lemari\'e}\ \emph {et~al.}(2016)\citenamefont
{Lemari\'e}, \citenamefont {Dubertrand}, \citenamefont {Martin},
\citenamefont {Giraud}, \citenamefont {Garc\'ia-Mata},\ and\ \citenamefont
{Georgeot}}]{NOUSunpub}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Lemari\'e}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Dubertrand}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Martin}},
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Giraud}}, \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Garc\'ia-Mata}}, \ and\ \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {Georgeot}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Unpublished}\ } (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Luck}(2016)}]{Luck2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-M.}\ \bibnamefont
{Luck}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal
of Physics A: Mathematical and Theoretical}\ }\textbf {\bibinfo {volume}
{49}},\ \bibinfo {pages} {115303} (\bibinfo {year} {2016})}\BibitemShut
{NoStop}
\end{thebibliography}
\onecolumngrid
\appendix
\begin{center}
{\large\textbf{Supplemental material for ``Average diagonal entropy in non-equilibrium isolated quantum systems''}}
\end{center}
In this Supplemental material we provide a detailed proof of the general relation Eq.~(14) of the main text. We recall that $\Delta S$ is defined as
\begin{equation}
\label{DeltaS2}
\Delta S=-\sum_n \overline{h_n}\ln\overline{h_n}+\sum_n \overline{h_n\ln h_n},
\end{equation}
with transition probabilities given by
\begin{equation}
\label{hnbis2}
h_n=\sum_{m_1,m_2}e^{i(E'_{m_1}-E'_{m_2}) \tau}\,\langle n|m_2\rangle\langle m_2|n_0\rangle\langle n_0|m_1\rangle\langle m_1|n\rangle.
\end{equation}
To lighten the notation we will drop the index $n$ until the very last equation. The demonstration goes as follows. First we obtain an expression for the moments $\overline{h^k}$ as sums over integer partitions (section \ref{subs1}).
Then, through the inverse Laplace transform, we can get the distribution $P(h)$ needed to calculate Eq.~\eqref{DeltaS2} (section \ref{subs2}).
To achieve this we first have to reexpress the moments in a form where resummation and simplification become feasible (section \ref{subs3}), and finally we obtain the full expression for $\Delta S$ (section \ref{subs4}).
\subsection{Moments as a sum over integer partitions}
\label{subs1}
The first step is to calculate the moments $\overline{h^k}$ averaged over $\tau$, assuming that the quantities
$e^{i\left(\sum_{i=1}^k E'_{m_{2i-1}}-\sum_{i=1}^{k} E'_{m_{2i}}\right)\tau}$ average to 1 if the sets $\{m_{2i-1},1\leq i\leq k\}$ and $\{m_{2i},1\leq i\leq k\}$ are permutations of each other, and to 0 otherwise. In the main text we gave the first few averages
\begin{eqnarray}
\label{moment2}
\overline{h^2}/\overline{h}^2&=&2-\xi_2\\
\label{moment3}
\overline{h^3}/\overline{h}^3&=&6-9\xi_2+4\xi_3\\
\label{moment4}
\overline{h^4}/\overline{h}^4&=&24-72\xi_2+18(\xi_2)^2+64\xi_3-33\xi_4
\end{eqnarray}
in terms of the participation ratios (PR) of the vectors $(c_{m})_{m}$ for integer $q$,
\begin{equation}
\label{xiqn2}
\xi_{q}\equiv\frac{\sum_m c_{m}^q}{(\sum_m c_{m})^q},\quad c_{m}=|\langle m|n_0\rangle|^2|\langle m|n\rangle|^2,\quad \sum_m c_m=\overline{h}.
\end{equation}
It is possible to obtain these expressions in a systematic way by introducing integer partitions. It is usual to denote by $\lambda\vdash k$ a partition $\lambda=(\lambda_1,\lambda_2,\ldots)$ of $k$, with $\lambda_1\geq\lambda_2\geq\ldots$ and $\sum_i\lambda_i=k$ (it can be padded on the right by an arbitrary number of zeros). The products of $\xi_q$ appearing in the expressions \eqref{moment2}--\eqref{moment4} for $\overline{h^k}$ correspond to all possible integer partitions of the integer $k$. For instance, noticing that $\xi_1=1$, the terms $\xi_4$, $\xi_3\xi_1$, $\xi_2^2$, $\xi_2\xi_1^2$ and $\xi_1^4$ contributing to Eq.~\eqref{moment4} correspond to the partitions $4=3+1=2+2=2+1+1=1+1+1+1$. Following standard textbooks \cite{Macdo}, one can define several families of symmetric polynomials of the variables $c_m$, $1\leq m\leq N$. We set
\begin{equation}
\label{mlambda}
m_{\lambda}=\sum_\sigma c_1^{\sigma(\lambda_1)}c_2^{\sigma(\lambda_2)}\ldots c_N^{\sigma(\lambda_N)},
\end{equation}
where the sum runs over all permutations of $(\lambda_1,\lambda_2,\ldots,\lambda_N)$ (see p.~18 of \cite{Macdo}); if $N$ is smaller than the number of nonzero $\lambda_i$'s then by convention $m_{\lambda}=0$. We also define the polynomials
\begin{equation}
\label{plambda}
p_{\lambda}=(\sum_m c_m^{\lambda_1})(\sum_m c_m^{\lambda_2})\ldots
\end{equation}
(see p.~24 of \cite{Macdo}). The $p_{\lambda}$ are simply related to the PR defined by Eq.~\eqref{xiqn2} by
\begin{equation}
\label{pxi}
p_{\lambda}=\overline{h}^k\xi_{\lambda_1}\xi_{\lambda_2}\cdots
\end{equation}
For $\lambda\vdash k$, the $p_{\lambda}$ and $m_{\lambda}$ are related by the linear relation
\begin{equation}
\label{pLm}
p_{\lambda}=\sum_{\mu\vdash k}L_{\lambda\mu}m_{\mu}
\end{equation}
(p.~103 of \cite{Macdo}), with $L_{\lambda\mu}$ an invertible lower-triangular matrix of integers indexed by partitions $\lambda,\mu$ of $k$.
Taking the $k\,$th power of Eq.~\eqref{hnbis2} and keeping only terms where the energies exactly compensate, we get the general expression of the $k\,$th moment as
\begin{equation}
\label{hk1}
\overline{h^k}=\sum_{m_1,\ldots,m_k}P(m_1,\ldots,m_k)c_{m_1}c_{m_2}\ldots c_{m_k}
\end{equation}
with $P(m_1,\ldots,m_k)$ the number of distinct permutations of the multiset $(m_1,\ldots,m_k)$. We then group together all terms with the same `pattern' of indices, e.g., for $k=3$, terms with all indices equal, terms with exactly two indices equal, and terms with all indices distinct, which correspond to the integer partitions $3=2+1=1+1+1$. The expression \eqref{hk1} then simply reduces to
\begin{equation}
\label{hk2}
\overline{h^k}=\sum_{\lambda\vdash k}P_{\lambda}^2m_{\lambda}
\end{equation}
with $P_{\lambda}$ the multinomial coefficient associated with $(\lambda_1,\lambda_2,\ldots)$ and $m_{\lambda}$ the symmetric polynomial \eqref{mlambda}. In order to relate the mean moments to the PR, we want to express $\overline{h^k}$ by means of the $p_{\lambda}$ rather than the $m_{\lambda}$; inverting the relation \eqref{pLm} we get
\begin{equation}
\label{hk3}
\overline{h^k}=\sum_{\lambda\vdash k}\sum_{\mu\vdash k}P_{\lambda}^2(L^{-1})_{\lambda\mu}p_{\mu}.
\end{equation}
Setting $Z_{\mu}=\sum_{\lambda\vdash k}P_{\lambda}^2(L^{-1})_{\lambda\mu}$ we have the final compact expression
\begin{equation}
\overline{h^k}=\sum_{\mu\vdash k}Z_{\mu}p_{\mu}.
\label{finalhk}
\end{equation}
The $L_{\lambda\mu}$, and thus the $Z_{\mu}$, can easily be calculated using mathematical software, while $p_{\mu}$ is obtained from Eq.~\eqref{pxi}. For instance, for $k=3$, with the partitions $3=2+1=1+1+1$ ordered in reverse lexicographic order, one gets the matrix $L=\{\{1,0,0\},\{1,1,0\},\{1,3,6\}\}$ and the multinomial coefficients $P_{3}=1$, $P_{21}=3$ and $P_{111}=6$, so that $Z_{\mu}$ is the vector $(4, -9, 6)$, which indeed allows one to recover Eq.~\eqref{moment3}.
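The $k=3$ example can be reproduced in a few lines (a minimal sketch of the matrix route):
\begin{verbatim}
import numpy as np

# Partitions of 3 in reverse lexicographic order: (3), (21), (111)
L = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 3, 6]], dtype=float)
P = np.array([1, 3, 6], dtype=float)   # multinomial coefficients P_lambda
Z = (P**2) @ np.linalg.inv(L)          # Z_mu = sum_lambda P_lambda^2 (L^-1)_{lambda mu}
print(Z)                               # -> [ 4. -9.  6.]
\end{verbatim}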
\subsection{From moments to entropy difference}
\label{subs2}
The knowledge of the moments \eqref{finalhk} allows one to reconstruct the probability distribution $P(h)$. Indeed, let
\begin{equation}
\label{defM}
M(t)=\int_{0}^{\infty}dh P(h) e^{t h}
\end{equation}
be the moment generating function of $P(h)$. Then $M(t)$ can be expressed as a series
\begin{equation}
\label{sumM2}
M(t)=\sum_{k=0}^{\infty}\frac{\overline{h^k}}{k!}t^k.
\end{equation}
It is then possible to get $P(h)$ from inverse Laplace transform of $M(t)$, namely
\begin{equation}
\label{inverselaplace}
P(h)=\frac{1}{2i\pi}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty}dt e^{t h}M(-t)
\end{equation}
with $c$ a real number such that the contour in \eqref{inverselaplace} goes to the right of all poles (the $M(-t)$ comes from the fact that \eqref{defM} is not exactly a Laplace transform as there is a factor $e^{t h}$ rather than $e^{-t h}$). The entropy difference $\Delta S$ associated with $h$ is then simply given by
\begin{equation}
\label{entropydiff}
-\overline{h}\ln\overline{h}+\overline{h\ln h}=-\overline{h}\ln\overline{h}+\int_{0}^{\infty}dh P(h) h\ln h.
\end{equation}
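In the ideal delocalized limit discussed in the main text, only the $s=0$ term derived below survives and $P(h)$ is exponential; the entropy difference \eqref{entropydiff} then equals $\overline{h}(1-\gamma)$, which can be checked by a quick Monte Carlo sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
hbar = 0.5
h = rng.exponential(hbar, size=10_000_000)
lhs = -hbar*np.log(hbar) + np.mean(h*np.log(h))
print(lhs, hbar*(1 - 0.5772156649))    # both ~ hbar*(1-gamma)
\end{verbatim}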
In order to perform the sum over $k$ in \eqref{sumM2} we first have to rewrite the moments $\overline{h^k}$ in a form where the dependence on $k$ is more transparent.
\subsection{Reexpressing the moments}
\label{subs3}
The aim of this section is to show that $Z_{\mu}$ appearing in Eq.~\eqref{finalhk} can be put under the form
\begin{equation}
\label{zmutilde}
Z_{\mu}=k!\,\binom{k}{s}\tilde{Z}_{\mu'}
\end{equation}
where $s=\sum_{\mu_i\geq 2}\mu_i$ and $\mu'$ is the partition of $s$ obtained by removing the 1's from $\mu$, with $\tilde{Z}_{\mu'}$ rational numbers. For instance, for all partitions of the form $\mu=(211\ldots1)$ we have $\mu'=(2)$ and $s=2$. From its explicit definition $Z_{\mu}=\sum_{\lambda\vdash k}P_{\lambda}^2(L^{-1})_{\lambda\mu}$ it is easy to calculate $Z_{2}=-1$, $Z_{21}=-9$, $Z_{211}=-72$, $Z_{2111}=-600$ and so on, so that Eq.~\eqref{zmutilde} holds with $\tilde{Z}_{2}=-\frac12$. Similarly we can compute $\tilde{Z}_{3}=\frac23$, $\tilde{Z}_{4}=-\frac{11}{8}$, $\tilde{Z}_{22}=\frac34$. In the special case of partitions of the form $\mu=(111\ldots1)$, Eq.~\eqref{zmutilde} can still be satisfied by setting $\mu'=(0)$ and $\tilde{Z}_{0}=1$.
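These values can be checked numerically against the brute-force moments (a minimal sketch; the weights $c_m$ are arbitrary random numbers):
\begin{verbatim}
import itertools, math
import numpy as np

rng = np.random.default_rng(3)
c = rng.random(6)
p = lambda q: np.sum(c**q)               # power sums p_q

def moment(k):                           # brute-force k-th moment
    tot = 0.0
    for idx in itertools.product(range(len(c)), repeat=k):
        counts = [idx.count(i) for i in set(idx)]
        perms = math.factorial(k) // math.prod(map(math.factorial, counts))
        tot += perms * np.prod(c[list(idx)])
    return tot

Zt = {(): 1, (2,): -1/2, (3,): 2/3, (2, 2): 3/4, (4,): -11/8}
def expansion(k):                        # sum of k! C(k,s) Ztilde p_mu' p_1^(k-s)
    tot = 0.0
    for mu_p, z in Zt.items():
        s = sum(mu_p)
        if s <= k:
            pmu = math.prod(p(q) for q in mu_p)
            tot += math.factorial(k)*math.comb(k, s)*z*pmu*p(1)**(k - s)
    return tot

for k in (2, 3, 4):
    print(k, moment(k), expansion(k))    # the two columns agree
\end{verbatim}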
In order to show Eq.~\eqref{zmutilde}, we start by noting that the $L_{\lambda\mu}$ can be interpreted as the number of ways of adding together the $\lambda_i$ in order to obtain the $\mu_i$ (see \cite{Macdo} p.~103). For example, for $k=3$ the partition $\lambda=(111)$ can yield the partition $\mu=(21)$ by adding two 1's. There are three different ways of doing so, hence the entry $L_{111,21}=3$ given in subsection \ref{subs1}.
Let $\lambda\vdash k$ be a fixed partition of $k$ such that all $\lambda_i\geq 2$. From their definition, the quantities $Z_{\mu}$ verify
\begin{equation}
\label{sum1}
\sum_{\mu\vdash k}L_{\mu\lambda}Z_{\mu}= P_{\lambda}^2.
\end{equation}
It can be rewritten as a double sum over the number $r$ of 1's in the partition $\mu$ and over partitions $\mu'$ of $k-r$ not containing any 1. We thus have
\begin{equation}
\label{sumrmup}
\sum_{r=0}^{k}\sum_{\genfrac{}{}{0pt}{}{\mu'\vdash k-r}{\mu'_i\geq 2}}L_{\mu\lambda}\tilde{Z}_{\mu}k!\,\binom{k}{r}= P_{\lambda}^2,
\end{equation}
where we have introduced the notation $\tilde{Z}_{\mu}=Z_{\mu}/(k!\,\binom{k}{r})$, and $\mu=(\mu'1\ldots 1)$ with $r$ numbers 1. The goal is to show that $\tilde{Z}_{\mu}$ in fact only depends on $\mu'$ and not on $r$.
If we adjoin a 1 to the partition $\lambda$, we get the partition $(\lambda 1)$, which is a partition of $k+1$. For this partition, Eq.~\eqref{sumrmup} yields
\begin{equation}
\label{sumkr}
\sum_{r=0}^{k+1}\sum_{\genfrac{}{}{0pt}{}{\nu'\vdash k+1-r}{\nu'_i\geq 2}}L_{\nu(\lambda 1)}\tilde{Z}_{\nu}(k+1)!\,\binom{k+1}{r}= P_{(\lambda 1)}^2=(k+1)^2P_{\lambda}^2.
\end{equation}
Since $L_{\nu(\lambda 1)}$ can be interpreted as the number of ways of adding together the $\nu_i$ in order to obtain the elements of $(\lambda 1)$, which are the $\lambda_i$ and the additional term 1, necessarily this additional 1 has to come from a 1 appearing among the $\nu_i$. In particular this implies that the term $r=0$ in the sum \eqref{sumkr} must vanish, and that $\nu$ is of the form $(\mu 1)$. There are $r$ possible ways of choosing this additional 1 (the total number of 1's in $\nu$), and then the number of ways to group the remaining $\nu_i=\mu_i$ to get the $\lambda_i$ is precisely $L_{\mu\lambda}$. Thus $L_{\nu(\lambda 1)}=L_{(\mu 1)(\lambda 1)}=r L_{\mu\lambda}$, and $\nu'=\mu'$. Shifting the sum in \eqref{sumkr} yields
\begin{equation}
\label{sumkr2}
\sum_{r=0}^{k}\sum_{\genfrac{}{}{0pt}{}{\mu'\vdash k-r}{\mu'_i\geq 2}}(r+1)L_{\mu\lambda}\tilde{Z}_{(\mu 1)}(k+1)!\,\binom{k+1}{r+1}=(k+1)^2P_{\lambda}^2,
\end{equation}
which after simplification reduces to
\begin{equation}
\label{sumkr3}
\sum_{r=0}^{k}\sum_{\genfrac{}{}{0pt}{}{\mu'\vdash k-r}{\mu'_i\geq 2}} L_{\mu\lambda}\tilde{Z}_{(\mu 1)}k!\,\binom{k}{r}=P_{\lambda}^2.
\end{equation}
Comparing Eq.~\eqref{sumkr3} with Eq.~\eqref{sumrmup} one gets that $\tilde{Z}_{(\mu 1)}=\tilde{Z}_{\mu}$. Proceeding in the same way recursively one can show that all $\tilde{Z}_{(\mu 1\ldots 1)}$ are equal: we denote them $\tilde{Z}_{\mu'}$, which proves Eq.~\eqref{zmutilde}.
\subsection{Resummation of contributions}
\label{subs4}
We are now in a position to calculate the distribution $P(h)$. Using the above result, Eq.~\eqref{finalhk} can be rewritten
\begin{equation}
\overline{h^k}=\sum_{\mu\vdash k}k!\,\binom{k}{s}\tilde{Z}_{\mu'}p_{\mu'}p_1^{k-s},
\label{finalhk2}
\end{equation}
using the definition \eqref{plambda} of $p_{\lambda}$ and the notation $\mu=(\mu'11\ldots 1)$, and with $s=\sum_i\mu'_i$. Any given sequence of numbers $\mu'=(\mu'_1\mu'_2\ldots)$ with $\mu'_i\geq 2$ will contribute to each moment $\overline{h^k}$ with $k\geq s$ through the partition $(\mu'11\ldots 1)$ of $k$ with $(k-s)$ 1's. The contribution of $\mu'$ to $M(t)$ will be (using $p_1=\overline{h}$)
\begin{equation}
\sum_{k=s}^{\infty}t^k\binom{k}{s}\tilde{Z}_{\mu'}p_{\mu'}p_1^{k-s}=\frac{t^s\tilde{Z}_{\mu'}p_{\mu'}}{(1-\overline{h}t)^{s+1}}.\label{mtgeneral}
\end{equation}
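The resummation above relies on the binomial series
\begin{equation}
\sum_{k=s}^{\infty}\binom{k}{s}x^{k-s}=\frac{1}{(1-x)^{s+1}},\qquad |x|<1,
\end{equation}
applied here with $x=\overline{h}t$.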
The inverse Laplace transform \eqref{inverselaplace} then yields the contribution of $\mu'$ to the probability distribution $P(h)$. There is a single pole of order $s+1$ at $-1/\overline{h}$, whose residue is given by
\begin{equation}
\label{bigsum2}
\frac{1}{s!}\frac{\tilde{Z}_{\mu'}p_{\mu'}}{\overline{h}^{s+1}}\lim_{t\to -1/\overline{h}}\frac{\partial^s}{\partial t^s}\left(t^se^{t h}\right)=
\sum_{r=0}^{s}\binom{s}{r}^2\frac{r!}{s!}\left(-\frac{h}{\overline{h}}\right)^{s-r}
\frac{e^{-h/\overline{h}}}{\overline{h}^{s+1}}\tilde{Z}_{\mu'}p_{\mu'}.
\end{equation}
The contribution to the entropy difference \eqref{entropydiff} is then obtained by evaluating integrals of the form
\begin{equation}
\int_{0}^{\infty}\!dh\left(-\frac{h}{\overline{h}}\right)^{a}\frac{e^{-h/\overline{h}}}{\overline{h}}h\ln h =(-1)^a (a+1)!\,\, \overline{h} \left(\ln\overline{h}+\frac{|S_{a+2}|}{(a+1)!}-\gamma\right),
\end{equation}
where $S_{a}$ are Stirling numbers of the first kind and $\gamma=0.5772\ldots\,$ is Euler's constant. Performing the summation over $r$ in \eqref{bigsum2}, this term reduces to $\overline{h}\left(\ln\overline{h}+1-\gamma\right)$ if $s=0$, and to
\begin{equation}
\label{bigsum3}
\frac{\tilde{Z}_{\mu'}p_{\mu'}}{\overline{h}^{s-1}}\sum_{r=0}^{s}\binom{s}{r}(-1)^r(r+1)\left(\ln\overline{h}-\gamma+\frac{|S_{r+2}|}{(r+1)!}\right)=\frac{\tilde{Z}_{\mu'}p_{\mu'}}{s(s-1)\overline{h}^{s-1}}
\end{equation}
if $s\geq 2$. The last expression has been obtained by using the identities for $s\geq 2$ (see e.g.~\cite{spivey})
\begin{equation}
\sum_{r=0}^{s}\binom{s}{r}(-1)^r(r+1)=0
\end{equation}
and
\begin{equation}
\sum_{r=0}^{s}\binom{s}{r}(-1)^r\frac{|S_{r+2}|}{r!}=\frac{1}{s(s-1)}.
\end{equation}
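Both identities are easily verified numerically (a minimal sketch; $|S_n|$ is taken here to be the unsigned Stirling number of the first kind $\genfrac{[}{]}{0pt}{}{n}{2}$, the convention assumed above):
\begin{verbatim}
from math import comb, factorial

def stirling1(n, k):      # unsigned Stirling numbers of the first kind
    if n == k == 0: return 1
    if n == 0 or k == 0: return 0
    return stirling1(n-1, k-1) + (n-1)*stirling1(n-1, k)

for s in range(2, 8):
    a = sum(comb(s, r)*(-1)**r*(r+1) for r in range(s+1))
    b = sum(comb(s, r)*(-1)**r*stirling1(r+2, 2)/factorial(r)
            for r in range(s+1))
    print(s, a, b, 1/(s*(s-1)))          # a = 0 and b = 1/(s(s-1))
\end{verbatim}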
Summing up contributions from all possible $\mu'$ one finally has
\begin{equation}
\label{bigsum4}
-\overline{h}\ln\overline{h}+\overline{h\ln h}=\overline{h}\left(1-\gamma\right)+\sum_{\mu'}\frac{\tilde{Z}_{\mu'}\overline{h}}{s(s-1)}\frac{p_{\mu'}}{\overline{h}^{s}}
\end{equation}
where $s=\sum_{i}\mu'_i$ and the sum runs over all sequences $\mu'=(\mu'_1\mu'_2\ldots)$ with $\mu'_i\geq 2$. By increasing order these partitions are $(2),(3),(4),(22),(5),(32),\ldots$. Using Eq.~\eqref{pxi}, the term $p_{\mu'}/\overline{h}^{s}=\xi_{\mu'_1}\xi_{\mu'_2}\ldots$ is just a product of generalized participation ratios. Reintroducing the dependence on $n$ for $h\equiv h_n$, we get (recall that $\sum_n\overline{h_n}=1$) the final expression
\begin{equation}
\label{finalexpr}
-\sum_n \overline{h_n}\ln\overline{h_n}+\sum_n \overline{h_n\ln h_n}=1-\gamma+\sum_{\mu'}\frac{\tilde{Z}_{\mu'}}{s(s-1)}\overline{\xi}_{\mu'}
\end{equation}
with $s=\sum_{i}\mu'_i$ and $\overline{\xi}_{\mu}\equiv\sum_n\overline{h_n}\,\xi_{\mu_1,n}\xi_{\mu_2,n}\ldots$.
\end{document}
\begin{document}
\begin{titlepage}
\begin{center}
\Large{\textbf{Experimental demonstration of scalable quantum key distribution over a thousand kilometers}} \\
\large{A.\,Aliev, V.\,Statiev, I.\,Zarubin, N.\,Kirsanov, D.\,Strizhak, A.\,Bezruchenko, A.\,Osicheva, A.\,Smirnov, M.\,Yarovikov, A.\,Kodukhov, V.\,Pastushenko, M.\,Pflitsch, V.\,Vinokur}
\Large{\it{Terra Quantum AG}}
\end{center}
\end{titlepage}
\section*{Abstract}
\noindent
Secure communication over long distances is one of the major problems of modern informatics. Classical transmissions are recognized to be vulnerable to quantum computer attacks.
Remarkably, the same quantum mechanics that engenders quantum computers offers guaranteed protection against these attacks via a quantum key distribution (QKD) protocol.
Yet, long-distance transmission is problematic since the signal decay in optical channels occurs at distances of about a hundred kilometers.
We resolve this problem by creating a QKD protocol, further referred to as the Terra Quantum QKD protocol (TQ-QKD protocol), which uses
semiclassical pulses containing enough photons for random bit encoding and exploits erbium amplifiers to retranslate the photon pulses, while ensuring that at this intensity only a few photons can leak out of the channel even over distances of about a hundred meters.
As a result, an eavesdropper cannot efficiently utilize the lost part of the signal.
A central component of the TQ-QKD protocol is the end-to-end control over losses in the transmission channel, since such losses could, in principle, allow an eavesdropper to obtain the transmitted information.
However, our control precision is such that if the leakage falls below the control border, the leaking states are quantum, since they contain only a few photons.
Therefore, the parts of the bit-encoding states representing `0' and `1' that are available to an eavesdropper are nearly indistinguishable.
Our work presents the experimental realization of the TQ-QKD protocol ensuring secure communication over 1032 kilometers.
Moreover, further refining the quality of the scheme's components will greatly expand the attainable transmission distances.
This paves the way for creating a secure global QKD network in the upcoming years.
\section{Introduction}
The TQ-QKD protocol\,\cite{new_theory} resolves the problem of secure long-distance communications.
The emergence of Shor's algorithm brought the threat of breaking currently standardized public-key algorithms (e.g., RSA, ECC, and DSA).
Although Shor's algorithm can only be executed on massive quantum computers that as yet do not exist, this threat must not be ignored.
Fortunately, the same quantum physics offers the possibility of QKD\cite{qkd:1, qkd:2, qkd:3, qkd:4, qkd:5, qkd:6, qkd:7}, a secure communication method that implements a cryptographic protocol involving components of quantum mechanics.
It enables communicating participants to generate a shared random secret key known only to them, which then can be used to encrypt and decrypt messages.
A unique property of quantum key distribution providing its security relies on the foundations of quantum mechanics: the ability of the communicating users to detect the presence of any third party trying to gain knowledge of the key.
This results from the fundamental quantum mechanics principle, the fact that measuring disturbs the measured quantum states.
A third party trying to eavesdrop on the key inevitably creates detectable anomalies.
By careful analysis of transmitting quantum states, a communication system detects eavesdropping and immediately takes measures to fully secure the transmission.
Most of the existing QKD applications are curtailed by channel losses that result in the exponential decay of the signal with the distance as dictated by the fundamental Pirandola-Laurenza-Ottaviani-Banchi (PLOB) bound\,\cite{plob}.
In this framework up-to-date record secret key generation rates\,\cite{compare_qkd_mdi, compare_qkd_bb84} do not exceed several bits per second at distances of 400\,km.
To overcome the PLOB bound, one can introduce trusted nodes\cite{trusted_nodes:1, trusted_nodes:2, trusted_nodes:3} where local secret keys are produced for each QKD link between nodes and stored in the nodes.
This implies re-coding of the quantum information into fully classical information at these trusted nodes and completely eliminates the quantum protection of the overall protocol.
The way to preserve quantumness is the use of quantum repeaters.
Ideally, quantum repeaters\cite{quantum_repeaters:1, quantum_repeaters:2, quantum_repeaters:3, quantum_repeaters:4, quantum_repeaters:5, quantum_repeaters:6, quantum_repeaters:7, quantum_repeaters:8, quantum_repeaters:9, quantum_repeaters:10, quantum_repeaters:11, quantum_repeaters:12, quantum_repeaters:13, quantum_repeaters:14, quantum_repeaters:15, quantum_repeaters:16} would purify and forward quantum signals without directly measuring or cloning them.
However, such idealized quantum repeaters remain beyond the reach of existing technologies.
The only available secure method to beat the PLOB bound is the Twin-Field QKD\,\cite{lucamarini, compare_qkd_tf, compare_qkd_tf_1002} (working only at relatively short distances) which sends quantum states from both Alice and Bob to the intermediate point.
This method, however, is not scalable and dramatically suffers from channel losses as well.
Figure\,\ref{comparing} summarizes some of the previous realizations of long-distance QKD, including the current record distance\,\cite{compare_qkd_tf_1002}, and underlines the superiority of our work in terms of speed and distance. Our work presents the realization of the TQ-QKD protocol, which eliminates the PLOB bound by using quantum thermodynamic restrictions and quantum-mechanics-based loss control in the optical channel.
The implementation of the secure long-distance transmission line is based on Erbium-Doped Fiber Amplifiers (EDFAs) of our own Terra Quantum construction, arranged every 50\,km, which have enabled the practical realization of an optical fiber transmission line over 1032\,km.
\begin{figure}
\caption{\textbf{Comparison of our results with earlier long-distance QKD realizations\,\cite{compare_qkd_mdi, compare_qkd_bb84, compare_qkd_tf, compare_qkd_tf_1002}.}}
\label{comparing}
\end{figure}
\section{Terra Quantum QKD transmission line}\label{QKD_section}
\begin{figure}
\caption{\textbf{A setup of the QKD transmission line.}}
\label{QKDscheme}
\end{figure}
The Terra Quantum QKD transmission line (TQ-QKDTL), realizing the secure information transmission between the legitimate users, Alice and Bob, comprises Terra Quantum-made EDFAs (TQ-EDFAs) with an amplification coefficient of 10\,dB. A setup of the implemented TQ-QKDTL is shown in Fig.~\ref{QKDscheme}. In Alice's laboratory, the signals that are to be sent via the fiber optical channel are formed from the laser pulses by the amplitude modulator (AM), an IxBlue MXAN-LN-10. Before arriving at the AM, the generated pulses pass through the phase modulator (PM), which shifts the phase of each pulse randomly. Phase randomization\,\cite{phase_rand, phase_rand2, phase_rand3, phase_rand4} decreases the effectiveness of the eavesdropper's attacks without affecting the probability distributions of Bob's measurement results. The phase shift of a particular pulse is controlled by the Terra Quantum-made random signal generator (RSG) based on the avalanche breakdown described in Refs.\,\cite{qrng:1,qrng:2}.
Exploiting the quantum interference effect, the AM modulates the intensity of the optical signal.
The phase difference between the interfering components is set by the voltage coming to the radio frequency (RF) port.
Further, an additional direct current (DC) port is used for the bias point shifting.
The high-frequency electric signal is created by the field-programmable gate array (FPGA), which receives the original information bits generated by Alice using the TQ-QRNG, and by the electric amplifier working in the 10\,GHz band (IxBlue DR-VE-10-MO). The random bits are generated using the TQ-QRNG that has been certified by the Federal Institute of Metrology METAS (Test Report No 116-05151).
Our TQ-QRNG uses the random time interval between the registrations of the photons generated by a weak LED light source via a single-photon detector as an entropy source. The hashed time intervals constitute the final random number sequences.
The FPGA board sets the constant offset voltage for the bias control, using the information about the present power level coming from the AM via the monitoring detector. The light is generated by a Thorlabs TLX1 laser with a wavelength of 1530.33\,nm and an emission bandwidth of 10\,kHz. This wavelength is chosen since it corresponds to the peak of the EDFA amplification spectrum, see Fig.\,\ref{amplif_spectrum}.
At Bob's side, Alice's optical signals are first amplified by the 20\,dB TQ-EDFA and then pass through the narrow-band TQ-made optical filter with an 8.5\,GHz bandwidth, see Supplementary Information (SI), section\,\ref{narrow_filter_section}. Since this filter is a fiber Bragg grating (FBG), its bandwidth is temperature sensitive. To stabilize the temperature of the narrow-bandwidth filter, the Terra Quantum team developed a dedicated thermostat. Filtered from the natural amplifier noise, referred to as amplified spontaneous emission (ASE), Alice's signal arrives at Bob's detector (FPD610FC). The voltage from the detector is then digitized by the oscilloscope, a Tektronix DPO4104, and processed by Bob's computer in an automatic regime.
\section{The protocol}\label{protocol_section}
The structure of the TQ-QKD protocol is shown in Fig.\,\ref{protocol_block_scheme}. Before starting the secret communication, legitimate users must make sure that there is no eavesdropper intercepting the transmission line.
To that end, ideally, they have to execute the loss control procedure using the optical time domain reflectometry (OTDR) technique.
Corresponding devices commonly used for testing the integrity of fiber lines not containing amplifiers are described below in Section\,\ref{REFL_section}.
We develop a special OTDR technique suitable for our TQ-QKD protocol which we describe in detail in a forthcoming publication.
The loss control procedure is repeated with a certain frequency enabling the determination of the reference line tomogram, the magnitude of which drifts slowly with time.
The actions of `bit sending' and `test pulse sending' (constituting the transmittometry) occur between each pair of consecutive acts of reflectometry.
The protocol is illustrated in Fig.\,\ref{protocol_block_scheme}.
The reflectometry and transmittometry facilitate continuous loss control in parallel with the bit distribution.
\begin{figure}
\caption{\textbf{The general TQ-QKD protocol structure.}}
\label{protocol_block_scheme}
\end{figure}
At the beginning of the key generation process, Alice encodes the logical bits into phase-randomized coherent states. The pulses corresponding to the different bits have different average photon numbers and random phases. At his side, Bob carries out the measurements of the energies of the received states and exercises the subsequent classical post-selection of the results. The measurements can be formalized using the projective operators
\begin{equation}
\hat{E}_0=\sum \limits_{k=\Theta_3}^{\Theta_1} \ket{k} \bra{k},\quad\hat{E}_1 = \sum \limits_{k=\Theta_2}^{ \Theta_4} \ket{k} \bra{k},\quad\hat{E}_\text{fail} = \hat{\mathds{1}} - \hat{E}_0 - \hat{E}_1,
\label{Eint}
\end{equation}
where $\hat{E}_0$ and $\hat{E}_1$ correspond to the outcomes which Bob interprets as `0' and `1', respectively; the operator $\hat{E}_\text{fail}$ corresponds to the failed outcome (the corresponding bit positions will be removed at the postselection stage); $\ket{k}$ stands for a Fock state containing $k$ photons; $\hat{\mathds{1}}$ is the unity operator, and $\Theta_{1-4}$ are the postselection parameters that Bob chooses in correspondence with the amount of leakage $r_\text{E}$ created by an eavesdropper.
The exemplary probability distributions of the measured photon number corresponding to bits `0' and `1' are presented in Fig.\,\ref{postselection}.
Next, Bob sifts the measurement results through the four borders $\Theta_{1}$--$\Theta_{4}$ and obtains the raw key.
\begin{figure}
\caption{\textbf{Post-selection.}}
\label{postselection}
\end{figure}
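As an illustration, the post-selection rule of Eq.~\eqref{Eint} amounts to the following sketch (the threshold values below are placeholders, not the experimental settings):
\begin{verbatim}
def postselect(k, th1, th2, th3, th4):
    # '0' if Theta_3 <= k <= Theta_1, '1' if Theta_2 <= k <= Theta_4
    if th3 <= k <= th1:
        return 0
    if th2 <= k <= th4:
        return 1
    return None                     # 'fail' outcome, removed during sifting

th3, th1, th2, th4 = 80, 120, 150, 220      # placeholder thresholds
counts = [95, 130, 160, 210]                # example photon-number outcomes
print([(k, postselect(k, th1, th2, th3, th4)) for k in counts])
# -> [(95, 0), (130, None), (160, 1), (210, 1)]
\end{verbatim}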
\subsection{Error correction}\label{error_correction}
To correct the errors that are bound to occur in the measured bit sequence, we employ the combination of two well-known correction codes, the repetition code and the low-density parity-check (LDPC) code.
The repetition code proves to be effective for bit error rates (BERs) up to about 30\%, i.e., when up to roughly one-third of all bits after the post-selective measurement are inverted.
For the $n$ bit-long code, the fraction $R_{\text{rep}}$ of the corrected shared sequence that becomes known to Eve is
\begin{equation}
R_{\text{rep}} = \frac{n-1}{n}.
\end{equation}
The LDPC code is faster but is applicable only when the BER is below 11\%. We use the LDPC code with the code rate $R_\text{LDPC} = 1/2$, which means that half of the information becomes disclosed.
In our case, the BERs amount to 35--40\%, and we resolve the issue by combining the two methods.
Our sequence consists of 1944 blocks of length $n$. First, Bob uses Alice's repetition-code syndromes to infer the values of the bits `0' and `1' in each block; here the fraction $R_\text{rep}$ of the sifted sequence is disclosed. The resulting blocks then serve as the input of the LDPC code, where Alice's LDPC syndrome is used to finally correct the shared sequence; at this step, the fraction $R_\text{LDPC}$ of the remaining secret information is disclosed.
The fraction
\begin{equation}
R_{\text{LDPC\&rep}} = R_\text{rep} + R_\text{LDPC}(1 - R_\text{rep}) = \frac{n-1}{n} + \frac{1}{2}\cdot\frac{1}{n} = 1 - \frac{1}{2n}
\end{equation}
of the shared sequence becomes known to Eve.
The value of $n$ is chosen in accordance with the error rate: the larger the error, the larger $n$ must be for successful correction.
By choosing a large $n$, one could use the repetition code alone; however, in this case a large fraction of the information becomes public, and the key accumulation time grows accordingly.
Therefore, one chooses the optimal $n$, which allows using the optimal combination of codes for every concrete error value and enables the legitimate users to achieve the maximum key generation rate.
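As a simple numerical illustration of this trade-off, the Python sketch below evaluates the disclosed fraction $R_{\text{LDPC\&rep}} = 1 - 1/(2n)$ for a few block lengths; the particular values of $n$ are placeholders.
\begin{verbatim}
def disclosed_fraction(n):
    """Fraction of the corrected sequence leaked to Eve when an n-bit repetition
    code is followed by a rate-1/2 LDPC code: R = (n-1)/n + (1/2)*(1/n) = 1 - 1/(2n)."""
    r_rep = (n - 1) / n
    r_ldpc = 0.5
    return r_rep + r_ldpc * (1 - r_rep)

for n in (3, 5, 7):   # placeholder block lengths; larger n corrects more errors but leaks more
    print(n, disclosed_fraction(n))
\end{verbatim}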
\subsection{Privacy amplification}\label{privacy_amplification}
One of the possible eavesdropping scenarios is an attack that diverts part of the signal.
The amount of information that Eve obtains during Alice's state transmission through the optical channel, $I(\text{A}, \text{E})$ (described in Subsection\,\ref{Eves_info_section}), depends on the precision of the control over the loss coefficient in the line.
To estimate the extent to which the final key must be compressed, we take into account all the possible leaks during the protocol implementation.
To begin with, before starting the error correction, Alice discloses a fraction $r_{\text{estim}} = 0.01$ of her shared sequence in order to estimate the BER.
Then, during the correction itself, another fraction $R_{\text{LDPC\&rep}}$ of the sifted sequence is disclosed.
Finally, during the signal transmission, an eavesdropper could have diverted the amount of information $I(\text{A}, \text{E})$.
As a result, the losses related to the error correction are $r_{\text{corr}} = r_{\text{estim}} + R_{\text{LDPC\&rep}}$, while the total losses are $r = r_{\text{corr}} + I(\text{A}, \text{E})$.
The distributed sequence is compressed to such an extent that the length of the final secret key becomes $L_\checkmark \cdot(1 - r)>0$, where $L_\checkmark = 1944 \cdot n$ is the length of the sifted sequence.
If $r \geq 1$, the distributed sequence is not considered secure and is not added to the general key.
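A minimal Python sketch of this accounting is given below; the values of $n$ and $I(\text{A},\text{E})$ are hypothetical, and the helper name is ours.
\begin{verbatim}
def final_key_length(n, i_ae, r_estim=0.01, blocks=1944):
    """Length of the final secret key after privacy amplification.

    n       -- repetition-code block length, so the sifted length is L = 1944 * n
    i_ae    -- Eve's information I(A,E) estimated from the loss control
    r_estim -- fraction disclosed for BER estimation
    """
    r_corr = r_estim + (1 - 1 / (2 * n))   # leak due to the error correction
    r = r_corr + i_ae                      # total leaked fraction
    L_sifted = blocks * n
    return int(L_sifted * (1 - r)) if r < 1 else 0   # r >= 1: block is discarded

print(final_key_length(n=5, i_ae=0.02))   # hypothetical values
\end{verbatim}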
\subsection{Possible attacks and ways of protection}\label{eavesdropper}
Table\,\ref{atacks} presents the possible attacks, their description, and the corresponding ways of protection.
\begin{table}[h!]
\setlength{\tabcolsep}{4pt}
\raggedright
\centering
\begin{tabularx}{\textwidth}{|s|b|b|}
\hline
\textbf{The attack} & \textbf{Brief description} & \textbf{The way of protection}
\\ \hline\hline
Measurement of the natural losses (see subsection\,\ref{atack_demon}).
&
Rayleigh scattering causes part of the signal to leak out of the fiber, so an eavesdropper could, in principle, eavesdrop without even intercepting the line.
&
We work at moderate signal power; the corresponding losses amount to a few photons per 100 meters\,\cite{new_theory}, which cannot be measured with any realistic photodetector.
\\ \hline
Measuring losses at the splices (see subsection\,\ref{attack_on_welding}).
&
An eavesdropper can collect the light escaping from the cable at the fusion-spliced joints without creating additional losses.
& We use carefully optimized splicing procedures. Moreover, we demonstrate that not all of the losses in the channel escape the cable.
\\ \hline
Creating a local leak (see subsection\,\ref{atack_attenuation}).
&
An eavesdropper can divert part of the signal by locally bending the fiber channel.
&
Lock-in loss control, see Subsection\,\ref{lock_in}.
\\ \hline
Creating a local leak and increasing the amplification coefficient of the magistral line amplifier (see subsection\,\ref{atack_amplifier}).
&
An eavesdropper can divert a part of the transmitted signal and compensate for the visible losses by increasing the amplifier power.
&
The TQ amplifier installed in the active fiber channel is designed so that it always operates in the regime of maximal pumping power and population inversion.
\\ \hline
\end{tabularx}
\caption{Possible attacks against the protocol and protection methods.}
\label{atacks}
\end{table}
\subsubsection{Attacks with the measurements of the natural losses}\label{atack_demon}
During the signal transmission along the fiber optical channels, part of the signal scatters out due to Rayleigh scattering.
Then the eavesdropper could place an anomalously large distributed detector next to one of the magistral amplifiers and monitor the complete signal without any mechanical intervention into the transmission channel. This problem has been described in\,\cite{new_theory}. To avoid it, we work at a signal intensity that is much lower than the minimal critical value at which such eavesdropping is still possible.
\subsubsection{Measuring losses at the splices}\label{attack_on_welding}
During the transmission, a portion of the signal may leak outside the channel at the splice joints.
The eavesdropper thus gets an opportunity to gather this locally leaking radiation without creating any additional losses through mechanical intervention.
However, as shown in the SI Section\,\ref{loss_section}, not all the losses go outside the transmission line. A significant portion of the losses remains confined within the cable and is inaccessible to potential eavesdroppers.
Our estimates suggest that the loss at a splice joint amounts to a mere 0.1\%, which is acceptable for transmissions spanning over 1000\,km.
Notably, one cannot avoid splice joints near amplifiers, but we implement additional protection at these points.
\subsubsection{The attack with creation of the local leak}\label{atack_attenuation}
Using a beamsplitter, an eavesdropper can divert and exploit part of the signal.
Our transmission channel design uses stringent physical loss control over the transmission line, which makes it impossible for an eavesdropper to introduce a beamsplitter into the line while remaining unnoticed.
The eavesdropper may try to bend the optical fiber and collect all of the signal escaping from the channel. Yet we can detect even a fast intervention of the eavesdropper using regular transmittometry, by detecting the sharp integral change of the intensity in the whole line, see Subsection\,\ref{lock_in}.
\subsubsection{The attack against the amplifier}\label{atack_amplifier}
By diverting part of the signal with a beamsplitter, an eavesdropper decreases the integral intensity in the line. Therefore, to remain unnoticed, the eavesdropper would have to recover the observable losses using an amplifier.
The eavesdropper cannot introduce their own amplifier, since the transmission line is permanently subject to loss control and any mechanical intervention would be immediately detected. Retuning the amplification coefficient of the magistral amplifier is also impossible, since it works in the regime of maximal pumping power. Moreover, because of the amplifier security control mechanism, any mechanical action immediately increases the losses.
\subsection{Eavesdropper's information estimation}\label{Eves_info_section}
In the context of the proposed loss control approach, an eavesdropper is not able to conduct conventional attacks, including the replacement of the optical line with an ideal (lossless) quantum channel, since such attacks dramatically change the line reflectogram and would be easily detected.
Thus, Eve has to introduce local losses at some point in the line.
The cascade of $M_{1(2)}$ 50-km-long fiber pieces and amplifiers before (after) Eve's intrusion point can be effectively represented as a pair of loss and amplification channels.
We introduce the notations $G_{1(2)}=GM_{1(2)}\left(1-T\right)+1$ and $T_{1(2)}=1/G_{1(2)}$ for the amplification factor (the ratio of the output photon number to the input photon number in the amplification channel) and transmission probability of the effective amplification and loss channels, respectively.
The detailed analysis of the signal's density matrix evolution is provided in Ref.\,\cite{new_theory}.
Here, we consider only Eve's quantum state $\hat{\rho}_{\text{E}}^{(a)}$ on the condition that the sent bit is $a$ and Bob obtains the conclusive measurement result.
Let $r_\text{E}$ be the minimal detectable artificial leakage and $\gamma_a$ be the amplitude of the bit-encoding pulse.
Eve's density matrix is
\begin{multline}
\hat{\rho}_{\text{E}}^{(a)}
=
\frac{1}{p(\checkmark|a)}
\int\!\!d^2\alpha\,\,
P\left(\alpha;\sqrt{T_1}\gamma_a,G_1\right)
\int\!\!d^2\beta\,\,
P\left(\beta;
\sqrt{(1-r_{\text{E}})T_2}\alpha,G_2\right)
\langle\beta|\left(\hat{E}_0+\hat{E}_1\right)|\beta \rangle
\\\times
\left[
\frac{1}{2\pi}\int\limits_{0}^{2\pi}\!\!d\varphi\,\,
\left|e^{i\varphi}\sqrt{r_{\text{E}}}|\alpha|\right\rangle \left\langle e^{i\varphi} \sqrt{r_{\text{E}}}|\alpha| \right|_\text{E}\right],
\label{eve_matr_ints}
\end{multline}
where
\begin{equation}
P(\alpha;\gamma,G) = \frac{1}{\pi (G-1)} \exp\left(-\frac{|\alpha- \sqrt{G} \gamma|^2}{G-1}\right)
\label{P}.
\end{equation}
The normalizing factor $p(\checkmark|a)$ is the probability of a conclusive measurement result at Bob's end provided that the sent bit is $a$ and can be determined from the condition $\text{tr}\big[\hat{\rho}_{\text{E}}^{(a)}\big]=1$.
The sum $p_\checkmark=p\left(\checkmark|0\right)+p\left(\checkmark|1\right)$ determines the average probability of a conclusive result.
The average maximum amount of information that Eve can extract from the states Eq.\,(\ref{eve_matr_ints}) is upper-bounded by the Holevo quantity\,\cite{Holevo} $\chi$ which, in our case, can be expressed as
\begin{equation}
I(\text{A},\text{E})\leq\chi
=
S\left(\frac{p\left(\checkmark|0\right)}{2p_\checkmark}\hat{\rho}_{\text{E}}^{(0)}+\frac{p\left(\checkmark|1\right)}{2p_\checkmark}\hat{\rho}_{\text{E}}^{(1)}\right)
-
\frac{p\left(\checkmark|0\right)}{2p_\checkmark}S\left(\hat{\rho}_{\text{E}}^{(0)}\right)
-
\frac{p\left(\checkmark|1\right)}{2p_\checkmark}S\left(\hat{\rho}_{\text{E}}^{(1)}\right),
\label{eve_holevo}
\end{equation}
where $S(\hat{\rho})=-\text{tr}\left[ \hat{\rho} \log_2 \hat{\rho} \right]$ is the von Neumann entropy.
The value $I(\text{A},\text{E})$ includes information that Eve extracts from the measurement of the intercepted photons and does not comprise information leaked during the error correction procedure.
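For completeness, a small numerical Python sketch of the bound~(\ref{eve_holevo}) is given below. It uses generic placeholder density matrices rather than the states of Eq.\,(\ref{eve_matr_ints}), and the conditional probabilities are normalized so that the mixed state has unit trace.
\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def holevo_bound(rho0, rho1, p0, p1):
    """Holevo quantity for the conditional states rho0, rho1 sent with equal priors,
    given the conclusive-outcome probabilities p0 = p(ok|0), p1 = p(ok|1)."""
    w0, w1 = p0 / (p0 + p1), p1 / (p0 + p1)
    rho_mix = w0 * rho0 + w1 * rho1
    return von_neumann_entropy(rho_mix) \
        - w0 * von_neumann_entropy(rho0) - w1 * von_neumann_entropy(rho1)

# Placeholder 2x2 diagonal states, for illustration only.
rho0 = np.diag([0.9, 0.1])
rho1 = np.diag([0.6, 0.4])
print(holevo_bound(rho0, rho1, p0=0.3, p1=0.3))
\end{verbatim}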
\subsection{Secret key generation rate}\label{generation_speed}
Alice generates a random bit sequence using the TQ-QRNG at a speed of 5\,Mb/s and loads the generated string into her PC circuits.
The memory of Alice's circuit transfers the bit sequence to the FPGA, which forms electric pulses of the required length alternating with the sinusoidal signal providing the loss control.
This pulse mixture is transferred to an amplitude modulator that transforms it into the optical signal, which is sent at a rate of 7\,kb/s into the magistral optical line of 1032\,km length.
Bob takes readings of the signal in portions, which, at present, is the main effect restricting the key distribution rate and stems from the use of an oscilloscope. Bob accepts the signal every 3 seconds using his oscilloscope and launches, in parallel, the processing of the sinusoidal signal component and the identification of the `0' and `1' states.
This is a postselective measurement after which approximately 60\% of the original bit sequence remains.
Bob accumulates these arriving bit sequences in his device memory until their amount becomes sufficient for performing the error correction procedure. As soon as their number exceeds the minimum necessary for the error correction, Bob informs Alice via the classical channel, and she sends him her error syndromes. Importantly, this additional communication occurs while the oscilloscope is receiving the signal and thus does not slow down the work of the protocol, ensuring, at the same time, its reliability and stability.
In the course of the practical protocol implementation, we succeeded in transmitting the raw bit sequence at a rate of 800\,bps. The next step is the compression of the key by the number of bits disclosed during the protocol run.
The compression coefficient is determined by the specific method of transmission of Alice's error syndromes to Bob and by the amount of information stolen by the eavesdropper. The length of the resulting key is
\begin{equation}
L_\text{f}=p_\checkmark L\cdot\big(S(\text{A})-f S(\text{A}|\text{B})-I(\text{A},\text{E})\big).
\end{equation}
Here, $S(\text{A})$ is Alice's system entropy provided that the postselection is carried out;
$S(\text{A}|\text{B})$ is Alice's system entropy obtained under the condition that the results of Bob's measurements are known and that the inconclusive outcomes are discarded.
The quantity $f$ is determined by the error correction code, so that
$f S(\text{A}|\text{B})=r_{\text{corr}}$ is the fraction of the sequence leaked to the eavesdropper during the error correction procedure; $p_\checkmark$ is the probability of a conclusive outcome at Bob's side, i.e., the probability that a bit is taken into account, and $p_\checkmark L=L_\checkmark$ is the length of the sifted bit sequence.
The compression of the distributed sequence then yields the final, fully secret random key at a rate of 34\,bps.
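To make the bookkeeping explicit, the following Python sketch evaluates this expression with illustrative placeholder numbers (they are not the measured values); $S(\text{A})$ is set to 1 for uniform bits and $S(\text{A}|\text{B})$ is approximated by the binary entropy of the BER.
\begin{verbatim}
from math import log2

def binary_entropy(p):
    """H2(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def final_key_rate(raw_rate_bps, p_conclusive, ber, f, i_ae):
    """Evaluates p_ok * (L/time) * (S(A) - f*S(A|B) - I(A,E))."""
    s_a = 1.0
    s_a_b = binary_entropy(ber)
    return raw_rate_bps * p_conclusive * (s_a - f * s_a_b - i_ae)

# Placeholder numbers roughly in the regime quoted in the text (raw 800 bps, ~60% sifted).
print(final_key_rate(raw_rate_bps=800, p_conclusive=0.6, ber=0.35, f=1.0, i_ae=0.02))
\end{verbatim}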
\subsection{The encountered problems}
Table\,\ref{problems} summarizes the difficulties and problems that our realized protocol has overcome.
\begin{table}[!h]
\centering
\setlength{\tabcolsep}{4pt}
\raggedright
\begin{tabularx}{\textwidth}{|X|X|X|}
\hline
\textbf{The problem} & \textbf{Brief description} & \textbf{Solution}
\\ \hline\hline
An exponential decay of the signal in an optical fiber (Subsection\,\ref{Low_intensity_defence}).
&
The exponential decay of the signal in the optical channel makes long-distance communication problematic.
&
EDFA-like amplifiers were modified to fit our protocol.
\\ \hline
Detecting a low-power information-carrying signal (Subsection\,\ref{problem:low_intensity}).
&
Our QKD protocol design implies low-power key distribution.
&
A TQ-made amplifier with an amplification coefficient of 20\,dB and a low noise level has been developed.
\\ \hline
The high level of the ASE from the EDFA (Subsection\,\ref{problem:high_ASE}).
&
The EDFA randomly generates wide-spectrum optical irradiation; long optical lines contain many EDFAs, which results in a high noise level.
&
A TQ-made narrow-band (8.5\,GHz) thermostabilized filter has been developed.
\\ \hline
Laser irradiation generation in a long optical line without optical isolators (Subsection\,\ref{problem:generation}).
&
In long optical lines containing active fiber segments in the TQ-EDFAs, unbalanced loss and amplification levels may cause laser irradiation due to multiple reflections at connectors and Rayleigh scattering.
&
The developed optical line maintains tight control over the loss and amplification levels upon adding every amplifier, which enables the right choice of the amplification coefficient corresponding to the losses.
\\ \hline
Floating of the offset voltage at the amplitude modulator (Subsection\,\ref{problem:AM_drift}).
&
The offset voltage, i.e., the working point of the amplitude modulator, may drift with time.
&
Part of the optical radiation is diverted and monitored using an additional detector at Alice's side. The voltage is then tuned according to the detector's readings.
\\ \hline
\end{tabularx}
\caption{The encountered problems and the ways of their solution.}
\label{problems}
\end{table}
\subsubsection{Exponential signal decay in the optical fiber}\label{Low_intensity_defence}
While propagating along the optical fiber, the optical signal decays exponentially with distance. Therefore, for long-distance transmission, the TQ-EDFAs are used. The TQ-EDFA ensures loss compensation over a distance of about 50\,km. Since this amplifier is not an ideal quantum repeater, it generates additional noise. Importantly, the TQ-EDFA design is adjusted to our protocol, see the details in the SI, Subsection\,\ref{amplifier_section}.
\subsubsection{Detecting the low power level}\label{problem:low_intensity}
Since, as mentioned above, a high power level excludes the possibility of secret information transmission, see Subsection\,\ref{atack_demon}, the information transmission is executed at a moderate intensity of the optical signal. Hence this signal has to be amplified before being sent to the high-sensitivity detector. To that end, our team developed a specific preamplifier with an amplification coefficient of 20\,dB. An additional design feature of this TQ amplifier that distinguishes it from the magistral amplifiers is the presence of optical isolators.
\subsubsection{The high ASE level}\label{problem:high_ASE}
A long-distance optical transmission line exploits magistral EDFA amplifiers to compensate for the exponential optical losses. Since EDFAs are not ideal quantum repeaters, they generate ASE, leading to noise that interferes with the detected information-carrying signal. We have developed a narrow-band 8.5\,GHz filter that decreases the ASE power arriving at Bob's detector. An important technological component of the new TQ narrow-band filter is a thermal stabilizer, also developed by our team, which provides a thermal stability precision of $\pm 0.01 \text{ K}$.
\subsubsection{Laser irradiation generated in the long-distance line not containing optical isolators}\label{problem:generation}
A long-distance optical channel exploits a significant number of EDFAs, and the amplification is hosted by the erbium-doped fiber segments; these segments are referred to as active medium regions. Since several signal reflections occur in the optical line, the whole line has to be viewed as an active medium containing weak resonators. Therefore, once the ASE power crosses a critical value, irradiation analogous to laser irradiation is generated. We eliminate this parasitic generation by choosing the information signal transmission regime to lie at the ASE amplification peak, see Fig.\,\ref{amplif_spectrum}, and by using TQ-EDFAs designed to provide an amplification coefficient of 10\,dB, thus compensating the losses over the 50\,km distance.
\subsubsection{Floating of the offset voltage at the amplitude modulator}\label{problem:AM_drift}
Drifting of the offset voltage shifts the working point (often referred to as the bias point) of the amplitude modulator and thereby changes the average intensity of the optical signal. To compensate for the bias point shift, our TQ team has developed an algorithm that uses the data from the monitoring detector, shown in Fig.\,\ref{QKDscheme}, and returns the average intensity of the optical signal to its original magnitude.
\section{Line control}\label{control_section}
An important element of the protocol is the control over the losses in the communication line, enabling the legitimate users to estimate, at any moment, the information stolen by an eavesdropper.
\subsection{Transmittometry}\label{lock_in}
The control over the throughput losses is executed as follows. A sinusoidally modulated signal with a frequency of 25\,MHz is sent along the optical line. The pulse length of every control transmission is 1\,ms. At the line output, after the TQ-manufactured preamplifier and optical filters, the signal is received by the FPD610 detector.
This approach is analogous to the lock-in method\,\cite{lock_in}: the signal taken from the detector is received by an oscilloscope and transferred to the user's computer. The subsequent processing consists in extracting the amplitude of the Fourier transform of the signal at the modulation (reference) frequency. The ratio of the obtained and reference amplitudes enables us to determine the magnitude of the losses. The reference loss magnitude is renewed at every reflectometry session. Figure\,\ref{lockin_stabil} shows the time dependence of the loss coefficient.
To model the presence of an eavesdropper, one introduces losses into the line (in the implemented measurements the losses are about 3\%) and observes the time dependence of the loss coefficient. The moment of switching on the losses is marked by the arrow, see Fig.\,\ref{lockin_loss}.
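The following Python sketch reproduces the essence of this procedure on a synthetic trace: it extracts the Fourier amplitude at the 25\,MHz modulation frequency and forms the loss estimate from the ratio to a reference trace. The sampling rate, noise level, and 3\% attenuation are placeholders, not the experimental settings.
\begin{verbatim}
import numpy as np

def tone_amplitude(trace, fs, f_mod=25e6):
    """Amplitude of the Fourier component of `trace` at the modulation frequency f_mod."""
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_mod))
    return 2.0 * np.abs(spectrum[k]) / len(trace)

# Synthetic 1 ms control pulse sampled at 250 MS/s (placeholder parameters).
fs = 250e6
t = np.arange(0, 1e-3, 1.0 / fs)
reference = np.sin(2 * np.pi * 25e6 * t)
received = 0.97 * reference + 0.01 * np.random.randn(t.size)   # ~3% loss plus detector noise

loss = 1.0 - tone_amplitude(received, fs) / tone_amplitude(reference, fs)
print(loss)
\end{verbatim}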
\begin{figure}
\caption{The time dependence of the loss coefficient of the transmission line. Over a long time, the drifting of the transparency of the line turns out to be significant.}
\label{lockin_stabil}
\caption{Measurements detecting the intervention into the transmission line. The associated introduced losses are about 3\%.}
\label{lockin_loss}
\end{figure}
\subsubsection{Temperature dependence}\label{temp_dep_section}
The optical loss control in the communication line enables determining not only the magnitude of the signal diverted by an eavesdropper but also the natural changes in the optical fiber transparency. One of the reasons for the change of the transparency coefficient is temperature variation. Using the OTDR, we measure the change of the loss magnitude of a coil containing 50\,km of the fiber line upon heating the coil, see Fig.\,\ref{OTDR_loss_check}. The obtained reflectogram allows us to determine the loss magnitude of the whole segment of the optical fiber line from the slope of the log-log plot, see Fig.\,\ref{OTDR_loss_check}.
\begin{figure}
\caption{The time dependencies of the losses in the optical fiber and the fiber temperature. The temperature change is shown by the blue curve, the corresponding loss coefficient behavior is presented by the magenta one.}
\label{OTDR_loss_check}
\end{figure}
\subsection{Reflectometry}\label{REFL_section}
The OTDR procedure allows for detecting local interceptions in the communication line. The procedure consists of sending a short powerful optical pulse into the line and measuring the backscattered signal. Local losses usually occur at optical connections, splices, or bends, which are the main points where the signal leaks out of the line. Although splices dissipate only some of the loss outward, see SI, Section\,2, and our line is designed to eliminate optical connections in order to prevent excessive leakage, there are still unavoidable splices and fiber imperfections that must be monitored. Rayleigh scattering, which is almost impossible for an eavesdropper to exploit, helps to detect faults along the line. If the eavesdropper diverts part of the signal, the intensity of the backscattered radiation drops in the segment containing the interception point.
This allows local interception to be detected.
Figure\,\ref{QKDscheme_OTDR} presents the setup of the complete TQ-QKD protocol realization. The protocol contains periodic measurements of the local losses. If local losses are not detected, the change in the loss coefficient is attributed to natural reasons. One of them may be the dependence of the fiber transparency on temperature, see Subsection\,\ref{temp_dep_section}. The loss coefficient is also subject to the influence of other slowly fluctuating parameters producing so-called flicker noise, a type of electronic noise with a $1/f$ power spectral density, often referred to as $1/f$ noise\,\cite{flicker_noise1,flicker_noise2}. The measurements of the line stability are presented in Fig.\,\ref{lockin_stabil}.
Furthermore, to accelerate the secret key distribution, we will use an FPGA instead of the oscilloscope at Bob's side.
We note that, while an ideal TQ-QKD protocol should include periodic reflectometry, we postpone the detailed description of its experimental realization to a forthcoming publication.
\begin{figure}
\caption{\textbf{The setup of the TQ-QKD protocol realization including reflectometry.}}
\label{QKDscheme_OTDR}
\end{figure}
We present a concise description of the current state of our investigation in this direction.
Using the available FPD610-FC detector and TLX1 laser, according to scheme\,\ref{QKDscheme_OTDR}, we collected the reflectogram of the whole line. Figure\,\ref{real_refl} presents the results of the measurements.
Since the sensitivity of the FPD610-FC detector is an order of magnitude lower than the necessary one, and the TLX1 laser power is also more than an order of magnitude lower than that normally used for pulse formation, we can only approximately detect the main features of the reflectogram.
The collected data enable us to conclude that the power of the detected signal due to the Rayleigh scattering does not decay upon going through amplifiers along the whole transmission line. This follows from the fact that the peaks presented in Fig.\,\ref{real_refl} do not lose their height.
This result allows us to further develop the OTDR technique for our system.
We use the oscilloscope to collect the data from the OTDR detector. The length of the probing pulse is 100\,ns.
\begin{figure}
\caption{The line reflectogram shows 19 peaks corresponding to the amplifier's positions as well as the peak at zero corresponding to the connector at the beginning of the transmission line necessary to connect the reflectometer. The data are reliable despite the insufficiently sensitive detector and insufficiently powerful laser.}
\label{real_refl}
\end{figure}
\section{The BER and key randomness analysis}\label{bitber_section}
To provide an understanding of how the proposed line protection of the TQ-QKD protocol works, we describe the signal-receiving procedure for Bob and the interception scenario for Eve.
Suppose that our loss control indicates that 1\% of the signal is intercepted by Eve. Since the electromagnetic signal is quantized and carried by discrete particles, photons, the photon number in the pulses corresponding to the `0' and `1' states intercepted by Eve is 100 times smaller than Bob's corresponding photon numbers. Notably, for the quantum coherent states that we use for encoding the `0' and `1' bits, there are inevitable quantum fluctuations in the measured pulse intensities. Therefore, the smaller the number of photons $N$ that Eve manages to intercept, the larger the relative fluctuations, which scale as $1/\sqrt{N}$. To visualize this effect,
we plot the experimentally obtained distributions of the `0' and `1' states as functions of voltage corresponding to the intensity of an incoming signal, using the statistics obtained for a large number of the secret quantum keys,
at Bob's side, Fig.\,\ref{distribution_Bob}.
For Eve we measure the statistics of the number of photons received during the attack using our equipment, Fig.\,\ref{distribution_Eve}.
The measurement is made for the signal coming to Bob, but attenuated by a factor of 100.
In order to distinguish the signal coming to Eve, the amplification factor was increased.
In our protocol, Alice encodes the bit `0' via the state containing 11360 photons and the bit `1' via the state containing 15100 photons.
One sees that while at Bob's side the `0' and `1' states are still distinguishable, despite the tough postselection, at Eve's side the two corresponding distribution functions completely overlap, making the determination of the bit state impossible. Bob knows only the total signal distribution, in which he has to place the postselection parameters (boundaries) to ensure efficient discrimination between the states. The distance between the distribution peaks depends on the difference between the intensities of the signals. We have also executed tests verifying the randomness of the distributed key; an example of the autocorrelation function for the final key is shown below in Fig.\,\ref{fig:autocorr}. To check to what extent our random sequence corresponds to the binomial distribution, we compared the corresponding values of the average and the standard deviation.
In our case, where we use a selection of 705 trials for sequences of 2111 bits, the theoretical values of the average and the standard deviation for the final key realization are $\mathrm{mean}_{\mathrm{theor}} = 1055.50$ and $\mathrm{std}_{\mathrm{theor}} = 527.75$, respectively.
The corresponding experimental values are $\mathrm{mean}_{\mathrm{exp}} = 1055.48$ and $\mathrm{std}_{\mathrm{exp}} = 531.74$.
\begin{figure}
\caption{Bob}
\label{distribution_Bob}
\caption{Eve}
\label{distribution_Eve}
\caption{\textbf{The distribution of `0' and `1' bits at Bob's end of the line and for the eavesdropper.}}
\label{distribution_1}
\end{figure}
\subsection{The intensity BER dependence}
Upon decreasing the distinguishability of the states (i.e., increasing the overlap of the distributions in Fig.\,\ref{distribution_Bob}), the error value increases. Figure\,\ref{ber_intensity} demonstrates that the error drops approximately linearly with the increase of the intensity difference between the `0' and `1' states normalized by the average signal intensity at the line entrance.
\begin{figure}
\caption{\textbf{The BER dependence on the relative difference between the state intensities.}}
\label{ber_intensity}
\end{figure}
\subsection{Autocorrelation function}
To reveal systematic dependencies between the bits of the sequences at different time intervals, we have checked the autocorrelation function of the final key. The absence of peaks testifies to the statistical independence of the bit sequences and their closeness to complete randomness, see Fig.\,\ref{fig:autocorr}.
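A minimal Python sketch of this check is shown below; the key is replaced here by a freshly generated random sequence, so the reported correlations only illustrate the expected $\sim 1/\sqrt{N}$ level.
\begin{verbatim}
import numpy as np

def autocorrelation(bits, max_lag=64):
    """Normalized autocorrelation of a 0/1 sequence for lags 1..max_lag."""
    x = np.asarray(bits, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

key = np.random.randint(0, 2, size=2111)   # placeholder key of the quoted length
print(np.abs(autocorrelation(key)).max())  # stays close to zero for a random key
\end{verbatim}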
\begin{figure}
\caption{\textbf{Autocorrelation function of the final key.}}
\label{fig:autocorr}
\end{figure}
\section{Amplifiers noise}\label{noise_section}
During the transmission of quantum states over the optical channel, the basic information signal gets distorted by noise, which complicates both the recognition of the states and the detection of an eavesdropper. To address this problem one has to understand the nature of the emerging noise, since this understanding helps to reveal and identify the information component of the signal.
One source of noise is the magistral amplifiers, which amplify not only the signal mode at the carrier wavelength but also the modes at other wavelengths (the amplification spectrum of the TQ-EDFA is discussed below in the SI Subsection\,\ref{amplifier_section}, see, in particular, Fig.\,\ref{amplif_spectrum}). Even in the absence of the signal from the laser, the amplifiers generate amplified spontaneous emission (ASE), which constitutes a good part of the natural amplifier noise. We present the results of the measurements of the total ASE power depending on the number of amplifiers. The theory is given in\,\cite{new_theory}.
According to the theory, in the absence of a signal at the entrance to the communication line
\begin{equation}
n_\text{mode} = 2(G(M(1-T)+T)-1),
\end{equation}
where $n_\text{mode}$ is the average photon number in a single mode,
$G$ is the amplification coefficient, $M$ is the number of amplifiers, and $T$ is the intensity attenuation per 50\,km of the optical fiber. The factor of 2 appears due to the two possible photon polarizations.
The number of modes $N$ detected after the filter is given by $N = \alpha \Delta \nu \tau$, where $\alpha$ is a coefficient taking into account the different intensities of the modes with different wavelengths, $\Delta\nu$ is the filter spectral bandwidth, and $\tau$ is the measurement time. To find $\alpha$, one has to assign a certain weight to each mode, determined by the bandwidth of the filter, see Fig.\,\ref{wide_filter_spec}. Here we use Wavelength Division Multiplexing (WDM) filters with wide bandwidth and sharp wavelength borders. The obtained mode number for the time $\tau=2.5$\,ns is $N=200$.
Then the number of photons coming from the ASE to the detector during time $\tau$, the time of the bit sending, is given by
\begin{equation}
n = Nn_\text{mode}.
\end{equation}
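For orientation, the Python sketch below simply evaluates these formulas; the gain, the number of amplifiers, and the attenuation are placeholder values, and $\alpha$ is chosen so that $N = 200$ for the quoted filter bandwidth and measurement time.
\begin{verbatim}
def ase_photons(G, M, T, alpha, delta_nu, tau):
    """Average number of ASE photons reaching the detector during one bit slot:
    n_mode = 2*(G*(M*(1 - T) + T) - 1) photons per mode, N = alpha*delta_nu*tau modes."""
    n_mode = 2.0 * (G * (M * (1.0 - T) + T) - 1.0)
    N = alpha * delta_nu * tau
    return N * n_mode

# Placeholder parameters: 10 dB gain (G = 10), 20 amplifiers, T = 0.1 per 50 km span.
alpha = 200.0 / (68e9 * 2.5e-9)   # fixes N = 200 for the 68 GHz filter and tau = 2.5 ns
print(ase_photons(G=10.0, M=20, T=0.1, alpha=alpha, delta_nu=68e9, tau=2.5e-9))
\end{verbatim}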
Measurements of the average power at the line exit as a function of the number of amplifiers are presented in Fig.\,\ref{24}.
\begin{figure}
\caption{Transparency of the wide bandwidth filter as a function of the wavelength.}
\label{wide_filter_spec}
\caption{An average number of the ASE photons coming through the filter of the 68\,GHz bandwidth during 2.5\,ns.}
\label{24}
\end{figure}
\section*{Conclusion and discussion}
In conclusion, we have presented our first-ever practical implementation of the novel TQ-QKD protocol developed in\,\cite{new_theory}. We implemented the quantum key distribution protocol based on our theory over a record distance of 1032 km, see Fig.\,\ref{comparing}. This required the additional development of certain new devices. Importantly, we outlined the ways of further improvement of these devices, which will enable us to achieve larger transmission distances, leading eventually to the development of a global QKD network. This, in turn, will enable fully secure and high-rate information distribution over the globe.
While preparing our results for presentation, we found out that the very recent state-of-the-art TF-QKD experiment described in Ref.\,\cite{compare_qkd_tf_1002} achieves a transmission distance of 1002 km utilizing ultra-low-loss fiber. This, in particular, implies that the fiber is produced through a custom manufacturing process. Installing such a custom-produced line is a labor-consuming and financially challenging undertaking compared to using the standard optical infrastructure. Moreover, even with the employment of this ultra-low-loss fiber and the integration of the most advanced noise suppression methodologies, the achieved key distribution rate registers a mere 0.0034 bits per second. This suggests that the 1000 km mark is the limit at which the rate becomes impractically low.
In contrast, we efficiently achieve 1032 km with a distribution rate of 34\,bps, which is larger by four orders of magnitude. Notably, this is still far from exhausting our full potential, while employing the cost-effective standard single-mode fiber line.
Thus, our protocol can be deployed on the existing fiber infrastructure.
\section*{Supplementary Information}\label{supplem_section}
\setcounter{section}{0}
\renewcommand{\thesection}{NOTE\arabic{section}}
\input{supplementary.tex}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
Two types of low cost-per-iteration gradient descent methods have been extensively studied in parallel. One is online or stochastic gradient descent (OGD/SGD), and the other is randomized block coordinate descent (RBCD). In this paper, we combine the two types of methods and propose online randomized block coordinate descent (ORBCD). At each iteration, ORBCD only computes the partial gradient of one block coordinate on one mini-batch of samples. ORBCD is well suited for composite minimization problems where one function is the average of the losses of a large number of samples and the other is a simple regularizer defined on high-dimensional variables. We show that the iteration complexity of ORBCD has the same order as that of OGD or SGD. For strongly convex functions, by reducing the variance of the stochastic gradients, we show that ORBCD converges at a geometric rate in expectation, matching the convergence rate of SGD with variance reduction and of RBCD.
\end{abstract}
\section{Introduction}
In recent years, considerable efforts in machine learning have been devoted to solving the following composite objective minimization problem:
\begin{align}\label{eq:compositeobj}
\min_{\mathbf{x}}~f(\mathbf{x}) + g(\mathbf{x}) = \frac{1}{I}\sum_{i=1}^{I}f_i(\mathbf{x}) + \sum_{j=1}^{J}g_j(\mathbf{x}_j)~,
\end{align}
where $\mathbf{x}\in\R^{n\times 1}$ and $\mathbf{x}_j$ is a block coordinate of $\mathbf{x}$. $f(\mathbf{x})$ is the average of some smooth functions, and $g(\mathbf{x})$ is a \emph{simple} function which may be non-smooth. In particular, $g(\mathbf{x})$ is block separable and blocks are non-overlapping. A variety of machine learning and statistics problems can be cast into the problem~\myref{eq:compositeobj}. In regularized risk minimization problems~\cite{hastie09:statlearn}, $f$ is the average of losses of a large number of samples and $g$ is a simple regularizer on high dimensional features to induce structural sparsity~\cite{bach11:sparse}. While $f$ is separable among samples, $g$ is separable among features.
For example, in lasso~\cite{tibs96:lasso}, $f_i$ is a square loss or logistic loss function and $g(\mathbf{x}) = \lambda \| \mathbf{x} \|_1$ where $\lambda$ is the tuning parameter. In group lasso~\cite{yuan07:glasso}, $g_j(\mathbf{x}_j) = \lambda\| \mathbf{x}_j \|_2$, which enforces group sparsity among variables. To induce both group sparsity and sparsity, sparse group lasso~\cite{friedman:sglasso} uses composite regularizers $g_j(\mathbf{x}_j) = \lambda_1\| \mathbf{x}_j \|_2 + \lambda_2 \|\mathbf{x}_j\|_1$ where $\lambda_1$ and $\lambda_2$ are the tuning parameters.
Due to their simplicity, gradient descent (GD) type methods have been widely used to solve problem~\myref{eq:compositeobj}. If $g_j$ is nonsmooth but simple enough to admit a \emph{proximal mapping}, it is better to just use the gradient of $f_i$ but keep $g_j$ untouched in GD. This variant of GD is often called proximal splitting~\cite{comb09:prox}, proximal gradient descent (PGD)~\cite{tseng08:apgm,beck09:pgm}, or the forward/backward splitting method (FOBOS)~\cite{duchi09}. Without loss of generality, we simply use GD to represent GD and its variants in the rest of this paper. Let $m$ be the number of samples and $n$ the dimension of the features. The $m$ samples are divided into $I$ blocks (mini-batches), and the $n$ features are divided into $J$ non-overlapping blocks.
If both $m$ and $n$ are large, solving~\myref{eq:compositeobj} using batch methods such as GD-type methods is computationally expensive.
To address this computational bottleneck, two types of low cost-per-iteration methods, online/stochastic gradient descent (OGD/SGD)~\cite{Robi51:SP,Judi09:SP,celu06,Zinkevich03,haak06:logregret,Duchi10_comid,duchi09,xiao10} and randomized block coordinate descent (RBCD)~\cite{nesterov10:rbcd,bkbg11:pbcd,rita13:pbcd,rita12:rbcd}, have been rigorously studied in both theory and applications.
Instead of computing the gradients of all samples at each iteration as in GD, OGD/SGD only computes the gradient of one block of samples, and thus the cost per iteration is just
one $I$-th of that of GD. For large scale problems, it has been shown that OGD/SGD is faster than GD~\cite{tari13:pdsvm,shsisr07:pegasos,shte09:sgd}. OGD and SGD have been generalized to handle composite objective functions~\cite{nest07:composite,comb09:prox,tseng08:apgm,beck09:pgm,Duchi10_comid,duchi09,xiao10}. OGD and SGD use a decreasing step size and converge at a slower rate than GD. In stochastic optimization, the slow convergence speed is caused by the variance of the stochastic gradients due to random sampling, and
considerable efforts have thus been devoted to reducing the variance to accelerate SGD~\cite{bach12:sgdlinear,bach13:sgdaverage,xiao14:psgdvd,zhang13:sgdvd,jin13:sgdmix,jin13:sgdlinear}.
Stochastic average gradient (SVG)~\cite{bach12:sgdlinear} is the first SGD algorithm achieving a linear convergence rate for strongly convex functions, catching up with the convergence speed of GD~\cite{nesterov04:convex}. However, SVG needs to store all gradients, which becomes an issue for large scale datasets. It is also difficult to understand the intuition behind the proof of SVG. To address the storage issue and better explain the faster convergence,~\cite{zhang13:sgdvd} incorporated an explicit variance reduction scheme into SGD. The resulting two-stage SGD is referred to as stochastic variance reduced gradient (SVRG). SVRG computes the full gradient periodically and progressively mitigates the variance of the stochastic gradient by removing the difference between the full gradient and the stochastic gradient. For smooth and strongly convex functions, SVRG converges at a geometric rate in expectation. Compared to SVG, SVRG is free from the storage of full gradients and has a much simpler proof. A similar idea was also proposed independently by~\cite{jin13:sgdmix}. The results of SVRG were then improved in~\cite{kori13:ssgd}. In~\cite{xiao14:psgdvd}, SVRG is generalized to solve composite minimization problems by incorporating the variance reduction technique into the proximal gradient method.
On the other hand, RBCD~\cite{nesterov10:rbcd,rita12:rbcd,luxiao13:rbcd,shte09:sgd,chang08:bcdsvm,hsieh08:dcdsvm,osher09:cdcs} has become increasingly popular due to high-dimensional problems with structural regularizers. RBCD randomly chooses a block coordinate to update at each iteration. The iteration complexity of RBCD was established in~\cite{nesterov10:rbcd}, and improved and generalized to composite minimization problems by~\cite{rita12:rbcd,luxiao13:rbcd}. RBCD can choose a constant step size and converge at the same rate as GD, although the constant is usually $J$ times worse~\cite{nesterov10:rbcd,rita12:rbcd,luxiao13:rbcd}. Compared to GD, the cost per iteration of RBCD is much cheaper.
Block coordinate descent (BCD) methods have also been studied under a deterministic cyclic order~\cite{sate13:cbcd,tseng01:ds,luo02:cbcd}. Although the convergence of cyclic BCD has been established~\cite{tseng01:ds,luo02:cbcd}, its iteration complexity is still unknown except for special cases~\cite{sate13:cbcd}.
While OGD/SGD is well suited for problems with a large number of samples, RBCD is suitable for high-dimensional problems with non-overlapping composite regularizers. For large scale, high-dimensional problems with non-overlapping composite regularizers, it is not economical to use either of them alone. Either method alone may not be suitable for problems where the data are distributed across space and time or only partially available at the moment~\cite{nesterov10:rbcd}. In addition, SVRG is not suitable for problems where the computation of the full gradient at one time is expensive. In this paper,
we propose a new method named online randomized block coordinate descent (ORBCD) which combines the well-known OGD/SGD and RBCD. At each iteration, ORBCD first randomly picks one block of samples and one block of coordinates, and then performs block coordinate gradient descent on the randomly chosen samples. Essentially, ORBCD performs RBCD in the online and stochastic setting.
If $f_i$ is a linear function, the cost per iteration of ORBCD is $O(1)$ and thus far smaller than the $O(n)$ of OGD/SGD and the $O(m)$ of RBCD.
We show that the iteration complexity of ORBCD has the same order as that of OGD/SGD.
In the stochastic setting, ORBCD still suffers from the variance of the stochastic gradient. To accelerate the convergence speed of ORBCD, we adopt the variance reduction technique~\cite{zhang13:sgdvd} to alleviate the effect of randomness.
As expected, a linear convergence rate for ORBCD with variance reduction (ORBCDVD) is established for strongly convex functions in stochastic optimization. Moreover, ORBCDVD does not require computing the full gradient all at once, which is necessary in SVRG and prox-SVRG. Instead, in ORBCDVD a block coordinate of the full gradient is computed at each iteration and then stored for later retrieval.
| 3,091 | 49,684 |
en
|
train
|
0.4937.1
|
\section{Related Work}\label{sec:relate}
In this section, we briefly review the two types of low cost-per-iteration gradient descent (GD) methods, i.e., OGD/SGD and RBCD. Applying GD on~\mathbf{b}oldsymbol{\mu}yref{eq:compositeobj}, we have the following iterate:
\mathbf{b}egin{align}\label{eq:fobos}
\mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} = \mathbf{a}rgmin_{\mathbf{b}oldsymbol{\mu}athbf{x}}~\langle \nabla f(\mathbf{b}oldsymbol{\mu}athbf{x}^t), \mathbf{b}oldsymbol{\mu}athbf{x} \mathbf{r}angle + g(\mathbf{b}oldsymbol{\mu}athbf{x}) + \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^t \|_2^2~.
\mathbf{b}oldsymbol{\mu}athbf{e}nd{align}
In some cases, e.g. $g(\mathbf{b}oldsymbol{\mu}athbf{x})$ is $\mathbf{b}oldsymbol{\mu}athbf{e}ll_1$ norm,~\mathbf{b}oldsymbol{\mu}yref{eq:fobos} can have a closed-form solution.
\subsection{Online and Stochastic Gradient Descent}
The iterate~\myref{eq:fobos} requires computing the full gradient of the $m$ samples at each iteration, which could be computationally expensive if $m$ is too large. Instead, OGD/SGD simply computes the gradient of one block of samples.
In the online setting, at time $t+1$, OGD first presents a solution $\mathbf{x}^{t+1}$ by solving
\begin{align}
\mathbf{x}^{t+1} = \argmin_{\mathbf{x}}~\langle \nabla f_t(\mathbf{x}^t), \mathbf{x} \rangle + g(\mathbf{x}) + \frac{\eta_t}{2} \| \mathbf{x} - \mathbf{x}^t \|_2^2~,
\end{align}
where $f_t$ is given and assumed to be convex. Then a function $f_{t+1}$ is revealed, which incurs the loss $f_{t+1}(\mathbf{x}^{t+1})$.
The performance of OGD is measured by the regret bound, which is the discrepancy between the cumulative loss over $T$ rounds and the loss of the best decision in hindsight,
\begin{align}
R(T) = \sum_{t=1}^{T} { [f_t(\mathbf{x}^t) + g(\mathbf{x}^t)] - [f_t(\mathbf{x}^*)+g(\mathbf{x}^*)]}~,
\end{align}
where $\mathbf{x}^*$ is the best decision in hindsight. The regret bound of OGD is $O(\sqrt{T})$ when using the decreasing step size $\eta_t = O(\frac{1}{\sqrt{t}})$. For strongly convex functions, the regret bound of OGD is $O(\log T)$ when using the step size $\eta_t = O(\frac{1}{t})$. Since $f_t$ can be any convex function, OGD considers the worst case and thus the mentioned regret bounds are optimal.
In the stochastic setting, SGD first randomly picks the $i_t$-th block of samples and then computes the gradient of the selected samples as follows:
\begin{align}\label{eq:sgd}
\mathbf{x}^{t+1} = \argmin_{\mathbf{x}}~\langle \nabla f_{i_t}(\mathbf{x}^t), \mathbf{x} \rangle + g(\mathbf{x}) + \frac{\eta_t}{2} \| \mathbf{x} - \mathbf{x}^t \|_2^2~.
\end{align}
Here $\mathbf{x}^t$ depends on the observed realization of the random variable $\xi = \{ i_1, \cdots, i_{t-1}\}$ or, more generally, $\{ \mathbf{x}^1, \cdots, \mathbf{x}^{t-1} \}$. Due to the variance of the stochastic gradient, SGD has to choose a decreasing step size, i.e., $\eta_t = O(\frac{1}{\sqrt{t}})$, leading to slow convergence. For general convex functions, SGD converges at a rate of $O(\frac{1}{\sqrt{t}})$. For strongly convex functions, SGD converges at a rate of $O(\frac{1}{t})$. In contrast, GD converges linearly if the functions are strongly convex.
To accelerate SGD by reducing the variance of the stochastic gradient, the stochastic variance reduced gradient (SVRG) method was proposed by~\cite{zhang13:sgdvd}.~\cite{xiao14:psgdvd} extends SVRG to composite functions~\myref{eq:compositeobj}; the resulting method is called prox-SVRG. SVRG has two stages, i.e., an outer stage and an inner stage. The outer stage maintains an estimate $\tilde{\mathbf{x}}$ of the optimal point $\mathbf{x}^*$ and computes the full gradient at $\tilde{\mathbf{x}}$,
\begin{align}
\tilde{\mu} &= \frac{1}{n} \sum_{i=1}^{n} \nabla f_i(\tilde{\mathbf{x}}) = \nabla f(\tilde{\mathbf{x}})~.
\end{align}
After the inner stage is completed, the outer stage updates $\tilde{\mathbf{x}}$. At the inner stage, SVRG first randomly picks the $i_t$-th sample, then modifies the stochastic gradient by subtracting the difference between the full gradient and the stochastic gradient at $\tilde{\mathbf{x}}$,
\begin{align}
\mathbf{v}_{t} &= \nabla f_{i_t}(\mathbf{x}^t) - \nabla f_{i_t}(\tilde{\mathbf{x}}) + \tilde{\mu}~.
\end{align}
It can be shown that the expectation of $\mathbf{v}_{t}$ given $\mathbf{x}^{t-1}$ is the full gradient at $\mathbf{x}^t$, i.e., $\mathbb{E}\mathbf{v}_{t} = \nabla f(\mathbf{x}^t)$. Although $\mathbf{v}_t$ is also a stochastic gradient, its variance progressively decreases. Replacing $\nabla f_{i_t}(\mathbf{x}^t)$ by $\mathbf{v}_t$ in the SGD step~\myref{eq:sgd} gives
\begin{align}
\mathbf{x}^{t+1} & = \argmin_{\mathbf{x}}~\langle \mathbf{v}_{t}, \mathbf{x} \rangle + g(\mathbf{x}) + \frac{\eta}{2} \| \mathbf{x} - \mathbf{x}^t \|_2^2~.
\end{align}
By reducing the variance of the stochastic gradient, $\mathbf{x}^t$ can converge to $\mathbf{x}^*$ at the same rate as GD, which has been proved in~\cite{zhang13:sgdvd,xiao14:psgdvd}.
For strongly convex functions, prox-SVRG~\cite{xiao14:psgdvd} converges linearly in expectation if $\eta > 4L$ and $m$ satisfy the following condition:
\begin{align}\label{eq:svrg_rho}
\rho = \frac{\eta^2}{\gamma(\eta-4L)m} + \frac{4L(m+1)}{(\eta-4L)m} < 1~,
\end{align}
where $L$ is the constant of the Lipschitz continuous gradient. Note that the step size is $1/\eta$ here.
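For concreteness, a minimal Python sketch of the prox-SVRG update reviewed above is given below for the $\ell_1$-regularized case, where the proximal map is soft-thresholding; the gradient oracle, data, and parameter values are placeholders.
\begin{verbatim}
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t*||.||_1 (the simple g in the lasso example)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_svrg(grad_i, n, x0, eta, lam, outer=10, inner=50, seed=0):
    """Outer stage: full gradient mu at x_tilde. Inner stage: variance-reduced
    gradient v_t = grad_i(x) - grad_i(x_tilde) + mu, then a proximal step of size 1/eta."""
    rng = np.random.default_rng(seed)
    x_tilde = x0.copy()
    for _ in range(outer):
        mu = sum(grad_i(i, x_tilde) for i in range(n)) / n
        x = x_tilde.copy()
        for _ in range(inner):
            i = rng.integers(n)
            v = grad_i(i, x) - grad_i(i, x_tilde) + mu
            x = soft_threshold(x - v / eta, lam / eta)
        x_tilde = x
    return x_tilde

# Placeholder least-squares example: f_i(x) = 0.5*(a_i^T x - b_i)^2.
A = np.random.default_rng(1).standard_normal((20, 5))
b = np.zeros(20)
x_hat = prox_svrg(lambda i, x: (A[i] @ x - b[i]) * A[i], n=20,
                  x0=np.ones(5), eta=50.0, lam=0.1)
\end{verbatim}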
\subsection{Randomized Block Coordinate Descent}
Assume $\mathbf{x}_{j}~(1\leq j \leq J)$ are non-overlapping blocks. At iteration $t$, RBCD~\cite{nesterov10:rbcd,rita12:rbcd,luxiao13:rbcd} randomly picks the $j_t$-th coordinate and solves the following problem:
\begin{align}\label{eq:rbcd}
\mathbf{x}_{j_t}^{t+1} = \argmin_{\mathbf{x}_{j_t}}~\langle \nabla_{j_t} f(\mathbf{x}^t), \mathbf{x}_{j_t} \rangle + g_{j_t}(\mathbf{x}_{j_t}) + \frac{\eta_t}{2} \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^t \|_2^2~.
\end{align}
Therefore, $\mathbf{x}^{t+1} = (\mathbf{x}_{j_t}^{t+1}, \mathbf{x}_{k\neq j_t}^t)$, and $\mathbf{x}^t$ depends on the observed realization of the random variable
\begin{align}
\xi = \{ j_1, \cdots, j_{t-1}\}~.
\end{align}
Setting the step size $\eta_t = L_{j_t}$, where $L_{j_t}$ is the Lipschitz constant of the $j_t$-th coordinate of the gradient $\nabla f(\mathbf{x}^t)$, the iteration complexity of RBCD is
$O(\frac{1}{t})$. For strongly convex functions, RBCD has a linear convergence rate. Therefore, RBCD converges at the same rate as GD, although the constant is $J$ times larger~\cite{nesterov10:rbcd,rita12:rbcd,luxiao13:rbcd}.
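As an illustration of the iterate above, the Python sketch below performs RBCD steps for a block-separable $g$ given by the group-lasso norm, whose block proximal map is group shrinkage; the problem data and block partition are placeholders.
\begin{verbatim}
import numpy as np

def group_shrink(z, t):
    """Proximal map of t*||.||_2 on one block (the group-lasso regularizer g_j)."""
    nrm = np.linalg.norm(z)
    return np.zeros_like(z) if nrm <= t else (1.0 - t / nrm) * z

def rbcd_step(x, blocks, grad, eta, lam, rng):
    """Pick a random block j_t and minimize
    <grad_{j_t} f(x), x_{j_t}> + g_{j_t}(x_{j_t}) + (eta/2)*||x_{j_t} - x_{j_t}^t||^2."""
    j = rng.integers(len(blocks))
    idx = blocks[j]
    x = x.copy()
    x[idx] = group_shrink(x[idx] - grad(x)[idx] / eta, lam / eta)
    return x

# Placeholder quadratic f(x) = 0.5*||x||^2 with two coordinate blocks.
rng = np.random.default_rng(0)
x = np.ones(4)
for _ in range(100):
    x = rbcd_step(x, [np.arange(0, 2), np.arange(2, 4)], lambda v: v, 1.0, 0.1, rng)
\end{verbatim}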
\iffalse
Here we briefly review and simplify the proof of the iteration complexity of RBCD in~\cite{nesterov10:rbcd,rita12:rbcd,luxiao13:rbcd}, which paves the way for the proof of ORBCD.
\begin{thm}
RBCD has the following iteration complexity:
\begin{align}
\mathbb{E}_{\xi_{T-1}}f(\mathbf{x}^T) - f(\mathbf{x}) & \leq \frac{J \left[ \mathbb{E} [f(\mathbf{x}^1)] - f(\mathbf{x}^*) + \frac{L}{2} \mathbb{E}\| \mathbf{x} - \mathbf{x}^1 \|_2^2 \right ]}{T} ~.
\end{align}
\end{thm}
Denoting $g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) \in \partial g_{j_t}(\mathbf{x}_{j_t}^{t+1})$, the optimality condition of~\myref{eq:rbcd} is
\begin{align}
\langle \nabla_{j_t} f(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) + \eta_t (\mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t) , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle \leq 0~.
\end{align}
Rearranging the terms yields
\begin{align}
& \langle \nabla_{j_t} f(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle \leq - \eta_t \langle \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle \nonumber \\
& \leq \frac{\eta_t}{2} ( \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^t \|_2^2 - \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^{t+1} \|_2^2 - \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 ) \nonumber \\
& \leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2 - \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 ) ~.
\end{align}
Using the smoothness of $f$ and $\mathbf{x}^{t+1} = (\mathbf{x}_{j_t}^{t+1}, \mathbf{x}_{k\neq j_t}^t)$, we have
\begin{align}
& f(\mathbf{x}^{t+1}) + g(\mathbf{x}^{t+1})- [ f(\mathbf{x}^t) + g(\mathbf{x}^t) ] \nonumber \\
& \leq \langle \nabla_{j_t} f(\mathbf{x}^t), \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \rangle + \frac{L_{j_t}}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 + g_{j_t}(\mathbf{x}_{j_t}^{t+1}) - g_{j_t}(\mathbf{x}_{j_t}) - [g_{j_t}(\mathbf{x}_{j_t}^{t}) - g_{j_t}(\mathbf{x}_{j_t}) ] \nonumber \\
& = \langle \nabla_{j_t} f(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle + \frac{L_{j_t}}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 - \langle \nabla_{j_t} f(\mathbf{x}^t), \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle - [g_{j_t}(\mathbf{x}_{j_t}^{t}) - g_{j_t}(\mathbf{x}_{j_t}) ]\nonumber \\
& \leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{L_{j_t} - \eta_t}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 - \langle \nabla_{j_t} f(\mathbf{x}^t), \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle - [g_{j_t}(\mathbf{x}_{j_t}^{t}) - g_{j_t}(\mathbf{x}_{j_t}) ] \nonumber \\
& \leq \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} ( \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^t \|_2^2 - \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} \|_2^2) + \frac{L_{j_t} - \mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} \| \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}^{t+1} - \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}^t \|_2^2 - \langle \nabla_{j_t} f(\mathbf{b}oldsymbol{\mu}athbf{x}^t), \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}^{t} - \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t} \mathbf{r}angle - [g_{j_t}(\mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}^{t}) - g_{j_t}(\mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}) ] \nonumber \\
& \leq \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} ( \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^t \|_2^2 - \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} \|_2^2) - \langle \nabla_{j_t} f(\mathbf{b}oldsymbol{\mu}athbf{x}^t), \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}^{t} - \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t} \mathbf{r}angle - [g_{j_t}(\mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}^{t}) - g_{j_t}(\mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}) ]~.
\mathbf{b}oldsymbol{\mu}athbf{e}nd{align}
where the last inequality is obtained by setting $\mathbf{b}oldsymbol{\mu}athbf{e}ta_t = L_{j_t}$.
Conditioned on $\mathbf{b}oldsymbol{\mu}athbf{x}^t$, take expectation over $j_t$ gives
\mathbf{b}egin{align}\label{eq1}
& \mathbf{b}oldsymbol{\mu}athbb{E}_{j_t}[f(\mathbf{b}oldsymbol{\mu}athbf{x}^{t+1}) + g(\mathbf{b}oldsymbol{\mu}athbf{x}^{t+1})|\mathbf{b}oldsymbol{\mu}athbf{x}^t] - [f(\mathbf{b}oldsymbol{\mu}athbf{x}^t) +g(\mathbf{b}oldsymbol{\mu}athbf{x}^t)] \nonumber \\
& \leq \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} ( \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^t \|_2^2 - \mathbf{b}oldsymbol{\mu}athbb{E}_{j_t}\| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} \|_2^2) - \frac{1}{J} \sum_{j=1}^J \langle \nabla_{j_t} f(\mathbf{b}oldsymbol{\mu}athbf{x}^t), \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t}^{t} - \mathbf{b}oldsymbol{\mu}athbf{x}_{j_t} \mathbf{r}angle - \frac{1}{J}[g(\mathbf{b}oldsymbol{\mu}athbf{x}^t) - g(\mathbf{b}oldsymbol{\mu}athbf{x}) ]\nonumber \\
& = \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} ( \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^t \|_2^2 - \mathbf{b}oldsymbol{\mu}athbb{E}_{j_t}\| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} \|_2^2) - \frac{1}{J} \langle \nabla f(\mathbf{b}oldsymbol{\mu}athbf{x}^t), \mathbf{b}oldsymbol{\mu}athbf{x}^{t} - \mathbf{b}oldsymbol{\mu}athbf{x} \mathbf{r}angle - \frac{1}{J}[g(\mathbf{b}oldsymbol{\mu}athbf{x}^t) - g(\mathbf{b}oldsymbol{\mu}athbf{x}) ] \nonumber \\
& \leq \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} ( \| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^t \|_2^2 - \mathbf{b}oldsymbol{\mu}athbb{E}_{j_t}\| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} \|_2^2) - \frac{1}{J} [ f(\mathbf{b}oldsymbol{\mu}athbf{x}^t) + g(\mathbf{b}oldsymbol{\mu}athbf{x}^t) - (f(\mathbf{b}oldsymbol{\mu}athbf{x}) + g(\mathbf{b}oldsymbol{\mu}athbf{x})) ]~.
\mathbf{b}oldsymbol{\mu}athbf{e}nd{align}
Taking expectation over $\mathbf{b}oldsymbol{\mu}athbf{x}i_t$, we have
\mathbf{b}egin{align}
& \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_t}[f(\mathbf{b}oldsymbol{\mu}athbf{x}^{t+1}) + g(\mathbf{b}oldsymbol{\mu}athbf{x}^{t+1})] -\mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_{t-1}} [ f(\mathbf{b}oldsymbol{\mu}athbf{x}^t) + g(\mathbf{b}oldsymbol{\mu}athbf{x}^t)] \nonumber \\
& \leq \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} ( \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_{t-1}}\| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^t \|_2^2 - \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_t}\| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} \|_2^2) - \frac{1}{J} \{ \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_{t-1}} [ f(\mathbf{b}oldsymbol{\mu}athbf{x}^t) +g(\mathbf{b}oldsymbol{\mu}athbf{x}^t) ] - \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_{t-1}} [ f(\mathbf{b}oldsymbol{\mu}athbf{x}) + g(\mathbf{b}oldsymbol{\mu}athbf{x})] \} ~.
\mathbf{b}oldsymbol{\mu}athbf{e}nd{align}
Setting $\mathbf{b}oldsymbol{\mu}athbf{x} = \mathbf{b}oldsymbol{\mu}athbf{x}^t$ gives
\mathbf{b}egin{align}
& \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_{t}}[f(\mathbf{b}oldsymbol{\mu}athbf{x}^{t+1}) + g(\mathbf{b}oldsymbol{\mu}athbf{x}^{t+1})] - \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_{t-1}} [ f(\mathbf{b}oldsymbol{\mu}athbf{x}^t) + g(\mathbf{b}oldsymbol{\mu}athbf{x}^{t})] \leq - \frac{\mathbf{b}oldsymbol{\mu}athbf{e}ta_t}{2} \mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_t}[\| \mathbf{b}oldsymbol{\mu}athbf{x} - \mathbf{b}oldsymbol{\mu}athbf{x}^{t+1} \|_2^2]~.
\mathbf{b}oldsymbol{\mu}athbf{e}nd{align}
Thus, $\mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i_{t-1}}[f(\mathbf{b}oldsymbol{\mu}athbf{x}^t) + g(\mathbf{b}oldsymbol{\mu}athbf{x}^{t})]$ decreases monotonically.
Let $\mathbf{x} = \mathbf{x}^*$, which is an optimal solution. Rearranging the terms of~\myref{eq1} yields
\begin{align}
&\mathbb{E}_{\xi_{t}}[f(\mathbf{x}^t) + g(\mathbf{x}^t)] - [ f(\mathbf{x}^*) + g(\mathbf{x}^*) ] \nonumber \\
&\leq J \left[ \mathbb{E}_{\xi_{t-1}} [f(\mathbf{x}^t) + g(\mathbf{x}^{t}) ] - \mathbb{E}_{\xi_{t}}[f(\mathbf{x}^{t+1}) + g(\mathbf{x}^{t+1})] + \frac{\eta_t}{2} ( \mathbb{E}_{\xi_{t-1}}\| \mathbf{x}^* - \mathbf{x}^t \|_2^2 - \mathbb{E}_{\xi_{t}}[\| \mathbf{x}^* - \mathbf{x}^{t+1} \|_2^2]) \right]~.
\end{align}
Let $\eta_t = L = \max_j {L_{j}} $. Summing over $t$, we have
\begin{align}
& \sum_{t=1}^T\mathbb{E}_{\xi_{t-1}}[f(\mathbf{x}^t) + g(\mathbf{x}^{t})] - [ f(\mathbf{x}^*) + g(\mathbf{x}^{*})] \nonumber \\
& \leq J \left\{ f(\mathbf{x}^1) + g(\mathbf{x}^{1}) -\mathbb{E}_{\xi_{T}} [ f(\mathbf{x}^{T+1})+ g(\mathbf{x}^{T+1})] + \frac{L}{2}\| \mathbf{x}^* - \mathbf{x}^1 \|_2^2 \right \} \nonumber \\
& \leq J \left\{ [f(\mathbf{x}^1) + g(\mathbf{x}^{1})] - [ f(\mathbf{x}^*) +g(\mathbf{x}^*)] + \frac{L}{2} \| \mathbf{x}^* - \mathbf{x}^1 \|_2^2 \right \} ~.
\end{align}
Using the monotonicity of $\mathbb{E}_{\xi_{t-1}}f(\mathbf{x}^t)$ and dividing by $T$ on both sides completes the proof.
\qed
\fi
\section{Online Randomized Block Coordinate Descent}\label{sec:orbcd}
In this section, our goal is to combine OGD/SGD and RBCD to solve problem~\myref{eq:compositeobj}. We call the algorithm online randomized block coordinate descent (ORBCD), which computes one block coordinate of the gradient of one block of samples at each iteration. ORBCD essentially performs RBCD in the online and stochastic settings.
Let $\{ \mathbf{x}_1, \cdots, \mathbf{x}_J \}, \mathbf{x}_j\in \R^{n_j\times 1}$ be $J$ non-overlapping blocks of $\mathbf{x}$.
Let $U_j \in \R^{n\times n_j}$ be the $n_j$ columns of an $n\times n$ permutation matrix $\mathbf{U}$ corresponding to the $j$-th block of coordinates in $\mathbf{x}$. For any such partition of $\mathbf{x}$ and $\mathbf{U}$,
\begin{align}
\mathbf{x} = \sum_{j=1}^{J}U_j\mathbf{x}_j~, \quad \mathbf{x}_j = U_j^T\mathbf{x}~.
\end{align}
The $j$-th block coordinate of the gradient of $f$ can be denoted as
\begin{align}
\nabla_j f(\mathbf{x}) = U_j^T \nabla f(\mathbf{x})~.
\end{align}
Throughout the paper, we assume that the minimum of problem~\myref{eq:compositeobj} is attained. In addition, ORBCD needs the following assumptions:
\vspace{-3mm}
\begin{asm}\label{asm:orbcd1}
$f_t$ or $f_i$ has block-wise Lipschitz continuous gradient with constant $L_j$, i.e.,
\begin{align}
\| \nabla_j f_t(\mathbf{x} + U_j h_j ) - \nabla_j f_t(\mathbf{x}) \|_2 \leq L_j \| h_j \|_2 \leq L \| h_j \|_2~,
\end{align}
where $L = \max_j L_j$.
\end{asm}
\begin{asm}\label{asm:orbcd2}
1. $\| \nabla f_t (\mathbf{x}^t) \|_2 \leq R_f $, or $\| \nabla f (\mathbf{x}^t) \|_2 \leq R_f $;
2. $\mathbf{x}^t$ is assumed to lie in a bounded set ${\cal X}$, i.e., $\sup_{\mathbf{x},\mathbf{y} \in {\cal X}} \| \mathbf{x} - \mathbf{y} \|_2 = D$.
\end{asm}
While Assumption~\ref{asm:orbcd1} is used in RBCD, Assumption~\ref{asm:orbcd2} is used in OGD/SGD. We may also assume that the sum of the two functions is strongly convex.
\begin{asm}\label{asm:orbcd3}
$f_t(\mathbf{x}) + g(\mathbf{x})$ or $f(\mathbf{x}) + g(\mathbf{x})$ is $\gamma$-strongly convex, i.e.,
\begin{align}\label{eq:stronggcov}
f_t(\mathbf{x}) + g(\mathbf{x}) \geq f_t(\mathbf{y}) + g(\mathbf{y}) + \langle \nabla f_t(\mathbf{y}) + g'(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle + \frac{\gamma}{2} \| \mathbf{x} - \mathbf{y} \|_2^2~,
\end{align}
where $\gamma > 0$ and $g'(\mathbf{y})$ denotes the subgradient of $g$ at $\mathbf{y}$.
\end{asm}
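For example, Assumption~\ref{asm:orbcd3} holds in the familiar $\ell_2$-regularized setting: if $f_t$ is convex and $g(\mathbf{x}) = \frac{\lambda}{2}\|\mathbf{x}\|_2^2$ with $\lambda > 0$, then $f_t(\mathbf{x}) + g(\mathbf{x})$ satisfies~\myref{eq:stronggcov} with $\gamma = \lambda$.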
\subsection{ORBCD for Online Learning}
In the online setting, ORBCD considers the worst case and runs in rounds.
At time $t$, given any function $f_t$, which may be chosen adversarially, ORBCD randomly chooses the $j_t$-th block coordinate and presents the solution by solving the following problem:
\begin{align}\label{eq:orbcdo}
\mathbf{x}_{j_t}^{t+1} &= \argmin_{\mathbf{x}_{j_t}}~\langle \nabla_{j_t} f_t(\mathbf{x}^t), \mathbf{x}_{j_t} \rangle + g_{j_t}(\mathbf{x}_{j_t}) + \frac{\eta_t}{2} \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^t \|_2^2 \nonumber \\
& = \text{Prox}_{g_{j_t}}(\mathbf{x}_{j_t}^t -\frac{1}{\eta_t}\nabla_{j_t} f_t(\mathbf{x}^t) )~,
\end{align}
where $\text{Prox}$ denotes the proximal mapping. If $f_t$ is a linear function, e.g., $f_t(\mathbf{x}) = \mathbf{l}_t^T\mathbf{x}$, then $\nabla_{j_t} f_t(\mathbf{x}^t) = \mathbf{l}_{t,j_t}$, so solving~\myref{eq:orbcdo} is $J$ times cheaper than the OGD update.
Thus, $\mathbf{x}^{t+1} = ( \mathbf{x}_{j_t}^{t+1}, \mathbf{x}_{k\neq j_t}^t)$, or
\begin{align}
\mathbf{x}^{t+1} = \mathbf{x}^t + U_{j_t}(\mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t)~.
\end{align}
Then, ORBCD receives a loss function $f_{t+1}(\mathbf{x})$ and incurs the loss $f_{t+1}(\mathbf{x}^{t+1})$. The algorithm is summarized in Algorithm~\ref{alg:orbcd_online}.
$\mathbf{x}^t$ is independent of $j_t$ but depends on the sequence of observed realizations of the random variable
\begin{align}
\xi = \{ j_1, \cdots, j_{t-1} \}~.
\end{align}
Let $\mathbf{x}^*$ be the best solution in hindsight. The regret of ORBCD is defined as
\begin{align}
R(T) = \sum_{t=1}^T\left \{ \mathbb{E}_{\xi}[ f_t(\mathbf{x}^t) + g(\mathbf{x}^t) ] - [f_t(\mathbf{x}^*) +g(\mathbf{x}^*)] \right \}~.
\end{align}
By setting $\eta_t = \sqrt{t} + L$, where $L=\max_jL_j$, the regret bound of ORBCD is $O(\sqrt{T})$. For strongly convex functions, the regret bound of ORBCD is $O(\log T)$ by setting $\eta_t = \frac{\gamma t}{J} + L$.
\begin{algorithm*}[tb]
\caption{Online Randomized Block Coordinate Descent for Online Learning}
\label{alg:orbcd_online}
\begin{algorithmic}[1]
\STATE {\bfseries Initialization:} $\mathbf{x}^1 = \mathbf{0}$
\FOR{$t=1 \text{ to } T$}
\STATE randomly pick the $j_t$-th block coordinate
\STATE $\mathbf{x}_{j_t}^{t+1} = \argmin_{\mathbf{x}_{j_t} \in {\cal X}_j}~\langle \nabla_{j_t} f_t(\mathbf{x}^t), \mathbf{x}_{j_t} \rangle + g_{j_t}(\mathbf{x}_{j_t}) + \frac{\eta_t}{2} \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^t \|_2^2$~
\STATE $\mathbf{x}^{t+1} = \mathbf{x}^t + U_{j_t}(\mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t)$
\STATE receive the function $f_{t+1}(\mathbf{x}) + g(\mathbf{x})$ and incur the loss $f_{t+1}(\mathbf{x}^{t+1}) + g(\mathbf{x}^{t+1})$
\ENDFOR
\end{algorithmic}
\end{algorithm*}
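A minimal sketch of one round of Algorithm~\ref{alg:orbcd_online} is given below; the oracles \texttt{grad\_block\_ft} and \texttt{prox\_block\_g} are assumed interfaces introduced only for illustration, not part of the original algorithm.
\begin{verbatim}
import numpy as np

def orbcd_online_round(x, t, grad_block_ft, prox_block_g, blocks, L_max, rng):
    # One round of ORBCD in the online setting (a sketch).
    # grad_block_ft(x, idx): block gradient nabla_{j_t} f_t(x) on indices idx
    # prox_block_g(v, idx, step): argmin_u g_{j_t}(u) + ||u - v||^2 / (2*step)
    j = rng.integers(len(blocks))            # pick block j_t uniformly
    idx = blocks[j]
    eta_t = np.sqrt(t) + L_max               # eta_t = sqrt(t) + L, L = max_j L_j
    v = x[idx] - grad_block_ft(x, idx) / eta_t
    x_next = x.copy()
    x_next[idx] = prox_block_g(v, idx, 1.0 / eta_t)  # only block j_t changes
    return x_next                            # the loss f_{t+1} arrives afterwards
\end{verbatim}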
\subsection{ORBCD for Stochastic Optimization}
In the stochastic setting, ORBCD first randomly picks the $i_t$-th block of samples and then randomly chooses the $j_t$-th block coordinate. The algorithm has the following iterate:
\begin{align}\label{eq:orbcds}
\mathbf{x}_{j_t}^{t+1} & = \argmin_{\mathbf{x}_{j_t}}~\langle \nabla_{j_t} f_{i_t}(\mathbf{x}^t), \mathbf{x}_{j_t} \rangle + g_{j_t}(\mathbf{x}_{j_t}) + \frac{\eta_t}{2} \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^t \|_2^2 \nonumber \\
& = \text{Prox}_{g_{j_t}}(\mathbf{x}_{j_t}^t -\frac{1}{\eta_t}\nabla_{j_t} f_{i_t}(\mathbf{x}^t) )~.
\end{align}
For high-dimensional problems with non-overlapping composite regularizers, solving~\myref{eq:orbcds} is computationally cheaper than solving~\myref{eq:sgd} in SGD.
The algorithm of ORBCD in both settings is summarized in Algorithm~\ref{alg:orbcd_stochastic}.
$\mathbf{x}^{t+1}$ depends on $(i_t, j_t)$, but $j_{t}$ and $i_{t}$ are independent.
$\mathbf{x}^t$ is independent of $(i_t, j_t)$ but depends on the observed realization of the random variables
\begin{align}
\xi = \{ ( i_1, j_1), \cdots, (i_{t-1}, j_{t-1}) \}~.
\end{align}
The online-to-stochastic conversion rule~\cite{Duchi10_comid,duchi09,xiao10} still holds here. The iteration complexity of ORBCD can be obtained by dividing the regret bounds in the online setting by $T$. Setting $\eta_t = \sqrt{t} + L$, where $L=\max_jL_j$, the iteration complexity of ORBCD is
\begin{align}
\mathbb{E}_{\xi} [ f(\bar{\mathbf{x}}^T) + g(\bar{\mathbf{x}}^T) ] - [f(\mathbf{x}) +g(\mathbf{x})] \leq O(\frac{1}{\sqrt{T}})~.
\end{align}
For strongly convex functions, setting $\eta_t = \frac{\gamma t}{J} + L$,
\begin{align}
\mathbb{E}_{\xi} [ f(\bar{\mathbf{x}}^T) + g(\bar{\mathbf{x}}^T) ] - [f(\mathbf{x}) +g(\mathbf{x})] \leq O(\frac{\log T}{T})~.
\end{align}
The iteration complexity of ORBCD matches that of SGD. Similar to SGD, the convergence speed of ORBCD is also slowed down by the variance of the stochastic gradient.
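Equivalently, to reach an expected accuracy of $\epsilon$, ORBCD needs on the order of $T = O(1/\epsilon^2)$ iterations for general convex objectives and $T = O\big(\frac{1}{\epsilon}\log\frac{1}{\epsilon}\big)$ iterations under Assumption~\ref{asm:orbcd3}, which mirrors the behavior of SGD up to the factor $J$ hidden in the constants.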
\begin{algorithm*}[tb]
\caption{Online Randomized Block Coordinate Descent for Stochastic Optimization}
\label{alg:orbcd_stochastic}
\begin{algorithmic}[1]
\STATE {\bfseries Initialization:} $\mathbf{x}^1 = \mathbf{0}$
\FOR{$t=1 \text{ to } T$}
\STATE randomly pick the $i_t$-th block of samples and the $j_t$-th block coordinate
\STATE $\mathbf{x}_{j_t}^{t+1} = \argmin_{\mathbf{x}_{j_t} \in {\cal X}_j}~\langle \nabla_{j_t} f_{i_t}(\mathbf{x}^t), \mathbf{x}_{j_t} \rangle + g_{j_t}(\mathbf{x}_{j_t}) + \frac{\eta_t}{2} \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^t \|_2^2$~
\STATE $\mathbf{x}^{t+1} = \mathbf{x}^t + U_{j_t}(\mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t)$
\ENDFOR
\end{algorithmic}
\end{algorithm*}
\begin{algorithm*}[tb]
\caption{Online Randomized Block Coordinate Descent with Variance Reduction}
\label{alg:orbcdvd}
\begin{algorithmic}[1]
\STATE {\bfseries Initialization:} $\mathbf{x}^1 = \mathbf{0}$
\FOR{$t=2 \text{ to } T$}
\STATE $\mathbf{x}^0 = \tilde{\mathbf{x}} = \mathbf{x}^t$
\FOR{$k = 0\textbf{ to } m-1$}
\STATE randomly pick the $i_k$-th block of samples
\STATE randomly pick the $j_k$-th block coordinate
\STATE $\mathbf{v}_{j_k}^{i_k} = \nabla_{j_k} f_{i_k}(\mathbf{x}^{k}) - \nabla_{j_k} f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu}_{j_k}$ where $\tilde{\mu}_{j_k} = \nabla_{j_k} f(\tilde{\mathbf{x}})$
\STATE $\mathbf{x}_{j_k}^{k+1} = \argmin_{\mathbf{x}_{j_k} }~\langle \mathbf{v}_{j_k}^{i_k}, \mathbf{x}_{j_k} \rangle + g_{j_k}(\mathbf{x}_{j_k}) + \frac{\eta_k}{2} \| \mathbf{x}_{j_k} - \mathbf{x}_{j_k}^{k}\|_2^2$~
\STATE $\mathbf{x}^{k+1} = \mathbf{x}^k + U_{j_k}(\mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k}^k)$
\ENDFOR
\STATE $\mathbf{x}^{t+1} = \mathbf{x}^m$ or $\frac{1}{m}\sum_{k=1}^{m}\mathbf{x}^k$
\ENDFOR
\end{algorithmic}
\end{algorithm*}
\subsection{ORBCD with Variance Reduction}
In the stochastic setting, we apply the variance reduction technique~\cite{xiao14:psgdvd,zhang13:sgdvd} to accelerate the rate of convergence of ORBCD; the resulting method is abbreviated as ORBCDVD. Like SVRG and prox-SVRG, ORBCDVD consists of two stages. At time $t+1$, the outer stage maintains an estimate $\tilde{\mathbf{x}} = \mathbf{x}^t$ of the optimum $\mathbf{x}^*$ and updates $\tilde{\mathbf{x}}$ every $m+1$ iterations.
The inner stage takes $m$ iterations, indexed by $k = 0,\cdots, m-1$. At the $k$-th iteration, ORBCDVD randomly picks the $i_k$-th sample and the $j_k$-th block coordinate and computes
\begin{align}
\mathbf{v}_{j_k}^{i_k} &= \nabla_{j_k} f_{i_k}(\mathbf{x}^k) - \nabla_{j_k} f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu}_{j_k}~, \label{eq:orbcdvd_vij}
\end{align}
where
\begin{align}\label{eq:orbcdvd_mu}
\tilde{\mu}_{j_k} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{j_k} f_i(\tilde{\mathbf{x}}) = \nabla_{j_k} f(\tilde{\mathbf{x}})~.
\end{align}
$\mathbf{v}_{j_k}^{i_k}$ depends on $(i_k, j_k)$, and $i_k$ and $j_k$ are independent. Conditioned on $\mathbf{x}^k$, taking expectation over $i_k, j_k$ gives
\begin{align}
\mathbb{E}\mathbf{v}_{j_k}^{i_k} &= \mathbb{E}_{i_k} \mathbb{E}_{j_k}[\nabla_{j_k} f_{i_k}(\mathbf{x}^k) - \nabla_{j_k} f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu}_{j_k}] \nonumber \\
&= \frac{1}{J}\mathbb{E}_{i_k} [\nabla f_{i_k}(\mathbf{x}^k) - \nabla f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu} ] \nonumber \\
& = \frac{1}{J}\nabla f(\mathbf{x}^k)~.
\end{align}
Although $\mathbf{v}_{j_k}^{i_k}$ is a stochastic gradient, the variance $\mathbb{E} \| \mathbf{v}_{j_k}^{i_k} - \nabla_{j_k} f(\mathbf{x}^k) \|_2^2$ decreases progressively and is smaller than $\mathbb{E} \| \nabla f_{i_t}(\mathbf{x}^t) - \nabla f(\mathbf{x}^t) \|_2^2$.
Using the variance-reduced gradient $\mathbf{v}_{j_k}^{i_k}$, ORBCDVD then performs an RBCD step as follows:
\begin{align}
\mathbf{x}_{j_k}^{k+1} &= \argmin_{\mathbf{x}_{j_k}}~ \langle \mathbf{v}_{j_k}^{i_k}, \mathbf{x}_{j_k} \rangle + g_{j_k}(\mathbf{x}_{j_k}) + \frac{\eta}{2} \| \mathbf{x}_{j_k} - \mathbf{x}_{j_k}^k \|_2^2 \label{eq:orbcdvd_xj}~.
\end{align}
After $m$ iterations, the outer stage updates $\mathbf{x}^{t+1}$, which is either $\mathbf{x}^m$ or $\frac{1}{m}\sum_{k=1}^{m}\mathbf{x}^k$. The algorithm is summarized in Algorithm~\ref{alg:orbcdvd}. At the outer stage,
ORBCDVD does not necessarily require computing the full gradient at once. If the computation of the full gradient requires substantial computational
effort, SVRG has to stop and complete the full gradient step before making progress. In contrast, $\tilde{\mu}$ can be partially computed at each iteration and then stored for later retrieval in ORBCDVD.
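A sketch of one inner iteration of Algorithm~\ref{alg:orbcdvd} is given below; the oracles \texttt{grad\_block\_fi} and \texttt{prox\_block\_g} are assumed interfaces used only for illustration, and \texttt{mu\_tilde} stores the full gradient $\nabla f(\tilde{\mathbf{x}})$, which, as noted above, may be accumulated incrementally.
\begin{verbatim}
import numpy as np

def orbcdvd_inner_step(x, x_tilde, mu_tilde, grad_block_fi, prox_block_g,
                       blocks, n, eta, rng):
    # One inner iteration of ORBCD with variance reduction (a sketch).
    # grad_block_fi(z, i, idx): nabla_j f_i(z) on the indices idx
    # mu_tilde: precomputed full gradient nabla f(x_tilde), stored as a vector
    i = rng.integers(n)                      # sample i_k uniformly
    j = rng.integers(len(blocks))            # sample j_k uniformly
    idx = blocks[j]
    # variance-reduced block gradient v_{j_k}^{i_k}, as defined above
    v = grad_block_fi(x, i, idx) - grad_block_fi(x_tilde, i, idx) + mu_tilde[idx]
    x_next = x.copy()
    x_next[idx] = prox_block_g(x[idx] - v / eta, idx, 1.0 / eta)
    return x_next
\end{verbatim}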
Assume $\eta > 2L$ and that $m$ satisfies the following condition:
\begin{align}\label{eq:orbcdvd_rho}
\rho = \frac{L(m+1)}{(\eta-2L)m} + \frac{(\eta-L)J}{(\eta-2L)m} - \frac{1}{m}+ \frac{\eta (\eta-L)J}{(\eta-2L)m\gamma} < 1~.
\end{align}
Then $h(\mathbf{x})$ converges linearly in expectation, i.e.,
\begin{align}
\mathbb{E}_{\xi} [f(\mathbf{x}^t) + g(\mathbf{x}^t) - (f(\mathbf{x}^*) + g(\mathbf{x}^*)) ] \leq O(\rho^t)~.
\end{align}
Setting $\eta = 4L$ in~\myref{eq:orbcdvd_rho} yields
\begin{align}
\rho = \frac{m+1}{2m} + \frac{3J}{2m} - \frac{1}{m}+ \frac{6JL}{m\gamma} \leq \frac{1}{2} + \frac{3 J}{2m}(1+\frac{4 L}{\gamma})~.
\end{align}
Setting $m = 18JL/\gamma$ then gives
\begin{align}
\rho \leq \frac{1}{2} + \frac{1}{12}(\frac{\gamma}{L}+4) \approx \frac{11}{12}~,
\end{align}
where we assume $\gamma/L \approx 1$ for simplicity.
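As a rough illustration of this choice, $\rho \approx 11/12$ means that each outer stage shrinks the expected suboptimality by a constant factor, so about $\log(1/\epsilon)/\log(12/11) \approx 12\,\log(1/\epsilon)$ outer stages, each of $m$ inner iterations, suffice to reach accuracy $\epsilon$.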
\section{The Rate of Convergence}\label{sec:theory}
The following lemma is a key building block of the proof of the convergence of ORBCD in both the online and stochastic settings.
\begin{lem}
Let Assumptions~\ref{asm:orbcd1} and \ref{asm:orbcd2} hold.
Let $\mathbf{x}^t$ be the sequence generated by ORBCD, where $j_t$ is sampled randomly and uniformly from $\{1,\cdots, J \}$. We have
\begin{align}\label{eq:orbcd_key_lem}
& \langle \nabla_{j_t} f_t(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^t), \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle \leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{R_f^2}{2(\eta_t - L)} + g(\mathbf{x}^t) - g(\mathbf{x}^{t+1}) ~,
\end{align}
where $L = \max_j L_j$.
\end{lem}
\proof
The optimality condition is
\begin{align}
\langle \nabla_{j_t} f_t(\mathbf{x}^t) + \eta_t (\mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t+1}), \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle \leq 0~.
\end{align}
Rearranging the terms yields
\begin{align}
& \langle \nabla_{j_t} f_t(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle \leq - \eta_t \langle \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle \nonumber \\
& \leq \frac{\eta_t}{2} ( \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^t \|_2^2 - \| \mathbf{x}_{j_t} - \mathbf{x}_{j_t}^{t+1} \|_2^2 - \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 ) \nonumber \\
& = \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2 - \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 ) ~,
\end{align}
where the last equality uses $\mathbf{x}^{t+1} = (\mathbf{x}_{j_t}^{t+1}, \mathbf{x}_{k\neq {j_t}}^t)$.
By the smoothness of $f_t$, we have
\begin{align}
f_t(\mathbf{x}^{t+1}) \leq f_t(\mathbf{x}^t) + \langle \nabla_j f_t(\mathbf{x}^t), \mathbf{x}_j^{t+1} - \mathbf{x}_j^t \rangle + \frac{L_j}{2} \| \mathbf{x}_j^{t+1} - \mathbf{x}_j^t \|_2^2~.
\end{align}
Since $\mathbf{x}^{t+1} - \mathbf{x}^t = U_{j_t}(\mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^{t})$,
\iffalse
the convexity of $g$ gives
\begin{align}
& g(\mathbf{x}^{t+1}) - g(\mathbf{x}^t) \leq \langle g'(\mathbf{x}^{t+1}), \mathbf{x}^{t+1} - \mathbf{x}^t \rangle \leq \langle g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) + \sum_{\mathbb{I}_{j_t} \cap \mathbb{I}_k \neq \emptyset} g'_{k}(\mathbf{x}_{k}^{t+1}), \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \rangle \nonumber \\
& \leq \langle g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \rangle + \frac{1}{2\alpha}\| \sum_{\mathbb{I}_{j_t} \cap \mathbb{I}_k \neq \emptyset} g'_{k}(\mathbf{x}_{k}^{t+1}) \|_2^2 + \frac{\alpha}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 \nonumber \\
& \leq \langle g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \rangle + \frac{(J-1)R_g^2}{2\alpha} + \frac{\alpha}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2~.
\end{align}
\begin{align}
g(\mathbf{x}^{t+1}) - g(\mathbf{x}^t) \leq \langle g'_{j_t}(\mathbf{x}_{j_t}^{t+1}) , \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \rangle~.
\end{align}
\begin{align}
\| \mathbf{x}_{k}^{t+1} - \mathbf{x}_{k}^t \|_2^2 = \| \mathbf{x}_{\mathbb{I}_{j_t} \cap \mathbb{I}_k \neq \emptyset}^{t+1} - \mathbf{x}_{\mathbb{I}_{j_t} \cap \mathbb{I}_k \neq \emptyset}^t \|_2^2 \leq \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2~.
\end{align}
\begin{align}
\langle \sum_{\mathbb{I}_{j_t} \cap \mathbb{I}_k \neq \emptyset} g'_{k}(\mathbf{x}_{k}^{t+1}), \mathbf{x}_{k}^{t+1} - \mathbf{x}_{k}^t \rangle \leq \frac{(J-1)R_g^2}{2\alpha} + \frac{\alpha}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2~.
\end{align}
\fi
we have
\begin{align}
& f_t(\mathbf{x}^{t+1}) + g(\mathbf{x}^{t+1}) - [f_t(\mathbf{x}^t) + g(\mathbf{x}^t)] \nonumber \\
& \leq \langle \nabla_{j_t} f_t(\mathbf{x}^t), \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \rangle + \frac{L_{j_t}}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 + g_{j_t}(\mathbf{x}_{j_t}^{t+1}) - g_{j_t}(\mathbf{x}_{j_t}) + g_{j_t}(\mathbf{x}_{j_t}^{t}) - g_{j_t}(\mathbf{x}_{j_t}) \nonumber \\
& \leq \langle \nabla_{j_t} f_t(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t+1}), \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t} \rangle + \frac{L_{j_t}}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 - \langle \nabla_{j_t} f_t(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t}), \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle \nonumber \\
& \leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{L_{j_t} - \eta_t}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 - \langle \nabla_{j_t} f_t(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^{t}), \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle ~.
\end{align}
Rearranging the terms yields
\begin{align}\label{eq:lem1}
\langle \nabla_{j_t} f_t(\mathbf{x}^t) + g_{j_t}'(\mathbf{x}^t) , \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle &\leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{L_{j_t} - \eta_t}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 \nonumber \\
&+ f_t(\mathbf{x}^t) + g(\mathbf{x}^t)- [ f_t(\mathbf{x}^{t+1}) + g(\mathbf{x}^{t+1}) ]~.
\end{align}
The convexity of $f_t$ gives
\begin{align}
f_t(\mathbf{x}^t) - f_t(\mathbf{x}^{t+1}) \leq \langle \nabla f_t(\mathbf{x}^t) , \mathbf{x}^t - \mathbf{x}^{t+1} \rangle = \langle \nabla_{j_t} f_t(\mathbf{x}^t) , \mathbf{x}_{j_t}^t - \mathbf{x}_{j_t}^{t+1} \rangle \leq \frac{1}{2\alpha} \| \nabla_{j_t} f_t(\mathbf{x}^t) \|_2^2 + \frac{\alpha}{2} \| \mathbf{x}_{j_t}^t - \mathbf{x}_{j_t}^{t+1} \|_2^2~,
\end{align}
where the equality uses $\mathbf{x}^{t+1} = (\mathbf{x}_{j_t}^{t+1}, \mathbf{x}_{k\neq {j_t}}^t)$. Plugging into~\myref{eq:lem1}, we have
\begin{align}
& \langle \nabla_{j_t} f_t(\mathbf{x}^t) + g'_{j_t}(\mathbf{x}_{j_t}^t), \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle \nonumber \\
& \leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{L_{j_t} - \eta_t}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 + \langle \nabla_{j_t} f_t(\mathbf{x}^t) , \mathbf{x}_{j_t}^t - \mathbf{x}_{j_t}^{t+1} \rangle + g(\mathbf{x}^t) - g(\mathbf{x}^{t+1}) \nonumber \\
& \leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{L_{j_t} - \eta_t}{2} \| \mathbf{x}_{j_t}^{t+1} - \mathbf{x}_{j_t}^t \|_2^2 + \frac{\alpha}{2}\| \mathbf{x}_{j_t}^t - \mathbf{x}_{j_t}^{t+1} \|_2^2 + \frac{1}{2\alpha} \| \nabla_{j_t} f_t(\mathbf{x}^t) \|_2^2 ~.
\end{align}
Let $ L = \max_{j} L_j$. Setting $\alpha = \eta_t - L$ where $\eta_t > L$ completes the proof.
\qed
This lemma is also a key building block in the proofs of the iteration complexity of GD, OGD/SGD and RBCD. In GD, the iteration complexity can be established by setting $\eta_t = L$; in RBCD, it can be established by simply setting $\eta_t = L_{j_t}$.
\subsection{Online Optimization}
Note that $\mathbf{x}^t$ depends on the sequence of observed realizations of the random variable
$\xi = \{ j_1, \cdots, j_{t-1} \}$.
The following theorem establishes the regret bound of ORBCD.
\begin{thm}
Let $\eta_t = \sqrt{t} + L$ in ORBCD and let Assumptions~\ref{asm:orbcd1} and \ref{asm:orbcd2} hold. $j_t$ is sampled randomly and uniformly from $\{1,\cdots, J \}$. The regret bound $R(T)$ of ORBCD is
\begin{align}
R(T) \leq J ( \frac{\sqrt{T} + L}{2}D^2 + \sqrt{T} R^2 + g(\mathbf{x}^1) - g(\mathbf{x}^*) )~.
\end{align}
\end{thm}
\proof
In~\myref{eq:orbcd_key_lem}, conditioned on $\mathbf{x}^t$ and taking expectation over $j_t$, we have
\begin{align}\label{eq:a}
\frac{1}{J} \langle \nabla f_t(\mathbf{x}^t) + g'(\mathbf{x}^t), \mathbf{x}^{t} - \mathbf{x} \rangle &\leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \mathbb{E}\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{R^2}{2(\eta_t - L)} + g(\mathbf{x}^t) - \mathbb{E}g(\mathbf{x}^{t+1})~.
\end{align}
Using the convexity, we have
\begin{align}
f_t(\mathbf{x}^t) + g(\mathbf{x}^t) - [f_t(\mathbf{x}) + g(\mathbf{x})] \leq \langle \nabla f_t(\mathbf{x}^t) + g'(\mathbf{x}^t), \mathbf{x}^{t} - \mathbf{x} \rangle~.
\end{align}
Together with~\myref{eq:a}, we have
\begin{align}
f_t(\mathbf{x}^t) + g(\mathbf{x}^t) - [f_t(\mathbf{x}) + g(\mathbf{x}) ] &\leq J \left \{ \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \mathbb{E}\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{R^2}{2(\eta_t - L)} + g(\mathbf{x}^t) - \mathbb{E}g(\mathbf{x}^{t+1}) \right \}~.
\end{align}
Taking expectation over $\xi$ on both sides, we have
\begin{align}
\mathbb{E}_{\xi} \left [ f_t(\mathbf{x}^t) + g(\mathbf{x}^t) - [f_t(\mathbf{x}) + g(\mathbf{x}) ] \right ] &\leq J \left \{ \frac{\eta_t}{2} ( \mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^t \|_2^2 - \mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) \right .\nonumber \\
& + \left. \frac{R^2}{2(\eta_t - L)} + \mathbb{E}_{\xi} g(\mathbf{x}^t) - \mathbb{E}_{\xi}g(\mathbf{x}^{t+1}) \right \}~.
\end{align}
Summing over $t$ and setting $\eta_t = \sqrt{t} + L$, we obtain the regret bound
\begin{align}\label{eq:orbcd_rgt0}
& R(T) = \sum_{t=1}^T\left \{ \mathbb{E}_{\xi} [ f_t(\mathbf{x}^t) + g(\mathbf{x}^t) ] - [f_t(\mathbf{x}) +g(\mathbf{x})] \right \} \nonumber \\
&\leq J \left \{ - \frac{\eta_{T}}{2} \mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^{T+1} \|_2^2 + \sum_{t=1}^{T}\frac{\eta_{t} - \eta_{t-1}}{2} \mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^{t} \|_2^2 + \sum_{t=1}^{T}\frac{R^2}{2(\eta_t - L)} + g(\mathbf{x}^1) - \mathbb{E}_{\xi}g(\mathbf{x}^{T+1}) \right \} \nonumber \\
& \leq J \left \{ \frac{\eta_T}{2} D^2 + \sum_{t=1}^{T}\frac{R^2}{2(\eta_t - L)} + g(\mathbf{x}^1) - g(\mathbf{x}^*) \right \} \nonumber \\
& \leq J \left \{ \frac{\sqrt{T} + L}{2} D^2 + \sum_{t=1}^{T}\frac{R^2}{2\sqrt{t} } + g(\mathbf{x}^1) - g(\mathbf{x}^*) \right \} \nonumber \\
& \leq J ( \frac{\sqrt{T} + L}{2} D^2+ \sqrt{T} R^2 + g(\mathbf{x}^1) - g(\mathbf{x}^*) )~,
\end{align}
which completes the proof.
\qed
If one of the functions is strongly convex, ORBCD can achieve an $O(\log T)$ regret bound, which is established in the following theorem.
\begin{thm}\label{thm:orbcd_rgt_strong}
Let Assumptions~\ref{asm:orbcd1}-\ref{asm:orbcd3} hold and $\eta_t = \frac{\gamma t}{J} + L$ in ORBCD. $j_t$ is sampled randomly and uniformly from $\{1,\cdots, J \}$. The regret bound $R(T)$ of ORBCD is
\begin{align}
R(T) \leq J^2R^2 \log(T) + J(g(\mathbf{x}^1) - g(\mathbf{x}^*) ) ~.
\end{align}
\end{thm}
\proof
Using the strong convexity of $f_t + g$ in~\myref{eq:stronggcov}, we have
\begin{align}
f_t(\mathbf{x}^t) + g(\mathbf{x}^t) - [f_t(\mathbf{x}) + g(\mathbf{x})] \leq \langle \nabla f_t(\mathbf{x}^t) + g'(\mathbf{x}^t), \mathbf{x}^{t} - \mathbf{x} \rangle - \frac{\gamma}{2} \| \mathbf{x} - \mathbf{x}^t \|_2^2~.
\end{align}
Together with~\myref{eq:a}, we have
\begin{align}
f_t(\mathbf{x}^t) + g(\mathbf{x}^t) - [f_t(\mathbf{x}) + g(\mathbf{x}) ] &\leq \frac{J\eta_t - \gamma }{2} \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \frac{J\eta_t}{2} \mathbb{E}\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2 \nonumber \\
& + \frac{JR^2}{2(\eta_t - L)} + J [ g(\mathbf{x}^t) - \mathbb{E}g(\mathbf{x}^{t+1}) ] ~.
\end{align}
Taking expectation over $\xi$ on both sides, we have
\begin{align}
\mathbb{E}_{\xi} \left [ f_t(\mathbf{x}^t) + g(\mathbf{x}^t) - [f_t(\mathbf{x}) + g(\mathbf{x}) ] \right ] &\leq \frac{J\eta_t - \gamma}{2} \mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^t \|_2^2 - \frac{J\eta_t}{2}\mathbb{E}_{\xi}[\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2] \nonumber \\
& + \frac{JR^2}{2(\eta_t - L)} + J [ \mathbb{E}_{\xi} g(\mathbf{x}^t) - \mathbb{E}_{\xi}g(\mathbf{x}^{t+1}) ]~.
\end{align}
Summing over $t$ and setting $\eta_t = \frac{\gamma t}{J} + L$, we obtain the regret bound
\begin{align}
& R(T) = \sum_{t=1}^T\left \{ \mathbb{E}_{\xi} [ f_t(\mathbf{x}^t) + g(\mathbf{x}^t) ] - [f_t(\mathbf{x}) +g(\mathbf{x})] \right \} \nonumber \\
&\leq - \frac{J\eta_{T}}{2} \mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^{T+1} \|_2^2 + \sum_{t=1}^{T}\frac{J\eta_{t} -\gamma - J\eta_{t-1}}{2}\mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^{t} \|_2^2 + \sum_{t=1}^{T}\frac{JR^2}{2(\eta_t - L)} + J ( g(\mathbf{x}^1) - \mathbb{E}_{\xi}g(\mathbf{x}^{T+1}) ) \nonumber \\
& \leq \sum_{t=1}^{T}\frac{J^2R^2}{2\gamma t } + J(g(\mathbf{x}^1) - g(\mathbf{x}^*) ) \nonumber \\
& \leq J^2R^2 \log(T) + J(g(\mathbf{x}^1) - g(\mathbf{x}^*) ) ~,
\end{align}
which completes the proof.
\qed
In general, ORBCD achieves the same order of regret bound as OGD and other first-order online optimization methods, although the constant could be $J$ times larger.
\iffalse
If we set $f_t = f$, ORBCD turns into batch optimization, or randomized overlapping block coordinate descent (ROLBCD). By dividing the regret bound by $T$ and denoting $\bar{\mathbf{x}}^T = \frac{1}{T}\sum_{t=1}^{T} \mathbf{x}^t$, we obtain the iteration complexity of ROLBCD, i.e.,
\begin{align}
\mathbb{E}_{\xi_{T-1}^j}[ f(\bar{\mathbf{x}}^T) + g(\bar{\mathbf{x}}^T) ] - [f(\mathbf{x}) +g(\mathbf{x})] = \frac{R(T)}{T} ~.
\end{align}
The iteration complexity of ROLBCD is $O(\frac{1}{\sqrt{T}})$, which is worse than that of RBCD.
\fi
\subsection{Stochastic Optimization}
In the stochastic setting, ORBCD first randomly chooses the $i_t$-th block sample and the $j_t$-th block coordinate.
$j_{t}$ and $i_{t}$ are independent. $\mathbf{b}oldsymbol{\mu}athbf{x}^t$ depends on the observed realization of the random variables
$\mathbf{b}oldsymbol{\mu}athbf{x}i = \{ ( i_1, j_1), \mathbf{c}dots, (i_{t-1}, j_{t-1}) \}$.
The following theorem establishes the iteration complexity of ORBCD for general convex functions.
\mathbf{b}egin{thm}\label{thm:orbcd_stc_ic}
Let $\mathbf{b}oldsymbol{\mu}athbf{e}ta_t = \sqrt{t} + L$ and $\mathbf{b}ar{\mathbf{b}oldsymbol{\mu}athbf{x}}^T = \frac{1}{T} \sum_{t=1}^{T}\mathbf{b}oldsymbol{\mu}athbf{x}^t $ in the ORBCD. $i_t, j_t$ are sampled randomly and uniformly from $\{1,\mathbf{c}dots, I \}$ and $\{1,\mathbf{c}dots, J \}$ respectively. The iteration complexity of ORBCD is
\mathbf{b}egin{align}
\mathbf{b}oldsymbol{\mu}athbb{E}_{\mathbf{b}oldsymbol{\mu}athbf{x}i} [ f(\mathbf{b}ar{\mathbf{b}oldsymbol{\mu}athbf{x}}^t) + g(\mathbf{b}ar{\mathbf{b}oldsymbol{\mu}athbf{x}}^t) ] - [f(\mathbf{b}oldsymbol{\mu}athbf{x}) +g(\mathbf{b}oldsymbol{\mu}athbf{x})] \leq \frac{J ( \frac{\sqrt{T} + L}{2} D^2+ \sqrt{T} R^2 + g(\mathbf{b}oldsymbol{\mu}athbf{x}^1) - g(\mathbf{b}oldsymbol{\mu}athbf{x}^*) )}{T}~.
\mathbf{b}oldsymbol{\mu}athbf{e}nd{align}
\mathbf{b}oldsymbol{\mu}athbf{e}nd{thm}
\proof
In the stochastic setting, letting $f_t$ be $f_{i_t}$ in~\myref{eq:orbcd_key_lem}, we have
\begin{align}\label{eq:orbcd_key_stoc}
\langle \nabla_{j_t} f_{i_t}(\mathbf{x}^t) + g_{j_t}'(\mathbf{x}^t) , \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle \leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{R^2}{2(\eta_t - L)} + g(\mathbf{x}^t) - g(\mathbf{x}^{t+1}) ~.
\end{align}
Note that $i_t, j_t$ are independent of $\mathbf{x}^t$. Conditioned on $\mathbf{x}^t$, taking expectation over $i_t$ and $j_t$, the left-hand side becomes
\begin{align}
& \mathbb{E}\langle \nabla_{j_t} f_{i_t}(\mathbf{x}^t) + g_{j_t}'(\mathbf{x}^t) , \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle = \mathbb{E}_{i_t} [ \mathbb{E}_{j_t} [ \langle \nabla_{j_t} f_{i_t}(\mathbf{x}^t) + g_{j_t}'(\mathbf{x}^t), \mathbf{x}_{j_t}^{t} - \mathbf{x}_{j_t} \rangle ] ] \nonumber \\
& = \frac{1}{J} \mathbb{E}_{i_t} [ \langle \nabla f_{i_t}(\mathbf{x}^t), \mathbf{x}^{t} - \mathbf{x} \rangle + \langle g'(\mathbf{x}^t) , \mathbf{x}^{t} - \mathbf{x} \rangle ] \nonumber \\
& = \frac{1}{J}\langle \nabla f(\mathbf{x}^t) + g'(\mathbf{x}^t) , \mathbf{x}^{t} - \mathbf{x} \rangle ~.
\end{align}
Plugging back into~\myref{eq:orbcd_key_stoc}, we have
\begin{align}\label{eq:orbcd_stc_0}
& \frac{1}{J}\langle \nabla f(\mathbf{x}^t) + g'(\mathbf{x}^t) , \mathbf{x}^{t} - \mathbf{x} \rangle \nonumber \\
&\leq \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \mathbb{E}\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{R^2}{2(\eta_t - L)} + g(\mathbf{x}^t) - \mathbb{E} g(\mathbf{x}^{t+1}) ~.
\end{align}
Using the convexity of $f + g$, we have
\begin{align}
f(\mathbf{x}^t) + g(\mathbf{x}^t) - [f(\mathbf{x}) + g(\mathbf{x})] \leq \langle \nabla f(\mathbf{x}^t) + g'(\mathbf{x}^t), \mathbf{x}^{t} - \mathbf{x} \rangle~.
\end{align}
Together with~\myref{eq:orbcd_stc_0}, we have
\begin{align}
f(\mathbf{x}^t) + g(\mathbf{x}^t) - [f(\mathbf{x}) + g(\mathbf{x}) ] &\leq J \left \{ \frac{\eta_t}{2} ( \| \mathbf{x} - \mathbf{x}^t \|_2^2 - \mathbb{E}\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2) + \frac{R^2}{2(\eta_t - L)} + g(\mathbf{x}^t) - \mathbb{E}g(\mathbf{x}^{t+1}) \right \}~.
\end{align}
Taking expectation over $\xi$ on both sides, we have
\begin{align}
\mathbb{E}_{\xi} \left [ f(\mathbf{x}^t) + g(\mathbf{x}^t) \right ] - [f(\mathbf{x}) + g(\mathbf{x}) ] &\leq J \left \{ \frac{\eta_t}{2} ( \mathbb{E}_{\xi}\| \mathbf{x} - \mathbf{x}^t \|_2^2 - \mathbb{E}_{\xi}[\| \mathbf{x} - \mathbf{x}^{t+1} \|_2^2]) \right .\nonumber \\
& + \left. \frac{R^2}{2(\eta_t - L)} + \mathbb{E}_{\xi} g(\mathbf{x}^t) - \mathbb{E}_{\xi}g(\mathbf{x}^{t+1}) \right \}~.
\end{align}
Summing over $t$, setting $\eta_t = \sqrt{t} + L$, and following a derivation similar to~\myref{eq:orbcd_rgt0}, we have
\begin{align}
\sum_{t=1}^T\left \{ \mathbb{E}_{\xi} [ f(\mathbf{x}^t) + g(\mathbf{x}^t) ] - [f(\mathbf{x}) +g(\mathbf{x})] \right \} \leq J ( \frac{\sqrt{T} + L}{2} D^2+ \sqrt{T} R^2 + g(\mathbf{x}^1) - g(\mathbf{x}^*) )~.
\end{align}
Dividing both sides by $T$, using Jensen's inequality, and denoting $\bar{\mathbf{x}}^T = \frac{1}{T}\sum_{t=1}^{T}\mathbf{x}^t$ complete the proof.
\qed
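For concreteness, the stochastic ORBCD step analyzed above can be sketched in a few lines of Python. This is an illustrative sketch only, not the implementation used in this paper: it assumes the block update takes a proximal-gradient form consistent with the analysis, it instantiates $g$ as the $\ell_1$ norm (so the block proximal map is soft-thresholding), and the identifiers (\texttt{grad\_fi}, \texttt{blocks}, \texttt{lam}) are ours.
\begin{verbatim}
import numpy as np

def soft_threshold(z, tau):
    # proximal map of tau*||.||_1 (illustrative choice for the blocks of g)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def orbcd_stochastic(grad_fi, n_samples, blocks, x0, L, T, lam=0.1, seed=0):
    # grad_fi(i, x): gradient of the i-th component function f_i at x
    # blocks: list of index arrays partitioning the coordinates into J blocks
    rng = np.random.default_rng(seed)
    x = x0.copy()
    x_bar = np.zeros_like(x0)
    for t in range(1, T + 1):
        x_bar += (x - x_bar) / t               # running average of x^1, ..., x^T
        eta = np.sqrt(t) + L                   # step size eta_t = sqrt(t) + L
        i = rng.integers(n_samples)            # sample i_t uniformly
        j = blocks[rng.integers(len(blocks))]  # sample block j_t uniformly
        g_j = grad_fi(i, x)[j]                 # block gradient of f_{i_t}
        # block proximal-gradient step; only block j_t changes
        x[j] = soft_threshold(x[j] - g_j / eta, lam / eta)
    return x_bar
\end{verbatim}
In practice one supplies \texttt{grad\_fi} for the data at hand together with a (possibly conservative) estimate of $L$; the returned average corresponds to $\bar{\mathbf{x}}^T$ in Theorem~\ref{thm:orbcd_stc_ic}.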
For strongly convex functions, we have the following results.
\begin{thm}
For strongly convex functions, set $\eta_t = \frac{\gamma t}{J} + L$ in ORBCD, and let $i_t, j_t$ be sampled randomly and uniformly from $\{1,\cdots, I \}$ and $\{1,\cdots, J \}$ respectively. Let $\bar{\mathbf{x}}^T = \frac{1}{T} \sum_{t=1}^{T}\mathbf{x}^t$. The iteration complexity of ORBCD is
\begin{align}
\mathbb{E}_{\xi} [ f(\bar{\mathbf{x}}^T) + g(\bar{\mathbf{x}}^T) ] - [f(\mathbf{x}) +g(\mathbf{x})] \leq \frac{J^2R^2 \log(T) + J(g(\mathbf{x}^1) - g(\mathbf{x}^*) ) }{T}~.
\end{align}
\end{thm}
\proof
If $f+g$ is strongly convex, we have
\begin{align}
f(\mathbf{x}^t) + g(\mathbf{x}^t) - [f(\mathbf{x}) + g(\mathbf{x})] \leq \langle \nabla f(\mathbf{x}^t) + g'(\mathbf{x}^t), \mathbf{x}^{t} - \mathbf{x} \rangle - \frac{\gamma}{2} \| \mathbf{x} - \mathbf{x}^t \|_2^2~.
\end{align}
Plugging back into~\myref{eq:orbcd_stc_0} and following derivations similar to those in Theorem~\ref{thm:orbcd_rgt_strong} and Theorem~\ref{thm:orbcd_stc_ic} completes the proof.
\qed
\subsection{ORBCD with Variance Reduction}
According to Theorem 2.1.5 in~\cite{nesterov04:convex}, the block-wise Lipschitz gradient condition in Assumption~\ref{asm:orbcd1} can also be rewritten as follows:
\begin{align}
& f_i(\mathbf{x}) \leq f_i(\mathbf{y}) + \langle \nabla_j f_i(\mathbf{x}) - \nabla_j f_i(\mathbf{y}), \mathbf{x}_j - \mathbf{y}_j\rangle + \frac{L}{2} \| \mathbf{x}_j - \mathbf{y}_j\|_2^2~, \label{eq:blk_lip1} \\
&\| \nabla_j f_i(\mathbf{x}) - \nabla_j f_i(\mathbf{y}) \|_2^2 \leq L \langle \nabla_j f_i(\mathbf{x}) - \nabla_j f_i(\mathbf{y}), \mathbf{x}_j - \mathbf{y}_j\rangle~.\label{eq:blk_lip2}
\end{align}
Let $\mathbf{x}^*$ be an optimal solution. Define an upper bound of $f(\mathbf{x}) + g(\mathbf{x}) -(f(\mathbf{x}^*)+g(\mathbf{x}^*))$ as
\begin{align}\label{eq:def_h}
h(\mathbf{x},\mathbf{x}^*)= \langle \nabla f(\mathbf{x}), \mathbf{x} - \mathbf{x}^*\rangle + g(\mathbf{x}) - g(\mathbf{x}^*)~.
\end{align}
If $f(\mathbf{x}) + g(\mathbf{x})$ is strongly convex, we have
\begin{align}\label{eq:orbcd_strong_h}
h(\mathbf{x},\mathbf{x}^*) \geq f(\mathbf{x}) - f(\mathbf{x}^*) + g(\mathbf{x}) - g(\mathbf{x}^*) \geq \frac{\gamma}{2} \| \mathbf{x} - \mathbf{x}^* \|_2^2 ~.
\end{align}
\begin{lem}\label{lem:orbcdvd_lem1}
Let $\mathbf{x}^*$ be an optimal solution and let Assumption~\ref{asm:orbcd1} hold. Then
\begin{align}
\frac{1}{I} \sum_{i=1}^{I} \| \nabla f_i(\mathbf{x}) - \nabla f_i(\mathbf{x}^*) \|_2^2 \leq L h(\mathbf{x},\mathbf{x}^*)~,
\end{align}
where $h$ is defined in~\myref{eq:def_h}.
\end{lem}
\proof Since Assumption~\ref{asm:orbcd1} holds, we have
\begin{align}\label{eq:orbcd_bd_h}
&\frac{1}{I} \sum_{i=1}^{I} \| \nabla f_i(\mathbf{x}) - \nabla f_i(\mathbf{x}^*) \|_2^2 = \frac{1}{I} \sum_{i=1}^{I} \sum_{j=1}^J \| \nabla_j f_i(\mathbf{x}) - \nabla_j f_i(\mathbf{x}^*) \|_2^2 \nonumber \\
&\leq \frac{1}{I} \sum_{i=1}^{I} \sum_{j=1}^J L \langle \nabla_j f_i(\mathbf{x}) - \nabla_j f_i(\mathbf{x}^*), \mathbf{x}_j - \mathbf{x}_j^*\rangle \nonumber \\
& = L [ \langle \nabla f(\mathbf{x}), \mathbf{x} - \mathbf{x}^*\rangle + \langle \nabla f(\mathbf{x}^*), \mathbf{x}^* - \mathbf{x} \rangle]~,
\end{align}
where the inequality uses~\myref{eq:blk_lip2}. For an optimal solution $\mathbf{x}^*$, $g'(\mathbf{x}^*) + \nabla f(\mathbf{x}^*) = 0$, where $g'(\mathbf{x}^*)$ is the subgradient of $g$ at $\mathbf{x}^*$. The second term in~\myref{eq:orbcd_bd_h} can therefore be bounded as
\begin{align}
& \langle \nabla f(\mathbf{x}^*), \mathbf{x}^* - \mathbf{x} \rangle = - \langle g'(\mathbf{x}^*), \mathbf{x}^* - \mathbf{x} \rangle = \langle g'(\mathbf{x}^*), \mathbf{x} - \mathbf{x}^* \rangle \leq g(\mathbf{x}) - g(\mathbf{x}^*) ~,
\end{align}
where the last inequality follows from the convexity of $g$. Plugging into~\myref{eq:orbcd_bd_h} and using~\myref{eq:def_h} complete the proof.
\qed
\begin{lem}\label{lem:orbcdvd_lem2}
Let $\mathbf{v}_{j_k}^{i_k}$ and $\mathbf{x}_{j_k}^{k+1}$ be generated by~\myref{eq:orbcdvd_vij}-\myref{eq:orbcdvd_xj}. Conditioned on $\mathbf{x}^k$, we have
\begin{align}
\mathbb{E} \| \mathbf{v}_{j_k}^{i_k} - \nabla_{j_k} f(\mathbf{x}^k) \|_2^2 \leq \frac{2L}{J} [h(\mathbf{x}^{k},\mathbf{x}^*) + h(\tilde{\mathbf{x}},\mathbf{x}^*)]~.
\end{align}
\end{lem}
\proof
Conditioned on $\mathbf{x}^k$, we have
\begin{align}
\mathbb{E}_{i_k}[ \nabla f_{i_k}(\mathbf{x}^k) - \nabla f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu} ] = \frac{1}{I} \sum_{i=1}^{I} [ \nabla f_{i}(\mathbf{x}^k) - \nabla f_{i}(\tilde{\mathbf{x}}) + \tilde{\mu} ] = \nabla f(\mathbf{x}^k)~.
\end{align}
Note that $\mathbf{x}^k$ is independent of $i_k, j_k$, and that $i_k$ and $j_k$ are independent. Conditioned on $\mathbf{x}^k$, taking expectation over $i_k, j_k$ and using~\myref{eq:orbcdvd_vij} gives
\begin{align}
&\mathbb{E}\| \mathbf{v}_{j_k}^{i_k} - \nabla_{j_k} f(\mathbf{x}^k) \|_2^2 = \mathbb{E}_{i_k} [ \mathbb{E}_{j_k}\| \mathbf{v}_{j_k}^{i_k} - \nabla_{j_k} f(\mathbf{x}^k) \|_2^2] \nonumber \\
&= \mathbb{E}_{i_k}[ \mathbb{E}_{j_k}\| \nabla_{j_k} f_{i_k}(\mathbf{x}^k) - \nabla_{j_k} f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu}_{j_k} - \nabla_{j_k} f(\mathbf{x}^k) \|_2^2] \nonumber \\
&= \frac{1}{J}\mathbb{E}_{i_k}\| \nabla f_{i_k}(\mathbf{x}^k) - \nabla f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu} - \nabla f(\mathbf{x}^k) \|_2^2 \nonumber \\
& \leq \frac{1}{J} \mathbb{E}_{i_k}\| \nabla f_{i_k}(\mathbf{x}^k) - \nabla f_{i_k}(\tilde{\mathbf{x}}) \|_2^2 \nonumber \\
& \leq \frac{2}{J} \mathbb{E}_{i_k}\| \nabla f_{i_k}(\mathbf{x}^k) - \nabla f_{i_k}(\mathbf{x}^*) \|_2^2 + \frac{2}{J} \mathbb{E}_{i_k}\| \nabla f_{i_k}(\tilde{\mathbf{x}}) - \nabla f_{i_k}(\mathbf{x}^*) \|_2^2 \nonumber \\
& = \frac{2}{IJ} \sum_{i=1}^{I}\| \nabla f_i(\mathbf{x}^k) - \nabla f_i(\mathbf{x}^*) \|_2^2 + \frac{2}{IJ}\sum_{i=1}^{I} \| \nabla f_i(\tilde{\mathbf{x}}) - \nabla f_i(\mathbf{x}^*) \|_2^2 \nonumber \\
& \leq \frac{2L}{J} [ h(\mathbf{x}^{k}, \mathbf{x}^*) + h(\tilde{\mathbf{x}}, \mathbf{x}^*)]~.
\end{align}
The first inequality uses the fact that $\mathbb{E} \| \zeta - \mathbb{E}\zeta \|_2^2 \leq \mathbb{E} \| \zeta \|_2^2$ for a random variable $\zeta$, the second inequality uses $\| \mathbf{a} + \mathbf{b} \|_2^2 \leq 2 \| \mathbf{a} \|_2^2 + 2\|\mathbf{b}\|_2^2$, and the last inequality uses Lemma~\ref{lem:orbcdvd_lem1}.
\qed
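The first display in the proof above is also easy to check numerically. The following short sketch (ours, purely illustrative; the quadratic $f_i$'s and all identifiers are hypothetical) verifies that averaging $\nabla f_{i_k}(\mathbf{x}^k) - \nabla f_{i_k}(\tilde{\mathbf{x}}) + \tilde{\mu}$ over $i_k$ recovers $\nabla f(\mathbf{x}^k)$, i.e., that the variance-reduced gradient is unbiased.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
I, d = 5, 4
A = rng.normal(size=(I, d, d))
b = rng.normal(size=(I, d))

def grad_fi(i, x):
    # gradient of the illustrative quadratic f_i(x) = 0.5*||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

def grad_f(x):
    return np.mean([grad_fi(i, x) for i in range(I)], axis=0)

x_k, x_tilde = rng.normal(size=d), rng.normal(size=d)
mu_tilde = grad_f(x_tilde)            # full gradient at the snapshot
# average the variance-reduced estimates over all choices of i_k
v_mean = np.mean([grad_fi(i, x_k) - grad_fi(i, x_tilde) + mu_tilde
                  for i in range(I)], axis=0)
assert np.allclose(v_mean, grad_f(x_k))   # unbiasedness, as in the proof
\end{verbatim}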
\begin{lem}\label{lem:orbcdvd_lem3}
Under Assumption~\ref{asm:orbcd1}, $f(\mathbf{x}) = \frac{1}{I} \sum_{i=1}^{I}f_i(\mathbf{x})$ has block-wise Lipschitz continuous gradient with constant $L$, i.e.,
\begin{align}
\| \nabla_j f(\mathbf{x} + U_j h_j ) - \nabla_j f(\mathbf{x}) \|_2 \leq L \| h_j \|_2~.
\end{align}
\end{lem}
\proof
Using the fact that $f(\mathbf{x}) = \frac{1}{I} \sum_{i=1}^{I}f_i(\mathbf{x})$, we have
\begin{align}
&\| \nabla_j f(\mathbf{x} + U_j h_j ) - \nabla_j f(\mathbf{x}) \|_2 = \| \frac{1}{I} \sum_{i=1}^{I} [\nabla_j f_i(\mathbf{x} + U_j h_j ) - \nabla_j f_i(\mathbf{x}) ] \|_2 \nonumber \\
& \leq \frac{1}{I} \sum_{i=1}^{I} \| \nabla_j f_i(\mathbf{x} + U_j h_j ) - \nabla_j f_i(\mathbf{x}) \|_2 \nonumber \\
& \leq L \| h_j \|_2~,
\end{align}
where the first inequality uses Jensen's inequality and the second inequality uses Assumption~\ref{asm:orbcd1}.
\qed
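Before stating the theorem, we record the epoch structure of ORBCDVD that the analysis below refers to. The following is an illustrative outline only, in the same spirit as the earlier sketches: the variance-reduced block gradient is the one bounded in Lemma~\ref{lem:orbcdvd_lem2}, the block update is a proximal step with parameter $\eta$ whose optimality condition opens the proof, \texttt{prox\_g\_block} is a hypothetical proximal oracle for the blocks of $g$, and, while the analysis selects $\mathbf{x}^{t+1}$ among the inner iterates, the sketch simply keeps the last one.
\begin{verbatim}
import numpy as np

def orbcdvd(grad_fi, prox_g_block, n_samples, blocks, x0, eta, m, n_epochs,
            seed=0):
    # grad_fi(i, x): gradient of f_i at x
    # prox_g_block(j, z, eta): proximal map of g_j/eta evaluated at z
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(n_epochs):
        x_tilde = x.copy()                       # snapshot
        mu_tilde = np.mean([grad_fi(i, x_tilde)
                            for i in range(n_samples)], axis=0)
        for k in range(m):
            i = rng.integers(n_samples)
            j = blocks[rng.integers(len(blocks))]
            # variance-reduced block gradient
            v_j = grad_fi(i, x)[j] - grad_fi(i, x_tilde)[j] + mu_tilde[j]
            # block proximal step with parameter eta; only block j changes
            x[j] = prox_g_block(j, x[j] - v_j / eta, eta)
        # the analysis selects x^{t+1} among the inner iterates;
        # this sketch keeps the last one for simplicity
    return x
\end{verbatim}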
Now, we are ready to establish the linear convergence rate of ORBCD with variance reduction for strongly convex functions.
\begin{thm}\label{thm:orbcdvd}
Let $\mathbf{x}^t$ be generated by ORBCD with variance reduction~\myref{eq:orbcdvd_mu}-\myref{eq:orbcdvd_xj}, where $j_k$ is sampled randomly and uniformly from $\{1,\cdots, J \}$. Assume $\eta > 2L$ and that $m$ satisfies the following condition:
\begin{align}
\rho = \frac{L(m+1)}{(\eta-2L)m} + \frac{(\eta-L)J}{(\eta-2L)m} - \frac{1}{m}+ \frac{\eta (\eta-L)J}{(\eta-2L)m\gamma} < 1~.
\end{align}
Then ORBCDVD converges linearly in expectation, i.e.,
\begin{align}
\mathbb{E}_{\xi} [ f(\mathbf{x}^t) + g(\mathbf{x}^t) - (f(\mathbf{x}^*)+g(\mathbf{x}^*)) ] \leq \rho^t [ \mathbb{E}_{\xi} h(\mathbf{x}^1, \mathbf{x}^*)]~,
\end{align}
where $h$ is defined in~\myref{eq:def_h}.
\end{thm}
\proof
The optimality condition of~\myref{eq:orbcdvd_xj} is
\begin{align}
\langle \mathbf{v}_{j_k}^{i_k} + \eta (\mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k}^k) + g'_{j_k}(\mathbf{x}_{j_k}^{k+1}), \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k} \rangle \leq 0~.
\end{align}
Rearranging the terms yields
\begin{align}
& \langle \mathbf{v}_{j_k}^{i_k} + g'_{j_k}(\mathbf{x}_{j_k}^{k+1}) , \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k} \rangle \leq - \eta \langle \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k}^k , \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k} \rangle \nonumber \\
& \leq \frac{\eta}{2} ( \| \mathbf{x}_{j_k} - \mathbf{x}_{j_k}^k \|_2^2 - \| \mathbf{x}_{j_k} - \mathbf{x}_{j_k}^{k+1} \|_2^2 - \| \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k}^k \|_2^2 ) \nonumber \\
& = \frac{\eta}{2} ( \| \mathbf{x} - \mathbf{x}^k \|_2^2 - \| \mathbf{x} - \mathbf{x}^{k+1} \|_2^2 - \| \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k}^k \|_2^2 ) ~,
\end{align}
where the last equality uses $\mathbf{x}^{k+1} = (\mathbf{x}_{j_k}^{k+1}, \mathbf{x}_{j\neq j_k}^k)$.
Using the convexity of $g_j$ and the fact that $g(\mathbf{x}^k) - g(\mathbf{x}^{k+1}) = g_{j_k}(\mathbf{x}^k) - g_{j_k}(\mathbf{x}^{k+1})$, we have
\begin{align}
& \langle \mathbf{v}_{j_k}^{i_k} , \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k} \rangle + g_{j_k}(\mathbf{x}^k) - g_{j_k}(\mathbf{x}) \leq \langle \mathbf{v}_{j_k}^{i_k} , \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^{k+1} \rangle + g(\mathbf{x}^k) - g(\mathbf{x}^{k+1}) \nonumber \\
& + \frac{\eta}{2} ( \| \mathbf{x} - \mathbf{x}^k \|_2^2 - \| \mathbf{x} - \mathbf{x}^{k+1} \|_2^2 - \| \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k}^k \|_2^2 ) ~.
\end{align}
According to Lemma~\ref{lem:orbcdvd_lem3} and using~\myref{eq:blk_lip1}, we have
\begin{align}
\langle \nabla_{j_k} f(\mathbf{x}^k), \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^{k+1} \rangle \leq f(\mathbf{x}^k) - f(\mathbf{x}^{k+1}) + \frac{L}{2} \| \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^{k+1} \|_2^2~.
\end{align}
Letting $\mathbf{x} = \mathbf{x}^*$ and using the smoothness of $f$, we have
\begin{align}
& \langle \mathbf{v}_{j_k}^{i_k} , \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^* \rangle + g_{j_k}(\mathbf{x}^k) - g_{j_k}(\mathbf{x}^*) \leq \langle \mathbf{v}_{j_k}^{i_k} - \nabla_{j_k} f(\mathbf{x}^k), \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^{k+1} \rangle + f(\mathbf{x}^k) + g(\mathbf{x}^k) - [f(\mathbf{x}^{k+1})+g(\mathbf{x}^{k+1})] \nonumber \\
& + \frac{\eta}{2} ( \| \mathbf{x}^* - \mathbf{x}^k \|_2^2 - \| \mathbf{x}^* - \mathbf{x}^{k+1} \|_2^2 - \| \mathbf{x}_{j_k}^{k+1} - \mathbf{x}_{j_k}^k \|_2^2 ) + \frac{L}{2} \| \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^{k+1} \|_2^2\nonumber \\
& \leq \frac{1}{2(\eta-L)} \| \mathbf{v}_{j_k}^{i_k} - \nabla_{j_k} f(\mathbf{x}^k) \|_2^2 + f(\mathbf{x}^k) + g(\mathbf{x}^k) - [f(\mathbf{x}^{k+1})+g(\mathbf{x}^{k+1})] + \frac{\eta}{2} ( \| \mathbf{x}^* - \mathbf{x}^k \|_2^2 - \| \mathbf{x}^* - \mathbf{x}^{k+1} \|_2^2 ) ~.
\end{align}
Taking expectation over $i_k, j_k$ on both sides and using Lemma~\ref{lem:orbcdvd_lem2}, we have
\begin{align}\label{eq:orbcdvd_expbd}
& \mathbb{E} [ \langle \mathbf{v}_{j_k}^{i_k} , \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^* \rangle + g_{j_k}(\mathbf{x}^k) - g_{j_k}(\mathbf{x}^*)] \nonumber \\
&\leq \frac{L}{J(\eta-L)} [h(\mathbf{x}^k,\mathbf{x}^*) + h(\tilde{\mathbf{x}},\mathbf{x}^*)] + f(\mathbf{x}^k) + g(\mathbf{x}^k) - \mathbb{E}[f(\mathbf{x}^{k+1})+g(\mathbf{x}^{k+1})] \nonumber \\
& + \frac{\eta}{2} ( \| \mathbf{x}^* - \mathbf{x}^k \|_2^2 - \mathbb{E}\| \mathbf{x}^* - \mathbf{x}^{k+1} \|_2^2 )~.
\end{align}
The left hand side can be rewritten as
\begin{align}
& \mathbb{E} [\langle \mathbf{v}_{j_k}^{i_k} , \mathbf{x}_{j_k}^{k} - \mathbf{x}_{j_k}^* \rangle + g_{j_k}(\mathbf{x}^k) - g_{j_k}(\mathbf{x}^*)] = \frac{1}{J} [ \mathbb{E}_{i_k}\langle \mathbf{v}^{i_k} , \mathbf{x}^k - \mathbf{x}^* \rangle + g(\mathbf{x}^k) -g(\mathbf{x}^*) ] \nonumber \\
& = \frac{1}{J} [ \langle \nabla f(\mathbf{x}^k) , \mathbf{x}^k - \mathbf{x}^* \rangle + g(\mathbf{x}^k) -g(\mathbf{x}^*) ] = \frac{1}{J} h(\mathbf{x}^k,\mathbf{x}^*)~.
\end{align}
Plugging into~\myref{eq:orbcdvd_expbd} gives
\begin{align}
\frac{1}{J} h(\mathbf{x}^k,\mathbf{x}^*) & \leq \frac{L}{J(\eta-L)} [h(\mathbf{x}^k,\mathbf{x}^*) + h(\tilde{\mathbf{x}},\mathbf{x}^*)] + f(\mathbf{x}^k) + g(\mathbf{x}^k) - \mathbb{E}[f(\mathbf{x}^{k+1})+g(\mathbf{x}^{k+1})] \nonumber \\
&+ \frac{\eta}{2} ( \| \mathbf{x}^* - \mathbf{x}^k \|_2^2 - \mathbb{E}\| \mathbf{x}^* - \mathbf{x}^{k+1} \|_2^2 )~.
\end{align}
Rearranging the terms yields
\begin{align}
\frac{\eta - 2L}{J(\eta-L)} h(\mathbf{x}^k,\mathbf{x}^*) &\leq \frac{L}{J(\eta-L)} h(\tilde{\mathbf{x}},\mathbf{x}^*) + f(\mathbf{x}^k) + g(\mathbf{x}^k) - \mathbb{E}[f(\mathbf{x}^{k+1})+g(\mathbf{x}^{k+1})] \nonumber \\
& + \frac{\eta}{2} ( \| \mathbf{x}^* - \mathbf{x}^k \|_2^2 - \mathbb{E}\| \mathbf{x}^* - \mathbf{x}^{k+1} \|_2^2 )~.
\end{align}
At time $t+1$, we have $\mathbf{x}_0 = \tilde{\mathbf{x}} = \mathbf{x}^t$. Summing over $k = 0,\cdots, m$ and taking expectation with respect to the history of random variables $\xi$, we have
\begin{align}
\frac{\eta - 2L}{J(\eta-L)} \sum_{k=0}^{m} \mathbb{E}_{\xi}h(\mathbf{x}_k,\mathbf{x}^*) &\leq \frac{L(m+1)}{J(\eta-L)} \mathbb{E}_{\xi}h(\tilde{\mathbf{x}},\mathbf{x}^*) + \mathbb{E}_{\xi}[ f(\mathbf{x}_0) + g(\mathbf{x}_0) ] - \mathbb{E}_{\xi} [ f(\mathbf{x}_{m+1}) + g(\mathbf{x}_{m+1})] \nonumber \\
&+ \frac{\eta}{2} ( \mathbb{E}_{\xi}\| \mathbf{x}^* - \mathbf{x}_0 \|_2^2 - \mathbb{E}_{\xi}\| \mathbf{x}^* - \mathbf{x}_{m+1} \|_2^2 ) \nonumber \\
&\leq \frac{L(m+1)}{J(\eta-L)} \mathbb{E}_{\xi}h(\tilde{\mathbf{x}},\mathbf{x}^*) + \mathbb{E}_{\xi}h(\mathbf{x}_0,\mathbf{x}^*) + \frac{\eta}{2} \mathbb{E}_{\xi}\| \mathbf{x}^* - \mathbf{x}_0 \|_2^2 \nonumber ~,
\end{align}
where the last inequality uses
\begin{align}
f(\mathbf{x}_0) + g(\mathbf{x}_0) - [ f(\mathbf{x}_{m+1}) + g(\mathbf{x}_{m+1})] & \leq f(\mathbf{x}_0) + g(\mathbf{x}_0) - [ f(\mathbf{x}^*) + g(\mathbf{x}^*)] \nonumber \\
& \leq \langle\nabla f(\mathbf{x}_0), \mathbf{x}_0 - \mathbf{x}^* \rangle + g(\mathbf{x}_0) - g(\mathbf{x}^*) \nonumber \\
& = h(\mathbf{x}_0,\mathbf{x}^*)~.
\end{align}
Rearranging the terms gives
\begin{align}
\frac{\eta -2L}{J(\eta-L)} \sum_{k=1}^{m} \mathbb{E}_{\xi}h(\mathbf{x}_k,\mathbf{x}^*) \leq \frac{L(m+1)}{J(\eta-L)} \mathbb{E}_{\xi}h(\tilde{\mathbf{x}},\mathbf{x}^*) + (1- \frac{\eta -2 L}{J(\eta-L)} ) \mathbb{E}_{\xi}h(\mathbf{x}_0,\mathbf{x}^*) + \frac{\eta}{2} \mathbb{E}_{\xi}\| \mathbf{x}^* - \mathbf{x}_0 \|_2^2~.
\end{align}
Picking $\mathbf{x}^{t+1}$ so that $h(\mathbf{x}^{t+1},\mathbf{x}^*) \leq h(\mathbf{x}_k,\mathbf{x}^*)$ for $1\leq k \leq m$, we have
\begin{align}\label{eq:orbcdvd_lineareq0}
\frac{\eta - 2L}{J(\eta-L)} m \mathbb{E}_{\xi} h(\mathbf{x}^{t+1},\mathbf{x}^*) \leq [ \frac{L(m+1)}{J(\eta-L)} + 1- \frac{\eta -2 L}{J(\eta-L)} ] \mathbb{E}_{\xi}h(\mathbf{x}^t,\mathbf{x}^*) + \frac{\eta}{2} \mathbb{E}_{\xi}\| \mathbf{x}^* - \mathbf{x}^t \|_2^2~,
\end{align}
where the right hand side uses $\mathbf{x}^t = \mathbf{x}_0 = \tilde{\mathbf{x}}$. Using~\myref{eq:orbcd_strong_h}, we have
\begin{align}
\frac{\eta - 2L}{J(\eta-L)} m \mathbb{E}_{\xi} h(\mathbf{x}^{t+1} ,\mathbf{x}^*) \leq [ \frac{L(m+1)}{J(\eta-L)} + 1- \frac{\eta -2 L}{J(\eta-L)} +\frac{\eta}{\gamma} ] \mathbb{E}_{\xi}h(\mathbf{x}^t,\mathbf{x}^*)~.
\end{align}
Dividing both sides by $\frac{\eta - 2L}{J(\eta-L)} m$, we have
\begin{align}
\mathbb{E}_{\xi} h(\mathbf{x}^{t+1},\mathbf{x}^*) \leq \rho \mathbb{E}_{\xi}h(\mathbf{x}^t,\mathbf{x}^*)~,
\end{align}
where
\begin{align}
\rho = \frac{L(m+1)}{(\eta-2L)m} + \frac{(\eta-L)J}{(\eta-2L)m} - \frac{1}{m}+ \frac{\eta (\eta-L)J}{(\eta-2L)m\gamma} < 1~,
\end{align}
which completes the proof.
\qed
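As a sanity check on the condition $\rho < 1$ (with hypothetical values chosen only for illustration, not tied to any experiment), take $L=1$, $\gamma = 1$, $J = 10$ and $\eta = 4L = 4$. Then $\rho = \frac{1}{2} + \frac{74.5}{m}$, so any inner-loop length $m \geq 150$ gives $\rho < 1$; for instance, $m = 200$ yields $\rho \approx 0.87$.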
\section{Conclusions}\label{sec:conclusion}
We proposed online randomized block coordinate descent (ORBCD), which combines online/stochastic gradient descent and randomized block coordinate descent. ORBCD is well suited for large-scale, high-dimensional problems with non-overlapping composite regularizers. We established the rate of convergence for ORBCD, which has the same order as OGD/SGD. For stochastic optimization with strongly convex functions, ORBCD can converge at a geometric rate in expectation by reducing the variance of the stochastic gradient.
\section*{Acknowledgment}
H.W. and A.B. acknowledge the support of NSF via IIS-0953274, IIS-1029711, IIS- 0916750, IIS-0812183, NASA grant NNX12AQ39A, and the technical support from the University of Minnesota Supercomputing Institute. A.B. acknowledges support from IBM and Yahoo. H.W. acknowledges the support of DDF (2013-2014) from the University of Minnesota. H.W. also thanks Renqiang Min and Mehrdad Mahdavi for mentioning the papers about variance reduction when the author was in the NEC Research Lab, America.
\bibliographystyle{plain}
\bibliography{long,bcd,admm,onlinelearn,sparse,map}
\end{document}
\begin{document}
\pagestyle{plain}
\title{
A spinning construction for virtual 1-knots and 2-knots,
and the fiberwise and welded equivalence of virtual 1-knots
}
\author{Louis H. Kauffman, Eiji Ogasa, and Jonathan Schneider}
\date{}
\begin{abstract}
Spun-knots (respectively, spinning tori) in $S^4$
made from classical 1-knots compose an important class of
spherical 2-knots (respectively, embedded tori) contained in $S^4$.
Virtual 1-knots are generalizations of classical 1-knots.
We generalize these constructions to the virtual 1-knot case
by using what we call, in this paper, the spinning construction of a submanifold.
The construction proceeds as follows:
It has been known that
there is a consistent way to make an embedded circle $C$ contained in \\
(a closed oriented surface $F$)$\times$(a closed interval $[0,1]$) from any virtual 1-knot $K$.
Embed $F$ in $S^4$ by an embedding map $f$.
Let $F$ also denote $f(F).$
We can regard the tubular neighborhood of $F$ in $S^4$
as $F\times D^2$.
Let $[0,1]$ be a radius of $D^2$.
We can regard $F\times D^2$
as the result of rotating
$F\times [0,1]$ around $F\times \{0\}$.
Assume $C\cap(F\times\{0\})=\phi$.
Rotate $C$ together
when we rotate $F\times [0,1]$ around $F\times \{0\}$.
Thus we obtain an embedded torus $Q\subset S^4$.
We prove the following:
The embedding type of $Q$ in $S^4$ depends only on $K$, and does not depend on $f$.
Furthermore,
the submanifolds, $Q$ and the embedded torus made from $K,$ defined by Satoh's method,
of $S^4$ are isotopic.
We generalize this construction in the virtual 1-knot case, and
we also succeed in making a consistent
construction of one-dimensional-higher tubes from
any virtual 2-dimensional knot.
Note that Satoh's method says nothing about the virtual 2-knot case.
Rourke's interpretation of Satoh's method is that
one puts `fiber-circles' on each point of each virtual 1-knot diagram.
If there is no virtual branch point in a virtual 2-knot diagram,
our way gives such fiber-circles to each point of the virtual 2-knot diagram.
Furthermore we prove the following:
If a virtual 2-knot diagram $\alpha$ has a virtual branch point,
$\alpha$ cannot be covered by such fiber-circles.
We obtain a new equivalence relation,
the $\mathcal E$-equivalence relation
of the set of virtual 2-knot diagrams,
that is much connected with
the welded equivalence relation and our spinning construction.
We prove that
there are virtual 2-knot diagrams, $J$ and $K$,
that are virtually nonequivalent
but are $\mathcal E$-equivalent.
Although Rourke claimed that
two virtual 1-knot diagrams $\alpha$ and $\beta$ are
fiberwise equivalent if and only if
$\alpha$ and $\beta$ are welded equivalent,
we state that this claim is wrong.
We prove that
two virtual 1-knot diagrams $\alpha$ and $\beta$ are
fiberwise equivalent if and only if
$\alpha$ and $\beta$ are rotational welded equivalent (the definition of rotational welded equivalence is given in the body of the paper).
\end{abstract}
\maketitle
\newpage
\tableofcontents
\section{Introduction}\label{jobun}
\subsection{
Spinning tori
}\label{i1}\hskip20mm\\%
Spun-knots (respectively, spinning tori) in $S^4$
made from classical 1-knots compose an important class of
spherical 2-knots (respectively, embedded tori) contained in $S^4$.
See \cite{Zeeman} for the definition of spun-knots.
We review the construction of them below.\\
Let $\mathbb{R}^4=\{(x,y,z,w)|x,y,z,w\in\mathbb{R}\}$.
Regard
$\mathbb{R}^4$ as the result of rotating
$H=\{(x,y,z,w)|x\geqq0,w=0\}$
around
$A=\{(x,y,z,w)|x=0,w=0\}$
as the axis.
Take a 1-knot $K$ in $H$
so that $K\cap A$ is an arc (respectively, the empty set).
Rotate $K-{\rm Int}(K\cap A)$
around $A$ together
when we rotate $H$ around $A$.
The resultant submanifold of $\mathbb{R}^4$
is the spun knot (respectively, the spinning torus) of $K$.
We can easily also regard them as submanifolds of $S^4$.
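In coordinates, the rotation of $H$ around $A$ by the angle $\theta$ is the map $(x,y,z,0)\mapsto(x\cos\theta, y, z, x\sin\theta)$; thus, for example, when $K\cap A=\phi$ the spinning torus of $K$ is the subset $\{(x\cos\theta, y, z, x\sin\theta)\ |\ (x,y,z,0)\in K,\ \theta\in[0,2\pi]\}$ of $\mathbb{R}^4$. This explicit parametrization is standard and is recorded here only for the reader's convenience.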
We can define a spun link if $K$ is a link although we discuss the knot case mainly in this paper.
Our discussion can be easily generalized to the link case.
One of our themes in this paper is to generalize the spun knots of classical knots to the virtual knot case.
We begin by explaining why virtual knots are important.
\bigbreak
\subsection{
History of relations between virtual knots and QFT,
and a reason why virtual knots are important
}\label{i2}\hskip20mm\\%
Virtual 1-links are defined in \cite{Kauffman1,Kauffman, Kauffmani} as generalizations
of classical 1-links. One motivation for virtual 1-links is as follows.
Jones \cite{Jones} defined the Jones polynomial for classical 1-links in $S^3$.
The following had been well-known before the Jones polynomial was found:
The Alexander polynomial for classical 1-links in $S^3$
is defined in terms of the topology of the complement of the link and can be generalized to
give invariants of closed oriented 3-manifolds and of links within the 3-manifold.\\
Jones \cite[page 360, \S10]{Jones} tried to define a 3-manifold invariant associated with the Jones polynomial,
and succeeded in some cases.
Of course, when the Jones polynomial was found, the following question was regarded as a very natural one:
\smallbreak
{\bf Question J.} Can we generalize the definition of the Jones polynomial for classical 1-links in $S^3$ to that in any 3-manifold?
\smallbreak
Note the result may not be a polynomial but a function of $t$.
\\
Witten \cite{W} wrote a quantum field theoretic path integral for any 1-link $L$ in any compact oriented 3-manifold $M$.
His path integral included the Jones polynomial for 1-links in $S^3$,
its generalizations and new (at the time) invariants of 3-manifolds.
This was a breakthrough for the philosophy of physics in that
one of the most natural geometrically intrinsic interpretations of a mathematical object
was done by using a path integral, and had not been done by any other way.
\smallbreak
\noindent
{\bf Note.}
Here, `geometrically intrinsic interpretation' means the point of view that would define a link invariant in terms of the embedding of the link in the ambient 3-dimensional manifolds
just as one can do naturally and easily in the case of the Alexander polynomial of 1-knots.
Jones \cite{Jones} defined the Jones polynomial by using representations of braid groups to an operator algebra (the Temperley-Lieb algebra). Representations, braid groups, operator algebras are mathematically explicit objects so some people may feel that that is enough to consider the meaning of the Jones polynomial.
\bigbreak
If $M=S^3$, we can say at the physics level that the Witten path integral represents the Jones polynomial
for 1-links in $S^3$.
Reshetikhin, Turaev, Lickorish and others \cite{KM, Lickorish, Lickorishl, RT} etc.
generalized the result in \cite[page 360, \S10]{Jones} and
created rigorous definitions for invariants of 3-manifolds that parallel Witten's ideas, without using the functional integral.
They succeeded to define new invariants of closed oriented 3-manifolds and
invariants of links embedded in 3-manifolds
that we today call quantum invariants.
(Note, here, we distinguish the above invariants of links embedded in 3-manifolds
with the Jones polynomial for them as below.)
In both Witten's version and the Reshetikhin-Turaev versions the invariants of 3-manifolds are obtained by representing
the 3-manifold as surgery on a framed link and summing over invariants corresponding to appropriate representations decorating the surgery link. The same technique
applies when one includes an extra link component that is not part of the surgery data.
In this way, one obtains quantum invariants of links in 3-manifolds.
Another technique, formalized by Crane \cite{Crane} and by Kohno \cite{Kohno}, uses a Heegaard decomposition of the 3-manifold and algebraic structure of the conformal field theory
for the surface of the Heegaard decomposition. These methods produce invariants for 3-manifolds and, in principle, invariants for links in 3-manifolds, but are much more indirect than the original physical idea of Witten that would integrate directly over the many possible evaluations of the Wilson loop for the knot or link in the 3-sphere, or the original combinatorial
skein techniques that produce the invariant of a link from its diagrammatic combinatorics.
See \cite{Kauffmanp}. \\
The Witten path integral is written also in the case where $L\neq\phi$ and $M\neq S^3$.
It corresponded to
Question J , which had been considered before the Witten path integral appeared.
\bigbreak
In \cite{KauffmanJ} Kauffman found a definition of the Jones polynomial as a state summation over combinatorial states of the link diagram and found a diagrammatic interpretation of the Temperley-Lieb algebra that put the original definition of Jones in a wider context of generalized partition functions and statistical mechanics on graphs and knot and link diagrams.
In \cite{Kauffman1,Kauffman, Kauffmani} Kauffman generalized the Jones polynomial
in the case where $M$ is (a closed oriented surface)$\times[-1,1]$.
In fact,
\cite{Kauffman1,Kauffman, Kauffmani} defined virtual 1-links
as
another way of describing
1-links in (a closed oriented surface)$\times[-1,1]$:
the set of virtual 1-links
is the same as
that of 1-links in (a closed oriented surface)$\times[-1,1],$ taken up to handle stabilization.
See Theorem \ref{vk}. We make the point here that the virtual knot theory is a context for links in the fundamental 3-manifolds of the form $F \times I$ where $F$ is a closed
surface. The state summation approach to the Jones polynomial generalizes to invariants of links in such thickened surfaces. This provides a significant and direct arena for
examining such structures without the functional integral. It also provides challenges for corresponding approaches that use the functional integral methods. It remains a serious
challenge to produce ways to work with the functional integrals that avoid difficulties in analysis.
\\
Path integrals represent the superposition principle dramatically.
This is a marvelous idea of Feynman.
The Witten path integral also represents a geometric idea of the Jones polynomial and quantum invariants physics-philosophically very well.
Witten found a Lagrangian via the Chern-Simons 3-form and Wilson line with a tremendous insight, and he calculated the path integral of the Lagrangian rigorously at physics level,
and showed that the result of the calculation is the Jones polynomial for links in $S^3$, and the quantum invariants of any closed oriented 3-manifold with or without embedded circles. It is a wonderful work of Witten.
However recall the following facts:
The Witten path integral for any 1-link in any closed 3-manifold has not been calculated
in mathematical level nor in physics level in any way that can be regarded as direct. This means that Question J is open in the general case.
That is,
nobody has succeeded in generalizing the Jones polynomial in a direct way, and mathematically rigorously, to the case where $M$ is not $S^3$ (respectively, $B^3$, $\mathbb{R}^3$), nor
(a closed oriented surface $F$)$\times[-1,1]$. (Note the last manifold is not closed.
Note that the discussion in the $S^3$ case is the same as that in the $B^3$ (respectively, $\mathbb{R}^3$) case. Virtual knot theory can also discuss the case where $F$ is compact and non-closed, but then we need to fix the embedding type of $F$ in $F\times[-1,1]$.)\\
Recall the following fact: Even if we make a (seemingly) meaningful Lagrangian,
the path integral associated with the Lagrangian cannot always be calculated.
An example is
the Witten path integral associated with
the general case of Question J.
Another one is the following. Today they do not know how to calculate the path integral
if we replace Chern-Simons-3-form on 3-manifolds with
Chern-Simons-$(2p+1)$-form on $(2p+1)$-manifolds,
where $p$ is any integer$\geq2$,
in the Witten path integral.
Indeed, nowadays they calculate path integrals only when they can calculate them.
If the path integral of the Lagrangian is not calculated explicitly, neither mathematicians nor physicists
regard the theory of the Lagrangian as a meaningful one.
Furthermore, even if we calculate path integrals, the result of the calculation is sometimes what we do not expect.
See an example of \cite{LeeYang} explained in
\cite[the last part of section 5.1]{Ryder}.\\
The heuristics of the Witten path integral have not been fully mined. See \cite{KauffmanPath} for a survey of the results of some of these heuristics in relation to the Jones
polynomial and Vassiliev invariants. It is possible that good heuristics will emerge for understanding invariants of links in 3-manifolds. But at the present time it is worth examining the
cases we do understand for working with generalizations of the Jones polynomial for links in thickened surfaces.
We had begun considering Question J
before the Witten path integral
appeared in this discussion.
Question J is also natural and important
even if we do not consider path integrals.
\\
\noindent
{\bf Note.}
(1)
We can observe some historical correspondences.
Feynman discovered path integrals
by using an analogy with (quantum) statistical mechanics,
and he interpreted quantum theory by using path integrals.
Operator algebras, path integrals, (quantum) statistical mechanics are closely related.
The Jones polynomial is discovered by using operator algebras (\cite{Jones}), next
is interpreted via (quantum) statistical mechanics (\cite{KauffmanJ}), then by using path integrals (\cite{W}).
Operator algebras, path integrals, and (quantum) statistical mechanics are related again with topology in
the background.
\smallbreak\noindent
(2)
The Jones polynomial of 1-links in (a closed oriented surface)$\times$(the interval)
is discovered in \cite{Kauffman1,Kauffman, Kauffmani},
by using the analogy with state sums in (quantum) statistical mechanics in \cite{KauffmanJ}.
\smallbreak\noindent
(3) \cite{KauffmanSaleur} found a relation between the Alexander-Conway polynomial of
1-dimensional classical knots and quantum field theory. The relation gives a different aspect from the Homflypt polynomial and the Witten path integral.
\cite{Ogasapath} found a relation between the degree of the Alexander polynomial of high dimensional knots and the Witten index of a supersymmetric quantum system.
It is also an outstanding open question whether we can define an analog to the Jones polynomial for high dimensional knots.
\bigbreak
Virtual 1-links have many important properties other than the above one.
See \cite{Kauffman1,Kauffman, Kauffmani}.
Thus it is very natural to consider whether any property of classical 1-knots is possessed by virtual 1-knots, as below.
\bigbreak
\subsection{
Main results
}\label{i3}\hskip20mm\\%
We generalize the construction of spun-knots (respectively, spinning tori) of classical 1-knots
to the virtual 1-knot case as follows.
Recall that, in \cite{Kauffman1, Kauffman, Kauffmani} there is given a consistent way
to make an embedded circle $C$ contained in
(a closed oriented surface $F$)$\times$(a closed interval $[0,1]$) from any virtual 1-knot $K$ diagram
(see Theorem \ref{vk}).
Note the following.
When we construct spun knots (spinning tori),
we regard $\mathbb{R}^4$ itself as the total space of the normal bundle of $A$ in $\mathbb{R}^4$.
Recall that $A$ is defined in \S\ref{i1}.
Embed $F$ in $\mathbb{R}^4\subset S^4$ by an embedding map $f$.
Let $F$ stand for $f(F)$.
Note that the tubular neighborhood of $F$ in $S^4$ is diffeomorphic to $F\times D^2$.
Let $[0,1]$ be a radius of $D^2$.
We can regard $F\times D^2$
as the result of rotating
$F\times [0,1]$ around $F\times \{0\}$.
Assume $C\cap(F\times\{0\})=\phi$.
Rotate $C$ together
when we rotate $F\times [0,1]$ around $F\times \{0\}$.
Thus we obtain an embedded torus $Q\subset S^4$. \\
We prove the following (Theorems \ref{honto} and \ref{mainkore}):
The embedding type of $Q$ in $S^4$ depends only on $K$, and does not depend on $f$.
Furthermore the submanifolds,
$Q$ and the embedded torus made from $K$ defined by Satoh in \cite{Satoh},
of $S^4$ are isotopic.\\
This construction of $Q$ is an example of what we call the spinning construction of submanifolds
in Definition \ref{spinningsubmanifold}.
This paper does not discuss the case where $C\cap(F\times\{0\})\neq\phi$.
\\
There are classical 1-knots, virtual 1-knots, and classical 2-knots
so it is natural to consider virtual 2-knots. We define virtual 2-knots in Definition \ref{JV}.
It is very natural to consider whether
any property of `classical 1-, and 2-knots and virtual 1-knots'
is possessed by virtual 2-knots.
It is natural to ask whether we can define one-dimensional-higher tubes
for virtual 2-knots
(Question \ref{North Carolina})
since
we succeed in the virtual 1-knot case
as explained above.
Note that Satoh's method in \cite{Satoh} does not treat the virtual 2-knot case. \\
In the virtual 1-knot case,
in \cite{Rourke},
Rourke interpreted Satoh's method as follows:
Let $\alphalpha$ be any virtual 1-knot diagram.
Put `fiber-circles' on each point of $\alphalpha$ and obtain a one-dimensional-higher tube.
(We review this construction in Theorem \ref{Montana} and Definition \ref{Nebraska}).
If we try to generalize Rourke's way to the virtual 2-knot case,
we encounter the following situation.\\
Let $\alpha$ be any virtual 1-knot diagram. There are two cases:
\smallbreak\noindent(1)
The case where $\alpha$ has no virtual branch point.
(We define virtual branch point in Definitions \ref{oyster} and \ref{JV}.)
\smallbreak\noindent(2)
The case where $\alpha$ has a virtual branch point.
\smallbreak
In the case (1), we can make a tube by Rourke's method.
See \cite[section 3.7.1]{J}, Note \ref{kaiga}, and Definition \ref{suiri}.
In the case (2), however, Schneider \cite{J} found
it difficult to define a tube near any virtual branch point. \\
Thus we consider the following two problems. \\
Can we put fiber-circles over each point of any virtual 2-knot
in a consistent way as described above,
and make a one-dimensional-higher tube
(Question \ref{North Dakota})? \\
Is there a one-dimensional-higher tube construction which is defined for all virtual 2-knots, and which agrees with the method in the case (1) written above
when there are no virtual branch points
(Question \ref{South Dakota})? \\
In Theorem \ref{vv} we give an affirmative answer to Question \ref{South Dakota}.
Our solution is a generalization of our method in the virtual 1-knot case
used in \S\S\ref{E}-\ref{Proof}.
We also use
a spinning construction of submanifolds
explained in Definition \ref{spinningsubmanifold}. \\
In Theorem \ref{Rmuri}
we give a negative answer to Question \ref{North Dakota}. \\
\\
We obtain a new equivalence relation,
the $\mathcal E$-equivalence relation on
the set of virtual
1- and 2-knot diagrams (Definition \ref{zoo}).
It is defined by using the above spinning construction.
The $\mathcal E$-equivalence relation is closely connected with
the welded equivalence relation and our spinning construction.
Welded 1-links are defined in \cite{Rourke} in association with virtual 1-links.
Welded 1-links are closely related to tubes, as we discuss in this paper.
We introduce welded 2-knots in Definition \ref{JW}. \\
We prove that there are virtual 2-knot diagrams, $J$ and $K$,
that are virtually nonequivalent
but are $\mathcal E$-equivalent
(Theorem \ref{Maine}).
Welded 1- and 2-knots are recipients of the tube construction or of the above spinning construction.
The above spinning construction is related to the fiberwise equivalence explained below.
We will explain their connection in this paper; this is a theme of this research.\\
Although Rourke claimed in \cite[Theorem 4.1]{Rourke} that
two virtual 1-knot diagrams $\alpha$ and $\beta$ are fiberwise equivalent if and only if
$\alpha$ and $\beta$ are welded equivalent,
we point out that this claim is wrong.
(See
\cite{Rourke} and Definition \ref{Nevada}
of this paper for the definition of fiberwise equivalence,
and
\cite{Rourke, Satoh} for that of the welded equivalence.)
The reason for the failure of Rourke's claim is given in
Theorems \ref{smooth} and \ref{fwrw}, and Claim \ref{panda}.
We prove in Theorems \ref{smooth} and \ref{fwrw} that {\it virtual 1-knot diagrams, $\alpha$ and $\beta$,
are fiberwise equivalent if and only if they are rotational welded equivalent.} The reader can recall that in virtual 1-knot theory there are Reidemeister-type moves for virtual
crossings. Rotational equivalence for virtual knots is obtained by making the virtual curl (analog of the first Reidemeister move) forbidden. Rotational equivalence for welded knots
also forbids the virtual curl move in the context of the rules for welded knots.
(See \cite{Kauffman, Kauffmanrw, J} for rotational welded equivalence.)
Our result is proved by using the property of virtual 2-knots found in Theorem \ref{Rmuri}.
Virtual 2-knots themselves are important, and furthermore they are also important for research in virtual 1-knots.
Our main results are
Theorems \ref{honto}, \ref{mainkore},
\ref{vv}, \ref{Rmuri},
\ref{Maine},
\ref{smooth},
and \ref{Montgomery}. \\
\bigbreak
\section{$\mathcal K(K)$ for a virtual 1-knot $K$}\label{K}
\noindent
We work in the smooth category unless we indicate otherwise.
In a part of \S\ref{New Mexico}
we will use the PL category in order to prove our results in the smooth category.
See Note \ref{haruwa}.
We review some facts on virtual 1-knots in this section before we state two of our main results,
Theorems \ref{honto} and \ref{mainkore},
in the following section. \\
\begin{figure}
\includegraphics[width=140mm]{v.pdf}
\vskip-50mm
\caption{{\bf A virtual crossing point and a surgery by a 1-handle}\label{Alabama}}
\end{figure}
Let $\alpha$ be a virtual 1-knot diagram.
In this paper we use Greek lowercase letters for virtual diagrams and
Roman capital letters for virtual knots.
See \cite{Kauffman1, Kauffman, Kauffmani}
for the definition and properties of virtual 1-knot diagrams and
those of virtual 1-knots.
For $\alpha$ there are a nonnegative integer $g$ and
an embedded circle contained in $\Sigma_g\times[0,1]$ as follows,
where $\Sigma_g$ is a closed oriented surface with genus $g$.
Take $\alpha$ in $\mathbb{R}^2$. (Recall that $\mathbb{R}^2$ together with the point at infinity $\{*\}$
becomes $S^2$.)
Carry out a surgery on $\mathbb{R}^2$
by using a 3-dimensional 1-handle near a virtual crossing point as shown in
Figure \ref{Alabama}
and obtain $T^2-\{*\}$.
Note that the virtual 1-knot $K$ is oriented and that the arrows in
Figure \ref{Alabama}
denote the orientation.
Segments are changed as shown in the right figure of Figure \ref{Alabama}.
Do this procedure near all virtual crossing points.
Suppose that $\alpha$ has $g$ virtual crossing points ($g\in\mathbb{N}\cup\{0\}$).
Here, $\mathbb{N}$ denotes the set of natural numbers;
note that, in this paper, the natural numbers are the positive integers.
What we obtain is $\Sigma_g-\{*\}$. We call it $\Sigma_g^\bullet$.
(In \S\ref{Proof}, for a closed oriented surface $F$,
we define $F^\circ$ to be $F-$(an open 2-disc).
So, here, we use $^\bullet$ not $^\circ$.)
Thus we obtain an immersed circle in $\Sigma_g^\bullet$ from $\alpha$.
Call it $\mathcal I(\alpha)$.
Note that it is an immersion in the ordinary sense (that is, it has no `virtual crossing point').
Regard $\Sigma_g$ as an abstract manifold.
Make $\Sigma_g^\bullet\times[0,1]$.
There is a naturally embedded circle $\mathcal L(\alpha)$ contained in $\Sigma_g^\bullet\times[0,1]$
whose projection by the projection $\Sigma_g^\bullet\times[0,1]\to\Sigma_g^\bullet\times\{0\}$ is
$\mathcal I(\alpha)$.
Suppose that $\mathcal L(\alpha)\cap(\Sigma_g^\bullet\times\{0\})=\phi$.
Let ${\mathcal{K}}(\alpha)$ be
an embedded circle in $\Sigma_g\times[0,1]$ which we obtain naturally from $\mathcal L(\alpha)$.
$\Sigma_g$ is called a {\it representing surface}.
$\Sigma_g^\bullet=\Sigma_g-\{*\}$
is also sometimes called a {\it representing surface}.
(The closure of) any neighborhood of the immersed circle
in $\Sigma_g$ is also
called a {\it representing surface}.\\
\begin{thm}\label{vk}
{\rm (\cite{Kauffman1, Kauffman, Kauffmani}.)}
Let $\alpha$ and $\beta$ be virtual 1-knot diagrams.
$\alpha$ and $\beta$ represent the same virtual 1-knot
if and only if
${\mathcal{K}}(\alpha)$ is obtained from ${\mathcal{K}}(\beta)$ by
a sequence of the following operations.
\smallbreak\noindent
$(1)$ A surgery on the surface by a 3-dimensional 1-handle, where
\hskip2mm$($The attached part of the handle$)\cap($the projection of the embedded circle$)=\phi$.
\smallbreak\noindent
$(2)$ A surgery on the surface by a 3-dimensional 2-handle, where
\hskip2mm$($The attached part of the handle$)\cap($the projection of the embedded circle$)=\phi$.
\smallbreak\noindent
$(3)$ An orientation preserving diffeomorphism map of the surface.
\end{thm}
Hence the following definition makes sense.
Let $K$ be a virtual 1-knot.
Let $\alpha$ be a virtual 1-knot diagram of $K$.
Define $\mathcal K(K)$ to be $\mathcal K(\alpha)$.
\bigbreak
\section{$\mathcal E(K)$ for a virtual 1-knot $K$}\label{E}
\noindent
We generalize spun knots and spinning tori,
and introduce a new class of submanifolds. \\
Let $n$ be a positive integer.
Two submanifolds $J$ and $K$ $\subset S^n$ are {\it $($ambient$)$ isotopic}
if there is a smooth orientation preserving family of diffeomorphisms $\eta_t$ of $S^n$, $0\leqq t\leqq1$, with $\eta_0$ the identity and $\eta_1(J)=K$. \\
\begin{defn}\label{spinningsubmanifold}
Let $F$ be a codimension two submanifold contained in a manifold $X$.
Suppose that the tubular neighborhood $N(F)$ of $F$ in $X$
is the product bundle. That is, we can regard $N(F)$ as $F\times D^2$.
See Figure \ref{tube}.
We can regard the closed 2-disc $D^2$ as the result of
rotating a radius $[0,1]$ around the center $\{o\}$ as the axis.
We can regard $N(F)$ as the result of
rotating $F\times[0,1]$ around $F=F\times\{0\}$
as the axis.
Suppose that a submanifold $P$ contained in $X$ is embedded in $F\times[0,1]$.
Let $P'$ be the submanifold $P\cap(F\times\{0\})$ of $F\times\{0\}$.
When we rotate $F\times[0,1]$ around $F$ and make $F\times D^2$,
rotate $P$ together, and call the resultant submanifold $Q$.
This submanifold $Q$ contained in $X$ is called
the {\it spinning submanifold} made from
$P$ by the rotation in $F\times D^2$ under the condition that
$P\cap(F\times\{0\})$ is the submanifold $P'$.
This way of constructing $Q$ is called a {\it spinning construction} of submanifolds.
If $P$ is a subset, not a submanifold, we can define $Q$ as well.
\end{defn}
\begin{figure}
\bigbreak
\includegraphics[width=120mm]{tube.pdf}
\vskip-10mm
\caption{{\bf The tubular neighborhood which is a product $D^2$-bundle}\label{tube}}
\bigbreak
\end{figure}
Spun knots and spinning tori are spinning submanifolds.
\cite[Proof of Claim on page 3114]{Ogasa98SL} and
\cite[Lemma 5.3]{Ogasa98n} used a spinning construction.
By the uniqueness of the tubular neighborhood, we have the following.
\begin{cla}\label{atarimae1}
Let $\check f$ $($respectively, $\check g)$
$:F\times D^2\hookrightarrow X$ be an embedding map.
We can regard $\check f(\Sigma_g\times D^2)$
$($respectively, $\check g(\Sigma_g\times D^2))$
as the tubular neighborhood of
$\check f(\Sigma_g\times\{o\})$ $($respectively, $\check g(\Sigma_g\times\{o\}))$.
Let $\check f|_{\Sigma_g\times\{o\}}$
be isotopic to $\check g|_{\Sigma_g\times\{o\}}$.
Then the submanifolds,
$\check f(\Sigma_g\times\{o\})$ and $\check g(\Sigma_g\times\{o\})$,
of $X$
are isotopic.
\end{cla}\bigbreak
Let $\alpha$ be a virtual 1-knot diagram.
Take $\Sigma_g\times[0,1]$ and $\mathcal K(\alpha)$ as in \S\ref{K},
that is, $\mathcal K(\alpha)$ is a 1-knot in $\Sigma_g\times[0,1]$, where $\Sigma_g$ is a representing surface for $\alpha$.
Assume $\mathcal K(\alpha)\cap(\Sigma_g\times\{0\})=\phi$.
Make $\Sigma_g\times D^2$, where we regard $[0,1]$ as a radius of $D^2$.
Let $\check f:\Sigma_g\times D^2\hookrightarrow S^4$ be an embedding map.
Let $\mathcal E_{\check f}(\alpha)$ be
the spinning submanifold made from $\mathcal K(\alpha)$
by the rotation in $\check f(\Sigma_g\times D^2)$.
Note $\mathcal E_{\check f}(\alpha)\subset S^4$.
Let $f$ be $\check f|_{\Sigma_g\times\{o\}}$.
By Claim \ref{atarimae1} it makes sense to denote
$\mathcal E_{\check f}(\alpha)$ by
$\mathcal E_f(\alpha)$. \\
Suppose that $\alpha$ represents a virtual 1-knot $K$.
Theorem \ref{honto} is one of our main results. \\
\begin{thm}\label{honto}
For an arbitrary virtual 1-knot $K$,
the submanifold type of $\mathcal E_f(\alpha)$
in $S^4$ does not depend on the choice of the pair $(\alpha, f)$.
\end{thm} \bigbreak
By Theorem \ref{honto} we can define $\mathcal E(K)$ for any virtual 1-knot $K$. \\
Let $\mathcal S(\alpha)$ be an embedded $S^1\times S^1$ contained in $S^4$
for a virtual 1-knot diagram $\alpha$, defined by Satoh in \cite{Satoh}.
It was proved
there that if $\alpha$ and $\beta$ represent the same virtual 1-knot,
the submanifolds,
$\mathcal S(\alpha)$ and $\mathcal S(\beta)$,
of $S^4$ are isotopic.
So we can define $\mathcal S(K)$ for any virtual 1-knot $K$. \\
We will prove the following in \S\ref{Proof}.
Theorem \ref{mainkore} is one of our main results.
\begin{thm}\label{mainkore}
Let $K$ be a virtual 1-knot. Then
the submanifolds,
$\mathcal E(K)$ and $\mathcal S(K)$,
of $S^4$ are isotopic.
\end{thm}
\begin{note}\label{yuenchi}
If $K$ in Theorem \ref{mainkore} is a classical knot,
then $\mathcal E(K)$ and $\mathcal S(K)$ are the spinning torus of $K$.
\end{note}
\bigbreak\noindent{\bf Note.}
\cite[section 10.2]{B1},
\cite[section 3.1.1]{B2} and
\cite{Dylan} proved only a special case of Theorem \ref{mainkore},
namely Theorem \ref{mainmaenotame} of this paper.
We prove the general case.
Our result is stronger than the result in
\cite[section 10.2]{B1},
\cite[section 3.1.1]{B2} and
\cite{Dylan}.
\bigbreak
\section{Proof of Theorems \ref{honto} and \ref{mainkore}}\label{Proof}
\noindent
We first prove a special case.
\begin{thm}\label{mainmaenotame}
Take a virtual 1-knot diagram $\alpha$ in \S\ref{K}.
Let $\check\iota:\Sigma_g\times D^2\to S^4$ be
an embedding map whose image of
$\Sigma_g^\bullet$
by $\check\iota$
is $\Sigma_g^\bullet$
in \S\ref{K}.
Let $\iota$ be $\check\iota\vert_{\Sigma_g}$.
Then the submanifolds,
$\mathcal E_{\iota}(\alpha)$ and $\mathcal S(\alpha)$,
of $S^4$ are isotopic.
\end{thm}
\noindent{\bf Proof of Theorem \ref{mainmaenotame}.}
Let $\mathbb{R}^4=\{(x,y,u,v)|x,y,u,v\in\mathbb{R}\}$,
$\mathbb{R}^2_b=\{(x,y)|x,y\in\mathbb{R}\}$,
and $\mathbb{R}^2_F=\{(u,v)|u,v\in\mathbb{R}\}$.
Note
$\mathbb{R}^4=\mathbb{R}^2_b\times\mathbb{R}^2_F$.
Regard $\mathbb{R}^3$ in \S2 as \\
$\mathbb{R}^2_b\times\{(u,v)| u\in\mathbb{R}, v=0\}$.
Take the tubular neighborhood of $\Sigma_g^\bullet$ in $\mathbb{R}^3$.
It is diffeomorphic to $\Sigma_g^\bullet\times[-1,1]$.
We can suppose that
$\Sigma_g^\bullet, \Sigma_g^\bullet\times[0,1]
\subset\mathbb{R}^2_b\times\{(u,v)| u\geqq0, v=0\}$. \\
\noindent{\bf Note.}
Let $\Sigma_g\subset S^4$.
Suppose that $\{*\}\in S^4$ is included in $\Sigma_g$.
Then \newline
$S^4-\Sigma_g=(S^4-\{*\})-(\Sigma_g-\{*\})=\mathbb{R}^4-\Sigma_g^\bullet$. \\
Take the tubular neighborhood $N(\Sigma_g^\bullet)$ of $\Sigma_g^\bullet$ in $\mathbb{R}^4$.
Note that
$N(\Sigma_g^\bullet)$ is diffeomorphic to $\Sigma_g^\bullet\times D^2$.
We can regard $N(\Sigma_g^\bullet)$
as the result of
rotating $\Sigma_g^\bullet\times[0,1]$
around $\Sigma_g^\bullet$ as the axis
(diffeomorphically, not isometrically).
Suppose that $\mathcal L(\alpha)\cap(\Sigma_g^\bullet\times\{0\})=\phi$.
Make the spinning submanifold
$\mathcal E_\iota(\alpha)$
from $\mathcal L(\alpha)$.
\\
We can suppose that
each fiber $D^2$ of
$N(\Sigma_g^\bullet)$
is parallel to $\{(x,y)|x=0, y=0\}\times\mathbb{R}^2_F$
by using an isotopy of an embedding map of the tubular neighborhood. \\
We can suppose that $\mathcal I(\alpha)$ intersects each fiber $D^2$ transversely.
{\it Reason.}
Note the 1-handle drawn on the right side of Figure \ref{Alabama}.
If $\mathcal I(\alpha)$ near the 1-handle is put like (Ac) in Figure \ref{Alaska},
$\mathcal I(\alpha)$ does not intersect each fiber $D^2$ transversely.
However, we can do the following operation.
By using an isotopy of a part of $\mathcal I(\alpha)$,
we change
the part of $\mathcal I(\alpha)$
from (Ac) to (Ob) in Figure \ref{Alaska}.
After this operation, $\mathcal I(\alpha)$ intersects each fiber $D^2$ transversely. \\
\begin{note}\label{kabuto}
We will explain a property of (Ac) in Note \ref{kuwagata}.
It is important. We will use it in the Alternative proof of Claim \ref{shichi} of \S\ref{v2}.
\end{note} \bigbreak
Note that, even if a part of $\mathcal I(\alpha)$ is (Ac),
we can make a spinning submanifold $\mathcal E_\iota(\alpha)$.
However, if there is no (Ac), we have an advantage as below. \\
\begin{figure}
\includegraphics[width=130mm]{hand.pdf}
\vskip-50mm
\caption{{\bf (Ac) and (Ob).
}\label{Alaska}}
\end{figure}
\begin{figure}
\includegraphics[width=150mm]{obtuse.pdf}
\vskip-40mm
\caption{{\bf Rotation around a part near (Ob).
The reason why (Ob) is useful for us.
}\label{Arizona}}
\end{figure}
\begin{figure}
\vskip-7mm
\includegraphics[width=150mm]{acute.pdf}
\vskip-47mm
\caption{{\bf Rotation around a part near (Ac).
The reason why (Ac) is not useful for us.
This property of (Ac) is used in the Alternative proof of Claim \ref{shichi} of \S\ref{v2}.
}\label{Arkansas}}
\bigbreak
\end{figure}
If there is no (Ac) in $\mathcal I(\alpha)$, we have the following. \\
Take a point $q\in\mathbb{R}^2_b\times\{(u,v)| u=0, v=0\}$.
Note $\alpha\subset\mathbb{R}^2_b\times\{(u,v)| u=0, v=0\}$.
By the above construction of $\mathcal E_\iota(\alpha)$, we have the following. \\
\smallbreak\noindent (i)
If $q\notin \alpha$,
$(\{q\}\times\mathbb{R}^2_F)\cap\mathcal E_\iota(\alpha)=\phi.$
\smallbreak\noindent (ii)
If $q$ is a normal point of $\alpha$,
$(\{q\}\times\mathbb{R}^2_F)\cap\mathcal E_\iota(\alpha)$ is a single circle in $\{q\}\times\mathbb{R}^2$.
\smallbreak\noindent (iii)
If $q$ is a real crossing point of $\alpha$,
$(\{q\}\times\mathbb{R}^2_F)\cap\mathcal E_\iota(\alpha)$
is two circles in $\{q\}\times\mathbb{R}^2$
such that one of the two lies inside the other.
The inner (respectively, outer) circle corresponds
to the lower (respectively, upper) point of the singular point.
\smallbreak\noindent (iv)
If $q$ is a virtual crossing point of $\alpha$,
$(\{q\}\times\mathbb{R}^2_F)\cap\mathcal E_\iota(\alpha)$
is two circles in $\{q\}\times\mathbb{R}^2$
such that each of the two lies outside the other.
This is Rourke's description of $\mathcal S(\alpha)$
in Theorem \ref{Montana}, which is cited below from \cite{Rourke}.
(However, \cite{Rourke} does not give a proof.
So \cite{J} wrote one.)\\
\begin{thm}\label{Montana}
{\bf (\cite{Rourke}.)}
Let $\alpha$ be a virtual 1-knot diagram.
Take an embedding map
$\varphi: S^1_b\times S^1_f\hookrightarrow \mathbb{R}^2_b\times\mathbb{R}^2_f$ with the following properties.
\smallbreak\noindent$(1)$
Let $\pi:\mathbb{R}^2_b\times\mathbb{R}^2_f\to\mathbb{R}^2_b$ be the natural projection.
$\pi\circ\varphi(S^1_b\times S^1_f)\subset\mathbb{R}^2_b$ defines
$\alpha$ without the notations of virtual crossings.
\smallbreak\noindent$(2)$
For points in $\mathbb{R}^2_b$, we have the following:
\begin{enumerate}
\item
If $q\notin\alpha$,
we have $\pi^{-1}(q)=\phi.$
\smallbreak
\item
If $q$ is a normal point of $\alpha$,
we have that $\pi^{-1}(q)$ is a circle in $\{q\}\times\mathbb{R}^2$.
\smallbreak
\item
If $q$ is a real crossing point of $\alpha$,
we have that $\pi^{-1}(q)$ is two circles in $\{q\}\times\mathbb{R}^2$
such that one of the two lies inside the other.
The inner $($respectively, outer$)$ circle corresponds
to the lower $($respectively, upper$)$ point of the singular point.
\smallbreak
\item
If $q$ is a virtual crossing point of $\alpha$,
we have that $\pi^{-1}(q)$ is two circles in $\{q\}\times\mathbb{R}^2$
such that each of the two lies outside the other.
\end{enumerate}
\smallbreak
Then the submanifolds,
$\mathcal S(\alpha)$ and $\varphi(S^1_b\times S^1_f)$,
of $S^4$ are isotopic.
\end{thm}
This completes the proof of Theorem \ref{mainmaenotame}.
\qed\\
\begin{defn}\label{Nebraska}
In Theorem \ref{Montana},
each circle
$\varphi(S^1_b\times S^1_f)\cap$(each fiber $\mathbb{R}^2_f$)
is called a {\it fiber-circle}.
We say that $\varphi(S^1_b\times S^1_f)$ admits {\it Rourke's fibration}.
\end{defn}\bigbreak
\begin{note}\label{kuwagata}
As announced in Note \ref{kabuto}, we make a comment on (Ac).
If the projection on a surface includes (Ac),
$\mathcal E(\alpha)$ does not admit Rourke's fibration.
The reason is explained in Figures \ref{Arizona} and \ref{Arkansas}.
We will use this property,
which arises from the difference between (Ac) and (Ob),
in the Alternative proof of Claim \ref{shichi} of \S\ref{v2}.
\end{note}\bigbreak
We next prove the general case. \\
\noindent{\bf Proof of Theorems \ref{honto} and \ref{mainkore}.}
We prove Theorem \ref{oh} below.
The key idea of the proof is Claim \ref{wow}.
Let $\Sigma$ be a closed oriented surface.
Let $G_1$ and $G_2$ be submanifolds of $S^4$
which are orientation preserving diffeomorphic to $\Sigma$.
It is known that
there are cases where
the submanifolds, $G_1$ and $G_2$, of $S^4$ are non-isotopic.
Let $\Sigma^\circ$ denote $\Sigma-(\text{an open 2-disc})$.
Let $G^\circ_i=G_i-(\text{an open 2-disc})$ be a submanifold of $S^4$
which is orientation preserving diffeomorphic to $\Sigma^\circ$ $(i=1,2)$. \\
\begin{cla}\label{wow}
The submanifolds, $G^\circ_1$ and $G^\circ_2$, of $S^4$ are isotopic.
\end{cla}
\noindent{\bf Proof of Claim \ref{wow}.}
$\Sigma^\circ$ has a handle decomposition
which consists of one 0-handle, 1-handles, and no 2-handle.
\qed\\
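As a supplementary remark (not needed for the argument; $h_1$ below just denotes the number of 1-handles), the handle count in such a decomposition can be checked by an Euler characteristic computation when $\Sigma$ has genus $g$:
$$\chi(\Sigma^\circ)=\chi(\Sigma)-1=(2-2g)-1=1-2g,
\qquad
\chi(\Sigma^\circ)=1-h_1,$$
so the decomposition consists of one 0-handle and $h_1=2g$ 1-handles.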
Let $i\in\{1,2\}$.
We can regard the tubular neighborhood of $G_i$ in $S^4$ as $G_i\times D^2$.
Embed $S^1$ in $G_i\times [0,1]$, where we regard $[0,1]$ as a radius of $D^2$,
and call the image $J_i$. Assume that $J_i\cap(G_i\times\{0\})=\phi$.
Suppose that there is
a bundle map $\check\sigma:G_1\times D^2\to G_2\times D^2$
such that $\check\sigma$ covers
an orientation preserving diffeomorphism map $\sigma:G_1\to G_2$ and
such that $\check\sigma(J_1)=J_2$. \\
Define a submanifold $E_i$ contained in $S^4$
to be the spinning submanifold made from $J_i$
by the rotation in $G_i\times D^2$.
\begin{thm}\label{oh}
The submanifolds, $E_1$ and $E_2$, of $S^4$ are isotopic.
\end{thm}
\noindent{\bf Proof of Theorem \ref{oh}.}
We can suppose that
$J_i\subset G^\circ_i\times [0,1]$.
By the existence of $\sigma$,
there is a bundle map $\check\tau:G^\circ_1\times D^2\to G^\circ_2\times D^2$
such that $\check\tau$ covers an orientation preserving diffeomorphism map
$\tau: G^\circ_1\to G^\circ_2$ and
such that $\check\tau(J_1)=J_2$. \\
Note the following:
Let $f:\Sigma^\circ\to S^4$ be an embedding map.
We can regard $\tau$ as a diffeomorphism map
$\Sigma^\circ\to \Sigma^\circ$.
By Claim \ref{wow},
the submanifolds,
$f(\Sigma^\circ)$ and $f(\tau(\Sigma^\circ))$,
of $S^4$ are isotopic.
Therefore the submanifolds, $E_1$ and $E_2$, of $S^4$ are isotopic.
\qed\\
Theorems \ref{vk} and \ref{oh} imply Theorems \ref{honto} and \ref{mainkore}. \qed\\
We can extend all discussions in \S\S\ref{K}-\ref{Proof} and the following \S\ref{rt} to the virtual 1-link case easily.
When we define $\mathcal E(\alpha)$ in \S\ref{E},
we assume
$\mathcal L(\alpha)\cap\Sigma_g^\bullet=\phi$.
Suppose that
$\mathcal L(\alpha)\cap(\Sigma_g^\bullet)$ is an arc instead.
Then we obtain a spherical 2-knot in $\mathbb{R}^4$
as the spinning submanifold.
The class of such spherical 2-knots is also a generalization of 2-dimensional spun-knots of 1-knots,
and is also worth studying.
As we state in \S\ref{jobun}, we do not discuss this class in this paper.
\bigbreak
\section{Immersed solid tori} \label{rt}
\begin{figure}
\includegraphics[width=150mm]{3.pdf}
\vskip-40mm
\caption{{\bf $\zeta^{-1}($each closed 2-disc$)$}\label{Colorado}}
\end{figure}
\noindent By the definition of $\mathcal S(\hskip2mm)$ in \cite{Satoh}, we have (i)$\Rightarrow$(ii).
\smallbreak\noindent
(i) An embedded torus $Y$ contained in $S^4$ is isotopic
to $\mathcal S(\alpha)$ for a virtual 1-knot diagram $\alpha$.
\smallbreak\noindent
(ii) There is an immersion map $\zeta:S^1\times D^2\looparrowright S^4$
with the following properties: \newline
$\zeta(S^1\times\partial D^2)$ is $Y$.
The singular point set of $\zeta$ consists of double points and is a disjoint union of closed 2-discs,
and $\zeta^{-1}($each closed 2-disc$)$ is as shown in Figure \ref{Colorado}.
\smallbreak
By using the construction of $\mathcal E(\alpha)$,
we can also describe the immersed solid torus in (ii)
as follows:
By using the projection `$\mathcal L(\alpha)\to \mathcal I(\alpha)$' in \S\ref{K},
we can make an immersed annulus in
$\Sigma_g\times[0,1]$ naturally.
Note that (the immersed annulus)$\cap(\Sigma_g\times\{0\})\neq\phi$.
Make a subset from this immersed annulus
by a spinning construction around $\Sigma_g$, defined in Definition \ref{spinningsubmanifold}.
Then the result is an immersed solid torus in (ii).
\smallbreak
We prove the converse of the above claim,
that is, the following.
\begin{thm}\label{grt}
{\rm(ii)}$\Rightarrow${\rm(i)}.
\end{thm}
We prove this theorem as an application of our results in \S\ref{Proof},
although it may also be proved in another way.
\\
\noindent{\bf Proof of Theorem \ref{grt}.}
Let $q\in\partial D^2$.
Let $C$ be $\zeta(S^1\times\{q\})$.
In the following paragraphs,
for $Y$, we will make an embedded oriented surface $F$ contained in $S^4$
so that we can put $C$ in the tubular neighborhood $N(F)$ of $F$ in $S^4$.
We will arrange that $C\cap F=\phi$.
We will arrange things so that $Y$ is the spinning submanifold of $C$ around $F$.
Let $\{o\}$ be the center of $D^2$.
We will let $F$ include $\zeta(S^1\times\{o\})$. \\
Let $\xi:S^1\times D^2\times I\looparrowright S^4$ be an immersion map, where $I=[-1,1]$,
to satisfy that
$\xi\vert_{S^1\times D^2\times \{0\}}=\zeta$ and that
\newline\hskip3cm$\xi(\{x\}\times \{o\}\times I) \quad\bot\quad \xi(\{x\}\times D^2\times \{0\})$ \newline
for each $x$ if we give appropriate metrics to $S^4$ and $S^1\times D^2\times I$.
Then we can suppose the following:
\smallbreak\noindent(1)
$P=\xi(S^1\times \{o\}\times I)$ is a boundary-connected-sum of $n$ copies of the annulus ($n\in\mathbb{N}$).
\quad See Figure \ref{Connecticut}
for an example of $P$.
\begin{figure}
\begin{center}
\includegraphics[width=70mm]{4.pdf}
\end{center}
\vskip-20mm
\caption{{\bf An example of $P$}\label{Connecticut}}
\end{figure}
\smallbreak\noindent(2)
$Q=\xi(S^1\times D^2\times I)$ is a boundary-connected-sum of $m$ copies of $S^1\times B^3$
($m\in\mathbb{N}$).
\smallbreak\noindent(3)
$\partial P\subset\partial Q$.
\smallbreak\noindent(4)
$Q$ is the tubular neighborhood of $P$ in $S^4$.
$Q$ is diffeomorphic to $P\times D^2$. \\
By the Mayer-Vietoris sequence,
$H_1(S^4, Q;\mathbb{Z})\cong
H_1(S^4- {\rm Int} Q, \partial Q;\mathbb{Z})\cong0$.
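As a supplementary sketch of this vanishing (not needed beyond what is stated; we only use that $Q$ is connected, which holds since $Q$ is a boundary-connected-sum of copies of $S^1\times B^3$), one may combine excision with the long exact sequence of the pair $(S^4,Q)$:
$$H_1(S^4-{\rm Int}\, Q, \partial Q;\mathbb{Z})\cong H_1(S^4, Q;\mathbb{Z}),
\qquad
0=H_1(S^4;\mathbb{Z})\to H_1(S^4,Q;\mathbb{Z})\to H_0(Q;\mathbb{Z})\to H_0(S^4;\mathbb{Z}),$$
where the last map is an isomorphism because $Q$ is connected; hence $H_1(S^4,Q;\mathbb{Z})\cong0$.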
Hence there is an embedded oriented compact surface-with-boundary
$G$ contained in $S^4-{\rm Int} Q$ such that
$\partial G=\partial P$ and that
$G\cup P$ is an oriented closed surface $F$.
({\it Reason}: Consider a simplicial decomposition of $S^4-{\rm Int} Q$.)
We can regard $Y$ as the spinning submanifold made from $\zeta(S^1\times\{q\})$ around $F$.
Hence we can regard $\zeta(S^1\times\{q\})$ as
$\mathcal K(\beta)$ for a virtual 1-knot diagram $\beta$
in a fashion which is explained in \S\S\ref{E}-\ref{Proof},
and can regard $Y$ as $\mathcal E(\beta)$.\\
This completes the proof of Theorem \ref{grt}. \qed
\bigbreak
\section{The virtual 2-knot case}\label{v2}
\noindent
Virtual 2-knot theory is defined analogously to Virtual 1-knot theory, using
generic surfaces in 3-space as knot diagrams and using Roseman moves for knot
equivalence, and allowing the double-point arcs to have classical or virtual crossing
data.
See \cite{Takeda, J}.
Virtual 2-knot diagrams (respectively, virtual 2-knots) in \cite{J} and this paper
are the same as
virtual surface-knot diagrams (respectively, virtual surface-knots) in \cite{Takeda}.
\begin{defn}\label{oyster}
Let $F$ be a closed surface.
A smooth map $f : F\to\mathbb{R}^3$ is considered {\it quasi-generic}
if it fails to be one-to-one only at transverse crossings of orders 2 and 3
as shown in Figures \ref{sashimid} and \ref{sashimit},
\begin{figure}
\begin{center}
\includegraphics[width=40mm]{sashimid.pdf}
\end{center}
\caption{{\bf Transversal double points
}\label{sashimid}}
\end{figure}
\begin{figure}
\includegraphics[width=70mm]{sashimit.pdf}
\caption{{\bf Transversal triple points}\label{sashimit}}
\end{figure}
and it fails to be regular only at isolated {\it branch points}
where, locally, the image of a disk looks like the cone over a loop, with no other parts of the surface touching the vertex.
See Figure \ref{gbra}.
Branch points include the cone over any closed, regular, transversely self-intersecting
curve.
In particular, the cone over a figure-$\infty$ curve is called
a {\it Whitney branch point}. See Figure \ref{sashimiv}.\\
\begin{figure}
\includegraphics[width=70mm]{gbra.pdf}
\caption{{\bf A general branch point}\label{gbra}}
\end{figure}
\begin{figure}
\includegraphics[width=70mm]{sashimiv.pdf}
\caption{{\bf Whitney-umbrella branch point
}\label{sashimiv}}
\end{figure}
A quasi-generic map $f$ is {\it generic} if the only branch points are Whitney
branch points.
The three features of a generic map---
Whitney branch points, double-point arcs,
and triple points---
have slice-histories corresponding respectively to the Reidemeister
$I$-, $II$-, and $III$-moves in 1-knot theory.
\end{defn}
\begin{figure}\bigbreak \includegraphics[width=150mm]{s24from89.pdf}
\vskip-40mm
\caption{{\bf The singular point sets of virtual 2-knots}\label{JV1}} \bigbreak \end{figure}
\begin{defn}\label{sashimi}
A {\it virtual 2-knot diagram} consists of a generic map $F$
together with classical and virtual crossing data along its double-point arcs.
Crossing data is represented graphically as broken and unbroken surfaces:
See the left two figures of Figure \ref{JV1}.
Branch points can be classical or virtual:
See the middle three figures,
that is, the figures which are neither the above ones nor the following ones,
in Figure \ref{JV1}.
At triple points, three crossings meet. Triple points of the following types are
allowed: See the right three figures in Figure \ref{JV1}.
All other combinations of crossing data are forbidden. Note that the three allowed
triple points have slice-histories corresponding to the Reidemeister $III$-moves
in Virtual 1-knot theory.
A virtual 2-knot diagram may be reduced to its bare combinatorial structure,
forgetting all but the information that is invariant under isotopies of $\mathbb{R}^3$ and $F$.
In this regard, we do not distinguish diagrams that are related by isotopies of
$\mathbb{R}^3$ and $F$.
\end{defn}\bigbreak
\begin{defn}\label{JV}{\bf (\cite[section 3.5]{J}.)}
A virtual 2-knot diagram may be transformed by Roseman moves. There are
seven types of local moves, shown here without crossing data.
When a virtual 2-knot diagram undergoes a Roseman move, its crossing data
is carried continuously by the move. Two diagrams related by a series of Roseman
moves are called {\it virtually equivalent}.
The equivalence classes are {\it Virtual 2-knot types}, or sometimes simply
{\it Virtual 2-knots}.
\end{defn}
\noindent{\bf Note.}
The reader need not be familiar with Roseman moves in order to read this paper.\\
It is natural to ask whether we can define one-dimensional-higher tubes from virtual 2-knots
since we succeeded in the virtual 1-knot case
as written in \S\S\ref{E}-\ref{Proof}. \\
The following facts make this even more natural:
The one-dimensional-higher tube $\mathcal E(K)$ made from a virtual 1-knot $K$
is the spun-knot of $K$ if $K$ is a classical knot (see \cite{Satoh}).
\cite{Zeeman} defined spun-knots not only for classical 1-knots but also for classical 2-knots. \\
\begin{que}\label{North Carolina}
Can we define one-dimensional-higher tubes for virtual 2-knots
in a consistent way? Suppose that these tubes are diffeomorphic to
$F\times S^1$ if the virtual 2-knot is defined by $F$.
\end{que}\bigbreak
Note that Satoh's method in \cite{Satoh} did not say anything about the virtual 2-knot case.
In the virtual 1-knot case, in \cite{Rourke},
Rourke interpreted Satoh's method as we reviewed
in Theorem \ref{Montana} and Definition \ref{Nebraska}. \\
\begin{note}\label{kaiga}
In the virtual 2-dimensional knot case
we also
use the terms `fiber-circle' and `Rourke-fibration'
in Definition \ref{Nebraska}.
\end{note}
\begin{defn}\label{suiri}
Let $M$ be a 3-dimensional compact submanifold of $\mathbb{R}^5$.
Regard $\mathbb{R}^5$ as $\mathbb{R}^3\times\mathbb{R}^2$.
We say that the submanifold $M$ admits
{\it Rourke fibration}, or
that $M$ is embedded {\it fibrewise}
if
$M\cap(p\times\mathbb{R}^2)$ is a collection of circles
for any point $p\in\mathbb{R}^3$.
We call the circles in $M\cap(p\times\mathbb{R}^2)$ {\it fiber circles}.
\end{defn}
If we try to generalize Rourke's way to the virtual 2-knot case,
we will do the following:
Let $\alpha$ be a virtual 2-knot diagram.
Let $\mu=0,1,2,3$.
We give $\mu$ copies of a circle to any $\mu$-tuple point in $\alpha$,
and construct the tube.
Of course we determine the position of fiber-circles in each fiber plane
by the property of the $\mu$-tuple point (see \cite[section 3.7.1]{J} for details).
See Figure \ref{doremi}. \\
\begin{figure}
\includegraphics[width=110mm]{doremi.pdf}
\vskip-20mm
\caption{{\bf
The nest of circles in fibers.
}\label{doremi}}
\end{figure}
However we encounter the following situation.
Let $\alpha$ be any virtual 2-knot diagram.
\smallbreak\noindent(1)
The case where $\alpha$ has no virtual branch point.
\smallbreak\noindent(2)
The case where $\alpha$ has a virtual branch point.
\smallbreak
In case (1), we can make a tube by Rourke's method.
See \cite[section 3.7.1]{J}.
In case (2), however, \cite{J} found
it difficult to define a tube near any virtual branch point.
Thus it is natural to ask the following two questions. \\
\begin{que}\label{North Dakota}
Can we put fiber-circles over each point of any virtual 2-knot
in a consistent way as written above,
and make a one-dimensional-higher tube?
\end{que}\bigbreak
\begin{que}\label{South Dakota}
Is there a one-dimensional-higher tube construction which is defined for all virtual 2-knots, and which agrees with the method in case (1) described above
when there are no virtual branch points?
\end{que}\bigbreak
We generalize our method in \S\S\ref{E}-\ref{Proof}
and
give an affirmative answer to Question \ref{South Dakota},
and hence to Question \ref{North Carolina}.
See Theorem \ref{vv}.
We also use a spinning construction of submanifolds
explained in Definition \ref{spinningsubmanifold}.
Theorem \ref{Rmuri} gives a negative answer
to Question \ref{North Dakota}.
\bigbreak
We make the virtual 2-knot version of
representing surfaces, which are defined above Theorem \ref{vk}.
\begin{defn}\label{Jbase} {\bf (\cite[section 3.5]{J}.)}
The development of an invariant for virtual 2-knot theory closely parallels that for virtual 1-knot theory. The idea is to think of a virtual 2-knot diagram as a classical 2-knot diagram ``drawn" on a closed 3-manifold. We then define an equivalence relation on these objects that extends classical move-equivalence and allows the 3-manifold to vary. Take as input a virtual 2-knot diagram $\alpha$.
Let $N(\alpha)$ be a neighborhood of the diagram,
which is a regular neighborhood except at virtual branch points,
in the following sense:
$N(\alpha)$ is formed by thickening $\alpha$ everywhere except at virtual branch points;
as you approach virtual branch points, let the thickening gradually diminish to zero,
so that near the virtual branch point $N(\alpha)$ looks like the cone over a thickened figure-$\infty$. \\
Along each virtual crossing curve of $\alpha$, double the square-shaped junction of $N(\alpha)$ to create overlapping ``slabs".
Call this 3-manifold-with-boundary $B(\alpha)$.
It has a purely classical knot diagram in it. (To be precise, $B(\alpha)$ is not technically a 3-manifold-with-boundary at virtual branch points, since the ``slab" is pinched to zero thickness at these points.)
See Figures \ref{Jbase2} and \ref{Jbase3}. \\
\begin{figure}
\includegraphics[width=130mm]{s26from91.pdf}
\vskip-30mm
\caption{{\bf
The left upper figure is
a part of $N(\alpha)$ near a double point curve.
The right upper figure is that
near a triple point.
The left lower figure is that
near a virtual branch point.
The right lower figure is that
near a classical branch point.
}\label{Jbase2}}
\end{figure}
Now embed $B(\alpha)$ into any compact oriented 3-manifold (not necessarily connected).
The resultant compact 3-manifold
is called a {\it representing 3-manifold} $M$
associated with a virtual 2-knot diagram $\alpha$.
$M$ contains a classical 2-knot diagram $\mathcal I(\alpha)$.
\\
\begin{figure}
\bigbreak
\includegraphics[width=130mm]{s27from92.pdf}
\vskip-20mm
\caption{{\bf
Make $B(\alpha)$ from $N(\alpha)$.
}\label{Jbase3}}
\bigbreak
\end{figure}
\end{defn}
\bigbreak
\noindent{\bf Note.}
Virtual 1-knots have two kinds of equivalent definitions:
one is given by using diagrams with virtual points in $\mathbb{R}^2$;
the other by using representing surfaces.
See Theorem \ref{vk} and \S\ref{K} of this paper, and \cite{Kauffman1, Kauffman, Kauffmani}.
It is very natural to ask the following question.
\begin{que}\label{imayatteru}
Do we have the virtual 2-knot version of Theorem \ref{vk} by using representing 3-manifolds in Definition \ref{Jbase}?
\end{que}
This question is open. \cite{J} gave a partial answer.
We do not discuss it in this paper.
\vskip9mm
Note that $\mathcal I(\alpha)$ is an immersed surface in the ordinary sense.
That is, it does not include a virtual point.
Note that we cannot embed a
representing 3-manifold
in $\mathbb{R}^4$ in general.
We show an example.
Take the Boy surface in $\mathbb{R}^3$
(see \cite{Boy, OgasaBoy}).
We can regard it as
a virtual 2-knot diagram as follows:
Suppose that its only immersed crossing curve
is a virtual one. That is, it consists of
one virtual triple point and other virtual double points.
Then no
representing 3-manifold for this virtual 2-knot
can be embedded in $\mathbb{R}^4$. This is
proved by using obstruction classes of
the normal bundle of $\mathbb{R} P^2$ in $\mathbb{R}^4$. \\
However, by \cite{Hirsh}, we have the following.
\begin{thm}\label{Hirsh}
Any $M$ in Definition \ref{Jbase} can be embedded in $\mathbb{R}^5$.
\end{thm}
We will define the virtual 2-knot version of
$\mathcal E(\alpha)$ in Definition \ref{wakeru} after some preliminaries.
Let $X$ be a 3-dimensional closed oriented abstract manifold.
Let $G_1$ and $G_2$ be submanifolds of $S^5$
which are diffeomorphic to $X$.
Recall the following fact.
\begin{cla}\label{kantan}
There are cases where
the submanifolds, $G_1$ and $G_2$, of $S^5$ are non-isotopic.
\end{cla}
\noindent{\bf Proof of Claim \ref{kantan}.}
Take a spherical 3-knot $K$ whose Alexander polynomial is nontrivial.
Let $G_2$ be the knot-sum of $G_1$ and $K$.
See e.g.\ \cite{Rolfsen} for the Alexander polynomial of 3-knots and the knot-sum.
\qed
\\
While the submanifolds $G_1$ and $G_2$ of $S^5$,
which are diffeomorphic to $X$ above,
may be non-isotopic,
they become isotopic
after removing an open 3-ball from each of them.
Let $X^\circ$ denote $X-\text{(an open 3-ball)}$.
Let $G^\circ_i=G_i-\text{(an open 3-ball)}$ be a submanifold of $S^5$ ($i=1,2$).
\begin{cla}\label{wowwow}
The submanifolds, $G^\circ_1$ and $G^\circ_2$, of $S^5$ are isotopic.
\end{cla}
\noindent{\bf Note.}
Claim \ref{wowwow} is the virtual 2-knot version of Claim \ref{wow}.
\\
\noindent{\bf Proof of Claim \ref{wowwow}.}
$X^\circ$ has a handle decomposition
which consists of one 0-handle, 1-handles, 2-handles and no 3-handle.
The dimensions of the cores of these handles are 0, 1, or 2.
Hence the dimensions are $\leqq2$. (Here, it is important that the dimension is $\neq 3$.) The dimension of $S^5$ is 5.
Since $2<\frac{3(2+1)}{2}$, Claim \ref{wowwow} holds by
\cite{Haefliger3}.
\qed
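As a small consistency check (not needed for the proof; $h_1$ and $h_2$ below denote the numbers of 1- and 2-handles), note that $\chi(X)=0$ for the closed oriented 3-manifold $X$, so
$$\chi(X^\circ)=\chi(X)+1=1
\qquad\text{and}\qquad
\chi(X^\circ)=1-h_1+h_2,$$
hence such a decomposition necessarily has $h_1=h_2$.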
\begin{cla}\label{jimeisoku}
Let $M$ be a compact 3-manifold. By Theorem \ref{Hirsh}, $M$ can be embedded in $\mathbb{R}^5$.
The normal bundle $\nu$ of $M$ embedded in $\mathbb{R}^5$ is the trivial bundle for any embedding of $M$ in $\mathbb{R}^5$.
\end{cla}
\noindent{\bf Proof of Claim \ref{jimeisoku}.}
If $M$ is closed, $M$ bounds a Seifert hypersurface $V$ in $S^5$ (see \cite[Theorem 2, page 49]{Kirby}).
Take the normal bundle $\alpha$ of $V$ in $\mathbb{R}^5$.
Then $\nu$ is a sum of the vector bundles $\alpha|_M$ and an orientable $\mathbb{R}$-bundle over $M$.
Hence Claim \ref{jimeisoku} holds in this case. \\
In the case where $M$ is nonclosed,
take the double $DM$ of $M$ as abstract manifolds.
Then $DM$ can be embedded in $\mathbb{R}^5$.
By the previous paragraph, the normal bundle of this embedded $DM$ is trivial.
Then the restriction of this normal bundle to $M\subset DM$ is trivial.
By this fact and Claim \ref{wowwow}, Claim \ref{jimeisoku} holds in this case.
This completes the proof of Claim \ref{jimeisoku}.
\qed
\\
We introduce the virtual 2-knot version of
$\mathcal E(\alpha)$.
\begin{defn}\label{wakeru}
Take an abstract manifold $M$ in Definition \ref{Jbase},
where $\mathcal I(\alpha)$ is still contained in $M$.
Make $M\times[0,1]$.
We can obtain an embedded surface
$\mathcal J(\alpha)$
contained in $M\times[0,1]$
such that the projection of $\mathcal J(\alpha)$
by the projection $M\times[0,1]\to M$
is $\mathcal I(\alpha)$.
We suppose $\mathcal J(\alpha)\cap(M\times\{0\})=\phi$.
Take any embedding of $M$ in $\mathbb{R}^5$.
Define a submanifold $\mathcal E(\alpha)$ contained in $S^5$
to be the spinning submanifold made from $\mathcal J(\alpha)$ around $M$.
(Recall Claim \ref{jimeisoku}.)
\end{defn}
We prove the virtual 2-knot version of
Theorem \ref{honto},
which is Theorem \ref{vv}.
Theorem \ref{vv} is one of our main results.
It gives an affirmative answer to Question
\ref{North Carolina}.
\begin{thm}\label{vv}
Let $\alpha$ and $\alpha'$ be virtual 2-knot diagrams
which represent the same virtual 2-knot.
Make $\mathcal E(\alpha)$ and $\mathcal E(\alpha')$
by using a
representing 3-manifold $M$ $($respectively, $M')$ associated with
$\alpha$ $($respectively, $\alpha').$
Then the submanifolds, $\mathcal E(\alpha)$ and $\mathcal E(\alpha')$, of $\mathbb{R}^5$
are isotopic
even if $M$ is not diffeomorphic to $M'$.
\end{thm}
\noindent
{\bf Proof of Theorem \ref{vv}.}
It suffices to consider the following two cases:
\smallbreak
(i) $\alpha$ is obtained from $\alpha'$ by one of the classical moves.
\smallbreak
(ii) $\alpha$ is obtained from $\alpha'$ by one of the virtual moves.
\smallbreak
In case (ii), there is a diffeomorphism map $f:M\to M'$
such that $f(\alpha)$ is isotopic to $\alpha'$ in $M'$.
Note that $\alpha\subset M$ and that $\alpha'\subset M'$.\\
In case (i), take a closed 3-ball $B$ where the classical move is carried out.
Note that $M\cup B$ (respectively, $M'\cup B$) is a representing 3-manifold of
$\alpha$ (respectively, $\alpha'$).
Note that there is a diffeomorphism map $f:M\cup B\to M'\cup B$
such that $f(\alpha)$ is isotopic to $\alpha'$ in $M'\cup B$.
Note that $\alpha\subset M\cup B$ and that $\alpha'\subset M'\cup B$.
In both cases, by the following Theorem \ref{bdy},
Theorem \ref{vv} holds.
\qed\\
We prove the following Theorem \ref{ohoh},
which is the virtual 2-knot version of Theorem \ref{oh}.
The key idea of the proof is Claim \ref{wowwow}
(recall the Note below Claim \ref{wowwow}).
Let $i=1,2$.
Take $G_i$ defined in Claim \ref{wowwow}.
We can regard the tubular neighborhood of $G_i$ in $S^5$ as $G_i\times D^2$.
Embed a closed oriented surface
in $G_i\times [0,1]$,
where we regard $[0,1]$ as a radius of $D^2$,
and call the image $J_i$.
Assume that $J_i\cap(G_i\times\{0\})=\phi.$
Suppose that there is a bundle map
$\check\sigma:G_1\times D^2\to G_2\times D^2$
such that $\check\sigma$ covers an orientation preserving diffeomorphism map $\sigma:G_1\to G_2$
and such that $\check\sigma(J_1)=J_2$.
Define a submanifold $E_i$ contained in $S^5$
to be the spinning submanifold made from $J_i$
by the rotation in $G_i\times D^2$.
\begin{thm}\label{ohoh}
The submanifolds, $E_1$ and $E_2$, of $S^5$ are isotopic.
\end{thm}
\noindent{\bf Proof of Theorem \ref{ohoh}.}
We can suppose that $J_i\subset G^\circ_i\times [0,1]$.
By the existence of $\sigma$,
there is a bundle map $\check\tau:G^\circ_1\times D^2\to G^\circ_2\times D^2$
such that $\check\tau$ covers a diffeomorphism map
$\tau:G^\circ_1\to G^\circ_2$
and such that $\check\tau(J_1)=J_2$.
Note the following:
Let $f:M^\circ\to S^5$ be an embedding map.
We can regard $\tau$ as a diffeomorphism map
$M^\circ\to M^\circ$.
By Claim \ref{wowwow},
the submanifolds, $f(M^\circ)$ and $f(\tau(M^\circ))$, of $S^5$ are isotopic.
Therefore
the submanifolds, $E_1$ and $E_2$, of $S^5$ are isotopic.
\qed
\begin{thm}\label{bdy}
Replace
the condition that $M$ is a closed compact oriented 3-manifold
with
the condition that $M$ is a non-closed compact oriented 3-manifold.
Then Theorem \ref{ohoh} also holds.
\end{thm}
\noindent{\bf Proof of Theorem \ref{bdy}.}
The proof of Theorem \ref{bdy}
is done in a similar fashion to that of Theorem \ref{ohoh}.
The proof of Theorem \ref{bdy} is easier than that of Theorem \ref{ohoh}.
\qed\\
We have now completed the proof of Theorem \ref{vv}, and
answered Question
\ref{North Carolina}.
We next answer Question \ref{South Dakota}.
We define a consistent way
to put a
representing 3-manifold in $\mathbb{R}^5$.
\begin{defn}\label{hatena}
Let $\alpha$ be a virtual 2-knot diagram
contained in $\mathbb{R}^3$.
Regard $\mathbb{R}^3$ as
$\mathbb{R}^3\times\{0\}\times\{0\}$
$\subset\mathbb{R}^5=\mathbb{R}^3\times\mathbb{R}\times\mathbb{R}$.
Put `a
representing 3-manifold
for $\alpha$' in $\mathbb{R}^5$ as follows.
Take the neighborhood $T$ of $\alpha$ as
defined in Definition \ref{Jbase}.
Take a neighborhood of each of classical and virtual branch points
such that the neighborhood is diffeomorphic to the closed 3-ball
and
such that $\alphalpha\cap$(the neighborhood) is as drawn in Figure \ref{cvbr}. \\
\begin{figure}
\includegraphics[width=40mm]{cvbr.pdf}
\caption{{\bf
The intersection of
$\alpha$ and the neighborhood of
a classical or virtual branch point
explained in Definition \ref{hatena}
}\label{cvbr}}
\end{figure}
Let
$T'
=T-$Int(the neighborhoods of real branch points and those of virtual branch ones).
Along any virtual crossing line
we double $T'$
as done in Definition \ref{Jbase}.
Note that
this operation can be done in
$\mathbb{R}^5$
although it cannot be done in $\mathbb{R}^4$ in general.
Thus we obtain a compact oriented
3-dimensional submanifold $X\subset\mathbb{R}^5$ from $T'$. \\
For a real branch point, we attach
`the closed 3-ball which is a neighborhood of the real branch point'
to $X$.
Note that
near any virtual branch point,
the operation can be done in
$\mathbb{R}^3\times\mathbb{R}\times\{0\}$.
For a virtual branch point, we attach
`the closed 3-ball which is a neighborhood of the virtual branch point', as drawn in
Figures \ref{Delaware}-\ref{Hawaii},
to $X$.
Note that
in Figures \ref{Delaware}-\ref{Hawaii}
we draw $\mathbb{R}^4=\mathbb{R}^3\times\mathbb{R}\times\{0\}$.
Note that the virtual branch point vanishes in this closed 3-ball. \\
The resultant compact oriented 3-manifold is
a
representing 3-manifold with $\mathcal I(\alpha)$, which is defined in Definition \ref{Jbase}.
We call it $M_\iota$.
Recall that $\mathcal I(\alpha)$ has no virtual point and, in particular,
that $\mathcal I(\alpha)$ has no virtual branch point.
\\
Figure \ref{Delaware} draws a part of a
representing 3-manifold $M$ near a virtual branch point.
Figure \ref{Florida} adds a part of
$\mathcal I(\alpha)$
to Figure \ref{Delaware}.
Figure \ref{Georgia}
draws Figure \ref{Delaware}
seen from a different direction.
Figure \ref{Hawaii}
draws Figure \ref{Florida}
seen from a different direction.
\begin{figure}
\bigbreak
\includegraphics[width=140mm]{another2.pdf}
\caption{{\bf
A part of a
representing 3-manifold near a virtual branch point. We do not draw
a virtual branch point here.
In Figure \ref{Florida} we do it.
}\label{Delaware}}
\bigbreak
\end{figure}
\begin{figure}
\bigbreak
\includegraphics[width=145mm]{another3.pdf}
\vskip-30mm
\caption{{\bf
A part of a
representing 3-manifold near a virtual branch point. We draw
a virtual branch point here.
}\label{Florida}}
\bigbreak
\end{figure}
\begin{figure}
\bigbreak
\includegraphics[width=140mm]{mpvbp2.pdf}
\vskip-40mm
\caption{{\bf
A part of a
representing 3-manifold near a virtual branch point. We do not draw
a virtual branch point here.
In Figure \ref{Hawaii} we will do it.
}\label{Georgia}}
\end{figure}
\begin{figure}
\includegraphics[width=140mm]{mpvbp3.pdf}
\vskip-30mm
\caption{{\bf
A part of a
representing 3-manifold near a virtual branch point. We draw
a virtual branch point here.
We explain the lowest figure
in more detail in Figure \ref{hosoku}.
}\label{Hawaii}}
\bigbreak
\end{figure}
\begin{figure}
\includegraphics[width=110mm]{hachix.pdf}
\vskip-10mm
\caption{{\bf
The explanation of the lowest figure of
Figure \ref{Hawaii}.
The neighborhood of the red curve of that figure is
obtained by curving the middle figure of the above figures.
}\label{hosoku}}
\end{figure}
\end{defn}
We prove that we have an affirmative answer to
Question \ref{South Dakota}.
\begin{thm}\label{konnyaku}
Let $\alpha$ be a virtual 2-knot diagram.
Make $\mathcal E(\alpha)$ by using $M_\iota$,
and call it $\mathcal E_\iota(\alpha)$.
If $\alpha$ has no virtual branch point, then
$\mathcal E_\iota(\alpha)$ admits Rourke fibration.
\end{thm}
\noindent{\bf Proof of Theorem \ref{konnyaku}.}
Let $p\in\alpha$.
Regard $\mathbb{R}^5$ in Definition \ref{hatena}
as $\mathbb{R}^3\times\mathbb{R}\times\mathbb{R}$.
By the construction of $\mathcal E_\iota(\alpha)$,
$\mathcal E_\iota(\alpha)\cap(p\times\mathbb{R}\times\mathbb{R})$
is the empty set or a collection of circles
such that this correspondence satisfies Rourke's description.
Hence Theorem \ref{konnyaku} holds.
\qed
\begin{note}\label{vrei}
It is trivial that if we use another embedding of another $M$,
$\mathcal E(\alpha)$ associated with the embedding
may not admit Rourke fibration.
Such an example exists.
Let $\xi$ be the trivial 2-knot diagram.
It is trivial that $\xi$ admits Rourke fibration.
Let $\zeta$ be a virtual 2-knot diagram of the trivial 2-knot.
Assume that the singular point set of $\zeta$ consists of two virtual branch points
and one virtual segment. \\
A {\it virtual segment} is a segment with the following properties.
It is a segment included in a virtual 2-knot diagram.
One of its boundary points is a virtual branch point.
The points in the interior of the segment are virtual double points.
It is drawn in Figure \ref{Maryland}.
It is drawn in Figure \ref{sashimiv} if the branch point there is a virtual branch point.
See \cite{J}. \\
$\zeta$ does not admit Rourke fibration by Theorem \ref{Rmuri}.
\end{note}
Note the following claim.
\begin{cla}\label{shichi}
Take $\mathcal E_\iota(\alpha)$ in
Theorem $\ref{konnyaku}$.
If $\alpha$ includes a virtual branch point,
$\mathcal E_\iota(\alpha)$ does not admit Rourke fibration.
That is,
$\mathcal E_\iota(\alpha)$ is not embedded fiberwise.
\end{cla}
\noindent{\bf Note.}
It is trivial that if we use another embedding of another $M$,
$\mathcal E(\alpha)$ associated with the embedding
may admit Rourke fibration.
Such an example exists.
It is the one in Note \ref{vrei}.
\\
\noindent{\bf Proof of Claim \ref{shichi}.}
By Theorem \ref{Rmuri}. \qed \\
We give an alternative proof of Claim \ref{shichi} after Proof of Theorem \ref{Rmuri}.
\\
Theorem \ref{Rmuri} is an answer to Question \ref{North Dakota},
and is one of our main results.
\begin{thm}\label{Rmuri}
The answer to Question $\ref{North Dakota}$
is negative.
\end{thm}
\noindent{\bf Proof of Theorem \ref{Rmuri}.}
We prove it by `reductio ad absurdum'.
We suppose the following assumption, and will arrive at a contradiction.
\smallbreak
\noindent
{\bf Assumption.} The neighborhood of a virtual branch point can be covered by the fiber-circles.
\smallbreak
Note the fiber over the virtual segment as shown in Figure \ref{Maryland}.
Give a Euclidean metric to $\mathbb{R}^5$.
\begin{figure}
\bigbreak
\includegraphics[width=130mm]{0x1y2.pdf}
\vskip-60mm
\caption{{\bf
The assumption of `reductio ad absurdum'}\label{Maryland}}
\end{figure}
By Assumption,
the circles,
$A$ and $B$ in Figure \ref{Maryland},
meet at the circle
$C$ when $\varepsilon\to0$.
Let $s$ be the area of $C$.
When $\varepsilon\to0$, $A\to C$ and $B\to C$.
Hence, we have the following.
\begin{equation}\label{Massachusetts}
{\text{When $\varepsilon\to0$, (the area of $B)\to s$.}}
\end{equation}
Note that $s$ is a fixed positive real number.
\begin{figure}
\includegraphics[width=120mm]{x2.pdf}
\vskip-30mm
\caption{{\bf One-parameter families}\label{Missouri}}
\end{figure}
Take a one-parameter family for each point $p\in C.$ See Figure \ref{Missouri}.
Suppose that $a$ and $b$ go to $p$ when $\varepsilon\to0$.
Let $\delta(a,b)$ be
the distance along the trace of the one-parameter family
between $a$ and $b$.
\begin{equation}\label{Michigan}
{\text{When $\varepsilon\to0$, $\delta(a,b)\to0.$}}
\end{equation}
\noindent
In the fiber $\mathbb{R}^2$ which includes $A$ and $B$,
take any point $x\in A$.
Suppose that $x$ goes to $y\in B$ by the one-parameter family.
In this fiber $\mathbb{R}^2$
take a disc of radius $2\delta(x,y)$ whose center is $x\in A$.
Call the union of the discs $N(A)$. See Figure \ref{Mississippi}.
When $\varepsilon\to0$,
(the area of $N(A))\to 0$.
By (\ref{Michigan}),
$B\subset N(A)$.
Note that in this fiber $\mathbb{R}^2$,
$B$ (respectively, $A$) is not included in the inside of $A$ (respectively, $B$).
Therefore, by the Jordan curve theorem,
the inside of $B$ is also included in $N(A)$.
Hence we have the following.
\begin{equation}\label{Minnesota}
{\text{When $\varepsilon\to0$, (the area of $B)\to 0$.}}
\end{equation}
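The convergence of the area of $N(A)$ used above can be made quantitative.
The following estimate is only a sketch and is not taken from the source; it assumes that $A$ is a rectifiable closed curve whose length $\ell(A)$ stays bounded as $\varepsilon\to0$, and that the convergence in (\ref{Michigan}) is uniform on $C$, so that $r(\varepsilon):=2\sup_{x\in A}\delta(x,y)\to0$.
Choose $\lceil\ell(A)/r(\varepsilon)\rceil$ points on $A$ spaced at arc length at most $r(\varepsilon)$; every disc forming $N(A)$ is contained in the disc of radius $2r(\varepsilon)$ centered at the nearest chosen point, so
$$\text{(the area of }N(A))\ \leqq\ \Big\lceil \frac{\ell(A)}{r(\varepsilon)}\Big\rceil\,\pi\bigl(2r(\varepsilon)\bigr)^2\ \leqq\ 4\pi\,r(\varepsilon)\bigl(\ell(A)+r(\varepsilon)\bigr)\ \to\ 0
\quad(\varepsilon\to0),$$
and, since $B$ and the inside of $B$ are contained in $N(A)$, this yields (\ref{Minnesota}).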
\begin{figure}
\includegraphics[width=130mm]{z0x1y2.pdf}
\vskip-70mm
\caption{{\bf $N(A)$}\label{Mississippi}}
\end{figure}
By (\ref{Massachusetts}) and (\ref{Minnesota}),
we arrive at a contradiction.
This completes the proof of Theorem \ref{Rmuri}.
\qed\\
We give a direct proof of why $\mathcal E_\iota(\alpha)$ does not admit Rourke fibration, without using Theorem \ref{Rmuri}.
Note that it is not an alternative proof of Theorem \ref{Rmuri}.
\bigbreak
\noindent{\bf Alternative proof of Claim \ref{shichi}.}
If $p$ is a virtual branch point,
$p$ is in the boundary of a virtual segment in $\mathbb{R}^3$.
Take $\mathcal I(\alpha)$ immersed in $M$.
Let $\kappa:\mathcal I(\alpha)\to\alpha$ be the natural map defined in
Definition \ref{JV}.
We have the following.
$\kappa^{-1}$(the virtual segment)
is a union of two segments, $\Psi$ and $\Phi$.
A point of $\partial\Psi$ and that of $\partial\Phi$ meet at a point
as drawn in Figure \ref{Idaho}.
$\kappa$(this point) is the virtual branch point. \\
The two segments make an angle.
See Figures \ref{Delaware}-\ref{Hawaii}.
The angle is acute.
Even if we take an arbitrary
representing 3-manifold of the virtual 2-knot diagram $\alpha$,
the angle is acute, not obtuse.
Furthermore the angle is put as drawn there.
The reason for this is that there is always an acute angle as drawn in
Figure \ref{Idaho} whichever representing 3-manifolds we take. \\
As we announced in advance in Notes \ref{kabuto} and \ref{kuwagata},
we use Figures \ref{Arizona} and \ref{Arkansas}.
In particular, see the lowest figure of Figure \ref{Arkansas}.
Therefore
$\mathcal E_\iota(\alpha)\cap(p\times\mathbb{R}_u\times\mathbb{R}_v)$
is a bouquet,
not the empty set or a collection of circles.
\newpage{\color{white}a}
\vskip-30mm
\begin{figure}[H]
\includegraphics[width=170mm]{neta.pdf}
\vskip-90mm
\caption{{\bf
$\mathcal E_\iota(\alpha)$ made by the spinning construction can be embedded in $\mathbb{R}^5$,
but $\mathcal E_\iota(\alpha)$ does not admit Rourke fibration.
The reason why we cannot make the fiber over any virtual branch point a collection of circles is drawn. }\label{Idaho}}
\end{figure}
Therefore Claim \ref{shichi} holds.
\qed
\bigbreak
\section{The $\mathcal E$-equivalence}\label{vw}
We introduce a new equivalence relation
on the set of 1-(respectively, 2-)dimensional virtual knots.
\begin{defn}\label{zoo}
Let $K$ and $J$ be 1-(respectively, 2-)dimensional virtual knots.
If the submanifolds, $\mathcal E(K)$ and $\mathcal E(J)$,
of $\mathbb{R}^4$ (respectively, $\mathbb{R}^5$) are isotopic,
$K$ and $J$ are said to be
{\it $\mathcal E$-equivalent}.
See Theorem \ref{vv}, and the line right below Theorem \ref{honto} for $\mathcal E(\quad)$.
\end{defn}
\begin{thm}\label{milk}
{\rm {\bf (By \cite[Theorem 2.2]{Rourke} and \cite[Proposition 3.3]{Satoh}.)}}
If two virtual 1-knots are welded equivalent,
then they are $\mathcal E$-equivalent.
Hence there are two virtual 1-knots, $J$ and $K$,
such that $J$ is not virtually equivalent to $K$
but such that $\mathcal E(J)$ is isotopic
to $\mathcal E(K)$.
\end{thm}
\noindent{\bf Proof of Theorem \ref{milk}.}
By \cite[Theorem 2.2]{Rourke}, there are two virtual 1-knots, $J$ and $K$,
such that
$J$ is not virtually equivalent to $K$
but such that
$J$ is welded equivalent to $K$.
By \cite[Proposition 3.3]{Satoh},
$J$ and $K$ are $\mathcal E$-equivalent.
\qed\\
Thus it is natural to ask whether
we have the virtual 2-knot version of Theorem \ref{milk}.
In other words, are there virtual 2-knots, $J$ and $K$,
which are $\mathcal E$-equivalent but which are not virtually equivalent?
We answer this question below.
\\
Let $\alpha$ be a 1-dimensional virtual knot diagram defined in $\mathbb{R}^2$.
Regard $\mathbb{R}^3$ as the result of rotating $\mathbb{R}^2_{\geqq0}=\mathbb{R}^1\times\{t\vert t\geqq 0\}$ around $\mathbb{R}^1\times\{t\vert t=0\}$ as the axis.
Take $\alpha$ in
$\mathbb{R}^1\times\{t\vert t> 0\}$.
When we rotate $\mathbb{R}^2_{\geqq0}$, rotate $\alpha$ together.
Then we obtain a 2-dimensional virtual knot diagram in $\mathbb{R}^3$ naturally,
and call it $\mathcal O(\alpha)$.
Note that
$\mathcal O(\alpha)$ is a virtual 2-knot diagram made from $T^2$. \\
If 1-dimensional virtual knot diagrams, $\alpha$ and $\beta$, are virtually equivalent,
it is trivial that
the 2-dimensional virtual knot diagrams, $\mathcal O(\alpha)$ and $\mathcal O(\beta)$,
are virtually equivalent (see Definition \ref{JV}).
Hence it makes sense to define a 2-dimensional virtual knot $\mathcal O(K)$ for a 1-dimensional virtual knot $K$. \\
Let $X$ be a classical surface knot contained in $\mathbb{R}^4=\mathbb{R}^3\times\{t\in\mathbb{R}\}$. Take $X$ in $\mathbb{R}^3\times\{t>0\}$. Regard $\mathbb{R}^5$ as the result of rotating $\mathbb{R}^3\times\{t\geqq0\}$ around $\mathbb{R}^3\times\{t=0\}$ as the axis. Then we rotate $X$ together.
Call the resultant 3-dimensional submanifold of $\mathbb{R}^5$, ${\mathcal O}(X)$.
Note the following: If $X$ is diffeomorphic to
a closed
surface $\Sigma_g$,
then
${\mathcal O}(X)$ is diffeomorphic to
$\Sigma_g\times S^1$. \\
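For concreteness, the spun submanifold admits the following coordinate description; the explicit parametrization below is ours and is only a sketch in the notation above, where we write $\mathbb{R}^5=\mathbb{R}^3\times\mathbb{R}\times\mathbb{R}$:
$${\mathcal O}(X)=\bigl\{(x,\,t\cos\theta,\,t\sin\theta)\in\mathbb{R}^3\times\mathbb{R}\times\mathbb{R}\ \mid\ (x,t)\in X,\ \theta\in[0,2\pi)\bigr\}.$$
Since $t>0$ on $X$, the map $X\times S^1\to{\mathcal O}(X)$, $((x,t),\theta)\mapsto(x,t\cos\theta,t\sin\theta)$, is a diffeomorphism; this is why ${\mathcal O}(X)$ is diffeomorphic to $\Sigma_g\times S^1$ when $X$ is diffeomorphic to $\Sigma_g$. The same formula, one dimension lower, describes ${\mathcal O}(\alpha)\subset\mathbb{R}^3$. \\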
\begin{pr}\label{kakan}
Let $K$ be a virtual 1-knot.
Then
the submanifolds, $\mathcal E({\mathcal O}(K))$ and ${\mathcal O}(\mathcal E(K))$, of $\mathbb{R}^5$
are isotopic.
\end{pr}
\noindent{\bf Proof of Proposition \ref{kakan}.}
By the definitions. \qed
\\
The {\it standardly embedded torus} or {\it standard torus} is
a submanifold of $\mathbb{R}^4$,
diffeomorphic to the torus,
and put in the standard position.
Let $\Sigma_g$ be an oriented closed surface.
The {\it standardly embedded surface
$($diffeomorphic to $\Sigma_g)$}
or
{\it standard surface $($diffeomorphic to $\Sigma_g)$}
is defined as well.
Note that we can regard classical 1- (respectively, 2-) knots as virtual knots. \\
Let $R$ be the virtual reef knot
whose diagram is drawn
in \cite[Figure 3, section three]{Rourke}.
We cite the diagram in Figure \ref{vr}.
As written there, $R$ is a nontrivial virtual 1-knot,
is welded equivalent to the trivial 1-knot,
and has the group $\mathbb{Z}$. \\
\begin{figure}
\includegraphics[width=100mm]{vr.pdf}
\vskip-40mm
\caption{{\bf Virtual reef knot}\label{vr}}
\end{figure}
\begin{cla}\label{theta}
The virtual 2-knot $\mathcal O(R)$
is not virtually equivalent to
the standard torus.
\end{cla}
\begin{note}\label{tau}
The submanifolds, $\mathcal E(R)$ and the standard torus,
of $\mathbb{R}^4$ are isotopic
because the virtual 1-knot $R$ is welded equivalent to the unknot.
\end{note}
\noindent{\bf Proof of Claim \ref{theta}.}
The proof is done
in a way similar to that in \cite{Takeda} and in a generalized fashion of the manner in
\cite[section three]{Rourke}:
The fundamental group of the virtual reef knot $R$ is $\mathbb{Z}$.
However the fundamental group of the mirror image of $R$ is
non-trivial.
The fundamental group of the mirror image of $R$ is the lower fundamental group of $\mathcal O(R)$ and
its non-triviality demonstrates the non-triviality of
$\mathcal O(R)$
as a virtual 2-knot.
\qed\\
Theorem \ref{Maine} is one of our main results. \\
\begin{thm}\label{Maine}
There is a virtual 2-knot $K$
with the following conditions.
\smallbreak\noindent$(1)$
The virtual 2-knot $K$
is not virtually equivalent to
the standard surface.
\smallbreak\noindent$(2)$
The virtual 2-knot $K$
is $\mathcal E$-equivalent to
the standard surface.
\end{thm}
\noindent{\bf Proof of Theorem \ref{Maine}.}
Let $K$ be the virtual 2-knot $\mathcal O(R)$ in Claim \ref{theta}.
Claim \ref{theta} implies Theorem \ref{Maine}.(1).
Let $T$ denote the standard torus.
By Note \ref{tau}, $\mathcal E(R)=T$.
Proposition \ref{kakan} implies
$\mathcal E(K)
=\mathcal E(\mathcal O(R))
=\mathcal O(\mathcal E(R))
=\mathcal O(T)$,
where $=$ denotes the ambient isotopy
of submanifolds.
$\mathcal O(T)$ and $\mathcal E(T)$ are the standardly embedded $T^3$ in $\mathbb{R}^5$ by their definitions.
Hence we have Theorem \ref{Maine}.(2).
Therefore Theorem \ref{Maine} holds. \qed\\
We ask the following questions.
\begin{que}\label{France}
(1) Do we have the following?
Let $\Sigma_g$ be
a closed oriented genus $g$ surface.
Let $Q$ (respectively, $Q'$) be
a virtual surface-knot
made from $\Sigma_g$.
If $Q$ and $Q'$ have the group $\mathbb{Z}$,
then the submanifolds, $\mathcal E(Q)$ and $\mathcal E(Q')$,
of $\mathbb{R}^5$ are isotopic.
\smallbreak
\noindent
(2)
Is a virtual 1- (respectively, 2-) knot $K$ welded equivalent to the trivial 1- (respectively, 2-) knot
if $K$ has the group $\mathbb{Z}$?
\end{que}
\bigbreak
\section{The fibrewise equivalence}\label{New Mexico}
\subsection{
The fibrewise equivalence
is equal to the rotational welded equivalence,
and is different from the welded equivalence of virtual 1-knots
}\label{sub1}\hskip20mm\\%
We study the relations among
the fiberwise equivalence of virtual 1-knots,
the welded equivalence of them,
and
the rotational welded equivalence of them.
We mentioned it in the last few paragraphs of \S\ref{i3}.
See \cite{Rourke, Satoh} for the definition of the welded equivalence,
and \cite{Kauffman, Kauffmanrw, J} for that of the rotational welded equivalence,
as we also mentioned them in the last few paragraphs of \S\ref{i3}.\\
We first introduce the definition of the fiberwise equivalence of virtual 1-knots.
For our purpose (to prove Theorems \ref{smooth} and \ref{Montgomery}),
we will modify the definition a few times as below.
\begin{figure}
\includegraphics[width=100mm]{xaa.pdf}
\vskip-40mm
\caption{{\bf Fiberwise isotopy}\label{Fib}}
\end{figure}
\begin{defn}\label{Nevada}
Let $\alpha$ and $\beta$ be virtual 1-knot diagrams.
We say that
$\alpha$ and $\beta$ are {\it fiberwise equivalent}
if
Rourke's description of $\mathcal S(\alpha)$
and
that of $\mathcal S(\beta)$
are `fiberwise isotopic'.
In other words, this means that
$\alpha$ and $\beta$ satisfy the following conditions.
There is an
embedding map
$$g:S^1_b\times[0,1]\times S^1_f\hookrightarrow\mathbb{R}^2_b\times[0,1]\times\mathbb{R}^2_f$$
with the following properties.
See Figure \ref{Fib}.
\smallbreak\noindent
(1)
For any fixed $t\in[0,1]$, $g(S_b^1\times\{t\}\times S^1_f)\subset\mathbb{R}^2_b\times\{t\}\times\mathbb{R}^2_f$.
\smallbreak\noindent
(2)
For any fixed $p\in S^1_b$ and any fixed $t\in[0,1]$,
$g(\{p\}\times\{t\}\times S^1_f)$ is contained in
the same fiber $\{q\}\times\mathbb{R}^2_f$ for a point
$q\in\mathbb{R}^2_b\times[0,1]$.
\smallbreak\noindent (3)
Let $\pi:\mathbb{R}^2_b\times[0,1]\times\mathbb{R}^2_f\to\mathbb{R}^2_b\times[0,1]$.
$(\pi\circ g)(S_b^1\times\{0\}\times S^1_f)$
(respectively, \\$(\pi\circ g)(S_b^1\times\{1\}\times S^1_f)$)
$\subset \mathbb{R}^2_b\times\{0\}$ (respectively, $\mathbb{R}^2_b\times\{1\}$)
is the diagram $\alpha$ (respectively, $\beta$) without information whether
each crossing point is a classical one or a virtual one.
This information is given by the fiber-circles over each crossing point as
in Theorem \ref{Montana} and Definition \ref{Nebraska}.
$\pi\circ g$ meets $\mathbb{R}^2_b\times\{0,1\}$ transversely.
\bigbreak
In knot theory we usually use an `ambient' isotopy in order to define the equivalence relation of knots as below.
We impose the following condition (4).
(See \cite[sections 1.1 and 1.2]{BZ} for an explanation of this fact
in the 1-dimensional classical knot case.)
\smallbreak\noindent (4)
Let $g_t$ denote
$$g|_{S^1_b\times\{t\}\times S^1_f}: S^1_b\times\{t\}\times S^1_f\hookrightarrow\mathbb{R}^2_b\times\{t\}\times\mathbb{R}^2_f$$
for $0\leqq t\leqq1$.
There is an isotopy
$$H_t:\mathbb{R}^2_b\times\{t\}\times\mathbb{R}^2_f\to\mathbb{R}^2_b\times\{t\}\times\mathbb{R}^2_f \quad(0\leqq t\leqq1)$$
such that
$H_0$ is the identity map
and such that
$g_t=H_t\circ g_0$ for any $t\in[0,1]$.
We call $g$ a {\it special isotopy} between $\alpha$ and $\beta$.
\end{defn}
\bigbreak
\begin{defn}\label{anko}
Take $g$ in Definition \ref{Nevada}.
If $g'$ is obtained by moving $g$ by an ambient isotopy map
$G_t$, where $0\leq t\leq1$, $g_0=g$, and $g_1=g'$,
keeping the conditions $(1)$-$(4)$ of Definition \ref{Nevada},
then we say that $g'$ is {\it level preserving, fiberwise isotopic} or {\it special isotopic} to $g$,
or
that we {\it perturb $g$ in the special way} to obtain $g'$.
We write $g\sim g'$.
$G_t$ is called a
{\it level preserving, fiberwise isotopy}
or
{\it special isotopy} between $g$ and $g'$.
\end{defn}
\bigbreak
\begin{note}\label{kakanzu}
The following holds.
Let $\rho:S^1_b\times[0,1]\times S^1_f\to S^1_b\times[0,1]$ be the natural projection.
Then there is a (not necessarily smooth) continuous map \\
$\underline{g}:S^1_b\times[0,1]\to \mathbb{R}^2_b\times[0,1]$
such that $\pi\circ g=\underline{g}\circ\rho$.
That is, there is the following commutative diagram.
$$
\begin{matrix}
S^1_b\times[0,1]\times S^1_f&\stackrel{g}\to&\mathbb{R}^2_b\times[0,1]\times \mathbb{R}^2_f\\
\downarrow_\rho&\circlearrowright &\downarrow_\pi \\
S^1_b\times[0,1]&\stackrel{\underline{g}}\to&\mathbb{R}^2_b\times[0,1]
\end{matrix}
$$
\end{note}
\begin{defn}\label{neba}
Under the above condition, we say that $\underline{g}$ is {\it covered} by $g$.
\end{defn}
\bigbreak
The following theorem is one of our main results.
\begin{thm}\label{smooth}
Two virtual 1-knot diagrams $\alpha$ and $\beta$ are smooth fiberwise equivalent if and only if
$\alpha$ and $\beta$ are smooth rotational welded equivalent.
\end{thm}
\noindent{\bf Note.} See Note \ref{xmikan}. \bigbreak
\noindent
{\bf Proof of Theorem \ref{smooth}.}
The `if' part is easy.
We prove the `only if' part.
\\
\noindent{\bf Strategy.}
See (I) and (II) below.
We want to prove (I)$\Leftrightarrow$(II).
It is easy to prove (II)$\Rightarrow$(I).
We will prove (I) $\Rightarrow$ (II) as follows.
\bigbreak
\noindent(I) Smooth virtual 1-knot diagrams $\alpha$ and $\beta$ are smooth fiberwise equivalent.
\bigbreak
\noindent(II) Smooth virtual 1-knot diagrams $\alpha$ and $\beta$ are smooth rotational welded equivalent.
\bigbreak
See (1) below.
In Claim \ref{xbeef}
we will prove (I)$\Rightarrow$(1).
\bigbreak
\noindent(1) There is a PL virtual 1-knot diagram $\alpha'$ (respectively, $\beta'$)
which is piecewise smooth isotopic to $\alpha$ (respectively, $\beta$)
such that $\alpha'$ and $\beta'$ are PL fiberwise equivalent.
\bigbreak
See (2) below.
In Theorem \ref{fwrw}, we will prove (1)$\Rightarrow$(2).
It will be proved in the text
which starts from Proposition \ref{polygon},
and ends in Claim \ref{takusan}.
\bigbreak
\noindent(2) $\alpha'$ and $\beta'$ are PL rotational welded equivalent.
\bigbreak
In Lemma \ref{PLtosmooth}, we will prove (2)
$\Rightarrow$ (II). Thus we will finish the proof of (I) $\Rightarrow$ (II).
\bigbreak
\bigbreak
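For the reader's convenience, the chain of implications just described can be summarized as follows; this display is ours and only restates the plan above:
$$\text{(I)}\ \overset{\text{Claim \ref{xbeef}}}{\Longrightarrow}\ \text{(1)}\ \overset{\text{Theorem \ref{fwrw}}}{\Longrightarrow}\ \text{(2)}\ \overset{\text{Lemma \ref{PLtosmooth}}}{\Longrightarrow}\ \text{(II)},
\qquad \text{(II)}\Longrightarrow\text{(I)}.$$
\bigbreak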
Assume that smooth virtual 1-knot diagrams $\alpha$ and $\beta$ are smooth fiberwise equivalent.
We do not know whether or not there are two
special isotopies $g$ and $g'$ between $\alpha$ and $\beta$
with the following properties:
$g$ and $g'$ are not smooth
special isotopic
but are piecewise smooth special isotopic.
Although we do not answer this question,
we accomplish the proof of (I)$\Leftrightarrow$(II).
\bigbreak
Take $g$
in Definition \ref{Nevada}.
We do not know
whether there is a smooth $g'$ with $g'\sim g$
with the following properties:
There is a finite simplicial structure on
Im $\pi\circ g'$
which restricts to
a finite simplicial structure on the singular subset
of Im $\pi\circ g'$.
One reason is as follows. Im $\pi\circ g$ may be the projection of a wild embedding
for a smooth $g$ even if $g$ is not a wild embedding map.
Although we do not answer this question, we accomplish the proof of (I)$\Leftrightarrow$(II).
\begin{defn}\label{PLNevada}
Consider the conditions of Definition \ref{Nevada} in the PL category.
The equivalence relation is called {\it PL fiberwise equivalence}.
\end{defn}
\begin{cla}\label{xbeef}
If virtual 1-knot diagrams $\alpha$ and $\beta$ are smooth fiberwise equivalent,
then $\alpha$ and $\beta$ are PL fiberwise equivalent.
\end{cla}
\noindent{\bf Proof of Claim \ref{xbeef}.}
It is enough to prove that
the map $g$ in Definition \ref{Nevada} is approximated
by a fiberwise level-preserving PL embedding map. We prove it below.
Regard $S^1_b$ as $[0,1]/\sim$, where $0\sim1$.
Regard $S^1_f$ as $[0,1]/\sim$, where $0\sim1$.
Hence we can regard
$S^1_b\times[0,1]\times S^1_f$
as the one made from $[0,1]\times[0,1]\times[0,1]$ by these equivalence relations.
Let $n$ be any positive integer.
Take points
$(\frac{i}{2n},\frac{j}{2n},\frac{k}{2n})\in[0,1]\times[0,1]\times[0,1]$,
where $i$ (respectively, $j$, $k$) is any integer with the condition
$0\leqq i$ (respectively, $j$, $k$) $\leqq 2n$.
Let $l$ be any integer with the condition $0\leqq 2l$ (respectively, $2l+2$) $\leqq 2n$.
Take any cube $C$ whose vertices are
$(\frac{\alpha}{2n},\frac{\beta}{2n},\frac{\gamma}{2n})$,
where $\alpha$ (respectively, $\beta$, $\gamma$) is any integer in $\{2l,2l+2\}$.
Take a simplicial division on $S^1_b\times[0,1]\times S^1_f$ as follows.
\smallbreak\noindent(1)
0-simplices are all $(\frac{i}{2n},\frac{j}{2n},\frac{k}{2n})\in[0,1]\times[0,1]\times[0,1]$ as above.
\smallbreak\noindent(2)
1-simplices are defined as follows.
Take any cube $C$.
Note that each of the six faces includes nine 0-simplices,
that the union of the six faces includes 26 0-simplices,
and
that $C$ includes 27 0-simplices.
Take the 0-simplex $P$ in $C$ which is not included in any face.
Take any segment whose boundary is $P$ and one of the other 26 0-simplices.
Take 16 segments in each face of $C$ as drawn in Figure \ref{menkirikata}.
1-simplices are these two kinds of segments.
\begin{figure}
\includegraphics[width=120mm]{menkirikata.pdf}
\vskip-30mm
\caption{{\bf The 16 1-simplices on a face of $C$}\label{menkirikata}}
\end{figure}
\smallbreak\noindent(3)
The set of 1-simplices defines 2-simplices naturally.
\smallbreak\noindent(4)
The set of 2-simplices defines 3-simplices naturally.
\bigbreak
Let $n$ be sufficiently large.
Take the image of all 0-simplices by $g$ in $\mathbb{R}^2_b\times[0,1]\times\mathbb{R}^2_f$.
They determine a fiberwise level-preserving PL embedding map of $S^1_b\times[0,1]\times S^1_f$ naturally.
{\it Reason.} Im $g$ is a smooth regular submanifold.
Hence it has a tubular neighborhood.
This completes the proof of Claim \ref{xbeef}. \qed
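The phrase `sufficiently large' can be quantified. The following estimate is only a sketch and is not part of the source argument; it additionally assumes that $g$ is Lipschitz with a constant $L$ (which holds, for instance, when $g$ is $C^1$ on its compact domain) and writes $g_n$ for the PL map obtained by affine interpolation of the values of $g$ at the 0-simplices. For $x$ in a 3-simplex with vertices $v_0,\dots,v_3$ and barycentric coordinates $\lambda_0(x),\dots,\lambda_3(x)$,
$$\|g(x)-g_n(x)\|=\Bigl\|\sum_i\lambda_i(x)\bigl(g(x)-g(v_i)\bigr)\Bigr\|\ \leqq\ L\max_i\|x-v_i\|\ \leqq\ \frac{\sqrt3\,L}{n}\ \to\ 0\quad(n\to\infty),$$
since every 3-simplex of the division above is contained in a cube of side $1/n$. Hence, for $n$ large, the PL map lies in a tubular neighborhood of ${\rm Im}\,g$; this is the role of the tubular neighborhood invoked in the `Reason' above.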
\begin{note}\label{bango}
If $C=$(Im $g$)$\cap$(a fiber $\mathbb{R}^2_f$) is PL homeomorphic to a circle,
then $C$ is a polygon. However, the number of vertices of $C$ depends on the fiber.
\end{note}
\begin{note}\label{haruwa}
From here to the end of the proof of Theorem \ref{fwrw}, we work in the PL category
unless we indicate otherwise.
After that, we will go back to the smooth category.
When we move a map by isotopy, we take a PL subdivision if necessary.
\end{note}
Claim \ref{xbeef} implies the following.
\begin{pr}\label{polygon}
$g$ in Definition \ref{PLNevada} satisfies the condition that there is a finite simplicial structure on Im $\pi\circ g$
which restricts to a finite simplicial structure on the singular subset of Im $\pi\circ g$.
\end{pr}
\begin{figure}
\bigbreak
\includegraphics[width=150mm]{zudelta.pdf}
\vskip-40mm
\caption{{\bf The $\Delta^1$-move.}\label{zudelta}}
\end{figure}
We call the operation drawn in Figure \ref{zudelta},
the $\Delta^1$-move of virtual 1-knot diagrams.
Note that we do not draw the other part of this diagram. The other part may intersect the part drawn in Figure \ref{zudelta}.
By Proposition \ref{polygon}, $\alpha$, $\beta$ in Definition \ref{PLNevada}
have the following properties:
\begin{cla}\label{Delta}
$\alpha$ $($respectively, $\beta$$)$ is obtained from $\beta$ $($respectively, $\alpha$$)$ by
a finite sequence of $\Delta^1$-moves.
\end{cla}
\begin{defn}\label{PLyugentsuki}
Add the following condition to Definition \ref{PLNevada} without changing the other parts.
(Note that we work in the PL category.)
\smallbreak\noindent$(\ref{PLyugentsuki}.1)$ In each fiber $\mathbb{R}^2_f$, there are a finite number of circles.
$($That is, $<\infty.)$
\end{defn}
\noindent{\bf Note.}
See Note \ref{xudon}.
Recall Note \ref{bango}.
\\
Indeed, the following holds.
\begin{thm}\label{herasu}
Definitions $\ref{PLNevada}$ and $\ref{PLyugentsuki}$ are equivalent.
\end{thm}
\noindent{\bf Proof of Theorem \ref{herasu}.}
It is trivial that if $g$ satisfies Definition \ref{PLyugentsuki}, then $g$ satisfies Definition \ref{PLNevada}.
We prove that if $g$ satisfies Definition \ref{PLNevada},
then we can perturb $g$ in the special way so that $g$ satisfies Definition \ref{PLyugentsuki}.
Suppose that $g$ satisfies Definition \ref{PLNevada}.
Let $q\in\mathbb{R}^2_b$ and $t\in[0,1]$.
Since Im$g$ is a compact PL regular submanifold,
Im$g\cap(\{q\}\times\{t\}\times\mathbb{R}^2_f)$ is
a disjoint union of a finite number of circles and a finite number of annuli.
Note that the union of them is a regular submanifold of $\{q\}\times\{t\}\times\mathbb{R}^2_f$.
Take a tubular neighborhood $N$ of each annulus
in $\mathbb{R}^2_b\times[0,1]\times\mathbb{R}^2_f$, to be small enough.
Stretch each annulus in the direction perpendicular to $\{q\}\times\{t\}\times\mathbb{R}^2_f$.
Then we can obtain a new $g$ which satisfies Definition \ref{PLyugentsuki}.
The idea of how we stretch is drawn in Figure \ref{hipparu}.
Note that
Figure \ref{hipparu} draws `figures in the PL category' although the figures are smoothened.
\qed
\begin{figure}
\bigbreak
\includegraphics[width=145mm]{hipparu.pdf}
\vskip-37mm
\caption{{\bf
The idea of how we stretch $g(S^1_b\times[0,1]\times S^1_f)\cap N$
}\label{hipparu}}
\end{figure}
\bigbreak
A point
$p\in$Im$\pi\circ g$
$=(\pi\circ g)(S^1_b\times[0,1]\times S^1_f)$
$=(\underline{g}\circ\rho)(S^1_b\times[0,1]\times S^1_f)$
$=\underline{g}(S^1_b\times[0,1])$
is called a {\it multiple point} or {\it $n$-tuple point}
if $\underline{g}^{-1}(p)\subset S^1_b\times[0,1]$
consists of $n$ points ($n\geqq2$).
(Note that in Definition \ref{PLyugentsuki}, $n<\infty$.)
A point $p\in$Im$\pi\circ g$
is called a {\it single point}
if $\underline{g}^{-1}(p)$ consists of a single point.
The {\it singular point set} of Im$\pi\circ g$ consists of branch points and multiple points. \\
Note the following facts.
Take $g$ in Definition \ref{PLyugentsuki}, and $\underline{g}$ which is covered by $g$.
Recall that `cover' is defined in Definition \ref{neba}.
Suppose that $\underline{g}$ is a generic map.
Note Im $\underline{g}$.
We can define whether each double point is classical or virtual
by using the information of the fiber-circles over each point as in Theorem \ref{Montana},
Definition \ref{Nebraska}, Note \ref{kaiga}, and Definition \ref{suiri}.
There is a case where a classical (respectively, virtual) double point appears.
The information of fiber-circles over each branch point determines that the branch point is classical.
{\it Reason.} By Theorem \ref{Rmuri},
there are no virtual branch points. \\
Note each triple point.
There are three circles in the fiber over each triple point.
There are four cases of how the three circles are put in the fiber.
See Notes \ref{umeboshi} and \ref{faso}, Definition \ref{JW}, and Figure \ref{JW1}.
There is a case where each of the four occurs.
\begin{note}\label{umeboshi}
$(\pi\circ g)(S^1_b\times[0,1]\times S^1_f)$ in $\mathbb{R}^2_b\times[0,1]$
is a welded 2-knot with a fixed boundary in general,
and
is not a virtual 2-knot with a fixed boundary in general.
See \cite[sections 3.5-3.7]{J} for their definitions
and their difference.
In the welded 2-knot case we also use the terms `fiber-circle' and `Rourke fibration'.
See Note \ref{faso}.
\end{note}
Here we cite the definition of welded 2-knots from \cite{J}.
\begin{figure}
\bigbreak
\includegraphics[width=140mm]{l83from93.pdf}
\vskip-30mm
\caption{{\bf The singular point sets of welded 2-knots}
\label{JW1}}
\bigbreak
\end{figure}
Recall that a 2-knot diagram is (the image of) a generic map of a surface in 3-space, with
classical and virtual crossing data along the double-point arcs.
Also recall that 2-knot diagrams may be transformed by Roseman moves, which preserve the
crossing data locally.
\begin{defn}\label{JW}{\bf (\cite[section 3.6]{J}.)}
If all triple points of a 2-knot diagram are of the four types shown in Figure \ref{JW1},
the diagram is called a Welded 2-knot diagram. If a pair of Welded 2-knot
diagrams are related by a series of Roseman moves, with only Welded diagrams
appearing throughout the process, then the diagrams are Welded equivalent and
belong to the same Welded 2-knot type.
\end{defn}
\noindent{\bf Note.}
The above definition makes sense both in the smooth and the PL category.
The readers need not be familiar with Roseman moves in order to read this paper.
\begin{note}\label{faso}
When we consider circles in a fiber $\mathbb{R}^2$ as in Note \ref{umeboshi},
there is a new type drawn in Figure \ref{rashi},
which is not in Figure \ref{doremi}.
\end{note}
\begin{figure}
\includegraphics[width=100mm]{rashi.pdf}
\vskip-20mm
\caption{{\bf A new type of a nest of circles.}\label{rashi}}
\end{figure}
\begin{note}\label{oshii}
By Proposition \ref{polygon}, $\underline{g}$ in Definition \ref{PLyugentsuki}
satisfies the conditions (I)-(III) below, but $\underline{g}$ is not generic in general.
\smallbreak\noindent (I)
$\underline{g}:S^1_b\times[0,1]\to\mathbb{R}^2_b\times[0,1]$
is a continuous map such that
$\underline{g}(S^1_b\times\{t\})\subset\mathbb{R}^2_b\times\{t\}$ for any $t\in[0,1]$.
\smallbreak\noindent (II)
Let $t\in[0,1]$.
There are closed intervals,
$I_1,..., I_\mu$ ($\mu\in\mathbb{N}$),
such that
$S^1_b\times\{t\}=I_1\cup...\cup I_\mu$
and such that
$\underline{g}|_{I_i}$ is a PL
embedding for each $i$.
\smallbreak\noindent (III)
There are closed 2-discs,
$D^2_1,..., D^2_\nu$ ($\nu\in\mathbb{N}$),
such that
$S^1_b\times[0,1]=D^2_1\cup...\cup D^2_\nu$
and such that
$\underline{g}|_{D^2_i}$ is a PL
embedding for each $i$.
\end{note}
\begin{defn}\label{wasabi}
If a map
$\underline{g}:S^1_b\times[0,1]\to\mathbb{R}^2_b\times[0,1]$ satisfies the conditions (I)-(III) in Note \ref{oshii},
then $\underline{g}$ is said to be {\it level preserving}.
If $\underline{g}'$ is obtained by moving $\underline{g}$ by a homotopy
$\underline{G_t}$, where $0\leq t\leq1$, $\underline{G_0}=\underline{g}$ and $\underline{G_1}=\underline{g}'$,
keeping the conditions (I)-(III) in Note \ref{oshii},
then we say that $\underline{g}'$ is {\it level preserving homotopic} to $\underline{g}$
or
that we {\it perturb $\underline{g}$ in the special way} and obtain $\underline{g}'$.
We write $\underline{g}\sim\underline{g}'$.
$\underline{G_t}$ is called a {\it level preserving homotopy} or a {\it special homotopy}.
Let $g: S^1_b\times[0,1]\times S^1_f\to\mathbb{R}^2_b\times[0,1]\times \mathbb{R}^2_f$
be a map in Definition \ref{PLyugentsuki}.
Take a special homotopy
$\underline{G_t}$ of $\underline{g}$, and
a special isotopy
$G_t$ of $g$,
where $0\leqq t\leqq1.$
If $\underline{G_t}$
is covered by $G_t$
for any element $t$ in $\{t\,|\,0\leqq t\leqq1\}$,
then we say that
$\underline{G_t}$ is {\it covered} by $G_t$.
\end{defn}
\begin{defn}\label{gene}
Add the following condition to
Definition \ref{PLyugentsuki} without changing the other parts.
\noindent$(\ref{gene}.1)$
We can perturb $g$ in Definition \ref{PLyugentsuki} in the special way
so that $g$ covers a PL level preserving, generic map $S^1_b\times[0,1]\to\mathbb{R}^2_b\times[0,1]$.
\end{defn}
We prove the following theorem.
\begin{thm}\label{Wyoming}
Definition $\ref{gene}$ is equivalent to Definition $\ref{PLyugentsuki}$
$($and, by Theorem $\ref{herasu},$ is equivalent to Definition $\ref{PLNevada}.)$
\end{thm}
\begin{note}\label{shio}
Even if we perturb $\underline{g}: S^1_b\times[0,1]\to\mathbb{R}^2_b\times[0,1]$, which is covered by $g$,
in the special way
by a special homotopy $\underline{G_t}$,
$\underline{G_t}$ is not covered by a special isotopy $G_t$ of $g$
in general.
We must make
$\underline{G_t}$
under the condition that
$\underline{G_t}$ is covered by $G_t$.
\end{note}
\noindent
{\bf Proof of Theorem \ref{Wyoming}.}
It is trivial that
if $g$ satisfies Definition \ref{gene}, then $g$ satisfies Definition \ref{PLyugentsuki}.
We prove the following.
\begin{cla}\label{kamen}
If $g$ satisfies Definition \ref{PLyugentsuki},
we can perturb $g$ so that $g$ satisfies Definition \ref{gene}.
\end{cla}
\noindent{\bf Note}.
Recall that $\pi\circ g$ does not cover a generic map $\underline{g}$ in general.
\\
\noindent
{\bf Proof of Claim \ref{kamen}.}
\noindent
{\bf The first step.}
Recall that by Definition \ref{PLyugentsuki}, for each $t\in[0,1]$, (Im$(\pi\circ g)$)$\cap(\mathbb{R}^2\times\{t\})$ is an immersed circle. We prove the following.
\begin{cla}\label{Oregon}
We can perturb $g$
in the special way
so that the singular point set of \\
$(${\rm Im}$(\pi\circ g))\cap(\mathbb{R}^2\times\{t\})$
is a finite number of points
except for a finite number of levels $t\in[0,1]$.
In other words, we can do so that for only a finite number of levels $t\in[0,1]$,
the singular point set of $($Im$(\pi\circ g))\cap(\mathbb{R}^2\times\{t\})$ includes a finite number of segments.
\end{cla}
\noindent
{\bf Proof of Claim \ref{Oregon}.}
Let $I$ denote the interior of a 1-simplex which is in a simplicial complex structure of
the singular point set of (Im$(\pi\circ g)$)$\cap(\mathbb{R}^2\times\{\gamma\})$
for a real number $\gamma\in[0,1]$.
Assume that $I$ consists of multiple PL points.
Suppose that there are real numbers
$\alpha,\beta\in[0,1]$ with the following properties: \\
$\alpha<\gamma<\beta$.
For $\alpha<t<\beta$,
$h_t:\mathbb{R}^2_b\times\{\gamma\}\to\mathbb{R}^2_b\times(\alpha,\beta)$ is an isotopy
($t$ runs in $(\alpha,\beta)$)
such that
$h_t(({\rm Im}(\pi\circ g))\cap(\mathbb{R}^2\times\{\gamma\}))=$
$({\rm Im}(\pi\circ g))\cap(\mathbb{R}^2\times\{t \})$
for all $t\in(\alpha, \beta)$,
and such that
$h_t$ preserves
the information of fiber-circles over the
two immersed circles
which are put on both sides of $=$.
(Here,
the information of fiber-circles means what we define in
Theorem \ref{Montana}, Definition \ref{Nebraska}, Note \ref{kaiga}, and Definition \ref{suiri}.)
Note that $(\underline{g})^{-1}(I)$
is a disjoint union of $n$ open segments $I_1,...,I_n$ in $S^1_b\times[0,1]$.
Note that $\underset{\alpha<t<\beta}{\cup} h_t(I)$
consists of $n$-tuple points,
is an open set, and
is a discrete submanifold of $\mathbb{R}^2_b\times[0,1]$.
We can perturb $g$ in the special way
so that $\underset{\alpha<t<\beta}{\cup} h_t(I)$
separates into $n$ copies of
$\underset{\alpha<t<\beta}{\cup} h_t(I)$
and
so that we keep the boundary of the closure of
$\underset{\alpha<t<\beta}{\cup} h_t(I)$,
since
no new singularity of the immersed annulus appears.
Figure \ref{step1} is an example. \\
\begin{note}\label{ultra}
Figures \ref{step1}-\ref{dia1} draw `figures in the PL category' although the figures are smoothened.
When we move a map by isotopy, we take a PL subdivision if necessary.
\end{note}
\begin{figure}
\vskip-30mm
\includegraphics[width=170mm]{step1.pdf}
\vskip-50mm
\caption{{\bf An example of fiberwise isotopy. }\label{step1}}
\end{figure}
Note that
the boundary of the closure of each $I$ may have a singular point set.
The repetition of this procedure and
Proposition \ref{polygon}
imply
Claim \ref{Oregon}. \qed
\begin{note}\label{honmaya}
Note each point in the resultant part which is made
from $\underset{\alpha<t<\beta}{\cup} h_t(I)$ by the separation.
By the definition of $I$, it is a single point.
\end{note}
\noindent
{\bf The second step.} We prove the following.
\begin{cla}\label{Pennsylvania}
Suppose that $g$ satisfies
the condition of Claim {\rm \ref{Oregon}}.
We can perturb $g$ in the special way
so that $\pi\circ g$ covers a level preserving transverse immersion $\underline{g}$
except for a finite number of points contained in $S^1_b\times[0,1]$
with the following property:
Let $P$ be an exceptional point.
Then $\underline{g}^{-1}(\underline{g}(P))$ may be more than one point.
\end{cla}
\noindent
{\bf Proof of Claim \ref{Pennsylvania}.}
Since $g$ satisfies the condition of Claim \ref{Oregon},
the singular point set of Im$(\pi\circ g)$ is a 1-dimensional finite simplicial complex.
Recall Proposition \ref{polygon}
and Note \ref{honmaya}.
Take the interior $I$ of a 1-simplex in the singular point set of the simplicial complex structure
with the following property:
\smallbreak\noindent(1)
$\underline{g}^{-1}(I)$ is a disjoint union of $n$ open segments $I_1,...,I_n$ in $S^1_b\times[0,1]$ ($n\in\mathbb{N}$).
$\underline{g}\vert_{I_i}$ is an embedding map.
\smallbreak\noindent(2)
There is an open neighborhood $U$ of $I$ in $\mathbb{R}^2_b\times[0,1]$
with the following property:
There are open discs
$D^2_i$ embedded in $S^1_b\times[0,1]$
each of which is a tubular neighborhood of $I_i$ in $S^1_b\times[0,1]$
for each $i$.
$D^2_i\cap D^2_j=\phi$ for each distinct $i,j$.
$\underline{g}|_{D^2_i}$ is an embedding map.
$U\cap\underline{g}(S^1_b\times[0,1])$ is
$\underline{g}(D^2_1)\cup...\cup\underline{g}(D^2_n)$.
$\underline{g}(D^2_i)\cap\underline{g}(D^2_j)=I$ for each distinct $i,j$.
\smallbreak
We perturb $\underline{g}|_{(D^2_1\cup...\cup D^2_n)\cap(\underline{g}^{-1}(U))}$
in the special way
below
but we must remember Note \ref{shio}.
\smallbreak
Let $V$ be an open neighborhood of $U$ in $\mathbb{R}^2_b\times[0,1]$
such that $\overline{U}\subset V$.
Let $\wp:\mathbb{R}^2_b\times[0,1]\times\mathbb{R}^2_f\to\mathbb{R}^2_f.$
Combine this map $\wp$ and the diagram in Note \ref{kakanzu}:
$$
\begin{matrix}
S^1_b\times[0,1]\times S^1_f&\stackrel{g}\to&\mathbb{R}^2_b\times[0,1]\times \mathbb{R}^2_f
&\stackrel{\wp}\to&\mathbb{R}^2_f\\
\downarrow_\rho&\circlearrowright &\downarrow_\pi&& \\
S^1_b\times[0,1]&\stackrel{\underline{g}}\to&\mathbb{R}^2_b\times[0,1]&&
\end{matrix}
$$
\begin{cla}\label{shoga}
We can perturb $g$ in the special way,
keeping out $V$ $($not $U)$,
with the following properties:
The image $\wp(g(\rho^{-1}(D^2_i)))$
is a circle $C_i$.
We have $C_i\cap C_j=\phi$ for each distinct $i, j$.
The map $\wp\vert_{g(\rho^{-1}(D^2_i))}$ is the projection.
\end{cla}
\bigbreak\noindent{\bf Proof of Claim \ref{shoga}.}
Take a point $\sigma\in I$.
Let $\underline{g}^{-1}(\sigma)=\{\sigma_1,...,\sigma_n\}$ and $\sigma_i\in I_i$.
Then the image $\wp(g(\rho^{-1}(\sigma_i)))$ is a circle $C'_i$,
and we have $C'_i\cap C'_j=\phi$ for each distinct $i, j$.
We can take $g$ so that the circle $C_i$ which we want is this circle $C'_i$ for each $i$.
Then Claim \ref{shoga} holds. \qed
\bigbreak\noindent{\bf Note.}
The reason why we prepare $V$ is
as follows:
Before the perturbation, the map $\wp\vert_{g(\rho^{-1}(\partial D^2_i))}$ is not a projection.
Note $\partial D^2_i\subset\overline U$.
However
$\wp\vert_{g(\rho^{-1}(\partial D^2_i))}$ is the projection
after the perturbation.
\\
We next make $\underline{g}|_{(D^2_1\cup...\cup D^2_n)\cap(\underline{g}^{-1}(U))}$
a level preserving transverse immersion
since we can perturb $g$ in the special way,
keeping out $U$, with the following properties:
For any point $q\in D^2_i$
and any point $r$ in the circle $\rho^{-1}(q)$,
$\wp(g(r))\in\mathbb{R}^2_f$ is fixed while we perturb $g$.
Claim \ref{shoga} ensures that while we perturb $g$ in this way,
we keep the property that $g$ is an embedding map.
Figure \ref{Ohio} is an example of this
procedure. The repetition of this procedure implies Claim \ref{Pennsylvania}. \qed\\
\begin{figure}
\includegraphics[width=150mm]{tsume.pdf}
\vskip-35mm
\caption{{\bf A special
isotopy of $g$. The intersection of four sheets in the upper figure
is perturbed and is changed into the one in the lower figure.}\label{Ohio}}
\end{figure}
\noindent
{\bf The third step.} We prove the following.
\begin{cla}\label{Rhode Island}
Suppose that
$g$ satisfies the condition of Claim {\rm\ref{Pennsylvania}}.
We can perturb $g$
in the special way
so that $\pi\circ g$ covers a level preserving transverse immersion $\underline{g}$
except for a finite number of points contained in $S^1_b\times[0,1]$
with the following property:
Let $P$ be any exceptional point.
The set
$\underline{g}^{-1}(\underline{g}(P))$ consists of only one point.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent
{\betaf Proof of Claim \ref{Rhode Island}.}
Assume that
$\underline{g}^{-1}(\underline{g}(P))$ consists of $m$ points
$P_1,...,P_m$ ($m\in\mathbb{N}$) in $S^1_b\times[0,1]$.
Take an open neighborhood $U$ of $\underline{g}(P)$ in $\mathbb{R}^2_b\times[0,1]$ with the following properties:
For each $i$ there is an open disc
$D^2_i$ in $S^1_b\times[0,1]$
which is a tubular neighborhood of $P_i$ in $S^1_b\times[0,1]$.
$D^2_i\cap D^2_j=\phi$ for each distinct $i,j$.
$U\cap\underline{g}(S^1_b\times[0,1])$ is
$\underline{g}(D^2_1)\cup...\cup\underline{g}(D^2_m)$.
$\underline{g}(D^2_1)\cap...\cap\underline{g}(D^2_m)=\underline{g}(P)$.
For a pair $(i,j)$, we may have
$\underline{g}(D^2_i)\cap\underline{g}(D^2_j)\underset{\neq}{\supset}\underline{g}(P)$.
\smallbreak
We perturb $\underline{g}|_{(D^2_1\cup...\cup D^2_n)\cap(\underline{g}^{-1}(U))}$
in the special way
below
but we must remember Note \ref{shio}.
Take an open neighborhood $V$ of $U$ in $\mathbb{R}^2_b\times[0,1]$
such that $\overline{U}\subset V$.
We can perturb $g$ in the special way,
keeping out $V$ (not $U$),
with the following properties:
$\wp(g(\rho^{-1}(D^2_i)))$ is a circle $C_i$.
$\wp\vert_{g(\rho^{-1}(D^2_i))}$ is the projection. \\
We can make $\underline{g}|_{(D^2_1\cup...\cup D^2_n)\cap(\underline{g}^{-1}(U))}$
a level preserving transverse immersion except for a finite number of points
since we can perturb $g$ in the special way,
keeping out $U$,
with the following properties:
For any point $q\iotan D^2_i$
and any point $r$ in the circle $\rho^{-1}(q)$,
$\wp(g(r))\iotan\mathbb{R}^2_f$ is fixed while we perturb $g$.
(Note that while we perturb $g$ in this way,
we keep the property that $g$ is an
embedding map.)
The repetition of this procedure
and Note \ref{honmaya} imply
Claim \ref{Rhode Island}. \qed\\
\Z[\pi/\pi^{(n)}]oindent{\betaf The fourth step.}
Take $\pi\circ g$ and $\underline{g}$ in Claim \ref{Rhode Island}.
Let $P$ be any exceptional point. Recall that $P\in S_b^1\times[0,1]$.
Let $N(P)$ be the tubular neighborhood of $P$ in $S_b^1\times[0,1]$.
Take the tubular neighborhood $B$ of $\underline{g}(P)$ in $\mathbb{R}^2_b\times[0,1]$.
We can suppose that
$\underline{g}(N(P))\subset B$
and that
$\underline{g}(\partial N(P))\subset \partial B$.
The image $\underline{g}(N(P))$ makes $\underline{g}(P)$
a branch point
(recall Definition \ref{oyster}).
Here we ignore the information
of fiber circles over $P$.
The information of the Rourke fibers makes $\underline{g}(\partial N(P))\subset \partial B$ into
a virtual 1-knot diagram $\omega$ in $\partial B-$(a point).
Note that $\partial B-$(a point) is the 2-space and
that the point is not included in $\omega$.
Recall virtual segments defined in Note \ref{vrei}.
A {\it classical segment} is a segment with the following properties.
It is a segment included in a virtual 2-knot diagram.
One of its boundary points is a classical branch point.
The points in the interior of the segment are classical double points.
An example is drawn in Figure \ref{sashimiv} if the branch point there is a classical branch point. \\
\betaegin{cla}\ellongleftarrowbel{mochi}
We can assume that all branch points of Im $\underline{g}$ are classical Whitney branch points.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Claim \ref{mochi}.}
Since $\underline{g}(P)$ is a branch point,
$n$ virtual segments and $m$ classical segments
meet at
$\underline{g}(P)$,
where $\{n,m\}\subset\mathbb{N}\cup\{0\}$ and $n+m\geqq2$.
We can prove that there is no virtual segment
in the same fashion as the one in the proof of Theorem \ref{Rmuri}.
(Note that in \S\ref{v2} we proved Theorem \ref{Rmuri} in the smooth category
but we can prove the PL version of Theorem \ref{Rmuri} in the same way.)
Therefore
more than one classical segment meet at $\underline{g}(P)$.
Hence $\omega$ is a classical diagram and
determines a classical 1-knot.
\\
In order to complete the proof of Claim \ref{mochi}, we will prove the following Claim \ref{tantei}.
In order to prove Claim \ref{tantei}, we will prove Claims \ref{sukoshi}, \ref{tako}, and \ref{koma} below.
\begin{defn}\label{prod}
Let $u,v\in[0,1]$. Let $u\leq t\leq v$.
The map $\underline{g}|_{S^1_b\times[u,v]}$ is called a {\it product map}
if there is an isotopy $\iota_t$
of $\mathbb{R}^2$ from the identity map
such that $\iota_t:\mathbb{R}^2\times\{u\}\to\mathbb{R}^2\times\{t\}$
carries $({\rm Im}\underline{g})\cap(\mathbb{R}^2\times\{u\})$ to $({\rm Im}\underline{g})\cap(\mathbb{R}^2\times\{t\})$.
Let $B^3$ be an embedded closed 3-ball in $\mathbb{R}^2_b\times[0,1]$.
The map $\underline{g}|_{S^1_b\times[u,v]}$ is called a {\it product map out $B^3$}
if there is an isotopy $\iota_t$
of $\mathbb{R}^2$ from the identity map
such that $\iota_t:\mathbb{R}^2\times\{u\}\to\mathbb{R}^2\times\{t\}$
carries $({\rm Im}\underline{g}-B^3)\cap(\mathbb{R}^2\times\{u\})$
to $({\rm Im}\underline{g}-B^3)\cap(\mathbb{R}^2\times\{t\})$.
\end{defn}
We have the following.
\betaegin{figure}\iotancludegraphics[width=140mm]{todome.pdf}
\vskip-40mm
\caption{{\betaf
A branch point moved by a special
isotopy of $g$ }}\ellongleftarrowbel{todome}
\epsilonnd{figure}
\betaegin{cla}\ellongleftarrowbel{tantei}
By using a special
isotopy of $g$,
any branch point is moved
as drawn in Figure $\ref{todome}$:
Let $\alphalpha_u$ $($respectively, $\alphalpha_v)$ be an immersed circle determined by
$\underline{g}(S^1_b\times\{u\})\subset\mathbb{R}^2_b\times\{u\}$
$($respectively, $\underline{g}(S^1_b\times\{v\})\subset\mathbb{R}^2_b\times\{v\})$
with the information of Rourke fiber determined by
$g(S^1_b\times\{u\}\times S^1_f)\subset\mathbb{R}^2_b\times\{u\}\times\mathbb{R}^2_f$
$($respectively, \\$g(S^1_b\times\{v\}\times S^1_f)\subset\mathbb{R}^2_b\times\{v\}\times\mathbb{R}^2_f).$
The map $\underline{g}|_{S^1_b\times[u,v]}$ is a product map out $B$.
Hence $\alphalpha_u=\alphalpha_v\#\omega$,
where $\#$ denotes the connected sum of immersed circles into $\mathbb{R}^2$
and $=$ means that there is an orientation preserving diffeomorphism
of $\mathbb{R}^2$ which carries the left hand side to the right side one.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Claim \ref{tantei}.}
For each $t$, $(\mathbb{R}^2_b\times\{t\})\cap$Im $\underline{g}$ is connected.
Hence
the branch point is not a local maximal (respectively, minimal) point of
the restriction of the height function $\mathbb{R}^2_b\times[0,1]\to[0,1]$ to Im $\underline{g}$.
\\
Claim \ref{sukoshi} follows from Claim \ref{tako}.
\betaegin{cla}\ellongleftarrowbel{sukoshi}
Let $X$ be the closed interval contained in $\mathcal S$. Assume that $X$ does not have self-intersection.
Then, by using a special
isotopy of $g$,
we can move {\rm Int}$X$ as drawn in Figure \ref{igaini}
with the following properties:
We move {\rm Int}$X$ by an isotopy of embedding of {\rm Int}$X$,
keeping the position of $\partial X$ in $\mathbb{R}^2_b\times[0,1]$.
We keep the position of $\overline{\mathcal S-X}$ in $\mathbb{R}^2_b\times[0,1]$.
We keep the condition $X\cap(\mathcal S-X)=\phi.$
\epsilonnd{cla}
\betaegin{figure}
\iotancludegraphics[width=140mm]{igai.pdf}
\vskip-20mm
\caption{{\betaf Changing $X$.}\ellongleftarrowbel{igaini}}
\epsilonnd{figure}
In Figure \ref{beefsteak} there is an example of Claim \ref{sukoshi}.
There it is drawn
how $X$ changes under a special
isotopy of $g$
in the case of the upper two figures in
the right column of Figure \ref{igaini}.
Note that Int$X$ consists of double points.
Each point of $\partial X$ is a branch point, a double point, or a triple point.
Let $B$ be an open disc contained in $S^1_b\times[0,1]\times S^1_f$. By Definition \ref{PLNevada}.(1), $\pi\circ g(B)$ is not parallel to $\mathbb{R}^2_b\times\{0\}$.
Note that if $\pi\circ g(B)$ is parallel to $\mathbb{R}^2_b\times\{0\}$,
the phenomenon in the right column of Figure \ref{beefsteak} does not occur. \\
\betaegin{figure}
\iotancludegraphics[width=120mm]{beef.pdf}
\caption{{\betaf
While the middle part of two sheets approaches by
a special
isotopy of $g$,
the intersection $X$ in Lemma \ref{sukoshi} changes.
}\ellongleftarrowbel{beefsteak}}
\epsilonnd{figure}
\betaigbreak
\betaegin{cla}\ellongleftarrowbel{tako}
Let $B^3$ be a closed $($respectively, open$)$ 3-ball embedded in $\mathbb{R}^2_b\times[0,1]$.
Take any orientation preserving isotopy of
diffeomorphisms of $B^3$ fixing $\partial B^3$ from the identity map.
We can give a coordinate $(x,y,t)$ to each point $p\in B^3\subset\mathbb{R}^2_b\times[0,1]$.
Suppose that this isotopy carries each such $p$ to a point whose coordinate is $(x,y,t')$,
where $t'$ may or may not be equal to $t$.
Use this isotopy and make a homotopy of $\underline{g}.$
Suppose that this homotopy is a special
homotopy of $\underline{g}$.
Then this homotopy of $\underline{g}$ can be covered by a
special
isotopy of $g$.
\epsilonnd{cla}
By Claim \ref{sukoshi},
we can let the interiors of all classical segments lie below (respectively, above) the branch point
with respect to the height,
as drawn in Figure \ref{todome}.
By the first, second and third steps,
the singular point set of Im $\underline{g}$
is a finite simplicial complex.
Hence we have the following.
\betaegin{cla}\ellongleftarrowbel{koma}
There are only a finite number of $t\in[0,1]$ with the following property:
There is no $\varepsilon>0$ such that
the map $\underline{g}|_{S^1_b\times[t-\varepsilon, t+\varepsilon]}$
is a product map out $B$.
\epsilonnd{cla}
By Claims \ref{tako} and \ref{koma}, we have Claim \ref{tantei}.
\qed
\\
\betaegin{cla}\ellongleftarrowbel{kinako}
$\omega$ defines the trivial knot.
Hence we obtain $\alphalpha_v$ from $\alphalpha_u$
by using only classical Reidemeister moves.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Claim \ref{kinako}.}
By the map $g|_{S^1_b\times[u,v]\times S^1_f}$,
$\alphalpha_u$ and $\alphalpha_v$ are fiberwise equivalent.
Therefore
the submanifolds, $\mathcal S(\alphalpha_u)$ and $\mathcal S(\alphalpha_v)$, of $\mathbb{R}^4$
are isotopic.
Hence
$\pi_1(\mathbb{R}^4-\mathcal S(\alpha_u))\cong
\pi_1(\mathbb{R}^4-\mathcal S(\alpha_v))$.
Hence
the group of $\alphalpha_u$ and that of $\alphalpha_v$ are isomorphic.
Since $\alpha_u=\alpha_v\#\omega$,
the group of $\alphalpha_u$ is
the free product of that of $\alphalpha_v$ and that of $\omega$.
Hence
the group of $\omega$ is $\mathbb{Z}$.
Since $\omega$ defines a classical 1-knot, and the trivial 1-knot is the only classical 1-knot whose group is $\mathbb{Z}$,
$\omega$ defines the trivial 1-knot.
Since $\omega$ is a classical 1-knot diagram and represents the trivial 1-knot,
$\omega$ is changed into the trivial 1-knot diagram by using only classical Reidemeister moves.
Hence Claim \ref{kinako} holds.
\qed
\\
It is easy to prove that
if two virtual 1-knot diagrams are obtained from each other
by using only classical Reidemeister moves,
then they are fiberwise equivalent.
Therefore,
we change $\underline{g}|_{S^1_b\times[u,v]}$ in $B$
so that
the map $\underline{g}|_{S^1_b\times[u,v]}$ becomes a level preserving
generic map.
Hence the following holds: If $\underline{g}(S^1_b\times[u,v])$ includes a branch point,
it is a classical Whitney branch point.
These classical Whitney branch points appear
when we carry out a classical Reidemeister $I$ move
while we change $\alpha_u$ into $\alpha_v$.
After repeating this procedure,
all branch points of Im $\underline{g}$ are classical Whitney branch points.
This completes the proof of Claim \ref{mochi}. \qed\\
This completes the proof of Claim \ref{kamen}. \qed\\
This completes the proof of Theorem \ref{Wyoming}. \qed \\
Claim \ref{kinako} implies the following Proposition \ref{amakara}.
\betaegin{defn}\ellongleftarrowbel{shuza}
Virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are said to be
{\iotat strongly fiberwise equivalent}
if $\alphalpha$ and $\betaeta$ satisfy the conditions
which are made by replacing the phrase
`level preserving generic map' with
`level preserving transverse immersion'
without changing other parts in Definition \ref{gene}.
\epsilonnd{defn}
\betaegin{pr}\ellongleftarrowbel{amakara}
If virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are fiberwise equivalent,
there is a sequence of virtual 1-knot diagrams,
$\alphalpha=\alphalpha_1,\betaeta_1,\alphalpha_2,\betaeta_2,...,
\alphalpha_{k-1},\betaeta_{k-1},\alphalpha_k,\betaeta_k=\betaeta$
$(k\iotan\mathbb{N})$,
such that
$\alphalpha_i$ and $\betaeta_i$
are strongly fiberwise equivalent
$(1\elleqq i\elleqq k)$
and such that
$\betaeta_i$ and $\alphalpha_{i+1}$ $(1\elleqq i\elleqq k-1)$
are classical move equivalent $($and therefore rotational welded equivalent$)$.
\epsilonnd{pr}
We prove the following theorem.
\betaegin{thm}\ellongleftarrowbel{ike}
If $g$ satisfies Definition $\ref{shuza}$,
then the following hold.
Let $\mathcal S$ be the singular point set of
$(\pi\circ g)(S^1_b\times[0,1]\times S^1_f)$ in Definition $\ref{shuza}.$
Note that $\mathcal S$ is a finite 1-dimensional simplicial complex.
\smallbreak\Z[\pi/\pi^{(n)}]oindent{\rm (i)}
$\mathcal S\cap(\mathbb{R}^2_b\times\{0$ $($respectively, $1)\})$ is
a set of virtual and classical crossing points of $\alphalpha$ $($respectively, $\betaeta),$
and therefore is a set of double points.
It consists of 0-simplices.
Only one 1-simplex
is attached to each of these 0-simplices.
These 1-simplices
meet \\
$\mathbb{R}^2_b\times\{0$ $($respectively, $1)\}$ transversely.
\smallbreak\Z[\pi/\pi^{(n)}]oindent{\rm (ii)}
Triple points are 0-simplices.
$($Recall
Notes $\ref{umeboshi}$ and $\ref{faso}$, and Definition $\ref{JW}.)$
\smallbreak\Z[\pi/\pi^{(n)}]oindent{\rm (iii)}
The restriction of
`the height function $\mathfrak h:\mathbb{R}^2_b\times[0,1]\to[0,1]$'
to the interior of any 1-simplex
in $\mathcal S$
has no critical point.
$($Hence we have the following:
For each $\zeta\iotan[0,1]$, $\mathcal S\cap(\mathbb{R}^2_b\times\{\zeta\})$ is a finite number of points.
No 1-simplex is parallel to $\mathbb{R}^2_b\times\{0\}.)$
\smallbreak\Z[\pi/\pi^{(n)}]oindent{\rm (iv)}
Let $\zeta\iotan(0,1).$
$\mathcal S\cap(\mathbb{R}^2_b\times\{\zeta\})$ includes at most one 0-simplex.
\smallbreak\Z[\pi/\pi^{(n)}]oindent{\rm (v)}
In $\mathbb{R}^2_b\times(0,1)$, 0-simplices
appear only in the two
cases of Figure $\ref{zero}$.
\betaegin{figure}
\vskip-30mm
\iotancludegraphics[width=135mm]{zero.pdf}
\vskip-30mm
\caption{{\betaf 0-simplices in $\mathcal S$ }}
\ellongleftarrowbel{zero}
\epsilonnd{figure}
\epsilonnd{thm}
\Z[\pi/\pi^{(n)}]oindent
{\betaf Proof of Theorem \ref{ike}.}
Theorem \ref{ike}.(i)
follows from Definition \ref{shuza}.(5) for any simplicial complex structure.
Theorem \ref{ike}.(ii)
holds for any simplicial complex structure
by the definition of simplicial complex structure.
\\
Proof of Theorem \ref{ike}.(iii).
Suppose that there is an $X$
as in the example explained in Claim \ref{sukoshi} and Figure \ref{beefsteak}.
Repeating this procedure
we can take a simplicial complex structure
such that
(any 1-simplex)$\cap(\mathbb{R}^2_b\times\{t\})$
for any $t\iotan[0,1]$ is a finite number of points.
Therefore
the restriction of $\mathfrak h$ to the interior of any 1-simplex of this simplicial complex structure
has a finite number of critical points.
Make a new simplicial complex structure
so that the critical points are new 0-simplices and
so that we keep the conditions of Theorem \ref{ike}.(i) and (ii).
Suppose that
there is a 0-simplex
$e^0$ to which only two 1-simplices
$e^1_1$ and $e^1_2$, attach,
and that
$e^0$ is not a critical point of the restriction of $\mathfrak h$ to
(Int$e^1_1)\cup e^0\cup$(Int$e^1_2$)
=Int($e^1_1\cup e^0\cup e^1_2$).
Make a new simplicial complex structure
such that
$e^1_1\cup e^0\cup e^1_2$
is changed into a new 1-simplex
without changing other simplicial complex structure.
This completes the proof of Theorem \ref{ike}.(iii).
\\
Theorem \ref{ike}.(iv)
holds because, by Claims \ref{koma} and \ref{tako},
we can change the height of any 0-simplex
so that we keep the condition of Theorems \ref{ike}.(i)-(iii).
\\
Proof of Theorem \ref{ike}.(v).
There are only two
cases:
(P) Only two 1-simplices attach to a 0-simplex.
(Q) Only six 1-simplices attach to a 0-simplex.
Note that, by Theorem \ref{ike}.(iii), each 1-simplex is attached to two different 0-simplices.
In the case (P),
by Theorem \ref{ike}.(iii),
the 0-simplex
exists as drawn in Type P of Figure \ref{zero}.
In the case (Q),
as drawn in Figure \ref{beefsteak} associated with Claim \ref{sukoshi},
we can move 1-simplex
so that we have the condition as drawn in Type Q of Figure \ref{zero}, and
so that we keep the condition of Theorems \ref{ike}.(i)-(iv).
See Figure \ref{tsuika2} for an example of this move.
\betaegin{figure}
\iotancludegraphics[width=150mm]{tsuika2.pdf}
\vskip-40mm
\caption{{\betaf
The singularity in the upper figure is made into the one in the lower which consists of one Type Q and three Type P.}}
\ellongleftarrowbel{tsuika2} \epsilonnd{figure}
This completes
the proof of Theorem \ref{ike}.(v).\\
This completes the proof of Theorem \ref{ike}.
\qed
\\
We have the following theorem.
\betaegin{thm}\ellongleftarrowbel{fwrw}
Two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are PL fiberwise equivalent if and only if
$\alphalpha$ and $\betaeta$ are PL rotational welded equivalent.
\epsilonnd{thm}
\Z[\pi/\pi^{(n)}]oindent
{\betaf Proof of Theorem \ref{fwrw}.}
The `if' part is easy.
We prove the `only if' part.
By Proposition \ref{amakara}, it suffices to prove Claim \ref{takusan}
\betaegin{cla}\ellongleftarrowbel{takusan}
Two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are
PL strongly fiberwise equivalent only if
$\alphalpha$ and $\betaeta$ are PL rotational welded equivalent.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Claim \ref{takusan}.}
Let $C_\zeta=\underline{g}(S^1_b\times\{\zeta\})
=\underline{g}(S^1_b\times[0,1])
\cap(\mathbb{R}^2_b\times\{\zeta\})
=(\pi\circ g(S^1_b\times[0,1]\times S^1_f))
\cap(\mathbb{R}^2_b\times\{\zeta\})$.
By Theorem \ref{ike},
$C_\zeta$ is
an immersed circle in $\mathbb{R}^2_b\times\{\zeta\}$ and
its singular point set is a finite number of points.
$C_\zeta$ changes from $\alphalpha$ to $\betaeta$ step by step
as $\zeta$ runs from 0 to 1.
If, for some $\zeta_p$, $C_{\zeta_p}$ includes a 0-simplex
of the simplicial complex structure
in Theorem \ref{ike},
then a classical or virtual Reidemeister move is carried out there,
and such moves are carried out only there.
If $C_\zeta$ includes no 0-simplex for $\zeta_q<\zeta<\zeta_r$,
then $C_\zeta$ is not changed while $\zeta$ runs from $\zeta_q$ to $\zeta_r$.
We investigate how $C_\zeta$ changes in detail.
Near a 0-simplex
in $\mathbb{R}^2_b\times[0,1]$,
Im $\underline{g}$ is drawn as in Figure \ref{tsuika}
since $\underline{g}$ is a transverse immersion.
Here, note that we can move $\mathcal S$ by using
a special
isotopy of $g$. \\
\betaegin{figure}
\iotancludegraphics[width=110mm]{tsuika.pdf}
\vskip-10mm
\caption{{\betaf How sheets intersect near Types P and Q.}}
\ellongleftarrowbel{tsuika} \epsilonnd{figure}
Therefore
we have only the following two facts on $\mathcal S$ and local moves on the knot diagrams.
\betaigbreak
\Z[\pi/\pi^{(n)}]oindent (i)
Let $\sigma, \tau\iotan[0,1]$.
Suppose that $\mathcal S\cap(\mathbb{R}^2_b\times(\sigma,\tau))$ includes no 0-simplex.
It holds that $\underline{g}|_{S^1_b\times[\sigma,\tau]}$ is a product map.
Then $(\pi\circ g)(S^1_b\times[0,1]\times S^1_f)\cap(\mathbb{R}^2_b\times\{\sigma\})$ can be obtained from $(\pi\circ g)(S^1_b\times[0,1]\times S^1_f)\cap(\mathbb{R}^2_b\times\{\tau\})$ by an isotopy of $\mathbb{R}^2_b$.
\betaigbreak
\Z[\pi/\pi^{(n)}]oindent (ii)
Let $\xi\in[0,1].$
Suppose that $\mathcal S\cap(\mathbb{R}^2_b\times\{\xi\})$ includes only one 0-simplex.
Suppose that
$\mathcal S\cap(\mathbb{R}^2_b\times(\xi, \xi+\varepsilon])$
(respectively,
$\mathcal S\cap(\mathbb{R}^2_b\times[\xi-\varepsilon, \xi))$)
includes no 0-simplex.
Let
$D=(\pi\circ g)(S^1_b\times[0,1]\times S^1_f)\cap(\mathbb{R}^2_b\times\{\xi-\varepsilon'\})$
and
$U=(\pi\circ g)(S^1_b\times[0,1]\times S^1_f)\cap(\mathbb{R}^2_b\times\{\xi+\varepsilon'\})$.
If the 0-simplex
is of Type P or Q,
then $U$ is obtained from $D$ by
one welded move other than a virtual Reidemeister $I$ move.
(Note.
Type P causes classical and virtual Reidemeister $II$ moves.
Type Q causes classical and virtual Reidemeister $III$ moves.
Four types of triple points correspond to four types of classical and virtual Reidemeister $III$ moves.)
Therefore
$\alphalpha$ is changed into $\betaeta$ by welded moves other than the virtual Reidemeister $I$ move.
Hence $\alphalpha$ is rotational welded equivalent to $\betaeta$.
This completes the proof of Claim \ref{takusan}. \qed\\
This completes the proof of Theorem \ref{fwrw}.
\qed\\
We will complete the proof of Theorem \ref{smooth} and go back to the smooth category.
We said that the `if' part of Theorem \ref{smooth} is easy.
We will prove the `only if' part of Theorem \ref{smooth} by using the following lemma.
\betaegin{lem}\ellongleftarrowbel{PLtosmooth}
Let $\alphalpha$ and $\betaeta$ be smooth virtual 1-knot diagrams.
Let $\alphalpha'$ $($respectively, $\betaeta')$ be a PL virtual 1-knot diagram
which is piecewise smooth, planar, ambient isotopic to $\alphalpha$ $($respectively, $\betaeta)$.
Then we have the following.
$\alphalpha$ and $\betaeta$ are smooth rotational welded equivalent if and only if
$\alphalpha'$ and $\betaeta'$ are PL rotational welded equivalent.
\epsilonnd{lem}
\noindent{\bf Proof of Lemma \ref{PLtosmooth}.}
Let $\xi$ and $\zeta$ be smooth virtual 1-knot diagrams.
If $\xi$ and $\zeta$ are PL, planar, ambient isotopic to a PL virtual 1-knot diagram $\gamma$,
then $\xi$ is smooth, planar, ambient isotopic to $\zeta$.
{\it Reason.} Smooth the corners of $\gamma$.
Each PL rotational welded Reidemeister move can be regarded as a smooth rotational welded Reidemeister move.
This completes the proof of Lemma \ref{PLtosmooth}.
\qed\\
Assume that two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are smooth fiberwise equivalent.
By Claim \ref{xbeef}, they are PL fiberwise equivalent.
By Theorem \ref{fwrw}, they are PL rotational welded equivalent.
By Lemma \ref{PLtosmooth}, they are smooth rotational welded equivalent.
Therefore the `only if' part of Theorem \ref{smooth} is true.
This completes the proof of Theorem \ref{smooth}. \qed\\
We are now back to the smooth category.
\\
\Z[\pi/\pi^{(n)}]oindent {\betaf Note.}
Figure \ref{dia1} explains Figure \ref{tsuika2} in more detail.
\betaegin{figure} \vskip-10mm \iotancludegraphics[width=120mm]{dia1.pdf}
\caption{{\betaf
This pair of the left figure and the right one is an example of a pair of the figures of Figure \ref{tsuika2}. The left (respectively, right) figure is an example of a sequence of diagrams associated with the upper (respectively, lower) figure of Figure \ref{tsuika2}. The left sequence is perturbed and is made into the right sequence. These diagrams are drawn without information of virtual multiple points and classical ones.
}}\ellongleftarrowbel{dia1} \epsilonnd{figure}
\betaegin{note}\ellongleftarrowbel{xudon}
In \cite{Rourke}, the fiberwise equivalence is defined by the following definition.
We call the equivalence relation the {\iotat $f$-fiberwise equivalence} in this paper.
Note that we work in the smooth category.
\betaegin{defn}\ellongleftarrowbel{yugentsuki}
Add the following condition to Definition \ref{Nevada} without changing the other parts.
We call the equivalence relation the {\iotat $f$-fiberwise equivalence}.
Note that we work in the smooth category.
\smallbreak\Z[\pi/\pi^{(n)}]oindent$(\ref{yugentsuki}.1)$ In each fiber $\mathbb{R}^2_f$, there are a finite number of circles.
$($That is, $<\iotanfty.)$
\epsilonnd{defn}
We said that it is easy to prove the following (i). It is also easy to prove the following (ii).
\smallbreak\Z[\pi/\pi^{(n)}]oindent
(i) If virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are rotational welded equivalent,
then $\alphalpha$ and $\betaeta$ are fiberwise equivalent.
\smallbreak\Z[\pi/\pi^{(n)}]oindent
(ii) If virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are rotational welded equivalent,
then $\alphalpha$ and $\betaeta$ are $f$-fiberwise equivalent.
\betaigbreak
Theorem \ref{smooth} and the above (ii) imply the following (iii).
\smallbreak\Z[\pi/\pi^{(n)}]oindent
(iii) If virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are fiberwise equivalent,
then $\alphalpha$ and $\betaeta$ are $f$-fiberwise equivalent.
(Note that we work in the smooth category.)
\\
The converse of (iii) is trivial. Hence we have the following:
Virtual 1-knot diagrams $\alpha$ and $\beta$ are fiberwise equivalent
if and only if $\alpha$ and $\beta$ are $f$-fiberwise equivalent.
\epsilonnd{note}
\betaigbreak
\betaegin{note}\ellongleftarrowbel{xmikan}
Although Rourke claimed in \cite[Theorem 4.1]{Rourke} that
two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are
fiberwise equivalent if and only if $\alphalpha$ and $\betaeta$ are welded equivalent
in the PL (respectively, smooth) category,
we state that this claim is wrong, as we mentioned
in the last few paragraphs of \S\ref{i3}.
The reason is Theorems \ref{smooth} and \ref{fwrw} together with Claim \ref{panda}.
\epsilonnd{note}
We introduce a new equivalence relation of the set of virtual 1-knot diagrams.
\betaegin{defn}\ellongleftarrowbel{parity}
Let $\alphalpha$ and $\betaeta$ be virtual 1-knot diagrams.
We say that
$\alphalpha$ and $\betaeta$ are {\iotat virtually parity equivalent}
if $\alphalpha$ and $\betaeta$ have the same parity of virtual crossing points.
\epsilonnd{defn}
We prove several results associated with the virtual parity.
\betaegin{cla}\ellongleftarrowbel{hirumeshi}
If two virtual 1-knot diagrams
$\alphalpha$ and $\betaeta$ are
rotational welded equivalent,
then $\alphalpha$ and $\betaeta$ are virtually parity equivalent.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Claim \ref{hirumeshi}.}
We can obtain $\alpha$ from $\beta$
by some welded moves
other than the virtual Reidemeister $I$ move.
\qed
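\bigbreak\noindent{\bf Note.}
To make the parity bookkeeping explicit (an elementary check recorded for the reader's convenience; the notation $v(\cdot)$ is used only in this remark), write $v(\gamma)$ for the number of virtual crossing points of a virtual 1-knot diagram $\gamma$.
Among the welded moves other than the virtual Reidemeister $I$ move, the virtual Reidemeister $II$ move creates or deletes exactly two virtual crossing points, and every other move leaves the number of virtual crossing points unchanged.
Hence each move in the sequence above changes $v$ by $0$ or $\pm2$, so that
$$
v(\alpha)\equiv v(\beta) \pmod 2 .
$$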
\betaegin{cla}\ellongleftarrowbel{panda}
The welded equivalence does not imply the rotational welded equivalence.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent{\betaf Note.}
By their definitions, the rotational welded equivalence implies the welded equivalence.
\\
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Claim \ref{panda}.}
Call the virtual 1-knot diagram in Figure \ref{New Hampshire}
the {\it virtual figure $\infty$ knot diagram}.
\betaegin{figure}
\iotancludegraphics[width=80mm]{x5.pdf}
\vskip-30mm
\caption{{\betaf The virtual figure $\iotanfty$ knot diagram}\ellongleftarrowbel{New Hampshire}}
\epsilonnd{figure}
The virtual figure $\iotanfty$ knot diagram
and the trivial 1-knot diagram
are welded equivalent by the definition
but are not rotational welded equivalent
by Claim \ref{hirumeshi}.
This completes the proof of Claim \ref{panda}.
\qed\\
\vskip10mm
\subsection{Related topics}\ellongleftarrowbel{sub2}\hskip10mm\\%
Theorem \ref{Montgomery} is one of our main results.
\betaegin{thm}\ellongleftarrowbel{Montgomery}
If two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$ are fiberwise equivalent,
then $\alphalpha$ and $\betaeta$ are virtually parity equivalent.
\epsilonnd{thm}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Theorem \ref{Montgomery}.}
Theorem \ref{smooth}
and Claim \ref{hirumeshi} imply Theorem \ref{Montgomery}. \qed\\
It is known that
the usual trefoil knot diagram is not welded equivalent
to the trivial knot diagram
(see
\cite{Kauffman, Kauffmanrw, Rourke, Satoh, J}).
Hence these two diagrams are not also rotational welded equivalent.
Hence we have the following.
\betaegin{cla}\ellongleftarrowbel{ice}
The converse of Theorem $\ref{Montgomery}$ is not true in general.
\epsilonnd{cla}
We have the following.
\betaegin{cla}\ellongleftarrowbel{cream}
The number of virtual crossing points of virtual 1-knot diagrams is not
an invariant of the fiberwise equivalence
$($respectively, the rotational welded equivalence$)$
in general.
\epsilonnd{cla}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Claim \ref{cream}.}
The two virtual knot diagrams in Figure \ref{oldWestVirginia}
are fiberwise equivalent (respectively, rotational welded equivalent). \qed\\
We introduce a `weaker' equivalence relation than the fiberwise equivalence defined by
Definition \ref{Nevada}.
We want to replace the `level preserving embedding of $S^1_b\times[0,1]$' in
Definition \ref{Nevada}
with an oriented compact surface
which is not necessarily
a `level preserving embedding of $S^1_b\times[0,1]$',
and loosen a few conditions there.
We prove
in Theorem \ref{parityhozon}
that
this equivalence relation is equivalent to
the virtual parity equivalence relation.
\betaegin{figure}
\iotancludegraphics[width=70mm]{oldWestVirginia.pdf}
\caption{{\betaf
Two virtual knot diagrams which are rotational welded equivalent
(respectively, fiberwise equivalent).
}}\ellongleftarrowbel{oldWestVirginia}
\epsilonnd{figure}
\betaegin{defn}\ellongleftarrowbel{sigma}
Let $\alphalpha$ and $\betaeta$ be virtual 1-knot diagrams.
We say that
$\alphalpha$ and $\betaeta$ are {\iotat weakly fiberwise equivalent}
if $\alphalpha$ and $\betaeta$ satisfy
the following conditions.
\smallbreak\Z[\pi/\pi^{(n)}]oindent$(1)$
There is
a compact generic oriented surface $F$ with boundary whose boundary is a disjoint union of two circles, which is contained in $\mathbb{R}^2_b\times[0,1]$, and
$F$ is covered by Rourke's fibration.
Note that thus there is a submanifold of $\mathbb{R}^2_b\times[0,1]\times\mathbb{R}^2_f$ which is diffeomorphic
to $F\times S^1$.
\smallbreak\Z[\pi/\pi^{(n)}]oindent$(2)$
The in-out information of fiber circles gives $\alphalpha$ and $\betaeta$
the information whether each multiple (respectively, branch) point is virtual or classical
as in Theorem \ref{Montana}.
\smallbreak\Z[\pi/\pi^{(n)}]oindent$(3)$
$\partial F$ is $\alphalpha$ and $\betaeta$.
$F$ meets $\mathbb{R}^2_b\times\{0\}$ $($respectively, $\mathbb{R}^2_b\times\{1\})$
at $\alphalpha$
$($respectively, $\betaeta)$
transversely.
\smallbreak
If $F$ above is an annulus,
we say that
$\alphalpha$ and $\betaeta$ are {\iotat fiberwise cobordant}.
\epsilonnd{defn}
\betaegin{thm}\ellongleftarrowbel{parityhozon}
Let $\alphalpha$ and $\betaeta$ be virtual 1-knot diagrams.
$\alphalpha$ and $\betaeta$ are weakly fiberwise equivalent
if and only if
$\alphalpha$ and $\betaeta$ are virtually parity equivalent.
\epsilonnd{thm}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Theorem \ref{parityhozon}.}
The `only if' part:
We use `reductio ad absurdum'.
We suppose the following assumption:
$\alpha$ and $\beta$ are not virtually parity equivalent.
Take a generic surface which connects $\alpha$ and $\beta$
as in Definition \ref{sigma}.
Then this generic surface must have at least one
virtual branch point because
the union of
$\alpha$ and $\beta$
has
an odd number of
virtual crossing points.
By Theorem \ref{Rmuri}, such a generic surface cannot exist.
(See Figure \ref{Texas}.)
We arrived at a contradiction.
Hence the above assumption is false and the `only if' part is true.
\\
\betaegin{figure}
\iotancludegraphics[width=80mm]{x5yx.pdf}
\caption{{\betaf Let $\alphalpha$ be the virtual figure $\iotanfty$ knot diagram and $\betaeta$,
the trivial 1-knot diagram.
For example, $(\pi\circ g)(S_b^1\times[0,1]\times S^1_f)$ cannot be realized as drawn above,
by Theorem \ref{Rmuri}.
}}\ellongleftarrowbel{Texas}
\epsilonnd{figure}
The `if' part:
It suffices to prove that
$\alpha\amalg(-\beta)$ in $\mathbb{R}^2$,
where $\amalg$ denotes the disjoint union of the diagrams,
is weakly fiberwise equivalent to the trivial 1-knot diagram.
We can attach bands
as drawn in Figure \ref{bands}
so that the orientations of virtual knot diagrams and those of the bands are compatible.
Thus $\alpha\amalg(-\beta)$ is
weakly fiberwise equivalent to
the disjoint union of
a nonnegative even number of copies of the virtual figure $\infty$ knot diagram and a classical link diagram.
We can attach a band to two copies of the virtual figure $\infty$ knot diagram and
combine them as drawn in Figure \ref{WestVirginia},
so that the orientation of the band and those of the knot diagrams
are compatible,
and call the resultant diagram $\zeta$.
Thus $\alpha\amalg(-\beta)$ in $\mathbb{R}^2$ is
weakly fiberwise equivalent to
the disjoint union of a finite number of copies of $\zeta$.
It is easy to prove that $\zeta$ is
rotational welded equivalent to the trivial knot.
Suppose that we obtain the $\mu$-component trivial 1-link diagram after that.
Attach $\mu-1$ copies of 2-disc to $(\mu-1)$ components of this trivial 1-link diagram.
Hence $\alphalpha\alphamalg(-\betaeta)$ is weakly fiberwise equivalent to the trivial 1-knot diagram.
Hence $\alphalpha$ and $\betaeta$ are weakly fiberwise equivalent.
This completes the proof of Theorem \ref{parityhozon}. \qed
\betaegin{figure}\betaigbreak
\iotancludegraphics[width=140mm]{bands.pdf}
\vskip-70mm
\caption{{\betaf Attaching bands.}\ellongleftarrowbel{bands}}
\epsilonnd{figure}
\betaegin{figure}
\betaigbreak
\iotancludegraphics[width=72mm]{WestVirginia.pdf}
\smallbreak
\caption{{\betaf
A combination of two copies of the virtual figure $\iotanfty$ knot diagram
}}\ellongleftarrowbel{WestVirginia}
\betaigbreak
\epsilonnd{figure}
\betaegin{defn}\ellongleftarrowbel{nono}
We define
the `{\iotat nonorientably weakly fiberwise equivalence}'.
In Definition \ref{sigma}
replace `oriented surface' with `non-orientable surface',
and
replace `weakly fiberwise equivalent' with
`nonorientably weakly fiberwise equivalent'.
\epsilonnd{defn}
\betaegin{thm}\ellongleftarrowbel{yuru}
Let $\alphalpha$ and $\betaeta$ be virtual 1-knot diagrams.
$\alphalpha$ and $\betaeta$ are nonorientably weakly fiberwise equivalent
if and only if
$\alphalpha$ and $\betaeta$ are virtually parity equivalent.
\epsilonnd{thm}
\Z[\pi/\pi^{(n)}]oindent{\betaf Proof of Theorem \ref{yuru}.}
The `if' part:
By Theorem \ref{parityhozon},
$\alphalpha$ and $\betaeta$ are weakly fiberwise equivalent.
It is trivial that
if $\alphalpha$ and $\betaeta$ are weakly fiberwise equivalent,
then $\alphalpha$ and $\betaeta$ are nonorientably weakly fiberwise equivalent.
{\iotat Reason.}
Take a generic oriented surface
for $\alphalpha$ and $\betaeta$
as in Definition \ref{sigma}.
Take an immersed Klein bottle in $\mathbb{R}^2_b\times[0,1]$.
Connect
the generic oriented surface
and
the immersed Klein bottle
by using an embedded 3-dimensional 1-handle in $\mathbb{R}^3$
such that
the intersection of the 1-handle and
the oriented surface (respectively, immersed Klein bottle)
is only the attaching part of the 1-handle.
The resultant generic nonorientable surface implies that
$\alphalpha$ and $\betaeta$ are nonorientably weakly fiberwise equivalent.
The proof of the `only if' part is
the same as that of the `only if' part of Theorem \ref{parityhozon}
if we replace the words `Definition \ref{sigma}'
with `Definition \ref{nono}',
and remove the sentence `(See Figure \ref{Texas}.)'.
\qed
\betaigbreak
Define the {\it Whitney degree} of a virtual 1-knot diagram $\alpha$ to be
the Whitney degree of $\alpha$
regarded as an immersed oriented circle in $\mathbb{R}^2$.
Two virtual 1-knot diagrams $\alpha$ and $\beta$
are said to be {\it Whitney parity equivalent}
if the parity of the Whitney degree of $\alpha$ is the same as that of $\beta$.
Two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$
are said to be {\iotat classically parity equivalent}
if the parity of the classical crossing points of $\alphalpha$ is the same as that of $\betaeta$.
The following holds.
Let $\alphalpha$ and $\betaeta$ be virtual 1-knot diagrams which are rotational welded equivalent (respectively, fiberwise equivalent). (Note Theorem \ref{smooth}.)
Then
$\alphalpha$ and $\betaeta$
are classically parity equivalent
if and only if
$\alphalpha$ and $\betaeta$
are Whitney parity equivalent. \\
\noindent{\it Reason.}
$\alpha$ and $\beta$
are Whitney parity equivalent
if and only if
the number of classical Reidemeister $I$ moves is even
in a sequence of rotational welded moves by which $\alpha$ is changed into $\beta$.
Note that we cannot use the virtual Reidemeister $I$ move by definition.
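\smallbreak\noindent{\bf Note.}
In more detail (a routine verification recorded only for convenience; the notation $w(\cdot)$ and $c(\cdot)$ is used only in this remark), write $w(\gamma)$ for the Whitney degree and $c(\gamma)$ for the number of classical crossing points of a virtual 1-knot diagram $\gamma$.
A classical Reidemeister $I$ move changes $w$ by $\pm1$ and changes $c$ by $\pm1$, while every other rotational welded move leaves $w$ unchanged and changes $c$ by $0$ or $\pm2$.
Hence, along any sequence of rotational welded moves changing $\alpha$ into $\beta$,
$$
w(\alpha)-w(\beta)\equiv c(\alpha)-c(\beta) \pmod 2 ,
$$
which gives the equivalence stated above.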
\betaigbreak
Some readers may ask the following question:
Suppose that
two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$
do not have any classical crossing point
and that Whitney degrees are different.
Then is it valid that
$\alphalpha$ and $\betaeta$ are not rotational welded equivalent?
The answer is negative.
We show a counter example in Figure \ref{Whitneydegree}.(i) (respectively, \ref{Whitneydegree}.(ii)).
The proof that each pair is rotational welded equivalent is left to the reader.
\\
Two virtual 1-knot diagrams $\alphalpha$ and $\betaeta$
are said to be {\iotat mixed parity equivalent}
if the parity of
the sum of the classical and virtual crossing points of $\alphalpha$ is
the same as
that of $\betaeta$.
The following holds.
Let $\alphalpha$ and $\betaeta$ be virtual 1-knot diagrams which are welded equivalent. Then
$\alphalpha$ and $\betaeta$
are mixed parity equivalent
if and only if
$\alphalpha$ and $\betaeta$
are Whitney parity equivalent. \\
\noindent{\it Reason.}
$\alpha$ and $\beta$
are Whitney parity equivalent
if and only if
the total number of
classical and virtual Reidemeister $I$ moves
is even
in a sequence of welded moves by which $\alpha$ is changed into $\beta$.
\betaegin{figure}
\betaigbreak
\iotancludegraphics[width=120mm]{Whitneydegree.pdf}
\smallbreak
\caption{{\betaf
Two pairs of virtual knot diagrams
}}\ellongleftarrowbel{Whitneydegree}
\betaigbreak
\epsilonnd{figure}
\betaigbreak
\section{Virtual high dimensional knots}\ellongleftarrowbel{vhigh}
\Z[\pi/\pi^{(n)}]oindent
See \cite{KauffmanOgasa, KauffmanOgasaII, KauffmanOgasaB, LevineOrr, Ogasa}
for codimension two high dimensional knots.
See \cite{Haefliger, Haefliger2, Levine} for high codimensional high dimensional knots.
It is natural to attempt to define virtual high dimensional knots and
their one-dimensional-higher tubes.
We could define $n$-dimensional virtual knots by using virtual $n$-knot diagrams in $\mathbb{R}^{n+1}$.
We would make any virtual $n$-knot
into a submanifold
of
(a closed oriented $n$-manifold $M$) $\times[0,1]$
as we do in the virtual 1- and 2-dimensional cases.
We want to make a bijection between
the set of such submanifolds and that of virtual $n$-knots.
The 1-dimensional case is done (see Theorem \ref{vk} and Definition \ref{Jbase}).
We should define a one-dimensional-higher tube
as the spinning submanifold made from $K$ around $M$.
Satoh's method does not make sense in dimensions greater than one.
Rourke's way also does not make sense, by Theorem \ref{Rmuri}.
Furthermore we must note
that
the $n$-dimensional case ($n\in\mathbb{N}-\{1,2\}$) of Theorems \ref{oh} and \ref{ohoh}
does not necessarily hold
in the smooth category (respectively, the PL category)
because
it is not trivial to produce an analogue of their proofs,
in view of the following fact of \cite{Hudson}:
There are an integer $p\geqq3$ and
two smooth (respectively, PL) $a$-dimensional submanifolds, $X$ and $Y$, of $S^{a+p}$ which are diffeomorphic (respectively, PL homeomorphic) to each other
but which are not isotopic
as smooth submanifolds (respectively, PL submanifolds).
Completing these topics in this section is left to the readers as problems.
\betaegin{thebibliography}{ABCD}
\betaibitem{B1} D. Bar-Natan:
Balloons and hoops and their universal finite-type invariant, BF theory, and an ultimate Alexander invariant, {\iotat Acta Math. Vietnam.} 40 (2015) 271-329.
\betaibitem{B2}
D. Bar-Natan and Z. Dansco:
Finite-type invariants of w-knotted objects, I: w-knots and the Alexander polynomial:
{\iotat Algebr. Geom. Topol.} 16 (2016) 1063-1133.
\betaibitem{Boy}
W. Boy: \"Uber die Curvatura integra und die Topologie geschlossener Flächen,
{\iotat Math. Ann.} 57 (1903) 151-184.
\betaibitem{BZ}
G. Burde and H. Zieschang:
Knots, De Gruyter,
{\iotat Studies in Math.} no.5 {\iotat De Gruyter} (1985).
\betaibitem{Crane}
L. Crane: 2-d physics and 3-d topology. Comm. Math. Phys. 135 (1991), no. 3, 615--640.
\betaibitem{Haefliger}
A. Haefliger:
Knotted (4k − 1) spheres in 6k space {\iotat Annals of Math.} 75 (1962) 452-466.
\betaibitem{Haefliger2}
A. Haefliger:
Differentiable embeddings of $S^n$ in $S^{n+q}$ for $q > 2$
{\iotat Annals of Math} 83 (1966) 402-436.
\betaibitem{Haefliger3}
A. Haefliger:
Differentiable imbeddings
{\iotat Bull. Amer. Math. Soc.} 67 (1961) 109-112.
\betaibitem{Hirsh} W. Hirsch: The imbedding of bounding manifolds in euclidean space {\iotat Ann. of Math} 74 (1961), 494–497.
\betaibitem{Hudson}
J. F. P. Hudson: Knotted tori {\iotat Topology} 2 (1963) 11-22.
\betaibitem{Jones}
V. F. R. Jones: Hecke Algebra representations of braid groups and link
{\iotat Ann. of Math.} 126, 335-388, 1987.
\betaibitem{KauffmanJ}
L. H. Kauffman:
State models and the Jones polynomial
{\iotat Topology} 26 (1987) 395-407.
\betaibitem{KauffmanSaleur}
L. H. Kauffman and H. Saleur:
Free fermions and the Alexander-Conway polynomial
{\iotat Comm. Math. Phys.} 141 (1991) 293–327.
\betaibitem{Kauffmanp}
L. H. Kauffman:
Knots and physics,
Second Edition.
{\iotat World Scientific Publishing} 1994.
\betaibitem{Kauffman1}
L. H. Kauffman:
Talks at MSRI Meeting in January 1997, AMS Meeting at University of Maryland, College Park in March 1997, Isaac Newton Institute Lecture in November 1997, Knots in Hellas Meeting in Delphi, Greece in July 1998, APCTP-NANKAI Symposium on Yang-Baxter Systems, Non-Linear Models and Applications at Seoul, Korea in October 1998
\betaibitem{Kauffman}
L. H. Kauffman:
Virtual knot theory,
{\iotat European J. Combin.} 20 (1999) 663-690,
math/9811028 [math.GT].
\betaibitem{Kauffmani}
L. H. Kauffman:
Introduction to virtual knot theory,
{\iotat J. Knot Theory Ramifications} 21 (2012), no. 13, 1240007, 37 pp.
\betaibitem{Kauffmanrw}
L. H. Kauffman:
Rotational virtual knots and quantum link invariants.
J. Knot Theory Ramifications 24 (2015), no. 13, 1541008, 46 pp.
\betaibitem{KauffmanPath}
L. H. Kauffman: Chern-Simons theory, Vassiliev invariants, loop quantum gravity and functional integration without integration. Internat. J. Modern Phys. A 30 (2015), no. 35, 1530067, 27 pp.
\betaibitem{KauffmanOgasa}
L. H. Kauffman and E. Ogasa:
Local moves of knots and products of knots,
{\iotat
Volume three of Knots in Poland III,
Banach Center Publications} 103 (2014) 159-209,
{arXiv: 1210.4667 [math.GT]}.
\betaibitem{KauffmanOgasaII}
L. H. Kauffman and E. Ogasa:
Local moves on knots and products of knots II,
{arXiv: 1406.5573[math.GT]}.
\betaibitem{KauffmanOgasaB}
L. H. Kauffman and E. Ogasa:
Brieskorn submanifolds, Local moves on knots, and knot products,
{\iotat Jounal of knot theory and its ramifications}, (to appear)
arXiv: 1504.01229 [mathGT].
\betaibitem{Kirby}
R. Kirby: The topology of 4-manifolds,
{\iotat Lecture Notes in Math. $(Springer Verlag)$} 1374 (1989).
\betaibitem{KM}
R. Kirby and P. Melvin:
The 3-manifold invariants of Witten and Reshetikhin-Turaev for sl(2, C)
{\iotat Inventiones mathematicae} 105 (1991) 473–545.
\betaibitem{Kohno}
T. Kohno: Conformal field theory and topology. Translated from the 1998 Japanese original by the author. Translations of Mathematical Monographs, 210. Iwanami Series in Modern Mathematics. American Mathematical Society, Providence, RI, 2002. x+172 pp. ISBN: 0-8218-2130-X
\betaibitem{LeeYang}
T. D. Lee and C. N. Yang;
Theory of Charged Vector Mesons Interacting with the Electromagnetic Field,
{\iotat Phys. Rev.} 128 (885) 1962.
\betaibitem{Levine}
J. Levine:
A classification of differentiable knots. {\iotat Annals of Math.} 82 (1965) 15-50.
\betaibitem{LevineOrr}
J. Levine and K.E. Orr: A survey of applications of surgery to knot and
link theory {\iotat Surveys on surgery theory: surveys presented in honor of
C.T.C. Wall Vol. 1, 345-364, Ann. of Math. Stud., 145, Princeton
Univ. Press, Princeton, NJ,} (2000).
\betaibitem{Lickorish}
W. B. R. Lickorish:
Invariants for 3-manifolds from the combinatorics of the Jones polynomial
{\iotat Pacific J. Math.} 149 (1991) 337-347.
\betaibitem{Lickorishl}
W. B. R. Lickorish:
Three-manifolds and the Temperley-Lieb algebra
{\iotat Mathematische Annalen} 290 (1991) 657–670.
\betaibitem{Ogasa98SL} E. Ogasa:
The intersection of spheres in a sphere and
a new application of the Sato-Levine invariant,
{\iotat Proceedings of the American Mathematical Society}
126 (1998).3109-3116, UTMS95-54.
\betaibitem{Ogasa98n}
E. Ogasa:
Intersectional pairs of $n$-knots, local moves of $n$-knots and invariants of $n$-knots,
{\iotat Math. Res. Lett.}
5 (1998) 577-582,
Univ. of Tokyo preprint UTMS 95-50.
\betaibitem{Ogasapath}
E. Ogasa:
Supersymmetry, homology with twisted coefficients and n-dimensional knots,
{\iotat International Journal of Modern Physics A}
21, (2006), pp.4185-4196, hep-th/0311136.
\betaibitem{OgasaBoy} E. Ogasa:
Make your Boy surface,
arXiv:1303.6448 math.GT
\betaibitem{Ogasa}
E. Ogasa:
Introduction to high dimensional knots,
arXiv:1304.6053 [math.GT].
\betaibitem{RT}
N. Reshetikhin and V. G. Turaev:
Invariants of 3-manifolds via link polynomials and quantum groups,
{\iotat Inventiones mathematicae} 103 (1991) 547–597.
\betaibitem{Rolfsen}
D. Rolfsen: Knots and links
{\iotat Publish or Perish, Inc.} 1976.
\betaibitem{Rourke}
C. P. Rourke: What is a welded link?
{\iotat Intelligence of low dimensional topology 2006,
Ser. Knots and Everything, 40, World Sci. Publ., Hackensack, NJ} (2007) 263-270.
\betaibitem{Ryder}
L. H. Ryder:
Quantum Field Theory,
{Cambridge University Press, the second edition} 1996.
\betaibitem{Satoh}
S. Satoh:
Virtual knot presentation of ribbon torus-knots {\iotat J. Knot Theory Ramifications}
9 (2000) 531-542.
\betaibitem{J}
J. Schneider:
Diagrammatic Theories of 1- and 2- Dimensional Knots,
{\iotat PhD thesis, University of Illinois Chicago.} (2016).
The readers may find this article in https://www.lib.umich.edu/.
\betaibitem{Takeda} Y. Takeda:
Introduction to virtual surface-knot theory
{\iotat J. Knot Theory Ramifications} 21 (2012) 1250131 6pp.
\betaibitem{Dylan} D. Thurston:
Private communication between Bar-Natan, Dancso, and D. Thurston
written
in
\cite[section 10.2]{B1} and
\cite[section 3.1.1]{B2}.
\betaibitem{W} E. Witten: Quantum field theory and the Jones polynomial
{\iotat Comm. Math. Phys. } 121, 351-399, 1989.
\betaibitem{Zeeman} E. Zeeman:
Twisting spun knots, {\iotat Trans. Amer. Math. Soc.},
115 (1965) 471-495.
\epsilonnd{thebibliography}
{\Bbbkootnotesize \Z[\pi/\pi^{(n)}]oindent{\betaf Acknowledgment.} Kauffman's work was supported by the Laboratory of Topology and Dynamics, Novosibirsk State University
(contract no.14.Y26.31.0025 with the Ministry of Education and Science of the Russian Federation).}
\betaigbreak
\Z[\pi/\pi^{(n)}]oindent
Louis H. Kauffman:
Department of Mathematics, Statistics and Computer Science \\ 851 South Morgan Street University of Illinois at Chicago
Chicago, Illinois 60607-7045, and\\ Department of Mechanics and Mathematics, Novosibirsk State University, Novosibirsk, Russia\quad [email protected]
\betaigbreak\Z[\pi/\pi^{(n)}]oindent
Eiji Ogasa: Computer Science, Meijigakuin University, Yokohama, Kanagawa, 244-8539, Japan
\quad [email protected] \quad
[email protected]
\betaigbreak\Z[\pi/\pi^{(n)}]oindent
Jonathan Schneider:
Department of Mathematics, College of DuPage,
425 Fawell Boulevard, Glen Ellyn, Illinois, 60137, USA \quad
[email protected]
\epsilonnd{document}
\begin{document}
\title{\sc\bf\large\MakeUppercase{Dirichlet approximation of equilibrium distributions in Cannings models with mutation}}
\author{\sc Han~L.~Gan, Adrian R\"ollin, and Nathan Ross}
\date{\it Washington University in St.\ Louis, National University of Singapore, and University of Melbourne}
\maketitle
\begin{abstract}
Consider a haploid population of fixed finite size with a finite number of allele types and having Cannings exchangeable genealogy with neutral mutation. The stationary distribution of the Markov chain of allele counts in each generation is an important quantity in population genetics but has no tractable description in general.
We provide upper bounds on the distributional distance between the Dirichlet distribution and this finite population stationary distribution for the Wright-Fisher genealogy with general mutation structure and the Cannings exchangeable genealogy with parent independent mutation structure. In the first case, the bound is small if the population is large and the mutations do not depend too much on parent type; ``too much" is naturally quantified by our bound. In the second case, the bound is small if the population is large and the chance of three-mergers in the Cannings genealogy is small relative to the chance of two-mergers; this is the same condition to ensure convergence of the genealogy to Kingman's coalescent. These results follow from a new development of Stein's method for the Dirichlet distribution based on Barbour's generator approach and a probabilistic description of the semigroup of the Wright-Fisher diffusion due to Griffiths, and Li and Tavar\'e.
\end{abstract}
\section{Introduction}
We consider a neutral Cannings model with mutation in a haploid population of constant size~$N$ with~$K$ alleles. In each generation every individual has a random number of offspring such that the total number of offspring is~$N$. Different generations have i.i.d.\ offspring count vectors with distribution given by an exchangeable vector~$\tV:=(V_1, \ldots, V_N)$ not identically equal to~$(1, \ldots, 1)$;~$V_i$ is the number of offspring of individual~$1\leq i \leq N$. The random genealogy induced from this description is referred to as the \epsmph{Cannings model} \citep*{Cannings1974}; particular instances are the \epsmph{Wright-Fisher model}, where~$\mathscr{L}(\tV)$ is multinomial with~$N$ trials and probabilities~$(1/N, \ldots, 1/N)$, and the \epsmph{Moran model}, where~$\mathscr{L}(\tV)$ is described by choosing a uniform pair of indices~$(I,J)$ and setting~$V_I=2$,~$V_J=0$ and~$V_i=1$ for~$i\not=I, J$. On top of the random genealogy given by~$\mathscr{L}(\tV)$, we put a mutation structure as follows. Given a child's parent type is~$i$, the child is of type~$j$ with probability~$p_{i j}$, where~$\sum_{j} p_{i j} = 1$. The type of each child in a given generation is chosen independently conditional on the genealogy of that generation and the parent's type. It is easy to see that this rule induces a time-homogeneous Markov chain~$(\tX(0), \tX(1), \ldots)$ with state space~$\{\tx\in\mathrm{I}Z_{\geq0}^{K-1}: \sum_{i=1}^{K-1} x_i\leq N\},$ where for~$i=1,\ldots, K-1$,~$X_i(n)$ is the number of individuals in the population having allele~$i$ at time~$n$; note that the count for allele~$K$ is given by~$N-\sum_{i=1}^{K-1} X_i(n)$.
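To fix ideas, in the Wright-Fisher special case the one-step dynamics can be written down in closed form; this is a standard observation which we record only for illustration (the general Cannings chain does not simplify in this way). Conditionally on~$\tX(n)=\tx$, each child of generation~$n+1$ independently has type~$j$ with probability
\[
\pi_j(\tx)=\sum_{i=1}^{K}\frac{x_i}{N}\,p_{i j}, \qquad j=1,\ldots,K, \quad\text{where } x_K:=N-\sum_{i=1}^{K-1}x_i,
\]
so that the vector of counts of all~$K$ allele types in generation~$n+1$ is multinomial with~$N$ trials and probabilities~$(\pi_1(\tx),\ldots,\pi_K(\tx))$, and~$\tX(n+1)$ consists of its first~$K-1$ coordinates.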
Since~$\tX(n)$ is a Markov chain on a finite state space, it has a stationary distribution. But it is typically not possible to write down an expression for such a stationary distribution --- an important exception is the Wright-Fisher model with parent independent mutation (PIM), meaning~$p_{i j}$ does not depend on~$i$ for~$j\not=i$. In general, if the population size~$N\to\infty$, then under some weak conditions \citep*{Mohle2000} (discussed in more detail below in Remark~\ref{rem1}) the Cannings genealogy viewed backwards in time converges to Kingman's coalescent \citep*{Kingman1982, Kingman1982b, Kingman1982a} and the mutation structure on top of the coalescent has a nice Poisson process description. But even in this limit the stationary distribution (of now proportions of the~$K$ alleles)
is notoriously difficult to handle outside of the PIM case; see \citep*{Griffiths1994} \citep*{Bhaskar2012} for work on sampling under the stationary distributions and \citep*{Ethier1992} for a probabilistic construction. Even if a formula in the limit were available, it is in any case important in population-mutation models to understand the difference between finite~$N$ likelihoods and those in the~$N\to\infty$ limit \citep*{Bhaskar2014} \citep*{Fu2006} \citep*{Lessard2007, Lessard2010} \citep*{Mohle2004}.
Our approach to understanding these finite population stationary distributions is to determine when they are close to the Dirichlet distribution, which arises as the stationary limit in the PIM case (in this case the process converges in a suitable sense to the Wright-Fisher diffusion).
In the next section we present our main results. First, we give two approximation theorems providing upper bounds on the distributional distance between the Dirichlet distribution and, for the first result, the finite population stationary distribution for the Wright-Fisher genealogy with general mutation structure and, for the second result, the Cannings exchangeable genealogy with parent independent mutation structure. Second, we discuss a new development of Stein's method for the Dirichlet distribution from which the first two results follow.
\section{Main results}
Before stating our main results, we need some notation and definitions, as well as a short discussion regarding Lipschitz functions defined on open convex sets and their extension to the boundary.
Denote by~$\Dir(\ta)$ the Dirichlet distribution with parameters~$\ta=(a_1,\dots,a_K)$, where~$a_1>0, \ldots, a_K>0$, supported on the~$(K-1)$-dimensional open simplex, which we parameterize as
\be{
\Delta_{K}=\left\{\tx=(x_1,\ldots, x_{K-1}): x_1> 0, \ldots, x_{K-1} > 0, \sum_{i=1}^{K-1} x_i < 1\right\}\subset \mathrm{I}R^{K-1}.
}
Denote by~$\-\Delta_K$ the closure of~$\Delta_K$.
On~$\Delta_{K}$,~$\Dir(\ta)$ has density
\ben{\label{1}
\psi_\ta(x_1,\dots,x_{K-1}) = \frac{\Gamma(s)}{\prod_{i=1}^K \Gamma(a_i)} \prod_{i=1}^K x_i^{a_i-1},
}
where~$s=\sum_{i=1}^K a_i$, and where we set~$x_K=1-\sum_{i=1}^{K-1} x_i$, as we shall often do in this paper whenever considering vectors taking values in~$\Delta_{K}$.
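As a quick sanity check of this parameterization (not used anywhere in the proofs), the following sketch samples from~$\Dir(\ta)$ and compares the empirical means with the standard formula~$\mathrm{I}E Z_i=a_i/s$; the parameter values are arbitrary.
\begin{verbatim}
import numpy as np

a = np.array([1.5, 2.0, 2.5])          # illustrative parameters a_1,...,a_K
rng = np.random.default_rng(1)
Z = rng.dirichlet(a, size=100_000)     # rows are points of the closed simplex (all K coordinates)
print("empirical means :", Z.mean(axis=0))
print("theoretical a/s :", a / a.sum())
\end{verbatim}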
Let~$U$ be an open subset of~$\mathrm{I}R^n$. For~$m\geq 1$, we denote by~$\mathrm{BC}^{m,1}(U)$ the set of bounded functions~$g:U\to\mathrm{I}R$ that have~$m$ bounded and continuous partial derivatives and whose~$m$-th partial derivatives are Lipschitz continuous. In line with this notation, we denote by~$\mathrm{BC}^{0,1}(U)$ the set of bounded functions that are Lipschitz continuous.
We denote by~$\norm{g}_\infty$ the supremum norm of~$g$, and, if the~$k$-th partial derivatives of~$g$ exist, we let
\be{
\abs{g}_k=\sup_{1\leq i_1,\dots,i_k\leq n} \bbnorm{\frac{\partial^k g}{\partial x_{i_1}\cdots\partial x_{i_k}}}_\infty
}
and
\be{
\abs{g}_{k,1} =
\sup_{1\leq i_1,\dots,i_k\leq n}\sup_{\tx,\ty\in U}
\bbbabs{
\frac{\partial^k\bklr{g(\tx)-g(\ty)}}{\partial x_{i_1}\cdots\partial x_{i_k}}}\frac{1}{\norm{\tx-\ty}_1}.
}
Note that we use the~$L_1$-norm in our definition of the Lipschitz constant instead of the usual~$L_2$-norm. This is purely a matter of convenience, since the~$L_1$-norm shows up naturally in our proofs.
If~$g\in \mathrm{BC}^{m,1}(U)$ and~$U$ is convex, then all partial derivatives up to order~$m-1$ are Lipschitz continuous, too, and for any~$0\leq k \leq m-1$,
\ben{\label{2}
\abs{g}_{k,1} = \abs{g}_{k+1}.
}
As a result, if~$U$ is an open convex set, then any function~$g\in \mathrm{BC}^{m,1}(U)$ and all its partial derivatives up to order~$m$ can be extended continuously to a function~$\-g$ defined on the closure~$\-U$ in a unique way, and we have~$\norm{\-g}_\infty=\norm{g}_\infty$,~$\abs{\-g}_k=\abs{g}_k$ for~$1\leq k\leq m$ and~$\abs{\-g}_{m,1}=\abs{g}_{m,1}$. We can therefore identify the set of functions~$\mathrm{BC}^{m,1}(U)$ with set of extended functions~$\mathrm{BC}^{m,1}(\-U)$.
\subsection{Wright-Fisher model with general mutation structure}
Our first result is a bound on the approximation of the stationary distribution
of the Wright-Fisher model with general mutation structure by
a Dirichlet distribution.
\begin{theorem}\label{THM1}
Let the~$(K-1)$-dimensional vector\/~$\tX$ be distributed as a stationary distribution of the Wright-Fisher model for a population of~$N$ haploid individuals with~$K$ types and mutation
structure~$p_{i j}$,~$1\leq i,j\leq K$; set~$\textnormal{\textbf{W}}=\tX/N$. Let\/~$\ta$ be a~$K$-vector of positive numbers, set~$s=\sum_i a_i$, and let\/~$\tZ\sim \Dir(\ta)$. Then, for any~$h\in \mathrm{BC}^{2,1}(\-\Delta_K)$,
\be{
\left|\mathrm{I}E h(\textnormal{\textbf{W}})-\mathrm{I}E h(\tZ) \right| \leq \frac{\abs{h}_1}{s} A_1+\frac{\abs{h}_2}{2(s+1)} A_2 + \frac{\abs{h}_{2,1}}{18(s+2)} A_3,
}
where
\bg{
A_1 = 2N(K+1)\tau,\quad
A_2= N K^2 \mu^2 + 2K\mu, \quad
A_3= 8 N K^3 \mu^3 +\frac{16\sqrt{2}K^3}{N^{1/2}},
}
with
\ben{
\tau = \sum_{i=1}^K\sum_{\substack{j=1\\j\neq i}}^K\,\bbabs{\,p_{ij}-\frac{a_j}{2N}},\qquad \mu = \sum_{i=1}^K\sum_{\substack{j=1\\j\neq i}}^K p_{ij}.
}
Moreover, there is a constant~$C=C(\ta)$ such that
\be{
\sup_{A\in \mathcal{C}_{K-1}}\babs{\mathrm{I}P[\textnormal{\textbf{W}}\in A] - \mathrm{I}P[\tZ\in A]}\leq C\bklr{A_1+A_2+A_3}^{\theta/(3+\theta)},
}
where~$\mathcal{C}_{K-1}$ is the family of convex sets on\/~$\mathrm{I}R^{K-1}$ and where
$\theta=\theta(\ta)>0$ is given at~\eqref{10}.
\end{theorem}
\begin{remark}
To interpret the bounds of the theorem, if
$p_{i j}=\frac{a_j}{2N}+\varepsilon_{ij}$ for~$i\not=j$, and we assume~$\abs{\varepsilon_{ij}}\leq \varepsilon$, then
\ba{
\tau \leq K(K-1)\varepsilon
\qquad
\mu \leq \frac{(K-1)s}{2N}+K(K-1)\varepsilon,
}
so that for fixed~$K$ and~$\ta$ (though note that, for smooth functions, the reliance on these parameters is explicit),
\be{
A_1 = \mathrm{O}(N\varepsilon), \qquad A_2 = \mathrm{O}(N^{-1}+N \varepsilon^2), \qquad A_3 = \mathrm{O}(N^{-1/2}+N \varepsilon^3).
}
In particular, in the PIM case, where~$\varepsilon=0$, our bound on smooth functions is of order~$N^{-1/2}$, and for the convex set metric of order~$N^{-1/8}$ if~$\min\{a_1,\dots,a_K\}\geq 1$ and otherwise the order of the bound is some negative power of~$N$ having
a more complicated relationship to~$\ta$, but which is easily read from~\eqref{10}.
In the special case where~$K=2$,~$\varepsilon=0$ and~$h$ has six bounded derivatives, \citep*{Ethier1977} derived a bound analogous to that of Theorem~\ref{THM1}, but of order
$N^{-1}$.
In the general case our bound quantifies the effect of non-PIM: if
$N\varepsilon \to 0$ as~$N\to\infty$ then the stationary distribution converges to the Dirichlet distribution.
\end{remark}
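To see the scales in the remark above numerically, the sketch below evaluates~$\tau$,~$\mu$ and the resulting~$A_1,A_2,A_3$ of Theorem~\ref{THM1} for an assumed mutation matrix of the form~$p_{ij}=a_j/(2N)+\varepsilon_{ij}$, $i\neq j$; all numerical values are illustrative.
\begin{verbatim}
import numpy as np

N, K = 10_000, 3
a = np.array([1.0, 2.0, 3.0])
eps = 1e-6                                    # assumed size of the perturbations eps_{ij}
rng = np.random.default_rng(2)

P = np.tile(a / (2 * N), (K, 1)) + eps * rng.random((K, K))   # off-diagonal p_{ij}
off = ~np.eye(K, dtype=bool)                                  # mask for i != j

tau = np.abs(P - np.tile(a / (2 * N), (K, 1)))[off].sum()
mu = P[off].sum()
A1 = 2 * N * (K + 1) * tau
A2 = N * K**2 * mu**2 + 2 * K * mu
A3 = 8 * N * K**3 * mu**3 + 16 * np.sqrt(2) * K**3 / np.sqrt(N)
print(f"tau={tau:.2e}  mu={mu:.2e}  A1={A1:.2e}  A2={A2:.2e}  A3={A3:.2e}")
\end{verbatim}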
\subsection{Cannings model with parent-independent mutation structure}
Our next result is for the general Cannings exchangeable non-degenerate genealogy. The bounds
are in terms of the moments of the offspring vector~$\tV$; hence, let
\ban{\label{3}
\alpha:=\mathrm{I}E\klg{V_1 (V_1-1)},
\quad
\beta:=\mathrm{I}E\klg{ V_1 (V_1-1)(V_1-2)},
\quad
\gamma:=\mathrm{I}E \klg{V_1 (V_1-1)V_2(V_2-1)}
}
(and note that these quantities depend on~$N$).
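For concreteness (and purely as an illustration), the following sketch estimates~$\alpha$,~$\beta$ and~$\gamma$ by simulating the offspring vector for the Wright-Fisher and Moran genealogies described in the introduction; the population size and number of replications are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, reps = 200, 20_000

def moments(V):
    """Empirical alpha, beta, gamma from an array of offspring vectors (reps x N)."""
    V1, V2 = V[:, 0], V[:, 1]
    return (np.mean(V1 * (V1 - 1)),
            np.mean(V1 * (V1 - 1) * (V1 - 2)),
            np.mean(V1 * (V1 - 1) * V2 * (V2 - 1)))

# Wright-Fisher: V ~ Multinomial(N; 1/N, ..., 1/N)
V_wf = rng.multinomial(N, np.full(N, 1.0 / N), size=reps)
print("Wright-Fisher (alpha, beta, gamma):", moments(V_wf))   # all roughly 1

# Moran: a uniform individual has 2 offspring, another has 0, the rest have 1
V_mo = np.ones((reps, N), dtype=int)
for r in range(reps):
    i, j = rng.choice(N, size=2, replace=False)
    V_mo[r, i], V_mo[r, j] = 2, 0
print("Moran         (alpha, beta, gamma):", moments(V_mo))   # roughly (2/N, 0, 0)
\end{verbatim}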
\begin{theorem}\label{THM2}
Let the $(K-1)$-dimensional vector~$\tX$ be a stationary distribution of the Cannings model for a population of size~$N\geq 4$ with non-degenerate exchangeable genealogy~$\mathscr{L}(\tV)$. Assume we have
parent independent mutation structure; that is,~$p_{ij}=\pi_j$,~$1\leq i\not= j \leq K$, for some~$\pi_1,\dots,\pi_K>0$,
and $p_{ii}=1-\sum_{j\not=i} \pi_j$.
Let~$\alpha$,~$\beta$, and~$\gamma$ be as defined at~\eqref{3},
and for~$i=1,\ldots, K$, set~$a_i=\frac{2(N-1)\pi_i}{\alpha}$ and~$s=\sum_i a_i$. Let
$\textnormal{\textbf{W}}=\tX/N$, and let~$\tZ\sim \Dir(\ta)$. Then, for any~$h\in\mathrm{BC}^{2,1}(\-\Delta_K)$,
\be{
\left|\mathrm{I}E h(\textnormal{\textbf{W}})-\mathrm{I}E h(\tZ) \right| \leq \frac{\abs{h}_2}{2(s+1)} A_2 + \frac{\abs{h}_{2,1}}{18(s+2)} A_3,
}
where, with~$\eta=N\alpha^{-1}\sum_{j=1}^K\pi_j=\frac{sN}{2(N-1)}$,
\ba{
A_2&= \bbklr{\frac{\alpha}{N}}^2\eta^2 K^2 + \frac{\alpha}{N}\bbklr{\eta^2 (K^2+1) + 2\eta K^2} + \frac{3\eta K}{N},\\
A_3 &= 2K^3\left(1+\eta\sqrt{\frac{\alpha}{N}}+\sqrt{\frac{\eta}{N}} \right)\left(
\eta\bbbklr{\frac{\alpha}{N}}^{3/4}
+ \left( \frac{12\beta}{\alpha N}+ \frac{ 24 \gamma}{\alpha N}\right)^{1/4}
+\frac{1}{N^{1/2}}\left(3\eta^2\frac{\alpha}{N} + \frac{\eta}{N} \right)^{1/4}
\right)^2.
}
Moreover, there is a constant~$C=C(\ta)$ such that
\ben{
\sup_{A\in \mathcal{C}_{K-1}}\babs{\mathrm{I}P[\textnormal{\textbf{W}}\in A] - \mathrm{I}P[\tZ\in A]}\leq C\bklr{A_2+A_3}^{\theta/(3+\theta)},
}
where~$\mathcal{C}_{K-1}$ is the family of convex sets on\/~$\mathrm{I}R^{K-1}$ and where
$\theta=\theta(\ta)>0$ is given at~\eqref{10}.
\end{theorem}
\begin{remark}\label{rem1}
To interpret the bound of the theorem, we note that the bound goes to zero if, as~$N\to\infty$, $\eta\leq s$ remains bounded and all three of
\ban{\label{4}
&\frac{\alpha}{N}, \qquad \frac{\beta}{\alpha N}, \qquad
\frac{\gamma}{\alpha N},
}
tend to zero. And for the limit to be non-degenerate, we must have
\ben{\label{5}
a_i = \frac{2(N-1)\pi_i}{\alpha}\to \tilde{a}_i,
}
for some limiting positive~$\tilde{a}_i$,~$i=1,\ldots, K$, which also implies that
$\eta=sN/(2(N-1))$ converges to a positive constant.
As briefly mentioned above, under appropriate assumptions, the exchangeable genealogy alone (that is, without mutation structure)
converges to Kingman's coalescent as~$N\to\infty$. This convergence occurs if and only if
\citep*{Mohle2000} \citep*{Mohle2001, Mohle2003}
\ben{\label{6}
\frac{\beta}{ \alpha N}\to 0.
}
In this case and also assuming a limiting scaling of the mutation
probabilities given by~\eqref{5},
the finite population stationary distribution converges to the
stationary distribution of a Wright-Fisher diffusion, that is,~$\Dir(\ta)$.
At first glance it appears that demanding the terms of~\eqref{4} tend to zero
is a stronger requirement for convergence than the sufficient~\eqref{6},
but \citep*[Lemma~5.5]{Mohle2003}, \citep*[Display~(16)]{Mohle2000}
show that~\eqref{6} also implies
\be{
\frac{\alpha}{N}\to 0\, \qquad \text{ and } \qquad \frac{\gamma}{\alpha N}\to 0.
}
So, in fact, our bound goes to zero assuming only~\eqref{6} and thus quantifies the convergence of the stationary distribution in terms of natural quantities. Assuming~$\eta$ remains bounded, we obtain
\be{
A_2 = \mathrm{O}\bbbklr{K^2\frac{\alpha}{N}+\frac{K}{N}},\qquad A_3 = \mathrm{O}\bbbklr{K^3\bbklr{\frac{\alpha}{N}}^{3/2} + K^3\bbklr{\frac{\beta}{\alpha N}}^{1/2}+K^3\bbklr{\frac{\gamma}{\alpha N}}^{1/2}+\frac{K^3}{N}}.
}
\end{remark}
\begin{remark}
For the stationary distribution of types in an exchangeable Cannings genealogy with general mutation structure,
a bound with features similar to those of Theorems~\ref{THM1} and~\ref{THM2} should be possible using our methods. However,
the formulation and proof of such a result would be rather messy, and so, for the sake of exposition and clarity, we
present two separate theorems to handle more specific situations.
\end{remark}
\subsection{Stein's method of exchangeable pairs for the Dirichlet distribution}
Theorems~\ref{THM1} and~\ref{THM2} follow from a new development of Stein's method for the Dirichlet distribution.
Stein's method \citep*{Stein1972, Stein1986} is a powerful tool for providing bounds on the approximation of
a probability distribution of interest by a well understood target distribution; see \citep*{Chen2011} and \citep*{Ross2011}
for recent introductions, and \citep*{Chatterjee2014} for a recent literature survey. We show a general exchangeable pairs Dirichlet approximation theorem very much in the spirit of exchangeable pairs approximation results for other distributions; e.g., normal \citep*[Theorem~1.1]{Rinott1997}; multivariate normal \citep*[Theorem~2.3]{Chatterjee2008} \citep*[Theorem~2.1]{Reinert2009};
exponential \citep*[Theorem~1.1]{Chatterjee2011} \citep*[Theorem~1.1]{Fulman2013}; beta \citep*[Theorem~4.4]{Dobler2015}; limits in Curie-Weiss models \citep*[Theorem~1.1]{Chatterjee2011a}. In what follows, sums range from~$1$ to~$K$ unless otherwise stated.
\begin{theorem}\label{THM3}
Let~$\ta=(a_1,\ldots, a_K)$ be a vector of positive numbers and set~$s=\sum_{i=1}^K a_i$.
Let~$(\textnormal{\textbf{W}},\textnormal{\textbf{W}}')$ be an exchangeable pair of~$(K-1)$ dimensional random vectors with non-negative entries with sum no greater than one.
Also let~$\Lambda$ be an invertible matrix and~$\tR$ be a random vector such that
\ben{
\mathrm{I}E [\textnormal{\textbf{W}}'-\textnormal{\textbf{W}}| \textnormal{\textbf{W}}]=\Lambda (\ta-s \textnormal{\textbf{W}}) + \tR. \label{7}
}
Then, for any~$h\in\mathrm{BC}^{2,1}(\-\Delta_K)$,
\ben{\label{8}
\abs{\mathrm{I}E h(\textnormal{\textbf{W}})-\mathrm{I}E h(\tZ)}\leq \frac{\abs{h}_1}{s} A_1 + \frac{\abs{h}_2}{2(s+1)} A_2 +\frac{\abs{h}_{2,1}}{6(s+2)} A_3,
}
where
\ba{
A_1&:=\sum_{m,i} \babs{(\Lambda^{-1})_{i,m}}\mathrm{I}E \abs{R_m },\\
A_2&:= \sum_{m,i,j} \babs{(\Lambda^{-1})_{i,m}} \mathrm{I}E \left| \Lambda_{m,i} W_i(\delta_{i j}-W_j)-\frac{1}{2}\mathrm{I}E[ (W_m'-W_m)(W_j'-W_j)|\textnormal{\textbf{W}}] \right|, \\
A_3&:=\sum_{m,i,j,k} \babs{(\Lambda^{-1})_{i,m}} \mathrm{I}E \left|(W_m'-W_m)(W_j'-W_j)(W_k'-W_k) \right|.
}
Moreover, there exists a constant~$C=C(\ta)$ such that
\ben{\label{9}
\sup_{A\in \mathcal{C}_{K-1}}\babs{\mathrm{I}P[\textnormal{\textbf{W}}\in A] - \mathrm{I}P[\tZ\in A]}\leq C\bklr{A_1+A_2+A_3}^{\theta/(3+\theta)},
}
where~$\mathcal{C}_{K-1}$ is the family of convex sets on\/~$\mathrm{I}R^{K-1}$ and
\ben{\label{10}
\theta = \frac{\theta_\wedge}{\theta_\wedge+\theta_\circ},\qquad \theta_\wedge = 1\wedge\min\{a_1,\dots,a_K\},\qquad \theta_\circ=\sum_{i=1}^K\bklr{1-1\wedge a_i}.
}
Additionally, if~$\Lambda$ is a multiple of the identity matrix, then
the result still holds assuming only that~$\mathscr{L}(\textnormal{\textbf{W}})=\mathscr{L}(\textnormal{\textbf{W}}')$, in which
case the factor~$\frac{\abs{h}_{2,1}}{6(s+2)}$ in \eqref{8} can be improved
to~$\frac{\abs{h}_{2,1}}{18(s+2)}$.
\end{theorem}
The layout of the remainder of the paper is as follows. We finish the introduction by applying
Theorem~\ref{THM3} in an easy example, the multi-colored P\'olya urn.
In Section~\ref{sec1} we develop Stein's method for the Dirichlet distribution and
prove Theorem~\ref{THM3}.
In Section~\ref{sec2} we prove Theorem~\ref{THM1}, the bounds for the Wright-Fisher model and
in Section~\ref{sec3} we prove Theorem~\ref{THM2}, the bounds for the PIM Cannings model.
\paragraph{A simple example: Multi-colored P\'olya Urn.}
In order to illustrate how Theorem~\ref{THM3} is applied, we use it to bound the error in approximating the counts in the classical P\'olya urn by a Dirichlet distribution. The result is new to us, but a bound in the
Wasserstein distance could be obtained
from analogous bounds for the beta distribution \citep*{Goldstein2013} \citep*{Dobler2015} using
the iterative urn approach of \citep*{Pekoz2014a}.
An urn initially contains~$a_i>0$ ``balls" of color~$i$ for~$i =1,2,\ldots, K$ with a total number of balls~$s= \sum_{i=1}^K a_i$. At each time step, draw a ball uniformly at random from the urn and replace it along with another ball of the same color.
Let~$\tX(n) = (X_1(n), X_2(n), \ldots , X_{K-1}(n))$, where~$X_i(n)$ is the number of times color~$i$ was drawn up to and including the~$n$th draw. It is well known (see, e.g., \citep*{Mahmoud2009}) that as~$n \to \infty$,
\be{
\textnormal{\textbf{W}}(n):= \frac{\tX(n)}{n} \stackrel{d}{\longrightarrow} \Dir(\ta),
}
and we provide a bound on the approximation of the distribution of~$\textnormal{\textbf{W}}(n)$ by the Dirichlet limit.
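The following sketch (all parameter values arbitrary) simulates the urn directly and compares the resulting proportions with Dirichlet samples.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
a = np.array([1.0, 2.0, 3.0])                   # initial composition (illustrative)
n, reps = 2_000, 5_000

counts = np.tile(a, (reps, 1))                  # urn contents, one row per replicate
for _ in range(n):
    p = counts / counts.sum(axis=1, keepdims=True)
    u = rng.random(reps)
    drawn = (u[:, None] > np.cumsum(p, axis=1)).sum(axis=1)   # inverse-CDF draw per row
    counts[np.arange(reps), drawn] += 1.0

W = (counts - a) / n                            # proportions of colours drawn, W(n)
Z = rng.dirichlet(a, size=reps)
print("urn       mean/sd of W_1:", W[:, 0].mean(), W[:, 0].std())
print("Dirichlet mean/sd of Z_1:", Z[:, 0].mean(), Z[:, 0].std())
\end{verbatim}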
\begin{theorem}\label{THM4}
Let~$\ta=(a_1, \ldots, a_K)$ be a vector of positive numbers,~$s=\sum_{i=1}^K{a_i}$,~$\tZ\sim\Dir(\ta)$, and~$\textnormal{\textbf{W}}(n)$ be the P\'olya urn proportions as defined above. Then, for any~$h\in\mathrm{BC}^{2,1}(\-\Delta_K)$,
\be{
\abs{\mathrm{I}E h(\textnormal{\textbf{W}}(n)) -\mathrm{I}E h(\tZ)}\leq \frac{s}{n(s+1)} \abs{h}_2 + \frac{(K-1)(3K-5) (n+s-1)}{18n^2(s+2)} \abs{h}_{2,1}.
}
Moreover, there exists a constant~$C=C(\ta)$ such that
\be{
\sup_{A\in \mathcal{C}_{K-1}}\abs{\mathrm{I}P[\textnormal{\textbf{W}}\in A] - \mathrm{I}P[\tZ\in A]}\leq Cn^{-\theta/(3+\theta)},
}
where $\theta=\theta(\ta)>0$ is defined at~\eqref{10}.
\end{theorem}
We use Theorem~\ref{THM3} to prove the result. To define the exchangeable pair,
note that we can set~$\tX(n) = \sum_{j=1}^n \textnormal{\textbf{Y}}(j)$ where, for~$\te_i$ equal to the~$i$th unit vector,~$\textnormal{\textbf{Y}}(j)=\te_i$ if color~$i$ is drawn on the~$j$th draw.
It is easy to check that
\be{
\mathrm{I}P\bkle{\textnormal{\textbf{Y}}(j) = \te_i \big\vert \tX(j-1)} = \frac{X_i(j-1) + a_i}{j-1+s}.
}
We define the exchangeable pair~$(\textnormal{\textbf{W}}, \textnormal{\textbf{W}}')$ (dropping the~$n$ to ease notation) by resampling the last draw~$\textnormal{\textbf{Y}}(n)$; that is,
\be{
\textnormal{\textbf{W}}' = \textnormal{\textbf{W}} - \frac{\textnormal{\textbf{Y}}(n)}{n} + \frac{\textnormal{\textbf{Y}}'(n)}{n},
}
where conditional on~$\tX(n-1)$,~$\textnormal{\textbf{Y}}'(n)$ and~$\textnormal{\textbf{Y}}(n)$ are i.i.d.\ Before computing the terms appearing in the bound of Theorem~\ref{THM3}, we record a lemma.
\begin{lemma}\label{lem1}
Recalling the notation and definitions above, and with~$\delta_{ij}$ denoting the Kronecker delta function, we have
\ba{
\mathrm{I}E(\textnormal{\textbf{Y}}'(n) | \textnormal{\textbf{W}}) &= \frac{1}{n+s-1}\left[ \ta + (n-1)\textnormal{\textbf{W}}\right],\\
\mathrm{I}E(Y_i'(n)Y_j(n) | \textnormal{\textbf{W}}) &= \frac{1}{n+s-1} \mathrm{I}E \left[ a_i W_j + nW_iW_j - W_i \delta_{ij}\right].
}
\end{lemma}
\begin{proof}
First note that
\be{
\mathrm{I}E[\textnormal{\textbf{Y}}'(n)| (\textnormal{\textbf{Y}}(1), \ldots, \textnormal{\textbf{Y}}(n))]=\frac{1}{n+s-1} \left[ \ta + n\textnormal{\textbf{W}} - \textnormal{\textbf{Y}}(n) \right].
}
The first equality now follows by taking expectation conditional on~$\textnormal{\textbf{W}}$ and noting that exchangeability implies~$\mathrm{I}E [\textnormal{\textbf{Y}}(n)| \textnormal{\textbf{W}}]=\textnormal{\textbf{W}}$.
For the second identity, use the previous display to find
\ba{
\mathrm{I}E[ Y_i'(n) Y_j(n) | \textnormal{\textbf{Y}}(1), \ldots, \textnormal{\textbf{Y}}(n)] = \frac{Y_j(n)}{n+s-1} \left[ a_i +n W_i - Y_i(n) \right],
}
and taking expectation conditional on~$\textnormal{\textbf{W}}$, noting~$\mathrm{I}E[ \textnormal{\textbf{Y}}(n)| \textnormal{\textbf{W}}]=\textnormal{\textbf{W}}$ and~$Y_i(n) Y_j(n)=\delta_{i j} Y_i(n)$,
yields
\be{
\mathrm{I}E(Y_i'(n)Y_j(n)| \textnormal{\textbf{W}})= \frac{1}{n+s-1} \mathrm{I}E \left[ a_i W_j + nW_iW_j - W_i \delta_{ij}\right]. \qedhere
}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{THM4}]
We apply Theorem~\ref{THM3} with the exchangeable pair defined above. We show below
that for~$i,j,k\in\{1,\ldots, K\}$,
\ban{
&\mathrm{I}E[\textnormal{\textbf{W}}'-\textnormal{\textbf{W}}|\textnormal{\textbf{W}}]= \frac{1}{n(n+s-1)}\left(\ta - s \textnormal{\textbf{W}}\right), \label{12}\\
&\mathrm{I}E[(W_i' - W_i)(W_j' - W_j)| \textnormal{\textbf{W}}] =\frac{\delta_{ij}(a_i + (2n+s)W_i) - a_i W_j - a_j W_i - 2nW_iW_j}{n^2(n+s-1)}, \label{13}\\
&\mathrm{I}E \left| (W_i'-W_i)(W_j'-W_j)(W_k'-W_k) \right| \leq n^{-3}(1-\mathrm{I}[\text{$i,j,k$ distinct}]), \label{14}
}
so we can apply Theorem~\ref{THM3} with~$\Lambda=\frac{1}{n (n+s-1)} \times \mathrm{I}d$.
In this case, using~\eqref{13},
\ba{
A_2&= n(n+s-1)\sum_{i,j=1}^{K-1} \mathrm{I}E \left| \frac{1}{n(n+s-1)} W_i(\delta_{ij}-W_j) - \frac{1}{2} \mathrm{I}E\left[(W_i'-W_i)(W_j'-W_j)|\textnormal{\textbf{W}}\right]\right|\\
&=\frac{1}{2n} \sum_{i,j=1}^{K-1} \mathrm{I}E \left| \delta_{ij}(a_i + sW_i) - a_iW_j - a_jW_i\right|\leq \frac{2s}{n}.
}
Now using~\eqref{14}, we have
\be{
A_3\leq \frac{(K-1)(3K-5) (n+s-1)}{n^2}.
}
Finally, the form of~\eqref{12} makes it clear that~$\tR=0$ and so~$A_1=0$. Putting together the last two displays yields the result.
All that is left is to show~\eqref{12},~\eqref{13}, and~\eqref{14}. Lemma~\ref{lem1} implies
\be{
\mathrm{I}E[\textnormal{\textbf{W}}' - \textnormal{\textbf{W}} | \textnormal{\textbf{W}}]= \frac{1}{n} \mathrm{I}E [ \textnormal{\textbf{Y}}'(n) - \textnormal{\textbf{Y}}(n) | \textnormal{\textbf{W}} ]=\frac{1}{n(n+s-1)}\left(\ta - s \textnormal{\textbf{W}}\right),
}
which is~\eqref{12}. For~\eqref{13}, use Lemma~\ref{lem1} and that~$Y_i(n) Y_j(n)=\delta_{i j} Y_i(n)$ and~$Y_i'(n) Y_j'(n)=\delta_{i j} Y_i'(n)$ to find
\ba{
&\mathrm{I}E[(W_i' - W_i)(W_j' - W_j)| \textnormal{\textbf{W}}] = \frac{1}{n^2} \mathrm{I}E[ Y_i'(n)Y_j'(n) + Y_i(n)Y_j(n) - Y_i'(n)Y_j(n) - Y_i(n)Y_j'(n) | \textnormal{\textbf{W}}]\\
&= \frac{1}{n^2} \left[\frac{\delta_{ij}}{n+s-1}(a_i + (n-1)W_i) + \delta_{ij}W_i - \frac{1}{n+s-1} \left(a_i W_j + a_j W_i + 2nW_iW_j - 2 W_i \delta_{ij}\right) \right]\\
&= \frac{1}{n^2(n+s-1)} \left[ \delta_{ij}(a_i + (2n+s)W_i) - a_i W_j - a_j W_i - 2nW_iW_j\right].
}
Finally~\eqref{14} follows noting that~$\abs{W_i'-W_i}\leq 1/n$ and that at most two of the~$(W_i'-W_i)$ can be non-zero.
The bound on the convex set metric is immediate from the bounds on~$A_2$ and~$A_3$ and \eqref{9}.
\end{proof}
\section{Stein's~method~for~the~Dirichlet~distribution}\label{sec1}
\subsection{Stein operator}
In order to apply Stein's method we need a characterizing operator for the Dirichlet distribution, which is provided below.
Let~$\delta_{ij}$ denote the Kronecker delta function, and for a function~$f$, let~$f_j$ be the partial derivative of~$f$ with respect to the~$j$th component,~$f_{i j}$ the second partial derivative, and so on.
\begin{lemma}
Let~$a_1, \ldots, a_K$ be positive numbers and~$s=\sum_{i=1}^K a_i$. The random vector~$\textnormal{\textbf{W}}\in\Delta_{K}$ has distribution~$\Dir (a_1,\ldots,a_K)$ if and only if for all~$f\in\mathrm{BC}^{2,1}(\Delta_K)$
\be{
\mathrm{I}E\left[ \sum_{i,j=1}^{K-1} W_i(\delta_{i j}-W_j)f_{i j}(\textnormal{\textbf{W}})
+\sum_{i=1}^{K-1}(a_i-s W_i)f_{i}(\textnormal{\textbf{W}})\right]=0.
}
\end{lemma}
The forward implication of the lemma is straightforward
and the backward implication follows by taking expectations against polynomials~$f$ to yield formulas for mixed moments of~$\textnormal{\textbf{W}}$. Also note that
\ben{\label{15}
\mathcal{A}f(\tx) := \frac{1}{2}\left[\sum_{i,j=1}^{K-1} x_i(\delta_{i j}-x_j) f_{i j}(\tx)
+\sum_{i=1}^{K-1}(a_i-s x_i)f_i(\tx)\right]
}
is the generator of the Wright-Fisher diffusion which has the Dirichlet as its unique stationary distribution; see \citep*{Wright1949}, \citep*{Ethier1976}, \citep*{Shiga1981}.
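Before moving on, a quick Monte Carlo check of the forward implication may be reassuring; the test function and parameter values below are arbitrary assumptions for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
a = np.array([1.5, 2.0, 2.5]); s = a.sum(); K = len(a)

# test function f(x1, x2) = x1^2 * x2 on the simplex (K = 3)
grad = lambda x: np.array([2 * x[0] * x[1], x[0] ** 2])
hess = lambda x: np.array([[2 * x[1], 2 * x[0]], [2 * x[0], 0.0]])

Z = rng.dirichlet(a, size=20_000)[:, :K - 1]      # keep the first K-1 coordinates
vals = []
for x in Z:
    H, g = hess(x), grad(x)
    quad = sum(x[i] * (float(i == j) - x[j]) * H[i, j]
               for i in range(K - 1) for j in range(K - 1))
    lin = sum((a[i] - s * x[i]) * g[i] for i in range(K - 1))
    vals.append(quad + lin)
vals = np.array(vals)
print("mean (should be near 0):", vals.mean(), "+/-", vals.std() / np.sqrt(len(vals)))
\end{verbatim}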
\subsection{Bounds on the solution to the Stein equation}
To apply Stein's method, we proceed as follows. Let~$\tZ\sim\Dir(\ta)$, and let~$h:\-\Delta_K\to\mathrm{I}R$ be some measurable test function. If~$h$ is bounded, then clearly~$\mathrm{I}E \abs{h(\tZ)} < \infty$. Assume we have a function~$f:=f_h$ that solves
\ben{\label{16}
\sum_{i,j=1}^{K-1} x_i(\delta_{i j}-x_j)f_{i j}(\tx)
+\sum_{i=1}^{K-1}(a_i-s x_i) f_i (\tx)=h(\tx)-\mathrm{I}E h(\tZ) =: \tilde{h}(\tx),
}
and note that replacing~$\tx$ by~$\textnormal{\textbf{W}}$ in this equation and taking expectation gives an expression for~$\mathrm{I}E h(\textnormal{\textbf{W}})-\mathrm{I}E h(\tZ)$
in terms of just~$\textnormal{\textbf{W}}$ and~$f$.
Since this operator is twice the generator of the Wright-Fisher diffusion given by~\eqref{15}, we use the generator approach of \citep*{Barbour1990} \citep*{Gotze1991},
and observe we may set~$f$ to be
\ben{\label{17}
f(\tx) = - \frac{1}{2}\int_0^\infty \mathrm{I}E \left[ h(\tZ_\txt) - h(\tZ) \right] dt = - \frac{1}{2}\int_0^\infty \mathrm{I}E\tilde{h}(\tZ_\txt) dt,
\qquad\tx\in\Delta_K, }
where~$(\tZ_\txt)_{t\geq0}$ is the Wright-Fisher diffusion, defined by the generator~$\mathcal{A}$ with~$\tZ_\tx(0) = \tx$ (the factor
of~$1/2$ in the expression appears since~\eqref{15} is twice the generator of~$\tZ$).
Using a probabilistic description of the Wright-Fisher semigroup due to
\citep*{Griffiths1983} and \citep*{Tavare1984} we show that the above integral is well defined, and we obtain the following bounds on the solution~\eqref{17} to the Stein equation~\eqref{16}.
\begin{theorem}\label{THM5}
If~$h:\-\Delta_K\to\mathrm{I}R$ is continuous, then~$f$ defined by \eqref{17} is twice partially differentiable and solves \eqref{16} for all~$\tx\in\Delta_K$, and we have the bound
\ben{\label{18}
\norm{f}_\infty\leq \frac{(s+1)}{s}\norm{\tilde h}_\infty.
}
If~$h\in\mathrm{BC}^{m,1}(\-\Delta_K)$ for some~$m\geq 0$, then~$f\in\mathrm{BC}^{m,1}(\-\Delta_K)$, and we have the bounds
\ben{\label{19}
\abs{f}_k\leq \frac{\abs{h}_k}{k(s+k-1)},\quad 1\leq k\leq m,\qquad\text{and}
\qquad
\abs{f}_{m,1}\leq \frac{\abs{h}_{m,1}}{m(s+m-1)}.
}
If~$m\geq 2$, then
equation \eqref{16} holds for all~$\tx\in\-\Delta_K$.
\end{theorem}
\begin{remark}
The Dirichlet distribution is a multivariate generalization of the Beta distribution for which Stein's method
has recently been developed \citep*{Dobler2012, Dobler2015} \citep*{Goldstein2013} where bounds are derived
for the~$K=2$ case of the Stein equation used here. Direct comparisons are difficult in general since
typically different derivatives of the test function appear. However one easily comparable bound is \citep*[Proposition~4.2(b)]{Dobler2015} that~$|f|_1 \leq |h|_1/s$, which is the same as our bound in this case.
In general, the bounds appearing in these other works are quite complicated, involving different expressions for different regions of the parameter space, whereas
our bounds are very clean and have a simple relationship to the parameters. Furthermore our bounds apply in the multivariate setting.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{THM5}] Throughout the proof we make the simplifying assumption that~$\mathrm{I}E h(\tZ)=0$ so that~$\tilde h = h$.
Following the generator approach of \citep*{Barbour1990}, \citep*{Gotze1991} (see also \citep*[Appendix~B]{Gorham2016}), let~$\tx\in\-\Delta_K$,
let~$(\tZ_\txt)_{t\geq0}$ be the Wright-Fisher diffusion defined by the generator~$\mathcal{A}$ with~$\tZ_\tx(0) = \tx$,
and let~$f$ be as defined in~\epsqref{17}.
\noindent{\bf Construction of semigroup.}
The key to our bounds is a construction of the marginal variable~$\tZ_\txt$ from \citep*{Griffiths1983} \citep*{Tavare1984}; see also the introduction of~\citep*{Barbour2000}.
Let~$L_t$ be a pure death process on~$\{0, 1, \ldots\}\cup\{\infty\}$ started at~$\infty$ with death rates
\ben{\label{20}
q_{i, i-1}= \frac{1}{2} i(i-1+s).
}
Denote by~$\mathrm{MN}_K(n; p_1,\dots,p_K)$ the~$K$-dimensional multinomial distribution with~$n$ trials and probabilities~$p_1,\dots,p_K$; by slight misuse of notation, we write~$\mathrm{MN}_K(L_t;\tx,x_K)$ to be
short for~$\mathrm{MN}_K(L_t;x_1,\dots,x_{K-1},x_K)$. Conditional on~$L_t$, let~$\tN\sim\mathrm{MN}_K(L_t;\tx,x_K)$, where~$x_K=1-\sum_{i=1}^{K-1}x_i$.
Then,
\be{
\mathscr{L}\bklr{\tZ_\txt\big\vert L_t,\tN} \sim \Dir(\ta+\tN).
}
\noindent{\bf Existence of solution to Stein equation on~$\boldsymbol{\Delta_K}$ and bound (\ref{18}).}
For~$n\geq 1$, let~$Y_n$ be the time the process~$L_t$ spends in state~$n$ and note that~$Y_n$ is exponentially distributed with rate~$n(n-1+s)/2$. Since
\be{
\sum_{n\geq 1} \mathrm{I}E Y_n = \sum_{n\geq 1}\frac{2}{n(n+s-1)} \leq \frac{2(s+1)}{s},
}
the random variable~$T = \inf\{t>0\,:\,L_t=0\} = \sum_{n\geq 1}Y_n$ is finite almost surely and has finite expectation. Observing that~$\mathrm{I}E\bklr{{ h}(\tZ_\txt)\big| L_t=0}=0$ since, given~$L_t=0$, we have~$\tZ_\txt \sim \Dir(\ta)$, it follows that
\besn{\label{457}
\int_0^\infty \babs{\mathrm{I}E\bklr{ h(\tZ_\txt) }} dt &\leq \int_0^\infty \norm{h}_\infty \mathrm{I}P(L_t>0) dt \\
&\leq \norm{h}_\infty \int_0^\infty \mathrm{I}P(T>t) dt =\norm{ h}_\infty\mathrm{I}E T < \infty.
}
Thus,
$f$ in \eqref{17} is well-defined.
To show that $f$ is in the domain of~$\cal{A}$ and satisfies $\mathcal{A}f= h$ under the assumption that~$h\in\mathrm{BC}(\-\Delta_K)$, the Banach space of bounded and continuous functions equipped with sup-norm, we follow the argument of \citep*[Pages~301-2]{Barbour1990} also used in \citep*[Appendix~B]{Gorham2016}. First,
\citep*[Theorem~1]{Ethier1976} implies that
the semigroup $(T_t)_{t\geq0}$ defined by $T_t g(\tx)=\mathrm{I}E g(\tZ_\txt)$ for $g\in\mathrm{BC}(\-\Delta_K)$ is strongly continuous. Note also that $\mathrm{BC}^{m,1}(\-\Delta_K)\subset\mathrm{BC}(\-\Delta_K)$ for all $m\geq 0$.
We can therefore apply
\citep*[Proposition~1.5(a), Page~9]{Ethier1986}, which implies that $f\s u(\tx):=- \frac{1}{2}\int_0^u \mathrm{I}E{h}(\tZ_\txt) dt$ is in the domain of~$\mathcal{A}$ and satisfies
\be{
\mathcal{A} f\s u (\tx)=h(\tx)-\mathrm{I}E h(\tZ_\tx(u)).
}
Furthermore, \citep*[Corollary~1.6, Page~10]{Ethier1986} implies that $\mathcal{A}$ is a closed operator,
so it is enough to show that as $u\to\infty$,
\ben{\label{458}
\norm{f\s u-f}_\infty\to0 \,\,\, \mbox{ and } \,\,\, \norm{\mathcal{A}f\s u-h}_{\infty}\to 0.
}
By the definitions,~\eqref{458} is implied by
\be{
\sup_{\tx\in\-\Delta_K}\int_u^\infty \babs{\mathrm{I}E\bklr{ h(\tZ_\txt) }} dt\to 0 \,\,\, \mbox{ and } \,\,\, \sup_{\tx\in\-\Delta_K}\mathrm{I}E h(\tZ_\tx(u))\to 0,
}
as $u\to\infty$. But the first limit follows from~\eqref{457} and the second is because
\be{
\sup_{\tx\in\-\Delta_K}\mathrm{I}E h(\tZ_\tx(u))\leq \norm{h}_\infty \sup_{\tx\in\-\Delta_K} \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(\tZ_\tx(u)), \Dir(\ta)} \leq \norm{h}_\infty \mathrm{I}P(L_u>0)\to 0.
}
The boundedness of the solution follows essentially from the computations above, but we give a slightly different argument
in detail, since a similar but more complicated one is used later. Compute
\bes{
-2f(\tx)
& =\int_0^\infty \mathrm{I}E h(\tZ_\txt) dt
=\mathrm{I}E \int_0^\infty \mathrm{I}E \bklr{h(\tZ_\txt)\big|L_t} dt \\
& =\mathrm{I}E \int_0^\infty \sum_{n\geq1} \mathrm{I}E\bklr{h(\tZ_\txt)\big| L_t=n}\mathrm{I}[L_t=n] dt\\
&=\mathrm{I}E \sum_{n\geq1} \int_0^\infty \mathrm{I}E\bklr{h(\tZ_\tx(1))\big| L_1=n}\mathrm{I}[L_t=n] dt \\
& =\mathrm{I}E \sum_{n\geq1} \mathrm{I}E\bklr{h(\tZ_\tx(1))\big| L_1=n}Y_n
=\sum_{n\geq 1} \mathrm{I}E\bklr{ h(\tZ_\tx(1))\big| L_1=n}\mathrm{I}E Y_n,
}
where we have used dominated convergence multiple times to interchange expectation, integration and summation, along with the fact that~$\mathrm{I}E \bklr{h(\tZ_\txt)\big|L_t=n}$ only depends on~$n$ and not on~$t$ and can therefore be replaced by~$\mathrm{I}E \bklr{h(\tZ_\tx(1))\big|L_1=n}$ (or with~$t$ being replaced by any other fixed positive time).
This leads to
\be{
\abs{f(\tx)}\leq \frac{1}{2}\norm{ h}_\infty \sum_{n\geq 1} \mathrm{I}E Y_n
= \norm{ h}_\infty \sum_{n\geq 1}\frac{1}{n(n-1+s)}\leq \frac{(s+1)}{s}\norm{ h}_\infty,
}
which is \eqref{18}.
\noindent{\bf Preliminaries for partial derivatives.}
To show the existence and bounds for the partial derivatives,
we need some couplings. Let $\te_i$ denote the unit vector with a one in the $i$th coordinate (and zeros in all others; the dimension will be clear from the context).
Fix~$m\geq 0$ and~$1\leq i_1,\dots,i_{m+1}\leq K-1$. Let~$\tx = (x_1, x_2, \ldots, x_{K-1})\in\Delta_K$. Choose~$\varepsilon_1,\dots,\varepsilon_{m+1}>0$ arbitrarily, but small enough that~$x_K:=1-\sum_{j=1}^{K-1} x_j>\sum_{j=1}^{m+1}\varepsilon_j$, or equivalently, that~$\tx+\sum_{j=1}^{m+1}\varepsilon_j\te_{i_j}\in\Delta_K$. Then, proceed with the following steps.
\begin{enumerate}[$(i)$]
\item Let~$L_t$ be the pure death process as described above.
\item Given~$L_t$, let~$\textnormal{\textbf{M}} :=(\tB, \tN)\sim \mathrm{MN}_{m+1+K}(L_t;\eps_1, \dots, \eps_{m+1}, \tx , x_K-\sum_{j=1}^{m+1}\varepsilon_j)$, where~$\tB=(B_1,\dots,B_{m+1})$ and~$\tN=(N_1,\dots,N_K)$.
\item Given~$L_t$ and~$\textnormal{\textbf{M}}$, let
\ben{\label{21}
( \delta_1, \dots, \delta_{m+1}, \textnormal{\textbf{D}}_\tx) \sim \Dir(\tB, \ta+\tN).
}
\item Set~$\boldsymbol{\eps}_j = \eps_j \te_{i_j}$ for~$1\leq j\leq m+1$.
As described immediately below, basic facts
about the multinomial and Dirichlet distributions imply that
\ben{\label{22}
\textnormal{\textbf{D}}_\tx \stackrel{d}{=} \tZ_\txt
}
and that
\ben{\label{23}
\textnormal{\textbf{D}}_{\tx} + \sum_{j\in A} \delta_{j}\te_{i_j} \stackrel{d}{=} \tZ_{\tx+\sum_{j\in A} \boldsymbol{\eps}_{j}}(t)
}
for any subset~$A\subset\{1,\dots,m+1\}$.
\end{enumerate}
To see why~\eqref{22} and~\eqref{23} are true, use the
following standard facts.
\begin{itemize}
\item Let~$(\xi_1, \dots, \xi_{p-1}) \sim \Dir(y_1,\ldots,y_p)$. If~$A=\{i_1,\ldots,i_j\}\subset\{1,\ldots, p-1\}$
is any subset of indices, then
\be{
\bklr{\xi_{i_1},\ldots, \xi_{i_j} } \sim \Dir\bbklr{y_{i_1},\ldots,y_{i_j}, y_p+\sum_{k\not\in A} y_k}.
}
Furthermore, letting~$\boldsymbol{\xi}^{(k)}\in \mathrm{I}R^{p-2}$ denote~$(\xi_1, \dots, \xi_{p-1})$ with the~$k$th coordinate
removed,
we have for~$i<k$ (and a similar statement for $i>k$),
\be{
\boldsymbol{\xi}^{(k)}+\te_i \xi_k
\sim \Dir\bklr{y_{1},\ldots,y_{i-1}, y_{i}+y_{k},y_{i+1} \ldots, y_{k-1}, y_{k+1}, \ldots, y_{p} }.
}
\item Let~$(\zeta_1,\ldots,\zeta_p)\sim \mathrm{MN}_p(b; y_1,\ldots,y_p)$. If~$A=\{i_1,\ldots,i_j\}\subset\{1,\ldots, p\}$
is any
subset of indices, then
\be{
\bbklr{\zeta_{i_1},\ldots, \zeta_{i_j}, \sum_{k\not\in A} \zeta_k} \sim \mathrm{MN}_{j+1}\bbklr{b; y_{i_1},\ldots,y_{i_j}, \sum_{k\not\in A} y_k}.
}
Furthermore, letting~$\boldsymbol{\zeta}^{(k)}\in \mathrm{I}R^{p-1}$ denote~$(\zeta_1, \dots, \zeta_{p})$ with the~$k$th coordinate
removed,
we have for~$i<k$ (and a similar statement for $i>k$),
\be{
\boldsymbol{\zeta}^{(k)}+\te_i \zeta_k
\sim \mathrm{MN}_{p-1}\bklr{b; y_{1},\ldots,y_{i-1}, y_{i}+y_{k},y_{i+1} \ldots, y_{k-1}, y_{k+1}, \ldots, y_{p} }.
}
\end{itemize}
The first item above follows from the usual decomposition of the components of the Dirichlet distribution in terms of ratios of gamma variables, and the second is straightforward from the
probabilistic description of the multinomial distribution.
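These two facts are also easy to confirm by simulation; the sketch below uses the gamma-ratio representation of the Dirichlet distribution and checks the aggregation property on the first two moments (parameter values arbitrary).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
y = np.array([0.8, 1.2, 2.0, 3.0])            # (y_1,...,y_p) with p = 4, illustrative
reps = 200_000

# gamma-ratio representation: xi_i = G_i / sum(G) with independent G_i ~ Gamma(y_i, 1)
G = rng.gamma(y, size=(reps, len(y)))
xi = G / G.sum(axis=1, keepdims=True)

# aggregating the last two coordinates should give Dir(y_1, y_2, y_3 + y_4)
agg = np.column_stack([xi[:, 0], xi[:, 1], xi[:, 2] + xi[:, 3]])
ref = rng.dirichlet(np.array([y[0], y[1], y[2] + y[3]]), size=reps)

print("means    :", agg.mean(axis=0), "vs", ref.mean(axis=0))
print("variances:", agg.var(axis=0), " vs", ref.var(axis=0))
\end{verbatim}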
\noindent {\bf Existence of partial derivatives and bounds (\ref{19}).} To ease notation, for vectors $\tx,\ty$ and a function $g$, define $\Delta_{\ty}g(\tx)=g(\tx+\ty)-g(\tx)$ (context should clarify when we mean the simplex or the difference operator). Assume now that $h\in\mathrm{BC}^{m,1}(\-\Delta_K)$ for some $m\geq 0$, let~$1\leq i_1,\dots,i_{m+1}\leq K-1$, and recall the coupling and associated notation defined above. For any~$1\leq k\leq m+1$, we have
\bes{
\frac{\babs{\Delta_{\boldsymbol{\eps}_{1}}\cdots\Delta_{\boldsymbol{\eps}_{k}}f(\tx)}}{{\varepsilon_1\cdots\varepsilon_k}}
& = \frac{1}{2{\varepsilon_1\cdots\varepsilon_k}}\bbbabs{\int_{0}^\infty \mathrm{I}E \bklr{\Delta_{\boldsymbol{\eps}_{1}}\cdots\Delta_{\boldsymbol{\eps}_{k}}\bklr{{ h}(\tZ_{\cdot}(t))}(\tx)}dt} \\
& = \frac{1}{2{\varepsilon_1\cdots\varepsilon_k}}\bbbabs{\int_{0}^\infty \mathrm{I}E \bklr{\Delta_{\boldsymbol{\eps}_{1}}\cdots\Delta_{\boldsymbol{\eps}_{k}}{{ h}(\textnormal{\textbf{D}}_{\tx})}}dt} \\
& \leq \frac{\abs{h}_{k-1,1}}{2{\varepsilon_1\cdots\varepsilon_k}}\int_{0}^\infty {\mathrm{I}E \bklr{\delta_1\cdots\delta_k}}dt,
}
where in the last step we have applied Lemma~\ref{lem2}. Now using formulas for Dirichlet and multinomial moments, we have
\be{
\mathrm{I}E \bklr{\delta_1\cdots\delta_k\big\vert L_t,\textnormal{\textbf{M}}} = \frac{B_1\cdots B_k}{(L_t+s)(L_t+s+1)\cdots(L_t+s+k-1)}
}
and
\be{
\mathrm{I}E \bklr{B_1\cdots B_k\big\vert L_t} = \varepsilon_1\cdots\varepsilon_k L_t(L_t-1)\cdots(L_t-k+1).
}
Thus,
\bes{
\frac{1}{\varepsilon_1\cdots\varepsilon_k}\int_0^\infty \mathrm{I}E\bklr{\delta_1\cdots\delta_k} dt
& = \sum_{n\geq 1}\frac{n(n-1)\cdots(n-k+1)}{(n+s)(n+s+1)\cdots(n+s+k-1)}\mathrm{I}E Y_n \\
& = \sum_{n\geq 1}\frac{2(n-1)\cdots(n-k+1)}{(n+s-1)(n+s)\cdots(n+s+k-1)} = \frac{2}{k(s+k-1)}.
}
Hence, it follows that
\be{\label{24}
\frac{\babs{\Delta_{\boldsymbol{\eps}_{1}}\cdots\Delta_{\boldsymbol{\eps}_{k}}f(\tx)}}{{\varepsilon_1\cdots\varepsilon_k}} \leq \frac{\abs{h}_{k-1,1}}{k(s+k-1)} =: M_k
}
Since~$\tx$ and~$\varepsilon_1,\dots,\varepsilon_k$ are arbitrary,
\eqref{31} in Lemma~\ref{lem5} is satisfied, and we conclude that~$f\in\mathrm{BC}^{m,1}(\-\Delta_K)$ and that
for $k=1,\ldots, m+1$,
$\abs{f}_{k-1,1} \leq \frac{\abs{h}_{k-1,1}}{k(s+k-1)}$, which is \eqref{19}.
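The closed form of the series used above is easily confirmed numerically (a sketch; the values of $s$ and $k$ are arbitrary):
\begin{verbatim}
from math import prod

def partial_sum(s, k, nmax=200_000):
    """Partial sum of 2(n-1)...(n-k+1) / ((n+s-1)(n+s)...(n+s+k-1)) over n >= 1."""
    total = 0.0
    for n in range(1, nmax + 1):
        num = 2 * prod(n - j for j in range(1, k))            # 2 (n-1)...(n-k+1)
        den = prod(n + s - 1 + j for j in range(k + 1))       # (n+s-1)...(n+s+k-1)
        total += num / den
    return total

s, k = 4.5, 3
print(partial_sum(s, k), "vs", 2 / (k * (s + k - 1)))
\end{verbatim}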
\noindent {\bf Extension to~$\boldsymbol{\-\Delta_K}$.} Assume now~$m\geq 2$. Since~$f\in\mathrm{BC}^{m,1}(\-\Delta_K)$, we have in particular that the~$f_i$ and the~$f_{ij}$ can be extended continuously and uniquely to the boundary of~$\-\Delta_K$. Since the left hand side of \eqref{16} only consists of finite sums and continuous transformations of the~$f_i$ and~$f_{ij}$ and is equal to the right hand side of \eqref{16} on~$\Delta_K$, it follows
that \eqref{16} also holds on the boundary of~$\-\Delta_K$.
\end{proof}
\subsection{Proof of Theorem~\ref{THM3}}
\begin{proof}[Proof of Theorem~\ref{THM3}] Since $h\in\mathrm{BC}^{2,1}(\-\Delta_K)$, Theorem~\ref{THM5} implies that there is a function $f\in\mathrm{BC}^{2,1}(\-\Delta_K)$ solving \eqref{16}. Exchangeability implies
\bes{
0&=\tsfrac{1}{2}\mathrm{I}E[ (\textnormal{\textbf{W}}'-\textnormal{\textbf{W}})^t \Lambda^{-t} (\nabla f(\textnormal{\textbf{W}}')+\nabla f(\textnormal{\textbf{W}}))] \\
&=\mathrm{I}E[ (\textnormal{\textbf{W}}'-\textnormal{\textbf{W}})^t \Lambda^{-t}\nabla f(\textnormal{\textbf{W}}))] +\tsfrac{1}{2}\mathrm{I}E[ (\textnormal{\textbf{W}}'-\textnormal{\textbf{W}}) ^t\Lambda^{-t} (\nabla f(\textnormal{\textbf{W}}')-\nabla f(\textnormal{\textbf{W}}))],
}
and applying the linearity condition~\eqref{7} yields
\bes{
&\mathrm{I}E [(\ta-s\textnormal{\textbf{W}})^t \nabla f(\textnormal{\textbf{W}})] \\
&\qquad= -\tsfrac{1}{2}\mathrm{I}E[ (\textnormal{\textbf{W}}'-\textnormal{\textbf{W}})^t \Lambda^{-t} (\nabla f(\textnormal{\textbf{W}}')-\nabla f(\textnormal{\textbf{W}}))]-\mathrm{I}E [\tR^t\Lambda^{-t}\nabla f(\textnormal{\textbf{W}})].
}
By the fundamental theorem of calculus,
\bes{
f_i(\textnormal{\textbf{w}}') & = f_i(\textnormal{\textbf{w}}) + \int_{0}^1\sum_{j=1}^{K-1}(w'_j-w_j)f_{ij}(\textnormal{\textbf{w}}+(\textnormal{\textbf{w}}'-\textnormal{\textbf{w}})t)dt \\
& = f_i(\textnormal{\textbf{w}}) + \sum_{j=1}^{K-1}(w'_j-w_j)f_{ij}(\textnormal{\textbf{w}}) + \sum_{j=1}^{K-1}\int_{0}^1(w'_j-w_j)\bklr{f_{ij}(\textnormal{\textbf{w}}+(\textnormal{\textbf{w}}'-\textnormal{\textbf{w}})t)-f_{ij}(\textnormal{\textbf{w}})}dt.
}
Since~$f_{ij}$ is Lipschitz continuous,
\be{
\babs{f_{ij}(\textnormal{\textbf{w}}+(\textnormal{\textbf{w}}'-\textnormal{\textbf{w}})t)-f_{ij}(\textnormal{\textbf{w}})}
\leq \abs{f}_{2,1}t\sum_{k=1}^{K-1}\abs{w'_k-w_k};
}
hence, there are~$\tilde{Q}_{ijk} = \tilde{Q}_{ijk}(\textnormal{\textbf{w}},\textnormal{\textbf{w}}',f)$ such that~$\abs{\tilde{Q}_{ijk}}\leq \abs{f}_{2,1}$ and
\bes{
&(\textnormal{\textbf{w}}'-\textnormal{\textbf{w}})^t \Lambda^{-t} (\nabla f(\textnormal{\textbf{w}}')-\nabla f(\textnormal{\textbf{w}})) \\
&\qquad= \sum_{m,i,j} (\Lambda^{-1})_{i,m} (w_m'-w_m)(w_j'-w_j) f_{i j}(\textnormal{\textbf{w}}) \\
&\qquad\qquad+\frac{1}{2}\sum_{m,i,j,k} (\Lambda^{-1})_{i,m} (w_m'-w_m)(w_j'-w_j)(w_k'-w_k)\tilde{Q}_{ijk}.
}
Combining the previous three displays, we have
\ban{
&\mathrm{I}E\left[ \sum_{i,j=1}^{K-1} W_i(\delta_{i j}-W_j) f_{i j}(\textnormal{\textbf{W}})+\sum_{i=1}^{K-1}(a_i-s W_i) f_i(\textnormal{\textbf{W}})\right] \notag \\
&\quad=\mathrm{I}E\left[\sum_{i,j=1}^{K-1} \left(W_i(\delta_{i j}-W_j) -\frac{1}{2}\sum_{m=1}^{K-1}(\Lambda^{-1})_{i,m} (W_m'-W_m)(W_j'-W_j) \right)f_{i j}(\textnormal{\textbf{W}})\right]\notag \\
&\qquad\quad-\frac{1}{2}\sum_{m,i,j,k}(\Lambda^{-1})_{i,m} \mathrm{I}E\left[(W_m'-W_m)(W_j'-W_j)(W_k'-W_k)\tilde{Q}_{ijk}\right] \notag \\
&\qquad\quad\qquad -\mathrm{I}E \left[\sum_{i,j} R_j(\Lambda^{-1})_{i,j}f_i(\textnormal{\textbf{W}})\right].\notag
}
We can further simplify the first summand above to
\bes{
\mathrm{I}E\left[\sum_{i,j,m} (\Lambda^{-1})_{i,m} \left( \Lambda_{m,i} W_i(\delta_{i j}-W_j) -\frac{1}{2} (W_m'-W_m)(W_j'-W_j) \right)f_{i j}(\textnormal{\textbf{W}})\right],
}
and now the theorem follows from judicious use of the triangle inequality and the bound \eqref{19} from Theorem~\ref{THM5}.
If~$\Lambda$ is a multiple of the identity matrix
then, following ideas of \citep*{Rollin2008},
the proof is nearly identical but started from
\bes{
f(\textnormal{\textbf{W}})-f(\textnormal{\textbf{W}}')&=\sum_{i=1}^{K-1} (W_i'-W_i) f_i(\textnormal{\textbf{W}})+\frac{1}{2}\sum_{i,j=1}^{K-1} (W_i'-W_i)(W_j'-W_j) f_{i j}(\textnormal{\textbf{W}}) \\
&\qquad+\frac{1}{6} \sum_{i,j,k=1}^{K-1}(W_i'-W_i)(W_j'-W_j) (W_k'-W_k) \tilde S_{ijk},
}
where~$\tilde S_{ijk}=\tilde S_{ijk}(\textnormal{\textbf{W}},\textnormal{\textbf{W}}',f)$ satisfies $\abs{\tilde S_{ijk}}\leq \abs{f}_{2,1}$.
From here, the proof follows as above by taking expectation, noting that~$\mathrm{I}E[f(\textnormal{\textbf{W}})-f(\textnormal{\textbf{W}}')]=0$ (since~$\mathscr{L}(\textnormal{\textbf{W}})=\mathscr{L}(\textnormal{\textbf{W}}')$)
and that the expectation of the first term on the right hand side above can be simplified using the linearity condition~\eqref{7}.
The bound on the convex set distance \eqref{9} directly follows from \eqref{8} and Lemma~\ref{lem9}.
\end{proof}
\subsection{Auxiliary results}
In what follows, we define, as usual,~$\Delta_\ty g(\tx)=g(\tx +\ty)-g(\tx)$ and denote by~$\te_i$ the~$i$th unit vector in~$\mathrm{I}R^n$.
\begin{lemma} \label{lem2}
Let~$U\subset\mathrm{I}R^n$ be a convex open set, and let~$g\in\mathrm{BC}^{m,1}(U)$ for some~$m\geq 0$. Let~$\tx\in U$, let~$1\leq k\leq m+1$, and let~$\ty\s 1, \ldots, \ty\s k\in\mathrm{I}R^n$ be such that~$\tx + \sum_{i=1}^j \ty\s{j} \in U$ for all~$1\leq j\leq k$.
Then, if~$k\leq m$,
\be{
\left|\left(\prod_{i=1}^k \Delta_{\ty\s i}\right) g(\tx)\right|\leq \abs{g}_{k} \prod_{i=1}^k \norm{\ty\s i}_1,
}
and if~$k=m+1$, the same estimate holds with~$\abs{g}_{k}$ replaced by~$\abs{g}_{m,1}$ on the right hand side.
\end{lemma}
\begin{proof} Assume~$k\leq m$.
Applying the easy identity
\be{
g(\tx +\ty)-g(\tx)=\int_0^1 \sum_{i=1}^n \frac{\partial g}{\partial x_i} (\tx + t\ty) \ty_i dt
}
repeatedly~$k$ times yields
\ben{\label{25}
\left(\prod_{i=1}^k \Delta_{\ty\s i}\right) g(\tx)=\int_{[0,1]^k} \sum_{i_1, \ldots, i_k=1}^{n}\frac{\partial^k g}{\prod_{ j =1}^k \partial x_{i_j }} \left(\tx + \sum_{j=1}^k \ty\s{j} t_j\right)\prod_{j=1}^k y\s{j}_{i_j} d\textbf{t}.
}
Thus
\be{
\left|\left(\prod_{i=1}^k \Delta_{\ty\s i} \right) g(\tx)\right| \leq \abs{g}_k \sum_{i_1, \ldots, i_k=1}^{n} \prod_{j=1}^k \abs{ y\s{j}_{i_j}} = \abs{g}_k \prod_{i=1}^k \norm{\ty\s i}_1.
}
For~$k=m+1$, use~\eqref{25} for $k=m$ to find
\bes{
\left(\prod_{i=1}^{m+1} \Delta_{\ty\s i}\right) g(\tx)&=\int_{[0,1]^m} \sum_{i_1, \ldots, i_m=1}^{n}\frac{\partial^m \Delta_{\ty\s{m+1}}g}{\prod_{ j =1}^m \partial x_{i_j }} \left(\tx + \sum_{j=1}^m \ty\s{j} t_j\right)\prod_{j=1}^k y\s{j}_{i_j} d\textbf{t} \\
&=\int_{[0,1]^m} \sum_{i_1, \ldots, i_m=1}^{n}\Delta_{\ty\s{m+1}}\frac{\partial^m g}{\prod_{ j =1}^m \partial x_{i_j }} \left(\tx + \sum_{j=1}^m \ty\s{j} t_j\right)\prod_{j=1}^k y\s{j}_{i_j} d\textbf{t}.
}
Since the $m+1$ partials are Lipschitz, we find
\be{
\bbbbabs{\Delta_{\ty\s{m+1}}\frac{\partial^m g}{\prod_{ j =1}^m \partial x_{i_j }}\left(\tx + \sum_{j=1}^m \ty\s{j} t_j\right)}
\leq \abs{g}_{m,1} \norm{\ty\s{m+1}}_1,
}
and the result now easily follows by combining this with the previous display.
\end{proof}
\begin{lemma}\label{lem3} Let~$U\subset\mathrm{I}R^n$ be an open convex set, and let~$g:U\to\mathrm{I}R$ be a function. Then,~$g$ is~$M$-Lipschitz continuous with respect to the~$L_1$-norm, if and only if it is
coordinate-wise~$M$-Lipschitz continuous; that is,
\be{
\sup_{x\in U}\sup_u \frac{\abs{\Delta_u g(x)}}{\abs{u}}\leq M.
}
\end{lemma}
\begin{proof} It is clear that if~$g$ is~$M$-Lipschitz continuous, then it is in particular~$M$-Lipschitz continuous in each coordinate. The reverse direction is easily proved using convexity and a telescoping sum argument along the coordinates.
\end{proof}
We were not able to locate the next two lemmas in the literature; hence, we give self-contained proofs.
There is strong resemblance with the theory of \emph{bounded~$k$-th variation}, see for example \citep*[Theorem~11]{Russell1973}, but we were not able to find a result that would directly apply to our situation; we also refer to recent survey textbooks \citep*{Mukhopadhyay2012} and \citep*{Appell2014}.
In what follows, we assume that~$u$ and~$v$ appearing in terms like~$\Delta_u f(z)$,~$\Delta_u\Delta_v f(z)$,~$\Delta_{u\varepsilon_i} g(\tx)$ and~$\Delta_{u\varepsilon_i}\Delta_{v\varepsilon_j} g(\tx)$ are such that~$z+u$,~$z+u+v$,~$\tx+u\te_i$ and~$\tx+u\te_i+v\te_j$ are within the domains of the functions being evaluated.
\begin{lemma}\label{lem4} Let~$f:(a,b)\to\mathrm{I}R$ be a function. If
\ben{\label{26}
M_1:=\sup_{z}\sup_{u}\frac{\abs{\Delta_uf(z)}}{\abs{u}}<\infty, \qquad
M_2:=\sup_{z}\sup_{ u,v}\frac{\abs{\Delta_u\Delta_v f(z)}}{\abs{uv}}<\infty,
}
then~$f$ is differentiable and~$f'$ is~$M_2$-Lipschitz-continuous.
\end{lemma}
\begin{proof}
Since, by the first condition of~\eqref{26}, $f$ is $M_1$-Lipschitz, Rademacher's theorem implies that there is a dense set $E\subset (a,b)$ on which $f$ has a derivative $f'$. On $E$, the second condition of~\eqref{26} implies that $f'$ is $M_2$-Lipschitz, and so by Kirszbraun's theorem,~$f'$ can be extended to an $M_2$-Lipschitz function $\tilde f'$ on $(a,b)$. We show that for $x\not\in E$, $\tilde f'(x)$ is in fact the derivative of $f$ at~$x$. Fix $\varepsilon>0$, and let $x'\in E$ be such that $\abs{x'-x}<\varepsilon/(3M_2)$ and $\abs{\tilde f'(x')-\tilde f'(x)}< \varepsilon/3$. Then, for every $h>0$ small enough that $\abs{h^{-1}\Delta_hf(x')-f'(x')}\leq \varepsilon/3$,
\ba{
\bbbabs{\frac{\Delta_h f(x)}{h}- \tilde f'(x)}
&\leq \bbbabs{\frac{\Delta_h f(x)}{h}-\frac{\Delta_h f(x')}{h}}
+ \bbbabs{\frac{\Delta_h f(x')}{h}-\tilde f'(x')}
+ \babs{\tilde f'(x')-\tilde f'(x)} \\
&= \bbbabs{\frac{\Delta_{x-x'}\Delta_h f(x')}{h}}
+ \bbbabs{\frac{\Delta_h f(x')}{h}- f'(x')}
+ \babs{\tilde f'(x')-\tilde f'(x)} \\
&\leq \frac{\varepsilon}{3} + \frac{\varepsilon}{3}+ \frac{\varepsilon}{3}=\varepsilon.
}
Hence, $\lim_{h\to0}h^{-1}\Delta_h f(x) = \tilde f'(x)$, as desired.
\end{proof}
\begin{lemma}\label{lem5} Let~$U\subset\mathrm{I}R^n$ be an open convex set, let~$g:U\to\mathrm{I}R$ be a bounded function, and let~$m\geq 0$. If, for each~$1\leq k\leq m+1$, there is a constant~$M_k<\infty$, such that, for each set of indices~$1\leq i_1,\dots,i_k\leq n$,
\ben{\label{31}
\sup_{x\in U }\sup_{u_1,\dots,u_k}\frac{\abs{\Delta_{u_1\te_{i_1}}\cdots\Delta_{u_k\te_{i_k}} g(x)}}{\abs{u_1\cdots u_k}} \leq M_k,
}
then~$g\in \mathrm{BC}^{m,1}(U)$
and
\ben{\label{32}
\abs{g}_{k,1}\leq M_{k+1},\qquad 0\leq k\leq m.
}
\end{lemma}
\begin{proof} If~$m=0$, the result is immediate since \eqref{31} is just the coordinate-wise~$M_1$-Lipschitz condition, which implies that~$g$ is Lipschitz, and \eqref{32} follows from Lemma~\ref{lem3}. Now, assume~$m\geq 1$. Fix a set of~$m$ indices~$1\leq i_1,\dots,i_m\leq n$. We proceed by induction and start with~$k=1$. Let~$\tx= (x_1,\dots,x_n)\in U$, let~$a<x_{i_1}<b$ be such that
\be{
t(x_1,\dots,x_{i_1-1},a,x_{i_1+1},\dots,x_n)
+ (1-t)(x_1,\dots,x_{i_1-1},b,x_{i_1+1},\dots,x_n) \in U, \qquad 0\leq t\leq1,
}
and, with~$\tx_z = (x_1,\dots,x_{i_1-1},z,x_{i_1+1},\dots,x_n)$, let~$f(z) = g(\tx_z)$. By the assumptions on~$g$, we have
\bg{
\sup_{z}\sup_{u}\frac{\abs{\Delta_{u}f(z)}}{\abs{u}}
= \sup_{z}\sup_{u_1}\frac{\abs{\Delta_{u_1\te_{i_1}}g(\tx_z)}}{\abs{u_1}}\leq M_1<\infty,\\
\sup_{z}\sup_{u,v}\frac{\abs{\Delta_{u}\Delta_{v}f(z)}}{\abs{uv}}
=\sup_{z}\sup_{u_1,u_2}\frac{\abs{\Delta_{u_1\te_{i_1}}\Delta_{u_2\te_{i_1}}g(\tx_z)}}{\abs{u_1u_2}}\leq M_2<\infty
}
(note that in the second expression, the second difference is also in the direction~$\te_{i_1}$)
so that the conditions \epsqref{26} are satisfied. Applying Lemma~\ref{lem4}, we conclude that~$f'(z)=\frac{\partial}{\partial x_{i_1}} g(\tx_z)$ exists and that it is~$M_2$-Lipschitz continuous in direction~$i_1$, but the same argument there together with \epsqref{31} yields~$M_2$-Lipschitz continuity in any other direction, so that by Lemma~\ref{lem3},~$\frac{\partial}{\partial x_{i_1}} g(\tx)$ is~$M_2$-Lipschitz. Since~$\tx$ was arbitrary,~$\frac{\partial}{\partial x_{i_1}} g(\tx)$ exists in all of~$U$ and is~$M_2$-Lipschitz, which concludes the base case.
Assume now that~$1<k\leq m$ and that~$\frac{\partial^{k-1}}{\partial x_{i_1}\cdots\partial x_{i_{k-1}}} g(\tx)$ exists in all of~$U$. Let~$\tx\in U$, let~$a$,~$b$ and~$\tx_z$ be as before, and let~$f(z) = \frac{\partial^{k-1}}{\partial x_{i_1}\cdots\partial x_{i_{k-1}}} g(\tx_z)$.
From the assumptions on~$g$ and since~$\frac{\partial^{k-1}}{\partial x_{i_1}\cdots\partial x_{i_{k-1}}} g(\tx)$ exists, we have
\bg{
\sup_{z}\sup_{u}\frac{\abs{\Delta_{u}f(z)}}{\abs{u}}
= \sup_{z}\sup_{u_{k}}\bbbabs{\lim_{u_1\to0}\cdots\lim_{u_{k-1}\to0}\frac{\Delta_{u_1\te_{i_1}}\cdots\Delta_{u_{k}\te_{i_{k}}}g(\tx_z)}{u_1\cdots u_{k}}} \leq M_{k} < \infty \\
\sup_{z}\sup_{u,v}\frac{\abs{\Delta_{u}\Delta_v f(z)}}{\abs{uv}}
= \sup_{z}\sup_{u_{k},u_{k+1}}\bbbabs{\lim_{u_1\to0}\cdots\lim_{u_{k-1}\to0}\frac{\Delta_{u_1\te_{i_1}}\cdots\Delta_{u_{k}\te_{i_{k}}}\Delta_{u_{k+1}\te_{i_{k}}}g(\tx_z)}{u_1\cdots u_{k}u_{k+1}}} \leq M_{k+1} < \infty
}
so that the conditions \epsqref{26} are satisfied. Applying Lemma~\ref{lem4}, we conclude that~$f'(z)=\frac{\partial^{k}}{\partial x_{i_1}\cdots\partial x_{i_{k}}} g(\tx_z)$ exists
and that it is~$M_{k+1}$-Lipschitz continuous in direction~$i_k$, but the same argument there together with \epsqref{31} yields~$M_{k+1}$-Lipschitz continuity in any other direction, so that by Lemma~\ref{lem3},~$\frac{\partial^{k}}{\partial x_{i_1}\cdots\partial x_{i_{k}}} g(\tx_z)$ is~$M_{k+1}$-Lipschitz. Since~$\tx$ was arbitrary,~$\frac{\partial^{k}}{\partial x_{i_1}\cdots\partial x_{i_{k}}} g(\tx_z)$ exists in all of~$U$ and is~$M_{k+1}$-Lipschitz, which concludes the induction step.
\end{proof}
The following is a specialisation of \citep*[Lemma~2.1]{Bentkus2003} to the convex set metric; we need some notation first. Let~$A\subset \mathrm{I}R^K$ be convex, let~$d(\tx,A) = \inf_{\ty\in A}|\tx-\ty|$, and define the sets
\ben{\label{33}
A^\varepsilon = \{\tx\in \mathrm{I}R^K\,:\,d(\tx,A)\leq \varepsilon\},\qquad A^{-\varepsilon} = \{\tx\in A\,:\, B(\tx;\varepsilon)\subset A\},
}
where~$B(\tx;\varepsilon)$ is the closed ball of radius~$\varepsilon$ around~$\tx$.
\begin{lemma}[{\citep*[Lemma~2.1]{Bentkus2003}}] \label{lem6} Let~$\mathcal{C}_K$ be the family of convex sets of\/~$\mathrm{I}R^K$, and for fixed~$\varepsilon>0$, let~$\{\varphi_{\varepsilon,A}; \, A\in\mathcal{C}_K\}$ be a family of functions satisfying
\ben{\label{34}
0\leq \varphi_{\varepsilon,A} \leq 1,
\qquad
\text{$\varphi_{\varepsilon,A}(\tx) = 1$ for~$\,\tx\in A$},
\qquad
\text{$\varphi_{\varepsilon,A}(\tx) = 0$ for~$\,\tx\not\in A^{\varepsilon}$.}
}
Then, for any two random vectors~$\tX$ and~$\textnormal{\textbf{Y}}$,
\bes{
&\sup_{A\in\mathcal{C}_K}\babs{\mathrm{I}P[\tX\in A]-\mathrm{I}P[\textnormal{\textbf{Y}}\in A]} \\
&\quad \leq \sup_{A\in\mathcal{C}_K}\babs{\mathrm{I}E\varphi_{\varepsilon,A}(\tX)-\mathrm{I}E\varphi_{\varepsilon,A}(\textnormal{\textbf{Y}})}
+ \sup_{A\in\mathcal{C}_K}\max\bklg{\mathrm{I}P[\textnormal{\textbf{Y}}\in A\setminus A^{-\varepsilon}],\mathrm{I}P[\textnormal{\textbf{Y}}\in A^\varepsilon\setminus A]}
}
\end{lemma}
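Although Lemma~\ref{lem6} is taken from \citep*{Bentkus2003}, it may help to recall the short sandwich argument behind it (a sketch only; assume for simplicity that $A$ is closed). By \eqref{34} we have $I_A\leq\varphi_{\varepsilon,A}\leq I_{A^{\varepsilon}}$ pointwise, so that
\bes{
\mathrm{I}P[\tX\in A]
&\leq \mathrm{I}E\varphi_{\varepsilon,A}(\tX)
\leq \mathrm{I}E\varphi_{\varepsilon,A}(\textnormal{\textbf{Y}})
+\babs{\mathrm{I}E\varphi_{\varepsilon,A}(\tX)-\mathrm{I}E\varphi_{\varepsilon,A}(\textnormal{\textbf{Y}})}\\
&\leq \mathrm{I}P[\textnormal{\textbf{Y}}\in A]+\mathrm{I}P[\textnormal{\textbf{Y}}\in A^{\varepsilon}\setminus A]
+\sup_{A'\in\mathcal{C}_K}\babs{\mathrm{I}E\varphi_{\varepsilon,A'}(\tX)-\mathrm{I}E\varphi_{\varepsilon,A'}(\textnormal{\textbf{Y}})}.
}
Applying the same chain to the set $A^{-\varepsilon}$, which is again convex and satisfies $(A^{-\varepsilon})^{\varepsilon}\subset A$, gives the matching lower bound on $\mathrm{I}P[\tX\in A]$, with $\mathrm{I}P[\textnormal{\textbf{Y}}\in A\setminus A^{-\varepsilon}]$ in place of $\mathrm{I}P[\textnormal{\textbf{Y}}\in A^{\varepsilon}\setminus A]$; taking the supremum over $A\in\mathcal{C}_K$ yields the lemma.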
\begin{lemma}[Smoothing operator] \label{lem7}Let~$f:\mathrm{I}R^n\to \mathrm{I}R$ be a bounded and Lebesgue measurable function. For~$\varepsilon>0$, define the smoothing operator~$\mathcal{S}_\varepsilon$ as
\be{
(\mathcal{S}_\varepsilon f)(\tx) = \frac{1}{(2\varepsilon)^n}\int\limits_{x_1-\varepsilon}^{x_1+\varepsilon}\cdots\int\limits_{x_n-\varepsilon}^{x_n+\varepsilon}f(z)\,dz_n\cdots dz_1.
}
Then, for any $m\geq 1$, we have that $\mathcal{S}_\varepsilon^m f\in\mathrm{BC}^{m-1,1}(\mathrm{I}R^n)$, and for fixed~$\tx\in\mathrm{I}R^n$,~$(\mathcal{S}_\varepsilon^ m f)(\tx)$ does not depend on~$f(\ty)$,~$\ty\in\mathrm{I}R^n\setminus B(\tx;mn^{1/2}\varepsilon)$. Moreover, we have the bounds
\ben{\label{35}
\norm{\mathcal{S}^m_\varepsilon f}_{\infty}\leq\norm{f}_\infty,
\qquad
\abs{\mathcal{S}^m_\varepsilon f}_{k-1,1}\leq \frac{\norm{f}_\infty}{\varepsilon^k}, \quad 1\leq k\leq m.
}
\end{lemma}
\begin{proof} The claim that $(\mathcal{S}_\varepsilon^m f)(\tx)$ does not depend on $f(\ty)$,~$\ty\in\mathrm{I}R^n\setminus B(\tx;mn^{1/2}\varepsilon)$, is a straightforward consequence of the definition, as is the bound
\ben{\label{36}
\norm{\mathcal{S}_\varepsilon f}_\infty \leq \norm{f}_\infty.
}
Now, it is easy to see that for $u>0$ and $1\leq i\leq n$,
\bes{
\abs{\Delta_{u\te_{i}}\mathcal{S}_\varepsilon f (\tx)}
\leq \begin{cases}
\displaystyle 2\norm{f}_\infty& \text{if $u>2\varepsilon$,}\\[2ex]
\displaystyle\frac{u\norm{f}_\infty}{\varepsilon}& \text{if $u\leq 2\varepsilon$,}
\end{cases}
}
so that $\abs{\Delta_{u\te_{i}}\mathcal{S}_\varepsilon f (\tx)}\leq u\norm{f}_{\infty}/\varepsilon$ for all $\tx$ and all $u$, which implies that
\ben{\label{37}
\bbbnorm{\frac{\Delta_{u\te_{i}}\mathcal{S}_\varepsilon f}{u}}_\infty
\leq \frac{\norm{f}_\infty}{\varepsilon}.
}
Fix $1\leq k\leq m$, $u_1,\dots,u_k>0$ and $1\leq i_1,\dots,i_k\leq n$. Noting that~$\Delta_{u\te_i}\mathcal{S}_\varepsilon g = \mathcal{S}_\varepsilon \Delta_{u\te_i}g$, we can write
\be{
\Delta_{u_1\te_{i_1}}\cdots\Delta_{u_k\te_{i_k}}\mathcal{S}_\varepsilon^m = (\Delta_{u_1\te_{i_1}}\mathcal{S}_\varepsilon)\cdots(\Delta_{u_k\te_{i_k}}\mathcal{S}_\varepsilon) \mathcal{S}_\varepsilon^{m-k}.
}
Applying \eqref{37} repeatedly, $k$ times, and, if $k<m$, applying \eqref{36} in addition, we obtain \eqref{31} with $M_k = \norm{f}_\infty/\varepsilon^k$, so that the claim follows from Lemma~\ref{lem5}.
\end{proof}
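For orientation, a one-dimensional example (illustrative only): take $n=1$, $m=1$ and $f=I_{[0,\infty)}$, so that $\norm{f}_\infty=1$ and
\be{
(\mathcal{S}_\varepsilon f)(x)
=\frac{1}{2\varepsilon}\int_{x-\varepsilon}^{x+\varepsilon}I_{[0,\infty)}(z)\,dz
=\begin{cases}
0 & \text{if $x\leq-\varepsilon$,}\\
\displaystyle\frac{x+\varepsilon}{2\varepsilon} & \text{if $-\varepsilon<x<\varepsilon$,}\\
1 & \text{if $x\geq\varepsilon$,}
\end{cases}
}
a piecewise linear ramp with Lipschitz constant $1/(2\varepsilon)\leq\norm{f}_\infty/\varepsilon$, in agreement with \eqref{35} for $k=m=1$; a second application of $\mathcal{S}_\varepsilon$ then produces an element of $\mathrm{BC}^{1,1}(\mathrm{I}R)$, as guaranteed by the lemma with $m=2$.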
\begin{lemma}\label{lem8} Let~$\varepsilon>0$, and let~$A\subset \mathrm{I}R^n$ be convex. There exists a function $\varphi=\varphi_{\varepsilon,A}\in\mathrm{BC}^{2,1}(\mathrm{I}R^n)$ satisfying~\eqref{34} with
\ben{\label{38}
\abs{\varphi}_1\leq \frac{9n^{1/2}}{\varepsilon},
\qquad
\abs{\varphi}_2\leq \frac{81n}{\varepsilon^2},
\qquad
\abs{\varphi}_{2,1}\leq \frac{729n^{3/2}}{\varepsilon^3}.
}
\end{lemma}
\begin{proof} Let~$\delta = \frac{\varepsilon}{9\sqrt{n}}$. Define
\be{
\varphi(\tx) = \mathcal{S}_\delta^3 I_{A^{\varepsilon/3}}(\tx);
}
the claim then follows from Lemma~\ref{lem7}.
\end{proof}
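It may be helpful to spell out how the properties claimed in the lemma follow from Lemma~\ref{lem7} with this choice (we assume here only that $\abs{\varphi}_1\leq\abs{\varphi}_{0,1}$ and $\abs{\varphi}_2\leq\abs{\varphi}_{1,1}$, i.e.\ that suprema of derivatives are dominated by the Lipschitz seminorms of the previous order; the seminorms are defined earlier in the paper and not restated here). Since $m=3$ and $\delta=\varepsilon/(9\sqrt{n})$, we have $3n^{1/2}\delta=\varepsilon/3$, and \eqref{35} gives
\be{
\abs{\varphi}_{0,1}\leq\frac{1}{\delta}=\frac{9n^{1/2}}{\varepsilon},
\qquad
\abs{\varphi}_{1,1}\leq\frac{1}{\delta^{2}}=\frac{81n}{\varepsilon^{2}},
\qquad
\abs{\varphi}_{2,1}\leq\frac{1}{\delta^{3}}=\frac{729n^{3/2}}{\varepsilon^{3}},
}
which yields \eqref{38}, while $0\leq\varphi\leq1$ follows from \eqref{35} and the fact that $\mathcal{S}_\delta$ preserves nonnegativity. Moreover, $\varphi(\tx)$ depends on $I_{A^{\varepsilon/3}}$ only through its values on $B(\tx;\varepsilon/3)$: for $\tx\in A$ this ball is contained in $A^{\varepsilon/3}$, where the indicator is identically~$1$, so $\varphi(\tx)=1$; for $\tx\notin A^{\varepsilon}$ every $\ty\in B(\tx;\varepsilon/3)$ satisfies $d(\ty,A)>2\varepsilon/3$, so $\ty\notin A^{\varepsilon/3}$, the indicator vanishes on the ball, and $\varphi(\tx)=0$. This gives \eqref{34}.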
\begin{lemma}\label{lem9} Let~$\mathcal{C}_{K-1}$ be the class of convex sets on~$\mathrm{I}R^{K-1}$. Let~$\tZ\sim \Dir(a_1,\dots,a_K)$ and assume
\ben{\label{39}
\abs{\mathrm{I}E h(\textnormal{\textbf{W}}) - \mathrm{I}E h(\tZ)} \leq c_0\abs{h}_0 + c_1\abs{h}_1 + c_2\abs{h}_2 + c_3\abs{h}_{2,1},
}
for any $h\in\mathrm{BC}^{2,1}(\overline{\Delta}_K)$.
Then there is a constant~$C>0$ depending only on~$a_1,\dots,a_K$ such that
\ben{\label{40}
\sup_{A\in \mathcal{C}_{K-1}}\abs{\mathrm{I}P[\textnormal{\textbf{W}}\in A]-\mathrm{I}P[\tZ\in A]} \leq c_0+C(c_1+c_2+c_3)^{\theta/(3+\theta)},
}
where
\ben{
\theta = \frac{\theta_\wedge}{\theta_\wedge+\theta_\circ},\qquad \theta_\wedge = 1\wedge\min\{a_1,\deltaots,a_K\},\qquad \theta_\circ=\sum_{i=1}^K\bklr{1-1\wedge a_i}.
}
\end{lemma}
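For concreteness, and only as an illustration of the exponent: in the symmetric case $a_1=\cdots=a_K=a$ we have $\theta_\wedge=1\wedge a$ and $\theta_\circ=K(1-1\wedge a)$, so that
\be{
\theta=
\begin{cases}
1 & \text{if $a\geq 1$,}\\
\displaystyle\frac{a}{a+K(1-a)} & \text{if $a<1$,}
\end{cases}
}
and the exponent $\theta/(3+\theta)$ in \eqref{40} equals $1/4$ when $a\geq1$ and, for example, $1/(3K+4)$ when $a=\tfrac{1}{2}$.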
\begin{proof} Since both $\textnormal{\textbf{W}}$ and $\tZ$ take values in $\overline{\Delta}_K$, we may assume without loss of generality that~$A\subset \overline{\Delta}_K$.
Fix~$\varepsilon>0$; from Lemma~\ref{lem6} we have
\bes{
& \sup_{A\in \mathcal{C}_{K-1}}\abs{\mathrm{I}P[\textnormal{\textbf{W}}\in A]-\mathrm{I}P[\tZ\in A]} \\
& \qquad\leq \sup_{A\in \mathcal{C}_{K-1}}\abs{\mathrm{I}E \varphi_{\varepsilon,A}(\textnormal{\textbf{W}}) - \mathrm{I}E \varphi_{\varepsilon,A}(\tZ)} + \sup_{A\in \mathcal{C}_{K-1}} \mathrm{I}P[\tZ\in A^\varepsilon\setminus A]\vee \mathrm{I}P[\tZ\in A\setminus A^{-\varepsilon}] \\
& =: R_1 + R_2,
}
where the $\varphi_{\varepsilon,A}$ are chosen as in Lemma~\ref{lem8}. Using \eqref{39} and \eqref{38},
\be{
R_1\leq c_0 + \frac{9(K-1)^{1/2}c_1}{\varepsilon} + \frac{81(K-1)c_2}{\varepsilon^2} + \frac{729(K-1)^{3/2}c_3}{\varepsilon^3}.
}
In order to bound~$R_2$ we proceed as follows. Let~$\delta\geq\varepsilon$ (to be chosen later), and consider~$\overline{\Delta}^{-\delta}_{K-1}$, the~$\delta$-shrinkage of~$\overline{\Delta}_{K-1}$ as defined in~\eqref{33}.
For given convex~$A\subset \overline{\Delta}_{K-1}$, let~$A_\circ = A \cap \overline{\Delta}^{-\delta}_{K-1}$ (which is again convex) and note that
\ba{
\mathrm{I}P[\tZ\in A^\varepsilon\setminus A] & \leq \mathrm{I}P[\tZ\in \overline{\Delta}_{K-1}\setminus\overline{\Delta}_{K-1}^{-\delta}] + \mathrm{I}P[\tZ\in A^{\varepsilon}_\circ\setminus A_\circ], \\
\mathrm{I}P[\tZ\in A\setminus A^{-\varepsilon}] & \leq \mathrm{I}P[\tZ\in \overline{\Delta}_{K-1}\setminus\overline{\Delta}_{K-1}^{-\delta}] + \mathrm{I}P[\tZ\in A_\circ\setminus A^{-\varepsilon}_\circ],
}
so that
\be{
\mathrm{I}P[\tZ\in A^\varepsilon\setminus A]\vee \mathrm{I}P[\tZ\in A\setminus A^{-\varepsilon}]\leq
\mathrm{I}P[\tZ\in \overline{\Delta}_{K-1}\setminus\overline{\Delta}_{K-1}^{-\delta}] + \mathrm{I}P[\tZ\in A_\circ^\varepsilon\setminus A_\circ]\vee \mathrm{I}P[\tZ\in A_\circ\setminus A_\circ^{-\varepsilon}].
}
Using a union bound and the fact that the marginals of~$\tZ$ have beta distributions,
\bes{
\mathrm{I}P\bkle{\tZ\in \overline{\Delta}_{K-1}\setminus\overline{\Delta}^{-\delta}_{K-1}}
&\leq \sum_{i=1}^K\bklr{\mathrm{I}P[Z_i\leq \delta] + \mathrm{I}P[Z_i\geq 1-\delta]} \\
& \leq \sum_{i=1}^K\frac{\Gamma(s)}{\Gamma(a_i)\Gamma(s-a_i)}\bbklr{\frac{\delta^{a_i}}{a_i}+\frac{\delta^{s-a_i}}{s-a_i}}
\leq C\delta^{\theta_\wedge}.
}
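The last inequality only uses the fact that each exponent appearing in the previous line is at least $\theta_\wedge$ (a brief justification, for $\delta\leq1$, which is the only regime needed below): since
\be{
a_i\geq 1\wedge\min_j a_j=\theta_\wedge
\qquad\text{and}\qquad
s-a_i=\sum_{j\neq i}a_j\geq\min_j a_j\geq\theta_\wedge,
}
we have $\delta^{a_i}\leq\delta^{\theta_\wedge}$ and $\delta^{s-a_i}\leq\delta^{\theta_\wedge}$, and the remaining factors are absorbed into the constant~$C$.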
Now, the density~$\psi_\ta$ of the Dirichlet distribution (see~\eqref{1}), restricted to~$\overline{\Delta}^{-\delta}_{K-1}$, is bounded by
\ben{\label{41}
\bbnorm{\psi_\ta\big\vert_{\overline{\Delta}^{-\delta}_{K-1}}} \leq \frac{\Gamma(s)}{\prod_{i=1}^K \Gamma(a_i)} \delta^{-\theta_\circ}.
}
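A sketch of where \eqref{41} comes from (assuming, as in \eqref{1}, which is not restated here, that $\psi_\ta(\tx)=\frac{\Gamma(s)}{\prod_{i=1}^K\Gamma(a_i)}\prod_{i=1}^{K}x_i^{a_i-1}$ with $x_K=1-x_1-\cdots-x_{K-1}$, and that $\overline{\Delta}_{K-1}=\{\tx\in\mathrm{I}R^{K-1}\,:\,x_i\geq0,\ x_1+\cdots+x_{K-1}\leq1\}$): on $\overline{\Delta}^{-\delta}_{K-1}$ every coordinate satisfies $x_i\geq\delta$, since $\tx-\delta\te_i$ must again lie in $\overline{\Delta}_{K-1}$ for $i<K$, and $\tx+\delta\te_1\in\overline{\Delta}_{K-1}$ forces $x_K\geq\delta$. Hence
\be{
\prod_{i=1}^{K}x_i^{a_i-1}
=\prod_{i:\,a_i\geq1}x_i^{a_i-1}\prod_{i:\,a_i<1}x_i^{a_i-1}
\leq\prod_{i:\,a_i<1}\delta^{-(1-a_i)}
=\delta^{-\theta_\circ},
}
because $x_i^{a_i-1}\leq1$ when $a_i\geq1$ (as $x_i\leq1$) and $x_i^{a_i-1}\leq\delta^{a_i-1}$ when $a_i<1$ (as $x_i\geq\delta$).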
From Steiner's formula for convex bodies, which describes the volume of~$\varepsilon$-enlargements of convex bodies (see e.g.\ \citep*[Theorem~46]{Morvan2008}), the Hausdorff-continuity of the corresponding coefficients in Steiner's formula (the so-called \emph{Quermassintegrale}; see \citep*[Theorem~50]{Morvan2008}), the compactness of~$\overline{\Delta}_{K-1}$, and the bound \eqref{41}, we conclude that there is a constant~$S_K$, depending only on the dimension~$K$, such that, for convex~$A\subset \overline{\Delta}^{-\delta}_{K-1}$,~$\mathop{\mathrm{Vol}}(A^\varepsilon\setminus A)\leq \varepsilon S_K$ and~$\mathop{\mathrm{Vol}}(A\setminus A^{-\varepsilon})\leq \varepsilon S_K$, so that if~$\delta>\varepsilon$,
\be{
\mathrm{I}P[\tZ\in A_\circ^\varepsilon\setminus A_\circ]\vee \mathrm{I}P[\tZ\in A_\circ\setminus A_\circ^{-\varepsilon}]\leq \frac{\Gamma(s)}{\prod_{i=1}^K \Gamma(a_i)}(\delta-\varepsilon)^{-\theta_\circ}\cdot \varepsilon S_K
}
(note that an upper bound on~$S_K$ could be obtained in principle by evaluating the coefficients in the Steiner formula for the convex set~$\overline{\Delta}_{K-1}$).
Note that if~$\theta_\circ=0$, then the inequality in the previous display still holds without the factor~$(\delta-\varepsilon)^{-\theta_\circ}$, even if~$\delta=\varepsilon$.
Thus we have
\be{
\mathrm{I}P[\tZ\in A^\varepsilon\setminus A]\vee \mathrm{I}P[\tZ\in A\setminus A^{-\varepsilon}]\leq C \bkle{\delta^{\theta_\wedge} + \varepsilon\bklr{\mathrm{I}[\theta_\circ>0](\delta-\varepsilon)^{-\theta_\circ}
+\mathrm{I}[\theta_\circ=0]}}.
}
Choosing~$\delta = \varepsilon^{1/(\theta_\wedge+\theta_\circ)}$,
we have that~$\delta\geq\varepsilon$,
and~$\delta=\varepsilon$ only if~$\theta_\circ=0$, so that
\be{
\sup_{A\in \mathcal{C}_{K-1}}\abs{\mathrm{I}P[\textnormal{\textbf{W}}\in A]-\mathrm{I}P[\tZ\in A]} \leq c_0 + C\bbklr{\frac{c_1}{\varepsilon} + \frac{c_2}{\varepsilon^2} + \frac{c_3}{\varepsilon^3} + \varepsilon^\theta},
}
for some constant~$C=C(\ta)$.
Without loss of generality we may assume that~$C\geq 1$ in \eqref{40} so that if~$c_1+c_2+c_3 \geq 1$, then~\eqref{40} is trivially true. If~$c_1+c_2+c_3 < 1$, choose~$\varepsilon = (c_1+c_2+c_3)^{1/\klr{3+\theta}}<1$ and bound both~$1/\varepsilon$ and~$1/\varepsilon^2$ by~$1/\varepsilon^3$; this again yields \eqref{40}.
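For the reader's convenience, the arithmetic behind this choice: $\delta^{\theta_\wedge}=\varepsilon^{\theta_\wedge/(\theta_\wedge+\theta_\circ)}=\varepsilon^{\theta}$, and with $\varepsilon=(c_1+c_2+c_3)^{1/(3+\theta)}<1$,
\be{
\frac{c_1}{\varepsilon}+\frac{c_2}{\varepsilon^{2}}+\frac{c_3}{\varepsilon^{3}}
\leq\frac{c_1+c_2+c_3}{\varepsilon^{3}}
=(c_1+c_2+c_3)^{1-\frac{3}{3+\theta}}
=(c_1+c_2+c_3)^{\frac{\theta}{3+\theta}}
=\varepsilon^{\theta},
}
so that the right-hand side of the previous display is at most $c_0+2C(c_1+c_2+c_3)^{\theta/(3+\theta)}$, which is \eqref{40} after adjusting the constant.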
\end{proof}