\section{Conclusion: An idealistic galloping cost model}
In the above sections, we observed the impact of
using a galloping sub-routine for a fixed or a
variable parameter~$\mathbf{t}$.
Although choosing a constant value of~$\mathbf{t}$ (e.g.,~$\mathbf{t} = 7$, as advocated in~\cite{McIlroy1993}) already
leads to very good results, letting~$\mathbf{t}$ vary, for instance
by using the logarithmic variant of the sub-routine,
provides us with even better complexity guarantees,
with an often negligible overhead of~$\mathcal{O}(n \log(\mathcal{H}^\ast+1) + n)$ element comparisons:
up to a small error, this provides us with the following
\emph{idealistic} cost model for run merges, allowing us to
simultaneously identify the parameter~$\mathbf{t}$ with~$+\infty$ and
with a constant.
\begin{definition}
Let~$A$ and~$B$ be two non-decreasing runs
with~$a_{\rightarrow i}$ (respectively,~$b_{\rightarrow i}$) elements of value~$i$ for all~$i \in {\{1,2,\ldots,\sigma\}}$.
The \emph{idealistic galloping cost} of merging~$A$ and~$B$
is defined as the quantity
\[\sum_{i=1}^\sigma \mathsf{cost}_{\mathsf{ideal}}^\ast(a_{\rightarrow i}) + \mathsf{cost}_{\mathsf{ideal}}^\ast(b_{\rightarrow i}),\]
where~$\mathsf{cost}_{\mathsf{ideal}}^\ast(m) = \min\{m, \log_2(m+1) + \mathcal{O}(1)\}$.
\end{definition}
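To make the definition concrete, the following Python sketch evaluates this cost model numerically; the $\mathcal{O}(1)$ term is replaced by an assumed constant (the true constant depends on the galloping implementation), so the numbers are indicative only.
\begin{verbatim}
import math

# Numeric sketch of the idealistic galloping cost; the O(1) term is
# replaced by an assumed constant C (implementation dependent).
C = 2.0

def cost_ideal(m, c=C):
    """min{m, log2(m+1) + O(1)} comparisons for a block of m equal values."""
    return min(m, math.log2(m + 1) + c)

def idealistic_merge_cost(a_counts, b_counts, c=C):
    """Idealistic cost of merging runs A and B, given the per-value
    multiplicities a_counts[i] and b_counts[i]."""
    return sum(cost_ideal(a, c) + cost_ideal(b, c)
               for a, b in zip(a_counts, b_counts))

# A = 1^100 2^1 3^50 and B = 1^2 2^80 3^3 over the values {1, 2, 3}:
print(idealistic_merge_cost([100, 1, 50], [2, 80, 3]))
\end{verbatim}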
We think that
this idealistic cost model
is both simple and precise enough to
allow studying the complexity of natural merge
sorts in general, provided that they use
the galloping sub-routine.
Thus, it would be interesting to use that
cost model in order to study, for
instance, the least constant~$\mathsf{c}$
for which various algorithms such as
TimSort\xspace or \textalpha-MergeSort\xspace require up to~$\mathsf{c} n (1 + o(1)) \mathcal{H}^\ast + \mathcal{O}(n)$
element comparisons.
We also hope that this simpler framework
will foster interest in the galloping merging
sub-routine of TimSort\xspace, and possibly lead to
amending the Swift and Rust
implementations of TimSort\xspace to
include that sub-routine, which we believe is
too efficient in relevant cases to be omitted.
\end{document}
\begin{document}
\title{Accelerated Particle Swarm Optimization and Support Vector Machine for Business Optimization and Applications}
\author{Xin-She Yang$^1$, Suash Deb$^2$ and Simon Fong$^3$ \\ \\
1) Department of Engineering,
University of Cambridge, \\
Trumpington Street,
Cambridge CB2 1PZ, UK. \\
\and
2) Department of Computer Science \& Engineering, \\
C. V. Raman College of Engineering, \\
Bidyanagar, Mahura, Janla,
Bhubaneswar 752054, INDIA. \\
\and
3) Department of Computer and Information Science, \\
Faculty of Science and Technology, \\
University of Macau, Taipa, Macau. \\
}
\date{}
\maketitle
\begin{abstract}
Business optimization is becoming increasingly important because all business activities
aim to maximize the profit and performance of products and services, under
limited resources and appropriate constraints. Recent developments in support vector machines
and metaheuristics show many advantages of these techniques.
In particular, particle swarm optimization is now widely used in solving tough optimization
problems. In this paper, we use a
combination of the recently developed accelerated PSO and a nonlinear support vector machine to
form a framework for solving business optimization problems.
We first apply the proposed APSO-SVM to production optimization, and then use it for
income prediction and project scheduling. We also carry out some parametric studies and discuss the
advantages of the proposed metaheuristic SVM. \\ \\
{\bf Keywords:} Accelerated PSO, business optimization, metaheuristics, PSO, support vector machine,
project scheduling. \\
\noindent Reference to this paper should be made as follows: \\ \\
Yang, X. S., Deb, S., and Fong, S., (2011), Accelerated Particle Swarm Optimization
and Support Vector Machine for Business Optimization and Applications, in: Networked Digital Technologies (NDT2011),
Communications in Computer and Information Science, Vol. 136, Springer,
pp. 53-66 (2011).
\end{abstract}
\section{Introduction}
Many business activities often have to deal with large, complex databases.
This is partly driven by information technology, especially the Internet,
and partly driven by the need to extract meaningful knowledge by data mining.
Extracting useful information from a huge amount of data requires
efficient tools for processing vast data sets. This is equivalent to
trying to find an optimal solution to a highly nonlinear problem with multiple, complex
constraints, which is a challenging task. Various techniques for such data mining and optimization have been
developed over the past few decades. Among these techniques,
the support vector machine is one of the best techniques for regression,
classification and data mining \cite{Howley,Kim,Pai,Shi,Shi2,Vapnik}.
On the other hand, metaheuristic algorithms have also become powerful for solving
tough nonlinear optimization problems \cite{Blum,Kennedy,Kennedy2,Yang,Yang2}.
Modern metaheuristic algorithms have been developed with the aim of carrying out
global search; typical examples are genetic algorithms \cite{Gold},
particle swarm optimisation (PSO) \cite{Kennedy}, and cuckoo search \cite{YangDeb,YangDeb2}.
The efficiency of metaheuristic algorithms can be attributed to the
fact that they imitate the best features in nature, especially the selection of the fittest
in biological systems which have evolved by natural selection over millions of years.
Since most data have noise or associated randomness, most of these algorithms
cannot be used directly. In this case, some form of averaging or reformulation of the problem
often helps. Even so, most algorithms remain difficult to implement for this type of optimization.
In addition to the above challenges, business optimization often concerns
a large amount of often incomplete data, evolving dynamically over
time. Certain tasks cannot start before other required tasks are completed;
such complex scheduling is often NP-hard, and no universally efficient tool exists.
Recent trends indicate that metaheuristics can be very promising, in combination with
other tools such as neural networks and support vector machines \cite{Howley,Kim,Tabu,Smola}.
In this paper, we intend to present a simple framework for business optimization using
a combination of a support vector machine with accelerated PSO. The paper is
organized as follows: we first briefly review particle swarm optimization and accelerated PSO,
and then introduce the basics of support vector machines (SVM). We then
use three case studies to test the proposed framework. Finally, we discuss its implications
and possible extensions for further research.
\section{Accelerated Particle Swarm Optimization}
\subsection{PSO}
Particle swarm optimization (PSO) was developed by Kennedy and
Eberhart in 1995 \cite{Kennedy,Kennedy2}, based on swarm behaviour
such as fish and bird schooling in nature. Since then, PSO has
generated much wider interest, and forms an exciting, ever-expanding
research subject called swarm intelligence. PSO has been applied
to almost every area of optimization, computational intelligence,
and design/scheduling applications. There are at least two dozen
PSO variants, and hybrid algorithms combining PSO
with other existing algorithms are also increasingly popular.
PSO searches the space of an objective function
by adjusting the trajectories of individual agents,
called particles, as the piecewise paths formed by positional
vectors in a quasi-stochastic manner. The movement of a swarming particle
consists of two major components: a stochastic component and a deterministic component.
Each particle is attracted toward the position of the current global best
$\ff{g}^*$ and its own best location $\ff{x}_i^*$ in history,
while at the same time it has a tendency to move randomly.
Let $\ff{x}_i$ and $\ff{v}_i$ be the position vector and velocity for
particle $i$, respectively. The new velocity vector is determined by the
following formula
\begin{equation} \ff{v}_i^{t+1}= \ff{v}_i^t + \alpha \ff{\epsilon}_1
[\ff{g}^*-\ff{x}_i^t] + \beta \ff{\epsilon}_2 [\ff{x}_i^*-\ff{x}_i^t],
\label{pso-speed-100}
\end{equation}
where $\ff{\epsilon}_1$ and $\ff{\epsilon}_2$ are two random vectors, with each
entry taking values between 0 and 1.
The parameters $\alpha$ and $\beta$ are the learning parameters or
acceleration constants, which can typically be taken as, say, $\alpha \approx \beta \approx 2$.
There are many variants which extend the standard PSO
algorithm, and the most noticeable improvement is probably to use an inertia function $\theta
(t)$ so that $\ff{v}_i^t$ is replaced by $\theta(t) \ff{v}_i^t$:
\begin{equation} \ff{v}_i^{t+1}=\theta \ff{v}_i^t + \alpha \ff{\epsilon}_1
[\ff{g}^*-\ff{x}_i^t] + \beta \ff{\epsilon}_2 [\ff{x}_i^*-\ff{x}_i^t],
\label{pso-speed-150}
\end{equation}
where $\theta \in (0,1)$ \cite{Chat,Clerc}. In the simplest case,
the inertia function can be taken as a constant, typically $\theta \approx 0.5 \sim 0.9$.
This is equivalent to introducing a virtual mass to stabilize the motion
of the particles, and thus the algorithm is expected to converge more quickly.
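As an illustration, a minimal NumPy sketch of the update rule in equation (\ref{pso-speed-150}) could read as follows; the default parameter values are illustrative choices, not prescriptions from the cited papers.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, x_best, g_best, theta=0.7, alpha=2.0, beta=2.0):
    """One inertia-weighted velocity/position update for all particles.

    x, v, x_best: arrays of shape (n_particles, dim);
    g_best: array of shape (dim,)."""
    eps1 = rng.random(x.shape)  # random vector for the global-best term
    eps2 = rng.random(x.shape)  # random vector for the personal-best term
    v_new = (theta * v + alpha * eps1 * (g_best - x)
             + beta * eps2 * (x_best - x))
    return x + v_new, v_new
\end{verbatim}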
\subsection{Accelerated PSO}
The standard particle swarm optimization uses both the current global best
$\ff{g}^*$ and the individual best $\ff{x}^*_i$. The reason for using the individual
best is primarily to increase the diversity in the quality solutions; however,
this diversity can be simulated using some randomness. Consequently, there is
no compelling reason for using the individual best, unless the optimization
problem of interest is highly nonlinear and multimodal.
A simplified version
which can accelerate the convergence of the algorithm is to use the global
best only. Thus, in the accelerated particle swarm optimization (APSO) \cite{Yang,Yang2}, the
velocity vector is generated by a simpler formula
\begin{equation} \ff{v}_i^{t+1}=\ff{v}_i^t + \alpha \ff{\epsilon}_n + \beta (\ff{g}^*-\ff{x}_i^t), \label{pso-sys-10} \end{equation}
where $\ff{\epsilon}_n$ is drawn from $N(0,1)$
to replace the second term.
The update of the position
is simply \begin{equation} \ff{x}_i^{t+1}=\ff{x}_i^t + \ff{v}_i^{t+1}. \label{pso-sys-20} \end{equation} In order to
increase the convergence even further, we can also write the
update of the location in a single step
\begin{equation} \ff{x}_i^{t+1}=(1-\beta) \ff{x}_i^t+\beta \ff{g}^* +\alpha \ff{\epsilon}_n. \label{APSO-500} \end{equation}
This simpler version gives the same order of convergence.
Typically, $\alpha = 0.1 L \sim 0.5 L$ where $L$ is the scale of each variable, while $\beta = 0.1 \sim 0.7$
is sufficient for most applications. It is worth pointing out that
velocity does not appear in equation (\ref{APSO-500}), and there is no need to deal with
initialization of velocity vectors.
Therefore, APSO is much simpler. Compared with many PSO variants, APSO uses only two parameters,
and its mechanism is simple to understand.
A further improvement to the accelerated PSO is to reduce the randomness
as iterations proceed.
This means that we can use a monotonically decreasing function such as
\begin{equation} \alpha =\alpha_0 e^{-\gamma t}, \end{equation}
or \begin{equation} \alpha=\alpha_0 \gamma^t, \qquad (0<\gamma<1), \end{equation}
where $\alpha_0 \approx 0.5 \sim 1$ is the initial value of the randomness parameter.
Here $t$ is the number of iterations or time steps, and
$0<\gamma<1$ is a control parameter \cite{Yang2}. For example, in our implementation, we will use
\begin{equation} \alpha=0.7^t, \end{equation}
where $t \in [0,t_{\max}]$ and $t_{\max}$ is the maximum number of iterations.
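The following compact Python sketch implements this single-step APSO update with $\alpha=0.7^t$ on a toy objective; the swarm size, search bounds and test function are our own illustrative choices.
\begin{verbatim}
import numpy as np

def apso(f, dim=2, n_particles=20, n_iter=50, beta=0.5, seed=0):
    """Accelerated PSO with the single-step position update
    x <- (1 - beta) x + beta g* + alpha eps,  alpha = 0.7^t."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    g_best = min(x, key=f).copy()          # current global best
    for t in range(1, n_iter + 1):
        alpha = 0.7 ** t                   # decreasing randomness
        x = ((1 - beta) * x + beta * g_best
             + alpha * rng.standard_normal(x.shape))
        cand = min(x, key=f)
        if f(cand) < f(g_best):
            g_best = cand.copy()
    return g_best

print(apso(lambda p: ((p - 1.0) ** 2).sum()))   # converges near [1, 1]
\end{verbatim}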
\section{Support Vector Machine}
The support vector machine (SVM) is an efficient tool for data mining and classification
\cite{Vapnik2,Vapnik3}. Due to the vast volumes of
data in business, especially e-commerce, efficient
use of data mining techniques becomes a necessity.
In fact, SVM can also be considered as an optimization tool, as its objective is
to maximize the separation margins between data sets. The proper combination of SVM
with metaheuristics could be advantageous.
\subsection{Support Vector Machine}
A support vector machine essentially transforms a set of data
into a significantly higher-dimensional space by nonlinear transformations
so that regression and data fitting can be carried out in this high-dimensional space.
This methodology can be used for data classification, pattern recognition,
and regression, and its theory is based on statistical machine learning theory
\cite{Smola,Vapnik,Vapnik2}.
For classification with learning examples
or data $(\ff{x}_i, y_i)$ where $i=1,2,...,n$ and $y_i \in \{-1,+1\}$,
the aim of the learning is to find a function
$\phi_{\alpha} (\ff{x})$ from allowable functions $\{\phi_{\alpha}: \alpha \in \Omega \}$
such that $\phi_{\alpha}(\ff{x}_i) \mapsto y_i$ for $(i=1,2,...,n)$
and such that the expected risk $E(\alpha)$ is minimal. That is the minimization
of the risk \begin{equation} E(\alpha)=\kk{1}{2} \int |\phi_{\alpha}(\ff{x}) -y| \, dQ(\ff{x},y), \end{equation}
where $Q(\ff{x},y)$ is an unknown probability distribution, which makes
it impossible to calculate $E(\alpha)$ directly. A simple approach is
to use the so-called empirical risk
\begin{equation} E_p(\alpha) \approx \kk{1}{2 n} \sum_{i=1}^n \big|\phi_{\alpha}(\ff{x}_i)-y_i \big|. \end{equation}
However, the main flaw of this approach is that a small risk or error on
the training set does not necessarily guarantee a small
error on prediction if the number $n$ of training data is small \cite{Vapnik3}.
For a given probability of at least $1-p$, the Vapnik bound for the
errors can be written as
\begin{equation} E(\alpha) \le R_p(\alpha) + \Psi \Big(\kk{h}{n}, \kk{\log (p)}{n} \Big), \end{equation}
where
\begin{equation} \Psi \big(\kk{h}{n}, \kk{\log(p)}{n} \big) =\sqrt{\kk{1}{n} \big[h (\log \kk{2n}{h}+1)
-\log(\kk{p}{4})\big]}. \end{equation}
Here $h$ is a parameter, often referred to as the Vapnik-Chervonenkis
dimension or simply VC-dimension \cite{Vapnik}, which describes the capacity
for prediction of the function set $\phi_{\alpha}$.
In essence, a linear support vector machine constructs
two hyperplanes that are as far apart as possible,
such that no samples lie between these two planes.
Mathematically, the two planes are given by
\begin{equation} \ff{w} \cdot \ff{x} + \ff{b} = \pm 1, \end{equation}
and the main objective of
constructing these two hyperplanes is to maximize the distance (between the two planes)
\begin{equation} d=\kk{2}{||\ff{w}||}. \end{equation}
Such maximization of $d$ is equivalent to the minimization of $||\ff{w}||$, or more conveniently $||\ff{w}||^2$.
From the optimization point of view, the maximization of margins can be written as
\begin{equation} \textrm{minimize } \kk{1}{2} ||\ff{w}||^2 = \kk{1}{2} (\ff{w} \cdot \ff{w}). \end{equation}
This essentially becomes the optimization problem
\begin{equation} \textrm{minimize } \Psi= \kk{1}{2} || \ff{w} ||^2 +\lambda \sum_{i=1}^n \eta_i, \end{equation}
\begin{equation} \textrm{subject to } y_i (\ff{w} \cdot \ff{x}_i + \ff{b}) \ge 1-\eta_i, \label{svm-ineq-50} \end{equation}
\begin{equation} \qquad \qquad \qquad \eta_i \ge 0, \qquad (i=1,2,..., n), \end{equation}
where $\lambda>0$ is a penalty parameter to be chosen appropriately.
Here, the term $\sum_{i=1}^n \eta_i$ is essentially an upper bound on
the number of misclassifications on the training data.
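For illustration, this soft-margin primal can be handed directly to an off-the-shelf convex solver; below is a toy sketch using cvxpy, with invented two-class data and an illustrative value of $\lambda$.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Two Gaussian blobs as a toy two-class data set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, (30, 2)),
               rng.normal(1.5, 1.0, (30, 2))])
y = np.array([-1] * 30 + [1] * 30)

# minimize (1/2)||w||^2 + lambda * sum(eta)
# subject to y_i (w . x_i + b) >= 1 - eta_i,  eta_i >= 0.
w, b = cp.Variable(2), cp.Variable()
eta = cp.Variable(60, nonneg=True)
lam = 1.0
problem = cp.Problem(
    cp.Minimize(0.5 * cp.sum_squares(w) + lam * cp.sum(eta)),
    [cp.multiply(y, X @ w + b) >= 1 - eta])
problem.solve()
print(w.value, b.value)
\end{verbatim}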
\subsection{Nonlinear SVM and Kernel Tricks}
The so-called kernel trick is an important technique, transforming data dimensions
while simplifying computation.
By using Lagrange multipliers $\alpha_i \ge 0$, we can rewrite the above constrained optimization
into an unconstrained version, and we have
\begin{equation} L=\kk{1}{2} ||\ff{w}||^2 +\lambda \sum_{i=1}^n \eta_i - \sum_{i=1}^n \alpha_i [y_i (\ff{w} \cdot \ff{x}_i + \ff{b}) -(1-\eta_i)]. \end{equation}
From this, we can write the Karush-Kuhn-Tucker conditions
\begin{equation} \frac{\partial L}{\partial \ff{w}}=\ff{w} - \sum_{i=1}^n \alpha_i y_i \ff{x}_i =0, \end{equation}
\begin{equation} \frac{\partial L}{\partial \ff{b}} = -\sum_{i=1}^n \alpha_i y_i =0, \end{equation}
\begin{equation} y_i (\ff{w} \cdot \ff{x}_i+\ff{b})-(1-\eta_i) \ge 0, \end{equation}
\begin{equation} \alpha_i [y_i (\ff{w} \cdot \ff{x}_i + \ff{b}) -(1-\eta_i)]=0, \qquad (i=1,2,...,n), \label{svm-KKT-150} \end{equation}
\begin{equation} \alpha_i \ge 0, \qquad \eta_i \ge 0, \qquad (i=1,2,...,n). \end{equation}
From the first KKT condition, we get
\begin{equation} \ff{w}=\sum_{i=1}^n y_i \alpha_i \ff{x}_i. \end{equation}
It is worth pointing out here that
only the nonzero $\alpha_i$ contribute to the overall solution. This comes from the
KKT condition (\ref{svm-KKT-150}),
which implies that when $\alpha_i \ne 0$, the inequality (\ref{svm-ineq-50}) must be
satisfied exactly (as an equality), while $\alpha_i=0$ means the inequality is automatically
met. In this latter case, $\eta_i=0$. Therefore, only the training
data $(\ff{x}_i, y_i)$ with $\alpha_i>0$ contribute to the solution, and such
$\ff{x}_i$ form the support vectors (hence the name support vector machine). \index{support vectors}
All the other data with $\alpha_i=0$ become irrelevant.
It has been shown that the solution for $\alpha_i$ can be found
by solving the following quadratic programming problem \cite{Vapnik,Vapnik3}
\begin{equation} \textrm{maximize } \sum_{i=1}^n \alpha_i -\kk{1}{2} \sum_{i,j=1}^n \alpha_i \alpha_j y_i y_j (\ff{x}_i \cdot \ff{x}_j), \end{equation}
subject to
\begin{equation} \sum_{i=1}^n \alpha_i y_i=0, \qquad 0 \le \alpha_i \le \lambda, \qquad (i=1,2,...,n). \end{equation}
From the coefficients $\alpha_i$, we can write the final classification or decision
function as \begin{equation} f(\ff{x}) =\textrm{sgn} \big[ \sum_{i=1}^n \alpha_i y_i (\ff{x} \cdot \ff{x}_i) + \ff{b} \big], \end{equation}
where sgn is the classic sign function.
As most problems in business applications are nonlinear, the above linear SVM cannot be
used directly. Ideally, we should find some nonlinear transformation $\phi$ so that the
data can be mapped onto a high-dimensional space where the classification
becomes linear. The transformation should be
chosen in a certain way so that the dot products lead to a
kernel-style function $K(\ff{x},\ff{x}_i)=\phi(\ff{x}) \cdot \phi(\ff{x}_i)$.
In fact, we do not need to know such transformations explicitly;
we can directly use the kernel functions $K(\ff{x},\ff{x}_i)$ to complete this task.
This is the so-called kernel trick. Now the main task is to choose
a suitable kernel function for a given, specific problem.
For most problems in nonlinear support vector machines, we can
use $K(\ff{x},\ff{x}_i)=(\ff{x} \cdot \ff{x}_i)^d$ for polynomial classifiers, or
$K(\ff{x},\ff{x}_i)=\tanh[k (\ff{x} \cdot \ff{x}_i) +\Theta]$ for neural networks;
by far the most widely used kernel is the Gaussian radial basis function (RBF)
\begin{equation} K(\ff{x},\ff{x}_i)=\exp \Big[-\kk{||\ff{x}-\ff{x}_i||^2}{2 \sigma^2} \Big]
=\exp \Big[-\gamma ||\ff{x}-\ff{x}_i||^2 \Big], \end{equation}
for nonlinear classifiers. This kernel can easily be extended to
any high dimension. Here $\sigma^2$ is the variance and $\gamma=1/(2\sigma^2)$ is
a constant. In general, a simple bound of $0 < \gamma \le C$ is used, where
$C$ is a constant.
Following a similar procedure as discussed earlier for the linear SVM,
we can obtain the coefficients $\alpha_i$ by solving the following optimization
problem \begin{equation} \textrm{maximize } \sum_{i=1}^n \alpha_i -\kk{1}{2} \sum_{i,j=1}^n \alpha_i \alpha_j y_i y_j K(\ff{x}_i,\ff{x}_j). \end{equation}
It is worth pointing out that, under Mercer's conditions for the kernel function,
the matrix $\ff{A}=y_i y_j K(\ff{x}_i, \ff{x}_j)$ is a symmetric positive definite matrix \cite{Vapnik3}, which
implies that the above maximization is a quadratic programming problem, and can thus
be solved efficiently by standard QP techniques \cite{Smola}.
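As a concrete illustration, the sketch below trains such an RBF-kernel SVM with scikit-learn's quadratic-programming-based solver; the toy data and the values of $C$ and $\gamma$ are illustrative only.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0, 1, -1)  # nonlinear boundary

clf = SVC(kernel="rbf", C=10.0, gamma=0.5)   # gamma = 1 / (2 sigma^2)
clf.fit(X, y)
print(clf.n_support_)    # number of support vectors per class
print(clf.score(X, y))   # training accuracy
\end{verbatim}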
\section{Metaheuristic Support Vector Machine with APSO}
\subsection{Metaheuristics}
There are many metaheuristic algorithms for optimization, and most of these algorithms
are inspired by nature \cite{Yang}. Metaheuristic algorithms such as genetic
algorithms and simulated annealing are widely used, almost routinely, in many
applications, while relatively new algorithms such as particle swarm optimization \cite{Kennedy},
the firefly algorithm and cuckoo search are becoming more and more popular \cite{Yang,Yang2}.
Hybridizations of these algorithms with existing algorithms are also emerging.
The advantage of such a combination is a balanced tradeoff between
global search, which is often slow, and a fast local search. Such a balance
is important, as highlighted by the analysis by Blum and Roli \cite{Blum}.
Another advantage of this method is that we can use any algorithms we like
at different stages of the search, or even at different stages of the iterations.
This makes it easy to combine the advantages of various algorithms so as
to produce better results.
Others have attempted to carry out parameter optimization associated with neural
networks and SVM. For example, Liu et al. have used an SVM optimized by PSO for
tax forecasting \cite{Liu}. Lu et al. proposed a model for finding optimal parameters in SVM by PSO
optimization \cite{Lu}. However, here we intend to propose a generic framework
for combining the efficient APSO with SVM, which can be extended to
other algorithms such as the firefly algorithm \cite{YangFA,Yang2010}.
\subsection{APSO-SVM}
The support vector machine has a major advantage: it is less likely to
overfit, compared with other methods such as regression and neural networks.
In addition, efficient quadratic programming can be used for training support vector machines.
However, when there is noise in the data, such algorithms are not quite suitable.
In this case, the learning or training to estimate the parameters in the
SVM becomes difficult or inefficient.
Another issue is the choice of the values of the
kernel parameters $C$ and $\sigma^2$ in the kernel functions;
there is no agreed guideline on how
to choose them, though the choice of their values should make
the SVM as efficient as possible. This itself
is essentially an optimization problem.
Taking this idea further, we first use an educated guess for the set of
values and use metaheuristic algorithms such as accelerated PSO
or cuckoo search to find the best kernel parameters
such as $C$ and $\sigma^2$ \cite{Yang,YangDeb}.
Then, we use these parameters to construct the support vector machines,
which are then used for solving the problem of interest. During the iterations
and optimization, we can also modify the kernel parameters and
evolve the SVM accordingly. This framework can be called a metaheuristic
support vector machine. Schematically, this accelerated PSO-SVM can be represented as
shown in Fig. 1.
\vcode{0.7}{
Define the objective; \\
Choose kernel functions; \\
Initialize various parameters; \\
{\bf while} (criterion) \\
\indent $\quad$ Find optimal kernel parameters by APSO; \\
\indent $\quad$ Construct the support vector machine; \\
\indent $\quad$ Search for the optimal solution by APSO-SVM; \\
\indent $\quad$ Increase the iteration counter; \\
{\bf end} \\
Post-processing the results;
}{Metaheuristic APSO-SVM. }
For the optimization of parameters and the business applications discussed below, APSO
is used for both local and global search \cite{Yang,Yang2}.
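A rough Python sketch of this loop is given below: the single-step APSO update with decaying $\alpha$ searches the kernel-parameter plane to maximize cross-validated accuracy. The swarm size, search bounds and toy data are assumptions of the sketch, not values from this paper.
\begin{verbatim}
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

def fitness(p):
    C, gamma = np.exp(p)     # search in log-space keeps C, gamma positive
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def apso_svm(n_particles=12, n_iter=20, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-3, 5, size=(n_particles, 2))    # log C, log gamma
    scores = [fitness(p) for p in pts]
    best, g_best = max(scores), pts[int(np.argmax(scores))].copy()
    for t in range(1, n_iter + 1):
        alpha = 0.7 ** t                               # decaying randomness
        pts = ((1 - beta) * pts + beta * g_best
               + alpha * rng.standard_normal(pts.shape))
        scores = [fitness(p) for p in pts]
        if max(scores) > best:
            best, g_best = max(scores), pts[int(np.argmax(scores))].copy()
    return np.exp(g_best), best    # best (C, gamma) and its CV accuracy

print(apso_svm())
\end{verbatim}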
\section{Business Optimization Benchmarks}
Using the framework discussed earlier, we can easily implement it in
any programming language, though we have implemented it in Matlab.
We have validated our implementation using standard test
functions, which confirms the correctness of the implementation.
Now we apply it to carry out case studies with known analytical
or known optimal solutions. The Cobb-Douglas production
optimization has an analytical solution which can be used for
comparison, while the second case is a standard benchmark in
resource-constrained project scheduling \cite{Kol}.
\subsection{Production Optimization}
Let us first use the proposed approach to study
the classical Cobb-Douglas production optimization.
For the production of a series of products with labour costs,
the utility function can be written as
\begin{equation} q=\prod_{j=1}^n u_j^{\alpha_j} =u_1^{\alpha_1} u_2^{\alpha_2} \cdots u_n^{\alpha_n}, \end{equation}
where all exponents $\alpha_j$ are non-negative, satisfying
\begin{equation} \sum_{j=1}^n \alpha_j =1. \end{equation}
The optimization is the minimization of the utility
\begin{equation} \textrm{minimize } q \end{equation}
\begin{equation} \textrm{subject to } \sum_{j=1}^n w_j u_j =K, \end{equation}
where $w_j \; (j=1,2,...,n)$ are known weights.
This problem can be solved using the Lagrange multiplier method as
an unconstrained problem
\begin{equation} \psi=\prod_{j=1}^n u_j^{\alpha_j} + \lambda \Big(\sum_{j=1}^n w_j u_j -K\Big), \end{equation}
whose optimality conditions are
\begin{equation} \frac{\partial \psi}{\partial u_j} = \alpha_j u_j^{-1} \prod_{j=1}^n u_j^{\alpha_j} + \lambda w_j =0, \quad (j=1,2,...,n), \end{equation}
\begin{equation} \frac{\partial \psi}{\partial \lambda} = \sum_{j=1}^n w_j u_j -K =0. \end{equation}
The solutions are
\begin{equation} u_1=\kk{K}{w_1 [1+\kk{1}{\alpha_1} \sum_{j=2}^n \alpha_j ]}, \;
u_j=\kk{w_1 \alpha_j}{w_j \alpha_1} u_1, \end{equation}
where $(j=2,3, ..., n)$.
For example, in the special case of $n=2$, $\alpha_1=2/3$, $\alpha_2=1/3$, $w_1=5$,
$w_2=2$ and $K=300$, we have
\[ u_1=\kk{K}{w_1 (1+\alpha_2/\alpha_1)} =40, \;
u_2=\kk{K \alpha_2}{w_2 \alpha_1 (1+\alpha_2/\alpha_1)}=50. \]
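This closed-form solution is easy to check numerically; here is a short Python verification of the special case above.
\begin{verbatim}
import numpy as np

# n = 2, alpha = (2/3, 1/3), w = (5, 2), K = 300.
alpha = np.array([2 / 3, 1 / 3])
w = np.array([5.0, 2.0])
K = 300.0

u1 = K / (w[0] * (1.0 + alpha[1:].sum() / alpha[0]))
u = np.concatenate(([u1], w[0] * alpha[1:] / (w[1:] * alpha[0]) * u1))
print(u)             # -> [40. 50.]
print(np.dot(w, u))  # constraint check: -> 300.0
\end{verbatim}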
As most real-world problems have some uncertainty, we can now add
some noise to the above problem. For simplicity, we just modify
the constraint as
\begin{equation} \sum_{j=1}^n w_j u_j = K (1+ \beta \epsilon), \end{equation}
where $\epsilon$ is a random number drawn from a Gaussian distribution with zero mean
and unit variance,
and $0 \le \beta \ll 1$ is a small positive number.
We now solve this problem as an optimization problem with the proposed APSO-SVM.
In the case of $\beta=0.01$,
the results are summarized in Table 1,
where the values are provided for different problem sizes $n$ and
different numbers of iterations. We can see that the results converge
to the optimal solution very quickly.
\begin{table}[ht]
\caption{Mean deviations from the optimal solutions.}
\centering
\begin{tabular}{lll}
\hline \hline
Size $n$ & Iterations & Deviations \\
\hline
10 & 1000 & 0.014 \\
20 & 5000 & 0.037 \\
50 & 5000 & 0.040 \\
50 & 15000 & 0.009 \\
\hline
\end{tabular}
\end{table}
\subsection{Income Prediction}
Studies to improve the accuracy of classification are extensive. For example, Kohavi proposed a
decision-tree hybrid in 1996 \cite{UCI}. Furthermore, an efficient training algorithm for support vector machines was proposed by Platt in 1998 \cite{Platt,Platt2},
and it has had a significant impact on machine learning, regression and data mining.
A well-known benchmark for classification and regression is income prediction using
data sets with 14 selected attributes of a household from a census form \cite{UCI,Platt}.
We use the same data sets at ftp://ftp.ics.uci.edu/pub/machine-learning-databases/adult
for this case study. There are 32561 samples in the training set, with 16281 for testing.
The aim is to predict whether an individual's income is above or below 50K.
Among the 14 attributes, a subset can be selected; a subset such as age, education level,
occupation, gender and working hours is commonly used.
Using the proposed APSO-SVM and choosing the limit value of $C$ as $1.25$,
a best error of $17.23\%$ is obtained (see Table \ref{table-3}), which is comparable with the most accurate predictions
reported in \cite{UCI,Platt}.
\begin{table}[ht]
\caption{Income prediction using APSO-SVM. \label{table-3}}
\centering
\begin{tabular}{l|l|l}
\hline \hline
Training set (size) & Prediction set (size) & Error (\%) \\
\hline
512 & 256 & $24.9$ \\
1024 & 256 & $20.4$ \\
16400 & 8200 & $17.23$ \\
\hline
\end{tabular}
\end{table}
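One possible realization of this case study is sketched below with scikit-learn; the local file name adult.data, the standard Adult column names, and the preprocessing choices are assumptions of the sketch, while $C=1.25$ follows the limit value quoted above.
\begin{verbatim}
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVC

# Assumed local copy of the UCI Adult training file with its usual columns.
cols = ["age", "workclass", "fnlwgt", "education", "education-num",
        "marital-status", "occupation", "relationship", "race", "sex",
        "capital-gain", "capital-loss", "hours-per-week", "native-country",
        "income"]
df = pd.read_csv("adult.data", names=cols, skipinitialspace=True)

X = df[["age", "education-num", "occupation", "sex", "hours-per-week"]]
y = (df["income"] == ">50K").astype(int)

pre = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["occupation", "sex"]),
    (StandardScaler(), ["age", "education-num", "hours-per-week"]))
model = make_pipeline(pre, SVC(kernel="rbf", C=1.25))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=16400, test_size=8200, random_state=0)
model.fit(X_tr, y_tr)
print(1.0 - model.score(X_te, y_te))   # test error rate
\end{verbatim}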
\subsection{Project Scheduling}
Scheduling is an important class of discrete optimization with a wide range
of applications in business intelligence. For resource-constrained project scheduling problems,
there exists a standard benchmark
library by Kolisch and Sprecher \cite{Kol,Kol2}. The basic model consists
of $J$ activities/tasks, and some activities cannot start before all their predecessors $h$
are completed. In addition, each activity $j=1,2,...,J$
can be carried out, without interruption,
in one of $M_j$ modes, and performing activity $j$ in a chosen
mode $m$ takes $d_{jm}$ periods, supported by a set of renewable
resources $R$ and non-renewable resources $N$. The project's makespan or upper
bound is $T$, and the overall capacity of non-renewable resource $r$ is
$K_r^{\nu}$ where $r \in N$. An activity $j$ scheduled in mode $m$
uses $k^{\rho}_{jmr}$ units of renewable resources
and $k^{\nu}_{jmr}$ units of non-renewable resources
in period $t=1,2,..., T$.
For activity $j$, the shortest duration is fitted into the time
window $[EF_j, LF_j]$, where $EF_j$ is the earliest finish time
and $LF_j$ is the latest finish time. Mathematically, this
model can be written as \cite{Kol}
\begin{equation} \textrm{Minimize }\; \Psi (\ff{x}) = \sum_{m=1}^{M_J} \sum_{t=EF_J}^{LF_J} t \cdot x_{Jmt}, \end{equation}
subject to
\[ \sum_{m=1}^{M_h} \sum_{t=EF_h}^{LF_h} t \; x_{hmt} \le \sum_{m=1}^{M_j} \sum_{t=EF_j}^{LF_j} (t-d_{jm}) \, x_{jmt},
\qquad (j=2,..., J), \]
\[ \sum_{j=1}^J \sum_{m=1}^{M_j} k^{\rho}_{jmr} \sum_{q=\max\{t,EF_j\}}^{\min\{t+d_{jm}-1,LF_j\}} x_{jmq} \le K_r^{\rho},
\qquad (r \in R), \]
\begin{equation} \sum_{j=1}^J \sum_{m=1}^{M_j} k_{jmr}^{\nu} \sum_{t=EF_j}^{LF_j} x_{jmt} \le K^{\nu}_r, \qquad (r \in N), \end{equation}
and
\begin{equation} \sum_{m=1}^{M_j} \sum_{t=EF_j}^{LF_j} x_{jmt} =1, \qquad j=1,2,...,J, \end{equation}
where $x_{jmt} \in \{0,1\}$ and $t=1,...,T$.
As $x_{jmt}$ only takes the two values $0$ or $1$, this problem
can be considered as a classification problem, and the metaheuristic
support vector machine can be applied naturally.
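To make the role of the binary variables concrete, the toy sketch below builds a small tensor $x_{jmt}$, checks the one-assignment constraint, and evaluates the makespan objective; all sizes and the sample schedule are invented for illustration.
\begin{verbatim}
import numpy as np

J, M, T = 4, 2, 10          # activities, modes, time horizon (assumed)
x = np.zeros((J, M, T), dtype=int)
# One (mode, finish-time) pair per activity:
x[0, 0, 3] = x[1, 1, 5] = x[2, 0, 7] = x[3, 0, 9] = 1

# Each activity must finish exactly once, in exactly one mode.
assert (x.sum(axis=(1, 2)) == 1).all()

# Objective: weighted finish time of the last activity,
# sum_m sum_t t * x[J-1, m, t].
t_grid = np.arange(T)
print((x[-1] * t_grid).sum())   # -> 9
\end{verbatim}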
\begin{table}[ht]
\caption{Kernel parameters used in the SVM.}
\centering
\begin{tabular}{l|l}
\hline \hline
Number of iterations & SVM kernel parameters \\
\hline
1000 & $C=149.2$, $\sigma^2=67.9$ \\
5000 & $C=127.9$, $\sigma^2=64.0$ \\
\hline
\end{tabular}
\end{table}
Using the online benchmark library \cite{Kol2}, we have solved this type
of problem with $J=30$ activities (the standard test set j30). The run time
on a modern desktop computer is about 2.2 seconds for $N=1000$ iterations
and 15.4 seconds for $N=5000$ iterations. We have run
the simulations 50 times so as to obtain meaningful statistics.
The optimal kernel parameters found for the support vector machines
are listed in Table 3, while the deviations from the known best solution
are given in Table 4, where the results by other methods are also compared.
\begin{table}[ht]
\caption{Mean deviations from the optimal solution (J=30).}
\centering
\begin{tabular}{llll}
\hline \hline
Algorithm & Authors & $N=1000$ & $N=5000$ \\
\hline
PSO \cite{Tcho} & Kemmoe et al. (2007) & 0.26 & 0.21 \\
Hybrid GA \cite{Valls} & Valls et al. (2007) & 0.27 & 0.06 \\
Tabu search \cite{Tabu} & Nonobe \& Ibaraki (2002) & 0.46 & 0.16 \\
Adapting GA \cite{Hart} & Hartmann (2002) & 0.38 & 0.22 \\
{\bf Meta APSO-SVM} & this paper & {\bf 0.19} & {\bf 0.025} \\
\hline
\end{tabular}
\end{table}
From these tables, we can see that the proposed metaheuristic support vector machine
starts very well, and its results are comparable with those of other methods such as the hybrid
genetic algorithm. In addition, it converges more quickly as the number of
iterations increases. With the same number of function evaluations,
much better results are obtained, which implies that APSO is very efficient,
and consequently the APSO-SVM is also efficient in this context. In addition, this
also suggests that the proposed framework is appropriate for automatically choosing
the right parameters for the SVM and solving nonlinear optimization problems.
\section{Conclusions}
Both PSO and support vector machines are now widely used as optimization techniques
in business intelligence. They can also be used for data mining to extract useful information
efficiently. SVM can also be considered as an optimization
technique in many applications, including business optimization. When there is noise in the data,
some averaging or reformulation may lead to better performance. In addition, metaheuristic
algorithms can be used to find the optimal kernel parameters for a support vector machine
and also to search for the optimal solutions. We have used three very different case studies to demonstrate
that such a metaheuristic SVM framework works.
Automatic parameter tuning and efficiency improvement will be an important topic for
further research. It can be expected that this framework can be used for other applications.
Furthermore, APSO can also be combined with other algorithms such as neural networks
to produce more efficient algorithms \cite{Liu,Lu}. More studies in this area are highly needed.
\begin{thebibliography}{50}
\bibitem{Blum} Blum C. and Roli A., Metaheuristics in combinatorial optimization: overview and
conceptual comparison, {\it ACM Comput. Surv.}, {\bf 35}, 268-308 (2003).
\bibitem{Chat} A. Chatterjee and P. Siarry, Nonlinear inertia
variation for dynamic adaptation in particle swarm optimization, {\it
Comp. Oper. Research}, {\bf 33}, 859-871 (2006).
\bibitem{Clerc} M. Clerc, J. Kennedy, The particle swarm - explosion, stability,
and convergence in a multidimensional complex space, {\it IEEE Trans. Evolutionary
Computation}, {\bf 6}, 58-73 (2002).
\bibitem{Hart} Hartmann S., A self-adapting genetic algorithm for project scheduling under resource
constraints, {\it Naval Res. Log.}, {\bf 49}, 433-448 (2002).
\bibitem{Howley} Howley T. and Madden M. G., The genetic kernel support vector machine: description
and evaluation, {\it Artificial Intelligence Review}, {\bf 24}, 379-395 (2005).
\bibitem{Gold} Goldberg D. E., {\it Genetic Algorithms in Search, Optimisation and
Machine Learning}, Reading, Mass.: Addison Wesley (1989).
\bibitem{Kennedy}
J. Kennedy and R. C. Eberhart, Particle swarm optimization, in: {\it
Proc. of IEEE International Conference on Neural Networks},
Piscataway, NJ, pp. 1942-1948 (1995).
\bibitem{Kennedy2} J. Kennedy, R. C. Eberhart, {\it Swarm Intelligence}, Academic Press, 2001.
\bibitem{Kim} Kim K., Financial forecasting using support vector machines,
{\it Neurocomputing}, {\bf 55}, 307-319 (2003).
\bibitem{UCI} Kohavi R., Scaling up the accuracy of naive-Bayes classifiers: a decision-tree hybrid,
{\it Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining}, pp. 202-207, AAAI Press, (1996).
ftp://ftp.ics.uci.edu/pub/machine-learning-databases/adult
\bibitem{Kol} Kolisch R. and Sprecher A., PSPLIB - a project scheduling problem library,
OR Software-ORSEP (operations research software exchange program) by H. W. Hamacher,
{\it Euro. J. Oper. Res.}, {\bf 96}, 205-216 (1996).
\bibitem{Kol2} Kolisch R. and Sprecher A., The Library PSPLIB,
http://129.187.106.231/psplib/
\bibitem{Liu} Liu L.-X., Zhuang Y. and Liu X. Y., Tax forecasting theory and
model based on SVM optimized by PSO, {\it Expert Systems with Applications},
{\bf 38}, January 2011, pp. 116-120 (2011).
\bibitem{Lu} Lu N., Zhou J. Z., He Y. Y., Liu Y., Particle swarm
optimization for parameter optimization of support vector machine model,
{\it 2009 Second International Conference on Intelligent Computation Technology
and Automation}, IEEE Publications, pp. 283-284 (2009).
\bibitem{Tabu} Nonobe K. and Ibaraki T., Formulation and tabu search algorithm
for the resource constrained project scheduling problem (RCPSP), in: {\it Essays
and Surveys in Metaheuristics} (Eds. Ribeiro C. C. and Hansen P.), pp. 557-588 (2002).
\bibitem{Pai} Pai P. F. and Hong W. C., Forecasting regional electricity load based on
recurrent support vector machines with genetic algorithms,
{\it Electric Power Sys. Res.}, {\bf 74}, 417-425 (2005).
\bibitem{Platt} Platt J. C., Sequential minimal optimization: a fast algorithm for training
support vector machines, Technical report MSR-TR-98014, Microsoft Research, (1998).
\bibitem{Platt2} Platt J. C., Fast training of support vector machines using sequential minimal optimization, in: {\it Advances in Kernel Methods -- Support Vector Learning} (Eds. B. Scholkopf,
C. J. Burges and A. J. Smola), MIT Press, pp. 185-208 (1999).
\bibitem{Shi} Shi G. R., The use of support vector machine for oil and gas
identification in low-porosity and low-permeability reservoirs,
{\it Int. J. Mathematical Modelling and Numerical Optimisation},
{\bf 1}, 75-87 (2009).
\bibitem{Shi2} Shi G. R. and Yang X.-S., Optimization and data mining for
fracture prediction in geosciences, {\it Procedia Computer Science},
{\bf 1}, 1353-1360 (2010).
\bibitem{Smola} Smola A. J. and Sch\"olkopf B.,
A tutorial on support vector regression, (1998).
http://www.svms.org/regression/
\bibitem{Tcho} Tchomt\'e S. K., Gourgand M. and Quilliot A.,
Solving resource-constrained project scheduling problem with
particle swarm optimization, in: Proceedings of 3rd Multidisciplinary Int.
Scheduling Conference (MISTA 2007), 28 - 31 Aug 2007, Paris,
pp. 251-258 (2007).
\bibitem{Valls} Valls V., Ballestin F. and Quintanilla S., A hybrid genetic
algorithm for the resource-constrained project scheduling problem,
{\it Euro. J. Oper. Res.}, doi:10.1016/j.ejor.2006.12.033, (2007).
\bibitem{Vapnik} Vapnik V., {\it Estimation of Dependences Based on
Empirical Data} (in Russian), Moscow, 1979. [English translation published
by Springer-Verlag, New York, 1982]
\bibitem{Vapnik2} Vapnik V., {\it The Nature of Statistical Learning Theory},
Springer-Verlag, New York, 1995. | 4,005 | 18,055 | en |
train | 0.11.4 | \betaibitem{Gold} Goldberg D. E., {\it Genetic Algorithms in Search, Optimisation and
Machine Learning}, Reading, Mass.: Addison Wesley (1989).
\betaibitem{Kennedy}
J. Kennedy and R. C. Eberhart, Particle swarm optimization, in: {\it
Proc. of IEEE International Conference on Neural Networks},
Piscataway, NJ. pp. 1942-1948 (1995).
\betaibitem{Kennedy2} J. Kennedy, R. C. Eberhart, {\it Swarm intelligence}, Academic Press, 2001.
\betaibitem{Kim} Kim K., Financial forecasting using support vector machines,
{\it Neurocomputing}, {\betaf 55}, 307-319 (2003).
\betaibitem{UCI} Kohavi R., Scaling up the accuracy of naive-Bayes classifiers: a decision-tree hybrid,
{\it Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining}, pp. 202-207, AAAI Press, (1996).
ftp://ftp.ics.uci.edu/pub/machine-learning-databases/adult
\betaibitem{Kol} Kolisch R. and Sprecher A., PSPLIB - a project scdeluing problem library,
OR Software-ORSEP (operations research software exchange prorgam) by H. W. Hamacher,
{\it Euro. J. Oper. Res.}, {\betaf 96}, 205-216 (1996).
\betaibitem{Kol2} Kolisch R. and Sprecher A., The Library PSBLIB,
http://129.187.106.231/psplib/
\betaibitem{Liu} Liu L.-X., Zhuang Y. and Liu X. Y., Tax forecasting theory and
model based on SVM optimized by PSO, {\it Expert Systems with Applications},
{\betaf 38}, January 2011, pp. 116-120 (2011).
\betaibitem{Lu} Lu N., Zhou J. Z., He Y., Y., Liu Y., Particle Swarm
Optimization for Parameter Optimization of Support Vector Machine Model,
{\it 2009 Second International Conference on Intelligent Computation Technology
and Automation}, IEEE publications, pp. 283-284 (2009).
\betaibitem{Tabu} Nonobe K. and Ibaraki T., Formulation and tabu search algorithm
for the resource constrained project scheduling problem (RCPSP), in: {\it Essays
and Surveys in Metaheuristics} (Eds. Ribeiro C. C. and Hansen P.), pp. 557-588 (2002).
\betaibitem{Pai} Pai P. F. and Hong W. C., Forecasting regional electricity load based on
recurrent support vector machines with genetic algorithms,
{\it Electric Power Sys. Res.}, {\betaf 74}, 417-425 (2005).
\betaibitem{Platt} Platt J. C., Sequential minimal optimization: a fast algorithm for training
support vector machines, Techical report MSR-TR-98014, Microsoft Research, (1998).
\betaibitem{Platt2} Plate J. C., Fast training of support vector machines using sequential minimal optimization, in: {\it Advances in Kernel Methods -- Support Vector Learning} (Eds. B. Scholkopf,
C. J. Burges and A. J. Smola), MIT Press, pp. 185-208 (1999).
\bibitem{Shi} Shi G. R., The use of support vector machine for oil and gas
identification in low-porosity and low-permeability reservoirs,
{\it Int. J. Mathematical Modelling and Numerical Optimisation},
{\bf 1}, 75-87 (2009).
\bibitem{Shi2} Shi G. R. and Yang X.-S., Optimization and data mining for
fracture prediction in geosciences, {\it Procedia Computer Science},
{\bf 1}, 1353-1360 (2010).
\bibitem{Smola} Smola A. J. and Sch\"olkopf B.,
A tutorial on support vector regression, (1998).
http://www.svms.org/regression/
\bibitem{Tcho} Tchomt\'e S. K., Gourgand M. and Quilliot A.,
Solving resource-constrained project scheduling problem with
particle swarm optimization, in: Proceedings of the 3rd Multidisciplinary Int.
Scheduling Conference (MISTA 2007), 28 - 31 Aug 2007, Paris,
pp. 251-258 (2007).
\bibitem{Valls} Valls V., Ballestin F. and Quintanilla S., A hybrid genetic
algorithm for the resource-constrained project scheduling problem,
{\it Euro. J. Oper. Res.}, doi:10.1016/j.ejor.2006.12.033, (2007).
\bibitem{Vapnik} Vapnik V., {\it Estimation of Dependences Based on
Empirical Data} (in Russian), Moscow, 1979. [English translation published
by Springer-Verlag, New York, 1982]
\bibitem{Vapnik2} Vapnik V., {\it The nature of Statistical Learning Theory},
Springer-Verlag, New York, 1995.
\bibitem{Vapnik3} Sch\"olkopf B., Sung K., Burges C., Girosi F., Niyogi P., Poggio T.
and Vapnik V., Comparing support vector machine with Gaussian kernels to
radial basis function classifiers, {\it IEEE Trans. Signal Processing},
{\bf 45}, 2758-2765 (1997).
\bibitem{Yang} Yang X. S., {\it Nature-Inspired Metaheuristic Algorithms},
Luniver Press, (2008).
\bibitem{YangFA} Yang X. S.,
Firefly algorithms for multimodal optimization, in: {\it Stochastic Algorithms: Foundations and Applications},
SAGA 2009, Lecture Notes in Computer Science, {\bf 5792}, pp. 169-178 (2009).
\bibitem{YangDeb}
Yang X.-S. and Deb S., Cuckoo search via L\'evy flights, in:
{\it Proceedings of World Congress on Nature \& Biologically Inspired
Computing} (NaBIC 2009, India), IEEE Publications, USA, pp. 210-214 (2009).
\bibitem{YangDeb2} Yang X. S. and Deb S., Engineering optimization by cuckoo search,
{\it Int. J. Mathematical Modelling and Numerical Optimisation}, {\bf 1},
330-343 (2010).
\bibitem{Yang2010} Yang X. S.,
Firefly algorithm, stochastic test functions and design optimisation,
{\it Int. J. Bio-inspired Computation}, {\bf 2}, 78-84 (2010).
\bibitem{Yang2} Yang X. S., {\it Engineering Optimization: An Introduction with
Metaheuristic Applications}, John Wiley \& Sons, (2010).
\end{thebibliography}
\end{document}
\begin{document}
\title{Graph2Graph Learning\ with Conditional Autoregressive Models}
\begin{abstract}
We present a graph neural network model for solving graph-to-graph learning problems. Most deep learning on graphs considers ``simple'' problems such as graph classification or regressing real-valued graph properties. For such tasks, the main requirement for intermediate representations of the data is to maintain the structure needed for output, i.e.~keeping classes separated or maintaining the order indicated by the regressor. However, a number of learning tasks, such as regressing graph-valued output, generative models, or graph autoencoders, aim to predict a graph-structured output. In order to successfully do this, the learned representations need to preserve far more structure. We present a conditional auto-regressive model for graph-to-graph learning and illustrate its representational capabilities via experiments on challenging subgraph predictions from graph algorithmics; as a graph autoencoder for reconstruction and visualization; and on pretraining representations that allow graph classification with limited labeled data.
\end{abstract}
\section{Introduction}
Graphs are everywhere! While machine learning and deep learning on graphs have for long caught wide interest, most research continues to focus on relatively simple tasks, such as graph classification~\cite{ying2018hierarchical,xu2018powerful}, or regressing a single continuous value from graph-valued input~\cite{chen2019alchemy,yang2019analyzing,dwivedi2020benchmarkgnns,ok20similarity,bianchi21arma,zhang20dlgsurvey,wu21comprehensivesurvey}. While such tasks are relevant and challenging, they ask relatively little from the learned intermediate representations: For graph classification, performance relies on keeping classes separate, and for regressing a single real variable, performance relies on ordering graphs according to the output variable, but within those constraints, intermediate graph embeddings can shuffle the graphs considerably without affecting performance.
In this paper, we consider Graph2Graph learning, {\it{i.e., }} problems whose input features and output predictions are both graphs. Such problems include both Graph2Graph regression and graph autoencoder models. For such problems, the model has to learn intermediate representations that carry rich structural information about the encoded graphs.
\paragraph{Related work.} While most existing work on graph neural networks (GNNs) centers around graph classification, some work moves in the direction of more complex output. Within chemoinformatics, several works utilize domain specific knowledge to predict graph-valued output, e.g.~chemical reaction outcomes predicted via difference networks~\cite{jin2017predicting}. Similarly utilizing domain knowledge, junction trees are used to define graph variational autoencoders~\cite{junctionVAE} and Graph2Graph translation models~\cite{jin2018learning} designed specifically for predicting molecular structures, and interpretable substructures and structural motifs are used to design generative Graph2Graph models for molecules in~\cite{jin2020multi} and~\cite{jin2020hierarchical}, respectively. Some work on encoder-decoder networks involving graph pooling and unpooling exists, but is only applied to node- and graph classification~\cite{gao2019graph,ying2018hierarchical}. More general generative models have also appeared, starting with the Variational Graph Autoencoder~\cite{kipf2016variational} which is primarily designed for link prediction in a single large graph. In~\cite{li2020dirichlet}, clustering is leveraged as part of a variational graph autoencoder to ensure common embedding of clusters. This, however, leads to a strong dependence on the similarity metric used to define clusters, which may be unrelated to the task at hand. In~\cite{you2018graphrnn}, a graph RNN, essentially structured as a residual decoder architecture, is developed by using the adjacency matrix to give the graph an artificial sequential structure. This generative model samples from a distribution fitted to a population of data, and does not generalize to graph-in, graph-out prediction problems.
\textbf{We contribute} a model for Graph2Graph prediction with flexible modelling capabilities. Utilizing the graph representation from~\cite{you2018graphrnn}, we build a full encoder-decoder network analogous to previous work in sequence-to-sequence learning~\cite{sutskever2014sequence}. Drawing on tools from image segmentation, we obtain edge-wise probabilities for the underlying discrete graphs, as well as flexible loss functions for handling the class imbalance implied by sparse graphs. Our graph RNN creates rich graph representations for complex graph prediction tasks. We illustrate its performance both for graph regression, as a graph autoencoder including visualization, and for unsupervised pretraining of graph representations to allow graph classification with limited labeled data.
\section{Method}
\label{sec:Method}
Our framework aims to generate new output graphs as a function of input graphs. Below, we achieve this by treating Graph2Graph regression as a sequence-to-sequence learning task, representing each graph as a sequence structure.
\subsection{Graph Representation}
Our framework takes as input a collection of undirected graphs with node- and edge attributes, denoted as $G = \{V, E\}$, where $V$ is the set of nodes and $E$ is the set of edges between nodes. Given a fixed node ordering $\pi$, a graph $G$ is uniquely represented by its attributed adjacency matrix
$A \in (\mathbb{R}^{\gamma})^{n\times n}$, where $\gamma$ is the attribute dimension for edges in $G$ and $n$ is the number of nodes. Moreover, $A_{i,j}$ is non-null if and only if $(v_{i}, v_{j}) \in E$.
Consistent with the representation in~\cite{you2018graphrnn,popova2019molecularrnn}, a graph $G$ will be represented as sequences of adjacency vectors $\{X_1, X_2,\cdots, X_n\}$ obtained by breaking $A$ up by rows. $X_i = (A_{i, i-1},A_{i, i-2},\cdots,A_{i,1})$ encodes the sequence of attributes of edges connecting the node $v_i$ to its previous nodes $\{v_{i-1}, v_{i-2}, ..., v_1\}$. The graph $G$ is thus transformed into a sequence of sequences, spanning across graph nodes $V$ and edges $E$. A toy example (one graph with 4 nodes) is shown in Fig.~\ref{fig:graph_representation}. This representation allows us to build on graph representation learning for sequences.
\begin{figure}
\caption{Graph representation: a toy example.}
\label{fig:graph_representation}
\caption{Rolled representation of the proposed Graph2Graph network}
\label{fig:rolling_network_architecture}
\end{figure}
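To make this encoding concrete, the following minimal sketch (ours, not part of the model implementation; the 4-node example and the unattributed binary setting are assumptions chosen for brevity) converts an adjacency matrix under a fixed node ordering into the sequence of adjacency vectors:
\begin{verbatim}
import numpy as np

def graph_to_sequences(A):
    # X_i = (A[i, i-1], ..., A[i, 0]): edges from node i back to all
    # previous nodes under the fixed ordering (0-based indices).
    n = A.shape[0]
    return [A[i, :i][::-1].tolist() for i in range(1, n)]

# Toy 4-node graph: the path 0-1-2-3 plus the chord 0-2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(graph_to_sequences(A))  # [[1], [1, 1], [1, 0, 0]]
\end{verbatim}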
\subsection{Graph2Graph Network Architecture}
Using the above graph representation, we propose a novel encoder-decoder architecture for Graph2Graph predictions building on previous models for sequence-to-sequence prediction~\cite{sutskever2014sequence}. At the encoding phase, graph information will be captured at edge level as well as node level, and analogously, the decoder will infer graph information at edge and node level in turn. A graphical illustration of proposed network is shown in Fig.~\ref{fig:rolling_network_architecture}, with its unrolled version in Fig.~\ref{fig:unrolling_network_architecture}.
\begin{figure}
\caption{\textbf{Left:} unrolled representation of the proposed Graph2Graph network. \textbf{Right:} the bidirectional node-level encoder \textit{encNodeRNN}.}
\label{fig:unrolling_network_architecture}
\end{figure}
\subsubsection{Encoder}
The encoder aims to extract useful information from the input graph to feed into the decoding inference network. Utilizing the sequential representation of the input graph $G$, we use expressive recurrent neural networks (RNN) to encode $G$. The structure and attributes in $G$ are summarized across two levels of RNNs, each based on~\cite{sutskever2014sequence}: the node level RNN, denoted as \textit{encNodeRNN}, and the edge level RNN, denoted as \textit{encEdgeRNN}. For the \textit{encNodeRNN}, we apply bidirectional RNNs~\cite{schuster1997bidirectional} to encode information from both previous and following nodes, see the right subfigure of Fig.~\ref{fig:unrolling_network_architecture}.
The encoder network \textit{encEdgeRNN} reads elements in $X_i$ as they are ordered, {\it{i.e., }} from $X_{i,1}$ to $X_{i,i-1}$, using the forward RNN $\overrightarrow{g}_{edge}$. Here, $\overrightarrow{g}_{edge}$ is a state-transition function, more precisely a Gated Recurrent Unit (GRU). The GRU $\overrightarrow{g}_{edge}$ produces a sequence of forward hidden states $(\overrightarrow{h}^{edge}_{i,1},\overrightarrow{h}^{edge}_{i,2}, \cdots, \overrightarrow{h}^{edge}_{i,i-1})$. We then pick $\overrightarrow{h}^{edge}_{i,i-1}$ to be the context vector of $X_i$, and input it into \textit{encNodeRNN}.
The encoder network \textit{encNodeRNN} is a bidirectional RNN, which receives input from \textit{encEdgeRNN} and transmits hidden states over time, as shown in Figs.~\ref{fig:rolling_network_architecture} and~\ref{fig:unrolling_network_architecture}. This results in concatenated hidden states $h^{node}_{i} = \overrightarrow{h}^{node}_i\|\overleftarrow{h}^{node}_{n-i}$. The final hidden state $\overrightarrow{h}^{node}_{n}$ and $\overleftarrow{h}^{node}_{1}$ are concatenated and used as the initial hidden state for the decoder.
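A minimal PyTorch sketch of this two-level encoder is given below (ours, not the training code used in the experiments; hidden sizes and the edge-attribute dimension are illustrative assumptions). It shows the edge-level GRU summarizing each $X_i$ and the bidirectional node-level GRU producing the concatenated states $h^{node}_i$:
\begin{verbatim}
import torch
import torch.nn as nn

class TwoLevelEncoder(nn.Module):
    def __init__(self, edge_dim=1, h_edge=32, h_node=64):
        super().__init__()
        self.edge_rnn = nn.GRU(edge_dim, h_edge, batch_first=True)
        self.node_rnn = nn.GRU(h_edge, h_node, batch_first=True,
                               bidirectional=True)

    def forward(self, X):  # X: list of tensors, X[i] of shape (i, edge_dim)
        # Edge level: keep the last hidden state of each sequence X_i.
        embs = [self.edge_rnn(x.unsqueeze(0))[1][-1] for x in X]
        seq = torch.stack([e.squeeze(0) for e in embs]).unsqueeze(0)
        # Node level: bidirectional pass yields concatenated h_i^{node}.
        H, _ = self.node_rnn(seq)
        return H  # shape (1, n-1, 2 * h_node)

enc = TwoLevelEncoder()
X = [torch.tensor([[1.]]), torch.tensor([[1.], [1.]]),
     torch.tensor([[1.], [0.], [0.]])]
print(enc(X).shape)  # torch.Size([1, 3, 128])
\end{verbatim}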
\subsubsection{Decoder}
Once the input graph is encoded, the decoder seeks to infer a corresponding output graph for a given problem. To help understand how the decoder works, we consider a particular graph prediction problem motivated by an NP hard problem from graph algorithms, namely the identification of a \emph{maximum clique}. In this application, we take a graph as input and aim to predict which nodes and edges belong to its maximum clique. Thus, the decoder predicts whether to retain or discard the next edge given all previous nodes, predicted edges and the context vector of the input graph.
The decoder defines a probability distribution $p(Ys)$, where $Ys = (Y_0,Y_1,\dots,Y_{m-1}) $ is the sequence of adjacency vectors that forms the output as predicted by the decoder. Here, $p(Ys)$ is a joint probability distribution on the adjacency vector predictions and can be decomposed as the product of ordered conditional distributions:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
p(Ys) = \prod_{i=0}^{m-1}p(Y_i|Y_0,Y_1,\dots,Y_{i-1}, C),
\label{eq:dec_nodeProbDistribution}
\end{equation}
\noindent where~$m$ is the node number of the predicted graph, and $C$ is the context vector used to facilitate predicting $Y_i$, as explained in Sec.~\ref{sec:attention} below. We write $p(Y_i|Y_0,Y_1,\dots,Y_{i-1}, C)$ as $p(Y_i|Y_{<i}, C)$ for simplicity.
In order to maintain edge dependencies in the prediction phase, $p(Y_i|Y_{<i}, C)$ is factorized as a product of conditional probabilities:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
p(Y_i|Y_{<i}, C) = \prod_{k=0}^{i-1}p(Y_{i,k}|Y_{i,<k},c; Y_{<i}, C)
\label{eq:dec_edgeProbDistribution}
\end{equation}
\noindent where $c$ is the edge context vector for predicting $Y_{i,k}$.
The cascaded relation between Equation~\eqref{eq:dec_nodeProbDistribution} and Equation~\eqref{eq:dec_edgeProbDistribution} is reflected in practice by their approximation by the two cascaded RNNs called \textit{decNodeRNN} and \textit{decEdgeRNN}. Here, \textit{decNodeRNN} transits graph information from node $i-1$ to node $i$, hence generating a node (see Eq.~\eqref{eq:decNodeRNN}), and \textit{decEdgeRNN} generates edge predictions for the generated node (Eq.~\eqref{eq:decEdgeRNN}). The output of each \textit{decNodeRNN} cell serves as the initial hidden state for \textit{decEdgeRNN}. Moreover, at each step, the output of \textit{decEdgeRNN} is fed into an MLP head with sigmoid activation function, generating the probability of keeping the corresponding edge in the output graph (Eq.~\eqref{eq:decEdgeMLP}). This use of sigmoid activation is similar to its use in image segmentation, where a pixel-wise foreground probability is obtained from it.
\begin{align}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\label{eq:decNodeRNN}
&s^{node}_i = f_{node}(s^{node}_{i-1}, f_{edge,down}(Y_{i-1}), C_i ) \\
\label{eq:decEdgeRNN}
&s^{edge}_{i,j}=f_{edge}(s^{edge}_{i,j-1}, emb(Y_{i,j-1}), c_{i,j}) \\
\label{eq:decEdgeMLP}
&o_{i,j} = \text{MLP}(s^{edge}_{i,j})
\end{align}
Our decoder has some features in common with the graph RNN~\cite{you2018graphrnn}. We extend it by utilizing attention in the decoder, and by adding extra GRUs in $f_{edge,down}$, which encodes the predicted adjacency vector from the previous step, improving the model's expressive capability.
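The following sketch of a single decoding step (ours; dimensions are illustrative assumptions, and both attention contexts are omitted for brevity) makes the interplay of Eqs.~\eqref{eq:decNodeRNN}--\eqref{eq:decEdgeMLP} explicit: a node-level GRU cell advances the graph state, and an edge-level GRU cell emits edge probabilities autoregressively through a sigmoid MLP head.
\begin{verbatim}
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    def __init__(self, h_node=64, h_edge=32):
        super().__init__()
        self.f_node = nn.GRUCell(h_edge, h_node)  # consumes f_{edge,down}(Y_{i-1})
        self.f_edge = nn.GRUCell(1, h_edge)       # consumes previous edge label
        self.project = nn.Linear(h_node, h_edge)  # init edge state from node state
        self.head = nn.Sequential(nn.Linear(h_edge, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, s_node, y_emb, n_edges):
        s_node = self.f_node(y_emb, s_node)       # node-level transition
        s_edge = self.project(s_node)
        y_prev = torch.zeros(s_node.size(0), 1)   # SOS token
        probs = []
        for _ in range(n_edges):                  # edge-level autoregression
            s_edge = self.f_edge(y_prev, s_edge)
            p = self.head(s_edge)                 # edge-retention probability
            probs.append(p)
            y_prev = (p > 0.5).float()            # greedy feedback at test time
        return s_node, torch.cat(probs, dim=1)

step = DecoderStep()
_, p = step(torch.zeros(1, 64), torch.zeros(1, 32), n_edges=3)
print(p.shape)  # torch.Size([1, 3])
\end{verbatim}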
\begin{algorithm}[t]
\caption{\label{alg:overall algorithm}Graph to Graph learning algorithm.}
\textit{Input:} graph $\textbf{X} = (X_1,\dots, X_n)$.
\textit{Output:} graph $\textbf{Y} = (Y_1,\dots, Y_m)$.
\begin{algorithmic}[1]
\STATE For each $X_i, i=1,\dots,n$, use edge RNN $g_{edge}$ to calculate their encodings $\{XEmb_1,\dots,XEmb_n\}$ and hidden states $h^{edge}_{i, j}$.
\STATE Forward node RNN $\overrightarrow{g}_{node}$ reads $\{XEmb_1,\dots,XEmb_n\}$ and calculates forward node hidden states $\{\overrightarrow{h}^{node}_1,\dots,\overrightarrow{h}^{node}_n\}$
\STATE Reverse node RNN $\overleftarrow{g}_{node}$ reads $\{XEmb_n,\dots,XEmb_1\}$ and calculates reverse node hidden states $\{\overleftarrow{h}^{node}_1,\dots,\overleftarrow{h}^{node}_n\}$
\STATE Final hidden state $h^{node}_i = \textsc{Concat}(\overrightarrow{h}^{node}_i, \overleftarrow{h}^{node}_{n-i+1})$
\STATE Set initial hidden state of decoder node RNN as $s^{node}_1 = h^{node}_n$, initial edge sequence encoding $YEmb_0 = \mathrm{SOS}, i=1$
\STATE While $i<=m$ do
\begin{itemize}
\item $f_{edge,down}$ encodes $Y_{i-1}$ to be $YEmb_{i-1}$
\item $s^{node}_i = f_{node}(s^{node}_{i-1},YEmb_{i-1}, \textup{NodeAttn}(h^{node}_{1:n},s^{node}_{i-1} ))$
\item $s^{edge}_{i,0} = s^{node}_i, j=1, Y_{i,0} = \mathrm{SOS_{decEdge1}}$
\item While $j<=i$ do
\begin{itemize}
\item $s^{edge}_{i,j} = f_{edge}(s^{edge}_{i,j-1}, Y_{i,j-1}, \text{EdgeAttn}(s^{edge}_{i,j-1}, h^{edge}_{:,:}))$
\item $Y_{i,j} = \text{MLP}(s^{edge}_{i,j})$
\item $j\leftarrow j+1$
\end{itemize}
\item $i\leftarrow i+1$
\end{itemize}
\STATE Return predicted graph sequence $Y_{1:m}$
\end{algorithmic}
\end{algorithm}
\subsubsection{Attention mechanism} \label{sec:attention}
As seen from Eqs.~\eqref{eq:dec_nodeProbDistribution} and~\eqref{eq:dec_edgeProbDistribution}, the final edge probabilities are conditioned on a node context vector~$C$ as well as an edge context vector~$c$. We derive both following~\cite{bahdanau2014neural}:
The node context vector $C_i$ is defined from the hidden vectors $(h^{node}_1, h^{node}_2, \dots, h^{node}_n)$ generated by the encoder. Each hidden vector $h^{node}_i$ captures global information from the graph $G$, with a main focus on the $i$-th node. The $i$-th node context vector $C_i$ is then computed as the weighted sum across all these hidden vectors:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
C_i = \sum_{j=1}^{n}vHiddenWeight_{i,j}h^{node}_j.
\label{eq:nodeAttnComputation}
\end{equation}
The weight assigned to $h^{node}_j$ is computed as the normalized compatibility of $h^{node}_j$ with $s^{node}_{i-1}$:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
vHiddenWeight_{i,j} = \frac{\exp(\phi(s^{node}_{i-1}, h^{node}_j))}
{\sum_{k=1}^{n}\exp(\phi(s^{node}_{i-1}, h^{node}_k))}
\label{eq:nodeAttnWeightComputation}
\end{equation}
\noindent where $\phi$ is a compatibility function, parameterized as a feedforward neural network and trained jointly along with other modules in the whole model.
The computation of the edge level context $c_{i,j}$ follows the same scheme. The overall framework is summarized in Algorithm~\ref{alg:overall algorithm}.
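A compact sketch of this additive attention (ours; the hidden size of the compatibility network~$\phi$ is an illustrative assumption) is:
\begin{verbatim}
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        # phi: feed-forward compatibility network, trained jointly.
        self.phi = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, s_prev, H):
        # s_prev: (batch, dim) decoder state; H: (batch, n, dim) encoder states.
        n = H.size(1)
        s = s_prev.unsqueeze(1).expand(-1, n, -1)
        scores = self.phi(torch.cat([s, H], dim=-1))   # compatibilities
        weights = torch.softmax(scores, dim=1)         # vHiddenWeight_{i,j}
        return (weights * H).sum(dim=1)                # context vector C_i

attn = AdditiveAttention(dim=64)
C = attn(torch.zeros(2, 64), torch.randn(2, 5, 64))
print(C.shape)  # torch.Size([2, 64])
\end{verbatim}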
\section{Experiments and Evaluation}
\label{sec:ExperimentsAndEvaluation}
Next, we show how our model can be applied in a wide range of tasks, including Graph2Graph regression applied as a heuristic to solve challenging graph algorithmic problems; representation learning as a graph autoencoder; and utilizing the graph autoencoder to learn semantic representations for graph classification with limited labeled data.
\paragraph{General experimental setup.} All models were trained on a single NVIDIA Titan GPU with 12GB memory, using the Adam~\cite{Kingma2015} optimizer with learning rate $\in \{0.01, 0.003\}$ and batch size $\in \{64,128\}$.
\paragraph{Loss Function.} For all Graph2Graph learning tasks, we used the Focal loss~\cite{lin2017focal} function known from segmenting images with unbalanced classes, analogous to the relatively sparse graphs. Note that image segmentations and binary graphs are similar as prediction targets, in the sense that both consist of multiple binary classifications.
For an input graph $\textbf{X}$, a ground truth output graph $\mathbb{Y}$ and a Graph2Graph network $\mathcal{M}$, the loss $\ell_{i,j}$ for the edge between node $i$ and its $j$-th previous node is:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\ell_{i,j} = -(1-p^t_{i,j})^\gamma \log(p^t_{i,j}),
\label{eq:lossFuncSingle}
\end{equation}
\noindent where $p^t_{i,j}$ denotes the likelihood of the \emph{correct prediction}, and $\gamma>0$ is a hyperparameter used to reduce the relative loss for well classified edges. In our experiment $\gamma=2$.
The final loss is a sum of edge-wise losses: $\mathcal{L}(\mathcal{M}(\textbf{X}), \mathbb{Y}) = \sum_{(i, j)\in\mathcal{I}}\ell_{i,j}$, where $\mathcal{I}$ is the set of node index pairs. This varies depending on application: For the maximum clique prediction (Sec.~\ref{subsec:maxCliquePrediction}), we have $\forall a \in \mathcal{I}, \textbf{X}_a = 1$, restricting to predicting subgraphs of the input. For the autoencoder (Sec.~\ref{sec:ae} and~\ref{sec:graphClassification}), on the other hand, $\mathcal{I}$ contains all index pairs in \textbf{X}.
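In code, the edge-wise focal loss reads as follows (a minimal sketch, ours; \texttt{p} holds predicted edge probabilities and \texttt{target} the ground-truth labels over the index set~$\mathcal{I}$):
\begin{verbatim}
import torch

def edge_focal_loss(p, target, gamma=2.0):
    # p_t: likelihood of the correct prediction for each edge.
    p_t = torch.where(target == 1, p, 1.0 - p)
    loss = -((1.0 - p_t) ** gamma) * torch.log(p_t.clamp_min(1e-12))
    return loss.sum()  # summed over the index set I

p = torch.tensor([0.9, 0.2, 0.6])
y = torch.tensor([1, 0, 1])
print(edge_focal_loss(p, y))  # well-classified edges contribute little
\end{verbatim}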
\begin{table}[t]
\center
{
\caption{\label{tab:datasets_3tasks}Summary of datasets used in our experiments on three applications.}
\resizebox{\textwidth}{!}
{
\setlength{\tabcolsep}{4pt}{
\begin{tabular}{@{}llcccccc@{}}
\toprule
~ &\textbf{Dataset} &\# Graphs &\# Node (avg) &\# Node (max) &\# Training &\# Validation &\# Test \\
\midrule
\multirowcell{3}{Sec.~\ref{subsec:maxCliquePrediction}\\ \textbf{Maximum Clique}} &DBLP\_v1 &{$14488$} &{$11.7$} &{$39$} &{$8692$} &{$2898$} &{$2898$} \\
~ &IMDB-MULTI &{$1500$} &{$13.0$} &{$89$} &{$900$} &{$300$} &{$300$} \\
~ &deezer\_ego\_nets &{$2558$} &{$16.7$} &{$30$} &{$1552$} &{$518$} &{$518$} \\
\midrule
\multirowcell{3}{Sec.~\ref{sec:ae}\\ \textbf{AutoEncoder}} &DBLP\_v1 &{$19455$} &{$10.5$} &{$39$} &{$11673 $} &{$3891 $} &{$3891$} \\
~ &IMDB-MULTI &{$1500$} &{$13.0$} &{$89$} &{$900$} &{$300$} &{$300$} \\
~ &MUTAG &{$188$} &{$17.9$} &{$28$} &{$112$} &{$38$} &{$38$} \\
\midrule
\multirowcell{3}{Sec.~\ref{sec:graphClassification}\\ \textbf{Classification}} &DBLP\_v1 &{$19455$} &{$10.5$} &{$39$} &{$11673 $} &{$3891 $} &{$3891$} \\
~ &IMDB-MULTI &{$1500$} &{$13.0$} &{$89$} &{$1200$} &{$150$} &{$150$} \\
~ &AIDS &{$2000$} &{$15.7$} &{$\emph{50}$} &{$1600$} &{$200$} &{$200$} \\
~ &NCI1 &{$4100$} &{$29.9$} &{$\emph{50}$} &{$3288$} &{$411$} &{$411$} \\
~ &IMDB-BINARY &{$1000$} &{$19.8$} &{$\emph{100}$} &{$800$} &{$100$} &{$100$} \\
\bottomrule
\end{tabular}
}
}
}
\end{table}
\begin{table}[t]
\center
\setlength\tabcolsep{10pt}
{
\caption{\label{tab:maxClique_acc_iou}Results of maximal clique prediction in terms of accuracy (\%) and edge IoU (\%) on the DBLP\_v1, IMDB-MULTI and deezer\_ego\_nets datasets. OOM means out of memory even with batch size 1; Graph2Graph denotes our proposed model. Results are based on a random split.}
\resizebox{\textwidth}{!}
{
\begin{tabular}{@{}lcccccc@{}}
\toprule
~ & \multicolumn{2}{c}{DBLP\_v1}& \multicolumn{2}{c}{IMDB-MULTI}& \multicolumn{2}{c}{deezer\_ego\_nets} \\
\midrule
\textbf{Models} & Accuracy &edge IoU & Accuracy &edge IoU & Accuracy & edge IoU \\
\cmidrule(r){1-1} \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
MLP &$85.0$ & $93.8$ & $ 61.0 $ & $85.7 $ & $33.2$ & $66.7$ \\
GRU w/o Attn &$85.96$ & $95.47$ &$54.33$ &$79.82$ &$42.86$ & $69.76 $\\
GRU with Attn &\multicolumn{2}{c}{$>$3 days} &\multicolumn{2}{c}{OOM}& $46.53$& $76.75$ \\
Graph2Graph &$\textbf{95.51}$ & $\textbf{97.43}$ & $\textbf{82.3}$ & $\textbf{92.5}$ & $\textbf{58.5}$ & $\textbf{81.8}$ \\
\bottomrule
\end{tabular}
}
}
\end{table}
\begin{table}[!t]
\center
{
\caption{\label{tab:ablation_maxClique_acc_iou}Ablation study on maximal clique prediction in terms of accuracy and edge IoU on the DBLP\_v1, IMDB-MULTI and deezer\_ego\_nets datasets. Graph2Graph denotes our proposed model. Results are computed on a random split.}
\resizebox{\textwidth}{!}
{
\begin{tabular}{@{}lcccccc@{}}
\toprule
~ & \multicolumn{2}{c}{DBLP\_v1}& \multicolumn{2}{c}{IMDB-MULTI}& \multicolumn{2}{c}{deezer\_ego\_nets} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
\textbf{Models} & Accuracy &edge IoU & Accuracy &edge IoU& Accuracy & edge IoU \\
\midrule
Graph2Graph w/o NodeAttn &$93.48$ & $96.15$ &$63.67$ &$83.21$ &$50.39$ &$78.90$ \\
Graph2Graph w/o EdgeAttn &$94.58$ & $96.58$ &$80.00$ &$90.51$ &$56.76$ &$\textbf{82.90}$\\
Graph2Graph full &$\textbf{95.51}$ &$\textbf{97.43}$ & $\textbf{82.3}$ & $\textbf{92.5}$ & $\textbf{58.5}$ & $81.8$ \\
\bottomrule
\end{tabular}
}
}
\end{table}
\begin{figure}
\caption{Examples of maximal clique predictions; edge probability shown in colorbar.}
\label{fig:maxClique_example_1}
\label{fig:maxClique_example_2}
\label{fig:maxClique_example_colorBar}
\label{fig:maxClique_visualization}
\end{figure}
\subsection{Solving graph algorithmic problems via Graph2Graph regression}\label{subsec:maxCliquePrediction}
Many important graph problems are NP complete~\cite{bomze1999maximum, alekseev2007np}, enhancing the need for efficient heuristics. While most existing heuristics are deterministic in their nature, and hence also in the mistakes that they make, a neural network would be trained to perform the task at hand for a particular data distribution. This gives possibilities for improving both quality, tailoring the predictions to the data, and computational complexity, which is low at test time.
Here, we train the Graph2Graph architecture to predict maximum cliques, an NP complete problem, and illustrate its performance on several benchmark datasets.
\paragraph{Problem Definition and settings.} The Maximal Clique (MC) is the \textit{complete} subgraph $G^{*}$ of a given graph $G$ which contains the maximal number of nodes.
To reduce computational time, we employed a fixed attention mechanism at both node level and edge level by setting $vHiddenWeight_{i,j} = 1$ if $i=j$ and $0$ otherwise.
\paragraph{Data.}
Our experiments are carried out using the datasets DBLP\_v1~\cite{pan2013graph}, IMDB-MULTI~\cite{yanardag2015deep}, and deezer\_ego\_nets~\cite{rozemberczki2020api} from the TUD graph benchmark database~\cite{tud}. Graphs whose maximum clique contained less than 3 nodes were excluded, and for the deezer\_ego\_nets, we excluded those graphs that had more than 30 nodes, giving the dataset statistics shown in Table~\ref{tab:datasets_3tasks}. In each graph dataset, 60\% were used for training, 20\% for validation and 20\% for test.
\paragraph{Results.}
Performance was quantified both in terms of accuracy and Intersection over Union (IoU) on correctly predicted edges. While the former counts completely correctly predicted subgraphs, the latter quantifies near-successes, analogous to its use in image segmentation. We compare our performance with other similar autoencoder architectures with backbones as MLP, GRU with~\cite{bahdanau2014neural} and without Attention~\cite{cho2014learning}, by flattening a graph as one-level sequence. The results found in Table~\ref{tab:maxClique_acc_iou} clearly show that the Graph2Graph architecture outperforms the alternative models. This high performance is also illustrated in Fig.~\ref{fig:maxClique_visualization}, which shows visual examples of predicted maximal cliques. These are illustrated prior to thresholding, with edge probabilities indicated by edge color. More examples are found in the Supplementary Material.
\paragraph{Ablation Study.} By removing node level attention (\textit{Graph2Graph w/o NodeAttn}) and edge level attention (\textit{Graph2Graph w/o EdgeAttn}) from the original model, we investigate the components' contributions to MC prediction; see Table~\ref{tab:ablation_maxClique_acc_iou} for results. We see that the full Graph2Graph model outperforms the variants without NodeAttn and without EdgeAttn on all three datasets under both metrics, except for edge IoU on deezer\_ego\_nets. These results demonstrate the contribution of the attention mechanisms at node and edge level to the performance improvement.
\begin{figure}
\caption{Latent representation for DBLP\_v1 test set.}
\label{fig:dblp_tsne}
\end{figure}
\begin{figure}
\caption{Latent representation for test sets of IMDB-MULTI (left) and MUTAG (right).}
\label{fig:imdb_mutag_tsne}
\end{figure} | 3,790 | 11,579 | en |
\subsection{Graph autoencoder via Graph2Graph prediction} \label{sec:ae}
It is well known from image- and language processing~\cite{cho2014learning,sutskever2014sequence} that encoder-decoder networks often learn semantically meaningful embeddings when trained to reproduce their input as an autoencoder. We utilize the encoder-decoder structure of our proposed Graph2Graph model to train the network as a graph autoencoder, mapping graphs to continuous context vectors, and back to an approximation of the original graph.
\paragraph{Problem definition and settings.} Given input graphs $G$, train the Graph2Graph network $\mathcal{M} \colon G \mapsto H$ for the prediction $H = \hat{G}$ to reconstruct $G$ as well as possible. The encoder uses single directional RNNs for both edgeRNN and nodeRNN, {\it{i.e., }} $h^{edge}_i = \overrightarrow{h}^{edge}_i$, $h^{node}_i = \overrightarrow{h}^{node}_i$. We use no edge context $c$, and constrain all node contexts $C$ as $h^{node}_n$ to obtain more compressed encodings. The resulting $h^{node}_n$ serves as a latent representation of $G$ in a learned latent space, which in our experiments has dimension 128.
\paragraph{Data.} We use the full TU~\cite{tud} datasets DBLP\_v1, IMDB-MULTI and MUTAG~\cite{debnath1991structure}, using 60/20/20\% for training/validation/test, respectively; see Table~\ref{tab:datasets_3tasks} for details.
\paragraph{Results.} A visual comparison of original and reconstructed graphs is found in Fig.~\ref{fig:graphReconstruction}, and Figs.~\ref{fig:dblp_tsne} and~\ref{fig:imdb_mutag_tsne} visually demonstrate the ability of the learned representation to preserve graph structure by visualizing the test-set latent features in a 2D t-SNE plot~\cite{van2008visualizing}. Note, in particular, how graphs that are visually similar are found nearby each other. It is evident from the t-SNE plots that the Graph2Graph model has captured semantic information of the graphs, whereas adjacency matrix embeddings (see supplementary material) fail to capture such patterns. Note also that even on the very small training set of MUTAG, the embedding still preserves semantic structure. The expressiveness of the latent space embeddings is further validated on the proxy task of graph classification with limited labeled data below. More visualization results can be found in the supplementary material, including a comparison with t-SNE on the original adjacency matrices.
\begin{figure}
\caption{Autoencoder reconstructions from DBLP\_v1. Edge probability in grayscale (see colorbar).}
\label{fig:graphReconstruction_1}
\label{fig:graphReconstruction_2}
\label{fig:graphReconstruction_colorBar}
\label{fig:graphReconstruction}
\end{figure}
\subsection{Pretraining semantic representations for graph classification with limited labeled data}
\label{sec:graphClassification}
In this section, we utilize the graph autoencoder from Sec.~\ref{sec:ae} to learn semantic, low-dimensional embeddings of the dataset graphs, and apply an MLP on the latent variables for classification. In particular, we investigate the ``limited labels'' setting, which appears frequently in real life settings where data is abundant, but labeled data is scarce and expensive.
\paragraph{Problem formulation and settings.} The graph autoencoder is trained on a large training set (not using graph labels). An MLP is subsequently trained for classification on labeled training set subsamples of the latent representations, to study how the resulting model's performance depends on the size of the labeled training set. We compare our own subset model with the state-of-the-art Graph Isomorphism Network~\cite{xu2018powerful} (GIN) using similar hyperparameters (2 layers of MLP, each with hidden dimension 64; batch size 32) and selecting, for each model, the best performing epoch out of 100. Both models are given purely structural data, supplying GIN with node degrees as node labels. Both models are trained on randomly sampled subsets consisting of 0.1\%, 0.25\%, 0.5\%, 1\%, 5\%, 10\% and 100\% of the labeled training data, respectively; keeping the validation and training sets fixed. Training is repeated on new random subsets 10 times.
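The probing step admits a very small implementation; the sketch below (ours; tensor shapes and the training loop are illustrative, mirroring the 2-layer, hidden-dimension-64 setup described above) trains an MLP on frozen latent codes:
\begin{verbatim}
import torch
import torch.nn as nn

def train_probe(Z, y, n_classes, epochs=100, lr=1e-3):
    # Z: frozen autoencoder latents, y: graph labels.
    mlp = nn.Sequential(nn.Linear(Z.size(1), 64), nn.ReLU(),
                        nn.Linear(64, n_classes))
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(mlp(Z), y)
        loss.backward()
        opt.step()
    return mlp

Z = torch.randn(32, 128)             # stand-in for pretrained latents
y = torch.randint(0, 2, (32,))
probe = train_probe(Z, y, n_classes=2)
print(probe(Z).argmax(dim=1).shape)  # torch.Size([32])
\end{verbatim}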
\paragraph{Data.} We use the DBLP\_v1, NCI1~\cite{wale2008comparison,kim2016pubchem,shervashidze2011weisfeiler}, AIDS~\cite{aidsdata,riesen2008iam}, IMDB-BINARY~\cite{yanardag2015deep} and IMDB-MULTI datasets for classification. The rather large DBLP\_v1 is divided into 60/20/20 \% splits for training/validation/testing, whereas the remaining datasets are divided into 80/10/10 \% splits to ensure that the best models are robustly trained. These splits were kept fixed across models.
\paragraph{Results.}
As shown in Fig.~\ref{fig:small_labels}, the model pretrained as a Graph2Graph autoencoder outperforms GIN on the small training sets, and approaches similar performance on the full training set.
\begin{figure}
\caption{Results from classification with features pre-trained as a Graph2Graph autoencoder.}
\label{fig:small_labels}
\end{figure}
\section{Discussion and Conclusion}
\label{sec:DiscussionConclusion}
In this paper, we have turned our attention to graph-to-graph prediction; a problem which has so far seen very limited development within graph deep learning. We have utilized this both for graph-in, graph-out regression; a graph autoencoder; and for unsupervised pretraining of semantic representations that allow learning discriminative classification models with very little labeled data.
Our paper proposes a new, general family of problems for deep learning on graphs; namely predictions whose input and output are all graphs. We propose several tasks for learning with such models, and establish methods for validating them using publicly available benchmark data.
Our experimental validation differs from state-of-the-art in graph classification, in that we work with fixed training/validation/test splits as is commonly done in deep learning. To make this feasible, we have chosen to validate on datasets that are larger than the most commonly used graph benchmark datasets. Nevertheless, we have worked with publicly available benchmark data, making it easy for others to improve on and compare to our models. Further encouraging this, our code will be made publicly available upon publication.
While our model opens up for general solutions to new problems, it also has weaknesses. First, our current implementation assumes that all graphs have the same size, obtaining this by zero-padding all graphs to the maximal size. While this assumption is also found in other graph deep learning work, it is an expensive one, and in future work we will seek to remove it.
Our model depends on the order of the nodes used to create the adjacency matrix, and thus per se depends on node permutation. However, in a similar fashion to~\cite{you2018graphrnn}, all graphs are represented using a depth first order before feeding them into the model, which ensures that different permutations of the input graph still give consistent output.
The performance of the state-of-the-art benchmark GIN is lower than that reported in the literature~\cite{xu2018powerful}, for two main reasons. First, as has previously been pointed out~\cite{xu2018powerful}, the most common way to report performance for graph neural networks is by reporting the largest encountered validation performance; an approach that is explained by the widespread use of small datasets. As we have chosen to perform validation with larger datasets, we do not do this. Second, this tendency is emphasized by our use of structural information alone in order to assess differences in models rather than differences in information supplied.
This also emphasizes the potential for even stronger representations. The Graph2Graph network currently only uses structural information, and the extension to graphs with node labels or node- and edge weights, as well as graphs whose structures or attributes are stochastic, forms important directions for future work.
In conclusion, we present an autoregressive model for graph-to-graph predictions and show its utility in several different tasks, ranging from graph-valued regression, via autoencoders and their use for visualization, to graph classification with limited labeled data based on latent representations pretrained as an autoencoder.
Most work in deep learning for graphs addresses problems whose output is ``simple'', such as classification with discrete output, or regression with real-valued output. This paper demonstrates, quantitatively and visually, that graph neural networks can be used to learn far richer outputs, with corresponding rich internal representations.
\small
\end{document}
\begin{document}
\title{Steiner trees and higher geodecity}
\begin{abstract}
Let~$G$ be a connected graph and $\ell : E(G) \to \mathbb{R}^+$ a length-function on the edges of~$G$. The \emph{Steiner distance} $\rm sd_G(A)$ of $A \subseteq V(G)$ within~$G$ is the minimum length of a connected subgraph of~$G$ containing~$A$, where the length of a subgraph is the sum of the lengths of its edges.
It is clear that every subgraph $H \subseteq G$, with the induced length-function $\ell|_{E(H)}$, satisfies $\rm sd_H(A) \geq \rm sd_G(A)$ for every $A \subseteq V(H)$. We call $H \subseteq G$ \emph{$k$-geodesic in~$G$} if equality is attained for every $A \subseteq V(H)$ with $|A| \leq k$. A subgraph is \emph{fully geodesic} if it is $k$-geodesic for every $k \in \mathbb{N}$. It is easy to construct examples of graphs $H \subseteq G$ such that~$H$ is $k$-geodesic, but not $(k+1)$-geodesic, so this defines a strict hierarchy of properties. We are interested in situations in which this hierarchy collapses in the sense that if $H \subseteq G$ is $k$-geodesic, then~$H$ is already fully geodesic in~$G$.
Our first result of this kind asserts that if~$T$ is a tree and $T \subseteq G$ is 2-geodesic with respect to some length-function~$\ell$, then it is fully geodesic. This fails for graphs containing a cycle. We then prove that if~$C$ is a cycle and $C \subseteq G$ is 6-geodesic, then~$C$ is fully geodesic. We present an example showing that the number~6 is indeed optimal.
We then develop a structural approach towards a more general theory and present several open questions concerning the big picture underlying this phenomenon.
\end{abstract}
\begin{section}{Introduction}
\label{intro}
Let~$G$ be a graph and $\ell : E(G) \to \mathbb{R}^+$ a function that assigns to every edge $e \in E(G)$ a positive \emph{length} $\ell(e)$. This naturally extends to subgraphs $H \subseteq G$ as $\ell(H) := \sum_{e \in E(H)} \ell(e)$. The \emph{Steiner distance} $\rm sd_G(A)$ of a set $A \subseteq V(G)$ is defined as the minimum length of a connected subgraph of~$G$ containing~$A$, where $\rm sd_G(A) := \infty$ if no such subgraph exists. Every such minimizer is necessarily a tree and we say it is a \emph{Steiner tree for~$A$ in~$G$}. In the case where $A = \{ x, y\}$, the Steiner distance of~$A$ is the ordinary distance~$\rm d_G(x,y)$ between~$x$ and~$y$. Hence this definition yields a natural extension of the notion of ``distance'' for sets of more than two vertices. Corresponding notions of radius, diameter and convexity have been studied in the literature \cite{chartrand, steinerdiamdeg, steinerdiamgirth, steinerdiamplanar, steinerconvexnhood, steinerconvexgeom}. Here, we initiate the study of \emph{Steiner geodecity}, with a focus on structural assumptions that cause a collapse in the naturally arising hierarchy.
Let $H \subseteq G$ be a subgraph of~$G$, equipped with the length-function $\ell|_{E(H)}$. It is clear that for every $A \subseteq V(H)$ we have $\rm sd_H(A) \geq \rm sd_G(A)$. For a natural number~$k$, we say that~$H$ is \emph{$k$-geodesic in~$G$} if $\rm sd_H(A) = \rm sd_G(A)$ for every $A \subseteq V(H)$ with $|A| \leq k$. We call~$H$ \emph{fully geodesic in~$G$} if it is $k$-geodesic for every $k \in \mathbb{N}$.
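Both notions are easy to probe computationally on small instances. The following brute-force sketch (ours and purely illustrative; the running time is exponential in $|V(G) \setminus A|$) computes $\rm sd_G(A)$ exactly by minimizing, over all vertex sets $S \supseteq A$, the weight of a minimum spanning tree of the induced subgraph on~$S$:
\begin{verbatim}
from itertools import combinations

def mst_weight(S, edges):
    # Prim's algorithm on the subgraph induced by S; inf if disconnected.
    # edges: dict mapping frozenset({u, v}) -> positive length.
    S = set(S)
    if len(S) <= 1:
        return 0.0
    tree, total = {next(iter(S))}, 0.0
    while tree != S:
        cand = [(l, v) for e, l in edges.items() for v in e
                if v in S - tree and e - {v} <= tree and e <= S]
        if not cand:
            return float("inf")
        l, v = min(cand)
        tree.add(v)
        total += l
    return total

def steiner_distance(vertices, edges, A):
    extra = [v for v in vertices if v not in A]
    return min(mst_weight(set(A) | set(T), edges)
               for r in range(len(extra) + 1)
               for T in combinations(extra, r))

# H = path 0-1-2 inside G = unit triangle: H is not 2-geodesic in G.
V = [0, 1, 2]
G = {frozenset({0, 1}): 1.0, frozenset({1, 2}): 1.0, frozenset({0, 2}): 1.0}
H = {e: l for e, l in G.items() if e != frozenset({0, 2})}
print(steiner_distance(V, H, {0, 2}))  # 2.0 = sd_H({0, 2})
print(steiner_distance(V, G, {0, 2}))  # 1.0 = sd_G({0, 2})
\end{verbatim}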
By definition, a $k$-geodesic subgraph is $m$-geodesic for every $m \leq k$. In general, this hierarchy is strict: In Section~\ref{general theory} we provide, for every $k \in \mathbb{N}$, examples of graphs $H \subseteq G$ and a length-function $\ell : E(G) \to \mathbb{R}^+$ such that~$H$ is $k$-geodesic, but not $(k+1)$-geodesic. On the other hand, it is easy to see that if $H \subseteq G$ is a 2-geodesic \emph{path}, then it is necessarily fully geodesic, because the Steiner distance of any $A \subseteq V(H)$ in~$H$ is equal to the maximum distance between two $a, b \in A$. Our first result extends this to all trees.
\begin{theorem} \label{tree 2-geo}
Let~$G$ be a graph with length-function~$\ell$ and $T \subseteq G$ a tree. If~$T$ is 2-geodesic in~$G$, then it is fully geodesic.
\end{theorem}
Here, it really is necessary for the subgraph to be acyclic (see Corollary~\ref{H_2 forests}). Hence the natural follow-up question is what happens in the case where the subgraph is a cycle.
\begin{theorem} \label{cycle 6-geo}
Let~$G$ be a graph with length-function~$\ell$ and $C \subseteq G$ a cycle. If~$C$ is 6-geodesic in~$G$, then it is fully geodesic.
\end{theorem}
Note that the number~6 cannot be replaced by any smaller integer.
In Section~\ref{preliminaries} we introduce notation and terminology needed in the rest of the paper. Section~\ref{toolbox} contains observations and lemmas that will be used later. We then prove Theorem~\ref{tree 2-geo} in Section~\ref{sct on trees}. In Section~\ref{sct on cycles} we prove Theorem~\ref{cycle 6-geo} and provide an example showing that the number~6 is optimal. Section~\ref{general theory} contains an approach towards a general theory, aiming at a deeper understanding of the phenomenon displayed in Theorem~\ref{tree 2-geo} and Theorem~\ref{cycle 6-geo}. Finally, we take the opportunity to present the short and easy proof that in any graph~$G$ with length-function~$\ell$, the cycle space of~$G$ is generated by the set of fully geodesic cycles.
\end{section}
\begin{section}{Preliminaries}
\label{preliminaries}
All graphs considered here are finite and undirected. It is convenient for us to allow parallel edges. In particular, a cycle may consist of just two vertices joined by two parallel edges. Loops are redundant for our purposes and we exclude them to avoid trivialities. Most of our notation and terminology follows that of~\cite{diestelbook}, unless stated otherwise.
A set~$A$ of vertices in a graph~$G$ is called \emph{connected} if and only if $G[A]$ is.
Let $G, H$ be two graphs. A \emph{model of~$G$ in~$H$} is a family of disjoint connected \emph{branch-sets} $B_v \subseteq V(H)$, $v \in V(G)$, together with an injective map $\beta : E(G) \to E(H)$, where we require that for any $e \in E(G)$ with endpoints $u, v \in V(G)$, the edge $\beta(e) \in E(H)$ joins vertices from~$B_u$ and~$B_v$. We say that~$G$ \emph{is a minor of~$H$} if~$H$ contains a model of~$G$.
We use additive notation for adding or deleting vertices and edges. Specifically, let~$G$ be a graph, $H$ a subgraph of~$G$, $v \in V(G)$ and $e =xy \in E(G)$. Then $H + v$ is the graph with vertex-set $V(H) \cup \{ v \}$ and edge-set $E(H) \cup \{ vw \in E(G) \colon w \in V(H) \}$. Similarly, $H + e$ is the graph with vertex-set $V(H) \cup \{ x, y\}$ and edge-set $E(H) \cup \{ e \}$.
Let~$G$ be a graph with length-function~$\ell$. A \emph{walk} in~$G$ is an alternating sequence $W = v_1e_1v_2 \ldots e_kv_{k+1}$ of vertices~$v_i$ and edges~$e_i$ such that $e_i = v_i v_{i+1}$ for every $1 \leq i \leq k$. The walk~$W$ is \emph{closed} if $v_1 = v_{k+1}$. Stretching our terminology slightly, we define the \emph{length} of the walk as $\rm len_G(W) := \sum_{1 \leq i \leq k} \ell(e_i)$. The \emph{multiplicity}~$\rm m_W(e)$ of an edge $e \in E(G)$ is the number of times it is traversed by~$W$, that is, the number of indices $1 \leq j \leq k$ with $e = e_j$. It is clear that
\begin{equation} \label{length walk}
\rm len_G(W) = \sum_{e \in E(G)} \rm m_W(e) \ell(e) .
\end{equation}
Let~$G$ be a graph and~$C$ a cycle with $V(C) \subseteq V(G)$. We say that a walk~$W$ in~$G$ is \emph{traced by~$C$} in~$G$ if it can be obtained from~$C$ by choosing a starting vertex $x \in V(C)$ and an orientation~$\overrightarrow{C}$ of~$C$ and replacing every $\overrightarrow{ab} \in E(\overrightarrow{C})$ by a shortest path from~$a$ to~$b$ in~$G$. A cycle may trace several walks, but they all have the same length: Every walk~$W$ traced by~$C$ satisfies
\begin{equation} \label{length traced walk}
\rm len_G(W) = \sum_{ab \in E(C)} \rm d_G(a, b) .
\end{equation}
Even more can be said if the graph~$G$ is a tree. Then all the shortest $a$-$b$-paths for $ab \in E(C)$ are unique and all walks traced by~$C$ differ only in their starting vertex and/or orientation. In particular, every walk~$W$ traced by~$C$ in a tree~$T$ satisfies
\begin{equation} \label{multiplicities traced walk tree}
\forall e \in E(T): \, \, \rm m_W(e) = | \{ ab \in E(C) \colon e \in aTb \} | ,
\end{equation}
where $aTb$ denotes the unique $a$-$b$-path in~$T$.
Let~$T$ be a tree and $X \subseteq V(T)$. Let $e \in E(T)$ and let $T_1^e, T_2^e$ be the two components of $T -e$. In this manner, $e$ induces a bipartition $X = X_1^e \cup X_2^e$ of~$X$, given by $X_i^e = V(T_i^e) \cap X$ for $i \in \{ 1, 2 \}$. We say that the bipartition is \emph{non-trivial} if neither of $X_1^e, X_2^e$ is empty. The set of leaves of~$T$ is denoted by~$L(T)$. If $L(T) \subseteq X$, then every bipartition of~$X$ induced by an edge of~$T$ is non-trivial.
Let~$G$ be a graph with length-function~$\ell$, $A \subseteq V(G)$ and~$T$ a Steiner tree for~$A$ in~$G$. Since $\ell(e) >0 $ for every $e \in E(G)$, every leaf~$x$ of~$T$ must lie in~$A$, for otherwise $T - x$ would be a tree of smaller length containing~$A$.
In general, Steiner trees need not be unique. If~$G$ is a tree, however, then every $A \subseteq V(G)$ has a unique Steiner tree given by $\bigcup_{a, b \in A} aTb$.
\end{section}
\begin{section}{The toolbox}
\label{toolbox}
The first step in all our proofs is a simple lemma that guarantees the existence of a particularly well-behaved substructure that witnesses the failure of a subgraph to be $k$-geodesic.
Let~$H$ be a graph, $T$ a tree and~$\ell$ a length-function on~$T \cup H$. We call~$T$ a \emph{shortcut tree for~$H$} if the following hold:
\begin{enumerate}[ itemindent=0.8cm, label=(SCT\,\arabic*)]
\item $V(T) \cap V(H) = L(T)$, \label{sct vert}
\item $E(T) \cap E(H) = \emptyset$, \label{sct edge}
\item $\ell(T) < \rm sd_H( L(T) )$, \label{sct shorter}
\item For every $B \subsetneq L(T)$ we have $\rm sd_H(B) \leq \rm sd_T(B)$. \label{sct minim}
\end{enumerate}
Note that, by definition, $H$ is not $|L(T)|$-geodesic in~$T \cup H$.
\begin{lemma} \label{shortcut tree}
Let~$G$ be a graph with length-function~$\ell$, $k$ a natural number and $H \subseteq G$. If~$H$ is not $k$-geodesic in~$G$, then~$G$ contains a shortcut tree for~$H$ with at most~$k$ leaves.
\end{lemma}
\begin{proof}
Among all $A \subseteq V(H)$ with $|A| \leq k$ and $\rm sd_G(A) < \rm sd_H(A)$, choose~$A$ such that $\rm sd_G(A)$ is minimum. Let $T \subseteq G$ be a Steiner tree for~$A$ in~$G$. We claim that~$T$ is a shortcut tree for~$H$.
\textit{Claim 1:} $L(T) = A = V(T) \cap V(H)$.
The inclusions $L(T) \subseteq A \subseteq V(T) \cap V(H)$ are clear. We show $V(T) \cap V(H) \subseteq L(T)$. Assume for a contradiction that some $x \in V(T) \cap V(H)$ has degree $d \geq 2$ in~$T$. Let $T_1, \ldots, T_d$ be the components of $T - x$ and for $j \in [d]$ let $A_j := (A \cap V(T_j)) \cup \{ x \}$. Since $L(T) \subseteq A$, every tree~$T_j$ contains some $a \in A$ and so $A \not \subseteq A_j$. In particular $|A_j| \leq k$. Moreover $\rm sd_G(A_j) \leq \ell( T_j + x) < \ell(T)$, so by our choice of~$A$ and~$T$ it follows that $\rm sd_G(A_j) = \rm sd_H(A_j)$. Therefore, for every $j \in [d]$ there exists a connected $S_j \subseteq H$ with $A_j \subseteq V(S_j)$ and $\ell(S_j) \leq \ell(T_j + x)$. But then $S := \bigcup_j S_j \subseteq H$ is connected, contains~$A$ and satisfies
\[
\ell(S) \leq \sum_{j = 1}^d \ell(S_j) \leq \sum_{j=1}^d \ell(T_j + x) = \ell(T) ,
\]
which contradicts the fact that $\rm sd_H(A) > \ell(T)$ by choice of~$A$ and~$T$.
\textit{Claim 2:} $E(T) \cap E(H) = \emptyset$.
Assume for a contradiction that $xy \in E(T) \cap E(H)$. By Claim~1, $x, y \in L(T)$ and so~$T$ consists only of the edge~$xy$. But then $T \subseteq H$ and $\rm sd_H(A) \leq \ell(T)$, contrary to our choice of~$A$ and~$T$.
\textit{Claim 3:} $\ell(T) < \rm sd_H(L(T))$.
We have $\ell(T) = \rm sd_G(A) < \rm sd_H(A)$. By Claim~1, $A = L(T)$.
\textit{Claim 4:} For every $B \subseteqsetneq L(T)$ we have $\rm sd_H(B) \leq \rm sd_T(B)$.
Let $B \subsetneq L(T)$ and let $T' := T - (A \setminus B)$. By Claim~1, $T'$ is the tree obtained from~$T$ by chopping off all leaves not in~$B$ and so
\[
\rm sd_G(B) \leq \ell(T') < \ell(T) = \rm sd_G(A) .
\]
By minimality of~$A$, it follows that $\rm sd_H(B) = \rm sd_G(B) \leq \rm sd_T(B)$.
\end{proof}
Our proofs of Theorem~\ref{tree 2-geo} and Theorem~\ref{cycle 6-geo} proceed by contradiction and follow a similar outline. Let $H \subseteq G$ be a subgraph satisfying a certain set of assumptions. The aim is to show that~$H$ is fully geodesic. Assume for a contradiction that it is not and apply Lemma~\ref{shortcut tree} to find a shortcut tree~$T$ for~$H$. Let~$C$ be a cycle with $V(C) \subseteq L(T)$ and let $W_H, W_T$ be walks traced by~$C$ in~$H$ and~$T$, respectively. If $|L(T)| \geq 3$, then it follows from~(\ref{length traced walk}) and~\ref{sct minim} that $\rm len(W_H) \leq \rm len(W_T)$.
Ensure that $\rm m_{W_T}(e) \leq 2$ for every $e \in E(T)$ and that $\rm m_{W_H}(e) \geq 2$ for all $e \in E(S)$, where $S \subseteq H$ is connected with $L(T) \subseteq V(S)$. Then
\[
2\, \rm sd_H(L(T)) \leq 2\, \ell(S) \leq \rm len(W_H) \leq \rm len(W_T) \leq 2 \, \ell(T) ,
\]
which contradicts~\ref{sct shorter}.
The first task is thus to determine, given a tree~$T$, for which cycles~$C$ with $V(C) \subseteq V(T)$ we have $m_W(e) \leq 2$ for all $e \in E(T)$, where~$W$ is a walk traced by~$C$ in~$T$. Let $S \subseteq T$ be the Steiner tree for~$V(C)$ in~$T$. It is clear that~$W$ does not traverse any edges $e \in E(T) \setminus E(S)$ and $L(S) \subseteq V(C) \subseteq V(S)$. Hence we can always reduce to this case and may for now assume that $S = T$ and $L(T) \subseteq V(C)$.
\begin{lemma} \label{pos even}
Let~$T$ be a tree, $C$ a cycle with $L(T) \subseteq V(C) \subseteq V(T)$ and~$W$ a walk traced by~$C$ in~$T$. Then $m_W(e)$ is positive and even for every $e \in E(T)$.
\end{lemma}
\begin{proof}
Let $e \in E(T)$ and let $V(C) = V(C)_1 \cup V(C)_2$ be the bipartition induced by the two components of $T - e$. Since $L(T) \subseteq V(C)$, this bipartition is non-trivial. By~(\ref{multiplicities traced walk tree}), $m_W(e)$ is the number of $ab \in E(C)$ such that $e \in aTb$. By definition, $e \in aTb$ if and only if~$a$ and~$b$ lie in different sides of the bipartition. Every cycle has a positive even number of edges across any non-trivial bipartition of its vertex-set.
\end{proof}
\begin{lemma} \label{equality achieved}
Let~$T$ be a tree, $C$ a cycle with $L(T) \subseteq V(C) \subseteq V(T)$. Then
\[
2 \ell(T) \leq \sum_{ab \in E(C)} \rm d_T(a, b) .
\]
Moreover, there is a cycle~$C$ with $V(C) = L(T)$ for which equality holds.
\end{lemma}
\begin{proof}
Let~$W$ be a walk traced by~$C$ in~$T$. By Lemma~\ref{pos even}, (\ref{length walk}) and~(\ref{length traced walk})
\[
2 \ell(T) \leq \sum_{e \in E(T)} \rm m_W(e) \ell(e) = \rm len(W) = \sum_{ab \in E(C)} \rm d_T(a, b) .
\]
To see that equality can be attained, let~$2T$ be the multigraph obtained from~$T$ by doubling all edges. Since all degrees in~$2T$ are even, it has a closed Eulerian trail~$W$, which may be considered as a walk in~$T$ with $\rm m_W(e) = 2$ for all $e \in E(T)$. This walk traverses the leaves of~$T$ in some cyclic order, which yields a cycle~$C$ with $V(C) = L(T)$. It is easily verified that~$W$ is traced by~$C$ in~$T$ and so
\[
2 \ell(T) = \sum_{e \in E(T)} \rm m_W(e) \ell(e) = \rm len(W) = \sum_{ab \in E(C)} \rm d_T(a, b) .
\]
\end{proof}
We have now covered everything needed in the proof of Theorem~\ref{tree 2-geo}, so the curious reader may skip ahead to Section~\ref{sct on trees}.
\begin{figure}
\caption{A tree with four leaves}
\label{4leaftree}
\end{figure}
In general, not every cycle~$C$ with $V(C) = L(T)$ achieves equality in Lemma~\ref{equality achieved}. Consider the tree~$T$ from Figure~\ref{4leaftree} and the following three cycles on $L(T)$
\[
C_1 = abcda, \, \, C_2 = acdba, \, \, C_3 = acbda .
\]
\begin{figure}
\caption{The three cycles on~$T$}
\label{4leaftree3cyc}
\end{figure}
For the first two, equality holds, but not for the third one. But how does~$C_3$ differ from the other two? It is easy to see that we can add~$C_1$ to the planar drawing of~$T$ depicted in Figure~\ref{4leaftree}: There exists a planar drawing of $T \cup C_1$ extending this particular drawing. This is not true for~$C_2$, but it can be salvaged by exchanging the positions of~$a$ and~$b$ in Figure~\ref{4leaftree}. Of course, this is merely tantamount to saying that $T \cup C_i$ is planar for $i \in \{ 1, 2\}$.
On the other hand, it is easy to see that $T \cup C_3$ is isomorphic to~$K_{3,3}$ and therefore non-planar.
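For concreteness, suppose the tree of Figure~\ref{4leaftree} has two internal vertices $u, v$ joined by an edge, with $a, b$ adjacent to~$u$ and $c, d$ adjacent to~$v$ (up to relabeling, this is the only tree without degree-2 vertices whose union with~$C_3$ is $K_{3,3}$). For~$C_1$ every edge of~$T$ is paid for exactly twice:
\[
\sum_{xy \in E(C_1)} \rm d_T(x, y) = \big( \ell(au) + \ell(ub) \big) + \big( \ell(bu) + \ell(uv) + \ell(vc) \big) + \big( \ell(cv) + \ell(vd) \big) + \big( \ell(dv) + \ell(vu) + \ell(ua) \big) = 2 \, \ell(T) ,
\]
and the same holds for~$C_2$. For~$C_3$, however, each of the four paths $aTc, cTb, bTd, dTa$ passes through the middle edge~$uv$, so that
\[
\sum_{xy \in E(C_3)} \rm d_T(x, y) = 2 \, \ell(T) + 2 \, \ell(uv) > 2 \, \ell(T) .
\]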
\begin{lemma} \label{euler planar}
Let~$T$ be a tree and~$C$ a cycle with $V(C) = L(T)$. Let~$W$ be a walk traced by~$C$ in~$T$. The following are equivalent:
\begin{enumerate}[label=(\alph*)]
\item $T \cup C$ is planar.
\item For every $e \in E(T)$, both $V(C)_1^e, V(C)_2^e$ are connected in~$C$.
\item $W$ traverses every edge of~$T$ precisely twice.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) $\Rightarrow$ (b): Fix a planar drawing of $T \cup C$. The closed curve representing~$C$ divides the plane into two regions and the drawing of~$T$ lies in the closure of one of them. By symmetry, we may assume that it lies within the closed disk inscribed by~$C$. Let $A \subseteq V(C)$ be such that~$C[A]$ is disconnected and choose $a, b \in A$ from distinct components of~$C[A]$. $C$ is the union of two edge-disjoint $a$-$b$-paths $S_1, S_2$ and both of them must meet $V(C) \setminus A$, say $c \in V(S_1) \setminus A$ and $d \in V(S_2) \setminus A$.
The curves representing $aTb$ and $cTd$ lie entirely within the disk and so they must cross. Since the drawing is planar, $aTb$ and~$cTd$ have a common vertex. In particular, $A$ cannot be the set of leaves within a component of $T - e$ for any edge $e \in E(T)$.
(b) $\Rightarrow$ (c): Let $e \in E(T)$. By assumption, there are precisely two edges $f_1, f_2 \in E(C)$ between $V(C)_1^e$ and $V(C)_2^e$. These edges are, by definition, the ones whose endpoints are separated in~$T$ by~$e$. By~(\ref{multiplicities traced walk tree}), $m_W(e) =2$.
(c) $\Rightarrow$ (a): For $ab \in E(C)$, let $D_{ab} := aTb + ab \subseteq T \cup C$. The set $\mathcal{D} := \{ D_{ab} \colon ab \in E(C) \}$ of all these cycles is the fundamental cycle basis of $T \cup C$ with respect to the spanning tree~$T$. Every edge of~$C$ occurs in only one cycle of~$\mathcal{D}$. By assumption and~(\ref{multiplicities traced walk tree}), every edge of~$T$ lies on precisely two cycles in~$\mathcal{D}$. Covering every edge of the graph at most twice, the set~$\mathcal{D}$ is a \emph{sparse basis} of the cycle space of $T \cup C$. By MacLane's Theorem, $T \cup C$ is planar.
\end{proof}
\end{section}
\begin{section}{Shortcut trees for trees}
\label{sct on trees}
\begin{proof}[Proof of Theorem~\ref{tree 2-geo}]
Assume for a contradiction that $T \subseteq G$ was not fully geodesic and let $R \subseteq G$ be a shortcut tree for~$T$. Let $T' \subseteq T$ be the Steiner tree for $L(R)$ in~$T$. By Lemma~\ref{equality achieved}, there is a cycle~$C$ with $V(C) = L(R)$ such that
\[
2 \ell(R) = \sum_{ab \in E(C)} \rm d_R(a, b) .
\]
Note that~$T'$ is 2-geodesic in~$T$ and therefore in~$G$, so that $\rm d_{T'}(a,b) \leq \rm d_R(a,b)$ for all $ab \in E(C)$. Since every leaf of~$T'$ lies in $L(R) = V(C)$, we can apply Lemma~\ref{equality achieved} to~$T'$ and~$C$ and conclude
\[
2 \ell(T') \leq \sum_{ab \in E(C)} \rm d_{T'}(a, b) \leq \sum_{ab \in E(C)} \rm d_R(a, b) = 2\ell(R) ,
\]
which contradicts~\ref{sct shorter}.
\end{proof}
\end{section}
\begin{section}{Shortcut trees for cycles}
\label{sct on cycles}
By Lemma~\ref{shortcut tree}, it suffices to prove the following.
\begin{theorem} \label{shortcut tree cycle strong}
Let~$T$ be a shortcut tree for a cycle~$C$. Then $T \cup C$ is a subdivision of one of the five (multi-)graphs in Figure~\ref{five shortcut trees}. In particular, $C$ is not 6-geodesic in $T \cup C$.
\end{theorem}
\begin{figure}
\caption{The five possible shortcut trees for a cycle}
\label{five shortcut trees}
\end{figure}
Theorem~\ref{shortcut tree cycle strong} is best possible in the sense that for each of the graphs in Figure~\ref{five shortcut trees} there exists a length-function which makes the tree inside a shortcut tree for the outer cycle, see Figure~\ref{cycleshortcutlength}. These length-functions were constructed in a joint effort with Pascal Gollin and Karl Heuer in an ill-fated attempt to prove that a statement like Theorem~\ref{cycle 6-geo} could not possibly be true.
\begin{figure}
\caption{Shortcut trees for cycles}
\label{cycleshortcutlength}
\end{figure}
This section is devoted entirely to the proof of Theorem~\ref{shortcut tree cycle strong}. Let~$T$ be a shortcut tree for a cycle~$C$ with length-function $\ell : E(T \cup C) \to \mathbb{R}^+$ and let $L := L(T)$.
The case where $|L| = 2$ is trivial, so we henceforth assume that $|L| \geq 3$. By suppressing any degree-2 vertices, we may assume without loss of generality that $V(C) = L(T)$ and that~$T$ contains no vertices of degree~2.
\begin{lemma} \label{cover disjoint trees}
Let $T_1, T_2 \subseteq T$ be edge-disjoint trees. For $i \in \{ 1, 2\}$, let $L_i := L \cap V(T_i)$. If $L = L_1 \cup L_2$ is a non-trivial bipartition of~$L$, then both $C[L_1], C[L_2]$ are connected.
\end{lemma}
\begin{proof}
By~\ref{sct minim} there are connected $S_1, S_2 \subseteq C$ with $\ell(S_i) \leq \rm sd_T(L_i) \leq \ell(T_i)$ for $i \in \{ 1, 2 \}$. Assume for a contradiction that~$C[L_1]$ was not connected. Then $V(S_1) \cap L_2$ is non-empty and $S_1 \cup S_2 $ is connected, contains~$L$ and satisfies
\[
\ell( S_1 \cup S_2) \leq \ell(S_1) + \ell(S_2) \leq \ell(T_1) + \ell(T_2) \leq \ell(T) ,
\]
which contradicts~\ref{sct shorter}.
\end{proof} | 3,802 | 24,280 | en |
train | 0.13.3 | \end{section}
\begin{lemma} \label{planar 3reg}
$T \cup C$ is planar and 3-regular.
\end{lemma}
\begin{proof}
Let $e \in E(T)$, let $T_1, T_2$ be the two components of $T - e$ and let $L = L_1 \cup L_2$ be the induced (non-trivial) bipartition of~$L$. By Lemma~\ref{cover disjoint trees}, both $C[L_1], C[L_2]$ are connected. Therefore $T \cup C$ is planar by Lemma~\ref{euler planar}.
To see that $T \cup C$ is 3-regular, it suffices to show that no $t \in T$ has degree greater than~3 in~$T$. We just showed that $T \cup C$ is planar, so fix some planar drawing of it. Suppose for a contradiction that $t \in T$ had $d \geq 4$ neighbors in~$T$. In the drawing, these are arranged in some cyclic order as $t_1, t_2, \ldots, t_d$. For $j \in [d]$, let $R_j := T_j + t$, where~$T_j$ is the component of $T - t$ containing~$t_j$. Let~$T_{\rm odd}$ be the union of all~$R_j$ for odd $j \in [d]$ and~$T_{\rm even}$ the union of all~$R_j$ for even $j \in [d]$. Then $T_{\rm odd}, T_{\rm even} \subseteq T$ are edge-disjoint and yield a nontrivial bipartition $L = L_{\rm odd} \cup L_{\rm even}$ of the leaves. But neither of $C[L_{\rm odd}], C[L_{\rm even}]$ is connected, contrary to Lemma~\ref{cover disjoint trees}.
\end{proof}
\begin{lemma} \label{consec cycle long}
Let $e_0 \in E(C)$ be arbitrary. Then for any two consecutive edges $e_1, e_2$ of~$C$ we have $\ell(e_1) + \ell(e_2) > \ell(e_0)$. In particular $\ell(e_0) < \ell(C)/2$.
\end{lemma}
\begin{proof}
Suppose that $e_1, e_2 \in E(C)$ are both incident with $x \in L$. Let $S \subseteq C$ be a Steiner tree for $B := L \setminus \{ x \}$ in~$C$. By~\ref{sct minim} and~\ref{sct shorter} we have
\[
\ell(S) \leq \rm sd_T(B) \leq \ell(T) < \rm sd_C(L) .
\]
Thus $x \notin S$ and $E(S) = E(C) \setminus \{ e_1, e_2 \}$; in particular, $S$ is the unique Steiner tree for~$B$ in~$C$. If $e_0 \in \{ e_1, e_2 \}$, the claim is trivial, so assume $e_0 \notin \{ e_1, e_2 \}$. Then $P := C - e_0$ is a connected subgraph of~$C$ containing~$B$ that differs from~$S$, so $\ell(P) > \ell(S)$, which rearranges to $\ell(e_1) + \ell(e_2) > \ell(e_0)$. For the second assertion, apply this to two consecutive edges $e_1, e_2$ of $C - e_0$: then $\ell(e_0) < \ell(e_1) + \ell(e_2) \leq \ell(C) - \ell(e_0)$.
\end{proof}
Let $t \in T$ and~$N$ its set of neighbors in~$T$. For every $s \in N$ the set~$L_s$ of leaves~$x$ with $s \in tTx$ is connected in~$C$. Each $C[L_s]$ has two edges $f_s^1, f_s^2 \in E(C)$ incident to it.
\begin{lemma} \label{good root}
There is a $t \in T$ such that for every $s \in N$ and any $f \in \{ f_s^1, f_s^2 \}$ we have $\ell(C[L_s] + f) < \ell(C)/2$.
\end{lemma}
\begin{proof}
We construct a directed graph~$D$ with $V(D) = V(T)$ as follows. For every $t \in T$, we draw an arc from~$t$ to each neighbor~$s$ of~$t$ for which $\ell(C[L_s] + f_s^i) \geq \ell(C)/2$ for some $i \in \{ 1, 2 \}$.
\textit{Claim:} If $\overrightarrow{ts} \in E(D)$, then $\overrightarrow{st} \notin E(D)$.
Assume that there was an edge $st \in E(T)$ for which both $\overrightarrow{st}, \overrightarrow{ts} \in E(D)$. Let $T_s, T_t$ be the two components of $T - st$, where $s \in T_s$, and let $L = L_s \cup L_t$ be the induced bipartition of~$L$. By Lemma~\ref{cover disjoint trees}, both $C[L_s]$ and $C[L_t]$ are connected paths, say with endpoints $a_s, b_s$ and $a_t, b_t$ (possibly $a_s = b_s$ or $a_t = b_t$) so that $a_sa_t \in E(C)$ and $b_sb_t \in E(C)$ (see Figure~\ref{badneighbors}). Without loss of generality $\ell(a_sa_t) \leq \ell(b_sb_t)$. Since $\overrightarrow{st} \in E(D)$, we have $\ell(C[L_t] + b_sb_t) \geq \ell(C)/2$ and therefore $C[L_s] + a_sa_t$ is a shortest $a_t$-$b_s$-path in~$C$.
Similarly, it follows from $\overrightarrow{ts} \in E(D)$ that $\rm d_C( a_s, b_t) = \ell(C[L_t] + a_sa_t)$.
\begin{figure}
\caption{The setup in the proof of Lemma~\ref{good root}}
\label{badneighbors}
\end{figure}
Consider the cycle $Q := a_tb_sa_sb_ta_t$ and let $W_T, W_C$ be walks traced by~$Q$ in~$T$ and in~$C$, respectively. Then $\rm len(W_T) \leq 2 \, \ell(T)$, whereas
\[
\rm len(W_C) = 2 \, \ell (C - b_sb_t) \geq 2 \, \rm sd_C(L) .
\]
By~\ref{sct minim} we have $\rm d_C(x,y) \leq \rm d_T(x,y)$ for all $x, y \in L$ and so $\rm len(W_C) \leq \rm len(W_T)$. But then $\rm sd_C(L) \leq \ell(T)$, contrary to~\ref{sct shorter}. This finishes the proof of the claim.
Since every edge of~$D$ is an orientation of an edge of~$T$ and no edge of~$T$ is oriented both ways, it follows that~$D$ has at most $|V(T)| - 1$ edges. Since~$D$ has $|V(T)|$ vertices, there is a $t \in V(T)$ with no outgoing edges.
\end{proof} | 3,207 | 24,280 | en |
train | 0.13.4 | \begin{lemma} \label{consec cycle long}
Fix a node $t \in T$ as guaranteed by the previous lemma. If~$t$ was a leaf with neighbor~$s$, say, then $\ell(f_s^1) = \ell(C) - \ell(C[L_s] + f_s^2) > \ell(C)/2$ and, symmetrically, $\ell(f_s^2) > \ell(C)/2$, which is impossible. Hence by Lemma~\ref{planar 3reg}, $t$ has three neighbors $s_1, s_2, s_3 \in T$ and we let $L_i := C[L_{s_i}]$ and $\ell_i := \ell(L_i)$. There are three edges $f_1, f_2, f_3 \in E(C) \setminus \bigcup E(L_i)$, where $f_1$ joins~$L_1$ and~$L_2$, $f_2$ joins~$L_2$ and~$L_3$ and~$f_3$ joins~$L_3$ and~$L_1$. Each~$L_i$ is a (possibly trivial) path whose endpoints we label $a_i, b_i$ so that, in some orientation, the cycle is given by
\[
C = a_1L_1b_1 + f_1 + a_2L_2b_2 + f_2 + a_3L_3b_3 + f_3 .
\]
Hence $f_1 = b_1a_2$, $f_2 = b_2a_3$ and $f_3 = b_3a_1$ (see Figure~\ref{trail Q}).
The fact that $\ell_1 + \ell(f_1) \leq \ell(C)/2$ means that $L_1 + f_1$ is a shortest $a_1$-$a_2$-path in~$C$ and so $\rm d_C(a_1, a_2) = \ell_1 + \ell(f_1)$. Similarly, we thus know the distance between all other pairs of vertices with just one segment~$L_i$ and one edge~$f_j$ between them.
\begin{figure}
\caption{The cycle~$Q$}
\label{trail Q}
\end{figure}
If $|L_i| \leq 2$ for every $i \in [3]$, then $T \cup C$ is a subdivision of one of the graphs depicted in Figure~\ref{five shortcut trees} and we are done. Hence from now on we assume that at least one~$L_i$ contains at least~3 vertices.
\begin{lemma} \label{jumps}
Suppose that $\max \{ |L_s| \colon s \in N \} \geq 3$. Then there is an $s \in N$ with $\ell( f_s^1 + C[L_s] + f_s^2) \leq \ell(C)/2$.
\end{lemma}
\begin{proof}
For $j \in [3]$, let $r_j := \ell( f_{s_j}^1 + L_j + f_{s_j}^2)$. Assume wlog that $|L_1| \geq 3$. Then~$L_1$ contains at least two consecutive edges, so by Lemma~\ref{consec cycle long} we must have $\ell_1 > \ell(f_2)$. Therefore
\[
r_2 + r_3 = \ell(C) + \ell(f_2) - \ell_1 < \ell(C) ,
\]
so the minimum of $r_2, r_3$ is less than $\ell(C)/2$.
\end{proof} | 2,291 | 24,280 | en |
train | 0.13.5 | Fix a node $t \in T$ as guaranteed by the previous lemma. If~$t$ was a leaf with neighbor~$s$, say, then $\ell(f_s^1) = \ell(C) - \ell(C[L_s] + f_s^2) > \ell(C)/2$ and, symmetrically, $\ell(f_s^2) > \ell(C)/2$, which is impossible. Hence by Lemma~\ref{planar 3reg}, $t$ has three neighbors $s_1, s_2, s_3 \in T$ and we let $L_i := C[L_{s_i}]$ and $\ell_i := \ell(L_i)$. There are three edges $f_1, f_2, f_3 \in E(C) \setminus \bigcup E(L_i)$, where $f_1$ joins~$L_1$ and~$L_2$, $f_2$ joins~$L_2$ and~$L_3$ and~$f_3$ joins~$L_3$ and~$L_1$. Each~$L_i$ is a (possibly trivial) path whose endpoints we label $a_i, b_i$ so that, in some orientation, the cycle is given by
By the previous lemma, we may wlog assume that
\begin{equation} \label{jump edge}
\ell(f_2) + \ell_3 + \ell(f_3) \leq \ell(C)/2 ,
\end{equation}
so that $f_2 + L_3 + f_3$ is a shortest $a_1$-$b_2$-path in~$C$. Together with the inequalities from Lemma~\ref{good root}, this will lead to the final contradiction.
Consider the cycle $Q = a_1b_2a_2a_3b_3b_1a_1$ (see Figure~\ref{trail Q}). Let~$W_T$ be a walk traced by~$Q$ in~$T$. Every edge of~$T$ is traversed at most twice, hence
\begin{equation}
\sum_{ab \in E(Q)} \rm d_T(a, b) = \rm len(W_T) \leq 2\ell(T) . \label{sum through tree}
\end{equation}
Let~$W_C$ be a walk traced by~$Q$ in~$C$. Using~(\ref{jump edge}) and the inequalities from Lemma~\ref{good root}, we see that
\begin{align*}
\rm len(W_C) &= \sum_{ab \in E(Q)} \rm d_C(a, b) = 2 \ell_1 + 2 \ell_2 + 2 \ell_3 + 2 \ell(f_2) + 2 \ell(f_3) \\
&= 2 \ell(C) - 2 \ell(f_1) .
\end{align*}
But by~\ref{sct minim} we have $\rm d_C(a,b) \leq \rm d_T(a,b)$ for all $a, b \in L(T)$ and therefore $\rm len(W_C) \leq \rm len(W_T)$. Then by~(\ref{sum through tree})
\[
2 \ell(C) - 2 \ell(f_1) = \rm len(W_C) \leq \rm len(W_T) \leq 2 \ell(T).
\]
But then $S := C - f_1$ is a connected subgraph of~$C$ with $L(T) \subseteq V(S)$ satisfying $\ell(S) \leq \ell(T)$. This contradicts~\ref{sct shorter} and finishes the proof of Theorem~\ref{shortcut tree cycle strong}.
\qed
\end{section}
\begin{section}{Towards a general theory}
\label{general theory}
We have introduced a notion of higher geodecity based on the concept of the Steiner distance of a set of vertices. This introduces a hierarchy of properties: Every $k$-geodesic subgraph is, by definition, also $m$-geodesic for any $m < k$. This hierarchy is strict in the sense that for every~$k$ there are graphs~$G$ and $H \subseteq G$ and a length-function~$\ell$ on~$G$ such that~$H$ is $k$-geodesic in~$G$, but not $(k+1)$-geodesic. To see this, let~$G$ be a complete graph with $V(G) = [k+1] \cup \{ 0 \}$ and let~$H$ be the subgraph induced by $[k+1]$. Define $\ell(0j) := k-1$ and $\ell(ij) := k$ for all $i, j \in [k+1]$. If~$H$ was not $k$-geodesic, then~$G$ would contain a shortcut tree~$T$ for~$H$ with $|L(T)| \leq k$. Then~$T$ must be a star with center~$0$ and
\[
\ell(T) = (k-1)|L(T)| \geq k(|L(T)|-1) .
\]
But any spanning tree of $H[L(T)]$ has length $k(|L(T)|-1) $ and so $\rm sd_H(L(T)) \leq \ell(T)$, contrary to~\ref{sct shorter}. Hence~$H$ is a $k$-geodesic subgraph of~$G$. However, the star~$S$ with center~$0$ and $L(S) = [k+1]$ shows that
\[
\rm sd_G(V(H)) \leq (k+1)(k-1) < k^2 = \rm sd_H(V(H)) .
\]
Theorem~\ref{tree 2-geo} and Theorem~\ref{cycle 6-geo} demonstrate a rather strange phenomenon by providing situations in which this hierarchy collapses.
For a given natural number $k \geq 2$, let us denote by~$\mathcal{H}_k$ the class of all graphs~$H$ with the property that whenever~$G$ is a graph with $H \subseteq G$ and~$\ell$ is a length-function on~$G$ such that~$H$ is $k$-geodesic, then~$H$ is also fully geodesic.
By definition, this yields an ascending sequence $\mathcal{H}_2 \subseteq \mathcal{H}_3 \subseteq \ldots $ of classes of graphs. By Theorem~\ref{tree 2-geo} all trees lie in~$\mathcal{H}_2$. By Theorem~\ref{cycle 6-geo} all cycles are contained in~$\mathcal{H}_6$. The example above shows that $K_{k+1} \notin \mathcal{H}_k$.
We now describe some general properties of the class~$\mathcal{H}_k$.
\begin{theorem} \label{H_k minor closed}
For every natural number $k \geq 2$, the class~$\mathcal{H}_k$ is closed under taking minors.
\end{theorem}
To prove this, we first provide an easier characterization of the class~$\mathcal{H}_k$.
\begin{proposition} \label{H_k sct}
Let $k \geq 2$ be a natural number and~$H$ a graph. Then $H \in \mathcal{H}_k$ if and only if every shortcut tree for~$H$ has at most~$k$ leaves.
\end{proposition}
\begin{proof}
Suppose first that $H \in \mathcal{H}_k$ and let~$T$ be a shortcut tree for~$H$. By~\ref{sct shorter}, $H$ is not $|L(T)|$-geodesic in $T \cup H$. Let~$m$ be the minimum integer such that~$H$ is not $m$-geodesic in $T \cup H$. By Lemma~\ref{shortcut tree}, $T \cup H$ contains a shortcut tree~$S$ with at most~$m$ leaves for~$H$. But then by~\ref{sct vert} and~\ref{sct edge}, $S$ is the Steiner tree in~$T$ of $B := L(S) \subseteq L(T)$. If $B \subsetneq L(T)$, then $\ell(S) = \rm sd_T(B) \geq \rm sd_H(B)$ by~\ref{sct minim}, so we must have $B = L(T)$ and $m \geq |L(T)|$. Thus~$H$ is $(|L(T)| - 1)$-geodesic in $T \cup H$, but not $|L(T)|$-geodesic. As $H \in \mathcal{H}_k$, it must be that $|L(T)| - 1 < k$.
Suppose now that every shortcut tree for~$H$ has at most~$k$ leaves and let $H \subseteq G$ $k$-geodesic with respect to some length-function $\ell : E(G) \to \mathbb{R}^+$. If~$H$ was not fully geodesic, then~$G$ contained a shortcut tree~$T$ for~$H$. By assumption, $T$ has at most~$k$ leaves. But then $\rm sd_G( L(T)) \leq \ell(T) < \rm sd_H(L(T))$, so~$H$ is not $k$-geodesic in~$G$.
\end{proof}
\begin{lemma} \label{wlog connected}
Let $k \geq 2$ be a natural number and~$G$ a graph. Then $G \in \mathcal{H}_k$ if and only if every component of~$G$ is in~$\mathcal{H}_k$.
\end{lemma}
\begin{proof}
Every shortcut tree for a component~$K$ of~$G$ becomes a shortcut tree for~$G$ by taking $\ell(e) := 1$ for all $e \in E(G) \setminus E(K)$. Hence if $G \in \mathcal{H}_k$, then every component of~$G$ is in~$\mathcal{H}_k$ as well.
Suppose now that every component of~$G$ is in~$\mathcal{H}_k$ and that~$T$ is a shortcut tree for~$G$. If there is a component~$K$ of~$G$ with $L(T) \subseteq V(K)$, then~$T$ is a shortcut tree for~$K$ and so $|L(T)| \leq k$ by assumption. Otherwise, let $t_1 \in L(T) \cap V(K_1)$ and $t_2 \in L(T) \cap V(K_2)$ for distinct components $K_1, K_2$ of~$G$. By~\ref{sct minim}, it must be that $L(T) = \{ t_1, t_2 \}$ and so $|L(T)| = 2 \leq k$.
\end{proof} | 3,382 | 24,280 | en |
train | 0.13.6 | \begin{lemma} \label{wlog connected}
\begin{lemma} \label{sct minor closed}
Let~$G, H$ be two graphs and let~$T$ be a shortcut tree for~$G$. If~$G$ is a minor of~$H$, then there is a shortcut tree~$T'$ for~$H$ which is isomorphic to~$T$.
\end{lemma}
\begin{proof}
Since~$G$ is a minor of~$H$, there is a family of disjoint connected sets $B_v \subseteq V(H)$, $v \in V(G)$, and an injective map $\beta : E(G) \to E(H)$ such that for $uv \in E(G)$, the end vertices of $\beta(uv) \in E(H)$ lie in~$B_u$ and~$B_v$.
Let~$T$ be a shortcut tree for~$G$ with $\ell : E(T \cup G) \to \mathbb{R}^+$. By adding a small positive real number to every $\ell(e)$, $e \in E(T)$, we may assume that the inequalities in~\ref{sct minim} are strict, that is
\[
\rm sd_G(B) \leq \rm sd_T(B) - \epsilon
\]
for every $B \subseteq L(T)$ with $2 \leq |B| < |L(T)|$, where $\epsilon > 0$ is some constant.
Obtain the tree~$T'$ from~$T$ by replacing every $t \in L(T)$ by an arbitrary $x_t \in B_t$ and every $t \in V(T) \setminus L(T)$ by a new vertex~$x_t$ not contained in $V(H)$, maintaining the adjacencies. It is clear by definition that $V(T') \cap V(H) = L(T')$ and $E(T') \cap E(H) = \emptyset$. We now define a length-function $\ell' : E(T' \cup H) \to \mathbb{R}^+$ as follows.
For every edge $st \in E(T)$, the corresponding edge $x_sx_t \in E(T')$ receives the same length $\ell'(x_sx_t) := \ell(st)$. Every $e \in E(H)$ that is contained in one of the branchsets~$B_v$ is assigned the length $\ell '(e) := \delta$, where $\delta := \epsilon / |E(H)| $. For every $e \in E(G)$ we let $\ell '( \beta(e)) := \ell(e)$. To all other edges of~$H$ we assign the length~$\ell(T) + 1$.
We now show that~$T'$ is a shortcut tree for~$H$ with the given length-function~$\ell'$. Suppose that $S' \subseteq H$ was a connected subgraph with $L(T') \subseteq V(S')$ and $\ell'(S') \leq \ell'(T')$. By our choice of~$\ell'$, every edge of~$S'$ must either lie in a branchset~$B_v$ or be the image under~$\beta$ of some edge of~$G$, since otherwise $\ell'(S') > \ell(T) = \ell'(T')$. Let $S \subseteq G$ be the subgraph where $v \in V(S)$ if and only if $V(S') \cap B_v$ is non-empty and $e \in E(S)$ if and only if $\beta(e) \in E(S')$. Since~$S'$ is connected, so is~$S$: For any non-trivial bipartition $V(S) = U \cup W$ the graph~$S'$ contains an edge between $\bigcup_{u \in U} B_u$ and $\bigcup_{w \in W} B_w$, which in turn yields an edge of~$S$ between~$U$ and~$W$. Moreover $L(T) \subseteq V(S)$, since~$V(S')$ contains~$x_t$ and thus meets~$B_t$ for every $t \in L(T)$. Finally, $\ell(S) \leq \ell'(S') \leq \ell(T)$, which contradicts our assumption that~$T$ is a shortcut tree for~$G$.
For $B' \subseteq L(T')$ with $2 \leq |B'| < |L(T')|$, let $B := \{ t \in T \colon x_t \in B' \}$. By assumption, there is a connected $S \subseteq G$ with $B \subseteq V(S)$ and $\ell(S) \leq \rm sd_T(B) - \epsilon$. Let
\[
S' := \bigcup_{v \in V(S)} H[B_v] + \{ \beta(e) \colon e \in E(S) \}.
\]
For every $x_t \in B'$ we have $t \in B \subseteq V(S)$ and so $x_t \in B_t \subseteq V(S')$. Since~$S$ is connected and every $H[B_v]$ is connected, $S'$ is connected as well. Moreover
\[
\ell'(S') \leq \delta |E(H)| + \ell(S) \leq \rm sd_T(B) = \rm sd_{T'}(B') .
\]
\end{proof}
\begin{proof}[Proof of Theorem~\ref{H_k minor closed}]
Let~$H$ be a graph in~$\mathcal{H}_k$ and~$G$ a minor of~$H$. Let~$T$ be a shortcut tree for~$G$. By Lemma~\ref{sct minor closed}, $H$ has a shortcut tree~$T'$ which is isomorphic to~$T$. By Proposition~\ref{H_k sct} and the assumption on~$H$, we have $|L(T)| = |L(T')| \leq k$. Since~$T$ was arbitrary, it follows from Proposition~\ref{H_k sct} that $G \in \mathcal{H}_k$.
\end{proof}
\begin{corollary} \label{H_2 forests}
$\mathcal{H}_2$ is the class of forests.
\end{corollary}
\begin{proof}
By Theorem~\ref{tree 2-geo} and Lemma~\ref{wlog connected}, every forest is in~$\mathcal{H}_2$. On the other hand, if~$G$ contains a cycle, then it contains the triangle~$C_3$ as a minor. We saw in Section~\ref{sct on cycles} that~$C_3$ has a shortcut tree with~3 leaves. By Lemma~\ref{sct minor closed}, so does~$G$ and hence $G \notin \mathcal{H}_2$ by Proposition~\ref{H_k sct}.
\end{proof}
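For a concrete instance of such a shortcut tree, let~$C_3$ have all three edges of length~$2$ and let~$T$ be a star joining a new center vertex to the three vertices of the triangle by edges of length~$6/5$. Then $\rm d_T(x, y) = 12/5 \geq 2 = \rm d_{C_3}(x, y)$ for any two vertices $x, y$ of the triangle, so~\ref{sct minim} holds, while
\[
\ell(T) = 18/5 < 4 = \rm sd_{C_3}(L(T)) ,
\]
so~\ref{sct shorter} is satisfied as well.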
\begin{corollary} \label{num leaf bounded}
For every natural number $k \geq 2$ there exists an integer $m = m(k)$ such that every graph that is not in~$\mathcal{H}_k$ has a shortcut tree with more than~$k$, but not more than~$m$ leaves.
\end{corollary}
\begin{proof}
Let $k \geq 2$ be a natural number. By Theorem~\ref{H_k minor closed} and the Graph Minor Theorem of Robertson and Seymour~\cite{graphminorthm} there is a finite set~$R$ of graphs such that for every graph~$H$ we have $H \in \mathcal{H}_k$ if and only if~$H$ does not contain any graph in~$R$ as a minor. Let $m(k) := \max_{G \in R} |G|$.
Let~$H$ be a graph and suppose $H \notin \mathcal{H}_k$. Then~$H$ contains some $G \in R$ as a minor. By Proposition~\ref{H_k sct}, this graph~$G$ has a shortcut tree~$T$ with more than~$k$, but certainly at most~$|G|$ leaves. By Lemma~\ref{sct minor closed}, $H$ has a shortcut tree isomorphic to~$T$.
\end{proof}
We remark that we do not need the full strength of the Graph Minor Theorem here: We will see in a moment that the tree-width of graphs in~$\mathcal{H}_k$ is bounded for every $k \geq 2$, so a simpler version of the Graph Minor Theorem can be applied, see~\cite{excludeplanar}. Still, it seems that Corollary~\ref{num leaf bounded} ought to have a more elementary proof.
\begin{question}
Give a direct proof of Corollary~\ref{num leaf bounded} that yields an explicit bound on~$m(k)$. What is the smallest possible value for~$m(k)$?
\end{question}
In fact, we are not even aware of any example that shows one cannot simply take $m(k) = k+1$.
Given that~$\mathcal{H}_2$ is the class of forests, it seems tempting to think of each class~$\mathcal{H}_k$ as a class of ``tree-like'' graphs. In fact, containment in~$\mathcal{H}_k$ is related to the tree-width of the graph, but the relation is only one-way.
\begin{proposition} \label{low tw example}
For any integer $k \geq 1$, the graph $K_{2, 2k}$ is not in~$\mathcal{H}_{2k-1}$.
\end{proposition}
\begin{proof}
Let~$H$ be a complete bipartite graph $V(H) = A \cup B \cup \{ x, y \}$ with $|A| = |B| = k$, where $uv \in E(H)$ if and only if $u \in A \cup B$ and $v \in \{ x, y \}$ (or vice versa). We construct a shortcut tree for $H \cong K_{2,2k}$ with~$2k$ leaves.
For $x', y' \notin V(H)$, let~$T$ be the tree with $V(T) = A \cup B \cup \{ x', y' \}$, where~$x'$ is adjacent to every $a \in A$, $y'$ is adjacent to every $b \in B$ and $x'y' \in E(T)$. It is clear that $V(T) \cap V(H) = L(T)$ and~$T$ and~$H$ are edge-disjoint. We now define a length-function $\ell : E(T \cup H) \to \mathbb{R}^+$ that turns~$T$ into a shortcut tree for~$H$.
For all $a \in A$ and all $b \in B$, let
\begin{gather*}
\ell ( a x) = \ell (a x' ) = \ell ( b y) = \ell (b y') = k-1, \\
\ell ( a y ) = \ell ( a y') = \ell ( b x ) = \ell ( b x' ) = k, \\
\ell ( x' y' ) = k - 1 .
\end{gather*}
Let $A' \subseteq A, B' \subseteq B$. We determine $\rm sd_H(A' \cup B')$. By symmetry, it suffices to consider the case where $|A'| \geq |B'|$. We claim that
\[
\rm sd_H( A' \cup B') = (k-1)|A' \cup B'| + |B'| .
\]
It is easy to see that $\rm sd_H( A' \cup B') \leq (k-1)|A'| + k|B'|$, since $S^* := H[A' \cup B' \cup \{ x \}]$ is connected and achieves this length. Let now $S \subseteq H$ be a tree with $A' \cup B' \subseteq V(S)$.
If every vertex in $A' \cup B'$ is a leaf of~$S$ and~$S$ contains at most one of~$x$ and~$y$, then every leaf edge ends in that vertex, so~$S$ contains one of $H[A' \cup B' \cup \{x \}]$ and $H[ A' \cup B' \cup \{ y \} ]$; since $|A'| \geq |B'|$, in either case $\ell(S) \geq \ell(S^*)$. If every vertex in $A' \cup B'$ is a leaf of~$S$ but both $x, y \in V(S)$, then the inner vertices of an $x$-$y$ path in~$S$ lie in $(A \cup B) \setminus (A' \cup B')$, and the two edges at such a vertex have total length $2k-1 \geq |B'|$, so again $\ell(S) \geq (k-1)|A' \cup B'| + |B'|$.
Suppose now that some $z \in A' \cup B'$ is not a leaf of~$S$. Then~$z$ has two incident edges, one of length~$k$ and one of length~$k-1$. For $s \in S$, let $r(s)$ be the sum of the lengths of all edges of~$S$ incident with~$s$. Then $r(s) \geq k-1$ for all $s \in A' \cup B'$ and $r(z) \geq 2k-1$. Since $A' \cup B'$ is independent in~$H$ (and thus in~$S$), it follows that
\begin{align*}
\ell(S) &\geq \sum_{s \in A' \cup B'} r(s) \geq |( A' \cup B') \setminus \{ z \}| (k-1) + (2k-1) \\
&= |A' \cup B'|(k-1) + k \geq (k-1)|A' \cup B'| + |B'| .
\end{align*}
Thus our claim is proven. For $A', B'$ as before, it is easy to see that
\[
\rm sd_T( A' \cup B') = \begin{cases}
(k-1)|A' \cup B'| , &\text{ if } B' = \emptyset \\
(k-1)|A' \cup B'| + k- 1, &\text{ otherwise.}
\end{cases}
\]
We thus have $\rm sd_T( A' \cup B' ) < \rm sd_H(A' \cup B')$ if and only if $|A'| = |B'| = k$. Hence~\ref{sct shorter} and~\ref{sct minim} are satisfied and~$T$ is a shortcut tree for~$H$ with~$2k$ leaves.
\end{proof} | 3,907 | 24,280 | en |
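For $k = 2$, the construction reads as follows: $H \cong K_{2,4}$ has $A = \{ a_1, a_2 \}$ and $B = \{ b_1, b_2 \}$, the edges of~$H$ from~$A$ to~$x$ and from~$B$ to~$y$ have length~$1$ and the remaining four edges of~$H$ have length~$2$, while in~$T$ the edges from~$A$ to~$x'$, from~$B$ to~$y'$ and the edge $x'y'$ all have length~$1$. Then
\[
\rm sd_H(A \cup B) = 1 \cdot 4 + 2 = 6 ,
\]
attained by the star at~$x$, whereas $\ell(T) = 5$, and one readily checks that $\rm sd_H(B') \leq \rm sd_T(B')$ for every proper subset $B' \subsetneq A \cup B$.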
train | 0.13.7 | Note that the graph $K_{2,k}$ is planar and has tree-width~2. Hence there is no integer~$m$ such that all graphs of tree-width at most~2 are in~$\mathcal{H}_m$. Using Theorem~\ref{H_k minor closed}, we can turn Proposition~\ref{low tw example} into a positive result, however.
\begin{corollary} \label{H_k exclude K_2,k+2}
For any $k \geq 2$, no $G \in \mathcal{H}_k$ contains $K_{2, k+2}$ as a minor.
\qed
\end{corollary}
In particular, it follows from the Grid-Minor Theorem~\cite{excludeplanar} and planarity of~$K_{2,k}$ that the tree-width of graphs in~$\mathcal{H}_k$ is bounded. Bodlaender et al.~\cite{excludeK2k} gave a more precise bound for this special case, showing that graphs excluding $K_{2,k}$ as a minor have tree-width at most~$2(k-1)$.
It seems plausible that a qualitative converse to Corollary~\ref{H_k exclude K_2,k+2} might hold.
\begin{question} \label{K_2,k as culprit}
Is there a function $q : \mathbb{N} \to \mathbb{N}$ such that every graph that does not contain $K_{2,k}$ as a minor is contained in~$\mathcal{H}_{q(k)}$?
\end{question}
Since no subdivision of a graph~$G$ contains $K_{2, |G| + e(G) + 1}$ as a minor, a positive answer would prove the following.
\begin{conjecture}
For every graph~$G$ there exists an integer~$m$ such that every subdivision of~$G$ lies in~$\mathcal{H}_m$.
\end{conjecture}
\end{section}
\begin{section}{Generating the cycle space}
Let~$G$ be a graph with length-function~$\ell$. It is a well-known fact (see e.g.~\cite[Chapter~1, exercise~37]{diestelbook}) that the set of 2-geodesic cycles generates the cycle space of~$G$. This extends as follows, showing that fully geodesic cycles abound.
\begin{proposition} \label{generate cycle space}
Let~$G$ be a graph with length-function~$\ell$. The set of fully geodesic cycles generates the cycle space of~$G$.
\end{proposition}
We remark, first of all, that the proof is elementary and does not rely on Theorem~\ref{cycle 6-geo}, but only requires Lemma~\ref{shortcut tree} and Lemma~\ref{pos even}.
Let~$\mathcal{D}$ be the set of all cycles of~$G$ which cannot be written as a 2-sum of cycles of smaller length. The following is well-known.
\begin{lemma}
The cycle space of~$G$ is generated by~$\mathcal{D}$.
\end{lemma}
\begin{proof}
It suffices to show that every cycle is a 2-sum of cycles in~$\mathcal{D}$. Assume this was not the case and let $C \subseteq G$ be a cycle of minimum length that is not a 2-sum of cycles in~$\mathcal{D}$. In particular, $C \notin \mathcal{D}$ and so there are cycles $C_1, \ldots, C_k$ with $C = C_1 \oplus \ldots \oplus C_k$ and $\ell(C_i) < \ell(C)$ for every $i \in [k]$. By our choice of~$C$, every~$C_i$ can be written as a 2-sum of cycles in~$\mathcal{D}$. But then the same is true for~$C$, which is a contradiction.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{generate cycle space}]
We show that every $C \in \mathcal{D}$ is fully geodesic. Indeed, let $C \subseteq G$ be a cycle which is not fully geodesic and let $T\subseteq G$ be a shortcut tree for~$C$. There is a cycle~$D$ with $V(D) = L(T)$ such that~$C$ is a union of edge-disjoint $L(T)$-paths $P_{ab}$ joining~$a$ and~$b$ for $ab \in E(D)$.
For $ab \in E(D)$ let $C_{ab} := aTb + P_{ab}$. Every edge of~$C$ lies in precisely one of these cycles. An edge $e \in E(T)$ lies in $C_{ab}$ if and only if $e \in aTb$. By Lemma~\ref{pos even} and~(\ref{multiplicities traced walk tree}), every $e \in E(T)$ lies in an even number of cycles~$C_{ab}$. Therefore $C = \bigoplus_{ab \in E(D)} C_{ab}$.
For every $ab \in E(D)$, $C$ contains a path~$S$ with $E(S) = E(C) \setminus E(P_{ab})$ and $L(T) \subseteq V(S)$. Since~$T$ is a shortcut tree for~$C$, it follows from~\ref{sct shorter} that
\[
\ell(C_{ab}) \leq \ell(T) + \ell(P_{ab}) < \ell(S) + \ell(P_{ab}) = \ell(C) .
\]
In particular, $C \notin \mathcal{D}$.
\end{proof}
The fact that 2-geodesic cycles generate the cycle space has been extended to the topological cycle space of locally finite graphs by Georgakopoulos and Spr\"{u}ssel~\cite{agelos}. Does Proposition~\ref{generate cycle space} have a similar extension?
\end{section}
\end{document} | 1,595 | 24,280 | en |
train | 0.14.0 | \begin{document}
\title{Proving UNSAT in SMT: \ The Case of Quantifier Free Non-Linear Real Arithmetic}
\begin{abstract}
We discuss the topic of unsatisfiability proofs in SMT, particularly with reference to quantifier free non-linear real arithmetic. We outline how the methods here do not admit trivial proofs and how past formalisation attempts are not sufficient. We note that the new breed of local search based algorithms for this domain may offer an easier path forward.
\end{abstract}
\section{Introduction}
Since 2013, SAT Competitions have required certificates for unsatisfiability which are verified offline \cite{HJS18}. As the SAT problems tackled have grown larger, and the solvers have grown more complicated, such proofs have become more important for building trust in these solvers. The SAT community has agreed on DRAT as a common format for presenting such proofs (although within this there are some flavours \cite{RB19}).
The SMT community has long recognized the value of proof certificates, but alas producing them turned out to be much more difficult than for the SAT case. The current version of the SMT-LIB Language (v2.6) \cite{SMTLIB} specifies API commands for requesting and inspecting proofs from solvers but sets no requirements on the form those proofs take. In fact on page 66 it writes explicitly: ``\emph{The format of the proof is solver-specific}''. We assume that this is a placeholder for future work on an SMT-LIB proof format, rather than a deliberate design. The paper \cite{BdMF15} summarises some of the requirements, challenges and various approaches taken to proofs in SMT. Key projects that have been working on this issue include LFSC \cite{Stump2012} and veriT \cite{Barbosa2020}, but there has not been a general agreement in the community yet.
Our long-term vision is that an SMT solver would be able to emit a ``proof'' that covers both the Boolean reasoning and the theory reasoning (possibly from multiple theories) such that a theorem prover (or a combination of multiple theorem provers) could verify its correctness, where the inverted commas indicate that some programming linkage between the theorem provers might be necessary. We would still be some way from having a fully verified one-stop checker as in GRAT \cite{Lammich2020}, but would be a lot closer to it than we are now.
In \cite{BdMF15} the authors explain that since in SMT the propositional and theory reasoning are not strongly mixed, an SMT proof can be an interleaving of SAT proofs and theory reasoning proofs in the shape of a Boolean resolution tree whose leaves are clauses. They identify the main challenge of proof production as keeping enough information to produce proofs, without hurting efficiency too much. This may very well be true for many cases, but for the area of interest for the authors, \verb+QF_NRA+ (Quantifier-Free Nonlinear Real Arithmetic), there is the additional challenge of providing the proofs of the theory lemmas themselves.
\section{Quantifier Free Non-Linear Real Arithmetic}
\verb+QF_NRA+ typically considers a logical formula $\Phi$ where the literals are statements about the signs of polynomials with rational coefficients, i.e. $f_i(x_1,\ldots,x_n)\sigma_i0$ with $\sigma_i\in\{=,\ne,>,\ge,<,\le\}$.
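As a tiny concrete instance, the conjunction $x^2 + y^2 < 1 \wedge x + y > 2$ is unsatisfiable, since the unit disc forces $x + y < 2$. It can be posed, purely for illustration, through the Python API of the Z3 solver (the solver choice and encoding here are ours):
\begin{verbatim}
from z3 import Reals, Solver

x, y = Reals('x y')
s = Solver()
s.add(x**2 + y**2 < 1)   # f1 = x^2 + y^2 - 1 < 0
s.add(x + y > 2)         # f2 = x + y - 2 > 0
print(s.check())         # prints: unsat
\end{verbatim}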
Any SMT solver which claims to tackle this logic completely relies in some way on the theory of Cylindrical Algebraic Decomposition (CAD). This was initiated by Collins \cite{Collins1975} in the 1970s with many subsequent developments since: see for example the collection \cite{CJ98} or the introduction of the recent paper \cite{EBD20}. The key idea is to decompose infinite space $\mathbb{R}^n$ into a finite number of disjoint regions upon each of which the truth of the constraints is constant. This may be achieved by decomposing to ensure the signs of the polynomials involved are invariant, although optimisations can produce a coarser, and thus cheaper, decomposition.
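For a one-variable illustration of the idea, take $\Phi = (x^2 - 2 < 0) \wedge (x - 2 > 0)$. The real roots of the polynomials involved split~$\mathbb{R}$ into the sign-invariant cells
\[
(-\infty, -\sqrt{2}), \ \{ -\sqrt{2} \}, \ (-\sqrt{2}, \sqrt{2}), \ \{ \sqrt{2} \}, \ (\sqrt{2}, 2), \ \{ 2 \}, \ (2, \infty) ,
\]
and evaluating~$\Phi$ at one sample point per cell shows that $x^2 - 2 < 0$ holds only on $(-\sqrt{2}, \sqrt{2})$, where $x - 2 > 0$ fails; hence $\Phi$ is unsatisfiable.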
In the case of unsatisfiability an entire CAD truth invariant for the constraints may be produced, and the solver can check that the formula is unsatisfiable for a sample of each cell. How may this be verified? The cylindrical condition\footnote{Formally, the condition is that projection of any two cells onto a lower dimensional space with respect to the variable ordering are either equal or disjoint. Informally, this means the cells are stacked in cylinders over a decomposition in lower dimensional space.} means that checking our cells decompose the space is trivial, but the fact that the constraints have invariant truth-value is a deep implication of the algorithm, not necessarily apparent from the output.
\subsection*{Past QF\_NRA Formalisation Attempts}
There was a project in Coq to formalise Quantifier Elimination in Real Closed Fields. This may also be tackled by CAD, and of course has SMT in \verb+QF_NRA+ as a sub-problem. Work began on an implementation of CAD in Coq with some of the underlying infrastructure formalised \cite{Mahboubi2007}, but the project proceeded to instead formalise QE via alternative methods \cite{CM10}, \cite{CM12b} which are far less efficient\footnote{Although CAD is doubly exponential in the number of variables, the methods verified do not even have worst case complexity bound by a finite tower of exponentials!}. We learn that the CAD approach was not proven correct in the end \cite[bottom of p. 38]{CM12b}. Thus while it is formalised that Real QE (and thus satisfiability) is decidable, this does not offer a route to verifying current solver results.
The only other related work in the literature we found is \cite{NMD15} which essentially formalises something like CAD but only for problems in one variable.
\section{Potential from Coverings Instead of Decompositions?}
There has been recent interaction between the SMT community and the computer algebra community \cite{SC2} from which many of these methods originate. Computer algebra implementations are being adapted for SMT compliance \cite{SC2}, as CAD was in \cite{KA20}, and there has also been success when they are used directly \cite{FOSKT18}. Most excitingly, there have been some entirely new algorithmic approaches developed.
Perhaps most notable is the NLSAT algorithm of Jovanovi\'{c} and de Moura \cite{JdM12}, introduced in 2012 and since generalised into the model constructing satisfiability calculus (mcSAT) framework \cite{dMJ13}. In mcSAT the search for a Boolean model and a theory model are mutually guided by each other away from unsatisfiable regions. Partial solution candidates for the Boolean structure and for the corresponding theory constraints are constructed incrementally in parallel. Boolean conflicts are generalised using propositional resolution as normal. At the theory level, when an assignment (sample point) is determined not to satisfy all constraints then this is generalised from the point to a region containing the point on which the same constraints fail for the same reason.
In NLSAT, which only considers \verb+QF_NRA+, the samples are generalised to CAD cells\footnote{But not necessarily ones that would be produced within any entire CAD for the problem.}, which are excluded by adding a new clause with the negation of the algebraic description of the cell. In UNSAT cases these additional clauses become mutually exclusive; in effect, the cells generated cover all possible space in $\mathbb{R}^n$. However, as these are not arranged cylindrically, this may not be trivial to check from the output. We note also the more efficient algorithm to compute these single CAD cells in \cite{BK15}, and the new type of decomposition they inspired in \cite{Brown2015}.
Another new approach was presented recently in \cite{ADEK21}: conflict driven cylindrical algebraic covering (CDCAC). Like NLSAT this produces a covering of $\mathbb{R}^n$ to show unsatisfiability. Essentially, a depth first search is performed according to the theory variables. Conflicts over particular assignments are generalised to cells until a covering of a dimension is obtained, and then this covering is generalised to a cell in the dimension below. In this procedure the covering itself is explicit and easy to verify. Further, CDCAC computes the covering relative to a set of constraints to check their consistency independent of the Boolean search, meaning it can be more easily integrated into a CDCL(T)-style SMT solver and combined with its other theory solving modules than NLSAT, which is a solving framework on its own.
Both NLSAT and CDCAC rely on CAD theory to conclude that the generalisations of conflicts from models to cells are valid, and so the verification of such theory is still a barrier to verifiable proofs. But unlike CAD itself, the conflicts that are being generalised are local for both NLSAT and CDCAC. This may allow a simpler path for verification of individual cases based on the particular relationships of the polynomials involved. It was observed in \cite{ADEKT20} that a trace of the computation from CDCAC appears far closer to a human derived proof than any of the other algorithms discussed here. Whether this means it will be more susceptible to machine verification remains to be seen.
\section{Other approaches for QF\_NRA}
We wrote earlier that all solvers tackling \verb+QF_NRA+ in a complete manner rely on CAD based approaches, as these are the only complete methods that have been implemented.
However, we should note that most solvers also employ a variety of incomplete methods for \verb+QF_NRA+ which tend to be far more efficient than CAD based ones and so are attempted first, and may also be used to solve sub-problems or simplify the input to CAD. These include incremental linearisation \cite{CGIRS18c}, interval constraint propagation \cite{TVO17}, virtual substitution \cite{Weispfenning1997a}, subtropical satisfiability \cite{FOSV17} and Gr\"obner bases \cite{HEDP16}.
So, although we think there is potential for verifying output of a cylindrical covering based algorithm, we caution that to obtain fully verified proofs for \verb+QF_NRA+ problems we must take on a greater body of work: to generate proofs for all these methods and furthermore integrate them into or combine them with the CAD proofs.
\end{document} | 2,560 | 2,560 | en |
train | 0.15.0 | \begin{document}
\title{PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation \\
}
\author{\IEEEauthorblockN{Mingzhe Liu$^{*1}$, Han Huang$^{*1}$, Hao Feng$^1$, Leilei Sun\textsuperscript{\Letter}$^{1}$, Bowen Du$^{1}$, Yanjie Fu$^{2}$}
\IEEEauthorblockA{$^1$State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China\\
$^2$Department of Computer Science, University of Central Florida, FL 32816, USA\\
\{mzliu1997, h-huang, pinghao, leileisun, dubowen\}@buaa.edu.cn, [email protected]
}\thanks{$^*\,$Equal contribution. \Letter $\,$ Corresponding author.}
}
\maketitle
\begin{abstract}
Spatiotemporal data mining plays an important role in air quality monitoring, crowd flow modeling, and climate forecasting. However, the originally collected spatiotemporal data in real-world scenarios is usually incomplete due to sensor failures or transmission loss. Spatiotemporal imputation aims to fill the missing values according to the observed values and the underlying spatiotemporal dependence of them.
The previous dominant models impute missing values autoregressively and suffer from the problem of error accumulation.
As emerging powerful generative models, diffusion probabilistic models can be adopted to impute missing values conditioned on observations, avoiding the need to infer missing values from inaccurate historical imputations.
However, the construction and utilization of conditional information are inevitable challenges when applying diffusion models to spatiotemporal imputation.
To address the above issues, we propose a conditional diffusion framework for spatiotemporal imputation with enhanced prior modeling, named PriSTI.
Our proposed framework first provides a conditional feature extraction module to extract coarse yet effective spatiotemporal dependencies from the conditional information as a global context prior. Then, a noise estimation module transforms random noise to realistic values, with the spatiotemporal attention weights calculated by the conditional feature, as well as the consideration of geographic relationships.
PriSTI outperforms existing imputation methods in various missing patterns of different real-world spatiotemporal data, and effectively handles scenarios such as high missing rates and sensor failure.
The implementation code is available at \url{https://github.com/LMZZML/PriSTI}.
\end{abstract}
\begin{IEEEkeywords}
Spatiotemporal Imputation, Diffusion Model, Spatiotemporal Dependency Learning
\end{IEEEkeywords}
\section{Introduction}
Spatiotemporal data is a type of data with intrinsic spatial and temporal patterns, which is widely applied in the real world for tasks such as air quality monitoring \cite{cao2018brits, yi2016st}, traffic status forecasting \cite{li2017diffusion, wu2019graph}, weather prediction \cite{bauer2015quiet} and so on.
However, due to sensor failures and transmission loss \cite{yi2016st}, incompleteness in spatiotemporal data is a common problem, characterized by the randomness of the missing values' positions and the diversity of missing patterns, which results in incorrect analysis of spatiotemporal patterns and further interferes with downstream tasks.
In recent years, extensive research \cite{cao2018brits, liu2019naomi, cini2021filling} has delved into spatiotemporal imputation, with the goal of exploiting spatiotemporal dependencies from available observed data to impute missing values.
\begin{figure*}
\caption{The motivation of our proposed methods. We summarize the existing methods that can be applied to spatiotemporal imputation, and compare our proposed methods with the recent existing methods. The grey shadow represents the missing part, while the rest with blue solid line represents observed values $X$.}
\label{fig:motivation}
\end{figure*}
The early studies applied for spatiotemporal imputation usually impute along the temporal or spatial dimension with statistical and classic machine learning methods, including but not limited to autoregressive moving average (ARMA) \cite{ansley1984estimation, harvey1990forecasting}, expectation-maximization algorithm (EM) \cite{shumway1982approach, nelwamondo2007missing}, k-nearest neighbors (KNN) \cite{trevor2009elements, beretta2016nearest}, etc.
But these methods impute missing values based on strong assumptions such as the temporal smoothness and the similarity between time series, and ignore the complexity of spatiotemporal correlations.
With the development of deep learning, most effective spatiotemporal imputation methods \cite{cao2018brits, yoon2018gain, cini2021filling} use recurrent neural networks (RNNs) as their core, imputing missing values by recursively updating hidden states to capture temporal correlations among existing observations.
Some of them also consider feature correlations \cite{cao2018brits} through multilayer perceptrons (MLPs) or spatial similarities between time series \cite{cini2021filling} through graph neural networks.
However, these approaches inevitably suffer from error accumulation \cite{liu2019naomi}, i.e., inferring missing values from inaccurate historical imputations, and only output deterministic values without reflecting the uncertainty of the imputation.
More recently, diffusion probabilistic models (DPMs) \cite{sohl2015deep, ho2020denoising, song2020score}, emerging powerful generative models with impressive performance on various tasks, have been adopted to impute multivariate time series. These methods start from randomly sampled Gaussian noise and convert the noise into estimates of the missing values \cite{tashiro2021csdi}.
Since diffusion models are flexible in terms of neural network architecture, they can circumvent the error accumulation problem of RNN-based methods by utilizing architectures such as attention mechanisms during imputation,
and they also enjoy a more stable training process than generative adversarial networks (GANs).
However, when applying diffusion models to imputation problems, modeling and introducing conditional information into the diffusion model are inevitable challenges. For spatiotemporal imputation, these challenges become the construction and utilization of conditional information with spatiotemporal dependencies.
Tashiro et al. \cite{tashiro2021csdi} only model temporal and feature dependencies with attention mechanisms during imputation, without considering spatial similarities such as geographic proximity and correlations between time series.
Moreover, they directly combine the conditional information (i.e., observed values) and the perturbed values as the model input during training, which may lead to inconsistency within the input spatiotemporal data and increase the difficulty for the model to learn spatiotemporal dependencies.
To address the above issues, we propose a conditional diffusion framework for SpatioTemporal Imputation with enhanced Prior modeling (PriSTI).
We summarize the existing methods that can be applied to spatiotemporal imputation, and compare the differences between our proposed method and the recent existing methods, as shown in Figure \ref{fig:motivation}.
Since the main challenge of applying diffusion models to spatiotemporal imputation is how to model and utilize the spatiotemporal dependencies in the conditional information when generating missing values, our method reduces the difficulty of learning these dependencies by extracting conditional features from observations as a global context prior.
The imputation process of spatiotemporal data with our method is shown on the right of Figure \ref{fig:motivation}: the trained PriSTI gradually transforms random noise into imputed missing values.
PriSTI takes observed spatiotemporal data and geographic information as input. During training, part of the observed values are randomly erased through a specific mask strategy to serve as the imputation target.
The incomplete observed data is first interpolated to obtain enhanced conditional information for the diffusion model.
For the construction of conditional information, a conditional feature extraction module is provided to extract features with spatiotemporal dependencies from the interpolated information.
Considering that the imputation of missing values depends not only on values at nearby times and in similar time series, but also on geographically surrounding sensors, we design specialized spatiotemporal dependency learning methods. The proposed method comprehensively aggregates spatiotemporal global features and geographic information to fully exploit the explicit and implicit spatiotemporal relationships in different application scenarios.
For the utilization of conditional information, we design a noise estimation module to mitigate the impact of the added noise on spatiotemporal dependency learning. The noise estimation module uses the extracted conditional features as a global context prior to calculate spatiotemporal attention weights, and predicts the added Gaussian noise from the learned spatiotemporal dependencies.
PriSTI performs well in spatiotemporal data scenarios with spatial similarity and feature correlation.
On three real-world datasets from the air quality and traffic domains, our proposed method outperforms existing methods in various missing patterns. Moreover, PriSTI supports downstream tasks through imputation and effectively handles high missing rates and sensor failure.
Our contributions are summarized as follows:
\begin{itemize}
\item We propose PriSTI, a conditional diffusion framework for spatiotemporal imputation, which constructs and utilizes conditional information with spatiotemporal global correlations and geographic relationships.
\item To reduce the difficulty of learning spatiotemporal dependencies, we design a specialized noise prediction model that extracts conditional features from enhanced observations, calculating the spatiotemporal attention weights using the extracted global context prior.
\item Our proposed method achieves the best performance on spatiotemporal data in various fields, and effectively handles application scenarios such as high missing rates and sensor failure.
\end{itemize}
The rest of this paper is organized as follows. We first state the definition of the spatiotemporal imputation problem and briefly introduce the background of diffusion models in Section \ref{sec:problem_def}. Then we introduce how the diffusion models are applied to spatiotemporal imputation, as well as the details of our proposed framework in Section \ref{sec:method}. Next, we evaluate the performance of our proposed method in various missing patterns in Section \ref{sec:exp}. Finally, we review the related work for spatiotemporal imputation in Section \ref{sec:related_work} and conclude our work in Section \ref{sec:conclusion}.
\section{Preliminaries}\label{sec:problem_def}
In this section, we introduce some key definitions in spatiotemporal imputation, state the problem definition and briefly introduce the diffusion probabilistic models.
\textbf{Spatiotemporal data. }
We formalize spatiotemporal data as a sequence $X_{1:L}=\{X_1, X_2,\cdots, X_L\}\in \mathbb{R}^{N\times L}$ over consecutive time steps, where $X_l\in\mathbb{R}^N$ contains the values observed at time $l$ by $N$ observation nodes, such as air monitoring stations or traffic sensors. Not all observation nodes have observed values at time $l$. We use a binary mask $M_l\in\{0,1\}^N$ to represent the observed mask at time $l$, where $m_l^{i}=1$ indicates that the value of node $i$ is observed and $m_l^{i}=0$ indicates that it is missing.
Since there is no ground truth for real missing data in practice, we manually select the imputation target $\widetilde{X}\in \mathbb{R}^{N\times L}$ from the available observed data for training and evaluation, and identify it with the binary mask $\widetilde{M}\in \{0,1\}^{N\times L}$.
\textbf{Adjacency matrix. }
The observation nodes can be formalized as a graph $G=\langle V,E\rangle$, where $V$ is the node set and $E$ is the edge set measuring the pre-defined spatial relationship between nodes, such as geographical distance.
We denote by $A\in \mathbb{R}^{N\times N}$ the adjacency matrix of the graph $G$, representing the geographic information. In this work, we only consider the static graph setting, i.e., the geographic information $A$ does not change over time.
\textbf{Problem statement. }
Given the incomplete observed spatiotemporal data $X$ and geographical information $A$, our task of spatiotemporal imputation is to estimate the missing values or corresponding distributions in spatiotemporal data $X_{1:L}$.
\textbf{Diffusion probabilistic models. }
Diffusion probabilistic models \cite{dickstein15, ho2020denoising} are deep generative models that have achieved cutting-edge results in image synthesis \cite{rombach2022high}, audio generation \cite{kong2020diffwave}, and other fields; they generate samples consistent with the original data distribution by adding noise to samples and learning the reverse denoising process.
The diffusion probabilistic model can be formalized as two Markov chain processes of length $T$, named the \textit{diffusion process} and the \textit{reverse process}.
Let $\widetilde{X}^0\sim p_{data}$, where $p_{data}$ is the clean data distribution, and let $\widetilde{X}^t$ be the sampled latent variables, where $t=1,\cdots,T$ is the diffusion step and $\widetilde{X}^T\sim \mathcal{N}(0, \bm{I})$, with $\mathcal{N}$ denoting the Gaussian distribution. The diffusion process gradually adds Gaussian noise to $\widetilde{X}^0$ until the noisy sample is close to $\widetilde{X}^T$, while the reverse process denoises $\widetilde{X}^t$ to recover $\widetilde{X}^0$.
More details about applying the diffusion models on spatiotemporal imputation are introduced in Section \ref{sec:method}.
\begin{table}[t]
\centering
\caption{Important notations and corresponding descriptions.}
\label{tab:notation}
\setlength{\tabcolsep}{1mm}
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{c|l}
\toprule
Notations & Descriptions\cr
\midrule
$\bm{X}$ & Spatiotemporal data \cr
$\bm{\widetilde{X}}$ & Manually selected imputation target \cr
$N$ & The number of the observation nodes \cr
$L, l$ & Length of the observed time and observed time step \cr
$T, t$ & Length of the diffusion steps and diffusion step \cr
$\bm{A}$ & Adjacency matrix of geographic information \cr
$\mathcal{X}$ & Interpolated conditional information \cr
$\beta_t, \alpha_t, \bar{\alpha}_t$ & Constant hyperparameters of the diffusion model \cr
$\epsilon_{\theta}$ & Noise prediction model \cr
\bottomrule
\end{tabular}}
\end{table} | 3,692 | 24,635 | en |
train | 0.15.1 | \section{Methodology}\label{sec:method}
The pipeline of our proposed spatiotemporal imputation framework, PriSTI, is shown in Figure \ref{fig:framework}. PriSTI adopts a conditional diffusion framework to exploit spatiotemporal global correlations and geographic relationships for imputation. To address the challenge of constructing and utilizing conditional information when imputing with a diffusion model, we design a specialized noise prediction model that enhances and extracts conditional features.
In this section, we first introduce how the diffusion models are applied to spatiotemporal imputation, and then present the detailed architecture of the noise prediction model, which is the key to the success of diffusion models.
\subsection{Diffusion Model for Spatiotemporal Imputation}\label{sec:ddpm4imp}
To apply the diffusion models on spatiotemporal imputation, we regard the spatiotemporal imputation problem as a conditional generation task.
Previous studies \cite{tashiro2021csdi} have shown the ability of conditional diffusion probabilistic models for multivariate time series imputation. The spatiotemporal imputation task can be regarded as modeling the conditional probability distribution $q(\widetilde{X}_{1:L}^0|X_{1:L})$, where the imputation of $\widetilde{X}_{1:L}^0$ is conditioned on the observed values $X_{1:L}$.
However, these studies impute without considering spatial relationships and simply utilize the observed values as conditional information. In this section, we explain how our proposed framework imputes spatiotemporal data with the diffusion model.
In the following discussion, we use the superscript $t\in\{0, 1, \cdots, T\}$ to represent the diffusion step, and omit the subscript $1:L$ for conciseness.
As mentioned in Section \ref{sec:problem_def}, the diffusion probabilistic model includes the \textit{diffusion process} and \textit{reverse process}.
The \textit{diffusion process} for spatiotemporal imputation is independent of the conditional information; it adds Gaussian noise to the original data of the imputation part, formalized as:
\begin{equation}
\begin{aligned}
& q(\widetilde{X}^{1:T}|\widetilde{X}^{0})=\prod_{t=1}^T q(\widetilde{X}^{t}|\widetilde{X}^{t-1}), \\
& q(\widetilde{X}^{t}|\widetilde{X}^{t-1})=\mathcal{N}(\widetilde{X}^t; \sqrt{1-\beta_t}\widetilde{X}^{t-1}, \beta_t \bm{I}),
\end{aligned}
\end{equation}
where $\beta_t$ is a small constant hyperparameter that controls the variance of the added noise.
Here, $\widetilde{X}^t$ is sampled by $\widetilde{X}^t=\sqrt{\bar{\alpha}_t}\widetilde{X}^0+\sqrt{1-\bar{\alpha}_t}\epsilon$, where $\alpha_t=1-\beta_t$, $\bar{\alpha}_t=\prod_{i=1}^t\alpha_i$, and $\epsilon$ is the sampled standard Gaussian noise. When $T$ is large enough, $q(\widetilde{X}^T|\widetilde{X}^0)$ is close to the standard normal distribution.
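To make the closed-form sampling concrete, the following minimal NumPy sketch draws $\widetilde{X}^t$ directly from $q(\widetilde{X}^t|\widetilde{X}^0)$; the linear schedule and the toy shapes here are placeholders for illustration only (the schedule actually used is given in Section \ref{sec:exp_set}).
\begin{verbatim}
import numpy as np

def forward_diffuse(x0, t, alpha_bar):
    """Sample x^t ~ N(sqrt(abar_t) * x^0, (1 - abar_t) * I)."""
    eps = np.random.randn(*x0.shape)   # standard Gaussian noise
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

T = 50
beta = np.linspace(1e-4, 0.2, T)       # placeholder schedule
alpha_bar = np.cumprod(1.0 - beta)
x0 = np.random.randn(36, 24)           # toy (N, L) imputation target
xt, eps = forward_diffuse(x0, t=25, alpha_bar=alpha_bar)
\end{verbatim}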
The \textit{reverse process} for spatiotemporal imputation gradually converts random noise into missing values with spatiotemporal consistency, based on the conditional information. In this work, the reverse process is conditioned on the interpolated conditional information $\mathcal{X}$, which enhances the observed values, as well as on the geographical information $A$. The reverse process can be formalized as:
\begin{equation}\label{eq:reverse_process}
\begin{aligned}
& p_{\theta}(\widetilde{X}^{0:T-1}|\widetilde{X}^{T}, \mathcal{X}, A)=\prod_{t=1}^T p_{\theta}(\widetilde{X}^{t-1}|\widetilde{X}^{t}, \mathcal{X}, A), \\
& p_{\theta}(\widetilde{X}^{t-1}|\widetilde{X}^{t}, \mathcal{X}, A)=\mathcal{N}(\widetilde{X}^{t-1}; \mu_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t), \sigma_t^2 \bm{I}).
\end{aligned}
\end{equation}
Ho et al. \cite{ho2020denoising} introduce an effective parameterization of $\mu_{\theta}$ and $\sigma_t^2$. In this work, they can be defined as:
\begin{equation}\label{eq:mu_sigma}
\begin{aligned}
& \mu_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t)=\frac{1}{\sqrt{\alpha_t}}\left(\widetilde{X}^{t}-\frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\epsilon_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t)\right), \\
& \sigma_t^2=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t,
\end{aligned}
\end{equation}
where $\epsilon_{\theta}$ is a neural network parameterized by $\theta$, which takes the noisy sample $\widetilde{X}^{t}$, the conditional information $\mathcal{X}$ and the adjacency matrix $A$ as input and predicts the added noise $\epsilon$ on the imputation target to restore the original information of the noisy sample.
Therefore, $\epsilon_{\theta}$ is often called the \textit{noise prediction model}. It does not restrict the network architecture, and this flexibility allows us to design a model suited to spatiotemporal imputation.
\begin{figure}
\caption{The pipeline of PriSTI. PriSTI takes observed values and geographic information as input. It first interpolates observations and models the global context prior by the conditional feature extraction module, and then utilizes the noise estimation module to predict noise with help of the conditional information.}
\label{fig:framework}
\end{figure}
\textbf{Training Process.}
During training, we mask the input observed values $X$ through a random mask strategy to obtain the imputation target $\widetilde{X}^0$, while the remaining observations serve as the conditional information for imputation. Similar to CSDI \cite{tashiro2021csdi}, we provide point, block and hybrid mask strategies (see details in Section \ref{sec:exp_set}). The mask strategies produce different masks for each training sample.
After obtaining the training imputation target $\widetilde{X}^0$ and the interpolated conditional information $\mathcal{X}$, the training objective of spatiotemporal imputation is:
\begin{equation}\label{eq:loss}
\mathcal{L}(\theta)=\mathbb{E}_{\widetilde{X}^0\sim q(\widetilde{X}^0), \epsilon\sim\mathcal{N}(0,I)}\left\Vert\epsilon-\epsilon_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t)\right\Vert^2.
\end{equation}
Therefore, in each iteration of the training process, we sample the Gaussian noise $\epsilon$ and the diffusion step $t$, construct the noisy target $\widetilde{X}^t$, and obtain the interpolated conditional information $\mathcal{X}$ from the remaining observations.
More details on the training process of our proposed framework are given in Algorithm \ref{alg:train}.
\begin{algorithm}[t]
\caption{Training process of PriSTI.}
\label{alg:train}
\hspace*{0.02in} {\bf Input:} Incomplete observed data $X$, the adjacency matrix $A$, the number of iterations $N_{it}$, the number of diffusion steps $T$, the noise level sequence $\{\bar{\alpha}_t\}$. \\
\hspace*{0.02in} {\bf Output:} Optimized noise prediction model $\epsilon_{\theta}$.
\begin{algorithmic}[1]
\For {$i=1$ \text{to} $N_{it}$}
\State $\widetilde{X}^0 \gets \text{Mask}(X)$;
\State $\mathcal{X} \gets \text{Interpolate}(\widetilde{X}^0)$;
\State Sample $t \sim \text{Uniform}(\{1,\cdots,T\})$, $\epsilon\sim\mathcal{N}(0,\textbf{\text{I}})$;
\State $\widetilde{X}^t \gets \sqrt{\bar{\alpha}_t}\widetilde{X}^0+\sqrt{1-\bar{\alpha}_t}\epsilon$;
\State Update $\theta$ with the gradient $\nabla_{\theta}\left\Vert\epsilon-\epsilon_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t)\right\Vert^2$.
\EndFor
\end{algorithmic}
\end{algorithm}
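As a complement to the pseudocode, the PyTorch-style sketch below implements a single iteration of Algorithm \ref{alg:train}; \texttt{eps\_theta}, \texttt{mask\_fn} and \texttt{interp\_fn} are placeholders for the noise prediction model, the mask strategy and the linear interpolation, \texttt{alpha\_bar} is a tensor of the $\bar{\alpha}_t$ values, and the per-element mean in the loss is an implementation convenience.
\begin{verbatim}
import torch

def training_step(eps_theta, mask_fn, interp_fn, x, adj, alpha_bar, T):
    """One iteration of Algorithm 1 (sketch)."""
    x0 = mask_fn(x)                        # erase observations -> target
    cond = interp_fn(x0)                   # interpolated conditional info
    t = torch.randint(1, T + 1, (1,)).item()
    eps = torch.randn_like(x0)
    a = alpha_bar[t - 1]
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps
    loss = ((eps - eps_theta(xt, cond, adj, t)) ** 2).mean()
    loss.backward()                        # optimizer.step() follows
    return loss
\end{verbatim}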
\textbf{Imputation Process.}
When using the trained noise prediction model $\epsilon_{\theta}$ for imputation, the observed mask $\widetilde{M}$ of the data is available, so the imputation target $\widetilde{X}$ consists of all missing values in the spatiotemporal data, and the interpolated conditional information $\mathcal{X}$ is constructed from all observed values.
The model receives $\widetilde{X}^T$ and $\mathcal{X}$ as inputs and generates samples of the imputation results through the process in Equation (\ref{eq:reverse_process}).
More details on the imputation process of our proposed framework are shown in Algorithm \ref{alg:impute}.
\begin{algorithm}[t]
\caption{Imputation process with PriSTI.}
\label{alg:impute}
\hspace*{0.02in} {\bf Input:} A sample of incomplete observed data $X$, the adjacency matrix $A$, the number of diffusion steps $T$, the optimized noise prediction model $\epsilon_{\theta}$.\\
\hspace*{0.02in} {\bf Output:} Missing values of the imputation target $\widetilde{X}^0$.
\begin{algorithmic}[1]
\State $\mathcal{X} \gets \text{Interpolate}(X)$;
\State Set $\widetilde{X}^T\sim\mathcal{N}(0, \textbf{\text{I}})$;
\For {$t=T$ \text{to} $1$}
\State $\mu_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t) \gets \frac{1}{\sqrt{\alpha_t}}\left(\widetilde{X}^{t}-\frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\epsilon_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t)\right)$
\State Sample $\widetilde{X}^{t-1} \sim \mathcal{N}(\mu_{\theta}(\widetilde{X}^{t}, \mathcal{X}, A, t), \sigma_t^2 \bm{I})$
\EndFor
\end{algorithmic}
\end{algorithm}
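For completeness, Algorithm \ref{alg:impute} can be transcribed into the following PyTorch-style sketch; \texttt{eps\_theta} and \texttt{interp\_fn} are placeholders as before, and \texttt{beta} is the noise level tensor $(\beta_1,\ldots,\beta_T)$.
\begin{verbatim}
import torch

@torch.no_grad()
def impute(eps_theta, interp_fn, x_obs, adj, beta):
    """Reverse process of Algorithm 2 (sketch)."""
    alpha = 1.0 - beta
    alpha_bar = torch.cumprod(alpha, dim=0)
    cond = interp_fn(x_obs)
    xt = torch.randn_like(x_obs)           # X^T ~ N(0, I)
    for t in range(len(beta), 0, -1):      # t = T, ..., 1
        eps_hat = eps_theta(xt, cond, adj, t)
        mu = (xt - beta[t-1] / (1 - alpha_bar[t-1]).sqrt() * eps_hat) \
             / alpha[t-1].sqrt()
        if t > 1:
            var = (1 - alpha_bar[t-2]) / (1 - alpha_bar[t-1]) * beta[t-1]
            xt = mu + var.sqrt() * torch.randn_like(xt)
        else:
            xt = mu
    return xt                              # estimated X^0
\end{verbatim}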
Through the above framework, the diffusion model can be applied to spatiotemporal imputation with conditional information. However, constructing and utilizing conditional information with spatiotemporal dependencies remains challenging. A specialized noise prediction model $\epsilon_{\theta}$ is therefore needed to reduce the difficulty of learning spatiotemporal dependencies from noisy information, which is introduced in the next section.
train | 0.15.2 | \subsection{Design of Noise Prediction Model}
In this section, we illustrate how to design the noise prediction model $\epsilon_{\theta}$ for spatiotemporal imputation.
Specifically, we first interpolate the observed values to obtain enhanced coarse conditional information. Then, a \textit{conditional feature extraction module} is designed to model the spatiotemporal correlations in the coarse interpolated information. The output of the conditional feature extraction module is used in the designed \textit{noise estimation module} to calculate the attention weights, which provides a better global context prior for learning spatiotemporal dependencies.
\subsubsection{Conditional Feature Extraction Module}
\begin{figure}
\caption{The architecture of the conditional feature extraction module (left) and noise estimation module (right). Both modules utilize the same components, including a temporal attention $\text{Attn}_{tem}(\cdot)$, a spatial attention $\text{Attn}_{spa}(\cdot)$ and a message passing neural network $\varphi_{\text{MP}}(\cdot)$, but organize them in different architectures.}
\label{fig:STDL}
\end{figure}
The conditional feature extraction module is dedicated to modeling the conditional information when the diffusion model is applied to spatiotemporal imputation.
According to the diffusion model for imputation described above, $\epsilon_{\theta}$ takes the conditional information $\mathcal{X}$ and the noisy information $\widetilde{X}^t$ as input. Previous studies such as CSDI \cite{tashiro2021csdi} regard the observed values as conditional information and take the concatenation of conditional information and perturbed values as input, distinguished only by a binary mask.
However, the trend of the time series in the imputation target is unstable due to the randomness of the perturbed values, which may cause the noisy sample to have a trend inconsistent with the original time series (such as $\widetilde{X}_{1:L}^t$ in Figure \ref{fig:framework}), especially when the diffusion step $t$ is close to $T$.
Although CSDI utilizes two different Transformer layers to capture temporal and feature dependencies, the mixture of conditional and noisy information increases the learning difficulty of the noise prediction model, which cannot be resolved by a simple binary mask identifier.
To address the above problem, we first enhance the observed values for conditional feature extraction, expecting the designed model to learn spatiotemporal dependencies based on this enhanced information.
In particular, inspired by some spatiotemporal forecasting works based on temporal continuity \cite{choi2022graph}, we apply linear interpolation to the time series of each node to initially construct a coarse yet effective interpolated conditional information $\mathcal{X}$ for denoising.
Intuitively, this interpolation does not introduce randomness to the time series, while also retaining a certain spatiotemporal consistency.
The test results of linear interpolation on the air quality and traffic speed datasets (see Table \ref{tab:overallmae}) show that the spatiotemporal information completed by linear interpolation is sufficient to serve as coarse conditional information.
Moreover, the fast computation of linear interpolation satisfies the training requirements of real-time construction under random mask strategies in our framework.
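A minimal sketch of this interpolation step is shown below, assuming observations stored as an $(N, L)$ array with a binary mask; note that \texttt{np.interp} pads the sequence boundaries with the nearest observed value.
\begin{verbatim}
import numpy as np

def linear_interpolate(x, mask):
    """Per-node linear interpolation along time.
    x, mask: arrays of shape (N, L); mask == 1 marks observed values."""
    out = x.copy()
    t = np.arange(x.shape[1])
    for i in range(x.shape[0]):
        obs = mask[i].astype(bool)
        if obs.any():                  # nodes with no data are skipped
            out[i] = np.interp(t, t[obs], x[i, obs])
    return out
\end{verbatim}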
Although linear interpolation completes the observed information simply and efficiently, it only describes linear uniform change over time, without modeling nonlinear temporal relationships or spatial correlations.
Therefore, we design a learnable module $\gamma(\cdot)$, named the Conditional Feature Extraction Module, to model a conditional feature $H^{pri}$ with spatiotemporal information as a global context prior.
The module $\gamma(\cdot)$ takes the interpolated conditional information $\mathcal{X}$ and the adjacency matrix $A$ as input, extracts the spatiotemporal dependencies from $\mathcal{X}$, and outputs $H^{pri}$ as the global context for calculating the spatiotemporal attention weights in noise prediction.
In particular, the conditional feature $H^{pri}\in\mathbb{R}^{N\times L\times d}$ is obtained by $H^{pri}=\gamma(\mathcal{H}, A)$, where $\mathcal{H}=\text{Conv}(\mathcal{X})$ and $\text{Conv}(\cdot)$ is $1\times1$ convolution, $\mathcal{H}\in \mathbb{R}^{N\times L\times d}$, and $d$ is the channel size.
The conditional feature extraction module $\gamma(\cdot)$ comprehensively combines the spatiotemporal global correlations and geographic dependency, as shown in the left of Figure \ref{fig:STDL}, which is formalized as:
\begin{equation}\label{eq:gps}
\begin{aligned}
& H^{pri} = \gamma(\mathcal{H}, A)=\text{MLP}(\varphi_{\text{SA}}(\mathcal{H})+\varphi_{\text{TA}}(\mathcal{H})+\varphi_{\text{MP}}(\mathcal{H}, A)),\\
& \varphi_{\text{SA}}(\mathcal{H})=\text{Norm}(\text{Attn}_{spa}(\mathcal{H})+\mathcal{H}),\\
& \varphi_{\text{TA}}(\mathcal{H})=\text{Norm}(\text{Attn}_{tem}(\mathcal{H})+\mathcal{H}),\\
& \varphi_{\text{MP}}(\mathcal{H}, A)=\text{Norm}(\text{MPNN}(\mathcal{H}, A)+\mathcal{H}),\\
\end{aligned}
\end{equation}
where $\text{Attn}(\cdot)$ represents the global attention, and the subscripts $spa$ and $tem$ represent spatial attention and temporal attention respectively. We use the dot-product multi-head self-attention in Transformer \cite{vaswani2017attention} to implement $\text{Attn}(\cdot)$. And $\text{MPNN}(\cdot)$ represents the spatial message passing neural network, which can be implemented by any graph neural network. We adopt the graph convolution module from Graph Wavenet \cite{wu2019graph}, whose adjacency matrix includes a bidirectional distance-based matrix and an adaptively learnable matrix.
The extracted conditional feature $H^{pri}$ solves the problem of constructing conditional information: it contains no added Gaussian noise and, unlike the raw observed values, encodes temporal dependencies, spatial global correlations and geographic dependencies. To address the remaining challenge of utilizing conditional information, $H^{pri}$ serves as a coarse prior to guide the learning of spatiotemporal dependencies, as introduced in the next section.
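A PyTorch-style sketch of $\gamma(\cdot)$ in Eq. (\ref{eq:gps}) is given below. The attention and MPNN submodules are passed in as black boxes mapping $(N, L, d)$ features to the same shape, and the use of \texttt{LayerNorm} for $\text{Norm}(\cdot)$ is an assumption of the sketch.
\begin{verbatim}
import torch.nn as nn

class ConditionalFeatureExtraction(nn.Module):
    """Sketch of gamma(.): H^pri = MLP(phi_SA + phi_TA + phi_MP)."""
    def __init__(self, d, attn_tem, attn_spa, mpnn):
        super().__init__()
        self.attn_tem, self.attn_spa, self.mpnn = attn_tem, attn_spa, mpnn
        self.norm = nn.ModuleList([nn.LayerNorm(d) for _ in range(3)])
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                 nn.Linear(d, d))

    def forward(self, h, adj):
        sa = self.norm[0](self.attn_spa(h) + h)   # phi_SA
        ta = self.norm[1](self.attn_tem(h) + h)   # phi_TA
        mp = self.norm[2](self.mpnn(h, adj) + h)  # phi_MP
        return self.mlp(sa + ta + mp)             # H^pri
\end{verbatim}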
\subsubsection{Noise Estimation Module}
The noise estimation module is dedicated to the utilization of conditional information when the diffusion model is applied to spatiotemporal imputation.
Since the information in the noisy sample may deviate widely from the real spatiotemporal distribution due to the randomness of the Gaussian noise, it is difficult to learn spatiotemporal dependencies directly from the mixture of conditional and noisy information.
Our proposed noise estimation module captures spatiotemporal global correlations and geographical relationships with a specialized attention mechanism, which reduces the difficulty of learning spatiotemporal dependencies caused by the sampled noise.
Specifically, the inputs of the noise estimation module include two parts: the noisy information $H^{in}=\text{Conv}(\mathcal{X} || \widetilde{X}^t)$, which consists of the interpolated information $\mathcal{X}$ and the noisy sample $\widetilde{X}^t$, and the prior information, including the conditional feature $H^{pri}$ and the adjacency matrix $A$.
To comprehensively consider the spatiotemporal global correlation and geographic relationship of the missing data, the temporal features $H^{tem}$ are first learned through a temporal dependency learning module $\gamma_{\mathcal{T}}(\cdot)$, and then the temporal features are aggregated through a spatial dependency learning module $\gamma_{\mathcal{S}}(\cdot)$.
The architecture of the noise estimation module is shown as the right of Figure \ref{fig:STDL}, which is formalized as follows:
\begin{equation}\label{eq:gps_deep}
\begin{aligned}
& H^{tem}=\gamma_{\mathcal{T}}(H^{in})=\text{Attn}_{tem}(H^{in}), \\
& H^{spa}=\gamma_{\mathcal{S}}(H^{tem}, A)=\text{MLP}(\varphi_{\text{SA}}(H^{tem})+\varphi_{\text{MP}}(H^{tem}, A)),
\end{aligned}
\end{equation}
where $\text{Attn}_{tem}(\cdot)$, $\varphi_{\text{SA}}(\cdot)$ and $\varphi_{\text{MP}}(\cdot)$ are same as the components in Equation (\ref{eq:gps}), which are used to capture spatiotemporal global attention and geographic similarity, and $H^{tem}, H^{spa}\in \mathbb{R}^{N \times L \times d}$ are the outputs of temporal and spatial dependencies learning modules.
However, in Eq. (\ref{eq:gps_deep}), spatiotemporal dependency learning is performed on the mixture of conditional and noisy information, i.e., $H^{in}$. When the diffusion step $t$ approaches $T$, the noisy sample $\widetilde{X}^t$ increases the difficulty of learning spatiotemporal dependencies. To reduce the impact of $\widetilde{X}^t$ while still converting it into Gaussian noise,
we change the input of the attention components $\text{Attn}_{tem}(\cdot)$ and $\text{Attn}_{spa}(\cdot)$, which calculate the attention weights by using the conditional feature $H^{pri}$.
In particular, take temporal attention $\text{Attn}_{tem}(\cdot)$ as an example, we rewrite the dot-product attention $\text{Attn}_{tem}(Q_{\mathcal{T}},K_{\mathcal{T}},V_{\mathcal{T}})=\text{softmax}(\frac{Q_{\mathcal{T}}K_{\mathcal{T}}^T}{\sqrt{d}})\cdot V_{\mathcal{T}}$ as $\text{Attn}_{tem}(\mathcal{A}_{\mathcal{T}}, V_{\mathcal{T}}) = \mathcal{A}_{\mathcal{T}} \cdot V_{\mathcal{T}}$, where $\mathcal{A}_{\mathcal{T}}=\text{softmax}(\frac{Q_{\mathcal{T}}K_{\mathcal{T}}^T}{\sqrt{d}})$ is the attention weight.
We calculate the attention weight $\mathcal{A}_{\mathcal{T}}$ by the conditional feature $H^{pri}$, i.e., we set the input $Q_{\mathcal{T}}$, $K_{\mathcal{T}}$ and $V_{\mathcal{T}}$ as:
\begin{equation}\label{eq:cross_att}
Q_{\mathcal{T}}=H^{pri}\cdot W^Q_{\mathcal{T}}, K_{\mathcal{T}}=H^{pri}\cdot W^K_{\mathcal{T}}, V_{\mathcal{T}}=H^{in}\cdot W^V_{\mathcal{T}},
\end{equation}
where $W^Q_{\mathcal{T}}, W^K_{\mathcal{T}}, W^V_{\mathcal{T}}\in\mathbb{R}^{d\times d}$ are learnable projection parameters.
The spatial attention $\text{Attn}_{spa}(\mathcal{A}_{\mathcal{S}}, V_{\mathcal{S}})$ calculates the attention weight in the same way:
\begin{equation}\label{eq:cross_att_spa}
Q_{\mathcal{S}}=H^{pri}\cdot W^Q_{\mathcal{S}}, K_{\mathcal{S}}=H^{pri}\cdot W^K_{\mathcal{S}}, V_{\mathcal{S}}=H^{tem}\cdot W^V_{\mathcal{S}}.
\end{equation}
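To make the modified attention concrete, the single-head sketch below computes the attention weights from $H^{pri}$ while taking the values from the noisy feature map, as in Eqs. (\ref{eq:cross_att}) and (\ref{eq:cross_att_spa}); the multi-head and projection details are omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def prior_attention(h_pri, h_val, Wq, Wk, Wv):
    """Q, K from the prior H^pri; V from the noisy features (one head)."""
    Q, K, V = h_pri @ Wq, h_pri @ Wk, h_val @ Wv
    A = F.softmax(Q @ K.transpose(-1, -2) / Q.shape[-1] ** 0.5, dim=-1)
    return A @ V
\end{verbatim}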
The noise estimation module consists of Equations (\ref{eq:gps_deep})--(\ref{eq:cross_att_spa}); it shares the attention and MPNN components of the conditional feature extraction module, but differs in input and architecture.
The conditional feature extraction module models spatiotemporal dependencies only from the interpolated conditional information $\mathcal{X}$ in a single layer, so it extracts information through a wide network architecture, i.e., directly aggregates the spatiotemporal global correlation and geographic dependency.
Since the noise estimation module needs to convert the noisy sample to standard Gaussian distribution in multiple layers, it learns spatiotemporal dependencies from the noisy samples with help of the conditional feature through a deep network architecture, i.e., extracts the temporal correlation first and aggregates the temporal feature through the spatial global correlation and geographic information.
In addition, when the number of nodes in the spatiotemporal data is large, the computational cost of spatial global attention is high: the time complexities of its similarity calculation and weighted summation are both $O(N^2d)$.
Therefore, we map $N$ nodes to $k$ virtual nodes, where $k<N$.
We rewrite the $K_{\mathcal{S}}$ and $V_{\mathcal{S}}$ in Equation (\ref{eq:cross_att_spa}) when attention is used for spatial dependencies learning as:
\begin{equation}\label{eq:node_samp}
K_{\mathcal{S}}=H^{pri}\cdot P^K_{\mathcal{S}} W^K_{\mathcal{S}} , V_{\mathcal{S}}= H^{tem}\cdot P^V_{\mathcal{S}} W^V_{\mathcal{S}},
\end{equation}
where $P^K_{\mathcal{S}}, P^V_{\mathcal{S}}\in\mathbb{R}^{N\times d}$ are the downsampling parameters. The time complexity of the modified spatial attention is thereby reduced to $O(Nkd)$.
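The sketch below illustrates this virtual-node attention; writing the downsampling parameters as $(N, k)$ projections of the node axis is a shape convention we assume for illustration.
\begin{verbatim}
import torch
import torch.nn.functional as F

def virtual_node_attention(h_pri, h_tem, Wq, Wk, Wv, Pk, Pv):
    """Spatial attention over k virtual nodes: the softmax is (N x k)
    instead of (N x N), so the cost drops to O(N * k * d).
    h_pri, h_tem: (L, N, d); Pk, Pv: (N, k)."""
    Q = h_pri @ Wq                                    # (L, N, d)
    K = torch.einsum('lnd,nk->lkd', h_pri @ Wk, Pk)   # (L, k, d)
    V = torch.einsum('lnd,nk->lkd', h_tem @ Wv, Pv)   # (L, k, d)
    A = F.softmax(Q @ K.transpose(-1, -2) / Q.shape[-1] ** 0.5, dim=-1)
    return A @ V                                      # (L, N, d)
\end{verbatim}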
\subsubsection{Auxiliary Information and Output}
We add auxiliary information $U=\text{MLP}(U_{tem}, U_{spa})$ to both the conditional feature extraction module and the noise estimation module to help the imputation, where $U_{tem}$ is the sine-cosine temporal encoding \cite{vaswani2017attention}, and $U_{spa}$ is learnable node embedding.
We expand and concatenate $U_{tem}\in\mathbb{R}^{L\times 128}$ and $U_{spa}\in\mathbb{R}^{N\times 16}$, and obtain auxiliary information $U\in\mathbb{R}^{N\times L\times d}$ that can be input to the model through an MLP layer.
The noise estimation module stacks multiple layers. The output $H^{spa}$ of each layer is divided into a residual connection and a skip connection after a gated activation unit; the residual connection serves as the input of the next layer, while the skip connections of all layers are summed and passed through two $1\times 1$ convolution layers to obtain the output of the noise prediction model $\epsilon_{\theta}$. The output only retains the values of the imputation target, and the loss is calculated by Equation (\ref{eq:loss}).
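A sketch of this per-layer output handling is shown below; the tanh--sigmoid gate follows the WaveNet/DiffWave convention, which we assume here, and the node and time axes are flattened for the $1\times 1$ convolutions.
\begin{verbatim}
import torch
import torch.nn as nn

class GatedSkipLayer(nn.Module):
    """Gated activation, then residual and skip 1x1 convolutions."""
    def __init__(self, d):
        super().__init__()
        self.gate = nn.Conv1d(d, 2 * d, 1)    # filter and gate channels
        self.res = nn.Conv1d(d, d, 1)
        self.skip = nn.Conv1d(d, d, 1)

    def forward(self, h):                     # h: (batch, d, N * L)
        f, g = self.gate(h).chunk(2, dim=1)
        y = torch.tanh(f) * torch.sigmoid(g)
        return self.res(y) + h, self.skip(y)  # residual, skip
\end{verbatim}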
train | 0.15.3 | \section{Experiments}\label{sec:exp}
In this section, we first introduce the datasets, baselines, evaluation metrics and settings of our experiments. Then, we evaluate our proposed framework PriSTI through extensive spatiotemporal imputation experiments, answering the following research questions:
\begin{itemize}
\item \textbf{RQ1}: Can PriSTI provide superior imputation performance in various missing patterns compared to several state-of-the-art baselines?
\item \textbf{RQ2}: How is the imputation performance for PriSTI for different missing rate of spatiotemporal data?
\item \textbf{RQ3}: Does PriSTI benefit from the construction and utilization of the conditional information?
\item \textbf{RQ4}: Does PriSTI extract the temporal and spatial dependencies from the observed spatiotemporal data?
\item \textbf{RQ5}: Can PriSTI impute the time series for the unobserved sensors only based on the geographic location?
\end{itemize}
\subsection{Dataset}
We conduct experiments on three real-world datasets: an air quality dataset AQI-36, and two traffic speed datasets METR-LA and PEMS-BAY.
AQI-36 \cite{yi2016st} contains hourly sampled PM2.5 observations from 36 stations in Beijing, covering a total of 12 months. METR-LA \cite{li2017diffusion} contains traffic speed collected by 207 sensors on the highways of Los Angeles County \cite{jagadish2014big} over 4 months, and PEMS-BAY \cite{li2017diffusion} contains traffic speed collected by 325 sensors on highways in the San Francisco Bay Area over 6 months. Both traffic datasets are sampled every 5 minutes.
For the geographic information, the adjacency matrix is obtained from the geographic distances between monitoring stations or sensors, following previous work \cite{li2017diffusion}. We build the adjacency matrices of the three datasets using a thresholded Gaussian kernel \cite{shuman2013emerging}.
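A sketch of this construction from a pairwise distance matrix is shown below; the threshold value is an assumption, and $\sigma$ is taken as the standard deviation of the distances, following the usual convention for this kernel.
\begin{verbatim}
import numpy as np

def gaussian_kernel_adjacency(dist, threshold=0.1):
    """A_ij = exp(-(d_ij / sigma)^2), zeroed below the threshold."""
    sigma = dist.std()
    A = np.exp(-np.square(dist / sigma))
    A[A < threshold] = 0.0
    return A
\end{verbatim}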
\subsection{Baselines}
To evaluate the performance of our proposed method, we compare it with classic models and state-of-the-art methods for spatiotemporal imputation. The baselines include statistical methods (MEAN, DA, KNN, Lin-ITP), classic machine learning methods (MICE, VAR, KF), low-rank matrix factorization methods (TRMF, BATF), deep autoregressive methods (BRITS, GRIN) and deep generative methods (V-RIN, GP-VAE, rGAIN, CSDI).
We briefly introduce the baseline methods as follows:
(1) \textbf{MEAN}: directly uses the historical average value of each node for imputation.
(2) \textbf{DA}: imputes missing values with the daily average of the corresponding time steps.
(3) \textbf{KNN}: imputes with the average value of nearby nodes based on geographic distance.
(4) \textbf{Lin-ITP}: linear interpolation of the time series of each node, as implemented by torchcde\footnote{https://github.com/patrick-kidger/torchcde}.
(5) \textbf{KF}: imputes the time series of each node with a Kalman filter, as implemented by filterpy\footnote{https://github.com/rlabbe/filterpy}.
(6) \textbf{MICE} \cite{white2011multiple}: multiple imputation by chained equations.
(7) \textbf{VAR}: a vector autoregressive single-step predictor.
(8) \textbf{TRMF} \cite{yu2016temporal}: a temporal regularized matrix factorization method.
(9) \textbf{BATF} \cite{chen2019missing}: a Bayesian augmented tensor factorization model that incorporates generic forms of spatiotemporal domain knowledge. We implement TRMF and BATF using the code in the Transdim\footnote{https://github.com/xinychen/transdim} repository; the rank is set to 10, 40 and 50 on AQI-36, METR-LA and PEMS-BAY, respectively.
(10) \textbf{V-RIN} \cite{mulyadi2021uncertainty}: a method that improves deterministic imputation using the quantified uncertainty of a VAE, whose probabilistic imputation result is provided by the quantified uncertainty.
(11) \textbf{GP-VAE} \cite{fortuin2020gp}: a probabilistic time series imputation method combining a VAE with a Gaussian process.
(12) \textbf{rGAIN}: GAIN \cite{yoon2018gain} with a bidirectional recurrent encoder-decoder, a GAN-based method.
(13) \textbf{BRITS} \cite{cao2018brits}: a multivariate time series imputation method based on bidirectional RNNs.
(14) \textbf{GRIN} \cite{cini2021filling}: a bidirectional GRU-based method with graph neural networks for multivariate time series imputation.
(15) \textbf{CSDI} \cite{tashiro2021csdi}: a probabilistic imputation method based on a conditional diffusion probability model, which treats different nodes as multiple features of the time series and uses a Transformer to capture feature dependencies.
In the experiment, the baselines MEAN, KNN, MICE, VAR, rGAIN, BRITS and GRIN are implemented by the code\footnote{https://github.com/Graph-Machine-Learning-Group/grin} provided by the authors of GRIN \cite{cini2021filling}.
We reproduced these baselines and obtained results consistent with those claimed; we therefore retain the results reported in GRIN for the above baselines.
The implementation details of the remaining baselines are introduced above.
\subsection{Evaluation metrics}
We apply three evaluation metrics to measure the performance of spatiotemporal imputation: Mean Absolute Error (MAE), Mean Squared Error (MSE) and Continuous Ranked Probability Score (CRPS) \cite{matheson1976scoring}.
MAE and MSE reflect the absolute error between the imputation values and the ground truth,
and CRPS evaluates the compatibility of the estimated probability distribution with the observed value.
We introduce the calculation details of CRPS as follows.
For a missing value $x$ whose estimated probability distribution is $D$, CRPS measures the compatibility of $D$ and $x$, which can be defined as the integral of the quantile loss $\Lambda_{\alpha}$:
\begin{equation}
\begin{aligned}
\text{CRPS}(D^{-1},x) & =\int^1_0 2\Lambda_{\alpha}(D^{-1}(\alpha), x)d\alpha,\\
\Lambda_{\alpha}(D^{-1}(\alpha), x) & =(\alpha-\mathbb{I}_{x<D^{-1}(\alpha)})(x-D^{-1}(\alpha)),
\end{aligned}
\end{equation}
where $\alpha\in[0,1]$ is the quantile level, $D^{-1}(\alpha)$ is the $\alpha$-quantile of distribution $D$, and $\mathbb{I}$ is the indicator function.
Since the distribution of missing values is approximated by generating 100 samples, we compute quantile losses for discretized quantile levels with 0.05 ticks, following \cite{tashiro2021csdi}:
\begin{equation}
\text{CRPS}(D^{-1},x) \simeq \sum_{i=1}^{19}2\Lambda_{i\times 0.05}(D^{-1}(i\times 0.05), x)/19.
\end{equation}
We compute CRPS for each estimated missing value and use the average as the evaluation metric, which is formalized as:
\begin{equation}
\text{CRPS}(D, \widetilde{X})=\frac{\sum_{\tilde{x}\in\widetilde{X}}\text{CRPS}(D^{-1},\tilde{x})}{|\widetilde{X}|}.
\end{equation} | 1,973 | 24,635 | en |
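In practice, $D^{-1}$ is estimated from the empirical quantiles of the generated samples; the NumPy sketch below implements the discretized computation (array shapes and helper names are ours).
\begin{verbatim}
import numpy as np

def crps_from_samples(samples, target):
    """CRPS approximated with empirical quantiles at 0.05, ..., 0.95.
    samples: (n_samples, ...) imputations; target: ground truth."""
    levels = np.arange(0.05, 1.0, 0.05)                # 19 levels
    quantiles = np.quantile(samples, levels, axis=0)   # empirical D^{-1}
    crps = 0.0
    for a, qa in zip(levels, quantiles):
        crps += 2 * np.mean((a - (target < qa)) * (target - qa))
    return crps / len(levels)
\end{verbatim}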
train | 0.15.4 | \subsection{Experimental settings}\label{sec:exp_set}
\textbf{Dataset.}
We divide the training/validation/test sets following the settings of previous work \cite{yi2016st, cini2021filling}.
For AQI-36, we select Mar., Jun., Sep., and Dec. as the test set, the last 10\% of the data in Feb., May, Aug., and Nov. as the validation set, and the remaining data as the training set.
For METR-LA and PEMS-BAY, we split the training/validation/test set by $70\%/10\%/20\%$.
\textbf{Imputation target.}
For the air quality dataset AQI-36, we adopt the same evaluation strategy as the previous work \cite{yi2016st}, which simulates the distribution of real missing data.
For the traffic datasets METR-LA and PEMS-BAY, we use the artificially injected missing strategy provided by \cite{cini2021filling} for evaluation, as shown in Figure \ref{fig:bp-missing}, which includes two missing patterns:
(1) \textbf{Block missing}: based on randomly masking 5\% of the observed data, mask observations ranging from 1 to 4 hours for each sensor with 0.15\% probability;
(2) \textbf{Point missing}: randomly mask 25\% of observations.
The missing rate of each dataset under different missing patterns is marked in Table \ref{tab:overallmae}. It is worth noting that, in addition to the manually injected faults, each dataset also has originally missing data (13.24\% in AQI-36, 8.10\% in METR-LA and 0.02\% in PEMS-BAY). All evaluations are performed only on the manually masked parts of the test set.
\begin{figure}
\caption{The illustration of some missing patterns.}
\label{fig:bp-missing}
\end{figure}
\textbf{Training strategies.}
As mentioned in Section \ref{sec:ddpm4imp}, on the premise of known missing patterns in test data, we provide three mask strategies.
The details of these mask strategies are described as follows (a code sketch is given after the list):
\begin{itemize}
\item Point strategy: draw a random value $m$ in $[0, 100]$, randomly select $m$\% of the data from $X$ as the imputation target $\widetilde{X}$, and regard the remaining unselected data as observed values in the training process.
\item Block strategy: For each node, a sequence with a length in the range $[L/2, L]$ is selected as the imputation target with a probability from 0 to 15\%. In addition, 5\% of the observed values are randomly selected and added to the imputation target.
\item Hybrid strategy: each training sample $X$ has a 50\% probability of being masked by the point strategy, and a 50\% probability of being masked by the block strategy or a historical missing pattern, i.e., the missing pattern of another sample in the training set.
\end{itemize}
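The NumPy sketch below illustrates the point and block strategies; the hybrid strategy simply chooses between them (or a historical pattern) per training sample. Shapes and helper names are ours.
\begin{verbatim}
import numpy as np

def point_mask(obs_mask):
    """Point strategy: erase a random m% of the observed entries."""
    m = np.random.uniform(0, 1)
    return obs_mask * (np.random.rand(*obs_mask.shape) > m)

def block_mask(obs_mask, L):
    """Block strategy: per node, erase a run of length in [L/2, L]
    with probability up to 15%, plus 5% random points."""
    mask = obs_mask.copy()
    for i in range(mask.shape[0]):
        if np.random.rand() < np.random.uniform(0, 0.15):
            start = np.random.randint(0, L)
            length = np.random.randint(L // 2, L + 1)
            mask[i, start:start + length] = 0
    return mask * (np.random.rand(*mask.shape) > 0.05)
\end{verbatim}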
We utilize different mask strategies for different missing patterns and datasets so that the model can simulate the corresponding missing patterns as closely as possible during training. Since AQI-36 has much more originally missing data in its training set than the traffic datasets, we adopt the hybrid strategy with historical missing patterns on AQI-36, the hybrid strategy with the block strategy for block-missing on the traffic datasets, and the point strategy for point-missing on the traffic datasets.
\textbf{Hyperparameters of PriSTI.} The batch size is 16. The learning rate is decayed to 0.0001 at 75\% of the total epochs, and to 0.00001 at 90\% of the total epochs. The hyperparameters of the diffusion model include a minimum noise level $\beta_1$ and a maximum noise level $\beta_T$. We adopt the quadratic schedule for the intermediate noise levels following \cite{tashiro2021csdi}, formalized as:
\begin{equation}
\beta_t=\left(\frac{T-t}{T-1}\sqrt{\beta_1}+\frac{t-1}{T-1}\sqrt{\beta_T}\right)^2.
\end{equation}
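Equivalently, the schedule can be computed as a direct transcription of the equation above:
\begin{verbatim}
import numpy as np

def quadratic_schedule(beta_1, beta_T, T):
    """Interpolate sqrt(beta) linearly from sqrt(beta_1) to
    sqrt(beta_T), then square; returns beta_1, ..., beta_T."""
    t = np.arange(1, T + 1)
    root = (T - t) / (T - 1) * np.sqrt(beta_1) \
         + (t - 1) / (T - 1) * np.sqrt(beta_T)
    return root ** 2

betas = quadratic_schedule(1e-4, 0.2, 50)  # e.g., the METR-LA setting
\end{verbatim}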
The diffusion step embedding and temporal encoding are implemented by sine and cosine embeddings following previous works \cite{kong2020diffwave,tashiro2021csdi}.
We summarize the hyperparameters of PriSTI in Table \ref{tab:exp_setting}. All experiments are run 5 times.
\begin{table}[t]
\centering
\caption{The hyperparameters of PriSTI for all datasets.}
\label{tab:exp_setting}
\setlength{\tabcolsep}{1mm}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{cccc}
\toprule
Description & AQI-36 & METR-LA& PEMS-BAY\cr
\midrule
Batch size & 16 & 16 & 16 \cr
Time length $L$ & 36 & 24 & 24 \cr
Epochs & 200 & 300 & 300 \cr
Learning rate & 0.001 & 0.001 & 0.001 \cr
Layers of noise estimation & 4 & 4 & 4 \cr
Channel size $d$ & 64 & 64 & 64 \cr
Number of attention heads & 8 & 8 & 8 \cr
Minimum noise level $\beta_1$ & 0.0001 & 0.0001 & 0.0001 \cr
Maximum noise level $\beta_T$ & 0.2 & 0.2 & 0.2 \cr
Diffusion steps $T$ & 100 & 50 & 50 \cr
Number of virtual nodes $k$ & 16 & 64 & 64 \cr
\bottomrule
\end{tabular}}
\end{table}
\begin{table*}[ht]
\centering
\caption{The results of MAE and MSE for spatiotemporal imputation.}
\label{tab:overallmae}
\resizebox{0.95\textwidth}{!}{
\setlength{\tabcolsep}{1mm}{
\renewcommand{\arraystretch}{1}
\begin{tabular}{ccccccccccc}
\toprule
\multirow{3}{*}{Method}&
\multicolumn{2}{c}{AQI-36}& \multicolumn{4}{c}{METR-LA}& \multicolumn{4}{c}{PEMS-BAY}\cr
\cmidrule(lr){2-3} \cmidrule(lr){4-7} \cmidrule(lr){8-11}
& \multicolumn{2}{c}{Simulated failure (24.6\%)}& \multicolumn{2}{c}{Block-missing (16.6\%)}& \multicolumn{2}{c}{Point-missing (31.1\%)}& \multicolumn{2}{c}{Block-missing (9.2\%)}& \multicolumn{2}{c}{Point-missing (25.0\%)}\cr
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11}
& MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE \cr
\midrule
Mean & 53.48$\pm$0.00 & 4578.08$\pm$0.00 & 7.48$\pm$0.00 & 139.54$\pm$0.00 & 7.56$\pm$0.00 & 142.22$\pm$0.00 & 5.46$\pm$0.00 & 87.56$\pm$0.00 & 5.42$\pm$0.00 & 86.59$\pm$0.00 \cr
DA & 50.51$\pm$0.00 & 4416.10$\pm$0.00 & 14.53$\pm$0.00 & 445.08$\pm$0.00 & 14.57$\pm$0.00 & 448.66$\pm$0.00 & 3.30$\pm$0.00 & 43.76$\pm$0.00 & 3.35$\pm$0.00 & 44.50$\pm$0.00 \cr
KNN & 30.21$\pm$0.00 & 2892.31$\pm$0.00 & 7.79$\pm$0.00 & 124.61$\pm$0.00 & 7.88$\pm$0.00 & 129.29$\pm$0.00 & 4.30$\pm$0.00 & 49.90$\pm$0.00 & 4.30$\pm$0.00 & 49.80$\pm$0.00 \cr
Lin-ITP & 14.46$\pm$0.00 & 673.92$\pm$0.00 & 3.26$\pm$0.00 & 33.76$\pm$0.00 & 2.43$\pm$0.00 & 14.75$\pm$0.00 & 1.54$\pm$0.00 & 14.14$\pm$0.00 & 0.76$\pm$0.00 & 1.74$\pm$0.00 \cr
\midrule
KF & 54.09$\pm$0.00 & 4942.26$\pm$0.00 & 16.75$\pm$0.00 & 534.69$\pm$0.00 & 16.66$\pm$0.00 & 529.96$\pm$0.00 & 5.64$\pm$0.00 & 93.19$\pm$0.00 & 5.68$\pm$0.00 & 93.32$\pm$0.00 \cr
MICE & 30.37$\pm$0.09 & 2594.06$\pm$7.17 & 4.22$\pm$0.05 & 51.07$\pm$1.25 & 4.42$\pm$0.07 & 55.07$\pm$1.46 & 2.94$\pm$0.02 & 28.28$\pm$0.37 & 3.09$\pm$0.02 & 31.43$\pm$0.41 \cr
VAR & 15.64$\pm$0.08 & 833.46$\pm$13.85 & 3.11$\pm$0.08 & 28.00$\pm$0.76 & 2.69$\pm$0.00 & 21.10$\pm$0.02 & 2.09$\pm$0.10 & 16.06$\pm$0.73 & 1.30$\pm$0.00 & 6.52$\pm$0.01 \cr
TRMF & 15.46$\pm$0.06 & 1379.05$\pm$34.83 & 2.96$\pm$0.00 & 22.65$\pm$0.13 & 2.86$\pm$0.00 & 20.39$\pm$0.02 & 1.95$\pm$0.01 & 11.21$\pm$0.06 & 1.85$\pm$0.00 & 10.03$\pm$0.00 \cr
BATF & 15.21$\pm$0.27 & 662.87$\pm$29.55 & 3.56$\pm$0.01 & 35.39$\pm$0.03 & 3.58$\pm$0.01 & 36.05$\pm$0.02 & 2.05$\pm$0.00 & 14.48$\pm$0.01 & 2.05$\pm$0.00 & 14.90$\pm$0.06 \cr
\midrule
V-RIN & 10.00$\pm$0.10 & 838.05$\pm$24.74 & 6.84$\pm$0.17 & 150.08$\pm$6.13 & 3.96$\pm$0.08 & 49.98$\pm$1.30 & 2.49$\pm$0.04 & 36.12$\pm$0.66 & 1.21$\pm$0.03 & 6.08$\pm$0.29 \cr
GP-VAE & 25.71$\pm$0.30 & 2589.53$\pm$59.14 & 6.55$\pm$0.09 & 122.33$\pm$2.05 & 6.57$\pm$0.10 & 127.26$\pm$3.97 & 2.86$\pm$0.15 & 26.80$\pm$2.10 & 3.41$\pm$0.23 & 38.95$\pm$4.16 \cr
rGAIN & 15.37$\pm$0.26 & 641.92$\pm$33.89 & 2.90$\pm$0.01 & 21.67$\pm$0.15 & 2.83$\pm$0.01 & 20.03$\pm$0.09 & 2.18$\pm$0.01 & 13.96$\pm$0.20 & 1.88$\pm$0.02 & 10.37$\pm$0.20 \cr
BRITS & 14.50$\pm$0.35 & 622.36$\pm$65.16 & 2.34$\pm$0.01 & 17.00$\pm$0.14 & 2.34$\pm$0.00 & 16.46$\pm$0.05 & 1.70$\pm$0.01 & 10.50$\pm$0.07 & 1.47$\pm$0.00 & 7.94$\pm$0.03 \cr
GRIN & 12.08$\pm$0.47 & 523.14$\pm$57.17 & 2.03$\pm$0.00 & 13.26$\pm$0.05 & 1.91$\pm$0.00 & 10.41$\pm$0.03 & 1.14$\pm$0.01 & 6.60$\pm$0.10 & 0.67$\pm$0.00 & 1.55$\pm$0.01 \cr
CSDI & 9.51$\pm$0.10 & 352.46$\pm$7.50 & 1.98$\pm$0.00 & 12.62$\pm$0.60 & 1.79$\pm$0.00 & 8.96$\pm$0.08 & 0.86$\pm$0.00 & 4.39$\pm$0.02 & 0.57$\pm$0.00 & 1.12$\pm$0.03 \cr
\midrule
PriSTI & \textbf{9.03$\pm$0.07} & \textbf{310.39$\pm$7.03} & \textbf{1.86$\pm$0.00} & \textbf{10.70$\pm$0.02} & \textbf{1.72$\pm$0.00} & \textbf{8.24$\pm$0.05} & \textbf{0.78$\pm$0.00} & \textbf{3.31$\pm$0.01} & \textbf{0.55$\pm$0.00} & \textbf{1.03$\pm$0.00} \cr
\bottomrule
\end{tabular}}}
\end{table*}
train | 0.15.6 | \begin{table}[t]
\centering
\caption{The results of CRPS for spatiotemporal imputation.}
\label{tab:pro_est}
\renewcommand{\arraystretch}{1}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{Method}&
AQI-36& \multicolumn{2}{c}{METR-LA}& \multicolumn{2}{c}{PEMS-BAY}\cr
\cmidrule(lr){2-2} \cmidrule(lr){3-4} \cmidrule(lr){5-6}
& {SF}& {Block}& {Point}& {Block}& {Point}\cr
\midrule
V-RIN & 0.3154 & 0.1283 & 0.0781 & 0.0394 & 0.0191 \cr
GP-VAE & 0.3377 & 0.1118 & 0.0977 & 0.0436 & 0.0568 \cr
CSDI & 0.1056 & 0.0260 & 0.0235 & 0.0127 & 0.0067 \cr
\midrule
PriSTI & \textbf{0.0997} & \textbf{0.0244} & \textbf{0.0227} & \textbf{0.0093} & \textbf{0.0064} \cr
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[t]
\centering
\caption{The prediction on AQI-36 after imputation.}
\label{tab:prediction}
\renewcommand{\arraystretch}{1}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{cccccc}
\toprule
Metric & Ori. & BRITS & GRIN & CSDI & PriSTI \cr
\midrule
MAE & 36.97 & 34.61 & 33.77 & 30.20 & \textbf{29.34} \cr
RMSE &60.37 & 56.66 & 54.06 & 46.98 & \textbf{45.08} \cr
\bottomrule
\end{tabular}}
\end{table} | 646 | 24,635 | en |
train | 0.15.7 | \subsection{Results}
\subsubsection{Overall Performance (RQ1)}
We first evaluate the spatiotemporal imputation performance of PriSTI compared with other baselines.
Since not all methods can provide the probability distribution of missing values (i.e., be evaluated by the CRPS metric), we report the deterministic imputation results evaluated by MAE and MSE in Table \ref{tab:overallmae}, and evaluate V-RIN, GP-VAE, CSDI and PriSTI by CRPS, as shown in Table \ref{tab:pro_est}.
The probability distributions of missing values for these four methods are approximated by generating 100 samples, and their deterministic imputation results are the medians of the generated samples.
Since CRPS fluctuates little across the 5 runs (the standard error is less than 0.001 for CSDI and PriSTI), we only report its mean over the 5 runs in Table \ref{tab:pro_est}.
It can be seen from Table \ref{tab:overallmae} and Table \ref{tab:pro_est} that our proposed method outperforms other baselines on various missing patterns in different datasets. We summarize our findings as follows:
(1) The statistical and classic machine learning methods perform poorly on all datasets. These methods impute missing values based on assumptions such as stability or seasonality of the time series, which cannot capture the complex temporal and spatial correlations in real-world datasets.
The matrix factorization methods also perform poorly due to their low-rank assumption on the data.
(2) Among deep learning methods, GRIN, a state-of-the-art autoregressive multivariate time series imputation method, performs better than the other RNN-based methods (rGAIN and BRITS) owing to its extraction of spatial correlations. However, GRIN still lags behind the diffusion-based method (CSDI), which may be caused by the inherent error accumulation of autoregressive models.
(3) Among deep generative models, the VAE-based methods (V-RIN and GP-VAE) cannot outperform CSDI and PriSTI. Our proposed method PriSTI outperforms CSDI in all missing patterns on every dataset, which indicates that our design of conditional information construction and spatiotemporal correlation modeling improves the performance of the diffusion model on the imputation task.
In addition, on the traffic datasets, our method improves over CSDI more markedly in the block-missing pattern than in the point-missing pattern, indicating that the interpolated information provides more effective conditional information than the raw observed values, especially when the missing values are temporally continuous.
In addition, we select the four methods with the best performance rankings (i.e., the average ranking of MAE and MSE) in Table \ref{tab:overallmae} (PriSTI, CSDI, GRIN and BRITS) to impute all the data in AQI-36, and then use the classic spatiotemporal forecasting method Graph Wavenet \cite{wu2019graph} to make predictions on the imputed dataset. We divide the imputed dataset into training, validation and test sets as 70\%/10\%/20\%, and use the data of the past 12 time steps to predict the next 12 time steps. We use MAE and RMSE (the square root of MSE) for evaluation. The prediction results are shown in Table \ref{tab:prediction}, where Ori. denotes the raw data without imputation.
The results in Table \ref{tab:prediction} indicate that prediction performance is affected by data integrity, and the prediction performance on data imputed by different methods is consistent with the imputation performance of these methods in Table \ref{tab:overallmae}. This demonstrates that our method can also benefit downstream tasks after imputation.
\subsubsection{Sensitivity analysis (RQ2)}
The imputation performance is strongly affected by the distribution and quantity of the observed data: for spatiotemporal imputation, sparse and continuously missing data makes it harder for the model to learn spatiotemporal correlations.
To test the imputation performance of PriSTI when the data is extremely sparse, we evaluate it at missing rates of 10\%--90\%, compared with the three baselines with the best imputation performance (BRITS, GRIN and CSDI).
We evaluate the block-missing and point-missing patterns of METR-LA separately. To simulate sparser data in the two missing patterns, for the block-missing pattern we increase the probability of consecutive missing sequences with lengths in the range $[12, 48]$; for the point-missing pattern we randomly drop observed values according to the missing rate.
We train one model for each method and test the trained model at different missing rates for each missing pattern. BRITS and GRIN are trained on data randomly masked by 50\% with the corresponding missing pattern; CSDI and PriSTI are trained with the mask strategies of the original experimental settings.
The MAE of each method under different missing rates is shown in Figure \ref{fig:sensitivity_analysis}. When the missing rate of METR-LA reaches 90\%, PriSTI improves the MAE over the other methods by 4.67\%-34.11\% in the block-missing pattern and 3.89\%-43.99\% in the point-missing pattern.
The results indicate that our method still has better imputation performance than the other baselines at high missing rates, with a larger margin as the data becomes sparser.
We attribute this to the interpolated conditional information we construct, which, even when the data is highly sparse, retains spatiotemporal dependencies that are closer to the real distribution than the added Gaussian noise.
\begin{figure}
\caption{The imputation results of different missing rates.}
\label{fig:sens_a}
\label{fig:sens_b}
\label{fig:sensitivity_analysis}
\end{figure}
\subsubsection{Ablation study (RQ3 and RQ4)}
We design the ablation study to evaluate the effectiveness of the conditional feature extraction module and noise estimation module. We compare our method with the following variants:
\begin{itemize}
\item \textit{mix-STI}: the input of noise estimation module is the concatenation of the observed values $X$ and sampled noise $\widetilde{X}^T$, and the interpolated conditional information $\mathcal{X}$ and conditional feature extraction module are not used.
\item \textit{w/o CF}: remove conditional features in Equation (\ref{eq:cross_att}) and (\ref{eq:cross_att_spa}), i.e., the conditional feature extraction module is not used, and all the $Q$, $K$, and $V$ are the concatenation of interpolated conditional information $\mathcal{X}$ and sampled noise $\widetilde{X}^T$ when calculating the attention weights.
\item \textit{w/o spa}: remove the spatial dependencies learning module $\gamma_\mathcal{S}$ in Equation (\ref{eq:gps_deep}).
\item \textit{w/o tem}: remove the temporal dependencies learning module $\gamma_\mathcal{T}$ in Equation (\ref{eq:gps_deep}).
\item \textit{w/o MPNN}: remove the component of message passing neural network $\varphi_{\text{MP}}$ in the spatial dependencies learning module $\gamma_\mathcal{S}$.
\item \textit{w/o Attn}: remove the component of spatial global attention $\varphi_{\text{SA}}$ in the spatial dependencies learning $\gamma_\mathcal{S}$.
\end{itemize}
\begin{table}[t]
\centering
\caption{Ablation studies.}
\label{tab:abl}
\renewcommand{\arraystretch}{1}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{cccc}
\toprule
\multirow{2}{*}{Method}&
AQI-36 & \multicolumn{2}{c}{METR-LA}\cr
\cmidrule(lr){2-2} \cmidrule(lr){3-4}
& {Simulated failure}& {Block-missing}& {Point-missing}\cr
\midrule
mix-STI & 9.83$\pm$0.04 & 1.93$\pm$0.00 & 1.74$\pm$0.00 \cr
w/o CF & 9.28$\pm$0.05 & 1.95$\pm$0.00 & 1.75$\pm$0.00 \cr
\midrule
w/o spa & 10.07$\pm$0.04 & 3.51$\pm$0.01 & 2.23$\pm$0.00 \cr
w/o tem & 10.95$\pm$0.08 & 2.43$\pm$0.00 & 1.92$\pm$0.00 \cr
w/o MPNN & 9.10$\pm$0.10 & 1.92$\pm$0.00 & 1.75$\pm$0.00 \cr
w/o Attn & 9.15$\pm$0.08 & 1.91$\pm$0.00 & 1.74$\pm$0.00 \cr
\midrule
PriSTI & \textbf{9.03$\pm$0.07} & \textbf{1.86$\pm$0.00} & \textbf{1.72$\pm$0.00}\cr
\bottomrule
\end{tabular}}
\end{table}
The variants \textit{mix-STI} and \textit{w/o CF} are used to evaluate the effectiveness of the construction and utilization of the conditional information, where \textit{w/o CF} utilizes the interpolated information $\mathcal{X}$ while \textit{mix-STI} does not.
The remaining variants are used to evaluate the spatiotemporal dependencies learning of PriSTI. \textit{w/o spa} and \textit{w/o tem} are used to prove the necessity of learning temporal and spatial dependencies in spatiotemporal imputation, and \textit{w/o MPNN} and \textit{w/o Attn} are used to evaluate the effectiveness of spatial global correlation and geographic dependency.
Since the spatiotemporal dependencies and missing patterns of the two traffic datasets are similar, we perform the ablation study on the datasets AQI-36 and METR-LA; the MAE results are shown in Table \ref{tab:abl}, from which we have the following observations:
(1) According to the results of \textit{mix-STI}, the enhanced conditional information and the extraction of the conditional feature are effective for spatiotemporal imputation. We believe that the interpolated conditional information is effective for continuous missing, such as the simulated failure in AQI-36 and the block-missing pattern in the traffic datasets.
The results of \textit{w/o CF} indicate that the construction and utilization of the conditional feature improve the imputation performance of the diffusion model, which demonstrates that the conditional feature extraction module and the attention weight calculation in the noise estimation module are beneficial for the spatiotemporal imputation of PriSTI, since they model the spatiotemporal global correlation with less noisy information.
(2) The results of \textit{w/o spa} and \textit{w/o tem} indicate that both temporal and spatial dependencies are necessary for the imputation. This demonstrates that our proposed noise estimation module captures the spatiotemporal dependencies based on the conditional feature, which will also be validated qualitatively in Section \ref{sec:case_study}.
(3) From the results of \textit{w/o MPNN} and \textit{w/o Attn}, the components of the spatial global attention and the message passing neural network have similar effects on the imputation results, but using only one of them is not as effective as using both, which indicates that the spatial global correlation and the geographic information are both necessary for modeling the spatial dependencies.
Either the lack of geographic information as input or the failure to capture implicit spatial correlations degrades the imputation performance of the model.
We believe that the combination of explicit spatial relationships and implicit spatial correlations can extract useful spatial dependencies in real-world datasets.
\begin{figure}
\caption{The visualization of the probabilistic imputation in AQI-36 and block-missing pattern of METR-LA. Each subfigure represents a sensor, and the time windows of all sensors are aligned. The black crosses represent observations, and dots of various colors represent the ground truth of missing values. The solid green line is the deterministic imputation result, and the green shadow represents the quantile between 0.05 to 0.95.}
\label{fig:visual_aqi}
\label{fig:visual_la_block}
\end{figure}
\subsubsection{Case study (RQ4)}\label{sec:case_study}
We plot the imputation results over the same time window for some nodes in AQI-36 and in the block-missing pattern of METR-LA to qualitatively analyze the spatiotemporal imputation performance of our method, as shown in Figure \ref{fig:visual_la_block}.
Each subfigure represents a sensor: the black crosses represent the observed values, and the dots of other colors represent the ground truth of the part to be imputed. The green area is the part between the 0.05 and 0.95 quantiles of the estimated probability distribution, and the green solid line is the median, that is, the deterministic imputation result.
We select 5 sensors in AQI-36 and METR-LA respectively, and display their geographic locations on the map. Taking METR-LA as an example, it can be observed from Figure \ref{fig:visual_la_block} that sensors 188 and 194 have almost no missing values in the current time window, while their surrounding sensors have continuous missing values, and sensor 192 even has no observed values, which means its temporal information is totally unavailable.
However, the distribution of our generated samples still covers most observations, and the imputation results conform to the time trend of different nodes.
This indicates that, on the one hand, our method can capture the temporal dependencies from the given observations for imputation; on the other hand, when the given observations are limited, our method can utilize spatial dependencies, imputing according to geographical proximity or to nodes with similar temporal patterns. For example, in a traffic system, the time series corresponding to sensors that are geographically close are more likely to have similar trends.
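The quantile bands shown in these figures can be reproduced schematically as follows (a self-contained sketch in which synthetic data stand in for the generated imputations of one sensor; all variable names are ours):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.arange(24)
# 100 hypothetical generated imputations of one sensor's window
samples = np.sin(t / 4) + 0.2 * rng.standard_normal((100, 24))

lo, med, hi = np.percentile(samples, [5, 50, 95], axis=0)
plt.fill_between(t, lo, hi, color="green", alpha=0.3)  # 0.05-0.95 band
plt.plot(t, med, color="green")  # median = deterministic imputation
plt.show()
\end{verbatim}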
\begin{figure}
\caption{The imputation for unobserved sensors in AQI-36. The orange dotted line represents the ground truth, the green solid line represents the deterministic imputation result, and the green shadow represents the quantile between 0.05 to 0.95.}
\label{fig:mask_sensor}
\end{figure}
\subsubsection{Imputation for sensor failure (RQ5)}
We impute the spatiotemporal data in a more extreme case: some sensors fail completely, i.e., they cannot provide any observations. The available information is only their location, so we can only impute using the observations from other sensors.
This task is often studied in the research related to Kriging \cite{stein1999interpolation}, which requires the model to reconstruct a time series for a given location based on geographic location and observations from other sensors.
We perform the sensor-failure experiment on AQI-36. Following \cite{cini2021filling}, we select the air quality monitoring stations with the highest (station 14) and lowest (station 31) connectivity. Keeping the original experimental settings otherwise unchanged, all observations of these two nodes are masked during the training process. The results of the imputation for the unobserved sensors are shown in Figure \ref{fig:mask_sensor}, where the orange dotted line is the ground truth, the green solid line is the median of the generated samples, and the green shadow is the quantiles between 0.05 and 0.95.
We use MAE to quantitatively evaluate the results: the MAE of station 14 is 10.23, and that of station 31 is 15.20.
Since GRIN is the only baseline that can impute using geographic information, the MAE of GRIN in the same experiments is also shown in Figure \ref{fig:mask_sensor}; PriSTI achieves better imputation performance on the unobserved nodes.
This demonstrates the effectiveness of PriSTI in exploiting spatial relationship for imputation.
Assuming that the detected spatiotemporal data are sufficiently dependent on spatial geographic location, our proposed method may be capable of reconstructing the time series for a given location within the study area, even if no sensors are deployed at that location.
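For reference, the masked metrics used throughout this evaluation amount to the following minimal sketch (our notation; \texttt{eval\_mask} marks entries that are missing in the input but whose ground truth is known):
\begin{verbatim}
import numpy as np

def masked_mae(pred, truth, eval_mask):
    return np.abs((pred - truth)[eval_mask]).mean()

def masked_rmse(pred, truth, eval_mask):
    return np.sqrt(((pred - truth)[eval_mask] ** 2).mean())
\end{verbatim}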
\subsubsection{Hyperparameter analysis and time costs}
We conduct analysis and experiments on some key hyperparameters in PriSTI to illustrate their settings and the sensitivity of the model to them. Taking METR-LA as an example, we analyze three key hyperparameters: the channel size of the hidden state $d$, the maximum noise level $\beta_T$, and the number of virtual nodes $k$, as shown in Figure \ref{fig:sensitivity_parameter}.
Among them, $d$ and $k$ affect the amount of information learned by the model. $\beta_T$ affects the level of the sampled noise, and a value that is too large or too small is not conducive to the learning of the noise prediction model.
According to the results in Figure \ref{fig:sensitivity_parameter}, we set $\beta_T$ to its optimal value of 0.2. For $d$ and $k$, although the performance is better with larger values, we set both $d$ and $k$ to 64 considering the efficiency.
In addition, we select several deep learning methods with higher imputation performance rankings to compare their efficiency with PriSTI on the datasets AQI-36 and METR-LA. The total training time and inference time of the different methods are shown in Figure \ref{fig:efficiency}.
The experiments are conducted on AMD EPYC 7371 CPU, NVIDIA RTX 3090.
It can be seen that the efficiency gap between methods on the dataset with fewer nodes (AQI-36) is not large, but the training time cost of the generative methods CSDI and PriSTI on the dataset with more nodes (METR-LA) is higher. Due to the construction of conditional information, PriSTI requires 25.7\% more training time and 17.9\% more inference time on METR-LA than CSDI.
\begin{figure}
\caption{The sensitivity study of key parameters.}
\label{fig:hp_a}
\label{fig:hp_b}
\label{fig:hp_c}
\label{fig:sensitivity_parameter}
\end{figure}
\begin{figure}
\caption{The time costs of PriSTI and other baselines.}
\label{fig:eff_a}
\label{fig:eff_b}
\label{fig:efficiency}
\end{figure}
\section{Related Work}\label{sec:related_work}
Since spatiotemporal data can be imputed along the temporal or the spatial dimension, there is a large body of literature on missing value imputation in spatiotemporal data.
For the time series imputation, the early studies imputed missing values by statistical methods such as local interpolation \cite{kreindler2006effects, acuna2004treatment}, which reconstructs the missing value by fitting a smooth curve to the observations.
Some methods also impute missing values based on the historical time series through EM algorithm \cite{shumway1982approach, nelwamondo2007missing} or the combination of ARIMA and Kalman Filter \cite{harvey1990forecasting, ansley1984estimation}.
There are also some early studies filling in missing values through spatial relationships or neighboring sequences, such as KNN \cite{trevor2009elements, beretta2016nearest} and Kriging \cite{stein1999interpolation}.
In addition, the low-rank matrix factorization \cite{salakhutdinov2008bayesian, yu2016temporal, chen2019missing, chen2021bayesian} is also a feasible approach for spatiotemporal imputation, which exploits the intrinsic spatial and temporal patterns based on the prior knowledge.
For instance, TRMF \cite{yu2016temporal} incorporates the structure of temporal dependencies into a temporal regularized matrix factorization framework.
BATF \cite{chen2019missing} incorporates domain knowledge from transportation systems into an augmented tensor factorization model for traffic data modeling.
In recent years, there have been many studies on spatiotemporal data imputation through deep learning methods \cite{liu2019naomi, ma2019cdsa}.
Most deep learning imputation methods focus on the multivariate time series and use RNN as the core to model temporal relationships \cite{che2018recurrent, yoon2018estimating, cao2018brits, cini2021filling}.
The RNN-based approach for imputation was first proposed in GRU-D \cite{che2018recurrent} and is widely used in deep autoregressive imputation methods. Among the RNN-based methods, BRITS \cite{cao2018brits} imputes the missing values on the hidden state through a bidirectional RNN and considers the correlation between features.
GRIN \cite{cini2021filling} introduces graph neural networks based on BRITS to exploit the inductive bias of historical spatial patterns for imputation.
In addition to directly using RNN to estimate the hidden state of missing parts, there are also a number of methods using GAN to generate missing data \cite{luo2018multivariate, yoon2018gain, miao2021generative}.
For instance, GAIN \cite{yoon2018gain} imputes data conditioned on observation values by the generator, and utilizes the discriminator to distinguish the observed and imputed part.
SSGAN \cite{miao2021generative} proposes a semi-supervised GAN to drive the generator to estimate missing values using observed information and data labels.
However, these methods are still RNN-based autoregressive methods, which are inevitably affected by the problem of error accumulation, i.e., the current missing value is imputed by the inaccurate historical estimated values in a sequential manner.
To address this problem, Liu et al. \cite{liu2019naomi} propose NAOMI, developing a non-autoregressive decoder that recursively updates the hidden state, and using generative adversarial training for imputation.
Fortuin et al. \cite{fortuin2020gp} propose a multivariate time series imputation method utilizing the VAE architecture with a Gaussian process prior in the latent space to capture temporal dynamics.
Some other works capture spatiotemporal dependencies through the attention mechanism \cite{ma2019cdsa, shukla2021multi, du2022saits}, which not only consider the temporal dependencies but also exploit the geographic locations \cite{ma2019cdsa} and correlations between different time series \cite{du2022saits}.
Recently, a diffusion model-based generative imputation framework CSDI \cite{tashiro2021csdi} shows the performance advantages of deep generative models in multivariate time series imputation tasks.
The Diffusion Probabilistic Models (DPM) \cite{sohl2015deep, ho2020denoising, song2020score}, as deep generative models, have achieved better performance than other generative methods in several fields such as image synthesis \cite{rombach2022high, ho2020denoising}, audio generation \cite{kong2020diffwave, goel2022s}, and graph generation \cite{huang2022graphgdp, huang2023conditional}.
In terms of imputation tasks, there are existing methods for 3D point cloud completion \cite{lyu2021conditional} and multivariate time series imputation \cite{tashiro2021csdi} through conditional DPM.
CSDI imputes the missing data through score-based diffusion models conditioned on observed data, exploiting temporal and feature correlations by a two dimensional attention mechanism.
However, CSDI takes the concatenation of observed values and noisy information as the input when training, increasing the difficulty of the attention mechanism's learning.
Different from existing diffusion model-based imputation methods, our proposed method constructs the prior and imputes spatiotemporal data based on the extracted conditional feature and geographic information.
\section{Conclusion}\label{sec:conclusion}
We propose PriSTI, a conditional diffusion framework for spatiotemporal imputation, which imputes missing values with the help of the extracted conditional feature to calculate temporal and spatial global correlations.
Our proposed framework captures spatiotemporal dependencies by comprehensively considering spatiotemporal global correlation and geographic dependency.
PriSTI achieves more accurate imputation results than state-of-the-art baselines in various missing patterns of spatiotemporal data in different fields, and also handles the case of high missing rates and sensor failure.
In future work, we will consider improving the scalability and computational efficiency of the framework on larger-scale spatiotemporal datasets, and how to impute using longer temporal dependencies with refined conditional information.
\section*{Acknowledgment}
We thank anonymous reviewers for their helpful comments.
This research is supported by the National Natural Science Foundation of China (62272023).
\end{document}
\begin{document}
\title{ Quantum measurement bounds beyond the uncertainty relations}
\author{Vittorio Giovannetti$^1$, Seth Lloyd$^2$, Lorenzo Maccone$^3$}
\affiliation{ $^1$ NEST, Scuola Normale Superiore and Istituto
Nanoscienze-CNR,
piazza dei Cavalieri 7, I-56126 Pisa, Italy \\
$^2$Dept.~of Mechanical Engineering, Massachusetts Institute of
Technology, Cambridge, MA 02139, USA \\
$^3$Dip.~Fisica ``A.~Volta'', INFN Sez.~Pavia, Universit\`a di
Pavia, via Bassi 6, I-27100 Pavia, Italy}
\begin{abstract}
We give a bound to the precision in the estimation of a parameter in
terms of the expectation value of an observable. It is an extension
of the Cram\'er-Rao inequality and of the Heisenberg uncertainty
relation, where the estimation precision is typically bounded in
terms of the variance of an observable.
\end{abstract}
\maketitle
Quantum measurements are limited by bounds such as the Heisenberg
uncertainty relations \cite{heisenberg,robertson} or the quantum
Cram\'er-Rao inequality \cite{holevo,helstrom,BRAU96,BRAU94}, which
typically constrain the ability in recovering a target quantity
(e.g.~a relative phase) through the {\em standard deviation} of a
conjugate one (e.g.~the energy) evaluated on the state of the probing
system. Here we give a new bound related to the {\em expectation
value}: we show that the precision in the quantity cannot scale
better than the inverse of the expectation value (above a ``ground
state'') of its conjugate counterpart. It is especially relevant in
the expanding field of quantum metrology \cite{review}: it settles in
the positive the longstanding conjecture of quantum optics
\cite{caves,yurke,barry,ou,bollinger,smerzi}, recently challenged
\cite{dowling,rivasluis,zhang}, that the ultimate phase-precision
limit in interferometry is lower bounded by the inverse of the total
number of photons employed in the estimation process.
The aim of Quantum Parameter
Estimation~\cite{holevo,helstrom,BRAU96,BRAU94} is to recover the {\em
unknown} value $x$ of a parameter that is written into the state
$\rho_x$ of a probe system through some {\em known} encoding mechanism
$U_x$. For example, we can recover the relative optical delay $x$
between the two arms of a Mach-Zehnder interferometer described by its
unitary evolution $U_x$ using as probe a light beam fed into the
interferometer. The statistical nature of quantum mechanics induces
fluctuations that limit the ultimate precision which can be achieved
(although we can exploit quantum ``tricks'' such as entanglement and
squeezing in optimizing the state preparation of the probe and/or the
detection stage \cite{GIOV06}).
In particular, if the encoding stage is repeated
several times using $\nu$ identical copies of the same probe input
state $\rho_x$, the root mean square error (RMSE) $\Delta X$ of the
resulting estimation process is limited by the quantum Cram\'er-Rao
bound~\cite{holevo,helstrom,BRAU96,BRAU94} $\Delta X\geqslant
1/\sqrt{\nu Q(x)}$, where ${Q}(x)$ is the quantum Fisher information.
For pure probe states and unitary encoding mechanism $U_x$, ${Q}(x)$
is equal to the variance $(\Delta H)^2$ (calculated on the probe
state) of the generator $H$ of the transformation $U_x=e^{-ixH}$.
In this case, the Cram\'er-Rao bound takes the form
\begin{eqnarray}
\Delta
X\geqslant 1/(\sqrt{\nu} \Delta H)\label{QC}\;
\end{eqnarray}
of an uncertainty relation \cite{BRAU94,BRAU96}. In
fact, if the parameter $x$ can be connected to an observable,
Eq.~\eqref{QC} corresponds to the Heisenberg uncertainty relation for
conjugate variables~\cite{heisenberg,robertson}. This bound is
asymptotically achievable in the limit of $\nu \rightarrow \infty$
\cite{holevo,helstrom}.
\begin{figure}
\caption{ Lower bounds to the precision estimation $\Delta X$ as a
function of the experimental repetitions $\nu$. The green area in
the graph represents the forbidden values due to our bound
\eqref{ris}.}
\label{f:compar}
\end{figure}
Here we will derive a bound in terms of the expectation value of $H$,
which (in the simple case of constant $\Delta X$) takes the form (see
Fig.~\ref{f:compar})
\begin{eqnarray}
\Delta X\geqslant \kappa/[\nu(\langle H\rangle-E_0)]
\labell{ris}\;,
\end{eqnarray}
where $E_0$ is the value of a ``ground state'', the minimum eigenvalue
of $H$ whose eigenvector is populated in the probe state (e.g.~the
ground state energy when $H$ is the probe's Hamiltonian), and $\kappa\simeq
0.091$ is a constant of order one. Our bound holds both for biased and
unbiased measurement procedures, and for pure and mixed probe states.
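As a simple illustration (an example of ours, not needed for what follows), consider single-mode phase estimation with $H=a^\dag a$ and the probe $|\psi\rangle=(|0\rangle+|N\rangle)/\sqrt{2}$, for which $\langle H\rangle-E_0=N/2$ and $\Delta H=N/2$: Eq.~\eqref{QC} then gives $\Delta X\geqslant 2/(\sqrt{\nu}N)$, while Eq.~\eqref{ris} gives $\Delta X\geqslant 2\kappa/(\nu N)$. For such probes both bounds carry the same $1/N$ scaling; the added content of \eqref{ris} emerges for states with $\Delta H\gg\langle H\rangle-E_0$, for which \eqref{QC} alone would suggest arbitrarily good precision at fixed mean energy.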
When $\Delta X$ is dependent on $x$, a constraint of the
form~(\ref{ris}) can be placed on the average value of $\Delta X(x)$
evaluated on any two values $x$ and $x'$ of the parameter which are
sufficiently separated, namely
\begin{eqnarray} \label{bd1} \frac{\Delta X(x) + \Delta X(x')}{2} &\geqslant
& \frac{\kappa}{\nu(\langle H\rangle - E_0)} \;.
\end{eqnarray}
Hence, we cannot exclude that strategies whose error $\Delta X$ depend
on $x$ may have a ``sweet spot'' where the bound \eqref{ris} may be
beaten \cite{rivasluis}, but inequality \eqref{bd1} shows that the
average value of $\Delta X$ is subject to the bound. Thus, these
strategies are of no practical use, since the sweet spot depends on
the unknown parameter $x$ to be estimated and the extremely good
precision in the sweet spot must be counterbalanced by a
correspondingly bad precision nearby.
Proving the bound~\eqref{ris} in full generality is clearly not a
trivial task since no definite relation can be established between
$\nu(\langle H\rangle-E_0)$ and the term $\sqrt{\nu} \Delta H$ on
which the Cram\'er-Rao bound is based. In particular, scaling
arguments on $\nu$ cannot be used since, on one hand, the value of
$\nu$ for which Eq.~(\ref{QC}) saturates is not known (except in the
case in which the estimation strategy is fixed \cite{caves}, which has
little fundamental relevance) and, on the other hand, input probe
states $\rho$ whose expectation values $\langle H \rangle$ depend
explicitly on $\nu$ may be employed, e.g.~see Ref.~\cite{rivasluis}.
To circumvent these problems our proof is based on the quantum speed
limit~\cite{qspeed}, a generalization of the Margolus-Levitin~\cite{margolus}
and Bhattacharyya bounds~\cite{bhatta,man} which
links the fidelity $F$ between the two joint states
$\rho_x^{\otimes\nu}$ and $\rho_{x'}^{\otimes\nu}$ to the difference
$x'-x$ of the parameters $x$ and $x'$ imprinted on the states through
the mapping $U_x=e^{-ixH}$ [The fidelity between two states $\rho$ and
$\sigma$ is defined as $F=\{\mbox{Tr}[\sqrt{\sqrt{\rho}
\sigma\sqrt{\rho}}]\}^2$. A connection between quantum metrology and
the Margolus-Levitin theorem was proposed in \cite{kok}, but this
claim was subsequently retracted in \cite{erratum}.] In the case of
interest here, the quantum speed limit \cite{qspeed} implies
\begin{eqnarray}
|x'-x|\geqslant\frac\pi2
\max \left[\frac{\alpha(F)}{\nu(\langle H\rangle-E_0)}\;,\
\frac{\beta(F)}{\sqrt{\nu} \Delta H}\right] \;\labell{newqsl}\;,
\end{eqnarray}
where the $\nu$ and $\sqrt{\nu}$ factors at the denominators arise
from the fact that here we are considering $\nu$ copies of the probe
states $\rho_x$ and $\rho_{x'}$, and where
$\alpha(F)\simeq\beta^2(F)=4\arccos^2(\sqrt{F})/\pi^2$ are the
functions plotted in Fig.~\ref{f:qsl} of the supplementary material.
The inequality~\eqref{newqsl} tells us that the parameter difference
$|x'-x|$ induced by a transformation $e^{-i(x'-x)H}$ which employs
resources $\langle H\rangle-E_0$ and $\Delta H$ cannot be arbitrarily small (when
the parameter $x$ coincides with the evolution time, this sets a limit
to the ``speed'' of the evolution, the quantum speed limit).
We now give the main ideas of the proof of \eqref{ris} by focusing on
a simplified scenario, assuming pure probe states $|\psi_x\rangle=U_x
|\psi\rangle$, and unbiased estimation strategies constructed in terms
of projective measurements with RMSE $\Delta X$ that do not depend on
$x$ (all these assumptions are dropped in the supplementary material).
For unbiased estimation, $x=\sum_j P_j(x) x_j$ and the RMSE coincides
with the standard deviation of the distribution $P_j(x)$, i.e.~$\Delta
X=\sqrt{\sum_j P_j(x) [ x_j-x]^2}$, where $P_j(x) = |\langle x_j |
\psi_x \rangle^{\otimes\nu}|^2$ is the probability of obtaining the
result $x_j$ while measuring the joint state
$|\psi_x\rangle^{\otimes\nu}$ with a projective measurement on the
joint basis $|x_j\rangle$. Let us consider two values $x$ and $x'$ of
the parameter that are further apart than the measurement's RMSE,
i.e.~$x'-x=2 \lambda \Delta X$ with $\lambda>1$. If no such $x$ and
$x'$ exist, the estimation is extremely poor: basically the whole
domain of the parameter is smaller than the RMSE. Hence, for
estimation strategies that are sufficiently accurate to be of
interest, we can always assume that such a choice is possible (see
below). The Tchebychev inequality states that for an arbitrary
probability distribution $p$, the probability that a result $x$ lies
more than $\lambda\Delta X$ away from the average $\mu$ is upper
bounded by $1/\lambda^2$, namely $p(|x-\mu|\geqslant \lambda\Delta
X)\leqslant 1/\lambda^2$. It implies that the probability that
measuring $|\Psi_{x'}\rangle :=|\psi_{x'}\rangle^{\otimes\nu}$ the
outcome $x_j$ lies within $\lambda\Delta X$ of the mean value
associated with $|\Psi_x\rangle:=|\psi_x\rangle^{\otimes\nu}$ cannot
be larger than $1/\lambda^2$. By the same reasoning, the probability that
measuring $|\Psi_{x}\rangle$ the outcome $x_j$ will lie within
$\lambda\Delta X$ of the mean value associated with
$|\Psi_{x'}\rangle$ cannot be larger than $1/\lambda^2$. This implies that
the overlap between the states $|\Psi_{x}\rangle$ and
$|\Psi_{x'}\rangle$ cannot be too large: more precisely, $F=|
\langle\Psi_x|\Psi_{x'} \rangle|^2\leqslant 4/\lambda^2$. Replacing
this expression into \eqref{newqsl} (exploiting the fact that $\alpha$
and $\beta$ are decreasing functions) we obtain
\begin{eqnarray}
2\lambda \Delta X\geqslant
\frac\pi{2}
\max \left[\frac{\alpha(4/\lambda^2)}{\nu(\langle H\rangle-E_0)}\;,\
\frac{\beta(4/\lambda^2)}{\sqrt{\nu} \Delta H}\right]
\labell{ineq}\;,
\end{eqnarray}
whence we obtain \eqref{ris} by optimizing over $\lambda$ the first
term of the $\max$, i.e.~choosing
$\kappa=\sup_\lambda\pi\:\alpha(4/\lambda^2)/(4\lambda)\simeq 0.091$. The
second term of the $\max$ gives rise to a quantum Cram\'er-Rao type
uncertainty relation (or a Heisenberg uncertainty relation) which,
consistently with the optimality of Eq.~(\ref{QC}) for $\nu\gg1$, has
a pre-factor $\pi \beta(4/\lambda^2)/ (4 \lambda)$ which is smaller
than $1$ for all $\lambda$. This means that for large $\nu$ the bound
\eqref{ris} will be asymptotically superseded by the Cram\'er-Rao
part, which scales as $\propto 1/\sqrt{\nu}$ and is achievable in this
regime.
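The numerical values of these prefactors can be checked in a few lines (our sketch; it uses the closed-form approximation $\alpha(F)\simeq\beta^2(F)$ quoted above and scans $\lambda\geqslant 2$, where $4/\lambda^2$ is a valid fidelity; the exact $\alpha$ of \cite{qspeed} is slightly larger, which yields the quoted $\kappa\simeq 0.091$ here and $\kappa\simeq 0.074$ in the biased case discussed below):
\begin{verbatim}
import numpy as np

lam = np.linspace(2.0, 200.0, 200000)
beta2 = (2 / np.pi * np.arccos(2 / lam)) ** 2   # alpha(F) ~ beta(F)^2
print((np.pi * beta2 / (4 * lam)).max())        # ~0.0875 (unbiased)
print((np.pi * beta2 / (4 * (lam + 1))).max())  # ~0.0715 (biased)
\end{verbatim}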
Analogous results can be obtained (see supplementary material) when
considering more general scenarios where the input states of the
probes are not pure, the estimation process is biased, and it is
performed with arbitrary POVM measurements. (In the case of biased
measurements, the constant $\kappa$ in \eqref{ris} and \eqref{bd1}
must be replaced by $\kappa= \sup_{\lambda} \pi
\alpha(4/\lambda^2)/[4(\lambda+1)]\simeq 0.074$, where a $+1$ term
appears in the denominator.) In this generalized context, whenever the
RMSE depends explicitly on the value $x$ of the parameter, the
result~\eqref{ris} derived above is replaced by the weaker
relation~\eqref{bd1}. Such inequality clearly does not necessarily
exclude the possibility that at a ``sweet spot'' the estimation might
violate the scaling~(\ref{ris}). However, Eq.~(\ref{bd1}) is still
sufficiently strong to exclude accuracies of the form $\Delta X(x)
=1/R(x,\nu\langle H\rangle)$ where, as in Refs.~\cite{ssw,rivasluis},
$R(x,z)$ is a function of $z$ which, for all $x$, increases more than
linearly, i.e.~$\lim_{z\rightarrow \infty} z/R(x,z)=0$.
The bound~\eqref{ris} has been derived under the explicit assumption
that $x$ and $x'$ exist such that $x'-x\geqslant 2 \lambda \Delta X$
for some $\lambda >1$, which requires one to have $x'-x\geqslant 2
\Delta X$. This means that the estimation strategy must be good
enough: the probe is sufficiently sensitive to the transformation
$U_x$ that it is shifted by more than $\Delta X$ during the
interaction. The existence of pathological estimation strategies which
violate such a condition cannot be excluded {\em a priori}. Indeed
trivial examples of this sort can be easily constructed, a fact which
may explain the complicated history of the Heisenberg bound with
claims \cite{caves,yurke,barry,ou,bollinger,smerzi} and counterclaims
\cite{dowling,rivasluis,zhang,ssw}. It should be stressed however,
that the assumption $x'-x\geqslant 2 \Delta X$ is always satisfied
except for extremely poor estimation strategies with such large errors
as to be practically useless. One may think of repeating such a poor
estimation strategy $\nu>1$ times and of performing a statistical
average to decrease its error. However, for sufficiently large $\nu$
the error will decrease to the point in which the $\nu$ repetitions of
the poor strategy are, collectively, a good strategy, and hence again
subject to our bounds \eqref{ris} and \eqref{bd1}.
Our findings are particularly relevant in the field of quantum optics,
where a controversial and long-debated problem
\cite{caves,yurke,barry,ou,bollinger,smerzi,ssw,dowling,rivasluis,zhang}
is to determine the scaling of the ultimate limit in the
interferometric precision of estimating a phase as a function of the
energy $\langle H\rangle$ devoted to preparing the $\nu$ copies of the
probes: it has been conjectured
\cite{caves,yurke,barry,ou,bollinger,smerzi} that the phase RMSE is
lower bounded by the inverse of the total number of photons employed
in the experiment, the ``Heisenberg bound'' for
interferometry\footnote{This ``Heisenberg''
bound~\cite{caves,yurke,barry,ou,bollinger,smerzi} should not be
confused with the Heisenberg scaling defined for general quantum
estimation problem~\cite{review} in which the $\sqrt{\nu}$ at the
denominator of Eq.~(\ref{QC}) is replaced by $\nu$ by feeding the
$\nu$ inputs with entangled input states -- e.g. see
Ref.~\cite{review,GIOV06}.}. Its achievability has been recently
proved \cite{HAYA10-1}, and, in the context of quantum parameter
estimation, it corresponds to an equation of the form of
Eq.~\eqref{ris}, choosing $x=\phi$ (the relative phase between the
modes in the interferometer) and $H=a^\dag a$ (the number operator).
The validity of this bound has been questioned several times
\cite{ssw,dowling,rivasluis,zhang}. In particular schemes have been
proposed~\cite{ssw,rivasluis} that apparently permit better scalings
in the achievable RMSE (for instance $\Delta X \approx (\nu\langle
H\rangle)^{-\gamma}$ with $\gamma>1$). None of these protocols have
conclusively proved such scalings for arbitrary values of the
parameter $x$, but a sound, clear argument against the possibility of
breaking the $\gamma=1$ scaling of Eq.~(\ref{ris}) was missing up to
now. Our results validate the Heisenberg bound by showing that it
applies to all those estimation strategies whose RMSE $\Delta X$ does
not depend on the value of the parameter $x$, and that the remaining
strategies can only have good precision for isolated (hence
practically useless) values of the unknown parameter $x$.\newline
V.G. acknowledges support by MIUR through FIRB-IDEAS Project No.
RBID08B3FM. S.L. acknowledges Intel, Lockheed Martin, DARPA, ENI under
the MIT Energy Initiative, NSF, Keck under xQIT, the MURI QUISM
program, and Jeffrey Epstein. L.M. acknowledges useful discussions
with Prof. Alfredo Luis.
\subsection*{Supplementary material}
Our bound refers to the estimation of the value $x$ of a real
parameter $X$ that identifies a unitary transformation $U_x=e^{-iHx}$,
generated by an Hermitian operator $H$. The usual setting in quantum
channel parameter estimation~(see \cite{review} for a recent review)
is to prepare $\nu$ copies of a probe system in a fiducial state
$\rho$, apply the mapping $U_x$ to each of them as
$\rho\to\rho_x=U_x\rho {U_x}^\dag$, and then perform a (possibly
joint) measurement on the joint output state $\rho_x^{\otimes \nu}$,
the measurement being described by a generic Positive Operator-Valued
Measure (POVM) of elements $\{ E_j\}$. [The possibility of applying a
joint transformation on the $\nu$ probes before the interaction $U_x$
(e.g.~to entangle them as studied in \cite{GIOV06}) can also be
considered, but it is useless in this context, since it will not
increase the linear scaling in $\nu$ of the term $\nu(\langle
H\rangle-E_0)$ that governs our bounds.] The result $j$ of the measurement is finally
used to recover the quantity $x$ through some data processing which
assigns to each outcome $j$ of the POVM a value $x_j$ which represents
the estimation of $x$. The accuracy of the process can be gauged by
the RMSE of the problem, i.e.~by the quantity
\begin{eqnarray}\label{defdelta}
\Delta X := \sqrt{\sum_{j} P_j(x) [ x_j -x ]^2 } = \sqrt{ \delta^2X +
(\bar{x}-x )^2 },
\end{eqnarray}
where $P_j(x) = \mbox{Tr}[ E_j \rho_x^{\otimes \nu}]$ is the
probability of getting the outcome $j$ when measuring $\rho_x^{\otimes
\nu}$, $\bar{x} := \sum_{j} P_j(x) x_j$ is the average of the
estimator function, and where
\begin{eqnarray}
\delta^2 X := \sum_{j} P_j(x) [ x_j - \bar{x}]^2\;,
\end{eqnarray}
is the variance of the random variable $x_j$. The estimation is said
to be unbiased if $\bar{x}$ coincides with the real value $x$,
i.e.~$\bar{x}=x$, so that, in this case, $\Delta X$ coincides with
$\delta X$. General estimators however may be biased with $\bar{x}\neq
x$, so that $\Delta X > \delta X$ (in this case, they are called
asymptotically unbiased if $\bar{x}$ converges to $x$ in the limit
$\nu\rightarrow \infty$).
In the main text we restricted our analysis to pure states of the
probe $\rho=|\psi\rangle\langle \psi|$ and focused on projective
measurements associated with unbiased estimation procedures whose RMSE
$\Delta X$ is independent of $x$.
Here we extend the proof to drop the above simplifying assumptions,
considering a generic (not necessarily unbiased) estimation process
which allows one to determine the value of the real parameter $X$
associated with the not necessarily pure input state $\rho$.
Take two values $x$ and $x'$ of $X$ such that their associated RMSEs
verify the following constraints
\begin{eqnarray}
&&\Delta X(x) \neq 0\;,\label{pos}\\
&&|x-x'| =
( \lambda+1) [\Delta X(x)+\Delta X(x') ] \;,
\label{dist}
\end{eqnarray}
for some fixed value $\lambda$ greater than 1 (the right hand side of
Eq.~(\ref{dist}) can be replaced by $\lambda [\Delta X(x) +\Delta
X(x')]$ if the estimation is unbiased). In these expressions $\Delta
X(x)$ and $\Delta X(x')$ are the RMSE of the estimation evaluated
through Eq.~(\ref{defdelta}) on the output states $\rho_x^{\otimes
\nu}$ and $\rho_{x'}^{\otimes \nu}$ respectively (to include the
most general scenario we do allow them to depend explicitly on the
values taken by the parameter $X$). In the case in which the
estimation is asymptotically unbiased and the quantum Fisher
information $Q(x)$ of the problem takes finite values, the condition
(\ref{pos}) is always guaranteed by the quantum Cram\'{e}r-Rao
bound~\cite{holevo,helstrom,BRAU96,BRAU94} (but notice that our proof
holds also if the quantum Cram\'{e}r-Rao bound does not apply -- in
particular, we do not require the estimation to be asymptotically
unbiased). The condition~(\ref{dist}) on the other hand is verified
by any estimation procedure which achieves a reasonable level of
accuracy: indeed, if it is not verified, then this implies that the
interval over which $X$ can span is not larger than twice the average
RMSE achievable in the estimation.
Since the fidelity between two quantum states is the minimum of the
classical fidelity of the probability distributions from arbitrary
POVMs~\cite{nc}, we can bound the fidelity between $\rho_x^{\otimes
\nu}$ and $\rho_{x'}^{\otimes \nu}$ as follows
\begin{eqnarray}
F :=\Big[ \mbox{Tr} \sqrt{\sqrt{\rho_x^{\otimes \nu}}
\rho_{x'}^{\otimes \nu} \sqrt{\rho_x^{\otimes \nu}} }\Big]^2
\leqslant \Big[
\sum_j\sqrt{ P_j(x) P_j(x') } \Big]^2\;,\nonumber\\\label{fid}
\end{eqnarray}
with $P_j(x) = \mbox{Tr} [ E_j \rho_x^{\otimes \nu}]$ and $P_j(x') = \mbox{Tr} [ E_j \rho_{x'}^{\otimes \nu}]$.
The right-hand-side of this expression can be bound as
\begin{widetext}
\begin{eqnarray}
\sum_j\sqrt{ P_j(x) P_j(x') }
&=& \sum_{j\in I} \sqrt{ P_j(x) P_j(x') }+ \sum_{j\notin I} \sqrt{
P_j(x) P_j(x') } \nonumber \\
&\leqslant& \sqrt{\sum_{j\in I}{ P_j(x) } \sum_{j'\in I}{
P_{j'}(x') }} +\sqrt{ \sum_{j\notin I} { P_j(x) }
\sum_{j'\notin I}{P_{j'}(x') } }
\nonumber \\ &\leqslant&
\sqrt{\sum_{j\in I}{ P_j(x) } } +\sqrt{
\sum_{j'\notin I}{P_{j'}(x') } } \;,\label{boun1}
\end{eqnarray}
where $I$ is a subset of the domain of possible outcomes $j$ that we
will specify later, and where we used the Cauchy-Schwarz inequality
and the fact that $ \sum_{j'\in I}{ P_{j'}(x') }\leqslant 1$ and $
\sum_{j\notin I} { P_j(x) }\leqslant 1 $ independently of $I$. Now,
take $I$ to be the domain of the outcomes $j$ such that
\begin{eqnarray}
|x_j - \bar{x}'| \leqslant \lambda \delta X',
\labell{lam}\;
\end{eqnarray}
where $\lambda$ is a positive parameter (here $\bar{x}'$ and $(\delta X')^2$ are the average and the variance of $x_j$ computed with the probability distribution $P_{j}(x')$).
From the Tchebychev inequality it then follows that
\begin{eqnarray}
\sum_{j'\notin I}{P_{j'}(x') } \leqslant 1 /\lambda^2\;,\label{asa1}
\end{eqnarray}
which gives a significant bound only when $\lambda>1$.
To bound the other term on the rhs of Eq.~(\ref{boun1}) we
notice that $|x-x'| \leqslant \big|x- \bar{x} \big| + \big| x'- \bar{x'}\big| + \big|\bar{x}-\bar{x}'\big|$ and use
Eq.~(\ref{dist}) and (\ref{defdelta}) to write
\begin{eqnarray}
\big|\bar{x}-\bar{x}' \big| &\geqslant& (\lambda +1) (\Delta X + \Delta X') - \big|x- \bar{x} \big| - \big| x'- \bar{x}'\big| \nonumber \\
&=& (\lambda +1) (\Delta X + \Delta X') - \sqrt{\Delta^2X - \delta^2 X} - \sqrt{\Delta^2X' - \delta^2 X'} \geqslant \lambda (\Delta X + \Delta X')\;.
\end{eqnarray}
From Eq.~(\ref{lam}) we also notice that for $j\in I$ we have
\begin{eqnarray}
\big|\bar{x}-\bar{x}'\big|
\leqslant \big|\bar{x}- x_j\big| + \big| x_j - \bar{x}'\big|
\leqslant \big|\bar{x}- x_j\big| + \lambda \delta X'\;,
\end{eqnarray}
which with the previous expression gives us
\begin{eqnarray}
\big|\bar{x}- x_j\big| \geqslant \lambda (\Delta X + \Delta X')
- \lambda \delta X'\geqslant \lambda \Delta X \geqslant \lambda \delta X\;,
\end{eqnarray}
and hence (using again the Tchebychev inequality)
\begin{eqnarray}
\sum_{j\in I}{P_{j}(x) } \leqslant 1 /\lambda^2\;.\label{17}
\end{eqnarray}
Replacing \eqref{asa1} and \eqref{17} into (\ref{fid}) and
(\ref{boun1}) we obtain
\begin{eqnarray}\label{fidnew}
F \leqslant 4 /\lambda^2 \;.
\end{eqnarray}
We can now employ the quantum speed limit inequality~(\ref{newqsl})
from \cite{qspeed} and merge it with the condition (\ref{dist}) to
obtain
\begin{eqnarray}
(\lambda +1 )(\Delta X + \Delta X')= |x'-x| &\geqslant&
\frac{\pi}{2} \max \left\{ \frac{\alpha(F)}{\nu(\langle H\rangle- E_0)},
\frac{\beta(F)}{\sqrt{\nu} \Delta H }\right\}
\geqslant \frac{\pi}{2} \max \left\{
\frac{\alpha(4/\lambda^2)}{\nu(\langle H\rangle - E_0)},
\frac{\beta(4/\lambda^2)}{\sqrt{\nu} \Delta H}\right\},\label{ila}
\end{eqnarray}
\end{widetext}
where, as in the main text, we used the fact that $\alpha$ and $\beta$
are decreasing functions of their arguments, and the fact that the
expectation value and variance of $H$ over the family $\rho_x$ are
independent of $x$ (since $H$ is independent of $x$). The first term
of Eq.~\eqref{ila} together with the first part of the $\max$ implies
Eq.~(\ref{bd1}), choosing $\kappa= \sup_{\lambda} \pi
\alpha(4/\lambda^2)/[4(\lambda+1)]\simeq 0.074$, which for unbiased
estimation can be replaced by $\kappa=\sup_{\lambda} \pi
\alpha(4/\lambda^2)/[4\lambda] \simeq 0.091$. In the case in which
$\Delta X(x) =\Delta X(x')=\Delta X$ we then immediately obtain the
bound~(\ref{ris}).
\begin{figure}
\caption{ Plot of the functions $\alpha(F)$ and $\beta(F)$ appearing
in Eq.~(\ref{newqsl}).}
\label{f:qsl}
\end{figure}
\begin{figure}
\caption{Plot of the function $\pi\: \alpha(4/\lambda^2)/(4\lambda)$
as a function of $\lambda$ (blue continuous line). The function
$\alpha$ is evaluated numerically according to the prescription of
\cite{qspeed}.}
\end{figure}
\begin{references}
\bibitem{heisenberg}Heisenberg, W., \"Uber den anschaulichen Inhalt der
quantentheoretischen Kinematik und Mechanik, {\em Zeitschrift f\"ur
Physik} {\bf 43}, 172-198 (1927), English translation in Wheeler
J.A. and Zurek H. eds., {\em Quantum Theory and Measurement}
(Princeton Univ. Press, 1983), pg. 62-84.
\bibitem{robertson} Robertson, H.P., The Uncertainty Principle, {\em
Phys. Rev.} {\bf 34}, 163 (1929).
\bibitem{holevo} Holevo, A.S., Probabilistic and Statistical Aspect
of Quantum Theory. (Edizioni della Normale, Pisa 2011).
\bibitem{helstrom} Helstrom, C.W., Quantum Detection and Estimation
Theory. (Academic Press, New York, 1976).
\bibitem{BRAU94} Braunstein, S.L. \& Caves, C.M., Statistical
distance and the geometry of quantum states. {\em Phys. Rev. Lett.}
{\bf 72}, 3439 (1994).
\bibitem{BRAU96} Braunstein, S.L., Caves, M.C. \& Milburn, G.J.,
Generalized Uncertainty Relations: Theory, Examples, and Lorentz
Invariance. {\em Annals of Physics} {\bf 247}, 135-173 (1996).
\bibitem{review} Giovannetti, V., Lloyd, S. \& Maccone, L., Advances
in Quantum Metrology, {\em Nature Phot.} {\bf 5}, 222 (2011).
\bibitem{caves} Braunstein, S.L., Lane, A.S., \& Caves, C.M.,
Maximum-likelihood analysis of multiple quantum phase measurements,
{\em Phys. Rev. Lett.} {\bf 69}, 2153-2156 (1992).
\bibitem{yurke} Yurke, B., McCall, S.L. \& Klauder, J.R., SU(2) and
SU(1,1) interferometers, {\em Phys. Rev. A} {\bf 33}, 4033 (1986).
\bibitem{barry}Sanders, B.C. \& Milburn, G.J., Optimal Quantum
Measurements for Phase Estimation, {\em Phys. Rev. Lett.} {\bf 75},
2944-2947 (1995).
\bibitem{ou} Ou, Z.Y., Fundamental quantum limit in precision phase
measurement, {\em Phys. Rev. A} {\bf 55}, 2598 (1997); Ou Z.Y.,
Complementarity and Fundamental Limit in Precision Phase
Measurement, {\em Phys. Rev. Lett.} {\bf 77}, 2352-2355 (1996).
\bibitem{bollinger} Bollinger, J.J., Itano, W.M., Wineland, D.J. \&
Heinzen, D.J., Optimal frequency measurements with maximally
correlated states, {\em Phys. Rev. A} {\bf 54}, R4649 (1996).
\bibitem{smerzi}Hyllus, P., Pezz\'e, L. \& Smerzi, A., Entanglement and
Sensitivity in Precision Measurements with States of a Fluctuating
Number of Particles, {\em Phys. Rev. Lett.} {\bf 105}, 120501
(2010).
\bibitem{rivasluis}Rivas, A. \& Luis, A., Challenging metrological
limits via coherence with the vacuum, {\em preprint}
arXiv:1105.6310v1 (2011).
\bibitem{dowling} Anisimov, P.M. et al.,
Quantum Metrology with Two-Mode Squeezed Vacuum: Parity Detection
Beats the Heisenberg, {\em Phys. Rev. Lett.} {\bf 104}, 103602
(2010).
\bibitem{zhang}Zhang, Y.R., et al., Heisenberg Limit of Phase
Measurements with a Fluctuating Number of Photons, {\em preprint}
arXiv:1105.2990v2 (2011).
\bibitem{GIOV06} Giovannetti, V., Lloyd, S. \& Maccone, L., Quantum
metrology. {\em Phys. Rev. Lett.} {\bf 96}, 010401 (2006).
\bibitem{qspeed} Giovannetti, V., Lloyd, S. \& Maccone, L., Quantum
limits to dynamical evolution, {\em Phys. Rev. A} {\bf 67}, 052109
(2003).
\bibitem{margolus} Margolus, N. \& Levitin, L.B., The maximum speed of
dynamical evolution {\em Physica D} {\bf 120}, 188 (1998).
\bibitem{bhatta} Bhattacharyya, K., {\em J. Phys. A} {\bf 16}, 2993
(1983).
\bibitem{man} Mandelstam, L. \& Tamm, I.G., {\em J. Phys. USSR} {\bf 9},
249 (1945).
\bibitem{kok} Zwierz, M., P\'erez-Delgado, C.A., \& Kok, P., General
Optimality of the Heisenberg Limit for Quantum Metrology, {\em Phys.
Rev. Lett.} {\bf 105,} 180402 (2010).
\bibitem{erratum}Zwierz, M., P\'erez-Delgado, C.A., \& Kok, P., Erratum:
General Optimality of the Heisenberg Limit for Quantum Metrology,
{\em Phys. Rev. Lett.} {\bf 107,} 059904(E) (2011).
\bibitem{ssw} Shapiro, J.H., Shepard, S.R. \& Wong, F.C., Ultimate
quantum limits on phase measurement, {\em Phys. Rev. Lett.} {\bf 62},
2377-2380 (1989).
\bibitem{HAYA10-1} Hayashi, M., Phase estimation with photon number
constraint. {\em Progress in Informatics} {\bf 8}, 81-87 (2011);
arXiv:1011.2546v2 [quant-ph].
\bibitem{nc} Nielsen, M.A. \& Chuang, I.L., Quantum Computation and
Quantum Information (Cambridge Univ. Press, Cambridge, 2004),
Eq.~(9.77), pg.~412.
\end{references}
\end{document}
\begin{document}
\title{Paraconsistent Machines and their Relation to Quantum Computing}
\begin{abstract}
We describe a method to axiomatize computations in deterministic
Turing machines. When applied to computations in
non-deterministic Turing machines, this method may produce
contradictory (and therefore trivial) theories, considering
classical logic as the underlying logic. By substituting in such
theories the underlying logic by a paraconsistent logic
we define a new computation model, the
\emph{paraconsistent Turing machine}. This
model allows a partial simulation of superposed states of
quantum computing. Such a feature allows the definition of
paraconsistent algorithms which solve (with some restrictions)
the well-known Deutsch's and Deutsch-Jozsa
problems. This first model of computation, however,
does not adequately represent the notions of \emph{entangled
states} and \emph{relative phase}, which are key features in quantum computing.
In this way, a sharper model of paraconsistent Turing machines is defined,
which better approximates the features of quantum computing.
Finally, we define complexity classes for such models, and establish some
relationships with classical complexity classes.
\end{abstract}
\section{Introduction}
The undecidability of first-order logic was first proved by
Alonzo Church in \cite{Church-1936b} and an alternative proof with
the same result was presented by Alan Turing in
\cite{Turing-1936}. In his paper, Turing defined an abstract
model of automatic machines, now known as \emph{Turing machines}
(TMs), and demonstrated that there are unsolvable problems for that
class of machines. By axiomatizing machine
computations in first-order theories, he could then prove that
the decidability of first-order logic would imply the solution
of established unsolvable problems. Consequently, by
\emph{reductio ad absurdum}, first-order logic is shown to be
undecidable. Turing's proof was simplified by Richard B\"uchi in
\cite{Buchi-1962}, and a more recent and clear version of this
proof is presented by George Boolos and Richard Jeffrey in
\cite[Chap. 10]{Boolos-Jeffrey-1989}.
By following \cite{Boolos-Jeffrey-1989} and adding new axioms, we
define a method to obtain adequate theories for computations in
\emph{deterministic} TMs (DTMs) (Sec. \ref{axio-TM-comp}), which
satisfy a formal notion of representation of TM computations (introduced here),
thereby enhancing the standard way in which TMs are expressed by means of
classical first-order logic.
Next, we will show that by using the same axiomatization method for
\emph{non-deterministic} TMs (NDTMs), we obtain (in some cases)
contradictory theories, and therefore trivial theories in view of the
underlying logic. At this point, we have two options in sight to
avoid triviality: (a) The first option, consisting in the classical
move of restricting theories in a way that contradictions could
not be derived (just representing computations in NDTMs); (b) the
second option, consisting in substituting the underlying logic by
a paraconsistent logic, supporting contradictions and providing a way to define new models of
computation through the interpretation of the theories. The first
option is sterile: incapable of producing a new model of
computation. In this paper, we follow the second option,
initially defining in Sec. \ref{ptms} a model of
\emph{paraconsistent} TMs (ParTMs), using the paraconsistent
logic $LFI1^*$ (see \cite{Carnielli-Coniglio-Marcos-2007}). We
then show that ParTMs allow a partial simulation of `superposed states', an important
feature of quantum computing (Sec. \ref{sim-qc-ptms}). By using this property,
and taking advantage of `conditions of inconsistency' added to the instructions,
we show that the quantum solution of Deutsch's and
Deutsch-Jozsa problems can be simulated in ParTMs (Sec.
\ref{sim-D-DJ-prob}). However, as explained in Sec.
\ref{sim-ent-states-rel-phases}, ParTMs are not adequate to simulate
\emph{entangled states} and \emph{relative phases}, which are key features in quantum computing.
Thus, still in Sec. \ref{sim-ent-states-rel-phases}, we define a
paraconsistent logic with a \emph{non-separable} conjunction and,
with this logic, a new model of paraconsistent TMs is defined,
which we call \emph{entangled paraconsistent} TMs (EParTMs).
In EParTMs, uniform entangled states can be successfully
simulated. Moreover, we describe how the notion of relative phase (see \cite[p.
193]{Nielsen-Chuang-2000}) can be introduced
in this model of computation.\footnote{ParTMs were first presented in \cite{Agudelo-Sicard-2004}
and relations with quantum computing were presented in \cite{Agudelo-Carnielli-2005},
but here we obtain some improvements and introduce the model of EParTMs,
which represents a better approach to quantum computing.}
The \emph{paraconsistent computability theory} has already been
mentioned in \cite[p. 196]{Sylvan-Copeland-2000} as an `emerging
field of research'. In that paper, \emph{dialethic machines} are
summarily described as Turing machines acting under dialethic
logic (a kind of paraconsistent logic) in the presence of a
contradiction, but no definition of any computation model is
presented.\footnote{The authors say that ``it is not difficult to
describe how a machine might encounter a contradiction: For some
statement $A$, both $A$ and $\neg A$ appear in its output or
among its inputs'' (cf. \cite[p. 196]{Sylvan-Copeland-2000});
but how can $\neg A$ `appear'? They also claim that ``By contrast
[with a classical machine], a machine programmed with a dialethic
logic can proceed with its computation satisfactorily [when a
contradiction appears]''; but how would they proceed?} Also,
an obscure argument is presented in an attempt to show how dialethic machines
could be used to compute classically uncomputable functions.
Contrarily, our models of ParTMs and EParTMs do not intend to
break the Church-Turing thesis, i.e., all problems computed by
ParTMs and EParTMs can also be computed by TMs (Sec.
\ref{comp-power-partms-EParTMs}). However, such models are
advantageous for the understanding of quantum computing and
parallel computation in general. Definitions of computational
complexity classes for ParTMs and EParTMs, in addition to interesting
relations with existing classical and quantum computational
complexity classes, are presented in Sec. \ref{comp-power-partms-EParTMs}.
The paraconsistent approach to quantum computing presented here
is just one way to describe the role of quantum features in the
process of computation by means of non-classical logics; in
\cite{Agudelo-Carnielli-2007} we presented another way to define
a model of computation based on another paraconsistent logic, also
related with quantum computing. The relationship between these
two different definitions of paraconsistent computation is a task
that needs to be addressed in future work. | 1,908 | 28,884 | en |
\section{Axiomatization of TM Computations}\label{axio-TM-comp}
As mentioned above, a method to define first-order theories
for Turing machine computations has already been introduced in \cite{Turing-1936}.
Although this is a well-known construction, in view of the important role of
this method in our definition of paraconsistent Turing machines, we will
describe it in detail, following \cite[Chap. 10]{Boolos-Jeffrey-1989}.
This method will be extended to deal with a formal notion of representation (here introduced) of TM computation.
Considering $Q = \{q_1, \ldots, q_n \}$ as a finite set of states
and $\Sigma = \{s_1, \ldots, s_m \}$ as a finite set of
read/write symbols, we will suppose that TM instructions are defined by
quadruples of one of the following types (with the usual interpretations, where
$R$ means a movement to the right, and $L$ means a movement to the left):
\begin{align}
&q_i s_j s_k q_l,\tag{I}\label{inst-i}\\
&q_i s_j R q_l,\tag{II}\label{inst-ii}\\
&q_i s_j L q_l.\tag{III}\label{inst-iii}
\end{align}
By convention, we will enumerate the instants of time and
the cells of the tape by integer numbers, and we will consider that
machine computations begin at time $0$, with a symbol sequence on
the tape (the \emph{input} of the computation), and with the
machine in state $q_1$ scanning the symbol on cell $0$. $s_1$ will be
assumed to be an \emph{empty} symbol.
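To fix these conventions, the following Python sketch may be helpful; the encoding (states and symbols as strings, the tape as a dictionary from cells to symbols) is our own illustration and is not part of the formalism:

\begin{verbatim}
def step(instructions, state, pos, tape, blank='s1'):
    """Execute one quadruple (q_i, s_j, Op, q_l), if any applies."""
    scanned = tape.get(pos, blank)
    for (qi, sj, op, ql) in instructions:
        if qi == state and sj == scanned:
            if op == 'R':               # type (II): move right
                return ql, pos + 1, tape
            if op == 'L':               # type (III): move left
                return ql, pos - 1, tape
            tape = dict(tape)           # type (I): write a symbol
            tape[pos] = op
            return ql, pos, tape
    return None                         # no instruction applies: halt

# A two-instruction machine: write s2 on cell 0, then move right.
prog = [('q1', 's1', 's2', 'q2'), ('q2', 's2', 'R', 'q3')]
cfg = ('q1', 0, {})
while cfg:
    state, pos, tape = cfg
    cfg = step(prog, state, pos, tape)
print(state, pos, tape)                 # q3 1 {0: 's2'}
\end{verbatim}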
In order to represent the computation of a TM $\mc{M}$ with input
$\alpha$ (hereafter $\mc{M}(\alpha)$), we initially define the
first-order theory $\Delta_{FOL}(\mc{M}(\alpha))$ over the first-order language $\mc{L}
= \{Q_1, \ldots, Q_n, S_1, \ldots, S_{m}, <, ', 0\}$,\footnote{The
subscript $FOL$ on $\Delta$ aims to emphasize the fact that we
are considering the \emph{classical first-order logic} (FOL) as
the underlying logic of the theory, i.e. $\Delta_{FOL} \vdash A$
means $\Delta \vdash_{FOL} A$. A different subscript will
indicate that another (non-classical) first-order logic is being
taken into consideration.} where
symbols $Q_i$, $S_j$ and $<$ are binary predicate symbols, $'$ is
a unary function symbol and $0$ is a constant symbol. In the
intended interpretation $\mc{I}$ of the sentences in
$\Delta_{FOL}(\mc{M}(\alpha))$, variables are interpreted as
integer numbers, and symbols in $\mc{L}$ are interpreted in the
following way:
\begin{itemize}
\item $Q_i(t, x)$ indicates that $\mc{M}(\alpha)$ is in state $q_i$, at time $t$,
scanning the cell $x$;
\item $S_j(t, x)$ indicates that $\mc{M}(\alpha)$ contains the symbol $s_j$, at
time $t$, on cell $x$;
\item $<(x, y)$ indicates that $x$ is less than $y$, in the standard order of
integer numbers;
\item $'(x)$ indicates the successor of $x$;
\item $0$ indicates the number $0$.
\end{itemize}
To simplify notation, we will use $x < y$ instead of $<(x, y)$ and
$x'$ instead of $'(x)$. The theory $\Delta_{FOL}(\mc{M}(\alpha))$
consists of the following axioms:
\begin{itemize}
\item Axioms establishing the properties of $'$ and $<$:
\begin{align}
&\forall z \exists x (z = x'), \tag{A1} \label{existe-suc}\\
&\forall z \forall x \forall y (((z=x') \wedge (z=y')) \to (x=y)), \tag{A2} \label{unicidad-suc}\\
&\forall x \forall y \forall z (((x<y) \wedge (y<z)) \to (x<z)), \tag{A3} \label{trans-menorque}\\
&\forall x (x < x'), \tag{A4} \label{relac-suc-menorque}\\
&\forall x \forall y ((x<y) \to (x \neq y)). \tag{A5} \label{antireflex-menorque}
\end{align}
\item An axiom for each instruction $i_j$ of $\mc{M}$. The axiom is defined depending
respectively on the instruction type \eqref{inst-i}, \eqref{inst-ii} or \eqref{inst-iii}
as:
\begin{multline}
\forall t \forall x \Biggl(\biggl(Q_i(t, x) \wedge S_j(t, x)\biggr) \to \biggl(Q_l(t', x) \wedge S_k(t', x) \wedge \\ \forall y \Bigl((y \neq x) \to
\Bigl(\bigwedge_{r=1}^{m}\bigl(S_r(t, y) \to S_r(t', y)\bigr)\Bigr)\Bigr)\biggr)\Biggr), \tag{A$i_{\msf{j}}$ \eqref{inst-i}} \label{ax-inst-i}
\end{multline}
\begin{multline}
\forall t \forall x \Biggl(\biggl(Q_i(t, x) \wedge S_j(t, x)\biggr) \to \biggl(Q_l(t', x') \wedge \\ \forall y \Bigl(\bigwedge_{r=1}^{m}\bigl(S_r(t, y) \to
S_r(t', y)\bigr)\Bigr)\biggr)\Biggr), \tag{A$i_{\msf{j}}$ \eqref{inst-ii}} \label{ax-inst-ii}
\end{multline}
\begin{multline}
\forall t \forall x \Biggl(\biggl(Q_i(t, x') \wedge S_j(t, x')\biggr) \to \biggl(Q_l(t', x) \wedge \\ \forall y \Bigl(\bigwedge_{r=1}^{m}\bigl(S_r(t, y) \to
S_r(t', y)\bigr)\Bigr)\biggr)\Biggr). \tag{A$i_{\msf{j}}$ \eqref{inst-iii}} \label{ax-inst-iii}
\end{multline}
\item An axiom to specify the initial configuration of the machine. Considering the input
$\alpha = s_{i_0} s_{i_1} \ldots s_{i_{p-1}}$, where $p$ represents the length of
$\alpha$, this axiom is defined by:
\begin{multline}
Q_1(0, 0) \wedge \left(\bigwedge_{j=0}^{p-1} S_{i_{j}}(0, 0^{j})\right) \wedge \forall y \left(\left(\bigwedge_{j=0}^{p-1} y \neq 0^{j}\right) \to S_1(0, y)\right), \tag{A$\alpha$} \label{init-conf}
\end{multline}
where $0^{j}$ means $j$ applications of the successor function ($'$) to the constant $0$ (a concrete instance is given below).
\end{itemize}
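For instance, for the two-symbol input $\alpha = s_2 s_3$ (so $p = 2$), axiom \eqref{init-conf} instantiates to
\[
Q_1(0, 0) \wedge S_2(0, 0) \wedge S_3(0, 0') \wedge \forall y \left(\left((y \neq 0) \wedge (y \neq 0')\right) \to S_1(0, y)\right).
\]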
In \cite{Boolos-Jeffrey-1989}, a sentence $H$ is defined to
represent the halting of the computation, and it is thus proved
that $\Delta_{FOL}(\mc{M}(\alpha)) \vdash H$ iff the machine
$\mc{M}$ with input $\alpha$ halts. In this way, the decidability
of first-order logic implies the solution for the \emph{halting
problem}, a well-known unsolvable problem; this proves (by
\emph{reductio ad absurdum}) the undecidability of first-order
logic. For Boolos and Jeffrey's aims,
$\Delta_{FOL}(\mc{M}(\alpha))$ theories are strong enough, but
our purpose here is to attain a precise logical representation of TM
computations. Therefore, we will formally define the notion of
representability of a TM computation and show that new axioms must
be added to $\Delta_{FOL}(\mc{M}(\alpha))$ theories. Our
definition of the representation of a TM computation (Definition
\ref{def-rep-comp}) is founded upon the definitions of the
representation of functions and relations (Definition
\ref{def-rep-func} and \ref{def-rep-rel}) in theories introduced
by Alfred Tarski in collaboration with Andrzej Mostowski and
Raphael M. Robinson in \cite{Tarski-Mostowski-Robinson-1953}.
\begin{definition}\label{def-rep-func}
Let $f$ be a function of arity $k$, $\Delta$ an arbitrary theory and $\varphi(x_1, \ldots, x_k, x)$ a wff (with $k + 1$ free variables) in $\Delta$. The function $f$ is \emph{represented} by $\varphi$ in $\Delta$ if $f(m_1, \ldots, m_k) = n$ implies (bars are used to denote numerals):
\begin{enumerate}
\item $\Delta \vdash \varphi(\bar{m_1}, \ldots, \bar{m_k}, \bar{n})$,\label{def-rep-func-cond-i}
\item if $n \neq p$ then $\Delta \vdash \neg \varphi(\bar{m_1}, \ldots, \bar{m_k}, \bar{p})$, and\label{def-rep-func-cond-ii}
\item $\Delta \vdash \varphi(\bar{m_1}, \ldots, \bar{m_k}, \bar{q}) \rightarrow \bar{q} = \bar{n}$.\label{def-rep-func-cond-iii}
\end{enumerate}
\end{definition}
\begin{definition}\label{def-rep-rel}
Let $R$ be a relation of arity $k$, $\Delta$ an arbitrary theory and $\varphi(x_1, \ldots, x_k)$ a wff (with $k$ free variables) in $\Delta$. The relation $R$ is \emph{represented} by $\varphi$ in $\Delta$ if:
\begin{enumerate}
\item $(m_1, \ldots, m_k) \in R$ implies $\Delta \vdash \varphi(\bar{m_1}, \ldots, \bar{m_k})$, and\label{def-rep-rel-cond-i}
\item $(m_1, \ldots, m_k) \notin R$ implies $\Delta \vdash \neg \varphi(\bar{m_1}, \ldots, \bar{m_k})$.\label{def-rep-rel-cond-ii}
\end{enumerate}
\end{definition}
\begin{definition}\label{def-rep-comp}
Let $\mc{M}$ be a TM, $\alpha$ the input for $\mc{M}$, and
$\mu(\mc{M}(\alpha)) = \langle \mbb{Z}, Q_1^{\mu}, Q_2^{\mu}, \dots, Q_n^{\mu}, S_1^{\mu}, S_2^{\mu}, \dots, S_{m}^{\mu}, <^{\mu}, '^{\mu}, 0^{\mu} \rangle$
the structure determined by the intended interpretation $\mc{I}$.\footnote{
$\mbb{Z}$ represents the integers, the
relations $Q_i^{\mu}$ express pairs of instants of time and positions for
states $q_i$ in the computation of $\mc{M}(\alpha)$, relations $S_j^{\mu}$
express pairs of instants of time and positions for symbols $s_j$ in the computation
of $\mc{M}(\alpha)$, $<^{\mu}$ is the standard strict order on $\mbb{Z}$, $'^{\mu}$
is the successor function on $\mbb{Z}$, and $0^{\mu}$ is the integer $0$.}
A theory $\Delta$, in the language $\mc{L} = \{Q_1, Q_2, \ldots, Q_n, S_1, S_2, \ldots, S_{m}, <, ', 0\}$,
\emph{represents the computation of $\mc{M}(\alpha)$} if:
\begin{enumerate}
\item $<^{\mu}$ is represented by $\varphi(x, y) := x < y$ in $\Delta$,
\item $'^{\mu}$ is represented by $\varphi(x, y) := x' = y$ in $\Delta$,
\item $Q_i^{\mu}$ ($i = 1, \ldots, n$) are represented by $\varphi(x, y) := Q_i(x, y)$ in $\Delta$, and
\item $S_j^{\mu}$ ($j = 1, \ldots, m$) are represented by $\varphi(x, y) := S_j(x, y)$ in $\Delta$.
\end{enumerate}
\end{definition}
\begin{theorem}\label{theo-delta-not-rep}
Let $\mc{M}$ be a TM and $\alpha$ the input for $\mc{M}$. The
theory $\Delta_{FOL}(\mc{M}(\alpha))$ cannot represent the
computation of $\mc{M}(\alpha)$.
\begin{proof}
We show that condition \ref{def-rep-rel-cond-ii} of Definition \ref{def-rep-rel}
cannot be satisfied for relations $Q_i$ and $S_j$: Indeed, when $\mc{M}(\alpha)$ is in
state $q_i$, at time $t$ and position $x$, it is not in any other state
$q_j$ ($i \neq j$); in this case, we have
that $\Delta_{FOL}(\mc{M}(\alpha)) \vdash Q_i(\bar{t}, \bar{x})$
(by the proof in \cite[Chap. 10]{Boolos-Jeffrey-1989}),
but on the other hand we have
that $\Delta_{FOL}(\mc{M}(\alpha)) \nvdash \neg Q_j(\bar{t}, \bar{x})$,
because a non-standard TM with the same instructions of $\mc{M}$ (but
allowing multiple simultaneous states: starting the computation in
two different simultaneous states, for example) also validates all
axioms in $\Delta_{FOL}(\mc{M}(\alpha))$. A similar situation occurs
with relations $S_j$. We can also define other non-standard TMs which
allow different symbols and states, on different positions of the tape,
at times before the beginning or after the end of the computation,
in such a way that the machine validates all axioms in $\Delta_{FOL}(\mc{M}(\alpha))$.
\end{proof} | 3,796 | 28,884 | en |
\end{theorem}
Theorem \ref{theo-delta-not-rep} shows that it is necessary to
expand the theories $\Delta_{FOL}(\mc{M}(\alpha))$ in order to disallow
non-standard interpretations and to grant representation of
computations in accordance with Definition \ref{def-rep-comp}. We thus
define the notion of an \emph{intrinsic theory of the
computation of} $\mc{M}(\alpha)$ as the theory
$\Delta^{\star}_{FOL}(\mc{M}(\alpha))$ by specifying which new
axioms have to be added to $\Delta_{FOL}(\mc{M}(\alpha))$
theories, so that these extended theories are able to represent
their respective TM computations (Theorem
\ref{theo-delta-exp-rep}). For the specification of such axioms,
we will suppose that before the beginning of any computation and
after the end of any computation (if the computation halts), the
machine is in none of its states and no symbol (not even the
empty symbol) occurs anywhere in its tape. New axioms are defined
as follows:
\begin{itemize}
\item An axiom to define the situation of $\mc{M}(\alpha)$ before the beginning of
the computation:
\begin{equation}
\forall t \forall x \left((t < 0) \to \left(\left(\bigwedge_{i=1}^{n} \neg Q_i(t, x)\right) \wedge \left(\bigwedge_{j=1}^{m} \neg S_j(t, x)\right)\right)\right).\tag{A$t0$} \label{ax-t-0}
\end{equation}
\item An axiom to define the situation of $\mc{M}(\alpha)$ after the
end of the computation (if the computation halts):
\begin{multline}
\forall t \forall x \Biggl(\neg \Biggl(\bigvee_{q_i s_j \in I} \biggl(Q_i(t, x) \wedge S_j(t, x)\biggr)\Biggr) \to \\
\forall u \forall y \Biggl( t < u \to \biggl(\biggl(\bigwedge_{i=1}^{n}\neg Q_i(u, y) \biggr) \wedge \biggl(\bigwedge_{j=1}^{m}\neg S_j(u, y)\biggr)\biggr)\Biggr)\Biggr), \tag{A$th$} \label{ax-t-halt}
\end{multline}
where subscript $q_i s_j \in I$ means that, in the disjunction, only
combinations of $q_i s_j$ coincident with the first two symbols of some instruction
of $\mc{M}$ are taken into account.
\item An axiom for any state symbol $q_i$ of $\mc{M}$
establishing the uniqueness of any state and any position in a given instant of time:
\begin{equation}
\forall t \forall x \left(Q_i(t, x) \to \left(\left(\bigwedge_{j \neq i} \neg Q_j(t, x)\right) \wedge \forall y \left(y \neq x \to \bigwedge_{r=1}^{n}\neg Q_r(t, y)\right)\right)\right). \tag{A$q_i$} \label{ax-unity-state}
\end{equation}
\item An axiom for any read/write symbol $s_j$ of $\mc{M}$
establishing the uniqueness of any symbol in a given instant of time and position:
\begin{equation}
\forall t \forall x \left(S_j(t, x) \to \bigwedge_{i \neq j} \neg S_i(t, x)\right).\tag{A$s_j$}\label{ax-unity-symbol}
\end{equation}
\end{itemize}
\begin{theorem}\label{theo-delta-exp-rep}
Let $\mc{M}$ be a TM and $\alpha$ the input for $\mc{M}$. Then, the
intrinsic theory $\Delta^{\star}_{FOL}(\mc{M}(\alpha))$ represents
the computation of $\mc{M}(\alpha)$.
\begin{proof}
Representation for the relation $<^{\mu}$ and for the function
$'^{\mu}$ is easy to prove. Representation for relations
$Q_i^{\mu}$ and $S_j^{\mu}$ follows from the proof in \cite[Chap.
10]{Boolos-Jeffrey-1989} and direct applications of the new
axioms in the intrinsic theory
$\Delta^{\star}_{FOL}(\mc{M}(\alpha))$.
\end{proof}
\end{theorem}
The definitions and theorems above consider only
DTMs (i.e., TMs with no pair of instructions sharing the same two initial symbols);
the next theorem establishes that the method of axiomatization
defined above, when applied to NDTMs, produces
contradictory theories (in some cases).
\begin{theorem}\label{theo-NDTM-cont}
Let $\mc{M}$ be a NDTM and $\alpha$ an input for $\mc{M}$. If
$\mc{M}(\alpha)$ reaches an ambiguous configuration (i.e. a configuration where multiple instructions can be executed), then its
intrinsic theory $\Delta^{\star}_{FOL}(\mc{M}(\alpha))$ is
contradictory.
\begin{proof}
By the proof in \cite[Chap. 10]{Boolos-Jeffrey-1989}, a formula expressing the ambiguous configuration is deduced. Then, by using the theorems corresponding to the possible instructions that can be executed in the ambiguous configuration, formulas expressing multiplicity of states, positions or symbols in some cell of the tape are deduced. Thus, by using axiom \eqref{ax-unity-state} or \eqref{ax-unity-symbol}, a contradiction is deduced.
\end{proof}
\end{theorem}
In \cite[p. 48]{Odifreddi-1989}, Odifreddi, in his definition of a
TM, establishes a condition of ``consistency'' for the machine
disallowing the existence of ``contradictory'' instructions (i.e.,
instructions with the same two initial symbols), which
corresponds to the notion of DTM. Thus, NDTMs are those that do not fulfill
the condition of consistency. Theorem
\ref{theo-NDTM-cont} shows that Odifreddi's idea of consistency in
TMs coincides with the consistency of the intrinsic theories
$\Delta^{\star}_{FOL}(\mc{M}(\alpha))$.
Note that contradictions in intrinsic theories
$\Delta^{\star}_{FOL}(\mc{M}(\alpha))$ arise by the multiple use
of axioms \eqref{ax-inst-i}, \eqref{ax-inst-ii} and
\eqref{ax-inst-iii} for the same instance of $t$ in combination
with the use of axioms \eqref{ax-unity-state} or
\eqref{ax-unity-symbol}. The multiple use of axioms
\eqref{ax-inst-i}, \eqref{ax-inst-ii} and \eqref{ax-inst-iii} for
the same instance of $t$ indicates the simultaneous execution of
multiple instructions, which can derive (at time $t + 1$)
multiplicity of symbols in the cell of the tape where the
instruction is executed, or multiplicity of states and positions,
while axioms \eqref{ax-unity-state} and \eqref{ax-unity-symbol}
establish the uniqueness of such elements. Intrinsic theories
$\Delta^{\star}_{FOL}(\mc{M}(\alpha))$ can be easily adapted to
deal with the idea that only one instruction is chosen to be
executed when the machine reaches an ambiguous configuration,
obtaining adequate theories for NDTM computations. However, this
is not our focus in this paper. We are interested in
generalizing the notion of TM by using a paraconsistent logic, as
this is a fecund way of approaching quantum computing from a
logical viewpoint. | 1,994 | 28,884 | en |
\section{Paraconsistent TMs}\label{ptms}
There are many paraconsistent logics. They are proposed from different
philosophical perspectives but share the common feature of being
logics which support contradictions without falling into deductive
trivialization. Although in the definition of ParTMs we could, in
principle, depart from any first-order paraconsistent logic, we
will use the logic $LFI1^*$ (see
\cite{Carnielli-Marcos-deAmo-2000}), because it possesses an already
established proof-theory and first-order semantics, has properties
that allows natural interpretations of consequences of
$\Delta^{\star}_{LFI1^*}(\mc{M}(\alpha))$
theories\footnote{Intrinsic theories
$\Delta^{\star}_{LFI1^*}(\mc{M}(\alpha))$ are obtained
by substituting the underlying logic of
$\Delta^{\star}_{FOL}(\mc{M}(\alpha))$ theories by $LFI1^*$.} as
`paraconsistent computations', and also allows the addition of
conditions to control the execution of instructions involving
multiplicity of symbols and states.\footnote{It is worth
remarking that the choice of another paraconsistent logic, with
other features, can lead to different notions of ParTMs, as is
the case in Sec. \ref{sim-ent-states-rel-phases}.} $LFI1^*$ is the
first-order extension of $LFI1$, which is an LFI that extends
positive classical logic, defines connectives of
consistency $\circ$ and inconsistency $\bullet$,
and identifies inconsistency with contradiction by means of the
equivalence $\bullet A \leftrightarrow (A \wedge \neg A)$.
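These facts about $\bullet$ can be checked mechanically. The sketch below assumes the usual three-valued matrices for $LFI1$ (which is equivalent to the three-valued logic $J3$; cf. \cite{Carnielli-Marcos-deAmo-2000}): truth-values $0$, $1/2$ and $1$, with $1/2$ and $1$ designated, $\wedge$ as minimum, $\neg$ fixing $1/2$, and $a \to b$ equal to $b$ when $a$ is designated and to $1$ otherwise; these tables are our assumption here.

\begin{verbatim}
F, H, T = 0.0, 0.5, 1.0            # false, inconsistent ("both"), true
VALS = (F, H, T)

def designated(v): return v in (H, T)
def neg(a): return {T: F, H: H, F: T}[a]
def con(a, b): return min(a, b)
def imp(a, b): return b if designated(a) else T
def iff(a, b): return con(imp(a, b), imp(b, a))
def bullet(a): return T if a == H else F   # inconsistency connective

# bullet A <-> (A and not-A) is valid: designated for every value of A.
assert all(designated(iff(bullet(a), con(a, neg(a)))) for a in VALS)
# Paraconsistency: (A and not-A) -> B is invalid (take A = 1/2, B = 0).
assert not designated(imp(con(H, neg(H)), F))
print("LFI1 matrix checks passed")
\end{verbatim}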
For intrinsic theories $\Delta^{\star}_{LFI1^*}(\mc{M}(\alpha))$
the proof in \cite[Chap. 10]{Boolos-Jeffrey-1989} continues to
hold, because $LFI1^*$ is an extension of positive classical
logic. Thus, as described above, the use of multiple axioms
describing instructions for the same instance of $t$ indicates
simultaneous execution of the instructions, which gives place to
multiplicity of symbols in the cell and multiplicity of states
and positions. Such a multiplicity, in conjunction with axioms
\eqref{ax-unity-state} and \eqref{ax-unity-symbol}, entails
contradictions which are identified in $LFI1^*$ with
inconsistencies. Thus, inconsistency in $\Delta^{\star}_{LFI1^*}(\mc{M}(\alpha))$ theories
characterizes multiplicity.
By taking advantage of the robustness of $LFI1^*$ in the presence
of inconsistencies and their interpretation as multiplicity, we
can supply ParTMs with inconsistency conditions on the
two initial symbols of instructions in order to control the
process of computation. $q_i^{\bullet}$ will indicate that the
instruction will only be executed in configurations where the
machine is in multiple states or multiple positions, and
$s_j^{\bullet}$ will indicate that the instruction will only be
executed in cells with multiple symbols. These conditions
correspond to put the connective $\bullet$, respectively, in
front of the predicate $Q_i$ or $S_j$ in the antecedent of the axioms
related to the instructions. These apparently innocuous conditions are
essential for taking advantage of the parallelism provided by ParTMs and EParTMs.
As will be argued below, inconsistency conditions on the instructions seem to be
a more powerful mechanism than quantum interference, which is the instrument provided
by quantum computation for taking advantage of quantum parallelism.
Note that axioms \eqref{ax-inst-i}, \eqref{ax-inst-ii} and
\eqref{ax-inst-iii} not only express the action of instructions
but also specify the preservation of symbols unmodified by the
instructions. Thus, in ParTMs, we have to take into account that any
instruction is executed in a specific position on the tape,
carrying symbols from cells not modified by the instruction to the next
instant of time; this is done independently of the execution of other
instructions.
A ParTM is then defined as:
\begin{definition}
A \emph{ParTM} is a NDTM such that:
\begin{itemize}
\item When the machine reaches an ambiguous configuration it \emph{simultaneously} executes
all possible instructions, which can produce multiplicity of states, positions and symbols
in some cells of the tape;
\item Each instruction is executed in the position corresponding to the respective
state; symbols in cells unmodified by the instructions are carried to the next instant
of time;
\item \emph{Inconsistency} (or \emph{multiplicity}) conditions are allowed on the first
two symbols of the instructions (as described above);
\item The machine stops when there are no instructions to be executed; at this stage some
cells of the tape can contain multiple symbols, and any choice of them represents a result of
the computation.
\end{itemize}
\end{definition}
The next example illustrates how a ParTM performs computations:
\begin{example}\label{exam-ParTM}
Let $\mc{M}$ be a ParTM with instructions: $i_1: q_1 0 0 q_2$,
$i_2: q_1 0 1 q_2$, $i_3: q_2 0 R q_3$, $i_4: q_2 1 R q_3$, $i_5:
q_3 \emptyset 1 q_4$, $i_6: q_4 0 0 q_5$, $i_7: q_4 1 0 q_5$,
$i_8: q_4 1^{\bullet} * q_5$, $i_9: q_5 * 1 q_5$. Figure
\ref{fig-comp-ParTM} schematizes the computation of $\mc{M}$,
beginning in position $0$, state $q_1$ and reading the symbol $0$
(with symbol $\emptyset$ in all other cells of the tape).
Instructions to be executed in each instant of time $t$ are
written within parentheses (note that instruction $i_8$ is not
executed at time $t = 3$ because of the inconsistency condition
on the scanned symbol). $\mc{M}$ will
be useful in the paraconsistent solution of Deutsch's and
Deutsch-Jozsa problems (Sec. \ref{sim-D-DJ-prob}).
\begin{figure}
\caption{\scriptsize Example of computation in a ParTM}
\label{fig-comp-ParTM}
\end{figure}
\end{example} | 1,624 | 28,884 | en |
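The run in Figure \ref{fig-comp-ParTM} can be replayed with the following Python sketch; the encoding is our own (\texttt{\#} stands for the empty symbol $\emptyset$, and the boolean flags after a state and after a scanned symbol mark the respective $\bullet$ conditions):

\begin{verbatim}
BLANK = '#'

def partm_run(instrs, tape, state='q1', pos=0, max_steps=20):
    """Instructions are tuples (q_i, q_bullet, s_j, s_bullet, Op, q_l)."""
    states, positions = {state}, {pos}
    tape = {k: set(v) for k, v in tape.items()}
    for _ in range(max_steps):
        writes, new_states, new_positions, fired = {}, set(), set(), False
        for (qi, qmult, sj, smult, op, ql) in instrs:
            for p in positions:
                cell = tape.get(p, {BLANK})
                state_ok = qi in states and (
                    not qmult or len(states) > 1 or len(positions) > 1)
                symbol_ok = sj in cell and (not smult or len(cell) > 1)
                if state_ok and symbol_ok:
                    fired = True
                    new_states.add(ql)
                    if op == 'R':
                        new_positions.add(p + 1)
                    elif op == 'L':
                        new_positions.add(p - 1)
                    else:                    # write op on cell p
                        new_positions.add(p)
                        writes.setdefault(p, set()).add(op)
        if not fired:
            return states, positions, tape   # the machine stops
        tape.update(writes)                  # written cells are replaced,
        states, positions = new_states, new_positions  # others carry over
    return states, positions, tape

# Instructions i_1 .. i_9 above (f is the constant function 1):
I = [('q1', 0, '0', 0, '0', 'q2'), ('q1', 0, '0', 0, '1', 'q2'),
     ('q2', 0, '0', 0, 'R', 'q3'), ('q2', 0, '1', 0, 'R', 'q3'),
     ('q3', 0, BLANK, 0, '1', 'q4'),
     ('q4', 0, '0', 0, '0', 'q5'), ('q4', 0, '1', 0, '0', 'q5'),
     ('q4', 0, '1', 1, '*', 'q5'), ('q5', 0, '*', 0, '1', 'q5')]
print(partm_run(I, {0: {'0'}}))
# ({'q5'}, {1}, {0: {'0', '1'}, 1: {'0'}})
\end{verbatim}

Note how the $\bullet$ condition of $i_8$ blocks its execution at time $t = 3$: cell $1$ then carries the single symbol $1$, so the machine ends with a single $0$ on cell $1$, signaling that $f$ is constant.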
\subsection{Simulating Quantum Computation through Paraconsistent TMs}\label{sim-qc-ptms}
In the Neumann-Dirac formulation, quantum mechanics is
synthesized in four postulates (cf. \cite[Sec.
2.2]{Nielsen-Chuang-2000}): The first postulate establishes that
states of isolated physical systems are represented by unit
vectors in a Hilbert space (known as the \emph{space state} of
the system); the second postulate claims that the evolution of closed
quantum systems is described by unitary transformations in the
Hilbert space; the third postulate deals with observations of
physical properties of the system by relating physical properties
with Hermitian operators (called \emph{observables}) and
establishing that, when a measurement is performed, an eigenvalue
of the observable is obtained (with a certain probability
depending upon the state of the system) and the system collapses to
its respective eigenstate; finally, the fourth postulate
establishes the tensor product of the state spaces of the component
systems as being the state space of the compound system,
allowing us to represent the state of a compound system as the tensor
product of the state of its subsystems (when the states of the
subsystems are known).
The best known models of quantum computation, namely \emph{quantum
Turing machines} (QTMs) and \emph{quantum circuits} (QCs), are
direct generalizations of TMs and boolean
circuits, respectively, using the laws of quantum mechanics.
By taking into account the postulates of quantum mechanics briefly
described above, a QTM (introduced in \cite{Deutsch-1985}) is
defined by considering elements of a TM (state, position and
symbols on the tape) as being observables of a quantum
system. The configuration of a QTM is thus represented by a unit vector in a Hilbert
space and the evolution is described by a unitary operator (with
some restrictions in order to satisfy the requirement that the machine
operates by `finite means', see \cite[p. 7]{Deutsch-1985} and
\cite{Ozawa-Nishimura-2000}). Since the configuration of a QTM
is described by a unit vector, it is generally a linear
superposition of basis states (called a \emph{superposed
state}), where the basis states represent classical TM
configurations. In quantum mechanics, superposed states can be
interpreted as the coexistence of multiple states, thus a QTM
configuration can be interpreted as the simultaneous existence of
multiple classical TM configurations. The linearity of the
operator describing the evolution of a QTM allows us to think of
the parallel execution of the instructions over the different states
(possibly an exponential number) present in the superposition.
Unfortunately, to know the result of the computation, we have to
perform a measurement of the system and, by the third postulate
of quantum mechanics, we can obtain only one classical TM
configuration in a probabilistic way, in effect, irredeemably losing all
other configurations. The art of `quantum programming' consists
in taking advantage of the intrinsic parallelism of the model,
by using quantum interference\footnote{Quantum interference is
expressed by the addition of amplitudes corresponding to equal basis states in a superposed state.
When signs of amplitudes are the same, their sum obtains a greater amplitude; in this case, we say that the interference is \emph{constructive}.
Otherwise, amplitudes subtract and we say that the interference is \emph{destructive}. Quantum interference occurs in the evolution of one quantum state to another.} to increase amplitudes of desired states
before the measurement of the system, with the aim to solve problems more
efficiently than in the classical case.
As shown in \cite{Ozawa-Nishimura-2000}, the evolution of a QTM
can be equivalently specified by a \emph{local transition
function}\footnote{Some changes were made in the definition of
$\delta$ to deal with the quadruple notation for instructions we
are using here.} $\fp{\delta} Q \times \Sigma \times (\Sigma \cup
\{R, L\}) \times Q \to \mbb{C}$, which satisfies some conditions related
to the unitarity of the evolution of state vectors and the reversibility of operators. In
this definition, the transition $\delta(q_i, s_j, Op, q_l) = c$
can be interpreted as the following action of the QTM: If the
machine is in state $q_i$ reading the symbol $s_j$, it follows
with the probability amplitude $c$ that the machine will perform the
operation $Op$ (which can be either to write a symbol or to move
on the tape) and reaches the state $q_l$. The amplitude $c$ cannot
be interpreted as the probability of performing the respective
transition, as with probabilistic TMs. Indeed, QTMs do not choose
only one transition to be executed; they can perform multiple
transitions simultaneously in a single instant of time in
a superposed configuration. Moreover, configurations resulting from different
transitions can interfere constructively or destructively (if they represent the same classical configuration), respectively increasing
or decreasing the amplitude of the configuration in the superposition.
By taking into account that each choice function on the elements of
a ParTM, in an instant of time $t$, gives a classical TM
configuration, a configuration of a ParTM can be viewed as a
\emph{uniform}\footnote{A superposed state is said to be \emph{uniform} if
all states in the superposition, with amplitude different from $0$,
have amplitudes with the same magnitude.}
superposition of classical TM configurations. This way, ParTMs
seem to be similar to QTMs: We could see ParTMs as QTMs without
amplitudes (which allows us only to represent uniform superpositions).
However, in ParTMs, actions performed by different instructions
mix indiscriminately, and thus all combinations of the singular
elements in a ParTM configuration are taken into account, which
makes it impossible to represent entangled states by only considering the
multiplicity of elements as superposed states (this point is
discussed in Sec. \ref{sim-ent-states-rel-phases}). Another difference
between the ParTMs and QTMs models is that `superposed states'
in the former model do not supply a notion of relative phase (corresponding to signs of basis states in uniform superpositions),
an important feature of quantum superpositions required for quantum interference.
Such a feature, as mentioned before, is the key mechanism for taking advantage of quantum parallelism.
However, inconsistency conditions on the instructions of ParTMs allows us to take
advantage of `paraconsistent parallelism', and this seems to be a
more powerful property than quantum interference (this point is fully discussed in
Sec. \ref{sim-ent-states-rel-phases}). In spite of the differences
between ParTMs and QTMs, ParTMs are able to simulate important
features of quantum computing; in particular, they can simulate uniform non-entangled superposed quantum states and solve the Deutsch and Deutsch-Jozsa problems preserving the efficiency of the quantum algorithms, but with certain restrictions (see Sec. \ref{sim-D-DJ-prob}). In Sec.
\ref{sim-ent-states-rel-phases}, we define another model of ParTMs, based on
a paraconsistent logic endowed with a `non-separable' conjunction, which
enables the simulation of uniform entangled states and represents a
better approach for the model of QTMs. We also show that a notion
of `relative phase' can be introduced in this new model of computation. | 1,809 | 28,884 | en |
\subsubsection{Paraconsistent Solutions for Deutsch and Deutsch-Jozsa Problems}\label{sim-D-DJ-prob}
Given an arbitrary function $\fp{f} \{0, 1\} \to \{0, 1\}$ and an
`oracle' (or black box) that computes $f$, Deutsch's problem
consists in defining a procedure to determine if $f$ is
\emph{constant} ($f(0) = f(1)$) or \emph{balanced} ($f(0) \neq
f(1)$) allowing only one query to the oracle. Classically, the procedure
seems to require two queries to the oracle in order to compute $f(0)$
and $f(1)$, plus a further step for the comparison; but by taking
advantage of the quantum laws the problem can be solved in a more efficient way,
by executing just a single query.
A probabilistic quantum solution to Deutsch's problem was first
proposed in \cite{Deutsch-1985} and a deterministic quantum
algorithm was given in
\cite{Cleve-Ekert-Macchiavello-Mosca-1998}. The deterministic
solution is usually formulated in the QCs formalism, so we
briefly describe this model of computation before presenting the
quantum algorithm.
The model of QCs (introduced in \cite{Deutsch-1989}) is defined
by generalizing the boolean circuit model in accordance with
the postulates of quantum mechanics: The classical unit of
information, the \emph{bit}, is generalized as the \emph{quantum
bit} (or \emph{qubit}), which is mathematically represented by a
unit vector in a two-dimensional Hilbert space; classical
logic gates are replaced by unitary operators; registers of
qubits are represented by tensor products and measurements
(following conditions of the third postulate above) are accomplished
at the end of the circuits in order to obtain the output of the
computation.\footnote{For a detailed introduction to QCs see
\cite{Nielsen-Chuang-2000}.} Under this definition, the QC
depicted in Figure \ref{fig-qc-Deutsch-problem} represents a
deterministic solution to Deutsch's problem.
\begin{figure}
\caption{\scriptsize QC to solve Deutsch's problem}
\label{fig-qc-Deutsch-problem}
\end{figure}
In the figure, squares labeled by $H$ represent \emph{Hadamard}
gates. A Hadamard gate is a quantum gate which performs the
following transformations ($\ket{\cdot}$ representing a vector in
Dirac's notation):
\begin{align}
\fp{H} &\ket{0} \mapsto \frac{1}{\sqrt{2}} \left(\ket{0} + \ket{1}\right) \nonumber \\
&\ket{1} \mapsto \frac{1}{\sqrt{2}} \left(\ket{0} - \ket{1}\right).
\end{align}
The rectangle labeled by $U_f$ represents the quantum oracle that
performs the operation $U_f(\ket{x, y}) = \ket{x, y \oplus
f(x)}$, where $\ket{x, y}$ represents the tensor product
$\ket{x} \otimes \ket{y}$ and $\oplus$ represents the addition module
2. Vectors $\ket{\psi_i}$ are depicted to explain, step by step, the process
of computation:
\begin{enumerate}
\item At the beginning of the computation, the input register takes the value $\ket{\psi_0} = \ket{0, 1}$;
\item After performing the two first Hadamard gates, the following superposition
is obtained:
\begin{equation}
\ket{\psi_1} = H \ket{0} \otimes H \ket{1} = \frac{1}{2}\left((\ket{0} + \ket{1}) \otimes (\ket{0} - \ket{1})\right);\label{eq-state1-qc-dp}
\end{equation}
\item By applying the operation $U_f$, one obtains:
\begin{align}
\ket{\psi_2} &= U_f \left(\frac{1}{2}\left(\ket{0, 0} - \ket{0, 1} + \ket{1, 0} - \ket{1, 1}\right)\right) \label{eq-state2-qc-dp} \\
&= \frac{1}{2}\left(\ket{0, 0 \oplus f(0)} - \ket{0, 1 \oplus f(0)} + \ket{1, 0 \oplus f(1)} - \ket{1, 1 \oplus f(1)}\right) \nonumber \\
&= \frac{1}{2}\left((-1)^{f(0)} (\ket{0} \otimes (\ket{0} - \ket{1})) + (-1)^{f(1)} (\ket{1} \otimes (\ket{0} - \ket{1}))\right) \nonumber \\
&=
\begin{cases}
\pm \left(\frac{1}{\sqrt{2}}(\ket{0} + \ket{1})\right) \otimes \left(\frac{1}{\sqrt{2}}(\ket{0} - \ket{1})\right) \mbox{ if } f(0) = f(1),\\
\pm \left(\frac{1}{\sqrt{2}}(\ket{0} - \ket{1})\right) \otimes \left(\frac{1}{\sqrt{2}}(\ket{0} - \ket{1})\right) \mbox{ if } f(0) \neq f(1).
\end{cases} \nonumber
\end{align}
\item By applying the last Hadamard gate, one finally reaches:
\begin{equation}
\ket{\psi_3} =
\begin{cases}
\pm \ket{0} \otimes \left(\frac{1}{\sqrt{2}}(\ket{0} - \ket{1})\right) \mbox{ if } f(0) = f(1),\\
\pm \ket{1} \otimes \left(\frac{1}{\sqrt{2}}(\ket{0} - \ket{1})\right) \mbox{ if } f(0) \neq f(1).
\end{cases}\label{eq-state3-qc-dp}
\end{equation}
\end{enumerate}
After a measurement of the first qubit of the state $\ket{\psi_3}$
is accomplished (on the standard basis, cf. \cite{Nielsen-Chuang-2000}),
one obtains $0$ (with probability $1$) if $f$ is constant or $1$
(with probability $1$) if $f$ is balanced.
The first step of the above QC generates a superposed state
(Eq. \eqref{eq-state1-qc-dp}), which is taken into the next step
to compute the function $f$ in parallel (Eq.
\eqref{eq-state2-qc-dp}), generating a superposition in such a way that
the relative phase of $\ket{1}$ in the first qubit differs depending on if $f$ is constant or balanced.
By applying again a Hadamard gate on the first qubit, quantum interference acts leaving the first qubit in the
basis state $\ket{0}$ if $f$ is constant or in the basis state $\ket{1}$ if $f$ is balanced (Eq. \eqref{eq-state3-qc-dp}). Thus,
by performing a measurement of the first qubit, we determine
with certainty if $f$ is constant or balanced. Note that
$U_f$ is used only once in the computation.
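These calculations can be reproduced numerically. The following sketch is our own (it uses \texttt{numpy} and orders the basis by $\ket{x, y} \mapsto 2x + y$); it runs the circuit for the four functions $\fp{f} \{0, 1\} \to \{0, 1\}$ and recovers the verdicts above:

\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def U_f(f):
    """Oracle |x,y> -> |x, y xor f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    psi = np.kron(H, H) @ np.array([0.0, 1.0, 0.0, 0.0])  # |psi_1>
    psi = np.kron(H, I2) @ (U_f(f) @ psi)                  # |psi_3>
    p1 = psi[2] ** 2 + psi[3] ** 2    # P(first qubit measures 1)
    print('balanced' if np.isclose(p1, 1) else 'constant')
# prints: constant, constant, balanced, balanced
\end{verbatim}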
The ParTM in Example \ref{exam-ParTM} gives a `paraconsistent' simulation of
the quantum algorithm that solves Deutsch's problem, for the
particular case where $f$ is the constant function $1$.
Instructions $i_1$ and $i_2$, executed simultaneously at time $t
= 0$, simulate the generation of the superposed state.
Instructions $i_3$ to $i_5$ compute the constant function $1$
over the superposed state, performing in parallel the computation of
$f(0)$ and $f(1)$, and writing the results on position $1$ of the
tape. Instructions $i_6$ to $i_9$ check whether $f(0) = f(1)$, writing
$0$ on position $1$ of the tape if there is no multiplicity of
symbols on the cell (meaning that $f$ is constant) or writing $1$
in another case (meaning that $f$ is balanced). In the present case
$f(0) = f(1) = 1$, thus the execution of the instructions from $i_6$ to $i_9$
gives as result the writing of $0$ on position $1$ of the tape.
Consider a TM $\mc{M}'$ as a black box that computes a function
$\fp{f} \{0, 1\} \to \{0, 1\}$. We could substitute instructions
$i_3$ to $i_5$ in Example \ref{exam-ParTM} (adequately renumbering
instructions and states from $i_6$ to $i_9$ if necessary) with the
instructions from $\mc{M}'$ in order to determine if $f$ is constant or
balanced. In this way, we define a paraconsistent simulation of the
quantum algorithm that solves Deutsch's problem. In the simulation,
$\mc{M}'$ is the analog of $U_f$ and quantum parallelism is
mimicked by the parallelism provided by the multiplicity
allowed in the ParTMs.
Notwithstanding the parallelism provided
by ParTMs, this first paraconsistent model of computation has some peculiar
properties which could give rise to `anomalies' in the process of
computation. For instance, consider a TM $\mc{M}'$ with
instructions: $i_1 = q_1 0 0 q_2$, $i_2 = q_1 1 1 q_3$, $i_3 =
q_2 0 R q_4$, $i_4 = q_3 1 R q_4$, $i_5 = q_2 1 R q_5$, $i_6 =
q_4 \emptyset 1 q_4$, $i_7 = q_5 \emptyset 0 q_5$. If $\mc{M}'$
starts the computation on position $0$ and state $q_1$, with a
single symbol ($0$ or $1$) on position $0$ of the tape, and with
symbol $\emptyset$ on all other cells of the tape, then $\mc{M}'$
computes the constant function $1$. Nevertheless, if $\mc{M}'$
begins reading symbols $0$ and $1$ (both simultaneously on cell
$0$), then it produces $0$ and $1$ as its outputs, as if $\mc{M}'$
had computed a balanced function. This example clearly shows how different
paths of a computation can mix indiscriminately, producing paths of computation
not possible in the TM considered as an oracle (when viewed as a classical TM) and generating undesirable results.
Therefore, only TMs that exclusively perform their possible paths of computation
when executed on a superposed state can be considered as oracles.
TMs with this property will be called \emph{parallelizable}.
Note that this restriction is not a serious limitation to our paraconsistent
solution of Deutsch's problem, because parallelizable TMs that compute any
function $\fp{f} \{0, 1\} \to \{0, 1\}$ are easy to define.
The Deutsch-Jozsa problem, first presented in
\cite{Deutsch-Jozsa-1992}, is the generalization of Deutsch's
problem to functions $\fp{f} \{0, 1\}^n \to \{0, 1\}$, where $f$
is assumed to be either constant or balanced.\footnote{$f$ is balanced
if $\card{f^{-1}(0)} = \card{f^{-1}(1)}$, where $\card{A}$
represents the cardinal of the set $A$.} A quantum solution to the
Deutsch-Jozsa problem is a direct generalization of the quantum
solution to Deutsch's problem presented above: The input register
is now constituted of $n + 1$ qubits and takes the value
$\ket{0}^{\otimes n} \otimes \ket{1}$ (where
$\ket{\cdot}^{\otimes n}$ represents the tensor product of $n$
qubits $\ket{\cdot}$); new Hadamard gates are added to act on the
new qubits in the input register and also on the first $n$
outputs of $U_f$, and $U_f$ is now a black box acting on $n + 1$
qubits, performing the operation $U_f(\ket{x_1, \ldots, x_n, y})
= \ket{x_1, \ldots, x_n, y \oplus f(x_1, \ldots, x_n)}$. In this
case, when a measurement of the first $n$ qubits is accomplished
at the end of the computation, if all the values obtained are $0$,
then $f$ is constant (with probability $1$); in another case $f$ is
balanced (with probability $1$).\footnote{Calculations are not
presented here, for details see \cite{Nielsen-Chuang-2000}.}
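As in the single-qubit case, this can be checked numerically; the sketch below (ours, with the same conventions as before) returns the probability that the first $n$ qubits all measure $0$:

\begin{verbatim}
import numpy as np
from functools import reduce

def dj_prob_all_zeros(f, n):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros(2 ** (n + 1)); psi[1] = 1.0     # |0,...,0,1>
    psi = reduce(np.kron, [H] * (n + 1)) @ psi     # Hadamards everywhere
    U = np.zeros((2 ** (n + 1),) * 2)              # oracle U_f
    for x in range(2 ** n):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    psi = reduce(np.kron, [H] * n + [np.eye(2)]) @ (U @ psi)
    return psi[0] ** 2 + psi[1] ** 2               # P(first n qubits = 0)

print(round(dj_prob_all_zeros(lambda x: 1, 3), 6))      # 1.0: constant
print(round(dj_prob_all_zeros(lambda x: x & 1, 3), 6))  # 0.0: balanced
\end{verbatim}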
The paraconsistent solution to Deutsch's problem can be easily
generalized to solve the Deutsch-Jozsa problem as well: The input to
$\mc{M}$ must be a sequence of $n$ symbols $0$; instructions
$i_1$ and $i_2$ must be substituted by instructions $i_1 = q_1 0
0 q_2$, $i_2 = q_1 0 1 q_2$, $i_3 = q_1 0 R q_1$, $i_4 = q_1
\emptyset L q_3$, $i_5 = q_3 0 L q_3$, $i_6 = q_3 \emptyset R
q_4$, and the machine $\mc{M}'$ is now considered to be a parallelizable TM computing a
constant or balanced function $\fp{f} \{0, 1\}^n \to \{0, 1\}$.
Note that the paraconsistent solution to the Deutsch-Jozsa problem does not depend on
the assumption that the function is either constant or balanced; in fact, the solution can be
applied in order to distinguish between constant and non-constant functions.
This can lead to the erroneous conclusion that ParTMs are machines so powerful that all NP-problems could be solved in polynomial time: someone could mistakenly
think that, considering an oracle
evaluating propositional formulas in polynomial time, we could immediately define a ParTM solving SATISFIABILITY in polynomial time
(by defining, for instance, instructions to set values $0$ and $1$ to propositional variables, invoking the oracle to simultaneously evaluate
all possible values of the formula, and then using instructions with inconsistent conditions to establish whether any value $1$ was obtained).
However, this is not the case, because only parallelizable TMs can be taken as oracles.
As proven in Theorem \eqref{eq-comp-temp-partm-dtm}, ParTMs can be efficiently simulated
by DTMs. Then, if we had a ParTM solving SATISFIABILITY in polynomial time, this would lead to the surprising result that $P = NP$.
To avoid such a mistake, we have to take into account the restriction
of parallelizability imposed to oracles in our model: If we had a parallelizable TM to
evaluate propositional formulas, it would be easy to define a ParTM solving SATISFIABILIY in polynomial time and,
by Theorem \eqref{eq-comp-temp-partm-dtm}, we would conclude $P = NP$. This only
shows the difficulty (or impossibility) in defining a parallelizable TM able to evaluate propositional formulas. | 3,891 | 28,884 | en |
On the other hand, Grover's quantum search algorithm and its proven optimality (see \cite[Chap. 6]{Nielsen-Chuang-2000}) imply
the non-existence of a `naive' search-based method to determine whether a function is constant or not in a time
less than $O(\sqrt{2^n})$. This shows that, in order to take advantage of
parallelism, the inconsistency conditions on instructions featured by ParTMs are a
more powerful property than quantum interference. However, in the case of ParTMs, this feature does not allow us to
define more efficient algorithms than otherwise defined by classical means. The reason is that the different paths of computations may mix in this model, and consequently
we have to impose the parallelizability restriction on oracles.
In the EParTMs model defined in the next section, different paths of computation do not mix indiscriminately as in ParTMs. Thus, no restrictions on the oracles are necessary and, as shown in Theorem \eqref{EParTM-csat},
this new model of computation solves all NP-problems in polynomial time. This result shows that conditions of inconsistency on the instructions
are an efficient method for taking advantage of parallelism, and that this mechanism is more powerful than quantum interference.
\subsubsection{Simulating Entangled States and Relative Phases}\label{sim-ent-states-rel-phases}
In quantum theory, if we have $n$ physical systems with state
spaces $H_1, \ldots, H_n$ respectively, the system composed by
the $n$ systems has the state space $H_1 \otimes
\ldots \otimes H_n$ (in this case, $\otimes$ represents the
tensor product between state spaces) associated to it. Moreover, if we have
that the states of the $n$ component systems are respectively
$\ket{\psi_1}, \ldots, \ket{\psi_n}$, then the state of the
composed system is $\ket{\psi_1} \otimes \ldots \otimes
\ket{\psi_n}$. However, there are states in composed systems that
cannot be described as tensor products of the states of the
component systems; these states are known as \emph{entangled
states}. An example of a two-qubit entangled state is $\ket{\psi}$
= \frac{1}{\sqrt{2}} (\ket{00} + \ket{11})$.
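A handy criterion (standard, and stated here without proof): a pure state $a\ket{00} + b\ket{01} + c\ket{10} + d\ket{11}$ factors as a tensor product of one-qubit states iff $ad - bc = 0$. A quick sketch:

\begin{verbatim}
import numpy as np

def entangled(a, b, c, d):
    """True iff a|00> + b|01> + c|10> + d|11> is entangled."""
    return not np.isclose(a * d - b * c, 0)

print(entangled(1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)))  # True: Bell state
print(entangled(0.5, 0.5, 0.5, 0.5))                    # False: |+> (x) |+>
\end{verbatim}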
Entangled states enjoy the property that a measurement on one
component system affects the state of the other component systems,
even when the systems are spatially separated. In this way, singular (one-particle)
systems lose identity, because their states are only describable in
conjunction with other systems. Entanglement is one of the
more (if not the most) puzzling characteristics of quantum
mechanics, with no analogue in classical physics. Many quantum
computing researchers think that entangled states play a
definite role in the definition of efficient quantum algorithms,
but this is not a completely established fact; any elucidation
about this would be of great relevance. In this direction, we are going
to show how the concept of entanglement can be expressed in
logical terms, and we will define a new model of paraconsistent
TMs (EParTMs) in which uniform entangled states are well
represented.
As mentioned before, choice functions over the different
elements (state, position and symbol on the cells of the tape) of
a ParTM, in a given instant of time $t$, determine a classical TM
configuration. Then, a configuration of a ParTM can be viewed as
a uniform superposition of classical TM configurations where all
combinations of singular elements are taken into account.
Ignoring amplitudes, the tensor product of composed systems coincides with
all combinations of the basis states present (with non-zero amplitude) in the component systems.
For instance, if a system $S_1$ is in state $\ket{\psi_1} =
\ket{a_{i_1}} + \ldots + \ket{a_{i_n}}$ and a system $S_2$ is in
state $\ket{\psi_2} = \ket{b_{j_1}} + \ldots + \ket{b_{j_m}}$,
then the composed system of $S_1$ and $S_2$ is in state
$\ket{\psi_{1,2}} = \ket{a_{i_1} b_{j_1}} + \ldots +
\ket{a_{i_1} b_{j_m}} + \ldots + \ket{a_{i_n} b_{j_1}} + \ldots +
\ket{a_{i_n} b_{j_m}}$. This rule can be applied $n - 1$ times to
obtain the state of a system composed by $n$ subsystems.
Consequently, just by interpreting multiplicity of elements as superposed
states, ParTMs cannot represent entangled states, because all of
their configurations can be expressed as tensor products of
their singular elements. This is why we define the new model
of EParTMs, or ``entangled paraconsistent TMs'' (cf.
Definition~\ref{EParTM}).
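Ignoring amplitudes, the combination rule just described is a plain cartesian product, which the following sketch (ours) makes explicit:

\begin{verbatim}
from itertools import product

s1 = ['a1', 'a2']        # basis states present in subsystem S1
s2 = ['b1', 'b2']        # basis states present in subsystem S2
print([x + y for x, y in product(s1, s2)])
# ['a1b1', 'a1b2', 'a2b1', 'a2b2']: every ParTM configuration factors
# this way, so a state like a1b1 + a2b2 alone cannot be represented.
\end{verbatim}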
In a ParTM configuration all combinations of its singular elements
are taken into account in the execution of its instructions (and also in the reading of the results).
This is because the logic $LFI1^*$, used in
the definition of the model, validates the \emph{rule of
separation} (i.e. $\vdash_{\text{LFI1}^*} A \wedge B$ implies
$\vdash_{\text{LFI1}^*} A$ and $\vdash_{\text{LFI1}^*} B$) and the
\emph{rule of adjunction} (i.e. $\vdash_{\text{LFI1}^*} A$ and
$\vdash_{\text{LFI1}^*} B$ implies $\vdash_{\text{LFI1}^*} A
\wedge B$). Then, for instance, if
$\Delta^{\star}_{\text{LFI1}^*}(\mc{M}(n))\vdash Q_1(\overline{t},
\overline{x}) \wedge S_1(\overline{t}, \overline{x})$ and
$\Delta^{\star}_{\text{LFI1}^*}(\mc{M}(n))\vdash Q_2(\overline{t},
\overline{x}) \wedge S_2(\overline{t}, \overline{x})$, it is also
possible to deduce $\Delta^{\star}_{\text{LFI1}^*}(\mc{M}(n))\vdash
Q_1(\overline{t}, \overline{x}) \wedge S_2(\overline{t},
\overline{x})$ and $\Delta^{\star}_{\text{LFI1}^*}(\mc{M}(n))\vdash
Q_2(\overline{t}, \overline{x}) \wedge S_1(\overline{t},
\overline{x})$.
By the previous explanation, if we want to define a model of
paraconsistent TMs where configurations are not totally mixed,
we have to consider a paraconsistent logic in which the rule of
separation and the rule of adjunction are not both valid.
There exist non-adjunctive paraconsistent logics,\footnote{The
most famous of them is the \emph{discussive} (or
\emph{discursive}) logic $D2$, introduced by Stanis\l aw
Ja\'{s}kowski in \cite{Jaskowski-1948} and \cite{Jaskowski-1949},
with extensions to first order logic and with possible
applications in the axiomatization of quantum theory, cf.
\cite{daCosta-Doria-1995}.} but paraconsistent systems where the rule
of separation fails have never been proposed.
Moreover, despite the fact that non-adjunctive paraconsistent logics appear
to be an acceptable solution to avoid the phenomenon of complete mixing in ParTMs, the
notion of entanglement seems to be more closely related to the failure of
the rule of separation: indeed, an entangled state describes the
`conjunctive' state of a composed system, but not the state of
each single subsystem. Thus, in order to define a model of
paraconsistent TMs that better approaches the behavior of QTMs, we
first define a paraconsistent logic with a \emph{non-separable}
conjunction.
By following the ideas in \cite{Beziau-2002} (see also
\cite{Beziau-2005}), a paraconsistent negation $\neg_{\diamond}$ is defined within the
well-known modal system $S5$ (departing from classical negation $\neg$)
by $\neg_{\diamond} A \eqdef \diamond \neg A$
(some properties of this negation are presented in the aforementioned
papers). We now define a \emph{non-separable} conjunction $\wedge_{\diamond}$
within $S5$ by $A \wedge_{\diamond} B \eqdef \diamond (A \wedge B)$, where $\wedge$
is the classical conjunction. Some properties of this conjunction are
the following:
\begin{align}
&\vdash_{S5} A \wedge_{\diamond} B \mbox{ implies neither } \vdash_{S5} A \mbox{ nor } \vdash_{S5} B, \tag{$\wedge_{\diamond}1$}\label{ns-conj-prop-ns}\\
&\vdash_{S5} A \mbox{ and } \vdash_{S5} B \mbox{ implies } \vdash_{S5} A \wedge_{\diamond}B, \tag{$\wedge_{\diamond}2$}\label{ns-conj-prop-ad}\\
&\nvdash_{S5} \left(A \wedge_{\diamond}(B \wedge_{\diamond}C)\right) \leftrightarrow \left((A \wedge_{\diamond}B) \wedge_{\diamond}C\right), \tag{$\wedge_{\diamond}3$}\label{ns-conj-prop-nass}\\
&\vdash_{S5} (A \wedge_{\diamond}B) \leftrightarrow (B \wedge_{\diamond}A), \tag{$\wedge_{\diamond}4$}\label{ns-conj-prop-conm}\\
&\nvdash_{S5} \left((A \wedge_{\diamond}B) \wedge (C \wedge_{\diamond}D)\right) \rightarrow \left((A \wedge_{\diamond}D) \vee (C \wedge_{\diamond} B)\right),\tag{$\wedge_{\diamond}5$}\label{ns-conj-prop-comb}\\
&\vdash_{S5} (A_1 \wedge_{\diamond}(A_2 \wedge \ldots \wedge A_n)) \leftrightarrow \diamond (A_1 \wedge \ldots \wedge A_n). \tag{$\wedge_{\diamond}6$}\label{ns-conj-prop-mult-conj}
\end{align}
Property \eqref{ns-conj-prop-ns} reflects the non-separable
character of $\wedge_{\diamond}$, while \eqref{ns-conj-prop-ad} shows that
$\wedge_{\diamond}$ validates the rule of adjunction and
\eqref{ns-conj-prop-nass} establishes the non-associativity
of $\wedge_{\diamond}$. \eqref{ns-conj-prop-conm} shows that $\wedge_{\diamond}$ is
commutative, \eqref{ns-conj-prop-comb} is a consequence of
\eqref{ns-conj-prop-ns} related to the expression of entangled
states, and \eqref{ns-conj-prop-mult-conj} is a simple
application of the definition of $\wedge_{\diamond}$ which will be useful
below.
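The connection with entanglement can be read off directly from the
possible-worlds semantics: $A \wedge_{\diamond} B$ holds when $A \wedge B$ holds
\emph{jointly at some world}, and \eqref{ns-conj-prop-comb} fails precisely
because $A \wedge B$ may hold at one world and $C \wedge D$ at another, with no
world satisfying $A \wedge D$ or $C \wedge B$. This is the logical counterpart
of the two-qubit state $\frac{1}{\sqrt{2}} (\ket{00} + \ket{11})$, in which the
combinations $00$ and $11$ are possible while the recombinations $01$ and $10$
are not.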
A paraconsistent non-separable logic, which we will call $PNS5$, can be `extracted'
from the modal logic $S5$ (as much as done for negation in \cite{Beziau-2002})
by inductively defining a translation $\fp{*} ForPNS5 \to ForS5$
as:\footnote{Where $ForPNS5$ is the set of propositional formulas
generated over the signature $\sigma = \{\neg_{\diamond}, \wedge_{\diamond}, \vee,
\to\}$ (defined in the usual way) and $ForS5$ is the set of
formulas of $S5$.}
\begin{align*}
&A^* = A \mbox{ if $A$ is atomic},\\
&(\neg_{\diamond} A)^* = \diamond \neg (A)^*, \\
&(A \wedge_{\diamond}B)^* = \diamond (A^* \wedge B^*),\\
&(A \# B)^* = A^* \# B^* \mbox{ for $\# \in \{\vee, \to\}$};
\end{align*}
and by defining a consequence relation on the wffs of $PNS5$ as:
\begin{equation*}
\Gamma \vdash_{PNS5} A \mbox{ iff } \Gamma^* \vdash_{S5} A^*,
\end{equation*}
where $\Gamma$ represents a subset of $ForPNS5$ and $\Gamma^* =
\{B^* | B \in \Gamma \}$. This translation completely specifies
$PNS5$ as a sublogic of $S5$ with the desired properties.
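As an illustration of the translation, $(\neg_{\diamond}(p \wedge_{\diamond} q))^* =
\diamond \neg (p \wedge_{\diamond} q)^* = \diamond \neg \diamond (p \wedge q)$;
hence $\vdash_{PNS5} \neg_{\diamond}(p \wedge_{\diamond} q)$ if and only if
$\vdash_{S5} \diamond \neg \diamond (p \wedge q)$.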
In the spirit of the LFIs (see
\cite{Carnielli-Coniglio-Marcos-2007}), we can define a
connective $\bullet$ of `inconsistency' in $PNS5$ by $\bullet A \eqdef A
\wedge_{\diamond}\neg_{\diamond} A$ (which is equivalent to $\diamond A
\wedge \diamond \neg A$ in $S5$), a connective $\circ$ of `consistency'
by $\circ A \eqdef \neg_{\diamond} \bullet A$ (which is equivalent
to $\square \neg A \vee \square A$ in $S5$), a classical negation $\neg$
by $\neg A \eqdef \neg_{\diamond} A \wedge_{\diamond}\circ A$ (which is
equivalent to $\diamond \neg A \wedge (\square \neg A \vee
\square A)$ in $S5$, entailing $\neg A$) and a
classical conjunction by $A \wedge B \eqdef (A \wedge_{\diamond} B) \wedge_{\diamond} (\circ A \wedge_{\diamond} \circ B)$
(which is equivalent to $\diamond (A \wedge B) \wedge (\square (A \wedge B) \vee
\square (A \wedge \neg B) \vee \square (\neg A \wedge B) \vee
\square (\neg A \wedge \neg B))$ in $S5$, entailing $A \wedge B$). Consequently, the ``explosion principles''
$(A \wedge \neg_{\diamond} A \wedge \circ A) \rightarrow B$, $(A \wedge_{\diamond} (\neg_{\diamond} A \wedge_{\diamond} \circ A)) \rightarrow B$,
$((A \wedge_{\diamond} \neg_{\diamond} A) \wedge_{\diamond} \circ A) \rightarrow B$ and
$((A \wedge_{\diamond} \circ A) \wedge_{\diamond} \neg A) \rightarrow B$
are theorems of $PNS5$; in this way, $PNS5$ is a legitimate logic of formal inconsistency (cf. \cite{Carnielli-Coniglio-Marcos-2007}).
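To check the first of these equivalences, note that $(\bullet A)^* =
\diamond (A^* \wedge \diamond \neg A^*)$ and that, in $S5$, a formula of the form
$\diamond B$ has the same truth value at every world of a model; hence
$\diamond (A^* \wedge \diamond \neg A^*)$ is equivalent to $\diamond A^* \wedge
\diamond \neg A^*$, as claimed. The other equivalences can be verified in a
similar way.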
These definitions also allow us to fully embed classical
propositional logic into $PNS5$.
With the aim of using the logic $PNS5$ (instead of $LFI1^*$) in the definition of
EParTMs, we first need to extend $PNS5$ to first-order logic
with equality. This can be obtained by considering $S5Q^{=}$ (the first-order version of $S5$, with equality)
instead of $S5$ in the definition of the logic, and by adjusting the translation function $*$ to deal with quantifiers and equality.
However, for the sake of simplicity, we will consider just $S5Q^{=}$ in the definition of the model,
and we will regard the connectives $\neg_{\diamond}, \wedge_{\diamond}, \bullet$
and $\circ$ as defined within this logic. Then, we will substitute the underlying logic of the intrinsic theories
$\Delta^{\star}_{FOL}(\mc{M}(\alpha))$ by $S5Q^{=}$, and through the
Kripkean interpretation of the $\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha))$
theories, we will define what an EParTM is. Before that, we need
to identify which kind of negation ($\neg$ or $\neg_{\diamond}$) and
conjunction ($\wedge$ or $\wedge_{\diamond}$) is adequate in each axiom
of $\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha))$ (we will consider conjunction to be right-associative,
i.e., $A \wedge B \wedge C$ always means $A \wedge (B
\wedge C)$; this proviso is necessary because of the non-associativity of
$\wedge_{\diamond}$, cf. property \eqref{ns-conj-prop-nass}):
\begin{enumerate}
\item In axioms \eqref{existe-suc}-\eqref{antireflex-menorque}, negations and conjunctions are the classical ones;
\item in axioms \eqref{ax-inst-i}-\eqref{ax-inst-iii}, the conjunction in the antecedent is $\wedge_{\diamond}$; considering \eqref{ns-conj-prop-mult-conj}, only the first conjunction in the consequent is $\wedge_{\diamond}$ (the other conjunctions are classical), and the negation in \eqref{ax-inst-i} is classical;
\item in axioms \eqref{init-conf} and \eqref{ax-t-0}, negations and conjunctions are the classical ones;
\item in axiom \eqref{ax-t-halt}, only the conjunction in the antecedent is $\wedge_{\diamond}$, all other connectives are classical;
\item in axioms \eqref{ax-unity-state} and \eqref{ax-unity-symbol}, all conjunctions are classical, but the negations are $\neg_{\diamond}$ (except in $y \neq x$), and it is also necessary to add the connective $\diamond$ before the predicates $Q_i$ and $S_i$ in the antecedent of the axioms.
\end{enumerate}
We also need to define a notion of \emph{representation} for the
configurations of the TMs by worlds in a (possible-worlds) Kripkean structure:
\begin{definition}
Let $w$ be a world in a kripkean structure. If $Q_i(\overline{t}, \overline{x}), \ldots, S_{j_{-1}}(\overline{t}, -1), S_{j_{0}}(\overline{t}, 0), S_{j_{1}}(\overline{t}, 1), \ldots$ are valid predicates on $w$, we say that $w$ \emph{represents} a configuration for a TM $\mc{M}$ at time $\overline{t}$, and the configuration is given by the intended interpretation $I$ presented above.
\end{definition}
By considering the choices of connectives and the definition above,
worlds in the Kripkean interpretation of
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha))$ represent the parallel
computation of all possible computational paths of an NDTM
$\mc{M}$ for the input $\alpha$:
\begin{enumerate}
\item By axiom \eqref{init-conf}, there will be a world $w_{0}$ representing the initial configuration of $\mc{M}(\alpha)$;
\item by axioms \eqref{ax-inst-i}-\eqref{ax-inst-iii}, if $w_{t}$ represents a non-final configuration of $\mc{M}(\alpha)$ at time $t$, by any instruction $i_j$ to be executed at time $t$ (on such configuration), there will be a world $w_{t+1,j}$ representing a configuration of $\mc{M}(\alpha)$ at time $t+1$.
\end{enumerate}
Configurations represented by worlds for the same instant of time
$t$ can be considered \emph{superposed configurations}. In a
superposed configuration, a state on position $x$ and a symbol on
position $y$ are said to be \emph{entangled} if there exist $i, j, k, l$
($i \neq k$ and $j \neq l$) such that
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \vdash Q_i(\overline{t},
\overline{x}) \wedge_{\diamond}S_j(\overline{t}, \overline{y})$,
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \vdash Q_k(\overline{t},
\overline{x}) \wedge_{\diamond}S_l(\overline{t}, \overline{y})$ and
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \nvdash Q_i(\overline{t},
\overline{x}) \wedge_{\diamond}S_l(\overline{t}, \overline{y})$ or
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \nvdash Q_k(\overline{t},
\overline{x}) \wedge_{\diamond}S_j(\overline{t}, \overline{y})$. In a
similar way, the notion of entangled symbols on positions $x$ and $y$ can
also be defined.
In order to exemplify the definition above, consider the two-qubit entangled state
$\ket{\psi} = \frac{1}{\sqrt{2}} (\ket{00} + \ket{11})$. Suppose the first qubit of $\ket{\psi}$
represents the state on position $x$ of an EParTM $\mc{M}$ (value $\ket{0}$ representing state $q_1$ and value $\ket{1}$ representing state $q_2$),
and the second qubit of $\ket{\psi}$ represents the symbol on position $y$ of $\mc{M}$
(value $\ket{0}$ representing symbol $s_1$ and value $\ket{1}$ representing symbol $s_2$).
Regarding only the state in position $x$ and the symbol in position $y$ of $\mc{M}$, state $\ket{\psi}$ represents a configuration of $\mc{M}$,
at time instant $t$, in which only the combinations $q_1 s_1$ and $q_2 s_2$ are possible, all other combinations being impossible.
This is expressed in the theory $\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha))$ by $\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \vdash Q_1(\overline{t},
\overline{x}) \wedge_{\diamond}S_1(\overline{t}, \overline{y})$,
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \vdash Q_2(\overline{t},
\overline{x}) \wedge_{\diamond}S_2(\overline{t}, \overline{y})$,
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \nvdash Q_1(\overline{t},
\overline{x}) \wedge_{\diamond}S_2(\overline{t}, \overline{y})$ and
$\Delta^{\star}_{S5Q^{=}}(\mc{M}(\alpha)) \nvdash Q_2(\overline{t},
\overline{x}) \wedge_{\diamond}S_1(\overline{t}, \overline{y})$.
Taking into account the definition of the inconsistency
connective in $S5Q^{=}$, as in the model of ParTMs, we can
define conditions of inconsistency in the execution of
instructions in the EParTMs. In this case, by the definition of
the inconsistency connective in $S5Q^{=}$ and its Kripkean
interpretation, condition $q_i^\bullet$ will indicate that the
instruction will be executed only when at least two
configurations in the superposition differ in the current state
or position, while condition $s_j^\bullet$ will indicate that the
instruction will be executed only when at least two
configurations in the superposition differ in the symbol on the
position where the instruction can be executed.
An EParTM is then defined as:
\begin{definition}\label{EParTM}
An \emph{EParTM} is an NDTM such that:
\begin{itemize}
\item When the machine reaches an ambiguous configuration with $n$ possible instructions to be executed, the machine configuration \emph{splits} into $n$ copies, executing a different instruction in each copy; the set of the distinct configurations for an instant of time $t$ is called a \emph{superposed configuration};
\item \emph{Inconsistency} conditions are allowed on the first two symbols of instructions (as indicated above);
\item When there are no instructions to be executed (in any current configuration), the machine stops; at this stage the machine can be in a superposed configuration, each configuration in the superposition represents a result of the computation.
\end{itemize}
\end{definition}
Note that an EParTM performs, in parallel, all possible paths of
computation of an NDTM, and only such paths. This differs from the
previous model of ParTMs, where the combination of actions of
different instructions led to computational paths that are not possible
in the corresponding NDTM.
Following \cite{Bennett-1973}, it is possible to define a
reversible EParTM for any EParTM without inconsistency
conditions in instructions.\footnote{In the case of EParTMs, it
is only necessary to avoid overlapping in the ranges of
instructions; the parallel execution of all possible instructions
in an ambiguous configuration does not imply irreversibility.}
In this way, EParTMs almost coincide with QTMs without amplitudes;
EParTMs represent uniform superpositions with no
direct representation of the notion of relative phase,
but they do allow conditions of inconsistency on the instructions.
As mentioned before, the notion of relative phase is a key
ingredient in allowing interference between different paths
of computation in QTMs, which is essential in order to take advantage of quantum
parallelism in the efficient solution of problems; however, this method has theoretical restrictions
which preclude any definition of an efficient (polynomial-time) quantum algorithm solving an $NP$-complete problem
by a naive search-based method (see \cite[Chap. 6]{Nielsen-Chuang-2000}). On the other
hand, conditions of inconsistency on instructions provided by EParTMs
are an efficient mechanism to accomplish actions
depending on different paths of computation. In fact, Theorem \ref{EParTM-csat} proves that
all problems in $NP$ can be efficiently solved by EParTMs.
EParTMs represent an abstract model of computation, independent of any physical implementation.
However, if we think about the physical construction of EParTMs, quantum mechanics provides a way to implement the
simultaneous execution of different paths of computation, but does not provide
what seems to be a simple operation over the superpositions obtained by quantum parallelism: the execution
of instructions depending on differences between elements of the superposed states (which corresponds to
conditions of inconsistency on instructions of EParTMs). In this way, quantum mechanics does not supply a direct theoretical frame for the implementation
of EParTMs, but this definitely does not forbid the possibility of a physical implementation of EParTMs (perhaps conditions of inconsistency could be
implemented by a sophisticated quantum physical procedure, or by a new physical theory).
We could also modify the definition of EParTMs to capture more properties of QTMs. In this way, conditions of inconsistency in instructions could be avoided, and a notion of `relative phase' could be introduced in EParTMs. This could be achieved by extending $S5Q^=$ with a new connective of possibility. Thus, the possibility connective of $S5$ (now denoted by $\diamond_1$) would represent `positive configurations' and the other possibility connective ($\diamond_2$) would represent `negative configurations' (axioms establishing the behavior of $\diamond_2$ and its interrelation with the other connectives would need to be added; in particular, combinations of the connectives $\diamond_1$ and $\diamond_2$ would have to behave in a way analogous to combinations of the symbols $+$ and $-$). The connective $\diamond_2$ could be used to define a new paraconsistent negation as well as a new non-separable conjunction. Thus, by specifying which connectives would be used in each axiom, we could obtain a different definition of EParTMs. In this new definition, a concept of `interference' can be specified: equal configurations with the same possibility connective interfere constructively, while equal configurations with different possibility connectives interfere destructively. Although details are not given here, this construction shows once more how we can define computation models with distinct computational power just by substituting the logic underlying the theories $\Delta^{\star}_{FOL}(\mc{M}(\alpha))$. In this sense, computability can be seen as relative to logic. Alternatively, we can add a new element to the EParTMs: a sign indicating the `relative phase' of the configuration, and a new kind of instruction to change the relative phase.
train | 0.17.10 | \subsection{About the Power of ParTMs and EParTMs}\label{comp-power-partms-EParTMs}
In order to estimate the computational power of ParTMs and EParTMs,
we first define what the `deciding' of a language
(i.e. a set of strings of symbols $L \subset \Sigma^*$, where
$\Sigma$ is a set of symbols and $^*$ represents the Kleene
closure) means in these models of computation.
In the definition, we will consider multiple results in a computation as being possible
responses from which we have to randomly select only one.
We will also suppose that ParTMs and EParTMs have two distinguished
states: $q_y$ (the \emph{accepting state}) and $q_n$ (the
\emph{rejecting state}), and that all final states of the machine
(if it halts) are $q_y$ or $q_n$.
\begin{definition}
Let $\mc{M}$ be a ParTM (EParTM) and $x$ be a string of symbols in the input/output alphabet
of $\mc{M}$. We say that $\mc{M}$ \emph{accepts} $x$
with probability $\frac{m}{n}$ if $\mc{M}(x)$ halts in a
superposition of $n$ configurations and $m$ of them are in
state $q_y$; similarly, we say that $\mc{M}$ \emph{rejects} $x$
with probability $\frac{m}{n}$ if $\mc{M}(x)$ halts in a superposition
of $n$ configurations and $m$ of them are in state $q_n$.
Consequently, we say that $\mc{M}$ \emph{decides} a language $L$,
with error probability at most $1 - \frac{m}{n}$,
if for any string $x \in L$, $\mc{M}$ accepts $x$ with probability at least $\frac{m}{n}$,
and for any string $x \notin L$, $\mc{M}$ rejects $x$ with probability at least $\frac{m}{n}$.
\end{definition}
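For instance, if $\mc{M}(x)$ halts in a superposition of three configurations, two of which are in state $q_y$, then $\mc{M}$ accepts $x$ with probability $\frac{2}{3}$, within the error bound $\frac{1}{3}$ used in the definitions below.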
Bounded-error probabilistic time complexity classes are
defined for ParTMs and EParTMs as:
\begin{definition}
BParTM-PTIME (BEParTM-PTIME) is the class of
languages decided in \emph{polynomial time} by some ParTM (EParTM), with error probability at most $\frac{1}{3}$.
BParTM-EXPTIME (BEParTM-EXPTIME) is the class of
languages decided in \emph{exponential time} by some ParTM (EParTM), with error probability at most $\frac{1}{3}$.
\end{definition}
Space complexity classes can be defined in an analogous way, considering only the
largest space used for the different superposed configurations.
Now, we will prove that ParTMs are computationally equivalent to
DTMs, showing how to simulate the computation of ParTMs by DTMs
(Theorem \ref{eq-comp-partm-dtm}). As a consequence, we have
that the class of languages decided by both models of computation
is the same. It is obvious that computations performed by DTMs
can be computed also by ParTMs, because DTMs are particular cases
of ParTMs. What is surprising is that the simulation of ParTMs by
DTMs is performed with \emph{only} a polynomial slowdown in time
(Theorem \ref{eq-comp-temp-partm-dtm}) and a constant factor
overhead in space (direct consequence of the proof of Theorem
\ref{eq-comp-partm-dtm}). Theorems \ref{eq-comp-partm-dtm} and
\ref{eq-comp-temp-partm-dtm} are inspired by the simulation of
multi-tape TMs by one-tape TMs as presented in
\cite{Hopcroft-Motwani-Ullman-2001}, and show once more how
powerful the classical model of TMs is.
\begin{theorem}\label{eq-comp-partm-dtm}
Any ParTM can be simulated by a DTM.
\begin{proof}
Let $\mc{M}$ be a ParTM with $n$ states and $m$ input/output symbols. Define a DTM $\mc{M}'$ and suppose its tape is divided into $2n + m$ tracks. Symbols $1$ and $0$ on track $i$ ($1 \leq i \leq n$) and position $p$ of $\mc{M}'$ represent respectively that $q_i$ is or is not one of the states of $\mc{M}$ in position $p$. In a similar way, symbols $1$ and $0$ on track $j$ ($n + 1 \leq j \leq n + m$) and position $p$ of $\mc{M}'$ respectively represent the occurrence or non-occurrence of symbol $s_j$ on position $p$ of $\mc{M}$. Tracks $n + m + 1$ to $2n + m$ are used to calculate states resulting from the parallel execution of instructions in $\mc{M}$, and values on these tracks represent states of $\mc{M}$ in the same way as tracks $1$ to $n$. The symbol $\$$ is used on track $1$ of $\mc{M}'$ to delimit the area where $\mc{M}$ is in any state (i.e., where any symbol $1$ appears on some track $i$ associated with states of $\mc{M}$). To simulate a step of the computation of $\mc{M}$, $\mc{M}'$ scans the tape between the $\$$ delimiters four times. In the first scan (from left to right), $\mc{M}'$ simulates the parallel execution of instructions whose action is a movement to the right: in each position, $\mc{M}'$ writes values in tracks $n + m + 1$ to $2n + m$ in accordance with states `remembered' from the previous step and collects (in the state of the machine, depending on the content of tracks $1$ to $n + m$ and the right-movement instructions of $\mc{M}$) the states to be written in the next position of the tape; $\mc{M}'$ also moves the $\$$ delimiters if necessary. The second scan is similar to the first one, but in the opposite direction and simulating instructions of movement to the left, taking care in the writing of values so as not to delete values $1$ written in the previous scan. In the third scan (from left to right), $\mc{M}'$ simulates the parallel execution of instructions whose action is the modification of symbols on the tape: in each position, depending on the content of tracks $1$ to $n + m$ and in accordance with the writing instructions of $\mc{M}$, $\mc{M}'$ writes values on tracks $n + 1$ to $n + m$ (corresponding to symbols written by instructions of $\mc{M}$) and also on tracks $n + m + 1$ to $2n + m$ (corresponding to changes of states from the writing instructions of $\mc{M}$, taking care in the writing of values so as not to delete values $1$ written in the previous scans). Finally, $\mc{M}'$ performs a fourth scan (from right to left), copying values from tracks $n + m + 1$ to $2n + m$ onto tracks $1$ to $n$ and writing $0$ on tracks $n + m + 1$ to $2n + m$.
\end{proof}
\end{theorem}
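To make the behavior being simulated concrete, the following minimal sketch (in Python; our illustration only, not part of the original construction) implements one parallel step of a ParTM directly on the set-based representation that the tracks of $\mc{M}'$ serialize. The quadruple instruction format $(q, s, \mathit{action}, q')$, with the action being a head movement or a symbol to be written, follows the instructions used below (e.g., $i_{n+j}: q_y^\bullet s_j s_j q_y$); inconsistency conditions are omitted for simplicity:
\begin{verbatim}
from collections import defaultdict

def par_step(states, symbols, instructions):
    """One parallel ParTM step.

    states:  dict pos -> set of states present at pos
    symbols: dict pos -> set of symbols present at pos
    instructions: list of (q, s, action, q2); the action is
        'L' or 'R' (head movement) or a symbol to be written.
    """
    new_states = defaultdict(set)
    new_symbols = defaultdict(set)
    for pos, ss in symbols.items():
        new_symbols[pos] |= ss            # old symbols are kept (multiplicity)
    for pos, qs in states.items():
        for q in qs:
            enabled = [(a, q2) for (q1, s, a, q2) in instructions
                       if q1 == q and s in symbols.get(pos, set())]
            if not enabled:               # no applicable instruction here
                new_states[pos].add(q)
            for a, q2 in enabled:         # all enabled instructions fire
                if a == 'R':
                    new_states[pos + 1].add(q2)
                elif a == 'L':
                    new_states[pos - 1].add(q2)
                else:                     # write the symbol, keep the position
                    new_symbols[pos].add(a)
                    new_states[pos].add(q2)
    return dict(new_states), dict(new_symbols)
\end{verbatim}
For instance, starting from states $= \{0\colon \{q_1\}\}$ and symbols $= \{0\colon \{s_1, s_2\}\}$, instructions enabled for $(q_1, s_1)$ and for $(q_1, s_2)$ fire in the same step, accumulating states and symbols exactly as the multiplicities described above.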
\begin{theorem}\label{eq-comp-temp-partm-dtm}
The DTM of Theorem \ref{eq-comp-partm-dtm} simulates $n$ steps of the corresponding ParTM in time $O(n^2)$.
\begin{proof}
Let $\mc{M}$ be a ParTM and $\mc{M}'$ be the DTM described in the proof of Theorem \ref{eq-comp-partm-dtm} such that $\mc{M}'$ simulates the behavior of $\mc{M}$. After $n$ steps of computation, the leftmost state and the rightmost state of $\mc{M}$ cannot be separated by more than $2n$ cells; consequently, this also bounds the separation of the $\$$ delimiters in the first track of $\mc{M}'$. In any scan of $\mc{M}'$, in the simulation of a step of computation of $\mc{M}$, $\mc{M}'$ has to move between the $\$$ delimiters, and a writing operation can be performed in any position, thus any scan takes at most $4 n$ steps within the computation (ignoring steps due to scanning of the $\$$ delimiters and their possible relocation). Therefore, the simulation of the $n$-th step in the computation of $\mc{M}$ takes at most $16 n$ steps, i.e., time $O(n)$. Consequently, for the simulation of $n$ steps of $\mc{M}$, $\mc{M}'$ requires no more than $n$ times this amount, i.e., time $O(n^2)$.
\end{proof}
\end{theorem}
\begin{corollary}
The classes of languages decided by ParTMs and by DTMs are the same, and languages are decided with the same temporal and spatial complexity in both models.
\begin{proof}
Direct consequence of theorems \ref{eq-comp-partm-dtm} and \ref{eq-comp-temp-partm-dtm}; it is only necessary to add another scan between delimiters $\$$ at the end of the simulation to search for an accepting state, finalizing $\mc{M}'$ in its accepting state if symbol $1$ is found in the track corresponding to the accepting state of $\mc{M}$, or finalizing $\mc{M}'$ in its rejecting state if no symbol $1$ is found in the track corresponding to the accepting state of $\mc{M}$. Clearly, this additional scan takes at most a polynomial number of steps (thus preserving the temporal complexity) and does not use new space (thus preserving the spatial complexity).
\end{proof}
\end{corollary}
For EParTMs the situation is different: The class
of languages decided in both models continues to be the same
(DTMs can simulate all paths of computation of an EParTM, writing
different configurations in separate portions of the tape and
considering the different configurations in the simulation of
instructions with inconsistency conditions), but all $NP$-problems
can be \emph{deterministically} (with error probability $0$) computed
in polynomial time by EParTMs (a direct consequence of Theorem
\ref{EParTM-csat}, since the satisfiability of propositional formulas
in conjunctive normal form (CSAT) is $NP$-complete). Thus, the time complexities
of EParTMs and DTMs can be equal only if $P = NP$, which is widely
believed to be false.
\begin{theorem}\label{EParTM-csat}
CSAT is in BEParTM-PTIME.
\begin{proof}
It is not difficult to define an NDTM $\mc{M}$ deciding CSAT in polynomial time in which all computational paths have the same depth and finish in the same position of the tape. By considering $\mc{M}$ as an EParTM, all computational paths are performed in parallel, obtaining a superposition of configurations in which at least one of them is in state $q_y$ if the codified conjunctive normal form formula is satisfiable, or with all configurations in $q_n$ otherwise. Thus, by adding the instructions $i_{n+j}: q_y^\bullet s_j s_j q_y$ and $i_{n+m+j}: q_n^\bullet s_j s_j q_y$ to $\mc{M}$ (where $m$ is the number of input/output symbols of $\mc{M}$ and $1 \leq j \leq m$) we obtain acceptance or rejection with probability $1$.
\end{proof}
\end{theorem}
train | 0.17.11 | \section{Final Remarks}
In this paper, we generalize a method for axiomatizing Turing
machine computations, not only with foundational aims, but
also envisaging new models of computation obtained by logical handling (basically, through the
substitution of the underlying logic of the intrinsic theories of the computation),
showing a way in which logical representations can be used in the
construction of new concepts.
The new models of computation defined here use a sophisticated
logical language which permits us to express some important features of
quantum computing. The first model allows the simulation of superposed
states by means of multiplicity of elements in TMs, enabling
the simulation of some quantum algorithms but unable to
speed up classical computation. In order to overcome
this weakness, we define a second model which is able to represent entangled
states, in this way reaching an exponential speed-up in the solution of an
$NP$-complete problem. Both models are grounded in paraconsistent
logics (LFIs). In particular, the only element in the language
that cannot be directly simulated in quantum computing is the
``inconsistency operator'' of the second model. As this is a key component
in the efficiency of the whole model, an important
problem is to decide whether it can or cannot be characterized by
quantum means.
In spite of \emph{paraconsistent computational theory} being only an
emerging field of research, we believe that this logic
relativization of the notion of computation is really promising
in the search for efficient solutions to problems, particularly
helping in the understanding of the role of quantum
features and indeterminism in computation processes.
\end{document}
train | 0.18.0 | \begin{document}
\title{A Unified and Strengthened Framework for the Uncertainty Relation}
\author{Xiao Zheng}
\affiliation{
Key Laboratory of Micro-Nano Measurement-Manipulation and Physics (Ministry of Education), School of Physics and Nuclear Energy Engineering, Beihang University, Xueyuan Road No. 37, Beijing 100191, China
}
\author{Shao-Qiang Ma}
\affiliation{
Key Laboratory of Micro-Nano Measurement-Manipulation and Physics (Ministry of Education), School of Physics and Nuclear Energy Engineering, Beihang University, Xueyuan Road No. 37, Beijing 100191, China
}
\author{Guo-Feng Zhang}
\email{[email protected]}
\affiliation{
Key Laboratory of Micro-Nano Measurement-Manipulation and Physics (Ministry of Education), School of Physics and Nuclear Energy Engineering, Beihang University, Xueyuan Road No. 37, Beijing 100191, China
}
\author{Heng Fan}
\affiliation{
Beijing National Laboratory for Condensed Matter Physics, and Institute of Physics,
Chinese Academy of Sciences, Beijing 100190, China
}
\author{Wu-Ming Liu}
\affiliation{
Beijing National Laboratory for Condensed Matter Physics, and Institute of Physics,
Chinese Academy of Sciences, Beijing 100190, China
}
\date{\today}
\begin{abstract}
We provide a unified and strengthened framework for the product form and the sum form variance-based uncertainty relations by constructing a unified uncertainty relation. In the unified framework, we deduce that the uncertainties of incompatible observables are bounded not only by their own commutator, but also by quantities related to another operator. This operator can provide information that allows us to capture the uncertainty of the measurement result more accurately, and is thus named the information operator. The introduction of the information operator fixes the deficiencies in both the product form and the sum form uncertainty relations, and provides a more accurate description of the quantum uncertainty relation. The unified framework also proposes a new interpretation of the uncertainty relation for non-Hermitian operators; i.e., the ``observable'' second-order origin moments of non-Hermitian operators cannot be arbitrarily small at the same time when the operators are generalized-incompatible under the new definition of the generalized commutator.
\end{abstract}
\maketitle
Quantum uncertainty relations \cite{1,2,3}, expressing the impossibility of the joint sharp preparation of the incompatible observables \cite{4w,4}, are the most fundamental differences between quantum and classical mechanics \cite{5,6,7,28S}. The uncertainty relation has been widely used in the quantum information science, such as quantum non-cloning theorem \cite{7P,7H}, quantum cryptography \cite{CW,C7,8,9}, entanglement detection \cite{10,10C,24,11,12}, quantum spins squeezing \cite{13,14,15,16}, quantum metrology \cite{7V,17,18}, quantum synchronization \cite{18A,18F} and mixedness detection \cite{19,20}. In general, the improvement in uncertainty relations will greatly promote the development of quantum information science \cite{7V,10,20B,21P,21X}.
The variance-based uncertainty relations for two incompatible observables $A$ and $B$ can be divided into two forms: the product form ${\Delta A}^2{\Delta B}^2\geq LB_{p}$ \cite{2,3,4,28L} and the sum form ${\Delta A}^2+{\Delta B}^2\geq LB_{s}$ \cite{21,22,23,25}, where $LB_{p}$ and $LB_s$ represent the lower bounds of the two forms of uncertainty relations, and ${\Delta Q}^2$ is the variance of $Q$ \cite{25F}. The product form uncertainty relation cannot fully capture the concept of incompatible observables, because it can be trivial; i.e., the lower bound $LB_{p}$ can be null even for incompatible observables \cite{21,22,4JL,28}. This deficiency is referred to as the triviality problem of the product form uncertainty relation. In order to fix the triviality problem, Maccone and Pati deduced a sum form uncertainty relation with a nonzero lower bound for incompatible observables \cite{28}, showing that the triviality problem can be addressed by the sum form uncertainty relation. Since then, much effort has been made to investigate the uncertainty relation in the sum form \cite{10,21,26,27,H2,42}. However, most of the sum form uncertainty relations depend on a state orthogonal to the state of the system, and thus are difficult to apply in high-dimensional Hilbert spaces \cite{21}. There also exist uncertainty relations based on entropy \cite{5,6,7P,34} and skew information \cite{34S}, which may not suffer from the triviality problem, but they cannot capture the incompatibility in terms of experimentally measured error bars, namely variances \cite{28,28S}.
Here we focus only on the uncertainty relation based on the variance. Despite the significant progress on the variance-based uncertainty relation, previous work has mainly studied the product form and the sum form uncertainty relations separately. A natural question is raised: can the uncertainty relations in the two forms be integrated into a unified framework? If so, can the unified framework fix the deficiencies in the traditional uncertainty relations and provide a more accurate description of the quantum uncertainty relation? In other words, can the unified framework provide a stronger theoretical system for the quantum uncertainty relation?
In this Letter, we provide a unified framework for the product form and the sum form variance-based uncertainty relations by constructing a unified uncertainty relation. The unified framework shows that the uncertainties of the incompatible observables $A$ and $B$ are bounded not only by their commutator, but also by quantities related to another operator, named the information operator. Actually, the deficiencies in both the product form and the sum form uncertainty relations can be traced to not taking the information operator into consideration, and they can be completely fixed by its introduction. Furthermore, the uncertainty inequality becomes an uncertainty equality when a specific number of information operators is introduced, which means the uncertainty relation can be expressed exactly with the help of the information operators. Thus the unified framework provides a strengthened theoretical system for the uncertainty relation. Meanwhile, our uncertainty relation provides a new interpretation of the uncertainty relation for non-Hermitian operators; i.e., the ``observable'' second-order origin moments of non-Hermitian operators cannot be arbitrarily small at the same time when the operators are generalized-incompatible under the new definition of the generalized commutator. The new interpretation reveals some novel quantum properties that the traditional uncertainty relation cannot reveal.
\emph{Unified Uncertainty Relation.--- }The Schr\"{o}dinger uncertainty relation (SUR) is the earliest as well as the most widely used product form uncertainty relation \cite{3}:
\begin{align}
{\Delta A}^2{\Delta B}^2\geq\frac{1}{4}|\langle[A,B]\rangle|^2+\frac{1}{4}|\langle\{\check{A},\check{B}\}\rangle|^2\tag{1},
\end{align}
where $\langle Q\rangle$ represents the expected value of $Q$, $\check{Q}=Q-\langle Q\rangle$, $[A,B]=AB-BA$ and $\{\check{A},\check{B}\}=\check{A}\check{B}+\check{B}\check{A}$ represent the commutator and anti-commutator, respectively. One of the most famous sum form uncertainty relations, which fixes the triviality problem of the product form uncertainty relation, takes the form \cite{28}:
\begin{align}
{\Delta A}^2+{\Delta B}^2\geq|\langle\psi|A\pm iB|\psi^\perp\rangle|^2\pm i\langle[A,B]\rangle \tag{2},
\end{align}
where $|\psi^\bot\rangle$ is the state orthogonal to the state of the system $|\psi\rangle$.
Before constructing the unified uncertainty relation, we first consider the non-Hermitian extension of the commutator and anti-commutator. There exist two kinds of operators in quantum mechanics, Hermitian and non-Hermitian, but particular attention should be paid to the fact that many uncertainty relations are invalid for non-Hermitian operators \cite{37,38,39}. For instance, $|[\sigma_+,\sigma_-]|^2/4+|\{\check{\sigma}_+,\check{\sigma}_-\}|^2/4\geq{\Delta\sigma_+}^2{\Delta\sigma_-}^2$, where the non-Hermitian operator $\sigma_+(\sigma_-)$ is the raising (lowering) operator of the single-qubit system. That is to say, differently from the Hermitian case, the uncertainties of non-Hermitian operators are not lower-bounded by quantities related to the commutator. The essential reason for this phenomenon is that $i[\mathcal{A},\mathcal{B}]$ and $\{\mathcal{A},\mathcal{B}\}$ cannot be guaranteed to be Hermitian by the existing definition of the commutator and anti-commutator when the operator $\mathcal{A}$ or $\mathcal{B}$ is non-Hermitian. To fix this problem, we define the generalized commutator and anti-commutator as:
\begin{align}
[\mathcal{A},\mathcal{B}]_{\mathcal{G}}=\mathcal{A}^\dag\mathcal{B}-\mathcal{B}^\dag\mathcal{A}, \quad \{\mathcal{A},\mathcal{B}\}_{\mathcal{G}}=\mathcal{A}^\dag\mathcal{B}+\mathcal{B}^\dag\mathcal{A}\tag{3}.
\end{align}
The generalized commutator and anti-commutator will reduce to the normal ones when $\mathcal{A}$ and $\mathcal{B}$ are both Hermitian. We say that $\mathcal{A}$ and $\mathcal{B}$ are generalized-incompatible (generalized-anti-incompatible) with each other hereafter when $\langle[\mathcal{A},\mathcal{B}]_{\mathcal{G}}\rangle\neq0$ $(\langle\{\mathcal{A},\mathcal{B}\}_{\mathcal{G}}\rangle\neq0)$. Then, one can obtain a new uncertainty relation for both Hermitian and non-Hermitian operators (for more detail, please see the Unified Uncertainty Relation in the Supplemental Material \cite{35}):
\begin{align}
\langle\mathcal{A}^\dag\mathcal{A}\rangle\langle\mathcal{B}^\dag\mathcal{B}\rangle=\frac{|\langle [{\mathcal{A}},{\mathcal{B}}]_{\mathcal{G}}\rangle|^2}{4}+\frac{|\langle\{{\mathcal{A}},{\mathcal{B}}\}_{\mathcal{G}}\rangle|^2}{4}+\langle\mathcal{C}^\dag\mathcal{C}\rangle\langle\mathcal{B}^\dag\mathcal{B}\rangle \tag{4},
\end{align}
where the remainder $\langle\mathcal{C}^\dag\mathcal{C}\rangle\langle\mathcal{B}^\dag\mathcal{B}\rangle\geq0$ with $\mathcal{C}=\mathcal{A}-\langle\mathcal{B}^\dag\mathcal{A}\rangle\mathcal{B}/\langle\mathcal{B}^\dag\mathcal{B}\rangle$, and $\langle\mathcal{Q}^\dag\mathcal{Q}\rangle$ is the second-order origin moment of the operator $\mathcal{Q}$.
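Equality (4) holds identically for any state and any pair of operators, and can be verified directly. As a quick numerical sanity check, the following minimal sketch (our illustration only; the state and operators are random and purely hypothetical) confirms both the Hermiticity of $i[\mathcal{A},\mathcal{B}]_{\mathcal{G}}$ and $\{\mathcal{A},\mathcal{B}\}_{\mathcal{G}}$ and the equality (4) for generic non-Hermitian operators:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 4
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                    # random pure state
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Ad, Bd = A.conj().T, B.conj().T

comm, acomm = Ad @ B - Bd @ A, Ad @ B + Bd @ A
assert np.allclose((1j * comm).conj().T, 1j * comm)   # i[A,B]_G Hermitian
assert np.allclose(acomm.conj().T, acomm)             # {A,B}_G Hermitian

ev = lambda O: np.vdot(psi, O @ psi)          # expected value <psi|O|psi>
C = A - (ev(Bd @ A) / ev(Bd @ B)) * B         # the remainder operator
lhs = ev(Ad @ A) * ev(Bd @ B)
rhs = (abs(ev(comm))**2 + abs(ev(acomm))**2) / 4 \
      + ev(C.conj().T @ C) * ev(Bd @ B)
assert np.isclose(lhs, rhs)                   # equality (4)
\end{verbatim}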
In fact, the traditional interpretation of the uncertainty relation is invalid for non-Hermitian operators, because, as mentioned above, most of the uncertainty relations will be violated when applied to non-Hermitian operators. The uncertainty relation (4) provides a new interpretation of the uncertainty relation for non-Hermitian operators; i.e., the second-order origin moments $\langle\mathcal{A}^\dag\mathcal{A}\rangle$ and $\langle\mathcal{B}^\dag\mathcal{B}\rangle$ cannot be arbitrarily small at the same time when $\mathcal{A}$ and $\mathcal{B}$ are generalized-incompatible or generalized-anti-incompatible with each other. Remarkably, the operators $\mathcal{A}^\dag\mathcal{A}$, $\mathcal{B}^\dag\mathcal{B}$, $i[\mathcal{A},\mathcal{B}]_{\mathcal{G}}$, and $\{\mathcal{A},\mathcal{B}\}_{\mathcal{G}}$ are Hermitian even when $\mathcal{A}$ and $\mathcal{B}$ are non-Hermitian. That is to say, unlike the variance, the second-order origin moment is observable for both Hermitian and non-Hermitian operators. The new interpretation reveals some novel quantum properties that the traditional uncertainty relations cannot. For example, applying the new uncertainty relation (4) to the annihilation operators $a_1$ and $a_2$ of two continuous-variable subsystems, one can deduce that the product of the expected energies of the two subsystems, $\langle a_1^\dag a_1\rangle\langle a_2^\dag a_2\rangle$, is lower-bounded by $|\langle[a_1,a_2]_{\mathcal{G}}\rangle|^2/4+|\langle\{a_1,a_2\}_{\mathcal{G}}\rangle|^2/4$. In particular, the energies of the two subsystems cannot be arbitrarily small at the same time when the annihilation operators of the two systems are generalized-incompatible or generalized-anti-incompatible on the state of the system, which means that $\langle[a_1,a_2]_{\mathcal{G}}\rangle$ or $\langle\{a_1,a_2\}_{\mathcal{G}}\rangle$ does not equal or tend to zero.
The new uncertainty relation (4) expresses the quantum uncertainty relation in terms of the second-order origin moment, instead of the variance, but can unify the uncertainty relations based on the variance. Then, we demonstrate that some well-known uncertainty relations in either the sum form or the product form can be unified by the new uncertainty relation. Firstly, the new uncertainty relation turns into the product form uncertainty relation SUR, if we replace the operators $\mathcal{A}$ and $\mathcal{B}$ with the Hermitian operators $\check{A}=A-\langle A\rangle$ and $\check{B}=B-\langle B\rangle$. Secondly, assuming the system is in the pure state $|\psi\rangle$ and substituting the non-Hermitian operators $\mathcal{A}=\check{A}\pm i\check{B}$ and $\mathcal{B}=|\psi^\bot\rangle\langle\psi|$ into the uncertainty relation (4), one can obtain the sum form uncertainty relation (2). Here, the product form $\langle\mathcal{A}^\dag\mathcal{A}\rangle\langle\mathcal{B}^\dag\mathcal{B}\rangle=\Delta(A\pm iB)^{2}\Delta(|\psi^\bot\rangle\langle\psi|)^{2}=\Delta A^{2}+\Delta B^{2}\pm i\langle[A,B]\rangle$ turns into the sum form. That is to say, the product form uncertainty relation is the new uncertainty relation for Hermitian operators and the sum form uncertainty relation is actually the new uncertainty relation for non-Hermitian operators. The other uncertainty relations in the two forms \cite{26,42,27,41,37,38,23,28} can also be recovered from the uncertainty relation (4) in a similar way, and thus the equality (4) provides a unified uncertainty relation.
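As a quick check of the second substitution: for a pure state $|\psi\rangle$ one has $\mathcal{B}^\dag\mathcal{B}=|\psi\rangle\langle\psi^\bot|\psi^\bot\rangle\langle\psi|=|\psi\rangle\langle\psi|$, so $\langle\mathcal{B}^\dag\mathcal{B}\rangle=1$, while $\langle\mathcal{A}^\dag\mathcal{A}\rangle=\langle(\check{A}\mp i\check{B})(\check{A}\pm i\check{B})\rangle={\Delta A}^2+{\Delta B}^2\pm i\langle[A,B]\rangle$; substituting these into (4) and rearranging yields (2).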
The uncertainty relations in the two forms can be divided into several categories with respect to their purposes and applications, such as the uncertainty relations focused on the effect of the incompatibility of observables on the uncertainty \cite{26,42}, the uncertainty relations used to investigate the relation between the variance of the sum of the observables and the sum of the variances of the observables \cite{23,28}, and even the uncertainty relations for three or more observables \cite{27,41}. The unified uncertainty relation indicates that they can all be integrated into a unified framework. Besides, by the introduction of the information operator, the unified framework provides a strengthened theoretical system for the quantum uncertainty relation. That is to say, the unified framework can fix the deficiencies in the traditional uncertainty relations, and provides a more accurate description of the uncertainty relation. The corresponding discussion will be presented in the next section.
train | 0.18.1 | In fact, the traditional interpretation of the uncertainty relation is invalid for non-Hermitian operators, because, as mentioned above, most of the uncertainty relations will be violated when applied to non-Hermitian operators. The uncertainty relation (4) provides a new interpretation of the uncertainty relation for non-Hermitian operators; i.e., the second-order origin moments $\langle\mathcal{A}^\dag\mathcal{A}\rangle$ and $\langle\mathcal{B}^\dag\mathcal{B}\rangle$ cannot be arbitrarily small at the same time when $\mathcal{A}$ and $\mathcal{B}$ are generalized-incompatible or generalized-anti-incompatible with each other. Remarkably, the operators $\mathcal{A}^\dag\mathcal{A}$, $\mathcal{B}^\dag\mathcal{B}$, $i[\mathcal{A},\mathcal{B}]_{\mathcal{G}}$, and $\{\mathcal{A},\mathcal{B}\}_{\mathcal{G}}$ are Hermitian even when $\mathcal{A}$ and $\mathcal{B}$ are non-Hermitian. That is to say, different from the variance, the second-order origin moment is observable for both the Hermitian and non-Hermitian operators. The new interpretation reveals some novel quantum properties that the traditional uncertainty relations cannot do. Such as, applying the new uncertainty relation (4) to the annihilation operators $a_1$ and $a_2$ of two continuous variable subsystems, one can deduce that the product of the expected energy of two subsystems $\langle a_1^\dag a_1\rangle\langle a_2^\dag a_2\rangle$ is lower-bounded by $|\langle[a_1,a_2]_{\mathcal{G}}\rangle|^2/4+|\langle\{a_1,a_2\}_{\mathcal{G}}\rangle|^2/4$. Especially, the energy of two subsystems cannot be arbitrarily small at the same time, when the annihilation operators of the two systems are generalized-incompatible or generalized-anti-incompatible on the state of the system, which means $\langle[a_1,a_2]_{\mathcal{G}}\rangle$ or $\langle\{a_1,a_2\}_{\mathcal{G}}\rangle$ does not equal or tend to zero.
The new uncertainty relation (4) expresses the quantum uncertainty relation in terms of the second-order origin moment, instead of the variance, but can unify the uncertainty relations based on the variance. Then, we demonstrate that some well-known uncertainty relations in either the sum form or the product form can be unified by the new uncertainty relation. Firstly, the new uncertainty relation turns into the product form uncertainty relation SUR, if we replace the operators $\mathcal{A}$ and $\mathcal{B}$ with the Hermitian operators $\check{A}=A-\langle A\rangle$ and $\check{B}=B-\langle B\rangle$. Secondly, assuming the system is in the pure state $|\psi\rangle$ and substituting the non-Hermitian operators $\mathcal{A}=\check{A}\pm i\check{B}$ and $\mathcal{B}=|\psi^\bot\rangle\langle\psi|$ into the uncertainty relation (4), one can obtain the sum form uncertainty relation (2). Here, the product form $\langle\mathcal{A}^\dag\mathcal{A}\rangle\langle\mathcal{B}^\dag\mathcal{B}\rangle=\Delta(A\pm iB)^{2}\Delta(|\psi^\bot\rangle\langle\psi|)^{2}=\Delta A^{2}+\Delta B^{2}\pm i\langle[A,B]\rangle$ turns into the sum form. That is to say, the product form uncertainty relation is the new uncertainty relation for Hermitian operators and the sum form uncertainty relation is actually the new uncertainty relation for non-Hermitian operators. The other uncertainty relations in the two forms \cite{26,42,27,41,37,38,23,28} can also be recovered by the uncertainty relation (4) in the similar way, and thus the equality (4) provides a unified uncertainty relation.
The uncertainty relations in the two forms can be divided into several categories with respect to their purposes and applications, such as the uncertainty relations focused on the effect of the incompatibility of observables on the uncertainty \cite{26,42}, the uncertainty relations used to investigate the relation between the variance of the sum of the observables and the sum variances of the observables \cite{23,28}, and even the uncertainty relations for three and more observables \cite{27,41}. The unified uncertainty indicates that they can all be integrated into a unified framework. Besides, by the introduction of the information operator, the unified framework provides a strengthened theoretical system for the quantum uncertainty relation. That is to say, the unified framework can fix the deficiencies in the traditional uncertainty relations, and provides a more accurate description of the uncertainty relation. The corresponding discussion will be presented in the next section.
\begin{figure}
\caption{The spin-1 system is chosen as the platform to demonstrate the new uncertainty inequality (8). We take $A=J_x$, $B=J_z$, $\hbar=1$, and the state is parameterized by $\alpha$ as $\rho=\cos^2(\alpha)|1\rangle\langle1|+\sin^2(\alpha)|-1\rangle\langle-1|$, with $|\pm1\rangle$ and $|0\rangle$ being the eigenstates of $J_z$. The green dash-dotted line represents the lower bound of the SUR (denoted by $LB_{SUR}$).}
\end{figure}
\begin{figure}
\caption{Illustration to demonstrate the function of the information operator is presented. We take $\hbar=1$, and assume that the state of the spin-1 system is in the pure state $|\psi\rangle=\cos(\beta)|1\rangle+\sin(\beta)|-1\rangle$ with $|\pm1\rangle$ being the eigenstates of $J_z$. The operator set $\Theta=\{\mathcal{O}
\end{figure}
\emph{Information operator.---}Based on the initial spirit of Schr\"{o}dinger, the SUR can be derived as follows \cite{25V}. Assume $\mathcal{F}=\sum^N_{m=1}x_m\check{A}_m$, where $A_m$ stands for an arbitrary operator, $N$ is the number of the operators and $x_m\in C$ represents a random complex number. Using the non-negativity of the second-order origin moment of $\mathcal{F}$ \cite{25V}, namely $\langle \mathcal{F}^\dag \mathcal{F}\rangle\geq0$, one can obtain:
\begin{align}
&\mathbb{D}:\geq0\tag{5},
\end{align}
where $\mathbb{D}$ is the $N\times N$ dimension matrix with the elements $\mathbb{D}(m,n)=\langle\check{A}_m^\dag\check{A}_n\rangle$ and $\mathbb{D}:\geq0$ means that $\mathbb{D}$ is a positive semidefinite matrix. As for the positive semidefinite matrix $\mathbb{D}$, we have ${\rm Det}(\mathbb{D})\geq0$ with ${\rm Det}(\mathbb{D})$ being the determinant value of ${\mathbb{D}}$, and $X^\dag.\mathbb{D}.X\geq0$ with $X\in C^N$ being a random column vector. In fact, ${\rm Det}(\mathbb{D})\geq0$ turns into the product form uncertainty relation and $X^\dag.\mathbb{D}.X\geq0$ becomes the sum form uncertainty relation. For instance, taking $N=2$ and $X=\{1,\mp i\}^T$, one can obtain that ${\rm Det}(\mathbb{D})\geq0$ is the SUR and $X^\dag.\mathbb{D}.X\geq0$ is the sum form uncertainty relation $\Delta A^{2}+\Delta B^{2}=|\langle [A,B]\rangle|$. Thus, the SUR can be interpreted as the fundamental inequality $\langle \mathcal{F}^\dag \mathcal{F}\rangle\geq0$ or $\mathbb{D}:\geq0$, and so do the other uncertainty relations deduced in Refs. \cite{28LS,52,53,54,55}.
However, the quantum properties of the operator $\mathcal{F}$, in most cases, cannot be fully expressed by $\langle \mathcal{F}^\dag \mathcal{F}\rangle\geq0$, because the non-negativity of the second-order origin moment $\langle \mathcal{F}^\dag \mathcal{F}\rangle\geq0$ cannot provide any information of $\mathcal{F}$ in the quantum level. Considering an arbitrary operator $\mathcal{O}$ , based on the unified uncertainty relation (4), one has:
\begin{align}
\langle \mathcal{F}^\dag \mathcal{F}\rangle\geq\dfrac{|\langle i[{\mathcal{F}},{\mathcal{O}}]_{\mathcal{G}}\rangle|^2+|\langle\{{\mathcal{F}},{\mathcal{O}}\}_{\mathcal{G}}\rangle|^2} {4\langle\mathcal{O}^\dag\mathcal{O}\rangle} \tag{6}.
\end{align}
Especially, we have $|\langle i[{\mathcal{F}},{\mathcal{O}}]_{\mathcal{G}}\rangle|^2+|\langle\{{\mathcal{F}},{\mathcal{O}}\}_{\mathcal{G}}\rangle|^2/4\langle\mathcal{O}^\dag\mathcal{O}\rangle>0$ when the operator $\mathcal{O}$ is generalized-incompatible or generalized-anti-incompatible with $\mathcal{F}$. Obviously, the introduction of $\mathcal{O}$ provides a more accurate description for the second-order origin moment $\langle \mathcal{F}^\dag \mathcal{F}\rangle$. That is to say, the operator $\mathcal{O}$ can provide information for the second-order origin moment of $\mathcal{F}$ that $\langle \mathcal{F}^\dag \mathcal{F}\rangle\geq0$ cannot do, and thus we name $\mathcal{O}$ as the information operator. In order to investigate the quantum uncertainty relation more accurately, the information operator should be introduced. Using (6), we have:
\begin{align}
&\mathbb{D}:\geq \mathbb{V}\tag{7},
\end{align}
where $\mathbb{V}$ is the $N\times N$ dimension positive semidefinite matrix with the elements $\mathbb{V}(m,n)=\langle\check{A}_m^\dag\mathcal{O}\rangle\langle\mathcal{O}^\dag\check{A}_n\rangle/\langle\mathcal{O}^\dag\mathcal{O}\rangle$ and $\mathbb{D}:\geq \mathbb{V}$ means $\mathbb{D}-\mathbb{V}$ is a positive semidefinite matrix. Based on the properties of the positive semidefinite matrix, we can obtain a series of uncertainty relations for $N$ observables in both the product form and the sum form.
To demonstrate the importance of the information operator, we will investigate its function on fixing the deficiencies appearing in the traditional uncertainty relations. The triviality problem of the SUR occurs when the state of the system happens to be the eigenstate of $A$ or $B$. For instance, one has $|\langle[A,B]\rangle/2i|^2+|\langle\{\check{A},\check{B}\}\rangle/2|^2\equiv{\Delta A}^2{\Delta B}^2\equiv0$ in the finite-dimension Hilbert space when ${\Delta A}^2=0$ or ${\Delta B}^2=0$. Different from ${\Delta A}^2{\Delta B}^2$, the sum of the variances ${\Delta A}^2+{\Delta B}^2$ will never equal zero for incompatible observables even when the state of the system is the eigenstate of $A$ or $B$. Thus the sum form has the advantage in expressing the uncertainty relation. However, the lower bounds of the most sum form uncertainty relations depend on the state $|\psi^\perp\rangle$, making them difficult to apply to the high dimension Hilbert space \cite{21}. Based on the analysis in the previous section, the sum form uncertainty relation (2) can be written as
$\Delta\mathcal{A}^{2}\Delta\mathcal{B}^{2}\geq|\langle\psi|[\mathcal{\check{A}},\mathcal{\check{B}}]_{\mathcal{G}}|\psi\rangle|^{2}/4+|\langle\psi|\{\mathcal{\check{A}},\mathcal{\check{B}}\}_{\mathcal{G}}|\psi\rangle|^{2}/4$, where $\mathcal{A}=A\pm iB$ and $\mathcal{B}=|\psi^\bot\rangle\langle\psi|$, which means the uncertainty relation (2) is still a type of the SUR. Obviously, the state $|\psi\rangle$ will never be the eigenstate of $\mathcal{B}$ when we take $\mathcal{B}=|\psi^\bot\rangle\langle\psi|$, and therefore the triviality problem of the SUR can be remedied by (2) \cite{40}. However, it is due to the existence of $|\psi^\bot\rangle\langle\psi|$ that the uncertainty relation (2) cannot be applied to the high dimension system. Thus, the triviality problem of the product form uncertainty can be considered as the essential reason for the phenomenon that lots of sum form uncertainty relations are difficult to apply to the high dimension system.
In fact, the physical essence of the triviality problem can be described as that we cannot obtain any information of the uncertainty of $A(B)$ by the product form uncertainty relation, when the state of the system happens to be the eigenstate of $B(A)$. Thus, the information operator, which can provide the information for the uncertainty relation, can be used to fix this triviality problem. Here, two generalized-incompatible operators $\mathcal{R}$ and $\mathcal{S}$ will be introduced as the information operators. According to (4) and (6), the information operator $\mathcal{O}$ will not contain any effective information of $\mathcal{F}$ when $\langle\mathcal{O}^\dag\mathcal{O}\rangle=0$, and thus the information operator introduced to fix the triviality problem should satisfy $\langle\mathcal{O}^\dag\mathcal{O}\rangle\neq0$. Based on the unified uncertainty relation (4), the second-order origin moments of the generalized-incompatible operators $\mathcal{R}$ and $\mathcal{S}$ will never be zero at the same time, hence at least one of the two information operators can provide effective information to fix the triviality problem. The corresponding uncertainty relation is obtained as (please see the Information operator in the Supplemental Material \cite{35}):
\begin{align}
{\Delta A}^2+{\Delta B}^2\geq&\max_{\mathcal{O}\in\{\mathcal{R},\mathcal{S}\}}\{\frac{|\langle\mathcal{O}^\dag(\check{A}+e^{i\theta}\check{B})\rangle|^2}{\langle\mathcal{O}^\dag\mathcal{O}\rangle}\}-\langle\{\check{A},e^{i\theta}\check{B}\}_{\mathcal{G}}\rangle \tag{8},
\end{align}
where $\theta\in[0,2\pi]$ should be chosen to maximize the lower bound. The triviality problem can be completely fixed by the uncertainty relation (8) for almost any choice of the generalized-incompatible operators $\mathcal{R}$ and $\mathcal{S}$ : choose $\mathcal{R}$ and $\mathcal{S}$ that can avoid $\langle\check{A}\check{B}\rangle\equiv\langle\mathcal{R}^\dag\check{A}\rangle\equiv\langle\mathcal{R}^\dag\check{B}\rangle\equiv\langle\mathcal{S}^\dag\check{A}\rangle\equiv\langle\mathcal{S}^\dag\check{B}\rangle\equiv0$. Such a choice is always possible, as shown in Fig.1.
Due to the absence of $|\psi^\bot\rangle$, the uncertainty relation (8) can be well applied to the high dimension system. Meanwhile, the uncertainty relation (8) has a tighter lower bound than the uncertainty realtion depending on $|\psi^\bot\rangle$ by limiting the choice of the information operator, as shown in Fig.1. Furthermore, the inequality (8) will become an equality on the condition that $\mathcal{R}$ or $\mathcal{S}=\lambda_1 \check{A}+\lambda_2 \check{B} $ with ${|\lambda_1|}^2={|\lambda_2|}^2\neq0$ and $\lambda_1,\lambda_2\in C$. The condition is independent on the state $|\psi^\bot\rangle$, and thus can be easily satisfied even for the high dimension Hilbert space. Besides, the uncertainty relation (8) will reduce to the uncertainty relation (2) when taking $\mathcal{R}=|\psi^\bot\rangle\langle\psi|$ and ignoring the influence of the other information operator $\mathcal{S}$, which means that the uncertainty relation (2) can also be considered as taking $|\psi^\bot\rangle\langle\psi|$ as the information operator. | 3,900 | 10,403 | en |
train | 0.18.2 | To demonstrate the importance of the information operator, we will investigate its function on fixing the deficiencies appearing in the traditional uncertainty relations. The triviality problem of the SUR occurs when the state of the system happens to be the eigenstate of $A$ or $B$. For instance, one has $|\langle[A,B]\rangle/2i|^2+|\langle\{\check{A},\check{B}\}\rangle/2|^2\equiv{\Delta A}^2{\Delta B}^2\equiv0$ in the finite-dimension Hilbert space when ${\Delta A}^2=0$ or ${\Delta B}^2=0$. Different from ${\Delta A}^2{\Delta B}^2$, the sum of the variances ${\Delta A}^2+{\Delta B}^2$ will never equal zero for incompatible observables even when the state of the system is the eigenstate of $A$ or $B$. Thus the sum form has the advantage in expressing the uncertainty relation. However, the lower bounds of the most sum form uncertainty relations depend on the state $|\psi^\perp\rangle$, making them difficult to apply to the high dimension Hilbert space \cite{21}. Based on the analysis in the previous section, the sum form uncertainty relation (2) can be written as
$\Delta\mathcal{A}^{2}\Delta\mathcal{B}^{2}\geq|\langle\psi|[\mathcal{\check{A}},\mathcal{\check{B}}]_{\mathcal{G}}|\psi\rangle|^{2}/4+|\langle\psi|\{\mathcal{\check{A}},\mathcal{\check{B}}\}_{\mathcal{G}}|\psi\rangle|^{2}/4$, where $\mathcal{A}=A\pm iB$ and $\mathcal{B}=|\psi^\bot\rangle\langle\psi|$, which means the uncertainty relation (2) is still a type of the SUR. Obviously, the state $|\psi\rangle$ will never be the eigenstate of $\mathcal{B}$ when we take $\mathcal{B}=|\psi^\bot\rangle\langle\psi|$, and therefore the triviality problem of the SUR can be remedied by (2) \cite{40}. However, it is due to the existence of $|\psi^\bot\rangle\langle\psi|$ that the uncertainty relation (2) cannot be applied to the high dimension system. Thus, the triviality problem of the product form uncertainty can be considered as the essential reason for the phenomenon that lots of sum form uncertainty relations are difficult to apply to the high dimension system.
In fact, the physical essence of the triviality problem can be described as that we cannot obtain any information of the uncertainty of $A(B)$ by the product form uncertainty relation, when the state of the system happens to be the eigenstate of $B(A)$. Thus, the information operator, which can provide the information for the uncertainty relation, can be used to fix this triviality problem. Here, two generalized-incompatible operators $\mathcal{R}$ and $\mathcal{S}$ will be introduced as the information operators. According to (4) and (6), the information operator $\mathcal{O}$ will not contain any effective information of $\mathcal{F}$ when $\langle\mathcal{O}^\dag\mathcal{O}\rangle=0$, and thus the information operator introduced to fix the triviality problem should satisfy $\langle\mathcal{O}^\dag\mathcal{O}\rangle\neq0$. Based on the unified uncertainty relation (4), the second-order origin moments of the generalized-incompatible operators $\mathcal{R}$ and $\mathcal{S}$ will never be zero at the same time, hence at least one of the two information operators can provide effective information to fix the triviality problem. The corresponding uncertainty relation is obtained as (please see the Information operator in the Supplemental Material \cite{35}):
\begin{align}
{\Delta A}^2+{\Delta B}^2\geq&\max_{\mathcal{O}\in\{\mathcal{R},\mathcal{S}\}}\{\frac{|\langle\mathcal{O}^\dag(\check{A}+e^{i\theta}\check{B})\rangle|^2}{\langle\mathcal{O}^\dag\mathcal{O}\rangle}\}-\langle\{\check{A},e^{i\theta}\check{B}\}_{\mathcal{G}}\rangle \tag{8},
\end{align}
where $\theta\in[0,2\pi]$ should be chosen to maximize the lower bound. The triviality problem can be completely fixed by the uncertainty relation (8) for almost any choice of the generalized-incompatible operators $\mathcal{R}$ and $\mathcal{S}$ : choose $\mathcal{R}$ and $\mathcal{S}$ that can avoid $\langle\check{A}\check{B}\rangle\equiv\langle\mathcal{R}^\dag\check{A}\rangle\equiv\langle\mathcal{R}^\dag\check{B}\rangle\equiv\langle\mathcal{S}^\dag\check{A}\rangle\equiv\langle\mathcal{S}^\dag\check{B}\rangle\equiv0$. Such a choice is always possible, as shown in Fig.1.
Due to the absence of $|\psi^\bot\rangle$, the uncertainty relation (8) can be well applied to the high dimension system. Meanwhile, the uncertainty relation (8) has a tighter lower bound than the uncertainty realtion depending on $|\psi^\bot\rangle$ by limiting the choice of the information operator, as shown in Fig.1. Furthermore, the inequality (8) will become an equality on the condition that $\mathcal{R}$ or $\mathcal{S}=\lambda_1 \check{A}+\lambda_2 \check{B} $ with ${|\lambda_1|}^2={|\lambda_2|}^2\neq0$ and $\lambda_1,\lambda_2\in C$. The condition is independent on the state $|\psi^\bot\rangle$, and thus can be easily satisfied even for the high dimension Hilbert space. Besides, the uncertainty relation (8) will reduce to the uncertainty relation (2) when taking $\mathcal{R}=|\psi^\bot\rangle\langle\psi|$ and ignoring the influence of the other information operator $\mathcal{S}$, which means that the uncertainty relation (2) can also be considered as taking $|\psi^\bot\rangle\langle\psi|$ as the information operator.
The introduction of the information operator makes us express the uncertainty relation more accurately. Based on the unified uncertainty relation (4), we can obtain the following uncertainty equality (please see the Information operator in the Supplemental Material \cite{35}):
\begin{align}
&\mathbb{D}=\sum^r_{k=1}\mathbb{V}_k \tag{9},
\end{align}
where $\mathbb{V}_k$ is the $N\times N$ dimension positive semidefinite matrix with the elements $\mathbb{V}_k(m,n)=\langle\check{A}_m^\dag\mathcal{O}_k\rangle\langle\mathcal{O}_k^\dag\check{A}_n\rangle/\langle\mathcal{O}_k^\dag\mathcal{O}_k\rangle$,
$\mathcal{O}_k$ is the element of the operator set $\Theta=\{\mathcal{O}_1,\mathcal{O}_2,\cdots,\mathcal{O}_r\}$ in which the elements satisfy $\langle\mathcal{O}^\dag_i\mathcal{O}_j\rangle=\langle\mathcal{O}^\dag_i\mathcal{O}_j\rangle\delta_{ij}$ and $\langle\mathcal{O}^\dag_k\mathcal{O}_k\rangle\neq 0$ with $k,i,j\in\{1,2,\cdots,r\}$, and $r$ is the maximum number of the elements that the set $\Theta$ can hold. The set can be obtained by the Schmidt transformation (please see the Schmidt transformation process in the Supplemental Material \cite{35}) \cite{LM,NI}. The value of $r$ is equal to the rank of the Metric matrix corresponding to the bilinear operator function$\langle\mathcal{A}^\dagger\mathcal{B}\rangle$, and only depends on the state of the system. It is worth mentioning that $r$ is less than $d$ for the pure state and less than $d^2$ for the mixed state in the $d$-dimension system, and $r$ will tend to the infinity when considering the infinite-dimension system. The uncertainty equality indicates that the information of the uncertainties for incompatible observables can be captured accurately when $r$ information operators are introduced, as shown in Fig.2.
\emph{Discussion.--- }The variance-based uncertainty relations can be divided into the product form and the sum form. The product form uncertainty relation cannot fully capture the concept of the incompatible observables, and the problem is referred to as the triviality problem of the product form uncertainty relation. The triviality problem can be fixed by the sum form uncertainty relation, and thus lots of effort has been made to investigate the sum form uncertainty relation. However, most of the sum form uncertainty relations depend on the orthogonal state to the state of the system, and are difficult to apply to the high dimension Hilbert space.
We provide a unified uncertainty relation for the two forms uncertainty relations, and deduce that the essences of the product form and the sum form uncertainty relations are actually the unified uncertainty relation for Hermitian operators and non-Hermitian operators, respectively.
Thus, the unified uncertainty relation provides a unified framework for the two forms uncertainty relations.
In the unified framework, we deduce that the uncertainty relation for incompatible observables is bounded by not only the commutator of themselves, but also the quantities related with the other operator, which can provide information for the uncertainty and thus is named as the information operator. The deficiencies in the product form and the sum form uncertainty relations are actually identical in essence, and can be completely fixed by the introduction of the information operators. Furthermore, the uncertainty inequality will become an uncertainty equality when a specific number of information operators are introduced, which means the uncertainty relation can be expressed exactly with the help of the information operators. Thus, the unified framework provides a strengthened theoretical system for the uncertainty relation.
The unified framework also provides a new interpretation of the quantum uncertainty relation for the non-Hermitian operators, i.e., the ``observable" second-order origin moments of the non-Hermitian operators cannot be arbitrarily small at the same time when they are generalized-incompatible or generalized-anti-incompatible with each other. The new interpretation reveals some novel quantum properties that the traditional uncertainty relation cannot do
This work is supported by the National Natural Science Foundation of China (Grant Nos.11574022, 61227902, 11774406, 11434015, 61835013), MOST of China (Nos. 2016YFA0302104, 2016YFA0300600), the Chinese Academy of Sciences Central of Excellence in Topological Quantum Computation (XDPB-0803), the National Key R\&D Program of China under grants Nos. 2016YFA0301500, SPRPCAS under grants No. XDB01020300, XDB21030300.
X. Z. and S. Q. M. contributed equally to this work.
\end{document} | 2,565 | 10,403 | en |
train | 0.19.0 | \begin{document}
\title{Tight bounds from multiple-observable entropic uncertainty relations}
\author{Alberto Riccardi}
\affiliation{INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100 Pavia, Italy}
\author{Giovanni Chesi}
\affiliation{INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100 Pavia, Italy}
\author{Chiara Macchiavello}
\affiliation{Dipartimento di Fisica, Universit\`{a} degli Studi di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy \\
INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy}
\author{Lorenzo Maccone}
\affiliation{Dipartimento di Fisica, Universit\`{a} degli Studi di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy \\
INFN Sezione di Pavia, Via Agostino Bassi 6, I-27100, Pavia, Italy}
\begin{abstract}
We investigate the additivity properties for both bipartite and multipartite
systems by using entropic uncertainty relations (EUR) defined in terms
of the joint Shannon entropy of probabilities of local measurement outcomes and we apply them to entanglement detection.
In particular, we introduce state-independent and state-dependent
entropic inequalities whose violation certifies the presence of quantum
correlations. We show that the additivity of EUR holds only for EUR
that involve two observables, while inequalities that consider more
than two observables or the addition of the Von Neumann entropy of
a subsystem enable to detect quantum correlations. Furthermore, we
study their detection power for bipartite systems and for several
classes of states of a three-qubit system.
\end{abstract}
\maketitle
Entropic uncertainty relations (EUR) are inequalities that express
preparation uncertainty relations (UR) as sums of Shannon entropies
of probability distributions of measurement outcomes. First introduced
for continuous variables systems \cite{EUR1,EUR2,EUR3,EUR4}, they
were then generalized for pair of observables with discrete spectra
\cite{EUR5,EUR6,EUR7,EUR9,EUR10} (see \cite{EUR8} for a review of
the topic). Conversely to the most known UR defined for product of
variances \cite{Heis1,Robertson1}, which are usually state-dependent,
EUR provide lower bounds, which quantify the knowledge trade-off between
the different observables, that are state-independent.
Variance-based UR for the sum of variances \cite{SVar4} in some cases
also provide state-independent bounds \cite{SVar1,SVar2,SVar3,SVar5,SVar6}.
EUR, due to their simple structure, allow to consider UR for more
than two observables in a natural way by simply adding more entropies,
a task that is not straightforward for UR based on the product of variances. However,
tight bounds for multiple-observable EUR are known only for small
dimensions and for restricted sets of observables, typically for complementary
observables \cite{MultiAzarchs,MultiBallesterWehner,MultiIvanovic,MultiSanchez,TightAR,MultiCHina,MultiMolner},
namely the ones that have mutually unbiased bases as eigenbases, and
for angular momentum observables \cite{URAngular-1,TightAR}. \\
Besides their importance from a fundamental point of view as preparation
uncertainty relations, EUR have recently been used to investigate
the nature of correlations in composite quantum systems, providing
criteria that enable to detect the presence of different types of
quantum correlations, both for bipartite and multipartite systems.
Entanglement criteria based on EUR were defined in \cite{EntGuh,Ent1,Ent2,Ent3,Huang},
while steering inequalities in \cite{Steering1,Steering2,Steering3,Steering4,Steering5,SteeringAR}.
\\
Almost all of these criteria are based on EUR for conditional Shannon
entropies, where one tries to exploit, in the presence of correlations,
side information about some subsystems to reduce global uncertainties,
while only partial results for joint Shannon entropies are known \cite{EntGuh,Ent4}.
Moreover, it has been recently proven in \cite{Additivity} that if
one considers EUR defined for the joint Shannon entropy and only pairs
of observables, then it is not possible to distinguish between separable and
entangled states since in this case additivity holds. \\
In this paper we show that if we consider EUR for more than two observables
the additivity of EUR does no longer hold. This result implies that it is possible
to define criteria that certify the presence of entanglement by using
the joint Shannon entropy for both the bipartite and the multipartite
case. We investigate which criteria can be derived from EUR based on the joint Shannon entropy and their performance. We then provide some examples of entangled states that violate our criteria.
This paper is organized as follows: in Section I we briefly review
some concepts of single system EUR, in particular we discuss the case
of multiple observables. In Section II we establish the entanglement
criteria for bipartite systems and in Section III we address the problem
in the multipartite scenario. Finally, in Section IV we consider some
examples of entangled states that are detected by these criteria, in
particular we focus on the multi-qubit case.
\section{Entropic uncertainty relations Review}
The paradygmatic example of EUR for observables with a discrete
non-degenerate spectrum is due to Maassen and Uffink \cite{EUR7},
and it states that for any two observables $A_{1}$ and $A_{2}$,
defined on a $d$-dimensional system, the following inequality holds:
\begin{equation}
H(A_{1})+H(A_{2})\geqslant-2\log_{2}c=q_{MU},\label{MUF}
\end{equation}
where $H(A_{1})$ and $H(A_{2})$ are the Shannon entropies of the
measurement outcomes of two observables $A_{1}=\sum_{j}a_{j}^{1}\ket{a_{j}^{1}}\bra{a_{j}^{1}}$
and $A_{2}=\sum_{j}a_{j}^{2}\ket{a_{j}^{2}}\bra{a_{j}^{2}}$, namely
$H(A_{I})=-\sum_{j}p(a_{j}^{I})\log p(a_{j}^{I})$ being $p(a_{j}^{I})$
the probability of obtaining the outcome $a_{J}^{I}$ of $A_{I}$,
and $c=\max_{j,k}\left|\braket{a_{j}^{1}|a_{k}^{2}}\right|$ is the
maximum overlap between their eigenstates. The bound (\ref{MUF})
is known to be tight if $A_{1}$ and $A_{2}$ are complementary observables.
We remind that two observables $A_{1}$ and $A_{2}$ are said to be
complementary iff their eigenbases are mutually unbiased, namely iff
$\left|\braket{a_{j}^{1}|a_{k}^{2}}\right|=\frac{1}{\sqrt{d}}$ for
all eigenstates, where $d$ is the dimension of the system (see \cite{MUBs}
for a review on MUBs). In this case $q_{MU}=\log_{2}d$, hence we
have:
\begin{equation}
H(A_{1})+H(A_{2})\geqslant\log_{2}d.
\end{equation}
The above relation has a clear interpretation as UR:
let us suppose that $H(A_{1})=0$, which means that the state of the
system is an eigenstate of $A_{1}$, then the other entropy $H(A_{2})$
must be maximal, hence if we have a perfect knowledge of one observable
the other must be completely undetermined. For arbitrary observables
stronger bounds, that involve the second largest term in $\left|\braket{a_{j}|b_{k}}\right|$,
were derived in \cite{EUR10,EUR9}.
An interesting feature of EUR is that they can be generalized
to an arbitrary number of observables in a straightforward way from
Maassen and Uffink's EUR. Indeed, let us consider for simplicity the
case of three observables $A_{1}$, $A_{2}$ and $A_{3}$, which
mutually satisfy the following EURs:
\begin{equation}
H(A_{i})+H(A_{j})\geqslant q_{MU}^{ij},\label{MUF2}
\end{equation}
where $i,j=1,2,3$ labels the three observables. Then, we have:
\begin{align}
\sum_{k=1}^{3}H(A_{k}) & =\frac{1}{2}\sum_{k=1}^{3}\sum_{j\neq k}H(A_{k})+H(A_{j})\nonumber \\
& \geq\frac{1}{2}\left(q_{MU}^{12}+q_{MU}^{13}+q_{MU}^{23}\right)
\end{align}
where we have applied (\ref{MUF2}) to each pair. If we have $L$
observables, the above inequality becomes:
\begin{equation}
\sum_{k=1}^{L}H(A_{k})\geq\frac{1}{\left(L-1\right)}\sum_{t\in T_{2}}q_{MU}^{t},\label{MultiObsEUR}
\end{equation}
where $t$ takes values in the set $T_{2}$ of labels of all the
possible $L(L-1)/2$ pairs of observables. For example if
$L=4$, then $T_{2}=\{12,13,14,23,24,34\}$. However EUR in the form
(\ref{MultiObsEUR}) are usually not tight, i.e. in most cases the
lower bounds can be improved. Tight bounds are known only for small
dimensions and for complementary or angular momentum observables.
For the sake of simplicity, henceforth all explicit examples will
be discussed only for complementary observables. The maximal number
of complementary observables for any given dimension is an open problem
\cite{MUBs}, which finds its roots in the classification of all complex
Hadamard matrices. However, if $d$ is a power of a prime then $d+1$
complementary observables always exist. For any $d$, even
if it is not a power of a prime, it is possible to find at least three
complementary observables \cite{MUBs}. The method that we will define
in the next Section can be therefore used in any dimension. The qubit
case, where at most three complementary observables exist, which are
in correspondence with the three Pauli matrices, was studied in \cite{MultiSanchez},
while for systems with dimension three to five tight bounds for an
arbitrary number of complementary observables were derived in \cite{TightAR}.
For example in the qubit case, where the three observables $A_{1},A_{2}$
and $A_{3}$ correspond to the three Pauli matrices $\sigma_{x},\sigma_{y}$
and $\sigma_{z},$ we have:
\begin{equation}
H(A_{1})+H(A_{2})+H\left(A_{3}\right)\geqslant2,\label{MultiQubit}
\end{equation}
and the minimum is achieved by the eigenstates of one of the $A_{i}$.
In the case of a qutrit, where four complementary observables exist,
we instead have:
\begin{align}
& H(A_{1})+H(A_{2})+H(A_{3})\geqslant3,\label{3d 3Mubs}\\
& H(A_{1})+H(A_{2})+H(A_{3})+H(A_{4})\geqslant4.\label{3d 4mubs}
\end{align}
The minimum values are achieved by:
\begin{align}
& \frac{e^{i\varphi}\ket0+\ket1}{\sqrt{2}},\ \frac{e^{i\varphi}\ket0+\ket2}{\sqrt{2}},\ \frac{e^{i\varphi}\ket1+\ket2}{\sqrt{2}},
\end{align}
where $\varphi=\frac{\pi}{3},\pi,\frac{5\pi}{3}$. Another result,
for $L<d+1$, can be found in \cite{MultiBallesterWehner}, where
it has been shown that if the Hilbert space dimension is a square,
that is $d=r^{2},$ then for $L<r+1$ the inequality (\ref{MultiObsEUR})
is tight, namely:
\begin{equation}
\sum_{i=1}^{L}H(A_{i})\geqslant\frac{L}{2}\log_{2}d=q_{BW}.\label{Ballester}
\end{equation}
In order to have a compact expression to use, we express the EUR for
$L$ observables in the following way:
\begin{equation}
\sum_{i=1}^{L}H(A_{i})\geq f\left(\mathcal{A},L\right),\label{L-ObsEur}
\end{equation}
where $f\left(\mathcal{A},L\right)$ indicates the lower bound, which
can be tight or not, and it depends on the set $\mathcal{A}=\left\{ A_{1},...,A_{L}\right\} $
of $L$ observables considered. Here we also point out in the lower
bound how many observables are involved. When we refer explicitly to
tight bounds we will use the additional label $T$, namely $f^{T}\left(\mathcal{A},L\right)$
expresses a lower bound that we know is achievable via some states. | 3,463 | 13,667 | en |
train | 0.19.1 | \section{Bipartite entanglement criteria}
In this Section we discuss bipartite entanglement criteria based on
EUR, defined in terms of joint Shannon entropies. The framework consists
in two parties, say Alice and Bob, who share a quantum state $\rho_{AB}$,
and they want to establish if their state is entangled. Alice
and Bob can perform $L$ measurements each, that we indicate respectively
as $A_{1},..,A_{L}$ and $B_{1},..,B_{L}$. Alice and Bob measure the
observables $A_{i}\otimes B_{j}$ and they want to have a criterion
defined in terms of the joint Shannon entropies $H\left(A_{i},B_{j}\right)$
which certifies the presence of entanglement. As a reminder, in a
bipartite scenario we say that the state $\rho_{AB}$ is entangled
iff it cannot be expressed as a convex combination of product states,
which are represented by separable states, namely iff:
\begin{equation}
\rho_{AB}\neq\sum_{i}p_{i}\rho_{A}^{i}\otimes\rho_{B}^{i},
\end{equation}
where $p_{i}\geq0$, $\sum_{i}p_{i}=1$, and $\rho_{A}^{i}$, $\rho_{B}^{i}$
are Alice and Bob's states respectively.
\begin{prop}
If the state $\rho_{AB}$ is separable, then the following EUR must
hold:
\begin{equation}
\sum_{i=1}^{L}H(A_{i},B_{i})\geq f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right),\label{Primo criterio}
\end{equation}
where $f\left(\mathcal{A},L\right)$ and $f\left(\mathcal{B},L\right)$
are the lower bounds of the single system EUR, namely
\begin{equation}
\sum_{i=1}^{L}H(A_{i})\geq f\left(\mathcal{A},L\right),
\end{equation}
\begin{equation}
\sum_{i=1}^{L}H(B_{i})\geq f\left(\mathcal{B},L\right).
\end{equation}
\end{prop}
\begin{proof}
Let us focus first on $H(A_{i},B_{i})$ which, for the properties
of the Shannon entropy, can be expressed as:
\begin{equation}
H(A_{i},B_{i})=H(A_{i})+H(B_{i}|A_{i}).
\end{equation}
We want to bound $H\left(B_{i}|A_{i}\right)$ which is computed over
the state $\rho_{AB}=\sum_{j}p_{j}\rho_{A}^{j}\otimes\rho_{B}^{j}$
. Through the convexity of the relative entropy, one can prove that
the conditional entropy H(B|A) is concave in $\rho_{AB}$. Then we
have:
\begin{equation}
H(B_{i}|A_{i})_{\sum_{j}p_{j}\rho_{A}^{j}\otimes\rho_{B}^{j}}\geq\sum_{j}p_{j}H(B_{i}|A_{i})_{\rho_{A}^{j}\otimes\rho_{B}^{j}},
\end{equation}
thus, since the right-hand side of the above Equation is evaluated on a product
state, we have:
\begin{equation}
H(B_{i}|A_{i})_{\sum_{j}p_{j}\rho_{A}^{j}\otimes\rho_{B}^{j}}\geq\sum_{j}p_{j}H(B_{i})_{\rho_{B}^{j}}.
\end{equation}
Therefore, considering $\sum_{i=1}^{L}H(A_{i},B_{i})$, we derive
the following:
\begin{equation}
\sum_{i=1}^{L}H(A_{i},B_{i})\geq\sum_{i}H(A_{i})+\sum_{j}p_{j}\sum_{i}H(B_{i})_{\rho_{B}^{j}}.\label{Proof}
\end{equation}
Then we can observe that $\sum_{i=1}^{L}H(A_{i})\geq f\left(\mathcal{A},L\right)$
and $\sum_{i}H(B_{i})_{\rho_{B}^{j}}\geq f\left(\mathcal{B},L\right)$,
the latter holding due to EUR being state-independent
bounds. Therefore we have:
\begin{align}
\sum_{i=1}^{L}H(A_{i},B_{i}) & \geq f\left(\mathcal{A},L\right)+\sum_{j}p_{j}f\left(\mathcal{B},L\right)\nonumber \\
& =f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right),
\end{align}
since $\sum_{j}p_{j}=1.$
\end{proof}
Any state that violates the inequality $\sum_{i=1}^{L}H(A_{i},B_{i})\geq f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right)$
must be therefore entangled. However this is not sufficient to have
a proper entanglement criterion. Indeed, if we consider the observables
$A_{i}\otimes B_{i}$ as ones of the bipartite system then they must
satisfy an EUR for all states, even the entangled ones, which can
be expressed as:
\begin{equation}
\sum_{i=1}^{L}H(A_{i},B_{i})\geq f(\mathcal{AB},L),
\end{equation}
where the lower bound now depends on the observables $A_{i}\otimes B_{i}$,
while $f\left(\mathcal{A},L\right)$ and $f\left(\mathcal{B},L\right)$
depend respectively on $A_{i}$ and $B_{i}$ individually. In order
to have a proper entanglement criterion then we should have that
\begin{equation}
f(\mathcal{AB},L)<f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right),
\end{equation}
which means that the set of entangled states that violate the inequality
is not empty. As it was shown in \cite{Additivity}, for $L=2$, we
have $f(\mathcal{AB},2)=f\left(\mathcal{A},2\right)+f\left(\mathcal{B},2\right)$
for any observables, which expresses the additivity of EUR for pairs
of observables. A counterexample of this additivity property for $L>3$
is provided by the complete set of complementary observables for two
qubits, indeed we have:
\begin{equation}
H(A_{1},B_{1})+H(A_{2},B_{2})+H(A_{3},B_{3})\geq3,
\end{equation}
and the minimum is attained by the Bell states while $f\left(\mathcal{A},3\right)+f\left(\mathcal{B},3\right)=4,$
which provides the threshold that enables entanglement detection
in the case of two qubits. \\
Let us now clarify the difference of this result with respect to those
defined in terms of EUR based on conditional entropies, in particular
to entropic steering inequalities. Indeed, if one looks at the proof
of Proposition 1, it could be claimed that there is no difference at all
since we used the fact that $\sum_{i}H(B_{i}|A_{i})\geq f(\mathcal{B},L)$,
which is a steering inequality, namely violation of it witnesses the
presence of quantum steering from Alice to Bob. However the difference
is due to the symmetric behavior of the joint entropy, which contrasts
with the asymmetry of quantum steering. To be more formal, the joint
Shannon entropy $H(A_{i},B_{i})$ can be rewritten in two forms:
\begin{align}
H(A_{i},B_{i})= & H(A_{i})+H(B_{i}|A_{i})\\
= & H(B_{i})+H(A_{i}|B_{i}),\nonumber
\end{align}
then:
\begin{equation}
\sum_{i}H(A_{i},B_{i})=\sum_{i}\left(H(A_{i})+H(B_{i}|A_{i})\right),\label{SI 1}
\end{equation}
and
\begin{equation}
\sum_{i}H(A_{i},B_{i})=\sum_{i}\left(H(B_{i})+H(A_{i}|B_{i})\right).\label{SI2}
\end{equation}
If now the state is not steerable from Alice to Bob, we have $\sum_{i}H(B_{i}|A_{i})\geq f(\mathcal{B},L)$,
which implies $\sum_{i=1}^{L}H(A_{i},B_{i})\geq f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right)$.
Note that in this case if we look at $\sum_{i}H(A_{i}|B_{i})$ no
bound can be derived, apart from the trivial bound $\sum_{i}H(A_{i}|B_{i})\geq0$,
since there are no assumptions on the conditioning from Bob to Alice.
Conversely, if the state is not steerable from Bob to Alice, i.e.
we exchange the roles, we have $\sum_{i}H(B_{i}|A_{i})\geq0$ and
$\sum_{i}H(A_{i}|B_{i})\geq f(\mathcal{A},L)$, which implies again
$\sum_{i=1}^{L}H(A_{i},B_{i})\geq f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right)$.
Therefore if we just look at the inequality (\ref{Primo criterio}),
we cannot distinguish between entanglement, or the two possible forms
of quantum steering and since the presence of steering, for bipartite
systems, implies entanglement it is more natural to think about Eq.~(\ref{Primo criterio})
as an entanglement criterion, while if we want to investigate steering
properties of the state we should look at the violation of the criteria
$\sum_{i}H(B_{i}|A_{i})\geq f(\mathcal{B},L)$ and $\sum_{i}H(A_{i}|B_{i})\geq f(\mathcal{A},L).$
\subsubsection*{State-dependent bounds}
A stronger entanglement criteria can be derived by considering the
state-dependent EUR:
\begin{equation}
\sum_{i=1}^{L}H(A_{i})\geq f\left(\mathcal{A},L\right)+S\left(\rho_{A}\right),\label{State dependent}
\end{equation}
or the corresponding version for Bob's system $\sum_{i=1}^{L}H(B_{i})\geq f\left(\mathcal{B},L\right)+S\left(\rho_{B}\right)$,
where $S\left(\rho_{A}\right)$ and $S\left(\rho_{B}\right)$ are
the Von Neumann entropies of the marginal states of $\rho_{AB}.$
\begin{prop}
If the state $\rho_{AB}$ is separable, then the following EUR must
hold:
\begin{equation}
\sum_{i=1}^{L}H(A_{i},B_{i})\geq f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right)+\max\left(S\left(\rho_{A}\right),S\left(\rho_{B}\right)\right).\label{Primo criterio-1}
\end{equation}
\end{prop}
\begin{proof}
The proof is the same of Proposition 1 where we use (\ref{State dependent})
in (\ref{Proof}), instead of the state-dependent bound (\ref{L-ObsEur}).
The same holds if we use the analogous version for Bob. Then, aiming
at the strongest criterion, we can take the maximum between the two
Von Neumann entropies.
\end{proof}
The edge in using these criteria, instead of the one defined in Proposition
1, is such that even for $L=2$ the bound is meaningful. Indeed
a necessary condition to the definition of a proper criterion is that:
\begin{equation}
f^{T}(\mathcal{AB},2)<f\left(\mathcal{A},2\right)+f\left(\mathcal{B},2\right)+S\left(\rho_{X}\right),\label{VN criteria}
\end{equation}
where $X=A,B$ with the additional requirement that the bound on the
left is tight, i.e. there exist states the violate the criterion.
As an example we can consider a two-qubit system, the observables
$X_{AB}=\sigma_{X}^{A}\otimes\sigma_{X}^{B}$ and $Z_{AB}=\sigma_{Z}^{A}\otimes\sigma_{Z}^{B}$,
which for all states of the whole system satisfy $H(X_{AB})+H(Z_{AB})\geq2$,
and the state $\rho_{AB}=\ket{\phi^{+}}\bra{\phi^{+}}$, indeed for
this scenario the entanglement criterion reads:
\begin{equation}
H(X_{AB})+H(Z_{AB})\geq3,
\end{equation}
which is actually violated since the left-hand side is equal to 2.
Note that in general the condition $f^{T}(\mathcal{AB},L)<f\left(\mathcal{A},L\right)+f\left(\mathcal{B},L\right)+S\left(\rho_{X}\right)$
is necessary to the usefulness of the corresponding entanglement criteria. | 3,361 | 13,667 | en |
train | 0.19.2 | \section{Multipartite entanglement criteria}
We now extend the results of Propositions 1 and 2 for multipartite
systems, where the notion of entanglement has to be briefly discussed
since it has a much richer structure than the bipartite case.
Indeed, we can distinguish among different levels of separability.
First, we say that a state $\rho_{V_{1},..,V_{n}}$ of $n$ systems
$V_{1},..,V_{n}$ is fully separable iff it can be written
in the form:
\begin{equation}
\rho_{V_{1},..,V_{n}}^{FS}=\sum_{i}p_{i}\rho_{V_{1}}^{i}\otimes...\otimes\rho_{V_{n}}^{i},\label{Fully sep}
\end{equation}
with $\sum_{i}p_{i}=1$, namely it is a convex combination of product
states of the single subsystems. As a case of study we will always
refer to tripartite systems, where there are three parties, say Alice,
Bob and Charlie. In this case a fully separable state can be written
as:
\begin{equation}
\rho_{ABC}^{FS}=\sum_{i}p_{i}\rho_{A}^{i}\otimes\rho_{B}^{i}\otimes\rho_{C}^{i}.
\end{equation}
Any state that does not admit such a decomposition contains entanglement among some subsystems. However, we can
define different levels of separability. Hence, we say that the state $\rho_{V1,..,V_{n}}$
of $n$ systems is separable with respect to a given partition $\{I_{1},..,I_{k}\}$,
where $I_{i}$ are disjoint subsets of the indices $I=\{1,..,n\}$,
such that $\cup_{j=1}^{k}I_{j}=I$, iff it can be expressed as:
\begin{equation}
\rho_{V_{1},..,V_{n}}^{1,..,k}=\sum_{i}p_{i}\rho_{1}^{i}\otimes..\otimes\rho_{k}^{i},
\end{equation}
namely some systems share entangled states, while the state is separable
with respect to the partition considered. For tripartite system we
have three different possible bipartitions: $1|23$, $2|13$ and $3|12$.
As an example, if the state $\rho_{ABC}$ can be expressed as:
\begin{equation}
\rho_{ABC}^{1|23}=\sum_{i}p_{i}\rho_{A}^{i}\otimes\rho_{BC}^{i},
\end{equation}
then there is no entanglement between Alice and Bob+Charlie, while
these last two share entanglement. If a state does not admit such
a decomposition, it is entangled with respect to this partition.
Finally, we say that $\rho_{V_{1},..,V_{n}}$ of $n$ systems can
have at most $m$-system entanglement iff it is a mixture of all states
such that each of them is separable with respect to some partition
$\{I_{1},..,I_{k}\}$, where all sets of indices $I_{k}$ have cardinality
$N\leq m$. For tripartite systems this corresponds to the notion of
biseparability, namely the state can have at most 2-system entanglement.
A biseparable state can be written as:
\begin{equation}
\rho_{ABC}=\sum_{i}p_{i}\rho_{A}^{i}\otimes\rho_{BC}^{i}+\sum_{j}q_{j}\rho_{B}^{j}\otimes\rho_{AC}^{j}+\sum_{k}m_{k}\rho_{C}^{k}\otimes\rho_{AB}^{k},
\end{equation}
with $\sum_{i}p_{i}+\sum_{j}q_{j}+\sum_{k}m_{k}=1.$ For $n=3$ a state is then said
to be genuine tripartite entangled if it is $3$-system entangled,
namely if it does not admit such a decomposition.
\subsubsection*{Full separability}
Let us clarify the scenario: in each system $V_{i}$ we consider a
set of $L$ observables $V_{i}^{1},..,V_{i}^{L}$ that we indicate
as $\mathcal{V}_{i}.$ The single system EUR is expressed as:
\begin{equation}
\sum_{j=1}^{L}H\left(V_{i}^{j}\right)\geq f\left(\mathcal{V}_{i},L\right).\label{EUR Vi}
\end{equation}
We are interested in defining criteria in terms of $\sum_{j=1}^{L}H\left(V_{1}^{j},..,V_{n}^{j}\right)$.
A first result regards the notion of full separability.
\begin{prop}
If the state $\rho_{V_{1},..,V_{n}}$ is fully separable, then the
following EUR must hold:
\begin{equation}
\sum_{j=1}^{L}H\left(V_{1}^{j},..,V_{n}^{j}\right)\geq\sum_{i=1}^{n}f\left(\mathcal{V}_{i},L\right).
\end{equation}
\end{prop}
\begin{proof}
Let us consider the case $n=3$. For a given $j$ we have:
\begin{equation}
H\left(V_{1}^{j},V_{2}^{j},V_{3}^{j}\right)=H\left(V_{1}^{j}\right)+H\left(V_{2}^{j}V_{3}^{J}|V_{1}^{j}\right).
\end{equation}
Since the state is separable with respect to the partition $23|1$, due
to concavity of the Shannon entropy, we have:
\begin{equation}
H\left(V_{2}^{j}V_{3}^{J}|V_{1}^{j}\right)\geq\sum_{i}p_{i}H(V_{2}^{j}V_{3}^{j})_{\rho_{2}^{i}\otimes\rho_{3}^{i}}.
\end{equation}
By using the chain rule of the Shannon entropy, the above right-hand side
can be rewritten as:
\begin{align}
\sum_{i}p_{i}H(V_{2}^{j}V_{3}^{j})_{\rho_{2}^{i}\otimes\rho_{3}^{i}}= & \sum_{i}p_{i}H(V_{2}^{j})_{\rho_{2}^{i}}\nonumber \\
& +\sum_{i}p_{i}H(V_{3}^{j}|V_{2}^{j})_{\rho_{2}^{i}\otimes\rho_{3}^{i}},
\end{align}
where the last term can be lower bounded by exploiting the separability
of the state and the concavity of the Shannon entropy, namely:
\begin{equation}
\sum_{i}p_{i}H(V_{3}^{j}|V_{2}^{j})_{\rho_{2}^{i}\otimes\rho_{3}^{i}}\geq\sum_{i}p_{i}H(V_{3}^{j})_{\rho_{3}^{i}}.
\end{equation}
By summing over $j$ we arrive at the thesis:
\begin{equation}
\sum_{j=1}^{L}H\left(V_{1}^{j},V_{2}^{j},V_{3}^{j}\right)\geq\sum_{i=1}^{3}f\left(\mathcal{V}_{i},L\right),
\end{equation}
since $\sum_{j}H\left(V_{1}^{j}\right)\geq f\left(\mathcal{V}_{1},L\right)$,
$\sum_{i}p_{i}\sum_{j}H(V_{2}^{j})_{\rho_{2}^{i}}\geq f\left(\mathcal{V}_{2},L\right)$
and $\sum_{i}p_{i}\sum_{j}H(V_{3}^{j})_{\rho_{3}^{i}}\geq f\left(\mathcal{V}_{3},L\right)$
because of the state-independent EUR. The extension of the proof to
$n$ systems is straightforward.
\end{proof}
The following proposition follows directly by considering the state-dependent
bound:
\begin{equation}
\sum_{j=1}^{L}H\left(V_{i}^{j}\right)\geq f\left(\mathcal{V}_{i},L\right)+S\left(\rho_{i}\right).\label{dh}
\end{equation}
\begin{prop}
If the state $\rho_{V_{1},..,V_{n}}$ is fully separable, then the
following EUR must hold:
\begin{equation}
\sum_{j=1}^{L}H\left(V_{1}^{j},..,V_{n}^{j}\right)\geq\sum_{i=1}^{n}f\left(\mathcal{V}_{i},L\right)+\max\left(S\left(\rho_{1}\right),...,S\left(\rho_{n}\right)\right).
\end{equation}
\end{prop}
Note that only the Von Neumann entropy of one system is present in
the above inequality. This is due to the fact that we use only (\ref{dh})
in the first step of the proof, otherwise we would end with criteria
that require the knowledge of the decomposition (\ref{Fully sep}).
\subsubsection*{Genuine multipartite entanglement}
We now analyze the strongest form of multipartite entanglement in
the case of three systems, say Alice, Bob and Charlie. We make the
further assumptions that the three systems have the same dimension
and in each system the parties perform the same set of measurements,
which implies that there is only one bound of the single system EUR
that we indicate as $\mathcal{F}_{1}\left(L\right).$ We indicate
the bound on a pair of systems as $F_{2}\left(L\right)$, namely $\sum_{i=1}^{L}H\left(A_{i},B_{i}\right)\geq F_{2}\left(L\right)$
and the same by permuting the three systems. This will contribute
to the readability of the paper. With this notation the criterion
defined in Proposition 3 for three systems reads as $\sum_{j=1}^{L}H\left(V_{1}^{j},V_{2}^{j},V_{3}^{j}\right)\geq3F_{1}\left(L\right)$,
and must be satisfied by all fully separable states.
\begin{prop}
If $\rho_{ABC}$ is not genuine multipartite entangled, namely
it is biseparable, then the following EUR must hold:
\begin{equation}
\sum_{j=1}^{L}H\left(V_{1}^{j},V_{2}^{j},V_{3}^{j}\right)\geq\frac{5}{3}\mathcal{F}_{1}\left(L\right)+\frac{1}{3}F_{2}(L).\label{Prop 5}
\end{equation}
\end{prop}
\begin{proof}
Let us assume that $\rho_{ABC}$ is biseparable, that is:
\begin{equation}
\rho_{ABC}=\sum_{i}p_{i}\rho_{A}^{i}\otimes\rho_{BC}^{i}+\sum_{l}q_{l}\rho_{B}^{l}\otimes\rho_{AC}^{l}+\sum_{k}m_{k}\rho_{C}^{k}\otimes\rho_{AB}^{k}.
\end{equation}
The joint Shannon entropy $H\left(V_{1}^{j},V_{2}^{j},V_{3}^{j}\right)$
can be expressed as:
\begin{align}
H\left(V_{1}^{j},V_{2}^{j},V_{3}^{j}\right)= & \frac{1}{3}\left[H(V_{1}^{j})+H\left(V_{2}^{j},V_{3}^{j}|V_{1}^{j}\right)\right]\label{chai rules}\\
& +\frac{1}{3}\left[H(V_{2}^{j})+H\left(V_{1}^{j},V_{3}^{j}|V_{2}^{j}\right)\right]\nonumber \\
& +\frac{1}{3}\left[H(V_{3}^{j})+H\left(V_{1}^{j},V_{2}^{j}|V_{3}^{j}\right)\right].\nonumber
\end{align}
By using the concavity of Shannon entropy and the fact that the state
is biseparable we find these relations:
\begin{align}
H\left(V_{2}^{j},V_{3}^{j}|V_{1}^{j}\right) & \geq\sum_{i}p_{i}H\left(V_{2}^{j},V_{3}^{j}\right)_{\rho_{BC}^{i}}\\
& +\sum_{l}q_{l}H(V_{2}^{j})_{\rho_{B}^{l}}+\sum_{l}m_{k}H(V_{3}^{j})_{\rho_{C}^{k}};\nonumber
\end{align}
\begin{align}
H\left(V_{1}^{j},V_{2}^{j}|V_{3}^{j}\right) & \geq\sum_{i}p_{i}H\left(V_{1}^{j}\right)_{\rho_{A}^{i}}\\
& +\sum_{l}q_{l}H(V_{2}^{j})_{\rho_{B}^{l}}+\sum_{l}m_{k}H(V_{1}^{j},V_{2}^{j})_{\rho_{AB}^{k}};\nonumber
\end{align}
\begin{align}
H\left(V_{1}^{j},V_{3}^{j}|V_{2}^{j}\right) & \geq\sum_{i}p_{i}H\left(V_{1}^{j}\right)_{\rho_{A}^{i}}\\
& +\sum_{l}q_{l}H(V_{1}^{j},V_{3}^{j})_{\rho_{AC}^{l}}+\sum_{l}m_{k}H(V_{3}^{j})_{\rho_{C}^{k}}.\nonumber
\end{align}
Then, by considering the sum over $j$ of the sum of the above entropies,
and using EUR, we find:
\begin{equation}
\begin{array}{c}
\sum_{j}H\left(V_{2}^{j},V_{3}^{j}|V_{1}^{j}\right)+H\left(V_{1}^{j},V_{2}^{j}|V_{3}^{j}\right)+H\left(V_{1}^{j},V_{3}^{j}|V_{2}^{j}\right)\\
\geq2\mathcal{F}_{1}\left(L\right)+F_{2}(L).
\end{array}
\end{equation}
The thesis (\ref{Prop 5}) is now implied by combining the expression above, Eq.
(\ref{chai rules}) and the following EUR:
\begin{equation}
\sum_{j}H(V_{1}^{j})+H(V_{2}^{j})+H(V_{3}^{j})\geq3\mathcal{F}_{1}\left(L\right).\label{EURs}
\end{equation}
\end{proof}
\begin{prop}
If $\rho_{ABC}$ is not genuine multipartite entangled, namely
it is biseparable, then the following EUR must hold:
\begin{equation}
\sum_{j=1}^{L}H\left(V_{1}^{j},V_{2}^{j},V_{3}^{j}\right)\geq\frac{5}{3}\mathcal{F}_{1}\left(L\right)+\frac{1}{3}F_{2}(L)+\frac{1}{3}\sum_{x=A,B,C}S\left(\rho_{X}\right).\label{Prop 5-1}
\end{equation}
\end{prop}
The above proposition follows from the proof of Prop. 5, where we consider
the single system state-dependent EUR. | 3,839 | 13,667 | en |
train | 0.19.3 | \section{Entanglement Detection}
Here we discuss our criteria for bipartite
and multipartite systems. We will mainly focus on pure states and
multi-qubit systems. We inspect in detail how many entangled states and which levels of separability can be detected with the different criteria derived from the EUR. We point out that, if one focuses just on the entanglement-detection efficiency, bounds retrieved from EUR based on joint Shannon entropy are not as good as some existing criteria. On the other hand, note that the experimental verification of our EUR-based criteria may require less measurements. For instance, if we want to detect the entanglement of a multipartite state through the PPT method, we need to perform a tomography of the state, which involves the measurement of $d^4$ observables. On the contrary, the evaluation of the entropies just needs the measurements of the observables involved in the EUR and its number can be fixed at $3$ independently of the dimension $d$.
\subsection{Bipartite systems}
Let us start with the easiest case of two qubits. In this scenario
we will consider complementary observables and the tight EUR \cite{MultiSanchez,TightAR},
hence the considered criteria read as:
\begin{equation}
H(A_{1},B_{1})+H(A_{2},B_{2})<2+\max(S(\rho_{A}),S(\rho_{B})),\label{criterio1}
\end{equation}
\begin{equation}
\sum_{i=1}^{3}H(A_{i},B_{i})<4,\label{criterio2}
\end{equation}
\begin{align}
\sum_{i=1}^{3}H(A_{i},B_{i}) & <4+\max(S(\rho_{A}),S(\rho_{B})).\label{criterio3}
\end{align}
where $A_{1}=Z_{1},A_{2}=X_{1}$ and $A_{3}=Y_{1}$ being $Z_{1},X_{1}$
and $Y_{1}$ the usual Pauli matrices for the first qubit; the same
holds for the second qubit. In the case of two qubits we have already
shown in Section II that maximally entangled states are detected by
the above criteria.\\
We can then consider the family of entangled two-qubit states given
by:
\begin{equation}
\ket{\psi_{\epsilon}}=\epsilon\ket{00}+\sqrt{1-\epsilon^{2}}\ket{11},\label{entangled states1}
\end{equation}
where $\epsilon\in(0,1).$ We first note that for this family we have
$S(\rho_{A})=S(\rho_{B})=-\epsilon^{2}\log_{2}\epsilon^{2}-(1-\epsilon^{2})\log_{2}(1-\epsilon^{2}),$
which is equal to $H(A_{1},B_{1}).$ Conversely, we have instead
$H(A_{2},B_{2})=-\frac{1}{2}(1-\bar{\epsilon})\log_{2}(\frac{1}{4}(1-\bar{\epsilon}))-\frac{1}{2}(1+\bar{\epsilon})\log_{2}(\frac{1}{4}(1+\bar{\epsilon})),$
with $\bar{\epsilon}=2\epsilon\sqrt{1-\epsilon^{2}}$ and $H(A_{2},B_{2})=H(A_{3},B_{3})$.
The family of states (\ref{entangled states1}) is then completely
detected by (\ref{criterio1}) and (\ref{criterio3}), since $H(A_{2},B_{2})<2$, while
(\ref{criterio2}) fails to detect all states (see Fig.~\ref{fig1}).
Let us consider now the entangled two-qudit states given by:
\begin{equation}
\ket{\psi_{\lambda}}=\sum_{i=0}^{d-1}\lambda_{i}\ket{ii},\label{entangled states1-1}
\end{equation}
where $\sum_{i}\lambda_{i}^{2}=1$ and $0<\lambda_{i}<1$. As an entanglement
criterion we consider:
\begin{equation}
H(A_{1},B_{1})+H(A_{2},B_{2})\geq2\log_{2}d+\max(S(\rho_{A}),S(\rho_{B})),\label{qudit criterion}
\end{equation}
where the first observable $A_{1}$ is the computational basis and
$A_{2}$ is its Fourier transform, which is well-defined in any dimension,
and the same for $B_{1}$ and $B_{2}$. First, we can observe that
for these states we have $S(\rho_{A})=S(\rho_{B})=-\sum_{i}\lambda_{i}^{2}\log_{2}\lambda_{i}^{2}$. Moreover, since $A_{1}$ and $B_{1}$ are respectively represented
by the computational bases, we have $H(A_{1},B_{1})=-\sum_{i}\lambda_{i}^{2}\log_{2}\lambda_{i}^{2}$.
Hence, the entanglement condition becomes:
\begin{equation}
H(A_{2},B_{2})<2\log_{2}d.
\end{equation}
\begin{figure}
\caption{Pure Bipartite States: the continuous line represents $\sum_{i=1}
\label{fig1}
\end{figure}
However, for any two-qudit states we have $H(A_{2},B_{2})\leq2\log_{2}d$
and the maximum is achieved by states that give uniform probability
distributions for $A_{2}\otimes B_{2}$. Since $A_{2}$ and $B_{2}$
are the Fourier transformed bases of the computational ones, the family
(\ref{entangled states1-1}) cannot give a uniform probability distribution
due the definition of the Fourier transform. The maximum value could
be attained only by states of the form $\ket{ii}$, hence by separable
states which are excluded in (\ref{entangled states1-1}). Thus, our
criterion (\ref{qudit criterion}) detects all two-qudit entangled
states of the form (\ref{entangled states1-1}).\\
\begin{figure}
\caption{GHZ States: on the left, the continuous line represents $\sum_{i=1}
\label{fig2}
\end{figure}
\subsection{Multipartite systems}
As an example of multipartite systems we focus on the case of a three
qubit system. In this case a straightforward generalization of the
Schmidt decomposition is not available. However, the pure states can be
parameterized and classified in terms of five real parameters:
\begin{equation}
\ket{\psi}=\lambda_{0}\ket{000}+\lambda_{1}e^{i\phi}\ket{100}+\lambda_{2}\ket{101}+\lambda_{3}\ket{110}+\lambda_{4}\ket{111},
\end{equation}
where $\sum_{i}\lambda_{i}^{2}=1$. In particular we are interested
in two classes of entangled states, the GHZ states, given by
\begin{equation}
\ket{GHZ}=\lambda_{0}\ket{000}+\lambda_{4}\ket{111},
\end{equation}
and the $W$-states, which are
\begin{equation}
\ket{W}=\lambda_{0}\ket{000}+\lambda_{2}\ket{101}+\lambda_{3}\ket{110}.
\end{equation}
\begin{figure}
\caption{Tripartite W-States: The above shows in which areas in the plane $\lambda_{0}
\label{fig3}
\end{figure}
\begin{figure}
\caption{W-states Non-Separability: the plot shows the effectiveness of the state-dependent criterion
(\ref{multi_ent2}
\label{fig4}
\end{figure}
\begin{figure}
\caption{W-State Genuine Multipartite Entanglement: the plot shows the performance of the state-dependent criterion (\ref{gen_ent2}
\label{fig5}
\end{figure}
The three observables considered in each system
are the Pauli matrices, hence $A_{1}=Z_{1},A_{2}=X_{1}$ and
$A_{3}=Y_{1}$ and the same for the other subsystems. The criteria
for detecting the presence of entanglement, namely states that are not fully separable,
in this case read as:
\begin{equation}
\sum_{i=1}^{3}H(A_{i},B_{i},C_{i})<6,\label{multi_ent1}
\end{equation}
\begin{equation}
\sum_{i=1}^{3}H(A_{i},B_{i},C_{i})<6+\max(S(\rho_{A}),S(\rho_{B}),S(\rho_{C})),\label{multi_ent2}
\end{equation}
while the criteria for genuine multipartite entanglement are:
\begin{equation}
\sum_{i=1}^{3}H(A_{i},B_{i},C_{i})<\text{\ensuremath{\frac{13}{3}}},\label{gen_ent1}
\end{equation}
\begin{align}
\sum_{i=1}^{3}H(A_{i},B_{i},C_{i}) & <\ensuremath{\frac{13}{3}}+\frac{1}{3}\sum_{x=A,B,C}S\left(\rho_{X}\right).\label{gen_ent2}
\end{align}
For the class of GHZ states the sum of the three entropies $\sum_{i=1}^{3}H(A_{i},B_{i},C_{i})$
is plotted as a function of $\lambda_{0}$ in Fig.~\ref{fig2} with respect
to the state-independent and dependent bounds. We can therefore see
that the state-independent bounds fail to detect even the weakest
form of entanglement. Conversely, the state-dependent bounds identify
all states as non-separable but none as genuine multipartite entangled.
\\
For the class of the W states the effectiveness of our criteria is
shown in Figs.~\ref{fig3},~\ref{fig4} and~\ref{fig5}. Since the W-states depend on two parameters
we decided to use contour plots in the plane $\lambda_{0}\times\lambda_{2}$
showing which subsets of W states are detected as non-fully separable or
genuine multipartite entangled. As we can see, the state-independent
bounds (Fig.~\ref{fig3}) detect the non-separable character of the W-states
for a large subset of them. Conversely, no state is identified as
genuine multipartite entangled. By using the state-dependent bounds
(Figs.~\ref{fig4} and~\ref{fig5}) we are able to detect almost all non separable W-states
and, above all, we can also identify a small subset of W states as
genuine multipartite entangled.
\section{Conclusions}
In conclusion, we derived and characterized a number of entropic
uncertainty inequalities, defined in terms of the joint Shannon entropy,
whose violation guarantees the presence of entanglement. On
a theoretical level, which was the main aim of this work, we clarified
that EUR entanglement criteria for the joint Shannon entropy require
at least three different observables or, if one considers only two
measurements, the addition of the von Neumann entropy of a subsystem,
thus showing that the additivity character of the EUR holds only for
two measurements \cite{Additivity}. We also extended our
criteria to the case of multipartite systems, which enable us to discriminate
between different types of multipartite entanglement. We then showed
how these criteria perform for both bipartite and multipartite systems,
providing several examples of states that are detected by the proposed
criteria.

This material is based upon work supported by the U.S. Department
of Energy, Office of Science, National Quantum Information Science
Research Centers, Superconducting Quantum Materials and Systems Center
(SQMS) under contract number DEAC02-07CH11359 and by the EU H2020
QuantERA ERA-NET Cofund in Quantum Technologies project QuICHE.
\end{document}
\begin{document}
\title{Chern classes in equivariant bordism}
\date{\today; 2020 AMS Math.\ Subj.\ Class.: 55N22, 55N91, 55P91, 57R85}
\author{Stefan Schwede}
\address{Mathematisches Institut, Universit\"at Bonn, Germany}
\email{[email protected]}
\begin{abstract} We introduce Chern classes in $U(m)$-equivariant homotopical bordism
that refine the Conner-Floyd-Chern classes in the $\mathbf{MU}$-cohomology of $B U(m)$.
For products of unitary groups, our Chern classes form regular sequences
that generate the augmentation ideal of the equivariant bordism rings.
Consequently, the Greenlees-May local homology spectral sequence collapses for products of unitary groups.
We use the Chern classes to reprove the $\mathbf{MU}$-completion theorem of Greenlees-May and La Vecchia.
\end{abstract}
\maketitle
\section*{Introduction}
Complex cobordism $\mathbf{MU}$ is arguably the most important cohomology theory in algebraic topology.
It represents the bordism theory of stably almost complex manifolds,
and it is the universal complex oriented cohomology theory;
via Quillen's celebrated theorem \cite{quillen:formal_group},
$\mathbf{MU}$ is the entry gate for the theory of formal group laws into
stable homotopy theory, and thus the cornerstone of chromatic stable homotopy theory.
Tom Dieck's homotopical equivariant bordism $\mathbf{MU}_G$ \cite{tomDieck:bordism_integrality},
defined with the help of equivariant Thom spaces,
strives to be the legitimate equivariant refinement of complex cobordism,
for compact Lie groups $G$.
The theory $\mathbf{MU}_G$ is the universal equivariantly complex oriented theory;
and for abelian compact Lie groups, the coefficient ring $\mathbf{MU}_G^*$ carries the universal
$G$-equivariant formal group law \cite{hausmann:group_law}.
Homotopical equivariant bordism receives a homomorphism
from the geometrically defined equivariant bordism theory; due to the lack
of equivariant transversality, this homomorphism is {\em not} an isomorphism for non-trivial groups.
In general, the equivariant bordism ring $\mathbf{MU}^*_G$
is still largely mysterious; the purpose of this paper is to elucidate its structure
for unitary groups, and for products of unitary groups.
Chern classes are important characteristic classes for complex vector bundles
that were originally introduced in singular cohomology.
Conner and Floyd \cite[Corollary 8.3]{conner-floyd:relation_cobordism}
constructed Chern classes for complex vector bundles in complex cobordism;
in the universal cases, these yield classes $c_k\in \mathbf{MU}^{2 k}(B U(m))$
that are nowadays referred to as Conner-Floyd-Chern classes.
Conner and Floyd's construction works in much the same way for any complex oriented cohomology theory,
see \cite[Part II, Lemma 4.3]{adams:stable_homotopy};
in singular cohomology, it reduces to the classical Chern classes.
The purpose of this note is to define and study Chern classes
in $U(m)$-equivariant homotopical bordism $\mathbf{MU}^*_{U(m)}$
that map to the Conner-Floyd-Chern classes
under tom Dieck's bundling homomorphism \cite[Proposition 1.2]{tomDieck:bordism_integrality}.
Our classes satisfy the analogous formal properties as their classical counterparts,
including the equivariant refinement of the Whitney sum formula, see Theorem \ref{thm:CFC main}.
Despite the many formal similarities, there are crucial qualitative differences
compared to Chern classes in complex oriented cohomology theories: our
Chern classes are {\em not} characterized by their restriction to the maximal torus, and some
of our Chern classes are zero-divisors, see Remark \ref{rk:torus_restriction}.
We will use our Chern classes and the splitting of \cite{schwede:split BU}
to prove new structure results about the equivariant bordism rings $\mathbf{MU}^*_{U(m)}$
for unitary groups, or more generally for products of unitary groups.
To put this into context, we recall that in the special case when $G$ is an {\em abelian} compact Lie group,
the graded ring $\mathbf{MU}^*_G$ is concentrated in even degrees and free as a module
over the non-equivariant cobordism ring $\mathbf{MU}^*$ \cite[Theorem 5.3]{comezana}, \cite{loeffler:equivariant},
and the bundling homomorphism $\mathbf{MU}^*_G\longrightarrow \mathbf{MU}^*(B G)$ is completion
at the augmentation ideal of $\mathbf{MU}^*_G$ \cite[Theorem 1.1]{comezana-may}, \cite{loeffler:bordismengruppen}.
For non-abelian compact Lie groups $G$, however, the equivariant bordism rings $\mathbf{MU}^*_G$
are still largely mysterious.
\pagebreak
The main result of this note is the following:\smallskip
{\bf Theorem.} {\em Let $m\geq 1$ be a natural number.
\begin{enumerate}[\em (i)]
\item
The sequence of Chern classes $c_m^{(m)},c_{m-1}^{(m)},\dots,c_1^{(m)}$
is a regular sequence that generates the augmentation ideal of the graded-commutative ring $\mathbf{MU}^*_{U(m)}$.
\item
The completion of $\mathbf{MU}_{U(m)}^*$ at the augmentation ideal
is a graded $\mathbf{MU}^*$-power series algebra in the above Chern classes.
\item
The bundling homomorphism $\mathbf{MU}_{U(m)}^*\longrightarrow \mathbf{MU}^*(B U(m))$ extends to an isomorphism
\[ ( \mathbf{MU}_{U(m)}^*)^\wedge_I \ \longrightarrow \ \mathbf{MU}^*(BU(m)) \]
from the completion at the augmentation ideal.
\end{enumerate}
}
We prove this result as a special case of Theorem \ref{thm:completions} below;
the more general version applies to products of unitary groups.
As we explain in Remark \ref{rk:degenerate}, the regularity of the Chern classes
also implies that the Greenlees-May local homology spectral sequence
converging to $\mathbf{MU}^*(BU(m))$ degenerates
because the relevant local homology groups vanish in positive degrees.
As another application we use the Chern classes in equivariant bordism
to give a reformulation and self-contained proof of work of Greenlees-May \cite{greenlees-may:completion}
and La Vecchia \cite{lavecchia} on the completion theorem for $\mathbf{MU}_G$,
see Theorem \ref{thm:completion}.
\section{Equivariant \texorpdfstring{$\mathbf{MU}$}{MU}-Chern classes}
In this section we introduce the Chern classes in $U(m)$-equivariant homotopical bordism,
see Definition \ref{def:CFC}. We establish their basic properties
in Theorem \ref{thm:CFC main}, including a Whitney sum formula and the fact that the bundling homomorphism
takes our Chern classes to the Conner-Floyd-Chern classes in $\mathbf{MU}$-cohomology.
We begin by fixing our notation.
For a compact Lie group $G$, we write $\mathbf{MU}_G$ for the $G$-equivariant homotopical bordism spectrum
introduced by tom Dieck \cite{tomDieck:bordism_integrality}.
For our purposes, it is highly relevant that the theories $\mathbf{MU}_G$ for varying compact Lie groups $G$ assemble
into a global stable homotopy type, see \cite[Example 6.1.53]{schwede:global}.
For an integer $n$, we write $\mathbf{MU}_G^n=\pi_{-n}^G(\mathbf{MU})$ for the $G$-equivariant coefficient group
in cohomological degree $n$.
Since $\mathbf{MU}$ comes with the structure of a global ring spectrum, it supports
graded-commutative multiplications on $\mathbf{MU}_G^*$, as well as external multiplication pairings
\[ \times \ : \ \mathbf{MU}_G^k\times \mathbf{MU}_K^l \ \longrightarrow \ \mathbf{MU}_{G\times K}^{k+l} \]
for all pairs of compact Lie groups $G$ and $K$.
We write $\nu_k$ for the tautological representation
of the unitary group $U(k)$ on $\mathbb C^k$; we denote its Euler class by
\[ e_k \ = \ e(\nu_k) \ \in \ \mathbf{MU}^{2 k}_{U(k)}\ ,\]
compare \cite[page 347]{tomDieck:bordism_integrality}.
We write $U(k,m-k)$ for the block subgroup of $U(m)$ consisting of matrices of the form
$(\begin{smallmatrix}A & 0 \\ 0 & B \end{smallmatrix})$
for $(A,B)\in U(k)\times U(m-k)$.
We write $\tr_{U(k,m-k)}^{U(m)}:\mathbf{MU}_{U(k,m-k)}^*\longrightarrow\mathbf{MU}_{U(m)}^*$
for the transfer associated to the inclusion $U(k,m-k)\longrightarrow U(m)$,
see for example \cite[Construction 3.2.11]{schwede:global}.
\begin{defn}\label{def:CFC}
For $0\leq k\leq m$, the {\em $k$-th Chern class}
in equivariant complex bordism is the class
\[ c_k^{(m)} \ = \ \tr_{U(k,m-k)}^{U(m)}(e_k\times 1_{m-k})\ \in \ \mathbf{MU}^{2 k}_{U(m)}\ , \]
where $1_{m-k}\in\mathbf{MU}_{U(m-k)}^0$ is the multiplicative unit. We also set $c_k^{(m)} =0$ for $k>m$.
\end{defn}
In the extreme cases $k=0$ and $k=m$, we recover familiar classes:
since $e_0$ is the multiplicative unit in the non-equivariant cobordism ring $\mathbf{MU}^*$,
the class $c_0^{(m)}=1_m$ is the multiplicative unit in $\mathbf{MU}_{U(m)}^0$.
In the other extreme, $c_m^{(m)}=e_m=e(\nu_m)$ is the Euler class of
the tautological $U(m)$-representation.
As we will show in Theorem \ref{thm:CFC main} (ii), the classes $c_k^{(m)}$
are compatible in $m$ under restriction to smaller unitary groups.
\begin{rk}\label{rk:torus_restriction}
We alert the reader that the restriction homomorphism $\res^{U(m)}_{T^m}:\mathbf{MU}^*_{U(m)}\longrightarrow \mathbf{MU}^*_{T^m}$
is not injective for $m\geq 2$, where $T^m$ denotes a maximal torus in $U(m)$.
So the Chern classes in $\mathbf{MU}^*_{U(m)}$ are not characterized by their restrictions
to the maximal torus -- in contrast to the non-equivariant situation for complex oriented cohomology theories.
To show this we let $N$ denote the maximal torus normalizer inside $U(m)$. The class
\[ 1- \tr_N^{U(m)}(1) \ \in \ \mathbf{MU}^0_{U(m)} \]
has infinite order because the $U(m)$-geometric fixed point map
takes it to the multiplicative unit; in particular, this class is nonzero.
The double coset formula \cite[IV Corollary 6.7 (i)]{lms}
\[ \res^{U(m)}_{T^m}(\tr_N^{U(m)}(1))\ = \ \res^N_{T^m}(1)\ = \ 1 \]
implies that the class $ 1- \tr_N^{U(m)}(1)$ lies in the kernel of the restriction homomorphism
$\res^{U(m)}_{T^m}:\mathbf{MU}^0_{U(m)}\longrightarrow \mathbf{MU}^0_{T^m}$.
Moreover, the Chern class $c_1^{(2)}$ is a zero-divisor in the ring $\mathbf{MU}^*_{U(2)}$,
also in stark contrast to Chern classes in complex oriented cohomology theories.
Indeed, reciprocity for restriction and transfers \cite[Corollary 3.5.17 (v)]{schwede:global}
yields the relation
\begin{align*}
c_1^{(2)}\cdot (1-\tr_N^{U(2)}(1)) \
&= \ \tr_{U(1,1)}^{U(2)}(e_1\times 1)\cdot (1-\tr_N^{U(2)}(1)) \\
&= \
\tr_{U(1,1)}^{U(2)}((e_1\times 1)\cdot \res^{U(2)}_{U(1,1)}(1-\tr_N^{U(2)}(1))) \ = \ 0 \ .
\end{align*}
One can also show that the class $1-\tr_N^{U(2)}(1)$ is infinitely divisible by the Euler class $e_2=c_2^{(2)}$;
so it is also in the kernel of the completion map at the ideal $(e_2)$.
\end{rk}
The Chern class $c_k^{(m)}$ is defined as a transfer; so identifying its restriction
to a subgroup of $U(m)$ involves a double coset formula.
The following double coset formula will take care of all cases we need in this paper;
it ought to be well-known to experts, but I do not know a reference.
The case $l=1$ is established in \cite[Lemma 4.2]{symonds-splitting},
see also \cite[Example 3.4.13]{schwede:global}.
The double coset space $U(i,j)\backslash U(m)/ U(k,l)$ is discussed at various places in the
literature, for example \cite[Example 3]{matsuki:double_coset},
but I have not seen the resulting double coset formula spelled out.
\begin{prop}[Double coset formula]\label{prop:double coset}
Let $i,j,k,l$ be positive natural numbers such that $i+j=k+l$. Then
\[ \res^{U(i+j)}_{U(i,j)}\circ\tr_{U(k,l)}^{U(k+l)} \ = \
\sum_{0,k-j\leq d\leq i,k}\, \tr_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}\circ\gamma_d^*\circ \res^{U(k,l)}_{U(d,k-d,i-d,l-i+d)}\ ,\]
where $\gamma_d\in U(i+j)$ is the permutation matrix of the shuffle permutation $\chi_d\in\Sigma_{i+j}$
given by
\[ \chi_d(a) \ = \
\begin{cases}
a & \text{ for $1\leq a\leq d$,}\\
a-d+i& \text{ for $d+1\leq a\leq k$,}\\
a+d-k& \text{ for $k+1\leq a\leq k+i-d$, and}\\
a & \text{ for $a > k+i-d$.}
\end{cases}
\]
\end{prop}
\begin{proof}
We refer to \cite[IV 6]{lms} or \cite[Theorem 3.4.9]{schwede:global} for the general
double coset formula for $\res^G_K\circ\tr_H^G$ for two closed subgroups $H$ and $K$
of a compact Lie group $G$; we need to specialize it to the situation at hand.
We first consider a matrix $A\in U(m)$ such that the center $Z$ of $U(i,j)$
is {\em not} contained in the $U(i,j)$-stabilizer
\[ S_A\ = \ U(i,j)\cap {^A U(k,l)} \]
of the coset $A\cdot U(k,l)$.
Then $S_A\cap Z$ is a proper subgroup of the center $Z$ of $U(i,j)$, which is isomorphic
to $U(1)\times U(1)$. So $S_A\cap Z$ has strictly smaller dimension than $Z$.
Since the center of $U(i,j)$ is contained in the normalizer of $S_A$,
we conclude that the group $S_A$ has an infinite Weyl group inside $U(i,j)$.
All summands in the double coset formula indexed by such points then involve transfers with infinite Weyl groups,
and hence they vanish.
So all non-trivial contributions to the double coset formula
stem from double cosets $U(i,j)\cdot A\cdot U(k,l)$ such that $S_A$ contains the center of $U(i,j)$.
In particular the matrix
$ \left( \begin{smallmatrix} - E_i & 0 \\ 0 & E_j \end{smallmatrix} \right)$ then belongs to $S_A$.
We write $L=A\cdot (\mathbb C^k\oplus 0^l)$, a complex $k$-plane in $\mathbb C^{k+l}$;
we consider $x\in\mathbb C^i$ and $y\in\mathbb C^j$ such that $(x,y)\in L$.
Because $ \left( \begin{smallmatrix} - E_i & 0 \\ 0 & E_j \end{smallmatrix} \right)\cdot L=L$,
we deduce that $(-x,y)\in L$. Since $(x,y)$ and $(-x,y)$ belong to $L$, so do the vectors $(x,0)$ and $(y,0)$.
We have thus shown that the $k$-plane $L=A\cdot(\mathbb C^k\oplus 0^l)$ is spanned by the intersections
\[ L\cap (\mathbb C^i\oplus 0^j) \text{\qquad and\qquad} L\cap (0^i\oplus\mathbb C^j)\ . \]
We organize the cosets with this property by the dimension of the first intersection:
we define $M_d$ as the closed subspace of $U(m)/U(k,l)$
consisting of those cosets $A\cdot U(k,l)$ such that
\[ \dim_\mathbb C( L\cap (\mathbb C^i\oplus 0^j))\ = \ d
\text{\qquad and\qquad}
\dim_\mathbb C( L\cap (0^i\oplus\mathbb C^j))\ = \ k-d\ . \]
If $M_d$ is non-empty, we must have $0, k-j\leq d\leq i,k$.
The group $U(i,j)$ acts transitively on $M_d$, and the coset $\gamma_d\cdot U(k,l)$ belongs to $M_d$;
so $M_d$ is the $U(i,j)$-orbit type manifold of $U(m)/U(k,l)$ for the conjugacy class of
\[ S_{\gamma_d}\ = \ U(i,j)\cap {^{\gamma_d} U(k,l)} \ = \ U(d,i-d,k-d,j-k+d)\ . \]
The corresponding orbit space $U(i,j)\backslash M_d=U(i,j)\cdot\gamma_d\cdot U(k,l)$
is a single point inside the double coset space, so its internal Euler characteristic is 1.
This orbit type thus contributes the summand
\[ \tr_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}\circ\gamma_d^*\circ \res^{U(k,l)}_{U(d,k-d,i-d,l-i+d)} \]
to the double coset formula.
\end{proof}
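To illustrate the indexing, let $i=j=k=l=1$. Then the admissible values are
$d=0$ and $d=1$. For $d=1$ the shuffle $\chi_1$ is the identity and the block
subgroups $U(1,0,0,1)$ coincide with $U(1,1)$, so this summand is the identity;
for $d=0$ the summand is conjugation by the permutation matrix $\gamma_0$ that
interchanges the two coordinates. Since $U(1,1)$ is the diagonal maximal torus
of $U(2)$, we recover the classical relation
\[ \res^{U(2)}_{U(1,1)}\circ\tr_{U(1,1)}^{U(2)} \ = \ \mathrm{id} + \gamma_0^*\ , \]
with $\gamma_0$ representing the generator of the Weyl group.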
In \cite[Corollary 8.3]{conner-floyd:relation_cobordism},
Conner and Floyd define Chern classes for complex vector bundles
in the non-equivariant $\mathbf{MU}$-cohomology rings.
In the universal cases, these yield classes $c_k\in \mathbf{MU}^{2 k}(B U(m))$
that are nowadays referred to as Conner-Floyd-Chern classes.
The next theorem spells out the key properties of our Chern classes $c_k^{(m)}$;
parts (i), (ii) and (iii) roughly say that all the familiar structural properties
of the Conner-Floyd-Chern classes in $\mathbf{MU}^*(B U(m))$
already hold for our Chern classes in $U(m)$-equivariant $\mathbf{MU}$-theory.
Part (iv) of the theorem refers to the bundling maps $\mathbf{MU}_G^*\longrightarrow \mathbf{MU}^*(B G)$
defined by tom Dieck in \cite[Proposition 1.2]{tomDieck:bordism_integrality}.
\begin{thm}\label{thm:CFC main} The Chern classes in homotopical equivariant bordism enjoy the following properties.
\begin{enumerate}[\em (i)]
\item For all $0\leq k\leq m=i+j$, the relation
\[ \res^{U(m)}_{U(i,j)}(c_k^{(m)})\ = \ \sum_{d=0,\dots,k} c_d^{(i)}\times c_{k-d}^{(j)}\]
holds in the group $\mathbf{MU}_{U(i,j)}^{2 k}$.
\item The relation
\[ \res^{U(m)}_{U(m-1)}(c_k^{(m)})\ = \
\begin{cases}
c_k^{(m-1)} & \text{ for $0\leq k\leq m-1$, and}\\
\ 0 & \text{ for $k=m$}
\end{cases}\]
holds in the group $\mathbf{MU}_{U(m-1)}^{2 k}$.
\item Let $T^m$ denote the diagonal maximal torus of $U(m)$. Then the restriction homomorphism
\[ \res^{U(m)}_{T^m} \ : \ \mathbf{MU}_{U(m)}^{2 k} \ \longrightarrow \ \mathbf{MU}^{2 k}_{T^m} \]
takes the class $c_k^{(m)}$ to the $k$-th elementary symmetric polynomial
in the classes $p_1^*(e_1),\dots,p_m^*(e_1)$,
where $p_i:T^m\longrightarrow T=U(1)$ is the projection to the $i$-th factor.
\item The bundling map
\[ \mathbf{MU}_{U(m)}^* \ \longrightarrow \ \mathbf{MU}^*(BU(m)) \]
takes the class $c_k^{(m)}$ to the $k$-th Conner-Floyd-Chern class.
\end{enumerate}
\end{thm}
\begin{proof}
(i) This property exploits the double coset formula for
$\res^{U(m)}_{U(i,j)}\circ\tr_{U(k,m-k)}^{U(m)}$ recorded in Proposition \ref{prop:double coset},
which is the second equation in the following list:
\begin{align*}
\res^{U(m)}_{U(i,j)}(c_k^{(m)})\
&= \ \res^{U(m)}_{U(i,j)}(\tr_{U(k,m-k)}^{U(m)}(e_k\times 1_{m-k})) \\
&= \
\sum_{d=0,\dots,k} \tr_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}(\gamma_d^*(\res^{U(k,m-k)}_{U(d,k-d,i-d,j-k+d)}(e_k\times 1_{m-k})))\\
&= \
\sum_{d=0,\dots,k} \tr_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}(\gamma_d^*(e_d\times e_{k-d}\times 1_{i-d}\times 1_{j-k+d}))\\
&= \
\sum_{d=0,\dots,k} \tr_{U(d,i-d,k-d,j-k+d)}^{U(i,j)}(e_d\times 1_{i-d}\times e_{k-d}\times 1_{j-k+d})\\
&= \
\sum_{d=0,\dots,k} \tr_{U(d,i-d)}^{U(i)}(e_d\times 1_{i-d})\times
\tr_{U(k-d,j-k+d)}^{U(j)}(e_{k-d}\times 1_{j-k+d})\\
&= \ \sum_{d=0,\dots,k} c_d^{(i)}\times c_{k-d}^{(j)}
\end{align*}
Part (ii) for $k<m$ follows from part (i) by restriction from $U(m-1,1)$ to $U(m-1)$:
\begin{align*}
\res^{U(m)}_{U(m-1)}(c_k^{(m)})\
&= \ \res^{U(m-1,1)}_{U(m-1)}(\res^{U(m)}_{U(m-1,1)}(c_k^{(m)}))\\
&= \ \res^{U(m-1,1)}_{U(m-1)}(c_{k-1}^{(m-1)}\times c_1^{(1)}\ + \ c_k^{(m-1)}\times c_0^{(1)})\\
&= \ c_{k-1}^{(m-1)}\times \res^{U(1)}_1(c_1^{(1)})\ +\ c_k^{(m-1)}\times \res^{U(1)}_1(c_0^{(1)})\ = \ c_k^{(m-1)}\ .
\end{align*}
We have used that the class $c_1^{(1)}=e_1$ is in the kernel of the augmentation
$\res^{U(1)}_1:\mathbf{MU}_{U(1)}^*\longrightarrow \mathbf{MU}^*$. The Euler class $c_m^{(m)}=e(\nu_m)$
restricts to 0 in $\mathbf{MU}^*_{U(m-1)}$ because the restriction
of the tautological $U(m)$-representation to $U(m-1)$ splits off a trivial 1-dimensional summand.
(iii) An inductive argument based on property (i) shows the desired relation:
\begin{align*}
\res^{U(m)}_{T^m}(c_k^{(m)}) \ &= \
\res^{U(m)}_{U(1,\dots,1)}(c_k^{(m)}) \\
&= \ \sum_{A\subset\{1,\dots,m\}, |A|=k}\quad \prod_{a\in A} p_a^*(c_1^{(1)})\cdot\prod_{b\not\in A}p_b^*(c_0^{(1)}) \\
&= \ \sum_{A\subset\{1,\dots,m\}, |A|=k} \quad \prod_{a\in A} p_a^*(e_1)\ .
\end{align*}
(iv)
As before we let $T^m$ denote the diagonal maximal torus in $U(m)$.
The splitting principle holds for non-equivariant complex oriented cohomology theories,
see for example \cite[Proposition 8.10]{dold:Chern_classes}.
In other words, the right vertical map in the commutative square of graded rings is injective:
\[ \xymatrix{ \mathbf{MU}^*_{U(m)}\ar[r]\ar[d]_{\res^{U(m)}_{T^m}} & \mathbf{MU}^*(B U(m))\ar[d]^-{ (B i)^*} \\
\mathbf{MU}^*_{T^m}\ar[r] & \mathbf{MU}^*(B T^m) \ar@{=}[r] & \mathbf{MU}^*[[p_1^*(e_1),\dots,p_m^*(e_1)]]
} \]
The $k$-th Conner-Floyd-Chern class is characterized as the unique element
of $\mathbf{MU}^{2 k}(B U(m))$ that maps to
the $k$-th elementary symmetric polynomial in the classes $p_1^*(e_1),\dots,p_m^*(e_1)$.
Together with part (iii), this proves the claim.
\end{proof}
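As a concrete instance of the Whitney sum formula (i), let $m=2$ and $k=1$:
\[ \res^{U(2)}_{U(1,1)}(c_1^{(2)}) \ = \ c_0^{(1)}\times c_1^{(1)}\ +\ c_1^{(1)}\times c_0^{(1)}
\ = \ 1_1\times e_1\ +\ e_1\times 1_1 \ , \]
in agreement with the double coset formula of Proposition \ref{prop:double coset}
applied directly to $c_1^{(2)}=\tr_{U(1,1)}^{U(2)}(e_1\times 1_1)$.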
\section{Regularity results}
In this section we use the Chern classes to formulate new structural properties of the equivariant
bordism ring $\mathbf{MU}_{U(m)}^*$. In particular, we can say what $\mathbf{MU}_{U(m)}^*$
looks like after dividing out some of the Chern classes, and after completing at the Chern classes.
The following theorem states these facts more generally for $U(m)\times G$
instead of $U(m)$; by induction on the number of factors, we can then deduce corresponding results
for products of unitary groups, see Theorem \ref{thm:completions}.
The results in this section make crucial use of the splitting theorem for
global functors established in \cite{schwede:split BU}.
\begin{thm}\label{thm:structure}
For every compact Lie group $G$ and all $0\leq k\leq m$, the sequence of Chern classes
\[ (c_m^{(m)}\times 1_G,\ c_{m-1}^{(m)}\times 1_G,\dots,\ c_{k+1}^{(m)}\times 1_G) \]
is a regular sequence in the graded-commutative ring $\mathbf{MU}^*_{U(m)\times G}$
that generates the kernel of the surjective restriction homomorphism
\[ \res^{ U(m)\times G}_{U(k)\times G}\ :\ \mathbf{MU}_{U(m)\times G}^*\ \longrightarrow \mathbf{MU}_{U(k)\times G}^*\ . \]
In particular, the sequence of Chern classes $(c_m^{(m)},c_{m-1}^{(m)},\dots,c_1^{(m)})$
is a regular sequence that generates the augmentation ideal of the graded-commutative ring $\mathbf{MU}^*_{U(m)}$.
\end{thm}
\begin{proof}
We argue by downward induction on $k$. The induction starts with $k=m$, where there is nothing to show.
Now we assume the claim for some $k\leq m$, and we deduce it for $k-1$.
The inductive hypothesis shows that
$c_m^{(m)}\times 1_G,\dots,c_{k+1}^{(m)}\times 1_G$
is a regular sequence in the graded-commutative ring $\mathbf{MU}^*_{U(m)\times G}$,
and that the restriction homomorphism $\res^{U(m)\times G}_{U(k)\times G}$
factors through an isomorphism
\[ \mathbf{MU}_{U(m)\times G}^*/(c_m^{(m)}\times 1_G,\dots,c_{k+1}^{(m)}\times 1_G)
\ \cong \mathbf{MU}_{G\times U(k)}^*\ . \]
We exploit that the various equivariant bordism spectra $\mathbf{MU}_G$ underlie a global spectrum,
see \cite[Example 6.1.53]{schwede:global};
thus the restriction homomorphism $\res^{U(k)\times G}_{U(k-1)\times G}$ is surjective by
Theorem 1.4 and Proposition 2.2 of \cite{schwede:split BU}.
Hence the standard long exact sequence unsplices into
a short exact sequence of graded $\mathbf{MU}^*$-modules:
\[ 0\ \longrightarrow\ \mathbf{MU}_{U(k)\times G}^{*-2 k}\ \xrightarrow{(e_k\times 1_G)\cdot -\ }\
\mathbf{MU}_{U(k)\times G}^* \xrightarrow{\res^{U(k)\times G}_{U(k-1)\times G}}\ \mathbf{MU}_{U(k-1)\times G}^*\ \longrightarrow\ 0
\]
Because
\[ \res^{U(m)\times G}_{U(k)\times G}(c_k^{(m)}\times 1_G)\ = \ c_k^{(k)}\times 1_G\ = \ e_k\times 1_G\ , \]
we conclude that $c_k^{(m)}\times 1_G$ is a non zero-divisor in
$\mathbf{MU}_{U(m)\times G}^*/(c_m^{(m)}\times 1_G,c_{m-1}^{(m)}\times 1_G,\dots,c_{k+1}^{(m)}\times 1_G)$,
and that additionally dividing out $c_k^{(m)}\times 1_G$ yields $\mathbf{MU}_{U(k-1)\times G}^*$.
This completes the inductive step.
\end{proof}
We can now identify the completion of $\mathbf{MU}^*_{U(m)}$ at the augmentation ideal
as an $\mathbf{MU}^*$-power series algebra on the Chern classes.
We state this somewhat more generally for products of unitary groups, which we write as
\[ U(m_1,\dots,m_l)\ = \ U(m_1)\times\dots\times U(m_l)\ , \]
for natural numbers $m_1,\dots,m_l\geq 1$.
For $1\leq i\leq l$, we write $p_i:U(m_1,\dots,m_l)\longrightarrow U(m_i)$ for the projection to the $i$-th factor,
and we set
\[ c^{[i]}_k \ = \ p_i^*(c_k^{(m_i)})\ = \ 1_{U(m_1,\dots,m_{i-1})}\times c_k^{(m_i)}\times 1_{U(m_{i+1},\dots,m_l)}
\ \in \ \mathbf{MU}_{U(m_1,\dots,m_l)}^{2 k}\ .\]
The following theorem was previously known for tori, i.e., for $m_1=\dots=m_l=1$.
\begin{thm}\label{thm:completions}
Let $m_1,\dots,m_l\geq 1$ be positive integers.
\begin{enumerate}[\em (i)]
\item
The sequence of Chern classes
\begin{equation}\label{eq:Chern_for_products}
c_{m_1}^{[1]},\dots,c_1^{[1]},c_{m_2}^{[2]},\dots,c_1^{[2]},\dots, c_{m_l}^{[l]},\dots,c_1^{[l]}
\end{equation}
is a regular sequence that generates the augmentation ideal of the graded-commutative ring $\mathbf{MU}^*_{U(m_1,\dots,m_l)}$.
\item
The completion of $\mathbf{MU}_{U(m_1,\dots,m_l)}^*$ at the augmentation ideal
is a graded $\mathbf{MU}^*$-power series algebra in the Chern classes \eqref{eq:Chern_for_products}.
\item
The bundling map $\mathbf{MU}_{U(m_1,\dots,m_l)}^*\longrightarrow \mathbf{MU}^*(B U(m_1,\dots,m_l))$ extends to an isomorphism
\[ ( \mathbf{MU}_{U(m_1,\dots,m_l)}^*)^\wedge_I \ \longrightarrow \ \mathbf{MU}^*(BU(m_1,\dots,m_l)) \]
from the completion at the augmentation ideal.
\end{enumerate}
\end{thm}
\begin{proof}
Part (i) follows from Theorem \ref{thm:structure} by induction on the number $l$ of factors.
We prove parts (ii) and (iii) together.
We must show that for every $n\geq 1$, $\mathbf{MU}_{U(m_1,\dots,m_l)}^*/I^n$ is free as an $\mathbf{MU}^*$-module on the monomials
of degree less than $n$ in the Chern classes \eqref{eq:Chern_for_products}.
There is nothing to show for $n=1$.
The short exact sequence
\[ 0\ \longrightarrow\ I^n/I^{n+1}\ \longrightarrow\ \mathbf{MU}_{U(m_1,\dots,m_l)}^*/I^{n+1}\ \longrightarrow\ \mathbf{MU}_{U(m_1,\dots,m_l)}^*/I^n\ \longrightarrow\ 0\]
and the inductive hypothesis reduce the claim to showing that
$I^n/I^{n+1}$ is free as an $\mathbf{MU}^*$-module on the monomials of degree exactly $n$ in the Chern classes
\eqref{eq:Chern_for_products}.
Since the augmentation ideal $I$ is generated by these Chern classes,
the $n$-th power $I^n$ is generated, as a module over $\mathbf{MU}_{U(m_1,\dots,m_l)}^*$, by the monomials
of degree $n$.
So $I^n/I^{n+1}$ is generated by these monomials as a module over $\mathbf{MU}^*$.
The bundling map $\mathbf{MU}_{U(m_1,\dots,m_l)}^*\longrightarrow \mathbf{MU}^*(B U(m_1,\dots,m_l))$
is a homomorphism of augmented $\mathbf{MU}^*$-algebras,
and it takes the Chern class $c_k^{[i]}$ to the inflation of the $k$-th Conner-Floyd-Chern class
along the projection to the $i$-th factor.
By the theory of complex orientations, the collection of these
Conner-Floyd-Chern classes are $\mathbf{MU}^*$-power series generators of $\mathbf{MU}^*(B U(m_1,\dots,m_l))$;
in particular,
the images of the Chern class monomials are $\mathbf{MU}^*$-linearly independent in $\mathbf{MU}^*(B U(m_1,\dots,m_l))$.
Hence these classes are themselves linearly independent in $I^n/I^{n+1}$.
\end{proof}
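For example, for $l=1$ and $m_1=1$ the theorem reduces to the classical
description for the circle group: the augmentation ideal of $\mathbf{MU}^*_{U(1)}$
is generated by the Euler class $e_1=c_1^{(1)}$, and the bundling map extends to
an isomorphism
\[ (\mathbf{MU}^*_{U(1)})^\wedge_{(e_1)} \ \cong \ \mathbf{MU}^*[[e_1]] \ \longrightarrow \ \mathbf{MU}^*(B U(1))\ . \]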
\begin{rk}\label{rk:degenerate}
Greenlees and May \cite[Corollary 1.6]{greenlees-may:completion} construct
a local homology spectral sequence
\[ E_2^{p,q}\ = \ H^I_{-p,-q}(\mathbf{MU}_G^*)\ \Longrightarrow \ \mathbf{MU}^{p+q}( B G )\ .\]
The regularity results about Chern classes from Theorem \ref{thm:completions} imply that
whenever $G=U(m_1,\dots,m_l)$ is a product of unitary groups, the $E_2^{p,q}$-term vanishes for all $p\ne 0$,
and the spectral sequence degenerates into the isomorphism
\[ E_2^{0,*}\ \cong \ (\mathbf{MU}_{U(m_1,\dots,m_l)}^*)^\wedge_I \ \cong \ \mathbf{MU}^*( B U(m_1,\dots,m_l)) \]
of Theorem \ref{thm:completions} (iii).
\end{rk}
\begin{rk}
The previous regularity theorems are special cases of the following more general results
that hold for every global $\mathbf{MU}$-module $E$:
\begin{itemize}
\item For every compact Lie group $G$, the sequence of Chern classes
$c_m^{(m)}\times 1_G,\dots,c_1^{(m)}\times 1_G$
acts regularly on the graded $\mathbf{MU}^*_{U(m)\times G}$-module $E^*_{U(m)\times G}$.
\item The restriction homomorphism
\[ \res^{ U(m)\times G}_ G\ :\ E_{U(m)\times G}^*\ \longrightarrow \ E_G^*\]
factors through an isomorphism
\[ E_{U(m)\times G}^*/(c_m^{(m)}\times 1_G,\dots, c_1^{(m)}\times 1_G)\ \cong \ E_G^* \ .\]
\item
For all $m_1,\dots,m_l\geq 1$, the sequence of Chern classes \eqref{eq:Chern_for_products}
acts regularly on the graded $\mathbf{MU}^*_{U(m_1,\dots,m_l)}$-module $E^*_{U(m_1,\dots,m_l)}$.
\end{itemize}
As in Remark \ref{rk:degenerate}, the regularity properties also imply the degeneracy
of the Greenlees-May local homology spectral sequence converging to $E^*(B U(m_1,\dots,m_l))$.
\end{rk}
\section{The \texorpdfstring{$\mathbf{MU}$}{MU}-completion theorem via Chern classes}
In this section we use the Chern classes to reformulate the $\mathbf{MU}_G$-completion theorem
of Greenlees-May \cite{greenlees-may:completion} and La Vecchia \cite{lavecchia},
for any compact Lie group $G$, and we give a short and self-contained proof.
We emphasize that the essential arguments of this section are all contained in
\cite{greenlees-may:completion} and \cite{lavecchia};
the Chern classes let us arrange them in a more conceptual and concise way.
The references \cite{greenlees-may:completion, lavecchia}
ask for a finitely generated ideal of $\mathbf{MU}_G^*$ that is `sufficiently large' in the
sense of \cite[Definition 2.4]{greenlees-may:completion};
while we have no need to explicitly mention sufficiently large ideals,
the new insight is that the ideal generated by the Chern classes
of any faithful $G$-representation is `sufficiently large'.
\begin{con}[Chern classes of representations]
We let $V$ be a complex representation of a compact Lie group $G$. We let
$\rho:G\longrightarrow U(m)$ be a continuous homomorphism that classifies $V$, i.e., such that $\rho^*(\nu_m)$
is isomorphic to $V$; here $m=\dim_\mathbb C(V)$.
The {\em $k$-th Chern class} of $V$ is
\[ c_k(V)\ = \ \rho^*(c_k^{(m)})\ \in \ \mathbf{MU}_G^{2 k}\ .\]
In particular, $c_0(V)=1$, $c_m(V)=e(V)$ is the Euler class, and $c_k(V)=0$ for $k>m$.
\end{con}
\begin{eg} As an example, we consider the tautological representation $\nu_2$ of $S U(2)$ on $\mathbb C^2$.
By the general properties of Chern classes we have
$c_0(\nu_2)=1$, $c_2(\nu_2)=e(\nu_2)$ is the Euler class,
and $c_k(\nu_2)=0$ for $k\geq 3$. The first Chern class of $\nu_2$ can be rewritten
by using a double coset formula as follows:
\begin{align*}
c_1(\nu_2)\
&= \ \res^{U(2)}_{S U(2)}(c_1^{(2)}) \ = \ \res^{U(2)}_{S U(2)}(\tr_{U(1,1)}^{U(2)}(e_1\times 1)) \\
&= \ \tr_T^{S U(2)}(\res^{U(1,1)}_T(e_1\times 1)) \ = \ \tr_T^{S U(2)}(e(\chi)) \ .
\end{align*}
Here $T=\{
(\begin{smallmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{smallmatrix}) \ : \ \lambda\in U(1)
\}$
is the diagonal maximal torus of $S U(2)$, $\chi:T\cong U(1)$ is the character that projects
onto the upper left diagonal entry, and $e(\chi)\in\mathbf{MU}^2_T$ is its Euler class.
\end{eg}
\begin{con}
We construct a $G$-equivariant $\mathbf{MU}_G$-module $K(G,V)$ associated
to a complex representation $V$ of a compact Lie group $G$.
The construction is a special case of one used by Greenlees and May
\cite[Section 1]{greenlees-may:completion},
based on the sequence of Chern classes $c_1(V),\dots,c_m(V)$, where $m=\dim_\mathbb C(V)$.
For any equivariant homotopy class $x\in \mathbf{MU}_G^l$,
we write $\mathbf{MU}_G[1/x]$ for the $\mathbf{MU}_G$-module localization of $\mathbf{MU}_G$ with $x$ inverted;
in other words, $\mathbf{MU}_G[1/x]$ is a homotopy colimit (mapping telescope)
in the triangulated category of the sequence
\[ \mathbf{MU}_G\ \xrightarrow{-\cdot x} \ \Sigma^l \mathbf{MU}_G\ \xrightarrow{-\cdot x} \Sigma^{2 l}\mathbf{MU}_G\ \xrightarrow{-\cdot x} \ \Sigma^{3 l}\mathbf{MU}_G\ \xrightarrow{-\cdot x} \ \dots \ \ .\]
We write $K(x)$ for the fiber of the morphism $\mathbf{MU}_G\longrightarrow\mathbf{MU}_G[1/x]$.
Then we define
\[ K(G,V)\ = \ K(c_1(V))\wedge_{\mathbf{MU}_G}\dots \wedge_{\mathbf{MU}_G}K(c_m(V))\ . \]
The smash product of the morphisms $K(c_i(V))\longrightarrow\mathbf{MU}_G$ provides a morphism of $G$-equivariant $\mathbf{MU}_G$-module spectra
\[ \epsilon_V\ : \ K(G,V)\ \longrightarrow \ \mathbf{MU}_G .\]
By general principles, the module $K(G,V)$
only depends on the radical of the ideal generated by the classes $c_1(V),\dots,c_m(V)$.
But more is true: as a consequence of Theorem \ref{thm:completion} below,
$K(G,V)$ is entirely independent, as a $G$-equivariant $\mathbf{MU}_G$-module, of the faithful representation $V$.
\end{con}
\begin{prop}\label{prop:characterize K(G,V)}
Let $V$ be a faithful complex representation of a compact Lie group $G$.
\begin{enumerate}[\em (i)]
\item The morphism
$\epsilon_V:K(G,V)\longrightarrow \mathbf{MU}_G$ is an equivalence of underlying non-equivariant spectra.
\item For every non-trivial closed subgroup $H$ of $G$, the $H$-geometric fixed point spectrum
$\Phi^H(K(G,V))$ is trivial.
\end{enumerate}
\end{prop}
\begin{proof}
(i) We set $m=\dim_\mathbb C(V)$.
The Chern classes $c_1(V),\dots,c_m(V)$ belong to the augmentation ideal
of $\mathbf{MU}_G^*$, so they restrict to 0 in $\mathbf{MU}_{\{1\}}^*$, and
hence the underlying non-equivariant spectrum of $\mathbf{MU}_G[1/c_i(V)]$ is trivial
for each $i=1,\dots,m$.
Hence the morphisms $K(c_i(V))\longrightarrow \mathbf{MU}_G$ are underlying non-equivariant
equivalences for $i=1,\dots,m$.
So also the morphism $\epsilon_V$ is an underlying non-equivariant equivalence.
(ii) We let $H$ be a non-trivial closed subgroup of $G$.
We set $W=V-V^H$, the orthogonal complement of the $H$-fixed points.
This is a complex $H$-representation with $W^H=0$; moreover, $W$ is nonzero because
$H$ acts faithfully on $V$ and $H\ne\{1\}$.
For $k=\dim_\mathbb C(W)$ we then have
\[ e(W) \ = \ c_k(W)\ = \ c_k(W\oplus V^H)\ = \ c_k(\res^G_H(V))\ = \ \res^G_H( c_k(V)) \ ;\]
the second equation uses the fact that adding a trivial representation
leaves Chern classes unchanged, by part (ii) of Theorem \ref{thm:CFC main}.
Since $W^H=0$, the geometric fixed point homomorphism $\Phi^H:\mathbf{MU}_H^*\longrightarrow \Phi_H^*(\mathbf{MU})$
sends the Euler class $e(W) = \res^G_H( c_k(V))$ to an invertible element.
The functor $\Phi^H\circ \res^G_H$ commutes with inverting elements.
Since the class $\Phi^H(\res^G_H(c_k(V)))$ is already invertible,
the localization morphism $\mathbf{MU}_G\longrightarrow \mathbf{MU}_G[1/c_k(V)]$ induces an equivalence on $H$-geometric fixed points.
Since the functor $\Phi^H\circ\res^G_H$ is exact, it annihilates the fiber $K(c_k(V))$
of the localization $\mathbf{MU}_G\longrightarrow \mathbf{MU}_G[1/c_k(V)]$.
The functor $\Phi^H\circ\res^G_H$ is also strong monoidal, in the sense of a natural
equivalence of non-equivariant spectra
\[ \Phi^H(X\wedge_{\mathbf{MU}_G}Y) \ \simeq \ \Phi^H(X)\wedge_{\Phi^H(\mathbf{MU}_G)}\Phi^H(Y) \ , \]
for all $G$-equivariant $\mathbf{MU}_G$-modules $X$ and $Y$.
Since $K(G,V)$ contains $K(c_k(V))$ as a factor (with respect to $\wedge_{\mathbf{MU}_G}$),
we conclude that the spectrum $\Phi^H(K(G,V))$ is trivial.
\end{proof}
The following `completion theorem' is a reformulation of the combined
work of Greenlees-May \cite[Theorem 1.3]{greenlees-may:completion}
and La Vecchia \cite{lavecchia}. It is somewhat more precise in that an unspecified
`sufficiently large' finitely generated ideal of $\mathbf{MU}_G^*$ is replaced by the
ideal generated by the Chern classes of a faithful $G$-representation.
The proof is immediate from the properties of $K(G,V)$ listed in
Proposition \ref{prop:characterize K(G,V)}.
We emphasize, however, that our proof is just a different way of arranging some arguments from
\cite{greenlees-may:completion} and \cite{lavecchia} while taking advantage of the Chern class formalism.
Since the morphism $\epsilon_V:K(G,V)\longrightarrow \mathbf{MU}_G$ is a non-equivariant equivalence
of underlying spectra, the morphism $E G_+\wedge \mathbf{MU}_G\longrightarrow \mathbf{MU}_G$ that collapses
the universal space $E G$ to a point admits a unique lift to a morphism
of $G$-equivariant $\mathbf{MU}_G$-modules $\psi: E G_+\wedge \mathbf{MU}_G\longrightarrow K(G,V)$ across $\epsilon_V$.
\begin{thm}\label{thm:completion}
Let $V$ be a faithful complex representation of a compact Lie group $G$.
Then the morphism
\[ \psi\ : \ E G_+\wedge \mathbf{MU}_G\ \longrightarrow\ K(G,V) \]
is an equivalence of $G$-equivariant $\mathbf{MU}_G$-module spectra.
\end{thm}
\begin{proof}
Because the underlying space of $E G$ is contractible, the composite
\[ E G_+\wedge \mathbf{MU}_G\ \xrightarrow{\ \psi\ } \ K(G,V)\ \xrightarrow{\ \epsilon_V\ }\ \mathbf{MU}_G \]
is an equivalence of underlying non-equivariant spectra.
Since $\epsilon_V$ is an equivalence of underlying non-equivariant spectra
by Proposition \ref{prop:characterize K(G,V)}, so is $\psi$.
For all non-trivial closed subgroups $H$ of $G$, source and target of $\psi$
have trivial $H$-geometric fixed points spectra,
again by Proposition \ref{prop:characterize K(G,V)}. So the morphism $\psi$ induces
an equivalence on geometric fixed point spectra for all closed subgroup of $G$,
and it is thus an equivariant equivalence.
\end{proof}
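For example, for $G=U(1)$ and $V=\nu_1$, the module $K(U(1),\nu_1)=K(e_1)$ is
the fiber of the localization $\mathbf{MU}_{U(1)}\longrightarrow \mathbf{MU}_{U(1)}[1/e_1]$,
and Theorem \ref{thm:completion} identifies this fiber with
$E U(1)_+\wedge \mathbf{MU}_{U(1)}$.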
\end{document}
\begin{document}
\ifreport
\title{Improved Static Analysis for\\ Parameterised Boolean Equation Systems\\
using Control Flow Reconstruction}
\else
\title{Liveness Analysis for\\ Parameterised Boolean Equation Systems}
\fi
\author{Jeroen J. A. Keiren\inst{1} \and Wieger Wesselink\inst{2} \and Tim A.
C. Willemse\inst{2}}
\institute{VU University Amsterdam, The Netherlands\\
\email{[email protected]}
\and
Eindhoven University of Technology, The Netherlands\\
\email{\{j.w.wesselink, t.a.c.willemse\}@tue.nl}
}
\maketitle
\begin{abstract}
We present a sound static analysis technique for fighting the
combinatorial explosion
of parameterised Boolean equation systems (PBESs). These essentially
are systems of mutually recursive fixed point equations ranging over
first-order logic formulae.
Our method detects parameters that are not live by analysing
a control
flow graph of a PBES, and it subsequently eliminates such parameters. We show
that a naive approach to constructing a control flow graph, needed for the analysis,
may suffer from an
exponential blow-up,
and we define
an approximate analysis that avoids this problem.
The effectiveness of our techniques is
evaluated using a number of case studies.
\end{abstract}
\section{Introduction}
\emph{Parameterised Boolean equation systems (PBESs)}~\cite{GW:05tcs} are
systems of fixpoint equations that range over first-order formulae;
they are essentially an equational variation of \emph{Least Fixpoint
Logic (LFP)}. Fixpoint logics such as PBESs have applications in
database theory and computer aided verification. For instance, the
CADP~\cite{Gar+:11} and mCRL2~\cite{Cra+:13} toolsets use PBESs
for model checking and equivalence checking, and in \cite{AFJV:11} PBESs are
used to solve Datalog queries.
In practice, the predominant problem for PBESs is evaluating (henceforth
referred to as \emph{solving}) them so as to
answer the decision problem encoded
in them. There are a variety of techniques for solving
PBESs, see~\cite{GW:05tcs}, but the most straightforward method is
by instantiation to a \emph{Boolean equation system (BES)}
\cite{Mad:97}, and then solving this BES.
This process is similar to the explicit generation
of a behavioural state space from its symbolic description, and it
suffers from a combinatorial explosion that is akin to the state
space explosion problem. Combatting this combinatorial
explosion is therefore instrumental in speeding up the process of
solving the problems encoded by PBESs.
While several static analysis techniques have been described using fixpoint
logics, see \emph{e.g.}\xspace \cite{CC:77}, with the exception of the static analysis
techniques for PBESs, described in~\cite{OWW:09}, no such techniques seem to
have been applied to fixpoint
logics themselves.
Our main contribution in this paper is a static analysis method for PBESs that
significantly improves over the aforementioned
techniques for simplifying
PBESs.
In our method, we construct a \emph{control flow graph}
(CFG) for
a given PBES and subsequently apply state space reduction
techniques~\cite{FBG:03,YG:04}, combined with liveness analysis
techniques
from compiler technology~\cite{ASU:86}.
These typically scrutinise syntactic
descriptions of behaviour to detect and eliminate
variables that at some point become irrelevant (dead, not live) to the
behaviour,
thereby decreasing the complexity.
The notion of control flow of a PBES is not self-evident: formulae
in fixpoint logics (such as PBESs) do not have a notion of a program
counter. Our notion of control flow is based on the concept of
\emph{control flow parameters} (CFPs), which induce a CFG.
Similar notions exist in the context of state space
exploration, see~\emph{e.g.}\xspace~\cite{PT:09atva}, but so far, no such concept exists
for fixpoint logics.
The size of the CFGs is potentially exponential
in the number of CFPs. We therefore also describe a modification of
our analysis---in which reductive power is traded against a lower
complexity---that does not suffer from this problem.
Our static analysis technique allows for solving, through instantiation, PBESs
that hitherto could not be solved this way, either
because the underlying BESs would be infinite or because they would be extremely large.
We show that our methods are
sound; \emph{i.e.}\xspace, simplifying PBESs using our analyses lead to PBESs with the
same solution.
Our static analysis techniques have been implemented in the
mCRL2 toolset~\cite{Cra+:13} and applied to a set of model
checking and equivalence checking problems. Our experiments show that the
implementations
outperform existing static analysis techniques for PBESs~\cite{OWW:09} in
terms of reductive power, and that reductions of almost 100\% of the size of the
underlying BESs can be achieved. Our experiments confirm that the optimised
version sometimes achieves slightly less reduction
than our
non-optimised version, but is faster.
Furthermore, in cases where no additional reduction is achieved compared
to existing techniques, the overhead is mostly
negligible.
\paragraph{Structure of the paper.}
In Section~\ref{sec:preliminaries} we give a cursory overview of basic
PBES theory and in Section~\ref{sec:example}, we present an example to
illustrate the difficulty of using instantiation to
solve a PBES and to sketch our solution. In
Section~\ref{sec:CFP_CFG} we describe our construction of control flow graphs
for PBESs and in Section~\ref{sec:dataflow} we describe our live parameter
analysis. We present an optimisation of
the analysis in Section~\ref{sec:local}. The approach is evaluated
in Section~\ref{sec:experiments}, and
Section~\ref{sec:conclusions} concludes.
\paperonly{\textbf{We refer to~\cite{KWW:13report} for
proofs and additional results.}}
\section{Preliminaries}\label{sec:preliminaries}
Throughout this paper, we work in a setting of \emph{abstract data
types} with non-empty data sorts $\sort{D_1}, \sort{D_2}, \ldots$,
and operations on these sorts, and a set $\varset{D}$ of sorted
data variables. We write vectors in boldface, \emph{e.g.}\xspace $\var{d}$ is
used to denote a vector of data variables. We write $\var[i]{d}$
to denote the $i$-th element of a vector $\var{d}$.
A semantic set $\semset{D}$ is associated to every sort $\sort{D}$,
such that each term of sort $\sort{D}$, and all operations on
$\sort{D}$ are mapped to the elements and operations of $\semset{D}$
they represent. \emph{Ground terms} are terms that do not contain
data variables. For terms that contain data variables, we use an
environment $\ensuremath{\delta}$ that maps each variable from $\varset{D}$
to a value of the associated type. We assume an interpretation
function $\sem{\_}{}{}$ that maps every term $t$ of sort $\sort{D}$
to the data element $\sem{t}{}{\ensuremath{\delta}}$ it represents, where the
extensions of $\ensuremath{\delta}$ to open terms and vectors are standard. Environment
updates are denoted $\ensuremath{\delta}[\subst{d}{v}]$, where
$\ensuremath{\delta}[\subst{d}{v}](d') = v$ if $d' = d$, and $\ensuremath{\delta}(d')$
otherwise.
We specifically assume the existence of a sort $\sort{B}$ with
elements $\ensuremath{\mathit{true}}$ and $\ensuremath{\mathit{false}}$ representing the Booleans $\semset{B}$
and a sort $\sort{N} = \{0, 1, 2, \ldots \}$ representing the natural
numbers $\semset{N}$. For these sorts, we assume that the usual operators
are available and, for readability, these are written the same
as their semantic counterparts.
\emph{Parameterised Boolean equation systems}~\cite{Mat:98} are
sequences of
fixed-point equations ranging over \emph{predicate formulae}. The
latter are first-order formulae extended with predicate
variables, in which the non-logical symbols are taken from the data
language.
\begin{definition}
\label{def:formula}
\label{def:semFormula}
\emph{Predicate formulae} are defined through the following grammar:
$$
\varphi, \psi ::= b \mid X(\val{e}) \mid \varphi \land \psi \mid \varphi \lor
\psi \mid \forall d \colon D. \varphi \mid \exists d \colon D. \varphi$$
in which $b$ is a data term of sort $\sort{B}$, $X(\val{e})$ is a \emph{predicate
variable instance} (PVI) in which $X$ is a predicate variable of
sort $\vec{\sort{D}} \to \sort{B}$, taken from some sufficiently large set
$\varset{P}$ of predicate variables, and $\val{e}$ is a vector of data terms of
sort $\vec{\sort{D}}$.
The interpretation of a predicate formula $\varphi$ in the
context of a predicate
environment $\ensuremath{\eta} \colon \varset{P} \to \semset{D} \to \semset{B}$
and data environment $\ensuremath{\delta}$ is denoted as
$\sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}}$, where:
\[
\begin{array}{ll}
\sem{b}{\ensuremath{\eta}\ensuremath{\delta}}
&\ensuremath{=}
\left \{
\begin{array}{ll} \text{true} & \text{if $\ensuremath{\delta}(b)$ holds} \\
\text{false} & \text{otherwise}
\end{array}
\right . \\[5pt]
\sem{X(\val{e})}{\ensuremath{\eta}\ensuremath{\delta}}
&\ensuremath{=}
\left \{
\begin{array}{ll} \text{true}& \text{if $\ensuremath{\eta}(X)(\ensuremath{\delta}(\val{e}))$
holds} \\
\text{false} & \text{otherwise}
\end{array}
\right . \\[5pt]
\sem{\phi \land \psi}{\ensuremath{\eta}\ensuremath{\delta}}
&\ensuremath{=} \sem{\phi}{\ensuremath{\eta}\ensuremath{\delta}} \text{ and } \sem{\psi}{\ensuremath{\eta}\ensuremath{\delta}} \text{ hold} \\[5pt]
\sem{\phi \lor \psi}{\ensuremath{\eta}\ensuremath{\delta}}
&\ensuremath{=} \sem{\phi}{\ensuremath{\eta}\ensuremath{\delta}} \text{ or } \sem{\psi}{\ensuremath{\eta}\ensuremath{\delta}} \text{ hold} \\[5pt]
\sem{\forall{d \colon \sort{D}}.~ \phi}{\ensuremath{\eta}\ensuremath{\delta}}
&\ensuremath{=} \text{for all ${v \in \semset{D}}$, }~\sem{\phi}{\ensuremath{\eta}\ensuremath{\delta}[v/d]} \text{ holds} \\[5pt]
\sem{\exists{d \colon \sort{D}}.~ \phi}{\ensuremath{\eta}\ensuremath{\delta}}
&\ensuremath{=} \text{for some ${v \in \semset{D}}$, }~\sem{\phi}{\ensuremath{\eta}\ensuremath{\delta}[v/d]} \text{ holds}
\end{array}
\]
\end{definition}
We assume the usual precedence rules for the logical operators.
\emph{Logical equivalence} between two predicate formulae $\varphi,
\psi$, denoted $\varphi \equiv \psi$, is defined as
$\sem{\varphi}{\ensuremath{\eta}\ensuremath{\delta}}
= \sem{\psi}{\ensuremath{\eta}\ensuremath{\delta}}$ for all $\ensuremath{\eta}, \ensuremath{\delta}$.
Freely
occurring data variables
in $\varphi$ are denoted by $\free{\varphi}$. We refer to $X(\val{e})$ occurring
in a predicate formula as a \emph{predicate variable instance} (PVI).
For simplicity, we assume that if a data variable is bound by a quantifier
in a formula $\varphi$, it does not also occur free within $\varphi$.
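For instance, consider the formula $\varphi = X(d+1) \lor d \geq 5$, a
predicate environment $\ensuremath{\eta}$ for which $\ensuremath{\eta}(X)(v)$ holds iff $v$ is
even, and a data environment $\ensuremath{\delta}$ with $\ensuremath{\delta}(d) = 2$. Then
$\sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}}$ is false, since neither
$\ensuremath{\eta}(X)(3)$ nor $2 \geq 5$ holds, whereas for $\ensuremath{\delta}(d) = 1$ the
formula evaluates to true.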
\begin{definition}
\label{def:PBES}
PBESs are defined by the following grammar:
$$
\ensuremath{\mathcal{E}} ::= \ensuremath{\emptyset} \mid (\nu X(\var{d} \colon \vec{D}) = \varphi) \ensuremath{\mathcal{E}}
\mid (\mu X(\var{d} \colon \vec{D}) = \varphi) \ensuremath{\mathcal{E}}
$$
in which $\ensuremath{\emptyset}$ denotes the empty equation system; $\mu$ and $\nu$ are the
least and greatest fixed point signs, respectively; $X$ is a sorted predicate
variable of sort $\vec{\sort{D}} \to \sort{B}$, $\var{d}$ is a vector of formal
parameters,
and $\varphi$ is a predicate formula. We henceforth omit a trailing $\ensuremath{\emptyset}$.
\end{definition}
By convention $\rhs{X}$ denotes the right-hand side of the
defining equation for $X$ in a PBES $\ensuremath{\mathcal{E}}$;
$\param{X}$ denotes the set of \emph{formal parameters} of $X$ and
we assume that $\free{\rhs{X}} \subseteq
\param{X}$. By superscripting a formal parameter with the predicate
variable to which it belongs, we distinguish between formal parameters for
different predicate variables, \emph{i.e.}\xspace, we
write $d^X$ when $d \in \param{X}$. We write $\sigma$ to stand for
either $\mu$ or $\nu$.
The set of \emph{bound predicate variables} of some PBES $\ensuremath{\mathcal{E}}$, denoted
$\bnd{\ensuremath{\mathcal{E}}}$, is the set of predicate variables occurring
at the left-hand sides of the equations in $\ensuremath{\mathcal{E}}$. Throughout this
paper, we deal with PBESs that are both \emph{well-formed}, \emph{i.e.}\xspace for every
$X \in \bnd{\ensuremath{\mathcal{E}}}$ there is exactly one equation in $\ensuremath{\mathcal{E}}$, and
\emph{closed}, \emph{i.e.}\xspace for every $X \in \bnd{\ensuremath{\mathcal{E}}}$, only predicate variables taken
from $\bnd{\ensuremath{\mathcal{E}}}$ occur in $\rhs{X}$.
To each PBES $\ensuremath{\mathcal{E}}$ we associate a \emph{top assertion}, denoted
$\mcrlKw{init}~X(\val{v})$, where we require $X \in \bnd{\ensuremath{\mathcal{E}}}$. For
a parameter $\var[m]{d} \in \param{X}$ for the top assertion
$\mcrlKw{init}~X(\val{v})$ we define the value $\init{\var[m]{d}}$
as $\val[m]{v}$.\\
We next define a PBES's semantics. Let $\semset{B}^{\vec{\semset{D}}}$
denote the set of functions $f \colon \vec{\semset{D}} \to \semset{B}$,
and define the ordering $\sqsubseteq$ as $f \sqsubseteq g$ iff for
all $\vec{v} \in \vec{\semset{D}}$, $f(\vec{v})$ implies $g(\vec{v})$.
For a given pair of environments $\ensuremath{\delta}, \ensuremath{\eta}$, a predicate
formula $\varphi$ gives rise to a predicate transformer $T$
on the complete lattice
$(\semset{B}^{\vec{\semset{D}}}, \sqsubseteq)$ as follows:
$
T(f) = \lambda \vec{v} \in \vec{\semset{D}}.
\sem{\varphi}{\ensuremath{\eta}[\subst{X}{f}]}{\ensuremath{\delta}[\subst{\vec{d}}{\vec{v}}]}
$.
Since the predicate transformers defined this way are monotone,
their extremal fixed points exist. We denote the least fixed point of
a given predicate transformer $T$ by $\mu T$, and the greatest fixed point
of $T$ is denoted $\nu T$.
\begin{definition}
The \emph{solution} of an equation system in the context of a predicate
environment $\ensuremath{\eta}$ and data environment $\ensuremath{\delta}$ is defined inductively
as follows:
\begin{align*}
\sem{\ensuremath{\emptyset}}{\ensuremath{\eta}}{\ensuremath{\delta}} & \ensuremath{=} \ensuremath{\eta} \\
\sem{(\mu X(\var{d} \colon \vec{D}) = \rhs{X}) \ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}
& \ensuremath{=} \sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}[\subst{X}{\mu T}]}{\ensuremath{\delta}}\\
\sem{(\nu X(\var{d} \colon \vec{D}) = \rhs{X}) \ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}
& \ensuremath{=} \sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}[\subst{X}{\nu T}]}{\ensuremath{\delta}}
\end{align*}
with $T(f) = \lambda \val{v} \in \val{\semset{D}}.
\sem{\varphi}{(\sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}[\subst{X}{f}]}{\ensuremath{\delta}})}
{\ensuremath{\delta}[\subst{\var{d}}{\val{v}}]}$
\end{definition}
The solution prioritises the fixed point signs of left-most equations
over the fixed point signs of equations that follow, while respecting
the equations. Bound predicate variables of closed PBESs have a
solution that is independent of the predicate and data environments
in which it is evaluated. We therefore omit these environments and
write $\sem{\ensuremath{\mathcal{E}}}(X)$ instead of $\sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}(X)$.
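As a small illustration of the role of the fixed point signs, consider the
one-equation systems $(\mu X(n \colon \sort{N}) = X(n+1))$ and
$(\nu Y(n \colon \sort{N}) = Y(n+1))$. Both right-hand sides induce the
predicate transformer $T(f) = \lambda v \in \semset{N}.\, f(v+1)$, whose only
fixed points are the two constant functions. Consequently,
$\sem{\ensuremath{\mathcal{E}}}(X)(v)$ is false for all $v$, whereas
$\sem{\ensuremath{\mathcal{E}}}(Y)(v)$ is true for all $v$.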
\reportonly{
\newcommand{\rankE}[2]{\ensuremath{\mathsf{rank}_{#1}(#2)}}
\newcommand{\rank}[1]{\ensuremath{\mathsf{rank}(#1)}}
The \emph{signature} \cite{Wil:10} of a predicate variable $X$ of sort
$\vec{\sort{D}} \to \sort{B}$, $\signature{X}$, is the product $\{X\} \times
\semset{D}$.
The notion of signature is lifted to sets of predicate variables $P \subseteq
\varset{P}$ in the natural way, \emph{i.e.}\xspace
$\signature{P} = \bigcup_{X \in P} \signature{X}$.\footnote{Note that in
\cite{Wil:10} the notation $\mathsf{sig}$ is used to denote the signature. Here
we deviate from this notation due to the naming conflict with the
\emph{significant parameters} of a formula, which also is standard notation
introduced in \cite{OWW:09}, and which we introduce in
Section~\ref{sec:dataflow}.}
\begin{definition}[{\cite[Definition~6]{Wil:10}}]\label{def:r-correlation}
Let ${\rel{R}} \subseteq \signature{\varset{P}} \times \signature{\varset{P}}$
be
an arbitrary
relation. A predicate environment $\ensuremath{\eta}$ is an $\rel{R}$-correlation iff
$(X, \val{v}) {\rel{R}} (X', \val{v'})$ implies $\ensuremath{\eta}(X)(\val{v}) =
\ensuremath{\eta}(X')(\val{v'})$.
\end{definition}
A \emph{block} is a non-empty equation system of like-signed fixed point
equations. Given an equation system $\ensuremath{\mathcal{E}}$, a block $\mathcal{B}$ is maximal
if its neighbouring equations in $\ensuremath{\mathcal{E}}$ are of a different sign than the
equations in $\mathcal{B}$. The $i^\mathit{th}$ maximal block in $\ensuremath{\mathcal{E}}$ is
denoted by
$\block{i}{\ensuremath{\mathcal{E}}}$.
For relations $\rel{R}$ we write $\correnv{\rel{R}}$ for the set of
$\rel{R}$-correlations.
train | 0.21.2 | To each PBES $\ensuremath{\mathcal{E}}$ we associate a \emph{top assertion}, denoted
$\mcrlKw{init}~X(\val{v})$, where we require $X \in \bnd{\ensuremath{\mathcal{E}}}$. For
a parameter $\var[m]{d} \in \param{X}$ for the top assertion
$\mcrlKw{init}~X(\val{v})$ we define the value $\init{\var[m]{d}}$
as $\val[m]{v}$.\\
We next define a PBES's semantics. Let $\semset{B}^{\vec{\semset{D}}}$
denote the set of functions $f \colon \vec{\semset{D}} \to \semset{B}$,
and define the ordering $\sqsubseteq$ as $f \sqsubseteq g$ iff for
all $\vec{v} \in \vec{\semset{D}}$, $f(\vec{v})$ implies $g(\vec{v})$.
For a given pair of environments $\ensuremath{\delta}, \ensuremath{\eta}$, a predicate
formula $\varphi$ gives rise to a predicate transformer $T$
on the complete lattice
$(\semset{B}^{\vec{\semset{D}}}, \sqsubseteq)$ as follows:
$
T(f) = \lambda \vec{v} \in \vec{\semset{D}}.
\sem{\varphi}{\ensuremath{\eta}[\subst{X}{f}]}{\ensuremath{\delta}[\subst{\vec{d}}{\vec{v}}]}
$.
Since the predicate transformers defined this way are monotone,
their extremal fixed points exist. We denote the least fixed point of
a given predicate transformer $T$ by $\mu T$, and the greatest fixed point
of $T$ is denoted $\nu T$.
\begin{definition}
The \emph{solution} of an equation system in the context of a predicate
environment $\ensuremath{\eta}$ and data environment $\ensuremath{\delta}$ is defined inductively
as follows:
\begin{align*}
\sem{\ensuremath{\emptyset}}{\ensuremath{\eta}}{\ensuremath{\delta}} & \ensuremath{=} \ensuremath{\eta} \\
\sem{(\mu X(\var{d} \colon \vec{D}) = \rhs{X}) \ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}
& \ensuremath{=} \sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}[\subst{X}{\mu T}]}{\ensuremath{\delta}}\\
\sem{(\nu X(\var{d} \colon \vec{D}) = \rhs{X}) \ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}
& \ensuremath{=} \sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}[\subst{X}{\nu T}]}{\ensuremath{\delta}}
\end{align*}
with $T(f) = \lambda \val{v} \in \vec{\semset{D}}.
\sem{\rhs{X}}{(\sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}[\subst{X}{f}]}{\ensuremath{\delta}})}
{\ensuremath{\delta}[\subst{\var{d}}{\val{v}}]}$
\end{definition}
The solution prioritises the fixed point signs of left-most equations
over the fixed point signs of equations that follow, while respecting
the equations. Bound predicate variables of closed PBESs have a
solution that is independent of the predicate and data environments
in which it is evaluated. We therefore omit these environments and
write $\sem{\ensuremath{\mathcal{E}}}(X)$ instead of $\sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}(X)$.
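To illustrate how the solution prioritises the leftmost fixed point sign
(a standard illustration, not part of the formal development), consider the
parameterless equation systems $(\mu X = Y)(\nu Y = X)$ and
$(\nu Y = X)(\mu X = Y)$: the former assigns $\ensuremath{\mathit{false}}$ to
both $X$ and $Y$, whereas the latter assigns $\ensuremath{\mathit{true}}$ to
both.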
\reportonly{
\newcommand{\sigma}{\sigma}
\newcommand{\rankE}[2]{\ensuremath{\mathsf{rank}_{#1}(#2)}}
\newcommand{\rank}[1]{\ensuremath{\mathsf{rank}(#1)}}
The \emph{signature} \cite{Wil:10} of a predicate variable $X$ of sort
$\vec{\sort{D}} \to \sort{B}$, $\signature{X}$, is the product $\{X\} \times
\semset{D}$.
The notion of signature is lifted to sets of predicate variables $P \subseteq
\varset{P}$ in the natural way, \emph{i.e.}\xspace
$\signature{P} = \bigcup_{X \in P} \signature{X}$.\footnote{Note that in
\cite{Wil:10} the notation $\mathsf{sig}$ is used to denote the signature. Here
we deviate from this notation due to the naming conflict with the
\emph{significant parameters} of a formula, which also is standard notation
introduced in \cite{OWW:09}, and which we introduce in
Section~\ref{sec:dataflow}.}
\begin{definition}[{\cite[Definition~6]{Wil:10}}]\label{def:r-correlation}
Let ${\rel{R}} \subseteq \signature{\varset{P}} \times \signature{\varset{P}}$
be
an arbitrary
relation. A predicate environment $\ensuremath{\eta}$ is an $\rel{R}$-correlation iff
$(X, \val{v}) {\rel{R}} (X', \val{v'})$ implies $\ensuremath{\eta}(X)(\val{v}) =
\ensuremath{\eta}(X')(\val{v'})$.
\end{definition}
A \emph{block} is a non-empty equation system of like-signed fixed point
equations. Given an equation system $\ensuremath{\mathcal{E}}$, a block $\mathcal{B}$ is maximal
if its neighbouring equations in $\ensuremath{\mathcal{E}}$ are of a different sign than the
equations in $\mathcal{B}$. The $i^\mathit{th}$ maximal block in $\ensuremath{\mathcal{E}}$ is
denoted by
$\block{i}{\ensuremath{\mathcal{E}}}$.
For relations $\rel{R}$ we write $\correnv{\rel{R}}$ for the set of
$\rel{R}$-correlations.
\begin{definition}[{\cite[Definition~7]{Wil:10}}]
\label{def:consistent-correlation} Let $\ensuremath{\mathcal{E}}$ be an equation system.
Relation ${\rel{R}} \subseteq \signature{\varset{P}} \times
\signature{\varset{P}}$ is
a \emph{consistent correlation} on $\ensuremath{\mathcal{E}}$, if for $X, X' \in \bnd{\ensuremath{\mathcal{E}}}$,
$(X, \val{v}) \rel{R} (X', \val{v'})$ implies:
\begin{compactenum}
\item for all $i$, $X \in \bnd{\block{i}{\ensuremath{\mathcal{E}}}}$ iff $X' \in
\bnd{\block{i}{\ensuremath{\mathcal{E}}}}$
\item for all $\ensuremath{\eta} \in \correnv{\rel{R}}$, $\ensuremath{\delta}$, we have
$\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{d}}{\val{v}}]} =
\sem{\rhs{X'}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{d'}}{\val{v'}}]}$
\end{compactenum}
For $X, X' \in \bnd{\ensuremath{\mathcal{E}}}$, we say $(X, \val{v})$ and $(X', \val{v'})$
consistently
correlate, denoted as $(X, \val{v}) \ensuremath{\doteqdot} (X', \val{v'})$ iff there
exists a correlation $\rel{R} \subseteq
\signature{\bnd{\ensuremath{\mathcal{E}}}} \times \signature{\bnd{\ensuremath{\mathcal{E}}}}$ such that $(X, \val{v})
\rel{R} (X', \val{v'})$ .
\end{definition}
Consistent
correlations can be lifted to variables in different equation systems in $\ensuremath{\mathcal{E}}$
and $\ensuremath{\mathcal{E}}'$, assuming that the variables in the equation systems do not
overlap.
We call such equation systems \emph{compatible}.
Lifting consistent correlations to different equation systems can, \emph{e.g.}\xspace, be
achieved by merging the equation systems to an
equation system $\mathcal{F}$, in which, if $X \in \bnd{\ensuremath{\mathcal{E}}}$, then
$X \in \bnd{\block{i}{\ensuremath{\mathcal{E}}}}$ iff $X \in \bnd{\block{i}{\mathcal{F}}}$,
and likewise for $\ensuremath{\mathcal{E}}'$.
The consistent correlation can then be defined on $\mathcal{F}$.
The following theorem \cite{Wil:10} shows the relation between consistent
correlations and the solution of a PBES.
\begin{theorem}[{\cite[Theorem~2]{Wil:10}}]\label{thm:willemse}\label{thm:cc}
Let $\ensuremath{\mathcal{E}}$, $\ensuremath{\mathcal{E}}'$ be compatible equation systems, and $\ensuremath{\doteqdot}$ a
consistent correlation. Then for all $X \in \bnd{\ensuremath{\mathcal{E}}}$,
$X' \in \bnd{\ensuremath{\mathcal{E}}'}$ and all $\ensuremath{\eta} \in \correnv{\ensuremath{\doteqdot}}$, we have
$(X, \val{v}) \ensuremath{\doteqdot} (X', \val{v'}) \implies \sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}(X)(\val{v}) =
\sem{\ensuremath{\mathcal{E}}'}{\ensuremath{\eta}}{\ensuremath{\delta}}(X')(\val{v'})$
\end{theorem}
We use this theorem in proving the correctness of our static analysis
technique.
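For intuition, consider the running example of Section~\ref{sec:example}:
the relation that relates $(Z, (1,2,v,w))$ to $(Z, (1,2,v',w))$ for all
values $v, v'$, and every other signature to itself, is a consistent
correlation, since for $j = 2$ the right-hand side of $Z$ does not depend
on $k$. Theorem~\ref{thm:cc} then already yields that the solution of
$Z(1,2,v,w)$ does not depend on $v$; our static analysis automates exactly
this kind of reasoning.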
} | 2,353 | 43,359 | en |
train | 0.21.3 | \section{A Motivating Example}\label{sec:example}
In practice, solving PBESs proceeds via \emph{instantiation}~\cite{PWW:11}
into \emph{Boolean equation systems (BESs)}, for which solving is
decidable. The latter is the fragment of PBESs with equations
that range over propositions only, \emph{i.e.}\xspace, formulae without data and
quantification. Instantiating a PBES to a BES is akin to state space
exploration and suffers from a similar combinatorial
explosion. Reducing the time spent on it is thus instrumental in speeding
up, or even enabling, the solving process.
We illustrate this using the following (academic) example, which we also
use as our running example:
\[
\begin{array}{lll}
\nu X(i,j,k,l\colon\sort{N}) & = &
( i \not= 1 \vee j \not= 1 \vee X(2,j,k,l+1)) \wedge \forall m\colon\sort{N}. Z(i,2,m+k,k) \\
\mu Y(i,j,k,l\colon\sort{N}) & = &
k = 1 \vee (i = 2 \wedge X(1,j,k,l) ) \\
\nu Z(i,j,k,l\colon\sort{N}) & = &
(k < 10 \vee j = 2) \wedge (j \not= 2 \vee Y(1,1,l,1) ) \wedge
Y(2,2,1,l)
\end{array}
\]
The presence of PVIs $X(2,j,k,l+1)$ and $Z(i,2,m+k,k)$ in $X$'s
equation means the solution to $X(1,1,1,1)$ depends on the
solutions to $X(2,1,1,2)$ and $Z(1,2,v+1,1)$,
for all values $v$, see Fig.~\ref{fig:instantiate}. Instantiation
finds these dependencies by simplifying the right-hand
side of $X$ when its parameters have been assigned value $1$:
\[
( 1 \not= 1 \vee 1 \not= 1 \vee X(2,1,1,1+1))
\wedge \forall m\colon\sort{N}. Z(1,2,m+1,1)
\]
Since the solution to $Z$ must be computed for infinitely many different
arguments, instantiation does not terminate. The problem is with
the third parameter ($k$) of $Z$. We cannot simply assume
that values assigned to the third parameter of $Z$ do not matter;
in fact, only when $j =2$, $Z$'s right-hand side predicate formula
does not depend on $k$'s value. This is where our developed method will
come into play: it automatically
determines that it is sound to replace PVI
$Z(i,2,m+k,k)$ by, \emph{e.g.}\xspace, $Z(i,2,1,k)$ and to remove the universal
quantifier, enabling us to solve $X(1,1,1,1)$ using
instantiation.
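Concretely, this amounts to rewriting $X$'s equation to (a sketch of the
transformation our analysis justifies; the formal treatment follows in
Section~\ref{sec:dataflow}):
\[
\nu X(i,j,k,l\colon\sort{N}) =
( i \not= 1 \vee j \not= 1 \vee X(2,j,k,l+1)) \wedge Z(i,2,1,k),
\]
after which instantiating $X(1,1,1,1)$ only ever visits finitely many
instances.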
Our technique uses a \emph{Control Flow Graph} (CFG) underlying the
PBES for analysing which parameters of a PBES are \emph{live}.
The CFG is a finite abstraction of the dependency graph
that would result from instantiating a PBES. For instance, when
ignoring the third and fourth parameters in our example PBES,
we find that the solution to $X(1,1,*,*)$ depends
on the first PVI, leading to $X(2,1,*,*)$ and the second PVI in $X$'s
equation, leading to $Z(1,2,*,*)$. In the same way we can determine
the dependencies for $Z(1,2,*,*)$, resulting in the finite structure
depicted in Fig.~\ref{fig:CFG}. The subsequent liveness analysis annotates
each vertex with a label indicating which parameters
cannot (cheaply) be excluded from
having an impact on the solution to the equation system; these are
assumed to be live. Using these labels, we modify the PBES automatically.
\begin{figure}
\caption{Dependency graph}
\label{fig:instantiate}
\caption{Control flow graph for the running example}
\label{fig:CFG}
\end{figure}
Constructing a good CFG is a major difficulty, which we address in
Section~\ref{sec:CFP_CFG}. The liveness analysis and the subsequent
modification of the analysed PBES is described in
Section~\ref{sec:dataflow}. Since the CFG constructed in
Section~\ref{sec:CFP_CFG} can still suffer from a combinatorial
explosion, we present an optimisation of our analysis in
Section~\ref{sec:local}. | 1,210 | 43,359 | en |
train | 0.21.4 | \section{Constructing Control Flow Graphs for PBESs}
\label{sec:CFP_CFG}
The vertices in the control flow graph we constructed in the previous section
represent the values assigned to a subset of the equations' formal
parameters whereas an edge between two vertices captures the
dependencies among (partially instantiated) equations. The better
the control flow graph approximates the dependency graph resulting
from an instantiation, the more precise the resulting liveness
analysis.
Since computing a precise control flow graph is expensive,
the problem is to
compute the graph effectively and balance
precision and cost.
To this end, we first identify a set of
\emph{control flow parameters}; the values to
these parameters will make up the vertices in the control flow
graph. While there is some choice for control flow parameters,
we require that these are parameters for which we can
\emph{statically} determine:
\begin{compactenum}
\item the (finite set of) values these parameters can assume,
\item the set of PVIs on which the truth of a right-hand
side predicate formula may depend, given a concrete value for each control flow
parameter, and
\item the values assigned to the control flow parameters by all
PVIs on which the truth of a right-hand side predicate formula
may depend.
\end{compactenum}
In addition to these requirements, we impose one other restriction:
control flow parameters of one equation must be \emph{mutually
independent}; \emph{i.e.}\xspace, we have to be able to determine their values
independently of each other. Apart from being a natural requirement
for a control flow parameter, it enables us to devise optimisations of
our liveness analysis.
We now formalise these ideas. First, we characterise three partial
functions that together allow to relate values of formal parameters
to the dependency of a formula on a given PVI. Our formalisation
of these partial functions is based on the following observation:
if in a formula $\varphi$, we can replace a particular PVI $X(\val{e})$
with the subformula $\psi \wedge X(\val{e})$ without this affecting
the truth value of $\varphi$, we know that $\varphi$'s truth value
only depends on $X(\val{e})$'s whenever $\psi$ holds. We will
choose $\psi$ such that it allows us to pinpoint exactly what value
a formal parameter of an equation has (or will be assigned through
a PVI). Using these functions, we then identify our
control flow parameters by eliminating variables that do
not meet all of the aforementioned requirements.
In order to reason about individual PVIs occurring in predicate
formulae we introduce the notation necessary to do so. Let
$\npred{\varphi}$ denote the number of PVIs occurring in a predicate
formula $\varphi$. The function $\predinstphi{\varphi}{i}$ is the
formula representing the $i^\text{th}$ PVI in $\varphi$, of which
$\predphi{\varphi}{i}$ is the name and $\dataphi{\varphi}{i}$
represents the term that appears as the argument of the instance. In general
$\dataphi{\varphi}{i}$ is a vector, of which we denote the $j^{\text{th}}$
argument by $\dataphi[j]{\varphi}{i}$.
Given predicate formula $\psi$ we write $\varphi[i \mapsto \psi]$
to indicate that the PVI at position $i$ is replaced syntactically
by $\psi$ in $\varphi$.
\reportonly{
Formally we define $\varphi[i \mapsto
\psi]$,
as follows.
\begin{definition}
Let $\psi$ be a predicate formula, and let $i \leq \npred{\varphi}$,
$\varphi[i \mapsto \psi]$ is defined inductively as follows.
\begin{align*}
b[i \mapsto \psi] & \ensuremath{=} b \\
Y(e)[i \mapsto \psi] & \ensuremath{=} \begin{cases} \psi & \text{if $i = 1$}\\ Y(e) &
\text{otherwise} \end{cases} \\
(\forall d \colon D . \varphi)[i \mapsto \psi] & \ensuremath{=} \forall d \colon D .
\varphi[i \mapsto \psi]\\
(\exists d \colon D . \varphi)[i \mapsto \psi] & \ensuremath{=} \exists d \colon D .
\varphi[i \mapsto \psi]\\
(\varphi_1 \land \varphi_2)[i \mapsto \psi] & \ensuremath{=} \begin{cases}
\varphi_1 \land \varphi_2[(i - \npred{\varphi_1}) \mapsto \psi] & \text{if } i
> \npred{\varphi_1} \\
\varphi_1[i \mapsto \psi] \land \varphi_2 & \text{if } i \leq \npred{\varphi_1}
\end{cases}\\
(\varphi_1 \lor \varphi_2)[i \mapsto \psi] & \ensuremath{=} \begin{cases}
\varphi_1 \lor \varphi_2[(i - \npred{\varphi_1}) \mapsto \psi] & \text{if } i >
\npred{\varphi_1} \\
\varphi_1[i \mapsto \psi] \lor \varphi_2 & \text{if } i \leq \npred{\varphi_1}
\end{cases}
\end{align*}
\end{definition}
}
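To illustrate the notation on our running example: for $X$'s right-hand
side we have $\npred{\rhs{X}} = 2$, with $\predphi{\rhs{X}}{1} = X$,
$\dataphi{\rhs{X}}{1} = (2,j,k,l+1)$ and, \emph{e.g.}\xspace,
$\dataphi[4]{\rhs{X}}{1} = l+1$, while $\predphi{\rhs{X}}{2} = Z$ and
$\dataphi{\rhs{X}}{2} = (i,2,m+k,k)$.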
\begin{definition} Let $s \colon \varset{P} \times \mathbb{N} \times \mathbb{N}
\to D$, $t \colon \varset{P} \times \mathbb{N} \times \mathbb{N} \to D$, and
$c \colon \varset{P} \times \mathbb{N} \times \mathbb{N} \to \mathbb{N}$
be partial
functions, where $D$ is the union of all ground terms.
The triple $(s,t,c)$ is a \emph{unicity constraint} for PBES $\ensuremath{\mathcal{E}}$ if for all
$X \in \bnd{\ensuremath{\mathcal{E}}}$, $i,j,k \in \mathbb{N}$ and
ground terms $e$:
\begin{compactitem}
\item (source) if $s(X,i,j) {=} e$ then
$\rhs{X} \equiv
\rhs{X}[i \mapsto (\var[j]{d} = e \wedge
\predinstphi{\rhs{X}}{i})]$,
\item (target) if $t(X,i,j) {=} e$ then
$\rhs{X}
\equiv \rhs{X}[i \mapsto (\dataphi[j]{\rhs{X}}{i} = e \wedge
\predinstphi{\rhs{X}}{i})]$,
\item (copy) if $c(X,i,j) {=} k$ then $\rhs{X} \equiv \rhs{X}[i \mapsto
(\dataphi[k]{\rhs{X}}{i} = \var[j]{d} \wedge
\predinstphi{\rhs{X}}{i} )]$.
\end{compactitem}
\end{definition}
Observe that indeed, function $s$ states that, when defined, formal
parameter $\var[j]{d}$ must have value $s(X,i,j)$ for $\rhs{X}$'s
truth value to depend on that of $\predinstphi{\rhs{X}}{i}$. In
the same vein $t(X,i,j)$, if defined, gives the fixed value of the
$j^\text{th}$ formal parameter of $\predphi{\rhs{X}}{i}$.
Whenever $c(X,i,j) = k$ the value of variable $\var[j]{d}$ is
transparently copied to position $k$ in the $i^\text{th}$ predicate
variable instance of $\rhs{X}$. Since $s,t$ and $c$ are partial
functions, we do not require them to be defined; we use $\bot$ to
indicate this.
\begin{example}\label{exa:unicity_constraint}
A unicity constraint $(s,t,c)$ for our running example
could be one that assigns $s(X,1,2) = 1$, since parameter $j^X$
must be $1$ to make $X$'s right-hand side formula depend on PVI
$X(2,j,k,l+1)$. We can set $t(X,1,2) = 1$, as one can deduce that
parameter $j^X$ is set to $1$ by the PVI $X(2,j,k,l+1)$;
furthermore, we can set $c(Z,1,4) = 3$, as parameter $k^Y$ is
set to $l^Z$'s value by PVI $Y(1,1,l,1)$.
\end{example}
\reportonly{
The requirements allow unicity constraints to be underspecified. In practice,
it is desirable to choose the constraints as complete as possible. If, in a
unicity constraint $(s,t,c)$, $s$ and $c$ are defined for a predicate variable
instance, it can immediately be established that we can define $t$ as well.
This is formalised by the following property.
\begin{property}\label{prop:sourceCopyDest}
Let $X$ be a predicate variable, $i \leq \npred{\rhs{X}}$, let $(s,t,c)$ be
a unicity constraint, and let $e$ be a value, then
$$
(s(X, i, n) = e \land c(X, i, n) = m) \implies t(X, i, m) = e.
$$
\end{property}
Henceforth we assume that all unicity constraints satisfy this property.
The overlap between $t$ and $c$ is now straightforwardly formalised in the
following lemma.
\begin{lemma}\label{lem:copyDest}
Let $X$ be a predicate variable, $i \leq \npred{\rhs{X}}$, and let
$(s,t,c)$ be a unicity constraint, then if
$s(X, i, n)$ and $t(X, i, m)$ are both defined,
$$
c(X, i, n) = m \implies s(X, i, n) = t(X, i, m).
$$
\end{lemma}
\begin{proof}
Immediately from the definitions and Property~\ref{prop:sourceCopyDest}.
\end{proof} | 2,541 | 43,359 | en |
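For instance, in our running example one may take $\source{X}{1}{2} = 1$
and $\copied{X}{1}{2} = 2$ for the PVI $X(2,j,k,l+1)$;
Property~\ref{prop:sourceCopyDest} then forces $\dest{X}{1}{2} = 1$, in
line with Example~\ref{exa:unicity_constraint}.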
train | 0.21.5 | }
From hereon, we assume that $\ensuremath{\mathcal{E}}$ is an arbitrary PBES
with $(\ensuremath{\mathsf{source}}\xspace,\ensuremath{\mathsf{target}}\xspace,\ensuremath{\mathsf{copy}}\xspace)$
a unicity constraint we can deduce for it.
Notice that for each formal parameter for which either \ensuremath{\mathsf{source}}\xspace
or \ensuremath{\mathsf{target}}\xspace is defined for some PVI, we have a finite set of values
that this parameter can assume. However, at this point we do not
yet know whether this set of values is exhaustive: it may be that
some PVIs may cause the parameter to take on arbitrary values.
Below, we will narrow down for which parameters we \emph{can}
ensure that the set of values is exhaustive. First, we eliminate
formal parameters that do not meet conditions 1--3 for PVIs that
induce self-dependencies for an equation.
\begin{definition}\label{def:LCFP}
A parameter $\var[n]{d} \in \param{X}$ is a \emph{local
control flow parameter} (LCFP) if for all $i$ such that $\predphi{\rhs{X}}{i}
= X$, either $\source{X}{i}{n}$ and $\dest{X}{i}{n}$ are defined, or
$\copied{X}{i}{n} = n$.
\end{definition}
\begin{example} Formal parameter $l^X$ in our running example
does not meet the conditions of Def.~\ref{def:LCFP} and is therefore not
an LCFP. All other parameters in all other equations are still LCFPs since
$X$ is the only equation with a self-dependency.
\end{example}
From the formal parameters that are LCFPs, we
next eliminate those parameters that do not meet conditions 1--3
for PVIs that induce dependencies among \emph{different} equations.
\begin{definition}\label{def:GCFP}
A parameter $\var[n]{d} \in \param{X}$
is a \emph{global control flow parameter} (GCFP)
if it is an LCFP, and for all $Y \in \bnd{\mathcal{E}}\setminus \{X\}$ and
all $i$ such that $\predphi{\rhs{Y}}{i} = X$, either
$\dest{Y}{i}{n}$ is defined, or $\copied{Y}{i}{m} = n$
for some GCFP $\var[m]{d} \in \param{Y}$.
\end{definition}
The above definition is recursive in nature: if a parameter does
not meet the GCFP conditions then this may result in another parameter
also not meeting the GCFP conditions. Any set of parameters that
meets the GCFP conditions is a good set, but larger sets possibly lead to
better information about the control flow in a PBES.
\begin{example}
Formal parameter $k^Z$ in our running example
is not a GCFP since in PVI $Z(i,2,m+k,k)$ from $X$'s equation,
the value assigned to $k^Z$ cannot be determined.
\end{example}
The parameters that meet the GCFP conditions satisfy the conditions
1--3 that we imposed on control flow parameters: they assume a
finite set of values, we can deduce which PVIs may affect the
truth of a right-hand side predicate formula, and we can deduce how
these parameters evolve as a result of all PVIs in a PBES. However,
we may still have parameters of a given equation that are mutually
dependent. Note that this dependency can only arise as a result of
copying parameters: in all other cases, the functions \ensuremath{\mathsf{source}}\xspace
and \ensuremath{\mathsf{target}}\xspace provide the information to deduce concrete values.
\begin{example}
GCFP $k^Y$ affects GCFP $k^X$'s value through PVI $X(1,j,k,l)$; likewise,
$k^X$ affects
$l^Z$'s value through PVI $Z(i,2,m+k,k)$.
Through the PVI $Y(2,2,1,l)$ in $Z$'s equation,
GCFP $l^Z$ affects GCFPs $l^Y$ value. Thus, $k^Y$ affects $l^Y$'s value
transitively.
\end{example}
We identify parameters that, through copying, may become
mutually dependent. To this end, we use a relation $\sim$, to
indicate that GCFPs are \emph{related}. Let $\var[\!\!\!\!n]{d^X}$ and
$\var[\!\!\!\!m]{d^Y}$ be GCFPs; these are \emph{related}, denoted
$\var[\!\!\!\!n]{d^X}
\sim \var[\!\!\!\!m]{d^Y}$, if $n = \copied{Y}{i}{m}$ for some $i$. Next,
we characterise when a set of GCFPs does not introduce mutual
dependencies.
\begin{definition}
\label{def:control_structure}
Let $\mathcal{C}$ be a set of GCFPs, and let $\sim^*$ denote the reflexive,
symmetric and transitive closure of $\sim$ on $\mathcal{C}$.
Assume ${\approx} \subseteq \mathcal{C} \times \mathcal{C}$ is an equivalence
relation that subsumes $\sim^*$; \emph{i.e.}\xspace,
that satisfies $\sim^* \subseteq \approx$. Then the pair
$\langle \mathcal{C}, \approx \rangle$ defines a \emph{control structure}
if for all $X \in \bnd{\mathcal{E}}$ and all $d,d' \in \mathcal{C} \cap
\param{X}$, if $d \approx d'$, then $d = d'$.
\end{definition}
We say that a unicity constraint is a \emph{witness} to a control
structure $\langle \varset{C},\approx\rangle$ if the latter can be
deduced from the unicity constraint through
Definitions~\ref{def:LCFP}--\ref{def:control_structure}.
The equivalence $\approx$ in a control structure
also serves to identify GCFPs that take on the same role in
\emph{different} equations: we say that two parameters $c,c' \in
\varset{C}$ are \emph{identical} if $c \approx c'$.
As a last step, we formally define our notion of a
control flow parameter.
\begin{definition}
A formal parameter $c$ is a \emph{control flow parameter (CFP)} if there
is a control structure $\langle \varset{C},\approx\rangle$ such
that $c \in \varset{C}$.
\end{definition}
\begin{example}\label{exa:CFP}
Observe that there is a unicity constraint that
identifies that parameter
$i^X$ is copied to $i^Z$ in our running example. Then necessarily $i^Z \sim
i^X$ and thus
$i^X \approx i^Z$ for a control structure $\langle \mathcal{C},\approx \rangle$
with $i^X,i^Z \in \mathcal{C}$.
However, $i^X$ and $i^Y$
do not have to be related, but we have the option to define $\approx$
so that they are. In fact, the structure $\langle
\{i^X,j^X,i^Y,j^Y,i^Z,j^Z\}, \approx \rangle$ for which $\approx$
relates all (and only) identically named parameters is a control
structure.
\end{example}
Using a control structure $\langle \varset{C},\approx\rangle$,
we can ensure that all equations have the same set of CFPs. This can
be done by assigning unique names to
identical CFPs and by adding CFPs that
do not appear in an equation as formal parameters for this equation.
Without loss of generality
we therefore continue to work under the following assumption.
\begin{assumption} \label{ass:names}
The set of CFPs is the same for every equation in
a PBES; that is, for all $X, Y \in \bnd{\ensuremath{\mathcal{E}}}$ in a PBES $\ensuremath{\mathcal{E}}$ we have
$d^X \in \param{X}$ is a CFP iff $d^Y \in \param{Y}$ is a CFP, and $d^X \approx
d^Y$.
\end{assumption}
From hereon, we call any formal parameter that is not a control flow parameter
a \emph{data parameter}. We make this
distinction explicit by partitioning $\varset{D}$ into CFPs $\varset{C}$
and data parameters $\varset{D}^{\mathit{DP}}$. As a consequence of Assumption~\ref{ass:names},
we may assume that every PBES we consider has equations with the same sequence of CFPs;
\emph{i.e.}\xspace, all equations are of the form
$\sigma X(\var{c} \colon \vec{C}, \var{d^X} \colon \vec{D^X})
= \rhs{X}(\var{c}, \var{d^X})$, where $\var{c}$ is the (vector of) CFPs, and
$\var{d^X}$ is the (vector of) data parameters of the equation for $X$.
Using the CFPs, we next construct a control flow graph. Vertices in this
graph represent valuations for the vector of CFPs and
the edges capture dependencies on PVIs.
The set of potential valuations for the CFPs is bounded by $\values{\var[k]{c}}$,
defined as:
\[
\{ \init{\var[k]{c}} \} \cup \bigcup\limits_{i \in \mathbb{N}, X \in \bnd{\ensuremath{\mathcal{E}}}}
\{ v \in D \mid
\source{X}{i}{k} = v \lor \dest{X}{i}{k} = v \}.
\]
We generalise $\ensuremath{\mathsf{values}}$ to the vector $\var{c}$ in the obvious way.
\begin{definition}
\label{def:globalCFGHeuristic}
The control flow graph (CFG) of $\ensuremath{\mathcal{E}}$ is a directed graph $(V^{\mathsf{syn}},
{\smash{\xrightarrow{\mathsf{syn}}}})$
with:
\begin{compactitem}
\item $V^{\mathsf{syn}} \subseteq \bnd{\ensuremath{\mathcal{E}}} \times \values{\var{c}}$.
\item ${\smash{\xrightarrow{\mathsf{syn}}}} \subseteq V^{\mathsf{syn}} \times \mathbb{N} \times
V^{\mathsf{syn}}$ is the least relation for which, whenever $(X,\val{v}) \xrightarrow{\mathsf{syn}}_{i}
(\predphi{\rhs{X}}{i},\val{w})$ then for every $k$ either:
\begin{compactitem}
\item $\source{X}{i}{k} = \val[k]{v}$ and $\dest{X}{i}{k} = \val[k]{w}$, or
\item $\source{X}{i}{k} = \bot$, $\copied{X}{i}{k} = k$ and $\val[k]{v} =
\val[k]{w}$, or
\item $\source{X}{i}{k} = \bot$, and $\dest{X}{i}{k} = \val[k]{w}$.
\end{compactitem}
\end{compactitem}
\end{definition}
We refer to the vertices in the CFG as \emph{locations}. Note that
a CFG is finite since the set $\values{\var{c}}$ is finite.
Furthermore, CFGs are complete in the sense that all PVIs on which
the truth of some $\rhs{X}$ may depend when $\val{c} = \val{v}$ are neighbours
of
location $(X, \val{v})$. | 2,902 | 43,359 | en |
train | 0.21.6 | We say that a unicity constraint is a \emph{witness} to a control
structure $\langle \varset{C},\approx\rangle$ if the latter can be
deduced from the unicity constraint through
Definitions~\ref{def:LCFP}--\ref{def:control_structure}.
The equivalence $\approx$ in a control structure
also serves to identify GCFPs that take on the same role in
\emph{different} equations: we say that two parameters $c,c' \in
\varset{C}$ are \emph{identical} if $c \approx c'$.
As a last step, we formally define our notion of a
control flow parameter.
\begin{definition}
A formal parameter $c$ is a \emph{control flow parameter (CFP)} if there
is a control structure $\langle \varset{C},\approx\rangle$ such
that $c \in \varset{C}$.
\end{definition}
\begin{example}\label{exa:CFP}
Observe that there is a unicity constraint that
identifies that parameter
$i^X$ is copied to $i^Z$ in our running example. Then necessarily $i^Z \sim
i^X$ and thus
$i^X \approx i^Z$ for a control structure $\langle \mathcal{C},\approx \rangle$
with $i^X,i^Z \in \mathcal{C}$.
However, $i^X$ and $i^Y$
do not have to be related, but we have the option to define $\approx$
so that they are. In fact, the structure $\langle
\{i^X,j^X,i^Y,j^Y,i^Z,j^Z\}, \approx \rangle$ for which $\approx$
relates all (and only) identically named parameters is a control
structure.
\end{example}
Using a control structure $\langle \varset{C},\approx\rangle$,
we can ensure that all equations have the same set of CFPs. This can
be done by assigning unique names to
identical CFPs and by adding CFPs that
do not appear in an equation as formal parameters for this equation.
Without loss of generality
we therefore continue to work under the following assumption.
\begin{assumption} \label{ass:names}
The set of CFPs is the same for every equation in
a PBES; that is, for all $X, Y \in \bnd{\ensuremath{\mathcal{E}}}$ in a PBES $\ensuremath{\mathcal{E}}$ we have
$d^X \in \param{X}$ is a CFP iff $d^Y \in \param{Y}$ is a CFP, and $d^X \approx
d^Y$.
\end{assumption}
From hereon, we call any formal parameter that is not a control flow parameter
a \emph{data parameter}. We make this
distinction explicit by partitioning $\varset{D}$ into CFPs $\varset{C}$
and data parameters $\varset{D}^{\mathit{DP}}$. As a consequence of Assumption~\ref{ass:names},
we may assume that every PBES we consider has equations with the same sequence of CFPs;
\emph{i.e.}\xspace, all equations are of the form
$\sigma X(\var{c} \colon \vec{C}, \var{d^X} \colon \vec{D^X})
= \rhs{X}(\var{c}, \var{d^X})$, where $\var{c}$ is the (vector of) CFPs, and
$\var{d^X}$ is the (vector of) data parameters of the equation for $X$.
Using the CFPs, we next construct a control flow graph. Vertices in this
graph represent valuations for the vector of CFPs and
the edges capture dependencies on PVIs.
The set of potential valuations for the CFPs is bounded by $\values{\var[k]{c}}$,
defined as:
\[
\{ \init{\var[k]{c}} \} \cup \bigcup\limits_{i \in \mathbb{N}, X \in \bnd{\ensuremath{\mathcal{E}}}}
\{ v \in D \mid
\source{X}{i}{k} = v \lor \dest{X}{i}{k} = v \}.
\]
We generalise $\ensuremath{\mathsf{values}}$ to the vector $\var{c}$ in the obvious way.
\begin{definition}
\label{def:globalCFGHeuristic}
The control flow graph (CFG) of $\ensuremath{\mathcal{E}}$ is a directed graph $(V^{\semantic}syn,
{\smash{\xrightarrow{\semantic}syn}})$
with:
\begin{compactitem}
\item $V^{\semantic}syn \subseteq \bnd{\ensuremath{\mathcal{E}}} \times \values{\var{c}}$.
\item ${\smash{\xrightarrow{\semantic}syn}} \subseteq V \times \mathbb{N} \times
V$ is the least relation for which, whenever $(X,\val{v}) \xrightarrow{\semantic}syn[i]
(\predphi{\rhs{X}}{i},\val{w})$ then for every $k$ either:
\begin{compactitem}
\item $\source{X}{i}{k} = \val[k]{v}$ and $\dest{X}{i}{k} = \val[k]{w}$, or
\item $\source{X}{i}{k} = \bot$, $\copied{X}{i}{k} = k$ and $\val[k]{v} =
\val[k]{w}$, or
\item $\source{X}{i}{k} = \bot$, and $\dest{X}{i}{k} = \val[k]{w}$.
\end{compactitem}
\end{compactitem}
\end{definition}
We refer to the vertices in the CFG as \emph{locations}. Note that
a CFG is finite since the set $\values{\var{c}}$ is finite.
Furthermore, CFGs are complete in the sense that all PVIs on which
the truth of some $\rhs{X}$ may depend when $\val{c} = \val{v}$ are neighbours
of
location $(X, \val{v})$.
\reportonly{
\begin{restatable}{lemma}{resetrelevantpvineighbours}
\label{lem:relevant_pvi_neighbours}
Let $(V^{\mathsf{syn}}, {\smash{\xrightarrow{\mathsf{syn}}}})$ be $\ensuremath{\mathcal{E}}$'s control flow graph. Then
for all $(X, \val{v}) \in V^{\mathsf{syn}}$ and all predicate environments
$\ensuremath{\eta}, \ensuremath{\eta}'$ and data environments $\ensuremath{\delta}$:
$$\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}}]}
= \sem{\rhs{X}}{\ensuremath{\eta}'}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}}]}$$
provided that
$\ensuremath{\eta}(Y)(\val{w}) = \ensuremath{\eta}'(Y)(\val{w})$ for all
$(Y, \val{w})$ satisfying
$(X, \val{v}) \xrightarrow{\mathsf{syn}}_{i} (Y, \val{w})$.
\end{restatable}
\begin{proof}
Let $\ensuremath{\eta}, \ensuremath{\eta}'$ be predicate environments, $\ensuremath{\delta}$ a data
environment, and let $(X, \val{v}) \in V^{\mathsf{syn}}$.
Suppose that for all $(Y, \val{w})$ for which
$(X, \val{v}) \xrightarrow{\mathsf{syn}}_{i} (Y, \val{w})$, we know that $\ensuremath{\eta}(Y)(\val{w})$
=
\ensuremath{\eta}'(Y)(\val{w})$.
Towards a contradiction, let
$\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]}
\neq
\sem{\rhs{X}}{\ensuremath{\eta}'}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]}$.
Then there must be a predicate variable instance $\predinstphi{\rhs{X}, i}$
such that
\begin{equation}\label{eqn:ass_pvi}
\begin{array}{cl}
&
\ensuremath{\eta}(\predphi{\rhs{X}}{i})(\sem{\dataphi{\rhs{X}}{i}}{}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]})\\
\neq &
\ensuremath{\eta}'(\predphi{\rhs{X}}{i})(\sem{\dataphi{\rhs{X}}{i}}{}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]}).
\end{array}
\end{equation}
Let
$\dataphi{\rhs{X}}{i} = (\val{e},\val{e'})$, where $\val{e}$ are the values
of the control flow parameters, and $\val{e'}$ are the values of the data
parameters.
Consider an arbitrary control flow parameter $\var[\ell]{c}$. We
distinguish two cases:
\begin{compactitem}
\item $\source{X}{i}{\ell} \neq \bot$. Then we know
$\dest{X}{i}{\ell} \neq \bot$, and the
requirement for the edge $(X, \val{v}) \xrightarrow{\mathsf{syn}}_{i}
(\predphi{\rhs{X}}{i}, \val{e})$
is satisfied for $\ell$.
\item $\source{X}{i}{\ell} = \bot$. Since $\var[\ell]{c}$
is a control flow parameter, we can distinguish two cases based on
Definitions~\ref{def:LCFP} and \ref{def:GCFP}:
\begin{compactitem}
\item $\dest{X}{i}{\ell} \neq \bot$. Then parameter $\ell$
immediately satisfies the requirements that show the existence
of the edge $(X, \val{v}) \xrightarrow{\mathsf{syn}}_{i}
(\predphi{\rhs{X}}{i}, \val{e})$ in the third clause in the
definition of CFG.
\item $\copied{X}{i}{\ell} = \ell$.
According to the definition of $\ensuremath{\mathsf{copy}}\xspace$, we now know that
$\val[\ell]{v} = \val[\ell]{e}$, hence the edge
$(X, \val{v}) \xrightarrow{\mathsf{syn}}_{i}
(\predphi{\rhs{X}}{i}, \val{e})$ exists according to the
second requirement in the definition of CFG.
\end{compactitem}
\end{compactitem}
Since we have considered an arbitrary $\ell$, we know that for all $\ell$
the requirements are satisfied, hence
$(X, \val{v}) \xrightarrow{\mathsf{syn}}_{i} (\predphi{\rhs{X}}{i}, \val{e})$. Then
according to the definition of $\ensuremath{\eta}$ and $\ensuremath{\eta}'$,
$\ensuremath{\eta}(\predphi{\rhs{X}}{i})(\sem{\val{e}}{}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]})
=
\ensuremath{\eta}'(\predphi{\rhs{X}}{i})(\sem{\val{e}}{}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]})$.
This contradicts \eqref{eqn:ass_pvi}, hence we find that
$\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]}
=
\sem{\rhs{X}}{\ensuremath{\eta}'}{\ensuremath{\delta}[\subst{\var{c}}{\sem{\val{v}}{}{}}]}$.\qed
train | 0.21.7 | \end{proof} | 5 | 43,359 | en |
train | 0.21.8 | }
\begin{example}
Using the CFPs identified earlier and an appropriate unicity constraint, we can
obtain the CFG depicted in Fig.~\ref{fig:CFG} for our running example.
\end{example}
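Spelled out (a sketch, restricting attention to the locations reachable
from $(X,(1,1))$, the location induced by the instance $X(1,1,1,1)$ of
interest): $(X,(1,1))$ has edges to $(X,(2,1))$ (first PVI) and $(Z,(1,2))$
(second PVI); $(X,(2,1))$ and $(X,(1,2))$ have edges to $(Z,(2,2))$ and
$(Z,(1,2))$, respectively; both $Z$-locations have edges to $(Y,(1,1))$ and
$(Y,(2,2))$; location $(Y,(2,2))$ has an edge to $(X,(1,2))$, and
$(Y,(1,1))$ has no outgoing edges.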
\paragraph{Implementation.} CFGs are defined in terms of
CFPs, which in turn are obtained from a unicity constraint.
Our definition of a unicity constraint is not constructive.
However, a unicity constraint can be derived from \emph{guards} for
a PVI. While computing the exact guard, \emph{i.e.}\xspace the strongest formula $\psi$ satisfying
$\varphi \equiv
\varphi[i \mapsto (\psi \wedge \predinstphi{\varphi}{i})]$, is
computationally
hard, we can efficiently approximate it as follows:
\begin{definition}
Let $\varphi$ be a predicate formula. We define the \emph{guard}
of the $i$-th PVI in $\varphi$,
denoted $\guard{i}{\varphi}$, inductively as follows:
\begin{align*}
\guard{i}{b} & = \ensuremath{\mathit{false}} &
\guard{i}{Y(e)} & = \ensuremath{\mathit{true}} \\
\guard{i}{\forall d \colon D . \varphi} & = \guard{i}{\varphi} &
\guard{i}{\exists d \colon D . \varphi} & = \guard{i}{\varphi} \\
\guard{i}{\varphi \land \psi} & = \begin{cases}
s(\varphi) \land \guard{i - \npred{\varphi}}{\psi} & \text{if } i > \npred{\varphi} \\
s(\psi) \land \guard{i}{\varphi} & \text{if } i \leq \npred{\varphi}
\end{cases} \span\omit\span\omit\\
\guard{i}{\varphi \lor \psi} & = \begin{cases}
s(\lnot \varphi) \land \guard{i - \npred{\varphi}}{\psi} & \text{if } i >
\npred{\varphi} \\
s(\lnot \psi) \land \guard{i}{\varphi} & \text{if } i \leq \npred{\varphi}
\end{cases}\span\omit\span\omit
\end{align*}
where
$s(\varphi) = \varphi$ if $\npred{\varphi} = 0$, and $\ensuremath{\mathit{true}}$ otherwise.
\end{definition}
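As an illustration, consider $\rhs{X} = \varphi_1 \wedge \varphi_2$ of our
running example, with $\varphi_1 = (i \not= 1 \vee j \not= 1 \vee
X(2,j,k,l+1))$ and $\varphi_2 = \forall m\colon\sort{N}. Z(i,2,m+k,k)$.
Since $\npred{\varphi_2} \not= 0$, we have $s(\varphi_2) = \ensuremath{\mathit{true}}$, so
$\guard{1}{\rhs{X}} = \guard{1}{\varphi_1} \equiv i = 1 \wedge j = 1$,
whereas $\guard{2}{\rhs{X}} = s(\varphi_1) \wedge \guard{1}{\varphi_2} =
\ensuremath{\mathit{true}}$: the first PVI is relevant only when $i = 1$ and
$j = 1$, the second always.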
We have $\varphi \equiv \varphi[i \mapsto (\guard{i}{\varphi} \wedge \predinstphi{\varphi}{i})]$;
\emph{i.e.}\xspace,
$\predinstphi{\varphi}{i}$ is relevant to $\varphi$'s truth value only if
$\guard{i}{\varphi}$ is satisfiable.
\reportonly{
This is formalised in the following lemma.
\begin{lemma}\label{lem:addGuard}
Let $\varphi$ be a predicate formula, and let $i \leq \npred{\varphi}$, then
for every predicate environment $\ensuremath{\eta}$ and data environment $\ensuremath{\delta}$,
$$
\sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\varphi[i \mapsto (\guard{i}{\varphi} \land
\predinstphi{\varphi}{i})]}{\ensuremath{\eta}}{\ensuremath{\delta}}.
$$
\end{lemma}
\begin{proof}
Let $\ensuremath{\eta}$ and $\ensuremath{\delta}$ be arbitrary. We proceed by induction on
$\varphi$.
The base cases where $\varphi = b$ and $\varphi = Y(\val{e})$ are trivial, and
$\forall d \colon D . \psi$ and $\exists d \colon D . \psi$ follow immediately
from the induction hypothesis. We
describe the case where $\varphi = \varphi_1 \land \varphi_2$ in detail,
the $\varphi = \varphi_1 \lor \varphi_2$ is completely analogous.
Assume that $\varphi = \varphi_1 \land \varphi_2$.
Let $i \leq \npred{\varphi_1 \land \varphi_2}$.
Without loss of generality assume that $i \leq \npred{\varphi_1}$, the other
case is analogous. According to the induction hypothesis,
\begin{equation}
\sem{\varphi_1}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\varphi_1[i \mapsto (\guard{i}{\varphi_1} \land
\predinstphi{\varphi_1}{i})]}{\ensuremath{\eta}}{\ensuremath{\delta}} \label{eq:ih}
\end{equation}
We distinguish two cases.
\begin{compactitem}
\item $\npred{\varphi_2} \neq 0$. Then
$\sem{\guard{i}{\varphi_1}}{\ensuremath{\delta}}{\ensuremath{\eta}}
= \sem{\guard{i}{\varphi_1 \land \varphi_2}}{\ensuremath{\delta}}{\ensuremath{\eta}}$
according to the definition of $\mathsf{guard}$. Since $i \leq \npred{\varphi_1}$,
we find that
$
\sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{(\varphi_1 \land \varphi_2)[i \mapsto (\guard{i}{\varphi_1 \land
\varphi_2} \land \predinstphi{\varphi_1 \land
\varphi_2}{i})]}{\ensuremath{\eta}}{\ensuremath{\delta}}.
$
\item $\npred{\varphi_2} = 0$.
We have to show that
$$\sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\varphi_1[i \mapsto (\guard{i}{\varphi_1 \land \varphi_2} \land
\predinstphi{\varphi_1}{i})] \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}$$
From the semantics, it follows that
$\sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\varphi_1}{\ensuremath{\eta}}{\ensuremath{\delta}} \land
\sem{\varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}.
$
Combined with \eqref{eq:ih}, and an application of the semantics, this yields
$$
\sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\varphi_1[i \mapsto (\guard{i}{\varphi_1} \land
\predinstphi{\varphi_1}{i})] \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}.
$$
According to the definition
of $\mathsf{guard}$, $\guard{i}{\varphi_1 \land \varphi_2} = \varphi_2 \land
\guard{i}{\varphi_1}$.
Since $\varphi_2$ is present in the context, the desired result
follows.\qed
\end{compactitem}
\end{proof}
We can generalise the above, and guard every predicate variable
instance in a formula with its guard, which preserves the solution of the
formula. To this end we introduce the function $\mathsf{guarded}$.
\begin{definition}\label{def:guarded}
Let $\varphi$ be a predicate formula, then
$$\guarded{\varphi} \ensuremath{=} \varphi[i \mapsto (\guard{i}{\varphi} \land
\predinstphi{\varphi}{i})]_{i \leq \npred{\varphi}}$$
where $[i \mapsto \psi_i]_{i \leq \npred{\varphi}}$ is the simultaneous
syntactic substitution of all $\predinstphi{\varphi}{i}$ with $\psi_i$.
\end{definition}
The following corollary follows immediately from Lemma~\ref{lem:addGuard}.
\begin{corollary}\label{cor:guardedPreservesSol}
For all formulae $\varphi$, and for all predicate environments $\ensuremath{\eta}$,
and data environments $\ensuremath{\delta}$,
$
\sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}} = \sem{\guarded{\varphi}}{\ensuremath{\eta}}{\ensuremath{\delta}}
$
\end{corollary}
This corollary confirms our intuition that indeed the guards we compute
effectively guard the recursions in a formula.
}
A good heuristic for defining the unicity constraints is looking
for positive occurrences of constraints of the form $d
= e$ in the guards and using this information to see if the arguments
of PVIs reduce to constants. | 2,048 | 43,359 | en |
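For instance, the positive occurrences of $i = 1$ and $j = 1$ in
$\guard{1}{\rhs{X}} \equiv i = 1 \wedge j = 1$ suggest taking
$\source{X}{1}{1} = 1$ and $\source{X}{1}{2} = 1$, and the constant first
argument of $X(2,j,k,l+1)$ yields $\dest{X}{1}{1} = 2$; this recovers the
unicity constraint of Example~\ref{exa:unicity_constraint}.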
train | 0.21.9 | \section{Data Flow Analysis}\label{sec:dataflow}
Our liveness analysis is built on top of CFGs constructed using
Def.~\ref{def:globalCFGHeuristic}. The analysis proceeds as follows:
for each location in the CFG, we first identify the data parameters
that may directly affect the truth value of the corresponding predicate
formula. Then we inductively identify data parameters that can
affect such parameters through PVIs as live as well. Upon termination,
each location is labelled by the \emph{live} parameters at that
location.
The set $\significant{\varphi}$ of parameters that affect the truth value of
a predicate formula $\varphi$, \emph{i.e.}\xspace, those parameters that occur in
Boolean data terms, is approximated as follows:
\begin{align*}
\significant{b} & = \free{b} &
\significant{Y(e)} & = \emptyset \\
\significant{\varphi \land \psi} & = \significant{\varphi} \cup \significant{\psi} &
\significant{\varphi \lor \psi} & = \significant{\varphi} \cup \significant{\psi} \\
\significant{\exists d \colon D . \varphi} & = \significant{\varphi} \setminus \{ d \} &
\significant{\forall d \colon D . \varphi} & = \significant{\varphi} \setminus \{ d \}
\end{align*}
Observe that $\significant{\varphi}$ is not invariant under logical
equivalence. We use this fact to our advantage: we assume the
existence of a function $\mathsf{simplify}$ for which we require
$\simplify{\varphi} \equiv \varphi$, and
$\significant{\simplify{\varphi}}\subseteq \significant{\varphi}$.
An appropriately chosen function $\mathsf{simplify}$ may help to narrow
down the parameters that affect the truth value of predicate
formulae in our base case. Labelling the CFG with live variables
is achieved as follows:
\begin{definition}
\label{def:markingHeuristic}
Let $\ensuremath{\mathcal{E}}$ be a PBES and let
$(V^{\mathsf{syn}}, \xrightarrow{\mathsf{syn}})$ be its CFG.
The labelling $\markingsn{\syntactic}{} \colon V^{\mathsf{syn}} \to
\mathbb{P}({\varset{D}^{\mathit{DP}}})$ is defined as
$\marking{\markingsn{\syntactic}{}}{X, \val{v}} = \bigcup_{n \in \ensuremath{\mathbb{N}}}
\marking{\markingsn{\syntactic}{n}}{X, \val{v}}$, with
$\markingsn{\syntactic}{n}$ inductively defined as:
\[
\begin{array}{ll}
\marking{\markingsn{\syntactic}{0}}{X, \val{v}} & =
\significant{\simplify{\rhs{X}[\var{c} := \val{v}]}}\\
\marking{\markingsn{\syntactic}{n+1}}{X, \val{v}} & =
\marking{\markingsn{\syntactic}{n}}{X, \val{v}} \\
& \cup \{ d \in \param{X} \cap \varset{D}^{\mathit{DP}} \mid \exists {i \in \mathbb{N},
(Y,\val{w}) \in V^{\mathsf{syn}}}:
(X, \val{v}) \xrightarrow{\mathsf{syn}}_{i} (Y, \val{w}) \\
& \qquad \land \exists {\var[\ell]{d} \in
\marking{\markingsn{\syntactic}{n}}{Y, \val{w}}}:~
\affects{d}{\dataphi[\ell]{\rhs{X}}{i}}
\}
\end{array}
\]
\end{definition}
The set $\marking{\markingsn{\syntactic}{}}{X, \val{v}}$ approximates the set of
parameters potentially live at location $(X, \val{v})$; all other data
parameters are guaranteed to be ``dead'', \emph{i.e.}\xspace, irrelevant.
\begin{example}\label{exa:globalCFGLabelled} The
labelling computed for our running example is depicted in Fig.~\ref{fig:CFG}.
One can cheaply establish that
$k^Z \notin \marking{\markingsn{\syntactic}{0}}{Z,1,2}$ since assigning
value $2$ to $j^Z$ in $Z$'s right-hand side effectively allows to
reduce subformula $(k < 10 \vee j =2)$ to $\ensuremath{\mathit{true}}$. We have
$l^Z \in \marking{\markingsn{\syntactic}{1}}{Z,1,2}$ since
we have $k^Y \in \marking{\markingsn{\syntactic}{0}}{Y,1,1}$.
\end{example}
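For completeness, we sketch how the labelling stabilises on the remaining
locations (following Def.~\ref{def:markingHeuristic}). Initially,
$\marking{\markingsn{\syntactic}{0}}{Y,1,1} =
\marking{\markingsn{\syntactic}{0}}{Y,2,2} = \{k^Y\}$, as $Y$'s right-hand
side simplifies to $k = 1$ and $k = 1 \vee X(1,2,k,l)$ there, and all other
locations are initially labelled $\emptyset$. Next to
$l^Z \in \marking{\markingsn{\syntactic}{1}}{Z,1,2}$, also
$l^Z \in \marking{\markingsn{\syntactic}{1}}{Z,2,2}$ through PVI
$Y(1,1,l,1)$, after which $k^X$ becomes live at every $X$-location through
PVI $Z(i,2,m+k,k)$, and the labelling stabilises. In particular, $k^Z$ and
$l^X$ are dead everywhere, which is what licenses replacing $Z(i,2,m+k,k)$
by $Z(i,2,1,k)$, as announced in Section~\ref{sec:example}.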
\reportonly{
The labelling from Definition~\ref{def:markingHeuristic} induces
a relation $\markrel{\markingsn{\syntactic}{}}$ on signatures as follows.
\begin{definition}\label{def:markrelSyntactic}
Let $\markingsn{\syntactic}{} \colon V \to \mathbb{P}(\varset{D}^{\mathit{DP}})$ be a
labelling. $\markingsn{\syntactic}{}$ induces a relation
$\markrel{\markingsn{\syntactic}{}}$ such that $(X, \sem{\val{v}}{}{},
\sem{\val{w}}{}{})
\markrel{\markingsn{\syntactic}{}} (Y, \sem{\val{v'}}{}{}, \sem{\val{w'}}{}{})$
if and only if
$X = Y$, $\sem{\val{v}}{}{} = \sem{\val{v'}}{}{}$, and
$\forall \var[k]{d} \in \marking{\markingsn{\syntactic}{}}{X,
\val{v}}: \semval[k]{w} = \semval[k]{w'}$.
\end{definition}
Observe that the relation $\markrel{\markingsn{\syntactic}{}}$
allows for relating \emph{all} instances of the non-labelled data parameters at
a given control flow location.
We prove that, if locations are related using the relation
$\markrel{\markingsn{\syntactic}{}}$, then the corresponding instances in the
PBES have the same solution by showing that $\markrel{\markingsn{\syntactic}{}}$
is a consistent correlation.
In order to prove this, we first show that given a predicate environment and two
data environments, if the solution of a formula differs between those
environments, and all predicate variable instances in the formula have the same
solution, then there must be a significant parameter $d$ in
the formula that gets a different value in the two data environments.
\begin{lemma}
\label{lem:non_recursive_free}
For all formulae $\varphi$, predicate environments $\ensuremath{\eta}$,
and data environments $\ensuremath{\delta}, \ensuremath{\delta}'$, if
$\sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}} \neq \sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}'}$
and for all $i \leq \npred{\varphi}$,
$\sem{\predinstphi{\varphi}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\predinstphi{\varphi}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}'}$,
then $\exists d \in \significant{\varphi}: \ensuremath{\delta}(d) \neq \ensuremath{\delta}'(d)$.
\end{lemma}
\begin{proof}
We proceed by induction on $\varphi$.
\begin{compactitem}
\item $\varphi = b$. Trivial.
\item $\varphi = Y(e)$. In this case the two preconditions
contradict, and the result trivially follows.
\item $\varphi = \forall e \colon D . \psi$. Assume that
$\sem{\forall e \colon D . \psi}{\ensuremath{\eta}}{\ensuremath{\delta}}
\neq \sem{\forall e \colon D . \psi}{\ensuremath{\eta}}{\ensuremath{\delta}'}$, and furthermore,
$\forall i \leq \npred{\forall e \colon D . \psi}:
\sem{\predinstphi{\forall e \colon D . \psi}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\predinstphi{\forall e \colon D . \psi}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}'}$.
According to the semantics, we have
$\forall u \in \semset{D} . \sem{\psi}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{e}{u}]}
\neq \forall u' \in \semset{D} .
\sem{\psi}{\ensuremath{\eta}}{\ensuremath{\delta}'[\subst{e}{u'}]}$,
so $\exists u \in \semset{D}$ such that
$\sem{\psi}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{e}{u}]}
\neq \sem{\psi}{\ensuremath{\eta}}{\ensuremath{\delta}'[\subst{e}{u}]}$.
Choose an arbitrary such $u$. Observe that also
for all $i \leq \npred{\psi}$, we know that
$$\sem{\predinstphi{\psi}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{e}{u}]}
= \sem{\predinstphi{\psi}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}'[\subst{e}{u}]}.$$
According to the induction hypothesis, there exists some $d \in
\significant{\psi}$
such that $\ensuremath{\delta}[\subst{e}{u}](d) \neq \ensuremath{\delta}'[\subst{e}{u}](d)$.
Choose such a $d$, and observe that $d \neq e$, since otherwise
$\ensuremath{\delta}[\subst{e}{u}](d) = u = \ensuremath{\delta}'[\subst{e}{u}](d)$, contradicting the
choice of $d$;
hence $d \in \significant{\forall e \colon D . \psi}$,
which is the desired result.
\item $\varphi = \exists e \colon D . \psi$. Analogous to the previous case.
\item $\varphi = \varphi_1 \land \varphi_2$. Assume that
$\sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}}
\neq \sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}'}$, and suppose
that that for all $i \leq \npred{\varphi_1 \land \varphi_2}$, we know that
$\sem{\predinstphi{\varphi_1 \land \varphi_2}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\predinstphi{\varphi_1 \land \varphi_2}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}'}$.
According to the first assumption, either
$\sem{\varphi_1}{\ensuremath{\eta}}{\ensuremath{\delta}} \neq
\sem{\varphi_1}{\ensuremath{\eta}}{\ensuremath{\delta}'}$,
or $\sem{\varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}} \neq
\sem{\varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}'}$.
Without loss of generality, assume that
$\sem{\varphi_1}{\ensuremath{\eta}}{\ensuremath{\delta}} \neq
\sem{\varphi_1}{\ensuremath{\eta}}{\ensuremath{\delta}'}$,
the other case is completely analogous.
Observe that from our second assumption it follows that
$\forall i \leq \npred{\varphi_1}:
\sem{\predinstphi{\varphi_1}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}}
= \sem{\predinstphi{\varphi_1}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}'}$.
According to the induction hypothesis, we now find some
$d \in \significant{\varphi_1}$ such that $\ensuremath{\delta}(d) \neq \ensuremath{\delta}'(d)$.
Since $\significant{\varphi_1} \subseteq \significant{\varphi_1 \land
\varphi_2}$,
our result follows.
\item $\varphi = \varphi_1 \lor \varphi_2$. Analogous to the previous case.
\qed
\end{compactitem}
\end{proof} | 2,982 | 43,359 | en |
train | 0.21.10 | This is now used in proving the following proposition, that shows that related
signatures have the same solution. This result follows from the fact that
$\markrel{\markingsn{\syntactic}{}}$ is a consistent correlation.
\begin{proposition}
\label{prop:ccSyn}
Let $\ensuremath{\mathcal{E}}$ be a PBES, with global control flow graph $(V^{\mathsf{syn}}, \xrightarrow{\mathsf{syn}})$,
and labelling $\markingsn{\syntactic}{}$. For all predicate environments
$\ensuremath{\eta}$ and data environments $\ensuremath{\delta}$,
$$(X, \semval{v}, \semval{w}) \markrel{\markingsn{\syntactic}{}}
(Y, \semval{v'}, \semval{w'})
\implies \sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}(X(\val{v},\val{w})) =
\sem{\ensuremath{\mathcal{E}}}{\ensuremath{\eta}}{\ensuremath{\delta}}(Y(\val{v'},\val{w'})).$$
\end{proposition}
\begin{proof}
We show that $\markrel{\markingsn{\syntactic}{}}$ is a consistent correlation.
The result then follows immediately from Theorem~\ref{thm:cc}.
Let $n$ be the smallest number such that for all $X, \val{v}$,
$\markingsn{\syntactic}{n+1}(X, \val{v})
= \markingsn{\syntactic}{n}(X, \val{v})$, and hence
$\markingsn{\syntactic}{n}(X, \val{v}) = \markingsn{\syntactic}{}(X, \val{v})$.
Towards a contradiction, suppose that $\markrel{\markingsn{\syntactic}{}}$
is not a consistent correlation. Since $\markrel{\markingsn{\syntactic}{}}$
is not a consistent correlation, there exist
$X, X', \val{v}, \val{v'}, \val{w}, \val{w'}$ such that
$(X, \semval{v}, \semval{w}) \markrel{\markingsn{\syntactic}{n}} (X',
\semval{v'}, \semval{w'})$,
and
\begin{equation*}
\exists \ensuremath{\eta} \in \correnv{\markrel{\markingsn{\syntactic}{n}}}, \ensuremath{\delta}:
\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\semval{v}},
\subst{\var{d}}{\semval{w}}]}
\neq \sem{\rhs{X'}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\semval{v'}},
\subst{\var{d}}{\semval{w'}}]}.
\end{equation*}
According to Definition~\ref{def:markrelSyntactic}, $X = X'$, and $\semval{v} =
\semval{v'}$,
hence this is equivalent to
\begin{equation}\label{eq:equal_phi}
\exists \ensuremath{\eta} \in \correnv{\markrel{\markingsn{\syntactic}{n}}}, \ensuremath{\delta}:
\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\semval{v}},
\subst{\var{d}}{\semval{w}}]}
\neq \sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\semval{v}},
\subst{\var{d}}{\semval{w'}}]}.
\end{equation}
Let $\ensuremath{\eta}$ and $\ensuremath{\delta}$ be such, and let
$\ensuremath{\delta}_1 = \ensuremath{\delta}[\subst{\var{c}}{\semval{v}},
\subst{\var{d}}{\semval{w}}]$
and $\ensuremath{\delta}_2 = \ensuremath{\delta}[\subst{\var{c}}{\semval{v}},
\subst{\var{d}}{\semval{w'}}]$.
Define $ \varphi'_X \ensuremath{=} \simplify{\rhs{X}[\var{c} := \val{v}]}.$
Since the values in $\val{v}$ are closed, and from the definition of
$\mathsf{simplify}$,
we find that $\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}_1} =
\sem{\varphi'_X}{\ensuremath{\eta}}{\ensuremath{\delta}_1}$,
and likewise for $\ensuremath{\delta}_2$. Therefore, we know that
\begin{equation}
\label{eq:equal_phi'}
\sem{\varphi'_X}{\ensuremath{\eta}}{\ensuremath{\delta}_1} \neq
\sem{\varphi'_X}{\ensuremath{\eta}}{\ensuremath{\delta}_2}.
\end{equation}
Observe that for all $\var[k]{d} \in \marking{\markingsn{\syntactic}{}}{X,
\val{v}}$,
$\semval[k]{w} = \semval[k]{w'}$ by definition of
$\markrel{\markingsn{\syntactic}{}}$.
Every predicate variable instance that might change the solution of $\varphi'_X$
is a neighbour of $(X, \val{v})$ in the control flow graph, according to Lemma
\ref{lem:relevant_pvi_neighbours}.
Take an arbitrary predicate variable instance
$\predinstphi{\rhs{X}}{i} = Y(\val{e}, \val{e'})$ in $\varphi'_X$.
We first show that $\sem{\val[\ell]{e'}}{}{\ensuremath{\delta}_1}
= \sem{\val[\ell]{e'}}{}{\ensuremath{\delta}_2}$ for all $\ell$.
Observe that $\sem{\val{e}}{}{\ensuremath{\delta}_1} = \sem{\val{e}}{}{\ensuremath{\delta}_2}$ since
$\val{e}$ are expressions substituted for control flow parameters, and hence
are either constants, or the result of copying.
Furthermore, there is no unlabelled parameter $\var[k]{d}$ that can influence a
labelled parameter $\var[\ell]{d}$ at a successor location $(Y, \val{u})$. If there is a
$\var[\ell]{d} \in \markingsn{\syntactic}{n}(Y, \val{u})$ such that
$\var[k]{d} \in \free{\val[\ell]{e'}}$, and
$\var[k]{d} \not \in \markingsn{\syntactic}{n}(X, \val{v})$, then by
definition of labelling $\var[k]{d} \in \markingsn{\syntactic}{n+1}(X,
\val{v})$,
which contradicts the assumption that the labelling is stable, so it follows
that
\begin{equation} \label{eq:equivalent_arguments_Xi}
\sem{\val[\ell]{e'}}{}{\ensuremath{\delta}_1}
= \sem{\val[\ell]{e'}}{}{\ensuremath{\delta}_2}\text{ for all }\ell.
\end{equation}
From \eqref{eq:equivalent_arguments_Xi}, and since we have chosen the predicate
variable instance arbitrarily, it follows that for all $1 \leq i \leq
\npred{\varphi'_X}$,
$\sem{\predinstphi{\varphi'_X}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}_1}
= \sem{\predinstphi{\varphi'_X}{i}}{\ensuremath{\eta}}{\ensuremath{\delta}_2}$.
Together with \eqref{eq:equal_phi'}, according to
Lemma~\ref{lem:non_recursive_free},
this implies that there is some $d \in \significant{\varphi'_X}$
such that $\ensuremath{\delta}_1(d) \neq \ensuremath{\delta}_2(d)$. From the definition of
$\markingsn{\syntactic}{0}$, however, it follows that $d$ must be labelled
in $\marking{\markingsn{\syntactic}{0}}{X, \val{v}}$, and hence also in
$\marking{\markingsn{\syntactic}{n}}{X, \val{v}}$.
According to the definition of $\markrel{\markingsn{\syntactic}{n}}$ it then is
the case that $\ensuremath{\delta}_1(d) = \ensuremath{\delta}_2(d)$, which is a contradiction.
Since also in this case we derive a contradiction, the original assumption that
$\markrel{\markingsn{\syntactic}{}}$ is not a consistent correlation does not
hold, and we conclude that $\markrel{\markingsn{\syntactic}{}}$ is a consistent
correlation. \qed
\end{proof} | 2,086 | 43,359 | en |
train | 0.21.11 | }
\label{sec:reset}
A parameter $d$ that is not live at a location
can be assigned a fixed default value. To this end
the corresponding data arguments of the PVIs that lead to that location
are replaced by a default value $\init{d}$. This is achieved by
function $\ensuremath{\mathsf{Reset}}$, defined below:
\reportonly{
\begin{definition}
\label{def:reset}
Let $\ensuremath{\mathcal{E}}$ be a PBES, let $(V, \to)$ be its CFG, with labelling
$\markingsn{\syntactic}{}$\!. Resetting a PBES is inductively defined on the
structure of
$\ensuremath{\mathcal{E}}$.
$$
\begin{array}{lcl}
\reset{\markingsn{\syntactic}{}}{\ensuremath{\emptyset}} & \ensuremath{=} & \ensuremath{\emptyset} \\
\reset{\markingsn{\syntactic}{}}{(\sigma X(\var{c} \colon \vec{C}, \var{d}
\colon \vec{D}) = \varphi) \ensuremath{\mathcal{E}}'} & \ensuremath{=}
& (\sigma \changed{X}(\var{c} \colon \vec{C}, \var{d} \colon\vec{D}) =
\reset{\markingsn{\syntactic}{}}{\varphi})
\reset{\markingsn{\syntactic}{}}{\ensuremath{\mathcal{E}}'} \\
\end{array}
$$
Resetting for formulae is defined inductively as follows:
$$
\begin{array}{lcl}
\reset{\markingsn{\syntactic}{}}{b} & \ensuremath{=} & b\\
\reset{\markingsn{\syntactic}{}}{\varphi \land \psi} & \ensuremath{=} &
\reset{\markingsn{\syntactic}{}}{\varphi} \land
\reset{\markingsn{\syntactic}{}}{\psi}\\
\reset{\markingsn{\syntactic}{}}{\varphi \lor \psi} & \ensuremath{=} &
\reset{\markingsn{\syntactic}{}}{\varphi} \lor
\reset{\markingsn{\syntactic}{}}{\psi}\\
\reset{\markingsn{\syntactic}{}}{\forall d \colon D. \varphi} & \ensuremath{=} &
\forall d \colon D. \reset{\markingsn{\syntactic}{}}{\varphi} \\
\reset{\markingsn{\syntactic}{}}{\exists d \colon D. \varphi} & \ensuremath{=} &
\exists d \colon D. \reset{\markingsn{\syntactic}{}}{\varphi} \\
\reset{\markingsn{\syntactic}{}}{X(\val{e}, \val{e'})} & \ensuremath{=} &
\bigwedge_{\val{v} \in \values{\var{c}}} (\val{e} = \val{v} \implies
\changed{X}(\val{v}, \resetvars{(X,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}))
\end{array}
$$
With $\val{e} = \val{v}$ we denote that for all $i$,
$\val[i]{e} = \val[i]{v}$.
The function $\resetvars{(X, \val{v})}{\markingsn{\syntactic}{}}{\val{e'}}$ is
defined
positionally as follows:
$$
\ind{\resetvars{(X, \val{v})}{\markingsn{\syntactic}{}}{\val{e'}}}{i} =
\begin{cases}
\val[i]{e'} & \text{ if } \var[i]{d} \in
\marking{\markingsn{}{}}{X, \val{v}}
\\
\init{\var[i]{d}} & \text{otherwise}.
\end{cases}
$$
\end{definition}
\begin{remark}
We can reduce the number of conjuncts introduced when resetting a
predicate variable instance; this effectively reduces the guard as follows.
Let $X \in \bnd{\ensuremath{\mathcal{E}}}$, such that $Y(\val{e}, \val{e'}) =
\predinstphi{\rhs{X}}{i}$,
and let $I = \{ j \mid \dest{X}{i}{j} = \bot \} $ denote the indices of the
control flow parameters for which the destination is undefined.
Define $\var{c'} = \var[i_1]{c}, \ldots, \var[i_n]{c}$ for $i_1, \ldots, i_n \in I$,
and $\val{f} = \val[i_1]{e}, \ldots, \val[i_n]{e}$ to be the vectors
of control flow parameters for which the destination is undefined, and the
values that are assigned to them in predicate variable instance $i$. Observe
that these are the only control flow parameters that we need to constrain in
the guard while resetting.
We can redefine $\reset{\markingsn{\syntactic}{}}{X(\val{e}, \val{e'})}$ as
follows.
$$\reset{\markingsn{\syntactic}{}}{X(\val{e}, \val{e'})} \ensuremath{=}
\bigwedge_{\val{v'} \in \values{\var{c'}}}
( \val{f} = \val{v'} \implies
\changed{X}(\val{v}, \resetvars{(X,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}) ).$$
In this definition
$\val{v}$ is defined positionally as $$\val[j]{v} =
\begin{cases}
\val[j]{v'} & \text{if } j \in I \\
\dest{X}{i}{j} & \text{otherwise}
\end{cases}$$
\end{remark}
Resetting dead parameters preserves the solution of the PBES. We formalise
this in Theorem~\ref{thm:resetSound} below. Our proof is based on consistent
correlations. We first define the relation $R^{\ensuremath{\mathsf{Reset}}}$, and we show that
this is indeed a consistent correlation. Soundness then follows from
Theorem~\ref{thm:cc}. Note that $R^{\ensuremath{\mathsf{Reset}}}$ uses the relation
$\markrel{\markingsn{\syntactic}{}}$ from Definition~\ref{def:markrelSyntactic}
to relate predicate variable instances of the original equation system. The
latter
is used in the proof of Lemma~\ref{lem:resetRecursion}.
\begin{definition}\label{def:resetRelSyn}
Let $R^{\ensuremath{\mathsf{Reset}}}$ be the relation defined as follows.
$$
\begin{cases}
X(\semval{v}, \semval{w}) R^{\ensuremath{\mathsf{Reset}}} \changed{X}(\semval{v},
\sem{\resetvars{(X, \val{v})}{\markingsn{\syntactic}{}}{\val{w}}}{}{}) \\
X(\semval{v}, \semval{w}) R^{\ensuremath{\mathsf{Reset}}} X(\semval{v}, \semval{w'}) & \text{if }
X(\semval{v}, \semval{w}) \markrel{\markingsn{\syntactic}{}}
X(\semval{v}, \semval{w'})
\end{cases}
$$
\end{definition}
We first show
that we can unfold the values of the control flow parameters in every predicate
variable instance, by duplicating the predicate variable instance, and
substituting the values of the CFPs.
\begin{lemma}\label{lem:unfoldCfl}
Let $\ensuremath{\eta}$ and $\ensuremath{\delta}$ be environments, and let $X \in \bnd{\ensuremath{\mathcal{E}}}$,
then for all $i \leq \npred{\rhs{X}}$, such that $\predinstphi{\rhs{X}}{i}
= Y(\val{e},\val{e'})$,
$$\sem{Y(\val{e}, \val{e'})}{\ensuremath{\eta}}{\ensuremath{\delta}} =
\sem{\bigwedge_{\val{v}\in \values{\var{c}}} (\val{e} = \val{v} \implies
Y(\val{v}, \val{e'}))}{\ensuremath{\eta}}{\ensuremath{\delta}}$$
\end{lemma}
\begin{proof}
Straightforward; observe that $\val{e} = \val{v}$ for
exactly one $\val{v} \in \values{\var{c}}$, using that $\val{v}$ is
closed.\qed
\end{proof}
Next we establish that resetting dead parameters is sound, \emph{i.e.}\xspace it
preserves the solution of the PBES. We first show that resetting
a predicate variable instance in an $R^{\ensuremath{\mathsf{Reset}}}$-correlating environment
and a given data environment is sound.
\begin{lemma}\label{lem:resetRecursion}
Let $\ensuremath{\mathcal{E}}$ be a PBES, let $(V, \to)$ be its CFG, with labelling
$\markingsn{\syntactic}{}$ such that $\markrel{\markingsn{\syntactic}{}}$ is a
consistent correlation, then
$$\forall \ensuremath{\eta} \in
\correnv{R^{\ensuremath{\mathsf{Reset}}}}, \ensuremath{\delta}: \sem{Y(\val{e},
\val{e'})}{\ensuremath{\eta}}{\ensuremath{\delta}} =
\sem{\reset{\markingsn{\syntactic}{}}{Y(\val{e},
\val{e'})}}{\ensuremath{\eta}}{\ensuremath{\delta}}$$
\end{lemma}
\begin{proof}
Let $\ensuremath{\eta} \in \correnv{R^{\ensuremath{\mathsf{Reset}}}}$, and $\ensuremath{\delta}$ be arbitrary. We
derive this as follows.
$$
\begin{array}{ll}
& \sem{\reset{\markingsn{\syntactic}{}}{Y(\val{e},
\val{e'})}}{\ensuremath{\eta}}{\ensuremath{\delta}} \\
= & \{ \text{Definition~\ref{def:reset}} \} \\
& \sem{\bigwedge_{\val{v} \in \cfl{Y}} (\val{e} = \val{v}
\implies \changed{Y}(\val{v},\resetvars{(Y,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}))}{\ensuremath{\eta}}{\ensuremath{\delta}}
\\
=^{\dagger} & \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}}
= \semval{v} \implies
\sem{\changed{Y}(\val{v},\resetvars{(Y,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}})}{\ensuremath{\eta}}{\ensuremath{\delta}})
\\
=^{\dagger} & \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}}
= \semval{v} \implies
\ensuremath{\eta}(\changed{Y})(\sem{\val{v}}{}{\ensuremath{\delta}},\sem{\resetvars{(Y,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}}{}{\ensuremath{\delta}}))\\
= & \{ \ensuremath{\eta} \in \correnv{R^{\ensuremath{\mathsf{Reset}}}} \} \\
& \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}} =
\semval{v} \implies
\ensuremath{\eta}(Y)(\sem{\val{v}}{}{\ensuremath{\delta}},\sem{\val{e'}}{}{\ensuremath{\delta}}))\\
=^{\dagger} & \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}}
= \semval{v} \implies
\sem{Y(\val{v}, \val{e'})}{\ensuremath{\eta}}{\ensuremath{\delta}}) \\
=^{\dagger} & \sem{\bigwedge_{\val{v} \in \cfl{Y}} (\val{e} = \val{v}
\implies Y(\val{v}, \val{e'}))}{\ensuremath{\eta}}{\ensuremath{\delta}} \\
= & \{ \text{Lemma~\ref{lem:unfoldCfl}} \}\\
& \sem{Y(\val{e}, \val{e'})}{\ensuremath{\eta}}{\ensuremath{\delta}}
\end{array}
$$
Here at $^{\dagger}$ we have used the semantics.\qed
\end{proof} | 3,081 | 43,359 | en |
train | 0.21.12 | Next we establish that resetting dead parameters is sound, \emph{i.e.}\xspace it
preserves the solution of the PBES. We first show that resetting
a predicate variable instance in an $R^{\ensuremath{\mathsf{Reset}}}$-correlating environment
and a given data environment is sound.
\begin{lemma}\label{lem:resetRecursion}
Let $\ensuremath{\mathcal{E}}$ be a PBES, let $(V, \to)$ be its CFG, with labelling
$\markingsn{\syntactic}{}$ such that $\markrel{\markingsn{\syntactic}{}}$ is a
consistent correlation, then
$$\forall \ensuremath{\eta} \in
\correnv{R^{\ensuremath{\mathsf{Reset}}}}, \ensuremath{\delta}: \sem{Y(\val{e},
\val{e'})}{\ensuremath{\eta}}{\ensuremath{\delta}} =
\sem{\reset{\markingsn{\syntactic}{}}{Y(\val{e},
\val{e'})}}{\ensuremath{\eta}}{\ensuremath{\delta}}$$
\end{lemma}
\begin{proof}
Let $\ensuremath{\eta} \in \correnv{R^{\ensuremath{\mathsf{Reset}}}}$, and $\ensuremath{\delta}$ be arbitrary. We
derive this as follows.
$$
\begin{array}{ll}
& \sem{\reset{\markingsn{\syntactic}{}}{Y(\val{e},
\val{e'}))}}{\ensuremath{\eta}}{\ensuremath{\delta}} \\
= & \{ \text{Definition~\ref{def:reset}} \} \\
& \sem{\bigwedge_{\val{v} \in \cfl{Y}} (\val{e} = \val{v}
\implies \changed{Y}(\val{v},\resetvars{(Y,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}))}{\ensuremath{\eta}}{\ensuremath{\delta}}
\\
=^{\dagger} & \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}}
= \sem{\val{v}}{}{} \implies
\sem{\changed{Y}(\val{v},\resetvars{(Y,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}}{\ensuremath{\eta}}{\ensuremath{\delta}}))
\\
=^{\dagger} & \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}}
= \semval{v} \implies
\ensuremath{\eta}(\changed{Y})(\sem{\val{v}}{}{\ensuremath{\delta}},\sem{\resetvars{(Y,
\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}}{}{\ensuremath{\delta}}))\\
= & \{ \ensuremath{\eta} \in \correnv{R^{\ensuremath{\mathsf{Reset}}}} \} \\
& \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}} =
\semval{v} \implies
\ensuremath{\eta}(Y)(\sem{\val{v}}{}{\ensuremath{\delta}},\sem{\val{e'}}{}{\ensuremath{\delta}})))\\
=^{\dagger} & \bigwedge_{\val{v} \in \cfl{Y}} (\sem{\val{e}}{}{\ensuremath{\delta}}
= \semval{v} \implies
\sem{Y(\val{v}, \val{e'})}{\ensuremath{\eta}}{\ensuremath{\delta}})) \\
=^{\dagger} & \sem{\bigwedge_{\val{v} \in \cfl{Y}} (\val{e} = \val{v}
\implies Y(\val{v}, \val{e'}))}{\ensuremath{\eta}}{\ensuremath{\delta}} \\
= & \{ \text{Lemma~\ref{lem:unfoldCfl}} \}\\
& \sem{Y(\val{e}, \val{e'}))}{\ensuremath{\eta}}{\ensuremath{\delta}}
\end{array}
$$
Here at $^{\dagger}$ we have used the semantics.\qed
\end{proof}
By extending this result to the right-hand sides of equations, we can prove that
$R^{\ensuremath{\mathsf{Reset}}}$ is a consistent correlation.
\begin{proposition}
\label{prop:resetCc}
Let $\ensuremath{\mathcal{E}}$ be a PBES, and let $(V, \to)$ be a CFG, with labelling
$\markingsn{\syntactic}{}$ such that $\markrel{\markingsn{\syntactic}{}}$ is a
consistent correlation. Let $X \in \bnd{\ensuremath{\mathcal{E}}}$, with $\val{v} \in \ensuremath{\mathit{CFL}}(X)$,
then for all $\val{w}$, and for all predicate environments $\ensuremath{\eta} \in
\correnv{R^{\ensuremath{\mathsf{Reset}}}}$ and data environments $\ensuremath{\delta}$
$$
\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\semval{v}},\subst{\var{d}}{\semval{w}}]}
=
\sem{\reset{\markingsn{\syntactic}{}}{\rhs{X}}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\semval{v}},\subst{\var{d}}{\sem{\resetvars{(X,
\val{v})}{\markingsn{\syntactic}{}}{\val{w}}}{}{}}]} $$
\end{proposition}
\begin{proof}
Let $\ensuremath{\eta}$ and $\ensuremath{\delta}$ be arbitrary, and define $\ensuremath{\delta}_r \ensuremath{=}
\ensuremath{\delta}[\subst{\var{c}}{\semval{v}},\subst{\var{d}}{\sem{\resetvars{(X,
\val{v})}{\markingsn{\syntactic}{}}{\val{w}}}{}{}}]$.
We first prove that
\begin{equation}
\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}_r}
=
\sem{\reset{\markingsn{\syntactic}{}}{\rhs{X}}}{\ensuremath{\eta}}{\ensuremath{\delta}_r}
\end{equation}
We proceed by induction on $\rhs{X}$.
\begin{compactitem}
\item $\rhs{X} = b$. Since $\reset{\markingsn{\syntactic}{}}{b} = b$ this
follows immediately.
\item $\rhs{X} = Y(\val{e}, \val{e'})$. This follows immediately from
Lemma~\ref{lem:resetRecursion}.
\item $\rhs{X} = \forall y \colon D . \varphi$. We derive that
$\sem{\forall y \colon D . \varphi}{\ensuremath{\eta}}{\ensuremath{\delta}_r} = \forall v
\in \semset{D} . \sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}_r[\subst{y}{v}]}$.
According to the induction hypothesis, and since resetting leaves the
quantified variable $y$ untouched, we find that
$\sem{\varphi}{\ensuremath{\eta}}{\ensuremath{\delta}_r[\subst{y}{v}]}
=
\sem{\reset{\markingsn{\syntactic}{}}{\varphi}}{\ensuremath{\eta}}{\ensuremath{\delta}_r[\subst{y}{v}]}$,
hence $\sem{\forall y \colon D . \varphi}{\ensuremath{\eta}}{\ensuremath{\delta}_r} =
\sem{\reset{\markingsn{\syntactic}{}}{\forall y \colon D .
\varphi}}{\ensuremath{\eta}}{\ensuremath{\delta}_r}$.
\item $\rhs{X} = \exists y \colon D . \varphi$. Analogous to the previous
case.
\item $\rhs{X} = \varphi_1 \land \varphi_2$. We derive that
$\sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}_r} =
\sem{\varphi_1}{\ensuremath{\eta}}{\ensuremath{\delta}_r} \land
\sem{\varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}_r}$.
If we apply the induction hypothesis on both sides we get
$\sem{\varphi_1 \land \varphi_2}{\ensuremath{\eta}}{\ensuremath{\delta}_r} =
\sem{\reset{\markingsn{\syntactic}{}}{\varphi_1}}{\ensuremath{\eta}}{\ensuremath{\delta}_r}
\land
\sem{\reset{\markingsn{\syntactic}{}}{\varphi_2}}{\ensuremath{\eta}}{\ensuremath{\delta}_r}$.
Applying the semantics, and the definition of $\ensuremath{\mathsf{Reset}}$ we find this
is equal to
$\sem{\reset{\markingsn{\syntactic}{}}{\varphi_1 \land
\varphi_2}}{\ensuremath{\eta}}{\ensuremath{\delta}_r}$.
\item $\rhs{X} = \varphi_1 \lor \varphi_2$. Analogous to the previous case.
\end{compactitem}
Hence we find that
$\sem{\reset{\markingsn{\syntactic}{}}{\rhs{X}}}{\ensuremath{\eta}}{\ensuremath{\delta}_{r}}
= \sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}_{r}}$.
It now follows immediately from the observation that
$\markrel{\markingsn{\syntactic}{}}$ is a consistent correlation, and
Definition~\ref{def:reset}, that
$\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}_{r}} =
\sem{\rhs{X}}{\ensuremath{\eta}}{\ensuremath{\delta}[\subst{\var{c}}{\semval{v}},\subst{\var{d}}{\semval{w}}]}$.
Our result follows by transitivity of $=$. \qed
\end{proof}
The theory of consistent correlations now gives an immediate proof of soundness
of resetting dead parameters, which is formalised by the following
theorem.
\begin{theorem}
\label{thm:resetSound}
Let $\ensuremath{\mathcal{E}}$ be a PBES, with control flow graph $(V, \to)$ and labelling
$\markingsn{\syntactic}{}$\!. For all $X$, $\val{v}$ and $\val{w}$:
$$\sem{\ensuremath{\mathcal{E}}}{}{}(X(\sem{\val{v}},\sem{\val{w}})) =
\sem{\reset{\markingsn{\syntactic}{}}{\ensuremath{\mathcal{E}}}}{}{}(\changed{X}(\sem{\val{v}},\sem{\val{w}})).$$
\end{theorem}
\begin{proof}
Relation $R^{\ensuremath{\mathsf{Reset}}}$ is a consistent correlation, as witnessed by
Proposition~\ref{prop:resetCc}. From Theorem~\ref{thm:cc} the result
now follows immediately.\qed
\end{proof}
}
\paperonly{
\begin{definition}
\label{def:reset}
Let $\ensuremath{\mathcal{E}}$ be a PBES, let $(V, \to)$ be its CFG, with labelling
$\markingsn{\syntactic}{}$\!. The PBES
$\reset{\markingsn{\syntactic}{}}{\ensuremath{\mathcal{E}}}$ is obtained
from $\ensuremath{\mathcal{E}}$ by replacing every PVI
$X(\val{e},\val{e'})$ in every $\rhs{X}$ of $\ensuremath{\mathcal{E}}$ by the formula
$\bigwedge_{\val{v} \in \values{\var{c}}} (\val{v} \not= \val{e} \vee
X(\val{e}, \resetvars{(X,\val{v})}{\markingsn{\syntactic}{}}{\val{e'}}))$.
The function $\resetvars{(X, \val{v})}{\markingsn{\syntactic}{}}{\val{e'}}$ is
defined
positionally as follows:
$$
\text{if $\var[i]{d} \in \marking{\markingsn{}{}}{X,\val{v}}$ we set }
\ind{\resetvars{(X, \val{v})}{\markingsn{\syntactic}{}}{\val{e'}}}{i} =
\val[\!\!i]{e'},
\text{ else }
\ind{\resetvars{(X, \val{v})}{\markingsn{\syntactic}{}}{\val{e'}}}{i} =
\init{\var[i]{d}}.
$$
\end{definition}
Resetting dead parameters preserves the solution of the PBES, as we claim below.
\begin{restatable}{theorem}{resetSound}
\label{thm:resetSound}
Let $\ensuremath{\mathcal{E}}$ be a PBES, and
$\markingsn{}{}$ a labelling. For all predicate variables $X$, and ground terms
$\val{v}$ and $\val{w}$:
$\sem{\ensuremath{\mathcal{E}}}{}{}(X(\sem{\val{v}},\sem{\val{w}})) =
\sem{\reset{\markingsn{\syntactic}{}}{\ensuremath{\mathcal{E}}}}{}{}(X(\sem{\val{v}},\sem{\val{w}}))$.
\end{restatable}
}
As a consequence of the above theorem, instantiation of the reset PBES may become
feasible where this was not the case for the original PBES.
This is nicely illustrated by our running example, which now indeed
can be instantiated to a BES.
\begin{example}\label{exa:reset} Observe that parameter
$k^Z$ is not labelled in any of the $Z$ locations. This means that
$X$'s right-hand side essentially changes to:
$$
\begin{array}{c}
( i \not= 1 \vee j \not= 1 \vee X(2,j,k,l+1)) \wedge \\
\forall m\colon\sort{N}. (i \not= 1 \vee Z(i,2,1,k)) \wedge
\forall m\colon\sort{N}. (i \not= 2 \vee Z(i,2,1,k))
\end{array}
$$
Since variable $m$ no longer occurs in the above formula, the
quantifier can be eliminated. Applying the reset function on the
entire PBES leads to a PBES that we \emph{can} instantiate to a BES
(in contrast to the original PBES),
allowing us to compute that the
solution to $X(1,1,1,1)$ is $\ensuremath{\mathit{true}}$.
This BES has only 7 equations.
\end{example} | 3,490 | 43,359 | en |
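The reset transformation is straightforward to implement once a
labelling has been computed. The following Python fragment is a minimal
sketch of how a single PVI is rewritten, and not the actual
\texttt{pbesstategraph} implementation; the containers \texttt{values},
\texttt{labelling} and \texttt{default}, as well as the tuple-based
formula representation, are assumptions about the surrounding code.
\begin{verbatim}
# Sketch: rewrite one PVI X(e, e') as in the definition of Reset.
#   values         -- all closed value vectors v for the CFPs of X
#   labelling[X,v] -- set of data parameters live at location (X, v)
#   default[d]     -- the fixed default value init(d) of parameter d
def reset_pvi(X, cfp_args, data_params, data_args,
              values, labelling, default):
    conjuncts = []
    for v in values:
        reset_args = [e if d in labelling[(X, v)] else default[d]
                      for d, e in zip(data_params, data_args)]
        # one conjunct per location: (e = v) implies X(v, reset_args)
        conjuncts.append(('implies', ('eq', cfp_args, v),
                          ('pvi', X, v, reset_args)))
    return ('and', conjuncts)
\end{verbatim}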
train | 0.21.13 | \section{Optimisation}\label{sec:local}
Constructing a CFG can suffer from a combinatorial explosion; \emph{e.g.}\xspace,
the size of the CFG underlying the following PBES
is exponential in the number of detected CFPs.
\[
\begin{array}{lcl}
\nu X(i_1,\dots,i_n \colon \sort{B}) & =&
(i_1 \wedge X(\ensuremath{\mathit{false}}, \dots, i_n)) \vee
(\neg i_1 \wedge X(\ensuremath{\mathit{true}}, \dots, i_n)) \vee \\
&\dots \vee
& (i_n \wedge X(i_1, \dots, \ensuremath{\mathit{false}})) \vee
(\neg i_n \wedge X(i_1, \dots, \ensuremath{\mathit{true}}))
\end{array}
\]
In this section we develop an alternative to the analysis of the
previous section which mitigates the combinatorial explosion
but still yields sound results. The
correctness of our alternative is based on the following proposition,
which states that resetting using any labelling that approximates that of
Def.~\ref{def:markingHeuristic} is sound.
\begin{proposition}\label{prop:approx}
Let, for given PBES $\ensuremath{\mathcal{E}}$, $(V^{\semantic}syn, {\smash{\xrightarrow{\semantic}syn}})$ be a
CFG with labelling $\markingsn{\syntactic}{}$, and let $L'$ be
a labelling such that $\marking{\markingsn{\syntactic}{}}{X,\val{v}} \subseteq
L'(X,\val{v})$ for all $(X, \val{v})$. Then for all $X, \val{v}$ and
$\val{w}$:
$\sem{\ensuremath{\mathcal{E}}}{}{}(X(\sem{\val{v}},\sem{\val{w}})) =
\sem{\reset{L'\!}{\ensuremath{\mathcal{E}}}}{}{}(X(\sem{\val{v}},\sem{\val{w}}))$
\end{proposition}
\reportonly{
\begin{proof}
Let $(V^{\semantic}syn, {\smash{\xrightarrow{\semantic}syn}})$ be a
CFG with labelling $\markingsn{\syntactic}{}$, and let $L'$ be
a labelling such that $\marking{\markingsn{\syntactic}{}}{X,\val{v}}
\subseteq L'(X,\val{v})$ for all $(X, \val{v})$.
Define relation $R^{\ensuremath{\mathsf{Reset}}}_{L,L'}$, where $L$ abbreviates
$\markingsn{\syntactic}{}$\!, as follows.
$$
\begin{cases}
X(\semval{v}, \semval{w}) R^{\ensuremath{\mathsf{Reset}}}_{L,L'} \changed{X}(\semval{v},
\sem{\resetvars{(X, \val{v})}{L'}{\val{w}}}{}{}) \\
X(\semval{v}, \semval{w}) R^{\ensuremath{\mathsf{Reset}}}_{L,L'} X(\semval{v}, \semval{w'}) &
\text{if }
X(\semval{v}, \semval{w}) \markrel{\markingsn{\syntactic}{}}
X(\semval{v}, \semval{w'})
\end{cases}
$$
The proof using $R^{\ensuremath{\mathsf{Reset}}}_{L,L'}$ now follows the exact same line of
reasoning as the proof of Theorem~\ref{thm:resetSound}.\qed
\end{proof} | 896 | 43,359 | en |
train | 0.21.14 | }
The idea is to analyse a CFG consisting of disjoint subgraphs for
each individual CFP, where each
subgraph captures which PVIs are under the control of a CFP: only if the
CFP can confirm whether a predicate formula potentially depends on
a PVI will there be an edge in the graph.
As before, let
$\ensuremath{\mathcal{E}}$ be an arbitrary but fixed PBES, $(\ensuremath{\mathsf{source}}\xspace, \ensuremath{\mathsf{target}}\xspace,
\ensuremath{\mathsf{copy}}\xspace)$ a unicity constraint derived from $\ensuremath{\mathcal{E}}$, and
$\var{c}$ a vector of CFPs.
\begin{definition}
\label{def:localCFGHeuristic}
The \emph{local} control flow graph (LCFG) is a graph $(V^{\semantic}loc, \xrightarrow{\semantic}loc)$ with:
\begin{compactitem}
\item $V^{\semantic}loc = \{ (X, n, v) \mid X \in \bnd{\ensuremath{\mathcal{E}}} \land
n \le |\var{c}| \land v \in \values{\var[n]{c}} \}$, and
\item $\xrightarrow{\semantic}loc \subseteq V^{\semantic}loc \times \mathbb{N} \times V^{\semantic}loc$ is the
least relation
satisfying $(X,n,v) \xrightarrow{\semantic}loc[i] (\predphi{\rhs{X}}{i},n,w)$ if:
\begin{compactitem}
\item $\source{X}{i}{n} = v$ and $\dest{X}{i}{n} = w$, or
\item $\source{X}{i}{n} = \bot$, $\predphi{\rhs{X}}{i} \neq X$ and
$\dest{X}{i}{n} = w$, or
\item $\source{X}{i}{n} = \bot$, $\predphi{\rhs{X}}{i} \neq X$ and
$\copied{X}{i}{n} = n$ and $v = w$.
\end{compactitem}
\end{compactitem}
\end{definition}
We write $(X, n, v) \xrightarrow{\semantic}loc[i]$ if there exists some $(Y, m, w)$ such that
$(X, n, v) \xrightarrow{\semantic}loc[i] (Y, m, w)$.
Note that the size of an LCFG is $\mathcal{O}(|\bnd{\ensuremath{\mathcal{E}}}| \times |\var{c}|
\times \max\{ |\values{\var[k]{c}}| ~\mid~ 0 \leq k \leq |\var{c}| \})$.
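To illustrate the construction, the following Python sketch computes the
$i$-labelled LCFG edges contributed by a single PVI; it is an
illustration only, in which the helpers \texttt{pred}, \texttt{values},
\texttt{source}, \texttt{dest} and \texttt{copy} are assumed to model
the unicity constraint, with \texttt{None} playing the role of $\bot$.
\begin{verbatim}
# Sketch: the three clauses of the LCFG definition for PVI i in rhs(X).
def lcfg_edges(X, i, num_cfps, values, pred, source, dest, copy):
    Y = pred(X, i)                    # target variable of the i-th PVI
    edges = []
    for n in range(1, num_cfps + 1):
        s, t, c = source(X, i, n), dest(X, i, n), copy(X, i, n)
        for v in values(n):
            if s == v and t is not None:                  # clause 1
                edges.append(((X, n, v), i, (Y, n, t)))
            elif s is None and Y != X and t is not None:  # clause 2
                edges.append(((X, n, v), i, (Y, n, t)))
            elif s is None and Y != X and c == n:         # clause 3
                edges.append(((X, n, v), i, (Y, n, v)))
    return edges
\end{verbatim}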
We next describe how to label the LCFG in such a way that the
labelling meets the condition of Proposition~\ref{prop:approx},
ensuring soundness of our liveness analysis. The idea of using LCFGs
is that in practice, the use and alteration of a data parameter is entirely
determined by a single CFP, and that only on ``synchronisation points''
of two CFPs (when the values of the two CFPs are such that they
both confirm that a formula may depend on the same PVI) there is
exchange of information in the data parameters.
We first formalise when a
data parameter is involved in a recursion (\emph{i.e.}\xspace, when the parameter may
affect whether a formula depends on a PVI, or when a PVI may modify
the data parameter through a self-dependency or use it to change another parameter).
Let $X \in \bnd{\ensuremath{\mathcal{E}}}$ be an arbitrary bound predicate variable in the
PBES $\ensuremath{\mathcal{E}}$.
\begin{definition}
\label{def:used}
\label{def:changed} Denote $\predinstphi{\rhs{X}}{i}$ by $Y(\var{e})$.
Parameter $\var[j]{d} \in \param{X}$ is:
\begin{compactitem}
\item \emph{used for}
$Y(\var{e})$
if $\var[j]{d} \in \free{\guard{i}{\rhs{X}}}$;
\item \emph{used in}
$Y(\var{e})$ if for some $k$ we have $\var[j]{d} \in \free{\var[k]{e}}$
(where $k \not= j$ in case $X = Y$);
\item
\emph{changed} by
$Y(\var{e})$ if both $X = Y$ and
$\var[j]{d} \neq \var[j]{e}$.
\end{compactitem}
\end{definition}
We say that a data parameter \emph{belongs to} a CFP if that CFP controls
the parameter's complete dataflow.
\begin{definition}
\label{def:belongsTo}
\label{def:rules}
CFP $\var[j]{c}$ \emph{rules}
$\predinstphi{\rhs{X}}{i}$ if $(X, j, v) \xrightarrow{\semantic}loc[i]$ for some $v$.
Let $d \in \param{X} \cap \varset{D}^{\mathit{DP}}$ be a data parameter;
$d$ \emph{belongs to} $\var[j]{c}$ if and only if:
\begin{compactitem}
\item whenever $d$ is
used for \emph{or} in $\predinstphi{\rhs{X}}{i}$, $\var[j]{c}$ rules
$\predinstphi{\rhs{X}}{i}$, and
\item whenever $d$ is changed by $\predinstphi{\rhs{X}}{i}$,
$\var[j]{c}$ rules $\predinstphi{\rhs{X}}{i}$.
\end{compactitem}
The set of data parameters that belong to $\var[j]{c}$ is denoted
by $\belongsto{\var[j]{c}}$.
\end{definition}
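The following Python sketch decides whether a data parameter belongs to
a given CFP; the predicates \texttt{used\_for}, \texttt{used\_in},
\texttt{changed\_by} and \texttt{rules} are assumptions that directly
mirror the two definitions above.
\begin{verbatim}
# Sketch: d belongs to CFP c_j iff every PVI in which d is involved
# (used for, used in, or changed by) is ruled by c_j.
def belongs_to(d, j, bnd, npred, used_for, used_in, changed_by, rules):
    for X in bnd:                       # all bound predicate variables
        for i in range(1, npred(X) + 1):
            involved = (used_for(d, X, i) or used_in(d, X, i)
                        or changed_by(d, X, i))
            if involved and not rules(j, X, i):
                return False            # some occurrence escapes c_j
    return True
\end{verbatim}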
By adding dummy CFPs that can only take on one value, we can ensure that
every data parameter belongs to at least one CFP.
For simplicity and without loss of generality, we can therefore
continue to work under the following assumption.
\begin{assumption}\label{ass:belongs}
Each data parameter in an equation
belongs to at least one CFP.
\end{assumption}
We next describe how to conduct the liveness analysis using the
LCFG. Every live data parameter is only labelled in those subgraphs
corresponding to the CFPs to which it belongs. The labelling
itself is constructed in much the same way as was done in the previous
section.
Our base case labels a vertex $(X, n, v)$ with those parameters
that belong to the CFP and that are significant in $\rhs{X}$ when
$\var[n]{c}$ has value $v$.
The backwards reachability now distinguishes two cases,
based on whether the influence on live variables is internal to the CFP
or via an external CFP.
\begin{definition}
\label{def:relevanceLocalHeuristic}
Let $(V^{\semantic}loc\!, \xrightarrow{\semantic}loc)$ be a LCFG for PBES
$\ensuremath{\mathcal{E}}$. The labelling $\markingsn{\mathit{l}}{} \colon V^{\semantic}loc
\to \mathbb{P}(\varset{D}^{\mathit{DP}})$
is defined as
$\marking{\markingsn{\mathit{l}}{}}{X, n, v} = \bigcup_{k \in \ensuremath{\mathbb{N}}} \marking{\markingsn{\mathit{l}}{k}}{X, n, v}$,
with
$\markingsn{\mathit{l}}{k}$ inductively defined as:
\[
\begin{array}{ll}
\marking{\markingsn{\mathit{l}}{0}}{X, n, v} & =
\{ d \in \belongsto{\var[n]{c}} \mid d \in
\significant{\simplify{\rhs{X}[\var[n]{c} := v]}} \} \\
\marking{\markingsn{\mathit{l}}{k+1}}{X, n, v} & =
\marking{\markingsn{\mathit{l}}{k}}{X, n, v} \\
& \quad \cup \{ d \in \belongsto{\var[n]{c}} \mid
\exists i,w \text{ such that }~ \exists \var[\!\!\!\!\ell]{{d}^{Y}} \in
\marking{\markingsn{\mathit{l}}{k}}{Y,n,w}: \\
& \qquad (X, n, v) \xrightarrow{\semantic}loc[i] (Y, n, w) \land
\affects{d}{\dataphi[\ell]{\rhs{X}}{i}} \} \\
& \quad \cup \{ d \in \belongsto{\var[n]{c}} \mid
\exists i,m,v',w' \text{ such that } (X, n, v) \xrightarrow{\semantic}loc[i] \\
& \qquad \land\ \exists \var[\!\!\!\!\ell]{d^Y} \in
\marking{\markingsn{\mathit{l}}{k}}{Y, m, w'}: \var[\!\!\!\!\ell]{d^Y} \not \in
\belongsto{\var[n]{c}} \\
& \qquad \land\ (X,m,v') \xrightarrow{\semantic}loc[i] (Y,m,w') \land
\affects{d}{\dataphi[\ell]{\rhs{X}}{i}} \}
\end{array}
\]
\end{definition}
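The labelling can be computed by a standard backward fixpoint iteration
over the LCFG. The Python sketch below covers the base case and the
first (CFP-internal) propagation rule; the second, cross-CFP rule is
analogous and omitted for brevity. All helper names are assumptions
about the surrounding implementation.
\begin{verbatim}
# Sketch: least fixpoint of the labelling (internal rule only).
#   vertices    -- the LCFG vertices (X, n, v)
#   edges[i]    -- i-labelled LCFG edges ((X, n, v), (Y, n, w))
#   belongs[n]  -- data parameters belonging to CFP c_n
#   significant(X, n, v, d) -- base case: d significant in
#                              rhs(X)[c_n := v] after simplification
#   affects(X, i, d, dY)    -- d occurs in the argument of PVI i of
#                              rhs(X) that instantiates parameter dY
def compute_labelling(vertices, edges, belongs, significant, affects):
    L = {(X, n, v): {d for d in belongs[n] if significant(X, n, v, d)}
         for (X, n, v) in vertices}
    changed = True
    while changed:
        changed = False
        for i, i_edges in edges.items():
            for ((X, n, v), (Y, _, w)) in i_edges:
                new = {d for d in belongs[n]
                       if d not in L[(X, n, v)]
                       and any(affects(X, i, d, dY)
                               for dY in L[(Y, n, w)])}
                if new:
                    L[(X, n, v)] |= new
                    changed = True
    return L
\end{verbatim}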
train | 0.21.15 | On top of this labelling we define the induced labelling
$\marking{\markingsn{\mathit{l}}{}}{X, \val{v}}$, defined as $d \in
\marking{\markingsn{\mathit{l}}{}}{X, \val{v}}$ iff for all $k$ for
which $d \in \belongsto{\var[k]{c}}$ we have $d \in
\marking{\markingsn{\mathit{l}}{}}{X, k, \val[k]{v}}$. This labelling
over-approximates the labelling of Def.~\ref{def:markingHeuristic}; \emph{i.e.}\xspace, we
have $\marking{\markingsn{\syntactic}{}}{X,\val{v}} \subseteq
\marking{\markingsn{\mathit{l}}{}}{X,\val{v}}$ for all $(X,\val{v})$.
\reportonly{
We formalise this in the following lemma.
\begin{lemma}
Let, for given PBES $\ensuremath{\mathcal{E}}$, $(V^{\semantic}syn, {\smash{\xrightarrow{\semantic}syn}})$ be a global
control flow graph with
labelling $\markingsn{\syntactic}{}$, and let $(V^{\semantic}loc, \xrightarrow{\semantic}loc)$ be
a local control flow graph with
labelling $\markingsn{\mathit{l}}{}$, that has been lifted to the global CFG. Then
$\marking{\markingsn{\syntactic}{}}{X,\val{v}} \subseteq
\marking{\markingsn{\mathit{l}}{}}{X,\val{v}}$ for all $(X,\val{v})$.
\end{lemma}
\begin{proof}
We prove the more general statement that for all natural numbers $n$ it
holds
that
$\forall (X, \val{v}) \in V^{\semantic}syn, \forall d \in
\marking{\markingsn{\syntactic}{n}}{X,
\val{v}}: (\forall j: d \in \belongsto{\var[j]{c}} \implies
d \in \marking{\markingsn{\mathit{l}}{n}}{X, j, \val[j]{v}})$. The lemma
then is an immediate consequence.
We proceed by induction on $n$.
\begin{compactitem}
\item $n = 0$. Let $(X, \val{v})$ and $d \in
\marking{\markingsn{\syntactic}{0}}{X, \val{v}}$
be arbitrary. We need to show that $\forall j: d \in
\belongsto{\var[j]{c}}
\implies d \in \marking{\markingsn{\mathit{l}}{0}}{X, j, \val[j]{v}}$.
Let $j$ be arbitrary such that $d \in \belongsto{\var[j]{c}}$.
Since $d \in \marking{\markingsn{\syntactic}{0}}{X, \val{v}}$, by
definition
$d \in \significant{\simplify{\rhs{X}[\var{c} := \val{v}]}}$, hence also
$d \in \significant{\simplify{\rhs{X}[\var[j]{c} :=
\val[j]{v}}]}$.
Combined with the assumption that $d \in \belongsto{\var[j]{c}}$,
this gives us $d \in \marking{\markingsn{\mathit{l}}{0}}{X, j, \val[j]{v}}$
according to Definition~\ref{def:localCFGHeuristic}.
\item $n = m + 1$. As induction hypothesis assume for all $(X, \val{v})
\in V$:
\begin{equation}\label{eq:IHlocalapprox}
\forall d: d \in
\marking{\markingsn{\syntactic}{m}}{X, \val{v}}
\implies (\forall j: d \in \belongsto{\var[j]{c}}
\implies d \in \marking{\markingsn{\mathit{l}}{m}}{X,j,\val[j]{v}}).
\end{equation}
Let $(X, \val{v})$ be arbitrary with
$d \in \marking{\markingsn{\syntactic}{m+1}}{X, \val{v}}$. Also let $j$
be arbitrary, and assume that $d \in \belongsto{\var[j]{c}}$.
We show that $d \in
\marking{\markingsn{\mathit{l}}{m+1}}{X,j,\val[j]{v}}$ by distinguishing the
cases of Definition~\ref{def:markingHeuristic}.
If $d \in \marking{\markingsn{\syntactic}{m}}{X, \val{v}}$ the
result follows immediately from the induction hypothesis. For the second
case, suppose there are $i \in \mathbb{N}$ and $(Y, \val{w}) \in V$ such
that $(X, \val{v}) \smash{\xrightarrow{\semantic}syn[i]} (Y, \val{w})$,
also assume there is some $\var[\ell]{d} \in
\marking{\markingsn{\syntactic}{m}}{Y, \val{w}}$
with $d \in \free{\dataphi[\ell]{\rhs{X}}{i}}$. Let
$i$ and $\var[\ell]{d}$ be such, and observe that
$Y = \predphi{\rhs{X}}{i}$ and $i \leq \npred{\rhs{X}}$.
According to the induction hypothesis,
$\forall k: \var[\ell]{d} \in \belongsto{\var[k]{c}}
\implies \var[\ell]{d} \in
\marking{\markingsn{\mathit{l}}{m}}{Y, k,
\val[k]{w}}$.
We distinguish two cases.
\begin{compactitem}
\item $\var[\ell]{d}$ belongs to $\var[j]{c}$. According to
\eqref{eq:IHlocalapprox}, we know
$\var[\ell]{d} \in
\marking{\markingsn{\mathit{l}}{m}}{Y, j,
\val[j]{w}}$.
Since $d \in \free{\dataphi[\ell]{\rhs{X}}{i}}$,
we only need to show that $(X, j, \val[j]{v}) \xrightarrow{\semantic}loc[i]
(Y, j, \val[j]{w})$.
We distinguish the cases for $j$ from
Definition~\ref{def:globalCFGHeuristic}.
\begin{compactitem}
\item $\source{X}{i}{j} = \val[j]{v}$ and $\dest{X}{i}{j} =
\val[j]{w}$,
then according to Definition~\ref{def:localCFGHeuristic} we have $(X, j,
\val[j]{v}) \xrightarrow{\semantic}loc[i]
(Y, j, \val[j]{w})$.
\item $\source{X}{i}{j} = \bot$, $\copied{X}{i}{j} = j$ and
$\val[j]{v} = \val[j]{w}$.
In case $Y \neq X$ the edge exists locally, and
we are done.
Now suppose that $Y = X$. Then
$\predinstphi{\rhs{X}}{i}$
is not ruled by $\var[j]{c}$. Furthermore, $\var[\ell]{d}$
is changed in $\predinstphi{\rhs{X}}{i}$, hence
$\var[\ell]{d}$
cannot belong to $\var[j]{c}$, which is a contradiction.
\item $\source{X}{i}{j} = \bot$, $\copied{X}{i}{j} = \bot$ and
$\dest{X}{i}{j} = \val[j]{w}$. This is completely analogous to
the previous case.
\end{compactitem}
\item $\var[\ell]{d}$ does not belong to $\var[j]{c}$.
Recall that there must be some $\var[k]{c}$ such that
$\var[\ell]{d}$ belongs to $\var[k]{c}$, and by assumption now
$\var[\ell]{d}$ does not belong to $\var[j]{c}$. Then
according to Definition~\ref{def:relevanceLocalHeuristic}, $d$ is
marked in $\marking{\markingsn{\mathit{l}}{m+1}}{X, j, \val[j]{v}}$,
provided that $(X, j, \val[j]{v}) \xrightarrow{\semantic}loc[i]$ and $(X, k, v')
\xrightarrow{\semantic}loc[i] (Y, k, \val[k]{w})$ for some
$v'$. Let $v' = \val[k]{v}$ and $w' = \val[j]{w}$, according to the
exact same reasoning as
before, the existence of the edges $(X,j,\val[j]{v}) \xrightarrow{\semantic}loc[i] (Y, j,
\val[j]{w})$ and $(X, k, \val[k]{v}) \xrightarrow{\semantic}loc[i] (Y,
k, \val[k]{w})$ can be shown, completing the proof.\qed | 2,359 | 43,359 | en |
train | 0.21.16 | \end{compactitem}
\end{compactitem}
\end{proof} | 23 | 43,359 | en |
train | 0.21.17 | }
Combined with Prop.~\ref{prop:approx}, this leads to the following
theorem.
\begin{theorem}
We have
$\sem{\ensuremath{\mathcal{E}}}{}{}(X(\semval{v}, \semval{w})) =
\sem{\reset{\markingsn{\mathit{l}}{}}{\ensuremath{\mathcal{E}}}}{}{}(\changed{X}(\semval{v},\semval{w}))$
for all
predicate variables $X$ and ground terms $\val{v}$ and $\val{w}$.
\end{theorem}
The induced labelling $\markingsn{\mathit{l}}{}$ can remain
implicit; in an implementation, the labelling constructed
by Def.~\ref{def:relevanceLocalHeuristic} can be used directly, sidestepping a
combinatorial explosion. | 196 | 43,359 | en |
train | 0.21.18 | \section{Case Studies}\label{sec:experiments}
We implemented our techniques
in the tool \texttt{pbesstategraph} of the mCRL2
toolset~\cite{Cra+:13}. Here, we report on the tool's effectiveness
in simplifying the PBESs originating from model checking problems and
behavioural equivalence checking problems: we compare sizes of the BESs
underlying the original PBESs to those for the PBESs obtained after
running the tool \texttt{pbesparelm} (implementing the techniques
from~\cite{OWW:09}) and those for the PBESs obtained after running
our tool. Furthermore, we compare the total times needed for reducing the PBES,
instantiating it into a BES, and solving this BES.
\begin{table}[!ht]
\small
\caption{Sizes of the BESs underlying (1) the original PBESs, and the
reduced PBESs using (2)
\texttt{pbesparelm}, (3) \texttt{pbesstategraph} (global)
and (4) \texttt{pbesstategraph} (local).
For the original PBES, we report the number of generated BES equations,
and the time required for generating and
solving the resulting BES. For the other PBESs, we state the total
reduction in percentages (\emph{i.e.}\xspace, $100*(|original|-|reduced|)/|original|$),
and the reduction of the times (in percentages, computed in the same way),
where for times we additionally include the \texttt{pbesstategraph/parelm}
running times.
Verdict $\surd$ indicates the problem
has solution $\ensuremath{\mathit{true}}$; $\times$ indicates it is $\ensuremath{\mathit{false}}$.
}
\label{tab:results}
\centering
\scriptsize
\begin{tabular}{lc@{\hspace{5pt}}|@{\hspace{5pt}}rrrr@{\hspace{5pt}}|@{\hspace{5pt}}rrrr@{\hspace{5pt}}|@{\hspace{5pt}}c@{}}
& \multicolumn{1}{c@{\hspace{10pt}}}{} & \multicolumn{4}{c}{Sizes} &
\multicolumn{4}{c}{Times}&Verdict\\
\cmidrule(r){3-6}
\cmidrule(r){7-10}
\cmidrule{11-11}\\[-1.5ex]
& \multicolumn{1}{c}{} & \multicolumn{1}{@{\hspace{5pt}}c}{Original} &
\multicolumn{1}{c}{\texttt{parelm}} &
\multicolumn{1}{c}{\texttt{st.graph}} &
\multicolumn{1}{c@{\hspace{5pt}}}{\texttt{st.graph}} &
\multicolumn{1}{c}{Original}
& \multicolumn{1}{c}{\texttt{parelm}} & \multicolumn{1}{c}{\texttt{st.graph}} &
\multicolumn{1}{c@{\hspace{5pt}}}{\texttt{st.graph}} & \\
& \multicolumn{1}{c@{\hspace{10pt}}}{$|D|$}
&&& \multicolumn{1}{c}{\texttt{(global)}} &
\multicolumn{1}{c}{\texttt{(local)}} &&& \multicolumn{1}{c}{\texttt{(global)}}
&
\multicolumn{1}{c}{\texttt{(local)}}& \\
\\[-1ex]
\toprule
\\[-1ex]
\multicolumn{4}{c}{Model Checking Problems} \\
\cmidrule{1-4} \\[-1ex]
\multicolumn{11}{l}{\textbf{No deadlock}} \\[.5ex]
\emph{Onebit} & $2$ & 81,921 & 86\% & 89\% & 89\% & 15.7 & 90\% & 85\% & 90\% & $\surd$ \\
& $4$ & 742,401 & 98\% & 99\% & 99\% & 188.5 & 99\% & 99\% & 99\% & $\surd$ \\
\emph{Hesselink} & $2$ & 540,737 & 100\% & 100\% & 100\% & 64.9 & 99\% & 95\% & 99\% & $\surd$ \\
& $3$ & 13,834,801 & 100\% & 100\% & 100\% & 2776.3 & 100\% & 100\% & 100\% & $\surd$ \\
\\[-1ex]
\multicolumn{11}{l}{\textbf{No spontaneous generation of messages}} \\[.5ex]
\emph{Onebit} & $2$ & 185,089 & 83\% & 88\% & 88\% & 36.4 & 87\% & 85\% & 88\% & $\surd$ \\
& $4$ & 5,588,481 & 98\% & 99\% & 99\% & 1178.4 & 99\% & 99\% & 99\% & $\surd$ \\
\\[-1ex]
\multicolumn{11}{l}{\textbf{Messages that are read are inevitably sent}}
\\[.5ex]
\emph{Onebit} & $2$ & 153,985 & 63\% & 73\% & 73\% & 30.8 & 70\% & 62\% & 73\% & $\times$ \\
& $4$ & 1,549,057 & 88\% & 92\% & 92\% & 369.6 & 89\% & 90\% & 92\% & $\times$ \\
\\[-1ex]
\multicolumn{11}{l}{\textbf{Messages can overtake one another}} \\[.5ex]
\emph{Onebit} & $2$ & 164,353 & 63\% & 73\% & 70\% & 36.4 & 70\% & 67\% & 79\% & $\times$ \\
& $4$ & 1,735,681 & 88\% & 92\% & 90\% & 332.0 & 88\% & 88\% & 90\% & $\times$ \\
\\[-1ex]
\multicolumn{11}{l}{\textbf{Values written to the register can be read}}
\\[.5ex]
\emph{Hesselink} & $2$ & 1,093,761 & 1\% & 92\% & 92\% & 132.8 & -3\% & 90\% & 91\% & $\surd$ \\
& $3$ & 27,876,961 & 1\% & 98\% & 98\% & 5362.9 & 25\% & 98\% & 99\% & $\surd$ \\
\\[-1ex]
\multicolumn{4}{c}{Equivalence Checking Problems} \\
\cmidrule{1-4} \\[-1ex]
\multicolumn{11}{l}{\textbf{Branching bisimulation equivalence}} \\[.5ex]
\emph{ABP-CABP} & $2$ & 31,265 & 0\% & 3\% & 0\% & 3.9 & -4\% & -1880\% & -167\% & $\surd$ \\
& $4$ & 73,665 & 0\% & 5\% & 0\% & 8.7 & -7\% & -1410\% & -72\% & $\surd$ \\
\emph{Buf-Onebit} & $2$ & 844,033 & 16\% & 23\% & 23\% & 112.1 & 30\% & 28\% & 31\% & $\surd$ \\
& $4$ & 8,754,689 & 32\% & 44\% & 44\% & 1344.6 & 35\% & 44\% & 37\% & $\surd$ \\
\emph{Hesselink I-S} & $2$ & 21,062,529 & 0\% & 93\% & 93\% & 4133.6 & 0\% & 74\% & 91\% & $\times$ \\
\\[-1ex]
\multicolumn{11}{l}{\textbf{Weak bisimulation equivalence}} \\[.5ex]
\emph{ABP-CABP} & $2$ & 50,713 & 2\% & 6\% & 2\% & 5.3 & 2\% & -1338\% & -136\% & $\surd$ \\
& $4$ & 117,337 & 3\% & 10\% & 3\% & 13.0 & 4\% & -862\% & -75\% & $\surd$ \\
\emph{Buf-Onebit} & $2$ & 966,897 & 27\% & 33\% & 33\% & 111.6 & 20\% & 29\% & 28\% & $\surd$ \\
& $4$ & 9,868,225 & 41\% & 51\% & 51\% & 1531.1 & 34\% & 49\% & 52\% & $\surd$ \\
\emph{Hesselink I-S} & $2$ & 29,868,273 & 4\% & 93\% & 93\% & 5171.7 & 7\% & 79\% & 94\% & $\times$ \\
\\[-1ex]
\bottomrule
\end{tabular}
\end{table}
Our cases are taken from the literature. We here present a selection of the
results. For the model checking problems,
we considered the \emph{Onebit} protocol, which is a complex sliding window
protocol, and Hesselink's handshake register~\cite{Hes:98}.
Both protocols are parametric in the set of values that can be read
and written. A selection of properties of varying complexity and
varying nesting degree, expressed in the data-enhanced modal
$\mu$-calculus, is checked.\footnote{\reportonly{The formulae are contained in
the appendix;}
\paperonly{The formulae are contained in \cite{KWW:13report};} here we
use textual characterisations instead.}
For the behavioural equivalence checking problems, we considered a
number of communication protocols such as the \emph{Alternating Bit
Protocol} (ABP), the \emph{Concurrent Alternating Bit Protocol} (CABP),
a two-place buffer (Buf) and the aforementioned Onebit protocol. Moreover,
we compare an implementation of Hesselink's register to a specification
of the protocol; the two are trace equivalent (an equivalence for
which currently no PBES encoding exists), but they are inequivalent with
respect to the two types of behavioural equivalence checking problems we
consider here: branching bisimilarity and weak bisimilarity.
The experiments were performed on a 64-bit Linux machine with kernel
version 2.6.27, consisting of 28 Intel\textregistered\ Xeon\textregistered\ E5520
Processors running
at 2.27GHz, and 1TB of shared main memory. None of our experiments use
multi-core features. We used revision 12637 of the mCRL2 toolset,
and the complete scripts for our test setup are available at
\url{https://github.com/jkeiren/pbesstategraph-experiments}.
The results are reported in Table~\ref{tab:results};
higher percentages mean better reductions/\-smaller
runtimes.\reportonly{\footnote{The absolute sizes and times are included in the
appendix.}}
The experiments confirm that our technique can achieve as much as an
additional reduction of about 97\% over \texttt{pbesparelm}; see the
model checking and equivalence problems
for Hesselink's
register. Compared to the sizes of the BESs underlying the original PBESs,
the reductions can be immense. Furthermore,
reducing the PBES using the local stategraph algorithm, instantiating, and
subsequently solving it is typically faster than using the global stategraph
algorithm,
even when the reduction achieved by the first is less.
For the equivalence checking
cases, when no reduction is achieved the local version of stategraph sometimes
results in substantially larger running times than parelm, which in turn already
adds an overhead compared to the original; however, for the cases in which this
happens the original running time is around or below 10 seconds, so the
observed increase may be due to measurement inaccuracies.
train | 0.21.19 | \section{Conclusions and Future Work}\label{sec:conclusions}
We described a static analysis technique for PBESs that uses
a notion of control flow to determine when data parameters become
irrelevant. Using this information, the PBES can be simplified, leading
to smaller underlying BESs. Our static analysis technique
enables solving, through instantiation, PBESs that so far could not be solved
this way, as shown by our running example.
Compared to existing techniques, our new static analysis technique can lead to
additional reductions of up to 97\% in practical cases, as illustrated by our
experiments. Furthermore, if a reduction can be achieved the technique can
significantly speed up instantiation and solving, and in case no reduction is
possible, it typically does not negatively impact the total running time.
Several techniques described in this paper can be used
to enhance existing reduction techniques for PBESs. For instance,
our notion of a \emph{guard} of a predicate variable instance
in a PBES can be put to use to cheaply improve on the heuristics for
constant elimination~\cite{OWW:09}. Moreover, we believe that our
(re)construction of control flow graphs from PBESs can be used to
automatically generate invariants for PBESs. The theory on invariants
for PBESs is well-established, but still lacks proper
tool support.
\ifreport
\appendix
\section{$\mu$-calculus formulae}\label{app:experiments}
Below, we list the formulae that were verified in
Section~\ref{sec:experiments}. All formulae are denoted in
the first-order modal $\mu$-calculus, an mCRL2-native data
extension of the modal $\mu$-calculus. The formulae assume that
there is a data specification defining a non-empty sort $D$ of
messages, and a set of parameterised actions that are present
in the protocols. The scripts we used to generate our results,
and the complete data of the experiments are available from
\url{https://github.com/jkeiren/pbesstategraph-experiments}.
\subsection{Onebit protocol verification}
\begin{itemize}
\item No deadlock:
\[
\nu X. [\ensuremath{\mathit{true}}]X \wedge \langle \ensuremath{\mathit{true}} \rangle \ensuremath{\mathit{true}}
\]
Invariantly, over all reachable states at least one action is enabled.
\item Messages that are read are inevitably sent:
\[
\nu X. [\ensuremath{\mathit{true}}]X \wedge \forall d\colon D.[ra(d)]\mu Y.([\overline{sb(d)}]Y \wedge \langle \ensuremath{\mathit{true}} \rangle \ensuremath{\mathit{true}})
\]
The protocol receives messages via action $ra$ and tries to send these
to the other party. The other party can receive these via action $sb$.
\item Messages can be overtaken by other messages:
\[
\begin{array}{ll}
\mu X. \langle \ensuremath{\mathit{true}} \rangle X \vee
\exists d\colon D. \langle ra(d) \rangle
\mu Y. \\
\qquad \Big( \langle \overline{sb(d)} \rangle Y \vee \exists d'\colon D. d \neq d' \wedge
\langle ra(d') \rangle \mu Z. \\
\qquad \qquad ( \langle \overline{sb(d)} \rangle Z \vee
\langle sb(d') \rangle \ensuremath{\mathit{true}}) \\
\qquad \Big)
\]
That is, there is a trace in which message $d$ is read, and is still in the
protocol when another message $d'$ is read, which then is sent to the receiving
party before message $d$.
\item No spontaneous messages are generated:
\[
\begin{array}{ll}
\nu X.
[\overline{\exists d\colon D. ra(d)}]X \wedge\\
\qquad \forall d':D. [ra(d')]\nu Y(m_1\colon D = d'). \\
\qquad\qquad \Big( [\overline{\exists d:D. ra(d) \vee sb(d) }]Y(m_1) \wedge \\
\qquad\qquad\quad \forall e\colon D.[sb(e)]((m_1 = e) \wedge X) \wedge \\
\qquad\qquad\quad \forall e':D. [ra(e')]\nu Z(m_2\colon D = e'). \\
\qquad\qquad\qquad \Big( [\overline{\exists d \colon D. ra(d) \vee sb(d)}]Z(m_2) \wedge \\
\qquad\qquad\qquad\quad \forall f:D. [sb(f)]((f = m_1) \wedge Y(m_2))\\
\qquad\qquad\qquad\quad \Big)\\
\qquad\qquad \Big)
\end{array}
\]
Since the onebit protocol can contain two messages at a time, the
formula states that only messages that are received can be subsequently
sent again. This requires storing messages that are currently in the buffer
using parameters $m_1$ and $m_2$.
\end{itemize}
\subsection{Hesselink's register}
\begin{itemize}
\item No deadlock:
\[\nu X. [\ensuremath{\mathit{true}}]X \wedge \langle \ensuremath{\mathit{true}} \rangle \ensuremath{\mathit{true}} \]
\item Values that are written to the register can be read from the
register if no other value is written to the register in the meantime.
\[
\begin{array}{l}
\nu X. [\ensuremath{\mathit{true}}]X \wedge
\forall w \colon D . [begin\_write(w)]\nu Y.\\
\qquad\qquad \Big( [\overline{end\_write}]Y \wedge
[end\_write]\nu Z. \\
\qquad\qquad\qquad \Big( [\overline{\exists d:D.begin\_write(d)}]Z \wedge
[begin\_read]\nu W.\\
\qquad\qquad\qquad\qquad ([\overline{\exists d:D.begin\_write(d)}]W \wedge \\
\qquad\qquad\qquad\qquad\qquad \forall w': D . [end\_read(w')](w = w') ) \\
\qquad\qquad\qquad \Big) \\
\qquad\qquad \Big) \\
\end{array}
\]
\end{itemize}
\section{Absolute sizes and times for the experiments}
\begin{table}[!ht]
\small
\caption{Sizes of the BESs underlying (1) the original PBESs, and the
reduced PBESs using (2)
\texttt{pbesparelm}, (3) \texttt{pbesstategraph} (global)
and (4) \texttt{pbesstategraph} (local).
For each PBES, we report the number of generated BES equations,
and the time required for generating and
solving the resulting BES. For the reduced PBESs, we additionally include the
\texttt{pbesstategraph/parelm}
running times.
Verdict $\surd$ indicates the problem
has solution $\ensuremath{\mathit{true}}$; $\times$ indicates it is $\ensuremath{\mathit{false}}$.
}
\label{tab:results_absolute}
\input{table_r12637_absolute}
\end{table}
\fi
\end{document} | 1,910 | 43,359 | en |
train | 0.22.0 | \begin{document}
\title{The growth of fine Selmer groups}
\author{Meng Fai Lim}
\email{[email protected]}
\address{School of Mathematics and Statistics, Central China Normal University, No.152, Luoyu Road, Wuhan, Hubei 430079, CHINA}
\author{V. Kumar Murty}
\email{[email protected]}
\address{Department of Mathematics, University of Toronto, 40 St. George Street, Toronto, CANADA}
\date{\today}
\thanks{Research of VKM partially supported by an NSERC Discovery grant}
\keywords{Fine Selmer groups, abelian variety,
class groups, $p$-rank.}
\maketitle
\begin{abstract}
Let $A$ be an abelian variety defined over a number field
$F$. In this paper, we will investigate the growth of the $p$-rank
of the fine Selmer group in three situations. In particular, in
each of these situations, we show that there is a strong analogy
between the growth of the $p$-rank of the fine Selmer group and the
growth of the $p$-rank of the class groups.
\end{abstract}
\section{Introduction}
In the study of rational points on Abelian varieties, the Selmer group plays an important role. In Mazur's fundamental work \cite{Mazur}, the
Iwasawa theory of Selmer groups was introduced. Using this theory, Mazur was able to describe the growth of the
size of the $p$-primary part of the Selmer group in $\mathfrak{m}athds{Z}_p$-towers. Recently, several authors have initiated the study of a certain subgroup, called the fine Selmer group. This subgroup, as well as the `fine' analogues of the Mordell-Weil group and Shafarevich-Tate group, seem to have stronger finiteness properties than the classical Selmer group (respectively,
Mordell-Weil or Shafarevich-Tate groups). The fundamental paper of Coates and Sujatha \cite{CS} explains some of these properties.
\medskip
Let $F$ be a number field and $p$ an odd prime. Let $A$ be an Abelian variety defined over $F$ and let $S$ be a finite set
of primes of $F$ including the infinite primes, the primes where $A$ has bad reduction, and the primes of $F$ over $p$.
Fix an algebraic closure $\overline{F}$ of $F$ and denote by $F_S$ the maximal subfield of $\overline{F}$ containing
$F$ which is unramified outside $S$. We set $G = \Gal(\overline{F}/F)$ and $G_S = \Gal(F_S/F)$.
\mathfrak{m}edskip
The usual $p^{\infty}$-Selmer group of $A$ is defined by
$$
\Sel_{p^{\infty}}(A/F) = \ker\Big(H^1(G,A[p^{\infty}])\longrightarrow
\bigoplus_{v}H^1(F_v, A)[p^{\infty}]\Big).
$$
Here, $v$ runs through all the primes of $F$ and as usual, for a
$G$-module $M$, we write $H^*(F_v,M)$ for the Galois cohomology of
the decomposition group at $v$. Following \cite{CS}, the
$p^{\infty}$-fine Selmer group of $A$ is defined by
\[
R_{p^\infty}(A/F) = \ker\Big(H^1(G_S(F),A[p^{\infty}])\longrightarrow \bigoplus_{v\in S}H^1(F_v,
A[p^{\infty}])\Big).
\]
This definition is in fact independent of the choice of $S$ as can be seen from the exact sequence (Lemma \ref{indep of S})
$$
0 \longrightarrow R_{p^\infty}(A/F) \longrightarrow \Sel_{p^{\infty}}(A/F)
\longrightarrow \bigoplus_{v|p}A(F_v)\otimes\mathds{Q}_p/\mathds{Z}_p.
$$
\medskip
Coates and Sujatha study this group over a field $F_\infty$ contained in $F_S$ and for which $\Gal(F_\infty/F)$
is a $p$-adic Lie group. They set
$$
R_{p^\infty}(A/F_\infty)\ =\ \varinjlim_{L} R_{p^\infty}(A/L)
$$
where the inductive limit ranges over finite extensions $L$ of $F$
contained in $F_\infty$. When $F_\infty = F^{cyc}$ is the cyclotomic
$\mathds{Z}_p$-extension of $F$, they conjecture that the Pontryagin dual
$Y_{p^\infty}(A/F_\infty)$ is a finitely generated $\mathds{Z}_p$-module.
This is known {\em not} to be true for the dual of the classical
Selmer group. A concrete example of such is the elliptic curve
$E/\mathds{Q}$ of conductor 11 which is given by
\[ y^2 +y=x^3 -x^2 -10 x- 20. \]
For the prime $p = 5$, it is known that the Pontryagin dual of the
$5^{\infty}$-Selmer group over $\mathds{Q}^{\mathrm{cyc}}$ is not finitely generated
over $\mathds{Z}_5$ (see \cite[\S 10, Example 2]{Mazur}). On the other hand
it is expected to be true if the Selmer group is replaced by a
module made out of class groups. Thus, in some sense, the fine
Selmer group seems to approximate the class group. One of the themes
of our paper is to give evidence of that by making the relationship
precise in three instances.
\medskip
Coates and Sujatha also study extensions for which $G_\infty = \Gal(F_\infty/F)$ is a $p$-adic Lie group of
dimension larger than $1$ containing $F^{cyc}$. They make a striking conjecture that the dual
$Y_{p^\infty}(A/F_\infty)$ of the fine Selmer group is pseudo-null as a module over the Iwasawa algebra
$\Lambda(G_\infty)$. While we have nothing to say about this conjecture, we investigate the growth of $p$-ranks of
fine Selmer groups in some pro-$p$ towers that are {\em not} $p$-adic analytic.
\medskip
\section{Outline of the Paper}
Throughout this paper, $p$ will always denote an odd prime. In the
first situation, we study the growth of the fine Selmer groups over
certain $\mathds{Z}_p$-extensions. It was first observed in \cite{CS} that
the growth of the fine Selmer
group of an abelian variety in a cyclotomic $\mathds{Z}_p$-extension exhibits
phenomena parallel to the growth of the $p$-part of the class groups
over a cyclotomic $\mathds{Z}_p$-extension. Subsequent papers \cite{A, JhS,
LimFine} in this direction have further confirmed this observation.
(Actually, the variation of the fine Selmer group of a more general
$p$-adic representation is also considered in \cite{JhS, LimFine}. In
this article, we will only be concerned with the
fine Selmer groups of abelian varieties.) In this paper, we will
show that the growth of the $p$-rank of the fine Selmer group of an
abelian variety in a certain class of $\mathds{Z}_p$-extensions is determined
by the growth of the $p$-rank of ideal class groups in the
$\mathds{Z}_p$-extension in question (see Theorem \ref{asymptotic compare})
and vice versa. We will also specialize our theorem to the
cyclotomic $\mathds{Z}_p$-extension to recover a theorem of Coates-Sujatha
\cite[Theorem 3.4]{CS}.
\medskip
In the second situation, we investigate the growth of the fine
Selmer groups over $\mathds{Z}/p$-extensions of a fixed number field. We
note that it follows from an application of the Grunwald-Wang
theorem that the $p$-rank of the ideal class groups grows
unboundedly in $\mathds{Z}/p$-extensions of a fixed number field. Recently,
many authors have made analogous studies in this direction replacing
the ideal class group by the classical Selmer group of an abelian
variety (see \cite{Ba, Br, Ce, Mat09}). In this article, we
investigate the analogous situation for the fine Selmer group of an
abelian variety, and we show that the $p$-rank of the fine Selmer
group of the abelian variety grows unboundedly in $\mathds{Z}/p$-extensions
of a fixed number field (see Theorem \ref{class Z/p}). Note that the
fine Selmer group is a subgroup of the classical Selmer group, and
therefore, our results will also recover some of their results.
\medskip
In the last situation, we consider the growth of the fine Selmer
group in infinite unramified pro-$p$ extensions. It is known that
the $p$-rank of the class groups is unbounded in such a tower under
suitable assumptions. Our result will again show that we have the
same phenomenon for the $p$-rank of fine Selmer groups (see Theorem
\ref{Fine Sel in class tower}). As above, our result will also imply
some of the main results in \cite{LM, Ma, MO}, where analogous
studies in this direction have been made for the classical Selmer
group of an abelian variety.
\section{$p$-rank} \label{some cohomology lemmas}
In this section, we record some basic results on Galois cohomology
that will be used later. For an abelian group $N$, we define its
$p$-rank to be the $\mathds{Z}/p$-dimension of $N[p]$, which we denote by
$r_p(N)$. If $G$ is a pro-$p$ group, we write $h_i(G) =
r_p\big(H^i(G,\mathds{Z}/p)\big)$. We now state the following lemma which
gives an estimate of the $p$-rank of the first cohomology group.
\begin{lem} \label{cohomology rank inequalities} Let $G$ be a pro-$p$ group,
and let $M$ be a discrete $G$-module which is cofinitely generated
over $\mathds{Z}_p$.
If $h_1(G)$ is finite, then $r_p\big(H^1(G,M)\big)$ is finite, and
we have the following estimates for
$r_p\big(H^1(G,M)\big)$:
\[
h_1(G)r_p(M^G) - r_p\big( (M/M^G)^G\big)
\leq r_p\big(H^1(G,M)\big) \leq h_1(G)\big(\mathrm{corank}_{\mathds{Z}_p}(M)
+ \log_p\big(\big| M/M_{\mathrm{div}}\big|\big)\big).
\]
\end{lem}
\begin{proof}
See \cite[Lemma 3.2]{LM}.
\end{proof}
We record another useful estimate.
\begin{lem} \label{estimate lemma}
Let
\[ W \longrightarrow X \longrightarrow Y \longrightarrow Z\]
be an exact sequence of cofinitely generated abelian groups. Then we have
\[ \Big| r_p(X) - r_p(Y) \Big| \leq 2r_p(W) + r_p(Z).\]
\end{lem}
\begin{proof} It suffices to show the lemma for the exact sequence
\[ 0\longrightarrow W \longrightarrow X
\longrightarrow Y \longrightarrow Z \longrightarrow 0.\]
We break up the exact sequence into two short exact sequences
\[ 0\longrightarrow W \longrightarrow X
\longrightarrow C \longrightarrow 0, \]
\[ 0\longrightarrow C \longrightarrow Y
\longrightarrow Z \longrightarrow 0.\]
From these short exact sequences, we obtain two exact sequences of
finite dimensional $\mathds{Z}/p$-vector spaces (since $W$, $X$, $Y$ and $Z$
are cofinitely generated abelian groups)
\[ 0\longrightarrow W[p] \longrightarrow X[p]
\longrightarrow C[p] \longrightarrow P \longrightarrow 0 , \]
\[ 0\longrightarrow C[p] \longrightarrow Y[p]\longrightarrow Q
\longrightarrow 0, \]
where $P\subseteq W/p$ and $Q\subseteq Z[p]$. It follows from these
two exact sequences and a straightforward calculation that we have
\[ r_p(X) - r_p(Y) = r_p(W) - r_p(P) - r_p(Q). \]
The inequality of the lemma is immediate from this.
\end{proof}
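\begin{remark}
In the notation of the above proof, the omitted calculation is the
following dimension count: taking alternating sums of
$\mathds{Z}/p$-dimensions along the two exact sequences gives
\[ r_p(X) = r_p(W) + r_p(C) - r_p(P) \quad\mbox{and}\quad r_p(Y) = r_p(C) + r_p(Q), \]
so that $r_p(X) - r_p(Y) = r_p(W) - r_p(P) - r_p(Q)$. Since $W$ is
cofinitely generated, we have $r_p(P) \leq r_p(W/p) \leq r_p(W[p]) =
r_p(W)$, and clearly $r_p(Q) \leq r_p(Z)$. Hence
\[ \Big| r_p(X) - r_p(Y) \Big| \leq r_p(W) + r_p(P) + r_p(Q) \leq 2r_p(W) + r_p(Z). \]
\end{remark}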
\medskip
train | 0.22.1 | \section{$p$-rank} \lambdabel{some cohomology lemmas}
In this section, we record some basic results on Galois cohomology
that will be used later. For an abelian group $N$, we define its
$p$-rank to be the $\mathfrak{m}athds{Z}/p$-dimension of $N[p]$ which we denote by
$r_p(N)$. If $G$ is a pro-$p$ group, we write $h_i(G) =
r_p\big(H^i(G,\mathfrak{m}athds{Z}/p)\big)$. We now state the following lemma which
gives an estimate of the $p$-rank of the first cohomology group.
\begin{lem} \lambdabel{cohomology rank inequalities} Let $G$ be a pro-$p$ group,
and let $M$ be a discrete $G$-module which is cofinitely generated
over $\mathfrak{m}athds{Z}p$.
If $h_1(G)$ is finite, then $r_p\big(H^1(G,M)\big)$ is finite, and
we have the following estimates for
$r_p\big(H^1(G,M)\big)$
\[
h_1(G)r_p(M^G) -r_p \big( (M/M^G)^G\big)
\leq r_p\big(H^1(G,M)\big) \leq h_1(G)\big(\mathfrak{m}athrm{corank}_{\mathfrak{m}athds{Z}p}(M)
+ \log_p(\big| M/M_{\mathfrak{m}athrm{div}}\big|)\big).
\]
\end{lem}
\begin{prop}f
See \cite[Lemma
3.2]{LM}. \end{prop}f
We record another useful estimate.
\begin{lem} \lambdabel{estimate lemma}
Let
\[ W \longrightarrow X \longrightarrow Y \longrightarrow Z\]
be an exact sequence of cofinitely generated abelian groups. Then we have
\[ \Big| r_p(X) - r_p(Y) \Big| \leq 2r_p(W) + r_p(Z).\]
\end{lem}
\begin{prop}f It suffices to show the lemma for the exact sequence
\[ 0\longrightarrow W \longrightarrow X
\longrightarrow Y \longrightarrow Z \longrightarrow 0.\]
We break up the exact sequence into two short exact sequences
\[ 0\longrightarrow W \longrightarrow X
\longrightarrow C \longrightarrow 0, \]
\[ 0\longrightarrow C \longrightarrow Y
\longrightarrow Z \longrightarrow 0.\]
From these short exact sequences, we obtain two exact sequences of
finite dimensional $\mathfrak{m}athds{Z}/p$-vector spaces (since $W$, $X$, $Y$ and $Z$
are cofinitely generated abelian groups)
\[ 0\longrightarrow W[p] \longrightarrow X[p]
\longrightarrow C[p] \longrightarrow P \longrightarrow 0 , \]
\[ 0\longrightarrow C[p] \longrightarrow Y[p]\longrightarrow Q
\longrightarrow 0, \]
where $P\subseteq W/p$ and $Q\subseteq Z[p]$. It follows from these
two exact sequences and a straightforward calculation that we have
\[ r_p(X) - r_p(Y) = r_p(W) - r_p(P) - r_p(Q). \]
The inequality of the lemma is immediate from this.
\end{prop}f
\mathfrak{m}edskip
\section{Fine Selmer groups} \label{Fine Selmer group section}
As before, $p$ will denote an odd prime. Let $A$ be an abelian
variety over a number field $F$. Let $S$ be a finite set of primes
of $F$ which contains the primes above $p$, the primes of bad
reduction of $A$ and the archimedean primes. Denote by $F_S$ the
maximal algebraic extension of $F$ unramified outside $S$. We will
write $G_S(F) = \Gal(F_S/F)$.
\medskip
As stated in the introduction and following
\cite{CS}, the fine Selmer group of $A$ is defined by
\[ R(A/F) = \ker\Big(H^1(G_S(F),A[p^{\infty}])\longrightarrow \bigoplus_{v\in S}H^1(F_v,
A[p^{\infty}])\Big). \]
(Note that we have dropped the subscript $p^\infty$ on $R(A/F)$ as $p$ is fixed.)
\medskip
To facilitate further discussion, we also recall the definition of
the classical Selmer group of $A$ which is given by
\[ \Sel_{p^{\infty}}(A/F) = \ker\Big(H^1(F,A[p^{\infty}])\longrightarrow
\bigoplus_{v}H^1(F_v, A)[p^{\infty}]\Big), \] where $v$ runs through
all the primes of $F$. (Note the difference of the position of the
``$[p^{\infty}]$'' in the local cohomology groups in the
definitions.)
\medskip
At first glance, it may seem that the definition of the fine
Selmer group depends on the choice of the set $S$. We shall show
that this is not the case.
\begin{lem} \label{indep of S}
We have an exact sequence
\[ 0 \longrightarrow R(A/F) \longrightarrow \Sel_{p^{\infty}}(A/F)
\longrightarrow \bigoplus_{v|p}A(F_v)\otimes\mathds{Q}_p/\mathds{Z}_p. \]
In particular, the definition of the fine
Selmer group does not depend on the choice of the set $S$. \end{lem}
\begin{proof} Let $S$ be a finite set of primes of $F$ which contains the
primes above $p$, the primes of bad reduction of $A$ and the
archimedean primes. Then by \cite[Chap. I, Corollary 6.6]{Mi}, we
have the following description of the Selmer group
\[ \Sel_{p^{\infty}}(A/F) = \ker\Big(H^1(G_S(F),A[p^{\infty}])\longrightarrow
\bigoplus_{v\in S}H^1(F_v, A)[p^{\infty}]\Big). \]
Combining this description with the definition of the fine Selmer
group and an easy diagram-chasing argument, we obtain the required
exact sequence (noting that $A(F_v)\otimes\mathds{Q}_p/\mathds{Z}_p = 0$ for $v\nmid
p$). \end{proof}
\begin{remark} In \cite{Wu}, Wuthrich used the exact sequence in the lemma for
the definition of the fine Selmer group.
\end{remark}
We end the section with the following simple lemma which gives a
lower bound for the $p$-rank of the fine Selmer group in terms of the
$p$-rank of the $S$-class group. This will be used in Sections
\ref{unboundness} and \ref{unramified pro-p}.
\begin{lem} \label{lower bound}
Let $A$ be an abelian variety defined over a
number field $F$. Suppose that $A(F)[p]\neq 0$. Then we have
\[r_p\big(R(A/F)\big) \geq r_p(\Cl_S(F))r_p(A(F)[p])-2d, \]
where $d$ denotes the dimension of the abelian variety $A$.
\end{lem}
\begin{proof}
Let $H_S$ be the $p$-Hilbert $S$-class field of $F$ which, by definition, is the maximal abelian unramified $p$-extension
of $F$ in which all primes in $S$ split completely. Consider the
following diagram
\[ \entrymodifiers={!! <0pt, .8ex>+} \SelectTips{eu}{}
\xymatrix{
0 \ar[r] & R(A/F) \ar[d]^{\alpha} \ar[r] & H^1(G_S(F),A[p^{\infty}]) \ar[d]^{\beta}
\ar[r]^{} & \displaystyle\bigoplus_{v\in S}H^1(F_v,
A[p^{\infty}]) \ar[d]^{\gamma} \\
0 \ar[r] & R(A/H_S) \ar[r] & H^1(G_S(H_S),A[p^{\infty}])
\ar[r] & \displaystyle\bigoplus_{v\in S}\bigoplus_{w|v}H^1(H_{S,w},
A[p^{\infty}]) }
\]
with exact rows. Here the vertical maps are given by the restriction maps.
Write $\gamma = \oplus_v \gamma_v$, where
\[ \gamma_v : H^1(F_v,
A[p^{\infty}]) \longrightarrow \bigoplus_{w|v}H^1(H_{S,w},
A[p^{\infty}]). \]
It follows from the inflation-restriction sequence that $\ker
\gamma_v = H^1(G_v,
A(H_{S,v})[p^{\infty}])$, where $G_v$ is the decomposition group of
$\Gal(H_S/F)$ at $v$. On the other hand, by the definition of $H_S$,
all the primes of $F$ in $S$ split completely in $H_S$, and
therefore, we have $G_v=1$ which in turn implies that $\ker \gamma
=0$. Similarly, the inflation-restriction sequence gives the
equality $\ker \beta = H^1(\Gal(H_S/F), A(H_S)[p^{\infty}])$.
Therefore, we obtain an injection
\[
H^1(\Gal(H_S/F), A(H_S)[p^{\infty}])\hookrightarrow R(A/F). \]
It follows from this injection that we have
\[ r_p(R(A/F)) \geq r_p\big(H^1(\Gal(H_S/F), A(H_S)[p^{\infty}])\big). \]
By Lemma \ref{cohomology rank inequalities}, the latter quantity is
greater than or equal to
\[ h_1(\Gal(H_S/F))r_p(A(F)[p^{\infty}]) - 2d. \]
By class field theory, we have $\Gal(H_S/F)\cong \Cl_S(F)$, and
therefore,
\[ h_1(\Gal(H_S/F)) = r_p(\Cl_S(F)/p) = r_p(\Cl_S(F)),\]
where the last equality follows from the fact that $\Cl_S(F)$ is
finite. The required estimate is now established (noting that
$r_p(A(F)[p]) = r_p(A(F)[p^{\infty}])$).
\end{proof}
\begin{remark}
Since the fine Selmer group is contained in the classical Selmer group
(cf. Lemma \ref{indep of S}), the above estimate also gives a lower
bound for the classical Selmer group. For instance, for an elliptic
curve ($d=1$) with $A(F)[p]\cong (\mathds{Z}/p)^2$, the lemma reads
$r_p\big(R(A/F)\big) \geq 2r_p(\Cl_S(F)) - 2$. \end{remark}
train | 0.22.2 | \section{Growth of fine Selmer groups in a $\mathfrak{m}athds{Z}p$-extension}
\lambdabel{cyclotomic Zp-extension}
As before, $p$ denotes an odd prime. In this section, $F_{\infty}$
will always denote a fixed $\mathfrak{m}athds{Z}p$-extension of $F$. We will denote
$F_n$ to be the subfield of $F_{\infty}$ such that $[F_n : F] =
p^n$. If $S$ is a finite set of primes of $F$, we denote by $S_f$
the set of finite primes in $S$.
\mathfrak{m}edskip
We now state the main theorem of this section which compares the
growth of the fine Selmer groups and the growth of the class groups
in the $\mathds{Z}_p$-extension of $F$. To simplify our discussion, we will
assume that $A[p]\subseteq A(F)$.
\medskip
\begin{thm} \label{asymptotic compare}
Let $A$ be a $d$-dimensional abelian variety defined over a number field
$F$. Let $F_{\infty}$ be a fixed $\mathds{Z}_p$-extension of $F$ such that
the primes of $F$ above $p$ and the bad reduction primes of $A$
decompose finitely in $F_{\infty}/F$. Furthermore, we assume that
$A[p]\subseteq A(F)$. Then we have
\[ \Big|r_p(R(A/F_n)) - 2d\,r_p(\Cl(F_n))\Big|=
O(1).\]
\end{thm}
In preparation for the proof of the theorem, we require a few
lemmas.
\begin{lem}
Let $F_{\infty}$ be a $\mathds{Z}_p$-extension of $F$ and
let $F_n$ be the subfield of $F_{\infty}$ such that $[F_n : F] =
p^n$. Let $S$ be a given finite set of primes of $F$ which contains
all the primes above $p$ and the archimedean primes. Suppose that
all the primes in $S_f$ decompose finitely in $F_{\infty}/F$. Then
we have
\[ \Big|r_p(\Cl(F_n)) - r_p(\Cl_S(F_n))\Big|=
O(1). \]
\end{lem}
\begin{proof}
For each $F_n$, we write $S_f(F_n)$ for the set of finite primes of
$F_n$ above $S_f$. For each $n$, we have the following exact
sequence (cf. \cite[Lemma 10.3.12]{NSW})
\[ \mathds{Z}^{|S_f(F_n)|} \longrightarrow \Cl(F_n)\longrightarrow \Cl_S(F_n)
\longrightarrow 0. \]
Denote by $C_n$ the kernel of $\Cl(F_n)\longrightarrow
\Cl_S(F_n)$. Note that $C_n$ is finite, since it is contained in
$\Cl(F_n)$. Also, it is clear from the above exact sequence that
$r_p(C_n) \leq |S_f(F_n)|$ and $r_p(C_n/p) \leq |S_f(F_n)|$. By
Lemma \ref{estimate lemma}, we have
\[ \Big| r_p(\Cl(F_n)) - r_p(\Cl_S(F_n)) \Big| \leq 3|S_f(F_n)| =
O(1), \]
where the last equality follows from the assumption that
all the primes in $S_f$ decompose finitely in $F_{\infty}/F$. \end{proof}
\medskip
Before stating the next lemma, we introduce the $p$-fine Selmer
group of an abelian variety $A$. Let $S$ be a finite set of primes
of $F$ which contains the primes above $p$, the primes of bad
reduction of $A$ and the archimedean primes. Then the $p$-fine
Selmer group (with respect to $S$) is defined to be
\[ R_S(A[p]/F) = \ker\Big(H^1(G_S(F),A[p])\longrightarrow \bigoplus_{v\in S}H^1(F_v,
A[p])\Big). \] Note that the $p$-fine Selmer group may be dependent
on $S$. In fact, as we will see in the proof of Theorem
\ref{asymptotic compare} below, when $F = F(A[p])$, we have
$R_S(A[p]/F) = \Cl_S(F)[p]^{2d}$, where the latter group is clearly
dependent on $S$.
We can now state the following lemma which compares the growth of
$r_p(R_S(A[p]/F_n))$ and $r_p(R(A/F_n))$.
\begin{lem}
Let $F_{\infty}$ be a $\mathds{Z}_p$-extension of $F$ and
let $F_n$ be the subfield of $F_{\infty}$ such that $[F_n : F] =
p^n$. Let $A$ be an abelian variety defined over $F$. Let $S$ be a
finite set of primes of $F$ which contains the primes above $p$, the
primes of bad reduction of $A$ and the archimedean primes. Suppose
that all the primes in $S_f$ decompose finitely in $F_{\infty}/F$.
Then we have
\[ \Big|r_p(R_S(A[p]/F_n)) -r_p(R(A/F_n))\Big| =
O(1).\]
\end{lem}
\begin{proof}
We have a commutative diagram
\[ \entrymodifiers={!! <0pt, .8ex>+} \SelectTips{eu}{}
\xymatrix{
0 \ar[r] & R_S(A[p]/F_n) \ar[d]^{s_n} \ar[r] & H^1(G_S(F_n),A[p]) \ar[d]^{h_n}
\ar[r]^{} & \displaystyle\bigoplus_{v_n\in S(F_n)}H^1(F_{n,v_n},
A[p]) \ar[d]^{g_n} \\
0 \ar[r] & R(A/F_n)[p] \ar[r] & H^1(G_S(F_n),A[p^{\infty}])[p]
\ar[r] & \displaystyle\bigoplus_{v_n\in S(F_n)} H^1(F_{n,v_n},
A[p^{\infty}])[p] }
\]
with exact rows. It is an easy exercise to show that the maps
$h_n$ and $g_n$ are surjective, that $\ker h_n =
A(F_n)[p^{\infty}]/p$ and that
\[ \ker g_n = \displaystyle\bigoplus_{v_n\in S(F_n)}A(F_{n,v_n})[p^{\infty}]/p.
\]
Since we are assuming $p$ is odd, we have $r_p(\ker g_n)\leq 2d|S_f(F_n)|$.
By an application of Lemma \ref{estimate lemma}, we have
\[ \begin{array}{rl} \Big| r_p(R_S(A[p]/F_n)) - r_p(R(A/F_n)) \Big|\!
&\leq ~2r_p(\ker s_n) + r_p(\mathrm{coker}\, s_n) \\
& \leq ~ 2r_p(\ker h_n) + r_p(\ker g_n) \\
& \leq ~ 4d + 2d|S_f(F_n)| = O(1),\\
\end{array} \] where the last equality
follows from the assumption that all the primes in $S_f$ decompose
finitely in $F_{\infty}/F$. \end{proof}
We are now in a position to prove our theorem.
\begin{proof}[Proof of Theorem \ref{asymptotic compare}]
Let $S$ be the
finite set of primes of $F$ consisting precisely of the primes above
$p$, the primes of bad reduction of $A$ and the archimedean primes.
By the hypothesis $A[p]\subseteq A(F)$ ($\subseteq A(F_n)$) of the
theorem, we have $A[p] \cong (\mathds{Z}/p)^{2d}$ as $G_S(F_n)$-modules.
Therefore, we have $H^1(G_S(F_n),A[p]) = \Hom(G_S(F_n),A[p])$. We
have a similar identification for the local cohomology groups. Since
a homomorphism on $G_S(F_n)$ which vanishes on every decomposition
group above $S$ factors through the Galois group of the $p$-Hilbert
$S$-class field of $F_n$, which is isomorphic to $\Cl_S(F_n)$ by
class field theory, it follows that
$$
R_S(A[p]/F_n) = \Hom(\Cl_S(F_n),A[p])\cong \Cl_S(F_n)[p]^{2d}
$$
as abelian groups. Hence we have $r_p(R_S(A[p]/F_n)) = 2d\,
r_p(\Cl_S(F_n))$. The conclusion of the theorem is now immediate
from this equality and the above two lemmas. \end{proof}
\begin{cor} \label{asymptotic compare corollary}
Retain the notations and assumptions of Theorem \ref{asymptotic compare}.
Then we have
\[ r_p(R(A/F_n)) = O(1)\]
if and only if
\[ r_p(\Cl(F_n)) = O(1).\]
\end{cor}
For the remainder of the section, $F_{\infty}$ will be taken to be
the cyclotomic $\mathds{Z}_p$-extension of $F$. As before, we denote by $F_n$
the subfield of $F_{\infty}$ such that $[F_n : F] = p^n$. Denote by
$X_{\infty}$ the Galois group of the maximal abelian unramified
pro-$p$ extension of $F_{\infty}$ over $F_{\infty}$. A well-known
conjecture of Iwasawa asserts that $X_{\infty}$ is finitely
generated over $\mathds{Z}_p$ (see \cite{Iw, Iw2}). We will call this
conjecture the \textit{Iwasawa $\mu$-invariant conjecture} for
$F_{\infty}$. By \cite[Proposition 13.23]{Wa}, this is also
equivalent to saying that $r_p(\Cl(F_n)/p)$ is bounded independently
of $n$. Now, by the finiteness of class groups, we have
$r_p(\Cl(F_n))= r_p(\Cl(F_n)/p)$. Hence the Iwasawa $\mu$-invariant
conjecture is equivalent to saying that $r_p(\Cl(F_n))$ is bounded
independently of $n$.
\medskip
We consider the analogous situation for the fine Selmer group. Define
$R(A/F_{\infty}) = \displaystyle \mathop{\varinjlim}\limits_n R(A/F_n)$ and denote by $Y(A/F_{\infty})$
the Pontryagin dual of $R(A/F_{\infty})$. We may now recall the
following conjecture which was first introduced in \cite{CS}.
\noindent \textbf{Conjecture A.} For any number field $F$,
$Y(A/F_{\infty})$ is a finitely generated $\mathds{Z}_p$-module, where
$F_{\infty}$ is the cyclotomic $\mathds{Z}_p$-extension of $F$.
\medskip
We can now give the proof of \cite[Theorem 3.4]{CS}. For an
alternative approach, see \cite{JhS, LimFine}.
\begin{thm} \label{Coates-Sujatha}
Let $A$ be a $d$-dimensional abelian variety defined over a number field
$F$ and let $F_{\infty}$ be the cyclotomic $\mathds{Z}_p$-extension of $F$.
Suppose that $F(A[p])$ is a finite $p$-extension of $F$.
Then Conjecture A holds for $A$ over $F_{\infty}$ if and only if
the Iwasawa $\mu$-invariant conjecture holds for $F_{\infty}$.
\end{thm}
\begin{proof}
Now if $L'/L$ is a finite $p$-extension, it follows from \cite[Theorem 3]{Iw}
that the Iwasawa $\mu$-invariant conjecture holds for $L_{\infty}$
if and only if the Iwasawa $\mu$-invariant conjecture holds for
$L'_{\infty}$. On the other hand, it is not difficult to show that
the map
\[ Y(A/L'_{\infty})_{G}\longrightarrow Y(A/L_{\infty})\]
has finite kernel and cokernel, where $G=\Gal(L'/L)$. It follows
from this observation that Conjecture A holds for $A$ over
$L_{\infty}$ if and only if $Y(A/L'_{\infty})_{G}$ is finitely
generated over $\mathds{Z}_p$. Since $G$ is a $p$-group, $\mathds{Z}_p[G]$ is local
with a unique maximal (two-sided) ideal $\mathfrak{m} = p\mathds{Z}_p[G]+I_G$, where $I_G$
is the augmentation ideal (see \cite[Proposition 5.2.16(iii)]{NSW}).
It is easy to see from this that
\[ Y(A/L'_{\infty})/\mathfrak{m} \cong Y(A/L'_{\infty})_{G}/pY(A/L'_{\infty})_{G}.
\] Therefore, Nakayama's lemma
for $\mathds{Z}_p$-modules tells us that $Y(A/L'_{\infty})_{G}$ is finitely
generated over $\mathds{Z}_p$ if and only if $Y(A/L'_{\infty})/\mathfrak{m}$ is finite.
On the other hand, Nakayama's lemma for $\mathds{Z}_p[G]$-modules tells us
that $Y(A/L'_{\infty})/\mathfrak{m}$ is finite if and only if
$Y(A/L'_{\infty})$ is finitely generated over $\mathds{Z}_p[G]$. But since
$G$ is finite, the latter is equivalent to $Y(A/L'_{\infty})$ being
finitely generated over $\mathds{Z}_p$. Hence we have shown that Conjecture A
holds for $A$ over $L_{\infty}$ if and only if Conjecture A holds
for $A$ over $L'_{\infty}$.
\medskip
train | 0.22.3 | \begin{cor} \lambdabel{asymptotic compare corollary}
Retain the notations and assumptions of Theorem \ref{asymptotic compare}.
Then we have
\[ r_p(R(A/F_n)) = O(1)\]
if and only if
\[ r_p(\Cl(F_n)) = O(1).\]
\end{cor}
For the remainder of the section, $F_{\infty}$ will be taken to be
the cyclotomic $\mathfrak{m}athds{Z}p$-extension of $F$. As before, we denote by $F_n$
the subfield of $F_{\infty}$ such that $[F_n : F] = p^n$. Denote by
$X_{\infty}$ the Galois group of the maximal abelian unramified
pro-$p$ extension of $F_{\infty}$ over $F_{\infty}$. A well-known
conjecture of Iwasawa asserts that $X_{\infty}$ is finitely
generated over $\mathfrak{m}athds{Z}p$ (see \cite{Iw, Iw2}). We will call this
conjecture the \textit{Iwasawa $\mathfrak{m}u$-invariant conjecture} for
$F_{\infty}$. By \cite[Proposition 13.23]{Wa}, this is also
equivalent to saying that $r_p(\Cl(F_n)/p)$ is bounded independently
of $n$. Now, by the finiteness of class groups, we have
$r_p(\Cl(F_n))= r_p(\Cl(F_n)/p)$. Hence the Iwasawa $\mathfrak{m}u$-invariant
conjecture is equivalent to saying that $r_p(\Cl(F_n))$ is bounded
independently of $n$.
\mathfrak{m}edskip
We consider the analogous situation for the fine Selmer group. Define
$R(A/F_{\infty}) = \displaystyle \mathop{\varinjlim}\limits_nR(A/F_n)$ and denote by $Y(A/F_{\infty})$
the Pontryagin dual of $R(A/F_{\infty})$. We may now recall the
following conjecture which was first introduced in \cite{CS}.
\noindent \textbf{Conjecture A.} For any number field $F$,
$Y(A/F_{\infty})$ is a finitely generated $\mathfrak{m}athds{Z}p$-module, where
$F_{\infty}$ is the cyclotomic $\mathfrak{m}athds{Z}p$-extension of $F$.
\mathfrak{m}edskip
We can now give the proof of \cite[Theorem 3.4]{CS}. For another
alternative approach, see \cite{JhS, LimFine}.
\begin{thm} \lambdabel{Coates-Sujatha}
Let $A$ be a $d$-dimensional abelian variety defined over a number field
$F$ and let $F_{\infty}$ be the cyclotomic $\mathfrak{m}athds{Z}p$-extension of $F$.
Suppose that $F(A[p])$ is a finite $p$-extension of $F$.
Then Conjecture A holds for $A$ over $F_{\infty}$ if and only if
the Iwasawa $\mathfrak{m}u$-invariant conjecture holds for $F_{\infty}$.
\end{thm}
\begin{prop}f
Now if $L'/L$ is a finite $p$-extension, it follows from \cite[Theorem 3]{Iw}
that the Iwasawa $\mathfrak{m}u$-invariant conjecture holds for $L_{\infty}$
if and only if the Iwasawa $\mathfrak{m}u$-invariant conjecture holds for
$L'_{\infty}$. On the other hand, it is not difficult to show that
the map
\[ Y(A/L'_{\infty})_{G}\longrightarrow Y(A/L_{\infty})\]
has finite kernel and cokernel, where $G=\Gammal(L'/L)$. It follows
from this observation that Conjecture A holds for $A$ over
$L_{\infty}$ if and only if $Y(A/L'_{\infty})_{G}$ is finitely
generated over $\mathfrak{m}athds{Z}p$. Since $G$ is a $p$-group, $\mathfrak{m}athds{Z}p[G]$ is local
with a unique maximal (two-sided) ideal $p\mathfrak{m}athds{Z}p[G]+I_G$, where $I_G$
is the augmentation ideal (see \cite[Proposition 5.2.16(iii)]{NSW}).
It is easy to see from this that
\[ Y(A/L'_{\infty})/\mathfrak{m} \cong Y(A/L'_{\infty})_{G}/pY(A/L'_{\infty})_{G}.
\] Therefore, Nakayama's lemma
for $\mathfrak{m}athds{Z}p$-modules tells us that $Y(A/L'_{\infty})_{G}$ is finitely
generated over $\mathfrak{m}athds{Z}p$ if and only if $Y(A/L'_{\infty})/\mathfrak{m}$ is finite.
On the other hand, Nakayama's lemma for $\mathfrak{m}athds{Z}p[G]$-modules tells us
that $Y(A/L'_{\infty})/\mathfrak{m}$ is finite if and only if
$Y(A/L'_{\infty})$ is finitely generated over $\mathfrak{m}athds{Z}p[G]$. But since
$G$ is finite, the latter is equivalent to $Y(A/L'_{\infty})$ being
finitely generated over $\mathfrak{m}athds{Z}p$. Hence we have shown that Conjecture A
holds for $A$ over $L_{\infty}$ if and only if Conjecture A holds
for $A$ over $L_{\infty}'$.
\mathfrak{m}edskip
Therefore, replacing $F$ by $F(A[p])$, we may assume that
$A[p]\subseteq A(F)$. Write $\Gamma_n = \Gal(F_{\infty}/F_n)$. Consider
the following commutative diagram
\[ \entrymodifiers={!! <0pt, .8ex>+} \SelectTips{eu}{}
\xymatrix{
0 \ar[r] & R(A/F_n) \ar[d]^{r_n} \ar[r] &
H^1(G_S(F_n),A[p^{\infty}]) \ar[d]^{f_n}
\ar[r]^{} & \displaystyle\bigoplus_{v_n\in S(F_n)}H^1(F_{n,v_n},
A[p^{\infty}]) \ar[d]^{\gamma_n} \\
0 \ar[r] & R(A/F_{\infty})^{\Gamma_n} \ar[r] &
H^1(G_S(F_{\infty}),A[p^{\infty}])^{\Gamma_n}
\ar[r] & \Big(\displaystyle \mathop{\varinjlim}\limits_n\bigoplus_{v_n\in S(F_n)}
H^1(F_{n,v_n}, A[p^{\infty}])\Big)^{\Gamma_n} }
\]
with exact rows, and the vertical maps given by the restriction maps.
It is an easy exercise to show that $r_p(\ker f_n) \leq 2d$,
$r_p(\ker \gamma_n) \leq 2d|S_f(F_n)|$, and that $f_n$ and
$\gamma_n$ are surjective. It then follows from these estimates and
Lemma \ref{estimate lemma} that we have
\[ \Big| r_p\big(R(A/F_n)\big) -
r_p\big(R(A/F_{\infty})^{\Gamma_n}\big) \Big| = O(1).
\] Combining this observation with \cite[Lemma 13.20]{Wa}, we have
that Conjecture A holds for $A$ over $F_{\infty}$ if and only if
$r_p(R(A/F_n))=O(1)$. The conclusion of the theorem is now immediate
from Corollary \ref{asymptotic compare corollary}. \end{proof}
train | 0.22.4 | \section{Unboundedness of fine Selmer groups in $\mathfrak{m}athds{Z}/p$-extensions} \lambdabel{unboundness}
In this section, we will study the question of unboundedness of fine
Selmer groups in $\mathds{Z}/p$-extensions. We first recall the case of
class groups. Since for a number field $L$, the $S$-class
group $\Cl_{S}(L)$ is finite, we have
$r_p(\Cl_{S}(L)) = \dim_{\mathds{Z}/p}\big(\Cl_S(L)/p\big)$.
\begin{prop} \label{class Z/p}
Let $S$ be a finite set of primes of $F$ which contains all
the archimedean primes. Then there exists a sequence $\{L_n\}$ of
distinct number fields such that each $L_n$ is a $\mathds{Z}/p$-extension of
$F$ and such that
\[ r_p(\Cl_{S}(L_n)) \geq n \] for every $n \geq 1$.
\end{prop}
\begin{proof}
Denote by $r_1$ (resp.\ $r_2$) the number of real places of $F$
(resp.\ the number of pairs of complex places of $F$). Let $S_1$
be a set of primes of $F$ which contains $S$ and such that
\[ |S_1| \geq |S| + r_1 + r_2 + \delta+1. \]
Here $\delta = 1$ if $F$ contains a primitive $p$-th root of unity, and
0 otherwise. By the theorem of Grunwald-Wang (cf. \cite[Theorem
9.2.8]{NSW}), there exists a $\mathds{Z}/p$-extension $L_1$ of $F$ such that
$L_1/F$ is ramified at all the finite primes of $S_1$ and unramified
outside $S_1$. By \cite[Proposition 10.10.3]{NSW}, we have
\[ r_p(\Cl_{S}(L_1)) \geq |S_1| - |S| -r_1 - r_2 -\delta
\geq 1.\]
Choose $S_2$ to be a set of primes of $F$ which contains
$S_1$ (and hence $S$) and which has the property that
\[ |S_2| \geq |S_1| + 1 \geq |S| + r_1 + r_2 + \delta+2. \]
By the theorem of Grunwald-Wang, there exists a $\mathds{Z}/p$-extension
$L_2$ of $F$ such that $L_2/F$ is ramified at all the finite primes
of $S_2$ and unramified outside $S_2$. In particular, the fields
$L_1$ and $L_2$ are distinct. By an application of \cite[Proposition
10.10.3]{NSW} again, we have
\[ r_p(\Cl_{S}(L_2)) \geq |S_2| - |S| -r_1 - r_2 -\delta
\geq 2.\]
Since there are infinitely many primes in $F$, we can always continue
the above process iteratively. Also, it is clear from our choice of
the $L_n$ that they are mutually distinct. Therefore, we have the required
conclusion. \end{proof}
For completeness and for ease of later comparison, we record the
following folklore result.
\begin{thm} Let $F$ be a number field. Then we have
\[ \sup\{ r_p\big(\Cl(L)\big)~ |~
\mbox{$L/F$ is a cyclic extension of degree $p$}\} = \infty. \]\end{thm}
\begin{proof}
Since $\Cl(L)$ surjects onto $\Cl_S(L)$, the theorem follows from
the preceding proposition.
\end{proof}
We now record the analogous statement for the fine Selmer groups.
\begin{thm} \label{theorem Z/p} Let $A$ be an abelian variety defined over a
number field $F$. Suppose that $A(F)[p]\neq 0$. Then we have
\[ \sup\{ r_p\big(R(A/L)\big)~ |~
\mbox{$L/F$ is a cyclic extension of degree $p$}\} = \infty. \]\end{thm}
\begin{proof}
This follows immediately from combining Lemma \ref{lower bound} and
Proposition \ref{class Z/p}. \end{proof}
In the case when $A(F)[p]=0$, we have the following weaker
statement.
\begin{cor} \label{theorem Z/p corollary} Let $A$ be a $d$-dimensional
abelian variety defined over a number field $F$. Suppose that
$A(F)[p]=0$. Define
\[m = \min\{[K:F]~|~ A(K)[p]\neq 0\}.\]
Then we have
\[ \sup\{ r_p\big(R(A/L)\big)~ |~
\mbox{$L/F$ is an extension of degree $pm$}\} = \infty. \]\end{cor}
\begin{proof}
This follows from an application of the previous theorem to
the field $K$. \end{proof}
\begin{remark} Clearly $1< m\leq |\mathrm{GL}_{2d}(\mathds{Z}/p)| = (p^{2d}-1)(p^{2d}
-p)\cdots (p^{2d}-p^{2d-1})$. In fact, we can even do
better\footnote{We thank Christian Wuthrich for pointing this out to
us.}. Write $G=\Gal(F(A[p])/F)$. Note that this is a subgroup of
$\mathrm{GL}_{2d}(\mathds{Z}/p)$. Let $P$ be a nontrivial point in $A[p]$
and denote by $H$ the subgroup of $G$ which fixes $P$. Set $K =
F(A[p])^H$. It is easy to see that $[K:F] = [G:H] = |O_G(P)|$, where
$O_G(P)$ is the orbit of $P$ under the action of $G$. Since $O_G(P)$
is contained in $A[p]\setminus\{0\}$, we have $m \leq [K:F] =
|O_G(P)| \leq p^{2d}-1$. For instance, for an elliptic curve ($d=1$)
this gives $m \leq p^{2}-1$. \end{remark}
As mentioned in the introductory section, analogous results to the
above theorem for the classical Selmer groups have been studied (see
\cite{Ba, Br, Ce, K, KS, Mat06, Mat09}). Since the fine Selmer group
is contained in the classical Selmer group (cf. Lemma \ref{indep of
S}), our result recovers the above mentioned results (under our
hypothesis). We note that the work of \cite{Ce} also considered the
case of a global field of positive characteristic. We should also
mention that in \cite{ClS, Cr}, the unboundedness of $\Sha(A/L)$ over
$\mathds{Z}/p$-extensions $L$ of $F$ has even been established in
certain cases (see also \cite{K, Mat09} for some other related
results in this direction). In view of these results on $\Sha(A/L)$,
one may ask for analogous results for a `fine' Shafarevich-Tate
group.
\medskip
Wuthrich \cite{Wu} introduces such a group as follows. One first
defines a `fine' Mordell-Weil group $M_{p^{\infty}}(A/L)$ by the
exact sequence
$$
0 \longrightarrow\ M_{p^\infty}(A/L) \longrightarrow \ A(L) \otimes \mathds{Q}_p/\mathds{Z}_p
\longrightarrow\ \displaystyle\bigoplus_{v|p} A(L_v) \otimes \mathds{Q}_p/\mathds{Z}_p.
$$
Then, the `fine' Shafarevich-Tate group is defined by the exact sequence
$$
0 \ \longrightarrow\ M_{p^\infty}(A/L)\ \longrightarrow\ R(A/L)\ \longrightarrow\ \Zhe_{p^\infty}(A/L)\ \longrightarrow
\ 0.
$$
In fact, it is not difficult to show that $\Zhe_{p^\infty}(A/L)$ is contained
in the ($p$-primary) classical Shafarevich-Tate group (see loc.
cit.).
One may therefore think of $\Zhe_{p^\infty}(A/L)$ as
the `Shafarevich-Tate part' of the fine Selmer group.
\medskip
With this definition in hand, one is naturally led to the following question for which we do not have
an answer at present.
\medskip \noindent \textbf{Question.} Retaining the assumptions of
Theorem \ref{theorem Z/p}, do we also have
\[ \sup\{ r_p\big(\Zhe_{p^\infty}(A/L)\big)~ |~
\mbox{$L/F$ is a cyclic extension of degree $p$}\} = \infty~?\]
\medskip
\section{Growth of fine Selmer groups in
infinite unramified pro-$p$ extensions} \label{unramified pro-p}
We introduce an interesting class of infinite unramified extensions
of $F$. Let $S$ be a finite set (possibly empty) of primes of $F$.
As before, we denote the $S$-ideal class group of $F$ by $\Cl_S(F)$.
For the remainder of the section, $F_{\infty}$ will denote the
maximal unramified $p$-extension of $F$ in which all primes in $S$
split completely. Write $\Sigma = \Sigma_F= \Gal(F_{\infty}/F)$, and
let $\{ \Sigma_n\}$ be the derived series of $\Sigma$. For each $n$,
the fixed field $F_{n+1}$ corresponding to $\Sigma_{n+1}$ is the
$p$-Hilbert $S$-class field of $F_n$.
\medskip
Denote by $S_{\infty}$ the collection of infinite primes of $F$, and
define $\delta$ to be 0 if $\mu_p\subseteq F$ and 1 otherwise. Let
$r_1(F)$ and $r_2(F)$ denote the number of real and complex places
of $F$ respectively. It is known that if the inequality
\[ r_p(\Cl_S(F)) \geq 2+ 2\sqrt{r_1(F)+ r_2(F) + \delta +
|S\setminus S_{\infty}|}\]
holds, then $\Sigma$ is infinite (see
\cite{GS}, and also \cite[Chap.\ X, Theorem 10.10.5]{NSW}). Stark
posed the question of whether $r_p(\Cl_S(F_n))$ tends to infinity in
an infinite $p$-class field tower as $n$ tends to infinity. By class
field theory, we have $r_p(\Cl_S(F_n)) = h_1(\Sigma_n)$. It then
follows from the theorem of Lubotzky and Mann \cite{LuM} that
Stark's question is equivalent to whether the group $\Sigma$ is
$p$-adic analytic. By the following conjecture of Fontaine-Mazur
\cite{FM}, one does not expect $\Sigma$ to be an analytic group if
it is infinite.
\medskip\par
\medskip \noindent \textbf{Conjecture} (Fontaine-Mazur) \textit{For
any number field $F$,\, the group $\Sigma_F$ has no infinite
$p$-adic analytic quotient.}
\medskip
Without assuming the Fontaine-Mazur Conjecture, we have the
following unconditional (weaker) result, proven by various authors.
\begin{thm} \label{p-adic class tower} Let $F$ be a number field. If the
following inequality
\[ r_p(\Cl_S(F)) \geq 2+ 2\sqrt{r_1(F)+ r_2(F) +
\delta + |S\setminus S_{\infty}|}\] holds, then the group $\Sigma_F$
is not $p$-adic analytic. \end{thm}
\begin{proof} When $S$ is the empty set, this theorem has been proved
independently by Boston \cite{B} and Hajir \cite{Ha}. For a general
nonempty $S$, this is proved in \cite[Lemma 2.3]{Ma}. \end{proof}
\medskip\par
Collecting all the information we have, we obtain the following
result which answers an analogue of Stark's question, namely the
growth of the $p$-rank of the fine Selmer groups.
\begin{thm} \label{Fine Sel in class tower}
Let $A$ be an abelian
variety of dimension $d$ defined over $F$ and let $S$ be a finite
set of primes which contains the primes above $p$, the primes of bad
reduction of $A$ and the archimedean primes. Let $F_{\infty}$ be the
maximal unramified $p$-extension of $F$ in which all primes of the
given set $S$ split completely, and let $F_n$ be defined as above.
Suppose that
\[ r_p(\Cl_S(F)) \geq 2+ 2\sqrt{r_1(F)+ r_2(F) + \delta +
|S\setminus S_{\infty}|}\] holds, and suppose that $A(F)[p]\neq 0$.
Then the $p$-rank of $R(A/F_n)$ is unbounded as $n$ tends to
infinity. \end{thm}
\begin{proof}
By Lemma \ref{lower bound}, we have
\[ r_p(R(A/F_n)) \geq r_p(\Cl_S(F_n))r_p(A(F)[p])-2d. \]
Now by the hypothesis of the theorem, it follows from Theorem \ref{p-adic class
tower} that $\Sigma_F$ is not $p$-adic analytic. By the theorem of Lubotzky and Mann
\cite{LuM}, this in turn implies that $r_p(\Cl_S(F_n))$ is
unbounded as $n$ tends to infinity. Hence $r_p(R(A/F_n))$ is also unbounded as $n$ tends to
infinity (note that here we also make use of the fact that $r_p(A(F)[p])\neq 0$, which comes from the hypothesis that $A(F)[p]\neq 0$). \end{proof}
\medskip
\begin{remark}
(1) The analogue of the above result for the classical Selmer group has been
established in \cite{LM, Ma}. In particular, our result here refines
(and implies) those proved there.
(2) Let $A$ be an abelian variety defined over $F$ with complex
multiplication by $K$, and suppose that $K\subseteq F$. Let
$\mathfrak{p}$ be a prime ideal of $K$ above $p$. Then one can
define a $\mathfrak{p}$-version of the fine Selmer group replacing
$A[p^{\infty}]$ by $A[\mathfrak{p}^{\infty}]$ in the definition of
the fine Selmer group. The above arguments carry over to establish
the fine version of the results in \cite{MO}. \end{remark}
\end{document} | 4,040 | 16,181 | en |
train | 0.23.0 | \begin{document}
\title{Bayesian testing of linear versus nonlinear effects using Gaussian process priors}
\author{Joris Mulder}
\date{}
\thispagestyle{empty} \maketitle
\begin{abstract}
A Bayes factor is proposed for testing whether the effect of a key predictor variable on the dependent variable is linear or nonlinear, possibly while controlling for certain covariates. The test can be used (i) when one is interested in quantifying the relative evidence in the data of a linear versus a nonlinear relationship and (ii) to quantify the evidence in the data in favor of a linear relationship (useful when building linear models based on transformed variables). Under the nonlinear model, a Gaussian process prior is employed using a parameterization similar to Zellner's $g$ prior, resulting in a scale-invariant test. Moreover, a Bayes factor is proposed for one-sided testing of whether the nonlinear effect is consistently positive, consistently negative, or neither. Applications are provided from various fields, including social network research and education.
\end{abstract}
\section{Introduction}
Linearity between explanatory and dependent variables is a key assumption in most statistical models. In linear regression models, the explanatory variables are assumed to affect the dependent variables in a linear manner, in logistic regression models it is assumed that the explanatory variables have a linear effect on the logit of the probability of a success on the outcome variable, in survival or event history analysis a log linear effect is generally assumed between the explanatory variables and the event rate, etc. Sometimes nonlinear functions (e.g., polynomials) of certain explanatory variables are included (e.g., for modeling curvilinear effects), or interaction effects are included between explanatory variables, which, in turn, are assumed to affect the dependent variable(s) in a linear manner.
Despite the central role of linear effects in statistical models, statistical tests of linearity versus nonlinearity remain limited. In practice, researchers tend to eyeball the relationship between the variables based on a scatter plot. When a possible nonlinear relationship is observed, various linearizing transformations (e.g., polynomial, logarithmic, Box-Cox) are applied and significance tests are executed to see if the coefficients of the transformed variables are significant or not. Eventually, when the nonlinear trend results in a reasonable fit, standard statistical inferential methods are applied (such as testing whether certain effects are zero and/or evaluating interval estimates).
This procedure is problematic for several reasons. First, executing many different significance tests on different transformed variables may result in $p$-hacking and inflated type I errors. Second, regardless of the outcome of a significance test, e.g., when testing whether the coefficient of the square of the predictor variable, $X^2$, equals zero, $H_0:\beta_{X^2}=0$ versus $H_1:\beta_{X^2}\not=0$, we would not learn whether $X^2$ has a linear effect on $Y$ or not; only whether an increase of $X^2$ results \textit{on average} in an increase/decrease of $Y$ or not. Third, nonlinear transformations (e.g., polynomials, logarithmic, Box-Cox) are only able to create approximate linearity for a limited set of nonlinear relationships. Fourth, eyeballing the relationship can be subjective, and instead a principled approach is needed.
To address these shortcomings, this paper proposes a Bayes factor for the following hypothesis test
\begin{eqnarray}
\nonumber\text{$M_0:$ ``$X$ has a linear effect on $Y$''}~~\\
\label{Htest}\text{versus}~~~~~~~~~~~~~~~~~~~\\
\nonumber \text{$M_1:$ ``$X$ has a nonlinear effect on $Y$'',}
\end{eqnarray}
possibly while controlling for covariates. Unlike $p$-value significance tests, a Bayes factor can be used for quantifying the relative evidence in favor of linearity \citep{Wagenmakers:2007}. Furthermore, Bayes factors are preferred for large samples as significance tests may indicate that the null model needs to be rejected even though inspection may not show striking discrepancies from linearity. This behavior is avoided when using Bayesian model selection \citep{Raftery:1995}.
Under the alternative model, a Gaussian process prior is used to model the nonlinear effect. A Gaussian process is employed due to its flexibility to model nonlinear relationships \citep{Rasmussen:2007}. Because nonlinear relationships are generally fairly smooth, the Gaussian process is modeled using a squared exponential kernel. Furthermore, under both models a $g$ prior approach is considered \citep{Zellner:1986} so that the test is invariant to the scale of the dependent variable. To our knowledge, a $g$ prior has not been used before for parameterizing a Gaussian process. As a result of the common parameterization under both models, the test comes down to testing whether a specific scale parameter equals zero or not, where a zero value implies linearity. Under the alternative the scale parameter is modeled using a half-Cauchy prior with a scale hyperparameter that can be chosen depending on the expected deviation from linearity under the alternative model.
Furthermore, in the case of a nonlinear effect, a Bayes factor is proposed for testing whether the effect is consistently increasing, consistently decreasing or neither. This test can be seen as a novel nonlinear extension to one-sided testing.
Finally note that the literature on Gaussian processes has mainly focused on estimating nonlinear effects \citep[e.g.,][]{Rasmussen:2007,Duvenaud:2011,Cheng:2019}, and not testing nonlinear effects, with the exception of \cite{Liu:2017} who proposed a significance (score) test, which has certain drawbacks as mentioned above. Further note that spline regression analysis is also typically used for estimating nonlinear effects, and not for testing (non)linearity.
The paper is organized as follows. Section 2 describes the linear and nonlinear Bayesian models and the corresponding Bayes factor. Its behavior is also explored in a numerical simulation. Section 3 describes the nonlinear one-sided Bayesian test. Subsequently, Section 4 presents four applications of the proposed methodology in different research fields. We end the paper with a short discussion in Section 5.
train | 0.23.1 | \section{A Bayes factor for testing (non)linearity}
\subsection{Model specification}
Under the standard linear regression model, denoted by $M_0$, we assume that the mean of the dependent variable $Y$ depends proportionally on the key predictor variable $X$, possibly while correcting for certain covariates. Mathematically, this implies that the predictor variable is multiplied with the same coefficient, denoted by $\beta$, to compute the (corrected) mean of the dependent variable for all values of $X$. The linear model can then be written as
\begin{equation}
M_0:\textbf{y}\sim\mathcal{N}(\beta\textbf{x} + \textbf{Z}\bm\gamma,\sigma^2 \textbf{I}_n),
\end{equation}
where $\textbf{y}$ is a vector containing the $n$ observations of the dependent variable, $\textbf{x}$ contains the $n$ observations of the predictor variable, $\textbf{Z}$ is a $n\times k$ matrix of covariates (which are assumed to be orthogonal to the key predictor variable) with corresponding coefficients $\bm\gamma$, and $\sigma^2$ denotes the error variance which is multiplied with the identity matrix of size $n$, denoted by $\textbf{I}_n$. To complete the Bayesian model, we adopt the standard $g$ prior approach \citep{Zellner:1986} by setting a Gaussian prior on $\beta$ where the variance is scaled based on the error variance, the scale of the predictor variable, and the sample size, with a flat prior for the nuisance regression coefficients, and the independence Jeffreys prior for the error variance, i.e.,
\begin{eqnarray*}
\beta |\sigma^2 &\sim & N(0,\sigma^2g(\textbf{x}'\textbf{x})^{-1})\\
p(\bm\gamma) & \propto & 1\\
p(\sigma^2) & \propto & \sigma^{-2}.
\end{eqnarray*}
The prior mean is set to the default value of 0 so that, a priori, small effects in absolute value are more likely than large effects (as is common in applied research) and positive effects are equally likely as negative effects (an objective choice in Bayesian one-sided testing \citep{Jeffreys,Mulder:2010}). By setting $g=n$ we obtain a unit-information prior \citep{KassWasserman:1995,Liang:2008} which will be adopted throughout this paper\footnote{Note that we don't place a prior on $g$, as is becoming increasingly common \citep{Liang:2008,Rouder:2009,Bayarri:2007}, because we are not specifically testing whether $\beta$ equals 0 and to keep the model as simple as possible.}.
Under the alternative nonlinear model, denoted by $M_1$, we assume that the mean of the dependent variable does not depend proportionally on the predictor variable. This implies that the observations of the predictor variable can be multiplied with different values for different values of the predictor variable $X$. This can be written as follows
\begin{equation}
M_1:\textbf{y}\sim\mathcal{N}(\bm\beta(\textbf{x})\circ\textbf{x} + \textbf{Z}\bm\gamma,\sigma^2 \textbf{I}_n),
\end{equation}
where $\bm\beta(\textbf{x})$ denotes a vector of length $n$ containing the coefficients of the corresponding $n$ observations of the predictor variable $\textbf{x}$, and $\circ$ denotes the Hadamard product. The vector $\bm\beta(\textbf{x})$ can be viewed as the $n$ realizations when plugging the different values of $\textbf{x}$ in a unknown theoretical function $\beta(x)$. Thus, in the special case where $\beta(x)$ is a constant function, say, $\beta(x)=\beta$, model $M_1$ would be equivalent to the linear model $M_0$.
Next we specify a prior probability distribution for the function of the coefficients. Because we are testing for linearity, it may be more likely to expect relatively smooth changes between different values, say, $\beta(x_i)$ and $\beta(x_j)$, than large changes when the values $x_i$ and $x_j$ are close to each other. A Gaussian process prior for the function $\bm\beta(\textbf{x})$ has this property which is defined by
\begin{equation}
\bm\beta(\textbf{x}) | \tau^2,\xi \sim \mathcal{GP}(\textbf{0},\tau^2k(\textbf{x},\textbf{x}'|\xi)),
\end{equation}
which has a zero mean function and a kernel function $k(\cdot,\cdot)$ which defines the covariance of the coefficients as a function of the distance between values of the predictor variable. A squared exponential kernel will be used which is given by
\begin{equation}
\label{kernel}
k(x_i,x_j|\xi) = \exp\left\{ -\tfrac{1}{2}\xi^2(x_i-x_j)^2 \right\},
\end{equation}
for $i,j=1,\ldots,n$. As can be seen, predictor variables $x_i$ and $x_j$ that are close to (far away from) each other have a larger (smaller) covariance, and thus, are on average closer to (further away from) each other. The hyperparameter $\xi$ controls the smoothness of the function where values close to 0 imply very smooth function shapes and large values imply highly irregular shapes (as will be illustrated later). Note that typically the smoothness is parameterized via the reciprocal of $\xi$. Here we use the current parameterization so that the special value $\xi=0$ would come down to a constant function, say $\beta(x)=\beta$, which would correspond to a linear relationship between the predictor and the outcome variable.
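For concreteness, the kernel matrix implied by Equation (\ref{kernel}) can be computed as follows (a minimal sketch in R; the function name \texttt{sqexp\_kernel} is ours and not taken from the supplementary material):
\begin{verbatim}
# Squared exponential kernel matrix k(x, x' | xi);
# xi = 0 yields a matrix of ones, i.e., the linear model M0.
sqexp_kernel <- function(x1, x2, xi) {
  outer(x1, x2, function(a, b) exp(-0.5 * xi^2 * (a - b)^2))
}
\end{verbatim}
Note that \texttt{sqexp\_kernel(x, x, 0)} returns a matrix of ones, which is the identification underlying the reduction of the test to $\xi=0$ versus $\xi>0$ discussed below.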
The hyperparameter $\tau^2$ controls the prior magnitude of the coefficients, i.e., the overall prior variance for the coefficients. We extend the $g$ prior formulation to the alternative model by setting $\tau^2=\sigma^2g(\textbf{x}'\textbf{x})^{-1}$ and specify the same priors for $\bm\gamma$ and $\sigma^2$ as under $M_0$. Furthermore, by taking into account that the Gaussian process prior implies that the coefficients for the observed predictor variables follow a multivariate normal distribution, the priors under $M_1$ given the predictor variables can be formulated as
\begin{eqnarray*}
\bm\beta(\textbf{x})|\sigma^2,\xi,\textbf{x} & \sim & \mathcal{N}(\textbf{0}, \sigma^2g(\textbf{x}'\textbf{x})^{-1}k(\textbf{x},\textbf{x}'|\xi))\\
p(\bm\gamma) & \propto & 1\\
p(\sigma^2) & \propto & \sigma^{-2}.
\end{eqnarray*}
To complete the model, a half-Cauchy prior is specified for the key parameter $\xi$ with prior scale $s_{\xi}$, i.e.,
\[
\xi \sim \text{half-}\mathcal{C}(s_{\xi}).
\]
The motivation for this prior is based on one of the desiderata of \cite{Jeffreys}, which states that small deviations from the null value are generally more likely a priori than large deviations, otherwise there would be no point in testing the null value. In the current setting this would imply that small deviations from linearity are more likely to be expected than large deviations. This would imply that values of $\xi$ close to 0 are more likely a priori than large values, and thus that the prior distribution for $\xi$ should be a decreasing function. The half-Cauchy distribution satisfies this property. Further note that the half-Cauchy prior is becoming increasingly popular for scale parameters in Bayesian analyses \citep{Gelman:2006,Polson:2012,MulderPericchi:2018}.
The prior scale for the key parameter $\xi$ under $M_1$ should be carefully specified as it defines which deviations from linearity are most plausible. To give the reader more insight about how $\xi$ affects the distribution of the slopes of $\textbf{y}$ as a function of $\textbf{x}$, Figure \ref{fig1} displays 10 random draws of the function of slopes when setting $\xi=0$ (Figure \ref{fig1}a), $\xi=\exp(-2)$ (Figure \ref{fig1}b), $\xi=\exp(-1)$ (Figure \ref{fig1}c), and $\xi=\exp(0)$ (Figure \ref{fig1}d), while fixing $\tau^2=\sigma^2 g (\textbf{x}'\textbf{x})^{-1}=1$, where the slope function is defined by
\begin{equation}
\bm\eta (\textbf{x}) = \frac{d}{d\textbf{x}}[\bm\beta(\textbf{x})\circ\textbf{x}] = \bm\beta(\textbf{x}) +
\frac{d}{d\textbf{x}}[\bm\beta(\textbf{x})]\circ\textbf{x}.
\end{equation}
The figure shows that by increasing $\xi$ we get larger deviations from a constant slope. Based on these plots we qualify the choices $\xi=\exp(-2)$, $\exp(-1)$, and 1 as small deviations, medium deviations, and large deviations from linearity, respectively.
\begin{figure}
\caption{Ten random slope functions $\bm\eta(\textbf{x})$ drawn from the Gaussian process prior with $\tau^2=1$, for (a) $\xi=0$, (b) $\xi=\exp(-2)$, (c) $\xi=\exp(-1)$, and (d) $\xi=\exp(0)$.}
\label{fig1}
\end{figure}
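Draws of the kind shown in Figure \ref{fig1} can be generated along the following lines (a sketch, assuming $\tau^2=1$ and an equally spaced grid on $(-3,3)$; the derivative in the slope function is approximated by finite differences):
\begin{verbatim}
# Draw 10 random slope functions eta(x) for a fixed xi (tau^2 = 1);
# the derivative of beta(x) * x is approximated by finite differences.
x  <- seq(-3, 3, length.out = 200)
xi <- exp(-1)
K  <- outer(x, x, function(a, b) exp(-0.5 * xi^2 * (a - b)^2))
L  <- t(chol(K + 1e-6 * diag(length(x))))   # jitter for stability
beta   <- L %*% matrix(rnorm(length(x) * 10), ncol = 10)
slopes <- apply(beta, 2, function(b) diff(b * x) / diff(x))
matplot(x[-1], slopes, type = "l", xlab = "x", ylab = "slope")
\end{verbatim}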
Because the median of a half-Cauchy distribution is equal to the scale parameter $s_{\xi}$, the scale parameter could be set based on the expected deviation from linearity. It is important to note here that the expected deviation depends on the range of the predictor variable: In a very small range it may be expected that the effect is close to linear but in a wide range of the predictor variable, large deviations from linearity may be expected. Given the plots in Figure \ref{fig1}, one could set the prior scale equal to $s_{\xi} = 6e/\text{range}(\textbf{x})$, where $e$ can be interpreted as a standardized measure for the deviation from linearity such that setting $e = \exp(-2), \exp(-1)$, or $\exp(0)$ would imply small, medium, or large deviations from linearity, respectively. Thus, if the range of $\textbf{x}$ would be equal to 6 (as in the plots in Figure \ref{fig1}), the median of $\xi$ would be equal to $\exp(-2), \exp(-1)$, and $\exp(0)$, as plotted in Figure \ref{fig1}.
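In code, this rule amounts to the following (a sketch; the medium choice $e=\exp(-1)$ is illustrative):
\begin{verbatim}
# Prior scale implied by a medium expected deviation from linearity
# (e = exp(-1)) for an observed predictor x; xi is subsequently
# drawn from its half-Cauchy prior.
e    <- exp(-1)
s_xi <- 6 * e / diff(range(x))
xi   <- abs(rcauchy(1, scale = s_xi))
\end{verbatim}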
\subsection{Bayes factor computation}
The Bayes factor is defined as the ratio of the marginal (or integrated) likelihoods under the respective models. For this reason it is useful to integrate out the coefficient $\beta$ under $M_0$ and the coefficients $\bm\beta(\textbf{x})$ under $M_1$, which are in fact nuisance parameters in the test. This yields the following integrated models
\begin{align}
M_0 : &
\begin{cases}
\textbf{y} | \textbf{x},\bm\gamma,\sigma^2 \sim \mathcal{N}(\textbf{Z}\bm\gamma,\sigma^2 g (\textbf{x}'\textbf{x})^{-1}\textbf{x}\textbf{x}'+\sigma^2\textbf{I}_n) \\
p(\bm\gamma) \propto 1\\
p(\sigma^2) \propto \sigma^{-2}
\end{cases}\\
M_1 : &
\begin{cases}
\textbf{y} | \textbf{x},\bm\gamma,\sigma^2,\xi \sim \mathcal{N}(\textbf{Z}\bm\gamma,\sigma^2 g (\textbf{x}'\textbf{x})^{-1} k(\textbf{x},\textbf{x}'|\xi) \circ \textbf{x}\textbf{x}'+\sigma^2\textbf{I}_n) \\
p(\bm\gamma) \propto 1\\
p(\sigma^2) \propto \sigma^{-2}\\
\xi \sim \text{half-}\mathcal{C}(s_{\xi}),
\end{cases}
\end{align}
As can be seen $\sigma^2$ is a common factor in all (co)variances of $\textbf{y}$ under both models. This makes inferences about $\xi$ invariant to the scale of the outcome variable. Finally note that the integrated models clearly show that the model selection problem can concisely be written as
\begin{eqnarray*}
M_0&:&\xi = 0\\
M_1&:&\xi > 0.
\end{eqnarray*}
because $k(\textbf{x},\textbf{x}'|\xi)=1$ when setting $\xi=0$.
Using the above integrated models, the Bayes factor can be written as
\begin{equation*}
B_{01} = \frac{
\iint p(\textbf{y}|\textbf{x},\bm\gamma,\sigma^2,\xi=0)\pi(\bm\gamma)\pi(\sigma^2)
d\bm\gamma d\sigma^2
}{
\iiint p(\textbf{y}|\textbf{x},\bm\gamma,\sigma^2,\xi)\pi(\bm\gamma)\pi(\sigma^2)\pi(\xi)d\bm\gamma d\sigma^2 d\xi
},
\end{equation*}
which quantifies the relative evidence in the data between the linear model $M_0$ and the nonlinear model $M_1$.
Different methods can be used for computing marginal likelihoods. Throughout this paper we use an importance sampling estimate. The R code for the computation of the marginal likelihoods and the sampler from the posterior predictive distribution can be found in the supplementary material.
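To illustrate the structure of this computation, the following self-contained sketch (not the supplementary code) estimates $\log B_{01}$ by plain Monte Carlo over the half-Cauchy prior rather than by importance sampling. It assumes data \texttt{y} and \texttt{x}, a covariate matrix \texttt{Z} (e.g., a column of ones for the intercept), and the prior scale \texttt{s\_xi}; the coefficients $\bm\gamma$ (flat prior) and $\sigma^2$ (Jeffreys prior) are integrated out analytically:
\begin{verbatim}
# Log marginal likelihood of y ~ N(Z gamma, sigma^2 V) after
# integrating out gamma (flat prior) and sigma^2 (Jeffreys prior).
logml <- function(y, V, Z) {
  n <- length(y); k <- ncol(Z)
  Vi <- solve(V)
  ZtViZ <- t(Z) %*% Vi %*% Z
  S <- c(t(y) %*% (Vi - Vi %*% Z %*% solve(ZtViZ, t(Z) %*% Vi)) %*% y)
  -0.5 * c(determinant(V)$modulus) - 0.5 * c(determinant(ZtViZ)$modulus) +
    lgamma((n - k) / 2) - ((n - k) / 2) * (log(S / 2) + log(2 * pi))
}
ker <- function(x, xi) outer(x, x, function(a, b) exp(-0.5 * xi^2 * (a - b)^2))
n  <- length(y); g <- n                       # unit-information prior g = n
V0 <- g * tcrossprod(x) / sum(x^2) + diag(n)                # M0: xi = 0
xis <- abs(rcauchy(5000, scale = s_xi))                     # xi ~ prior
lm1 <- sapply(xis, function(xi)
  logml(y, g * ker(x, xi) * tcrossprod(x) / sum(x^2) + diag(n), Z))
m <- max(lm1)                                               # log-sum-exp
log_B01 <- logml(y, V0, Z) - (m + log(mean(exp(lm1 - m))))
\end{verbatim}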
\subsection{Numerical behavior}
Numerical simulations were performed to evaluate the performance of the proposed Bayes factor. The nonlinear function was set equal to $\beta(x)=3h\phi(x)$, for $h=0,\ldots,.5$, where $\phi$ is the standard normal probability density function (Figure \ref{simulfig}; upper left panel). In the case $h=0$, the effect is linear, and as $h$ increases, the effect becomes increasingly nonlinear. The dependent variable was computed as $\bm\beta(\textbf{x})\circ\textbf{x}+\bm\epsilon$, where $\bm\epsilon$ was sampled from a normal distribution with mean 0 and $\sigma=.1$.
The logarithm of the Bayes factor, denoted by $\log(B_{01})$, was computed between the linear model $M_0$ and the nonlinear model $M_1$ (Figure \ref{simulfig}; lower left panel) while setting the prior scale equal to $s_{\xi}=\exp(-2)$ (small prior scale; solid line), $\exp(-1)$ (medium prior scale; dashed line), and $\exp(0)$ (large prior scale; dotted line) for sample size $n=20$ (black lines), $50$ (red lines), and 200 (green lines) for equally distant predictor values in the interval $(-3,3)$. Overall we see the expected trend where we obtain evidence in favor of $M_0$ in the case $h$ is close to zero and evidence in favor of $M_1$ for larger values of $h$. Moreover the evidence for $M_0$ ($M_1$) is larger for larger sample sizes and larger prior scale when $h=0$ ($h\gg 0$) as anticipated given the consistent behavior of the Bayes factor.
\begin{figure}
\caption{Left panels. (Upper) Example functions of $\beta(x)=3h\phi(x)$, for $h=0,\ldots,.5$, where $\phi$ is the standard normal probability density function, and (lower) the corresponding logarithm of the Bayes factor as a function of $h$ for $n=20$ (black lines), $50$ (red lines), and 200 (green lines), for a small prior scale $s_{\xi}=\exp(-2)$ (solid lines), a medium prior scale $s_{\xi}=\exp(-1)$ (dashed lines), and a large prior scale $s_{\xi}=\exp(0)$ (dotted lines). Right panels. Same as the left panels for the step function $\beta(x)=h~1(x>0)$.}
\label{simulfig}
\end{figure}
Next we investigated the robustness of the test to nonlinear relationships that are not smooth as in the Gaussian processes having a squared exponential kernel. A similar analysis was performed when using the nonsmooth, discontinuous step function $\beta(x)=h~1(x>0)$, where $1(\cdot)$ is the indicator function, for $h=0,\ldots,.5$ (Figure \ref{simulfig}; upper right panel). Again the dependent variable was computed as $\bm\beta(\textbf{x})\circ\textbf{x}+\bm\epsilon$ and the logarithm of the Bayes factor was computed (Figure \ref{simulfig}; lower right panel). The Bayes factor shows a similar behavior as the above example where the data came from a smooth nonlinear alternative. The similarity of the results can be explained by the fact that even though the step function cannot be generated using a Gaussian process with a squared exponential kernel, the closest approximation of the step function is still nonlinear, and thus evidence is found against the linear model $M_0$ in the case $h>0$. This illustrates that the proposed Bayes factor is robust to nonsmooth nonlinear alternative models. | 4,055 | 9,686 | en |
train | 0.23.2 | \section{Extension to one-sided testing}
When testing linear effects, the interest is often on whether the effect is either positive or negative if the null does not hold. Equivalently in the case of nonlinear effects the interest would be whether the effect is consistently increasing or consistently decreasing over the range of $X$. To model this we divide the parameter space under the nonlinear model $M_1$ in three subspaces:
\begin{eqnarray}
\nonumber M_{1,\text{positive}} &:& \text{``the nonlinear effect of $X$ on $Y$ is consistently positive''}\\
\nonumber M_{1,\text{negative}} &:& \text{``the nonlinear effect of $X$ on $Y$ is consistently negative''}\\
\nonumber M_{1,\text{complement}}&:& \text{``the nonlinear effect of $X$ on $Y$ is neither consistently}\\
\label{onesided}&&\text{positive, nor consistently negative''.}
\end{eqnarray}
Note that the first model implies that the slope function is consistently positive, i.e., $\eta(x)>0$; the second implies that the slope is consistently negative, i.e., $\eta(x)<0$; and the third, complement model assumes that the slope function is neither consistently positive nor negative.
Following standard Bayesian methodology using truncated priors for one-sided testing problems \citep{Klugkist:2005,Mulder:2020}, we set truncated Gaussian process priors on each of these three models, e.g., for model $M_{1,\text{positive}}$, this comes down to
\[
\pi_{1,\text{pos}}(\bm\beta(\textbf{x})|\xi,\tau) = \pi_1(\bm\beta(\textbf{x})|\xi,\tau)
\text{Pr}(\bm\eta(\textbf{x})>\textbf{0}|M_1,\xi,\tau)^{-1} 1_{\bm\eta(\textbf{x})>\textbf{0}}(\bm\beta(\textbf{x})),
\]
where $1_{\{\cdot\}}(\cdot)$ denotes the indicator function, and the prior probability, which serves as normalizing constant, equals
\[
\text{Pr}(\bm\eta(\textbf{x})>\textbf{0}|M_1,\xi,\tau) = \int_{\bm\eta(\textbf{x})>\textbf{0}} \pi_1(\bm\beta(\textbf{x})|\xi,\tau) d\bm\beta(\textbf{x}).
\]
Note that the prior probability of a consistently positive effect is equal to that of a consistently negative effect because the prior mean of $\bm\beta(\textbf{x})$ equals $\textbf{0}$. Given this prior, the Bayes factor of each constrained model against the unconstrained model $M_1$ is then given by the ratio of the posterior and prior probabilities that the constraints hold under $M_1$, e.g.,
\[
B_{(1,\text{pos})u} = \frac{\text{Pr}(\bm\eta(\textbf{x})>\textbf{0}|M_1,\textbf{y})}{\text{Pr}(\bm\eta(\textbf{x})>\textbf{0}|M_1)}.
\]
Bayes factors between the above three models can then be computed using the transitive property of the Bayes factor, e.g., $B_{(1,\text{pos})(1,\text{comp})}=B_{(1,\text{pos})u}/B_{(1,\text{comp})u}$.
The choice of the prior of $\xi$ (which reflects the expected deviation from linearity before observing the data) implicitly determines the prior probability that the nonlinear effect is consistently positive or negative. This makes intuitive sense as large (small) deviations from linearity make it less (more) likely that the effect is either consistently positive or negative. This can also be observed from a careful inspection of the random draws in Figure \ref{fig1}. When $\xi=\exp(-2)$, we see that 4 out of 10 random functions in Figure \ref{fig1}b are consistently positive and 2 functions are consistently negative; when $\xi=\exp(-1)$ we see 1 random function that is consistently positive and 1 function that is consistently negative; and when $\xi=\exp(0)$ none of the 10 draws are either consistently positive or negative. The probabilities for a consistently positive (or negative) effect can simply be computed as the proportion of draws of random functions that are consistently positive (or negative). The choices $s_{\xi}=\exp(-2),~\exp(-1),$ and $\exp(0)$ result in prior probabilities for a consistently positive effect of approximately 0.25, 0.14, and 0.06, respectively.
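Since these prior probabilities have no closed form, they can be approximated by straightforward Monte Carlo, exactly as described above. The sketch below (ours; the kernel and grid are illustrative assumptions, so the resulting numbers only indicate the qualitative pattern rather than the exact values reported in the text, which concern the slope function under the paper's exact prior) estimates the probability that a zero-mean Gaussian process draw with squared exponential kernel and inverse length-scale $\xi$ is positive on the whole grid; replacing prior draws by posterior draws gives the corresponding posterior probability.
\begin{verbatim}
import numpy as np

def prob_consistently_positive(xi, n_grid=100, n_draws=50_000, seed=1):
    # proportion of GP draws that are positive everywhere on the grid
    rng = np.random.default_rng(seed)
    x = np.linspace(-3, 3, n_grid)
    K = np.exp(-0.5 * (xi * (x[:, None] - x[None, :])) ** 2)
    L = np.linalg.cholesky(K + 1e-6 * np.eye(n_grid))  # jitter for stability
    draws = L @ rng.standard_normal((n_grid, n_draws))
    return np.mean(np.all(draws > 0, axis=0))

for xi in [np.exp(-2), np.exp(-1), np.exp(0)]:
    print(xi, prob_consistently_positive(xi))
\end{verbatim}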
\section{Empirical applications}
\subsection{Neuroscience: Facebook friends vs grey matter}
\cite{Kanai:2012} studied the relationship between the number of Facebook friends and the grey matter density in regions of the brain that are related to social perception and associative memory to better understand the reasons for people to participate in online social networking. Here we analyze the data from the right entorhinal cortex ($n=41$). Due to the nature of the variables a positive relationship was expected. Based on a significance test \citep{Kanai:2012} and a Bayes factor \citep{Wetzels:2012} on a sample of size 41, it was concluded that there is evidence for a nonzero correlation between the square root of the number of Facebook friends and the grey matter density. In order for a correlation to be meaningful, however, it is important that the relationship is (approximately) linear. Here we test whether the relationship is linear or nonlinear. Furthermore, in the case of a nonlinear relationship, we test whether the relationship is consistently positive, consistently negative, or neither. Besides the predictor variable, the employed model has an intercept. The predictor variable is shifted to have a mean of 0 so that it is independent of the vector of ones for the intercept.
The Bayes factor between the linear model against the nonlinear model when using a prior scale of $\exp(-1)$ (medium effect) was equal to $B_{01}=2.50$ (with $\log(B_{01})=0.917$). This implies very mild evidence for a linear relationship between the square root of the number of Facebook friends and grey matter density in this region of the predictor variable. When assuming equal prior model probabilities, this would result in posterior model probabilities of .714 and .286 for $M_0$ and $M_1$, respectively. Thus, if we were to conclude that the relation is linear, there would be a conditional error probability of .286 of drawing the wrong conclusion. Table \ref{appresults} presents the Bayes factors also for the other prior scales, which tell a similar tale. Figure \ref{appfigure} (upper left panel) displays the data (circles; replotted from Kanai et al., 2012) and 50 draws from the posterior distribution of the mean function under the nonlinear model at the observed values of the predictor variable. As can be seen, most draws are approximately linear, and because the Bayes factor functions as an Occam's razor, the (linear) null model receives most evidence.
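For reference, the posterior model probabilities reported here follow from the Bayes factor through the standard identity (assuming equal prior model probabilities):
\[
\text{Pr}(M_0\mid \textbf{y}) = \frac{B_{01}}{1+B_{01}} = \frac{2.50}{3.50}\approx .714,
\qquad
\text{Pr}(M_1\mid \textbf{y}) = \frac{1}{1+B_{01}}\approx .286.
\]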
Even though we found evidence for a linear effect, there is still posterior model uncertainty, and therefore we computed the Bayes factors between the one-sided models \eqref{onesided} under the nonlinear model $M_1$. This resulted in Bayes factors for the consistently positive, consistently negative, and the complement model against the unconstrained model of $B_{(1,\text{pos})u}=\frac{.825}{.140}=5.894$, $B_{(1,\text{neg})u}=\frac{.000}{.140}=0.000$, and $B_{(1,\text{comp})u}=\frac{.175}{.720}=0.242$, and thus most evidence for a consistently positive effect: $B_{(1,\text{pos})(1,\text{neg})}\approx\infty$ and $B_{(1,\text{pos})(1,\text{comp})}\approx24.28$. These results are confirmed when checking the slopes of the posterior draws of the nonlinear mean function in Figure \ref{appfigure} (upper left panel).
\begin{table}[t]
\begin{center}
\caption{Log Bayes factors for the linear model versus the nonlinear model using different prior scales $s_{\xi}$.}
\begin{tabular}{lcccc}
\hline
& sample size & $s_{\xi}=\mbox{e}^{-2}$ & $s_{\xi}=\mbox{e}^{-1}$ & $s_{\xi}=1$\\
\hline
Fb friends \& grey matter & 41 & 0.508 & 0.917 & 1.45\\
Age \& knowing gay & 63 & -37.7 & -38.3 & -38.1\\
Past activity \& waiting time & 500 & -0.776 & -0.361 & 0.394\\
Mother's IQ \& child test scores & 434 & -2.46 & -2.07 & -1.38\\
\hline
\end{tabular}\label{appresults}
\end{center}
\end{table}
\begin{figure}
\caption{Observations (circles) and 50 draws of the mean function (lines) under the nonlinear model $M_1$ for the four different applications. In the lower right panel draws are given in the case the mother finished her high school (light blue lines) or not (dark blue lines).}
\label{appfigure}
\end{figure}
\subsection{Sociology: Age and attitude towards gay}
We consider data presented in \cite{Gelman:2014} from the 2004 National Annenberg Election Survey containing respondents' age, sex, race, and attitude on three gay-related questions. Here we are interested in the relationship between age and the proportion of people who know someone who's gay ($n=63$). It may be expected that older people know fewer people who are gay, and thus a negative relationship may be expected. Here we test whether the relationship between these variables is linear or not. In the case of a nonlinear relationship we also perform the one-sided test of whether the relationship is consistently positive, negative, or neither. Again the employed model also has an intercept.
When setting the prior scale to a medium deviation from linearity, the logarithm of the Bayes factor between the linear model against the nonlinear model was approximately equal to $-38.3$, which corresponds to a Bayes factor $B_{01}$ of virtually 0. This implies convincing evidence for a nonlinear effect. When using a small or large prior scale, the Bayes factors result in the same conclusion (Table \ref{appresults}). Figure \ref{appfigure} (upper right panel) displays the data (black circles) and 50 posterior draws of the mean function, which show clearly nonlinear curves that fit the observed data.
Next we computed the Bayes factors for the one-sided test which results in decisive evidence for the complement model that the relationship is neither consistently positive nor consistently negative, with $B_{(1,\text{comp})(1,\text{pos})}=\infty$ and $B_{(1,\text{comp})(1,\text{neg})}=\infty$. This is confirmed when checking the posterior draws of the mean function in Figure \ref{appfigure} (upper right panel). We see a slight increase of the proportion of respondents who know someone who's gay towards the age of 45, and a decrease afterwards.
\subsection{Social networks: inertia and dyadic waiting times}
In dynamic social network data it is often assumed that actors in a network have a tendency to continue to initiate social interactions with each other as a function of the volume of past interactions. This is also called inertia. In applications of the relational event model \citep{Butts:2008}, it is often assumed that the expected value of the logarithm of the waiting time between social interactions depends linearly on the number of past social interactions between actors. Here we consider relational (email) data from the Enron e-mail corpus \citep{Cohen:2009}. We analyze a subset of the last $n=500$ emails (excluding 4 outliers) in a network of 156 employees. We use a model with an intercept.
Based on a medium prior scale under the nonlinear model, the logarithm of the Bayes factor between the linear model against the nonlinear model equals $\log(B_{01})=-0.361$, which corresponds to $B_{10}=1.43$, implying approximately equal evidence for both models. The posterior probabilities for the two models would be 0.412 and 0.588 for model $M_0$ and $M_1$, respectively. When using the small and large prior scale the Bayes factors are similar (Table \ref{appresults}), where the direction of the evidence flips towards the null when using a large prior scale. This could be interpreted as that a large deviation from linearity is least likely. Based on the posterior draws of the mean function in Figure \ref{appfigure} (lower left panel) we also see an approximate linear relationship. The nonlinearity seems to be mainly caused by the larger observations of the predictor variable. As there are relatively few large observations, the evidence is inconclusive about the nature of the relationship (linear or nonlinear). This suggests that more data would be needed in the larger region of the predictor variable.
The Bayes factors for the one-sided tests yield most evidence for a consistent decrease, but the evidence is not conclusive in comparison to the complement model, with $B_{(1,\text{neg})(1,\text{pos})}=\infty$ and $B_{(1,\text{neg})(1,\text{comp})}=5.14$. This suggests that dyads (i.e., pairs of actors) that have been more active in the past communicate more frequently.
\subsection{Education: Mother's IQ and child test scores}
In \cite{GelmanHill} the relationship between the mother's IQ and the test scores of her child is explored while controlling for whether the mother finished her high school. The expectation is that there is a positive relationship between the two key variables, and additionally there may be a positive effect of whether the mother went to high school. Here we explore whether the relationship between the mother's IQ and child test scores is linear. An ANCOVA model is considered with an intercept and a covariate that is either 1 or 0 depending on whether the mother finished high school or not\footnote{As discussed by \cite{GelmanHill} an interaction effect could also be reasonable to consider. Here we did not add the interaction effect for illustrative purposes. We come back to this in the Discussion.}.
Based on a medium prior scale, we obtain a logarithm of the Bayes factor for $M_0$ against $M_1$ of $-2.07$. This corresponds to a Bayes factor of $B_{10}=7.92$ which implies positive evidence for the nonlinear model. Table \ref{appresults} shows that the evidence for $M_1$ is slightly higher (lower) when using a smaller (larger) prior scale. This suggests that a small deviation from linearity is more likely than a large deviation a posteriori.
Next we computed the Bayes factors for testing whether the relationships are consistently increasing, consistently decreasing, or neither. We found clear evidence for a consistently increasing effect with Bayes factors equal to $B_{(1,\text{pos})(1,\text{neg})}=\infty$ and $B_{(1,\text{pos})(1,\text{comp})}=13.3$. This implies that, within this range of the predictor variable, a higher IQ of the mother always results in a higher expected test score of the child. This is also confirmed from the random posterior curves in Figure \ref{appfigure} (lower right panel) where we correct for whether the mother finished high
school (blue lines) or not (green lines).
\section{Discussion}
In order to make inferences about the nature of the relationship between two variables, principled statistical tests are needed. In this paper a Bayes factor was proposed that allows one to quantify the relative evidence in the data between a linear relationship and a nonlinear relationship, possibly while controlling for certain covariates. The test is useful (i) when one is interested in assessing whether the relationship between variables is more likely to be linear or more likely to be nonlinear, and (ii) to determine whether a certain relationship is linear after transformation.
A Gaussian process prior with a squared exponential kernel was used to model the nonlinear relationship under the alternative (nonlinear) model. The model was parameterized similarly to Zellner's $g$ prior to make inferences that are invariant of the scale of the dependent variable and predictor variable. Moreover, the Gaussian process was parameterized using the reciprocal of the length-scale parameter, which controls the smoothness of the nonlinear trend, so that the linear model would be obtained when setting this parameter equal to 0. Moreover, a standardized scale for this parameter was proposed to quantify the deviation from linearity under the alternative model.
In the case of a nonlinear effect a Bayes factor was proposed for testing whether the effect is consistently positive, consistently negative, or neither. This test can be seen as a nonlinear extension of Bayesian one-sided testing. Unlike in the linear case, the Bayes factor for the nonlinear one-sided test depends on the prior scale. Thus, the prior scale also needs to be carefully chosen for the one-sided test, depending on the expected deviation from linearity.
As a next step it would be useful to extend the methodology to correct for covariates that have a nonlinear effect on the outcome variable \citep[e.g., using additive Gaussian processes;][]{Cheng:2019,Duvenaud:2011}, to test nonlinear interaction effects, or to allow other kernels to model other nonlinear forms. We leave this for future work.
\end{document}
\begin{document}
\title{Beyond the $Q$-process: various ways of conditioning the multitype Galton-Watson process}
\author{Sophie P\'{e}nisson
\thanks{\texttt{[email protected]}}}
\affil{Universit\'e Paris-Est, LAMA (UMR 8050), UPEMLV, UPEC, CNRS, 94010 Cr\'{e}teil, France}
\date{ }
\maketitle
\begin{abstract}
Conditioning a multitype Galton-Watson process to stay alive into the indefinite future leads to what is known as its associated $Q$-process. We show that the same holds true if the process is conditioned to reach a positive threshold or a non-absorbing state. We also demonstrate that the stationary measure of the $Q$-process, obtained by construction as two successive limits (first by delaying the extinction in the original process and next by considering the long-time behavior of the obtained $Q$-process), is as a matter of fact a double limit. Finally, we prove that conditioning a multitype branching process on having an infinite total progeny leads to a process presenting the features of a $Q$-process. It does not however coincide with the original associated $Q$-process, except in the critical regime.
\end{abstract}
{\bf Keywords:} multitype branching process, conditioned limit theorem, quasi-stationary distribution, $Q$-process, size-biased distribution, total progeny
{\bf 2010 MSC:} 60J80, 60F05
\section{Introduction}
The benchmark of our study is the $Q$-process associated with a multitype Galton-Watson (GW) process, obtained by conditioning the branching process $\mathbf{X}_k$ on not being extinct in the distant future ($\{\mathbf{X}_{k+n}\neq\mathbf{0}\}$, with $n\to +\infty$) and on the event that extinction takes place ($\{\lim_l \mathbf{X}_l=\mathbf{0}\}$) (see \cite{Naka78}). Our goal is to investigate some seemingly comparable conditioning results and to relate them to the $Q$-process.
After a description of the basic assumptions on the multitype GW process, we start in Subsection \ref{sec:asso} by describing the ``associated'' branching process, which will be a key tool when conditioning on the event that extinction takes place, or when conditioning on an infinite total progeny.
We shall first prove in Section \ref{sec:threshold} that by replacing in what precedes the conditioning event $\{\mathbf{X}_{k+n}\neq\mathbf{0}\}$ by $\{\mathbf{X}_{k+n}\in S\}$, where $S$ is a subset which does not contain $\mathbf{0}$, the obtained limit process remains the $Q$-process. This means in particular that conditioning in the distant future on reaching a non-zero state or a positive threshold, instead of conditioning on non-extinction, does not alter the result.
In a second instance, we focus in the noncritical case on the stationary measure of the positive recurrent $Q$-process. Formulated in a loose manner, this measure is obtained by considering $\{\mathbf{X}_k\mid \mathbf{X}_{k+n}\neq\mathbf{0}\}$, by delaying the extinction time ($n\to\infty$), and by studying the long-time behavior of the limit process ($k\to\infty$). It is already known (\cite{Naka78}) that inverting the limits leads to the same result. We prove in Section \ref{sec:Yaglom double} that the convergence to the stationary measure still holds even if $n$ and $k$ simultaneously grow to infinity. This requires an additional second-order moment assumption if the process is subcritical.
Finally, we investigate in Section \ref{sec:totalprog} the distribution of the multitype GW process conditioned on having an infinite total progeny. This is motivated by a result of Kennedy, who studies in \cite{Ken75} the behavior of a monotype GW process $X_k$ conditioned on the event $\{N = n\}$ as $n\to+\infty$, where $N=\sum_{k=0}^{+\infty}X_k$ denotes the total progeny. Note that the latter conditioning seems comparable to the device of conditioning on the event that extinction occurs but has not done so by generation $n$. It is indeed proven in the aforementioned paper that in the critical case, conditioning on the total progeny or on non-extinction indifferently results in the $Q$-process. This result has since then been extended for instance to monotype GW trees and to other conditionings: in the critical case, conditioning a GW tree by its height, by its total progeny or by its number of leaves leads to the same limiting tree (see e.g. \cite{AbrDel14,Jan12}). However, in the noncritical case, the two methods provide different limiting results: the limit process is always the $Q$-process of some critical process, no matter the class of criticality of the original process. Under a moment assumption (depending on the number of types of the process), we generalize this result to the multitype case. For this purpose we assume that the total progeny increases to infinity according to the ``typical'' limiting type proportions of the associated critical GW process, by conditioning on the event $\{\mathbf{N} = \left\lfloor n\mathbf{w}\right\rfloor\}$ as $n\to\infty$, where $\mathbf{w}$ is a left eigenvector related to the maximal eigenvalue 1 of the mean matrix of the critical process.
\subsection{Notation}
\label{sec:notation}
Let $d\geqslant 1$. In this paper, a generic point in $\mathbb{R}^{d}$ is denoted by $\mathbf{x}=(x_1,\ldots,x_d)$, and its transpose is written $\mathbf{x}^T$. By $\mathbf{e}_i=(\delta_{i,j}) _{1\leqslant j\leqslant d}$ we denote the $i$-th unit vector in $\mathbb{R}^{d}$, where $\delta_{i,j}$ stands for the Kronecker
delta. We write $\mathbf{0}=\left( 0,\ldots,0\right) $ and $\mathbf{1}=\left( 1,\ldots,1\right) $. The notation $\mathbf{x}\mathbf{y}$ (resp. $\left\lfloor \mathbf{x} \right\rfloor$) stands for the vector with coordinates $x_iy_i$ (resp. $\left\lfloor x_i \right\rfloor$, the integer part of $x_i$). We denote by $\mathbf{x}^{\mathbf{y}}$ the product $\prod_{i=1}^dx_i^{y_i}$. The obvious partial order on $\mathbb{R}^{d}$ is $\mathbf{x}\leqslant \mathbf{y}$, when $x_i\leqslant y_i$ for each $i$, and $\mathbf{x}<\mathbf{y}$ when $x_i< y_i$ for each $i$. Finally, $\mathbf{x} \cdot \mathbf{y}$ denotes the scalar product in $\mathbb{R}^{d}$, $\|\mathbf{x}\|_1$ the $L^1$-norm and $\|\mathbf{x}\|_2$ the $L^2$-norm.
\subsection{Multitype GW processes}
Let $( \mathbf{X}_k)_{k\geqslant 0}$ denote a $d$-type GW process, with $n$-th transition probabilities $P_n\left( \mathbf{x},\mathbf{y}\right)= \mathbb{P}( \mathbf{X}_{k+n}=\mathbf{y}\mid\mathbf{X}_{k}=\mathbf{x})$, $k$, $n\in\mathbb{N}$, $\mathbf{x}$, $\mathbf{y}\in\mathbb{N}^d$. Let $\mathbf{f}=\left( f_1,\ldots,f_d\right) $ be its offspring generating function, where for each $i=1\ldots d$ and $\mathbf{r}\in[0,1]^d$, $f_i\left( \mathbf{r}\right) =\mathbb{E}_{\mathbf{e}_i}(\mathbf{r}^{\mathbf{X}_1})=\sum_{\mathbf{k}\in\mathbb{N}^d}p_i\left( \mathbf{k}\right) \mathbf{r}^{\mathbf{k}}$, the subscript $\mathbf{e}_i$ denoting the initial condition, and $p_i$ the offspring probability distribution of type $i$. For each $i$, we denote by $\mathbf{m}^i=\left( m_{i1},\ldots,m_{id}\right) $ (resp. $\mathbf{\Sigma}^{i}$) the mean vector (resp. covariance matrix) of the offspring probability distribution $p_i$. The mean matrix is then given by $\mathbf{M}=(m_{ij})_{1\leqslant i,j \leqslant d}$. If it exists, we denote by $\rho$ its Perron root, and by $\mathbf{u}$ and $\mathbf{v}$ the associated right and left eigenvectors (i.e. such that $\mathbf{M}\mathbf{u}^T=\rho\mathbf{u}^T$, $\mathbf{v}\mathbf{M}=\rho\mathbf{v}$), with the normalization convention $\mathbf{u}\cdot \mathbf{1}=\mathbf{u}\cdot\mathbf{v}=1$. The process is then called critical (resp. subcritical, supercritical) if $\rho=1$ (resp. $\rho<1$, $\rho>1$). In what follows we shall denote by $\mathbf{f}_n$ the $n$-th iterate of the function $\mathbf{f}$, and by $\mathbf{M}^n=(m_{ij}^{(n)})_{1\leqslant i,j \leqslant d}$ the $n$-th power of the matrix $\mathbf{M}$, which correspond respectively to the generating function and mean matrix of the process at time $n$. By the branching property, for each $\mathbf{x}\in\mathbb{N}^d$, the function $\mathbf{f}_n^\mathbf{x}$ then corresponds to the generating function of the process at time $n$ with initial state $\mathbf{x}$, namely $\mathbb{E}_{\mathbf{x}}(\mathbf{r}^{\mathbf{X}_n})=\mathbf{f}_n\left( \mathbf{r}\right)^\mathbf{x}$. Finally, we define the extinction time $T=\inf\{k\in\mathbb{N},\,\mathbf{X}_k=\mathbf{0}\}$, and the extinction probability vector $\mathbf{q}=(q_1,\ldots,q_d)$, given by $q_i=\mathbb{P}_{\mathbf{e}_i}\left( T<+\infty\right)$, $i=1\ldots d$.
\subsection{Basic assumptions}
\label{sec:basic assumptions}
\begin{enumerate}
\item[$(A_1)$] The mean matrix $\mathbf{M}$ is finite. The process is nonsingular ($\mathbf{f}(\mathbf{r})\neq \mathbf{M}\mathbf{r}$), is positive regular (there exists some $n\in\mathbb{N}^*$ such that each entry of $\mathbf{M}^n$ is positive), and is such that $\mathbf{q}>\mathbf{0}$.
\end{enumerate}
The latter statement will always be assumed. It ensures in particular the existence of the Perron root $\rho$ and that (\cite{Karl66}),
\begin{equation}\label{mean}
\lim_{n\to+\infty}\rho^{-n}m^{(n)}_{ij}=u_iv_j.
\end{equation} When necessary, the following additional assumptions will be made.
\begin{enumerate}
\item[$(A_2)$] For each $i,j=1\ldots d$, $\mathbb{E}_{\mathbf{e}_i}( X_{1,j}\ln X_{1,j})<+\infty$.
\item[$(A_3)$] The covariance matrices $\mathbf{\Sigma}^i$, $i=1\ldots d$, are finite.
\end{enumerate}
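As a toy illustration of this notation (our example, not taken from the cited references), consider the $2$-type mean matrix
\[
\mathbf{M}=\begin{pmatrix} 1/2 & 1/2\\ 1/2 & 1/2\end{pmatrix},
\qquad \rho=1,\qquad \mathbf{u}=(1/2,1/2),\qquad \mathbf{v}=(1,1),
\]
which is positive regular and satisfies the normalization convention $\mathbf{u}\cdot\mathbf{1}=\mathbf{u}\cdot\mathbf{v}=1$. Since $\mathbf{M}$ is idempotent here, $m_{ij}^{(n)}=1/2$ for every $n$, so \eqref{mean} holds exactly: $\rho^{-n}m_{ij}^{(n)}=1/2=u_iv_j$.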
\subsection{The associated process}
\label{sec:asso}
For any vector $\mathbf{a}>\mathbf{0}$ such that for each $i=1\ldots d$, $f_i(\mathbf{a})<+\infty$, we define the generating function $\overline{\mathbf{f}}=\left( \overline{f}_1,\ldots,\overline{f}_d\right) $ on $[0,1]^d$ as follows: \[\overline{f}_i\left( \mathbf{r}\right)=\frac{f_i\left( \mathbf{a} \mathbf{r}\right)}{f_i\left( \mathbf{a}\right)},\ \ i=1\ldots d.\]
We then denote by $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$ the GW process with offspring generating function $\overline{\mathbf{f}}$, which will be referred to as the \textit{associated process} with respect to $\mathbf{a}$. We shall denote by $\overline{P}_n$, $\overline{p}_i$ etc. its transition probabilities, offspring probability distributions etc. We easily compute that for each $n\geqslant 1$, $i=1\ldots d$, $\mathbf{k}\in\mathbb{N}^d$ and $\mathbf{r}\in[0,1]^d$, denoting by $*$ the convolution product,
\begin{align}\label{off}
\overline{p}_i^{*n}\left(\mathbf{k}\right)=\frac{\mathbf{a}^{\mathbf{k}}}{f_i\left( \mathbf{a}\right)^{n} }p_i^{*n}\left(\mathbf{k}\right),\ \ \
\overline{f}_{n,i}\left(\mathbf{r}\right)=\frac{f_{n,i}\left(\mathbf{a}\mathbf{r}\right)}{f_i\left( \mathbf{a}\right)^{n} }.
\end{align}
\begin{remark}\label{rem: subcritic} It is known (\cite{JagLag08}) that a supercritical GW process conditioned on the event $\{T<+\infty\}$ is subcritical. By construction, its offspring generating function is given by $\mathbf{r}\mapsto f_i(\mathbf{q}\mathbf{r})/q_i$. Since the extinction probability vector satisfies $\mathbf{f}(\mathbf{q})=\mathbf{q}$ (\cite{Har63}), this means that the associated process $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$ with respect to $\mathbf{q}$ is subcritical.
\end{remark}
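To make this construction concrete, consider the following worked monotype example (ours): let $d=1$ and $f(r)=(1-p)/(1-pr)$ with $p\in(1/2,1)$, i.e. geometric offspring $p_1(k)=(1-p)p^k$ with mean $m=p/(1-p)>1$ and extinction probability $q=(1-p)/p$. Taking $\mathbf{a}=q$ yields
\[
\overline{f}(r)=\frac{f(qr)}{f(q)}=\frac{f(qr)}{q}=\frac{p}{1-(1-p)r},
\]
which is again a geometric generating function, now with mean $\overline{m}=(1-p)/p=1/m<1$: in accordance with Remark \ref{rem: subcritic}, the associated process with respect to $q$ is subcritical.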
\section{Classical results: conditioning on non-extinction}
\label{sec:nonext}
\subsection{The Yaglom distribution (\cite{JofSpit67}, Theorem 3)}
\label{sec:Yaglom}
Let $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ be a subcritical multitype GW process satisfying $(A_1)$. Then for all $\mathbf{x}_0,\mathbf{z}\in\mathbb{N}^d\setminus\{\mathbf{0}\}$,
\begin{equation}\label{Yaglom}
\lim_{k\to+\infty}\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k}=\mathbf{z} \mid \mathbf{X}_k\neq \mathbf{0}\right)=\nu(\mathbf{z}),
\end{equation}
where $\nu$ is a probability distribution on $\mathbb{N}^d\setminus\{\mathbf{0}\}$ independent of the initial state $\mathbf{x}_0$. This quasi-stationary distribution is often referred to as the Yaglom distribution associated with $\left(\mathbf{X}_k\right)_{k\geqslant 0}$. We shall denote by $g$ its generating function $g(\mathbf{r})=\sum_{\mathbf{z}\neq \mathbf{0}}\nu(\mathbf{z})\mathbf{r}^{\mathbf{z}}$. Under $(A_2)$, $\nu$ admits finite and positive first moments
\begin{equation}\label{first moment g}
\frac{\partial g \left( \mathbf{1}\right) }{\partial r_i}=v_i\gamma^{-1},\ \ i=1\ldots d,
\end{equation}
where $\gamma>0$ is a limiting quantity satisfying for each $\mathbf{x}\in\mathbb{N}^d\setminus\{\mathbf{0}\}$,
\begin{equation}\label{againbasic}
\lim_{k\to +\infty}\rho^{-k}\mathbb{P}_{\mathbf{x}}\left( \mathbf{X}_k\neq\mathbf{0}\right) = \gamma\,\mathbf{x}\cdot\mathbf{u}.
\end{equation}
\subsection{The $Q$-process (\cite{Naka78}, Theorem 2)}
\label{sec:Q process}
Let $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ be a multitype GW process satisfying $(A_1)$. Then for all $\mathbf{x}_0\in\mathbb{N}^d\setminus\{\mathbf{0}\}$, $k_1\leqslant\ldots\leqslant k_j\in\mathbb{N}$, and $\mathbf{x}_1,\ldots,\mathbf{x}_j\in\mathbb{N}^d$,
\begin{multline}\label{limext}
\lim_{n\to+\infty}\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid \mathbf{X}_{k_j+n}\neq\mathbf{0},\,T<+\infty\right)\\=\frac{1}{\overline{\rho}^{k_j}}\frac{\mathbf{x}_j\cdot\overline{\mathbf{u}}}{\mathbf{x}_0\cdot\overline{\mathbf{u}}}\mathbb{P}_{\mathbf{x}_0}\left( \overline{\mathbf{X}}_{k_1}=\mathbf{x}_1,\ldots, \overline{\mathbf{X}}_{k_j}=\mathbf{x}_j\right),
\end{multline}
where $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$ is the associated process with respect to $\mathbf{q}$. As mentioned in the introduction, this limiting process is the $Q$-process associated with $\left(\mathbf{X}_k\right)_{k\geqslant 0}$. It is Markovian with transition probabilities
\begin{equation*}
Q_1\left( \mathbf{x},\mathbf{y}\right)=\frac{1}{\overline{\rho}}\frac{\mathbf{y}\cdot\overline{\mathbf{u}}}{\mathbf{x}\cdot\overline{\mathbf{u}}}\overline{P}_1\left( \mathbf{x},\mathbf{y}\right)=\frac{1}{\overline{\rho}}\mathbf{q}^{\mathbf{y}-\mathbf{x}}\frac{\mathbf{y}\cdot\overline{\mathbf{u}}}{\mathbf{x}\cdot\overline{\mathbf{u}}}P_1\left( \mathbf{x},\mathbf{y}\right),\ \ \ \ \ \ \mathbf{x},\mathbf{y}\in\mathbb{N}^d\setminus\{\mathbf{0}\}.
\end{equation*}
If $\rho>1$, the $Q$-process is positive recurrent. If $\rho=1$, it is transient. If $\rho<1$, the $Q$-process is positive recurrent if and only if $(A_2)$ is satisfied. In the positive recurrent case, the stationary measure for the $Q$-process is given by the size-biased Yaglom distribution
\begin{equation}\label{size biased}
\overline{\mu}\left( \mathbf{z}\right)=\frac{\mathbf{z}\cdot\mathbf{u}\,\overline{\nu}\left( \mathbf{z}\right)}{\sum_{\mathbf{y}\in\mathbb{N}^d\setminus\{\mathbf{0}\}}\mathbf{y}\cdot\mathbf{u}\,\overline{\nu}\left( \mathbf{y}\right)},\ \ \mathbf{z}\in\mathbb{N}^d\setminus\{\mathbf{0}\},
\end{equation}
where $\overline{\nu}$ is the Yaglom distribution associated with the subcritical process $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$.
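The convergence towards the size-biased Yaglom distribution is easy to observe numerically. The following Monte Carlo sketch (ours; the offspring law, parameters and sample sizes are illustrative, and the modest values of $k$ and $n$ only give an approximate agreement) simulates a subcritical monotype process with geometric offspring, for which $\mathbf{q}=1$ so that the associated process coincides with the original one, and compares the empirical law of $X_k$ given $X_{k+n}\neq 0$ with the size-biased reweighting of the empirical Yaglom law.
\begin{verbatim}
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
p = 0.45          # offspring law P(j) = (1-p) p^j, mean rho = p/(1-p) < 1

def run(length, x0=1):
    x, path = x0, [x0]
    for _ in range(length):
        # total offspring of x individuals: sum of x iid shifted geometrics
        x = int(rng.geometric(1 - p, size=x).sum() - x) if x > 0 else 0
        path.append(x)
    return path

n_sims, k, n = 100_000, 8, 6
cond, yaglom = Counter(), Counter()
for _ in range(n_sims):
    path = run(k + n)
    if path[k] > 0:
        yaglom[path[k]] += 1      # proxy for the Yaglom law nu
    if path[k + n] > 0:
        cond[path[k]] += 1        # law of X_k given X_{k+n} != 0

tot = sum(cond.values())
norm = sum(z * c for z, c in yaglom.items())
for z in sorted(cond):
    print(z, cond[z] / tot, z * yaglom[z] / norm)
\end{verbatim}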
\subsection{A Yaglom-type distribution (\cite{Naka78}, Theorem 3)}
\label{sec:Yaglom type}
Let $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ be a noncritical multitype GW process satisfying $(A_1)$. Then for all $\mathbf{x}_0,\mathbf{z}\in\mathbb{N}^d\setminus\{\mathbf{0}\}$ and $n\in\mathbb{N}$,
$
\lim_{k}\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k}=\mathbf{z} \mid \mathbf{X}_{k+n}\neq\mathbf{0},\,T<+\infty\right)=\overline{\nu}^{(n)}(\mathbf{z})$,
where $\overline{\nu}^{(n)}$ is a probability distribution on $\mathbb{N}^d\setminus\{\mathbf{0}\}$ independent of the initial state $\mathbf{x}_0$. In particular, $\overline{\nu}^{(0)}=\overline{\nu}$ is the Yaglom distribution associated with $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$, the associated subcritical process with respect to $\mathbf{q}$. Moreover, assuming in addition $(A_2)$ if $\rho<1$, then for each $\mathbf{z}\in\mathbb{N}^d\setminus\{\mathbf{0}\}$, $
\lim_{n}\overline{\nu}^{(n)}(\mathbf{z})=\overline{\mu}\left( \mathbf{z}\right).$
\section{Conditioning on reaching a certain state or threshold}
\label{sec:threshold}
In this section we shall generalize \eqref{limext} by proving that by replacing the conditioning event $\{\mathbf{X}_{k_j+n}\neq \mathbf{0}\}$ by $\{\mathbf{X}_{k_j+n}\in S\}$, where $S$ is a subset of $\mathbb{N}^d\setminus\{\mathbf{0}\}$, the obtained limit process remains the $Q$-process. In particular, conditioning the process on reaching a certain non-zero state or positive threshold in a distant future, i.e. with \begin{equation*}S=\{\mathbf{y}\},\ S=\{\mathbf{x}\in\mathbb{N}^d,\ \|\mathbf{x}\|_1= m\}\ \mbox{or}\ S=\{\mathbf{x}\in\mathbb{N}^d,\ \|\mathbf{x}\|_1\geqslant m\},\end{equation*} ($\mathbf{y}\neq\mathbf{0}, m>0$), leads to the same result as conditioning the process on non-extinction.
In what follows we call a subset $S$ accessible if for any $\mathbf{x}\in\mathbb{N}^d\setminus\{\mathbf{0}\}$, there exists some $n\in\mathbb{N}$ such that $\mathbb{P}_{\mathbf{x}}\left( \mathbf{X}_n\in S\right) >0$. For any subset $S$ we shall denote $S^c=\mathbb{N}^d\setminus\left(\{\mathbf{0}\}\cup S\right)$.
\begin{theorem}Let $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ be a multitype GW process satisfying $(A_1)$, and let $S$ be a subset of $\mathbb{N}^d\setminus\{\mathbf{0}\}$. If $\rho\leqslant 1$ we assume in addition one of the following assumptions:
\begin{itemize}
\item[$(a_1)$] $S$ is finite and accessible,
\item[$(a_2)$] $S^c$ is finite,
\item[$(a_3)$] $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ is subcritical and satisfies $(A_2)$.
\end{itemize}
Then for all $\mathbf{x}_0\in\mathbb{N}^d\setminus\{\mathbf{0}\}$, $k_1\leqslant\ldots\leqslant k_j\in\mathbb{N}^*$ and $\mathbf{x}_1,\ldots,\mathbf{x}_j\in\mathbb{N}^d$,
\begin{multline}\label{result}
\lim_{n\to +\infty}\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid\mathbf{X}_{k_j+n}\in S,\ T<+\infty\right)
\\=\frac{1}{\overline{\rho}^{k_j}}\frac{\mathbf{x}_j\cdot\overline{\mathbf{u}}}{\mathbf{x}_0\cdot\overline{\mathbf{u}}}\mathbb{P}_{\mathbf{x}_0}\left( \overline{\mathbf{X}}_{k_1}=\mathbf{x}_1,\ldots,
\overline{\mathbf{X}}_{k_j}=\mathbf{x}_j\right),
\end{multline}
where $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$ is the associated process with respect to $\mathbf{q}$.
\end{theorem}
\begin{proof}Note that if $\rho>1$, then $\mathbf{q}<\mathbf{1}$ (\cite{AthNey}) which implies that $\mathbb{E}_{\mathbf{e}_i}( \overline{X}_{1,j}\ln \overline{X}_{1,j}) <+\infty$, meaning that $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0} $ automatically satisfies $(A_2)$. Thanks to Remark \ref{rem: subcritic}, we can thus assume without loss of generality that $\rho \leqslant 1$ and simply consider the limit \begin{multline}\label{reaching}\lim_{n\to +\infty}\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid\mathbf{X}_{k_j+n}\in S\right)\\=\lim_{n\to +\infty}\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\right) \frac{\mathbb{P}_{\mathbf{x}_j}\left( \mathbf{X}_n\in S\right)}{\mathbb{P}_{\mathbf{x}_0}\big( \mathbf{X}_{k_j+n}\in S\big)}.\end{multline}
Let us recall here some of the technical results established for $\rho \leqslant 1$ in \cite{Naka78}, essential to our proof. First, for each $\mathbf{0}\leqslant \mathbf{b}<\mathbf{c}\leqslant \mathbf{1}$ and $\mathbf{x}\in\mathbb{N}^d$,
\begin{equation}\label{Nak1}
\lim_{n\to +\infty}\frac{\mathbf{v}\cdot\left( \mathbf{f}_{n+2}\left( \mathbf{0}\right)-\mathbf{f}_{n+1}\left( \mathbf{0}\right) \right) }{\mathbf{v}\cdot\left( \mathbf{f}_{n+1}\left( \mathbf{0}\right)-\mathbf{f}_{n}\left( \mathbf{0}\right) \right) }=\rho,
\end{equation}
\begin{equation}\label{Nak2}
\lim_{n\to +\infty}\frac{ \mathbf{f}_{n}\left( \mathbf{c}\right)^{\mathbf{x}}-\mathbf{f}_{n}\left( \mathbf{b}\right)^{\mathbf{x}} }{\mathbf{v}\cdot\left( \mathbf{f}_{n}\left( \mathbf{c}\right)-\mathbf{f}_{n}\left( \mathbf{b}\right) \right) }=\mathbf{x}\cdot\mathbf{u}.
\end{equation}
Moreover, for each $\mathbf{x},\mathbf{y}\in\mathbb{N}^d\setminus\{\mathbf{0}\}$,
\begin{equation}\label{Nak3}
\lim_{n\to +\infty}\frac{1-\mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}}}{1-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}}}=\rho,
\end{equation}
\begin{equation}\label{Nak4}
P_n\left( \mathbf{x},\mathbf{y}\right) =\left( \pi\left( \mathbf{y}\right) +\varepsilon_n\left( \mathbf{x},\mathbf{y}\right)\right) \left( \mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}}-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}}\right),
\end{equation}where $\lim_{n}\varepsilon_n\left( \mathbf{x},\mathbf{y}\right)=0$ and $\pi$ is the unique measure (up to multiplicative constants) on $\mathbb{N}^d\setminus\{\mathbf{0}\}$ not identically zero satisfying $\sum_{\mathbf{y}\neq \mathbf{0}}\pi( \mathbf{y}) P( \mathbf{y},\mathbf{z})=\rho\pi( \mathbf{z})$ for each $\mathbf{z}\neq \mathbf{0}$. In particular, if $\rho<1$, $\pi=\left( 1-\rho\right)^{-1}\nu$, where $\nu$ is the probability distribution defined by \eqref{Yaglom}.
Let us first assume $(a_1)$. By \eqref{Nak4}
\begin{align}\label{toworkon1}
\frac{\mathbb{P}_{\mathbf{x}_j}\left( \mathbf{X}_n\in S\right)}{\mathbb{P}_{\mathbf{x}_0}\big( \mathbf{X}_{k_j+n}\in S\big)}&=\frac{\sum_{\mathbf{z} \in S}P_n\left( \mathbf{x}_j,\mathbf{z}\right) }{\sum_{\mathbf{z} \in S}P_{n+k_j}\left( \mathbf{x}_0,\mathbf{z}\right)}\nonumber\\&=\frac{\pi\left( S\right) +\varepsilon_n\left( \mathbf{x}_j\right) }{\pi\left( S\right) +\varepsilon_{n+k_j}\left( \mathbf{x}_0\right)}\frac{ \mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}_j}-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}_j}}{\mathbf{f}_{n+k_j+1}\left( \mathbf{0}\right) ^{\mathbf{x}_0}-\mathbf{f}_{n+k_j}\left( \mathbf{0}\right) ^{\mathbf{x}_0}},
\end{align}
where $\lim_{n}\varepsilon_n\left( \mathbf{x}\right)=\lim_{n}\sum_{\mathbf{z}\in S}\varepsilon_n\left( \mathbf{x},\mathbf{z}\right)=0$ since $S$ is finite. On the one hand, we can deduce from \eqref{Nak1} and \eqref{Nak2} that
\begin{equation*}
\lim_{n\to +\infty}\frac{ \mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}_j}-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}_j}}{\mathbf{f}_{n+k_j+1}\left( \mathbf{0}\right) ^{\mathbf{x}_0}-\mathbf{f}_{n+k_j}\left( \mathbf{0}\right) ^{\mathbf{x}_0}}
=\frac{1}{\rho^{k_j}}\frac{\mathbf{x}_j\cdot\mathbf{u}}{\mathbf{x}_0\cdot\mathbf{u}}.
\end{equation*}
On the other hand, $\pi$ being not identically zero, there exists some $\mathbf{y}_0\in\mathbb{N}^d\setminus\{\mathbf{0}\}$ such that $\pi\left( \mathbf{y}_0\right) >0$. Since $S$ is accessible, there exists some $\mathbf{z}_0\in S$ and $k\in\mathbb{N}^*$ such that $P_k\left( \mathbf{y}_0,\mathbf{z}_0\right)>0$, and thus
\[+\infty>\pi\left( S\right) \geqslant\pi\left( \mathbf{z}_0\right) =\rho^{-k}\sum_{\mathbf{y}\in\mathbb{N}^d\setminus\{\mathbf{0}\}}\pi\left( \mathbf{y}\right) P_k\left( \mathbf{y},\mathbf{z}_0\right)\geqslant \rho^{-k}\pi\left( \mathbf{y}_0\right) P_k\left( \mathbf{y}_0,\mathbf{z}_0\right)>0.\]
From \eqref{toworkon1} we thus deduce that \eqref{reaching} leads to \eqref{result}.
Let us now assume $(a_2)$. We can similarly deduce from \eqref{Nak4} that
\begin{multline}\label{toworkon}
\frac{\mathbb{P}_{\mathbf{x}_j}\left( \mathbf{X}_n\in S\right)}{\mathbb{P}_{\mathbf{x}_0}( \mathbf{X}_{k_j+n}\in S)}\\=\frac{1-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}_j}- \left( \pi\left( S^c\right) +\varepsilon_n\left( \mathbf{x}_j\right)\right) \left( \mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}_j}-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}_j}\right) }{1-\mathbf{f}_{n+k_j}\left( \mathbf{0}\right) ^{\mathbf{x}_0}-( \pi\left( S^c\right) +\varepsilon_{n+k_j}\left( \mathbf{x}_0\right)) (\mathbf{f}_{n+k_j+1}\left( \mathbf{0}\right) ^{\mathbf{x}_0}-\mathbf{f}_{n+k_j}\left( \mathbf{0}\right) ^{\mathbf{x}_0}) },
\end{multline}
with $0\leqslant \pi\left( S^c\right) <+\infty$ and $\lim_{n}\varepsilon_n\left( \mathbf{x}\right)=\lim_{n}\sum_{\mathbf{z}\in S^c}\varepsilon_n\left( \mathbf{x},\mathbf{z}\right)=0$ since $S^c$ is finite. Note that \eqref{limext} implies that
\begin{equation*}
\ \lim_{n\to +\infty}\frac{1-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}_j}}{1-\mathbf{f}_{n+k_j}\left( \mathbf{0}\right) ^{\mathbf{x}_0}}=\frac{1}{\rho^{k_j}}\frac{\mathbf{x}_j\cdot\mathbf{u}}{\mathbf{x}_0\cdot\mathbf{u}},
\end{equation*} which together with \eqref{Nak3} enables us to show that \eqref{toworkon} tends to $\rho^{-k_j}\frac{\mathbf{x}_j\cdot\mathbf{u}}{\mathbf{x}_0\cdot\mathbf{u}}$ as $n$ tends to infinity, leading again to \eqref{result}.
Let us finally assume $(a_3)$. Then we know from \cite{Naka78} (Remark 2) that $\pi(\mathbf{z})>0$ for each $\mathbf{z}\neq \mathbf{0}$, hence automatically $0<\pi\left( S\right) =\left( 1-\rho\right)^{-1}\nu\left( S\right)<+\infty$. Moreover, $\nu$ admits finite first-order moments (see \eqref{first moment g}). Hence for any $a>0$, by Markov's inequality,
\begin{align*}
\left| \sum_{\mathbf{z}\in S}\varepsilon_n\left( \mathbf{x},\mathbf{z}\right)\right|&\leqslant \sum_{\substack{\mathbf{z}\in S\\\|\mathbf{z}\|_1<a}}\left|\varepsilon_n\left( \mathbf{x},\mathbf{z}\right)\right|+\sum_{\substack{\mathbf{z}\in S\\\|\mathbf{z}\|_1 \geqslant a}}\left|\frac{P_n\left( \mathbf{x},\mathbf{z}\right)}{\mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}}-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}}}- \pi\left( \mathbf{z}\right)\right|\\&\leqslant \sum_{\substack{\mathbf{z}\in S\\\|\mathbf{z}\|_1<a}}\left|\varepsilon_n\left( \mathbf{x},\mathbf{z}\right)\right|+\frac{1}{a}\frac{\mathbb{E}_{\mathbf{x}}\left(\|\mathbf{X}_n\|_1\right)}{\mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}}-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}}}+\frac{1}{1-\rho}\frac{1}{a}
\sum_{i=1}^d\frac{\partial g \left( \mathbf{1}\right) }{\partial r_i}.
\end{align*}
We recall that by \eqref{againbasic}, $\lim_n \rho^{-n}\left( \mathbf{f}_{n+1}\left( \mathbf{0}\right) ^{\mathbf{x}}-\mathbf{f}_{n}\left( \mathbf{0}\right) ^{\mathbf{x}}\right) =\left( 1-\rho\right) \gamma\,\mathbf{x}\cdot\mathbf{u}$, while by \eqref{mean}, $\lim_n \rho^{-n}\mathbb{E}_{\mathbf{x}}\left(\|\mathbf{X}_n\|_1\right)=\sum_{i,j=1}^dx_iu_iv_j$. Hence the previous inequality ensures that $\lim_n \sum_{\mathbf{z}\in S}\varepsilon_n\left( \mathbf{x},\mathbf{z}\right)=0$. We can thus write \eqref{toworkon1} even without the finiteness assumption of $S$, and prove \eqref{result} as previously.
\end{proof}
\section{The size-biased Yaglom distribution as a double limit}
\label{sec:Yaglom double}
From Subsection \ref{sec:Q process} and Subsection \ref{sec:Yaglom type} we know that in the noncritical case, assuming $(A_2)$ if $\rho<1$,
\begin{align*}
\lim_{k\to +\infty}\lim_{n\to +\infty}\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\left( \mathbf{X}_k=\mathbf{z}\mathbf{m}id\mathbf{X}_{k+n}\mathbf{m}athbf{n}eq \mathbf{m}athbf{0},\,T<+\infty \right) &=\lim_{k\to +\infty}Q_k\left(\mathbf{m}athbf{x}_0, \mathbf{z}\right) =\overline{\mathbf{m}u}\left( \mathbf{z}\right),\\
\lim_{n\to +\infty}\lim_{k\to +\infty}\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\left( \mathbf{X}_k=\mathbf{z}\mathbf{m}id\mathbf{X}_{k+n}\mathbf{m}athbf{n}eq \mathbf{m}athbf{0},\,T<+\infty \right) &=\lim_{n\to +\infty}\overline{\mathbf{m}athbf{n}u}^{(n)}\left(\mathbf{z}\right) =\overline{\mathbf{m}u}\left( \mathbf{z}\right).
\end{align*} We prove here that, under the stronger assumption $(A_3)$ if $\rho<1$, this limiting result also holds when $k$ and $n$ simultaneously tend to infinity.
\begin{theorem}Let $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ be a noncritical multitype GW process satisfying $(A_1)$. If $\rho<1$, we assume in addition $(A_3)$. Then for all $\mathbf{m}athbf{x}_0\in\mathbf{N}N\mathbf{m}athbf{s}etminus\{\mathbf{m}athbf{0}\}$ and $\mathbf{z}\in\mathbf{N}N$, \[\lim_{\mathbf{m}athbf{s}ubstack{n\to +\infty\\k\to+\infty}}\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\left( \mathbf{X}_k=\mathbf{z}\mathbf{m}id\mathbf{X}_{k+n}\mathbf{m}athbf{n}eq \mathbf{m}athbf{0},\,T<+\infty \right) =\overline{\mathbf{m}u}\left( \mathbf{z}\right),\] where $\overline{\mathbf{m}u}$ is the size-biased Yaglom distribution of $\mathbf{i}g(\overline{\mathbf{X}}_k\mathbf{i}g)_{k\geqslant 0}$, the associated process with respect to $\mathbf{q}$. \end{theorem}
\begin{remark}
This implies in particular that for any $0<t<1$,
\[\lim_{k\to +\infty}\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\left( \mathbf{X}_{\left\lfloor kt\right\rfloor}=\mathbf{z}\mathbf{m}id\mathbf{X}_{k}\mathbf{m}athbf{n}eq \mathbf{m}athbf{0},\,T<+\infty \right) =\overline{\mathbf{m}u}\left( \mathbf{z}\right).\]
\end{remark}
\begin{remark}In the critical case, the $Q$-process is transient and the obtained limit is degenerate. A suitable normalization in order to obtain a non-degenerate probability distribution is of the form $\mathbf{X}_k/k$. However, even with this normalization, the previous result does not hold in the critical case. Indeed, we know for instance that in the monotype case, a critical process with finite variance $\mathbf{m}athbf{s}igma^2>0$ satisfies for each $z\geqslant 0$ (\cite{LaNey68}),
\begin{align*}\lim_{k\to +\infty}\lim_{n\to +\infty}\mathbf{m}athbb{P}_{1}\left( \frac{X_k}{k}\leqslant z\mathbf{m}id X_{k+n}\mathbf{m}athbf{n}eq 0\right) &=1-e^{-\frac{2z}{\mathbf{m}athbf{s}igma^2}} ,\\
\lim_{n\to +\infty}\lim_{k\to +\infty}\mathbf{m}athbb{P}_{1}\left( \frac{X_k}{k}\leqslant z\mathbf{m}id X_{k+n}\mathbf{m}athbf{n}eq 0 \right)&=1-e^{-\frac{2z}{\mathbf{m}athbf{s}igma^2}}-\frac{2z}{\mathbf{m}athbf{s}igma^2}e^{-\frac{2z}{\mathbf{m}athbf{s}igma^2}}.
\end{align*}
\end{remark}
\begin{proof}Thanks to Remark \ref{rem: subcritic} and to the fact that if $\rho>1$, $\mathbb{E}_{\mathbf{e}_i}( \overline{X}_{1,j} \overline{X}_{1,l})<+\infty$, we can assume without loss of generality that $\rho < 1$. For each $n$, $k\in\mathbb{N}$ and $\mathbf{r}\in[0,1]^d$, \[\mathbb{E}_{\mathbf{x}_0}\left(\mathbf{r}^{\mathbf{X}_k}\mathbf{1}_{\mathbf{X}_{k+n}=\mathbf{0}} \right)=\sum_{\mathbf{y}\in\mathbb{N}^d}\mathbb{P}_{\mathbf{x}_0}\left(\mathbf{X}_k=\mathbf{y}\right) \mathbf{r}^{\mathbf{y}}\mathbb{P}_{\mathbf{y}}\left(\mathbf{X}_{n}=\mathbf{0} \right)=\mathbf{f}_k\left( \mathbf{r}\mathbf{f}_{n}\left( \mathbf{0}\right) \right)^{\mathbf{x}_0},\] which leads to
\begin{align}\label{begin}
\mathbb{E}_{\mathbf{x}_0}\left[\mathbf{r}^{\mathbf{X}_k}\mid \mathbf{X}_{k+n}\neq \mathbf{0} \right]&=\frac{\mathbb{E}_{\mathbf{x}_0}\left(\mathbf{r}^{\mathbf{X}_k} \right) -\mathbb{E}_{\mathbf{x}_0}\left(\mathbf{r}^{\mathbf{X}_k}\mathbf{1}_{\mathbf{X}_{k+n}=\mathbf{0}} \right)}{1-\mathbb{P}_{\mathbf{x}_0}\left(\mathbf{X}_{k+n}= \mathbf{0} \right)}\nonumber\\&=\frac{\mathbf{f}_k\left( \mathbf{r}\right)^{\mathbf{x}_0} -\mathbf{f}_k\left( \mathbf{r}\mathbf{f}_{n}\left( \mathbf{0}\right) \right)^{\mathbf{x}_0} }{1-\mathbf{f}_{k+n}\left( \mathbf{0}\right)^{\mathbf{x}_0}}.
\end{align}
By Taylor's theorem,
\begin{multline}\label{Taylor}\mathbf{f}_k\left( \mathbf{r}\right)^{\mathbf{x}_0} -\mathbf{f}_k\left( \mathbf{r}\mathbf{f}_{n}\left( \mathbf{0}\right) \right)^{\mathbf{x}_0}=\sum_{i=1}^d\frac{\partial \mathbf{f}_k^{\mathbf{x}_0}\left( \mathbf{r}\right)}{\partial r_i}r_i\left( 1-f_{n,i}\left( \mathbf{0}\right) \right) \\-\sum_{i,j=1\ldots d} \frac{r_ir_j\left( 1-f_{n,i}\left( \mathbf{0}\right) \right)( 1-f_{n,j}\left( \mathbf{0}\right) )}{2}\int_0^1\left( 1-t\right) \frac{\partial ^2\mathbf{f}_k^{\mathbf{x}_0}\left( \mathbf{r}-t \mathbf{r}\left( \mathbf{1}-\mathbf{f}_{n}\left( \mathbf{0}\right)\right) \right)}{\partial r_i\partial r_j}dt,
\end{multline}
with
\begin{equation}\label{calculus}\frac{\partial \mathbf{f}_k^{\mathbf{x}_0}\left( \mathbf{r}\right)}{\partial r_i}=\sum_{j=1}^dx_{0,j}\frac{\partial f_{k,j}\left( \mathbf{r}\right) }{\partial r_i}\mathbf{f}_k\left( \mathbf{r}\right)^{\mathbf{x}_0-\mathbf{e}_j}.
\end{equation}
Let us first prove the existence of $\lim_k\rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{r}\right)}{\partial r_i}$ for each $i,j$ and $\mathbf{r}\in[0,1]^d$. For each $k$, $p\in\mathbb{N}$ and $a>0$,
\begin{multline}\label{en fait si}
\Big| \rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{r}\right)}{\partial r_i}- \rho^{-(k+p)}\frac{\partial f_{k+p,j}\left(\mathbf{r}\right)}{\partial r_i} \Big|\\\leqslant \sum_{\substack{\mathbf{z}\in\mathbb{N}^d\\\|\mathbf{z}\|_2< a}}z_i\mathbf{r}^{\mathbf{z}-\mathbf{e}_i}\Big|\rho^{-k}P_k(\mathbf{e}_j, \mathbf{z}) - \rho^{-(k+p)}P_{k+p}(\mathbf{e}_j, \mathbf{z})\Big| \\+ \rho^{-k}\mathbb{E}_{\mathbf{e}_j}\left( X_{k,i}\mathbf{1}_{\|\mathbf{X}_k\|_2\geqslant a}\right) +\rho^{-(k+p)}\mathbb{E}_{\mathbf{e}_j}\left( X_{k+p,i}\mathbf{1}_{\|\mathbf{X}_{k+p}\|_2\geqslant a}\right).
\end{multline}
By Cauchy-Schwarz and Markov's inequalities, $ \mathbb{E}_{\mathbf{e}_j}( X_{k,i}\mathbf{1}_{\|\mathbf{X}_k\|_2\geqslant a})\leqslant \frac{1}{a}\mathbb{E}_{\mathbf{e}_j}( \|\mathbf{X}_{k}\|_2^2).$ For each $\mathbf{x}\in\mathbb{N}^d$, let $\mathbf{C}_{\mathbf{x},k}$ be the matrix $( \mathbb{E}_{\mathbf{x}}( X_{k,i}X_{k,j}))_{1\leqslant i,j\leqslant d}$. According to \cite{Har63},
\begin{equation}\label{har}
\mathbf{C}_{\mathbf{x},k}=( \mathbf{M}^T) ^k\mathbf{C}_{\mathbf{x},0}\mathbf{M}^k+\sum_{n=1}^k( \mathbf{M}^T) ^{k-n}\left( \sum_{i=1}^d\mathbf{\Sigma}^{i}\mathbb{E}_{\mathbf{x}}\left(X_{n-1,i} \right) \right) \mathbf{M}^{k-n}.
\end{equation}
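As a sanity check, note that for $d=1$ and $\mathbf{x}=1$, \eqref{har} reduces to the classical second moment formula for a monotype GW process: writing $m$ for the offspring mean and $\sigma^2$ for the offspring variance, it gives $\mathbb{E}_1( X_k^2) =m^{2k}+\sigma^2\sum_{n=1}^k m^{2(k-n)}m^{n-1}$, that is, $\mathrm{Var}_1(X_k)=\sigma^2m^{k-1}\frac{m^k-1}{m-1}$ when $m\neq 1$.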
Thanks to \eqref{mean}, \eqref{har} implies the existence of some $C>0$ such that for all $k\in\mathbb{N}$,
$\rho^{-k}\mathbb{E}_{\mathbf{e}_j}( \|\mathbf{X}_{k}\|_2^2)=\rho^{-k}\sum_{i=1}^d[ \mathbf{C}_{\mathbf{e}_j,k}]_{ii} \leqslant C$, so that the last two right-hand terms in \eqref{en fait si} are bounded by $2Ca^{-1}$. As for the first right-hand term in \eqref{en fait si}, thanks to \eqref{againbasic} and \eqref{Nak4} it is as small as desired for $k$ large enough. This proves that $( \rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{r}\right)}{\partial r_i})_k$ is a Cauchy sequence.
train | 0.24.5 | Let us first prove the existence of $\lim_k\rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{m}athbf{r}\right)}{\partial r_i}$ for each $i,j$ and $\mathbf{m}athbf{r}\in[0,1]^d$. For each $k$, $p\in\mathbf{m}athbb{N}$ and $a>0$,
\begin{multline}\label{en fait si}
\Big| \rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{m}athbf{r}\right)}{\partial r_i}- \rho^{-(k+p)}\frac{\partial f_{k+p,j}\left(\mathbf{m}athbf{r}\right)}{\partial r_i} \Big|\\\leqslant \mathbf{m}athbf{s}um_{\mathbf{m}athbf{s}ubstack{\mathbf{z}\in\mathbf{N}N\\\|\mathbf{z}\|_2< a}}z_i\mathbf{m}athbf{r}^{\mathbf{z}-\mathbf{m}athbf{e}_i}\Big|\rho^{-k}P_k(\mathbf{m}athbf{e}_j, \mathbf{z}) - \rho^{-(k+p)}P_{k+p}(\mathbf{m}athbf{e}_j, \mathbf{z})\Big| \\+ \rho^{-k}\mathbf{m}athbb{E}_{\mathbf{m}athbf{e}_j}\left( X_{k,i}\mathbf{m}athbf{1}_{\|\mathbf{X}_k\|_2\geqslant a}\right) +\rho^{-(k+p)}\mathbf{m}athbb{E}_{\mathbf{m}athbf{e}_j}\left( X_{k+p,i}\mathbf{m}athbf{1}_{\|\mathbf{X}_{k+p}\|_2\geqslant a}\right).
\end{multline}
By Cauchy-Schwarz and Markov's inequalities, $ \mathbf{m}athbb{E}_{\mathbf{m}athbf{e}_j}( X_{k,i}\mathbf{m}athbf{1}_{\|\mathbf{X}_k\_2|\geqslant a})\leqslant \frac{1}{a}\mathbf{m}athbb{E}_{\mathbf{m}athbf{e}_j}( \|\mathbf{X}_{k}\|_2^2).$ For each $\mathbf{m}athbf{x}\in\mathbf{N}N$, let $\mathbf{m}athbf{C}_{\mathbf{m}athbf{x},k}$ be the matrix $( \mathbf{m}athbb{E}_{\mathbf{m}athbf{x}}( X_{k,i}X_{k,j}))_{1\leqslant i,j\leqslant d}$. According to \cite{Har63},
\begin{equation}\label{har}
\mathbf{m}athbf{C}_{\mathbf{m}athbf{x},k}=( \mathbf{m}athbf{M}^T) ^k\mathbf{m}athbf{C}_{\mathbf{m}athbf{x},0}\mathbf{m}athbf{M}^k+\mathbf{m}athbf{s}um_{n=1}^k( \mathbf{m}athbf{M}^T) ^{k-n}\left( \mathbf{m}athbf{s}um_{i=1}^d\mathbf{m}athbf{\Sigma}^{i}\mathbf{m}athbb{E}_{\mathbf{m}athbf{x}}\left(X_{n-1,i} \right) \right) \mathbf{m}athbf{M}^{k-n}.
\end{equation}
Thanks to \eqref{mean} this implies the existence of some $C>0$ such that for all $k\in\mathbf{m}athbb{N}$,
$\rho^{-k}\mathbf{m}athbb{E}_{\mathbf{m}athbf{e}_j}( \|\mathbf{X}_{k}\|_2^2)=\rho^{-k}\mathbf{m}athbf{s}um_{i=1}^d[ \mathbf{m}athbf{C}_{\mathbf{m}athbf{e}_j,k}]_{ii} \leqslant C$, and the two last right terms in \eqref{en fait si} can be bounded by $2Ca^{-1}$. As for the first right term in \eqref{en fait si}, it is thanks to \eqref{againbasic} and \eqref{Nak4} as small as desired for $k$ large enough. This proves that $( \rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{m}athbf{r}\right)}{\partial r_i})_k$ is a Cauchy sequence.
Its limit is then necessarily, for each $\mathbf{r}\in[0,1]^d$,
\begin{equation}\label{limit to prove}
\lim_{k\to+\infty} \rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{r}\right)}{\partial r_i}=\gamma u_j\frac{\partial g\left(\mathbf{r}\right)}{\partial r_i},
\end{equation}
where $g$ is defined in Subsection \ref{sec:nonext}. Indeed, since assumption $(A_3)$ ensures that $(A_2)$ is satisfied, we can deduce from \eqref{Yaglom} and \eqref{againbasic}
that $\lim_{k}\rho^{-k}(f_{k,j}( \mathbf{r})-f_{k,j}( \mathbf{0}))=\gamma u_j g(\mathbf{r})$. Hence, using the fact that $0\leqslant \rho^{-k}\frac{\partial f_{k,j}\left( \mathbf{r}\right)}{\partial r_i} \leqslant \rho^{-k}m^{(k)}_{ji}$, which thanks to \eqref{mean} is bounded, we obtain by Lebesgue's dominated convergence theorem that for each $h\in\mathbb{R}$ such that $\mathbf{r}+h\mathbf{e}_i\in[0,1]^d$, \begin{equation*}\gamma u_j g(\mathbf{r}+h\mathbf{e}_i)-\gamma u_j g(\mathbf{r})
=\lim_{k\to+\infty} \int_0^h\rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{r}+t\mathbf{e}_{i}\right)}{\partial r_i}dt= \int_0^h\lim_{k\to+\infty}\rho^{-k}\frac{\partial f_{k,j}\left(\mathbf{r}+t\mathbf{e}_{i}\right)}{\partial r_i}dt,\end{equation*}
proving \eqref{limit to prove}.
In view of \eqref{Taylor}, let us note that for each $\mathbf{r}\in[0,1]^d$, there exists, thanks to \eqref{mean} and \eqref{har}, some $C>0$ such that for each $k\in\mathbb{N}$,
$ 0\leqslant \rho^{-k}\frac{\partial ^2\mathbf{f}_k^{\mathbf{x}}\left( \mathbf{r} \right)}{\partial r_i\partial r_j}\leqslant\rho^{-k}\mathbb{E}_{\mathbf{x}}[ X_{k,j}( X_{k,i}-\delta_{ij})]\leqslant C$, hence for each $k$, $n\in\mathbb{N}$,\[\rho^{-k}\int_0^1\left( 1-t\right) \frac{\partial ^2\mathbf{f}_k^{\mathbf{x}}\left( \mathbf{r}-t \mathbf{r}\left( \mathbf{1}-\mathbf{f}_{n}\left( \mathbf{0}\right)\right) \right)}{\partial r_i\partial r_j}dt\leqslant \frac{C}{2}.\]
Together with \eqref{againbasic} this entails that the last right-hand term in \eqref{Taylor} satisfies
\[\lim_{\substack{n\to +\infty\\k\to+\infty}}\rho^{-(k+n)}\sum_{i,j=1\ldots d}\frac{r_ir_j\left( 1-f_{n,i}\left( \mathbf{0}\right) \right)( 1-f_{n,j}\left( \mathbf{0}\right) )}{2} \int_0^1\ldots\ dt=0.\]
Moreover, we deduce from \eqref{calculus}, \eqref{limit to prove} and $\lim_n\mathbf{f}_n(\mathbf{r})=\mathbf{1}$ that the first right-hand term in \eqref{Taylor} satisfies
\[\lim_{\substack{n\to +\infty\\k\to+\infty}}\rho^{-(k+n)}\sum_{i=1}^d\frac{\partial \mathbf{f}_k^{\mathbf{x}_0}\left( \mathbf{r}\right)}{\partial r_i}r_i\left( 1-f_{n,i}\left( \mathbf{0}\right) \right)=\gamma^2\,\mathbf{x}_0\cdot\mathbf{u} \sum_{i=1}^dr_iu_i\frac{\partial g\left(\mathbf{r}\right)}{\partial r_i}.\]
Recalling \eqref{begin} and \eqref{againbasic}, we have thus proven that for each $\mathbf{r}\in[0,1]^d$,
\[\lim_{\substack{n\to +\infty\\k\to+\infty}}\mathbb{E}_{\mathbf{x}_0}\left[\mathbf{r}^{\mathbf{X}_k}\mid \mathbf{X}_{k+n}\neq \mathbf{0} \right] =\gamma \sum_{i=1}^dr_iu_i\frac{\partial g\left(\mathbf{r}\right)}{\partial r_i}=\gamma \sum_{\mathbf{z}\in\mathbb{N}^d\setminus\{\mathbf{0}\}}\mathbf{z}\cdot\mathbf{u}\,\nu\left( \mathbf{z}\right)\mathbf{r}^{\mathbf{z}}.\]
Finally, \eqref{first moment g} leads to
$\gamma \sum_{i=1}^du_i\frac{\partial g\left(\mathbf{1}\right)}{\partial r_i}=1$, and thus
\[\lim_{\substack{n\to +\infty\\k\to+\infty}}\mathbb{E}_{\mathbf{x}_0}\left[\mathbf{r}^{\mathbf{X}_k}\mid \mathbf{X}_{k+n}\neq \mathbf{0} \right] =\frac{\sum_{\mathbf{z}\in\mathbb{N}^d\setminus\{\mathbf{0}\}}\mathbf{z}\cdot\mathbf{u}\,\nu\left( \mathbf{z}\right)\mathbf{r}^{\mathbf{z}}}{\sum_{\mathbf{y}\in\mathbb{N}^d\setminus\{\mathbf{0}\}}\mathbf{y}\cdot\mathbf{u}\, \nu\left( \mathbf{y}\right)},\]
which by \eqref{size biased} is a probability generating function.
\end{proof} | 3,116 | 33,678 | en |
train | 0.24.6 | \mathbf{m}athbf{s}ection{Conditioning on the total progeny}
\label{sec:totalprog}
Let $\mathbf{N}=\left( N_1,\ldots,N_d\right)$ denote the total progeny of the process $\left(\mathbf{X}_k\right)_{k\geqslant 0} $, where for each $i=1\ldots d$,
$N_i=\sum_{k=0}^{+\infty}X_{k,i}$,
and $N_i=+\infty$ if the sum diverges. Our aim is to study the behavior of $\left(\mathbf{X}_k \right)_{k\geqslant 0} $ conditioned on the event $\{\mathbf{N}=\left\lfloor n\mathbf{w} \right\rfloor\}$, as $n$ tends to infinity, for some specific positive vector $\mathbf{w}$. We recall that in the critical case, the GW process, suitably normalized and conditioned on non-extinction in the same fashion as in \eqref{Yaglom}, converges to a limit law supported by the ray $\{\lambda\mathbf{v}: \lambda\geqslant 0\}\subset \mathbb{R}_+^d$. In this sense,
its left eigenvector $\mathbf{v}$ describes ``typical limiting type proportions'', as pointed out in \cite{FleiVat06}. As we will see in Lemma \ref{lem1}, conditioning a GW process on a given total progeny size comes down to conditioning an associated critical process on the same total progeny size. For this reason, the vector $\mathbf{w}$ will be chosen to be the left eigenvector of the associated critical process. It then appears that, as in the monotype case (\cite{Ken75}), the process conditioned on an infinite total progeny $\{\mathbf{N}=\left\lfloor n\mathbf{w} \right\rfloor\}$, $n\to\infty$, has the structure of the $Q$-process of a critical process, and is consequently transient. This is the main result, stated in Theorem \ref{thm2}.
\begin{theorem}\label{thm2} Let $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ be a multitype GW process satisfying $(A_1)$. We assume in addition that
\begin{enumerate}
\item[$(A_4)$] there exists $\mathbf{a}>\mathbf{0}$ such that the associated process with respect to $\mathbf{a}$ is critical,
\item[$(A_5)$] for each $j=1\ldots d$, there exist $i=1\ldots d$ and $\mathbf{k}\in\mathbb{N}^d$ such that $p_i\left(\mathbf{k}\right) >0$ and $p_i\left(\mathbf{k}+\mathbf{e}_j\right) >0$,
\item[$(A_6)$] the associated process with respect to $\mathbf{a}$ admits moments of order $d+1$, and its covariance matrices are positive-definite.
\end{enumerate}
Then for all $\mathbf{x}_0\in\mathbb{N}^d\setminus\{\mathbf{0}\}$, $k_1\leqslant\ldots\leqslant k_j\in\mathbb{N}$, and $\mathbf{x}_1,\ldots,\mathbf{x}_j\in\mathbb{N}^d$,
\begin{multline}\label{lim}
\lim_{n\to+\infty}\mathbb{P}_{\mathbf{x}_0}\big( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid\mathbf{N}=\left\lfloor n\overline{\mathbf{v}} \right\rfloor\big)\\=\frac{\mathbf{x}_j\cdot\overline{\mathbf{u}}}{\mathbf{x}_0\cdot\overline{\mathbf{u}}}\mathbb{P}_{\mathbf{x}_0}\big( \overline{\mathbf{X}}_{k_1}=\mathbf{x}_1,\ldots, \overline{\mathbf{X}}_{k_j}=\mathbf{x}_j\big),
\end{multline}
where $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$ is the associated process with respect to $\mathbf{a}$.
\end{theorem}
The limiting process defined by \eqref{lim} is thus Markovian with transition probabilities
\begin{equation*}
\overline{Q}_1\left( \mathbf{x},\mathbf{y}\right) =\frac{\mathbf{y}\cdot\overline{\mathbf{u}}}{\mathbf{x}\cdot\overline{\mathbf{u}}}\overline{P}_1\left( \mathbf{x},\mathbf{y}\right)=\frac{\mathbf{a}^{\mathbf{y}}}{\mathbf{f}\left( \mathbf{a}\right) ^{\mathbf{x}}}\frac{\mathbf{y}\cdot\overline{\mathbf{u}}}{\mathbf{x}\cdot\overline{\mathbf{u}}}P_1\left( \mathbf{x},\mathbf{y}\right),\ \ \ \ \ \ \mathbf{x},\mathbf{y}\in\mathbb{N}^d\setminus\{\mathbf{0}\},
\end{equation*}and corresponds to the $Q$-process associated with the critical process $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$.
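Let us check that $\overline{Q}_1$ is indeed a transition kernel on $\mathbb{N}^d\setminus\{\mathbf{0}\}$: by criticality, $\overline{\mathbf{u}}$ is a right eigenvector of the mean matrix of $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$ for the Perron root $1$, so that for each $\mathbf{x}\in\mathbb{N}^d\setminus\{\mathbf{0}\}$,
\[\sum_{\mathbf{y}\in\mathbb{N}^d\setminus\{\mathbf{0}\}}\overline{Q}_1\left( \mathbf{x},\mathbf{y}\right)=\frac{\mathbb{E}_{\mathbf{x}}\big( \overline{\mathbf{X}}_1\big)\cdot\overline{\mathbf{u}}}{\mathbf{x}\cdot\overline{\mathbf{u}}}=1,\]
the state $\mathbf{y}=\mathbf{0}$ contributing nothing since $\mathbf{0}\cdot\overline{\mathbf{u}}=0$.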
\begin{remark}$ $
\begin{itemize}
\item If $d=1$, the conditional event $\{\mathbf{N}=\left\lfloor n\overline{\mathbf{v}} \right\rfloor\}$ reduces to $\{N=n\}$, as studied in \cite{Ken75}, in which assumptions $(A_4)$--$(A_6)$ are also required\footnote{Since the author's work, it has been proved in \cite{AbrDelGuo15} that in the critical case and under $(A_1)$, Theorem \ref{thm2} holds true under the minimal assumptions of aperiodicity of the offspring distribution (implied by $(A_5)$) and the finiteness of its first order moment.}.
\item If $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ is critical, assumption $(A_4)$ is satisfied with $\mathbf{a}=\mathbf{1}$. This assumption is also automatically satisfied if $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ is supercritical. Indeed, as mentioned in Remark \ref{rem: subcritic}, the associated process with respect to $\mathbf{0}<\mathbf{q}<\mathbf{1}$ is subcritical and thus satisfies $\overline{\rho}< 1$. The fact that $\rho>1$ and the continuity of the Perron root as a function of the mean matrix coefficients then ensure the existence of some $\mathbf{q}\leqslant \mathbf{a} \leqslant \mathbf{1}$ satisfying $(A_4)$. Note however that such an $\mathbf{a}$ is not unique.
\item For any $\mathbf{a}>\mathbf{0}$, $p_i$ and $\overline{p}_i$ share by construction the same support. As a consequence, $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ satisfies $(A_5)$ if and only if $\left(\overline{\mathbf{X}}_k\right)_{k\geqslant 0} $ does. Moreover, a finite covariance matrix $\mathbf{\Sigma}^i$ is positive-definite if and only if there do not exist $c\in\mathbb{R}$ and $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{x}\cdot\mathbf{X}=c$ $\mathbb{P}_{\mathbf{e}_i}$-almost-surely, hence if and only if there do not exist $c\in\mathbb{R}$ and $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{x}\cdot\overline{\mathbf{X}}=c$ $\mathbb{P}_{\mathbf{e}_i}$-almost-surely. Consequently, provided it exists, $\mathbf{\Sigma}^i$ is positive-definite if and only if $\overline{\mathbf{\Sigma}}^i$ is positive-definite as well.
\end{itemize}
\end{remark}
We shall first show in Lemma \ref{lem1} that for any $\mathbf{a}$, the associated process $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$ with respect to $\mathbf{a}$, conditioned on $\{\overline{\mathbf{N}}=\mathbf{n}\}$, has the same probability distribution as the original process conditioned on $\{\mathbf{N}=\mathbf{n}\}$, for any $\mathbf{n}\in\mathbb{N}^d$. It is thus enough to prove Theorem \ref{thm2} in the critical case, which is done at the end of the article.
It follows from Proposition 1 in \cite{Good75} or directly from Theorem 1.2 in \cite{ChauLiu11} that the probability distribution of the total progeny in the multitype case is given for each $\mathbf{x}_0$, $\mathbf{n}\in\mathbb{N}^d$ with $\mathbf{n}>\mathbf{0}$, $\mathbf{n}\geqslant \mathbf{x}_0 $ by
\begin{equation}\label{totprogeny}
\mathbb{P}_{\mathbf{x}_0 }\left( \mathbf{N}=\mathbf{n}\right)=\frac{1}{n_1\ldots n_d}\sum_{\substack{\mathbf{k}^{1},\ldots,\mathbf{k}^{d}\in\mathbb{N}^d\\\mathbf{k}^{1}+\ldots+\mathbf{k}^{d}=\mathbf{n}-\mathbf{x}_0}}\det \begin{pmatrix}
n_1\mathbf{e}_1-\mathbf{k}^1\\\cdots \\ n_{d}\mathbf{e}_{d}-\mathbf{k}^{d}
\end{pmatrix} \prod_{i=1}^d p_i^{*n_i}\left( \mathbf{k}^i\right).\end{equation}
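For $d=1$, formula \eqref{totprogeny} reduces to the classical Otter--Dwass formula: the sum is then restricted to the single term $k=n-x_0$ and the determinant equals $n-k=x_0$, so that $\mathbb{P}_{x_0 }\left( N=n\right)=\frac{x_0}{n}\,p^{*n}\left( n-x_0\right)$.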
\begin{lemma}\label{lem1}Let $\left(\mathbf{X}_k \right)_{k\geqslant 0} $ be a multitype GW process. Then, for any $\mathbf{a}>\mathbf{0}$, the associated process $\big(\overline{\mathbf{X}}_k\big) _{k\geqslant 0}$ with respect to $\mathbf{a}$ satisfies for any $\mathbf{x}_0\in\mathbb{N}^d$, $k_1\leqslant\ldots\leqslant k_j\in\mathbb{N}$, $\mathbf{x}_1,\ldots,\mathbf{x}_j\in\mathbb{N}^d$ and $\mathbf{n}\in\mathbb{N}^d$,
\begin{equation*}
\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid\mathbf{N}=\mathbf{n}\right)=\mathbb{P}_{\mathbf{x}_0}\left( \overline{\mathbf{X}}_{k_1}=\mathbf{x}_1,\ldots, \overline{\mathbf{X}}_{k_j}=\mathbf{x}_j\mid\overline{\mathbf{N}}=\mathbf{n}\right).
\end{equation*}
\end{lemma} | 3,187 | 33,678 | en |
train | 0.24.7 | We shall first show in Lemma \ref{lem1} that for any $\mathbf{a}$, the associated process $\mathbf{i}g(\overline{\mathbf{X}}_k\mathbf{i}g)_{k\geqslant 0}$ with respect to $\mathbf{a}$, conditioned on $\{\overline{\mathbf{N}}=\mathbf{m}athbf{n}\}$, has the same probability distribution as the original process conditioned on $\{\mathbf{N}=\mathbf{m}athbf{n}\}$, for any $\mathbf{m}athbf{n}\in\mathbf{N}N$. It is thus enough to prove Theorem \ref{thm2} in the critical case, which is done at the end of the article.
It follows from Proposition 1 in \cite{Good75} or directly from Theorem 1.2 in \cite{ChauLiu11} that the probability distribution of the total progeny in the multitype case is given for each $\mathbf{m}athbf{x}_0$, $\mathbf{m}athbf{n}\in\mathbf{N}N$ with $\mathbf{m}athbf{n}>\mathbf{m}athbf{0}$, $\mathbf{m}athbf{n}\geqslant \mathbf{m}athbf{x}_0 $ by
\begin{equation}\label{totprogeny}
\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0 }\left( \mathbf{N}=\mathbf{m}athbf{n}\right)=\frac{1}{n_1\ldots n_d}\mathbf{m}athbf{s}um_{\mathbf{m}athbf{s}ubstack{\mathbf{k}^{1},\ldots,\mathbf{k}^{d}\in\mathbf{N}N\\\mathbf{k}^{1}+\ldots+\mathbf{k}^{d}=\mathbf{m}athbf{n}-\mathbf{m}athbf{x}_0}}\det \begin{pmatrix}
n_1\mathbf{m}athbf{e}_1-\mathbf{k}^1\\\cdots \\mathbf{m}athbf{n}_{d}\mathbf{m}athbf{e}_{d}-\mathbf{k}^{d}
\end{pmatrix} \prod_{i=1}^d p_i^{*n_i}\left( \mathbf{k}^i\right).\end{equation}
\begin{lemma}\label{lem1}Let $\left(\mathbf{X}_k \right)_{k\geqslant 0} $ be a multitype GW process. Then, for any $\mathbf{a}>\mathbf{m}athbf{0}$, the associated process $\mathbf{i}g(\overline{\mathbf{X}}_k\mathbf{i}g) _{k\geqslant 0}$ with respect to $\mathbf{a}$ satisfies for any $\mathbf{m}athbf{x}_0\in\mathbf{N}N$, $k_1\leqslant\ldots\leqslant k_j\in\mathbf{m}athbb{N}$, $\mathbf{m}athbf{x}_1,\ldots,\mathbf{m}athbf{x}_j\in\mathbf{N}N$ and $\mathbf{m}athbf{n}\in\mathbf{N}N$,
\begin{equation*}
\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{m}athbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{m}athbf{x}_j\mathbf{m}id\mathbf{N}=\mathbf{m}athbf{n}\right)=\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\left( \overline{\mathbf{X}}_{k_1}=\mathbf{m}athbf{x}_1,\ldots, \overline{\mathbf{X}}_{k_j}=\mathbf{m}athbf{x}_j\mathbf{m}id\overline{\mathbf{N}}=\mathbf{m}athbf{n}\right).
\end{equation*}
\end{lemma}
\begin{proof}From \eqref{off} and \eqref{totprogeny},
$\mathbb{P}_{\mathbf{k}}\left( \overline{\mathbf{N}}=\mathbf{n}\right)
=\frac{\mathbf{a}^{\mathbf{n}-\mathbf{k}}}{\mathbf{f}\left( \mathbf{a}\right)^{\mathbf{n}} } \mathbb{P}_{\mathbf{k}}\left( \mathbf{N}=\mathbf{n}\right)$.
For all $n\in\mathbb{N}$, we denote by $\mathbf{N}_n=\sum_{k=0}^n\mathbf{X}_k$ (resp. $\overline{\mathbf{N}}_n=\sum_{k=0}^n\overline{\mathbf{X}}_k$) the total progeny up to generation $n$ of $\left(\mathbf{X}_k \right)_{k\geqslant 0} $ (resp. $\big(\overline{\mathbf{X}}_k\big)_{k\geqslant 0}$). Then
\begin{align*}
\mathbb{P}_{\mathbf{x}_0}\left(\overline{\mathbf{X}}_{k_j}=\mathbf{x}_j,\overline{\mathbf{N}}_{k_j}=\mathbf{l}\right)&=\sum_{\substack{\mathbf{i}_1,\ldots,\mathbf{i}_{k_j-1}\in\mathbb{N}^d\\\mathbf{i}_1+\ldots+\mathbf{i}_{k_j-1}=\mathbf{l}-\mathbf{x}_0-\mathbf{x}_j}}\overline{P}_1\left( \mathbf{x}_0,\mathbf{i}_1\right) \ldots\overline{P}_1( \mathbf{i}_{k_j-1},\mathbf{x}_j) \\
&=\sum_{\substack{\mathbf{i}_1,\ldots,\mathbf{i}_{k_j-1}\in\mathbb{N}^d\\\mathbf{i}_1+\ldots+\mathbf{i}_{k_j-1}=\mathbf{l}-\mathbf{x}_0-\mathbf{x}_j}}\frac{\mathbf{a}^{\mathbf{i}_1}P_1\left( \mathbf{x}_0,\mathbf{i}_1\right) }{\mathbf{f}\left( \mathbf{a}\right) ^{\mathbf{x}_0}}\ldots\frac{\mathbf{a}^{\mathbf{x}_j}P_1( \mathbf{i}_{k_j-1},\mathbf{x}_j) }{\mathbf{f}\left( \mathbf{a}\right) ^{\mathbf{i}_{k_j-1}}}\\&=\frac{\mathbf{a}^{\mathbf{l}-\mathbf{x}_0}\mathbb{P}_{\mathbf{x}_0}\left(\mathbf{X}_{k_j}=\mathbf{x}_j,\mathbf{N}_{k_j}=\mathbf{l}\right)}{\mathbf{f}\left( \mathbf{a}\right) ^{\mathbf{l}-\mathbf{x}_j}} ,
\end{align*}
and similarly
\begin{equation*}
\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\Big( \overline{\mathbf{X}}_{k_1}=\mathbf{m}athbf{x}_1,\ldots, \overline{\mathbf{X}}_{k_j}=\mathbf{m}athbf{x}_j,\overline{\mathbf{N}}_{k_j}=\mathbf{m}athbf{l}\Big)\\=\frac{\mathbf{a}^{\mathbf{m}athbf{l}-\mathbf{m}athbf{x}_0}\mathbf{m}athbb{P}_{\mathbf{m}athbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{m}athbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{m}athbf{x}_j,\mathbf{N}_{k_j}=\mathbf{m}athbf{l}\right)}{\mathbf{m}athbf{f}\left( \mathbf{a}\right) ^{\mathbf{m}athbf{l}-\mathbf{m}athbf{x}_j}} .
\end{equation*}
Consequently, thanks to the Markov property,
\begin{align*}
&\mathbb{P}_{\mathbf{x}_0}\left( \overline{\mathbf{X}}_{k_1}=\mathbf{x}_1,\ldots, \overline{\mathbf{X}}_{k_j}=\mathbf{x}_j\mid\overline{\mathbf{N}}=\mathbf{n}\right)
\\
&\ \ =\sum_{\substack{\mathbf{l}\in\mathbb{N}^d\\\mathbf{l}\leqslant \mathbf{n}}}\frac{\mathbb{P}_{\mathbf{x}_0}\left( \overline{\mathbf{X}}_{k_1}=\mathbf{x}_1,\ldots, \overline{\mathbf{X}}_{k_j}=\mathbf{x}_j,\overline{\mathbf{N}}_{k_j}=\mathbf{l}\right)\mathbb{P}_{\mathbf{x}_j}\left(\overline{\mathbf{N}}=\mathbf{n}-\mathbf{l}+\mathbf{x}_j\right)}{\mathbb{P}_{\mathbf{x}_0}\left(\overline{\mathbf{N}}=\mathbf{n}\right)} \\
&\ \ =\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid\mathbf{N}=\mathbf{n}\right).
\end{align*}
\end{proof} | 2,586 | 33,678 | en |
train | 0.24.8 | Thanks to Lemma \ref{lem1}, it suffices to prove Theorem \ref{thm2} in the critical case. For this purpose, we prove the following convergence result for the total progeny of a critical GW process.
\begin{proposition}\label{prop: convergence}
Let $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ be a critical multitype GW process satisfying $(A_1)$, $(A_5)$ and $(A_6)$. Then there exists $C> 0$ such that for all $\mathbf{x}_0\in\mathbb{N}^d$,
\begin{equation}
\lim_{n\to+\infty}n^{\frac{d}{2}+1}\mathbb{P}_{\mathbf{x}_0 }\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)=C\mathbf{x}_0\cdot \mathbf{u}.
\end{equation}
\end{proposition}
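In the monotype case $d=1$, this recovers (up to the normalization of $\mathbf{v}$) the classical asymptotics $\mathbb{P}_{x_0}\left( N=n\right)\sim Cx_0n^{-\frac{3}{2}}$ for the total progeny of a critical process with an aperiodic offspring distribution of positive finite variance.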
\begin{proof}
From \eqref{totprogeny}, for each $n\geqslant \max_{i}v_i^{-1}$, $n\geqslant \max_{i} x_{0,i}v_i^{-1}$,
\begin{align*}
\mathbb{P}_{\mathbf{x}_0 }\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)&=\frac{1}{\prod_{i=1}^{d} \left\lfloor nv_i\right\rfloor }\mathbb{E}\left[ \det \hspace{-1mm}\begin{pmatrix}
\left\lfloor nv_1\right\rfloor \mathbf{e}_1-\mathbf{S}^1_{\left\lfloor nv_1\right\rfloor }\\\cdots\\\left\lfloor nv_d\right\rfloor \mathbf{e}_{d}-\mathbf{S}^{d}_{\left\lfloor nv_d\right\rfloor }
\end{pmatrix}\hspace{-1mm}\mathbf{1}_{\sum_{i=1}^d\mathbf{S}^i_{\left\lfloor nv_i\right\rfloor }=\left\lfloor n\mathbf{v}\right\rfloor -\mathbf{x}_0}\right]\\
&=\frac{1}{ \left\lfloor nv_d\right\rfloor }\mathbb{E}\left[\det \hspace{-1mm}\begin{pmatrix}\mathbf{e}_1-\mathbf{S}^1_{\left\lfloor nv_1\right\rfloor }/\left\lfloor nv_1\right\rfloor\\\cdots\\ \mathbf{e}_{d-1}-\mathbf{S}^{d-1}_{\left\lfloor nv_{d-1}\right\rfloor}/\left\lfloor nv_{d-1}\right\rfloor \\\mathbf{x}_0 \end{pmatrix}\hspace{-1mm}\mathbf{1}_{\sum_{i=1}^d\mathbf{S}^i_{\left\lfloor nv_i\right\rfloor }=\left\lfloor n\mathbf{v}\right\rfloor -\mathbf{x}_0}\right]\hspace{-1mm},
\end{align*}
where the family $(\mathbf{S}_{\left\lfloor nv_i\right\rfloor}^{i})_{i=1\ldots d}$ is independent and is such that for each $i$, $\mathbf{S}_{\left\lfloor nv_i\right\rfloor}^{i}$ denotes the sum of $\left\lfloor nv_i\right\rfloor$ independent and identically distributed random variables with probability distribution $p_i$.
Let us consider the event $A_n=\big\{ \sum_{i=1}^d\mathbf{S}^i_{\left\lfloor nv_i\right\rfloor }=\left\lfloor n\mathbf{v}\right\rfloor -\mathbf{x}_0\big\}$. We define the covariance matrix $\mathbf{\Sigma}=\sum_{i=1}^dv_i\mathbf{\Sigma}^{i}$, which, since $\mathbf{v}>\mathbf{0}$, is positive-definite under $(A_6)$.
Theorem 1.1 in \cite{Bent05} for nonidentically distributed independent variables ensures that $\sum_{i=1}^d ( \mathbf{S}^i_{\left\lfloor nv_i\right\rfloor }-\left\lfloor nv_i\right\rfloor \mathbf{m}^i)n^{-\frac{1}{2}}$ converges in distribution as $n\to+\infty$ to the multivariate normal distribution $\mathcal{N}_d\left(\mathbf{0},\mathbf{\Sigma} \right)$ with density $\phi$. Under $(A_5)$ we have
\[\limsup_n \frac{n}{\min_{j=1\ldots d}\sum_{i=1}^d \frac{n_i}{d}\sum_{\mathbf{k}\in\mathbb{N}^d}\min\left( p_i\left( \mathbf{k}\right) , p_i\left( \mathbf{k}+\mathbf{e}_j\right) \right) }<+\infty,\]
which by Theorem 2.1 in \cite{DavMcDon} ensures the following local limit theorem for nonidentically distributed independent variables:
\begin{equation}\label{local}
\lim_{n\to\infty}\sup_{\mathbf{k}\in\mathbb{N}^d}\left|n^{\frac{d}{2}}\mathbb{P}\left(\sum_{i=1}^d \mathbf{S}^i_{\left\lfloor nv_i\right\rfloor }=\mathbf{k}\right)-\phi\left(\frac{\mathbf{k}-\sum_{i=1}^d \left\lfloor nv_i\right\rfloor \mathbf{m}^i}{\sqrt{n}} \right) \right|=0.
\end{equation}
In the critical case, the left eigenvector $\mathbf{v}$ satisfies for each $j$, $v_j=\sum_{i=1}^d v_i m_{ij}$, hence $ 0\leqslant |\left\lfloor n v_j\right\rfloor -\sum_{i=1}^d\left\lfloor n v_i\right\rfloor m_{ij}|<\max(1,\sum_{i=1}^d m_{ij})$ and \eqref{local} implies in particular that
\begin{equation}\label{local2}
\lim_{n\to+\infty} n^{\frac{d}{2}}\mathbb{P}\left(A_n\right)=\phi\left( \mathbf{0}\right) =\frac{1 }{\left( 2\pi\right) ^{\frac{d}{2}}\left(\det \mathbf{\Sigma}\right) ^{\frac{1}{2}}}.
\end{equation}
Now, denoting by $\mathfrak{S}_d$ the symmetric group of order $d$ and by $\varepsilon(\sigma)$ the signature of a permutation $\sigma\in\mathfrak{S}_d$, we obtain by the Leibniz formula that
\begin{multline}\label{Leib} \left\lfloor nv_d\right\rfloor\mathbb{P}_{\mathbf{x}_0 }\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)= \sum_{\sigma\in \mathfrak{S}_d}\varepsilon\left( \sigma\right)x_{0,\sigma(d)} \mathbb{E}\Big[\prod_{i=1}^{d-1}\Big( \delta_{i,\sigma(i)}-\frac{S^i_{\left\lfloor nv_i\right\rfloor,\sigma(i) }}{\left\lfloor nv_i\right\rfloor}\Big)\mathbf{1}_{A_n}\Big]\\=\hspace{-4mm}\sum_{I\subset\{1,\ldots, d-1\}}\sum_{\sigma\in \mathfrak{S}_d}\hspace{-2mm}\varepsilon\left( \sigma\right)x_{0,\sigma(d)} \mathbb{E}\Big[\prod_{i\in I}\Big(\hspace{-1mm}-\frac{S^i_{\left\lfloor nv_i\right\rfloor,\sigma(i) }}{\left\lfloor nv_i\right\rfloor}+ m_{i,\sigma(i)}\Big)\mathbf{1}_{A_n}\Big]\hspace{-1mm}\prod_{i\notin I}( \delta_{i,\sigma(i)}-m_{i,\sigma(i) }).
\end{multline}
Let $\varepsilon>0$. Since on the event $A_n$ each $S_{\left\lfloor nv_i\right\rfloor,j}^{i}/\left\lfloor nv_i\right\rfloor$ is bounded, there exists some constant $A>0$ such that for each $i,j=1\ldots d$,
\begin{align*} \mathbb{E}\left(\left|\frac{S^i_{\left\lfloor nv_i\right\rfloor,j }}{\left\lfloor nv_i\right\rfloor}- m_{i,j}\right|\mathbf{1}_{A_n}\right)&\leqslant \varepsilon\mathbb{P}\left( A_n\right) +\frac{A}{\varepsilon^{d+1}}\mathbb{E}\left(\left|\frac{S^i_{\left\lfloor nv_i\right\rfloor,j}}{\left\lfloor nv_i\right\rfloor}- m_{i,j}\right|^{d+1}\right)\\&\leqslant \varepsilon\mathbb{P}\left( A_n\right) +\frac{AB}{\varepsilon^{d+1}\left\lfloor nv_i\right\rfloor^{\frac{d+1}{2}}}\mathbb{E}\left(\left|S^i_{1,j}- m_{i,j}\right|^{d+1}\right),
for some constant $B>0$. The second inequality, on the $(d+1)$-th central moment, can be found for instance in \cite{DharJog69}, Theorem 2. From \eqref{local2} it thus appears that for each non-empty subset $I\subset\{1,\ldots, d-1\}$,
\[\lim_{n\to+\infty}n^{\frac{d}{2}}\hspace{-2mm}\sum_{\sigma\in \mathfrak{S}_d}\hspace{-2mm}\varepsilon\left( \sigma\right)x_{0,\sigma(d)} \mathbb{E}\Big[\prod_{i\in I}\Big(-\frac{S^i_{\left\lfloor nv_i\right\rfloor,\sigma(i) }}{\left\lfloor nv_i\right\rfloor}+ m_{i,\sigma(i)}\Big)\mathbf{1}_{A_n}\Big]\hspace{-1mm}\prod_{i\notin I} \left( \delta_{i,\sigma(i)}-m_{i,\sigma(i) }\right)\hspace{-1mm}=0.\] Consequently, considering the remaining term in \eqref{Leib} corresponding to $I=\emptyset$, we obtain that
\begin{align*}&\lim_{n\to+\infty}n^{\frac{d}{2}+1}\mathbb{P}_{\mathbf{x}_0 }\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)\\&\ \ \ \ =\lim_{n\to+\infty}n^{\frac{d}{2}} \mathbb{P}\left(A_n\right)\frac{1}{v_d}\sum_{\sigma\in \mathfrak{S}_d}\varepsilon\left( \sigma\right)x_{0,\sigma(d)}\prod_{i=1}^{d-1} \left( \delta_{i,\sigma(i)}-m_{i,\sigma(i)}\right) \\&\ \ \ \ = \frac{1 }{v_d\left( 2\pi\right) ^{\frac{d}{2}}\left(\det \mathbf{\Sigma}\right) ^{\frac{1}{2}}}\det\begin{pmatrix}\mathbf{e}_1-\mathbf{m}^1\\\cdots\\ \mathbf{e}_{d-1}-\mathbf{m}^{d-1} \\\mathbf{x}_0 \end{pmatrix}=\frac{\mathbf{x}_0\cdot \mathbf{D}}{v_d\left( 2\pi\right) ^{\frac{d}{2}}\left(\det \mathbf{\Sigma}\right) ^{\frac{1}{2}}},
\end{align*}
where $\mathbf{D}=(D_1,\ldots,D_d)$ is such that $D_i$ is the $(d,i)$-th cofactor of the matrix $\mathbf{I}-\mathbf{M}$.
The criticality of $\left(\mathbf{X}_k\right)_{k\geqslant 0} $ implies that $\det\left( \mathbf{I}-\mathbf{M}\right)=( \mathbf{e}_d-\mathbf{m}^d )\cdot \mathbf{D}=0$. Moreover, for each $j=1\ldots d-1$,
$( \mathbf{e}_j-\mathbf{m}^j )\cdot \mathbf{D}$ corresponds to the determinant of $ \mathbf{I}-\mathbf{M}$ in which the $d$-th row has been replaced by the $j$-th row, and is consequently null. We have thus proven that for each $j=1\ldots d$, $( \mathbf{e}_j-\mathbf{m}^j )\cdot \mathbf{D}=0$, or equivalently that $\sum_{i=1}^d m_{ji} D_i=D_j.$ Hence $\mathbf{D}$ is a right eigenvector of $\mathbf{M}$ for the Perron root $1$, which implies the existence of some nonzero constant $c$ such that $\mathbf{D}=c\mathbf{u}$, leading to the desired result.
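For instance, when $d=2$ the cofactors are $D_1=m_{12}$ and $D_2=1-m_{11}$; one indeed checks that $( \mathbf{e}_1-\mathbf{m}^1)\cdot\mathbf{D}=(1-m_{11})m_{12}-m_{12}(1-m_{11})=0$ and $( \mathbf{e}_2-\mathbf{m}^2)\cdot\mathbf{D}=-m_{21}m_{12}+(1-m_{22})(1-m_{11})=\det\left( \mathbf{I}-\mathbf{M}\right)=0$ by criticality.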
\end{proof} | 4,047 | 33,678 | en |
train | 0.24.9 | \textit{Proof of Theorem \ref{thm2}}
Let us assume that $\left(\mathbf{X}_k\right)_{k\geqslant 0}$ is critical and satisfies $(A_1)$, $(A_5)$ and $(A_6)$. Let $\mathbf{m}athbf{x}_0\in\mathbf{N}N$, $k_1\leqslant\ldots\leqslant k_j\in\mathbf{m}athbb{N}$, and $\mathbf{m}athbf{x}_1,\ldots,\mathbf{m}athbf{x}_j\in\mathbf{N}N$ and let us show that
\begin{multline}\label{toprove}
\lim_{n\to+\infty}\mathbb{P}_{\mathbf{x}_0}\big( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid\mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\big)\\=\frac{\mathbf{x}_j\cdot\mathbf{u}}{\mathbf{x}_0\cdot\mathbf{u}}\mathbb{P}_{\mathbf{x}_0}\big( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\big).
\end{multline}
Let $\frac{3}{4}<\varepsilon<1$. The Markov property entails that
\begin{multline}\label{second term}
\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\mid\mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)
\\=\sum_{\substack{\mathbf{l}\in\mathbb{N}^d\\\mathbf{l}< \left\lfloor n^{\varepsilon}\mathbf{v} \right\rfloor}} \mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j,\mathbf{N}_{k_j}=\mathbf{l}\right)\frac{\mathbb{P}_{\mathbf{x}_j}\left(\mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor-\mathbf{l}+\mathbf{x}_j\right)}{\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)}\\\ \ +\sum_{\substack{\mathbf{l}\in\mathbb{N}^d\\ \left\lfloor n^{\varepsilon}\mathbf{v} \right\rfloor\leqslant\mathbf{l}\leqslant \left\lfloor n\mathbf{v} \right\rfloor}} \mathbb{P}_{\mathbf{x}_0}\left( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j,\mathbf{N}_{k_j}=\mathbf{l}\right)\frac{\mathbb{P}_{\mathbf{x}_j}\left(\mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor-\mathbf{l}+\mathbf{x}_j\right)}{\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)}.
\end{multline}
Note that \eqref{local} ensures that
\[\lim_{n} n^{\frac{d}{2}}\mathbb{P}\left(\sum_{i=1}^d \mathbf{S}^i_{\left\lfloor nv_i\right\rfloor -l_i+x_{j,i}}=\left\lfloor n\mathbf{v}\right\rfloor -\mathbf{l}\right)=\frac{1 }{\left( 2\pi\right) ^{\frac{d}{2}}\left(\det \mathbf{\Sigma}\right) ^{\frac{1}{2}}}, \]
uniformly in $\mathbf{l}< \left\lfloor n^{\varepsilon}\mathbf{v} \right\rfloor$, and that the proof of Proposition \ref{prop: convergence} can be used to show that
\[
\lim_{n\to+\infty}n^{\frac{d}{2}+1}\mathbb{P}_{\mathbf{x}_j }\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor-\mathbf{l}+\mathbf{x}_j\right)=\frac{C\mathbf{x}_j\cdot \mathbf{u} }{v_d\left( 2\pi\right) ^{\frac{d}{2}}\left(\det \mathbf{\Sigma}\right) ^{\frac{1}{2}}},
\]uniformly in $\mathbf{l}< \left\lfloor n^{\varepsilon}\mathbf{v} \right\rfloor$. Together with Proposition \ref{prop: convergence}, this shows that the first sum in \eqref{second term} converges to
\begin{multline}\label{hop}
\frac{\mathbf{x}_j\cdot\mathbf{u}}{\mathbf{x}_0\cdot\mathbf{u}}\sum_{\mathbf{l}\in\mathbb{N}^d} \mathbb{P}_{\mathbf{x}_0}\big( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j,\mathbf{N}_{k_j}=\mathbf{l}\big)\\=\frac{\mathbf{x}_j\cdot\mathbf{u}}{\mathbf{x}_0\cdot\mathbf{u}}\mathbb{P}_{\mathbf{x}_0}\big( \mathbf{X}_{k_1}=\mathbf{x}_1,\ldots, \mathbf{X}_{k_j}=\mathbf{x}_j\big)
\end{multline}
as $n\to+\infty$. The second sum in \eqref{second term} can be bounded by
\begin{align*}
\frac{\mathbb{P}_{\mathbf{x}_j}\left(\mathbf{N}_{k_j}\geqslant \left\lfloor n^{\varepsilon}\mathbf{v}\right\rfloor\right) }{\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)}&\leqslant
\frac{\mathbb{P}_{\mathbf{x}_j}\left(\|\mathbf{N}_{k_j}\|_1^{d+1}\geqslant n^{(d+1)\varepsilon}\|\mathbf{v}\|_1^{d+1}\right)}{\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)}\\
&\leqslant \frac{\mathbb{E}_{\mathbf{x}_j}\left(\|\mathbf{N}_{k_j}\|_1^{d+1}\right)}{\|\mathbf{v}\|_1^{d+1}n^{(d+1)\varepsilon}\mathbb{P}_{\mathbf{x}_0}\left( \mathbf{N}=\left\lfloor n\mathbf{v} \right\rfloor\right)}.
\end{align*}
Thanks to $(A_6)$, the moments of order $d+1$ of the finite sum $\mathbf{N}_{k_j}$ are finite, and since $(d+1)\varepsilon>\frac{d}{2}+1$, the right-hand side of the last inequality converges to $0$ as $n\to +\infty$ thanks to Proposition \ref{prop: convergence}. Together with \eqref{hop} in \eqref{second term}, this finally proves \eqref{toprove}.
\end{document} | 2,282 | 33,678 | en |
train | 0.25.0 | \begin{document}
\title{A nonexistence theorem for proper biharmonic maps into general Riemannian manifolds}
\begin{abstract}
In this note we prove a nonexistence result for proper biharmonic maps into general Riemannian manifolds
from complete non-compact Riemannian manifolds of dimension \(m=\dim M\geq 3\) with infinite volume that admit a Euclidean type Sobolev inequality,
assuming finiteness of $\|\tau(\phi)\|_{L^p(M)}$, $p>1$, and smallness of $\|d\phi\|_{L^m(M)}$.
This is an improvement of a recent result of the first named author, where he assumed $2<p<m$.
As applications we also get several nonexistence results for proper biharmonic submersions from complete non-compact manifolds into general Riemannian manifolds.
\end{abstract}
\section{Introduction}
Let $(M,g)$ be a Riemannian manifold and $(N,h)$ a Riemannian manifold without boundary. For a $W^{1,2}(M,N)$ map $\phi$, the energy density of $\phi$ is defined by
$$ e(\phi)=|d\phi|^2=\mathrm{Tr}_g(\phi^\ast h),$$
where $\phi^\ast h$ is the pullback of the metric tensor $h$. The energy functional of the map $\phi$ is defined as $$E(\phi)=\frac{1}{2}\int_Me(\phi)dv_g.$$
The Euler-Lagrange equation of $E(\phi)$ is $\tau(\phi)=\mathrm{Tr}_g\bar{\nabla} d\phi=0$ and $\tau(\phi)$ is called the \textbf{tension field} of $\phi$. A map is called a \textbf{harmonic map} if $\tau(\phi)=0$.
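For instance, when $N=\mathbb{R}^n$ with its flat metric, $\bar{\nabla}d\phi$ is just the Hessian of the component functions of $\phi$, so that $\tau(\phi)=\Delta\phi$ componentwise and harmonic maps generalize harmonic functions.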
The theory of harmonic maps has many important applications in various fields of differential geometry, including minimal surface theory and complex geometry; see \cite{SY} for a survey.
Much effort has been paid in the last several decades to generalize the notion of harmonic maps. In 1983, Eells and Lemaire (\cite{EL}, see also \cite{ES}) proposed to consider the bienergy functional
$$E_2(\phi)=\frac{1}{2}\int_M|\tau(\phi)|^2dv_g$$ for smooth maps between Riemannian manifolds. Stationary points of the bienergy functional are called \textbf{biharmonic maps}.
We see that harmonic maps are biharmonic; even more, they are minimizers of the bienergy functional. In 1986, Jiang \cite{Ji} derived the first and second variational formulas of the bienergy functional and initiated the study of biharmonic maps. The Euler-Lagrange equation of $E_2(\phi)$ is given by
$$\tau_2(\phi):=-\Delta^\phi\tau(\phi)-\sum_{i=1}^mR^N(\tau(\phi), d\phi(e_i))d\phi(e_i)=0,$$
where $\Delta^\phi:=\sum_{i=1}^m(\bar{\nabla}_{e_i}\bar{\nabla}_{e_i}-\bar{\nabla}_{\nabla_{e_i}e_i})$. Here, $\nabla$ is the Levi-Civita connection on $(M,g)$, $\bar{\nabla}$ is the induced connection on the pullback bundle $\phi^{\ast}TN$, and $R^N$ is the Riemannian curvature tensor on $N$.
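In particular, when $N=\mathbb{R}^n$ the curvature term vanishes and the biharmonic map equation reduces componentwise to the classical biharmonic equation $\Delta^2\phi=0$, which motivates the terminology.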
The first nonexistence result for biharmonic maps was obtained by Jiang \cite{Ji}. He proved that biharmonic maps from a compact, orientable Riemannian manifold into a Riemannian manifold of nonpositive curvature are harmonic.
Jiang's theorem is a direct application of the Weitzenb\"ock formula. If $\phi$ is biharmonic, then
\begin{eqnarray*}
-\frac{1}{2}\Delta|\tau(\phi)|^2&=&\langle-\Delta^\phi\tau(\phi), \tau(\phi)\rangle-|\bar{\nabla}\tau(\phi)|^2
\\&=&\mathrm{Tr}_g\langle R^N(\tau(\phi), d\phi)d\phi, \tau(\phi)\rangle-|\bar{\nabla}\tau(\phi)|^2
\\&\leq&0.
\end{eqnarray*}
The maximum principle implies that $|\tau(\phi)|^2$ is constant. Therefore $\bar{\nabla}\tau(\phi)=0$ and so by
$$\mathrm{div}\langle d\phi, \tau(\phi)\rangle=|\tau(\phi)|^2+\langle d\phi, \bar{\nabla}\tau(\phi)\rangle,$$
we deduce that $\mathrm{div}\langle d\phi, \tau(\phi)\rangle=|\tau(\phi)|^2$. Integrating this identity over the compact manifold $M$ and applying the divergence theorem, we get $\int_M|\tau(\phi)|^2dv_g=0$, hence $\tau(\phi)=0$. Generalizations of this result making use of similar ideas are given in \cite{On}.
If $M$ is non-compact, the maximum principle is no longer applicable. In this case, Baird et al. \cite{BFO} proved that biharmonic maps with finite bienergy
from a complete non-compact Riemannian manifold with nonnegative Ricci curvature into a nonpositively curved manifold are harmonic.
It is natural to ask whether we can abandon the curvature restriction on the domain manifold and weaken the integrability condition on the bienergy.
In this direction, Nakauchi et al. \cite{NUG} proved that biharmonic maps from a complete manifold to a nonpositively curved manifold are harmonic if (with $p=2$)
\\(i) $\int_M|d\phi|^2dv_g<\infty$ and $\int_M|\tau(\phi)|^pdv_g<\infty$, or
\\(ii) $Vol(M, g)=\infty$ and $\int_M|\tau(\phi)|^pdv_g<\infty.$
Later, Maeta \cite{Ma} generalized this result to $p\geq2$, and further generalizations were given by the second named author in \cite{Luo1}, \cite{Luo2}.
Recently, the first named author proved a nonexistence result for proper biharmonic maps from complete non-compact manifolds into general target manifolds \cite{Ba}, assuming only that the sectional curvatures of the target manifold are bounded from above.
Explicitly, he proved the following theorem.
\begin{thm}[Branding]\label{Bra}
Suppose that $(M,g)$ is a complete non-compact Riemannian manifold of dimension \(m=\dim M\geq 3\)
whose Ricci curvature is bounded from below and with positive injectivity radius.
Let $\phi: (M,g)\to (N,h)$ be a smooth biharmonic map, where \(N\) is another Riemannian manifold.
Assume that the sectional curvatures of $N$ satisfy $K^N\leq A,$ where $A$ is a positive constant.
If $$\int_M|\tau(\phi)|^pdv_g<\infty$$ and $$\int_M|d\phi|^mdv_g<\epsilon$$ for $2<p<m$ and $\epsilon>0$ (depending on $p,A$ and the geometry of \(M\)) sufficiently small, then $\phi$ must be harmonic.
\end{thm}
The central idea in the proof of Theorem \ref{Bra}
is the use of a \emph{Euclidean type Sobolev inequality} that allows one to control the curvature term
in the biharmonic map equation. However, in order for this inequality to hold, one has to make
assumptions on the domain manifold \(M\) different from those stated in Theorem \ref{Bra}, which we will correct below.
We say that a complete non-compact Riemannian manifold of infinite volume admits
a \emph{Euclidean type Sobolev inequality} if the following inequality holds (assuming \(m=\dim M\geq 3\))
\begin{align}
\label{sobolev-inequality}
(\int_M|u|^{2m/(m-2)}dv_g)^\frac{m-2}{m}\leq C_{sob}^M\int_M|\nabla u|^2dv_g
\end{align}
for all \(u\in W^{1,2}(M)\) with compact support,
where \(C_{sob}^M\) is a positive constant that depends on the geometry of \(M\).
Such an inequality holds in \(\mathbb{R}^m\), where it is well known as the \emph{Gagliardo-Nirenberg inequality}.
One way of ensuring that \eqref{sobolev-inequality} holds is the following:
If \((M,g)\) is a complete, non-compact Riemannian manifold of dimension \(m\)
with nonnegative Ricci curvature, and if for some point \(x\in M\)
\begin{align*}
\lim_{R\to\infty}\frac{vol_g(B_R(x))}{\omega_mR^m}>0
\end{align*}
holds, then \eqref{sobolev-inequality} holds true, see \cite{Sh}.
Here, \(\omega_m\) denotes the volume of the unit ball in \(\mathbb{R}^m\).
For further geometric conditions ensuring that \eqref{sobolev-inequality} holds
we refer to \cite[Section 3.7]{He}.
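For example, in \(\mathbb{R}^m\) we have \(vol_g(B_R(x))=\omega_mR^m\) for every \(x\), so the above limit equals \(1\) and the criterion recovers the validity of \eqref{sobolev-inequality} in the Euclidean case.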
In this article we will correct the assumptions that are needed for Theorem \ref{Bra} to hold
and extend it to the case $p=2$, which is a more natural integrability condition.
Motivated by these considerations, we can actually prove the following result:
\begin{thm}\label{main1}
Suppose that $(M,g)$ is a complete, connected non-compact Riemannian manifold of dimension \(m=\dim M\geq 3\) with infinite volume that admits
a Euclidean type Sobolev inequality of the form \eqref{sobolev-inequality}.
Moreover, suppose that \((N,h)\) is another Riemannian manifold
whose sectional curvatures satisfy $K^N\leq A,$ where $A$ is a positive constant.
Let $\phi: (M,g)\to (N,h)$ be a smooth biharmonic map.
If $$\int_M|\tau(\phi)|^pdv_g<\infty$$ and $$\int_M|d\phi|^mdv_g<\epsilon$$
for $p>1$ and $\epsilon>0$ (depending on $p,A$ and the geometry of \(M\)) sufficiently small, then $\phi$ must be harmonic.
\end{thm}
Similar ideas have been used to derive Liouville type results for \(p\)-harmonic maps in \cite{NT}, see also \cite{zc} for a more general result.
In the proof we choose a test function of the form $(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\tau(\phi), (p>1, \delta>0)$ to avoid problems that may be caused by the zero points of $\tau(\phi)$.
When we take the limit $\delta\to 0$, we also need to be careful about the set of zero points of $\tau(\phi)$, which requires a delicate analysis.
For details, see the proof in section 2.
Moreover, we can get the following Liouville type result.
\begin{thm}\label{main2}
Suppose that $(M,g)$ is a complete, connected non-compact Riemannian manifold of dimension \(m=\dim M\geq 3\) with nonnegative Ricci curvature that admits
a Euclidean type Sobolev inequality of the form \eqref{sobolev-inequality}.
Moreover, suppose that \((N,h)\) is another Riemannian manifold
whose sectional curvatures satisfy $K^N\leq A,$ where $A$ is a positive constant.
Let $\phi: (M,g)\to (N,h)$ be a smooth biharmonic map.
If $$\int_M|\tau(\phi)|^pdv_g<\infty$$ and $$\int_M|d\phi|^mdv_g<\epsilon$$
for $p>1$ and $\epsilon>0$ (depending on $p,A$ and the geometry of \(M\)) sufficiently small, then $\phi$ is a constant map.
\end{thm}
Note that due to a classical result of Calabi and Yau \cite[Theorem 7]{Yau} a complete non-compact Riemannian manifold
with nonnegative Ricci curvature has infinite volume.
\begin{rem}
Due to Theorem \ref{main1} we only need to prove that harmonic maps satisfying the assumption of Theorem \ref{main2} are constant maps.
Such a result was proven in \cite{NT} and thus Theorem \ref{main2} is a corollary of Theorem 1.5 in \cite{NT}. On the other hand, Theorem \ref{main2} generalizes related Liouville type results for harmonic maps in \cite{NT}.
\end{rem}
\quad \\
\textbf{Organization:} Theorem \ref{main1} is proved in section 2.
In section 3 we apply Theorem \ref{main1} to get several nonexistence results for proper biharmonic submersions.
\section{Proof of the main result}
In this section we will prove Theorem \ref{main1}.
Assume that $x_0\in M$. We choose a cutoff function $0\leq\eta\leq1$ on $M$ that satisfies
\begin{equation}\label{flow1}
\left\{\begin{array}{rcl}
\eta(x)&=&1, \quad \forall \ x\in B_R(x_0), \\
\eta(x)&=&0,\quad \forall \ x\in M\setminus B_{2R}(x_0),\\
|\nabla\eta(x)|&\leq& \frac{C}{R}, \quad \forall \ x \in M.
\end{array}\right.
\end{equation}
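Such a cutoff function exists on every complete Riemannian manifold: one may take $\eta(x)=\psi(d(x,x_0)/R)$, where $d(\cdot,x_0)$ is the distance function, which is Lipschitz with $|\nabla d|\leq 1$ almost everywhere, and $\psi:[0,\infty)\to[0,1]$ is smooth with $\psi=1$ on $[0,1]$, $\psi=0$ on $[2,\infty)$ and $|\psi'|\leq C$.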
\begin{lem}\label{lem1}
Let $\phi:(M,g)\to (N,h)$ be a smooth biharmonic map and assume that the sectional curvatures of $N$ satisfy $K^N\leq A.$
Let $\delta$ be a positive constant. Then the following inequalities hold.
\\(1) If $1<p<2$, we have
\begin{eqnarray}\label{ine1}
&&(1-\frac{p-1}{2})\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g \nonumber
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g\nonumber
\\&-&(p-2)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g;
\end{eqnarray}
\\(2) If $p\geq2$, we have
\begin{eqnarray}\label{ine1'}
&&\frac{1}{2}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g \nonumber
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g.
\end{eqnarray}
\end{lem}
\proof Multiplying the biharmonic map equation by $\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\tau(\phi)$ we get
$$\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langle\Delta^\phi\tau(\phi), \tau(\phi)\rangle=-\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\sum_{i=1}^mR^N(\tau(\phi),d\phi(e_i),\tau(\phi),d\phi(e_i)).$$
train | 0.25.1 | Similar ideas have been used to derive Liouville type results for \(p\)-harmonic maps in \cite{NT}, see also \cite{zc} for a more general result.
In the proof we choose a test function of the form $(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\tau(\phi), (p>1, \delta>0)$ to avoid problems that may be caused by the zero points of $\tau(\phi)$.
When we take the limit $\delta\to 0$, we also need to be careful about the set of zero points of $\tau(\phi)$, and a delicate analysis is given.
For details please see the proof in section 2.
Moreover, we can get the following Liouville type result.
\begin{thm}\label{main2}
Suppose that $(M,g)$ is a complete, connected non-compact Riemannian manifold of \(m=\dim M\geq 3\) with nonnegative Ricci curvature that admits
an Euclidean type Sobolev inequality of the form \eqref{sobolev-inequality}.
Moreover, suppose that \((N,h)\) is another Riemannian manifold
whose sectional curvatures satisfy $K^N\leq A,$ where $A$ is a positive constant.
Let $\phi: (M,g)\to (N,h)$ be a smooth biharmonic map.
If $$\int_M|\tau(\phi)|^pdv_g<\infty$$ and $$\int_M|d\phi|^mdv_g<\epsilon$$
for $p>1$ and $\epsilon>0$ (depending on $p,A$ and the geometry of \(M\)) sufficiently small, then $\phi$ is a constant map.
\end{thm}
Note that due to a classical result of Calabi and Yau \cite[Theorem 7]{Yau} a complete non-compact Riemannian manifold
with nonnegative Ricci curvature has infinite volume.
\begin{rem}
Due to Theorem \ref{main1} we only need to prove that harmonic maps satisfying the assumption of Theorem \ref{main2} are constant maps.
Such a result was proven in \cite{NT} and thus Theorem \ref{main2} is a corollary of Theorem 1.5 in \cite{NT}. Conversely, Theorem \ref{main2} generalizes related Liouville type results for harmonic maps in \cite{NT}.
\end{rem}
\quad \\
\textbf{Organization:} Theorem \ref{main1} is proved in section 2.
In section 3 we apply Theorem \ref{main1} to get several nonexistence results for proper biharmonic submersions.
\,\,\,\,ection{Proof of the main result}
In this section we will prove Theorem \ref{main1}.
Assume that $x_0\in M$. We choose a cutoff function $0\leq\eta\leq1$ on $M$ that satisfies
\begin{equation}\label{flow1}
\left\{\begin{array}{rcl}
\eta(x)&=&1, \quad \forall \ x\in B_R(x_0), \\
\eta(x)&=&0,\quad \forall \ x\in M\,\,\,\,etminus B_{2R}(x_0),\\
|\nabla\eta(x)|&\leq& \frac{C}{R}, \quad \forall \ x \in M.
\end{array}\right.
\end{equation}
\begin{lem}\label{lem1}
Let $\phi:(M,g)\to (N,h)$ be a smooth biharmonic map and assume that the sectional curvatures of $N$ satisfy $K^N\leq A.$
Let $\delta$ be a positive constant. Then the following inequalities hold.
\\(1) If $1<p<2$, we have
\begin{eqnarray}\label{ine1}
&&(1-\frac{p-1}{2})\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g \nonumber
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g\nonumber
\\&-&(p-2)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g;
\end{eqnarray}
\\(2) If $p\geq2$, we have
\begin{eqnarray}\label{ine1'}
&&\frac{1}{2}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g \nonumber
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g.
\end{eqnarray}
\end{lem}
\proof Multiplying the biharmonic map equation by $\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\tau(\phi)$ we get
$$\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langlegle\Delta^\phi\tau(\phi), \tau(\phi)\ranglegle=-\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\,\,\,\,um_{i=1}^mR^N(\tau(\phi),d\phi(e_i),\tau(\phi),d\phi(e_i)).$$
Integrating over $M$ and using integration by parts we get
\begin{eqnarray}\label{inem1}
&&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langle\Delta^\phi\tau(\phi), \tau(\phi)\rangle dv_g\nonumber
\\&=&-2\int_M(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle\eta\nabla\eta dv_g\nonumber
\\&-&(p-2)\int_M\eta^2|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}dv_g\nonumber
\\&-&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}|\bar{\nabla}\tau(\phi)|^2dv_g\nonumber
\\&\leq&-2\int_M(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle\eta\nabla\eta dv_g
\\&-&(p-2)\int_M\eta^2|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}dv_g\nonumber
\\&-&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g.\nonumber
\end{eqnarray}
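To handle the first term on the right-hand side, note that, writing $(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}=(|\tau(\phi)|^2+\delta)^\frac{p-4}{4}(|\tau(\phi)|^2+\delta)^\frac{p}{4}$, Young's inequality $2ab\leq\varepsilon a^2+\varepsilon^{-1}b^2$ with $\varepsilon=\frac{p-1}{2}$ gives the pointwise bound
$$2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|\eta|\nabla\eta|\leq\frac{p-1}{2}\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|^2+\frac{2}{p-1}(|\tau(\phi)|^2+\delta)^\frac{p}{2}|\nabla\eta|^2,$$
which explains the coefficients appearing in the estimate below; for $p\geq2$ the same inequality will be used with $\varepsilon=\frac{1}{2}$.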
Therefore when $1<p<2$ we have
\begin{eqnarray*}
&&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langle\Delta^\phi\tau(\phi), \tau(\phi)\rangle dv_g
\\&\leq&(\frac{p-1}{2}-1)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&+&\frac{2}{p-1}\int_M(|\tau(\phi)|^2+\delta)^\frac{p}{2}|\nabla\eta|^2dv_g
\\&-&(p-2)\int_M\eta^2|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}dv_g
\\&\leq& (\frac{p-1}{2}-1)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&+&\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g
\\&-&(p-2)\int_M\eta^2|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}dv_g
\\&\leq& (\frac{p-1}{2}-1)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&+&\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g
\\&-&(p-2)\int_M\eta^2|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}dv_g,
\end{eqnarray*}
where in the last inequality we used $1<p<2$ and
$$|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|^2\leq|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2.$$
Therefore we find
\begin{eqnarray*}
&&(1-\frac{p-1}{2})\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&\leq& \int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\sum_{i=1}^mR^N(\tau(\phi),d\phi(e_i),\tau(\phi),d\phi(e_i))dv_g
\\&+&\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g
\\&-&(p-2)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}|\tau(\phi)|^2|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g
\\&-&(p-2)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g
\\&-&(p-2)\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g,
\end{eqnarray*}
which proves the first claim.\\
When $p\geq2$, the term in \eqref{inem1} involving the factor $(p-2)$ is nonpositive and may be discarded, so \eqref{inem1} gives
\begin{eqnarray*}
&&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langlegle\Delta^\phi\tau(\phi), \tau(\phi)\ranglegle dv_g
\\&\leq&-2\int_M(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle\eta\nabla\eta dv_g
\\&-&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&\leq&-\frac{1}{2}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g+2\int_M(|\tau(\phi)|^2+\delta)
^\frac{p}{2}|\nabla\eta|^2dv_g.
\end{eqnarray*}
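Here the cutoff term was absorbed as before, now applying Cauchy--Schwarz and Young's inequality in the form $2ab\leq\frac{1}{2}a^2+2b^2$:
$$2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|\eta|\nabla\eta|\leq\frac{1}{2}\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2+2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|\nabla\eta|^2.$$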
Therefore we have
\begin{eqnarray*}
&&\frac{1}{2}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&\leq&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\sum_{i=1}^mR^N(\tau(\phi),d\phi(e_i),\tau(\phi),d\phi(e_i))dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}|\tau(\phi)|^2|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g.
^\frac{p}{2}dv_g.
\end{eqnarray*}
This completes the proof of Lemma \ref{lem1}.
$\Box$\\
train | 0.25.2 | When $p\geq2$ equation \eqref{inem1} gives
\begin{eqnarray*}
&&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langlegle\Delta^\phi\tau(\phi), \tau(\phi)\ranglegle dv_g
\\&\leq&-2\int_M(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\langlegle\bar{\nabla}\tau(\phi), \tau(\phi)\ranglegle\eta\nabla\eta dv_g
\\&-&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&\leq&-\frac{1}{2}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g+2\int_M(|\tau(\phi)|^2+\delta)
^\frac{p}{2}|\nabla\eta|^2dv_g.
\end{eqnarray*}
Therefore we have
\begin{eqnarray*}
&&\frac{1}{2}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\\&\leq&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}\,\,\,\,um_{i=1}^mR^N(\tau(\phi),d\phi(e_i),\tau(\phi),d\phi(e_i))dv_g+\frac{C}{R^2}\int_{B_{2R(x_0)}}(|\tau(\phi)|^2+\delta)
^\frac{p}{2}dv_g
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-2}{2}|\tau(\phi)|^2|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R(x_0)}}(|\tau(\phi)|^2+\delta)
^\frac{p}{2}dv_g
\\&\leq& A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R(x_0)}}(|\tau(\phi)|^2+\delta)
^\frac{p}{2}dv_g.
\end{eqnarray*}
This completes the proof of Lemma \ref{lem1}.
$
\Box$\\
In the following we will estimate the term $$A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g.$$
\begin{lem}\label{lem2}
Assume that $(M,g)$ satisfies the assumptions of Theorem \ref{main1}. Then the following inequality holds
\begin{eqnarray}\label{ine2}
&&\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g\nonumber
\\&\leq & C(\int_M|d\phi|^mdv_g)^{\frac{2}{m}}\Big(\frac{1}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g\nonumber
\\&&+\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g\Big),
\end{eqnarray}
where $C$ is a constant depending only on $p$ and the Sobolev constant $C^M_{sob}$ of $M$.
\end{lem}
\proof Set $f=(|\tau(\phi)|^2+\delta)^\frac{p}{4}$, then we have
$$\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g=\int_M\eta^2f^2|d\phi|^2dv_g.$$ Then by H\"older's inequality (with exponents $\frac{m}{m-2}$ and $\frac{m}{2}$) we get
$$\int_M\eta^2f^2|d\phi|^2dv_g\leq (\int_M(\eta f)^\frac{2m}{m-2}dv_g)^\frac{m-2}{m}(\int_M|d\phi|^mdv_g)^\frac{2}{m}.$$
Applying \eqref{sobolev-inequality} to $u=\eta f$ we get
$$(\int_M(\eta f)^\frac{2m}{m-2}dv_g)^\frac{m-2}{m}\leq C^M_{sob}\int_M|d(\eta f)|^2dv_g,$$
which leads to
\begin{eqnarray}\label{ine7}
\int_M\eta^2f^2|d\phi|^2dv_g\leq 2C^M_{sob}(\int_M|d\phi|^mdv_g)^{\frac{2}{m}}(\int_M|d\eta|^2f^2dv_g+\int_M\eta^2|df|^2dv_g).
\end{eqnarray}
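The factor $2$ in \eqref{ine7} comes from the elementary inequality $(a+b)^2\leq 2a^2+2b^2$, which gives
$$|d(\eta f)|^2\leq 2(f^2|d\eta|^2+\eta^2|df|^2).$$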
Note that $f=(|\tau(\phi)|^2+\delta)^\frac{p}{4}$ and $$|df|^2=\frac{p^2}{4}(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\langle\bar{\nabla}\tau(\phi), \tau(\phi)\rangle|^2\leq \frac{p^2}{4}(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2.$$
Inserting these bounds into \eqref{ine7} and using $|\nabla\eta|\leq\frac{C}{R}$ completes the proof of Lemma \ref{lem2}.
$\Box$\\
When $1<p<2$, combining Lemmas \ref{lem1} and \ref{lem2} and choosing $\epsilon$ sufficiently small so that $AC\epsilon^\frac{2}{m}\leq\frac{p-1}{4}$, we obtain
\begin{eqnarray}\label{ine3}
\frac{p-1}{4}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\leq\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g,
\end{eqnarray}
where $C$ is a constant depending on $p,A$ and the geometry of $M$.
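In more detail: since $(1-\frac{p-1}{2})+(p-2)=\frac{p-1}{2}$, inequality \eqref{ine1} can be rearranged as
$$\frac{p-1}{2}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g\leq A\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p}{2}|d\phi|^2dv_g+\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g.$$
Estimating the first term on the right-hand side by \eqref{ine2} and $\int_M|d\phi|^mdv_g<\epsilon$, its contribution is at most $AC\epsilon^\frac{2}{m}\leq\frac{p-1}{4}$ times the bracket in \eqref{ine2}, and the resulting term $\frac{p-1}{4}\int_M\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g$ can be absorbed into the left-hand side, yielding \eqref{ine3}.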
Now, set $M_1:=\{x\in M\ |\ \tau(\phi)(x)=0\}$ and $M_2=M\setminus M_1.$
If $M_2$ is empty, we are done. Hence we assume that $M_2$ is nonempty and derive a contradiction below.
Note that since $\phi$ is smooth, $M_2$ is an open set.
From \eqref{ine3} we have
\begin{eqnarray}
\frac{p-1}{4}\int_{M_2}\eta^2(|\tau(\phi)|^2+\delta)^\frac{p-4}{2}|\bar{\nabla}\tau(\phi)|^2|\tau(\phi)|^2dv_g
\leq\frac{C}{R^2}\int_{B_{2R}(x_0)}(|\tau(\phi)|^2+\delta)^\frac{p}{2}dv_g.
\end{eqnarray}
Letting $\delta\to 0$ (by monotone convergence on the left-hand side and dominated convergence on the right-hand side) we get
$$\frac{p-1}{4}\int_{M_2}\eta^2|\tau(\phi)|^{p-2}|\bar{\nabla}\tau(\phi)|^2dv_g\leq\frac{C}{R^2}\int_{B_{2R}(x_0)}|\tau(\phi)|^pdv_g\leq
\frac{C}{R^2}\int_{M}|\tau(\phi)|^pdv_g.$$
Letting $R\to \infty$ we get
$$\frac{p-1}{4}\int_{M_2}|\tau(\phi)|^{p-2}|\bar{\nabla}\tau(\phi)|^2dv_g=0.$$
When $p\geq2$, a similar argument shows that
$$\frac{1}{4}\int_{M_2}|\tau(\phi)|^{p-2}|\bar{\nabla}\tau(\phi)|^2dv_g=0.$$
Therefore $\bar{\nabla}\tau(\phi)=0$ everywhere in $M_2$; in particular $\nabla|\tau(\phi)|^2=2\langle\bar{\nabla}\tau(\phi),\tau(\phi)\rangle=0$, so $|\tau(\phi)|$ is locally constant on $M_2$. Hence $M_2$ is an \textbf{open} and \textbf{closed} nonempty set, and thus $M_2=M$ (as we assume that $M$ is a connected manifold)
and $|\tau(\phi)|\equiv c$ for some constant $c\neq 0$. Consequently $Vol(M)<\infty$, since $c^pVol(M)=\int_Mc^pdv_g=\int_M|\tau(\phi)|^pdv_g<\infty$.
In the following we will need Gaffney's theorem \cite{Ga}, stated below:
\begin{thm}[Gaffney]
Let $(M, g)$ be a complete Riemannian manifold. If a $C^1$ $1$-form $\omega$ satisfies
$\int_M|\omega|dv_g<\infty$ and $\int_M|\delta\omega| dv_g<\infty$, or equivalently, if the $C^1$ vector field $X$ defined by
$\omega(Y) = \langle X, Y \rangle$ $(\forall Y \in TM)$ satisfies $\int_M|X|dv_g<\infty$ and $\int_M|{\rm div}\, X|dv_g<\infty$, then $$\int_M\delta\omega\, dv_g=\int_M{\rm div}\, X\,dv_g=0.$$
\end{thm}
Define a $1$-form on $M$ by
$$\omega(X):=\langle d\phi(X),\tau(\phi)\rangle,~(X\in TM).$$
Then
\begin{eqnarray*}
\int_M|\omega|dv_g&=&\int_M(\sum_{i=1}^m|\omega(e_i)|^2)^\frac{1}{2}dv_g
\\&\leq&\int_M|\tau(\phi)||d\phi|dv_g
\\&\leq&c Vol(M)^{1-\frac{1}{m}}(\int_M|d\phi|^mdv_g)^\frac{1}{m}
\\&<&\infty.
\end{eqnarray*}
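In the first inequality above we used Cauchy--Schwarz, $\sum_{i=1}^m|\omega(e_i)|^2\leq|\tau(\phi)|^2|d\phi|^2$; in the second we used $|\tau(\phi)|\equiv c$ together with H\"older's inequality:
$$\int_M|d\phi|dv_g\leq Vol(M)^{1-\frac{1}{m}}\Big(\int_M|d\phi|^mdv_g\Big)^\frac{1}{m},$$
which is finite since $Vol(M)<\infty$ and $\int_M|d\phi|^mdv_g<\epsilon$.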
In addition, we calculate $-\delta\omega=\sum_{i=1}^m(\nabla_{e_i}\omega)(e_i)$:
\begin{eqnarray*}
-\delta\omega&=&\sum_{i=1}^m\{\nabla_{e_i}(\omega(e_i))-\omega(\nabla_{e_i}e_i)\}
\\&=&\sum_{i=1}^m\{\langle\bar{\nabla}_{e_i}d\phi(e_i),\tau(\phi)\rangle
-\langle d\phi(\nabla_{e_i}e_i),\tau(\phi)\rangle\}
\\&=&\sum_{i=1}^m\langle \bar{\nabla}_{e_i}d\phi(e_i)-d\phi(\nabla_{e_i}e_i),\tau(\phi)\rangle
\\&=&|\tau(\phi)|^2,
\end{eqnarray*}
where in the second equality we used $\bar{\nabla}\tau(\phi)=0$. Therefore $$\int_M|\delta\omega|dv_g=c^2Vol(M)<\infty.$$
Now by Gaffney's theorem and the above equality we have that
$$0=\int_M(-\delta\omega)dv_g=\int_M|\tau(\phi)|^2dv_g=c^2Vol(M),$$
which implies that $c=0$, a contradiction. Therefore we must have $M_1=M$, i.e. $\phi$ is a harmonic map. This completes the proof of Theorem \ref{main1}.
$\Box$\\
\section{Applications to biharmonic submersions}
In this section we give some applications of our result to biharmonic submersions.
First we recall some definitions \cite{BW}.
Assume that $\phi: (M, g)\to (N, h)$ is a smooth map between Riemannian manifolds and $x\in M$. Then $\phi$ is called {\bf horizontally weakly conformal at} $x$ if either
(i) $d\phi_x=0$, or
(ii) $d\phi_x$ maps the horizontal space $\mathcal{H}_x=({\rm Ker}\ d\phi_x)^\bot$ conformally \textbf{onto} $T_{\phi(x)}N$, i.e.
$$h(d\phi_x(X), d\phi_x(Y))=\lambda^2 g(X, Y), (X, Y\in \mathcal{H}_x),$$
for some $\lambda=\lambda(x)>0,$ called the {\bf dilation} of $\phi$ at $x$.
A map $\phi$ is called {\bf horizontally weakly conformal} or {\bf semiconformal} on $M$ if it is horizontally weakly conformal at every point of $M$. If, in addition, $\phi$ has no critical points, then we call it a {\bf horizontally conformal submersion}; in this case the dilation $\lambda:M \to (0,\infty)$ is a smooth function. Note that if $\phi: (M, g)\to (N, h)$ is a horizontally weakly conformal map and $\dim M<\dim N$, then $\phi$ is a constant map.
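To illustrate the definition, we mention a standard example (not needed in the sequel): the radial projection $\pi:\mathbb{R}^m\setminus\{0\}\to S^{m-1}$, $\pi(x)=\frac{x}{|x|}$, is a horizontally conformal submersion whose fibres are the open radial rays and whose dilation is $\lambda(x)=\frac{1}{|x|}$.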
If for every harmonic function $f: V\to \mathbb{R}$ defined on an open subset $V$ of $N$ with $\phi^{-1}(V)$ nonempty, the composition $f\circ\phi$ is harmonic on $\phi^{-1}(V)$, then $\phi$ is called a {\bf harmonic morphism}. Harmonic morphisms are characterized as follows (cf. \cite{Fu, Is}).
\begin{thm}[\cite{Fu, Is}]\label{thm4}
A smooth map $\phi: (M, g)\to (N, h)$ between Riemannian manifolds is a harmonic morphism if and only if $\phi$ is both harmonic and horizontally weakly conformal.
\end{thm}
When $\phi:(M^m, g)\to (N^n, h),(m>n\geq2)$ is a horizontally conformal submersion, the tension field is given by
\begin{eqnarray}\label{eq5}
\tau(\phi)=\frac{n-2}{2}\lambda^2d\phi({\rm grad}_\mathcal{H}(\frac{1}{\lambda^2}))
-(m-n)d\phi(\hat{H}),
\end{eqnarray}
where ${\rm grad}_\mathcal{H}(\frac{1}{\lambda^2})$ is the horizontal component of ${\rm grad}(\frac{1}{\lambda^2})$, and $\hat{H}$ is the {\bf mean curvature} of the fibres given by the trace
$$\hat{H}=\frac{1}{m-n}\sum_{i=n+1}^m\mathcal{H}(\nabla_{e_i}e_i).$$
Here, $\{e_i, i=1,...,m\}$ is a local orthonormal frame field on $M$ such that $\{e_{i}, i=1,...,n\}$ belongs to $\mathcal{H}_x$ and $\{e_{j}, j=n+1,...,m \}$ belongs to $\mathcal{V}_x$ at each point $x\in M$, where $T_xM=\mathcal{H}_x\oplus \mathcal{V}_x$. | 3,889 | 14,169 | en |
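As a consistency check of \eqref{eq5}, consider the standard special case $\lambda\equiv1$, i.e. $\phi$ a Riemannian submersion: the first term vanishes and \eqref{eq5} reduces to
$$\tau(\phi)=-(m-n)d\phi(\hat{H}),$$
so that $\phi$ is harmonic if and only if its fibres are minimal.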