}
We point out that although the assumption (3.19) on $u$, (equivalent to (3.13) on $\omega$), is in fact not used in Proposition 3.7, it is required in Lemma 3.8 to control the $L^1$ norm of $u$.
We now apply these results to the trace equation (3.2), so that $u = -\omega $ and $f = \frac{1}{4}\alpha|r|^{2}.$ Using the fact that $f \geq $ 0, we obtain:
\begin{corollary} \label{c 3.9.}
On (B, g) as above,
\begin{equation} \label{e3.33}
\sup_{\bar{B}}|\omega| \leq c(d,\varepsilon^{-1})\cdot \int_{B}\alpha|r|^{2} + {\tfrac{1}{2}} |\omega|_{av}(\varepsilon ).
\end{equation}
\end{corollary}
{\bf Proof:}
By Lemma 3.8, it suffices to prove there is a bound
$$\sup_{\bar{B}}|\omega| \leq c||\omega||_{L^{2}(B)}. $$
Since $f$ is non-negative, i.e. $\Delta (-\omega ) \geq $ 0, so that $-\omega $ is subharmonic, this estimate is an application of the De Giorgi-Nash-Moser sup estimate for subsolutions of the Laplacian, c.f. [GT, Thm 8.17].
{\qed
}
We are now in a position to deal with the relative sizes of $\alpha $ and $\omega .$ First, we renormalize $\omega $ to unit size near the center point $x$. Observe that for any $\varepsilon > $ 0,
\begin{equation} \label{e3.34}
|\omega|_{av}(\varepsilon ) > 0,
\end{equation}
since if $|\omega|_{av}(\varepsilon ) =$ 0, then by the minimum principle applied to the trace equation (3.2), $\omega \equiv $ 0 in $B_{\varepsilon},$ which has been ruled out above. For the remainder of the proof, we fix an $\varepsilon > $ 0 small and set
\begin{equation} \label{e3.35}
\bar{\omega} = \frac{\omega}{|\omega|_{av}(\varepsilon )}.
\end{equation}
Similarly, divide the equations (3.1)-(3.2) by $|\omega|_{av}(\varepsilon );$ this has the effect of changing $\alpha $ to
\begin{equation} \label{e3.36}
\bar{\alpha} = \frac{\alpha}{|\omega|_{av}(\varepsilon )}.
\end{equation}
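Explicitly, since $|\omega|_{av}(\varepsilon )$ is a constant, the renormalized pair satisfies the same system as (3.1)-(3.2), i.e.
$$\bar{\alpha}\nabla{\cal R}^{2} + L^{*}(\bar{\omega}) = 0, \ \ \Delta\bar{\omega} = -\frac{1}{4}\bar{\alpha}|r|^{2}. $$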
By Corollary 3.9, (or Lemma 3.8), the $L^{2}$ norm of $\bar{\omega}$ on $\bar{B},$ i.e. $\bar{\omega}_{o},$ is now controlled by the $L^{1}$ norm of $\bar{\alpha}|r|^{2}$ on $B$,
\begin{equation} \label{e3.37}
\bar{\omega}_{o} \leq c\cdot [||\bar{\alpha}|r|^{2}||_{L^{1}(B)}+1].
\end{equation}
We now divide the discussion into three cases, similar to the discussion following Theorem 3.6.
{\bf (I).}
Suppose $\bar{\alpha}$ is uniformly bounded away from 0 and $\infty ,$ i.e.
\begin{equation} \label{e3.38}
\kappa \leq \bar{\alpha} \leq \kappa^{-1},
\end{equation}
for some $\kappa > $ 0. Since $r_{h}(x) \geq $ 1, the $L^{1}$ norm of $\bar{\alpha}|r|^{2}$ on $B$ is then uniformly bounded, and so $\bar{\omega}_{o}$ is uniformly bounded by (3.37). Hence, the proof of (3.14) follows as in Case (i) above, starting with Corollary 3.5. Further,
$$\bar{\omega}_{o} \leq c(\varepsilon^{-1})\cdot ||\bar{\omega}||_{L^{1}(B_{\varepsilon})}, $$
since both terms are uniformly bounded away from 0 and $\infty .$ Hence (3.15) also follows as in Case (i) above.
{\bf (II).}
Suppose $\bar{\alpha}$ is uniformly bounded away from 0, i.e.
\begin{equation} \label{e3.39}
\kappa \leq \bar{\alpha},
\end{equation}
for some $\kappa > $ 0. Then as in Case (ii), divide $\bar{\omega}$ and $\bar{\alpha}$ by $\bar{\alpha},$ so that the coefficient $\alpha' = \frac{\bar{\alpha}}{\bar{\alpha}} =$ 1. Then $\omega' = \frac{\bar{\omega}}{\bar{\alpha}}$ has bounded $L^{2}$ norm on $\bar{B}$ by (3.37) and the proof of (3.14) proceeds as in Case (I). In this case, the $L^{1}$ norm of $\omega' $ on $B_{\varepsilon}$ may be very small, but (3.15) follows as above from the renormalization of (3.37).
{\bf (III).}
$\bar{\alpha}$ is small, i.e.
\begin{equation} \label{e3.40}
\bar{\alpha} \leq \kappa .
\end{equation}
Again from (3.37), it follows that $\bar{\omega}_{o}$ is uniformly bounded on $\bar{B}.$ Thus, by Corollary 3.5, we have
\begin{equation} \label{e3.41}
||\bar{\omega}||_{L^{2,2}} \leq c(\kappa ),
\end{equation}
so that, by Sobolev embedding, $\bar{\omega}$ is uniformly controlled in the $C^{1/2}$ topology inside $\bar{B}.$ This, together with (3.35) implies that $\bar{\omega}(x) \sim - 1,$ and hence there is a smaller ball $B'\subset\bar{B},$ whose radius is uniformly bounded below, such that for all $y\in B' ,$
\begin{equation} \label{e3.42}
\bar{\omega}(y) \leq -\tfrac{1}{10}.
\end{equation}
(The constant $-\tfrac{1}{10}$ can be replaced by any other fixed negative constant).
We now proceed with the proof of Theorem 3.6 under these assumptions. For notational simplicity, we remove the bar and prime from now on, so that $B' $ becomes $B$, $\bar{\omega}$ becomes $\omega , \bar{\alpha}$ becomes $\alpha ,$ and $\alpha , \omega $ satisfy (3.40)-(3.42) on $B$. The previous arguments, when $\alpha $ is bounded away from 0, essentially depend on the (dominant) $\alpha\nabla{\cal R}^{2}$ term in (3.1), and much less on the $L^{*}(\omega )$ term, (except in the estimates on $\omega $ in Proposition 3.7 - Corollary 3.9). When $\alpha $ is or approaches 0, one needs to work mainly with the $L^{*}(\omega )$ term to obtain higher order estimates, ensuring that the two terms $\alpha\nabla{\cal R}^{2}$ and $L^{*}(\omega )$ do not interfere with each other. For the following Lemmas, the preceding assumptions are assumed to hold.
\begin{lemma} \label{l 3.10.}
There is a constant $c < \infty $ such that, on $\bar{B} \subset B,$
\begin{equation} \label{e3.43}
\alpha (\int|r|^{6})^{1/3} \leq c, \ \ \alpha\int|r|^{3} \leq c, \ \ \alpha\int|Dr|^{2} \leq c.
\end{equation}
\end{lemma}
{\bf Proof:}
For a given smooth cutoff function $\eta $ of compact support in $\bar{B},$ pair (3.1) with $\eta^{2}r$ and integrate by parts to obtain
\begin{equation} \label{e3.44}
\alpha\int|D\eta r|^{2} - \int\omega\eta^{2}|r|^{2} \leq \alpha c\int\eta^{2}|r|^{3} + \int< D^{2}(-\omega ), \eta^{2}r> + \alpha c\int (|D\eta|^{2}+ |\Delta\eta^{2}|)|r|^{2}.
\end{equation}
By the Sobolev inequality,
\begin{equation} \label{e3.45}
(\int (\eta|r|)^{6})^{1/3} \leq c_{s}\int|D(\eta|r|)|^{2} \leq cc_{s}\int|D\eta r|^{2},
\end{equation}
where $c_{s}$ is the Sobolev constant of the embedding $L^{1,2}_{0} \subset L^{6}.$ Also, by the H\"older inequality,
\begin{equation} \label{e3.46}
\int\eta^{2}|r|^{3} \leq (\int (\eta|r|)^{6})^{1/3}\cdot (\int|r|^{3/2})^{2/3} \leq c_{o}(\int (\eta|r|)^{6})^{1/3},
\end{equation}
where the last inequality follows from the definition of $\rho ,$ c.f. the beginning of \S 2. We may assume $c_{o}$ is chosen sufficiently small so that $cc_{o}c_{s} \leq \frac{1}{2}.$ Thus, the cubic term on the right in (3.44) may be absorbed into the left, giving
$$\frac{c}{2}\alpha (\int (\eta|r|)^{6})^{1/3} - \int\omega\eta^{2}|r|^{2} \leq \int< D^{2}(-\omega ), \eta^{2}r> + \alpha c\int (|D\eta|^{2}+|\Delta\eta^{2}|)|r|^{2}. $$
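In detail, the absorption combines (3.46), (3.45) and the choice of $c_{o},$ keeping track of the constants only through the product $cc_{o}c_{s} \leq \frac{1}{2}$:
$$\alpha c\int\eta^{2}|r|^{3} \leq \alpha cc_{o}(\int (\eta|r|)^{6})^{1/3} \leq \alpha (cc_{o}c_{s})\int|D\eta r|^{2} \leq \frac{\alpha}{2}\int|D\eta r|^{2}, $$
so the cubic term is dominated by half the gradient term on the left in (3.44).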
For the first term on the right, we may estimate
$$\int< D^{2}(-\omega ), \eta^{2}r> \ \leq (\int|D^{2}\omega|^{2})^{1/2}(\int|r|^{2})^{1/2} \leq c, $$
by (3.41) and the definition of $\rho .$ For the second term, $|D\eta|$ and $|\Delta\eta^{2}|$ are uniformly bounded in $L^{\infty}(B),$ so that
$$\alpha\int (|D\eta|^{2}+ |\Delta\eta^{2}|)|r|^{2} \leq c, $$
either by Lemma 3.2 or by (3.40) and the definition of $\rho .$ From (3.42), $-\omega \geq \frac{1}{10}$ in $\bar{B},$ so that one obtains the bound
$$\alpha (\int|r|^{6})^{1/3} \leq c, $$
and thus also
$$\alpha\int|Dr|^{2} \leq c. $$
The bound on the $L^{1}$ norm of $\alpha|r|^{3}$ then follows from (3.46).
{\qed
}
\noindent
\begin{lemma} \label{l 3.11.}
There is a constant $c < \infty $ such that, on $\bar{B} \subset B,$
\begin{equation} \label{e3.47}
\int|r|^{3} \leq c, \ \ \alpha (\int|r|^{9})^{1/3} \leq c.
\end{equation}
\end{lemma}
{\bf Proof:}
By pairing (3.1) with $r$, one obtains the estimate
\begin{equation} \label{e3.48}
-\alpha\Delta|r|^{2} + \alpha|Dr|^{2} - \omega|r|^{2} \ \leq \ < D^{2}(-\omega ), r> + \ \alpha|r|^{3}.
\end{equation}
Multiply (3.48) by $\eta|r|$ and integrate by parts, using the bound (3.42) and the Sobolev inequality as in (3.45) to obtain
\begin{equation} \label{e3.49}
c\cdot \alpha (\int|r|^{9})^{1/3}+ \frac{1}{10}\int|r|^{3} \leq \alpha\int|r|^{4} + \int< D^{2}(-\omega ), \eta|r|r> + \alpha\int|r|^{2}|D|r||\cdot |D\eta|.
\end{equation}
We have
$$\int< D^{2}(-\omega ), \eta|r|r> \ \leq \int|r|^{2}|D^{2}\omega| \leq (\int|r|^{3})^{2/3}(\int|D^{2}\omega|^{3})^{1/3} ,$$
and
$$(\int|D^{2}\omega|^{3})^{1/3} \leq c(\int\alpha^{3}|r|^{6})^{1/3}+ c(\int|\omega|^{2})^{1/2} \leq c, $$
where the last estimate follows from elliptic regularity applied to the trace equation (3.2), together with Lemma 3.10 and (3.41). Thus, because of the $2/3$ power, this term may be absorbed into the left of (3.49), unless $||r||_{L^{3}}$ is small, in which case one obtains a bound on $||r||_{L^{3}}$ directly.
Similarly
$$\alpha\int|r|^{4} = \alpha\int|r|^{3}|r| \leq \alpha (\int|r|^{9})^{1/3}(\int|r|^{3/2})^{2/3} \leq c_{o}\alpha (\int|r|^{9})^{1/3}. $$
Since $c_{o}$ is sufficiently small, this term may be absorbed into the left. For the last term on the right in (3.49),
$$\alpha\int|r|^{2}|D|r||\cdot |D\eta| \leq \delta\alpha\int|r|^{4} + \delta^{-1}\alpha\int|D|r||^{2}|D\eta|^{2}; $$
the second term on the right is bounded by Lemma 3.10, while as in the previous estimate, the first term may be absorbed into the left, by choosing $\delta $ sufficiently small. These estimates combine to give the result.
{\qed
}
This same argument may be repeated once more, pairing with $\eta|r|^{2},$ to obtain a bound
\begin{equation} \label{e3.50}
\int|r|^{4} \leq c.
\end{equation}
\noindent
\begin{lemma} \label{l 3.12.}
There is a constant $c < \infty $ such that, on $\bar{B} \subset B,$
\begin{equation} \label{e3.51}
\alpha^{1/2}||D^{*}Dr||_{L^{2}} \leq c,\ \ ||Dr||_{L^{2}} \leq c, \ \ ||D^{2}\omega||_{L^{1,2}} \leq c.
\end{equation}
\end{lemma}
{\bf Proof:}
We may take the covariant derivative of (3.1) to obtain
\begin{equation} \label{e3.52}
\alpha DD^{*}Dr + \alpha Dr^{2} - D\omega r = DD^{2}(-\omega ),
\end{equation}
where $Dr^{2}$ denotes derivatives of terms quadratic in curvature. Hence
$$\alpha< DD^{*}Dr,\eta Dr> + \alpha< Dr^{2},\eta Dr> - < D\omega r,\eta Dr> \ = -< DD^{2}(\omega ),\eta Dr> . $$
Using (3.42), this gives rise to the bound
\begin{equation} \label{e3.53}
\alpha\int|D^{*}D\eta r|^{2}+\frac{1}{10}\int\eta|Dr|^{2}\leq
\end{equation}
$$\leq \alpha c\int\eta|r||Dr|^{2}+\delta\int\eta|Dr|^{2}+c\delta^{-1}\int\eta|DD^{2}\omega|^{2}+\alpha c\int|D\eta|^{2}|Dr|^{2}+c\int\eta|D\omega||Dr||r|. $$
From elliptic regularity together with the Sobolev inequality (3.45), we have, (ignoring some constants),
\begin{equation} \label{e3.54}
\alpha (\int|D\eta r|^{6})^{1/3}\leq \alpha\int|D^{*}D\eta r|^{2}.
\end{equation}
Using the same kind of argument as in (3.46), the first term on the right of (3.53) can then be absorbed into the left. Similarly, the second term in (3.53) can be absorbed into the $\frac{1}{10}$ term on the left, for $\delta $ small. For the third term, elliptic regularity gives, (ignoring constants and lower order terms),
\begin{equation} \label{e3.55}
\int|DD^{2}\omega|^{2} \leq \int|D\Delta\omega|^{2}+c \leq \alpha^{2}\int|r|^{2}|D|r||^{2}+c \leq \alpha^{2}(\int|r|^{4})^{1/2}(\int|D|r||^{4})^{1/2}+c.
\end{equation}
But by Lemma 3.10,
$$\alpha (\int|r|^{4})^{1/2}\leq c\cdot c_{o}, $$
so that this third term may also be absorbed into the left using (3.54). The fourth term on the right in (3.53) is bounded by Lemma 3.10. Finally, for the last term, the H\"older inequality gives
$$\int\eta|D\omega||Dr||r| \leq ||D\omega||_{L^{6}}||r||_{L^{3}}||Dr||_{L^{2}}. $$
The first two terms on the right here are bounded, by (3.41) and Lemma 3.11, so by use of the Young inequality, this term may also be absorbed into the left in (3.53).
It follows that the left side of (3.53) is uniformly bounded. This gives the first two bounds in (3.51), while the last bound follows from the argument (3.55) above.
{\qed
}
The estimates in Lemma 3.12, together with the previous work in Cases (I)-(III) above, prove Theorem 3.6 for $k =$ 1. By taking higher covariant derivatives of (3.1) and continuing in the same way for Case (III), one derives higher order estimates on $g$ and $\omega .$
This completes the proof of Theorem 3.6.
{\qed
}
\section{Non-Existence of ${\cal R}_{s}^{2}$ Solutions with Free $S^{1}$ Action.}
\setcounter{equation}{0}
In this section, we prove Theorem 0.2. Thus let $N$ be an open oriented 3-manifold and $g$ a complete Riemannian metric on $N$ satisfying the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5). The triple $(N, g, \omega )$ is assumed to admit a free isometric $S^{1}$ action. The assumption (0.6) on $\omega $ will enter only later.
Let $V$ denote the quotient space $V = N/S^{1}.$ By passing to a suitable covering space if necessary, we may and will assume that $V$ is simply connected, and so $V = {\Bbb R}^{2}$ topologically.
The metric $g$ is a Riemannian submersion to a complete metric $g_{V}$ on $V$. Let $f: V \rightarrow {\Bbb R} $ denote the length of the orbits, i.e. $f(x)$ is the length of the $S^{1}$ fiber through $x$. Standard submersion formulas, c.f. [B, 9.37], imply that the scalar curvature $s_{V}$ of $g_{V},$ equal to twice the Gauss curvature, is given by
\begin{equation} \label{e4.1}
s_{V} = s + |A|^{2} + 2|H|^{2} + 2\Delta logf ,
\end{equation}
where $A$ is the obstruction to integrability of the horizontal distribution and $H$ is the geodesic curvature of the fibers $S^{1}.$ Further $|H|^{2} = |\nabla logf|^{2},$ where log denotes the natural logarithm.
Let $v(r)$ = area$D_{x_{o}}(r)$ in $(V, g_{V})$ and $\lambda (r) = v' (r)$ the length of $S_{x_{o}}(r),$ for some fixed base point $x_{o}\in V.$ The following general result uses only the submersion equation (4.1) and the assumption $s \geq $ 0. (The ${\cal R}_{s}^{2}$ equations are used only after this result).
\begin{proposition} \label{p 4.1. }
Let (V, $g_{V})$ be as above, satisfying (4.1) with $s \geq $ 0. Then there exists a constant $c < \infty $ such that
\begin{equation} \label{e4.2}
v(r) \leq c\cdot r^{2},
\end{equation}
\begin{equation} \label{e4.3}
\lambda (r) \leq c\cdot r,
\end{equation}
and
\begin{equation} \label{e4.4}
\int_{V}|\nabla logf|^{2}dA_{V} \leq c, \ \ \int_{V}|A|^{2}dA_{V} \leq c.
\end{equation}
\end{proposition}
{\bf Proof:}
We first prove (4.2) by combining (4.1) with the Gauss-Bonnet theorem. Thus, pair (4.1) with a smooth cutoff function $\eta^{2}$ where $\eta = \eta (d),$ and $d$ is the distance function on $V$ from $x_{o}\in V.$ This gives
\begin{equation} \label{e4.5}
\int_{V}(\eta^{2}s + \eta^{2}|A|^{2} + 2\eta^{2}|\nabla logf|^{2} + 2\eta^{2}\Delta logf) = \int_{V}\eta^{2}s_{V}.
\end{equation}
Now
\begin{equation} \label{e4.6}
2\int_{V}\eta^{2}\Delta logf = - 4\int_{V}\eta<\nabla\eta , \nabla logf> \ \geq - 2\int_{V}\eta^{2}|\nabla logf|^{2} - 2\int_{V}|\nabla\eta|^{2},
\end{equation}
so that,
\begin{equation} \label{e4.7}
\int_{V}\eta^{2}s_{V} \geq - 2\int_{V}|\nabla\eta|^{2},
\end{equation}
since $s \geq $ 0. The Gauss-Bonnet theorem implies that in the sense of distributions on ${\Bbb R} ,$
\begin{equation} \label{e4.8}
s_{V}(r) = 2[\chi (r) - \kappa (r)]' ,
\end{equation}
where
\begin{equation} \label{e4.9}
\chi (r) = \chi (B(r)), \ s_{V}(r) = \int_{S_{x_{o}}(r)}s_{V}, \ \kappa (r) = \int_{S_{x_{o}}(r)}\kappa ,
\end{equation}
and $\kappa $ is the geodesic curvature of $S_{x_{o}}(r),$ c.f. [GL, Theorem 8.11]. Since $\eta = \eta (d),$ by expressing (4.7) in terms of integration over the levels of $d$, one obtains, after integrating by parts,
\begin{equation} \label{e4.10}
\int_{0}^{\infty}(\eta^{2})' [\chi (r) - \kappa (r)] \leq \int_{0}^{\infty}(\eta' )^{2}\lambda (r).
\end{equation}
Now choose $\eta = \eta (r) = -\frac{1}{R}r +$ 1 for $r \leq R,$ and $\eta (r) =$ 0 for $r \geq R$. In particular, $(\eta^{2})' \leq $ 0. It is classical, c.f. again [GL, p. 391], that
\begin{equation} \label{e4.11}
\lambda' (r) \leq \kappa (r) .
\end{equation}
Then (4.10) gives
$$2\int_{0}^{R}(-\frac{1}{R}r+1)\frac{1}{R}\lambda' (r) \leq \frac{1}{R^{2}}\int_{0}^{R}\lambda (r)dr + 2\int (-\frac{1}{R}r+1)\frac{1}{R}\chi (r)dr. $$
But
$$2\int_{0}^{R}(-\frac{1}{R}r+1)\frac{1}{R}\lambda' (r) = \frac{2}{R}\int\lambda' - \frac{2}{R^{2}}\int r\lambda' = \frac{2\lambda}{R} - \frac{2}{R}\lambda + \frac{2}{R^{2}}\int\lambda . $$
Thus
$$\frac{1}{R^{2}}\int_{0}^{R}\lambda (r)dr \leq 2\int_{0}^{R}(-\frac{1}{R}r+1)\frac{1}{R}\chi (r)dr \leq C, $$
where the last inequality follows from the fact that $\chi (B(r)) \leq $ 1, since $B(r)$ is connected. Since $\frac{1}{R^{2}}\int_{0}^{R}\lambda (r)dr = \frac{v(R)}{R^{2}},$ this gives (4.2).
Next, we prove (4.4). Returning to (4.5) and applying the Gauss-Bonnet theorem with $\eta = \chi_{B(r)}$ the characteristic function of $B(r) = B_{x_{o}}(r)$ in (4.10) gives,
$$\int_{D(r)}(2|\nabla logf|^{2}+|A|^{2}) \leq 4\pi\chi (B(r)) - 2\lambda'- 2\int_{S(r)}<\nabla logf, \nu> , $$
where $\nu $ is the unit outward normal; the term $\lambda' $ is understood as a distribution on ${\Bbb R}^{+}.$ Using the H\"older inequality on the last term, we have
\begin{equation} \label{e4.12}
2\int_{D(r)}(|\nabla logf|^{2}+\frac{1}{2}|A|^{2}) \leq 4\pi\chi (B(r)) - 2\lambda' + 2(\int_{S(r)}|\nabla logf|^{2}+\frac{1}{2}|A|^{2})^{1/2}(v' )^{1/2}.
\end{equation}
We now proceed more or less following a well-known argument in [CY,Thm.1]. Set
$$F(r) = \int_{D(r)}(|\nabla logf|^{2}+\frac{1}{2}|A|^{2}). $$
Then (4.12) gives
\begin{equation} \label{e4.13}
F(r) \leq 2\pi - \lambda' + (F' )^{1/2}\lambda^{1/2}.
\end{equation}
Note that $F(r)$ is monotone increasing in $r$. If $F(r) \leq 2\pi ,$ for all $r$, then (4.4) is proved, so that we may assume that $F(r) > 2\pi ,$ for large $r$. Then (4.13) implies
$$\frac{1}{\lambda^{1/2}} + \frac{\lambda'}{(F- 2\pi )\lambda^{1/2}} \leq \frac{(F' )^{1/2}}{F- 2\pi}. $$
We integrate this from $r$ to $s$, with $s >> r$. A straightforward application of the H\"older inequality gives
$$(s- r)^{3/2}\cdot \bigl(\int_{r}^{s}\lambda\bigr)^{- 1/2} \leq \int_{r}^{s}(\lambda^{- 1/2}). $$
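Explicitly, this is the H\"older inequality with exponents 3 and $\frac{3}{2}$:
$$s - r = \int_{r}^{s}\lambda^{1/3}\cdot \lambda^{-1/3} \leq \bigl(\int_{r}^{s}\lambda\bigr)^{1/3}\bigl(\int_{r}^{s}\lambda^{-1/2}\bigr)^{2/3}, $$
followed by raising both sides to the power $\frac{3}{2}.$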
Thus, using (4.2), one obtains
$$(s^{1/2} - r^{1/2}) + \int_{r}^{s} \frac{\lambda'}{(F- 2\pi )\lambda^{1/2}} \leq c\cdot (F^{-1}(r) - F^{-1}(s))^{1/2}(s-r)^{1/2}. $$
The integral on the left is non-negative, (integrate by parts to see this), and thus taking a limit as $s \rightarrow \infty $ implies, for any $r$,
$$c \leq F^{-1}(r), $$
which gives (4.4).
To prove (4.3), write (4.13) as $\lambda' \leq 2\pi + (F' )^{1/2}\cdot \lambda^{1/2}.$ Then one may integrate this from 0 to $r$ and apply the H\"older inequality and (4.2) to obtain (4.3).
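In detail, integrating from 0 to $r$ and applying the H\"older inequality gives
$$\lambda (r) \leq 2\pi r + \bigl(\int_{0}^{r}F' \bigr)^{1/2}\bigl(\int_{0}^{r}\lambda\bigr)^{1/2} \leq 2\pi r + (c\cdot cr^{2})^{1/2} \leq c'\cdot r, $$
using the bound $F \leq c$ from (4.4) and $\int_{0}^{r}\lambda = v(r) \leq cr^{2}$ from (4.2).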
{\qed
| 2,658 | 60,678 |
en
|
train
|
0.114.16
|
}
\noindent
\begin{remark} \label{r 4.2.}
{\rm Consider the metric $\Roof{g}{\widetilde}_{V} = f^{2}\cdot g_{V}$ on $V$. If $\Roof{K}{\widetilde}$ and $K$ denote the Gauss curvatures of $\Roof{g}{\widetilde}_{V}$ and $g_{V}$ respectively, then a standard formula, c.f. [B, 1.159], gives $f^{2}\Roof{K}{\widetilde} = K - \Delta logf.$ Thus, from (4.1), we obtain
$$f^{2}\Roof{K}{\widetilde} = {\tfrac{1}{2}} s + {\tfrac{1}{2}} |A|^{2} + |\nabla logf|^{2} \geq 0. $$
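Indeed, $K = \frac{1}{2}s_{V},$ so that by (4.1),
$$f^{2}\Roof{K}{\widetilde} = {\tfrac{1}{2}} s_{V} - \Delta logf = {\tfrac{1}{2}} (s + |A|^{2} + 2|\nabla logf|^{2} + 2\Delta logf) - \Delta logf , $$
and the $\Delta logf$ terms cancel, giving the formula above.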
If one knew that $\Roof{g}{\widetilde}_{V} $ were complete, then the results in Proposition 4.1 could be derived in a simpler way from the well-known geometry of complete surfaces of non-negative curvature.}
\end{remark}
Next, we have the following analogue of Lemma 2.4, or more precisely (2.16). As in \S 2, let $t(x) = dist_{N}(x, x_{o}),$ for some fixed point $x_{o}\in N.$
\begin{proposition} \label{p 4.3.}
Let (N, g, $\omega )$ be a complete ${\cal R}_{s}^{2}$ solution, with a free isometric $S^{1}$ action, and $\omega \leq $ 0. Then there is a constant $c_{o} > $ 0 such that on (N, g)
\begin{equation} \label{e4.14}
\rho \geq c_{o}\cdot t,
\end{equation}
and
\begin{equation} \label{e4.15}
|r| \leq c_{o}^{-1}t^{-2}.
\end{equation}
Further, for any $x,y\in S(r),$
\begin{equation} \label{e4.16}
\frac{\omega (x)}{\omega (y)} \rightarrow 1, \ \ {\rm as} \ \ r \rightarrow \infty ,
\end{equation}
and $\omega $ is a proper exhaustion function on $N$.
If further $\omega $ is bounded below, then
\begin{equation} \label{e4.17}
\lim_{t\rightarrow\infty}t^{2}|r| = 0,
\end{equation}
and
\begin{equation} \label{e4.18}
{\rm osc}_{N \setminus B(r)} \ \omega \rightarrow 0, \ \ {\rm as} \ r \rightarrow \infty .
\end{equation}
\end{proposition}
{\bf Proof:}
These estimates will be proved essentially at the same time, but we start with the proof of (4.14). This is proved by contradiction, and the proof is formally identical to the proof of (2.16). Thus, assuming (4.14) is false, let $\{x_{i}\}$ be any sequence in $(N, g)$ with $t(x_{i}) \rightarrow \infty ,$ chosen to be $(\rho ,\frac{1}{2})$ buffered, as in the discussion following (2.16). Blow-down the metric $g$ based at $x_{i},$ by setting $g_{i} = \rho (x_{i})^{-2}\cdot g.$ In the $g_{i}$ metric, the equations (0.4)-(0.5) take the form
\begin{equation} \label{e4.19}
\alpha_{i}\nabla{\cal R}^{2} + L^{*}\omega = 0,
\end{equation}
\begin{equation} \label{e4.20}
\Delta\omega = -\frac{\alpha_{i}}{4}|r|^{2},
\end{equation}
where $\alpha_{i} = \alpha\rho_{i}^{-2}, \rho_{i} = \rho (x_{i}, g)$. All other metric quantities in (4.19)-(4.20) are w.r.t. $g_{i}.$
For clarity, it is useful to separate the discussion into non-collapse and collapse cases.
{\bf Case (I).(Non-Collapse).}
Suppose there is a constant $a_{o} > $ 0 such that
\begin{equation} \label{e4.21}
areaD(r) \geq a_{o}r^{2},
\end{equation}
in $(V, g_{V}).$
Now by Proposition 4.1,
\begin{equation} \label{e4.22}
\int_{V \setminus D(r)}|\nabla logf|^{2} \rightarrow 0, \ \ {\rm as} \ r \rightarrow \infty .
\end{equation}
The integral in (4.22) is scale-invariant, and also invariant under multiplicative renormalizations of $f$. Thus, we normalize $f$ at each $x_{i}$ so that $f(x_{i}) \sim $ 1. This is equivalent to passing to suitable covering or quotient spaces of $N$.
By the discussion on convergence preceding Lemma 2.1, and by Theorem 3.6, a subsequence of the metrics $(B_{x_{i}}(\frac{3}{2}), g_{i}) \subset (N, g_i)$ converges smoothly to a limit ${\cal R}_{s}^{2}$ solution defined at least on $(B_{x}(\frac{5}{4}), g_{\infty})$, $x = \lim x_{i}$, possibly with $\alpha = 0$, (i.e. a solution of the static vacuum Einstein equations (1.8)). In particular, $f$ restricted to $B_{x_{i}}(\frac{5}{4})$ is uniformly bounded, away from 0 and $\infty .$ It follows then from the bound on $f$ and (4.22) that
\begin{equation} \label{e4.23}
\int_{B_{x_{i}}(\frac{5}{4})}|\nabla logf|^{2} \rightarrow 0, \ \ {\rm as} \ \ i \rightarrow \infty ,
\end{equation}
w.r.t. the $g_{i}$-metric on $N$.
The smooth convergence as above, and (4.23) imply that on the limit, $f =$ const. The same reasoning on (4.4) implies that $A =0$ on the limit. Since $A = 0$, $\nabla f =$ 0 and $s =$ 0 on the limit, from (4.1) we see that $s_{V} =$ 0 on the limit. Hence the limit is a flat product metric on $C\times S^{1},$ where $C$ is a flat 2-manifold. The smooth convergence implies that $(B_{x_{i}}(\frac{5}{4}), g_{i})$ is almost flat, i.e. its curvature is almost 0. This is a contradiction to the buffered property of $x_{i},$ as in Lemma 2.4. This proves (4.14). The estimate (4.15) follows immediately from (4.14) and the smooth convergence as above.
The proof of (4.17) is now again the same as in Lemma 2.4. Namely repeat the argument above on the metrics $g_{i} = t(x_{i})^{-2}\cdot g,$ for any sequence $x_{i}$ with $t(x_{i}) \rightarrow \infty .$ Note that in this (non-collapse) situation, the hypothesis that $\omega $ is bounded below is not necessary.
Observe also that the smooth convergence and (4.22) imply that
\begin{equation} \label{e4.24}
|\nabla logf|(x) << 1/t(x),
\end{equation}
as $t(x) \rightarrow \infty ,$ so that $f$ grows slower than any fixed positive power of $t$. It follows from this and from (4.3) that the annuli $A_{x_{o}}(r, 2r)$ have diameter satisfying diam$A_{x_{o}}(r,2r) \leq c\cdot r,$ for some fixed constant $c < \infty .$
To prove (4.16), we see from the above that for any sequence $x_{i}$ with $t(x_{i}) \rightarrow \infty ,$ the blow-downs $g_{i}$ as above converge smoothly to a solution of the static vacuum Einstein equations (1.8), which is flat, (in a subsequence). Hence the limit potential $\bar{\omega} =$ lim $\bar{\omega}_{i}, \bar{\omega}_{i} = \omega (x)/|\omega (x_{i})|,$ is either constant or a non-constant affine function. The limit is a flat product $A(k^{-1},k)\times S^{1},$ where $A(k^{-1},k)$ is the limit of the blow-downs of $A(k^{-1}t(x_{i}),kt(x_{i}))\subset (V, g_{i})$ and $k > $ 0 is arbitrary. If $\bar{\omega}$ were a non-constant affine function, then $\bar{\omega}$ must assume both positive and negative values on $A(k^{-1}, k)$, for some choice of sufficiently large $k$, which contradicts the assumption that $\omega \leq $ 0 everywhere. Thus, $\omega $ renormalized as above converges to a constant on all blow-downs, which gives (4.16).
Since $N = {\Bbb R}^{2}\times S^{1}$ topologically, $N$ has only one end. By the minimum principle applied to the trace equation (3.2), $\inf_{S(r)}\omega \rightarrow \inf_{N}\omega ,$ as $r \rightarrow \infty $. Together with (4.16), it follows that $\omega $ is a proper exhaustion function on $N$. Further, if $\omega $ is bounded below, then (4.18) follows immediately.
{\bf Case (II). (Collapse).}
If (4.21) does not hold, so that
\begin{equation} \label{e4.25}
areaD(r_{i}) << r_{i}^{2},
\end{equation}
for some sequence $r_{i} \rightarrow \infty ,$ then one needs to argue differently, since in this case, the estimate (4.22) may arise from collapse of the area, and not the behavior of $logf$.
First we prove (4.14). By the same reasoning as above in Case (I), if (4.14) does not hold, then there is a $(\rho, \frac{1}{2})$ buffered sequence $\{x_{i}\}$, with $t(x_i) \rightarrow \infty$, which violates (4.14). Further, we may choose the base points exactly as in the proof of Lemmas 2.1 or 2.4, to satisfy (2.4). As in Case (I), normalize $f$ at each $x_i$ so that $f(x_i) \sim 1$. If the metrics $g_{i} = \rho(x_{i})^{-2} \cdot g$ are not collapsing at $\{x_{i}\}$, then the same argument as in Case (I) above gives a contradiction. Thus, assume the metrics $\{g_i\}$ are collapsing at $x_i$. Now as in (2.4), the curvature radius $\rho_i$ is uniformly bounded below by $\frac{1}{2}$ within arbitrary but fixed distances to the base point $x_{i}$. Hence, as discussed preceding Lemma 2.1, we may unwrap the collapse by passing to sufficiently large finite covers of arbitrarily large balls $B_{x_{i}}(R_i)$, $R_i \rightarrow \infty$. One thus obtains in the limit a complete non-flat ${\cal R}_{s}^{2}$ or static vacuum solution $(N' , g' )$, (corresponding to $\alpha = 0$), with an additional free isometric $S^{1}$ action, i.e. on $(N' , g' )$ one now has a free isometric $S^{1}\times S^{1}$ action. The second $S^{1}$ action arises from the collapse, and the unwrapping of the collapse in very large covers. This means that $N' $ is a torus bundle over ${\Bbb R} .$ Since $(N' , g' )$ is complete and scalar-flat, a result of Gromov-Lawson [GL, Thm. 8.4] states that any such metric is flat. This contradiction then implies (4.14) must hold. As before, smooth convergence then gives (4.15).
The argument for (4.16)-(4.18) proceeds as follows. Consider any sequence $\{x_{i}\}$ in $N$ with $t(x_{i}) \rightarrow \infty .$ The blow-down metrics $g_{i} = t(x_{i})^{-2}\cdot g$ have a subsequence converging, after passing to suitably large finite covers as above, to a (now non-complete) maximal limit $(N' , g' )$ with, as above, a free isometric $S^{1}\times S^{1}$ action. The limit $(N' , g' )$ is necessarily a solution of the static vacuum equations (1.8) with potential $\bar \omega$ obtained by renormalizing the potential $\omega $ of $(N, g)$, i.e. $\bar{\omega} = \lim_{i\rightarrow\infty} \frac{\omega (x)}{|\omega (x_{i})|}$ as before. Now it is standard, c.f. [An2, Ex.2.11], [EK, Thm.2-3.12], that the only, (even locally defined), solutions of these equations with such an $S^{1}\times S^{1}$ action are (submanifolds of) the {\it Kasner metrics}, given explicitly as metrics on ${\Bbb R}^{+}\times S^{1}\times S^{1}$ by
\begin{equation} \label{e4.26}
dr^{2}+r^{2\alpha}d\theta_{1}^{2}+r^{2\beta}d\theta_{2}^{2},
\end{equation}
where $\alpha = (a- 1)/(a- 1+a^{-1}), \beta = (a^{-1}- 1)/(a- 1+a^{-1}),$ with potential $\bar{\omega} = cr^{\gamma}, \gamma = (a- 1+a^{-1})^{-1}$. The parameter $a$ may take any value in $[- 1,1].$ The values $a = 0$ and $a = 1$ give the flat metric, with $\bar{\omega} =$ const and $\bar{\omega} = cr$ respectively.
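As a check on the endpoint values, (here $\alpha , \beta $ denote the Kasner exponents of (4.26), not the coupling constant): at $a =$ 1,
$$\alpha = \frac{1-1}{1-1+1} = 0, \ \ \beta = \frac{1-1}{1-1+1} = 0, \ \ \gamma = 1, $$
so (4.26) is the flat metric with $\bar{\omega} = cr,$ while as $a \rightarrow 0,$ one has $\alpha \rightarrow 0, \beta \rightarrow 1, \gamma \rightarrow 0,$ giving the flat metric $dr^{2}+d\theta_{1}^{2}+r^{2}d\theta_{2}^{2}$ with $\bar{\omega} =$ const.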
The limit $(N', g')$ is flat if $a = 0,1$; this occurs if and only if $|r|t^{2} \rightarrow $ 0 in a $g_i$-neighborhood of $x_{i}$. Similarly, the limit $(N', g')$ is non-flat when $a \in [-1, 0)\cup (0,1)$, which occurs when $|r|t^{2}$ does not converge to 0 everywhere in a $g_i$-neighborhood of $x_{i}.$
In either case, the limits of the geodesic spheres in $(N, g)$ in the limit Kasner metric, are the tori $\{r =$ const\}. Since the oscillation of the limit potential $\bar{\omega}$ on such tori is 0, it is clear from the smooth convergence that (4.16) holds. As before, it is also clear that $\omega $ is a proper exhaustion function.
Finally if $\omega $ is bounded below, then the limit $\bar \omega$ is uniformly bounded. Since $\bar \omega = cr^{\gamma}$, it follows that necessarily $a = 0$, so that all limits $(N' , g' )$ above are flat. The same argument as above in the non-collapse case then implies (4.17)-(4.18).
{\qed
}
\noindent
\begin{remark} \label{r 4.4.}
{\rm The argument in Proposition 4.3 shows that if $\omega \leq $ 0, then either the curvature decays faster than quadratically, i.e. (4.17) holds, or the ${\cal R}_{s}^{2}$ solution is asymptotic to the Kasner metric (4.26). It is possible that there exist complete $S^{1}$ invariant ${\cal R}_{s}^{2}$ solutions asymptotic to the Kasner metric, but whether this occurs remains unknown.}
\end{remark}
We are now in a position to complete the proof of Theorem 0.2.
\noindent
{\bf Proof of Theorem 0.2.}
The assumption in (0.6) that $\omega $ is bounded below will be used only in one place, c.f. the paragraph following (4.37), so for the moment, we proceed without this assumption.
It is convenient for notation to change sign, so we let
\begin{equation} \label{e4.27}
u = -\omega .
\end{equation}
It is clear that Proposition 4.3 implies that $u > $ 0 outside a compact set in $N$.
It is useful to consider the auxiliary metric
\begin{equation} \label{e4.28}
\Roof{g}{\widetilde} = u^{2}\cdot g,
\end{equation}
compare with [An2, \S 3]. In fact the remainder of the proof follows closely the proof of [An2, Thm.0.3], c.f. also [An2, Rmk.3.6] so we refer there for some details. We only consider $\Roof{g}{\widetilde}$ outside a compact set $K \subset N$ on which $u > $ 0, so that $\Roof{g}{\widetilde}$ is a smooth Riemannian metric. By standard formulas for conformal change of the metric, c.f. [B, Ch.1J], the Ricci curvature $\Roof{r}{\widetilde}$ of $\Roof{g}{\widetilde}$ is given by
$$\Roof{r}{\widetilde} = r - u^{-1}D^{2}u - u^{-1}\Delta u\cdot g + 2(dlog u)^{2} = $$
\begin{equation} \label{e4.29}
= - u^{-1}L^{*}u - 2u^{-1}\Delta u\cdot g + 2(dlog u)^{2}
\end{equation}
$$\geq - u^{-1}L^{*}u - 2u^{-1}\Delta u\cdot g. $$
From the Euler-Lagrange equations (0.4)-(0.5) and (4.15), together with the regularity estimates from Theorem 3.6, we see that
$$|\nabla{\cal R}^{2}| \leq c\cdot t^{-4}, \ \ |\Delta u| \leq c\cdot t^{-4}. $$
Thus the Ricci curvature of $\Roof{g}{\widetilde}$ is almost non-negative outside a compact set, in the sense that
$$\Roof{r}{\widetilde} \geq - c\cdot t^{-4}\Roof{g}{\widetilde}. $$
Further, since the Ricci curvature controls the full curvature in dimension 3, one sees from (4.29) that the sectional curvature $\Roof{K}{\widetilde}$ of $\Roof{g}{\widetilde}$ satisfies
\begin{equation} \label{e4.30}
|\Roof{K}{\widetilde}| \leq \frac{c}{u^{2}}|\nabla logu|^{2} + \frac{c}{t^4},
\end{equation}
where the norm and gradient on the right are w.r.t. the $g$ metric.
Let $\Roof{t}{\widetilde}(x) = dist_{\Roof{g}{\widetilde}}(x, x_{o}),$ (for $\Roof{t}{\widetilde}$ large), and $|\Roof{K}{\widetilde}|(\Roof{t}{\widetilde}) = \sup_{S(\Roof{t}{\widetilde})}|\Roof{K}{\widetilde}|,$ taken w.r.t. $(N, \Roof{g}{\widetilde}).$ It follows from the change of variables formula that
\begin{equation} \label{e4.31}
\int_{1}^{\Roof{s}{\tilde}}\Roof{t}{\tilde}|\Roof{K}{\tilde}|(\Roof{t}{\tilde})d\Roof{t}{\tilde} \leq c \bigl [\int_{1}^{s}t|\nabla logu|^{2}(t)dt + 1 \bigr ],
\end{equation}
where as above, $|\nabla logu|^{2}(t) = \sup_{S(t)}|\nabla logu|^{2}.$ In establishing (4.31), we use the fact that
\begin{equation} \label{e4.32}
d\Roof{t}{\tilde} = udt
\end{equation}
together with the fact that
\begin{equation} \label{e4.33}
\Roof{t}{\tilde} \leq c\cdot u\cdot t,
\end{equation}
which follows from (4.32) by integration, using (4.16) together with the fact that $|\nabla logu| \leq c/t,$ which follows from (4.15).
Now we claim that
\begin{equation} \label{e4.34}
\int_{1}^{s}t|\nabla logu|^{2}(t)dt \leq c\int_{1}^{s}area(S(t))^{-1}dt,
\end{equation}
for some $c < \infty .$ We refer to [An2,Lemma 3.5] for the details of this (quite standard) argument, and just sketch the ideas involved. First from the Bochner-Lichnerowicz formula and (4.15), one obtains
$$\Delta|\nabla logu| \geq - (c/t^{2})|\nabla logu|. $$
Hence, from the sub-mean value inequality, [GT, Thm.8.17], one has
\begin{equation} \label{e4.35}
\sup_{S(r)}|\nabla logu|^{2} \leq \frac{C}{vol A(\frac{1}{2}r, 2r)}\int_{A(\frac{1}{2}r,2r)}|\nabla logu|^{2} \leq \frac{C}{r\cdot area S(r)}\int_{B(r)}|\nabla logu|^{2},
\end{equation}
where the second inequality uses again the curvature bound (4.15). To estimate the $L^{2}$ norm of $|\nabla logu|$ on $N$, we observe that
\begin{equation} \label{e4.36}
\int_{N}|r|^{2}dV = \int_{V}|r|^{2}fdA < \infty .
\end{equation}
The estimate (4.36) follows from the decay (4.15), from (4.2), and the fact that $sup_{S(r)}f \leq r^{1+\varepsilon},$ for any fixed $\varepsilon > $ 0, which follows from the proof of Proposition 4.3, (c.f. (4.24) and (4.26)). Now multiply the trace equation (0.5) by $u^{-1}$ and apply the divergence theorem on a suitable compact exhaustion $\Omega_{i}$ of $N$, for instance by sub-level sets of the proper exhaustion function $u$. Using (4.36), one thus obtains a uniform bound on the $L^{2}$ norm of $|\nabla logu|$ on $\Omega_{i},$ independent of $i$. Inserting this bound in (4.35) implies (4.34).
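Spelling out the integration behind (4.36): integrating over the geodesic spheres in $(V, g_{V})$,
$$\int_{V}|r|^{2}fdA \leq c + c\int_{1}^{\infty}t^{-4}\cdot t^{1+\varepsilon}\cdot \lambda (t)dt \leq c + c\int_{1}^{\infty}t^{-2+\varepsilon}dt < \infty , $$
for any fixed $\varepsilon < $ 1, using (4.15) for $|r|^{2},$ the bound $\sup_{S(r)}f \leq r^{1+\varepsilon}$ and the length bound (4.3).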
Now if
\begin{equation} \label{e4.37}
\int_{1}^{\infty}area(S(t))^{-1}dt = \infty ,
\end{equation}
then a result of Varopoulos [V] implies that $(N, g)$ is parabolic, in the sense that any positive superharmonic function on $N$ is constant.
Since by (0.5) and (0.6), $\omega $ is a bounded superharmonic function on $(N, g)$, $\omega $ must be constant, so that $(N, g)$ is flat, by the trace equation (0.5). This proves Theorem 0.2 in this case. As indicated above, this is in fact the only place in the proof of Theorem 0.2 where the lower bound assumption $\omega \geq -\lambda > -\infty $ is used.
Thus, suppose instead that
\begin{equation} \label{e4.38}
\int_{1}^{\infty}area(S(t))^{-1}dt < \infty .
\end{equation}
It follows then from (4.31),(4.34) and (4.38) that
\begin{equation} \label{e4.39}
\int_{1}^{\infty}\Roof{t}{\tilde}|\Roof{K}{\tilde}|(\Roof{t}{\tilde})d\Roof{t}{\tilde} < \infty .
\end{equation}
Now it is a standard fact in comparison theory, c.f. [Ab], that the bound (4.39) implies that $(N, \Roof{g}{\widetilde})$ is almost flat at infinity, in the strong sense that geodesic rays starting at some base point $x_{o} \in N$ either stay a bounded distance apart, or grow apart linearly. More precisely, outside a sufficiently large compact set, $(N, \Roof{g}{\widetilde})$ is quasi-isometric to the complement of a compact set in a complete flat manifold. Observe that $(N, \Roof{g}{\widetilde})$ cannot be quasi-isometric to ${\Bbb R}^{3}$ outside a compact set, since that would imply that $V$ is non-collapsing at infinity. But then by combining (4.24), (4.34) and (4.38) it follows that $u\cdot f,$ the length of the $S^{1}$ fiber in $(N, \Roof{g}{\widetilde})$ has sublinear growth; this is impossible when $(N, \widetilde g)$ is quasi-isometric to ${\Bbb R}^3$. Hence, outside a compact set, $(N, \widetilde g)$ is quasi-isometric to a flat product $C\times S^{1}$ where $C$ is the complement of a compact set in a complete flat 2-manifold.
This means that there is a constant $C < \infty $ such that the $\widetilde g$-length of the $S^{1}$ fiber satisfies
\begin{equation} \label{e4.40}
L_{\tilde g}(S^{1}) = u\cdot f \leq C.
\end{equation}
Now return to the trace equation (0.5) on $(N, g)$. Integrate this over $B(s) \subset (N, g)$ and apply the divergence theorem to obtain
\begin{equation} \label{e4.41}
\frac{\alpha}{4}\int_{B(s)}|r|^{2} = \int_{S(s)}<\nabla u, \nu> \ \leq\int_{S(s)}|\nabla u| = \int_{S_{V}(s)}|\nabla u|f,
\end{equation}
where $S_{V}(s)$ is the geodesic $s$-sphere in $(V, g_{V}).$ Using (4.40), it follows that
$$\frac{\alpha}{4}\int_{B(s)}|r|^{2} \leq C\int_{S_{V}(s)}|\nabla logu|. $$
However, by (4.34) and (4.38), $|\nabla logu|(t) << t^{-1}.$ This and the length estimate (4.3) imply that
$$\int_{S_{V}(s)}|\nabla logu| \rightarrow 0, \ \ {\rm as} \ \ s \rightarrow \infty , $$
which of course implies that $(N, g)$ is flat. This completes the proof of Theorem 0.2.
{\qed
}
\noindent
\begin{remark} \label{r 4.5.(i).}
{\rm As noted above, the lower bound assumption on $\omega $ is required only in case $(N, g)$ is parabolic. Alternately, if $(V, g_{V})$ is non-collapsing at infinity, i.e. (4.21) holds, or if $u = - \omega$ has sufficiently small growth at infinity, i.e. $\int^{\infty}t|\nabla logu|^{2}dt < \infty ,$ then the proof above shows that the lower bound on $\omega $ is again not necessary.
On the other hand, as noted in Remark 4.4, there might exist complete ${\cal R}_{s}^{2}$ solutions asymptotic to the Kasner metric at infinity, so that $\omega \sim - r^{\gamma}, \gamma\in (0,1).$
{\bf (ii).}
The proofs of both Theorems 0.1 and 0.2 above involve only the asymptotic properties of the solution. It is clear that these proofs remain valid if it is assumed that $s \geq $ 0 outside a compact set in $(N, g)$ for Theorem 0.1, while $\omega \leq $ 0 outside a compact set in $(N, g)$ for Theorem 0.2.}
\end{remark}
\section{Existence of Complete ${\cal R}_{s}^{2}$ Solutions.}
\setcounter{equation}{0}
{\bf \S 5.1.}
In this section, we show that the assumption that $(N, g, \omega )$ have an isometric free $S^{1}$ action in Theorem 0.2 is necessary, by constructing non-trivial ${\cal R}_{s}^{2}$ solutions with a large degree of symmetry.
Let $g_{S}$ be the Schwarzschild metric on $[2m,\infty )\times S^{2},$ given by
\begin{equation} \label{e5.1}
g_{S} = (1-\frac{2m}{r})^{-1}dr^{2} + r^{2}ds^{2}_{S^{2}}.
\end{equation}
The parameter $m > $ 0 is the mass of $g_{S}.$ Varying $m$ corresponds to changing the metric by a homothety. Clearly the metric is spherically symmetric, and so admits an isometric $SO(3)$ action, although the action of any $S^{1}\subset SO(3)$ on $[2m,\infty )\times S^{2}$ is not free.
The boundary $\Sigma = r^{-1}(2m)$ is a totally geodesic round 2-sphere, of radius $2m$, and hence $g_{S}$ may be isometrically doubled across $\Sigma $ to a complete smooth metric on $N = {\Bbb R}\times S^{2}.$
The Schwarzschild metric is the most important solution of the static vacuum Einstein equations (1.8). The potential is given by the function
\begin{equation} \label{e5.2}
u = (1-\frac{2m}{r})^{1/2}.
\end{equation}
The potential $u$ extends past $\Sigma $ as an {\it odd} harmonic function under reflection in $\Sigma .$
We show that there is a potential function $\omega $ on $N$ such that $(N, g_{S}, \omega )$ is a complete solution of the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5) with non-zero $\alpha .$
\begin{proposition} \label{p 5.1.}
The Schwarzschild metric (N, $g_{S})$ satisfies the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5), where the potential is given by
$$\omega = \tau + c\cdot u, $$
for any $c\in{\Bbb R} $ and $u$ is as in (5.2). The function $\tau $ is spherically symmetric and even w.r.t. reflection across $\Sigma .$ Explicitly,
\begin{equation} \label{e5.3}
\tau = \lim_{a\rightarrow 2m} \ \tau_{a}
\end{equation}
where $a > 2m$ and
\begin{equation} \label{e5.4}
\tau_{a}(r) = \frac{\alpha}{8}(1-\frac{2m}{r})^{1/2}\bigl(\int_{a}^{r} \frac{1}{s^{5}(1-\frac{2m}{s})^{3/2}}ds - \frac{1}{ma^{3}}\frac{1}{(1-\frac{2m}{a})^{1/2}}\bigr) ,
\end{equation}
for $r \geq a$. In particular, $\tau (2m) = -\frac{\alpha}{8m}(2m)^{-3}, \tau' (2m) =$ 0, $\tau < $ 0 everywhere, and $\tau $ is asymptotic to a constant $\tau_{o} < $ 0 at each end of $N$.
\end{proposition}
{\bf Proof:}
By scaling, we may assume $m = \frac{1}{2}.$ One may rewrite the expression (0.2) for $\nabla{\cal R}^{2}$ via a standard Weitzenb\"ock formula, c.f. [B, 4.71], to obtain
\begin{equation} \label{e5.5}
\nabla{\cal R}^{2} = \frac{1}{2}\delta dr + \frac{1}{2}D^{2}s - r\circ r - R\circ r - \frac{1}{2}\Delta s\cdot g + \frac{1}{2}|r|^{2}\cdot g.
\end{equation}
The Schwarzschild metric $g_{S},$ or any spherically symmetric metric, is conformally flat, so that $d(r-\frac{s}{4}g) =$ 0. Since $g_{S}$ is scalar-flat, one thus has
\begin{equation} \label{e5.6}
\nabla{\cal R}^{2} = - r\circ r - R\circ r + \frac{1}{2}|r|^{2}\cdot g.
\end{equation}
Let $t$ denote the distance to the event horizon $\Sigma $ and let $e_{i}, i =$ 1,2,3 be a local orthonormal frame on $N = S^{2}\times {\Bbb R} ,$ with $e_{3} = \nabla t,$ so that $e_{1}$ and $e_{2}$ are tangent to the spheres $t =$ const. Any such framing diagonalizes $r$ and $\nabla{\cal R}^{2}.$ The Ricci curvature of $g_{S}$ satisfies
$$r_{33} = - r^{-3}, r_{11} = r_{22} = \frac{1}{2}r^{-3}. $$
A straightforward computation of the curvature terms in (5.6) then gives
$$(\nabla{\cal R}^{2})_{33} = \frac{1}{4}r^{-6}, \ \ (\nabla{\cal R}^{2})_{11} = (\nabla{\cal R}^{2})_{22} = -\frac{1}{2}r^{-6}. $$
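As a check on these values, note that $|r|^{2} = \frac{3}{2}r^{-6},$ while the sectional curvatures here are $K_{12} = r^{-3},$ $K_{13} = K_{23} = -\frac{1}{2}r^{-3}.$ With the sign convention $(R\circ r)_{ab} = R_{acbd}r_{cd},$ (an assumed convention, consistent with the values displayed above), one computes for instance
$$(\nabla{\cal R}^{2})_{33} = - r_{33}^{2} - (K_{13}r_{11}+K_{23}r_{22}) + {\tfrac{1}{2}}|r|^{2} = - r^{-6} + {\tfrac{1}{2}} r^{-6} + {\tfrac{3}{4}} r^{-6} = {\tfrac{1}{4}} r^{-6}. $$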
We look for a solution of (0.4) with $\tau $ spherically symmetric, i.e. $\tau = \tau (t).$ Then
$$D^{2}\tau = \tau' D^{2}t + \tau'' dt\otimes dt, \Delta\tau = \tau' H + \tau'' , $$
where $H = \Delta t$ is the mean curvature of the spheres $t =$ const and the derivatives are w.r.t. $t$. One has $(D^{2}t)_{33} =$ 0 while $(D^{2}t)_{ii} = \frac{1}{2}H$ in tangential directions $i =$ 1,2. Thus, in the $\nabla t$ direction, (0.4) requires
\begin{equation} \label{e5.7}
\frac{\alpha}{4}r^{-6}+ \tau'' - (\tau' H+\tau'' ) - \tau (- r^{-3}) = 0,
\end{equation}
while in the tangential directions, (0.4) is equivalent to
\begin{equation} \label{e5.8}
-\frac{\alpha}{2}r^{-6}+ \frac{H}{2}\tau' - (\tau' H+\tau'' ) - \frac{\tau}{2}r^{-3} = 0.
\end{equation}
The equations (5.7)-(5.8) simplify to the system
\begin{equation} \label{e5.9}
\tau' H - \tau r^{-3} = \frac{\alpha}{4}r^{-6},
\end{equation}
\begin{equation} \label{e5.10}
\tau'' + \frac{H}{2}\tau' + \frac{\tau}{2}r^{-3} = -\frac{\alpha}{2}r^{-6}.
\end{equation}
It is easily verified that (5.10) follows from (5.9) by differentiation, so only (5.9) needs to be satisfied. It is convenient to change the derivatives w.r.t. $t$ above to derivatives w.r.t. $r$, using the relation $\frac{dr}{dt} = (1-\frac{1}{r})^{1/2}.$ Since also $H = 2\frac{r'}{r} = \frac{2}{r}(1-\frac{1}{r})^{1/2},$ (5.9) becomes
\begin{equation} \label{e5.11}
\frac{2}{r}(1-\frac{1}{r})\dot {\tau} - \tau r^{-3} = \frac{\alpha}{4}r^{-6},
\end{equation}
where $\dot {\tau} = d\tau /dr.$
The linear ODE (5.11) in $\tau $ can easily be explicitly integrated and one may verify that $\tau $ in (5.3)-(5.4) is the unique solution on $([2m,\infty )\times S^{2}, g_{S})$ of (0.4)-(0.5) satisfying, (since $m = \frac{1}{2}),$
\begin{equation} \label{e5.12}
\tau (1) = -\frac{\alpha}{4} \ \ {\rm and} \ \ \frac{d}{dr}\tau|_{r=1} = 0.
\end{equation}
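A sketch of the integration behind (5.3)-(5.4), with $m = \frac{1}{2}$: the homogeneous solutions of (5.11) are multiples of $u = (1-\frac{1}{r})^{1/2},$ since
$$\frac{\dot {\tau}}{\tau} = \frac{r^{-3}}{\frac{2}{r}(1-\frac{1}{r})} = \frac{1}{2r(r-1)} = \frac{d}{dr} log (1-\frac{1}{r})^{1/2}. $$
Writing $\tau = u\cdot w$ (variation of parameters) then gives $\dot {w}(s) = \frac{\alpha}{8}s^{-5}(1-\frac{1}{s})^{-3/2},$ so that
$$\tau (r) = \frac{\alpha}{8}(1-\frac{1}{r})^{1/2}\int_{a}^{r}\frac{ds}{s^{5}(1-\frac{1}{s})^{3/2}} + c_{a}\cdot (1-\frac{1}{r})^{1/2}, $$
which is (5.4) for the appropriate choice of the constant $c_{a}.$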
It follows that the {\it even} reflection of $\tau $ across $\{r =$ 1\} $= \Sigma $ gives a smooth solution of (0.4) on the (doubled) Schwarzschild metric $(N, g_{S}),$ satisfying the stated properties.
Since Ker $L^{*} = \ <u> $ on the Schwarzschild metric, the potential $\omega $ has in general the form $\omega = \tau +cu,$ for some $c\in{\Bbb R} .$
{\qed
}
Thus the Schwarzschild metric, with potential function $\omega ,$ gives the simplest non-trivial solution to the ${\cal R}_{s}^{2}$ equations (0.4)-(0.5), just as it is the canonical solution of the static vacuum equations (1.8).
\noindent
\begin{remark} \label{r 5.2.}
{\rm It is useful to consider some global aspects of solutions to the static vacuum equations in this context. It is proved in [An3,Appendix] that there are no non-flat complete solutions to the static vacuum equations (1.8) with potential $\omega < $ 0 or $\omega > $ 0 everywhere. Thus, this result is analogous to Theorem 0.1. While there do exist non-trivial complete smooth solutions with $\omega $ changing sign, for example the (isometrically doubled) Schwarzschild metric with $\omega = \pm u$ in (5.2), such solutions are very special since they have a smooth event horizon $\Sigma = \{\omega =$ 0\}, c.f. [An2] for further discussion.
Hence, among the three classes of equations considered here, namely the ${\cal R}^{2}$ equations (0.2), the static vacuum equations (1.8) and the ${\cal R}_{s}^{2}$ equations (1.9), only the scalar curvature constrained ${\cal R}_{s}^{2}$ equations (1.9) for $0<\alpha<\infty $ admit non-trivial complete solutions with non-vanishing potential.
In [An4], we will investigate in greater detail the structure of complete ${\cal R}_{s}^{2}$ solutions, 0 $< \alpha < \infty ,$ as well as the structure of junction solutions, i.e. metrics which are solutions of the ${\cal R}^{2}$ equations in one region, and solutions of the ${\cal R}_{s}^{2}$ equations or static vacuum equations in another region, as mentioned in \S 1.}
\end{remark}
{\bf \S 5.2.}
For the work in [An4] and that following it, it turns out to be very useful to have the results of this paper, and those of [AnI, \S 3], with the $L^{2}$ norm of the traceless Ricci curvature $z = r - \frac{s}{3}g$ in place of $r$, i.e. with the functional
\begin{equation} \label{e5.13}
{\cal Z}^{2} = \int|z|^{2}dV
\end{equation}
in place of ${\cal R}^{2}.$ We show below that this can be done with relatively minor changes in the arguments.
First, the Euler-Lagrange equations for ${\cal Z}^{2}$ are
\begin{equation} \label{e5.14}
\nabla{\cal Z}^{2} = D^{*}Dz + \frac{1}{3}D^{2}s - 2\stackrel{\circ}{R}\circ z + \frac{1}{2}(|z|^{2} - \frac{1}{3}\Delta s)\cdot g = 0,
\end{equation}
\begin{equation} \label{e5.15}
tr\nabla{\cal Z}^{2} = -\frac{1}{6}\Delta s - \frac{1}{2}|z|^{2} = 0.
\end{equation}
Similarly, the ${\cal Z}_{s}^{2}$ equations for scalar-flat metrics, i.e. the analogues of (0.4)-(0.5), are
\begin{equation} \label{e5.16}
\alpha\nabla{\cal Z}^{2} + L^{*}(\omega ) = 0,
\end{equation}
$$\Delta \omega = -\frac{\alpha}{4}|z|^{2}. $$
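The trace equation here follows by tracing the first equation in (5.16): with the convention $L^{*}\omega = D^{2}\omega - \Delta\omega\cdot g - \omega r,$ (consistent with (4.29)), one has $tr L^{*}\omega = -2\Delta\omega - \omega s = -2\Delta\omega $ when $s =$ 0, while by (5.15), $tr \nabla{\cal Z}^{2} = -\frac{1}{2}|z|^{2}$ when $s =$ 0, so that
$$-\frac{\alpha}{2}|z|^{2} - 2\Delta\omega = 0. $$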
We begin by examining the functional $I_{\varepsilon}^{~-}$ given by
\begin{equation} \label{e5.17}
I_{\varepsilon}^{~-} = \varepsilon v^{1/3}\int|z|^{2} + (v^{1/2}\int (s^{-})^{2})^{1/2},
\end{equation}
i.e. the ${\cal Z}^{2}$ analogue of the functional $I_{\varepsilon}' $ in (1.16). Recall from \S 1 that the behavior of $I_{\varepsilon}' $ as $\varepsilon \rightarrow $ 0 gives rise to the ${\cal R}^{2}$ and ${\cal R}_{s}^{2}$ equations. In the same way, the behavior of $I_{\varepsilon}^{~-}$ as $\varepsilon \rightarrow 0$ gives rise to the ${\cal Z}^2$ and ${\cal Z}_{s}^{2}$ equations.
Again, as noted in \S 1, the existence and basic properties of critical points of the functional $I_{\varepsilon}$ in (1.1) were (essentially) treated in [An1, \S 8]. For this functional, the presence of $r$ or $z$ in the definition (1.1) makes no essential difference, due to the ${\cal S}^{2}$ term in (1.1). However, for the passage from ${\cal S}^{2}$ in (1.1) to ${\cal S}_{-}^{2}$ in (5.17), this is no longer the case, since we have no a priori control on $s^{+}$ = max$(s, 0)$. Thus, we first show that the results of [An1, \S 3 and \S 8] do in fact hold for metrics with bounds on $I_{\varepsilon}^{~-}$ and for minimizers of $I_{\varepsilon}^{~-}$.
Let $r_{h}$ be the $L^{2,2}$ harmonic radius, as in [An1, Def. 3.2]. The main estimate we need is the following; compare with [An1, Rmk. 3.6].
\begin{proposition} \label{p 5.3.}
Let $D$ be a domain in a complete Riemannian manifold $(N, g)$, such that
\begin{equation} \label{e5.18}
\varepsilon\int_{D}|z|^{2} + \int_{D}(s^{-})^{2} \leq \Lambda ,
\end{equation}
where 0 $< \varepsilon < \infty .$ Then there is a constant $r_{o} = r_{o}(\Lambda , \varepsilon ) > $ 0 such that
\begin{equation} \label{e5.19}
r_{h}(x) \geq r_{o}\cdot \nu (x)\cdot \frac{dist(x, \partial D)}{diam D}.
\end{equation}
\end{proposition}
{\bf Proof:}
The proof is a modification of the proof of [An1, Thm. 3.5]. Thus, if (5.19) is false, then there exists a sequence of points $x_{i}$ in domains $(D_{i}, g_{i})$ such that
\begin{equation} \label{e5.20}
\frac{r_{h}(x_{i})}{dist(x_{i}, \partial D_{i})} << \frac{\nu (x_{i})}{diam D_{i}} \leq 1,
\end{equation}
where the last inequality follows since the volume radius $\nu$ is at most the diameter. Choose the points $x_{i}$ to realize the minimal value of the ratio in (5.20). It follows as in the proof of [An1, Thm. 3.5] that a subsequence of the rescaled pointed manifolds $(D_{i}, g_{i}' , x_{i}), g_{i}' = r_{h}(x_{i})^{-2}\cdot g_{i},$ converges in the weak $L^{2,2}$ topology to a complete, non-compact limit manifold $(N, g' , x)$, with $L^{2,2}$ metric $g' ,$ and of infinite volume radius. From (5.20), one easily deduces that $r_{h}(y_{i}) > \frac{1}{2},$ for all $y_{i}$ such that $dist_{g_{i}'}(x_{i}, y_{i}) \leq R$, for an arbitrary $R < \infty ,$ and for $i$ sufficiently large.
Now the bound (5.18) and scaling properties of curvature imply that
\begin{equation} \label{e5.21}
z \rightarrow 0, \ \ s^{-} \rightarrow 0,
\end{equation}
strongly in $L^{2},$ uniformly on compact subsets of $(N, g' , x)$. Hence the limit $(N, g' )$ is of constant curvature, with non-negative scalar curvature. In particular, the scalar curvature $s' $ of $g' $ is constant. If $s' > $ 0, then $(N, g' )$ must be compact, which is impossible. Hence $s' =$ 0, and so $(N, g' )$ is flat. It follows that $(N, g' )$ is isometric to the flat metric on ${\Bbb R}^{3},$ since $(N, g' )$ has infinite volume radius.
We claim that $s \rightarrow $ 0 strongly in $L^{2}$ also. Together with (5.21), this implies $r \rightarrow $ 0 strongly in $L^{2},$ and the proof proceeds as in [An1, Thm. 3.5]. To prove the claim, let $\phi$ be any function of compact support in $B_{y}(\frac{1}{2}),$ for an arbitrary $y\in N.$ With respect to the $g_{i}' $ metric, and by use of the Bianchi identity, we have
$$\int s\cdot \Delta \phi = -\int<\nabla s, \nabla \phi> = \tfrac{1}{6} \int<\delta z, \nabla \phi> = \tfrac{1}{6} \int< z, D^{2}\phi> , $$
where we have removed the prime and subscript $i$ from the notation. By $L^{2}$ elliptic regularity w.r.t. the metrics $g_{i}' ,$ (which are uniformly bounded locally in $L^{2,2}),$ it follows that
\begin{equation} \label{e5.22}
\int_{B_{y}(\frac{1}{4})}(s - s_{o})^{2} \leq C\cdot \int_{B_{y}(\frac{1}{2})}|z|^{2},
\end{equation}
where $s_{o}$ is the mean value of $s$ on $B_{y}(\frac{1}{2}),$ and where $C$ is a fixed constant, independent of $i$ and $y$. Thus, $s = s_{g_{i}'}$ converges strongly in $L^{2}$ to its mean value in the limit. Since $s_{o} =$ 0 in the limit, it follows that $s \rightarrow $ 0 strongly in $L^{2},$ as required.
{\qed
}
Let $\rho_{z}$ be the $L^{2}$ curvature radius w.r.t. $z$, i.e. again as in [An1, Def. 3.2], $\rho_{z}(x)$ is the largest radius of a geodesic ball about $x$ such that for $y\in B_{x}(\rho_{z}(x))$ and $D_{y}(s) = B_{x}(\rho_{z}(x))\cap B_{y}(s),$
\begin{equation} \label{e5.23}
\frac{s^{4}}{vol D_{y}(s)}\int_{D_{y}(s)}|z|^{2} \leq c_{o},
\end{equation}
where $c_{o}$ is a fixed small constant. Of course $\rho_{r} \leq \rho_{z}$. Note that Proposition 5.3 implies in particular that for the $L^{2}$ curvature radius $\rho = \rho_{r}$ as in \S 2,
\begin{equation} \label{e5.24}
\rho_{r}(x) \geq \rho_{o}\cdot \rho_{z}(x),
\end{equation}
where $\rho_{o}$ is a constant depending only on a lower bound on $\nu $ and a bound on ${\cal S}_{-}^{2}$ on $B_{x}(\rho_{z}(x)).$ Note also that this local result is false without a local bound on ${\cal S}_{-}^{2}.$ For in this case, a metric of constant negative curvature, with scalar curvature of arbitrarily large absolute value, will make $r_{h}$ arbitrarily small without any change in $\rho_{z}.$
Proposition 5.3 shows that the analogue of [An1, Thm.3.5/Rmk.3.6] holds for the functional $I_{\varepsilon}^{~-}$ in place of ${\cal R}^{2},$ for any given $\varepsilon > $ 0, as does [An1,Thm.3.7]. Given this local $L^{2,2}$ control, an examination of the proofs shows that the results [An1,Thm.3.9-Cor.3.11] also hold w.r.t. $I_{\varepsilon}^{~-},$ without any further changes, as does the main initial structure theorem, [An1, Thm.3.19].
The local results [An1, Lem.3.12-Cor.3.17], dealing with collapse within the $L^{2}$ curvature radius, will not hold for $\rho_{z}$ without a suitable local bound on the scalar curvature. However, as noted at the bottom of [An1, p.223], [An1, Lemmas 3.12, 3.13] only require a bound on the $L^{2}$ norm of the negative part of the Ricci curvature. Hence these results, as well as [An1, Cor.3.14-Cor.3.17], hold for metrics satisfying
\begin{equation} \label{e5.25}
s \geq -\lambda ,
\end{equation}
for some fixed constant $\lambda > -\infty .$ In particular, for metrics satisfying (5.25), we have
\begin{equation} \label{e5.26}
\rho_{r}(x) \geq \rho_{o}\cdot \rho_{z}(x),
\end{equation}
where $\rho_{o} = \rho_{o}(\lambda , c_{o})$ is independent of the volume radius $\nu .$
We now use the results above to prove the following:
\begin{proposition} \label{p 5.4.}
Theorems 0.1 and 0.2 remain valid for complete non-compact ${\cal Z}^{2}$ and ${\cal Z}_{s}^{2}$ solutions respectively.
\end{proposition}
{\bf Proof:}
For Theorem 0.2, this is clear, since a scalar-flat ${\cal R}_{s}^{2}$ solution is the same as a scalar-flat ${\cal Z}_{s}^{2}$ solution.
For Theorem 0.1, as noted in the beginning of \S 2, the form of the full Euler-Lagrange equations (0.2) or (5.14) makes no difference in the arguments, since elliptic regularity may be obtained from either one. Thus we need only consider the difference in the trace equations (0.3) and (5.15). Besides the insignificant difference in the constant factor, the only difference in these equations is that $r$ is replaced by $z$; the fact that the sign of the constants is the same is of course important.
An examination of the proof shows that all arguments for ${\cal R}^{2}$ solutions remain valid for ${\cal Z}^{2}$ solutions, with $r$ replaced by $z$, except in the following two instances:
(i). The passage from (2.7) to (2.8) in the proof of Proposition 2.2, which used the obvious estimate $|r|^{2} \geq s^{2}/3.$ This estimate is no longer available for $|z|^{2}$ in place of $|r|^{2}.$
(ii). In the proof of Lemma 2.9(ii), where $\Delta s(x_{i}) \rightarrow $ 0 implies $|r|(x_{i}) \rightarrow $ 0, and hence $s(x_{i}) \rightarrow $ 0, which again no longer follows trivially for $z$ in place of $r$.
We first prove (ii) for ${\cal Z}^{2}$ solutions. By Lemma 2.1, we may assume that $(N, g)$ is a complete ${\cal Z}^{2}$ solution, with uniformly bounded curvature. Let $\{x_{i}\}$ be a minimizing sequence for $s \geq $ 0 on $(N, g)$. As before $\Delta s(x_{i}) \rightarrow $ 0, and so $|z|(x_{i}) \rightarrow $ 0, as $i \rightarrow \infty .$ Since the curvature is uniformly bounded, it follows from (the proof of) the maximum principle, c.f. [GT, Thm.3.5], that $|z|^{2} \rightarrow $ 0 in balls $B_{x_{i}}(c),$ for any given $c < \infty .$ Hence the metric $g$ in $B_{x_{i}}(c)$ approximates a constant curvature metric. Since $(N, g, x_{i})$ is complete and non-compact, this forces $s \rightarrow $ 0 in $B_{x_{i}}(c),$ which proves (ii).
Regarding (i), it turns out, somewhat surprisingly, that the proof of Proposition 2.2 is not so simple to rectify. First, observe that Proposition 2.2 is used only in the following places in the proof of Theorem 0.1.
(a). The end of Lemma 2.4.
(b). Lemmas 2.8 and 2.10.
Regarding (a), the estimate (2.19) still holds. In this case, as noted following the proof of Proposition 2.2, there is a smoothing $\tilde t$ of the distance function $t$ such that $|\Delta \tilde t| \leq c/ \tilde t^{2}$ and in fact
\begin{equation} \label{e5.27}
|D^{2}\tilde t| \leq c/\tilde t^{2}.
\end{equation}
The proof may then be completed as follows. As in the beginning of the proof of Proposition 2.2, we obtain from the trace equation (5.15),
\begin{equation} \label{e5.28}
\int\eta^{4}|z|^{2} \leq \frac{1}{3}\int\langle\nabla s, \nabla \eta^{4}\rangle = -\frac{1}{18}\int\langle z, D^{2}\eta^{4}\rangle ,
\end{equation}
where the last equality follows from the Bianchi identity $\delta z = -\frac{1}{6}ds,$ and $\eta = \eta (\tilde t)$ is of compact support. Expand $D^{2}\eta^{4}$ as before and apply the Cauchy-Schwarz inequality to (5.28). Using (5.27), together with the argument following (2.9), the proof of Proposition 2.2 follows easily in this case.
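For the reader's convenience, the expansion of $D^{2}\eta^{4}$ referred to is the standard chain rule computation, with $' = d/d\tilde t$:
$$D^{2}\eta^{4} = 4\eta^{3}\eta' D^{2}\tilde t + (12\eta^{2}(\eta' )^{2}+4\eta^{3}\eta'' )d\tilde t\otimes d\tilde t; $$
each term is then controlled by (5.27) and the bounds on the cutoff function $\eta .$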
(b). For both of these results, it is assumed that (2.47) holds, i.e. $s(x) \geq d_{o}\cdot \rho (x)^{-2},$ and so $s(x) \geq d\cdot t(x)^{-2}.$ In this case, the estimate (5.26), which holds since $(N, g)$ has non-negative scalar curvature, together with a standard covering argument implies that
\begin{equation} \label{e5.29}
\int_{B(R)}s^{2}dV \leq c_{1}\cdot \int_{B(2R)}|z|^{2},
\end{equation}
for all $R$ large, where $c_{1}$ is a constant independent of $R$. As before in the proof, there exists a sequence $r_{i} \rightarrow \infty$ as $i \rightarrow \infty$ such that, for all $R\in [r_{i}, 10r_{i}],$
\begin{equation} \label{e5.30}
\int_{B(R)}s^{2}dV \leq c_{2}\cdot \int_{B(R/2)}s^{2},
\end{equation}
with $c_{2}$ independent of $R$. Given the estimates (5.29)-(5.30), the proof of Proposition 2.2 then proceeds exactly as before.
The remainder of the proof of Theorem 0.1 then holds for ${\cal Z}^{2}$ solutions; the only further change is to replace $r$ by $z$.
{\qed
}
\begin{remark} \label{r 5.5.}
{\rm We take this opportunity to correct an error in [An1]. Namely in [An1, Thms. 0.1/3.19], and also in [An1, Thms. 0.3/5.9], it is asserted that the maximal open set $\Omega $ is embedded in $M$. This assertion may be incorrect, and in any case its validity remains unknown. The error is in the statement that the diffeomorphisms $f_{i_{k}}$ constructed near the top of [An1, p.229] can be chosen to be nested.
The proof of these results does show that $\Omega $ is weakly embedded in $M$,
$$\Omega \subset\subset M, $$
in the sense that any compact domain with smooth boundary in $\Omega $ embeds as such a domain in $M$. Similarly, there exist open sets $V \subset M$, which contain a neighborhood of infinity of $\Omega ,$ such that $\{g_{i}\}$ partially collapses $V$ along F-structures. In particular, $V$ itself, as well as a neighborhood of infinity in $\Omega $ are graph manifolds. Thus, the basic structure of these results remains the same, provided one replaces the claim that $\Omega \subset M$ by the statement that $\Omega \subset\subset M$.
The remaining parts of these results hold without further changes. The same remarks hold with regard to the results of \S 8. My thanks to Yair Minsky for pointing out this error.}
\end{remark}
\begin{center}
September, 1998/October, 1999
\end{center}
\address{Department of Mathematics\\
S.U.N.Y. at Stony Brook\\
Stony Brook, N.Y. 11794-3651}\\
\email{anderson@math.sunysb.edu}
\end{document}
\begin{document}
\title{Unsupervised classification of quantum data}
\author{Gael Sent\'is,$^{1}$ Alex Monr\`as,$^{2}$ Ramon Mu\~noz-Tapia,$^{2}$ John Calsamiglia,$^{2}$ and Emilio Bagan$^{2,3}$}
\affiliation{$^{1}$Naturwissenschaftlich-Technische Fakult\"at, Universit\"at Siegen, 57068 Siegen, Germany\\
$^{2}$F\'{i}sica Te\`{o}rica: Informaci\'{o} i Fen\`{o}mens Qu\`antics, Departament de F\'{\i}sica, Universitat Aut\`{o}noma de Barcelona, 08193 Bellaterra (Barcelona), Spain\\
$^{3}$Department of Computer Science, The University of Hong Kong, Pokfulam Road, Hong Kong
}
\begin{abstract}
We introduce the problem of unsupervised classification of quantum data, namely, of systems whose quantum states are unknown. We derive the optimal single-shot protocol
for the binary case, where the states in a disordered input array are of two types. Our protocol
is universal and able to automatically sort the input under minimal assumptions, yet partially preserving information contained in the states.
We quantify analytically its performance for arbitrary size and dimension of the data. We contrast it with the performance of its classical counterpart, which clusters data that has been sampled from two unknown probability distributions. We find that the quantum protocol
fully exploits the dimensionality of the quantum data to achieve a much higher performance, provided data is at least three-dimensional. \blue{For the sake of comparison, we discuss the optimal protocol when the classical and quantum states are known.}
\end{abstract}
\maketitle
\section{Introduction}
Quantum-based communication and computation technologies promise unprecedented applications and unforeseen speed-ups for certain classes of computational problems. In origin, the advantages of quantum computing were showcased through exemplary instances of problems that are hard to solve in a classical computer, such as integer factorization~\cite{Shor1998}, unstructured search~\cite{Grover1997}, discrete optimization~\cite{Finnila1994,Kadowaki1998},
and simulation of many-body Hamiltonian dynamics~\cite{Lloyd1996}.
In recent times, the field has ventured one step further: quantum computers are now also envisioned as nodes in a network of quantum devices, where connections are established via quantum channels, and data are
quantum systems that flow through the network~\cite{Kimble2008,Wehner2018}. The design of future quantum networks in turn brings up new theoretical challenges, such as devising universal information processing protocols optimized to work with generic quantum inputs, without the need of human intervention.
Quantum learning algorithms are by design well suited for this class of automated tasks~\cite{Dunjko2017}. Generalizing classical machine learning ideas to operate with quantum data, some algorithms have been devised for quantum template matching~\cite{Sasaki2002}, quantum anomaly detection~\cite{Liu2017,Skotiniotis2018}, learning unitary transformations~\cite{Bisio2010} and quantum measurements~\cite{Bisio2011a}, and classifying quantum states~\cite{Guta2010,Sentis2012a,Sentis2014a,Fanizza2018}. These works fall under the broad category of \emph{supervised} learning~\cite{Hastie2001,Devroye2013}, where the aim is to learn an unknown conditional probability distribution ${\rm Pr}(y|x)$ from a number of given samples $x_i$ and associated values or labels $y_i$, called \emph{training} instances. The performance of a trained learning algorithm is then evaluated by applying the learned function over new data $x'_i$ called \emph{test} instances. In the quantum extension of supervised learning~\cite{Monras2017}, the training instances are quantum---say, copies of the quantum state templates, or a potential anomalous state, or a number of uses of an unknown unitary transformation. The separation between training and testing steps is sometimes not as sharp: in reinforcement learning, training occurs on an instance basis via the interaction of an agent with an environment, and the learning process itself may alter the underlying probability distribution~\cite{Dunjko2016}.
In contrast, \emph{unsupervised} learning aims at inferring structure in an unknown distribution ${\rm Pr}(x)$ given random, unlabeled samples $x_i$. Typically, this is done by grouping the samples in \emph{clusters}, according to a preset definition of similarity. Unsupervised learning is a versatile form of learning, attractive in scenarios where
appropriately labeled training data is not available or too costly.
But it is also---generically---a much more challenging problem \cite{Aloise2009,Ben-David2015}. To our knowledge, a quantum extension of unsupervised learning in the sense described above
has not yet been considered in the literature.
\begin{figure}[t]
\includegraphics[scale=.6]{fig1_mockup.pdf}
\caption{Pictorial representation of the clustering device for an input of eight quantum states. States of the same type have the same color. States are clustered according to their type by performing a suitable collective measurement, which also provides a classical description of the clustering.}\label{fig:scheme}
\end{figure}
In this paper, we take a first step into this branch of quantum learning by introducing the problem of unsupervised binary classification of quantum states. We consider the following scenario: a source prepares quantum systems in two possible pure states that are completely unknown; after some time, $N$ such systems have been produced and we ask ourselves whether there exists a quantum device that is able to cluster them in two groups according to their states (see Fig.~\ref{fig:scheme}).
\blue{This scenario represents a quantum clustering task in its simplest form,
where the single feature defining a cluster of quantum systems is that their states are identical.
While clustering classical data under this definition of cluster---a set of equal data instances---yields a trivial algorithm,
merely observing such simple feature in a finite quantum data set
involves a nontrivial stochastic process
and gives rise to a primitive of operational relevance for quantum information.
Moreover, in some sense our scenario actually contains a
classical binary clustering problem: if we were to measure each quantum system separately, we would obtain a set of $N$ data points (the measurement outcomes). The points would be effectively sampled from the two probability distributions determined by the quantum states and the choice of measurement. The task would then be to identify which points were sampled from the same distribution.
Reciprocally, we can interpret our quantum clustering task as a natural extension of a classical clustering problem with completely unstructured data, where the only single feature that identifies a cluster is that the data points are sampled from a fixed, but arbitrary, categorical probability distribution (i.e., with no order nor metric in the underlying space). The quantum generalization is then to consider (non-commuting) quantum states instead of probability distributions.}
We require two important features in our quantum clustering device: (i) it has to be universal, that is, it should be designed to take any possible pair of types of input states, and (ii)
it has to provide a classical description of the clustering, that is, which particles belong to each cluster.
Feature (i) ensures
general purpose use and versatility of the clustering device, in a similar spirit to programmable quantum processors~\cite{Buzek2006}. Feature (ii) allows us to assess the performance of the device purely in terms of the accuracy of the clustering, which in turn facilitates the comparison with classical clustering strategies. Also due to (ii), we can justifiably say that the device has not only performed the clustering task but also ``learned'' that the input is (most likely) partitioned as specified by the output description.
Note that relaxing
feature (ii) in principle opens the door to a more general class of \emph{sorting} quantum devices, where the goal could be, e.g., to minimize the distance (under some norm) between the global output state and the state corresponding to perfect clustering of the input. Such devices, however, fall beyond the scope of unsupervised learning.
Requiring the description of the clusters as a classical outcome induces structure in the device. To generate this information, a quantum measurement shall be performed over all $N$ systems with as many outcomes as possible clusterings. Then, the systems will be sorted according to this outcome (see Fig.~\ref{fig:scheme}).
Depending on the context, e.g., on whether or not the systems will be further used after the clustering, different figures of merit shall be considered in the optimization of the device.
In this paper we focus on the clustering part: our goal is to find the quantum measurement that maximizes the success probability of a correct clustering.
\blue{
Features (i) and (ii) allow us to formally regard quantum clustering as a state discrimination task~\cite{Helstrom1976,Barnett2001,Chiribella2004,Chiribella2006a,Audenaert2007,Krovi2015a},
albeit with important differences with respect to the standard setting. In quantum state discrimination~\cite{Helstrom1976},
we want to determine the state of a quantum system among a set of \emph{known} hypotheses (i.e., classical descriptions of quantum states).
We can phrase this problem in machine learning terminology as follows. We have a test state (or several copies of it~\cite{Audenaert2007}) and we decide its label based on \emph{infinite training} data. In other words, we have full knowledge about the meaning of the possible labels. Supervised quantum learning algorithms for quantum state classification~\cite{Guta2010,Sentis2012a,Sentis2014a,Fanizza2018} consider the intermediate scenario with \emph{limited training} data. In this case, no description of the states is available. Instead, we are provided with a finite number of copies of systems in each of the possible quantum states, and thus we have only partial classical knowledge about the labels. Extracting the label information from the quantum training data then becomes a key step in the protocol. Following this line of thought, the problem we consider in this paper is a type of unsupervised learning, that is, one with \emph{no training}. There is no information whatsoever about what state each label represents.
}
We obtain analytical expressions for the performance of the optimal clustering protocol for arbitrary values of the local dimension $d$ of the systems in the cases of finite number of systems $N$ and in the asymptotic limit of many systems. We show that, in spite of the fact that the number of possible clusterings grows exponentially with $N$, the success probability decays only as $O(1/N^2)$.
Furthermore, we contrast these results with an optimal clustering algorithm designed for the classical version of the task.
We observe a striking phenomenon when analyzing the performance of the two protocols for $d>2$: whereas increasing the local dimension has a rapid negative impact in the success probability of the classical protocol (clustering becomes, naturally, harder), it turns out to be beneficial for its quantum counterpart.
We also see, through numerical analysis, that the quantum measurement that maximizes the success probability is also optimal for a more general class of cost functions that are more natural for clustering problems, including the Hamming distance.
In other words, this provides evidence that our entire analysis does not depend strongly on the chosen figure of merit, but rather on the structure of the problem itself.
Measuring the systems will in principle degrade the information encoded in their states, hence, intuitively, there should be a trade-off between how good a clustering is and how much information about the original states is left in the clusters. Remarkably, our analysis reveals that the measurement that clusterizes optimally actually preserves information regarding the type of states that form each cluster.
\blue{This feature adds to the usability of our device as a universal quantum data sorting processor. It can be regarded as the quantum analogue of a sorting network (or sorting memory)~\cite{Knuth1998}, used as a fixed network architecture that automatically orders generic inputs coming from an aggregated data pipeline.}
The details of this second step are however left for a subsequent publication.
The paper is organized as follows.
In Section~\ref{sec:the_task}, we formalize the problem and derive the optimal clustering protocol and its performance. In Section~\ref{sec:classical}, we consider a classical clustering protocol and contrast it with the optimal one. \blue{We present the proofs of the main results of our work and the necessary theoretical tools to derive them in Section~\ref{sec:methods}.}
We end in Section~\ref{sec:discussion} discussing the features of our quantum clustering device and other cost functions, and giving an outlook on future extensions.
\section{Clustering quantum states}\label{sec:the_task}
Let us suppose that a source prepares quantum systems randomly in one of two pure $d$-dimensional states~$\ket{\phi_0}$ and~$\ket{\phi_1}$ with equal prior probabilities.
Given a sequence of $N$ systems produced by the source, and with no knowledge of the states~$\ket{\phi_{0/1}}$, we are required to assign labels~`0' or~`1' to each of the systems. The labeling can be achieved via a generalized quantum measurement that tries to distinguish among all the possible global states of the $N$ systems. Each outcome of the measurement will then be associated to a possible label assignment, that is, to a \emph{clustering}.
Consider the case of four systems. All possible clusterings that we may arrange are depicted in Fig.~\ref{fig:N4} as strings of red and blue balls. Since the individual states of the systems are unknown, what is labeled as ``red'' or ``blue'' is arbitrary, thus interchanging the labels leads to an equivalent clustering. For arbitrary $N$, there will be~$2^{N-1}$ such clusterings. Fig.~\ref{fig:N4} also illustrates a natural way to label each clustering as $(n,\sigma)$. The index~$n$ counts the number of systems in the smallest cluster. The index $\sigma$ is a permutation that brings a \emph{reference} clustering, defined as that in which the systems belonging to the smallest cluster fall all on the right, into the desired form. To make this labeling unambiguous, $\sigma$ is chosen from a restricted set ${\mathscr S}_n\subset S_N$, where $S_N$ stands for the permutation group of $N$ elements and $e$ denotes its unity element. We will see that the optimal clustering procedure consists in measuring first the value of $n$, and, depending on the outcome, performing a second measurement that identifies $\sigma$ among the relevant permutations with a fixed $n$.
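For instance, for $N=4$ one counts one clustering with $n=0$, four with $n=1$ (the possible positions of the singleton), and three with $n=2$ (the pairings), for a total of $1+4+3=8=2^{N-1}$, in agreement with Fig.~\ref{fig:N4}.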
Thus, unsupervised clustering has been cast as a multi-hypothesis discrimination problem, which can be solved for an arbitrary number of systems $N$ with local dimension $d$. Below, we outline the derivation of our main result: the expression of the maximum average success probability achievable by a quantum clustering protocol. In the limit of large $N$ \blue{and for arbitrary $d$ (not necessarily constant with $N$)},
we show that this probability behaves as\footnote{\blue{The symbol $\sim$ stands for ``asymptotically equivalent to'', as in~\citep{Abramowitz1965}.}}
\begin{equation}\label{ps_asym}
P_{\rm s} \sim \frac{8(d-1)}{\left(2d+N\right) N}\,.
\end{equation}
Naturally, $P_{\rm s}$ goes to zero with $N$, since the total number of clusterings increases exponentially and it becomes much harder to discriminate among them. What may perhaps come as a surprise is that, despite this exponential growth, the scaling of $P_{\rm s}$ is only of order $O(1/N^2)$.\footnote{\blue{It is also interesting to see how far one can improve this result. By letting $d$ scale with $N$, e.g., by substituting $d\sim s N^\gamma$ for some $s>0$, $\gamma>1$ in Eq.~\eqref{ps_asym}, we obtain the absolute maximum
$P_{\rm s} \sim 4/N$.}}
Furthermore, increasing the local dimension yields a linear improvement in the asymptotic success probability. As we will later see, whereas the asymptotic behavior in $N$ is not an exclusive feature of the optimal quantum protocol---we observe the same scaling in its classical counterpart, albeit only when $d=2$---the ability to exploit extra dimensions to enhance distinguishability is.
\begin{figure}[t]
\includegraphics[scale=.35]{fig2_N4examples2.pdf}
\caption{All possible clusterings of $N=4$ systems when each can be in one of two possible states, depicted as blue and red. The pair of indices $(n,\sigma)$ identifies each clustering, where $n$ is the size of the smallest cluster, and~$\sigma$ is a permutation of the {\em reference} clusterings (those on top of each box), wherein the smallest cluster falls on the right. The symbol $e$ denotes the identity permutation, and $(ij)$ the transposition of systems in positions $i$ and $j$. Note that the choice of $\sigma$ is not unique.}\label{fig:N4}
\end{figure}
Let us present an outlined derivation of the optimal quantum clustering protocol. Each input can be described by a string of 0's and 1's ${{\bf x}}=(x_1\cdots x_N)$, so that the global state of the systems entering the device is
$\ket{\Phi_{\bf x}} = \ket{\phi_{x_1}}\otimes\ket{\phi_{x_2}}\otimes\cdots\otimes \ket{\phi_{x_N}}$.
The clustering device can generically be defined by a positive operator valued measure (POVM) with elements $\{E_{\bf x}\}$, fulfilling $E_{\bf x}\ge 0$ and $\sum_{\bf x} E_{\bf x} = \openone$, where each operator~$E_{\bf x}$ is associated to the statement ``the measured global state corresponds to the string ${\bf x}$''. We want to find a POVM that maximizes the average success probability $P_{\rm s}=2^{1-N}\int d\phi_0 d\phi_1 \sum_{\bf x} {\rm tr}\, (\ketbrad{\Phi_{\bf x}} E_{\bf x})$, where we assumed that each clustering is equally likely at the input, and we are averaging over all possible pairs of states $\{\ket{\phi_{0}},\ket{\phi_{1}}\}$ and strings ${\bf x}$. Since our goal is to design a universal clustering protocol, the operators $E_{\bf x}$ cannot depend on $\ket{\phi_{0,1}}$, and we can take the integral inside the trace. The clustering problem can then be regarded as the optimization of a POVM that distinguishes between effective density operators of the form
\begin{equation}\label{rhox}
\rho_{{\bf x}}= \int d\phi_0 \, d\phi_1 \ketbrad{\Phi_{\bf x}} \,.
\end{equation}
It now becomes apparent that $\rho_{\bf x}=\rho_{\bar {\bf x}}$, where $\bar {\bf x}$ is the complementary string of ${\bf x}$ (i.e., the values 0 and 1 are exchanged).
The key that reveals the structure of the problem and allows us to deduce the optimal clustering protocol resides in computing the integral in Eq.~\eqref{rhox}. Averaging over the states leaves only the information relevant to identify a clustering, that is, $n$ and $\sigma$. Certainly, identifying ${\bf x}\equiv(n,\sigma)$, we can rewrite $\rho_{\bf x}$ as
\begin{align}\label{rho_ns}
\rho_{n,\sigma} &= c_n \, U_\sigma \, (\openone^{\rm sym}_n\otimes\openone^{\rm sym}_{N-n} )\,U_\sigma^\dagger \nonumber\\
&= c_n \bigoplus_\lambda \openone_{(\lambda)} \otimes \Omega_{\{\lambda\}}^{n,\sigma}\,.
\end{align}
By applying Schur lemma, one readily obtains the first line, where $\openone^{\rm sym}_k$ is a projector onto the completely symmetric subspace of $k$ systems,
$c_n$ is a normalization factor,
and $U_\sigma$ is a unitary matrix representation of $\sigma$. The second line follows from using the Schur basis (see Section~\ref{app:optimality}), in which the states $\rho_{n,\sigma}$ are block-diagonal.
Here $\lambda$ labels the irreducible representations---irreps for short---of the joint action of the groups ${\rm SU}(d)$ and $S_N$ over the vector space $(d,\mathbb C)^{\otimes N}$, and is usually identified with the shape of Young diagrams (or partitions of~$N$).
A pair of parentheses, $()$ [brackets, $\{\}$], surrounding the subscript $\lambda$, e.g., in Eq.~\eqref{rho_ns}, are used when
$\lambda$ refers exclusively to irreps of ${\rm SU}(d)$ [$S_N$];
we stick to this convention throughout the paper.
Note that averaging over all ${\rm SU}(d)$ transformations erases the information contained in the representation subspace~$(\lambda)$.
It also follows from Eq.~\eqref{rho_ns} and the rules of the Clebsch-Gordan decomposition that (i)~only two-row Young diagrams (partitions of length two) show up in the direct sum above, and (ii)~the operators $\Omega_{\{\lambda\}}^{n,\sigma}$ are rank-1 projectors (see Appendix~\ref{app:irreps}). They carry all the information relevant for the clustering, and are understood to be zero for irreps $\lambda$ outside the support of~$\rho_{n,\sigma}$.
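As a simple consistency check of Eq.~\eqref{rho_ns} (a direct computation for $N=2$, using Schur lemma, and writing $\openone^{\rm asym}_2$ for the projector onto the antisymmetric subspace, $d\ge2$):
\begin{equation*}
\rho_{0,e} = \frac{2}{d(d+1)}\,\openone^{\rm sym}_2\,, \qquad \rho_{1,e} = \frac{\openone}{d}\otimes\frac{\openone}{d} = \frac{1}{d^2}\left(\openone^{\rm sym}_2\oplus\openone^{\rm asym}_2\right),
\end{equation*}
so that $\rho_{1,e}$ has support on both irreps $\lambda=(2,0)$ and $\lambda=(1,1)$, whereas $\rho_{0,e}$ is confined to the symmetric one; for $N=2$ both irreps of $S_2$ are one-dimensional, so the projectors $\Omega_{\{\lambda\}}^{n,\sigma}$ are trivially rank 1.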
With Eq.~\eqref{rho_ns} at hand, the optimal clustering protocol can be succinctly described as two successive measurements---we state the result here and present an optimality proof in Section~\ref{app:optimality}. The first measurement is a projection onto the irrep subspaces $\lambda$,
described by the set $\{\openone_{(\lambda)} \otimes \openone_{\{\lambda\}}\}$.
The outcome of this measurement provides an estimate of~$n$, as $\lambda$ is one-to-one related to the size of the clusters. More precisely, we have from~(i) that $\lambda=(\lambda_1,\lambda_2)$, where $\lambda_1$ and $\lambda_2$ are nonnegative integers such that $\lambda_1+\lambda_2=N$ and~$\lambda_1\ge \lambda_2$.
Then, given the outcome $\lambda=(\lambda_1,\lambda_2)$ of this first measurement, the optimal guess turns out to be $n=\lambda_2$.
Very roughly speaking, the ``asymmetry" in the subspace $\lambda=(\lambda_1,\lambda_2)$ increases with $\lambda_2$.
We recall that $\lambda=(N,0)$ is the fully symmetric subspace of $(d,\mathbb{C})^N$. Naturally, $\rho_{0,\sigma}$ has support only in this subspace, as all states in the data are of one type.
As $\lambda_2$ increases from zero, more states of the alternative type are necessary to achieve the increasing asymmetry of $\lambda=(\lambda_1,\lambda_2)$.
Hence, for a given $\lambda_2$, there is a minimum value of $n$ for which~$\rho_{n,\sigma}$ can have support in the subspace $\lambda=(\lambda_1,\lambda_2)$. This minimum $n$ is the optimal guess.
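By way of illustration, for $N=4$ the possible outcomes are $\lambda=(4,0),(3,1),(2,2)$, leading to the respective guesses $n=0,1,2$, which matches the three families of clusterings shown in Fig.~\ref{fig:N4}.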
Once we have obtained a particular $\lambda\!=\!\lambda^*$ as an outcome (and guessed $n$), a second measurement is performed over the subspace $\{\lambda^*\}$ to produce a guess for~${{\bf s}m s}igma$.
Since the states $\rho_{n,\sigma}$ are covariant under~$S_N$, the optimal measurement to guess the permutation~$\sigma$
is also covariant,
and its seed is the rank-1 operator $\Omega_{\{\lambda^*\}}^{n, e}$, where $\lambda^*= (N-n,n)$. Put together, these two successive measurements
yield a joint optimal POVM whose elements take the form
\begin{equation}\label{povm_elements_main}
E_{n,\sigma} = \xi_{\lambda^*}^{n} (\openone_{(\lambda^*)} \otimes \Omega_{\{\lambda^*\}}^{n,\sigma})\,,
\end{equation}
where $(n,\sigma)$ is the guess for the cluster
and $\xi_{\lambda^*}^n$
is some coefficient that guarantees the POVM condition \mbox{$\sum_{n,\sigma} E_{n,\sigma}=\openone$}.
The success probability of the optimal protocol can be computed as~$P_{\rm s}=2^{1-N}\sum_{n,\sigma} {\rm tr}\,(\rho_{n,\sigma} E_{n,\sigma})$ (see Section~\ref{app:optimality}). It reads
\begin{align}
P_{\rm s} &= 2^{1-N} \sum_{i=0}^{\floor{N/2}} \binom{N}{i} \frac{(d-1)(N-1-2i)^2}{(N-1+d-i)(i+1)^2} \,, \label{ps}
\end{align}
from which the asymptotic limit Eq.~\eqref{ps_asym} follows (see Appendix~\ref{app:asymptotics}).
\blue{ Before closing this section we would like to briefly discuss the case when some information about the possible states $\ket{\phi_0}$ and $\ket{\phi_1}$ is available. A clustering device that incorporates this information into its design should succeed with a probability higher than Eq.~\eqref{ps}, at the cost of universality. To explore the extent of this performance enhancement, we study the extreme case where we have full knowledge of the states $\ket{\phi_0}$ and $\ket{\phi_1}$. We find that in the large $N$ limit the maximum improvement is by a factor of $N$. The optimal success probability scales as
\begin{equation}\label{ps_known}
P_{\rm s} \sim \frac{4(d-1)}{N}
\end{equation}
(see Section~\ref{sec:quantumknown} for details).
}
\section{Clustering classical states}\label{sec:classical}
To grasp the significance of our quantum clustering protocol, a comparison with a classical analogue is called for.
First, in the place of a quantum system whose state is either $\ket{\phi_0}$ or $\ket{\phi_1}$,
an input would be an instance of a $d$-dimensional random variable sampled from either one of two categorical
probability distributions, $P=\{p_s\}_{s=1}^d$ and $Q=\{q_s\}_{s=1}^d$.
Then, given a string of samples ${\bf s}=(s_1\cdots s_N)$, $s_i\in\{1,\ldots,d\}$, the clustering task would consist in grouping the data points $s_i$ in two clusters so that all points in a cluster have a common underlying probability distribution.
Second, in analogy with the quantum protocol, our goal would be to find the optimal universal (i.e., independent of $P$ and $Q$) protocol that performs this task. Here, optimality means attaining
the maximum average success probability, where the average is over all $N$-length sequences ${\bf x}$ of distributions $P$ and $Q$ from which the string ${\bf s}$ is sampled, and over all such distributions.
It should be emphasized that this is a very hard classical clustering problem, with absolute minimal assumptions, where there is no metric in the domain of the random variables and, in consequence, no exploitable notion of distance. Therefore, \blue{one should expect} the optimal algorithm to have a rather low performance and to differ significantly from well-known algorithms for classical unsupervised classification problems.
As a further remark, we note that a choice of prior is required to perform the average over $P$ and $Q$. We will assume that the two are uniformly distributed over the simplex on which they are both defined. This reflects our lack of knowledge about the distributions underlying the string of samples ${\bf s}$.
\blue{Under all these specifications, the classical clustering problem we just defined naturally connects with the quantum scenario in Section~\ref{sec:the_task} as follows.
We can interpret
${\bf s}$
as
a string of outcomes obtained upon performing the same projective measurement on each individual quantum state $|\phi_{x_i}\rangle$ of our original problem.
Furthermore, such local measurements can also be interpreted as a decoherence process affecting the pure quantum states at the input, whereby they decay into classical probability distributions over a fixed basis.}
We might think of this as the semiclassical analogue of our original problem, since quantum resources are not fully exploited.
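To make the correspondence explicit (here $\{\ket{s}\}_{s=1}^d$ denotes the fixed local measurement basis), the induced distributions are
\begin{equation*}
p_s = |\langle s|\phi_0\rangle|^2\,, \qquad q_s = |\langle s|\phi_1\rangle|^2\,, \qquad s=1,\ldots,d\,;
\end{equation*}
it is a standard fact that, for Haar-distributed pure states, the vector $(p_1,\ldots,p_d)$ is then uniformly distributed on the probability simplex, consistently with the prior adopted above.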
Let us first lay out the problem in the special case of $d=2$, where the underlying distributions are Bernoulli, and we can write $P=\{p,1-p\}$, $Q=\{q,1-q\}$. Given an $N$-length string of samples~${\bf s}$, our intuition tells us that the best we can do is to assign the same underlying probability distribution to equal values in~${\bf s}$. So if, e.g., ${\bf s}=(00101\cdots)$, we will guess that the underlying sequence of distributions is
$\hat{\bf x}=(PPQPQ\cdots)$ [or, equivalently, the complementary sequence $\hat{\bf x}=(QQPQP\cdots)$]. Thus, data points will be clustered according to their value 0 or 1. The optimality of this guessing rule is a particular case of the result for $d$-dimensional random variables in Appendix~\ref{app:classic}.
The probability that a string of samples ${\bf s}$, with $l$ zeros and $N-l$ ones, arises from the guessed sequence $\hat{\bf x}$ is given by
\begin{equation}
{\rm Pr}({\bf s}|{\bf x}=\hat{\bf x}) = \int_0^1 dp \int_0^1 dq\, p^l q^{N-l} = \frac{1}{(l+1)(N-l+1)} \,.
\end{equation}
The average success probability can then be readily computed as
$P_{\rm s}^{\rm cl}=2 \sum_{{\bf x},{\bf s}} \delta_{{\bf x},\hat{\bf x}}\,{\rm Pr}({\bf x})\,{\rm Pr}({\bf s}|{\bf x})$ (recall that $\hat {\bf x}$ depends on ${\bf s}$), where ${\rm Pr}({\bf x})=2^{-N}$ is the prior probability of the sequence ${\bf x}$, which we assume to be uniform. The factor $2$ takes into account that guessing the complementary sequence leads to the same clustering. It is now quite straightforward to derive the asymptotic expression of $P_{\rm s}^{\rm cl}$ for large $N$. In this limit~${\bf x}$ will typically have the same number of $P$ and $Q$ distributions, so the guess $\hat{\bf x}$ will be right if $l=N/2$.
Then,
\begin{equation}
P_{\rm s}^{\rm cl} \sim 2 \frac{1}{(N/2+1)^2} \sim \frac{8}{N^2} \,.
\end{equation}
This expression coincides with the quantum asymptotic result in Eq.~\eqref{ps_asym} for $d=2$. As we now see, this is however a particularity of Bernoulli
distributions.
The derivation for $d>2$ is more involved, since the optimal guessing rule is not so obvious (see Appendix~\ref{app:classic} for details). Loosely speaking, we should still assign samples with the same value to the same cluster. By doing so, we obtain up to $d$ preliminary clusters. We next merge them into two clusters in such a way that their final sizes are as balanced as possible. This last step, known as the {\em partition problem}~\cite{Korf1998}, is weakly NP-complete. Namely,
\blue{its complexity is polynomial in the magnitudes of the data involved (the size of the preliminary clusters, which depends on $N$) but non-polynomial in the input size (the number of such clusters, determined by $d$).}
This means that the classical and semiclassical protocols cannot be implemented efficiently for arbitrary~$d$. In the asymptotic limit of large $N$, and for arbitrary fixed values of $d$, we obtain
\begin{equation}\label{ps_asym_cl}
P_{\rm s}^{\rm cl} \sim \left(\frac{2}{N}\right)^d \frac{(2d-2)!}{(d-2)!} \,.
\end{equation}
There is a huge difference between this result and Eq.~\eqref{ps_asym}. Whereas increasing the local dimension provides an asymptotic linear advantage in the optimal quantum clustering protocol---states become more orthogonal---it has the opposite effect in its classical and semiclassical analogues, as it reduces exponentially the success probability.
In the opposite regime, i.e., for $d$ asymptotically large and fixed values of~$N$, the optimal classical and semiclassical strategies provide no improvement over random guessing, and the clustering tasks become exceedingly hard and somewhat uninteresting. This follows from observing that the guessing rule relies on grouping repeated data values. In this regime, the typical string of samples~${\bf s}$ has no repeated elements, thus we are left with no alternative but to randomly guess the right clustering of the data and $P_{\rm s}^{\rm cl} \sim 2^{1-N}$.
\blue{
To complete the picture, we end this section by considering known classical probability distributions. Akin to the quantum case, one would expect an increase in the success probability of clustering. An immediate consequence of knowing the distributions $P$ and $Q$ is that the rule for assigning a clustering given a string of samples~${\bf s}$ becomes trivial. Each symbol $s_i\in\{1,\ldots,d\}$ will be assigned to the most likely distribution, that is, to $P$ ($Q$) if $p_{s_i} > q_{s_i}$ ($p_{s_i} < q_{s_i}$). It is clear that knowing $P$ and~$Q$ helps to better classify the data.
This becomes apparent by considering the example of two three-dimensional distributions and the data string ${\bf s}=(112)$. If the distributions are unknown, such a sequence leads to the guess $\hat{\bf x}=(PPQ)$ [or equivalently to $\hat{\bf x}=(QQP)$]. In contrast, if $P$ and $Q$ are known and, e.g., $p_1>q_1$ and $p_2>q_2$, the same sequence leads to the better guess $\hat{\bf x}=(PPP)$. The advantage of knowing the distribution, however, vanishes in the large $N$ limit, and
the asymptotic performance of the optimal clustering algorithm is shown to be given by Eq.~\eqref{ps_asym_cl}. The interested reader can find the details of the proof in Appendix~\ref{app:classic_known}.
}
\blue{
\section{Methods}\label{sec:methods}
Here we give the full proof of optimality of our quantum clustering protocol/device, which leads to our main result in Eq.~\eqref{ps_asym}. The proof relies on representation theory of the special unitary and the symmetric groups. In particular, the Schur-Weyl duality is used to efficiently represent the structure of the input quantum data and the action of the device. We then leverage this structure to find the optimal POVM and compute the minimum cost. Basic notions of representation theory that we use in the proof are covered in the Appendices~\ref{app:partitions} and \ref{app:irreps}.
We close the Methods section proving Eq.~\eqref{ps_known} for the optimal success probability of clustering known quantum states.
\subsection{Clustering quantum states: unknown input states}\label{app:optimality}
In this
Section
we obtain the optimal POVM for quantum clustering and compute the minimum cost. First, we present a formal optimality proof for an arbitrary cost function $f({\bf x},{\bf x}')$, which specifies the penalty for guessing ${\bf x}$ if the input is ${\bf x}'$. Second, we particularize to the case of success probability, as discussed in the main text, for which explicit expressions are obtained.
\subsubsection{Generic cost functions}
We say a POVM is optimal if it minimizes the average cost
\begin{equation}\label{app:av_cost}
{\bar f} = \int d\phi_0 \,d\phi_1 \sum_{{\bf x},\hat{\bf x}} \eta_{{\bf x}}
\, f({\bf x},\hat{\bf x}) \, {\rm Pr}(\hat{\bf x}|{\bf x})
\,,
\end{equation}
where $\eta_{{\bf x}}$ is the prior probability of input string ${\bf x}$, and ${\rm Pr}(\hat{\bf x}|{\bf x})={\rm tr}\, (\ketbrad{\Phi_{{\bf x}}} E_{\hat{\bf x}}) $ is the probability of obtaining measurement outcome (and guess) $\hat{\bf x}$ given input ${\bf x}$; recall that $\ket{\Phi_{\bf x}} = \ket{\phi_{x_1}}\otimes\ket{\phi_{x_2}}\otimes\cdots\otimes \ket{\phi_{x_N}}$, $x_k=0,1$, and an average is taken over all possible pairs of states $\{\ket{\phi_{0}},\ket{\phi_{1}}\}$, hence ${\bf x}$ and its complementary $\bar{\bf x}$ define the same clustering.
A convenient way to identify the different clusterings is by counting the number $n$, $0\leq n\leq \floor{N/2}$, of zeros in~${\bf x}$ (so, strings with more 0s than 1s are discarded) and giving a unique representative $\sigma$ of the equivalence class of permutations that turn the reference string $(0^n1^{\bar n})$, ${\bar n}=N-n$, into ${\bf x}$. We will denote the subset of these representatives by ${\mathscr S}_n\subset S_N$, and the number of elements in each equivalence class~by~$b_n$. A simple calculation gives us $b_n=2(n!)^2$ if $n=\bar n$, and $b_n=n!\bar n!$ otherwise.
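As a consistency check, for $N=4$ this gives $b_0=0!\,4!=24$, $b_1=1!\,3!=6$ and $b_2=2(2!)^2=8$, i.e., $N!/b_n=1,4,3$ clusterings for $n=0,1,2$ respectively, in agreement with Fig.~\ref{fig:N4}.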
As discussed in the main text, the clustering problem above is equivalent to a multi-hypothesis discrimination problem, where the hypotheses are given by
\begin{align}\label{app:rho_ns}
\rho_{{\bf x}} &= \int d\phi_0 \, d\phi_1 \ketbrad{\Phi_{\bf x}} \nonumber\\
&= c_n \, U_\sigma \, (\openone^{\rm sym}_n\otimes\openone^{\rm sym}_{\bar n} )\,U_\sigma^\dagger \,,
\end{align}
and we have used Schur lemma to compute the integral. Here,~$U_{{\bf s}m s}igma$ is a unitary matrix representation of the permutation~${{\bf s}m s}igma$, $\openone^{{\bf s}m sym}_k$ is a projector onto the completely symmetric subspace of $k$ systems, and $c_n= 1/(D^{{\bf s}m sym}_n D^{{\bf s}m sym}_{\bar n})$, where~$D^{{\bf s}m sym}_k=s_{(k,0)}$ [see Eq.~({\bf s}ef{mult_s})] is the dimension of symmetric subspace of~$k$ qudits.
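For concreteness, recall the standard formula $D^{\rm sym}_k=\binom{k+d-1}{d-1}$ for the dimension of the symmetric subspace of $k$ qudits, which makes $c_n$ straightforward to evaluate; a minimal sketch (our illustration, with function names of our choosing):
\begin{verbatim}
# Illustrative evaluation of c_n (function names are ours).
from math import comb

def dim_sym(k, d):
    # Dimension of the symmetric subspace of k qudits: C(k+d-1, d-1).
    return comb(k + d - 1, d - 1)

def c_coef(n, N, d):
    # Normalization c_n = 1 / (D_sym(n) * D_sym(N - n)).
    return 1.0 / (dim_sym(n, d) * dim_sym(N - n, d))

print(c_coef(2, 6, 3))   # e.g., n = 2, N = 6, d = 3
\end{verbatim}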
The states~\eqref{app:rho_ns} are block-diagonal in the Schur basis, which decouples the commuting actions of the groups ${\rm SU}(d)$ and $S_N$ on product states of the form~$\ket{\Phi_{\bf x}}$. More precisely, Schur-Weyl duality states that the representations of the two groups acting on the common space $(d,\mathbb{C})^{\otimes N}$ are each other's commutant. Moreover, it provides a decomposition of this space into decoupled subspaces associated to irreducible representations (irreps) of both ${\rm SU}(d)$ and $S_N$. We can then express the states~$\rho_{{\bf x}}$, where ${\bf x}$ is specified as $(n,\sigma)$ [${\bf x}=(n,\sigma)$ for short], in the Schur basis as
\begin{equation}\label{app:rho_block}
\rho_{n,\sigma} = c_n \bigoplus_\lambda \openone_{(\lambda)} \otimes \Omega_{\{\lambda\}}^{n,\sigma} \,.
\end{equation}
In this direct sum, $\lambda$ is a label attached to the irreps of the joint action of ${\rm SU}(d)$ and $S_N$, and is usually identified with a partition of~$N$ or, equivalently, a Young diagram. As explained in the main text, a pair of parentheses surrounding this type of label, as in $(\lambda)$, means that it refers specifically to irreps of ${\rm SU}(d)$. Likewise, a pair of brackets, e.g., $\{\lambda\}$, indicates that the label refers to irreps of~$S_N$. In accordance with this convention, Schur-Weyl duality implies that $\Omega_{\{\lambda\}}^{n,\sigma}=U_{\sigma}^\lambda \,\Omega_{\{\lambda\}}^{n,e}\, (U_{\sigma}^\lambda)^\dagger$, where~$U_{\sigma}^\lambda$ is the matrix of the irrep~$\lambda$ that represents $\sigma\in S_N$, and $e$ denotes the identity permutation (for simplicity, we omit the index $e$ when no confusion arises).
In other words, the family of states~$\rho_{n,\sigma}$ is covariant with respect to $S_N$. One can easily check that $\Omega_{\{\lambda\}}^{n,\sigma}$ is always a rank-1 projector (see Appendix~\ref{app:irreps}). In Eq.~\eqref{app:rho_block} it is understood that $\Omega_{\{\lambda\}}^{n,\sigma}=0$ outside the range of~$\rho_{n,\sigma}$.
With no loss of generality, the optimal measurement that discriminates the states $\rho_{n,\sigma}$ can be represented by a POVM whose elements have the form shown in Eq.~\eqref{app:rho_block}. Moreover, we can assume it to be covariant under $S_N$~\cite{Holevo1982}. Such POVM elements can thus be written as
\begin{equation}\label{app:povmelements}
E_{n,\sigma} = \bigoplus_\lambda \openone_{(\lambda)}\otimes U^\lambda_{\sigma}\, \Xi_{\{\lambda\}}^n (U^\lambda_{\sigma})^\dagger \,,
\end{equation}
where $\Xi_{\{\lambda\}}^n$ is a positive operator. The resolution of the identity imposes constraints on these operators; the condition reads
\begin{align}
\begin{split}
\sum_{n,\sigma} E_{n,\sigma} &= \sum_n \frac{1}{b_n} \sum_{\sigma\in S_N} \bigoplus_\lambda \openone_{(\lambda)}\otimes U^\lambda_{\sigma} \Xi_{\{\lambda\}}^n (U^\lambda_{\sigma})^\dagger \\
&= \bigoplus_\lambda \openone_{(\lambda)}\otimes\openone_{\{\lambda\}} \,,
\end{split}
\end{align}
where we have used the factor $b_n$ to extend the sum over~${\mathscr S}_n$ to the entire group $S_N$, and applied Schur's lemma. Taking the trace on both sides of the equation, we find the POVM constraint to be
\begin{equation}\label{povmcond}
\sum_n \frac{N!}{b_n} {\rm tr}\left(\Xi_{\{\lambda\}}^n\right) = \nu_\lambda \,,\quad \forall \lambda \,,
\end{equation}
where $\nu_\lambda$ is the dimension of $\openone_{\{\lambda\}}$ or, equivalently, the multiplicity of the irrep $\lambda$ of ${\rm SU}(d)$ [see Eq.~(\ref{mult_nu})].
So far we have analyzed the structure that the symmetries of the problem impose on the states $\rho_{n,\sigma}$ and on the measurements. We have learned that, for any choice of operators $\Xi^n_{\{\lambda\}}$ that fulfill Eq.~(\ref{povmcond}), the set of operators~(\ref{app:povmelements}) defines a valid POVM, but it need not be optimal. We now proceed to derive optimality conditions for $\Xi^n_{\{\lambda\}}$. These are provided by the Holevo-Yuen-Kennedy-Lax~\cite{Holevo1973a,Yuen1975} necessary and sufficient conditions for minimizing the average cost. For our clustering problem in Eq.~(\ref{app:av_cost}) they read
\begin{align}
\label{Holevo1}
&(W_{{\bf x}}-\Gamma)E_{\bf x}=E_{\bf x}(W_{\bf x}-\Gamma)=0 \,,\\
\label{Holevo2}
&\phantom{(}W_{{\bf x}}-\Gamma \geq 0 \,.
\end{align}
They must hold for all ${\bf x}$, where $\Gamma=\sum_{{\bf x}} W_{\bf x} E_{\bf x}=\sum_{\bf x} E_{\bf x} W_{\bf x}$, and $W_{\bf x} = \sum_{{\bf x}'} f({\bf x},{\bf x}') \eta_{{\bf x}'} \rho_{{\bf x}'}$.
We will assume that the prior distribution $\eta_{{\bf x}}$ is flat and that the cost function is nonnegative and covariant with respect to the permutation group, i.e., $f({\bf x},{\bf x}')=f(\tau{\bf x},\tau{\bf x}')$ for all $\tau\in S_N$. Then, $W_{\tau{\bf x}}=U_\tau W_{{\bf x}} U_\tau^\dagger$, and we only need to ensure that conditions \eqref{Holevo1} and \eqref{Holevo2} are met for reference strings, i.e., for ${\bf x}=(n,e)$.
In the Schur basis, the corresponding operators, which we simply call $W_n$, and the matrix $\Gamma$ take the form
\begin{align}
W_n &= \bigoplus_\lambda \openone_{(\lambda)} \otimes \omega^n_{\{\lambda\}} \,,\label{W_block}\\
\Gamma &= \bigoplus_\lambda k_\lambda \openone_{(\lambda)}\otimes\openone_{\{\lambda\}}\,,\label{G_block}
\end{align}
where we have used Schur's lemma to obtain Eq.~(\ref{G_block}) and defined $k_\lambda \equiv \sum_n N!\, {\rm tr}\left(\omega^n_{\{\lambda\}} \,\Xi^n_{\{\lambda\}}\right)/(b_n\nu_\lambda)$.
Note that $\Gamma$ is a diagonal matrix, in spite of the fact that the $\omega_{\{\lambda\}}^n$ are, at this point, arbitrary full-rank positive operators.
With Eqs.~\eqref{W_block} and \eqref{G_block}, the optimality conditions~\eqref{Holevo1} and \eqref{Holevo2} can be made explicit. First, we note that the subspace $(\lambda)$ is irrelevant in this calculation, and that there will be an independent condition for each irrep $\lambda$. Taking these considerations into account, Eq.~\eqref{Holevo1} now reads
\begin{align}
\omega^n_{\{\lambda\}}\Xi^n_{\{\lambda\}} &=\Xi^n_{\{\lambda\}}\omega^n_{\{\lambda\}} =k_\lambda \Xi^n_{\{\lambda\}} \,,\quad \forall n,\lambda \,.\label{Holevo1b}
\end{align}
This equation tells us two things: (i) since the matrices~$\omega^n_{\{\lambda\}}$ and~$\Xi^n_{\{\lambda\}}$ commute, they have a common eigenbasis, and (ii)~Eq.~\eqref{Holevo1b} is a set of eigenvalue equations for $\omega^n_{\{\lambda\}}$ with a common eigenvalue $k_\lambda$, one equation for each eigenvector of $\Xi^n_{\{\lambda\}}$. Therefore, the support of $\Xi^n_{\{\lambda\}}$ is necessarily restricted to a single eigenspace of $\omega^n_{\{\lambda\}}$. Denoting by $\vartheta_{\lambda,a}^n$, $a=1,2,\dots$, the eigenvalues of $\omega_{\{\lambda\}}^n$ sorted in increasing order, we have $k_\lambda=\vartheta_{\lambda,a}^n$ for some $a$, which may depend on $\lambda$ and $n$, or else $\Xi^n_{\{\lambda\}}=0$.
The second Holevo condition~\eqref{Holevo2}, under the same considerations regarding the block-diagonal structure, leads to
\begin{equation}\label{Holevo2b}
\omega_{\{\lambda\}}^n \geq k_\lambda \openone_{\{\lambda\}} \,,\quad \forall n,\lambda \,.
\end{equation}
This condition induces further structure in the POVM.
Given $\lambda$, Eq.~\eqref{Holevo2b} has to hold for {\em every} value of~$n$. In particular, we must have $\min_{n'} \vartheta_{\lambda,1}^{n'}\ge k_\lambda$.
Therefore, $\min_{n'} \vartheta_{\lambda,1}^{n'}\ge\vartheta^n_{\lambda,a}$ for some $a$, or else $\Xi^n_{\{\lambda\}}=0$. Since~$\Xi^n_{\{\lambda\}}$ cannot vanish for all $n$ because of Eq.~(\ref{povmcond}), we readily see that
\begin{equation}\label{povm}
k_\lambda=\vartheta^{n(\lambda)}_{\lambda,1}, \quad
\Xi_{\{\lambda\}}^n =
\begin{cases}
\xi_\lambda^n \Pi_{1}(\omega_{\{\lambda\}}^n) & {\rm if}\ n=n(\lambda), \\
0 & {\rm otherwise},
\end{cases}
\end{equation}
where $n(\lambda)={\rm argmin}_n\, \vartheta_{\lambda,1}^n$, $\Pi_{1}(\omega_{\{\lambda\}}^n)$ is a projector onto the eigenspace of $\omega_{\{\lambda\}}^n$ corresponding to the minimum eigenvalue $\vartheta_{\lambda,1}^n$ (not necessarily onto the whole eigenspace), and $\xi^n_{\lambda}$ is a suitable coefficient that can be read off from~Eq.~\eqref{povmcond}:
\begin{equation}\label{povmcoef}
\xi_\lambda^{n} = \frac{\nu_\lambda b_n}{D_\lambda^{n} N!} \,,
\end{equation}
where $D_\lambda^n = \dim[\Pi_{1}(\omega_{\{\lambda\}}^n)]$. This completes the construction of the optimal POVM.
For a generic cost function, we can now write down a closed, implicit formula for the minimum average cost achievable by any quantum clustering protocol. It reads
\begin{equation}\label{opt_av_cost}
\bar f = {\rm tr}\, \Gamma = \sum_\lambda s_\lambda \,\nu_\lambda\, \vartheta_{\lambda,1}^{n(\lambda)}\,,
\end{equation}
where $s_\lambda$ is the dimension of $\openone_{(\lambda)}$ or, equivalently, the multiplicity of the irrep $\lambda$ of $S_N$ [see Eq.~(\ref{mult_s})]. The only object that remains to be specified is the function~$n(\lambda)$, which ultimately depends on the choice of the cost function $f({\bf x},{\bf x}')$.
\subsubsection{Success probability}
We now make Eq.~\eqref{opt_av_cost} explicit by considering the success probability $P_{\rm s}$ as a figure of merit; that is, we choose $f({\bf x},{\bf x}')=1-\delta_{{\bf x},{\bf x}'}$, hence $P_{\rm s}=1-\bar f$. We also assume that the source that produces the input sequence is equally likely to prepare either state, so each string~${\bf x}$ has the same prior probability, $\eta_{{\bf x}} = 2^{1-N} \equiv\eta$. In this case,~$W_n$ takes the simple form
\begin{equation}\label{Wme}
W_n =
\bigoplus_\lambda \openone_{(\lambda)} \otimes \left(\mu_\lambda\openone_{\{\lambda\}} -\eta c_n \Omega_{\{\lambda\}}^n\right) \,,
\end{equation}
where the $\mu_\lambda$ are positive coefficients and we recall that the expression in parentheses corresponds to $\omega_{\{\lambda\}}^n$ in Eq.~\eqref{W_block}. From this expression one can easily derive the explicit forms of $\vartheta_{\lambda,1}^{n}$ and $n(\lambda)$. We just need to consider the maximum eigenvalue of the rank-one projector~$\Omega^n_{\{\lambda\}}$, which is either one or zero depending on whether or not the input state $\rho_{n,\sigma}$ has support on the irrep $\lambda$ space. So, among the values of $n$ for which $\rho_{n,\sigma}$ does have support there, $n(\lambda)$ is one that maximizes $c_n$. Since $c_n$ is a decreasing function of $n$ in its allowed range (recall that $n\le\floor{N/2}$), $n(\lambda)$ is the smallest such value.
For the problem at hand, the irreps in the direct sum can be labeled by Young diagrams of at most two rows or, equivalently, by partitions of $N$ of length at most two (see Appendix~\ref{app:irreps}); hence $\lambda=(\lambda_1,\lambda_2)$, where $\lambda_1+\lambda_2=N$ and $\lambda_2$ runs from $0$ to $\floor{N/2}$.
Given~$\lambda$, only states~$\rho_n$ with $n=\lambda_2,\ldots,\floor{N/2}$ have support on the irrep $\lambda$ space, as readily follows from the Clebsch-Gordan decomposition rules.
Then,
\begin{equation}\label{nl}
n(\lambda) = \lambda_2\,,\quad
\vartheta^{n(\lambda)}_{\lambda,1}=\mu_\lambda-\eta c_{n(\lambda)} \,.
\end{equation}
Eq.~\eqref{nl} gives the optimal guess for the size, $n$, of the smallest cluster. The rule agrees with our intuition. The irrep $(N,0)$, i.e., $\lambda_2=0$, corresponding to the fully symmetric subspace, is naturally associated with the value $n=0$, i.e., with all $N$ systems being in the same state/cluster; the irrep with one antisymmetrized index has $\lambda_2=1$ and hints at one system being in a different state than the others, i.e., at a cluster of size one; and so on.
We now have all the ingredients to compute the optimal success probability from~Eq.~(\ref{opt_av_cost}). It reads
\begin{align}
P_{\rm s}
&= \eta \sum_\lambda c_{n(\lambda)} s_\lambda \nu_\lambda \nonumber\\
&= {1\over2^{N-1}}\sum_{i=0}^{\floor{N/2}} \binom{N}{i} \frac{(d-1)(N-2i+1)^2}{(d+i-1)(N-i+1)^2} \,, \label{app:ps}
\end{align}
where we have used the relation $\sum_\lambda s_\lambda \nu_\lambda \mu_\lambda=1$, which follows from ${\rm tr}\, \sum_{\bf x} \eta_{\bf x} \rho_{\bf x}=1$, and the expressions for $\nu_\lambda$ and~$s_\lambda$ in Eqs.~\eqref{mult_nu} and \eqref{mult_s} of Appendix~\ref{app:irreps}.
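Eq.~\eqref{app:ps} is straightforward to evaluate numerically; the following short Python sketch (our illustration; it simply transcribes the sum) may be useful for reference:
\begin{verbatim}
# Illustrative evaluation of Eq. (app:ps) (not part of the proof).
from math import comb

def p_success(N, d):
    # Direct transcription of the closed-form sum for P_s.
    total = sum(comb(N, i) * (d - 1) * (N - 2*i + 1)**2
                / ((d + i - 1) * (N - i + 1)**2)
                for i in range(N // 2 + 1))
    return total / 2**(N - 1)

for N in (4, 8, 16):
    print(N, p_success(N, d=3))
\end{verbatim}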
\subsection{Clustering quantum states: known input states}\label{sec:quantumknown}
If the two possible states $\ket{\phi_0}$ and $\ket{\phi_1}$ are known, the optimal clustering protocol must use this information. It is then expected that the average performance will be much higher than for the universal protocol. It is natural in this context not to identify a given string~${\bf x}$ with its complement $\bar{\bf x}$ (we stick to the notation in the main text), since mistaking one state for the other should clearly count as an error if the two preparations are specified. In this case, then, clustering is equivalent to discriminating the $2^N$ known pure states $\ket{\Phi_{\bf x}}=\ket{\phi_{x_1}}\otimes\ket{\phi_{x_2}}\otimes\cdots\otimes\ket{\phi_{x_N}}$ (hypotheses), where with no loss of generality we can write
\begin{equation}
\ket{\phi_{0/1}}=\sqrt{\frac{1+c}{2}}\,\ket{0}\pm \sqrt{\frac{1-c}{2}}\,\ket{1}
\label{no loss}
\end{equation}
for a convenient choice of basis. Here $c=|\langle\phi_0|\phi_1\rangle|$ is the overlap of the two states.
The Gram matrix $G$, defined by its elements $G_{{\bf x},{\bf x}'}=\braket{\Phi_{\bf x}}{\Phi_{{\bf x}'}}$, encapsulates all the information needed to discriminate the states of the set. It is known that when the diagonal elements of its square root are all equal, i.e., $\big(\sqrt{G}\,\big)_{{\bf x},{\bf x}}\equiv S$ for all ${\bf x}$, the square root measurement is optimal~\cite{DallaPozza2015,Sentis2016} and the probability of successful identification reads simply $P_{\rm s}=S^2$. Note that we have implicitly assumed uniformly distributed hypotheses.
For the case at hand,
\begin{align}
G_{{\bf x},{\bf x}'}&=(\langle\phi_{x_1}|\otimes\cdots\otimes\langle\phi_{x_N}|)(|\phi_{x'_1}\rangle\otimes\cdots\otimes|\phi_{x'_N}\rangle)\nonumber\\
&=\prod_{i=1}^N\langle\phi_{x_i}|\phi_{x'_i}\rangle
=\left({\mathscr G}^{\otimes N}\right)_{{\bf x},{\bf x}'},
\end{align}
where
\begin{equation}
{\mathscr G}=\begin{pmatrix} 1 & c \\ c & 1 \end{pmatrix}
\end{equation}
is the Gram matrix of $\{|\phi_0\rangle,|\phi_1\rangle\}$.
Thus, $\sqrt G=(\sqrt{\mathscr G}\,)^{\otimes N}$, with
\begin{equation}
\sqrt{{\mathscr G}}=\begin{pmatrix} \displaystyle \frac{\sqrt{1+c}+\sqrt{1-c}}{2} & \displaystyle \frac{\sqrt{1+c}-\sqrt{1-c}}{2} \\[.8em]
\displaystyle \frac{\sqrt{1+c}-\sqrt{1-c}}{2} & \displaystyle\frac{\sqrt{1+c}+\sqrt{1-c}}{2} \end{pmatrix}.
\end{equation}
As expected, the diagonal terms of $\sqrt{G}$ are all equal, and the success probability is given by
\begin{equation}\label{psnc}
P_{\rm s}(c)=\left({\sqrt{1+c}+\sqrt{1-c}\over2}\right)^{\!2N}=\left({1+\sqrt{1-c^2}\over2}\right)^{\!N}.
\end{equation}
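Eq.~\eqref{psnc} can be checked numerically by constructing the Gram matrix explicitly and taking its square root; a minimal sketch (our illustration, assuming numpy and scipy are available):
\begin{verbatim}
# Illustrative numerical check of Eq. (psnc).
import numpy as np
from scipy.linalg import sqrtm

N, c = 4, 0.6
G1 = np.array([[1.0, c], [c, 1.0]])   # single-copy Gram matrix
G = G1
for _ in range(N - 1):                # G = G1^{tensor N}
    G = np.kron(G, G1)
S = np.real(sqrtm(G))
diag = np.diag(S)
assert np.allclose(diag, diag[0])     # equal diagonal: SRM is optimal
print(diag[0]**2, ((1 + np.sqrt(1 - c**2)) / 2)**N)  # both match Eq. (psnc)
\end{verbatim}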
We call the reader's attention to the fact that one could have attained the very same success probability by performing an individual Helstrom measurement~\cite{Helstrom1976}, with basis
\begin{equation}
\ket{\psi_{0/1}}=\frac{\ket{0}\pm \ket{1}}{\sqrt{2}},
\label{Helstrom basis}
\end{equation}
on each state of the input sequence, and guessing that the label of that state was the outcome value. Indeed, each local measurement identifies the state correctly with the Helstrom probability $\big(1+\sqrt{1-c^2}\big)/2$, and independence across the $N$ sites reproduces Eq.~\eqref{psnc}.
In other words, for the problem at hand, global quantum measurements do not provide any improvement over individual fixed measurements.
In order to compare with the results of the main text, we compute the average performance for a uniform distribution of states $\ket{\phi_0}$ and $\ket{\phi_1}$, i.e., the average
\begin{align}
P_{\rm s} &=\int d\phi_0\, d\phi_1\, P_{\rm s}(c)\nonumber\\
&=\int_0^1 dc^2\, P_{\rm s}(c)\int d\phi_0\, d\phi_1\,\delta\!\left(|\langle\phi_0|\phi_1\rangle|^2-c^2\right)\nonumber\\
&=\int_0^1 dc^2\, P_{\rm s}(c)\int d\phi_1\,\delta\!\left(|\langle 0|\phi_1\rangle|^2-c^2\right)\nonumber\\
&=\int_0^1 dc^2\, \mu(c^2)\,P_{\rm s}(c),
\end{align}
where we have inserted the identity $1=\int_0^1dc^2\, \delta(a^2-c^2)$, valid for $0<a\equiv |\langle\phi_0|\phi_1\rangle| < 1$, and used the invariance of the measure~$d\phi$ under ${\rm SU}(d)$ transformations.
The marginal distribution is $\mu(c^2)=(d-1) (1-c^2)^{d-2}$ (see Appendix~\ref{app:prior}).
Using this result, the asymptotic behavior of the last integral is
\begin{equation}\label{eq:meanqudits}
P_{\rm s}\sim \frac{4(d-1)}{N}\,.
\end{equation}
As expected, knowing the two possible states in the input string leads to a better scaling of the success probability: it decays only linearly in $1/N$, whereas the best universal quantum clustering protocol exhibits a quadratic decay.
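The scaling in Eq.~\eqref{eq:meanqudits} is easy to confirm by numerically integrating the average above; a minimal sketch (our illustration):
\begin{verbatim}
# Illustrative numerical check of P_s ~ 4(d-1)/N.
import numpy as np
from scipy.integrate import quad

def avg_ps(N, d):
    # Integrand in the variable u = c^2: mu(u) * P_s(c).
    f = lambda u: (d - 1) * (1 - u)**(d - 2) \
                  * ((1 + np.sqrt(1 - u)) / 2)**N
    val, _ = quad(f, 0.0, 1.0)
    return val

for N in (50, 200, 800):
    print(N, avg_ps(N, d=3), 4 * (3 - 1) / N)  # compare with 4(d-1)/N
\end{verbatim}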
To make a fairer comparison with universal quantum clustering, guessing the complementary string $\bar{\bf x}$ instead of ${\bf x}$ will now be counted as a success; that is, the clusterings are now defined by the states
\begin{equation}
\rho_{\bf x}=\frac{\ketbra{\Phi_{\bf x}}{\Phi_{\bf x}}+
\ketbra{\Phi_{\bar{{\bf x}}}}{\Phi_{\bar{{\bf x}}}}}{2}.
\end{equation}
For this variation of the problem, the optimal measurement is still local, and is given by a POVM with elements
\begin{equation}
E_{\bf x}=\ketbra{\Psi_{\bf x}}{\Psi_{\bf x}}+\ketbra{\Psi_{\bar{{\bf x}}}}{\Psi_{\bar{{\bf x}}}},
\end{equation}
where $\ket{\Psi_{\bf x}}=\ket{\psi_{x_1}}\otimes\ket{\psi_{x_2}}\otimes\cdots\otimes\ket{\psi_{x_N}}$, and where we recall that $\{\ket{\psi_{0}},\ket{\psi_{1}}\}$ is the (local) Helstrom measurement basis in Eq.~(\ref{Helstrom basis}). Note that the $\{E_{\bf x}\}$ are orthogonal projectors.
To prove the statement in the last paragraph, we show that the Holevo-Yuen-Kennedy-Lax conditions, Eqs.~(\ref{Holevo1}) and~(\ref{Holevo2}), hold (recall that the Gram matrix technique does not apply to mixed states). For the success probability and assuming equal priors, these conditions take the simpler form
\begin{align}
\sum_{\bf x} E_{\bf x} \rho_{\bf x}&=\sum_{\bf x} \rho_{\bf x} E_{\bf x}\equiv\Gamma,\label{holevo-cond0}\\
\Gamma-\rho_{\bf x}&\geq 0 \quad \forall {\bf x},
\label{holevo-cond}
\end{align}
where we have dropped the irrelevant factor $\eta=2^{1-N}$.
Condition~(\ref{holevo-cond0}) is trivially satisfied. To check that condition~(\ref{holevo-cond}) also holds, we recall the Weyl inequalities for the eigenvalues of Hermitian $n\times n$ matrices $A$, $B$~\cite{Horn2013}:
\begin{equation}
\vartheta_i(A+B)\leq \vartheta_{i+j}(A)+\vartheta_{n-j}(B),
\label{Weyl ineq}
\end{equation}
for $j=0,1,\ldots,n-i$, where the eigenvalues are labeled in increasing order, $\vartheta_1\leq\vartheta_2\leq\cdots \leq\vartheta_n$. We use Eq.~(\ref{Weyl ineq}) to write
\begin{equation}
\vartheta_1(\Gamma)\leq \vartheta_{3}(\Gamma-\rho_{\bf x})+\vartheta_{2^{N}-2}(\rho_{\bf x})
\label{rmx}
\end{equation}
(note that all these operators effectively act on the $2^N$-dimensional subspace spanned by $\{|0\rangle,|1\rangle\}^{\otimes N}$).
As will be proved below, $\Gamma>0$, which implies that $\vartheta_1(\Gamma)>0$. We note that $\rho_{\bf x}$ has rank two, i.e., it has only two strictly positive eigenvalues, so $\vartheta_{2^N-2}(\rho_{\bf x})=0$. Then Eq.~(\ref{rmx}) implies
\begin{equation}
\label{lambda3}
\vartheta_{3}(\Gamma-\rho_{\bf x}) \geq \vartheta_1(\Gamma)>0.
\end{equation}
Finally, notice that $\Gamma-\rho_{\bf x}$ has two null eigenvalues, with eigenvectors $\ket{\Psi_{\bf x}}$ and $\ket{\Psi_{\bar{{\bf x}}}}$. Hence, $\vartheta_1(\Gamma-\rho_{\bf x})=\vartheta_2(\Gamma-\rho_{\bf x})=0$, and it follows from Eq.~\eqref{lambda3} that condition~(\ref{holevo-cond}) must hold.
To show the positivity of $\Gamma$, which was assumed in the previous paragraph, we use Eqs.~(\ref{no loss}) and~(\ref{Helstrom basis}) to write
\begin{align}
\Gamma =\frac{1}{2}\left[ \begin{pmatrix} a_1 & 0 \\ 0 &a_2 \end{pmatrix}^{\!\otimes N}
+\begin{pmatrix} b_1& 0 \\ 0 & b_2 \end{pmatrix}^{\!\otimes N} \right],
\end{align}
where
\begin{align}
a_{1/2}&=\frac{1\pm c+\sqrt{1-c^2} }{2},\nonumber \\
b_{1/2}&=\frac{ 1\pm c - \sqrt{1-c^2} }{2}.
\end{align}
Notice that $a_1>b_1$ and $a_2>|b_2|$. Thus, if $0\leq c< 1$, we have $\vartheta_k(\Gamma) >0$ for $k=1,2,\dots, 2^{N}$.
The special case~$c=1$ is degenerate: Eq.~\eqref{holevo-cond} is trivially saturated, rendering $P_{\rm s}=2^{1-N}$, as it should be.
The maximum success probability can now be computed recalling that $P_{\rm s}(c)=2^{1-N}\, {\rm tr}\,\Gamma$. We obtain
\begin{equation}
P_{\rm s}(c)=\left(\frac{1+\sqrt{1-c^2}}{2}\right)^{\!N}+\left(\frac{1-\sqrt{1-c^2}}{2}\right)^{\!N},
\end{equation}
where the first term corresponds to guessing correctly all the states in the input string, whereas the second one results from guessing the other possible state all along the string. One can easily check that the average over~$c$ of the second term vanishes exponentially for large $N$, so we end up with a success probability given again by~Eq.~\eqref{eq:meanqudits}.
Finally, we would like to mention that one could consider a simple unambiguous protocol~\cite{Ivanovic1987,Dieks1988,Peres1988,Chefles1998a} whereby each state of the input string would be identified with no error with probability $P_{\rm s}(c)=1-c$, i.e., the protocol would give an inconclusive answer with probability $1-P_{\rm s}=c$. The average unambiguous probability of sorting the data would then be
\begin{equation}
P_{\rm s}=2 \int_0^1 dc\,c\, \mu(c^2)(1-c)^N \sim \frac{2(d-1)}{N^2}\,.
\end{equation}
}
\section{Discussion}\label{sec:discussion}
Unsupervised learning, which assumes virtually nothing about the distributions underlying the data, is already a hard problem~\cite{Aloise2009,Ben-David2015}. Lifting the notion of classical data to quantum data (i.e., states) factors in additional obstacles, such as the impossibility of repeatedly operating on the quantum data without degrading it.
\blue{Most prominent classical clustering algorithms rely heavily on the iterative evaluation of a function on the input data (e.g., pairwise distances between points in a feature vector space, as in $k$-means~\cite{Lloyd1982}); hence they are not equipped to deal with degrading data and would be expected to fail in our scenario.
The unsupervised quantum classification algorithm we present is thus, by necessity, far removed from its classical analogues.
In particular, since we are concerned with the optimal quantum strategy, we need to consider the most general collective measurement, which is inherently single-shot:}
it yields a single sample of a stochastic action, namely, a posterior state and an outcome of a quantum measurement, where the latter provides the description of the clustering.
The main lesson stemming from our investigation is that, despite these limitations, clustering unknown quantum states is a feasible task.
\blue{The optimal protocol that solves it showcases some interesting features.}
\blue{
{\em It does not completely erase the information about a given preparation of the input data after clustering.}}
This is apparent from Eq.~\eqref{povm_elements_main}, since the action of the POVM on the subspaces~$(\lambda)$ is the identity.
\blue{After the input data string in the global state $\ket{\Phi_{\bf x}}$ is measured and outcome $\lambda^*$ is obtained (recall that $\lambda^*$ gives us information about the size of the clusters), information relative to the particular states $\ket{\phi_{0/1}}$ remains in the subspace~$(\lambda^*)$ of the global post-measurement state.
Therefore, one could potentially use the posterior (clustered) states further down the line as approximations of the two classes of states. This opens the door for our clustering device to be used as an intermediate processor in a quantum network.}
This notwithstanding, the amount of information that can be retrieved after optimal clustering is currently under investigation.
{\em It outperforms the classical and semiclassical protocols.} If the local dimension of the quantum data is larger than two, the dimensionality of the symmetric subspaces spanned by the global states of the strings of data can be exploited by means of collective measurements, with a twofold effect: enhanced distinguishability of states, resulting in improved clustering performance (exemplified by a linear increase in the asymptotic success probability), and information-preserving data handling (to some extent, as discussed above). This should be contrasted with the semiclassical protocol, which essentially obliterates the information content of the data (as a von Neumann measurement is performed on each system), and whose success probability vanishes exponentially with the local dimension. In addition, the optimal classical and semiclassical protocols require solving an NP-complete problem, so their implementation is inefficient. In contrast, we observe that the first part of the quantum protocol, which consists in guessing the size of the clusters $n$, runs efficiently on a quantum computer: this step involves a Schur transform, which runs in polynomial time in $N$ and $\log d$~\cite{Harrow2005,Krovi2018}, followed by a projective measurement with no computational cost. The second part, guessing the permutation $\sigma$, requires implementing a group-covariant POVM. The complexity of this step, and hence the overall computational complexity of our protocol, is an open question currently under investigation.
{\em It is optimal for a range of different cost functions.} There are various cost functions that could arguably be better suited to quantum clustering, e.g., the Hamming distance between the guessed and the true clusterings or, likewise, the trace distance or the infidelity between the corresponding effective states~$\rho_{n,\sigma}$ and~$\rho_{n',\sigma'}$. They are, however, hard to deal with analytically. The question arises as to whether our POVM is still optimal for such cost functions. To answer this question, we formulate an optimality condition that can be checked numerically for problems of finite size (see Appendix~\ref{app:generalcosts}). Our numerics show that the POVM remains optimal for all these examples. This is an indication that the optimality of our protocol stems from the structure of the problem, independently of the cost function.
{\em It stands as a landmark in multi-hypothesis state discrimination.}
Analytical solutions to multi-hypothesis state discrimination exist only in a few specific cases~\cite{Barnett2001,Chiribella2004,Chiribella2006a,Krovi2015a,Sentis2016,Sentis2017}. Our set of hypotheses arises from arguably the minimal set of assumptions about a pure-state source: it produces two states at random.
Variants of this problem with much more restrictive assumptions have been considered in Refs.~\cite{Korff2004,Hillery2011,Skotiniotis2018}.
Our clustering protocol departs from other notions of quantum unsupervised machine learning that can be found in the literature~\cite{Aimeur2013,Lloyd2013,Wiebe2014a,Kerenidis2018}. In these references, data coming from a classical problem is encoded in quantum states that are available on demand via a quantum random access memory~\cite{Giovannetti2008}. The goal is to surpass classical performance in the number of required operations. In contrast, we deal with unprocessed quantum data as input, and aim at performing a task that is genuinely quantum. This is a notably harder scenario, where known heuristics for classical algorithms simply cannot work.
\blue{Other extensions of this work currently under investigation are: clustering systems whose states can be of more than two types, where we expect a similar two-step measurement for the optimal protocol; and clustering of quantum processes, where the aim is to classify instances of unknown processes by letting them run on some input test state of our choice (see Ref.~\cite{Skotiniotis2018} for related work on identifying malfunctioning devices).
In this last case, an interesting application arises when considering causal relations as the defining feature of a cluster. A clustering algorithm would then aim to identify, within a set of unknown processes, which ones are causally connected. Identifying causal structures has recently attracted attention among the quantum information community~\cite{Chiribella2019}.}
\acknowledgments
We acknowledge the financial support of the Spanish MINECO, ref.\ FIS2016-80681-P (AEI/FEDER, UE), and Generalitat de Catalunya CIRIT, ref.\ 2017-SGR-1127. GS thanks the Alexander von Humboldt Foundation for support.
EB also thanks the Computer Science Department of the University of Hong Kong for its hospitality during his stay.
\appendix
\section{Partitions}\label{app:partitions}
Partitions play an important role in the representation theory of groups and are central objects in combinatorics. Here, we collect a few definitions and results that are used in the next appendices, particularly in Appendix~\ref{app:irreps}.
A \emph{partition} $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_r,\ldots)$ is a sequence of nonnegative integers in nonincreasing order.
The \emph{length}~of~$\lambda$, denoted $l(\lambda)$, is the number of nonzero elements in~$\lambda$.
We denote by \mbox{$\lambda\vdash N$} a partition $\lambda$ of the integer $N$, where $N=\sum_i \lambda_i$.
A natural way of ordering partitions is the inverse lexicographic order: given two partitions $\lambda$ and $\lambda'$, we write $\lambda > \lambda'$ iff the first nonzero difference $\lambda_i-\lambda'_i$ is positive.
The total number of partitions of an integer $N$ is denoted by~$P_N$~\cite{Flajolet2009}, and the number of partitions such that $l(\lambda)\le r$ by~$P^{(\le r)}_N$. Similarly, the number of partitions of length~$r$ is denoted by~$P^{(r)}_N$. There exists no closed expression for any of these numbers, but there are widely known results (some of them, by Hardy and Ramanujan, very famous~\cite{Andrews1976}) concerning their asymptotic behavior for large $N$. The one we will later use in Appendix~\ref{app:classic} is
\begin{equation}
P^{(\le r)}_N\sim {N^{r-1}\over r!(r-1)!}\,,
\label{partAsym}
\end{equation}
which gives the dominant contribution for large $N$. Note that, from the obvious relation $P^{(r)}_N=P^{(\le r)}_N-P^{(\le r-1)}_N$, it follows that the same asymptotic expression holds for~$P^{(r)}_N$.
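The asymptotics~\eqref{partAsym} can be tested numerically; the sketch below (our illustration) counts partitions into at most $r$ parts by dynamic programming, using the conjugation symmetry between ``at most $r$ parts'' and ``parts of size at most $r$'':
\begin{verbatim}
# Illustrative check of the asymptotics of P^{(<= r)}_N.
from math import factorial

def p_le(N, r):
    # Partitions of N into parts of size <= r; by conjugation this
    # equals the number of partitions of N into at most r parts.
    table = [1] + [0] * N
    for part in range(1, r + 1):
        for m in range(part, N + 1):
            table[m] += table[m - part]
    return table[N]

N, r = 400, 3
exact = p_le(N, r)
approx = N**(r - 1) / (factorial(r) * factorial(r - 1))
print(exact, approx, exact / approx)   # ratio tends to 1 as N grows
\end{verbatim}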
Partitions are conveniently represented by \emph{Young diagrams}. The Young diagram associated to the partition $\lambda \vdash N$ is an arrangement of $N$ empty boxes in $l(\lambda)$ rows, with~$\lambda_i$ boxes in the $i$th row. This association is one-to-one, hence $\lambda$ can be used to label Young diagrams as well. A \emph{Young tableau} of $d$ entries is a Young diagram filled with integers from 1 up to~$d$, one in each box. There are two types of tableaux: a \emph{standard Young tableau} (SYT) of shape $\lambda \vdash N$ is one where $d=N$ and the integers in each row increase from left to right, and from top to bottom in each column (hence each integer appears exactly once); a \emph{semistandard Young tableau} (SSYT) of shape $\lambda \vdash N$ and $d$ entries, $d\geq l(\lambda)$, is one where the integers in each row are nondecreasing from left to right, and increasing from top to bottom in each column.
The number of different SYTs of shape $\lambda \vdash N$ is given by the \emph{hook-length} formula
\begin{equation}\label{mult_nu_general}
\nu_\lambda = \frac{N!}{\prod_{(i,j)\in\lambda} h_{ij}} \,,
\end{equation}
where $(i,j)$ denotes the box located in the $i$th row and the $j$th column of the Young diagram, and $h_{ij}$ is the hook length of the box~$(i,j)$, defined as the number of boxes located beneath or to the right of that box in the Young diagram, counting the box itself.
Likewise, the number of SSYTs of shape $\lambda \vdash N$ and $d$ entries is given by the formula
\begin{equation}\label{mult_s_general}
s_\lambda = \frac{\Delta(\lambda_1+d-1,\lambda_2+d-2,\ldots,\lambda_d)}{\Delta(d-1,d-2,\ldots,0)} \,,
\end{equation}
where $\Delta(x_1,x_2,\dots,x_d)=\prod_{i<j}(x_i-x_j)$.
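Both formulas are easy to implement; the sketch below (our illustration, with function names of our choosing) computes $\nu_\lambda$ and $s_\lambda$ and checks the Schur-Weyl dimension identity $\sum_{\lambda\vdash N,\,l(\lambda)\le d} s_\lambda\nu_\lambda=d^N$, which follows from the decomposition discussed in Appendix~\ref{app:irreps}:
\begin{verbatim}
# Illustrative check of Eqs. (mult_nu_general) and (mult_s_general).
from math import factorial

def partitions_le(N, max_len, largest=None):
    # Yield all partitions of N with at most max_len parts.
    if largest is None:
        largest = N
    if N == 0:
        yield ()
        return
    if max_len == 0:
        return
    for first in range(min(N, largest), 0, -1):
        for rest in partitions_le(N - first, max_len - 1, first):
            yield (first,) + rest

def nu(lam, N):
    # Hook-length formula for the dimension of the S_N irrep.
    prod = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1                           # boxes to the right
            leg = sum(1 for r in lam[i + 1:] if r > j)  # boxes below
            prod *= arm + leg + 1
    return factorial(N) // prod

def s(lam, d):
    # Ratio of Vandermonde determinants for the SU(d) irrep dimension.
    x = [(lam[i] if i < len(lam) else 0) + d - 1 - i for i in range(d)]
    num = den = 1
    for i in range(d):
        for j in range(i + 1, d):
            num *= x[i] - x[j]
            den *= j - i
    return num // den

N, d = 5, 3
assert sum(s(l, d) * nu(l, N) for l in partitions_le(N, d)) == d**N
\end{verbatim}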
\section{Irreducible representations of SU$(d)$ and $S_N$ over $(d,\mathbb{C})^{\otimes N}$}\label{app:irreps}
For the sake of convenience, we recall here some ingredients of representation theory that we use throughout the paper.
The results described below can be found in standard textbooks, for instance, in Refs.~\cite{Sagan2001,Goodman2009}.
\subsection{Some results in representation theory}
Young diagrams or, equivalently, partitions $\lambda$, label the irreducible representations (irreps) of the general linear group GL$(d)$ and of some of its subgroups, e.g., ${\rm SU}(d)$, as well as the irreps of the symmetric group $S_N$. The dimensions of these irreps are given by $s_\lambda$ and $\nu_\lambda$, respectively [Eqs.~(\ref{mult_s_general}) and~(\ref{mult_nu_general})].
Schur-Weyl duality \cite{Goodman2009} establishes a connection between irreps of both groups, as follows. Let us consider the transformations $R^{\otimes N}$ and $U_{{\bf s}m s}igma$ on the $N$-fold tensor product space $(d,\mathbb{C})^{\otimes N}\!$, where $R\in {{\bf s}m SU}(d)$ and $U_{{\bf s}m s}igma$ permutes the $N$ spaces $(d,\mathbb{C})$ of the tensor product according to the permutation ${{\bf s}m s}igma\in S_N$. Both $R^{\otimes N}$ and $U_{{\bf s}m s}igma$ define, respectively, a reducible unitary representation of the groups SU$(d)$ and~$S_N$ on $(d,\mathbb{C})^{\otimes N}\!$. Moreover, they are each other's commutants.
It follows that this reducible representation decomposes into irreps $\lambda$, so that their joint action can be expressed as
\begin{equation}{(\lambdaambda)}bel{schurweyl}
R^{\otimes N}U_{{\bf s}m s}igma= U_{{\bf s}m s}igma R^{\otimes N} = \bigoplus_{\lambda \vdash N} R^\lambda \otimes U^\lambda_{{\bf s}m s}igma \,,
{{\bf s}m e}nd{equation}
where $R^\lambda$ and $U^\lambda_{{\bf s}m s}igma$ are the matrices that represent $R$ and $U_{{\bf s}m s}igma$, respectively, on the irrep~$\lambda$. To resolve any ambiguity that may arise, we write $\lambda$ in parenthesis, ${(\lambdaambda)}$, when it refers to the irreps of SU$(d)$, or in brackets,~${\{\lambdaambda\}}$, when it refers to those of~$S_N$. Eq.~({\bf s}ef{schurweyl}) tells us that the dimension of ${(\lambdaambda)}$, $s_\lambda$, coincides with the {{\bf s}m e}mph{multiplicity} of ${\{\lambdaambda\}}$, and conversely, the dimension of ${\{\lambdaambda\}}$, $\nu_\lambda$, coincides with the multiplicity of ${(\lambdaambda)}$.
This block-diagonal structure provides a decomposition of Hilbert space $\mathcal{H}^{\otimes N}=(d,{\mathbb C})^{\otimes N}$ into subspaces that are invariant under the action of ${{\bf s}m SU}(d)$ and $S_N$, as $\mathcal{H}^{\otimes N}=\bigoplus_\lambda H_\lambda$, and in turn, $H_\lambda = H_{(\lambdaambda)} \otimes H_{\{\lambdaambda\}}$.
The basis in which ${\mathcal H}^{\otimes N}$ has this form is known as {{\bf s}m e}mph{Schur basis}, and the unitary transformation that changes from the computational to the Schur basis is called {{\bf s}m e}mph{Schur transform}.
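At small sizes this dimension/multiplicity bookkeeping can be verified numerically: taking the trace of Eq.~(\ref{schurweyl}) with $R=\openone$ and $\sigma$ the identity gives $d^N=\sum_{\lambda\vdash N,\,l(\lambda)\le d}s_\lambda\nu_\lambda$. A minimal sketch, reusing the functions \texttt{nu} and \texttt{s} from the previous snippet:
\begin{verbatim}
def partitions(N, max_part=None):
    """Generate all partitions of N, as tuples in nonincreasing order."""
    max_part = N if max_part is None else max_part
    if N == 0:
        yield ()
        return
    for first in range(min(N, max_part), 0, -1):
        for rest in partitions(N - first, first):
            yield (first,) + rest

d, N = 3, 5
total = sum(s(lam, d) * nu(lam) for lam in partitions(N) if len(lam) <= d)
assert total == d ** N   # dim of the full space = sum of (irrep dim) x (multiplicity)
\end{verbatim}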
To conclude this Appendix, let us recall the rules for reducing the tensor product of two ${\rm SU}(d)$ representations as a Clebsch-Gordan series of the form
\begin{equation}\label{reduction1}
R^\lambda\otimes R^{\lambda'}=\bigoplus_{\lambda''} R^{\lambda''}\otimes \openone^{\lambda''} \,,\quad \forall R\in{\rm SU}(d)\,,
\end{equation}
where ${\rm dim}(\openone^{\lambda''})$ is the multiplicity of the irrep $\lambda''$.
The same rules also apply to the reduction of the outer product of representations of $S_n$ and~$S_{n'}$ into irreps of~$S_{n''}$, where $n''=n+n'$. In this case one has
\begin{equation}\label{reduction2}
(U^\lambda \otimes U^{\lambda'})_{\sigma}=\bigoplus_{\lambda''} U^{\lambda''}_{\sigma} \otimes \openone^{\lambda''}, \quad \forall \sigma\in S_{n''}.
\end{equation}
Note the different meanings of $\otimes$ in the last two equations (it is, however, standard notation). The rules are most easily stated in terms of the Young diagrams that label the irreps. They are as follows:
\begin{enumerate}
\item In one of the diagrams that label the irreps on the left-hand side of Eq.~(\ref{reduction1}) or Eq.~(\ref{reduction2}) (preferably the smallest), write the symbol $a$ in all boxes of the first row, the symbol $b$ in all boxes of the second row, $c$ in all boxes of the third one, and so~on.
\item Attach boxes with $a$ to the second Young diagram in all possible ways, subject to the rules that no two $a$'s appear in the same column and that the resulting arrangement of boxes is still a Young diagram. Repeat this process with $b$'s, $c$'s, and so~on.
\item For each Young diagram obtained in step two, read the first row of added symbols from right to left, then the second row in the same order, and so on. The resulting sequence of symbols, e.g., $abaabc\dots$, must be a lattice permutation, namely, to the left of any point in the sequence there are no fewer $a$'s than $b$'s, no fewer $b$'s than $c$'s, and so on. Discard all diagrams that do not comply with this rule.
\end{enumerate}
The Young diagrams $\lambda''$ that result from this procedure specify the irreps on the right-hand side of Eqs.~(\ref{reduction1}) and~(\ref{reduction2}). The same diagram can appear a number $M$ of times, in which case $\lambda''$ has multiplicity ${\rm dim}(\openone^{\lambda''})=M$.
\subsection{Particularities of quantum clustering}
Since the density operators [cf. Eq.~\eqref{app:rho_ns}] and POVM elements [cf. Eq.~\eqref{app:povmelements}] associated to each possible clustering emerge from the joint action of a permutation $\sigma\in S_N$ and a group average over ${\rm SU}(d)$, it is most convenient to work in the Schur basis, where the mathematical structure is much simpler. A further simplification, specific to quantum clustering of two types of states, is that the irreps that appear in the block-diagonal decomposition of the states (and, hence, of the POVM elements) have at most length 2, i.e., they are labeled by bipartitions $\lambda=(\lambda_1,\lambda_2)$, and correspond to Young diagrams of at most two rows. This is because the $\rho_{n,\sigma}$ arise from the tensor product of two \emph{completely symmetric} projectors, $\openone^{\rm sym}_n$, $\openone^{\rm sym}_{\bar n}$, of~$n$ and~$\bar{n}$ systems [cf. Eq.~\eqref{app:rho_ns}]. They project onto the irrep $\lambda=(n,0)$ and $\lambda'=(\bar n,0)$ subspaces, respectively.
According to the reduction rules above, in the Schur basis the tensor product reduces as
\ytableausetup{mathmode,boxsize=1em}
\ytableausetup{centertableaux}
\begin{eqnarray}
&&
\overbrace{
\begin{ytableau}
\phantom{.} & \phantom{.}
\end{ytableau}
\cdots
\begin{ytableau}
\phantom{.} & \phantom{.} & \phantom{.}
\end{ytableau}
}^{\bar n}
\ \otimes \
\overbrace{ \begin{ytableau}
a & a
\end{ytableau}
\cdots
\begin{ytableau}
a & a
\end{ytableau}}^{n}\nonumber\\
&=&
\overbrace{
\begin{ytableau}
\phantom{.} & \phantom{.} & \phantom{.}
\end{ytableau}
\cdots
\begin{ytableau}
a & a & a
\end{ytableau}
}^{n+\bar n}
\ \oplus\
\overbrace{
\begin{ytableau}
\phantom{.} & \phantom{.} & \phantom{.}\\
a
\end{ytableau}
\raisebox{.5em}{
$\cdots$\!
\begin{ytableau}
a & a
\end{ytableau}
}\!\!
}^{n+\bar n-1}
\label{reduction3}\\
&\oplus&
\overbrace{
\begin{ytableau}
\phantom{.} & \phantom{.} & \phantom{.}\\
a&a
\end{ytableau}
\raisebox{.5em}{
$\cdots$\!
\begin{ytableau}
a & a
\end{ytableau}
}\!\!
}^{n+\bar n-2}\
\oplus \cdots \oplus
\overbrace{
\begin{ytableau}
\phantom{.} & \phantom{.} \\
a&a
\end{ytableau}
\raisebox{0em}{
$\cdots$\!
\begin{ytableau}
\phantom{.}\\
a
\end{ytableau}
}
\raisebox{0.5em}{
\!\!$\cdots$\!
\begin{ytableau}
\phantom{.}
\end{ytableau}
}
\!\!
}^{\bar n}\ .\nonumber
\\
&&\nonumber
\end{eqnarray}
This proves our statement.
There is yet another simplification that emerges from Eq.~(\ref{reduction3}). Note that all the irreps appear only once in the reduction. That is, fixing the indices $n$, $\sigma$, and $\{\lambda\}$ uniquely defines a one-dimensional subspace. Thus, the projectors $\Omega^{n,\sigma}_{\{\lambda\}}$ are rank one.
We conclude by giving explicit expressions for the dimensions of the irreps of $S_N$ and ${\rm SU}(d)$, in Eqs.~\eqref{mult_nu_general} and~\eqref{mult_s_general}, for partitions of the form $\lambda=(\lambda_1,\lambda_2)$. These expressions are used to derive Eq.~\eqref{app:ps}, and read
\begin{eqnarray}
\nu_\lambda &=& \frac{N!(\lambda_1-\lambda_2+1)}{(\lambda_1+1)!\lambda_2! }\,, \label{mult_nu} \\ \nonumber\\
s_\lambda &=& \frac{\lambda_1-\lambda_2+1}{\lambda_1+1} \binom{\lambda_1+d-1}{d-1} \binom{\lambda_2+d-2}{d-2} \,. \label{mult_s}
\end{eqnarray}
One can check that Eqs.~(\ref{mult_nu}) and~(\ref{mult_s}) are consistent with Eq.~(\ref{reduction3}) by showing that the sum of the dimensions of the irreps on the right-hand side agrees with the product of the two on the left-hand side. Namely, by checking that
\begin{align}
s_{(\bar n,0)}s_{(n,0)}&=\sum_{i=0}^n s_{(n+\bar n-i,i)}\,,\label{check1}\\
\nu^{S_{\bar n}}_{(\bar n,0)}\nu^{S_n}_{(n,0)}\binom{n+\bar n}{n}&=\sum_{i=0}^n \nu_{(n+\bar n-i,i)}\,,\label{check2}
\end{align}
where the superscripts remind us that the dimensions on the left-hand side refer to irreps of $S_{\bar n}$, $S_{n}$. One obviously obtains $\nu^{S_{\bar n}}_{(\bar n,0)}=\nu^{S_n}_{(n,0)}=1$, since these are the trivial representations of either group. The binomial in~Eq.~(\ref{check2}) arises from the definition of the outer product representation in~Eq.~(\ref{reduction2}), whereby the action of $S_{n+\bar n}$ is defined on basis vectors of the form $\bar v_{i_1i_2\dots i_{\bar n}}\otimes v_{i_{\bar n+1}i_{\bar n+2}\dots i_{\bar n+n}}$, with $\bar v_{i_1i_2\dots i_{\bar n}}\in H_{\{\lambda\}}^{S_{\bar n}}$, $v_{i_1i_2\dots i_{n}}\in H_{\{\lambda'\}}^{S_{n}}$. There are, naturally, $\binom{\bar n+n}{n}$ ways of allocating $\bar n+n$ indices in this expression.
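Eqs.~(\ref{check1}) and~(\ref{check2}) are easy to verify numerically from the closed forms (\ref{mult_nu}) and~(\ref{mult_s}); the following sketch (with test values of our choosing) does so in exact integer arithmetic:
\begin{verbatim}
from math import comb, factorial

def nu2(l1, l2):
    """Eq. (mult_nu): dimension of the two-row S_N irrep (l1, l2)."""
    return factorial(l1 + l2) * (l1 - l2 + 1) // (factorial(l1 + 1) * factorial(l2))

def s2(l1, l2, d):
    """Eq. (mult_s): dimension of the two-row SU(d) irrep (l1, l2)."""
    return (l1 - l2 + 1) * comb(l1 + d - 1, d - 1) * comb(l2 + d - 2, d - 2) // (l1 + 1)

n, nbar, d = 3, 5, 4   # arbitrary test values with n <= nbar
assert s2(nbar, 0, d) * s2(n, 0, d) == sum(s2(n + nbar - i, i, d)
                                           for i in range(n + 1))   # Eq. (check1)
assert comb(n + nbar, n) == sum(nu2(n + nbar - i, i)
                                for i in range(n + 1))              # Eq. (check2)
\end{verbatim}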
\section{Asymptotics of $P_{\rm s}$}\label{app:asymptotics}
We next wish to address the asymptotic behavior of the success probability as the length~$N$ of the data string becomes large. Various behaviors will be derived depending on how the local dimension $d$ scales with $N$.
In the large $N$ limit it suffices to consider even values of~$N$, which slightly simplifies the derivation of the asymptotic expressions.
The success probability in Eq.~(\ref{app:ps}) for $N=2m$, $m\in{\mathbb N}$, can be written as (just define a new index as $j=m-i$)
\begin{equation}
P_{\rm s}={d-1\over 2^{2m-1}}\sum_{j=0}^m {(2j+1)^2\over (m+1+j)^2(m+d-1-j)}\binom{2m}{m+j}\,.
\label{P asymp}
\end{equation}
For large $m$, we write $j=m x$ and use
\begin{equation}
{1\over 2^{2m-1}}\binom{2m}{m+j}\sim {2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}\,.
\label{BinomialGauss}
\end{equation}
We start by assuming that $d$ scales more slowly than $N$, e.g., $d\sim N^{\gamma}$, with $0\le\gamma<1$. In this situation, we can neglect~$d$ in the denominator of Eq.~(\ref{P asymp}).
Neglecting also other subleading terms in inverse powers of $m$ and using the Euler-Maclaurin formula, we have
\begin{equation}
P_{\rm s}\sim (d-1) \int_{0}^\infty dx\; {4x^2\over (1+x)^2(1-x)} \;{2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}} \,,
\end{equation}
which we can further approximate by substituting $0$ for~$x$ in the denominator, as the Gaussian factor peaks at $x=0$ as $m$ becomes larger, so
\begin{align}
P_{\rm s}&\sim 4(d-1)\int_{0}^\infty dx\; x \;{2x\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}\nonumber\\
&=-4(d-1) \int_{0}^\infty dx\; x \;{d\over dx} {{\rm e}^{-m x^2}\over m \sqrt{m\pi}}\,.
\end{align}
We integrate by parts to obtain
\begin{equation}
P_{\rm s}\sim {2(d-1)\over m}\int_{0}^\infty dx\; {2\,{\rm e}^{-m x^2}\over \sqrt{m\pi}}={2(d-1)\over m^2} \,.
\end{equation}
Hence, provided that $d$ scales more slowly than $N$, the probability of success vanishes asymptotically as~$N^{-2}$. More precisely, as
\begin{equation}
P_{\rm s}\sim {8(d-1)\over N^2} \,.
\label{gamma < 1}
\end{equation}
Let us next assume that $d$ scales faster than $N$, e.g., as~$d\sim N^{\gamma}$, with $\gamma>1$. In this case, $d$ is the leading contribution in the second factor in the denominator of Eq.~(\ref{P asymp}). Accordingly, we have
\begin{align}
P_{\rm s}&\sim (d-1)m\int_0^\infty dx\, {4x^2\over (1+x)^2 d}\,{2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}\nonumber\\
&\sim 4m \int_0^\infty dx\, x\, {2 x\,{\rm e}^{-m x^2}\over\sqrt{m\pi}}={2\over m} \,,
\end{align}
and the asymptotic expression becomes
\begin{equation}
P_{\rm s}\sim {4\over N} \,,
\label{gamma>1}
\end{equation}
independently of $d$.
Finally, let us assume that $d$ scales exactly as $N$ and write $d=s N$, $s>0$. The success probability can be cast as
\begin{equation}
P_{\rm s}\sim (d-1) \int_{0}^\infty dx\; {4x^2\over (1+x)^2(1+2s-x)} \;{2\,{\rm e}^{-m x^2}\over\sqrt{m\pi}} \,.
\end{equation}
Proceeding as above, we obtain
\begin{equation}
P_{\rm s}\sim {2(d-1)\over (2s+1)m^2} \,.
\end{equation}
Thus,
\begin{equation}
P_{\rm s}\sim {8s\over (2s+1)N} \,.
\label{gamma=1}
\end{equation}
The three expressions, Eq.~(\ref{gamma < 1}), Eq.~(\ref{gamma>1}) and Eq.~(\ref{gamma=1}), can be combined into a single one as
\begin{equation}
P_{\rm s}
\sim {8(d-1)\over\left(2d+N\right) N}\,.
\end{equation}
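The crossover between the three regimes can be seen numerically by evaluating Eq.~(\ref{P asymp}) exactly and comparing it with the combined expression; a rough sketch of ours:
\begin{verbatim}
from math import comb

def P_s_exact(N, d):
    """Eq. (P asymp), valid for even N = 2m."""
    m = N // 2
    return (d - 1) / 2 ** (2 * m - 1) * sum(
        (2 * j + 1) ** 2 * comb(2 * m, m + j)
        / ((m + 1 + j) ** 2 * (m + d - 1 - j))
        for j in range(m + 1))

def P_s_asym(N, d):
    """Combined asymptotic expression 8(d-1)/((2d+N)N)."""
    return 8 * (d - 1) / ((2 * d + N) * N)

for N, d in [(400, 2), (400, 20), (400, 400), (400, 40000)]:
    print(N, d, P_s_exact(N, d), P_s_asym(N, d))
\end{verbatim}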
\section{Optimal POVM for general cost functions}\label{app:generalcosts}
This Appendix deals with the optimization of quantum clustering assuming other cost functions. We introduce a sufficient condition under which the type of POVM we used to maximize the success probability (Section~\ref{app:optimality}) is also optimal for a given generic cost function. We conjecture that the condition holds under reasonable assumptions. We discuss numerical results for the cases of Hamming distance, trace distance, and infidelity.
Recall that Eq.~\eqref{app:povmelements} together with Eq.~\eqref{povm} define the optimal POVM for a generic cost function that preserves covariance under $S_N$.
However, this form is implicit and thus not very practical.
Particularizing to the success probability, we managed to specify the function~$n(\lambda)=\lambda_2$ [cf. Eq.~\eqref{nl}] and the operators
$\Xi_{\{\lambda\}}^n = \Omega_{\{\lambda\}}^{n} \delta_{n,\lambda_2}$. In summary, the POVM was specified solely in terms of the effective states $\rho_{n,\sigma}$ (hypotheses).
Here we conjecture that the choice \mbox{$\Xi_{\{\lambda\}}^n=\Omega_{\{\lambda\}}^{n} \delta_{n,n(\lambda)}$} is still optimal for a large class of cost functions~$f({\bf x},{\bf x}')$, albeit with varying guessing rules $n(\lambda)$. If this holds, given~$f({\bf x},{\bf x}')$, one only has to compute $n(\lambda)=\operatorname{argmin}_n \vartheta^n_{\lambda,1}$ to obtain the optimal POVM. The minimum average cost can then be computed via Eq.~\eqref{opt_av_cost}.
We now formulate this conjecture precisely as a testable mathematical condition.
For any cost function (distance) such that $f({\bf x},{\bf x}')\ge0$ and $f({\bf x},{\bf x}') = 0$ iff ${\bf x}={\bf x}'$,
we can always find some constant $t>0$ such that
\begin{equation}
t\,f({\bf x},{\bf x}')\geq \bar{\delta}_{{\bf x},{\bf x}'}\equiv 1-\delta_{{\bf x},{\bf x}'},\quad \forall{\bf x},{\bf x}'.
\label{minimal cost}
\end{equation}
We can then rescale the cost function $f\mapsto t^{-1}f$ and assume with no loss of generality that $f({\bf x},{\bf x}')\ge\bar\delta_{{\bf x},{\bf x}'}$.
We have
\begin{equation}
W_{\bf x}
= \bar W_{\bf x} + \Delta_{\bf x} ,
\end{equation}
where we have used the definition of $W_{\bf x}$ after Eq.~(\ref{Holevo1}) and similarly defined $\bar W_{\bf x}$ for the minimal cost~$\bar\delta_{{\bf x},{\bf x}'}$. As in Section~\ref{app:optimality}, it suffices to consider ${\bf x}=(n,e)$. Then,
\begin{equation}
\Delta_{\bf x} = \sum_{{\bf x}'} \eta_{{\bf x}'}[f({\bf x},{\bf x}')-\bar{\delta}_{{\bf x},{\bf x}'}]\, \rho_{{\bf x}'} \geq 0.
\end{equation}
Using the same notation as in Eq.~(\ref{W_block}), this is equivalent~to
\begin{equation}
\omega_{\{\lambda\}}^n - \bar\omega_{\{\lambda\}}^n \geq 0 \,.
\end{equation}
We now recall the meaning of Eqs.~\eqref{Holevo1b} and \eqref{Holevo2b}: the operators $\Xi_{\{\lambda\}}^n$ must be projectors onto the eigenspace of minimal eigenvalue of $\omega_{\{\lambda\}}^n$. Then, according to Eq.~\eqref{povm},
the choice $\Xi_{\{\lambda\}}^n=\Omega_{\{\lambda\}}^n \delta_{n,n(\lambda)}$
is also optimal for arbitrary cost functions if it holds that
\begin{equation}\label{conjecture}
{\rm supp}\left(\Omega_{\{\lambda\}}^n\right) = V_1\left(\bar\omega_{\{\lambda\}}^n\right) \overset{?}{\subset} V_1\left(\omega_{\{\lambda\}}^n\right),
\end{equation}
where $V_1(X)$ is the eigenspace of minimal eigenvalue of~$X$, and the equality follows from Eq.~\eqref{Wme}.
Our conjecture is that Eq.~\eqref{conjecture} holds true for the class of ``reasonable'' cost functions considered in this paper, namely, for those that are nonnegative, covariant, and satisfy the distance property stated before Eq.~(\ref{minimal cost}).
We checked its validity for problems of size up to $N=8$, local dimension $d=2$, and uniform prior probabilities for the following cost functions:
the Hamming distance $h({\bf x},{\bf x}') = \min\{|{\bf x}-{\bf x}'|,|{\bf x}-{\bar {\bf x}}'|\}$ ($x_i=0,1$), the trace distance $T({\bf x},{\bf x}')=\|\rho_{\bf x}-\rho_{{\bf x}'}\|_1$, and the infidelity $I({\bf x},{\bf x}')=1-{\rm tr}^2\big[\big(\sqrt{\rho_{\bf x}}\, \rho_{{\bf x}'} \sqrt{\rho_{\bf x}}\big)^{1/2}\big]$.
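For concreteness, the Hamming distance between clusterings accounts for the relabeling freedom by comparing against both ${\bf x}'$ and its complement; a minimal sketch (the helper name is ours):
\begin{verbatim}
def h(x, xp):
    """Hamming distance between clusterings with binary labels:
    h = min(|x - x'|, |x - xbar'|); the complement xbar' labels
    the same clustering as x'."""
    diff = sum(a != b for a, b in zip(x, xp))
    return min(diff, len(x) - diff)

print(h([0, 0, 1, 1], [1, 1, 0, 0]))   # 0: same clustering, relabeled
print(h([0, 0, 1, 1], [0, 1, 0, 1]))   # 2
\end{verbatim}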
The above examples
induce a much richer structure in the problem at hand. To illustrate this added complexity, in Fig.~\ref{fig:avocado} we show a heat map of the Hamming distances
$h({\bf x},{\bf x}')$ between all pairs of clusterings for $N=8$. The figure shows that the largest values of $h({\bf x},{\bf x}')$ can occur for two clusterings with equal cluster size $n$, and that~$h({\bf x},{\bf x}')$ depends strongly on the pair of permutations $\sigma,\sigma'$. As a result, the guessing rule $n(\lambda)$ is completely different from the one that maximizes the probability of success~$P_{\rm s}$.
In particular, irreps~$\lambda$ are no longer in one-to-one correspondence with optimal guesses for~$n$. In Table~\ref{tab:nl} we show values of~$n(\lambda)$ for our four cost functions and $N=4,\dots,8$. In contrast to the case of the success probability (the cost function~$\bar\delta_{{\bf x},{\bf x}'}$), we note that in some cases it is actually optimal to map several irreps to the same guess, while never guessing certain cluster sizes.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.47]{Hamming_avocadoN8.pdf}
\caption{Heat map of the Hamming distances $h({\bf x},{\bf x}')$ between clusterings for $N=8$. The clusterings are grouped by size of the smallest cluster $n=0,1,2,3,4$. Each group contains all nontrivial permutations $\sigma$ for a given $n$. A brighter color means a smaller Hamming distance.}\label{fig:avocado}
\end{figure}
\begin{table*}[htbp]
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\cline{2-20}
& \multicolumn{3}{c|}{$N=4$} & \multicolumn{3}{c|}{$N=5$} & \multicolumn{4}{c|}{$N=6$} & \multicolumn{4}{c|}{$N=7$} & \multicolumn{5}{c|}{$N=8$} \\ \hline
\multicolumn{1}{|c|}{$\lambda$} & (4,0) & (3,1) & (2,2) & (5,0) & (4,1) & (3,2) & (6,0) & (5,1) & (4,2) & (3,3) & (7,0) & (6,1) & (5,2) & (4,3) & (8,0) & (7,1) & (6,2) & (5,3) & (4,4) \\ \hline
\multicolumn{1}{|c|}{$\bar{\delta}$} & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 3 & 0 & 1 & 2 & 3 & 0 & 1 & 2 & 3 & 4 \\ \hline
\multicolumn{1}{|c|}{$h$} & 0 & 2 & 2 & 0 & 2 & 2 & 0 & 3 & 2,3 & 3 & 0 & 3 & 3 & 3 & 0 & 4 & 4 & 4 & 4 \\ \hline
\multicolumn{1}{|c|}{$T$} & 1 & 1 & 2 & 1 & 1 & 2 & 1 & 1 & 2 & 3 & 1 & 2 & 2 & 3 & 1 & 2 & 3 & 3 & 4 \\ \hline
\multicolumn{1}{|c|}{$I$} & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 3 & 0 & 1 & 3 & 3 & 0 & 1 & 3 & 4 & 4 \\ \hline
\end{tabular}
\caption{Values of $n(\lambda)$, i.e., of the optimal guess for the size of the smallest cluster, where $\lambda=(\lambda_1,\lambda_2)$ are the relevant irreps, for data sizes $N=4,5,6,7,8$, and cost functions $\bar{\delta}({\bf x},{\bf x}')$ (corresponding to the success probability), Hamming distance~$h({\bf x},{\bf x}')$, trace distance $T({\bf x},{\bf x}')$, and infidelity $I({\bf x},{\bf x}')$.}\label{tab:nl}
\end{table*}
Performing the Schur transform is computationally inefficient on a classical computer\footnote{In contrast, as was mentioned in the main text, there exist efficient quantum circuits able to implement the Schur transform on a quantum computer. A circuit based on the Clebsch-Gordan transform achieves polynomial time in $N$ and $d$~\cite{Harrow2005}. Recently, an alternative method based on the representation theory of the symmetric group was shown to reduce the dimension scaling to ${\rm poly}(\log d)$~\cite{Krovi2018}.}, which sets a limit on the size of the data one can test---in our case it is~$N=8$. However, it is worth mentioning that this difficulty might actually be overcome. The fundamental objects needed for testing Eq.~\eqref{conjecture} are the operators $\Omega_{\{\lambda\}}^n$. Their computation would, in principle, not require the full Schur transform, as they can be expressed in terms of generalized Racah coefficients, which give a direct relation between Schur bases arising from different coupling schemes of the tensor product space. It is indeed possible to calculate generalized Racah coefficients directly without going through a Clebsch-Gordan transform~\cite{Gliske2005}, and should this method be implemented, clustering problems of larger sizes might be tested. However, an extensive numerical analysis was not the aim of this paper.
\section{Prior distributions}\label{app:prior}
In the interest of making the paper self-contained, in this appendix we include the derivation of some results about the prior distributions used in the paper.
Let ${\mathsf S}_d=\{p_s\ge 0\,|\, \sum_{s=1}^d p_s=1\}$ denote the standard $(d-\!1)$-dimensional (probability) simplex. Every categorical distribution (CD) $P=\{p_s\}_{s=1}^d$ is a point in ${\mathsf S}_d$.
The flat distribution of CDs is the volume element divided by the volume of~${\mathsf S}_d$, the latter denoted by $V_d$. Choosing coordinates $p_1,\dots,p_{d-1}$, the flat distribution is $\prod_{s=1}^{d-1} dp_s/V_d\equiv dP$.
Let us compute the moments of the flat distribution; as a byproduct, we will obtain $V_d$. We have
\begin{align}
V_d\int_{{\mathsf S}_d}\! dP \prod_{s=1}^d p_s^{n_s}
&=\int_0^1 \!dp_1\int_0^{1-p_1}\!\!dp_2\cdots\int_0^{1-\sum_{s=1}^{d-2}p_s}\!\!dp_{d-1}
\prod_{s=1}^d p_s^{n_s}
\nonumber\\
&=
\frac{\prod_{s=1}^d n_s!}{\left(d-1+\sum_{s=1}^d n_s\right)!}
\label{int simplex}
\end{align}
[the calculation becomes straightforward by iterating the change of variables $p_r\mapsto x$, where $p_r=(1-\sum_{s=1}^{r-1}p_s)x$, $r=d-2,d-3,\dots,2,1$]. In particular, setting $n_s=0$ for all~$s$ in Eq.~(\ref{int simplex}), we obtain $V_d=1/(d-1)!$. Then
\begin{equation}
\int_{{\mathsf S}_d}\! dP \prod_{s=1}^d p_s^{n_s}=\frac{(d-1)!\prod_{s=1}^d n_s!}{\left(d-1+N\right)!},
\label{moments1}
\end{equation}
where $N=\sum_{s=1}^d n_s$.
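Eq.~(\ref{moments1}) can be checked by Monte Carlo, since uniform sampling of ${\mathsf S}_d$ is the Dirichlet distribution with unit parameters; an illustrative sketch of ours, using NumPy:
\begin{verbatim}
import numpy as np
from math import factorial, prod

rng = np.random.default_rng(0)
d, ns = 3, (2, 1, 0)                           # exponents n_s (our test choice)
P = rng.dirichlet(np.ones(d), size=200_000)    # flat distribution on S_d
mc = np.mean(np.prod(P ** np.array(ns), axis=1))
exact = factorial(d - 1) * prod(factorial(n) for n in ns) / factorial(d - 1 + sum(ns))
print(mc, exact)                               # 1/30, up to sampling error
\end{verbatim}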
Next, we provide a simple proof that
any fixed von Neumann measurement on a uniform distribution of pure states in $\mathbb{C}^d$ gives rise to CDs whose probability distribution is flat. As a result, the classical and semiclassical strategies discussed in the main text have the same success probability.
Take $|\phi\rangle\in \mathbb{C}^d$ and let $\{|s\rangle\}_{s=1}^d$ be an orthonormal basis of $\mathbb{C}^d$. By performing the corresponding von Neumann measurement, the probability of an outcome~$s$ is $p_s=|\langle s|\phi\rangle|^2$. Thus, any distribution of pure states induces a distribution of CDs $\{p_s=|\langle s|\phi\rangle|^2\}_{s=1}^d$ on ${\mathsf S}_d$. Let us compute the moments of the induced distribution, namely,
\begin{align}
\int\! d\phi \prod_{s=1}^d p_s^{n_s}&=
\int\! d\phi \,{\rm tr}\!\left[\bigotimes_{s=1}^d\left(|s\rangle\langle s|\right)^{\otimes n_s}\left(|\phi\rangle\langle\phi|\right)^{\otimes N}\right]\nonumber\\
&=
\frac{1}{D^{\rm sym}_N}
{\rm tr}\!\left[\bigotimes_{s=1}^d\left(|s\rangle\langle s|\right)^{\otimes n_s}\openone^{\rm sym}_N\right],
\label{CalcMom}
\end{align}
where we recall that $D^{\rm sym}_N$ ($\openone^{\rm sym}_N$) is the dimension of (projector on) the symmetric subspace of~$(\mathbb{C}^d)^{\otimes N}$ and we have used Schur's lemma. A basis of the symmetric subspace is
\begin{equation}
|v_{\bf n}\rangle=\sqrt{{\prod_{s=1}^d n_s!\over N!}}\sum_{\sigma\in S_N}U_\sigma \bigotimes_{s=1}^d|s\rangle^{\otimes n_s},
\end{equation}
where ${\bf n}=(n_1,n_2,\dots, n_d)$. Note that there are $\binom{N+d-1}{d-1}$ different strings $\bf n$ (weak compositions of $N$ in $d$ parts), which agrees with $D^{\rm sym}_{N}=s_{(N,0)}$ [recall Eq.~(\ref{mult_s})], as it should be. Since $\openone^{\rm sym}_N=\sum_{\bf n}|v_{\bf n}\rangle\langle v_{\bf n}|$, we can easily compute the trace in Eq.~(\ref{CalcMom}) to obtain
\begin{equation}
\int\! d\phi \prod_{s=1}^d p_s^{n_s}=\frac{\prod_{s=1}^d n_s!}{N! D^{\rm sym}_N}=\frac{(d-1)!\prod_{s=1}^d n_s!}{(N+d-1)!}.
\label{moments2}
\end{equation}
This equation agrees with Eq.~(\ref{moments1}). This means that all the moments of the distribution induced from the uniform distribution of pure states coincide with the moments of a flat distribution of CDs on~${\mathsf S}_d$. Since distributions with compact support are uniquely determined by their moments~\cite{Akhiezer1965} (and ${\mathsf S}_d$ is compact), we conclude that they are identical.
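The same moment check can be run on the induced distribution by sampling Haar-random pure states (normalized complex Gaussian vectors); a sketch under those assumptions, to be compared with the previous snippet:
\begin{verbatim}
import numpy as np
from math import factorial, prod

rng = np.random.default_rng(1)
d, ns = 3, (2, 1, 0)
z = rng.normal(size=(200_000, d)) + 1j * rng.normal(size=(200_000, d))
phi = z / np.linalg.norm(z, axis=1, keepdims=True)   # Haar-random pure states
p = np.abs(phi) ** 2                                 # induced CDs p_s = |<s|phi>|^2
mc = np.mean(np.prod(p ** np.array(ns), axis=1))
exact = factorial(d - 1) * prod(factorial(n) for n in ns) / factorial(d - 1 + sum(ns))
print(mc, exact)                                     # reproduces Eq. (moments2)
\end{verbatim}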
As a byproduct, we can compute the marginal distribution $\mu(c^2)$, where $c$ is the overlap of $|\phi\rangle$ with a fixed state $|\psi\rangle$.
Since we can always find a basis such that $|\psi\rangle$ is its first element, we have $c=|\langle 1|\phi\rangle|$. Because of the results above, the marginal distribution is given by
\begin{align}
\mu(c^2)&=
{1\over V_d}\int_0^{1-p_1}\!\!dp_2\cdots\int_0^{1-\sum_{s=1}^{d-2}p_s}\!dp_{d-1}
\Bigg|_{p_1=c^2}
\nonumber\\
&=(d\!-\!1)(1\!-\!c^2)^{d-2},
\label{marginal}
\end{align}
in agreement with Ref.~\cite{Alonso2016}.
\section{Optimal clustering protocol for unknown classical states}\label{app:classic}
In this appendix we provide details on the derivation of the optimal protocol for a classical clustering problem, analogous to the quantum problem discussed in the main text. The results here also apply to quantum systems when the measurement performed on each of them is restricted to be local, projective, $d$-dimensional, and fixed. We call this type of protocol semiclassical.
Here, we envision a device that takes input strings of $N$ data points ${\bf s}=(s_{1}s_2\cdots s_{N})$, with the promise that each~$s_{i}$ is a symbol out of an alphabet of $d$ symbols, say the set $\{1,2,\dots,d\}$, and has been drawn from either roulette $P$ or roulette $Q$, with corresponding categorical probability distributions $P=\{p_{s}\}_{s=1}^{d}$ and $Q=\{q_{s}\}_{s=1}^{d}$.
To simplify the notation, we use the same symbols for the roulettes and their corresponding probability distributions, and for the stochastic variables and their possible outcomes. Also, the range of values of the index~$s$ will always be understood to be $\{1,2,\dots, d\}$, unless specified otherwise.
The device's task is to group the data points in two clusters so that all points in either cluster have a common underlying probability distribution (either $P$ or $Q$). We wish the machine to be universal, meaning that it shall operate without knowledge of the distributions~$P$ and~$Q$. Accordingly, we will choose as figure of merit the probability of correctly classifying \emph{all} data points, averaged over every possible sequence of roulettes ${\bf x}=(x_1x_2\cdots x_N)$, $x_i\in\{P,Q\}$, and over every possible distribution $P$ and $Q$. The latter are assumed to be uniformly distributed over the common probability simplex ${\mathsf S}_d$ on which they are defined.
Formally, this success probability is
\begin{eqnarray}
P_{\rm s}^{\rm cl}&=&\int_{{\mathsf S}_d}\!\! dP dQ \sum_{{\bf x},{\bf s}} {\rm Pr}\left(\hat{\bf x}\in\{{\bf x},\bar{\bf x}\},{\bf s},{\bf x};P,Q\right)\nonumber\\
&=& 2 \int_{{\mathsf S}_d}\!\! dP dQ \sum_{{\bf x},{\bf s}} \delta_{\hat{\bf x},{\bf x}}\,{\rm Pr}\left({\bf s},{\bf x};P,Q\right),
\end{eqnarray}
where $\hat{\bf x}$ is the guess of ${\bf x}$ emitted by the machine, which, by the universality requirement, can {\em only} depend on the data string ${\bf s}$. The sums are carried out over all possible strings ${\bf s}$ and over all $2^{N}$ sequences of roulettes ${\bf x}$.
The factor of two in the second equality takes into account that~$P$ and~$Q$ are unknown, hence identifying the complementary string $\bar{\bf x}$ leads to the same clustering.
By emitting~$\hat{\bf x}$, the device suggests a classification of the $N$ data points~$s_i$ into two clusters.
In the above equation we have used the notation of Appendix~\ref{app:prior}
for the integral over the probability simplex.
An expression for the optimal success probability can be obtained from the trivial upper bound
\begin{eqnarray}
P_{\rm s}^{\rm cl}&=&
2\sum_{{\bf s}} \int dP dQ \;{\rm Pr}\left({\bf s},\hat{\bf x};P,Q\right)\nonumber\\
&\leq& 2 \sum_{{\bf s}} \max_{{\bf x}} \int dP dQ \; {\rm Pr}\left({\bf s},{\bf x};P,Q\right) \nonumber\\
&=& 2\sum_{{\bf s}} \max_{{\bf x}}
\; {\rm Pr}\left({\bf s},{\bf x}\right),
\label{eq:Psmax}
\end{eqnarray}
where ${\rm Pr}\left({\bf s},{\bf x}\right)$ is the joint marginal distribution of ${\bf s}$ and~${\bf x}$. This bound is attained by the guessing rule
\begin{equation}
\hat{\bf x}=\underset{{\bf x}}{\operatorname{argmax}} \;{\rm Pr}\left({\bf s},{\bf x}\right).
\end{equation}
For two specific distributions $P$ and $Q$, the probability that a given roulette sequence ${\bf x}$ gives rise to a particular data string ${\bf s}$ is ${\rm Pr}({\bf s}|{\bf x};P,Q)=\prod_{s}p_s^{n_s}q_{s}^{m_{s}}$, where $n_{s}$ ($m_{s}$) is the number of occurrences of symbol~$s$ in~${\bf s}$ [i.e., how many $s_i\in{\bf s}$ satisfy $s_i=s$] arising from roulettes of type $P$ ($Q$). For later convenience, we define \mbox{$M_{s}=n_{s}+m_{s}$}, which gives the total number of such occurrences.
Note that $\{M_s\}$ is independent of ${\bf x}$, whereas~$\{n_s\}$ and $\{m_s\}$ are not.
Performing the integral over $P$ and~$Q$ we have
\begin{eqnarray}
{\rm Pr}({\bf s},{\bf x}) &=& \frac{{\rm Pr}({\bf s}|{\bf x})}{2^{N}} \nonumber\\
&=&\frac{1}{2^{N}}\int dP dQ\; {\rm Pr}({\bf s}|{\bf x};P,Q)\nonumber\\
&=& \frac{2^{-N} d_{\flat}!^{2} \prod_{s}n_{s}!m_{s}!}{(d_{\flat}+\sum_{s}m_{s})!(d_{\flat}+\sum_{s}n_{s})!}\,,
\label{eq:pxbr}
\end{eqnarray}
where we have used Eq.~(\ref{moments1}), and in the first equality we have assumed that the two types of roulette $P$ and~$Q$ are equally probable, hence each possible sequence ${\bf x}$ occurs with equal prior probability $2^{-N}$. We have also introduced the notation $d_{\flat}\equiv d-1$ to shorten the expressions throughout this appendix. Note that all the dependence on ${\bf x}$ is through the occurrence numbers $m_s$ and $n_s$.
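Since Eq.~(\ref{eq:pxbr}) depends on ${\bf x}$ only through the occurrence numbers, it admits a direct transcription in exact arithmetic; a sketch (the helper name is ours):
\begin{verbatim}
from fractions import Fraction
from math import factorial, prod

def pr_joint(ns, ms, d):
    """Eq. (eq:pxbr): Pr(s, x) from the occurrence numbers n_s
    (roulette P) and m_s (roulette Q), with d_flat = d - 1."""
    N = sum(ns) + sum(ms)
    db = d - 1
    num = factorial(db) ** 2 * prod(factorial(n) * factorial(m)
                                    for n, m in zip(ns, ms))
    den = factorial(db + sum(ms)) * factorial(db + sum(ns))
    return Fraction(num, den) / 2 ** N

print(pr_joint((5, 0, 0), (0, 5, 2), 3))   # e.g. d = 3, N = 12
\end{verbatim}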
According to Eq.~\eqref{eq:Psmax}, for each string ${\bf s}$ we need to maximize the joint probability ${\rm Pr}({\bf s},{\bf x})$ in Eq.~\eqref{eq:pxbr}
over all possible sequences of roulettes ${\bf x}$. We first note that, given a total of $M_s$ occurrences of a symbol $s$ in ${\bf s}$, ${\rm Pr}({\bf s},{\bf x})$ is maximized by a sequence ${\bf x}$ whereby all these occurrences come from the same type of roulette. In other words, by a sequence ${\bf x}$ such that either $m_s=M_s$ and~$n_s=0$, or else $m_s=0$ and~$n_s=M_s$.
In order to prove the above claim, we single out a particular symbol $r$ that occurs a total number of times $\mu=M_r$ in~${\bf s}$. We focus on the dependence of ${\rm Pr}({\bf s},{\bf x})$
on the occurrence number $t=m_r$ (so, $n_r=\mu-t$) by writing
\begin{eqnarray}
{\rm Pr}({\bf s},{\bf x})&=&
\frac{a \,(\mu-t)!\, t!}{(b+t)!(c-t)!} \equiv f(t),
\end{eqnarray}
where the coefficients $a$, $b$, and $c$ are defined as
\begin{eqnarray}
a&=&\frac{d_{\flat}!^{2}}{2^{N}}\prod_{s\neq r} n_{s}!m_{s}!\,,\\
b&=&d_{\flat}+\sum_{s\neq r} m_s\,,\\
c&=&d_{\flat}+\sum_{s} n_s+m_r=d_{\flat}+N-\sum_{s\neq r} m_s\,,
\label{the C}
\end{eqnarray}
and are independent of $t$. The function $f(t)$ can be extended to $t\in{\mathbb R}$ using the Euler gamma function and the relation $\Gamma(t+1)=t!$.
This enables us to compute the second derivative of $f(t)$ and show that it is a convex function of $t$ in the interval $[0,\mu]$. Indeed,
\begin{align}\label{harmonic}
{f''(t)\over f(t)}&=\left[H_1(c-t)-H_1(\mu-t)
-H_1(b+t)+H_1(t)\right]^2\nonumber\\
&\phantom{=}+H_2(c-t)-H_2(\mu-t)
+H_2(b+t)-H_2(t) \nonumber\\[.5em]
&\geq 0\,,
\end{align}
where $H_n(t)$ are the generalized harmonic numbers. For positive integer values of $t$ they are $H_n(t)=\sum_{j=1}^{t} j^{-n}$. The relation $H_n(t)=\zeta(n)-\sum_{j=1}^\infty (t+j)^{-n}$, where $\zeta(n)=\sum_{j=1}^\infty j^{-n}$ is the Riemann zeta function, allows us to extend the domain of $H_n(t)$ to real (and complex) values of $t$.
The positivity of $f''(t)$ follows from the positivity of both~$f(t)$ and the two differences of harmonic numbers in the second line of Eq.~(\ref{harmonic}). Note that $H_2(x)$ is an increasing function of $x$. Since, obviously, $b+t>t$, and $c-t>\sum_s n_s=\sum_s(M_s-m_s)\ge \mu-t$ [as follows from the definition of $c$ in~Eq.~(\ref{the C})], we see that the two differences are positive.
The convexity of $f(t)$ for $t\in[0,\mu]$ implies that the maximum of $f(t)$ is either at $t=0$ or $t=\mu$. This holds for every value of $M_r $ and every symbol $r$ in the data string, so our claim holds. In summary, the optimal guessing rule must assign the same type of roulette to all the $M_s$ occurrences of a symbol $s$, i.e., it must group all data points that show the same symbol in the same cluster. This is in full agreement with our own intuition.
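The convexity argument can be corroborated numerically by extending $f$ to real $t$ through $\log\Gamma$; a sketch using SciPy, with hypothetical values of $\mu$, $b$, $c$ that satisfy the constraints above:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def f(t, mu, b, c):
    """f(t) = (mu-t)! t! / ((b+t)! (c-t)!), up to the constant a,
    extended to real t via Gamma(t+1) = t!."""
    return np.exp(gammaln(mu - t + 1) + gammaln(t + 1)
                  - gammaln(b + t + 1) - gammaln(c - t + 1))

mu, b, c = 7, 4, 15                # hypothetical; c - t >= mu - t and b + t > t hold
t = np.linspace(0.0, mu, 201)
y = f(t, mu, b, c)
assert np.all(np.diff(y, 2) >= -1e-12 * y.max())   # second differences: f is convex
assert y.argmax() in (0, t.size - 1)               # maximum sits at t = 0 or t = mu
\end{verbatim}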
The description of the optimal protocol that runs on our device is not yet complete. We need to specify how to reduce the current number of clusters down to two, since at this point we may (and typically will) have up to $d$ clusters, as many as there are different symbols. The reduction, or merging, of the $d$ clusters can only be based on their relative sizes, as nothing is known about the underlying probability distributions.
This is quite clear: Let ${\mathsf P}$ be the subset of symbols (e.g., the subset of $\{1,2,\dots,d\}$) for which $n_s=M_s$, and let ${\mathsf Q}$ be its complement, i.e.,~${\mathsf Q}$ contains the symbols for which $m_s=M_s$, and ${\mathsf P}=\bar{\mathsf Q}$. The claim we just proved tells us that in order to find the maximum of ${{\bf s}m Pr}({\bf s},{\bf x})$ it is enough to consider sequences of roulettes ${\bf x}$ that comply with the above conditions on the occurrence numbers.\footnote{For example, suppose $d=3$ and $N=12$. Assuming that ${\bf s}=(112321223112)$ is the string of data, the sequence of roulettes ${\bf x}$ in the table
$$
\begin{tabular}{c | c c c c c c c c c c c c c}
$i$ &1&2&3&4&5&6&7&8&9&10&11&12 \\ [0.5ex]
\hline
${\bf s}$ &1&1&2&3&2&1&2&2&3&1&1&2 \\ [0.5ex]
\hline
${\bf x}$ &$P$&$P$&$Q$&$Q$&$Q$&$P$&$Q$&$Q$&$Q$&$P$&$P$&$Q$
\end{tabular}
$$
satisfies the conditions $m_s=M_s$ or $n_s=M_s$, since
$n_1=M_1=5$, $m_2=M_2=5$, and $m_3=M_3=2$. In this case, ${\mathsf P}=\{1\}$, and ${\mathsf Q}=\{2,3\}$.
The suggested clustering is $\{(1,2,6,10,11),(3,4,5,7,8,9,12)\}$.
} For those, the joint probability~${\rm Pr}({\bf s},{\bf x})$ can be written as
\begin{equation}
{\rm Pr}({\bf s},{\bf x})=
\frac{a}{ \big(d_{\flat}+\sum_{s\in{\mathsf Q}} M_s \big)!\big(d_{\flat}+\sum_{s\in{\mathsf P}} M_s\big)!}\,,
\label{eq:maxpsx-1}
\end{equation}
where $a$ now simplifies to $2^{-N} d_{\flat}!^{2}\raisebox{.15em}{\small${\prod}_{s}$} M_s!$.
Thus, it just remains to find the partition $\{{\mathsf P},{\mathsf Q}\}$ that maximizes this expression. It can also be written as
\begin{equation}
{\rm Pr}({\bf s},{\bf x})=
\frac{a}{(d_{\flat}+x )!(d_{\flat}+N-x)!}\,,
\label{eq:maxpsx}
\end{equation}
where we have defined $x=\raisebox{.15em}{\small$\sum_{s\in {\mathsf Q}}$} M_s$.
The maximum of this function is located at $x=N/2$, and one can easily check that it is monotonic on either side of its peak.
Note that, depending on the values of the occurrence numbers $\{M_{s}\}$, the optimal value, $x=N/2$, may not be attained. In such cases, the maximum of ${\rm Pr}({\bf s},{\bf x})$ is located at $x^*=N/2\pm\Delta$, where $\Delta$ is the bias
\begin{equation}
\Delta=\frac{1}{2}\min_{\mathsf Q}\left|\sum_{s\in {\mathsf Q}}M_s-\sum_{s\in\bar{\mathsf Q}}M_s\right|\,.
\label{eq:bias}
\end{equation}
The subset $\mathsf Q$ that minimizes this expression determines the optimal clustering.
In summary (and not very surprisingly), the optimal guessing rule consists in first partitioning the data ${\bf s}$ into up to $d$ groups according to the symbol of the data points and, second, merging those groups (without splitting them) into two clusters in such a way that their sizes are as similar as possible.
We have stumbled upon the so-called \emph{partition problem}~\cite{Korf1998}, which is known to be weakly NP-complete. In particular, a large set of distinct occurrence counts $\{M_s\}$ rapidly hinders the efficiency of known algorithms, a situation likely to occur for large $d$. It follows that the optimal clustering protocol for the classical problem cannot be implemented efficiently in all instances of the problem.
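For concreteness, the following Python sketch (ours, not part of the original text) implements the guessing rule just described on a small input: it groups the data points by symbol and then performs the merging step, i.e., the partition problem, with a simple dynamic program over the achievable subset sums of the occurrence counts.
\begin{verbatim}
from collections import defaultdict

def optimal_clustering(s):
    """Split the data string s into two clusters of sizes as similar
    as possible, never separating points that show the same symbol."""
    groups = defaultdict(list)            # symbol -> positions showing it
    for i, sym in enumerate(s):
        groups[sym].append(i)
    counts = {sym: len(pos) for sym, pos in groups.items()}
    N = len(s)

    # Partition-problem DP: reachable[x] is a set Q of symbols whose
    # occurrence counts add up to x (one witness per achievable sum).
    reachable = {0: frozenset()}
    for sym, M in counts.items():
        for x, Q in list(reachable.items()):
            reachable.setdefault(x + M, Q | {sym})

    # The achievable sum closest to N/2 minimizes the bias Delta.
    x_best = min(reachable, key=lambda x: abs(x - N / 2))
    Q = reachable[x_best]
    cluster_Q = sorted(i for sym in Q for i in groups[sym])
    cluster_P = sorted(set(range(N)) - set(cluster_Q))
    return cluster_P, cluster_Q

# The d = 3, N = 12 example from the footnote (0-based positions):
print(optimal_clustering("112321223112"))
# -> ([2, 3, 4, 6, 7, 8, 11], [0, 1, 5, 9, 10])
\end{verbatim}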
To obtain the maximum success probability $P_{\rm s}^{\rm cl}$, Eq.~(\ref{eq:Psmax}), we need to sum the maximum joint probability, given by \eqref{eq:maxpsx} with $x=x^*$, over all
possible strings~${\bf s}$. Those with the same set of occurrence counts~$\{M_{s}\}$
give the same contribution. Moreover, all the dependence on~$\{M_s\}$ is through the bias $\Delta$. Therefore, if we define $\xi_\Delta$ to be the number of sets $\{M_s\}$ that give rise to a bias $\Delta$, then the corresponding number of data strings is $\xi_\Delta N!/\raisebox{.15em}{\small${\prod}_{s}$} M_s!$. We thus can write
\begin{equation}
P_{\rm s}^{\rm cl}=\sum_{\Delta}
\frac{2^{1-N}\xi_\Delta d_{\flat}!^{2}N!}{\left(d_{\flat}\!+\!{N\over2}\!+\!\Delta\right)!\left(d_{\flat}\!+\!{N\over2}\!-\!\Delta\right)!}\,.
\label{eq:Pssum}
\end{equation}
This is as far as we can go, as no explicit formula for the combinatorial factor $\xi_\Delta$ is likely to exist for general cases. However, it is possible to work out the asymptotic expression of the maximum success probability for large data sizes $N$.
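Although no closed form for $\xi_\Delta$ is available, Eq.~\eqref{eq:Pssum} can be evaluated by brute force for small $N$ and $d$: enumerate all weak compositions $(M_1,\dots,M_d)$ of $N$, compute the bias of each, and accumulate the corresponding terms. The Python sketch below (ours) does exactly this, taking $d_\flat=d-1$, the value implicit in the asymptotic step of Eq.~\eqref{eq:Psfopt2}; for $d=3$ it returns $7/12$ at $N=2$ and $11/30$ at $N=3$, matching the ``unknown'' entries of Table~\ref{table:K-U} below.
\begin{verbatim}
from fractions import Fraction
from itertools import product
from math import factorial

def bias(M):
    """Delta = (1/2) min_Q |sum_{s in Q} M_s - sum_{s not in Q} M_s|."""
    N = sum(M)
    sums = {0}
    for m in M:
        sums |= {x + m for x in sums}
    return Fraction(min(abs(2 * x - N) for x in sums), 2)

def P_cl(N, d):
    """Evaluate Eq. (eq:Pssum) exactly, assuming d_flat = d - 1."""
    df = d - 1
    total = Fraction(0)
    for M in product(range(N + 1), repeat=d):   # weak compositions of N
        if sum(M) != N:
            continue
        delta = bias(M)
        a_plus = int(Fraction(df) + Fraction(N, 2) + delta)   # integer by parity
        a_minus = int(Fraction(df) + Fraction(N, 2) - delta)
        total += Fraction(2 * factorial(df)**2 * factorial(N),
                          2**N * factorial(a_plus) * factorial(a_minus))
    return total

print(P_cl(2, 3), P_cl(3, 3))   # -> 7/12 11/30
\end{verbatim}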
We first note that a generic term in the sum~(\ref{eq:Pssum}) can be written as the factor $2^{2d_\flat+1}\xi_\Delta d_\flat!^2 N!/(2d_\flat+N)!$ times a binomial distribution that peaks at $\Delta=0$ for large $N$. Hence, the dominant contribution in this limit is
\begin{eqnarray}
P_{\rm s}^{\rm cl}&\sim&\xi_{0}\frac{2^{2d_\flat+1} d_{\flat}!^{2}N!}{(2d_{\flat}+N)!}
\sim \xi_{0}\frac{2^{2d-1}(d-1)!^{2}}{N^{2d-2}} .
\label{eq:Psfopt2}
\end{eqnarray}
From the definition of $\xi_\Delta$, given above Eq.~(\ref{eq:Pssum}), and that of $\Delta$ in Eq.~(\ref{eq:bias}), we readily see that $\xi_{0}$ is the number of ordered partitions (i.e., the order matters) of~$N$ in~$d$ addends or parts\footnote{These ordered partitions
are known as {\em weak compositions} of $N$ into $d$ parts in combinatorics, where {\em weak} means that some addends (or parts) can be zero; in contradistinction, the term {\em composition} is used when all the parts are strictly positive.
} (the occurrence counts~$M_s$) such that a subset of these addends
is an ordered partition of~$N/2$ as well.
Young diagrams come in handy to compute~$\xi_0$. First, we draw pairs of diagrams, $[\lambda,\lambda']$, each of $N/2$ boxes and such that $\lambda\ge\lambda'$ (in lexicographical order; see Appendix~\ref{app:partitions}), and $l(\lambda)+l(\lambda')\equiv r+r'\le d$, i.e., the total number of rows should not exceed $d$. Next, we fill the boxes with symbols $s_i$ (representing possible data points) so that all the boxes in each row have the same symbol. We readily see that the number of different fillings gives us $\xi_0$.
An example is provided in Fig.~\ref{fig:counting} for clarity.
\begin{figure}[htbp]
\ytableausetup{mathmode,boxsize=.8em,aligntableaux=top}
\begin{gather*}
\scriptstyle\frac{4\cdot3}{2}
\begin{array}{l}
\ydiagram{4}\\[-.3em]
\ydiagram{4}
\end{array}
\phantom{+}
\scriptstyle4\cdot3\cdot2
\begin{array}{l}
\ydiagram{4}\\[-.3em]
\ydiagram{3,1}
\end{array}
\phantom{+}
\scriptstyle{4\cdot3\cdot2\over2}
\begin{array}{l}
\ydiagram{4}\\[-.3em]
\ydiagram{2,2}
\end{array}
\phantom{+}\scriptstyle {4!\over2}
\begin{array}{l}
\ydiagram{4}\\[-.3em]
\ydiagram{2,1,1}
\end{array}\nonumber\\
\scriptstyle\frac{4!}{2\cdot2}
\begin{array}{l}
\ydiagram{3,1}\\
\ydiagram{3,1}
\end{array}
\phantom{+} \scriptstyle\frac{4!}{2}
\begin{array}{l}
\ydiagram{3,1}\\
\ydiagram{2,2}
\end{array}
\phantom{+}
\scriptstyle {4!\over4!}
\begin{array}{l}
\ydiagram{2,2}\\
\ydiagram{2,2}
\end{array}
\end{gather*}
\caption{Use of Young diagrams for computing $\xi_0$. In the example, $N=8$ and $d=4$. The fraction before each pair gives the number of different fillings and hints at how it has been computed.}
\label{fig:counting}
\end{figure}
Although this pictorial method eases the computation of $\xi_0$, it becomes impractical even for relatively small values of $N$. However, it becomes again very useful in the asymptotic limit, since the number of Young diagrams with at least two rows of equal size becomes negligibly small for large $N$.\footnote{Actually, the number of Young diagrams of a given length $r$ with unequal numbers of boxes in each row is equal to the number of Young diagrams of $N-r(r-1)/2$ boxes, i.e., it is equal to $P^{(r)}_{N-r(r-1)/2}$. Using the results in Appendix~\ref{app:partitions}, we immediately see that for large $N$ one has $P^{(r)}_{N-r(r-1)/2}/P^{(r)}_N\sim 1$, which proves the statement.} The same conclusion applies to the whole pairs $[\lambda,\lambda']$, since, e.g., by reshuffling rows, one could merge the two members into a single diagram of~$N$ boxes and length $r+r'$. Thus, we may assume that all pairs of diagrams with a given total length have unequal numbers of boxes in each row, which renders the counting of different fillings trivial: there are \mbox{$d!/(d-r-r')!$} ways of filling each pair of diagrams (the falling factorial $d(d-1)\cdots(d-r-r'+1)$, as each of the $r+r'$ rows must carry a distinct symbol).
Recalling that there is a one-to-one mapping between partitions and Young diagrams, we can use Eq.~(\ref{partAsym}) and write
\begin{align}
\xi_{0}&\sim\frac{1}{2}\sum_{r=1}^{d-1}\sum_{r'=1}^{d-r} P^{(r)}_{N\over2}P^{(r')}_{N\over2}\frac{d!}{(d-r-r')!}\nonumber\\
&\sim\frac{1}{2}\left(\frac{N}{2}\right)^{d-2}\sum_{r=1}^{d}\frac{r (d-r) d!}{r!^{2}(d-r)!^{2}}\nonumber\\
&\sim\frac{1}{2}\left(\frac{N}{2}\right)^{d-2}\frac{(2d-2)!}{(d-2)!(d-1)!^{2}}\,.
\end{align}
This result, together with \eqref{eq:Psfopt2}, leads us to the desired asymptotic expression for the optimal success probability:
\begin{equation}
P_{\rm s}^{\rm cl}\sim\left(\frac{2}{N}\right)^{d}\frac{(2d-2)!}{(d-2)!}\,.
\label{eq:PsfoptFinal}
\end{equation}
\blue{
\section{Optimal clustering protocol for known classical states}\label{app:classic_known}
In this Appendix, we give a short discussion on clustering classical states under the assumption that the underlying probability distributions are known. In particular, we discuss two low-dimensional cases, $d=2,3$, and derive the asymptotic expression of the success probability of clustering for large data string length $N$ and arbitrary data dimension~$d$.
We stick to the notation introduced in Appendix~\ref{app:classic}.
If the underlying probability distributions are known, a given data point $s$ is optimally assigned to the probability distribution for which $s$ is most likely. The success probability is thus given by $\max\{p_s,q_s\}/2$ (recall that the data is assumed to be drawn from either $P$ or $Q$ with equal prior probability).
The average success probability of clustering over all possible strings of length~$N$ then reads
\begin{equation}
\label{KP-2}
\kern-0.4em P^{\rm cl}_{{\rm s},PQ}\!
=\!\frac{1}{2^N}\!\!\left[\!\left(\sum_{s=1}^d\! \max\{p_s,q_s\}\!\!\right)^{\!\!N}\!\!\!\!+\!
\left(\sum_{s=1}^d \!\min\{p_s,q_s\}\!\!\right)^{\!\!N}\right]\!\!,\!\!
\end{equation}
where the second term arises because assigning the wrong probability distribution to {\em all} data points in~${\bf s}$ gives a correct clustering.
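As a sanity check of Eq.~\eqref{KP-2} (our own illustration, with arbitrary example distributions), the following Python sketch compares the closed formula with a direct Monte Carlo simulation of the optimal point-by-point assignment.
\begin{verbatim}
import random

def success_formula(p, q, N):
    """Eq. (KP-2): exact success probability for known P and Q."""
    hi = sum(max(ps, qs) for ps, qs in zip(p, q))
    lo = sum(min(ps, qs) for ps, qs in zip(p, q))
    return (hi**N + lo**N) / 2**N

def success_mc(p, q, N, trials=200_000):
    """Each point is drawn from P or Q with probability 1/2 and assigned
    to the distribution under which its symbol is likelier; clustering
    succeeds if all points are assigned right, or all of them wrong."""
    d = len(p)
    assign_P = [ps >= qs for ps, qs in zip(p, q)]
    hits = 0
    for _ in range(trials):
        correct = []
        for _ in range(N):
            from_P = random.random() < 0.5
            s = random.choices(range(d), weights=p if from_P else q)[0]
            correct.append(assign_P[s] == from_P)
        hits += all(correct) or not any(correct)
    return hits / trials

p, q = [0.5, 0.3, 0.2], [0.1, 0.3, 0.6]   # illustrative d = 3 case
print(success_formula(p, q, N=4))          # 0.2482
print(success_mc(p, q, N=4))               # ~0.248
\end{verbatim}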
In order to compare with our results for unknown classical states, we average the success probability over a uniform distribution of categorical probability distributions. This yields
\begin{equation}
\label{KP-3}
P^{\rm cl}_{\rm s}=\int_{{\mathsf S}_d}\!\! dP \int_{{\mathsf S}_d}\!\! dQ \,P^{\rm cl}_{{\rm s},PQ}\,,
\end{equation}
where the integration over the simplex ${\mathsf S}_d$, shared by $P$ and $Q$,
is defined in Appendix~\ref{app:prior}.
To perform the integral in Eq.~(\ref{KP-3}) we need to partition ${\mathsf S}_d\times{\mathsf S}_d$ in different regions according to whether \mbox{$p_s\le q_s$} or $p_s>q_s$ for the various symbols.
By symmetry, the integral can only depend on the number $r$ of symbols for which $p_s>q_s$ (not on its particular value).
Hence, $r=1,\dots,d-1$ labels the different types of integrals that we need to compute to evaluate~$P_{\rm s}^{\rm cl}$. Notice that we have the additional symmetry $r\leftrightarrow d-r$, corresponding to exchanging $p_s$ and $q_s$ for all~$s$.
Since the value of these integrals does not depend on the specific value of~$s$, we can choose all $p_s$ with $s=1,2,\dots,r$ to satisfy $p_s>q_s$ and all $p_s$ with $s=r+1,r+2,\dots, d$ to satisfy \mbox{$p_s\le q_s$}.
To shorten the expressions below, we define
\begin{equation}
\label{KP-6}
{\mathfrak p}_k:=\sum_{s=1}^k p_s\,,\quad
{\mathfrak q}_k:=\sum_{s=1}^k q_s\,.
\end{equation}
With these definitions ${\mathfrak p}_d={\mathfrak q}_d=1$, $\sum_{s=r+1}^dq_s=1-{\mathfrak q}_{r}$, and likewise \raisebox{0ex}[0ex][0ex]{$\sum_{s=r+1}^dp_s=1-{\mathfrak p}_{r}$}.
The integrals that we need to compute are then
\begin{multline}
\label{KP-8}
I^d_r:=\!\! \int_{{\mathsf S}_d} \!\!\!dP\,
{1\over V_d}\!\int_{0}^{p_1}\!\!\! dq_1 \cdots\!\!
\int_{0}^{p_r}\!\!\! dq_r \\
\times \int_{p_{r+1}}^{{\mathfrak p}_{r+1}\!-{\mathfrak q}_r}\!\!\!\! dq_{r+1} \cdots\!\!
\int_{p_{d-1}}^{{\mathfrak p}_{d-1}\!-{\mathfrak q}_{d-2}}\!\!\!\! dq_{d-1} \\
\times\left[ (1\!+\!{\mathfrak p}_r\!-\!{\mathfrak q}_r)^N\!\!+(1\!+\!{\mathfrak q}_r\!-\!{\mathfrak p}_r)^N \right],
\end{multline}
and we note that, as anticipated, $I^d_r=I^d_{d-r}$. The average probability of successful clustering then reads
\begin{equation}
\label{KP-11}
P^{\rm cl}_{\rm s}=\frac{1}{2^N}\sum_{r=1}^{d-1} \binom{d}{r}I^d_r,
\end{equation}
where the binomial is the number of equivalent integral regions for the given $r$.
\subsubsection*{Low data dimension}
We can now discuss the lowest dimensional cases, for which explicit closed formulas for $I^d_r$ can be derived. For $d=2$ one has
\begin{equation}
\label{KP-12}
P^{\rm cl}_{\rm s}=\frac{8-2^{2-N}}{(N+2)(N+1)}.
\end{equation}
This result coincides with that for unknown probability distributions given in Eq.~\eqref{eq:Pssum} with $\xi_\Delta=1$. This is an expected result, as the optimal protocol for known and unknown probability distributions is exactly the same: assign to the same cluster all data points that show the same symbol~$s$. Therefore, knowing the probability distribution does not provide any advantage for $d=2$.
For $d>2$, however, knowledge of the distributions $P$ and $Q$ helps in classifying the data points. If $d=3$, the success probability \eqref{KP-11} can be computed to be
\begin{equation}
\label{KP-13}
P^{\rm cl}_{\rm s}=6\frac{2^5 (N-2)-2^{2-N}(N^2+7N+18)}{(N+4)(N+3)(N+2)(N+1)}\,.
\end{equation}
In Table~\ref{table:K-U} we compare five values of $P^{\rm cl}_{\rm s}$ in Eq.~\eqref{KP-13}, for $N=2,3,\dots,6$, with those for unknown distributions $P$ and $Q$ given by Eq.~\eqref{eq:Pssum}. As expected, the success probability is larger if $P$ and $Q$ are known. The source of the increase is illustrated by the string ${\bf s}=(112)$, which would be labeled as $PPQ$ (or $QQP$) if $P$ and $Q$ were unknown. However, if they are known and, e.g., $p_1>q_1$ and $p_2>q_2$, the string will be more appropriately labeled as $PPP$.
\begin{table}
\blue{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$N$ & 2 &3 &4 &5 & 6\\
\hline\hline
Unknown: & 7/12 & 11/30 & 0.250 & 0.176 & 0.130\\
\hline
Known: & 3/5 & 2/5 & 0.283 & 0.210 & 0.160 \\
\hline
\end{tabular}
\caption{The success probability $P^{\rm cl}_{\rm s}$ for $d=3$ and data string lengths $N=2,\dots,6$ in the cases of known and unknown distributions $P$ and $Q$. For unknown distributions, the values are computed using Eq.~\eqref{eq:Pssum} in Appendix~\ref{app:classic}. For known distributions, the values are given by Eq.~\eqref{KP-13}. The table shows that knowing $P$ and $Q$ increases the success probability of clustering.}
\label{table:K-U}
}
\end{table}
\subsubsection*{Arbitrary data dimension. Large $N$ limit}
For increasing $N$, however, the advantage of knowing $P$ and $Q$ becomes less significant and vanishes asymptotically. This can be checked explicitly for $d=2,3$ by expanding Eqs.~(\ref{KP-12}) and~(\ref{KP-13}) in inverse powers of $N$. In this regime the average is dominated by distributions for which ${\mathfrak p}_r \approx 1$ and ${\mathfrak q}_r \approx 0$. Since in a typical string
approximately half of the data will come from the distribution~$P$ and the other half from~$Q$, the optimal clustering protocol will essentially coincide with that for unknown distributions, i.e., it will collect the data points showing the same symbol in the same subcluster and afterwards merge the subclusters into two clusters of approximately the same size. We next prove that this intuition is right for all $d$.
In the proof, we will make repeated use of the trivial observation that, for asymptotically large $N$ and $0<a<b<c$, one has
\begin{equation}\label{KP-ebc2}
\int_a^b (c-x)^N dx \sim (c-a)^{N+1} /N\,.
\end{equation}
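Indeed, the integral evaluates exactly to $[(c-a)^{N+1}-(c-b)^{N+1}]/(N+1)$, whose second term is exponentially suppressed. The following short Python check (ours, with arbitrary sample values) illustrates the quality of the approximation.
\begin{verbatim}
a, b, c = 0.2, 0.7, 1.0
for N in (10, 100, 1000):
    exact = ((c - a)**(N + 1) - (c - b)**(N + 1)) / (N + 1)
    approx = (c - a)**(N + 1) / N
    print(N, exact / approx)   # ratio is about N/(N+1) -> 1 as N grows
\end{verbatim}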
We also
note that the contribution to the success probability coming from the completely wrong assignment of distributions, i.e., $(1+{\mathfrak q}_r-{\mathfrak p}_r)^N$, is exponentially vanishing, since we assumed $p_s>q_s$ for $s\le r$, and thus ${\mathfrak q}_r-{\mathfrak p}_r<0$ [this is well illustrated by the terms proportional to $2^{2-N}$ in Eqs.~(\ref{KP-12}) and~(\ref{KP-13})].
Because of this last observation, we can drop the second term in the integrand of $I^d_r$, Eq.~(\ref{KP-8}).
The integrals over~$q_s$, $s\le r$, of the remaining term, $(1+{\mathfrak p}_r-{\mathfrak q}_r)^N$,
are dominated by the lower limit, $q_s=0$, as this value maximizes $1+{\mathfrak p}_r-{\mathfrak q}_r$. Using Eq.~(\ref{KP-ebc2}) we get
\begin{multline}
\label{KP-ebc1}
I^d_r\sim{(d-1)!\over N^r}\int_{{\mathsf S}_d} \!\!\!dP\\
\times\int_{p_{r+1}}^{{\mathfrak p}_{r+1}\!-{\mathfrak q}_r}\!\!\!\! dq_{r+1} \cdots\!\!
\int_{p_{d-1}}^{{\mathfrak p}_{d-1}\!-{\mathfrak q}_{d-2}}\!\!\!\! dq_{d-1} (1\!+\!{\mathfrak p}_r)^{N+r}\,,
\end{multline}
where we recalled that the volume of the simplex ${\mathsf S}_d$ is $V_d=1/(d-1)!$.
For the remaining integrals over $q_s$
in Eq.~(\ref{KP-ebc1}) we can take the lower limits to be \mbox{$p_s\approx 0$}, for \mbox{$s\ge r+1$}, since the integrand is maximized by \mbox{${\mathfrak p}_r\approx 1$}. Therefore, the upper limits become $1$, $1-q_{r+1}$, \dots, $1-\sum_{s=r+1}^{d-2}q_s$. We identify these upper and lower limits as those of an integral over a $(d-r-1)$-dimensional probability simplex ${\mathsf S}_{d-r}$. We can thus write
\begin{equation}
\label{KP-as-1}
I_r^d\sim \frac{(d-1)!}{(d-r-1)! N^r} \int_{{\mathsf S}_d} \!\!dP\,(1+{\mathfrak p}_r)^{N+r}\,.
\end{equation}
The last equation can be cast as
\begin{equation}
I_r^d\sim \frac{(d-1)!}{(d-r-1)! N^r} \int_{{\mathsf S}_d} \!\!dP\left(\!2\!-\!\sum_{s=r}^{d-1} p_s\!\right)^{\!N+r}\,,
\end{equation}
where we have used again that ${\mathfrak p}_r=1-\sum_{s=r+1}^dp_s$ and noted that under the integral sign we are free to relabel the variables $p_s$.
According to the definition of~$\int_{{\mathsf S}_d}\!dP$, we need to perform $d-r$ integrals over the variables $p_{r},p_{r+1},\cdots, p_{d-1}$, for which we can use Eq.~(\ref{KP-ebc2}). This yields a factor $2^{N+d}/N^{d-r}$. The remaining integrals over~$p_1,p_2,\dots,p_{r-1}$ of this constant factor give an additional $1/(r-1)!$, as they effectively correspond to an integral over an $(r-1)$-dimensional simplex.
Putting the different pieces together, the asymptotic expression of $I^d_r$ reads
\begin{equation}
\label{KP-as-2}
I_r^d\simeq \frac{2^{N+d}}{N^d}\frac{[(d-1)!]^2}{(r-1)! (d-r-1)!}\,.
\end{equation}
We are now in a position to compute the asymptotic success probability. Inserting Eq.~\eqref{KP-as-2} into Eq.~\eqref{KP-11} we readily obtain
\begin{align}
\label{KP-as-3}
P^{\rm cl}_{\rm s}&\sim
\left(\frac{2}{N}\right)^d(d\!-\!1)!(d\!-\!1)\sum_{r=1}^{d-1} \binom{d}{r}\binom{d\!-\!2}{d\!-\!r\!-\!1} \nonumber\\
& = \left(\frac{2}{N}\right)^d\frac{(2d-2)!}{(d-2)!}\,,
\end{align}
where we have used the well-known binomial identity $\sum_k\binom{a}{k}\binom{b}{s-k}=\binom{a+b}{s}$ [here, $k$ ranges over all values for which the binomials make sense].
Eq.~\eqref{KP-as-3} coincides with the asymptotic expression for the unknown case, Eq.~\eqref{ps_asym_cl}, as we anticipated.
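The last step can also be verified mechanically; the short Python check below (ours) confirms that $(d-1)!\,(d-1)\sum_{r=1}^{d-1}\binom{d}{r}\binom{d-2}{d-r-1}=(2d-2)!/(d-2)!$ over a range of dimensions.
\begin{verbatim}
from math import comb, factorial

for d in range(2, 21):
    lhs = factorial(d - 1) * (d - 1) * sum(
        comb(d, r) * comb(d - 2, d - r - 1) for r in range(1, d))
    rhs = factorial(2 * d - 2) // factorial(d - 2)
    assert lhs == rhs
print("identity verified for d = 2, ..., 20")
\end{verbatim}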
}
\end{document}
\begin{document}
\title{Bribery Can Get Harder in\ Structured Multiwinner Approval Election}
\begin{abstract}
We study the complexity of bribery in the context of structured
multiwinner approval elections. Given such an election, we ask
whether a certain candidate can join the winning committee by
adding, deleting, or swapping approvals, where each such action
comes at a cost and we are limited by a budget. We assume our
elections to either have the candidate interval or the voter
interval property, and we require the property to hold also after
the bribery. While structured elections usually make manipulative
attacks significantly easier, our work also shows examples of
opposite behavior.
\end{abstract}
\section{Introduction}\label{sec:intro}
We study the complexity of bribery under the multiwinner approval
rule, in the case where the voters' preferences are structured.
Specifically, we use the bribery model of \citett{Faliszewski, Skowron,
and Talmon}{fal-sko-tal:c:bribery-measure}, where one can either add,
delete, or swap approvals, and we consider \emph{candidate interval}
and \emph{voter interval}
preferences~\citep{elk-lac:c:approval-sp-sc}.
In multiwinner elections,
the voters express their preferences over the available candidates
and
use this information to select a winning committee (i.e., a
fixed-size subset of candidates).
We focus on one of the simplest and most common scenarios, where each
voter specifies the candidates that he or she approves, and those with
the highest numbers of approvals form the committee. Such elections
are used, e.g., to choose city councils, boards of trustees, or to
shortlist job candidates. Naturally, there are many other rules and
scenarios, but they do not appear in practice as often as this
simplest one. For more details on multiwinner voting, we point the
readers to the overviews of
\citett{Faliszewski et al.}{fal-sko-sli-tal:b:multiwinner-voting} and
\citett{Lackner and Skowron}{lac-sko:t:approval-survey}.
In our scenario, we are given an election, including the contents of
all the votes, and, depending on the variant, we can either add,
delete, or swap approvals, but each such action comes at a cost. Our
goal is to find a cheapest set of actions that ensure that a given
candidate joins the winning committee.
Such problems, where we modify the votes to ensure a certain
outcome, are known under the umbrella name of \emph{bribery}, and
were first studied by \citett{Faliszewski, Hemaspaandra and
Hemaspaandra}{fal-hem-hem:j:bribery}, whereas our specific variant
is due to \citett{Faliszewski, Skowron, and
Talmon}{fal-sko-tal:c:bribery-measure}. Historically, bribery
problems indeed aimed to model vote buying, but currently more
benign interpretations prevail. For example, \citett{Faliszewski,
Skowron, and Talmon}{fal-sko-tal:c:bribery-measure} suggest using
the cost of bribery as a measure of a candidate's success:
A~candidate who did not win, but can be put into the committee at a
low cost, certainly did better than one whose bribery is
expensive. In particular, since our problem is used for
post-election analysis, it is natural to assume that we know all the
votes. For other similar interpretations, we point, e.g., to the
works of \citett{Xia}{xia:c:margin-of-victory},
\citett{Shiryaev, Yu, and Elkind}{shi-yu-elk:c:robustness},
\citett{Bredereck et al.}{bre-fal-kac-nie-sko-tal:j:robustness},
\citett{Boehmer et al.}{boe-bre-fal-nie:c:counting-swap-bribery}, or
\citett{Baumeister and Hogrebe}{bau-hog:c:counting-bribery}.
\citett{Faliszewski and Rothe}{fal-rot:b:control-bribery} give a more general overview of bribery problems.
We assume that our elections either satisfy the candidate interval
(CI) or the voter interval (VI)
property~\citep{elk-lac:c:approval-sp-sc}, which
correspond to the classic notions of
single-peakedness~\citep{bla:b:polsci:committees-elections} and
single-crossingness~\citep{mir:j:single-crossing,rob:j:tax} from the
world of ordinal elections. Briefly put, the CI property means that
the candidates are ordered and each voter approves some interval of
them, whereas the VI property requires that the voters are ordered
and each candidate is approved by some interval of voters. For
example, the CI assumption can be used to model political elections,
where the candidates appear on the left-to-right spectrum of
opinions and the voters approve those, whose opinions are close
enough to their own. Importantly, we require our elections to have
the CI/VI property also after the bribery; this approach is standard
in bribery problems with structured
elections~\citep{bra-bri-hem-hem:j:sp2,men-lar:c:bribery-sp-hard,elk-fal-gup-roy:c:swap-shift-sp-sc},
as well as in other problems related to manipulating election
results~\citep{wal:c:uncertainty-in-preference-elicitation-aggregation,fal-hem-hem-rot:j:single-peaked-preferences,fit-hem:c:ties-sp-manipulation-bribery}
(these references are examples only, there are many more papers that
use this model).
\begin{example}
Let us consider a hotel in a holiday resort. The hotel has its
base staff, but each month it also hires some additional help. For
the coming month, the expectation is to hire extra staff for $k$
days. Naturally, they would be hired for the days when the hotel
is most busy (the decision to request additional help is made a
day ahead, based on the observed load). Since hotel bookings are
typically made in advance, one knows which days are expected to be
most busy. However, some people will extend their stays, some
will leave early, and some will have to shift their stays. Thus
the hotel managers would like to know which days are likely to
become the busiest ones after such changes: Then they could inform
the extra staff as to when they are expected to be needed, and
what changes in this preliminary schedule might happen.
Our bribery
problem (for the CI setting) captures exactly the problem that the
managers want to solve: The days are the candidates, $k$ is the
committee size, and the bookings are the approval votes (note that each booking must regard a consecutive set of days). Prices of
adding, deleting, and moving approvals correspond to the
likelihood that a particular change actually happens (the managers usually know which changes are more or less likely).
Since the bookings must be consecutive, the election has
to have the CI property also after the bribery. The managers can
solve such a bribery problem for each of the days and see
which ones can most easily be among the $k$ busiest ones.
\end{example}
\begin{example}
For the VI setting, let us consider a related scenario.
There is a team of archaeologists who booked a set of excavation
sites, each for some consecutive number of days (they work on several sites in parallel). The team may want to add
some extra staff to those sites that require most working
days. However, as in the previous example, the bookings might get
extended or shortened. The team's manager may use bribery to evaluate
how likely it is that each of the sites becomes one of the most
work-demanding ones. In this case, the days are the voters, and the
sites are the candidates.
\end{example}
There are two main reasons why structured elections are studied.
Foremost, as in the above examples, sometimes they simply capture the
exact problem at hand.
Second, many problems that are intractable in general become
polynomial-time solvable if the elections are structured. Indeed,
this is the case for many ${\mathrm{NP}}$-hard winner-determination
problems~\citep{bet-sli-uhl:j:mon-cc,elk-lac:c:approval-sp-sc,pet-lac:j:spoc}
and for various problems where the goal is to make some candidate a
winner~\citep{fal-hem-hem-rot:j:single-peaked-preferences,mag-fal:j:sc-control},
including some bribery
problems~\citep{bra-bri-hem-hem:j:sp2,elk-fal-gup-roy:c:swap-shift-sp-sc}. There
are also some problems that stay intractable even for structured
elections~\citep{elk-fal-sch:j:fallback-shift,yan:c:borda-control-sp}\footnote{These
references are not complete and are meant as examples.}
as well as examples of complexity reversals,
where assuming structured preferences turns a polynomial-time
solvable problem into an intractable one. However, such reversals are
rare and, to the best of our knowledge, so far were only observed by
\citett{Menon and Larson}{men-lar:c:bribery-sp-hard}, for the case
of weighted elections with three candidates (but see also the work
of \citett{Fitzsimmons and
Hemaspaandra}{fit-hem:c:ties-sp-manipulation-bribery}, who find
complexity reversals that stem from replacing total ordinal votes
with ones that include ties).
\myparagraph{Our Contribution.} We provide an almost complete
picture of the complexity of bribery by either adding, deleting, or
swapping approvals under the multiwinner approval voting rule, for
the case of CI and VI elections, assuming either that each bribery
action has identical unit price or that they can be priced
individually (see Table~\ref{tab:results}). By comparing our results
to those for the unrestricted setting, provided by
\citett{Faliszewski, Skowron, and
Talmon}{fal-sko-tal:c:bribery-measure}, we find that any
combination of tractability and intractability in the structured and
unrestricted setting is possible. For example:
\begin{enumerate}
\item Bribery by adding approvals is solvable in polynomial time
irrespective of whether the elections are unrestricted or have the CI or VI
property.
\item Bribery by deleting approvals (where each deleting action is
individually priced) is solvable in polynomial time in the
unrestricted setting, but becomes ${\mathrm{NP}}$-hard for CI elections (for
VI ones it is still in ${\mathrm{P}}$).
\item Bribery by swapping approvals only to the designated candidate (with
individually priced actions) is ${\mathrm{NP}}$-hard in the unrestricted
setting, but becomes polynomial-time solvable both for CI and VI
elections.
\item Bribery by swapping approvals (where each action is individually
priced and we are not required to swap approvals to the designated
candidate only) is ${\mathrm{NP}}$-hard in each of the considered settings.
\end{enumerate}
\noindent\textbf{Possibility of Complexity Reversals.}\;
So far, most of the problems studied for structured elections were
subproblems of the unrestricted ones. For example, a winner
determination algorithm that works for all elections clearly also
works for the structured ones, and a complexity reversal is impossible.
The case of bribery is different because, by assuming structured
elections, not only do we restrict the set of possible inputs, but
also we constrain the possible actions. Yet, scenarios where
bribery is tractable are rare, and only a handful of papers
considered bribery in structured domains (we mention those of
\citett{Brandt et al.}{bra-bri-hem-hem:j:sp2}, \citett{Fitzsimmons
and Hemaspaandra}{fit-hem:c:ties-sp-manipulation-bribery},
\citett{Menon and Larson}{men-lar:c:bribery-sp-hard}, \citett{Elkind
et al.}{elk-fal-gup-roy:c:swap-shift-sp-sc}), so opportunities for
observing complexity reversals were, so far, very limited. We show
several such reversals, obtained for very natural settings.
\begin{table}
\centering
\setlength{\tabcolsep}{3pt}
\scalebox{0.83}{
\begin{tabular}{r|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{Unrestricted} & \multicolumn{2}{c|}{Candidate Interval (CI)} & \multicolumn{2}{c}{Voter Interval (VI)} \\
{\small (prices)}
& \multicolumn{1}{c}{\small (unit)}
& \multicolumn{1}{c|}{\small (any)}
& \multicolumn{1}{c}{\small (unit)}
& \multicolumn{1}{c|}{\small (any)}
& \multicolumn{1}{c}{\small (unit)}
& \multicolumn{1}{c}{\small (any)} \\
\midrule
\textsc{AddApprovals} & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ \\[-3pt]
& \multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}}
& \scriptsize{[Thm.~\ref{thm:add-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:add-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:add-approvals-vi}]} & \scriptsize{[Thm.~\ref{thm:add-approvals-vi}]} \\
\midrule
\textsc{DelApprovals} & ${\mathrm{P}}$ & ${\mathrm{P}}$ & \multirow{2}{*}{?} & ${\mathrm{NP}}$-com. & ${\mathrm{P}}$ & ${\mathrm{P}}$ \\
& \multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}}&
& \scriptsize{[Thm.~\ref{thm:delete-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:delete-approvals-vi}]} & \scriptsize{[Thm.~\ref{thm:delete-approvals-vi}]} \\
\midrule
\textsc{SwapApprovals} & ${\mathrm{P}}$ & ${\mathrm{NP}}$-com. & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ & ${\mathrm{P}}$ \\[-3pt]
\textsc{to p} &\multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}}& \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-ci}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-ci}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-vi}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-to-p-vi}]} \\[2mm]
\textsc{SwapApprovals} & ${\mathrm{P}}$ & ${\mathrm{NP}}$-com. & ${\mathrm{NP}}$-com. & ${\mathrm{NP}}$-com. & \multirow{2}{*}{?} & ${\mathrm{NP}}$-com. \\[-3pt]
&\multicolumn{2}{c|}{\scriptsize{Faliszewski et al.~\citeyearpar{fal-sko-tal:c:bribery-measure}}}& \scriptsize{[Thm.~\ref{thm:swap-approvals-ci}]} & \scriptsize{[Thm.~\ref{thm:swap-approvals-ci}]} & & \scriptsize{[Thm.~\ref{thm:vi-swaps}]} \\
\bottomrule
\end{tabular}}
\caption{Our results for the CI and VI domains,
together with those of \protect\citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure} for
the unrestricted setting. \textsc{SwapApprovals to p} refers to the
problem where each action has to move an approval to the preferred
candidate.}
\label{tab:results}
\end{table}
\section{Preliminaries}
For a positive integer $t$, we write $[t]$ to mean the set
$\{1, \ldots, t\}$. By writing $[t]_0$ we mean the set
$[t] \cup \{0\}$.
\myparagraph{Approval Elections.}
An \emph{approval election} $E = (C,V)$ consists of a set of
candidates $C = \{c_1, \ldots, c_m\}$ and a collection of voters
$V = \{v_1, \ldots, v_n\}$. Each voter $v_i \in V$ has \emph{an
approval ballot} (or, equivalently, \emph{an approval set}) which
contains the candidates that $v_i$ approves. We write $v_i$ to refer
both to the voter and to his or her approval ballot; the exact meaning
will always be clear from the context.
A \emph{multiwinner voting rule} is a function $f$ that given an
election $E = (C,V)$ and a committee size $k \in [|C|]$ outputs a
nonempty family of winning committees (where each committee is a
size-$k$ subset of $C$).
We disregard the issue of tie-breaking and assume all winning
committees to be equally worthy,
i.e., we adopt the nonunique winner model.
Given an election $E = (C,V)$, we let the approval score of a candidate
$c \in C$ be the number of voters that approve~$c$, and we denote it as
$\avscore_E(c)$. The approval score of a committee $S \subseteq C$ is
$\avscore_E(S) = \sum_{c \in S} \avscore_E(c)$. Given an election~$E$
and a committee size~$k$, the \emph{multiwinner approval voting} rule,
denoted $\av$, outputs all size-$k$ committees with the highest
approval score.
Occasionally we also consider the \emph{single-winner approval rule},
which is defined in the same way as its multiwinner variant, except
that the committee size is fixed to be one. For simplicity, in this
case we assume that the rule returns a set of tied winners (rather
than a set of tied size-$1$ winning committees).
\myparagraph{Structured Elections.} We focus on elections where the
approval ballots satisfy either the \emph{candidate interval (CI)} or
the \emph{voter interval (VI)}
properties~\citep{elk-lac:c:approval-sp-sc}:
\begin{enumerate}
\item An election has the CI
property (is a CI election) if there is an ordering of the candidates
(called the \emph{societal axis}) such that each approval ballot
forms an interval with respect to this ordering.
\item An election has the VI
property (is a VI election) if there is an ordering of the voters so
that each candidate is approved by an interval of the voters (for
this ordering).
\end{enumerate}
Given a CI election, we say that the voters have CI ballots or,
equivalently, CI preferences; we use an analogous convention for the VI
case. As observed by \citett{Elkind and Lackner}{elk-lac:c:approval-sp-sc}, there are
polynomial-time algorithms that test if a given election is CI or VI
and, if so, provide appropriate orders of the candidates or voters;
these algorithms are based on solving the \emph{consecutive ones}
problem~\citep{boo-lue:j:consecutive-ones-property}.
\myparagraph{Notation for CI Elections.}
Let us consider a candidate set $C = \{c_1, \ldots, c_m\}$ and a
societal axis $\rhd = c_1 c_2 \cdots c_m$. Given two candidates
$c_i, c_j$, where $i \leq j$, we write $[c_i,c_j]$ to denote the
approval set $\{c_i, c_{i+1}, \ldots, c_j\}$.
\myparagraph{Bribery Problems.}
We focus on the variants of bribery in multiwinner approval elections
defined by~\citett{Faliszewski, Skowron, and Talmon}{fal-sko-tal:c:bribery-measure}.
Let $f$ be a multiwinner voting rule and let \textsc{Op} be one of
\textsc{AddApprovals}, \textsc{DelApprovals}, and
\textsc{SwapApprovals} operations (in our case $f$ will either be
$\av$ or its single-winner variant). In the
$f$-\textsc{Op-Bribery} problem we are given an election
$E = (C,V$), a committee size~$k$, a preferred candidate $p$, and a
nonnegative integer $B$ (the budget). We ask if it is possible to
perform at most $B$ unit operations of type \textsc{Op}, so that $p$
belongs to at least one winning committee:
\begin{enumerate}
\item For \textsc{AddApprovals}, a unit operation adds a given
candidate to a given voter's ballot.
\item For \textsc{DelApprovals}, a unit operation removes a given
candidate from a given voter's ballot.
\item For \textsc{SwapApprovals}, a unit operation replaces a given
candidate with another one in a given voter's ballot.
\end{enumerate}
Like \citett{Faliszewski, Skowron, and
Talmon}{fal-sko-tal:c:bribery-measure}, we also study the
variants of \textsc{AddApprovals} and \textsc{SwapApprovals} problems
where each unit operation must involve the preferred candidate.
We are also interested in the priced variants of the above problems,
where each unit operation comes at a cost that may depend both on the
voter and the particular affected candidates;
we ask if we can achieve our goal by performing operations of total
cost at most~$B$. We distinguish the priced variants
by putting a dollar sign in front of the operation type. For example,
\textsc{\$AddApprovals} means a variant where adding each candidate to
each approval ballot has an individual cost.
\myparagraph{Bribery in Structured Elections.}
We focus on the bribery problems where the elections have either the CI
or the VI property. For example, in the
\textsc{AV-\$AddApprovals-CI-Bribery} problem the input election has
the CI property (under a given societal axis) and we ask if it is
possible to add approvals of total cost at most the budget so that (a) the
resulting election has the CI property for the same societal axis, and
(b) the preferred candidate belongs to at least one winning committee.
The VI variants are defined analogously (in particular, the voters'
order witnessing the VI property is given
and the election must still have the VI property with respect to this
order after the bribery).
The convention that the election must have the same structural
property before and after the bribery, as well as the fact that the
order witnessing this property is part of the input, is standard in
the literature; see, e.g., the works of
\citett{Faliszewski et al.}{fal-hem-hem-rot:j:single-peaked-preferences},
\citett{Brandt et al.}{bra-bri-hem-hem:j:sp2},
\citett{Menon and Larson}{men-lar:c:bribery-sp-hard}, and
\citett{Elkind et al.}{elk-fal-gup-roy:c:swap-shift-sp-sc}.
\myparagraph{Computational Problems.} For a graph $G$, by $V(G)$ we
mean its set of vertices and by $E(G)$ we mean its set of edges. A
graph is cubic if each of its vertices is connected to exactly three
other ones.
Our ${\mathrm{NP}}$-hardness proofs rely on reductions from variants of the
\textsc{Independent Set} and \textsc{X3C} problems, both known to be
${\mathrm{NP}}$-complete~\citep{gar-joh:b:int,gon:j:x3c}.
\begin{definition}
In the \textsc{Cubic Independent Set} problem
we are given a cubic graph $G$ and an integer $h$; we ask if $G$
has an independent set of size $h$ (i.e., a set of $h$ vertices such that no two of them are connected).
\end{definition}
\begin{definition}
In the \textsc{Restricted Exact Cover by 3-Sets} problem (\textsc{RX3C}) we are given a universe
$X$
of $3n$ elements and a family
$\mathcal{S}$
of $3n$ size-$3$ subsets of
$X$. Each element from $X$ appears in exactly three sets from
$\mathcal{S}$. We ask if it is possible to choose $n$ sets from $\mathcal{S}$
whose union is $X$.
\end{definition}
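Similarly, a yes-instance of \textsc{RX3C} is witnessed by the indices
of the $n$ chosen sets; the following Python verifier is a hedged
sketch under an assumed list-based encoding:
\begin{verbatim}
def is_exact_cover(universe, family, chosen):
    # `chosen` holds indices into `family`.  Since n size-3 sets can
    # cover all 3n elements only if they are pairwise disjoint, it
    # suffices to check the number of sets and their union.
    n = len(universe) // 3
    covered = set()
    for i in chosen:
        covered |= set(family[i])
    return len(chosen) == n and covered == set(universe)
\end{verbatim}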
\section{Adding Approvals}
\label{sec:adding}
\appendixsection{sec:adding}
For the case of adding approvals, all our bribery problems (priced and
unpriced, both for the CI and VI domains) remain solvable in
polynomial time. Yet, as compared to the unrestricted setting, our
algorithms require more care. For example, in the unrestricted case it
suffices to simply add approvals for the preferred candidate~\citep{fal-sko-tal:c:bribery-measure} (choosing
the voters where they are added in the order of increasing costs for
the priced variant); a similar
approach works for the VI case, but with a different ordering of the
voters.
\begin{theorem}
\label{thm:add-approvals-vi}
\textsc{AV-\$AddApprovals-VI-Bribery} $\in {\mathrm{P}}$.
\end{theorem}
\appendixproof{thm:add-approvals-vi}
{
\begin{proof}
Consider an input with election $E = (C,V)$, committee size $k$,
preferred candidate $p$, and budget $B$. Without loss of generality,
we assume that $V = \{v_1, \ldots, v_n\}$ and the order witnessing
the VI property is $v_1 \rhd v_2 \rhd \cdots \rhd v_n$. We note that
it is neither beneficial nor necessary to ever add approvals for
candidates other than $p$. Let $v_i, v_{i+1}, \ldots, v_j$ be the
interval of voters that approve $p$, and let $s$ be the smallest
number of additional approvals that $p$ needs to obtain to become a
member of some winning committee (note that $s$ is easily computable
in polynomial time). Our algorithm proceeds as follows: We consider
all nonnegative numbers $s_\ell$ and $s_r$ such that
(a)~$s = s_\ell + s_r$, (b)~$i - s_\ell \geq 1$, and
(c)~$j+s_r \leq n$, and for each of them we compute the cost of
adding an approval for $p$ to voters $v_{i-s_\ell}, \ldots, v_{i-1}$
and $v_{j+1}, \ldots, v_{j+s_r}$. We choose the pair that generates
the lowest cost and we accept if this cost is at most $B$; otherwise
we reject.
The polynomial running time follows directly. Correctness is
guaranteed by the fact that we need to maintain the VI property and
that it suffices to add approvals for $p$ only.
\end{proof}
}
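To make the above procedure concrete, the following Python fragment
sketches the search over the splits $(s_\ell, s_r)$; it is our
illustration and assumes $0$-indexed voters, a hypothetical table
\verb+cost[i]+ with the price of adding $p$'s approval to voter $v_i$,
and that $p$'s current approvers occupy positions $i, \ldots, j$ among
$n$ voters:
\begin{verbatim}
import math

def cheapest_vi_extension(cost, i, j, s, n):
    # Extend p's approval interval [i, j] by s voters while
    # keeping it an interval within voters 0..n-1.
    best = math.inf
    for s_left in range(s + 1):
        s_right = s - s_left
        if i - s_left < 0 or j + s_right > n - 1:
            continue  # extension would leave the voter range
        price = (sum(cost[i - s_left:i]) +
                 sum(cost[j + 1:j + s_right + 1]))
        best = min(best, price)
    return best  # math.inf if no legal extension exists
\end{verbatim}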
The CI case introduces a different complication. Now, adding an
approval for the preferred candidate in a given vote also requires
adding approvals for all the candidates between him or her and the
voter's original approval set.
Thus, in addition to bounding the bribery's cost, we also need to
track the candidates whose scores increase beyond a given level.
\begin{theorem}\label{thm:add-approvals-ci}
\textsc{AV-\$AddApprovals-CI-Bribery} $\in {\mathrm{P}}$.
\end{theorem}
\begin{proof}
Our input consists of an election $E = (C,V)$, committee size $k$,
preferred candidate $p \in C$, budget $B$, and the information about
the costs of all the possible operations (i.e., for each voter and
each candidate that he or she does not approve, we have the price
for adding this candidate to the voter's ballot). Without loss of
generality, we assume that
$C = \{\ell_{m'}, \ldots, \ell_1, p, r_1, \ldots, r_{m''}\}$,
$V = \{v_1, \ldots, v_n\}$, each voter approves at least one
candidate,\footnote{Without this assumption we could still make our
algorithm work. We would simply choose a certain number of voters
who do not approve any candidates to approve $p$ alone (though we
would not necessarily ask all such voters, due to possibly high
prices).} and the election is CI with respect to the order:
\[
\rhd = \ell_{m'} \cdots \ell_2\ \ell_1 \ p \ r_1\
r_2 \cdots r_{m''}.
\]
We start with a few observations. First, we note that if a voter
already approves $p$ then there is no point in adding any approvals
to his or her ballot. Second, if some voter does not approve $p$,
then we should either not add any approvals to his or her ballot, or
add exactly those approvals that are necessary to ensure that $p$
gets one. For example, if some voter has approval ballot
$\{r_3, r_4, r_5\}$ then we may either choose to leave it intact or
to extend it to $\{p, r_1, r_2, r_3, r_4, r_5\}$.
We let $L = \{\ell_{m'}, \ldots, \ell_1\}$ and
$R = \{r_1, \ldots, r_{m''}\}$, and we partition the voters into
three groups, $V_\ell$, $V_p$, and $V_r$, as follows:
\begin{enumerate}
\item $V_p$ contains all the voters who approve $p$,
\item $V_\ell$ contains the voters who approve members of $L$
only,
\item $V_r$ contains the voters who approve members of $R$ only.
\end{enumerate}
Our algorithm proceeds as follows (by guessing we mean iteratively
trying all possibilities; Steps~\ref{alg1:dp1} and~\ref{alg1:dp2}
will be described in detail below):
\begin{enumerate}
\item Guess the numbers $x_\ell$ and $x_r$ of voters from $V_\ell$
and $V_r$ whose approval ballots will be extended to approve $p$.
\item Guess the numbers $t_\ell$ and $t_r$ of candidates from
$L$ and $R$ that will end up with higher approval scores
than $p$ (we must have $t_\ell + t_r < k$ for $p$ to join a
winning committee).
\item \label{alg1:dp1} Compute the lowest cost of extending exactly $x_\ell$ votes
from $V_\ell$ to approve $p$, so that at most $t_\ell$ candidates from
$L$ end up with more than $\score_E(p)+x_\ell+x_r$ points (i.e.,
with score higher than $p$); denote this cost as $B_\ell$.
\item \label{alg1:dp2} Repeat the above step for the $x_r$ voters
from $V_r$, with at most $t_r$ candidates obtaining more than
$\score_E(p)+x_\ell+x_r$ points; denote the cost of this operation
as $B_r$.
\item If $B_\ell + B_r \leq B$ then accept (reject if no choice of
$x_\ell$, $x_r$, $t_\ell$, and $t_r$ leads to acceptance).
\end{enumerate}
One can verify that this algorithm is correct (assuming we know how
to perform Steps~\ref{alg1:dp1} and~\ref{alg1:dp2}).
Next we describe how to perform Step~\ref{alg1:dp1} in polynomial
time (Step~\ref{alg1:dp2} is handled analogously). We will need some
additional notation. For each $i \in [m']$, let $V_{\ell}(i)$
consist exactly of those voters from $V_\ell$ whose approval ballots
include candidate $\ell_i$ but do not include $\ell_{i-1}$ (in other
words, voters in $V_\ell(i)$ have approval ballots of the form
$[\ell_j, \ell_i]$, where $j \geq i$). Further, for each
$i \in [m']$ and each $e \in [|V_\ell(i)|]_0$ let $\cost(i,e)$ be
the lowest cost of extending $e$ votes from $V_\ell(i)$ to approve
$p$ (and, as a consequence, to also approve candidates
$\ell_{i-1}, \ldots, \ell_1$). If $V_\ell(i)$ contains fewer than
$e$ voters then $\cost(i,e) = +\infty$. For each $e \in [x_\ell]_0$,
we define $S(e) = \score_E(p) + e + x_r$. Finally, for each
$i \in [m']$, $e \in [x_\ell]_0$, and $t \in [t_\ell]_0$ we define
function $f(i,e,t)$ so that:
\begin{enumerate}
\item[] $f(i,e,t)$ = the lowest cost of extending exactly $e$ votes
from $V_\ell(1) \cup \cdots \cup V_\ell(i)$ (to approve~$p$) so
that at most $t$ candidates among $\ell_1, \ldots, \ell_i$ end up
with more than $S(e)$ points (function $f$ takes value $+\infty$
if satisfying all the given constraints is impossible).
\end{enumerate}
Our goal in Step~\ref{alg1:dp1} of the main algorithm is to compute
$f(m',x_\ell,t_\ell)$, which we do via dynamic programming. To this
end, we observe that the following recursive equation holds (let
$\chi(i,e)$ be $1$ if $\score_E(\ell_i) > S(e)$ and let $\chi(i,e)$
be $0$ otherwise; we explain the idea of the equation below):
\begin{align*}
f(i,e,t) = \min_{e' \in [e]_0} \big( \cost(i,e') + f(i-1,\, e-e',\, t - \chi(i,e))\big).
\end{align*}
The intuition behind this equation is as follows. We consider each
possible number $e' \in [e]_0$ of votes from $V_\ell(i)$ that can
be extended to approve $p$. The lowest cost of extending the votes
of $e'$ voters from $V_\ell(i)$ is, by definition,
$\cost(i,e')$. Next, we still need to extend $e-e'$ votes from
$V_\ell(i-1), \ldots, V_\ell(1)$ and, while doing so, we need to
ensure that at most $t$ candidates end up with more than $S(e)$
points. Candidate $\ell_i$ cannot get any additional approvals from
voters $V_\ell(i-1), \ldots, V_\ell(1)$, so he or she exceeds this
value exactly if $\score_E(\ell_i) > S(e)$ or, equivalently, if
$\chi(i,e) = 1$. This means that we have to ensure that at most
$t - \chi(i,e)$ candidates among $\ell_{i-1}, \ldots, \ell_1$ end up
with more than $S(e)$ points. However, since we extend $e'$ votes from
$V_\ell(i)$, we know that candidates $\ell_{i-1}, \ldots, \ell_1$
certainly obtain $e'$ additional points (as compared to the input
election). Thus we need to ensure that at most $t - \chi(i,e)$ of
them end up with score more than $S(e-e')$ after extending the votes
from $V_\ell(1) \cup \ldots \cup V_\ell(i-1)$. This is ensured by
the $f(i-1,e-e',t-\chi(i,e))$ component in the equation (which also
provides the lowest cost of the respective operations).
Using the above formula, the fact that $f(1,e,t)$ can be computed
easily for all values of~$e$ and~$t$, and standard dynamic
programming techniques, we can compute $f(m',x_\ell,t_\ell)$ in
polynomial time. This suffices for completing Step~\ref{alg1:dp1} of
the main algorithm and we handle Step~\ref{alg1:dp2}
analogously. Since all the steps of can be performed in polynomial
time, the proof is complete.
\end{proof}
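For readers who prefer code, here is a compact Python rendering of the
dynamic program from Step~\ref{alg1:dp1}; it is a sketch under assumed
inputs ($1$-indexed lists \verb+score+ and \verb+cost+, where
\verb+score[i]+ equals $\score_E(\ell_i)$, \verb+cost[i][e]+ equals
$\cost(i,e)$, and \verb+math.inf+ stands for $+\infty$), not a
reference implementation:
\begin{verbatim}
import math
from functools import lru_cache

def step_dp1(m, x_left, x_right, t_left, score_p, score, cost):
    # Computes f(m', x_ell, t_ell); index 0 of score/cost is unused.
    def S(e):
        return score_p + e + x_right

    @lru_cache(maxsize=None)
    def f(i, e, t):
        if t < 0:
            return math.inf  # too many candidates exceed the bound
        if i == 0:
            return 0 if e == 0 else math.inf
        chi = 1 if score[i] > S(e) else 0
        # l_i gains nothing from V_l(1..i-1), hence chi uses S(e)
        return min(cost[i][ep] + f(i - 1, e - ep, t - chi)
                   for ep in range(e + 1))

    return f(m, x_left, t_left)
\end{verbatim}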
We note that both of the above
theorems also apply to the cases where we can only add approvals for
the preferred candidate. Indeed, the algorithm from
Theorem~\ref{thm:add-approvals-vi} is designed to do just that, and
for the algorithm from Theorem~\ref{thm:add-approvals-ci}
we can set the price of adding other approvals to be $+\infty$.
\section{Deleting Approvals}
\label{sec:deleting}
\appendixsection{sec:deleting}
The case of deleting approvals is more intriguing. Roughly speaking,
in the unrestricted setting it suffices to delete approvals from
sufficiently many candidates that have higher scores than~$p$,
choosing those for whom doing so is least
expensive~\citep{fal-sko-tal:c:bribery-measure}.
The same general strategy works for the VI case because we can still
delete approvals for different candidates independently.
\begin{theorem}
\label{thm:delete-approvals-vi}
\textsc{AV-\$DelApprovals-VI-Bribery} $\in~{\mathrm{P}}$.
\end{theorem}
\appendixproof{thm:delete-approvals-vi}
{
\begin{proof}
Let our input consist of an election $E=(C,V)$, preferred candidate
$p \in C$, committee size~$k$, and budget~$B$. We assume that
$V = \{v_1, \ldots, v_n\}$ and the election is VI with respect to
ordering the voters by their indices. Let $s = \score_E(p)$ be the
score of $p$ prior to any bribery. We refer to the candidates with
score greater than~$s$ as superior.
Since it is impossible to increase the score of $p$ by deleting
approvals, we need to ensure that the number of superior candidates
drops to at most $k-1$.
For each superior candidate $c$, we compute the lowest cost for
reducing his or her score to exactly~$s$. Specifically, for each
such candidate $c$ we act as follows.
Let $t = \score_E(c) - s$ be the number of $c$'s approvals that we need to delete and
let $v_{a}, v_{a+1}, \ldots, v_{b}$ be the interval of voters that approve $c$.
For each $i \in [t]_0$ we compute the cost of deleting $c$'s approvals among
the first $i$ and the last $t-i$ voters in the interval (these are the only operations that achieve our goal and maintain the VI property of the election); we store the lowest of these costs as ``the cost of $c$.''
Let $S$ be the number of superior candidates (prior to any bribery).
We choose $S-(k-1)$ of them with the lowest costs. If the sum of these
costs is at most $B$ then we accept and, otherwise, we reject.
\end{proof}
}
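As an illustration, the per-candidate computation can be sketched in
Python as follows (our own sketch; it assumes $0$-indexed voters, that
$c$'s approvers occupy positions $a, \ldots, b$, and a hypothetical
price table \verb+del_cost[v]+ for deleting $c$'s approval from vote
$v$):
\begin{verbatim}
import math

def cheapest_score_drop(del_cost, a, b, t):
    # Delete t of c's approvals, taken only from the two ends of
    # c's approval interval v_a..v_b, so that VI is preserved.
    if t > b - a + 1:
        return math.inf
    best = math.inf
    for i in range(t + 1):  # i deletions on the left, t - i on the right
        price = (sum(del_cost[a:a + i]) +
                 sum(del_cost[b - (t - i) + 1:b + 1]))
        best = min(best, price)
    return best
\end{verbatim}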
For the CI case, our problem turns out to be ${\mathrm{NP}}$-complete.
Intuitively, the reason for this is that in the CI domain deleting an
approval for a given candidate from a ballot also requires deleting,
within this ballot, either all the approvals to the left or all the
approvals to the right on the societal axis.
Indeed, our main trick is to introduce approvals that must be deleted
(at zero cost), but doing so requires choosing whether to delete their
left or their right neighbors (at nonzero cost). This result is our
first example of a complexity reversal.
\begin{theorem}\label{thm:delete-approvals-ci}
\textsc{AV-\$DelApprovals-CI-Bribery} is ${\mathrm{NP}}$-complete.
\end{theorem}
\begin{proof}
We give a reduction from \textsc{RX3C}. Let $I=(X,\mathcal{S})$ be
the input instance, where $X = \{x_1,\ldots, x_{3n}\}$ is the
universe and $\mathcal{S} = \{S_1,\ldots,S_{3n}\}$ is a family of
size-$3$ subsets of $X$. By definition, each element of $X$ belongs
to exactly three sets from $\mathcal{S}$.
We form an
instance of
\textsc{AV-\$DelApprovals-CI-Bribery}
as follows.
We have the preferred candidate $p$, for each universe element
$x_i \in X$ we have a corresponding universe candidate $x_i$, for each
set $S_j \in \mathcal{S}$ we have a set candidate $s_j$, and we have a
set $D$ of $2n$ dummy candidates (each individual one denoted
by~$\diamond$). Let $C$ be the set of the just-described $8n+1$
candidates and let $S = \{s_1, \ldots, s_{3n}\}$ contain the set
candidates. We fix the societal axis to be:
\[
\rhd =
\underbrace{\overbrace{s_1 \cdots s_{3n}}^{3n}
\overbrace{\diamond \cdots \diamond}^{2n}
\overbrace{x_1 \cdots x_{3n}}^{3n}
p}_{8n+1}
\]
Next, we form the voter collection $V$:
\begin{enumerate}
\item For each candidate in $S \cup D \cup \{p\}$, we have two
voters that approve exactly this candidate. We refer to them as
the \emph{fixed voters} and we set the price for deleting their
approvals to be $+\infty$. We refer to their approvals as
\emph{fixed}.
\item For each set $S_j = \{x_a,x_b,x_c\}$, we form three
\emph{solution voters}, $v(s_j,x_a)$, $v(s_j,x_b)$, and
$v(s_j,x_c)$, with approval sets $[s_j,x_a]$, $[s_j, x_b]$, and
$[s_j,x_c]$, respectively. For a solution voter $v(s_i,x_d)$, we
refer to the approvals that $s_i$ and $x_d$ receive as
\emph{exterior}, and to all the other ones as
\emph{interior}. The cost for deleting each exterior approval is
one, whereas the cost for deleting the interior approvals is
zero. Altogether, there are $9n$ solution voters.
\end{enumerate}
To finish the construction, we set the committee size $k=n+1$ and
the budget $B=9n$. Below, we list the approval scores prior to any
bribery (later we will see that in successful briberies one always
deletes all the interior approvals):
\begin{enumerate}
\item $p$ has $2$ fixed approvals,
\item each universe candidate has $3$~exterior approvals (plus some
number of interior ones),
\item each set candidate has $3$ exterior approvals and $2$ fixed
ones (plus some number of interior ones), and
\item each dummy candidate has $2$ fixed approvals (and $9n$
interior ones).
\end{enumerate}
We claim that there is a bribery of cost at most $B$ that ensures
that $p$ belongs to some winning committee if and only if $I$ is a
yes-instance of \textsc{RX3C}. For the first direction, let us
assume that $I$ is a yes-instance and let $\mathcal{T}$ be a size-$n$
subset of $\mathcal{S}$ such that $\bigcup_{S_i \in \mathcal{T}} S_i = X$ (i.e.,
$\mathcal{T}$ is the desired exact cover). We perform the following
bribery: First, for each solution voter we delete all his or her
interior approvals. Next, to maintain the CI property (and to lower
the scores of some candidates), for each solution voter we delete
one exterior approval. Specifically, for each set
$S_j = \{x_a,x_b,x_c\}$, if $S_j$ belongs to the cover (i.e., if
$S_j \in \mathcal{T}$) then we delete the approvals for $x_a$, $x_b$, and
$x_c$ in
$v(s_j,x_a)$, $v(s_j,x_b)$, and $v(s_j,x_c)$, respectively;
otherwise, i.e., if $S_j \notin \mathcal{T}$, we delete the approvals for
$s_j$ in these votes. As a consequence, all the universe candidates
end up with two exterior approvals each, the $n$ set candidates
corresponding to the cover end up with five approvals each (two
fixed ones and three exterior), and the $2n$ remaining set candidates
and all the dummy candidates end up with two fixed approvals
each. Since~$p$ has two approvals, the committee size is $n+1$,
and only $n$ candidates have score higher than $p$, $p$ belongs to
some winning committee (and the cost of the bribery is~$B$).
For the other direction, let us assume that there is a bribery with
cost at most $B$ that ensures that $p$ belongs to some winning
committee. It must be the case that this bribery deletes exactly one
exterior approval from each solution voter. Otherwise, since there
are $9n$ solution voters and the budget is also $9n$, some solution
voter would keep both his or her exterior approvals, as well as all
the interior ones. This means that after the bribery there would be
at least $2n$ dummy candidates with at least three points each. Then,
$p$ would not belong to any winning committee. Thus, each solution
voter deletes exactly one exterior approval, and we may assume that
he or she also deletes all the interior ones (this comes at zero
cost and does not decrease the score of $p$).
By the above discussion, we know that all the dummy candidates end
up with two fixed approvals, i.e., with the same score as~$p$. Thus,
for $p$ to belong to some winning committee, at least $5n$
candidates among the set and universe ones also must end up with at
most two approvals (at most $n$ candidates can have score higher
than $p$). Let $x$ be the number of set candidates whose approval
score drops to at most two, and let $y$ be the number of such
universe candidates. We have that:
\begin{align}\label{eq:1}
0 \leq x \leq 3n&,& 0 \leq y \leq 3n&,& \text{and}&& x+y \geq 5n.
\end{align}
Prior to the bribery, each set candidate has five non-interior approvals
(including three exterior approvals) so bringing his or her score to
at most two costs three units of budget. Doing the same for a
universe candidate costs only one unit of budget, as universe
candidates originally have only three non-interior approvals.
Since our total budget is $9n$, we have:
\begin{equation}\label{eq:2}
3x+y \leq 9n.
\end{equation}
Together, inequalities~\eqref{eq:1} and~\eqref{eq:2} imply that
$x = 2n$ and $y = 3n$: subtracting $x+y \geq 5n$ from $3x+y \leq 9n$
gives $x \leq 2n$, whereas $y \leq 3n$ together with $x+y \geq 5n$
gives $x \geq 2n$; hence $x = 2n$ and, consequently, $y = 3n$. That
is, for each universe candidate $x_i$ there is a solution voter
$v(s_j,x_i)$ who is bribed to delete the approval for $x_i$ (and, as
a consequence of our previous discussion, who is not bribed to delete
the approval for $s_j$). We
call such solution voters \emph{active} and we define a family of
sets:
\[
\mathcal{T} = \{ S_j \mid s_j \text{ is approved by some active solution
voter} \}.
\]
We claim that $\mathcal{T}$ is an exact cover for the \textsc{RX3C}
instance $I$. Indeed, by definition of active solution voters we
have that $\bigcup_{S_i \in \mathcal{T}} S_i = X$. Further, it must be the
case that $|\mathcal{T}| = n$. This follows from the observation that if
some solution voter is active then his or her corresponding set
candidate $s_j$ has at least three approvals after the bribery (each
set candidate receives exterior approvals from exactly three
solution voters and these approvals must be deleted if the candidate
is to end up with score two; this is possible only if all the three
solution voters are not active). Since exactly $2n$ set candidates
must have their scores reduced to two, it must be that
$3n - |\mathcal{T}| = 2n$, so $|\mathcal{T}| = n$. This completes the proof.
\end{proof}
The above proof strongly relies on using $0$/$1$/$+\infty$ prices. The case of unit prices remains open and we believe that resolving it
might be quite challenging.
\section{Swapping Approvals}
\label{sec:swapping}
\appendixsection{sec:swapping} In some sense, bribery by swapping
approvals is our most interesting scenario because there are cases
where a given problem has the same complexity both in the unrestricted
setting and for some structured domain (and this happens both for
tractability and ${\mathrm{NP}}$-completeness), as well as cases where the
unrestricted variant is tractable but the structured one is not, or
the other way round.
\subsection{Approval Swaps to the Preferred Candidate}
Let us first consider a variant of \textsc{AV-SwapApprovals-Bribery}
where each unit operation moves an approval from some candidate to the
preferred one. We call operations of this form \textsc{SwapApprovals
to p}. In the unrestricted setting, this problem is in ${\mathrm{P}}$ for unit
prices but is ${\mathrm{NP}}$-complete if the prices are arbitrary. For the CI
and VI domains, the problem can be solved in polynomial time for both
types of prices. While for the CI domain this is not so
surprising---indeed, in this case possible unit operations are very
limited---the VI case requires quite some care.
\begin{theorem}
\label{thm:swap-approvals-to-p-ci}
\textsc{AV-\$SwapApprovals to p-CI-Bribery} $\in {\mathrm{P}}$.
\end{theorem}
\appendixproof{thm:swap-approvals-to-p-ci}
{
\begin{proof}
Consider an input with CI election $E = (C,V)$, committee size $k$, preferred candidate $p$, and budget $B$.
W.l.o.g., we assume that $C = \{\ell_{m'}, \ldots, \ell_1, p, r_1, \ldots, r_{m''}\}$, $V = \{v_1, \ldots, v_n\}$, and the societal axis is:
\[
\rhd = \ell_{m'}\ \cdots\ \ell_2\ \ell_1 \ p \ r_1\ r_2\ \cdots\ r_{m''}.
\]
Since unit operations must move approvals to $p$, for each voter $v_i$ exactly one of the following holds:
\begin{enumerate}
\item There is $t \in [m']$ such that $v_i$ has approval set $[\ell_t,\ell_1]$ and the only possible operation is to move an approval from~$\ell_t$ to $p$ at some given cost.
\item There is $t \in [m'']$ such that $v_i$ has approval set $[r_1,r_t]$ and the only possible operation is to move an approval from~$r_t$ to $p$ at some given cost.
\item It is illegal to move any approvals for this voter.
\end{enumerate}
For each candidate $c \in C \setminus \{p\}$
and each integer $x$, we let $f(c,x)$ be the lowest cost of moving
$x$ approvals from $c$ to $p$ (we assume that $f(c,x) = +\infty$ if
doing so is impossible). By the above discussion, we
can compute the values of $f$ in polynomial time.
Our algorithm proceeds as follows. First, we guess the score
$y \in [n]_0$ that we expect $p$ to end up with. Second, we let~$S$
be the set of candidates that in the input election have score
higher than $y$. For each candidate $c \in S$ we define his or her
cost to be $f(c, \score_E(c)-y)$, i.e., the lowest cost of moving
approvals from $c$ to $p$ so that $c$ ends up with score~$y$. Then
we let~$S'$ be a set of~$|S|-(k-1)$ members of~$S$ with the lowest
costs (if~$|S| \leq k-1$ then~$S'$ is an empty set). For
each~$c \in S'$, we perform the approval moves implied by
$f(c,\score_E(c)-y)$. Finally, we ensure that~$p$ has~$y$ approvals
by performing sufficiently many of the cheapest still-not-performed
unit operations (we reject for this value of $y$ if not enough
operations remained). If the total cost of all performed unit
operations is at most $B$, we accept (indeed, we have just found a
bribery that ensures that there are at most $k-1$ candidates with
score higher than $p$ and whose cost is not too high). Otherwise, we
reject for this value of~$y$. If there is no $y$ for which we
accept, we reject.
\end{proof}
}
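The following Python sketch mirrors this guess-and-greedy procedure
(our illustration; the encoding is assumed: \verb+ops+ lists, for
every voter that has a legal unit operation, a pair of the affected
candidate and the operation's price):
\begin{verbatim}
import math

def ci_swap_to_p(score_p, scores, ops, k, budget):
    # scores: approval score of every candidate other than p.
    for y in range(score_p, score_p + len(ops) + 1):
        need = y - score_p             # unit operations to perform
        dangerous = [c for c, s in scores.items() if s > y]
        over = max(len(dangerous) - (k - 1), 0)
        info = {}
        for c in dangerous:
            prices = sorted(pr for cand, pr in ops if cand == c)
            drop = scores[c] - y
            if len(prices) < drop:
                info[c] = (math.inf, drop, prices)
            else:
                info[c] = (sum(prices[:drop]), drop, prices[drop:])
        picked = sorted(dangerous, key=lambda c: info[c][0])[:over]
        cost = sum(info[c][0] for c in picked)
        moved = sum(info[c][1] for c in picked)
        if cost == math.inf or moved > need:
            continue
        # top p up to exactly y approvals with cheapest unused moves
        unused = [pr for cand, pr in ops if cand not in picked]
        unused += [pr for c in picked for pr in info[c][2]]
        unused.sort()
        if len(unused) >= need - moved:
            cost += sum(unused[:need - moved])
            if cost <= budget:
                return True
    return False
\end{verbatim}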
Our algorithm for the VI case is based on
dynamic programming (expressed as searching for a shortest path
in a certain graph) and relies on the fact that due to the VI property
we avoid performing the same unit operations twice.
\begin{theorem}
\label{thm:swap-approvals-to-p-vi}
For the case where we can only swap approvals to the preferred candidate,
\textsc{AV-\$SwapApprovals-VI-Bribery} is in ${\mathrm{P}}$.
\end{theorem}
\begin{proof}
Consider an instance of our problem with an election $E=(C,V)$,
committee size~$k$, preferred candidate $p$, and budget $B$.
Without loss of generality, we assume that $V = \{v_1, \ldots, v_n\}$
and that the election is VI with respect to the order
$v_1 \rhd v_2 \rhd \cdots \rhd v_n$. We also assume that $p$ has at
least one approval (if it were not the case, we could try all
possible single-approval swaps to $p$).
On the high level, our algorithm proceeds as follows: We try all
pairs of integers $\alpha$ and $\beta$ such that
$1 \leq \alpha \leq \beta \leq n$ and, for each of them, we check if
there is a bribery with cost at most $B$ that ensures that the
preferred candidate is (a)~approved exactly by voters
$v_\alpha, \ldots, v_\beta$, and (b)~belongs to some winning
committee. If such a bribery exists then we accept and, otherwise, we
reject. Below we describe the algorithm that finds the cheapest
successful bribery for a given pair $\alpha, \beta$ (if one exists).
Let $\alpha$ and $\beta$ be fixed. Further, let $x, y$ be two
integers such that in the original election $p$ is approved exactly
by voters $v_x, v_{x+1}, \ldots, v_y$. Naturally, we require that
$\alpha \leq x \leq y \leq \beta$;
if this condition is not met then we
drop this $\alpha$ and $\beta$.
We let $s = \beta - \alpha + 1$ be the score that $p$ is to have
after the bribery. We say that a candidate $c \in C \setminus \{p\}$
is \emph{dangerous} if his or her score in the original election is
above $s$. Otherwise, we say that this candidate is \emph{safe}. Let
$D$~be the number of dangerous candidates. For $p$ to become a
member of some winning committee, we need to ensure that after the
bribery at most $k-1$ dangerous candidates still have more than $s$
points (each safe candidate certainly has at most $s$ points).
To do so, we analyze a certain digraph.
For each pair of integers $a$ and $b$ such that
$\alpha \leq a \leq b \leq \beta$ and each integer $d \in [|C|-1]_0$
we form a node $(a,b,d)$, corresponding to the fact that there is a
bribery after which $p$ is approved exactly by voters
$v_a, v_{a+1}, \ldots, v_b$ and exactly $d$ dangerous candidates
have scores above $s$. Given two nodes $u' = (a',b', d')$ and
$u'' = (a'',b'',d'')$, such that $a' \geq a''$, $b' \leq b''$, and
$d'' \in \{d',d'-1\}$, there is a directed edge from $u'$ to $u''$
with weight $\cost(u',u'')$ exactly if there is a candidate $c$ such
that after bribing voters $v_{a''}, v_{a''+1}, \ldots, v_{a'-1}$ and
$v_{b'+1}, \ldots, v_{b''-1}, v_{b''}$ to move an approval from $c$
to $p$ it holds that:
\begin{enumerate}
\item voters approving $c$ still form an interval,
\item if $c$ is a dangerous candidate and his or her score drops to
at most $s$, then $d'' = d'-1$, and, otherwise, $d'' = d'$, and
\item the cost of this bribery is exactly $\cost(u',u'')$.
\end{enumerate}
One can verify that for each node $u = (a,b,d)$ the weight of the
shortest path from $(x,y,D)$ to $u$ is exactly the price of the
lowest-cost bribery that ensures that $p$ is approved by voters
$v_a, \ldots, v_b$ and exactly $d$ dangerous candidates have
scores above $s$ (the VI property ensures that no approval is ever
moved twice). Thus it suffices to find a node $(\alpha, \beta, K)$
with $K < k$ for which the weight of the shortest path from
$(x,y,D)$ is at most $B$. Doing so is possible in polynomial time
using, e.g., Dijkstra's classic algorithm.
\end{proof}
\subsection{Arbitrary Swaps}
Let us now consider the full variant of bribery by swapping approvals.
For the unrestricted domain,
the problem is ${\mathrm{NP}}$-complete for general prices, but admits
a polynomial-time algorithm for unit ones~\citep{fal-sko-tal:c:bribery-measure}. For the CI domain,
${\mathrm{NP}}$-completeness holds even in the latter setting.
\begin{remark}
The model of unit prices, applied directly to the case of
\textsc{SwapApprovals-CI-Bribery}, is somewhat unintuitive. For
example, consider societal axis
$c_1 \rhd c_2 \rhd \cdots \rhd c_{10}$ and an approval set
$[c_3,c_5]$. The costs of swap operations that transform it into,
respectively, $[c_4,c_6]$, $[c_5,c_7]$, and $[c_6,c_8]$ are $1$,
$2$, and $3$, as one would naturally expect. Yet, the cost of
transforming it into, e.g., $[c_8,c_{10}]$ would also be~$3$ (move
an approval from $c_3$ to $c_8$, from $c_4$ to $c_9$, and from $c_5$
to $c_{10}$), which is not intuitive. Instead, it would be natural
to define this cost to be $5$ (move the interval by $5$ positions to
the right). Our proof of Theorem~\ref{thm:swap-approvals-ci} works
without change for both these interpretations of unit prices.
\end{remark}
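For equal-length intervals, both readings admit simple closed forms;
the following tiny Python illustration (ours) reproduces the numbers
from the remark:
\begin{verbatim}
def swap_costs(a, b, a2, b2):
    # Turn approval interval [a, b] into [a2, b2] of the same length.
    length = b - a + 1
    overlap = max(0, min(b, b2) - max(a, a2) + 1)
    moved = length - overlap   # first reading: approvals that move
    shift = abs(a2 - a)        # second reading: shift the interval
    return moved, shift

print(swap_costs(3, 5, 6, 8))   # (3, 3)
print(swap_costs(3, 5, 8, 10))  # (3, 5)
\end{verbatim}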
\begin{theorem}
\label{thm:swap-approvals-ci}
\textsc{AV-SwapApprovals-CI-Bribery} is ${\mathrm{NP}}$-complete.
\end{theorem}
\begin{proof}
We give a reduction from \textsc{Cubic Independent Set}. Let $G$ be
our input graph, where $V(G) = \{c_1, \ldots, c_n\}$ and
$E(G) = \{e_1, \ldots, e_L\}$, and let $h$ be the size of the
desired independent set. We construct the corresponding
\textsc{AV-SwapApprovals-CI-Bribery} instance as follows.
Let $B = 3h$ be our budget and let $t = B+1$ be a certain parameter
(which we interpret as ``more than the budget''). We form candidate
set $C = V(G) \cup \{p\} \cup F \cup D$, where $p$ is the preferred
candidate, $F$ is a set of $t(n+1)$ filler candidates, and $D$ is a set
of $t$ dummy candidates. Altogether, there are $t(n+2)+n+1$
candidates. We denote individual filler candidates by $\diamond$ and
individual dummy candidates by $\bullet$; we fix the societal axis
to be:
\[
\rhd =
\underbrace{
\overbrace{\diamond \cdots \diamond}^t c_1 \overbrace{\diamond \cdots \diamond}^t c_2 \diamond \cdots \diamond c_{n-1} \overbrace{\diamond \cdots \diamond}^t c_n \overbrace{\diamond \cdots \diamond}^t \overbrace{\bullet \cdots \bullet}^t p
}_{t(n+2) + n + 1}
\]
For each positive integer $i$ and each candidate $c$, we write
$\mathrm{prec}_i(c)$ to mean the $i$-th candidate preceding $c$ in $\rhd$.
Similarly, we write $\mathrm{succ}_i(c)$ to denote the $i$-th candidate
after~$c$. We introduce the following voters:
\begin{enumerate}
\item For each edge $e_i = \{c_a,c_b\}$ we add an edge voter
$v_{a,b}$ with approval set $[c_a,c_b]$. For each vertex
$c_i \in V(G)$, we write $V(c_i)$ to denote the set of the three
edge voters corresponding to the edges incident to $c_i$.
\item Recall that $L = |E(G)|$. For each vertex candidate
$c_i \in V(G)$, we add sufficiently many voters with approval set
$[\mathrm{prec}_{t}(c_i), \mathrm{succ}_{t}(c_i)]$, so that, together with the
score from the edge voters, $c_i$~ends up with $L$ approvals.
\item We add $L-3$ voters that approve $p$.
\item For each group of $t$ consecutive filler candidates, we add
$L+4t$ filler voters, each approving all the candidates in the
group.
\end{enumerate}
Altogether, $p$ has score $L-3$, all vertex candidates have score
$L$, the filler candidates have at least $L+4t$ approvals each, and
the dummy candidates have score $0$. We set the committee size to
be $k = t(n+1) + (n-h)+1$. Prior to any bribery, each winning
committee consists of $t(n+1)$ filler candidates and $(n-h)+1$
vertex ones (chosen arbitrarily). This completes our construction.
Let us assume that $G$ has a size-$h$ independent set and denote it
with $S$. For each $c_i \in S$ and each edge $e_\ell = \{c_i,c_j\}$, we
bribe edge voter $v_{i,j}$ to move an approval from $c_i$ to a
filler candidate right next to $c_j$. This is possible for each of
the three edges incident to $c_i$ because $S$ is an independent set.
As a consequence, each vertex candidate from $S$ ends up with $L-3$
approvals. Thus only $n-h$ vertex candidates have score higher than
$p$ and so there is a winning committee that includes $p$.
For the other direction, let us assume that it is possible to ensure
that $p$ belongs to some winning committee via a bribery of cost at
most $B$. Let us consider the election after some such bribery was
executed. First, we note that all the filler candidates still have
scores higher than $L+3t$ (this is so because decreasing a
candidate's score always has at least unit cost and $B <
t$). Similarly, $p$ still has score $L-3$ because increasing his or
her score, even by one, costs at least $t$ (indeed, $p$ is separated
from the other candidates by $t$ dummy candidates). Since $p$
belongs to some winning committee, this means that at least $h$
vertex candidates must have ended up with score at most $L-3$. In fact,
since our budget is $B = 3h$, a simple counting argument shows that
exactly $h$ of them have score exactly $L-3$, and all the other ones
still have score~$L$. Let~$S$ be the set of vertex candidates with
score $L-3$. The only way to decrease the score of a vertex
candidate $c_i$ from $L$ to $L-3$ by spending three units of the
budget is to bribe each of the three edge voters from $V(c_i)$ to
move an approval from $c_i$ to a filler candidate. However, if we
bribe some edge voter $v_{i,j}$ to move an approval from $c_i$ to a
filler candidate, then we cannot bribe that same voter to also move
an approval away from $c_j$ (this would either cost more than $t$
units of budget or would break the CI condition). Thus it must be
the case that the candidates in $S$ correspond to a size-$h$
independent set for~$G$.
\end{proof}
\hyphenation{fa-li-szew-ski}
Next let us consider the VI domain.
On the one hand, the complexity of our problem for unit prices
remains open. On the other hand, for arbitrary prices we show that it
remains ${\mathrm{NP}}$-complete. Our proof works even in
the single-winner setting.
In the unrestricted
domain, the single-winner variant can be solved in
polynomial-time~\citep{fal:c:nonuniform-bribery}.
\begin{theorem}
\label{thm:vi-swaps}
\textsc{AV-\$SwapApprovals-VI-Bribery} is ${\mathrm{NP}}$-complete, even for
the single-winner case (i.e., for committees of size one).
\end{theorem}
\appendixproof{thm:vi-swaps}
{
\begin{proof}
We give a reduction from \textsc{RX3C}. Let $I = (X,\mathcal{S})$ be
an instance of \textsc{RX3C}, where
$X = \{x_1, \ldots, x_{3n}\}$ is a universe and
$\mathcal{S} = \{S_1, \ldots, S_{3n}\}$ is a family of size-$3$
subsets of $X$ (recall that each element from $X$ belongs to
exactly three sets from $\mathcal{S}$). We form a single-winner
approval election with $10n+1$ voters
$V = \{v_0, v_1, \ldots, v_{10n}\}$ and the following candidates:
\begin{enumerate}
\item We have the preferred candidate $p$.
\item For each universe element $x_i \in X$ we have a corresponding
universe candidate, also referred to as $x_i$.
\item For each set $S_t = \{x_i, x_j, x_k\}$ we have two set
candidates, $S'_t$ and $S''_t$, and three content candidates
$y_{t,i}$, $y_{t,j}$, and $y_{t,k}$.
\end{enumerate}
The approvals for these candidates, and the costs of moving them,
are as follows (if we do not explicitly list the cost of moving some
approval from a given candidate to another, then this move has cost
$+\infty$, i.e., this swap is impossible; the construction is
illustrated in Figure~\ref{fig:vi-swap}):
\begin{enumerate}
\item Candidate $p$ is approved by $P = 7n-1$ voters, $v_1, \ldots, v_P$.
\item Each candidate $x_i \in X$ is approved by exactly $P+1$
voters, $v_{i}, \ldots, v_{i+P}$. For each of the set candidates
$S'_t$, the cost of moving $v_i$'s approval from $x_i$ to $S'_t$
is $0$ if $x_i$ belongs to the set $S_t$, and it is $+\infty$
otherwise.
\item None of the $S'_t$ candidates is initially approved by any of
the voters.
\item Each candidate $S''_t$ is approved by all the voters. For each
$t \in [3n]$ we have the following costs of moving the approval
of $S''_t$:
\begin{enumerate}
\item The cost of moving $v_0$'s approval from $S''_t$
to $S'_t$ is~$0$; the same holds for $v_{3n+1}$.
\item For each $i \in [3n]$, the cost of moving $v_i$'s
approval from $S''_t$ to $S'_t$ is $0$ if $x_i$
belongs to $S_t$, and it is $+\infty$ otherwise.
\item For each $x_i \in S_t$, the cost of moving $v_i$'s
approval from $S''_t$ to $y_{t,i}$ is $0$.
\item The cost of moving $v_{10n}$'s approval from
$S''_t$ to $S'_t$ is $1$.
\item For each voter in $\{v_{P}, \ldots, v_{10n-1}\}$,
the cost of moving the approval from $S''_t$ to $S'_t$
is $0$.
\end{enumerate}
\end{enumerate}
One can verify that this election has the VI property for the
natural order of the voters (i.e., for $v_0 \rhd \cdots \rhd v_{10n}$). We
claim that it is possible to ensure that $p$ becomes a winner of
this election by approval-moves of cost at most $B = 2n$ (such that
the election still has the VI property after these moves) if and
only if $I$ is a yes-instance of \textsc{RX3C}.
\begin{figure}
\caption{An illustration of the construction used in the proof of
Theorem~\ref{thm:vi-swaps}.}
\label{fig:vi-swap}
\end{figure}
For the first direction, let us assume that $I$ is a yes-instance
of \textsc{RX3C} and that $R \subseteq [3n]$ is a size-$n$ set
such that $\bigcup_{i \in R}S_i = X$ (naturally, for each
$t, \ell \in R$, sets $S_t$ and $S_\ell$ are disjoint). It is
possible to ensure that $p$ becomes a winner of our election by
performing the following swaps (intuitively, for $t \notin R$,
$3n+2$ voters with the highest indices move approvals from $S''_t$
to $S'_t$, and for $t \in R$, $3n+2$ voters with the lowest
indices move their approvals from $S''_t$ either to $S'_t$ or to
the content candidates):
\begin{enumerate}
\item For each $t \in [3n] \setminus R$, voters $v_{P}, \ldots, v_{10n}$ move
approvals from $S''_{t}$ to $S'_t$. In total, this costs $2n$
units of budget and ensures that each involved candidate $S''_t$
has score~$P$, whereas each involved candidate $S'_t$ has score
$3n+2$.
\item For each $t \in R$, we perform the following moves (we assume
that $S_t = \{x_i, x_j, x_k\}$). For each voter
$v_\ell \in \{v_i, v_j, v_k\}$, we move this voter's approval from
$x_\ell$ to $S'_t$ (at zero cost), and we move this voter's
approval from $S''_t$ to $y_{t,\ell}$ (at zero cost). For each
$\ell \in \{0, \ldots, 3n+1 \} \setminus \{i,j,k\}$, we move voter
$v_\ell$'s approval from $S''_t$ to $S'_t$ (at zero cost). All in
all, candidates $x_i$, $x_j$, and $x_k$ lose one approval each,
and end up with $P$ approvals; $S''_t$ loses $3n+2$ approvals and
also ends up with $P$ approvals, $S'_t$ ends up with $3n+2$
approvals (from voters $v_0, \ldots, v_{3n+1}$) and candidates
$y_{t,i}$, $y_{t,j}$, and $y_{t,k}$ end up with a single approval
each. The cost of these operations is zero.
\end{enumerate}
One can verify that after these swaps the election still has the VI
property, $p$ has score $P$, and all the other candidates have score
at most $P$.
For the other direction, let us assume that there is a sequence of
approval moves that costs at most $2n$ units of budget and ensures
that $p$ is a winner. Since all the moves of approvals from and to
$p$ have cost $+\infty$, this means that every candidate ends up
with at most $P$ points. Thus each universe candidate moves one of
his or her approvals to a set candidate $S'_t$ such that $S_t$
contains this universe candidate (other moves of
approvals from the universe candidates are too expensive), and each
set candidate $S''_t$ loses at least $3n+2$ points.
Let us consider some candidate $S''_t$. Since, in the end, $S''_t$
has at most $P$ points, it must be that either voter $v_0$ moved
an approval from $S''_t$ to $S'_t$ or voter $v_{10n}$ did so (it
is impossible for both of these voters to make the move since
then, to ensure the VI property, we would have to move all the
approvals from $S''_t$ to $S'_t$ and such a bribery would be too
expensive; similarly, it is impossible that either of these
voters moves his or her approval to some other candidate). If
$v_0$ moves approval from $S''_t$ to $S'_t$, then we refer to
$S''_t$ as a \emph{solution candidate}. Otherwise we refer to
him or her as a \emph{non-solution} candidate. Due to the costs
of approval moves, there are at most $2n$ non-solution
candidates. W.l.o.g., we can assume that for each non-solution
candidate voters $v_{P}, \ldots, v_{10n}$ move approvals from
$S''_t$ to $S'_t$ (indeed, one can verify that this is the only
way for ensuring that non-solution candidates have at most score
$P$, the election still satisfies the VI property for these
candidates, and we do not exceed the budget).
Let us consider some solution candidate $S''_t$ such that
$S_t = \{x_i,x_j,x_k\}$. For $S''_t$ to end up with at most $P$
points, voters $v_0, \ldots, v_{3n+1}$ must move approvals from
$S''_t$ someplace else (for $v_0$ this is by definition of a
solution candidate, for the other voters this follows from the
fact that $v_{10n}$ cannot move his or her approval to $S'_t$,
and from the need to maintain the VI property). In fact, the
only option of moving approvals from $S''_t$ for voters $v_0$
and $v_{3n+1}$ is to move these approvals to $S'_t$. However,
then for the VI property to hold for $S'_t$ and $S''_t$, voters
$v_1, \ldots, v_{3n}$ must approve $S'_t$ and disapprove $S''_t$.
This is possible only if:
\begin{enumerate}
\item For each $\ell \in \{i,j,k\}$, voter $v_\ell$ moves an
approval from $x_\ell$ to $S'_t$ and an approval from $S''_t$ to
$y_{t,\ell}$.
\item For each $\ell \in [3n] \setminus \{i,j,k\}$, voter $v_\ell$
moves an approval from $S''_t$ to $S'_t$.
\end{enumerate}
The above is possible exactly if the solution candidates correspond
to $n$ sets forming an exact cover of the universe.
As the reduction clearly runs in polynomial time, this completes the
proof.
\end{proof}
}
\section{Summary}
We have studied bribery in multiwinner approval elections, for the
case of candidate interval (CI) and voter interval (VI)
preferences. Depending on the setting, our problems can be easier
than, harder than, or exactly as hard as in the unrestricted domain.
\myparagraph{Acknowledgments.}
This project has received funding from the European
Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme (grant agreement No 101002854).
\noindent \includegraphics[width=3cm]{erceu}
\end{document}
\begin{document}
\title{QuiversToricVarieties: a package to construct quivers of sections on complete toric varieties}
\author{Nathan Prabhu-Naik}
\date{9th October 2014}
\begin{abstract}
Given a collection of line bundles on a complete toric variety, the \emph{Macaulay2} package \emph{QuiversToricVarieties} contains functions to construct its quiver of sections and check whether the collection is strong exceptional. It contains a database of full strong exceptional collections of line bundles for smooth Fano toric varieties of dimension less than or equal to 4.
\end{abstract}
\maketitle
\section{Introduction}
\noindent For a collection of non-isomorphic line bundles $\mathcal{L} = \lbrace \mathcal{L}_0 := \mathcal{O}_X, \mathcal{L}_1, \ldots, \mathcal{L}_r \rbrace$ on a complete normal toric variety $X$, the endomorphism algebra $\operatorname{End}(\bigoplus_i \mathcal{L}_i)$ can be described as the quotient of the path algebra of its quiver of sections by an ideal of relations determined by labels on the arrows in the quiver \cite{CrSm}. The vertices of the quiver correspond to the line bundles and there is a natural order on the vertices defined by $i < j$ if $\operatorname{Hom}(L_j, L_i) = 0$. For $i < j$, the number of arrows from $i$ to $j$ is equal to the dimension of the cokernel of the map
\begin{equation}
\bigoplus_{i<k<j} \operatorname{Hom}(L_i , L_k) \otimes \operatorname{Hom}(L_k , L_j) \longrightarrow \operatorname{Hom} (L_i , L_j),
\end{equation}
and we label each arrow by the toric divisors corresponding to the sections in a basis for the cokernel.
Using the given order on $\mathcal{L}$, the collection is \emph{strong exceptional} if
\begin{equation}
\operatorname{Ext}^i (L_j, L_k) = 0, \forall j, \ k, \text{ and } i \neq 0.
\end{equation}
\noindent Let $\mathcal{D}^b(X)$ denote the bounded derived category of coherent sheaves on $X$. The collection $\mathcal{L}$ is \emph{full}, or \emph{generates} $\mathcal{D}^b(X)$, if the smallest triangulated full subcategory of $\mathcal{D}^b(X)$ containing $\mathcal{L}$ is $\mathcal{D}^b(X)$ itself. A tilting bundle $T$ on $X$ is a vector bundle such that $T$ generates $\mathcal{D}^b(X)$ and $\operatorname{Ext}^i(T,T) = 0$ for $i>0$; given a full strong exceptional collection of line bundles $\mathcal{L}$ on $X$, the direct sum $\bigoplus_{L_i \in \mathcal{L}} L_i$ is a tilting bundle. The following theorem by Baer and Bondal allows us to understand $\mathcal{D}^b(X)$ in terms of the module category of a finite dimensional algebra.
\begin{theorem}\cite{Baer,Bond}
Let $T$ be a tilting bundle on $X$, $A = \operatorname{End}(T)$ and $\mathcal{D}^b(\mod A)$ be the bounded derived category of finitely generated right $A$-modules. Then
\begin{equation}
\mathbf{R}\operatorname{Hom} (T, - ) \colon \mathcal{D}^b(X) \rightarrow \mathcal{D}^b(\mod A)
\end{equation}
is an equivalence of triangulated categories.
\end{theorem}
\noindent A complete normal toric variety induces a short exact sequence of abelian groups
\begin{equation}
\label{ses}
\begin{CD}
0@>>> M @>>> \ensuremath{\mathbb{Z}}^{\Sigma(1)} @>\deg >> \operatorname{Cl}(X)@>>> 0,
\end{CD}
\end{equation}
where $M$ is the character lattice of the dense torus in $X$, $\Sigma(1)$ is the set of rays in the fan $\Sigma$ of $X$, and the map $\deg$ sends a toric divisor $D \in \ensuremath{\mathbb{Z}}^{\Sigma(1)}$ to the rank $1$ reflexive sheaf $\mathcal{O}_{X} (D)$ in the class group $\operatorname{Cl} (X)$ (see for example \cite{Fult}). Showing that $\mathcal{L}$ is strong exceptional in this situation is equivalent to checking that $H^i(X, L^{-1}_j \otimes L_k) = 0$ for $i > 0,\ 0 \leq j,k \leq r$. Using a theorem of Eisenbud, Musta{\c{t}}{\u{a}} and Stillman \cite{EiMuSt}, we can determine if the cohomology of $\mathcal{O}_X(D)$ vanishes by considering when $\mathcal{O}_X(D)$ avoids certain affine cones constructed in $\operatorname{Cl}(X)$, which we call \emph{non-vanishing cohomology cones}. The purpose of the package \emph{QuiversToricVarieties} for \emph{Macaulay2} \cite{M2} is to construct the quiver of sections for a collection of line bundles on a complete toric variety and check if the collection is strong exceptional. We note that there do exist computer programs that check if a collection of line bundles on a toric variety is strong exceptional; see for example Perling's \emph{TiltingSheaves} \cite{Perl}.
Restricting our attention to smooth toric Fano varieties, toric divisorial contractions give the collection of $n$-dimensional toric Fano varieties a poset structure, described for $n=3$ by \cite{Oda} and $n=4$ by \cite{Sato} (see also \cite[Remark 2.4]{Prna}). The contractions induce lattice maps between the short exact sequences (\ref{ses}) determined by the varieties and these lattice maps are an essential ingredient in the proof that each smooth toric Fano variety of dimension $\leq 4$ has a full strong exceptional collection of line bundles \cite[Theorem 6.4]{Prna}. The package \emph{QuiversToricVarieties} contains a database of these lattice maps and of full strong exceptional collections of line bundles on all smooth toric Fano varieties of dimension $\leq 4$.
In the case when $X$ is a smooth toric Fano variety, let $Y = \operatorname{tot}(\omega_X)$ be the total space of the canonical bundle on $X$. The package \emph{QuiversToricVarieties} contains methods to check if the pullback of a full strong exceptional collection of line bundles on $X$ along the morphism $Y \rightarrow X$ is a tilting bundle on $Y$.
\emph{QuiversToricVarieties} depends on the package \emph{NormalToricVarieties} for the construction of toric varieties and for the database of smooth toric Fano varieties. All varieties are defined over $\ensuremath{\Bbbk} = \ensuremath{\mathbb{C}}$.
\section{Overview of the Package}
\noindent Let $X$ be a complete normal toric variety constructed in \emph{NormalToricVarieties} with a torsion-free class group. The class group lattice of $X$ has a basis determined by \verb+fromWDivToCl+ and the function \verb+fromPicToCl+ can be used to determine which vectors in the lattice correspond to line bundles. The input for the method \verb+quiver+ is a complete normal toric variety with a torsion-free class group, together with a list of vectors $v_i$ in the class group lattice that correspond to the line bundles $L_i$. The vectors are ordered by \verb+quiver+ and the basis of $\operatorname{Hom} (L_i,L_j)$ is calculated by determining the basis of the multidegree $v_j-v_i$ over the Cox ring of the variety. From this basis, the irreducible maps are chosen and listed as arrows, with the corresponding toric divisors as labels. If some of the vectors do not correspond to line bundles then a quiver is still constructed but the resulting path algebra modulo relations may not be isomorphic to $\operatorname{End}(\bigoplus_{i \in Q_0} E_i)$, where $E_i$ are the rank $1$ reflexive sheaves corresponding to $v_i$. Alternatively, we can produce a quiver by explicitly listing the vertices, the arrows with labels and the variety. The methods \verb+source+, \verb+target+, \verb+label+ and \verb+index+ return the specific details of an arrow in the quiver, a list of which can be accessed by inputting \verb+Q_1+.
Besides the method \verb+quiver+, the method \verb+doHigherSelfExtsVanish+ forms the core of the package. The primary input is a quiver of sections. The method creates the non-vanishing cohomology cones in the class group lattice for $X$ and determines if the vectors $v_i - v_j$ avoid these cones. The cones are determined by certain subsets $I$ of the rays of the fan $\Sigma$ for $X$; if the complement of the supporting cones for $I$ in $\Sigma$ has non-trivial reduced homology, then $I$ is called a \emph{forbidden set} and it determines a cone in $\ensuremath{\mathbb{Z}}^{\Sigma(1)}$. The forbidden sets can be calculated using the function \verb+forbiddenSets+, and the image of a cone determined by a forbidden set under the map \verb+fromWDivToCl X+ is a non-vanishing cohomology cone in $\operatorname{Cl}(X)$.
A database in \emph{NormalToricVarieties} contains the smooth toric Fano varieties up to dimension $6$ and can be accessed using \verb+smoothFanoToricVariety+. The divisorial contractions between the smooth toric Fano varieties up to dimension $4$ are listed under the \verb+contractionList+ command, and the induced maps between their respective short exact sequences (\ref{ses}) are recalled from a database in \emph{QuiversToricVarieties} using the \verb+tCharacterMap+, \verb+tDivisorMap+ and the \verb+picardMap+ commands. Note that as each variety considered is smooth, its class group is isomorphic to its Picard group.
The database containing full strong exceptional collections of line bundles for smooth Fano toric varieties in dimension $\leq 4$ can be accessed using \verb+fullStrExcColl+. The collections for the surfaces were calculated by King \cite{King}, the threefolds by Costa--Mir\'{o}-Roig \cite{CoMR1}, Bernardi--Tirabassi \cite{BeTi} and Uehara \cite{Ueha} and the fourfolds by Prabhu-Naik \cite{Prna}.
\section{An Example}
\noindent We illustrate the main methods in \emph{QuiversToricVarieties} using the blowup of $\ensuremath{\mathbb{P}}^2$ at three points, the birationally-maximal smooth toric Fano surface. It is contained in the toric Fano database in \emph{NormalToricVarieties}, which is loaded by the \emph{QuiversToricVarieties} package.
\begin{verbatim}
i1 : loadPackage "QuiversToricVarieties";
i2 : X = smoothFanoToricVariety(2,4);
\end{verbatim}
\noindent A full strong exceptional collection $\mathcal{L}$, first considered by King \cite{King}, can be recalled from the database and its quiver of sections can be created.
\begin{verbatim}
i3 : L = fullStrExcColl(2,4);
o3 = {{0,0,0,0},{0,0,1,1},{0,1,0,0},{0,1,1,0},{1,0,0,0},{1,0,0,1}}
i4 : Q = quiver(L,X);
\end{verbatim}
\noindent We can view the details of the quiver, either by displaying the arrows at each vertex, or by listing all of the arrows and considering their source, target and label.
\begin{verbatim}
i5 : Q#0
o5 = HashTable{1 => {x_0x_1 , x_3x_4 } }
2 => {x_1x_2 , x_4x_5 }
3 => {x_2x_3 , x_0x_5 }
degree => {0, 0, 0, 0}
i6 : first Q_1
o6 = arrow_1
i7 : source oo, target oo, label oo
o7 = (0, 1, x_0x_1 )
\end{verbatim}
\noindent The forbidden sets of rays can be computed and the collection of line bundles can be checked to be strong exceptional. The method \verb+doHigherSelfExtsVanish+ creates a copy of the non-vanishing cohomology cones in the cache table for $X$, where each cone is given by a vector and a matrix $\{w,M\}$ encoding its supporting closed half-spaces, so that the lattice points of the cone are $\{ v \in \operatorname{Cl} (X) \mid M v \leq w \}$. The non-vanishing cone for $H^2$ is displayed below.
\begin{verbatim}
i8 : peek forbiddenSets X
o8 = MutableHashTable{1 => {{0,2},{0,3},{1,3},{0,1,3},{0,2,3},{0,4},{1,4},...}
2 => {{0,1,2,3,4,5}}
i9 : doHigherSelfExtsVanish Q
o9 = true
i10 : X.cache.cones#2
o10 = {{| -1 |, | 1 1 1 0 1 |}}
| -1 | | 1 0 1 1 1 |
| -1 | | 0 1 1 0 0 |
| -1 | | 0 0 0 1 1 |
\end{verbatim}
Consider the chain of divisorial contractions $X =: X_4 \rightarrow X_3 \rightarrow X_2 \rightarrow X_0$ from $X$ to the toric Fano surfaces numbered $3$, $2$ and $0$ in the database. These contractions induce lattice maps $\operatorname{Pic}(X_4) \rightarrow \operatorname{Pic}(X_3) \rightarrow \operatorname{Pic}(X_2) \rightarrow \operatorname{Pic}(X_0)$ and the method \verb+doHigherSelfExtsVanish+ can check whether the non-isomorphic line bundles in the image of $\mathcal{L}$ under these lattice maps are strong exceptional for each contraction.
\begin{verbatim}
i11 : doHigherSelfExtsVanish(Q,{4,3,2,0})
o11 = true
\end{verbatim}
Now consider the morphism $\pi \colon \operatorname{tot}(\omega_X) \rightarrow X$. The pullback $\pi^* (\bigoplus_{L_i \in \mathcal{L}} L_i)$ is a tilting bundle on $Y = \operatorname{tot} (\omega_X)$ if
\[ H^k(X,L_i \otimes L_j^{-1} \otimes \omega_X^{-m}) = 0\]
for all $k>0$, $m \geq 0$ and $L_i,L_j \in \mathcal{L}$ (see for example \cite[Theorem 6.7]{Prna}). As $\omega^{-1}_X$ is ample, there exists a non-negative integer $n$ such that $L_i \otimes L_j^{-1} \otimes \omega_X^{-m}$ is nef for $0 \leq i,j \leq r$ and $m \geq n$, and hence $H^k(X,L_i \otimes L_j^{-1} \otimes \omega_X^{-m}) = 0$ for all $k>0$ by Demazure vanishing. The method \verb+bundlesNefCheck+ checks for a given integer $n$ whether $L_i \otimes L_j^{-1} \otimes \omega_X^{-n}$ is nef for all $L_i,L_j \in \mathcal{L}$.
\begin{verbatim}
i12 : n=2;
i13 : bundlesNefCheck(Q,n)
o13 = true
\end{verbatim} If an integer $p$ is included as an additional input in \verb+doHigherSelfExtsVanish+, then the method checks, for all $0 \leq m \leq p$, whether the line bundles $L_i \otimes L_j^{-1} \otimes \omega_X^{-m}$ avoid the non-vanishing cohomology cones. Note that for our example, the computation above implies that it is enough to use the integer $n-1$.
\begin{verbatim}
i14 : doHigherSelfExtsVanish(Q,n-1)
o14 = true
\end{verbatim}
For $t \in \{4,3,2,0\}$, let $\{ L_{i,t} \}_{i \in I_t}$ denote the list of non-isomorphic line bundles in the image of $\mathcal{L}$ under the map $\operatorname{Pic}(X) \rightarrow \operatorname{Pic}(X_t)$ given by \verb+picardMap+, where $I_t$ is an index set. By including the list of divisorial contractions as an input in \verb+doHigherSelfExtsVanish+, we can check that
\[ H^k(X_t,L_{i,t} \otimes (L_{j,t})^{-1} \otimes \omega_{X_t}^{-m}) = 0\]
for $k >0$, $0 \leq m \leq n-1$, $t \in \{4,3,2,0\}$ and all $i,j \in I_t$.
\begin{verbatim}
i15 : doHigherSelfExtsVanish(Q,{4,3,2,0},n-1)
o15 = true
\end{verbatim}
For all $n$-dimensional smooth toric Fano varieties, $1 \leq n \leq 3$, and 88 of the 124 smooth toric Fano fourfolds, the database contains a chain complex of modules over the Cox ring for the variety. The chain complexes are used in \cite{Prna} to show that the collections of line bundles in the database for these varieties are full.
\begin{verbatim}
i16 : C = resOfDiag(2,4);
i17 : SS = ring C;
i18 : C
6 12 6
o18 = SS <-- SS <-- SS
\end{verbatim}
\nocite{M2}
\end{document}
\begin{document}
\thispagestyle{empty}
\setcounter{page}{1}
\begin{center}
{\large\bf Stability of a functional equation deriving from quartic
and additive functions
\vskip.20in
{\bf M. Eshaghi Gordji } \\[2mm]
{\footnotesize Department of Mathematics,
Semnan University,\\ P. O. Box 35195-363, Semnan, Iran\\
[-1mm] e-mail: {\tt maj\[email protected]}} }
\end{center}
\vskip 5mm
\noindent{\footnotesize{\bf Abstract.} In this paper, we obtain the general solution and the generalized
Hyers-Ulam-Rassias stability of the functional equation
$$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))-\frac{3}{7}(f(2y)-2f(y))+2f(2x)-8f(x).$$
\vskip.10in
\footnotetext { 2000 Mathematics Subject Classification: 39B82,
39B52.}
\footnotetext { Keywords: Hyers-Ulam-Rassias stability.}
\newtheorem{df}{Definition}[section]
\newtheorem{rk}[df]{Remark}
\newtheorem{lem}[df]{Lemma}
\newtheorem{thm}[df]{Theorem}
\newtheorem{pro}[df]{Proposition}
\newtheorem{cor}[df]{Corollary}
\newtheorem{ex}[df]{Example}
\setcounter{section}{0}
\numberwithin{equation}{section}
\vskip .2in
\begin{center}
\section{Introduction}
\end{center}
The stability problem of functional equations originated from a
question of Ulam [24] in 1940, concerning the stability of group
homomorphisms. Let $(G_1,.)$ be a group and let $(G_2,*)$ be a
metric group with the metric $d(\cdot,\cdot).$ Given $\varepsilon >0$, does
there exist a $\delta
>0$ such that if a mapping $h:G_1\longrightarrow G_2$ satisfies the
inequality $d(h(x.y),h(x)*h(y)) <\delta,$ for all $x,y\in G_1$, then
there exists a homomorphism $H:G_1\longrightarrow G_2$ with
$d(h(x),H(x))<\varepsilon,$ for all $x\in G_1$? In other words,
under what conditions does there exist a homomorphism near an
approximate homomorphism? The concept of stability for a functional
equation arises when we replace the functional equation by an
inequality which acts as a perturbation of the equation. In 1941, D.
H. Hyers [9] gave a first affirmative answer to the question of
Ulam for Banach spaces. Let $f:{E}\longrightarrow{E'}$ be a mapping
between Banach spaces such that
$$\|f(x+y)-f(x)-f(y)\|\leq \delta, $$
for all $x,y\in E,$ and for some $\delta>0.$ Then there exists a
unique additive mapping $T:{E}\longrightarrow{E'}$ such that
$$\|f(x)-T(x)\|\leq \delta,$$
for all $x\in E.$ Moreover, if $f(tx)$ is continuous in $t$ for each
fixed $x\in E,$ then $T$ is linear. Finally, in 1978, Th. M. Rassias
[21] proved the following theorem.
\begin{thm}\label{t1} Let $f:{E}\longrightarrow{E'}$ be a mapping from
a normed vector space ${E}$
into a Banach space ${E'}$ subject to the inequality
$$\|f(x+y)-f(x)-f(y)\|\leq \varepsilon (\|x\|^p+\|y\|^p), \eqno \hspace {0.5
cm} (1.1)$$
for all $x,y\in E,$ where $\varepsilon$ and $p$ are constants with
$\varepsilon>0$ and $p<1.$ Then there exists a unique additive mapping
$T:{E}\longrightarrow{E'}$ such that
$$\|f(x)-T(x)\|\leq \frac{2\varepsilon}{2-2^p}\|x\|^p, \eqno \hspace {0.5
cm}(1.2)$$ for all $x\in E.$
If $p<0$ then inequality (1.1) holds for all $x,y\neq 0$, and (1.2)
for $x\neq 0.$ Also, if the function $t\mapsto f(tx)$ from $\Bbb R$
into $E'$ is continuous for each fixed $x\in E,$ then $T$ is linear.
\end{thm}
In 1991, Z. Gajda [5] answered the question for the case $p>1$,
which was raised by Rassias. This new concept is known as
Hyers-Ulam-Rassias stability of functional equations (see [1,2],
[5-11], [18-20]).
In [15], Won-Gil Park and Jae-Hyeong Bae considered the following
functional equation:
$$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))+24f(x)-6f(y).\eqno \hspace {0.5cm}(1.3)$$
In fact they proved that a function
$f$ between real vector spaces $X$ and $Y$ is a solution of (1.3) if and only if there
exists a unique symmetric multi-additive function $B:X\times X\times X\times X\longrightarrow Y$ such that
$f(x)=B(x,x,x,x)$ for all $x$ (see [3,4], [12-17], [22,23]). It is easy to show that
the function $f(x)=x^4$ satisfies the functional equation (1.3), which is called a
quartic functional equation and every solution of the quartic functional equation is said to be a
quartic function.
We deal with the following functional equation deriving from quartic and
additive functions:
$$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))-\frac{3}{7}(f(2y)-2f(y))+2f(2x)-8f(x).\eqno \hspace {0.5cm}(1.4)$$
It is easy to see that
the function $f(x)=ax^4+bx$ is a solution of the functional equation (1.4). In the
present paper we investigate the general solution and the generalized
Hyers-Ulam-Rassias stability of the functional equation (1.4).
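As a quick check of the claim above: for $f(x)=ax^4$, both sides of (1.4) equal $a(32x^{4}+48x^{2}y^{2}+2y^{4})$, since
$$4a\big((x+y)^{4}+(x-y)^{4}\big)-\frac{3}{7}\big(16ay^{4}-2ay^{4}\big)+2a(2x)^{4}-8ax^{4}
=32ax^{4}+48ax^{2}y^{2}+2ay^{4},$$
while for $f(x)=bx$ both sides reduce to $4bx$; by linearity, $f(x)=ax^{4}+bx$ is indeed a solution.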
\vskip 5mm \begin{center}
\section{General solution}
\end{center}
In this section we establish the general solution of the functional
equation (1.4).
\begin{thm}\label{t1} Let $X$, $Y$
be vector spaces, and let $f:X\longrightarrow Y$ be a function
satisfying (1.4). Then the following assertions hold.
a) If $f$ is an even function, then $f$ is quartic.
b) If $f$ is an odd function, then $f$ is additive.
\end{thm}
\begin{proof} a) Putting $x=y=0$ in (1.4), we get $f(0)=0$. Setting $x=0$ in (1.4), by the evenness of $f$, we obtain
$$f(2y)=16f(y), \eqno \hspace {0.5cm}(2.1)$$
for all $y\in X.$ Hence (1.4) can be written as
$$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))+24f(x)-6f(y) \eqno \hspace {0.5cm}(2.2)$$
for all $x,y \in X,$ which is the quartic functional equation (1.3). This means that $f$ is a quartic function.
b) Setting $x=y=0$ in (1.4), we obtain $f(0)=0.$ Putting $x=0$ in
(1.4), by the oddness of $f$, we have
$$f(2y)=2f(y), \eqno \hspace {0.5cm}(2.3)$$
for all $y\in X.$ We obtain from (1.4) and (2.3) that
$$f(2x+y)+f(2x-y)=4(f(x+y)+f(x-y))-4f(x), \eqno \hspace {0.5cm}(2.4)$$
for all $x,y\in X.$ Replacing $y$ by $-2y$ in (2.4), it follows that
$$f(2x-2y)+f(2x+2y)=4(f(x-2y)+f(x+2y))-4f(x). \eqno \hspace {0.5cm}(2.5)$$
Combining (2.3) and (2.5), we obtain
$$f(x-y)+f(x+y)=2(f(x-2y)+f(x+2y))-2f(x). \eqno \hspace {0.5cm}(2.6)$$
Interchanging $x$ and $y$ in (2.6) and using the oddness of $f$, we get the relation
$$f(x+y)-f(x-y)=2(f(y-2x)+f(y+2x))-2f(y). \eqno \hspace {0.5cm}(2.7)$$
Replacing $y$ by $-y$ in (2.7) and using the oddness of $f$, we get
$$f(x-y)-f(x+y)=2(f(2x-y)-f(2x+y))+2f(y). \eqno \hspace {0.5cm}(2.8)$$
From (2.4) and (2.8), we obtain
$$4f(2x+y)=9f(x+y)+7f(x-y)-8f(x)+2f(y). \eqno \hspace {0.5cm}(2.9)$$
Replacing $x+y$ by $y$ in (2.9) and using the oddness of $f$, it follows that
$$7f(2x-y)=4f(x+y)+2f(x-y)-9f(y)+8f(x). \eqno \hspace {0.5cm}(2.10)$$
By using (2.9) and (2.10), we obtain
\begin{align*}
f(2x+y)&+f(2x-y)=\frac{79}{28}f(x+y)+\frac{57}{28}f(x-y)\\
&-\frac{6}{7}f(x)-\frac{11}{14}f(y). \hspace {8cm}(2.11)
\end{align*}
We get from (2.4) and (2.11) that
$$3f(x+y)+5f(x-y)=8f(x)-2f(y). \eqno \hspace {0.5cm}(2.12)$$
Replacing $x$ by $2x$ in (2.4) and using (2.3) and (2.4) again, it follows that
$$f(4x+y)+f(4x-y)=16(f(x+y)+f(x-y))-24f(x). \eqno \hspace {0.5cm}(2.13)$$
Setting $2x+y$ instead of $y$ in (2.4) and using the oddness of $f$, we arrive at
$$f(4x+y)-f(y)=4(f(3x+y)-f(x+y))-4f(x).\eqno \hspace {0.5cm}(2.14)$$
Replacing $y$ by $-y$ in (2.14), we get
$$f(4x-y)+f(y)=4(f(3x-y)-f(x-y))-4f(x). \eqno \hspace {0.5cm}(2.15)$$
Adding (2.14) to (2.15), we get the relation
\begin{align*}
f(4x+y)+f(4x-y)&=4(f(3x+y)+f(3x-y))\\&-4(f(x+y)+f(x-y))-8f(x).
\hspace {4.2cm}(2.16)
\end{align*}
Replacing $y$ by $x+y$ in (2.4), we obtain
$$f(3x+y)+f(x-y)=4(f(2x+y)-f(y))-4f(x). \eqno \hspace {0.5cm}(2.17)$$
Replacing $y$ by $-y$ in (2.17) and using the oddness of $f$, we obtain
$$f(3x-y)+f(x+y)=4(f(2x-y)+f(y))-4f(x). \eqno \hspace {0.5cm}(2.18)$$
Adding (2.17) and (2.18) and applying (2.4), we obtain
$$f(3x+y)+f(3x-y)=15(f(x+y)+f(x-y))-24f(x). \eqno \hspace {0.5cm}(2.19)$$
Using (2.16) and (2.19), we get
$$f(4x+y)+f(4x-y)=56(f(x+y)+f(x-y))-104f(x). \eqno \hspace {0.5cm}(2.20)$$
Combining (2.13) and (2.20), we arrive at
$$f(x+y)+f(x-y)=2f(x).\eqno \hspace {0.5cm}(2.21)$$
Substituting $f(x-y)=2f(x)-f(x+y)$ from (2.21) into (2.12) gives
$-2f(x+y)=-2f(x)-2f(y)$, that is, $f(x+y)=f(x)+f(y)$, so $f$ is
additive. This completes the proof.
\end{proof}
\begin{thm}\label{t2}
Let $X$,$Y$ be vector spaces, and let $f:X\longrightarrow Y$ be a
function. Then $f$ satisfies (1.4) if and only if there exist a unique
symmetric multi-additive function $B:X\times X\times X\times
X\longrightarrow Y$ and a unique additive function
$A:X\longrightarrow Y$ such that
$f(x)=B(x,x,x,x)+A(x)$ for all $x\in X.$
\end{thm}
\begin{proof}
Let $f$ satisfy (1.4). We decompose $f$ into its even and odd
parts by setting
$$f_e(x)=\frac{1}{2}(f(x)+f(-x)),~~\hspace {0.3 cm}f_o(x)=\frac{1}{2}(f(x)-f(-x)),$$
for all $x\in X.$ By (1.4), we have
\begin{align*}
f_e(2x+y)&+f_e(2x-y)=\frac{1}{2}[f(2x+y)+f(-2x-y)+f(2x-y)+f(-2x+y)]\\
&=\frac{1}{2}[f(2x+y)+f(2x-y)]+\frac{1}{2}[f(-2x+(-y))+f(-2x-(-y))]\\
&=\frac{1}{2}[4(f(x+y)+f(x-y))-\frac{3}{7}(f(2y)-2f(y))+2f(2x)-8f(x)]\\
&+\frac{1}{2}[4(f(-x-y)+f(-x-(-y)))-\frac{3}{7}(f(-2y)-2f(-y))+2f(-2x)-8f(-x)]\\
&=4[\frac{1}{2}(f(x+y)+f(-x-y))+\frac{1}{2}(f(-x+y)+f(x-y))]\\
&-\frac{3}{7}[\frac{1}{2}(f(2y)+f(-2y))-(f(y)-f(-y))]\\
&+2[\frac{1}{2}(f(2x)+f(-2x))]-8[\frac{1}{2}(f(x)+f(-x))]\\
&=4(f_e(x+y)+f_e(x-y))-\frac{3}{7}(f_e(2y)-2f_e(y))+2f_e(2x)-8f_e(x)
\end{align*}
for all $x,y\in X.$ This means that $f_e$ satisfies (1.4). Similarly,
we can show that $f_o$ satisfies (1.4). By Theorem 2.1, $f_e$ and
$f_o$ are quartic and additive, respectively. Thus there exists a
unique symmetric multi-additive function $B:X\times X\times X\times
X\longrightarrow Y$ such that $f_e(x)=B(x,x,x,x)$ for all $x\in X.$
Put $A(x):=f_o(x)$ for all $x\in X.$ It follows that
$f(x)=B(x,x,x,x)+A(x)$ for all $x\in X.$ The converse is
trivial.
\end{proof}
\section{Stability}
Throughout this section, $X$ and $Y$ will be a real normed space and a
real Banach space, respectively. For a
function $f:X\rightarrow Y$, we define $D_f:X\times X \rightarrow Y$ by
\begin{align*}
D_{f}(x,y)&=7[f(2x+y)+f(2x-y)]-28[f(x+y)+f(x-y)]\\
&+3[f(2y)-2f(y)]-14[f(2x)-4f(x)]
\end{align*}
for all $x,y \in X.$
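Note that $D_{f}(x,y)$ is exactly $7$ times the difference of the two sides of (1.4), so $f$ satisfies (1.4) if and only if $D_f\equiv 0$. We also record a computation that is used repeatedly below: if $f(0)=0$, then
$$D_f(0,y)=-21\big(f(y)+f(-y)\big)+3\big(f(2y)-2f(y)\big),$$
which equals $3f(2y)-48f(y)$ for even $f$, and $3\big(f(2y)-2f(y)\big)$ for odd $f$.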
\begin{thm}\label{t2} Let $\psi:X\times X\rightarrow [0,\infty)$
be a function satisfying $\sum^{\infty}_{i=0}
\frac{\psi(0,2^ix)}{16^i}<\infty$ for all $x\in X$, and $\lim_n
\frac{\psi(2^n x,2^n y)}{16^n}=0$ for all $x,y\in X$. If
$f:X\rightarrow Y$ is an even function such that $f(0)=0,$ and that
$$\|D_f(x,y)\|\leq \psi(x,y), \eqno \hspace {0.5cm}(3.1)$$ for all $x,y\in
X$, then there exists a unique quartic function $Q:X \rightarrow Y$
satisfying (1.4) and
$$\|f(x)-Q(x)\|\leq \frac{1}{48}\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)}{16^i},\eqno \hspace {0.5cm}(3.2)$$
for all $x\in X$.
\end{thm}
\begin{proof}
Putting $x=0$ in (3.1), we have
$$\|3f(2y)-48f(y)\|\leq \psi(0,y). \eqno\hspace {0.5cm}(3.3)$$
Replacing $y$ by $x$ in (3.3) and dividing by 48, we obtain
$$\parallel \frac{f(2x)}{16}-f(x)\parallel \leq
\frac{1}{48}\psi(0,x), \eqno\hspace {0.5cm}(3.4)$$ for all $x\in X.$
Replacing $x$ by $2x$ in (3.4), we get
$$\parallel\frac{f(4x)}{16}-f(2x)\parallel\leq
\frac{1}{48}\psi(0,2x). \eqno\hspace {0.5cm}(3.5)$$ Combining (3.4)
and (3.5) by the triangle inequality, we get
$$\parallel\frac{f(4x)}{16^2}-f(x)\parallel
\leq\frac{1}{48}(\frac{\psi(0,2x)}{16}+\psi(0,x)). \eqno\hspace
{0.5cm}(3.6)$$ By induction on $n\in \mathbb{N}$, we can show that
$$\parallel \frac{f(2^n x)}{16^ n}-f(x)\parallel\leq\frac {1}{48}
\sum^{n-1}_{i=0} \frac {\psi(0,2^i x)}{16^i}.\eqno \hspace
{0.5cm}(3.7)$$ Dividing (3.7) by $16^m$ and replacing $x$ by $2^m x$,
we get
\begin{align*}
\parallel \frac {f(2^{m+n}x)}{16^{m+n}}-\frac{f(2^{m}x)}{16^{m}}\parallel
&=\frac{1}{16^m}\parallel \frac{f(2^n 2^m x)}{16^n}-f(2^m x)\parallel\\
&\leq\frac {1}{48\times16^m}\sum^{n-1}_{i=0}\frac{\psi(0,2^i 2^m
x)}{16^i}\\
&\leq\frac{1}{48}\sum^{\infty}_{i=0}\frac{\psi(0,2^i 2^m
x)}{16^{m+i}},
\end{align*}
for all $x \in X$. Letting $m\rightarrow \infty$, this shows that $\{\frac{f(2^n x)}{16^n}\}$ is a
Cauchy sequence in $Y$.
Since $Y$ is a Banach space, the sequence $\{\frac{f(2^n
x)}{16^n}\}$ converges. We define $Q:X\rightarrow Y$ by
$Q(x):=\lim_n \frac{f(2^n x)}{16^n}$ for all $x\in X$. Since $f$ is an
even function, $Q$ is even. On the other hand we have
\begin{align*}
\|D_Q (x,y)\|&=\lim_n \frac{1}{16^n}\|D_f(2^n x, 2^n y)\|\\
&\leq\lim_n \frac{\psi(2^n x,2^n y)}{16^n}=0,
\end{align*}
for all $x,y\in X$. Hence by Theorem 2.1, $Q$ is a quartic function.
To show that $Q$ is unique, suppose that there exists another quartic
function $\acute{Q}:X\rightarrow Y$ which satisfies (1.4) and (3.2).
We have $Q(2^n x)=16^n Q(x)$, and $\acute{Q}(2^n x)=16^n
\acute{Q}(x)$, for all $x\in X$. It follows that
\begin{align*}
\parallel \acute{Q}(x)-Q(x)\parallel &=\frac{1}{16^n}\parallel
\acute{Q}(2^n x)-Q(2^n x)\parallel\\
&\leq \frac{1}{16^n}[\parallel \acute{Q}(2^n x)-f(2^n
x)\parallel+\parallel f(2^n x)-Q(2^n x)\parallel]\\
&\leq\frac{1}{24}\sum^{\infty}_{i=0}\frac{\psi(0,2^{n+i}
x)}{16^{n+i}},
\end{align*}
for all $x\in X$. By taking $n\rightarrow \infty$ in this inequality
we have $\acute{Q}(x)=Q(x)$.
\end{proof}
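For instance, if $\psi(x,y)\equiv\varepsilon$ for a constant $\varepsilon>0$, then both hypotheses on $\psi$ hold and (3.2) yields
$$\|f(x)-Q(x)\|\leq \frac{1}{48}\sum^{\infty}_{i=0}\frac{\varepsilon}{16^{i}}=\frac{1}{48}\cdot\frac{16\varepsilon}{15}=\frac{\varepsilon}{45},$$
for all $x\in X$.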
\begin{thm}\label{t'2} Let $\psi:X\times X\rightarrow [0,\infty)$
be a function satisfying $\sum^{\infty}_{i=0}
16^i\psi(0,2^{-i-1}x)<\infty$ for all $x\in X$, and $\lim_n
16^n\psi(2^{-n} x,2^{-n} y)=0$ for all $x,y\in X$. Suppose that an
even function $f:X\rightarrow Y$ satisfies $f(0)=0$ and (3.1). Then
the limit $Q(x):=\lim_n 16^n{f(2^{-n} x)}$ exists for all $x\in X$
and $Q:X \rightarrow Y$ is the unique quartic function
satisfying (1.4) and
$$\|f(x)-Q(x)\|\leq \frac{1}{3}\sum^{\infty}_{i=0} 16^i\psi(0,2^{-i-1} x), \eqno \hspace {0.5cm}(3.8)$$
for all $x\in X$.
\end{thm}
\begin{proof} Putting $x=0$ in (3.1), we have
$$\|3f(2y)-48f(y)\|\leq \psi(0,y). \eqno \hspace {0.5cm}(3.9)$$
Replacing $y$ by $\frac{x}{2}$ in (3.9) and dividing the result by 3, we
get
$$\parallel 16f(2^{-1}x)-f(x)\parallel \leq
\frac{1}{3}\psi(0,2^{-1}x),\eqno \hspace {0.5cm}(3.10)$$ for all
$x\in X.$ Replacing $x$ by $\frac{x}{2}$ in (3.10), it follows that
$$\parallel 16f(4^{-1}x)-f(2^{-1}x)\parallel\leq
\frac{1}{3}\psi(0,2^{-2}x).\eqno \hspace {0.5cm}(3.11)$$ Multiplying
(3.11) by 16 and combining with (3.10) by the triangle inequality, we obtain
$$\parallel 16^2f(4^{-1}x)-f(x)\parallel
\leq\frac{1}{3}(16\psi(0,2^{-2}x)+\psi(0,2^{-1}x)). \eqno
\hspace {0.5cm}(3.12)$$ By induction on $n\in \mathbb{N}$, we have
$$\parallel 16^n f(2^{-n} x)-f(x)\parallel\leq\frac {1}{3}
\sum^{n-1}_{i=0} 16^i\psi(0,2^{-i-1} x).\eqno \hspace
{0.5cm}(3.13)$$ Multiplying (3.13) by $16^m$ and replacing $x$ by
$2^{-m} x$, we obtain
\begin{align*}
\parallel 16^{m+n} {f(2^{-m-n}x)}-16^m{f(2^{-m}x)}\parallel
&={16^m}\parallel 16^n f(2^{-n} 2^{-m} x)-f(2^{-m} x)\parallel\\
&\leq\frac {16^m}{3}\sum^{n-1}_{i=0}16^i{\psi(0,2^{-i-1} 2^{-m}
x)}\\
&\leq\frac{1}{3}\sum^{\infty}_{i=0}{16^{m+i}}{\psi(0,2^{-i-1} 2^{-m}
x)},
\end{align*}
for all $x \in X$. Letting $m\rightarrow \infty$, it
follows that $\{16^n{f(2^{-n} x)}\}$ is a Cauchy sequence in $Y$.
Since $Y$ is a Banach space, the sequence $\{16^n{f(2^{-n} x)}\}$
converges. Now we define $Q:X\rightarrow Y$ by $Q(x):=\lim_n
16^n{f(2^{-n} x)}$ for all $x\in X$. The rest of the proof is similar to
the proof of Theorem 3.1.
\end{proof}
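We note in passing that, in contrast with Theorem 3.1, a constant $\psi$ does not satisfy the hypotheses of Theorem 3.2, since $\sum^{\infty}_{i=0} 16^i\psi(0,2^{-i-1}x)$ would then diverge; on the other hand, $\psi(x,y)=\theta(\|x\|^{p}+\|y\|^{p})$ with $p>4$ does satisfy them, cf. Corollary 3.9 below.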
\begin{thm}\label{t2} Let $\psi:X\times X\rightarrow [0,\infty)$ be
a function such that $$\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)}{2^i}< \infty, \eqno
\hspace {0.5cm}(3.14)$$ and
$$\lim_n \frac {\psi(2^n x,2^n y)}{2^n}=0, \eqno \hspace {0.5cm}(3.15)$$ for
all $x,y \in X$. If $f:X\rightarrow Y$ is an odd function such that
$$\|D_f (x,y)\|\leq \psi(x,y), \eqno \hspace {0.5cm}(3.16)$$
for all $x,y\in X$, then there exists a unique additive function
$A:X\rightarrow Y$ satisfying (1.4) and
$$\|f(x)-A(x)\|\leq\frac{1}{2} \sum^{\infty}_{i=0}\frac{\psi(0,2^i
x)}{2^i},$$ for all $x\in X$.
\end{thm}
\begin{proof} Setting $x=0$ in (3.16), we get
$$\|f(2y)-2f(y)\|\leq \psi(0,y). \eqno \hspace {0.5cm}(3.17)$$
Replacing $y$ by $x$ in (3.17) and dividing the result by 2, we have
$$\|\frac{f(2x)}{2}-f(x)\|\leq\frac{1}{2} \psi(0,x). \eqno\hspace {0.5cm}(3.18)$$
Replacing $x$ by $2x$ in (3.18), we obtain
$$\|\frac{f(4x)}{2}-f(2x)\|\leq\frac{1}{2} \psi(0,2x). \eqno\hspace {0.5cm}(3.19)$$
Combining (3.18) and (3.19) by the triangle inequality, we get
$$\|\frac{f(4x)}{4}-f(x)\|\leq\frac{1}{2} (\psi(0,x)+\frac{1}{2}\psi(0,2x)).\eqno\hspace {0.5cm}(3.20)$$
Iterating this estimate and using induction on $n$, we obtain the
relation
$$\|\frac{f(2^n x)}{2^n}-f(x)\|\leq\frac{1}{2} \sum^{n-1}_{i=0}\frac{\psi(0,2^i x)}{2^i}.\eqno\hspace {0.5cm}(3.21)$$
Dividing (3.21) by $2^m$ and then substituting $x$ by $2^m x$, we get
\begin{align*}
\parallel \frac {f(2^{m+n}x)}{2^{m+n}}-\frac{f(2^{m}x)}{2^{m}}\parallel
&=\frac{1}{2^m}\parallel \frac{f(2^n 2^m x)}{2^n}-f(2^m x)\parallel\\
&\leq\frac {1}{2^{m+1}}\sum^{n-1}_{i=0}\frac{\psi(0,2^i
2^m x)}{2^i}\\
&\leq\frac{1}{2}\sum^{\infty}_{i=0}\frac{\psi(0,2^{i+m}x)}{2^{m+i}}\hspace
{5.5cm}(3.22)
\end{align*}
Letting $m\rightarrow \infty$ in (3.22), the right-hand side of
the inequality tends to zero, so $\{\frac{f(2^n x)}{2^n}\}$ is a Cauchy sequence. Since $Y$ is a Banach space,
$A(x)=\lim_n \frac {f(2^n x)}{2^n}$ exists for all $x\in X$. The
oddness of $f$ implies that $A$ is odd. On the other hand, by (3.15) we
have
\begin{align*}
\|D_A (x,y)\|=\lim_n \frac{1}{2^n}\|D_f(2^n x,2^n y)\|\leq\lim_n
\frac{\psi(2^n x,2^n y)}{2^n}=0.
\end{align*}
Hence by Theorem 2.1, $A$ is an additive function. The rest of the proof
is similar to the proof of Theorem 3.1.
\end{proof}
\begin{thm}\label{t''2} Let $\psi:X\times X\rightarrow [0,\infty)$
be a function satisfying $$\sum^{\infty}_{i=0}
2^i\psi(0,2^{-i-1}x)<\infty,$$ for all $x\in X$, and $\lim_n
2^n\psi(2^{-n} x,2^{-n} y)=0$ for all $x,y\in X$. Suppose that an
odd function $f:X\rightarrow Y$ satisfies (3.1). Then the limit
$A(x):=\lim_n 2^n{f(2^{-n} x)}$ exists for all $x\in X$ and $A:X
\rightarrow Y$ is a unique additive function satisfying (1.4) and
$$\|f(x)-A(x)\|\leq \sum^{\infty}_{i=0} 2^i\psi(0,2^{-i-1} x)$$
for all $x\in X$.
\end{thm}
\begin{proof}
It is similar to the proof of Theorem 3.3.
\end{proof}
\begin{thm}\label{t5} Let $\psi:X\times X\rightarrow [0,\infty)$ be a
function such that
$$\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)}{2^i}<
\infty \quad \text{and} \quad \lim_n \frac{\psi(2^n x,2^n y)}{2^n}=0,$$ for
all $x,y\in X$. Suppose that a function $ f:X\rightarrow Y$ satisfies
the inequality $$\|D_f (x,y)\|\leq \psi(x,y),$$ for all $x,y\in X$,
and $f(0)=0$. Then there exist a unique quartic function
$Q:X\rightarrow Y$ and a unique additive function $A:X\rightarrow Y$
satisfying (1.4) and
\begin{align*}
\parallel f(x)-Q(x)-A(x)\parallel
&\leq\frac {1}{48}[\sum^{\infty}_{i=0}(\frac{\psi(0,2^i x)+\psi(0,-2^i x)}{2\times16^i}\\
&+\frac{12(\psi(0,2^i x)+\psi(0,-2^i x))}{2^i})], \hspace
{4.2cm}(3.23)
\end{align*}
for all $x\in X$.
\end{thm}
\begin{proof} We have
$$\|D_{f_e} (x,y)\|\leq\frac{1}{2}[\psi(x,y)+\psi(-x,-y)]$$
for all $x,y\in X$. Since $f_e(0)=0$ and $f_e$ is an even function,
by Theorem 3.1 there exists a unique quartic function
$Q:X\rightarrow Y$ satisfying
$$\parallel f_e(x)-Q(x)\parallel \leq\frac {1}{48}\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)+\psi(0,-2^i
x)}{2\times16^i}, \eqno\hspace {0.5cm}(3.24)$$ for all $x\in X$. On
the other hand, $f_o$ is an odd function and $$\|D_{f_o} (x,y)\|\leq
\frac{1}{2}[\psi(x,y)+\psi(-x,-y)],$$ for all $x,y\in X$. Then by
Theorem 3.3, there exists a unique additive function $A:X\rightarrow
Y$ such that
$$\parallel f_o(x)-A(x)\parallel \leq\frac {1}{2}\sum^{\infty}_{i=0} \frac{\psi(0,2^i x)+\psi(0,-2^i
x)}{2\times2^i}, \eqno\hspace {0.5cm}(3.25)$$ for all $x\in X$.
Combining (3.24) and (3.25), we obtain (3.23). This completes the
proof.
\end{proof}
Using Theorem 3.5, we now investigate the Hyers-Ulam-Rassias
stability problem for the functional equation (1.4).
\begin{cor}\label{t2}
Let $\theta\geq0$ and $p<1$. Suppose $f:X\rightarrow Y$ satisfies the
inequality
$$\|D_f (x,y)\|\leq\theta(\|x\|^p+\|y\|^p),$$
for all $x,y\in X$, and $f(0)=0$. Then there exists a unique quartic
function $Q:X\rightarrow Y$, and a unique additive function
$A:X\rightarrow Y$ satisfying (1.4), and
$$\parallel f(x)-Q(x)-A(x)\parallel \leq\frac {\theta}{48}\|x\|^p
(\frac{16}{16-2^p}+\frac{24}{1-2^{p-1}}),$$ for all $x\in X$.
\end{cor}
By Theorem 3.5, we obtain the following Hyers-Ulam stability
result for the functional equation (1.4).
\begin{cor}\label{t2} Let $\varepsilon$ be a positive real number, and let $f:X\rightarrow Y$ be a function satisfying $$\|D_f
(x,y)\|\leq\varepsilon,$$ for all $x,y\in X$. Then there exist a unique
quartic function $Q:X\rightarrow Y$, and a unique additive function
$A:X\rightarrow Y$ satisfying (1.4) and
$$\parallel f(x)-Q(x)-A(x)\parallel \leq\frac {46}{45} ~\varepsilon,$$
for all $x\in X$.
\end{cor}
By applying Theorems 3.2 and 3.4, we have the following theorem.
\begin{thm}\label{t5} Let $\psi:X\times X\rightarrow [0,\infty)$ be a
function such that
$$\sum^{\infty}_{i=0} 16^i{\psi(0,2^{-i-1} x)}<
\infty \quad \text{and} \quad \lim_n 16^n{\psi(2^{-n} x,2^{-n} y)}=0,$$ for all
$x,y\in X$. Suppose that a function $ f:X\rightarrow Y$ satisfies the
inequality $$\|D_f (x,y)\|\leq \psi(x,y),$$ for all $x,y\in X$, and
$f(0)=0$. Then there exist a unique quartic function $Q:X\rightarrow
Y$ and a unique additive function $A:X\rightarrow Y$ satisfying
(1.4) and
\begin{align*}
\parallel f(x)-Q(x)-A(x)\parallel
&\leq\sum^{\infty}_{i=0}[(\frac{16^i}{3}+2^i)(\frac{\psi(0,2^{-i-1}
x)+\psi(0,-2^{-i-1} x)}{2})],
\end{align*}
for all $x\in X$.
\end{thm}
\begin{cor}\label{t2}
Let $\theta\geq0$ and $p>4$. Suppose $f:X\rightarrow Y$ satisfies the
inequality
$$\|D_f (x,y)\|\leq\theta(\|x\|^p+\|y\|^p),$$
for all $x,y\in X$, and $f(0)=0$. Then there exist a unique quartic
function $Q:X\rightarrow Y$, and a unique additive function
$A:X\rightarrow Y$ satisfying (1.4), and
$$\parallel f(x)-Q(x)-A(x)\parallel \leq\frac {\theta}{3\times 2^p}\|x\|^p
(\frac{1}{1-2^{4-p}}+\frac{3}{1-2^{1-p}}),$$ for all $x\in X$.
\end{cor}
\end{document}
\begin{document}
\title{The title of my page}
\begin{abstract}
We consider here the problem of classifying orbits of an action of the diffeomorphism group of 3-space on a tower of fibrations with $\mathbb{P}^2$-fibers that generalize the Monster Tower due to Montgomery and Zhitomirskii. As a corollary we give the first steps towards the problem of classifying Goursat 2-flags of small length. In short, we classify the orbits within the first four levels of the Monster Tower and show that
there is a total of $34$ orbits at the fourth level in the tower.
\end{abstract}
\section{Introduction}\label{sec:intro}
A Goursat flag is a nonholonomic distribution $D$ with ``slow growth''.
By slow growth we mean that the associated flag of distributions
$$D \hspace{.2in} \subset \hspace{.2in} D + [D,D]\hspace{.2in} \subset \hspace{.2in} D + [D,D] + [[D,D],[D,D]] \dots ,$$
grows by one dimension at each bracketing step and after $n$ steps it will span the entire tangent bundle. By an abuse of notation, $D$ in this context also means the sheaf of vector fields spanning $D$.
Though less popular than their nonholonomic siblings, like the contact distribution or the rolling distribution in mechanics \cite{gil},
Goursat distributions are more common than one would think. The canonical Cartan distributions in the jet spaces
$J^k(\mathbb{R},\mathbb{R})$, or the non-slip constraint for a jackknifed truck \cite{jean} are examples.
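For concreteness, in standard coordinates $(x,u_{0},u_{1},\dots,u_{k})$ on $J^{k}(\mathbb{R},\mathbb{R})$ the Cartan distribution is
$$D = \operatorname{span}\Big\{\frac{\partial}{\partial u_{k}},\ \frac{\partial}{\partial x}+u_{1}\frac{\partial}{\partial u_{0}}+\cdots+u_{k}\frac{\partial}{\partial u_{k-1}}\Big\},$$
and each bracketing step above contributes exactly one new direction ($\partial/\partial u_{k-1}$, then $\partial/\partial u_{k-2}$, and so on), so that the whole tangent bundle is spanned after $k$ steps.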
Generalizations of Goursat flags have been proposed in the literature. One such notion is that of a {\it Goursat multi-flag.} Typical examples of Goursat multi-flags include the Cartan distributions $C$ of the jet spaces $J^k(\mathbb{R},\mathbb{R}^n),n\geq 2.$
Iterated bracketing this time:
$$C\hspace{.2in} \subset \hspace{.2in} C + [C,C] \hspace{.2in} \subset \hspace{.2in} C + [C,C] + [[C,C],[C,C]] \dots,$$
leads to a jump in rank by $n$ dimensions at each step.
To our knowledge, the general theory behind Goursat multi-flags made its first appearance in the work of A. Kumpera and J. L. Rubin \cite{kumpera1}.
P. Mormul has also been very active in breaking new ground \cite{mormul1}, and developed new combinatorial tools to investigate the normal forms of these distributions. In addition to Mormul's work is Yamaguchi's work \cite{yamaguchi1} and \cite{yamaguchi2} where he investigated the local properties of Goursat multi-flags. It is also important to mention that results from his work in \cite{yamaguchi2} were crucial
to our classification procedure.
In this note we concentrate on the problem of classifying local germs of Goursat multi-flags of small length.
We will consider Goursat 2-flags of length up to 4. Goursat 2-flags exhibit many new geometric features our old Goursat
flags (Goursat $1$-flags) did not possess. The geometric properties of Goursat multi-flags were the main subject of the paper \cite{castro}.
We approach the classification problem from a geometric standpoint, and will partially follow the programme started by Montgomery and Zhitomirskii in \cite{mont1}.
Our starting point is the universality theorem proved by Yamaguchi and Shibuya \cite{yamaguchi1} stating that any germ $D$ of a Goursat distribution, or using Mormul's terminology, a Goursat $n$-flag of length $k$ is analytically equivalent to the germ of the canonical distribution on
the $k$-step Cartan prolongation \cite{bryant} of the flat $\mathbb{R}^{n+1}$ with its trivial bundle, the germs being taken at some
$D$-dependent point.
In what follows we apply Yamaguchi's theorem to the $n=2$ case, and translate the classification problem of Goursat 2-flags into a problem of classifying points in a tower of real $\mathbb{P}^2$ (projective plane) fibrations
\begin{equation}\label{eqn:tower}
\cdots \rightarrow \mathcal{P}^{4}(2)\rightarrow \mathcal{P}^{3}(2)\rightarrow \mathcal{P}^{2}(2) \rightarrow
\mathcal{P}^{1}(2) \rightarrow \mathcal{P}^{0}(2) = \mathbb{R}^3,
\end{equation}
where $\text{dim}(\mathcal{P}^{k}(2)) = 3 + 2k$ and $\mathcal{P}^{k}(2)$ is the Cartan prolongation of $\mathcal{P}^{k-1}(2)$. The global topology of these manifolds is much more interesting though, and has yet to be explored (\cite{castro}).
Each $\mathcal{P}^{k}(2)$ comes equipped with a rank-3 distribution $\Delta_k$. At a dense open subset of
$\mathcal{P}^{k}(2)$ this $\Delta_{k}$ is locally diffeomorphic to the Cartan distribution $C$ in $J^k(\mathbb{R},\mathbb{R}^2).$
$\Delta_k$ has an associated flag of length $k$.
A description of symmetries of the $\Delta_k$ is the content of a theorem due to Yamaguchi \cite{yamaguchi2}, attributed
to Backl\"und, and dating from the $1980$'s. He showed that that any symmetry of $(\mathcal{P}^{k}(2),\Delta_k)$ results from the Cartan prolongation of a symmetry of the base manifold, i.e. of a diffeomorphism of the 3-fold $\mathbb{R}^3$.
By applying the techniques developed in \cite{castro,mont2}, we will attack the classification problem utilizing the {\it curves-to-points} approach and a new technique we named {\it isotropy representation}, used to some extent in \cite{mont1} and somewhat inspired by \'E. Cartan's moving frame method \cite{favard}. Our main result states that there are $34$ Goursat $2$-flags of length 4, and we provide the exact numbers for each length. Our approach is constructive: normal forms for each equivalence class can be made explicit. Due to space limitations we will write down only a couple of instructive examples.
We would like to mention that P. Mormul and Pelletier \cite{mormul2} have attempted an alternative solution to the classification problem.
In their classification work, they employed results and tools from previous works of Mormul. In \cite{mormul} Mormul discusses two coding systems for special
$2$-flags and proves that the two coding systems are the same. One is the Extended Kumpera-Ruiz system, a coding system
used to describe $2$-flags. The other is called Singularity Class coding, which is an intrinsic coding system that describes
the sandwich diagram \cite{mont1} associated to $2$-flags. A brief outline on how these coding systems relate
to Montgomery's $RVT$ coding is discussed in \cite{castro}. Then, building upon Mormul's work in \cite{mormul3}, Mormul and Pelletier
use the idea of strong nilpotency of special multi-flags, along with properties of and relationships between his two coding systems, to classify
these distributions up to length $4$ (equivalently, up to level $4$ of the Monster Tower). Our count of $34$ agrees with theirs.
Here is a short description of the paper. In section one we acquaint ourselves with the main definitions necessary for the statements of our main results. Section two contains our precise statements, and a few explanatory remarks to help the reader progress through the theory with us.
Section three consists of the statements of our main results. In section four we discuss the basic tools and ideas that will
be needed to prove our various results. Section five is devoted to technicalities, and the actual proofs. We conclude the paper, section six, with a quick summary of our findings.
For the record, we have also included an appendix where our lengthy computations are scrutinized.
\noindent {\bf Acknowledgements.} We would like to thank Corey Shanbrom and Richard Montgomery (both at UCSC) for many useful conversations and remarks.
\section{Main definitions}
\subsection{Constructions}
Let $(Z,\Delta)$ be a manifold of dimension $n$ equipped with a plane field of rank $r$ and let $\Pj (\Delta)$ be the {\it projectivization} of $\Delta$.
As a manifold, $$Z^1 = \Pj (\Delta) .$$ Various objects in $Z$ can be canonically prolonged (lifted) to the new manifold $Z^1$.
\begin{table}\label{tab:prol}
\caption{Some geometric objects and their Cartan prolongations.}
\begin{tabular}{|c|c|}
\hline
curve $c:(\mathbb{R},0)\rightarrow (Z,p)$ & curve $c^{1}:(\mathbb{R},0)\rightarrow (Z^1,p),$ \\
& $c^{1}(t) =\text{(point,moving line)} = (c(t),span\{ \frac{dc(t)}{dt} \}) $\\ \hline
diffeomorphism $\Phi: Z \circlearrowleft$ & diffeomorphism $\Phi^{1}: Z^1 \circlearrowleft$, \\
& $\Phi^{1}(p,\ell) = (\Phi(p),d\Phi_p(\ell))$ \\ \hline
rank $r$ linear subbundle & rank $r$ linear subbundle $\Delta_1=d\pi_{(p,\ell)}^{-1}(\ell)\subset TZ^1$,\\
$\Delta \subset TZ$ & $\pi: Z^1 \rightarrow Z$ is the canonical projection. \\
\hline
\end{tabular}
\end{table}
Given an analytic curve $c:(I,0)\rightarrow (Z,p)$, where $I$ is some open interval about zero in $\mathbb{R}$, we can naturally define a new curve $$c^{1}:(I,0)\rightarrow (Z^1,(p,\ell))$$ with image in $Z^1$ and where $\ell = span\{ \frac{dc(0)}{dt} \}$.
This new curve, $c^{1}(t)$, is called the \textit{prolongation} of $c(t)$. If $t = t_{0}$ is not a regular point of $c$ (i.e. if $\frac{dc}{dt}(t_{0}) = 0$), then we define $c^{1}(t_{0})$ by taking the limit $\lim_{t \rightarrow t_{0}} c^{1}(t)$ where the limit varies over the regular points $t \rightarrow t_{0}$. An important fact to
note, proved in \cite{mont1}, is that the analyticity of $Z$ and $c$ implies that the limit is well defined and that the prolonged curve
$c^{1}(t)$ is analytic as well. Since this process can be iterated, we will write
$c^{k}(t)$ to denote the $k$-fold prolongation of the curve $c(t)$.
The manifold $Z^1$ also comes equipped with a distribution $\Delta_1$ called the {\it Cartan prolongation of $\Delta$} \cite{bryant} and is defined as follows.
Let $\pi : Z^1 \rightarrow Z$ be the projection map $(p, \ell)\mapsto p$. Then
$$\Delta_1(p,\ell) = d\pi_{(p,\ell)}^{-1}(\ell),$$
i.e. {\it it is the subspace of $T_{(p,\ell)}Z^1$ consisting of all tangents to curves which are prolongations of curves one level down through $p$ in the direction $\ell$.}
It is easy to check using linear algebra that $\Delta_1$ is also a rank $r$ plane field. The pair $(Z^1,\Delta_{1})$ is called the {\it Cartan prolongation} of $(Z,\Delta)$.
\begin{example}
Take $Z = \mathbb{R}^{3}$ with its tangent bundle, which we denote by $\Delta_0$.
Then the tower shown in equation (\ref{eqn:tower}) is obtained by prolonging the pair $(\mathbb{R}^{3},\Delta_0)$ four times.
\end{example}
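To make this concrete, here is the standard local description in one affine chart. For $Z=\mathbb{R}^{3}$ with coordinates $(x,y,z)$, introduce fiber affine coordinates $u = dy/dx$ and $v = dz/dx$ on the chart of $\Pj(T\mathbb{R}^{3})$ where $dx \neq 0$. Then
$$\Delta_1 = \operatorname{span}\Big\{\frac{\partial}{\partial x}+u\frac{\partial}{\partial y}+v\frac{\partial}{\partial z},\ \frac{\partial}{\partial u},\ \frac{\partial}{\partial v}\Big\},$$
a rank $3$ distribution on the $5$-dimensional manifold $\mathcal{P}^{1}(2)$.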
By a {\it symmetry} of the pair $(Z,\Delta)$ we mean a (local or global) diffeomorphism $\Phi$ of $Z$ that preserves the subbundle $\Delta$.
The symmetries of $(Z,\Delta)$ can also be prolonged to symmetries $\Phi^{1}$ of $(Z^1,\Delta_{1})$ as follows.
Define
$$\Phi^{1}(p,\ell)=(\Phi(p), d\Phi(\ell)).$$ Since $d\Phi_{p}$ is invertible and linear, the second component is well defined as a projective map. The resulting symmetry, denoted by $\Phi^{1}$, is the prolongation (to $Z^1$) of $\Phi$.
Objects of interest in this paper and their Cartan prolongations are summarized in table (1). Unless otherwise mentioned prolongation will always refer to Cartan prolongation.
\begin{eg}[Prolongation of a cusp.]
Let $c(t) = (t^{2}, t^{3}, 0)$ be the $A_{2}$ cusp in $\R^{3}$. Then
$c^{1}(t) = (x(t), y(t), z(t), [dx : dy : dz]) = (t^{2}, t^{3}, 0, [2t: 3t^{2}: 0])$. After we introduce fiber affine coordinates
$u = \frac{dy}{dx}$ and $v = \frac{dz}{dx}$ around the point $[1: 0: 0]$ we obtain the immersed curve
$$c^{1}(t) = (t^{2}, t^{3}, 0, \frac{3}{2}t, 0)$$
\end{eg}
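Continuing this example with a quick computation: $\frac{dc^{1}}{dt}(t) = (2t,\, 3t^{2},\, 0,\, \frac{3}{2},\, 0)$, so $c^{1}$ is indeed immersed. At $t=0$ its tangent direction is $(0,0,0,\frac{3}{2},0)$, which is vertical (tangent to the fiber of $\mathcal{P}^{1}(2) \rightarrow \mathbb{R}^{3}$) and hence a critical direction; this reflects the fact that the cusp realizes a point of class $RV$ in the $RVT$ coding discussed below.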
\subsection{Constructing the Monster tower.}
We start with $\R^{n+1}$ as our base and take $\Delta_{0} = T \R^{n+1}$. Prolonging $\Delta_{0}$ we get that
$\mathcal{P}^{1}(n) = \Pj \Delta_{0}$ with the distribution $\Delta_{1}$. By
iterating this process we end up with the manifold $\mathcal{P}^{k}(n)$ which is endowed with the rank $n+1$
distribution $\Delta_{k} = (\Delta_{k-1})^{1}$ and fibered over $\mathcal{P}^{k-1}(n)$.
In this paper we will be looking at the case $n=2$, whose base is $\mathbb{R}^{3}$.
\begin{defn}
The Monster tower is a sequence of manifolds with distributions, $(\mathcal{P}^{k}, \Delta_{k})$,
together with fibrations $$\cdots \rightarrow \mathcal{P}^{k}(n) \rightarrow \mathcal{P}^{k-1}(n) \rightarrow \cdots \rightarrow
\mathcal{P}^{1}(n) \rightarrow \mathcal{P}^{0}(n) = \mathbb{R}^{n+1}$$
and we write $\pi_{k,i}: \mathcal{P}^{k}(n) \rightarrow \mathcal{P}^{i}(n)$, with $i < k$ for the projections.
\end{defn}
\begin{thm}
For $n > 1$ and $k>0$ any local diffeomorphism of $\mathcal{P}^{k}(n)$ preserving the distribution $\Delta_{k}$
is the restriction of the $k$-th prolongation of a local diffeomorphism $\Phi \in Diff(n+1)$.
\end{thm}
Proof: This was shown by Yamaguchi and Shibuya in \cite{yamaguchi2}.
\begin{rem}
The importance of the above result cannot be stressed enough.
This theorem by Yamaguchi and Shibuya is what allows the isotropy representation method, discussed in section five of the paper,
of classifying orbits within the Monster Tower to work.
\end{rem}
\begin{rem}
Since we will be working exclusively with the $n=2$ Monster tower in this paper, we will just write
$\mathcal{P}^{k}$ for $\mathcal{P}^{k}(2)$.
\end{rem}
\begin{defn}
Two points $p,q$ in $\mathcal{P}^k$ are said to be equivalent if and only if there is a $\Phi\in \text{Diff}(3)$ such that $\Phi^{k}(p)=q$; in other words, $q\in \mathcal{O}(p)$, where $\mathcal{O}(p)$ denotes the orbit of the point $p$.
\end{defn}
\subsection{Orbits.}
Yamaguchi's theorem states that any symmetry of $\mathcal{P}^k$ comes from prolonging a diffeomorphism of $\mathbb{R}^3$ $k$-times. This remark is essential to our computations. Let us denote by $\mathcal{O}(p)$ the orbit of the point $p$ under the action of $Diff(\mathbb{R}^3)$.
In trying to calculate the various orbits within the Monster tower we see that it is easier to fix the base points
$p_{0} = \pi_{k,0}(p_{k})$ and $q_{0} = \pi_{k,0}(q_{k})$ to be $0 \in \R^{3}$. This means that we can replace the pseudogroup
$Diff(3)$, diffeomorphism germs of $\R^{3}$, by the group $Diff_{0}(3)$ of diffeomorphism germs that map the origin back to the
origin in $\R^{3}$.
\begin{defn}
We say that a curve or curve germ $\gamma: (\R, 0) \rightarrow (\R^{3}, 0)$ realizes the point $p_{k} \in \mathcal{P}^{k}$
if $\gamma^{k}(0) = p_{k}$, where $p_{0} = \pi_{k,0}(p_{k})$.
\end{defn}
\begin{defn}
A direction $\ell \subset \Delta_{k}(p_{k})$, $k \geq 1$ is called a critical direction if there exists an
immersed curve, at level $k$, that is tangent to the direction $\ell$ whose projection to the zero-th level is the constant curve.
If no such curve exists,
then we call $\ell$ a regular direction.
\end{defn}
\begin{defn}
For $p \in \mathcal{P}^{k}$, define
\begin{eqnarray*}
\text{Germ}(p) &=& \{ c :(\mathbb{R},0)\rightarrow (\mathbb{R}^3,0) \mid \text{$c$ realizes $p$ and $\frac{dc^{k}}{dt}\vert_{t=0}\neq 0$ spans a regular direction} \}.
\end{eqnarray*}
\end{defn}
\begin{defn}
Two curves $\gamma$, $\sigma$ in $\R^{3}$ are $(RL)$ equivalent, written $\gamma \sim \sigma$ $\Leftrightarrow$ there
exists a diffeomorphism germ $\Phi \in Diff(3)$ and a reparametrization $\tau \in Diff_{0}(1)$ of $(\R,0)$ such that
$\sigma = \Phi \circ \gamma \circ \tau$.
\end{defn}
\section{Main results}
\begin{thm}[Orbit counting per level]\label{thm:main}
In the $n=2$ Monster tower the number of orbits within each of the first four levels of the tower are as follows:
level $1$ has $1$ orbit, level $2$ has $2$ orbits, level $3$ has $7$ orbits, and level $4$ has $34$ orbits.
\end{thm}
The main idea behind this classification is a coding system developed by Castro and Montgomery \cite{castro}. This
coding system is known as $RVT$ coding where each point in the Monster tower is labeled by a sequence of
$R$'s, $V$'s, $T$'s, and $L$'s along with various decorations. We will give an explanation of this
coding system in the next section. Using this coding system we went class by class and determined the number
of orbits within every possible $RVT$ class that could arise at each of the first four levels.
\begin{thm}[Listing of orbits within each $RVT$ code.]\label{thm:count}
The table below breaks down the number of orbits that appear within each $RVT$ class within the first three levels.
\begin{table}\label{tab:codes}
\begin{tabular}{|c|c|c|c|}
\hline
Level of tower & $RVT$ code & Number of orbits & Normal forms \\ \hline
$1$ & $R$ & $1$ & $(t,0,0)$ \\
$2$ & $RR$ & $1$ & $(t,0,0)$ \\
& $RV$ & $1$ & $(t^{2}, t^{3}, 0)$\\
$3$ & $RRR$ & $1$ & $(t,0,0)$ \\
& $RRV$ & $1$ & $(t^{2}, t^{5}, 0)$ \\
& $RVR$ & $1$ & $(t^{2}, t^{3}, 0)$ \\
& $RVV$ & $1$ & $(t^{3}, t^{5}, t^{7})$, $(t^{3}, t^{5}, 0)$ \\
& $RVT$ & $2$ & $(t^{3}, t^{4},t^{5})$, $(t^{3}, t^{4}, 0)$ \\
& $RVL$ & $1$ & $(t^{4}, t^{6}, t^{7})$ \\
\hline
\end{tabular}
\end{table}
\no For level $4$ there is a total of $23$ possible $RVT$ classes. Of these, $14$ consist of a single orbit.
The classes $RRVT$, $RVRV$, $RVVR$, $RVVV$, $RVVT$, $RVTR$, $RVTV$, $RVTL$ consist of $2$ orbits, and the class $RVTT$ consists of $4$ orbits.
\end{thm}
\begin{rem}
A few words should be said to explain the normal forms column in table $2$.
Let $p_{k} \in \mathcal{P}^{k}$, for $k = 1,2,3$, having $RVT$ code $\omega$, meaning $\omega$ is a word
from the second column of the table. Any $\gamma \in Germ(p_{k})$
is $(RL)$ equivalent to one of the curves listed in the normal forms column for the
$RVT$ class $\omega$. Now, notice that for the class $RVV$ there are two inequivalent curves sitting in the
normal forms column, but there is only one orbit within that class. This is because the two normal forms realize
the same point at the third level: their three-fold prolongations agree at $t=0$. However, after four prolongations they
represent different points at the fourth level. This corresponds to the fact that at the fourth level the class
$RVVR$ breaks up into two orbits.
\end{rem}
The following theorems were proved in \cite{castro}; they helped to reduce the number of calculations in our orbit classification process.
\begin{defn}
A point $p_{k} \in \mathcal{P}^{k}$ is called a Cartan point if its $RVT$ code is $R^{k}$.
\end{defn}
\begin{thm}\label{thm:cartan}
The $RVT$ class $R^{k}$ forms a single orbit at any level within the Monster tower $\mathcal{P}^{k}(n)$ for $k \geq 1$ and $n \geq 1$.
Every point at level $1$ is a Cartan point. For $k > 1$ the set $R^{k}$ is an open dense subset of $\mathcal{P}^{k}(n)$.
\end{thm}
\begin{defn}
A parametrized curve is an $A_{2k}$ curve, $k \geq 1$ if it is equivalent to the curve
$$(t^{2}, t^{2k + 1}, 0) $$
\end{defn}
\begin{thm}\label{thm:ak}
Let $p_{j} \in \mathcal{P}^{j}$ with $j = k + m + 1$, where $m,k \geq 0$ are integers and $p_{j} \in R^{k}CR^{m}$,
then $Germ(p_{j})$ contains a curve germ equivalent to the $A_{2k}$ singularity, which means that the $RVT$ class
$R^{k}CR^{m}$ consists of a single orbit.
\end{thm}
\begin{rem}
One could ask ``Why curves?'' The space of $k$-jets of functions $f:\mathbb{R}\rightarrow \mathbb{R}^{2}$, usually denoted by $J^k(\mathbb{R},\mathbb{R}^{2})$ is an open dense subset of $\mathcal{P}^{k}$. It is in this sense that a point $p\in \mathcal{P}^{k}$ is roughly speaking the $k$-jet of a curve in $\mathbb{R}^3$. Sections of the bundle
$$J^k(\mathbb{R},\mathbb{R}^{2}) \rightarrow \mathbb{R}\times \mathbb{R}^{2}$$
are $k$-jet extensions of functions.
Explicitly, given a function $t\mapsto (x(t),y(t))$, its $k$-jet extension is defined as
$$t\mapsto (t,x(t),y(t),x'(t),y'(t),\dots,x^{(k)}(t),y^{(k)}(t)).$$ (Superscript here denotes the order of the derivative.)
It is an instructive exercise to show that, for certain choices of fiber affine coordinates in $\mathcal{P}^{k}$ not involving critical directions, our local charts look like a copy of $J^k(\mathbb{R},\mathbb{R}^{2})$.
\
Another reason for looking at curves is because it gives us a better picture for the overall behavior of an $RVT$ class.
If one knows all the possible curve normal forms for a particular $RVT$ class, say $\omega$, then not only does one know how many
orbits are within the class $\omega$, but one also knows how many orbits are within the regular prolongation of $\omega$. By
regular prolongation of an $RVT$ class $\omega$ we mean the addition of $R$'s to the end of the word $\omega$, i.e. the regular
prolongation of $\omega$ is $\omega R \cdots R$. This method of using curves to classify $RVT$ classes was used in \cite{mont1} and proved to be very
successful in classifying points within the $n=1$ Monster Tower.
\end{rem}
\section{Tools and ideas involved in the proofs}\label{sec:tools}
Before we begin the proofs we need to define the $RVT$ code.
\subsection{$RC$ coding of points.}
\begin{defn}
A point $p_{k} \in \mathcal{P}^{k}$, where $p_{k} = (p_{k-1}, \ell)$, is called a regular or a critical point according to whether the
line $\ell$ is a regular direction or a critical direction.
\end{defn}
\begin{defn}
For $p_{k} \in \mathcal{P}^{k}$, $k \geq 1$ and $p_{i} = \pi_{k,i}(p_{k})$, we write
$\omega_{i}(p_{k}) = R$ if $p_{i}$ is a regular point and $\omega_{i}(p_{k}) = C$ if $p_{i}$ is a critical point.
Then the word $\omega(p_{k}) = \omega_{1}(p_{k}) \cdots \omega_{k}(p_{k})$ is called the $RC$ code for the point $p_{k}$.
Note that $\omega_{1}(p_{k})$ is always equal to $R$ by Theorem $3.4$.
\end{defn}
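\no For example (an illustration of our own), if $p_{3} \in \mathcal{P}^{3}$ is itself a critical point, its projection $p_{2}$ is critical, and $p_{1}$ is regular, then $\omega(p_{3}) = RCC$. After the refinement of the letter $C$ carried out below, the points with this $RC$ code are exactly those in the $RVT$ classes $RVV$, $RVT$, and $RVL$.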
\no So far we have not discussed how critical directions arise inside of $\Delta_{k}$. The following section will show
that there is more than one kind of critical direction that can appear within the distribution $\Delta_{k}$.
\subsection{Baby Monsters.}
One can apply prolongation to any analytic $n$-dimensional manifold $F$ in place of $\R^{n}$. Start out with
$\mathcal{P}^{0}(F) = F$ and take $\Delta^{F}_{0} = TF$. Then the prolongation of the pair
$(F, \Delta^{F}_{0})$ is $\mathcal{P}^{1}(F) = \Pj TF$, with canonical rank $n$ distribution
$\Delta^{F}_{1} = (\Delta^{F}_{0})^{1}$. By iterating this process $k$ times we end up with the pair
$(\mathcal{P}^{k}(F), \Delta^{F}_{k})$, which is analytically diffeomorphic to $(\mathcal{P}^{k}(n-1), \Delta_{k})$.
\
Now, apply this process to the fiber $F_{i}(p_{i}) = \pi^{-1}_{i, i-1}(p_{i-1}) \subset \mathcal{P}^{i}$ through the point
$p_{i}$ at level $i$. The fiber is an $(n-1)$-dimensional integral submanifold for $\Delta_{i}$. Prolonging, we see that
$\mathcal{P}^{1}(F_{i}(p_{i})) \subset \mathcal{P}^{i + 1}$, with the associated distribution
$\delta^{1}_{i} = \Delta^{F_{i}(p_{i})}_{1}$; that is,
$$\delta^{1}_{i}(q) = \Delta_{i + 1}(q) \cap T_{q}(\mathcal{P}^{1}(F_{i}(p_{i}))) $$
which is a hyperplane within $\Delta_{i + 1}(q)$, for $q \in \mathcal{P}^{1}(F_{i}(p_{i}))$. When this prolongation process is
iterated, we end up with the submanifolds
$$\mathcal{P}^{j}(F_{i}(p_{i})) \subset \mathcal{P}^{i + j}$$
with the hyperplane subdistribution $\delta^{j}_{i} \subset \Delta_{i + j}(q)$ for $q \in \mathcal{P}^{j}(F_{i}(p_{i}))$.
\begin{defn} A baby Monster born at level $i$ is a sub-tower $(\mathcal{P}^{j}(F_{i}(p_{i})), \delta^{j}_{i})$,
for $j \geq 0$ within the Monster tower. If $q \in \mathcal{P}^{j}(F_{i}(p_{i}))$ then we will say that a baby Monster born
at level $i$ passes through $q$ and that $\delta^{j}_{i}(q)$ is a critical hyperplane passing through $q$, and born at level $i$.
\end{defn}
\begin{rem} The vertical plane $V_k (q)$, which is of the form $\delta^{0}_{k} (q)$, is always one
of the critical hyperplanes
passing through $q$.
\end{rem}
\begin{thm}
A direction $\ell \subset \Delta_{k}$ is critical $\Leftrightarrow$ $\ell$ is contained in a critical hyperplane.
\end{thm}
\begin{figure}
\caption{Arrangement of critical hyperplanes.}
\label{fig:one-plane}
\label{fig:two-planes}
\label{fig:three-planes}
\label{fig:arrangement}
\end{figure}
\subsection{Arrangements of critical hyperplanes for $n = 2$.}
Over any point $p_{i}$, at the $i$-th level of the Monster tower, there is a total of three different hyperplane configurations
for $\Delta_{i}$. These three configurations are shown in diagrams $(a)$, $(b)$, and $(c)$. Figure $(a)$ is the picture
for $\Delta_{i}(p_{i})$ when the $i$-th letter in the $RVT$ code for $p_{i}$ is the letter $R$. From our earlier discussion, this means that the
vertical hyperplane, labeled with a $V$, is the only critical hyperplane sitting inside of $\Delta_{i}(p_{i})$. Figure $(b)$ is the
picture for $\Delta_{i}(p_{i})$ when the $i$-th letter in the $RVT$ code is either the letter $V$ or the letter $T$. In this case there is a
total of two critical hyperplanes sitting inside of $\Delta_{i}(p_{i})$: one is the vertical hyperplane and the other is the
tangency hyperplane, labeled by the letter $T$. Finally, figure $(c)$ describes the picture for $\Delta_{i}(p_{i})$ when the $i$-th letter
in the $RVT$ code of $p_{i}$ is the letter $L$. In this situation there is a total of three
critical hyperplanes: the vertical hyperplane and two tangency hyperplanes, labeled as $T_{1}$ and $T_{2}$. Because of
the presence of these three critical hyperplanes we need to refine our notion of an $L$ direction and distinguish three $L$ directions,
labeled $L_{1}$, $L_{2}$, and $L_{3}$.
\
With the above in mind, we can now refine our $RC$ coding and define the $RVT$ code for points within the Monster tower.
Take $p_{k} \in \mathcal{P}^{k}$ and if $\omega_{i}(p_{k}) = C$ then we look at the point $p_{i} = \pi_{k,i}(p_{k})$, where
$p_{i} = (p_{i-1}, \ell_{i-1})$. Then depending on which hyperplane $\ell_{i-1}$ is contained in we relabel the letter $C$ by
the letter $V$, $T$, $L$, $T_{a}$ for $a = 1,2$, or $L_{b}$ for $b = 1,2,3$. As a result, we see that each of the first four
levels of the Monster tower is made up of the following $RVT$ classes:
\
\begin{itemize}
\item{}Level 1: $R$.
\item{}Level 2: $RR, RV$.
\item{}Level 3: $$RRR, RRV, RVR, RVV, RVT, RVL$$
\item{}Level 4:
$$RRRR, RRRV$$
$$RRVR, RRVV, RRVT, RRVL$$
$$RVRR, RVRV, RVVR, RVVV, RVVT, RVVL $$
$$RVTR, RVTV, RVTT , RVTL$$
$$RVLR, RVLV, RVLT_1, RVLT_2, RVLL_1, RVLL_2, RVLL_3$$
\end{itemize}
\
\begin{rem}
As was pointed out in \cite{castro}, the symmetries, at any level in the Monster tower, preserve the critical hyperplanes.
In other words, if $\Phi^{k}$ is a symmetry at level $k$ in the Monster tower and $\delta^{j}_{i}$ is a critical hyperplane within
$\Delta_{k}$, then $\Phi^{k}_{\ast}(\delta^{j}_{i}) = \delta^{j}_{i}$. As a result, the $RVT$ classes create a partition of the various
points within any level of the Monster tower.
\end{rem}
Now, in light of the above configurations of critical hyperplanes, one might ask the following question: how does one ``see'' the two
tangency hyperplanes that appear over an ``L'' point, and where do they come from? This question was an important one to ask when
trying to classify the number of orbits within the fourth level of the Monster Tower and to better
understand the geometry of the tower. We will provide an example to answer this question, but
before we do so we must discuss some details about a particular coordinate system, called Kumpera-Rubin coordinates, which will help us do various
computations on the Monster tower.
\
\subsection{Kumpera-Rubin coordinates} When doing local computations in the tower (\ref{eqn:tower}) one needs to work with suitable coordinates.
A good choice of coordinates was suggested by Kumpera and Ruiz \cite{kumpera1} in the Goursat case, and later generalized by Kumpera and Rubin \cite{kumpera2} for multi-flags. A detailed description of the inductive construction of Kumpera-Rubin coordinates was given in \cite{castro} and is discussed in the example following this section, as well as in the proof of our level $3$ classification. For the sake of clarity, we will highlight
the coordinates' attributes through an example.
\begin{eg}[Constructing fiber affine coordinates in $\mathcal{P}^{2}$]
\end{eg}
\subsection*{Level One:}
Consider the pair $(\mathbb{R}^{3}, T \R^{3})$ and let $(x,y,z)$ be local coordinates. The set of 1-forms $\{dx,dy,dz\}$ forms a coframe of $T^*\mathbb{R}^3$.
Any line $\ell$ through $p \in \mathbb{R}^3$ has projective coordinates $[dx(\ell): dy(\ell): dz(\ell)]$.
Since the affine group, which is contained in $Diff(3)$, acts transitively on $\mathcal{P}(T\mathbb{R}^3)$ we can fix $p=(0,0,0)$ and $\ell = \text{span} \left\{ (1,0,0) \right\}$.
Thus $dx(\ell)\neq 0$ and we introduce fiber affine coordinates $[1: dy/dx: dz/dx]$ or, $$u = \frac{dy}{dx}, v = \frac{dz}{dx}.$$
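\no For example (an illustrative computation of ours), the line $\ell$ spanned by the vector $(2,4,6)$ has projective coordinates $[dx(\ell): dy(\ell): dz(\ell)] = [2: 4: 6] = [1: 2: 3]$, so in this chart $u = 2$ and $v = 3$.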
The Pfaffian system describing the prolonged distribution $\Delta_1$ on $$\mathcal{P}^{1} \approx \mathbb{R}^3\times \mathbb{P}^2$$ is
$$ \{dy - u dx = 0, dz - v dx = 0 \} = \Delta_1 \subset T\mathcal{P}^{1}.$$
At the point $p_{1} = (p_{0}, \ell) = (x,y,z,u,v) = (0,0,0,0,0)$ the distribution is the linear subspace $$\Delta_1(0,0,0)=\{dy = 0,dz = 0\}.$$ The triad of one-forms $dx,du,dv$ forms a local coframe for $\Delta_1$ near $p_{1} = (p_{0}, \ell)$.
The fiber, $F_{1}(p_{1}) = \pi^{-1}_{1,0}(p_{0})$, is given by $x = y = z = 0$. The 2-plane of critical directions (``bad-directions'') is thus spanned by $\frac{\partial}{\partial u},\frac{\partial}{\partial v}$.
\
The reader may have noticed that we could instead have centered our chart on any regular direction at level $1$, e.g. $\frac{\partial}{\partial x} + a \frac{\partial}{\partial u} + b \frac{\partial}{\partial v}$. All regular directions, at level one, are equivalent.
\subsection*{Level Two: $RV$ points.}
Any line $\ell\subset \Delta_1(p_{1}')$, for $p_{1}'$ near $p_{1}$, will have projective coordinates
$$[dx(\ell): du(\ell): dv(\ell)] .$$
If we choose a critical direction, say $\ell =\frac{\partial}{\partial u}$, then $du(\frac{\partial}{\partial u})=1$ and we can take the projective chart $[\frac{dx}{du} : 1 : \frac{dv}{du}]$. We will show below that any two critical directions are equivalent and therefore such a choice does not result in any loss of generality. We introduce new fiber affine coordinates
$$u_2 = \frac{dx}{du}, \qquad v_2 = \frac{dv}{du},$$
and the distribution $\Delta_2$ will be described in this chart as
\begin{eqnarray*}
\Delta_2 = \{dy - u dx = 0, dz - v dx = 0,\\
dx - u_2 du = 0, dv - v_2 du = 0\} \subset T\mathcal{P}^{2}.
\end{eqnarray*}
\subsection*{Level Three: The Tangency Hyperplanes over an $L$ point.}
We take $p_{3} = (p_{2}, \ell) \in RVL$ with $p_{2}$ as in the level two discussion.
We will show what the local affine coordinates near this point are and that the tangency hyperplane $T_{1}$, in $\Delta_{3}(p_{3})$, is the critical hyperplane $\delta^{1}_{2}(p_{3}) =
span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa v_{3}} \}$ and the tangency hyperplane $T_{2}$ is the critical hyperplane
$\delta^{2}_{1}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}}\}$.
\
We begin with the local coordinates near $p_{3}$. First, recall that the distribution $\Delta_{2}$, in this case, is coframed
by $[du: du_{2}: dv_{2}]$. Within $\Delta_{2}$ the vertical hyperplane is given by $du = 0$ and the tangency hyperplane by
$du_{2} = 0$. The point $p_{3} = (p_{2}, \ell)$ with $\ell$ being an $L$ direction means that both $du(\ell) = 0$
and $du_{2}(\ell) = 0$. This means that the only choice for local coordinates near $p_{3}$ is given by
$[\frac{du}{dv_{2}}: \frac{du_{2}}{dv_{2}}: 1]$, so that the fiber coordinates at level $3$ are
$$u_{3} = \frac{du}{dv_{2}}, v_{3} = \frac{du_{2}}{dv_{2}} $$
and the distribution $\Delta_{3}$ will be described in this chart as
\begin{eqnarray*}
\Delta_{3} = \{dy - u dx = 0, dz - v dx = 0,\\
dx - u_2 du = 0, dv - v_2 du = 0, \\
du - u_{3}dv_{2} = 0, du_{2} - v_{3}dv_{2} = 0 \} \subset T\mathcal{P}^{3}.
\end{eqnarray*}
With this in mind, we are ready to determine how the two tangency hyperplanes are situated within $\Delta_{3}$.
\
{\it $T_{1} = \delta^{1}_{2}(p_{3})$:}
First we note that $p_{3} = (x,y,z,u,v,u_{2},v_{2},u_{3},v_{3}) = (0,0,0,0,0,0,0,0,0)$ with $u = \frac{dy}{dx}$,
$v = \frac{dz}{dx}$, $u_{2} = \frac{dx}{du}$, $v_{2} = \frac{dv}{du}$, $u_{3} = \frac{du}{dv_{2}}$,
$v_{3} = \frac{du_{2}}{dv_{2}}$. With this in mind, we start by looking at the vertical hyperplane
$V_{2}(p_{2}) \subset \Delta_{2}(p_{2})$ and prolong the fiber $F_{2}(p_{2})$ associated to $V_{2}(p_{2})$ and see that
$$ \mathcal{P}^{1}(F_{2}(p_{2})) = \Pj V_{2} = (p_{1}, u_{2}, v_{2}, [ du: du_{2}: dv_{2} ]) =
(p_{1}, u_{2}, v_{2}, [ 0: a: b ]) $$
$$= (p_{1}, u_{2}, v_{2}, [ 0: \frac{a}{b}: 1 ]) = (p_{1}, u_{2}, v_{2}, 0, v_{3})$$
where $a,b \in \R$ with $b \neq 0$. Then, since $\Delta_{3}$, in a neighborhood of $p_{3}$, is given by
$$ \Delta_{3} = span \{ u_{3}Z^{(2)}_{1} + v_{3}\frac{\pa}{\pa u_{2}} + \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}}, \frac{\pa}{\pa v_{3}} \}$$
with $Z^{(2)}_{1} = u_{2}Z^{(1)}_{1} + \frac{\pa}{\pa u} + v_{2} \frac{\pa}{\pa v}$ and
$Z^{(1)}_{1} = u \frac{\pa}{\pa y} + v \frac{\pa}{\pa z} + \frac{\pa}{\pa x}$, and since
$T_{p_{3}}(\mathcal{P}^{1}(F_{2}(p_{2}))) = span \{ \frac{\pa}{\pa u_{2}}, \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa v_{3}} \}$, the relation
$$\delta^{1}_{2}(p_{3}) = \Delta_{3}(p_{3}) \cap T_{p_{3}}(\mathcal{P}^{1}(F_{2}(p_{2})))$$
gives us that
$$\delta^{1}_{2}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa v_{3}} \}$$
Now, since $V_{3}(p_{3}) \subset \Delta_{3}(p_{3})$ is given by $V_{3}(p_{3}) = span \{ \frac{\pa}{\pa u_{3}}, \frac{\pa}{\pa v_{3}} \}$
we see, based upon figure $(c)$, that $T_{1} = \delta^{1}_{2}(p_{3})$.
{\it $T_{2} = \delta^{2}_{1}(p_{3})$:}
We begin by looking at $V_{1}(p_{1}) \subset \Delta_{1}(p_{1})$ and at the fiber $F_{1}(p_{1})$ associated to $V_{1}(p_{1})$.
When we prolong the fiber space we see that
$$\mathcal{P}^{1}(F_{1}(p_{1})) = \Pj V_{1} = (0,0,0, u, v, [dx: du: dv]) = (0,0,0, u, v, [0: a: b])$$
$$ = (0,0,0, u, v, [0: 1: \frac{b}{a}]) = (0,0,0, u, v, 0, v_{2})$$
where $a,b \in \R$ with $a \neq 0$. Then, since $\Delta_{2}$, in a neighborhood of $p_{2}$, is given by
$$ \Delta_{2} = span \{ u_{2}Z^{(1)}_{1} + \frac{\pa}{\pa u} + v_{2}\frac{\pa}{\pa v}, \frac{\pa}{\pa u_{2}}, \frac{\pa}{\pa v_{2}} \}$$
and $T_{p_{2}}(\mathcal{P}^{1}(F_{1}(p_{1}))) = span \{ \frac{\pa}{\pa u}, \frac{\pa}{\pa v}, \frac{\pa}{\pa v_{2}} \}$, the relation
$$\delta^{1}_{1}(p_{2}) = \Delta_{2}(p_{2}) \cap T_{p_{2}}(\mathcal{P}^{1}(F_{1}(p_{1})))$$
shows that in a neighborhood of $p_{2}$
$$\delta^{1}_{1} = span \{ u_{2}Z^{(1)}_{1} + \frac{\pa}{\pa u} + v_{2}\frac{\pa}{\pa v}, \frac{\pa}{\pa v_{2}} \}$$
Now, in order to figure out what $\delta^{2}_{1}(p_{3})$ is we need to prolong the fiber $F_{1}(p_{1})$ twice and then look at
the tangent space at the point $p_{3}$. We see that
$$\mathcal{P}^{2}(F_{1}(p_{1})) = \Pj \delta^{1}_{1} = (0,0,0, u,v, 0, v_{2}, [du: du_{2}: dv_{2}])$$
$$ = (0,0,0, u,v, 0, v_{2}, [a: 0: b]) = (0,0,0,u, v, 0, v_{2}, [\frac{a}{b}: 0: 1]) = (0,0,0, u, v, 0, v_{2}, u_{3}, 0)$$
then since
$$\delta^{2}_{1}(p_{3}) = \Delta_{3}(p_{3}) \cap T_{p_{3}}(\mathcal{P}^{2}(F_{1}(p_{1})))$$
with $\Delta_{3}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}}, \frac{\pa}{\pa v_{3}} \}$ and
$T_{p_{3}}(\mathcal{P}^{2}(F_{1}(p_{1}))) = span \{ \frac{\pa}{\pa u}, \frac{\pa}{\pa v}, \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}} \}$
it gives that
$$\delta^{2}_{1}(p_{3}) = span \{ \frac{\pa}{\pa v_{2}}, \frac{\pa}{\pa u_{3}} \}$$
and from looking at figure $(c)$ one can see that $T_{2} = \delta^{2}_{1}(p_{3})$.
\begin{figure}
\caption{Critical hyperplane configuration over $p_{3}$.}
\end{figure}
\begin{rem}
The above example, along with figure $2$, gives concrete reasoning for why a critical hyperplane, which is not the vertical one,
is called a "tangency" hyperplane. Also, in figure $2$ we have drawn the submanifolds $\mathcal{P}^{1}(F_{2}(p_{2}))$ and
$\mathcal{P}^{1}(F_{1}(p_{1}))$ to reflect the fact that they are tangent to the manifolds $\mathcal{P}^{3}$ and $\mathcal{P}^{2}$
respectively with one of their dimensions tangent to the vertical space. At the same time, the submanifold
$\mathcal{P}^{2}(F_{1}(p_{1}))$ is drawn to reflect that it is tangent to the manifold $\mathcal{P}^{3}$ with one of its dimensions
tangent to the appropriate direction in the vertical hyperplane. In particular, it is drawn to show the fact that
$\mathcal{P}^{2}(F_{1}(p_{1}))$ is tangent to the $\frac{\partial}{\partial u_{3}}$ direction while
$\mathcal{P}^{1}(F_{2}(p_{2}))$ is tangent to the $\frac{\partial}{\partial v_{3}}$ direction.
\end{rem}
\subsection{Semigroup of a curve}
\begin{defn}
The \textit{order} of an analytic curve germ $f(t) = \sum a_{i}t^{i}$ is the smallest integer $i$ such that
$a_{i} \neq 0$. We write $ord(f)$ for this (nonnegative) integer. The \textit{multiplicity} of a curve germ
$\gamma:(\mathbb{R},0) \to (\mathbb{R}^n,0)$, denoted $mult(\gamma)$, is the minimum of the orders of its coordinate functions
$\gamma_{i}(t)$ relative to any coordinate system vanishing at the origin.
\end{defn}
\begin{defn}
If $\gamma: (\mathbb{R},0) \to (\mathbb{R}^n,0)$ is a well-parameterized curve germ, then
its \emph{semigroup} is the collection of positive integers $\text{ord}(P(\gamma(t)))$ as
$P$ varies over analytic functions of $n$ variables vanishing at $0$.
\end{defn} Because $\text{ord}((PQ)(\gamma(t))) = \text{ord}(P(\gamma(t))) + \text{ord}(Q(\gamma(t)))$,
the curve semigroup is indeed an algebraic semigroup, i.e. a subset of $\mathbb{N}$ closed under addition.
The semigroup of a well-parameterized curve is a basic diffeomorphism invariant of the curve.
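\no As an illustrative computation (anticipating the normal forms found in the proofs below), consider the curve $\gamma(t) = (t^{3}, t^{5}, t^{7})$. The coordinate functions give orders $3$, $5$, and $7$, and products of monomials realize every sum of these; since the order of any $P(\gamma(t))$ must itself be such a sum, the semigroup of $\gamma$ is generated by $3$, $5$, and $7$, namely $\{3, 5, 6, 7, 8, \ldots\}$, whose only gaps are $1$, $2$, and $4$.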
\begin{defn}[Following Arnol'd (cf. \cite{arnsing}, end of his introduction)]
A curve germ $\gamma$ in $\mathbb{R}^d$, $d \ge 2$
has symbol $[m, n]$, $[m, n, p]$, or $[m, (n, p)]$ if it is equivalent to a curve germ of the form
$(t^m, t^n, 0, \ldots,0 ) + O(t^{n+1})$,
$(t^m, t^n, t^p, 0, \ldots,0) + O(t^{p+1})$, or
$(t^m, t^n + t^p, 0, \ldots,0) + O(t^{p+1})$.
Here $m< n < p$ are positive integers.
\end{defn}
\subsection{The points-to-curves and back philosophy}
The idea is to translate the problem of classifying orbits in the tower (\ref{eqn:tower}) into an equivalent classification problem for finite jets of space curves. Here we are going to mention some highlights of this approach; we refer the diligent reader to \cite{castro} for the technical details.
\subsubsection*{Methodology. How does the curve-to-point philosophy work?}
To any $p\in \mathcal{P}^{k}(n)$ we associate the set
\begin{eqnarray*}
\text{Germ}(p) &=& \{ c :(\mathbb{R},0)\rightarrow (\mathbb{R}^3,0) \mid c^{k}(0) = p \text{ and } \frac{dc^{k}}{dt}\vert_{t=0}\neq 0 \text{ is a regular direction} \}.
\end{eqnarray*}
The operation of $k$-fold prolongation applied to $\text{Germ}(p)$ yields immersed curves at level $k$ in the Monster tower, tangent to some line $\ell$ having nontrivial projection onto the base manifold $\mathbb{R}^3.$ Such ``good directions'' were named {\it regular} in \cite{castro}, and within each subspace $\Delta_k$ they form an open dense set.
A ``bad direction'' $\ell_{\text{critical}}$, or {\it critical direction} in the jargon of \cite{castro}, is a direction which projects down to a point. The set of critical directions has codimension 1, and consists of a finite union of 2-planes within each $\Delta_k$. Symmetries of $\mathcal{P}^{k}$ do preserve the different types of directions.
In \cite{castro} it was proved that $\text{Germ}(p)$ is always non-empty.
Consider now the set-valued map $p\mapsto \text{Germ}(p)$.
One can prove that $p\sim q$ iff $\text{Germ}(p)\sim \text{Germ}(q)$. (The latter equivalence means that for any curve in $\text{Germ}(p)$ there is an equivalent curve in $\text{Germ}(q)$, and vice-versa.)
\begin{lemma}[Fundamental lemma of points-to-curves approach]
Let $\Omega$ be a subset of $\mathcal{P}^{k}(n)$ and suppose that
$$\bigcup_{p\in \Omega}\text{Germ}(p)= \{\text{finite no. of equivalence classes of curves}\}.$$
Then $$\Omega = \{\text{finite no. of orbits}\}.$$
\end{lemma}
\section{Proofs.}
Now we are ready to prove Theorem \ref{thm:main} and Theorem \ref{thm:count}. We start at level $1$ of the tower and work our way
up to level $4$. At each level we classify the orbits within each of the $RVT$ classes that can arise at that
particular level. We begin with the first level.
\noindent {\bf Proof of Theorem \ref{thm:main} and Theorem \ref{thm:count}, the classification of points at level $1$ and level $2$.}
Theorem $3.3$ tells us that all points at the first level of the tower are equivalent, so there is a single
orbit. For level $2$ there are only two possible $RVT$ codes: $RR$ and $RV$. Again, any point in the class $RR$ is
a Cartan point, and by Theorem $3.4$ the class consists of only one orbit. The class $RV$ consists of a single orbit by
Theorem $3.5$.
\
{\bf The classification of points at level $3$.}
There is a total of six distinct $RVT$ classes at level three in the Monster tower. We begin with the class $RRR$.
{\it The class $RRR$:}
Any point within the class $RRR$ is a Cartan point, so by Theorem \ref{thm:cartan} there is only one orbit within
this class.
{\it The classes $RVR$ and $RRV$:}
From Theorem \ref{thm:ak} we know that the class $RVR$ consists of a single orbit, which
is represented by the point $\gamma^{3}(0)$ where $\gamma$ is the curve $\gamma(t) = (t^{2}, t^{3}, 0)$.
Similarly, the class $RRV$ consists of a single orbit, which is represented by the point $\tilde{\gamma}^{3}(0)$ where
$\tilde{\gamma}(t) = (t^{2}, t^{5}, 0)$.
\no Before we continue, we need to pause and provide some framework to help us with the classification of the
remaining $RVT$ codes.
\
{\it Setup for classes of the form $RVC$:}
We set up coordinates $x,y,z,u,v,u_{2},v_{2}$ for a point in the class $RV$ as in subsection $4.4$.
Then for $p_{2} \in RV$ we have $\Delta_{2}(p_{2}) = span \{ \frac{\pa}{\pa u}, \frac{\pa}{\pa u_{2}}, \frac{\pa}{\pa v_{2}} \}$
where $p_{2} = (x,y,z,u,v,u_{2}, v_{2}) = (0,0,0,0,0,0,0)$, and any point $p_{3} \in RVC \subset \mathcal{P}^{3}$ is of the form
$p_{3} = (p_{2}, \ell_{2}) = (p_{2}, [du(\ell_{2}): du_{2}(\ell_{2}): dv_{2}(\ell_{2})])$. Since the point $p_{2}$ is in the class $RV$ we see that
if $du = 0$ and $du_{2} \neq 0$ along $\ell_{2}$ then $p_{3} \in RVV$; if $du_{2} = 0$ and $du \neq 0$ along $\ell_{2}$ then $p_{3} \in RVT$;
and if $du = 0$ and $du_{2} = 0$ along $\ell_{2}$ then $p_{3} \in RVL$. With this in mind, we are ready to continue
with the classification.
\
{\it The class $RVV$:}
Let $p_{3} \in RVV$ and let $\gamma \in Germ(p_{3})$. We prolong $\gamma$ three times and have that
$\gamma^{3}(t) = (x(t),y(t),z(t), u(t), v(t), u_{2}(t), v_{2}(t))$ and we look at the component functions
$u(t)$, $u_{2}(t)$, and $v_{2}(t)$ where we set $u(t) = \Sigma_{i} a_{i}t^{i}$, $u_{2}(t) = \Sigma_{j}b_{j}t^{j}$, and
$v_{2}(t) = \Sigma_{k}c_{k}t^{k}$. Now, since $\gamma^{2}(t)$ needs to be tangent to the vertical hyperplane in $\Delta_{2}$,
$(\gamma^{2})'(0)$ must be a proper vertical direction in $\Delta_{2}$, meaning $(\gamma^{2})'(0)$ is not an $L$ direction.
Since $\Delta_{2}$ is coframed by $du$, $du_{2}$, and $dv_{2}$, we must have that $du = 0$ and $du_{2} \neq 0$ along
$(\gamma^{2})'(0)$. This imposes the condition on the functions $u(t)$ and $u_{2}(t)$ that $a_{1} = 0$ and $b_{1} \neq 0$, but
for $v_{2}(t)$ it may or may not be true that $c_{1}$ is nonzero. Also it must be true that $a_{2} \neq 0$ or else the curve
$\gamma$ will not be in the set $Germ(p_{3})$. We first look at the case when $c_{1} \neq 0$.
{\it Case $1$, $c_{1} \neq 0$:}
From looking at the one-forms that determine $\Delta_{2}$, we see that in order for the curve $\gamma^{3}$ to be integral
to this distribution, the other component functions for $\gamma^{3}$ must satisfy the following relations:
$$ \dot{y}(t) = u(t) \dot{x}(t), \dot{z}(t) = v(t) \dot{x}(t)$$
$$ \dot{x}(t) = u_{2}(t) \dot{u}(t), \dot{v}(t) = v_{2}(t) \dot{u}(t)$$
We start with the expressions for $\dot{x}(t)$ and $\dot{v}(t)$ and see that, based upon what we know about
$u(t)$, $u_{2}(t)$, and $v_{2}(t)$, $x(t) = \frac{2a_{2}b_{1}}{3}t^{3} + \ldots$ and
$v(t) = \frac{2a_{2}c_{1}}{3}t^{3} + \ldots$. We can then use this information to help us find $y(t)$ and $z(t)$.
We see that $y(t) = \frac{2a^{2}_{2}b_{1}}{5}t^{5} + \ldots$ and $z(t) = \frac{4a^{2}_{2}b_{1}c_{1}}{3}t^{7} + \ldots$.
Now, we know what the first nonvanishing coefficients are for the curve $\gamma(t) = (x(t), y(t), z(t))$ and we want to
determine the simplest curve that $\gamma$ must be equivalent to. In order to do this we will first look at the semigroup
for the curve $\gamma$. In this case the semigroup is given by $S = \{3, [4], 5, 6, 7, \cdots \}$.
\begin{rem}
We again pause to explain the notation used for the semigroup $S$. The set $S = \{3, [4], 5, 6, 7, \cdots \}$ is a semigroup
where the binary operation is addition. The numbers $3$, $5$, $6$, and so on are elements of this semigroup while the bracket
around the number $4$ means that it is not an element of $S$. When we write ``$\cdots$'' after the number $7$ it means that
every positive integer after $7$ is an element in our semigroup.
\end{rem}
This means that every term, $t^{i}$ for $i \geq 7$, can be eliminated
from the above power series expansion for the component functions $x(t)$, $y(t)$, and $z(t)$ by a change of variables
given by $(x,y,z) \mapsto (x + f(x,y,z), y + g(x,y,z), z + h(x,y,z))$. With this in mind and
after we rescale the leading coefficients of each of the components of $\gamma$, we see that
$$\gamma(t) = (x(t), y(t), z(t)) \sim (\tilde{x}(t), \tilde{y}(t), \tilde{z}(t)) = (t^{3} + \alpha t^{4}, t^{5}, t^{7}) $$
We now want to see if we can eliminate the $\alpha$ term, if it is nonzero. To do this we will use a combination of reparametrization
techniques along with semigroup arguments. Use the reparametrization $t = T(1 - \frac{\alpha}{3}T)$ and we get that
$\tilde{x}(T) = T^{3}(1 - \frac{\alpha}{3}T)^{3} + \alpha T^{4}(1 - \frac{\alpha}{3}T)^{4} + \ldots = T^{3} + O(T^{5})$.
This gives us that $(\tilde{x}(T), \tilde{y}(T), \tilde{z}(T)) = (T^{3} + O(T^{5}), T^{5} + O(T^{6}), T^{7} + O(T^{8}))$
and since we can eliminate all of the terms of degree $5$ and higher we see that
$(\tilde{x}(T), \tilde{y}(T), \tilde{z}(T)) \sim (T^{3}, T^{5}, T^{7})$. This means that our original $\gamma$ is equivalent
to the curve $(t^{3}, t^{5}, t^{7})$.
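\no To make the cancellation in this reparametrization explicit (a routine expansion, recorded here as a check), note that
$$\big(T(1 - \tfrac{\alpha}{3}T)\big)^{3} = T^{3} - \alpha T^{4} + \tfrac{\alpha^{2}}{3}T^{5} - \tfrac{\alpha^{3}}{27}T^{6},
\qquad
\alpha \big(T(1 - \tfrac{\alpha}{3}T)\big)^{4} = \alpha T^{4} + O(T^{5}),$$
so the $T^{4}$ terms cancel and $\tilde{x}(T) = T^{3} + O(T^{5})$, as claimed.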
{\it Case $2$, $c_{1} = 0$:}
By repeating an argument similar to the above one, we will end up with $\gamma(t) = (x(t), y(t), z(t)) =
(\frac{2a_{2}b_{1}}{3}t^{3} + \ldots, \frac{2a^{2}_{2}b_{1}}{5}t^{5} + \ldots, \frac{a^{2}_{2}b_{1}c_{2}}{8}t^{8} + \ldots)$.
Note that $c_{2}$ may or may not be equal to zero. This gives that the semigroup for the curve $\gamma$ is $S = \{ 3, [4], 5, 6, [7], 8, \cdots \}$ and that our curve $\gamma$ is such that
$$\gamma(t) = (x(t), y(t), z(t)) \sim (\tilde{x}(t), \tilde{y}(t), \tilde{z}(t)) =
(t^{3} + \alpha_{1}t^{4} + \alpha_{2}t^{7} ,t^{5} + \beta t^{7}, 0)$$
Again, we want to know if we can eliminate the $\alpha_{i}$ and $\beta$ terms. First we focus on the $\alpha_{i}$ terms
in $\tilde{x}(t)$. We use the reparametrization given by $t = T(1 - \frac{\alpha_{1}}{3}T)$ to give us that
$\tilde{x}(T) = T^{3} + \alpha_{2}'T^{7} + O(T^{8})$. Then to eliminate the $\alpha_{2}'$ term we use the reparametrization
given by $T = S(1 - \frac{\alpha_{2}'}{3}S^{4})$ to give that $\tilde{x}(S) = S^{3} + O(S^{8})$.
We are now ready to deal with the $\tilde{y}$ function. Now, because of our two reparametrizations we get that
$\tilde{y}$ is of the form $\tilde{y}(t) = t^{5} + \beta't^{7}$. To get rid of the $\beta'$ term we simply use the
rescaling given by $t \mapsto \frac{1}{ \sqrt{ \left|\beta' \right|} }t$ and then use the scaling diffeomorphism
given by $(x,y,z) \mapsto ( \left| \beta' \right|^{\frac{3}{2}}x, \left| \beta' \right|^{\frac{5}{2}}y, z)$
to give us that $\gamma$ is equivalent to either $(t^{3}, t^{5} + t^{7}, 0)$ or $(t^{3},t^{5} - t^{7}, 0)$.
Note that the above
calculations were done under the assumption that $\beta' \neq 0$. If $\beta' = 0$ then we see, using calculations similar
to the above, that we get the normal form $(t^{3}, t^{5}, 0)$. This means that there is a total of $4$ possible normal forms
that represent the points within the class $RVV$. It is tempting, at first glance, to believe that these curves are all inequivalent.
However, it can be shown that the $3$ curves $(t^{3}, t^{5} + t^{7}, 0)$, $(t^{3},t^{5} - t^{7}, 0)$, $(t^{3}, t^{5}, 0)$
are actually equivalent. It is not very difficult to show this equivalence, but it does amount to a rather messy calculation.
As a result, the techniques used to show this equivalence are outlined in section $7.1$ of the appendix.
\
This means that the total number of possible normal forms is reduced to $2$ possibilities: $\gamma_{1}(t) = (t^{3}, t^{5}, t^{7})$ and
$\gamma_{2}(t) = (t^{3}, t^{5}, 0)$.
We will show that these two curves are inequivalent to one another. One possibility is to look at the semigroups that each of these
curves generate. The curve $\gamma_{1}$ has the semigroup $S_{1} = \{3,[4], 5, 6, 7, \cdots \}$, while the curve $\gamma_{2}$ has the semigroup
$S_{2} = \{3, [4], 5, 6, [7], 8, \cdots \}$. Since the semigroup of a curve is an invariant of the curve and the two curves generate different
semigroups, the two curves must be inequivalent. In \cite{castro} another method was outlined to check whether
or not these two curves are equivalent, which we will now present. One can see that the curve $(t^{3}, t^{5}, 0)$ is a planar curve and
in order for the curve $\gamma_{1}$ to be equivalent to the curve $\gamma_{2}$, we must be able to find a way to turn
$\gamma_{1}$ into a planar curve, meaning we need to find a change of variables or a reparametrization which
will make the $z$-component function of $\gamma_{1}$ zero. If it were true that $\gamma_{1}$ is actually a planar curve, then $\gamma_{1}$
must lie in an embedded surface in $\R^{3}$ (or embedded surface germ), say $M$. Since $M$ is an embedded surface, there exists a local defining
function at each point on the manifold. Let the local defining function near the origin be the real analytic function $f: \R^{3} \rightarrow \R$. Since $\gamma_{1}$ lies on $M$, $f(\gamma_{1}(t)) = 0$ for all $t$ near zero.
However, when one looks at the individual terms in the Taylor series expansion of $f$ composed with $\gamma_{1}$, there will
be nonzero terms which show up and give that $f(\gamma_{1}(t)) \neq 0$ for some $t$ near zero, which creates a contradiction. This tells us that $\gamma_{1}$ cannot be equivalent
to any planar curve near $t=0$. As a result, there is a total of two inequivalent normal forms for
the class $RVV$: $(t^{3}, t^{5}, t^{7})$ and $(t^{3}, t^{5}, 0)$.
\
The remaining classes $RVT$ and $RVL$ are handled in an almost identical manner using the above ideas and techniques. As a result,
we will omit the proofs and leave them to the reader.
\
With this in mind, we are now ready to move on to the fourth level of the tower. We initially tried to tackle the problem of classifying the orbits
at the fourth level by using the curve approach from the third level. Unfortunately, the curve approach becomes a bit
too unwieldy to use to determine what the normal forms were for the various $RVT$ classes.
The problem was simply this: when we looked at the semigroup
for a particular curve in a number of the $RVT$ classes at the fourth level, there were too many ``gaps'' in the various
semigroups. The first occurring class, according to codimension, in which this occurred was the class $RVVV$.
\begin{eg}
The semigroups for the class $RVVV$. Let $p_{4} \in RVVV$, let $\gamma \in Germ(p_{4})$, and write
$\gamma^{3}(t) = (x(t), y(t), z(t), u(t), v(t), u_{2}(t), v_{2}(t), u_{3}(t), v_{3}(t))$ with
$u = \frac{dy}{dx}$, $v = \frac{dz}{dx}$, $u_{2} = \frac{dx}{du}$, $v_{2} = \frac{dv}{du}$, $u_{3} = \frac{du}{du_{2}}$,
$v_{3} = \frac{dv_{2}}{du_{2}}$. Since $\gamma^{4}(0) = p_{4}$ we must have that $\gamma^{3}(t)$ is tangent to the vertical
hyperplane within $\Delta_{3}$, which is coframed by $\left\{ du_{2}, du_{3}, dv_{3} \right\}$. One can see that $du_{2} = 0$ along
$(\gamma^{3})'(0)$. Then, just as with the analysis at the third level, we look at $u_{2}(t) = \Sigma_{i} a_{i}t^{i}$,
$u_{3}(t) = \Sigma_{j}b_{j}t^{j}$, $v_{3}(t) = \Sigma_{k}c_{k}t^{k}$ where we must have that $a_{1} = 0$, $a_{2} \neq 0$,
$b_{1} \neq 0$, and $c_{1}$ may or may not be equal to zero. When we go from the fourth level back down to the zeroth level
we see that $\gamma(t) = (t^{5} + O(t^{11}), t^{8} + O(t^{11}), O(t^{11}))$. If $c_{1} \neq 0$, then we get that
$\gamma(t) = (t^{5} + O(t^{12}), t^{8} + O(t^{12}), t^{11} + O(t^{12}))$ and the semigroup for this curve is
$S = \{5, [6], [7], 8, [9], 10, 11, [12], 13, [14], 15, 16, [17], 18, \cdots \}$. If $c_{1} = 0$, then we get that
$\gamma(t) = (t^{5} + O(t^{12}), t^{8} + O(t^{12}), O(t^{12}))$ and the semigroup for this curve is
$S = \{5, [6], [7], 8, [9], 10, [11], [12], 13, [14], 15, 16, [17], 18, [19], 20, 21, [22], 23 \cdots \}$.
\end{eg}
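\no As a quick arithmetic check of the first few gaps listed above (our own computation): the sums of $5$'s and $8$'s are $5, 8, 10, 13, 15, 16, 18, 20, 21, \ldots$, so when $c_{1} = 0$ the number $11$ is a gap, while when $c_{1} \neq 0$ the $z$-component contributes $11$ as an extra generator, which also fills in $19 = 8 + 11$ and $22 = 11 + 11$.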
As a result, it became impractical to work strictly using the curve approach.
This meant that we had to look at a different approach to the classification problem. This led us to work with a tool
called the isotropy representation.
\subsection{The isotropy method.}
Here is the general idea of the method.
Suppose we want to look at a particular $RVT$ class, at the $k$-th level, given by $\omega_k$ (a word of
length $k$) and we want to see how many orbits there are. Suppose as well that we understand its projection
$\omega_{k-1}$ one level down, which decomposes into $N$ orbits. Choose representative points $p_{i}$, $i = 1, \cdots , N$ for
the $N$ orbits in $\omega_{k-1}$, and consider the group $G_{k-1}(p_{i})$ of level $k-1$ symmetries that fix $p_{i}$.
This group is called the \textit{isotropy group of} $p_{i}$. Since elements $\Phi^{k-1}$ of the isotropy group
fix $p_{i}$, their prolongations $\Phi^{k} = (\Phi^{k-1}, \Phi^{k-1}_{\ast})$ act on the fiber over $p_{i}$.
Under the action of the isotropy group the fiber decomposes into some
number $n_{i} \geq 1$ (possibly infinite) of orbits. Summing, we find that $\omega_{k}$ decomposes into
$\sum_{i = 1}^{N} n_{i} \geq N$ orbits.
For the record, $\Phi_*^{\bullet}$ denotes the tangent map.
This will tell us how many
orbits there are for the class $\omega_k$.
This is the theory. Now we need to explain how one actually prolongs diffeomorphisms in practice. Since the manifold $\mathcal{P}^{k}$ is a type of fiber compactification of $J^k(\mathbb{R},\mathbb{R}^2)$, it is reasonable to expect that the prolongation of diffeomorphisms from the base $\mathbb{R}^3$ should be similar to what one does when prolonging point symmetries in the theory of jet spaces. See especially (\cite{duzhin}, last chapter) and (\cite{olver}, p. 100).
Given a point $p_k \in \mathcal{P}^{k}$ and a map $\Phi\in \text{Diff}(3)$ we would like to write explicit formulas for
$$\Phi^{k}(p_k).$$
Coordinates of $p_k$ can be made explicit.
Now take any curve $\gamma(t) \in \text{Germ}(p_k)$, and consider the prolongation of $\Phi\circ \gamma(t)$. The coordinates of $\Phi^{k}(p_k)$ are exactly the coordinates of $(\Phi\circ \gamma)^{(k)}(0) = \Phi^{k}(\gamma^{k}(0))$. Moreover the resulting point is independent of the choice of a regular $\gamma \in \text{Germ}(p)$.
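\no As a small worked illustration of this recipe (the diffeomorphism and curve here are our own choices), take $\Phi(x,y,z) = (x, y + x^{2}, z)$ and $\gamma(t) = (t, 0, 0)$, so that $\gamma^{1}(t) = (t, 0, 0, 0, 0)$. Then $\Phi\circ\gamma(t) = (t, t^{2}, 0)$ and, prolonging once using $u = \frac{dy}{dx}$ and $v = \frac{dz}{dx}$,
$$(\Phi\circ\gamma)^{1}(t) = (t, t^{2}, 0, 2t, 0),$$
so $\Phi^{1}(\gamma^{1}(t)) = (t, t^{2}, 0, 2t, 0)$; evaluating at $t = 0$ shows that $\Phi^{1}$ fixes the point $(0,0,0,0,0)$.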
\subsection{Proof of theorem \ref{thm:main}}
\
{\bf The classification of points at level $4$.}
We are now ready to begin with the classification of points at level $4$. We will present the proof for the classification of
the class $RVVV$ as an example of how the isotropy representation method works.
{\it The class $RVVV$.}
Before we get started, we will summarize the main idea of the following calculation to classify the number of orbits within the
class $RVVV$. Let $p_{4} \in RVVV \subset \mathcal{P}^{4}$ and start with the projection of $p_{4}$ to level zero,
$\pi_{4,0}(p_{4}) = p_{0}$. Since all of the points at level zero are equivalent, then one is free to choose any representative
for $p_{0}$. For simplicity, it is easiest to choose it to be the point $p_{0} = (0,0,0)$. Next, we look at all of the points
at the first level, which project to $p_{0}$. Since all of these points are equivalent it gives that there is a single orbit in the
first level and we are again able to choose any point in $\mathcal{P}^{1}$ as our representative so long as it projects to the point $p_{0}$.
We will pick $p_{1} = (0,0,0,[1: 0: 0]) = (0,0,0,0,0)$ with $u = \frac{dy}{dx}$ and $v = \frac{dz}{dx}$ and we will look at all of the diffeomorphisms that
fix the point $p_{0}$ and satisfy $\Phi_{\ast}([1: 0: 0]) = [1: 0: 0]$. This condition will place some restrictions on the component functions
of the local diffeomorphisms $\Phi$ in $Diff_{0}(3)$ when we evaluate at the point $p_{0}$ and tell us what $\Phi^{1} = (\Phi, \Phi_{\ast})$ will
look like at the point $p_{1}$. We call this group of diffeomorphisms $G_{1}$. We can then move on to the second level and look at the class $RV$. For any $p_{2} \in RV$ it is of the form $p_{2} = (p_{1}, \ell_{1})$ with $\ell_{1}$ contained in the vertical hyperplane inside of $\Delta_{1}(p_{1})$. Now, apply the pushforwards of the $\Phi^{1}$'s in $G_{1}$ to the vertical hyperplane and see if these symmetries will act transitively on
the critical hyperplane. If they do act transitively then there is a single orbit within the class $RV$. If not, then there exists more
than one orbit within the class $RV$. Note that because of Theorem \ref{thm:ak} we should expect to see only one orbit within this class.
Once this is done, we can just iterate the above process to classify the number of orbits within the class $RVV$ at the third level and
then within the class $RVVV$ at the fourth level.
{\it Level 0:} Let $G_{0}$ be the group that contains all diffeomorphism germs that fix the origin.
{\it Level 1:} We know that all the points in $\mathcal{P}^{1}$ are equivalent, giving that there is only a single orbit.
So we pick a representative element from the single orbit of $\mathcal{P}^{1}$. We will take our representative to be $p_{1} = (0,0,0,0,0) = (0,0,0, [1: 0: 0]) = (x,y,z, [dx: dy: dz])$ and
take $G_{1}$ to be the
set of all $\Phi \in G_{0}$ such that $\Phi^{1}$ will take the tangent to the $x$-axis back to the
$x$-axis, meaning $\Phi_{\ast}([1: 0: 0]) = [1: 0: 0]$.
\
\no Then for $\Phi \in G_{1}$ and $\Phi(x,y,z) = (\phi^{1}, \phi^{2}, \phi^{3})$ we have that
\
\[
\Phi_{\ast} =
\begin{pmatrix}
\phi_{x}^{1} & \phi_{y}^{1} & \phi_{z}^{1} \\
\phi_{x}^{2} & \phi_{y}^{2} & \phi_{z}^{2} \\
\phi_{x}^{3} & \phi_{y}^{3} & \phi_{z}^{3} \\
\end{pmatrix}
=
\begin{pmatrix}
\phi_{x}^{1} & \phi_{y}^{1} & \phi_{z}^{1} \\
0 & \phi_{y}^{2} & \phi_{z}^{2} \\
0 & \phi_{y}^{3} & \phi_{z}^{3} \\
\end{pmatrix}
\]
\
when we evaluate at $(x,y,z) = (0,0,0)$.
\
Here is the \textit{Taylor triangle} representing the different coefficients in the Taylor series expansion of a diffeomorphism
in $G_{i}$. The three digits represent the number of partial derivatives with respect to either $x$, $y$, or $z$.
For example, $(1,2,0) = \frac{\pa^{3}}{\pa x \, \pa y^{2}}$. The left-hand column denotes the order $n$ of the coefficient. We start with the
Taylor triangle for $\phi^{2}$:
\
\begin{center}
\begin{tabular}{rcccccccccc}
$n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{
}
$n=1$:& & & & $\xcancel{(1,0,0)}$ & (0,1,0) & (0,0,1)\\\noalign{
}
$n=2$:& & & (2,0,0) & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{
}
\end{tabular}
\end{center}
\
We have crossed out $(1,0,0)$ since $\frac{\pa \phi^{2}}{\pa x}(\z) = 0$. Next is the Taylor triangle for
$\phi^{3}$:
\
\begin{center}
\begin{tabular}{rcccccccccc}
$n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{
}
$n=1$:& & & & $\xcancel{(1,0,0)}$ & (0,1,0) & (0,0,1)\\\noalign{
}
$n=2$:& & & (2,0,0) & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{
}
\end{tabular}
\end{center}
\
This describes the properties of the elements $\Phi \in G_{1}$.
We now try to figure out what the $\Phi^{1}$, for $\Phi \in G_{1}$, will look like in $KR$-coordinates. First, take $\ell \subset \Delta_{0}$ where we write
$\ell = a \frac{\partial}{\partial x} + b \frac{\partial}{\partial y} + c \frac{\partial}{\partial z}$ with $a,b,c \in \R$ and
$a \neq 0$.
We see that
\begin{eqnarray*}
\Phi_{\ast}(\ell)&=&span\left\{ (a \phi_{x}^{1} + b \phi_{y}^{1} + c \phi_{z}^{1})\frac{\pa}{\pa x} +
(a \phi_{x}^{2} + b \phi_{y}^{2} + c \phi_{z}^{2})\frac{\pa}{\pa y} +
(a \phi_{x}^{3} + b \phi_{y}^{3} + c \phi_{z}^{3})\frac{\pa}{\pa z} \right\} \\
&=&span \left\{ (\phi_{x}^{1} + u \phi_{y}^{1} + v \phi_{z}^{1})\frac{\pa}{\pa x} +
(\phi_{x}^{2} + u \phi_{y}^{2} + v \phi_{z}^{2})\frac{\pa}{\pa y} +
(\phi_{x}^{3} + u \phi_{y}^{3} + v \phi_{z}^{3})\frac{\pa}{\pa z} \right\} \\
&=&span\left\{ a_{1}\frac{\pa}{\pa x} + a_{2}\frac{\pa}{\pa y} + a_{3}\frac{\pa}{\pa z} \right\}
\end{eqnarray*}
where in the second to last step we divided by
"$a$" to get that $u = \frac{b}{a}$ and $v = \frac{c}{a}$.
Now, since $\Delta_{1}$ is given by
\
$dy - udx = 0$
\
$dz - vdx = 0$
and since $[dx: dy: dz] = [1: \frac{dy}{dx}: \frac{dz}{dx}]$ we have
for $\Phi \in G_{1}$ that it is given locally as $\Phi^{1}(x,y,z,u,v) =
(\phi^{1}, \phi^{2}, \phi^{3}, \tilde{u}, \tilde{v})$ where
\
$\tilde{u} = \frac{a_{2}}{a_{1}} = \frac{\phi_{x}^{2} + u \phi_{y}^{2} + v \phi_{z}^{2}}{\phi_{x}^{1} + u \phi_{y}^{1} + v \phi_{z}^{1}}$
\
$\tilde{v} = \frac{a_{3}}{a_{1}} = \frac{\phi_{x}^{3} + u \phi_{y}^{3} + v \phi_{z}^{3}}{\phi_{x}^{1} + u \phi_{y}^{1} + v \phi_{z}^{1}}$
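\no To see these formulas in action, consider the illustrative symmetry (our own choice) $\Phi(x,y,z) = (x, y + x^{2}, z)$, which lies in $G_{1}$ since it fixes the origin and $\Phi_{\ast}([1: 0: 0]) = [1: 0: 0]$. Here $\phi^{1} = x$, $\phi^{2} = y + x^{2}$, $\phi^{3} = z$, so the formulas give
$$\tilde{u} = \frac{2x + u \cdot 1 + v \cdot 0}{1 + u \cdot 0 + v \cdot 0} = u + 2x, \qquad \tilde{v} = \frac{0 + u \cdot 0 + v \cdot 1}{1} = v.$$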
{\it Level $2$:}
At level $2$ we are looking at the class $RV$ which consists of a single orbit. This means that we can pick any point in the class
$RV$ as our representative. We will pick our point to be $p_{2} = (p_{1}, \ell_{1})$ with $\ell_{1} \subset \Delta_{1}(p_{1})$ to be the vertical line
$\ell_{1} =[dx: du: dv] = [0: 1: 0]$. Now, we will let $G_{2}$ be the set of symmetries from $G_{1}$ that fix the vertical line $\ell_{1} = [0: 1: 0]$ in $\Delta_{1}(p_{1})$, meaning we want $\Phi^{1}_{\ast}([0: 1: 0]) = [0: 1: 0]$ for all $\Phi \in G_{2}$.
Then this says that $\Phi^{1}_{\ast}([dx_{| \ell_{1}}: du_{| \ell_{1}}: dv_{| \ell_{1}}]) = \Phi^{1}_{\ast}([0: 1: 0]) =
[0: 1: 0] = [d \phi^{1}_{| \ell_{1}}: d \tilde{u}_{| \ell_{1}}: d \tilde{v}_{| \ell_{1}}]$. When we fix this direction it might
yield some new information about the component functions for the $\Phi$ in $G_{2}$.
\
$\bullet$ $d \phi^{1}_{| \ell_{1}} = 0$.
\
We have $d \phi^{1} = \phi_{x}^{1} dx + \phi_{y}^{1} dy + \phi_{z}^{1} dz$, and since $dx$, $dy$, and $dz$ all vanish along $\ell_{1}$,
the condition $d \phi^{1}\rest{\ell_{1}} = 0$ holds automatically, so we will not gain any new information about the component functions
of $\Phi \in G_{2}$.
\
$\bullet$ $d \tilde{v}_{| \ell_{1}} = 0$
\
$d \tilde{v} = d(\frac{a_{3}}{a_{1}}) = \frac{da_{3}}{a_{1}} - \frac{(da_{1})a_{3}}{a_{1}^{2}}$
\
First notice that when we evaluate at $(x,y,z,u,v) = (0,0,0,0,0)$ we get $a_{3} = 0$, and since we are setting
$d \tilde{v}\rest{\ell_{1}} = 0$ then $d a_{3} \rest{\ell_{1}}$ must be equal to zero. We calculate that
$$da_{3} = \phi_{xx}^{3}dx + \phi_{xy}^{3}dy + \phi_{xz}^{3}dz + \phi_{y}^{3}du + u (d \phi_{y}^{3}) +
\phi_{z}^{3}dv + v(d \phi_{z}^{3})$$
then when we evaluate we get
$$d a_{3} \rest{\ell_{1}} = \phi^{3}_{y}(\z)du \rest{\ell_{1}} = 0 $$
and since $du \rest{\ell_{1}} \neq 0$, this forces $\phi^{3}_{y}(\z) = 0$.
\
This gives us the updated Taylor triangle for $\phi^{3}$:
\
\begin{center}
\begin{tabular}{rcccccccccc}
$n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{
}
$n=1$:& & & & $\xcancel{(1,0,0)}$ & \xcancel{(0,1,0)} & (0,0,1)\\\noalign{
}
$n=2$:& & & (2,0,0) & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{
}
\end{tabular}
\end{center}
\
We have determined some of the properties about elements in $G_{2}$ and now we will see what these elements look like locally.
We look at $\Phi^{1}_{\ast}(\ell)$ for $\ell \subset \Delta_{1}$ near the vertical hyperplane in $\Delta_{1}$, which is of the
form $\ell = a Z^{1} + b \frac{\partial}{\partial u} +
c \frac{\partial}{\partial v}$ with $a,b,c \in \R$ and $b \neq 0$ with $Z^{1} = u \frac{\partial}{\partial y} + v \frac{\partial}{\partial z} + \frac{\partial}{\partial x}$. This gives that
\
\[
\Phi^{1}_{\ast}(\ell) =
\begin{pmatrix}
\phi_{x}^{1} & \phi_{y}^{1} & \phi_{z}^{1} & 0 & 0 \\
\phi_{x}^{2} & \phi_{y}^{2} & \phi_{z}^{2} & 0 & 0 \\
\phi_{x}^{3} & \phi_{y}^{3} & \phi_{z}^{3} & 0 & 0 \\
\frac{\partial \tilde{u}}{\partial x} & \frac{\partial \tilde{u}}{\partial y} & \frac{\partial \tilde{u}}{\partial z} &
\frac{\partial \tilde{u}}{\partial u} & \frac{\partial \tilde{u}}{\partial v} \\
\frac{\partial \tilde{v}}{\partial x} & \frac{\partial \tilde{v}}{\partial y} & \frac{\partial \tilde{v}}{\partial z} &
\frac{\partial \tilde{v}}{\partial u} & \frac{\partial \tilde{v}}{\partial v}
\end{pmatrix}
\begin{pmatrix}
a \\
au \\
av \\
b \\
c
\end{pmatrix}
\]
\
\begin{eqnarray*}
= & & span \{ (a \phi_{x}^{1} + au \phi_{y}^{1} + av \phi_{z}^{1})\frac{\pa}{\pa x} \\
& + & (a \frac{\partial \tilde{u}}{\partial x} + a u \frac{\partial \tilde{u}}{\partial y} + a v \frac{\partial \tilde{u}}{\partial z} +
b \frac{\partial \tilde{u}}{\partial u} + c \frac{\partial \tilde{u}}{\partial v})\frac{\pa}{\pa u} \\
& + & (a \frac{\partial \tilde{v}}{\partial x} + a u \frac{\partial \tilde{v}}{\partial y} + a v \frac{\partial \tilde{v}}{\partial z} +
b \frac{\partial \tilde{v}}{\partial u} + c \frac{\partial \tilde{v}}{\pa v})\frac{\pa}{\pa v} \}
\end{eqnarray*}
\begin{eqnarray*}
= & & span \{ (u_{2} \phi_{x}^{1} + uu_{2} \phi_{y}^{1} + vu_{2} \phi_{z}^{1})\frac{\pa}{\pa x} \\
& + & ( u_{2} \frac{\partial \tilde{u}}{\partial x} + u u_{2} \frac{\partial \tilde{u}}{\partial y} +
v u_{2} \frac{\partial \tilde{u}}{\partial z} + \frac{\partial \tilde{u}}{\partial u} +
v_{2} \frac{\partial \tilde{u}}{\partial v})\frac{\pa}{\pa u} \\
& + & ( u_{2} \frac{\partial \tilde{v}}{\partial x} + u u_{2} \frac{\partial \tilde{v}}{\partial y} + vu_{2} \frac{\partial \tilde{v}}{\partial z} + \frac{\partial \tilde{v}}{\partial u} + v_{2} \frac{\partial \tilde{v}}{\partial v})\frac{\pa}{\pa v} \}
\end{eqnarray*}
$= span \{ b_{1}\frac{\pa}{\pa x} + b_{2}\frac{\pa}{\pa u} + b_{3}\frac{\pa}{\pa v} \}$. Notice that we have only paid attention to the
$x$, $u$, and $v$ coordinates since $\Delta_{1}$ is coframed by $dx$, $du$, and $dv$.
Since $u_{2} = \frac{dx}{du}$ and $v_{2} = \frac{dv}{du}$ we get that
\
$\tilde{u}_{2} = \frac{b_{1}}{b_{2}} = \frac{u_{2} \phi_{x}^{1} + uu_{2} \phi_{y}^{1} + vu_{2} \phi_{z}^{1}}{u_{2} \frac{\partial \tilde{u}}{\partial x} + u u_{2} \frac{\partial \tilde{u}}
{\partial y} + v u_{2} \frac{\partial \tilde{u}}{\partial z} +
\frac{\partial \tilde{u}}{\partial u} + v_{2} \frac{\partial \tilde{u}}{\partial v}}$
\
$\tilde{v}_{2} = \frac{b_{3}}{b_{2}} = \frac{u_{2} \frac{\partial \tilde{v}}{\partial x} + u u_{2} \frac{\partial \tilde{v}}{\partial y} + vu_{2} \frac{\partial \tilde{v}}{\partial z} +
\frac{\partial \tilde{v}}{\partial u} + v_{2} \frac{\partial \tilde{v}}{\partial v}}
{u_{2} \frac{\partial \tilde{u}}{\partial x} + u u_{2} \frac{\partial \tilde{u}}
{\partial y} + v u_{2} \frac{\partial \tilde{u}}{\partial z} +
\frac{\partial \tilde{u}}{\partial u} + v_{2} \frac{\partial \tilde{u}}{\partial v}}$
\
This tells us what the new component functions $\tilde{u}_{2}$ and $\tilde{v}_{2}$ are for $\Phi^{2}$.
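\no Continuing with the illustrative symmetry $\Phi(x,y,z) = (x, y + x^{2}, z)$ from above (it also lies in $G_{2}$, since $d\tilde{v}\rest{\ell_{1}} = dv\rest{\ell_{1}} = 0$), we have $\tilde{u} = u + 2x$ and $\tilde{v} = v$, so $\frac{\pa \tilde{u}}{\pa x} = 2$, $\frac{\pa \tilde{u}}{\pa u} = 1$, $\frac{\pa \tilde{v}}{\pa v} = 1$, and every other partial derivative appearing in the formulas vanishes. Hence
$$\tilde{u}_{2} = \frac{u_{2}}{1 + 2u_{2}}, \qquad \tilde{v}_{2} = \frac{v_{2}}{1 + 2u_{2}}.$$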
{\it Level $3$:}
At level $3$ we are looking at the class $RVV$. We know from our work on the third level that there will be only one orbit
within this class. This means that we can pick any point in the class
$RVV$ as our representative. We will pick the point to be $p_{3} = (p_{2}, \ell_{2})$ with $\ell_{2} \subset \Delta_{2}$ to be the vertical line
$\ell_{2} =[du: du_{2}: dv_{2}] = [0: 1: 0]$. Now, we will let $G_{3}$ be the set of symmetries from $G_{2}$ that fix the vertical line $\ell_{2} = [0: 1: 0]$ in
$\Delta_{2}$, meaning we want $\Phi^{2}_{\ast}([0: 1: 0]) = [0: 1: 0] =
[ d \tilde{u}\rest{\ell_{2}}: d \tilde{u}_{2}\rest{\ell_{2}}: d \tilde{v}_{2}\rest{\ell_{2}} ]$ for all $\Phi \in G_{3}$.
Since we are taking $du\rest{\ell_{2}} = 0$ and $dv_{2}\rest{\ell_{2}} = 0$, with
$du_{2}\rest{\ell_{2}} \neq 0$, we need to look at $d \tilde{u}\rest{\ell_{2}} = 0$ and $d \tilde{v}_{2}\rest{\ell_{2}} = 0$
to see if these relations will give us more information about the component functions of $\Phi$.
\
$\bullet$ $d \tilde{u}\rest{\ell_{2}} = 0$.
\
$d \tilde{u} = d(\frac{a_{2}}{a_{1}}) = \frac{da_{2}}{a_{1}} - \frac{a_{2} da_{1}}{a_{1}^{2}}$, and we can see that
$a_{2}(p_{2}) = 0$ and that
\
$da_{2}\rest{\ell _{2}} = \phi_{xx}^{2}dx\rest{\ell_{2}} + \phi_{xy}^{2} dy \rest{ \ell_{2}} + \phi_{xz}^{2} dz\rest{ \ell_{2}}
+ \phi_{y}^{2} du \rest{\ell_{2}} + \phi_{z}^{2} dv\rest{\ell_{2}} = 0$. Since all of these differentials are equal
to zero when we restrict to the line $\ell_{2}$, we will not gain any new information about the $\phi^{i}$'s.
\
$\bullet$ $d \tilde{v}_{2}\rest{\ell_{2}} = 0$
\
$d \tilde{v}_{2} = d(\frac{b_{3}}{b_{2}}) = \frac{db_{3}}{b_{2}} - \frac{b_{3} db_{2}}{b_{2}^{2}}$. When we evaluate
we see that $b_{3}(p_{2}) = 0$ since $\frac{\pa \tilde{v}}{\pa u}(p_{2}) = \phi^{3}_{y}(\z) = 0$, which means
that we only need to look at $\frac{db_{3}}{b_{2}}$.
\begin{eqnarray*}
db_{3} & = & d(u_{2} \frac{\pa \tilde{v}}{\pa x} + u u_{2} \frac{\pa \tilde{v}}{\pa y} + v u_{2} \frac{\pa \tilde{v}}{\pa z} + \frac{\pa \tilde{v}}{\pa u}
+ v_{2} \frac{\pa \tilde{v}}{\pa v}) \\
& = & \frac{\pa \tilde{v}}{\pa x} du_{2} + u_{2}(d \frac{\pa \tilde{v}}{\pa x}) + u \frac{\pa \tilde{v}}{\pa y} du_{2} \\
& + & u_{2}\frac{\pa \tilde{v}}{\pa y} du + u u_{2} (d \frac{\pa \tilde{v}}{\pa y}) +
v \frac{\pa \tilde{v}}{\pa z} du_{2} \\
& + & u_{2} \frac{\pa \tilde{v}}{\pa z} dv + v u_{2}(d\frac{\pa \tilde{v}}{\pa z}) +
\frac{\pa^{2} \tilde{v}}{\pa x \pa u} dx \\
& + & \frac{\pa^{2} \tilde{v}}{\pa y \pa u} dy + \frac{\pa^{2} \tilde{v}}{\pa z \pa u} dz + \frac{\pa^{2} \tilde{v}}{\pa u^{2}} du + \frac{\pa^{2} \tilde{v}}{\pa v \pa u} dv \\
& + & \frac{\pa \tilde{v}}{\pa v}dv_{2} + v_{2}(d \frac{\pa \tilde{v}}{\pa v}).
\end{eqnarray*}
\
Then evaluating we get
\
$db_{3} \rest{\ell_{2}} = \frac{\pa \tilde{v}}{\pa x}(p_{3})du_{2} \rest{\ell_{2}} = 0$ and since $du_{2} \rest{\ell_{2}} \neq 0$
it forces $\frac{\pa \tilde{v}}{\pa x}(p_{3}) = 0$. We have $\frac{\pa \tilde{v}}{\pa x}(p_{3}) =
\frac{\phi^{3}_{xx}(\z)}{\phi^{1}_{x}(\z)}
- \frac{\phi^{1}_{xx}(\z) \phi^{3}_{x}(\z)}{\phi^{1}_{x}(\z)^{2}}$ and $\phi^{3}_{x}(\z) = 0$, to give that
$\frac{\pa \tilde{v}}{\pa x}(p_{3}) = \frac{\phi^{3}_{xx}(\z)}{\phi^{1}_{x}(\z)} = 0$ which forces
$\phi^{3}_{xx}(\z) = 0$, which gives us information about $\Phi^{3}$.
This gives us the updated Taylor triangle for $\phi^{3}$:
\begin{center}
\begin{tabular}{rcccccccccc}
$n=0$:& & & & & \xcancel{(0,0,0)} \\\noalign{
}
$n=1$:& & & & $\xcancel{(1,0,0)}$ & \xcancel{(0,1,0)} & (0,0,1)\\\noalign{
}
$n=2$:& & & \xcancel{(2,0,0)} & (1,1,0) & (1,0,1) & (0,1,1) & (0,0,2)\\\noalign{
}
\end{tabular}
\end{center}
\
Now, our goal is to look at how $\Phi^{3}_{\ast}$ acts on the distribution $\Delta_{3}(p_{3})$ to determine the number
of orbits within the class $RVVV$.
In order to do so, we will need to figure out what the local component functions, call them $\tilde{u}_{3}$ and $\tilde{v}_{3}$, of
$\Phi^{3}$, where $\Phi \in G_{3}$, will look like. To do this we will again look at $\Phi^{2}_{\ast}$ applied to a line $\ell$ that is
near the vertical hyperplane in $\Delta_{2}$.
\
Set $\ell = aZ^{(2)}_{1} + b \frac{\pa}{\pa u_{2}} + c \frac{\pa}{\pa v_{2}}$ for $a,b,c \in \R$ and $b \neq 0$ where
$Z^{(2)}_{1} = u_{2}(u \frac{\pa}{\pa y} + v \frac{\pa}{\pa z} + \frac{\pa}{\pa x}) + \frac{\pa}{\pa u} + v_{2} \frac{\pa}{\pa v}$.
This gives that
\
\[
\Phi^{2}_{\ast}(\ell) =
\begin{pmatrix}
\phi^{1}_{x} & \phi^{1}_{y} & \phi^{1}_{z} & 0 & 0 & 0 & 0 \\
\phi^{2}_{x} & \phi^{2}_{y} & \phi^{2}_{z} & 0 & 0 & 0 & 0 \\
\phi^{3}_{x} & \phi^{3}_{y} & \phi^{3}_{z} & 0 & 0 & 0 & 0 \\
\frac{\pa \tilde{u}}{\pa x} & \frac{\pa \tilde{u}}{\pa y} & \frac{\pa \tilde{u}}{\pa z} &
\frac{\pa \tilde{u}}{\pa u} & \frac{\pa \tilde{u}}{\pa v} & 0 & 0 \\
\frac{\pa \tilde{v}}{\pa x} & \frac{\pa \tilde{v}}{\pa y} & \frac{\pa \tilde{v}}{\pa z} &
\frac{\pa \tilde{v}}{\pa u} & \frac{\pa \tilde{v}}{\pa v} & 0 & 0 \\
\frac{\pa \tilde{u}_{2}}{\pa x} & \frac{\pa \tilde{u}_{2}}{\pa y} & \frac{\pa \tilde{u}_{2}}{\pa z} &
\frac{\pa \tilde{u}_{2}}{\pa u} & \frac{\pa \tilde{u}_{2}}{\pa v} &
\frac{\pa \tilde{u}_{2}}{\pa u_{2}} & \frac{\pa \tilde{u}_{2}}{\pa v_{2}} \\
\frac{\pa \tilde{v}_{2}}{\pa x} & \frac{\pa \tilde{v}_{2}}{\pa y} & \frac{\pa \tilde{v}_{2}}{\pa z} &
\frac{\pa \tilde{v}_{2}}{\pa u} & \frac{\pa \tilde{v}_{2}}{\pa v} &
\frac{\pa \tilde{v}_{2}}{\pa u_{2}} & \frac{\pa \tilde{v}_{2}}{\pa v_{2}}
\end{pmatrix}
\begin{pmatrix}
au_{2} \\
a u u_{2} \\
a v u_{2} \\
a \\
a v_{2} \\
b \\
c
\end{pmatrix}
\]
\
\begin{eqnarray*}
& = & span\{ (a u_{2} \frac{\pa \tilde{u}}{\pa x} + a u u_{2} \frac{\pa \tilde{u}}{\pa y} + avu_{2} \frac{\pa \tilde{u}}{\pa z}
+ a \frac{\pa \tilde{u}}{\pa u} + av_{2} \frac{\pa \tilde{u}}{\pa v})\frac{\pa}{\pa u} \\
& + & (au_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + auu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + avu_{2} \frac{\pa \tilde{u}_{2}}{\pa z}
+ a \frac{\pa \tilde{u}_{2}}{\pa u} + av_{2} \frac{\pa \tilde{u}_{2}}{\pa v} +
b \frac{\pa \tilde{u}_{2}}{\pa u_{2}} + c \frac{\pa \tilde{u}_{2}}{\pa v_{2}})\frac{\pa}{\pa u_{2}} \\
& + & (au_{2} \frac{\pa \tilde{v}_{2}}{\pa x} + auu_{2} \frac{\pa \tilde{v}_{2}}{\pa y} + avu_{2} \frac{\pa \tilde{v}_{2}}{\pa z} +
a \frac{\pa \tilde{v}_{2}}{\pa u} + av_{2} \frac{\pa \tilde{v}_{2}}{\pa v}
+ b \frac{\pa \tilde{v}_{2}}{\pa u_{2}} + c \frac{\pa \tilde{v}_{2}}{\pa v_{2}})\frac{\pa}{\pa v_{2}} \}
\end{eqnarray*}
\begin{eqnarray*}
& = & span\{ (u_{3} u_{2} \frac{\pa \tilde{u}}{\pa x} + u_{3} u u_{2} \frac{\pa \tilde{u}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}}{\pa z} +
u_{3} \frac{\pa \tilde{u}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}}{\pa v})\frac{\pa}{\pa u} \\
& + & (u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}_{2}}{\pa z} +
u_{3} \frac{\pa \tilde{u}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}_{2}}{\pa v} +
\frac{\pa \tilde{u}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{u}_{2}}{\pa v_{2}})\frac{\pa}{\pa u_{2}} \\
& + & (u_{3}u_{2} \frac{\pa \tilde{v}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{v}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{v}_{2}}{\pa z} +
u_{3} \frac{\pa \tilde{v}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{v}_{2}}{\pa v} +
\frac{\pa \tilde{v}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{v}_{2}}{\pa v_{2}})\frac{\pa}{\pa v_{2}} \}
\end{eqnarray*}
$= span\{ c_{1}\frac{\pa}{\pa u} + c_{2}\frac{\pa}{\pa u_{2}} + c_{3}\frac{\pa}{\pa v_{2}} \}$. Since
our local coordinates are given by $[du: du_{2}: dv_{2}] = [\frac{du}{du_{2}}: 1: \frac{dv_{2}}{du_{2}}] = [u_{3}: 1: v_{3}]$, we find that
\
$\tilde{u}_{3} = \frac{c_{1}}{c_{2}} = \frac{u_{3} u_{2} \frac{\pa \tilde{u}}{\pa x} + u_{3} u u_{2} \frac{\pa \tilde{u}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}}{\pa z} +
u_{3} \frac{\pa \tilde{u}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}}{\pa v}}
{u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}_{2}}{\pa z} +
u_{3} \frac{\pa \tilde{u}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}_{2}}{\pa v} +
\frac{\pa \tilde{u}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{u}_{2}}{\pa v_{2}}} $
\
$\tilde{v}_{3} = \frac{c_{3}}{c_{2}} = \frac{u_{3}u_{2} \frac{\pa \tilde{v}_{2}}{\pa x} + u_{3}uu_{2} \frac{\pa \tilde{v}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{v}_{2}}{\pa z} +
u_{3} \frac{\pa \tilde{v}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{v}_{2}}{\pa v} +
\frac{\pa \tilde{v}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{v}_{2}}{\pa v_{2}}}
{u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} +
u_{3}uu_{2} \frac{\pa \tilde{u}_{2}}{\pa y} + u_{3}vu_{2} \frac{\pa \tilde{u}_{2}}{\pa z} +
u_{3} \frac{\pa \tilde{u}_{2}}{\pa u} + u_{3}v_{2} \frac{\pa \tilde{u}_{2}}{\pa v} +
\frac{\pa \tilde{u}_{2}}{\pa u_{2}} + v_{3} \frac{\pa \tilde{u}_{2}}{\pa v_{2}}}$.
\
{\it Level $4$:}
Now that we know what the component functions are for $\Phi^{3}$, with $\Phi \in G_{3}$, we are ready to apply its pushforward
to the distribution $\Delta_{3}$ at $p_{3}$ and figure out how many orbits there are for the class $RVVV$.
We let $\ell = b \frac{\pa}{\pa u_{3}} + c \frac{\pa}{\pa v_{3}}$, with $b,c \in \R$, be a vector in the vertical hyperplane
of $\Delta_{3}(p_{3})$ and we see that
$$\Phi^{3}_{\ast}(\ell) =
span \{ (b \frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{3}) + c \frac{\pa \tilde{u}_{3}}{\pa v_{3}}(p_{3}))\frac{\pa}{\pa u_{3}} +
(b \frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{3}) + c \frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{3}))\frac{\pa}{\pa v_{3}} \} .$$
This means that we need to see what $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{u}_{3}}{\pa v_{3}}$,
$\frac{\pa \tilde{v}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}$ are when we evaluate at
$p_{3} = (x,y,z,u,v,u_{2},v_{2},u_{3},v_{3}) = (0,0,0,0,0,0,0,0,0)$. This will amount to a somewhat long process, so
we will first simply state what the above terms evaluate to and leave the computations for the appendix.
After evaluating we will see that
$\Phi^{3}_{\ast}(\ell) = span \{ (b \frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}})\frac{\pa}{\pa u_{3}} +
(c \frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}})\frac{\pa}{\pa v_{3}} \}$. This means
that for $\ell = \frac{\pa}{\pa u_{3}}$ we get $\Phi^{3}_{\ast}(\ell) =
span \{\frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}}\frac{\pa}{\pa u_{3}} \}$, which
gives one orbit. Then, for $\ell = \frac{\pa}{\pa u_{3}} + \frac{\pa}{\pa v_{3}}$ we see that
$\Phi^{3}_{\ast}(\ell) = span \{(\frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}})\frac{\pa}{\pa u_{3}} +
(\frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}})\frac{\pa}{\pa v_{3}} \}$. Notice
that $\phi^{1}_{x}(\z) \neq 0$, $\phi^{2}_{y}(\z) \neq 0$, and $\phi^{3}_{z}(\z) \neq 0$, but
we can choose them freely otherwise, so we obtain any vector of the form $b' \frac{\pa}{\pa u_{3}} + c' \frac{\pa}{\pa v_{3}}$ with
$b', c' \neq 0$; this gives another, separate, orbit. (Recall that in order for $\ell$ to be a vertical direction in this case, it
must be of the form $\ell = b \frac{\pa}{\pa u_{3}} + c \frac{\pa}{\pa v_{3}}$ with $b \neq 0$.)
This means that there are a total of $2$ orbits for the class $RVVV$, as seen in Figure \ref{fig:orbits}.
\begin{figure}
\caption{Orbits within the class $RVVV$.}
\label{fig:orbits}
\end{figure}
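To make the orbit count concrete: the pushforward computed above acts on a vertical direction $(b,c)$, $b \neq 0$, by independent nonzero scalings of the two components. The following toy sketch (in Python; it is entirely our own illustration and not part of the authors' computation) classifies the resulting lines into the two orbits.
\begin{verbatim}
# Toy illustration: on a vertical direction (b, c) with b != 0, the pushforward
# acts as (b, c) -> (lam * b, mu * c), where lam = (phi2_y)^2/(phi1_x)^3 and
# mu = phi3_z/(phi1_x)^2 are nonzero but otherwise arbitrary.  Lines span{(b, c)}
# therefore fall into exactly two orbits: c = 0 and c != 0.
def orbit_of(b, c):
    assert b != 0, "vertical directions in this class have b != 0"
    return "orbit of d/du3" if c == 0 else "orbit of d/du3 + d/dv3"

print(orbit_of(1.0, 0.0))
print(orbit_of(1.0, 2.5))   # reachable from (1, 1) by choosing mu = 2.5
\end{verbatim}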
The classification of the other $RVT$ classes at level $4$ is done in a very similar manner. The details of these other
calculations will be given in a subsequent work by one of the authors.
\
\section{Conclusion}
We have exhibited above a canonical procedure for lifting the action of $Diff(3)$ to the fibered manifold
$\mathcal{P}^k(2)$, and by a mix of the singularity theory of space curves and the representation theory of diffeomorphism groups we were able to completely classify the orbits of this extended action for small values of $k$. A cursory glance at our computational methods will convince the reader that these results can in principle be extended to higher values of $k$, say $k\geq 5$, but given the exponential increase in computational effort the direct approach can become somewhat unwieldy. Progress has been made, though, toward extending the classification results of the present paper, and we hope to release these findings in the near future.
In \cite{castro} we already called attention to the lack of discrete invariants to assist with the classification problem; we hope to return to this problem in future publications.
\
It came to our attention recently that Respondek and Li \cite{respondek} constructed a mechanical system consisting of $k$ rigid bars moving in $\mathbb{R}^{n+1}$ subjected to a nonholonomic constraint, which is equivalent to the Cartan distribution of $J^k(\mathbb{R},\mathbb{R}^{n})$ at regular configuration points. We conjecture that the singular configurations of the $k$-bar system will be related to singular Goursat multiflags similar to those presented here, though in Respondek and Li's case the configuration manifold is a tower of $S^n$ fibrations instead of our $\mathbb{P}^n$ tower.
Another research avenue, one that to our knowledge has been little explored, is that of understanding how these results could be applied to the geometric theory of differential equations. Let us remind the reader that the spaces $\mathcal{P}^k(2)$, or more generally
$\mathcal{P}^k(n)$, are vertical compactifications of the jet spaces $J^{k}(\mathbb{R},\mathbb{R}^2)$ and $J^k(\mathbb{R},\mathbb{R}^n)$, respectively.
Kumpera and collaborators (\cite{kumpera2}) have used the geometric theory to study the problem of underdetermined systems of ordinary differential equations, but it remains to be explored how our singular orbits can be used to make qualitative statements about the behavior of solutions to singular differential equations.
\section{Appendix}
\subsection{A technique to eliminate terms in the short parameterization of a curve germ.}
\
The following technique is outlined in \cite{zariski}, pg. $23$.
Let $C$ be a parameterization of a planar curve germ. A \textit{short parameterization} of $C$
is of the form
$$C = \begin{cases}
x = t^{n} & \\
y = t^{m} + bt^{\nu_{\rho}} + \sum^{q}_{i = \rho + 1} a_{\nu_{i}}t^{\nu_{i}} &\mbox{$b \neq 0$ if $\rho \neq q + 1$}
\end{cases}$$
\
\noindent where the $\nu_{i}$, for $i = \rho, \cdots , q$ are positive integers such that $\nu_{\rho} < \cdots < \nu_{q}$
and they do not belong to the semigroup of the curve $C$. Suppose that $\nu_{\rho} + n \in n \Z_{+} + m \Z_{+}$.
Now, notice that $\nu_{\rho} + n \in m \Z_{+}$ because $\nu_{\rho}$ is not in the semigroup of $C$. Let
$j \in \Z_{+}$ be such that $\nu_{\rho} + n = (j + 1)m$; notice that $j \geq 1$ since $\nu_{\rho} > m$. Then set
$a = \frac{bn}{m}$ and \\ $x' = t^{n} + at^{jm} +$ (terms of degree $> jm$). Let
$\tau^{n} = t^{n} + at^{jm} +$ (terms of degree $> jm$). From this expression one can show that
$t = \tau - \frac{a}{n} \tau^{jm - n + 1} +$ (terms of degree $> jm - n + 1$), and when we substitute this into the
original expression above for $C$ we obtain
$$C = \begin{cases}
x' = \tau^{n} \\
y = \tau^{m} + \mbox{terms of degree $> \nu_{\rho}$}
\end{cases}$$
\
We can now apply semigroup arguments to the above expression for $C$ and see that $C$ has the parameterization
$$C = \begin{cases}
x' = \tau^{n} & \\
y' = \tau^{m} + \sum^{q}_{i = \rho + 1} a'_{\nu_{i}} \tau^{\nu_{i}}
\end{cases}$$
\
We can apply the above technique to the two curves $(t^{3}, t^{5} + t^{7}, 0)$ and
$(t^{3}, t^{5} - t^{7}, 0)$ to get that they are equivalent to the curve $(t^{3}, t^{5}, 0)$.
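This elimination can be checked symbolically for the curve $(t^{3}, t^{5} + t^{7}, 0)$. The following is a minimal sketch (in Python/SymPy; the fixed-point inversion scheme and all names are ours, chosen purely for illustration, and not taken from \cite{zariski}).
\begin{verbatim}
import sympy as sp

t, tau = sp.symbols('t tau')

# Curve (t^3, t^5 + t^7): n = 3, m = 5, nu_rho = 7, b = 1.
# Then nu_rho + n = 10 = 2m, so j = 1 and a = b*n/m = 3/5.
a = sp.Rational(3, 5)
ORDER = 9

# Invert tau**3 = t**3 + a*t**5, i.e. tau = t*(1 + a*t**2)**(1/3),
# by fixed-point iteration on formal power series:
t_of_tau = tau
for _ in range(4):
    t_of_tau = sp.series(
        tau * (1 + a * t_of_tau**2) ** sp.Rational(-1, 3),
        tau, 0, ORDER).removeO()

# Substitute into y = t^5 + t^7; the tau^7 term should cancel:
y = sp.series(t_of_tau**5 + t_of_tau**7, tau, 0, 8)
print(y)   # tau**5 + O(tau**8): no tau**7 term remains
\end{verbatim}
After the reparameterization the coefficient of $\tau^{7}$ vanishes, in agreement with the discussion above.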
\subsection{Computations for the class $RVVV$.}
\
We will now provide the details that show what the functions $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{u}_{3}}{\pa v_{3}}$,
$\frac{\pa \tilde{v}_{3}}{\pa u_{3}}$, $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}$ are when we evaluate at
$p_{4} = (x,y,z,u,v,u_{2},v_{2},u_{3},v_{3}) = (0,0,0,0,0,0,0,0,0)$.
\
$\bullet$ $\frac{\pa \tilde{u}_{3}}{\pa u_{3}}$
\
Recall that $\tilde{u}_{3} = \frac{c_{1}}{c_{2}}$. Then
$\frac{\pa \tilde{u}_{3}}{\pa u_{3}} = \frac{u_{2} \frac{\pa \tilde{u}}{\pa x} + uu_{2} \frac{\pa \tilde{u}}{\pa y} + vu_{2} \frac{\pa \tilde{u}}{\pa z} +
\frac{\pa \tilde{u}}{\pa u} + v_{2} \frac{\pa \tilde{u}}{\pa v}}{c_{2}} - \frac{\frac{\pa c_{2}}{\pa u_{3}} c_{1}}{c^{2}_{2}}$ and
\
$\frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{4}) =
\frac{ \frac{\pa \tilde{u}}{\pa u}(p_{4}) }{ \frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4})}$, since
$c_{1}(p_{4}) = 0$. We recall that $\frac{\pa \tilde{u}}{\pa u}(p_{4}) = \frac{\phi^{2}_{y}(\z)}{\phi^{1}_{x}(\z)}$,
$\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) = \frac{\phi^{1}_{x}(\z)}{\frac{\pa \tilde{u}}{\pa u}(p_{4})}$ to give that
\
$\frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{4}) = \frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}}$.
\
$\bullet$ $\frac{\pa \tilde{u}_{3}}{\pa v_{3}}$.
\
We have that $\tilde{u}_{3} = \frac{c_{1}}{c_{2}}$; then
\
$\frac{\pa \tilde{u}_{3}}{\pa v_{3}}(p_{4}) = \frac{\frac{\pa c_{1}}{\pa v_{3}}(p_{4})}{c_{2}(p_{4})} -
\frac{\frac{\pa c_{2}}{\pa v_{3}}(p_{4}) c_{1}(p_{4})}{c^{2}_{2}(p_{4})} =
0$, since $c_{1}$ is not a function of $v_{3}$ and $c_{1}(p_{4}) = 0$.
\
$\bullet$ $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}$
\
We have that $\tilde{v}_{3} = \frac{c_{3}}{c_{2}}$; then
\
$\frac{\pa \tilde{v}_{3}}{\pa u_{3}} = \frac{u_{2}\frac{\pa \tilde{v}_{2}}{\pa x} + ... + \frac{\pa \tilde{v}_{2}}{\pa u} +
v_{2} \frac{\pa \tilde{v}_{2}}{\pa v}}{c_{2}}
- \frac{(u_{2} \frac{\pa \tilde{u}_{2}}{\pa x} + ... + \frac{\pa \tilde{u}_{2}}{\pa u} +
... + v_{2} \frac{\pa \tilde{u}_{2}}{\pa v})c_{1}}{c^{2}_{2}}$
\
$\frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) = \frac{ \frac{\pa \tilde{v}_{2}}{\pa u} (p_{4})}{ \frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) }
- \frac{ \frac{\pa \tilde{u}_{2}}{\pa u}(p_{4}) \frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4}) }
{ (\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}))^{2} }$
\
This means that we will need to figure out what $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$, $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}$,
and $\frac{\pa \tilde{v}_{2}}{\pa u}$ are
when we evaluate at $p_{4}$.
\
$\circ$ $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$
\
Recall from the work at the previous level that $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) =
\frac{\phi^{1}_{x}(\z)}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} = \frac{(\phi^{1}_{x}(\z))^{2}}{\phi^{2}_{y}(\z)}$ since
$\frac{\pa \tilde{u}}{\pa u}(p_{4}) = \frac{\phi^{2}_{y}(\z) }{\phi^{1}_{x}(\z)}$.
\
$\circ$ $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}$
\
Recall from the work at level $3$ that $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4}) =
\frac{\frac{\pa \tilde{v}}{\pa x}(p_{4})}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} = 0$, since
$\frac{\pa \tilde{v}}{\pa x}(p_{4}) = \frac{\phi^{3}_{xx}(\z)}{\phi^{1}_{x}(\z)}$
and $\phi^{3}_{xx}(\z) = 0$, which gives $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4}) = 0$.
This gives the reduced expression $\frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) =
\frac{\frac{\pa \tilde{v}_{2}}{\pa u}(p_{4})}{\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4})}$.
\
$\circ$ $\frac{\pa \tilde{v}_{2}}{\pa u}$
\
Recall that $\tilde{v}_{2} = \frac{b_{3}}{b_{2}}$; then we get that
\
$\frac{\pa \tilde{v}_{2}}{\pa u} =
\frac{ u_{2} \frac{\pa \tilde{v}}{\pa x \pa u} + u_{2} \frac{\pa \tilde{v}}{\pa y} + ... + \frac{\pa \tilde{v}}{\pa^{2} u} +
v_{2} \frac{\pa \tilde{v}}{\pa v \pa u}}{b_{2}} - \frac{(u_{2} \frac{\pa \tilde{u}}{\pa x \pa u} + ... + \frac{\pa \tilde{u}}{\pa^{2} u} +
v_{2} \frac{\pa \tilde{u}}{\pa v \pa u})b_{3}}{b_{2}^{2}}$
\
$\frac{\pa \tilde{v}_{2}}{\pa u}(p_{4}) = \frac{\frac{\pa \tilde{v}}{\pa^{2} u}(p_{4})}{ \frac{\pa \tilde{u}}{\pa u}(p_{4})} -
\frac{\frac{\pa \tilde{u}}{\pa^{2} u}(p_{4}) \frac{\pa \tilde{v}}{\pa u}(p_{4})}{ (\frac{\pa \tilde{u}}{\pa u}(p_{4}))^{2}}$, since
$b_{2}(p_{4}) = \frac{\pa \tilde{u}}{\pa u}(p_{4})$ and $b_{3}(p_{4}) = \frac{\pa \tilde{v}}{\pa u}(p_{4})$. In order to
figure out what $\frac{\pa \tilde{v}_{2}}{\pa u}(p_{4})$ will be, we need to look at
$\frac{\pa \tilde{v}}{\pa u}(p_{4})$, $\frac{\pa \tilde{v}}{\pa^{2} u}(p_{4})$, and $\frac{\pa \tilde{u}}{\pa^{2} u}(p_{4})$.
\
$\circ$ $\frac{\pa \tilde{v}}{\pa u}$
\
Recall that $\tilde{v} = \frac{a_{3}}{a_{1}}$ and that
$\frac{\pa \tilde{v}}{\pa u} = \frac{\phi^{3}_{y}}{a_{1}} - \frac{\phi^{1}_{y} a_{3}}{a^{2}_{1}}$, then
\
$\frac{\pa \tilde{v}}{\pa u}(p_{4}) = \frac{\phi^{3}_{y}(\z)}{\phi^{1}_{x}(\z)} -
\frac{ \phi^{1}_{y}(\z) \phi^{3}_{x}(\z)}{(\phi^{1}_{x}(\z))^{2}} = 0$ since
$\phi^{3}_{y}(\z) = 0$ and $\phi^{3}_{x}(\z) = 0$.
\
$\circ$ $\frac{\pa \tilde{v}}{\pa^{2} u}$
\
From the above we have $\frac{\pa \tilde{v}}{\pa u} = \frac{\phi^{3}_{y}}{a_{1}} - \frac{\phi^{1}_{y} a_{3}}{a^{2}_{1}}$, then
\
$\frac{\pa \tilde{v}}{\pa^{2} u}(p_{4}) = \frac{0}{a_{1}(p_{4})} - \frac{\phi^{3}_{y}(\z) \phi^{1}_{y}(\z)}{a^{2}_{1}(p_{4})} -
\frac{\phi^{1}_{y}(\z) \phi^{3}_{y}(\z)}{a^{2}_{1}(p_{4})} + \frac{(\phi^{1}_{y}(\z))^{2} \phi^{3}_{x}(\z)}{a^{3}_{1}(p_{4})} = 0$ since
$\phi^{3}_{y}(\z) = 0$ and
$\phi^{3}_{x}(\z) = 0$.
\
We see that we do not need to determine what $\frac{\pa \tilde{u}}{\pa^{2} u}(p_{4})$ is, since
$\frac{\pa \tilde{v}}{\pa u}$ and $\frac{\pa \tilde{v}}{\pa^{2} u}$ are zero at $p_{4}$, and this gives us that
$\frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) = 0$.
\
$\bullet$ $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}$.
\
Recall that $\tilde{v}_{3} = \frac{c_{3}}{c_{2}}$, then
\
$\frac{\pa \tilde{v}_{3}}{\pa v_{3}} = \frac{ u_{3}u_{2} \frac{\pa \tilde{v}_{2}}{\pa x \pa v_{3}} + ... + \frac{\pa \tilde{v}_{2}}{\pa v_{2}}
}{c_{2}} - \frac{ (u_{3}u_{2} \frac{\pa \tilde{u}_{2}}{\pa x \pa v_{3}} + ... + \frac{\pa \tilde{u}_{2}}{\pa v_{2}})c_{3}}{c^{2}_{2}}$
\
$\frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{4}) = \frac{ \frac{\pa \tilde{v}_{2}}{\pa v_{2}}(p_{4})}{ \frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4})} -
\frac{ \frac{\pa \tilde{u}_{2}}{\pa v_{2}}(p_{4}) \frac{\pa \tilde{v}_{2}}{\pa u_{2}}(p_{4})}{(\frac{\pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}))^{2}}$. This means that we need to look at $\frac{\pa \tilde{v}_{2}}{\pa v_{2}}$, $\frac{\pa \tilde{u}_{2}}{\pa v_{2}}$, $\frac{\pa \tilde{v}_{2}}{\pa u_{2}}$, and
$\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$ evaluated at $p_{4}$.
\
$\circ$ $\frac{\pa \tilde{v}_{2}}{\pa v_{2}}$.
\
We recall from an earlier calculation that $\frac{\pa \tilde{v}_{2}}{\pa v_{2}}(p_{4}) =
\frac{ \frac{\pa \tilde{v}}{\pa v}(p_{4})}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} = \frac{\phi^{3}_{z}(\z)}{\phi^{2}_{y}(\z)}$.
\
$\circ$ $\frac{\pa \tilde{u}_{2}}{\pa v_{2}}$.
\
Recall from an earlier calculation that $\frac{\pa \tilde{u}_{2}}{\pa v_{2}}(p_{4}) = 0$.
\
$\circ$ $\frac{\pa \tilde{u}_{2}}{\pa u_{2}}$.
\
Recall that $\tilde{u}_{2} = \frac{b_{1}}{b_{2}}$ and that
$\frac{\pa \tilde{u}_{2}}{\pa u_{2}} = \frac{ \phi^{1}_{x} + u \phi^{1}_{y} + v \phi^{1}_{z}}{b_{2}} -
\frac{(\frac{\pa \tilde{u}}{\pa x} + u \frac{\pa \tilde{u}}{\pa y} + v \frac{\pa \tilde{u}}{\pa z})b_{1}}{b^{2}_{2}}$, then
\
$\frac{ \pa \tilde{u}_{2}}{\pa u_{2}}(p_{4}) = \frac{\phi^{1}_{x}(\z)}{\frac{\pa \tilde{u}}{\pa u}(p_{4})} =
\frac{(\phi^{1}_{x}(\z))^{2}}{\phi^{2}_{y}(\z)}$.
With the above in mind we see that $\frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{4}) = \frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}}$.
\
This gives that $\Phi^{3}_{\ast}(\ell) = span \{ (b \frac{\pa \tilde{u}_{3}}{\pa u_{3}}(p_{4}) +
c \frac{\pa \tilde{u}_{3}}{\pa v_{3}}(p_{4}))\frac{\pa}{\pa u_{3}} +
(b \frac{\pa \tilde{v}_{3}}{\pa u_{3}}(p_{4}) + c \frac{\pa \tilde{v}_{3}}{\pa v_{3}}(p_{4}))\frac{\pa}{\pa v_{3}} \} =
span \{ (b \frac{(\phi^{2}_{y}(\z))^{2}}{(\phi^{1}_{x}(\z))^{3}})\frac{\pa}{\pa u_{3}} +
c \frac{\phi^{3}_{z}(\z)}{(\phi^{1}_{x}(\z))^{2}}\frac{\pa}{\pa v_{3}} \}$.
\end{document}
\begin{document}
\title{On the volume of unit balls of finite-dimensional Lorentz spaces}
\begin{abstract}
We study the volume of unit balls $B^n_{p,q}$ of finite-dimensional Lorentz sequence spaces $\ell_{p,q}^n.$
We give an iterative formula for ${\rm vol}(B^n_{p,q})$ for the weak Lebesgue spaces with $q=\infty$ and explicit formulas for $q=1$ and $q=\infty.$
We derive asymptotic results for the $n$-th root of ${\rm vol}(B^n_{p,q})$ and show that
$[{\rm vol}(B^n_{p,q})]^{1/n}\approx n^{-1/p}$ for all $0<p<\infty$ and $0<q\le\infty.$
We study further the ratio between the volume of unit balls of weak Lebesgue spaces
and the volume of unit balls of classical Lebesgue spaces. We conclude with an application of
the volume estimates and characterize the decay of the entropy numbers of the embedding of
the weak Lebesgue space $\ell_{1,\infty}^n$ into $\ell_1^n.$
\end{abstract}
{\bf Keywords:} Lorentz sequence spaces, weak Lebesgue spaces, entropy numbers, vo\-lu\-me estimates, (non-)convex bodies
\section{Introduction and main results}
It was observed already in the first half of the last century (cf. the interpolation theorem of Marcinkiewicz \cite{Mar}) that the scale
of Lebesgue spaces $L_p(\Omega)$, defined on a subset $\Omega\subset \mathbb{R}^n$, is not sufficient to describe the fine properties
of functions and operators. After the pioneering work of Lorentz \cite{Lor1,Lor2}, Hunt defined in \cite{Hunt1, Hunt2}
a more general scale of function spaces $L_{p,q}(\Omega)$, the so-called Lorentz spaces. This scale includes Lebesgue spaces
as a special case (for $p=q$) and Lorentz spaces have found applications in many areas of mathematics, including harmonic analysis (cf. \cite{Graf1,Graf2})
and the analysis of PDE's (cf. \cite{LR,Meyer}).
If $\Omega$ is an atomic measure space (with all atoms of the same measure), one arrives naturally at the definition of Lorentz spaces
on (finite or infinite) sequences. If $n$ is a positive integer and $0<p\le\infty$, then the Lebesgue $n$-dimensional space $\ell_p^n$ is
$\mathbb{R}^n$ equipped with the (quasi-)norm
\begin{equation*}
\|x\|_p=\begin{cases}\displaystyle \Bigl(\sum_{j=1}^n |x_j|^p\Bigr)^{1/p},&\quad\text{for}\ 0<p<\infty,\\
\displaystyle\max_{j=1,\dots,n}|x_j|,&\quad\text{for}\ p=\infty
\end{cases}
\end{equation*}
for every $x=(x_1,\dots,x_n)\in\mathbb{R}^n$. We denote by $B_p^n$ its unit ball
\begin{equation}\label{eq:defBp}
B_p^n=\{x\in\mathbb{R}^n:\|x\|_p\le 1\}.
\end{equation}
If $0<p,q\le\infty$, then the Lorentz space $\ell_{p,q}^n$ stands for $\mathbb{R}^n$ equipped with the \mbox{(quasi-)norm}
\begin{equation}\label{eq:defpq}
\|x\|_{p,q}=\|k^{\frac{1}{p}-\frac{1}{q}}x_k^*\|_q,
\end{equation}
where $x^*=(x_1^*,\dots,x_n^*)$ is the non-increasing rearrangement of $(|x_1|,\dots,|x_n|)$.
If $p=q$, then $\ell_{p,p}^n=\ell_p^n$ are again the Lebesgue sequence spaces.
If $q=\infty$, then the space $\ell_{p,\infty}^n$ is usually referred to as a weak Lebesgue space.
Similarly to \eqref{eq:defBp}, we denote by $B_{p,q}^n$ the unit ball of $\ell_{p,q}^n$, i.e. the set
\begin{equation}
B_{p,q}^n=\{x\in\mathbb{R}^n:\|x\|_{p,q}\le 1\}.
\end{equation}
Furthermore, $B^{n,+}_{p}$ (or $B_{p,q}^{n,+}$)
will be the set of vectors from $B_p^n$ (or $B_{p,q}^{n}$) with all coordinates non-negative.
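For concreteness, the (quasi-)norm \eqref{eq:defpq} is straightforward to evaluate numerically. The following is a minimal sketch (in Python/NumPy; the function name is ours and purely illustrative).
\begin{verbatim}
import numpy as np

def lorentz_norm(x, p, q):
    # ||x||_{p,q} = || k^(1/p - 1/q) x_k^* ||_q; p = np.inf is allowed (1/p = 0).
    xs = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]   # rearrangement x^*
    k = np.arange(1, len(xs) + 1)
    if q == np.inf:
        return np.max(k ** (1.0 / p) * xs)    # weak Lebesgue case
    return np.sum((k ** (1.0 / p - 1.0 / q) * xs) ** q) ** (1.0 / q)

# Example: for p = q the usual l_p (quasi-)norm is recovered.
print(lorentz_norm([3.0, -4.0], 2, 2))   # 5.0
\end{verbatim}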
Lorentz spaces of (finite or infinite) sequences have been used extensively in different areas of mathematics.
They form a basis for many operator ideals of Pietsch, cf. \cite{Pietsch1, Pietsch2, Triebel2},
they play an important role in the interpolation theory, cf. \cite{BS,BL,Haroske,LiPe}, and their weighted counterparts are the main building blocks
of approximation function spaces, cf. \cite{ST,Triebel1}.
Weak Lebesgue sequence spaces (i.e. Lorentz spaces with $q=\infty$) were used by Cohen, Dahmen, Daubechies, and DeVore \cite{CDDD}
to characterize functions of bounded variation. Lorentz spaces further appear in approximation theory \cite{DeVore,DeLo,DePeTe} and signal processing \cite{IEEE1,IEEE2,FR}.
The volume of unit balls of classical Lebesgue sequence spaces $B_p^n$ is known since the times of Dirichlet \cite{Dirichlet},
who showed for $0<p\le\infty$ that
\begin{equation}\label{eq:volBp}
\operatornamewithlimits{vol}(B_p^n)=2^n\cdot\frac{\Gamma\bigl(1+\frac{1}{p}\bigr)^n}{\Gamma\bigl(1+\frac{n}{p}\bigr)}.
\end{equation}
Here, $\operatornamewithlimits{vol}(A)$ stands for the Lebesgue measure of a (measurable) set $A\subset\mathbb{R}^n$
and $\Gamma(x)=\int_0^\infty t^{x-1}e^{-t}dt$ is the Gamma function for $x>0$.
Since then, \eqref{eq:volBp} and its consequences
play an important role in many results about finite-dimensional Lebesgue spaces, cf. \cite{Pisier}.
Although many properties of Lorentz sequence spaces
were studied in detail earlier (cf. \cite{LorTheo2, LorTheo1, BS,BL, LorTheo5, LorTheo3, LorTheo4}), there seems to be only very little known about
the volume of their unit balls. The aim of this work is to fill to some extent this gap.
We present two ways that lead to recursive formulas for $\operatornamewithlimits{vol}(B_{p,\infty}^n)$ if $0<p<\infty$.
The first one (cf. Theorem \ref{thm:ind:1})
\begin{equation}\label{eq:intro:1}
\operatornamewithlimits{vol}(B^{n,+}_{p,\infty})=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty})
\end{equation}
is quite well suited for calculating $\operatornamewithlimits{vol}(B_{p,\infty}^n)$ for moderate values of $n$
and we present also some numerical results on the behavior of this quantity for different values of $p$.
Although an explicit formula for $\operatornamewithlimits{vol}(B_{p,\infty}^n)$ can be derived from \eqref{eq:intro:1}, cf. Theorem \ref{thm:ind:2},
due to its combinatorial nature it seems to be only of limited practical use. In Section \ref{sec:integral}
we derive the same formula with the help of iterated multivariate integrals, very much in the spirit of the original proof of Dirichlet.
Surprisingly, a simple explicit formula can be given for $\operatornamewithlimits{vol}(B_{p,1}^n)$ for the full range $0<p\le\infty$. Indeed, we show in Theorem \ref{thm:q1} that
$$
\operatornamewithlimits{vol}(B_{p,1}^n)=2^n\prod_{k=1}^n\frac{1}{\varkappa_p(k)},\quad \text{where}\quad \varkappa_p(k)=\sum_{j=1}^kj^{1/p-1}.
$$
If $p=1$, then $\varkappa_1(k)=k$ and this formula reduces immediately to the well-known relation $\operatornamewithlimits{vol}(B_{1}^n)=2^n/n!.$
Using Stirling's formula, \eqref{eq:volBp} implies that $[\operatornamewithlimits{vol}(B_p^n)]^{1/n}\approx n^{-1/p}$ for all $0<p<\infty$ with
the constants of equivalence independent of $n$. Using the technique of entropy numbers, we show in Theorem \ref{thm:asym:1} that essentially the same is true
for the whole scale of Lorentz spaces $\ell_{p,q}^n$ (with a remarkable exception for $p=\infty$, cf. Theorem \ref{thm:asym:inf1}).
It is a very well known fact (cf. Theorem \ref{thm:emb:1}) that $B_{p}^n\subset B_{p,\infty}^n$
for all $0<p\le\infty$ and it is common folklore to consider the unit balls of weak Lebesgue spaces (i.e. Lorentz spaces with $q=\infty$)
as the ``slightly larger'' counterparts of the unit balls of Lebesgue spaces with the same summability parameter $p$.
This intuition seems to be further confirmed by Theorem \ref{thm:asym:1}, which shows that the quantities
$[\operatornamewithlimits{vol}(B^n_p)]^{1/n}$ and $[\operatornamewithlimits{vol}(B^n_{p,\infty})]^{1/n}$ are equivalent to each other with constants independent of $n$.
On the other hand, we show in Theorem \ref{thm:ratio} that $\operatornamewithlimits{vol}(B^n_{p,\infty})/\operatornamewithlimits{vol}(B^n_{p})$ grows exponentially
in $n$ at least for $p\le 2$. We conjecture (but it remains an open problem)
that the same is true for all $p<\infty.$
We conclude our work by considering the entropy numbers of the embeddings between Lorentz spaces
of finite dimension, which complements the seminal work of Edmunds and Netrusov \cite{EN}.
We characterize in Theorem \ref{thm:entropy}
the decay of the entropy numbers
$e_k(id:\ell_{1,\infty}^n\to \ell_{1}^n)$, which turns out to exhibit a rather
unusual behavior, namely
$$
e_k(id:\ell_{1,\infty}^n\to \ell_1^n)\approx\begin{cases} \log(1+n/k),\quad 1\le k\le n,\\
2^{-\frac{k-1}{n}},\quad k\ge n
\end{cases}
$$
with constants of equivalence independent of $k$ and $n$.
We see that after a logarithmic decay for $1\le k \le n$, the exponential decay in $k$
takes over for $k\ge n.$
\section{Recursive and explicit formulas}
In this section, we present different formulas for the volume of unit balls of Lorentz spaces
for two special cases, namely for the weak Lebesgue spaces with $q=\infty$ and for Lorentz spaces with $q=1.$
Surprisingly, different techniques have to be used in these two cases.
\subsection{Weak Lebesgue spaces}
We start with the study of weak Lebesgue spaces, i.e. the Lorentz spaces $\ell_{p,\infty}^n$. If $p=\infty$, then $\ell_{p,\infty}^n=\ell_\infty^n$.
Therefore, we restrict ourselves to $0<p<\infty$ in this section.
\subsubsection{Using the inclusion-exclusion principle}
In this section, we assume the convention
$$
\operatornamewithlimits{vol}(B^{1,+}_{p,\infty})=\operatornamewithlimits{vol}(B^{0,+}_{p,\infty})=1
$$
for every $0<p<\infty.$
\begin{thm}\label{thm:ind:1} Let $n\in\mathbb{N}$ and $0<p<\infty$. Then
\begin{equation}\label{eq:ind:1}
\operatornamewithlimits{vol}(B^{n,+}_{p,\infty})=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty}).
\end{equation}
\end{thm}
\begin{proof} For $1\le k \le n$, we denote $A_k=\{x\in B^{n,+}_{p,\infty}:x_k\le n^{-1/p}\}$.
If $x\in B^{n,+}_{p,\infty}$, then at least one of the coordinates of $x$ must be smaller than or equal to $n^{-1/p}$. Therefore
$$
B^{n,+}_{p,\infty}=\bigcup_{j=1}^n A_j.
$$
For a non-empty index set $K\subset \{1,\dots,n\}$, we denote
$$
A_K=\bigcap_{k\in K}A_k=\{x\in B_{p,\infty}^{n,+}:x_k\le n^{-1/p}\ \text{for all}\ k\in K\}.
$$
If we denote by $x_{K^c}$ the restriction of $x$ onto $K^c=\{1,\dots,n\}\setminus K$, then the $j^{\rm th}$
largest coordinate of $x_{K^c}$ can be at most $j^{-1/p}$, i.e. $x_{K^c}\in B^{n-|K|,+}_{p,\infty}$.
Here, $|K|$ stands for the number of elements in $K$.
We therefore obtain
\begin{equation}\label{eq:volK}
\operatornamewithlimits{vol}(A_K)=\Bigl(\prod_{k\in K}n^{-1/p}\Bigr)\cdot \operatornamewithlimits{vol}(B^{n-|K|,+}_{p,\infty})=n^{-|K|/p}\cdot \operatornamewithlimits{vol}(B^{n-|K|,+}_{p,\infty}).
\end{equation}
Finally, we insert \eqref{eq:volK} into the inclusion-exclusion principle and obtain
\begin{align*}
\operatornamewithlimits{vol}(B^{n,+}_{p,\infty})&=\sum_{\emptyset\not=K\subset\{1,\dots,n\}}(-1)^{|K|-1}\operatornamewithlimits{vol}(A_K)\\
&=\sum_{\emptyset\not=K\subset\{1,\dots,n\}}(-1)^{|K|-1}n^{-|K|/p}\operatornamewithlimits{vol}(B^{n-|K|,+}_{p,\infty})\\
&=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty}).
\end{align*}
\end{proof}
The relation \eqref{eq:ind:1} is already suitable for the calculation of $\operatornamewithlimits{vol}(B^{n}_{p,\infty})$ for moderate values of $n$, cf. Table \ref{table_infty}.
Let us remark that $\operatornamewithlimits{vol}(B^n_{1,\infty})$ is maximal for $n=4$ and $\operatornamewithlimits{vol}(B^n_{2,\infty})$ attains its maximum at $n=18.$
\begin{table}[h t p]\centering
\pgfplotstableread[col sep = semicolon]{q1prehled15.txt}\loadedtable
\setlength{\tabcolsep}{8pt}
\pgfplotstabletypeset[
precision=3,
every head row/.style={
before row=\toprule,after row=\midrule},
every last row/.style={
after row=\bottomrule},
columns/n/.style={int detect,
column name={$n$}},
columns/pul/.style={sci, sci zerofill, sci sep align,
column name={$p=1/2$}},
columns/jedna/.style={sci, sci zerofill, sci sep align,
column name={$p=1$}},
columns/dva/.style={sci, sci zerofill, sci sep align,
column name={$p=2$}},
columns/sto/.style={sci, sci zerofill, sci sep align,
column name={$p=100$}},
]\loadedtable
\caption{$\text{vol}(B^n_{p,\infty})$ for dimensions up to 15}
\label{table_infty}
\end{table}
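For the reader's convenience, we sketch how the values in Table \ref{table_infty} can be reproduced from the recursion \eqref{eq:ind:1}. The following is a minimal Python sketch; all function names are ours and purely illustrative.
\begin{verbatim}
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def vol_pos(n, p):
    # vol(B^{n,+}_{p,infty}) via the recursion eq:ind:1, with the
    # convention vol(B^{1,+}) = vol(B^{0,+}) = 1.
    if n <= 1:
        return 1.0
    return sum((-1) ** (j - 1) * comb(n, j) * n ** (-j / p) * vol_pos(n - j, p)
               for j in range(1, n + 1))

def vol_weak(n, p):
    # vol(B^n_{p,infty}) = 2^n * vol(B^{n,+}_{p,infty})
    return 2.0 ** n * vol_pos(n, p)

# vol_weak(n, 1.0) for n = 1..6 gives approx. 2.0, 3.0, 3.63, 3.70, 3.26,
# 2.54, so the maximum for p = 1 is indeed attained at n = 4.
\end{verbatim}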
Next we exploit Theorem \ref{thm:ind:1} to give a certain explicit result about the volume of unit balls of weak Lebesgue spaces.
For this, we denote by ${\bf K}_n$ the set of integer vectors of finite length $k=(k_1,\dots,k_j)$ with positive
coordinates $k_1,\dots,k_j$, which sum up to $n$. We denote by $\ell(k)=j$ the length of $k\in{\bf K}_n.$
Similarly, we denote by ${\bf M}_n$ the set of all increasing sequences $m=(m_0,\dots,m_j)$
which grow from zero to $n$, i.e. with $0=m_0<m_1<\dots<m_j=n.$ The quantity $\ell(m)=j$ is again the length of $m\in{\bf M}_n$.
Hence,
\begin{align*}
{\bf K}_n&:=\{k=(k_1,\dots,k_j):k_i\in\mathbb{N}, \sum_{i=1}^j k_i=n\},\\
{\bf M}_n&:=\{m=(m_0,\dots,m_j):m_i\in\mathbb{N}_0,\ 0=m_0<m_1<\dots<m_j=n\}.
\end{align*}
For $k\in{\bf K}_n$, we also write
$$
{n\choose k}={n\choose k_1,\dots,k_{\ell(k)}}=\frac{n!}{k_1!\dots k_{\ell(k)}!}
$$
The explicit formula for $\operatornamewithlimits{vol}(B_{p,\infty}^n)=2^n\operatornamewithlimits{vol}(B_{p,\infty}^{n,+})$ is then presented in the following theorem.
\begin{thm}\label{thm:ind:2} Let $0<p<\infty$ and $n\in\mathbb{N}.$ Then
\begin{align}
\notag \operatornamewithlimits{vol}(B^{n,+}_{p,\infty})&=\sum_{k\in {\bf K}_n} (-1)^{n+\ell(k)}{n\choose k}
\prod_{l=1}^{\ell(k)}\Bigl(n-\sum_{i=1}^{l-1}k_i\Bigr)^{-k_l/p}\\
\label{eq:ind:2}&=n!\sum_{m\in {\bf M}_n} (-1)^{n+\ell(m)}
\prod_{l=0}^{\ell(m)-1} \frac{(n-m_l)^{-(m_{l+1}-m_{l})/p}}{(m_{l+1}-m_{l})!}.
\end{align}
\end{thm}
\begin{proof}
First, we prove the second identity in \eqref{eq:ind:2}. Indeed, the mapping $k=(k_1,\dots,k_j)\to (0,k_1,k_1+k_2,\dots,\sum_{i=1}^jk_i)$
maps ${\bf K}_n$ one-to-one onto ${\bf M}_n$, preserving also the length of the vectors.
Next, we proceed by induction to show the first identity of \eqref{eq:ind:2}. To that end, we denote ${\bf K}_0=\{0\}$ with $\ell(0)=0$.
With this convention, \eqref{eq:ind:2}
is true for both $n=0$ and $n=1$, where both the sides of \eqref{eq:ind:2} are equal to one.
The rest follows from \eqref{eq:ind:1}.
Indeed, we obtain
\begin{align*}
\operatornamewithlimits{vol}(B^{n,+}_{p,\infty})&=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\operatornamewithlimits{vol}(B^{n-j,+}_{p,\infty})\\
&=\sum_{j=1}^n (-1)^{j-1}{n\choose j}n^{-j/p}\sum_{k\in{\bf K}_{n-j}}(-1)^{n-j+\ell(k)}{n-j\choose k}\prod_{l=1}^{\ell(k)}\Bigl(n-j-\sum_{i=1}^{l-1}k_i\Bigr)^{-k_l/p}\\
&=\sum_{j=1}^n \sum_{k\in{\bf K}_{n-j}} (-1)^{n+\ell(k)-1}{n\choose j}{n-j\choose k}n^{-j/p}\prod_{l=1}^{\ell(k)}\Bigl(n-j-\sum_{i=1}^{l-1}k_i\Bigr)^{-k_l/p}\\
&=\sum_{\nu\in{\bf K}_{n}} (-1)^{n+\ell(\nu)}{n\choose \nu}n^{-\nu_1/p}\prod_{l=1}^{\ell(\nu)-1}\Bigl(n-\sum_{i=1}^{l}\nu_i\Bigr)^{-\nu_{l+1}/p}\\
&=\sum_{\nu\in{\bf K}_{n}} (-1)^{n+\ell(\nu)}{n\choose \nu}\prod_{l=1}^{\ell(\nu)}
\Bigl(n-\sum_{i=1}^{l-1}\nu_i\Bigr)^{-\nu_{l}/p},
\end{align*}
where we identified the pair $(j,k)$ with $1\le j\le n$ and $k=(k_1,\dots,k_{\ell(k)})\in {\bf K}_{n-j}$ with $\nu=(j,k_1,\dots,k_{\ell(k)})\in{\bf K}_n$.
If $j=n$, then the pair $(n,0)$ is identified with $\nu=(n)$. In any case, $\ell(\nu)=\ell(k)+1.$
\end{proof}
\subsubsection{Integral approach}\label{sec:integral}
The result of Theorem \ref{thm:ind:2} can be obtained also by an iterative evaluation of integrals, resembling
the approach of the original work of Dirichlet \cite{Dirichlet}.
To begin with, we define a scale of expressions which for some specific choice of parameters lead to a formula for $\operatornamewithlimits{vol}(B^n_{p,\infty})$.
\begin{dfn}\label{def:V} Let $m\in\mathbb{N}_0$ and $n\in\mathbb{N}$. Let $a\in\mathbb{R}^n$ be a decreasing positive vector, i.e. $a=(a_1,\dots,a_n)$
with $a_1>a_2>\dots>a_n>0.$ We denote
\begin{equation}\label{eq:def:V}
V^{(m)}(n,a)=\int_0^{a_n}\int_{x_n}^{a_{n-1}}\dots\int_{x_2}^{a_1}x_1^mdx_1\dots dx_{n-1}dx_n.
\end{equation}
\end{dfn}
The domain of integration in \eqref{eq:def:V} is defined by the following set of conditions
\begin{align*}
0\le x_n\le a_n,\quad x_n\le x_{n-1}\le a_{n-1},\quad \dots,\quad x_2\le x_1\le a_1,
\end{align*}
which can be reformulated also as
\begin{align*}
0\le x_n\le x_{n-1}\le \dots\le x_2\le x_1\quad\text{and}\quad x_j\le a_j\ \text{for all}\ j=1,\dots,n.
\end{align*}
Hence, the integration in \eqref{eq:def:V} goes over the cone of non-negative non-increasing vectors in $\mathbb{R}^n$ intersected with the set
$\{x\in\mathbb{R}^n:x^*_j\le a_j\ \text{for}\ j=1,\dots,n\}.$ If we set $a^{(p)}=(a_1^{(p)},\dots,a_n^{(p)})$ with $a_j^{(p)}=j^{-1/p}$ for $0<p<\infty$ and $1\le j\le n$,
this set coincides with $B^{n,+}_{p,\infty}$. Finally, considering all the possible reorderings of $x$, we get
\begin{equation}\label{eq:BV1}
\operatornamewithlimits{vol}(B^{n,+}_{p,\infty})=n!\cdot V^{(0)}(n,a^{(p)}).
\end{equation}
In what follows, we simplify the notation by assuming $V^{(0)}(0,\emptyset)=1.$
The integration in \eqref{eq:def:V} leads to the following recursive formula for $V^{(m)}(n,a)$.
\begin{lem}\label{lem:ind:V1} Let $m\in\mathbb{N}_0$, $n\in\mathbb{N}$ and $a\in\mathbb{R}^n$ with $a_1>a_2>\dots>a_n>0.$ Then
\begin{equation}\label{eq:ind:V1}
V^{(m)}(n,a)=\sum_{i=1}^{n}(-1)^{i+1}\frac{a_i^{m+i}m!}{(m+i)!}V^{(0)}(n-i,(a_{i+1},\dots,a_n)).
\end{equation}
\end{lem}
\begin{proof}
First, we obtain
\begin{align*}
V^{(m)}(n,a)&=\int_0^{a_n}\int_{x_n}^{a_{n-1}}\dots\int_{x_2}^{a_1}x_1^mdx_1\dots dx_{n-1}dx_n\\
&=\frac{1}{m+1}\int_0^{a_n}\int_{x_n}^{a_{n-1}}\dots\int_{x_3}^{a_2}(a_1^{m+1}-x_2^{m+1})dx_2\dots dx_{n-1}dx_n\\
&=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,(a_2,\dots,a_n))-\frac{1}{m+1}V^{(m+1)}(n-1,(a_2,\dots,a_n)).
\end{align*}
The proof of \eqref{eq:ind:V1} now follows by induction over $n$. For $n=1$, $V^{(m)}(1,(a_1))=\frac{a_1^{m+1}}{m+1}$,
which is in agreement with \eqref{eq:ind:V1}. To simplify the notation later on,
we write $a_{[k,l]}=(a_k,\dots,a_l)$ for every $1\le k\le l\le n$.
We assume that \eqref{eq:ind:V1} holds for $n-1$ and calculate
\begin{align*}
V^{(m)}(n,a)&=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,a_{[2,n]})-\frac{1}{m+1}V^{(m+1)}(n-1,(a_{[2,n]}))\\
&=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,a_{[2,n]})-
\frac{1}{m+1}\sum_{i=1}^{n-1}(-1)^{i+1}\frac{a_{i+1}^{m+1+i}(m+1)!}{(m+1+i)!}V^{(0)}(n-1-i,(a_{[i+2,n]}))\\
&=\frac{a_1^{m+1}}{m+1}V^{(0)}(n-1,a_{[2,n]})+
\sum_{j=2}^{n}(-1)^{j+1}\frac{a_{j}^{m+j}m!}{(m+j)!}V^{(0)}(n-j,(a_{[j+1,n]}))\\
&=\sum_{j=1}^{n}(-1)^{j+1}\frac{a_{j}^{m+j}m!}{(m+j)!}V^{(0)}(n-j,(a_{[j+1,n]})),
\end{align*}
which finishes the proof of \eqref{eq:ind:V1}.
\end{proof}
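The recursion \eqref{eq:ind:V1} can also be cross-checked numerically against Theorem \ref{thm:ind:1} via \eqref{eq:BV1}. The following minimal sketch (Python, our naming; practical only for small $n$, since the suffix recursion is exponential without memoization) illustrates this.
\begin{verbatim}
from math import factorial

def V0(a):
    # V^(0)(n, a) via the m = 0 case of the recursion eq:ind:V1.
    n = len(a)
    if n == 0:
        return 1.0
    return sum((-1) ** (i + 1) * a[i - 1] ** i / factorial(i) * V0(a[i:])
               for i in range(1, n + 1))

def vol_pos_integral(n, p):
    # vol(B^{n,+}_{p,infty}) = n! * V^(0)(n, a^(p)) with a_j = j^(-1/p), cf. eq:BV1
    a = tuple(j ** (-1.0 / p) for j in range(1, n + 1))
    return factorial(n) * V0(a)

# Agrees (up to rounding) with vol_pos from the earlier sketch,
# e.g. vol_pos_integral(5, 2.0) == vol_pos(5, 2.0).
\end{verbatim}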
Lemma \ref{lem:ind:V1} allows for a different proof of Theorem \ref{thm:ind:2}.
\begin{proof}[Alternative proof of Theorem \ref{thm:ind:2}]
By \eqref{eq:BV1}, we need to calculate $V^{(0)}(n,a)$ and then substitute $a=a^{(p)}.$
We show by induction that
\begin{equation}\label{eq:BV2}
V^{(0)}(n,a)=\sum_{m\in{\bf M}_n}(-1)^{n+\ell(m)}\prod_{l=0}^{\ell(m)-1}\frac{a_{n-m_l}^{m_{l+1}-m_l}}{(m_{l+1}-m_l)!}.
\end{equation}
If $n=1$, then both sides of \eqref{eq:BV2} are equal to $a_1.$ For general $n$, we obtain by \eqref{eq:ind:V1}
\begin{align*}
V^{(0)}(n,a)&=\sum_{i=1}^{n}(-1)^{i+1}\frac{a_i^{i}}{i!}V^{(0)}(n-i,(a_{i+1},\dots,a_n))\\
&=\sum_{i=1}^{n}(-1)^{i+1}\frac{a_i^{i}}{i!} \sum_{m\in{\bf M}_{n-i}}(-1)^{n-i+\ell(m)}\prod_{l=0}^{\ell(m)-1}\frac{a_{n-i-m_l+i}^{m_{l+1}-m_l}}{(m_{l+1}-m_l)!}\\
&=\sum_{i=1}^{n}\sum_{m\in{\bf M}_{n-i}}(-1)^{n+1+\ell(m)}\frac{a_i^{i}}{i!} \prod_{l=0}^{\ell(m)-1}\frac{a_{n-m_l}^{m_{l+1}-m_l}}{(m_{l+1}-m_l)!}\\
&=\sum_{\mu\in{\bf M}_n} (-1)^{n+\ell(\mu)} \frac{a_{n-\mu_{\ell(\mu)-1}}^{\mu_{\ell(\mu)}-\mu_{\ell(\mu)-1}}}{(\mu_{\ell(\mu)}-\mu_{\ell(\mu)-1})!}
\prod_{l=0}^{\ell(\mu)-2}\frac{a_{n-\mu_l}^{\mu_{l+1}-\mu_l}}{(\mu_{l+1}-\mu_l)!}\\
&=\sum_{\mu\in{\bf M}_n} (-1)^{n+\ell(\mu)} \prod_{l=0}^{\ell(\mu)-1}\frac{a_{n-\mu_l}^{\mu_{l+1}-\mu_l}}{(\mu_{l+1}-\mu_l)!},
\end{align*}
where we identified the pair $(i,m)$ with $1\le i \le n$ and $m=(m_0,\dots,m_{\ell(m)})\in{\bf M}_{n-i}$ with
$\mu=(\mu_0,\dots,\mu_{\ell(\mu)})=(m_0,\dots,m_{\ell(m)},n)\in{\bf M}_n$. Hence, $\ell(\mu)=\ell(m)+1.$
\end{proof}
\subsection{Lorentz spaces with $q=1$}
We give an explicit formula for $\operatornamewithlimits{vol}(B_{p,1}^n)$, which takes a surprisingly simple form.
The approach is based on polarization. Recall that for $0<p\le \infty$, the \mbox{(quasi-)norm}
$\|x\|_{p,1}$ is defined as
$$
\|x\|_{p,1}=\sum_{k=1}^n k^{1/p-1}x_k^*.
$$
\begin{thm}\label{thm:q1} Let $0<p\le \infty$. Then
$$
\operatornamewithlimits{vol}(B_{p,1}^n)=2^n\prod_{k=1}^n\frac{1}{\varkappa_p(k)},\quad \text{where}\quad \varkappa_p(k)=\sum_{j=1}^kj^{1/p-1}.
$$
\end{thm}
\begin{proof}
Let $f:[0,\infty)\to \mathbb{R}$ be a smooth non-negative function with a sufficient decay at infinity (later on, we will just choose $f(t)=e^{-t}$).
Then
\begin{align}
\notag\int_{\mathbb{R}^n}f(\|x\|_{p,1})dx&=-\int_{\mathbb{R}^n}\int_{\|x\|_{p,1}}^\infty f'(t)dtdx=-\int_0^\infty f'(t)\int_{x:\|x\|_{p,1}\le t}1dx dt\\
\label{eq:q1:1}&=-\int_0^\infty f'(t)\operatornamewithlimits{vol}(\{x:\|x\|_{p,1}\le t\})dt\\
\notag&=-\int_0^\infty t^n f'(t)\operatornamewithlimits{vol}(\{x:\|x\|_{p,1}\le 1\})dt=-\operatornamewithlimits{vol}(B^n_{p,1})\int_0^\infty t^n f'(t)dt.
\end{align}
For the choice $f(t)=e^{-t}$, we get
\begin{equation}\label{eq:q1:2}
-\int_0^\infty t^n f'(t)dt=\int_0^\infty t^ne^{-t}dt=\Gamma(n+1)=n!.
\end{equation}
It remains to evaluate
\begin{align*}
I^n_{p}=\int_{\mathbb{R}^n}\exp(-\|x\|_{p,1})dx=\int_{\mathbb{R}^n}\exp\Bigl(-\sum_{k=1}^n k^{1/p-1}x_k^*\Bigr)dx
=2^n\cdot n!\cdot\int_{{\mathcal C}_+^n}\exp\Bigl(-\sum_{k=1}^n k^{1/p-1}x_k\Bigr)dx,
\end{align*}
where
$$
{\mathcal C}_+^n=\Bigl\{x\in\mathbb{R}^n:x_1\ge x_2\ge\dots\ge x_n\ge 0\Bigr\}.
$$
We denote for $t\ge 0$, $0<p\le\infty$, and $n\in \mathbb{N}$
$$
A(n,p,t)=\int_{{\mathcal C}_{t,+}^n}\exp\Bigl(-\sum_{k=1}^n k^{1/p-1}x_k\Bigr)dx,
$$
where ${\mathcal C}_{t,+}^n=\Bigl\{x\in\mathbb{R}^n:x_1\ge x_2\ge\dots\ge x_n\ge t\Bigr\}$, i.e. $I_p^n=2^n\cdot n!\cdot A(n,p,0).$
We observe that
\begin{equation}\label{eq:int:1}
A(1,p,t)=\int_t^\infty e^{-u}du=e^{-t}
\end{equation}
and
\begin{align}
\notag A(n,p,t)&=\int_{t}^\infty \exp\Bigl({-n^{1/p-1}x_n}\Bigr)\int_{x_n}^\infty \exp\Bigl({-(n-1)^{1/p-1}x_{n-1}}\Bigr)\dots \int_{x_2}^\infty \exp({-x_1})dx_1\dots dx_{n-1}dx_n\\
\label{eq:int:2}
&=\int_{t}^\infty \exp\Bigl({-n^{1/p-1}x_n}\Bigr)A(n-1,p,x_n)dx_n.
\end{align}
Combining \eqref{eq:int:1} and \eqref{eq:int:2}, we prove by induction
\begin{align*}
A(n,p,t)=\prod_{k=1}^n\frac{1}{\varkappa_p(k)}\exp(-\varkappa_p(n) t),\quad \text{where}\quad \varkappa_p(k)=\sum_{j=1}^kj^{1/p-1}
\end{align*}
and
\begin{equation}\label{eq:q1:3}
I_p^n=2^n\cdot n!\cdot \prod_{k=1}^n\frac{1}{\varkappa_p(k)}.
\end{equation}
Finally, we combine \eqref{eq:q1:1} with \eqref{eq:q1:2} and \eqref{eq:q1:3} and obtain
$$
\operatornamewithlimits{vol}(B_{p,1}^n)=2^n\prod_{k=1}^n\frac{1}{\varkappa_p(k)}.
$$
\end{proof}
\begin{rem}
Let us point out that for $p=1$, we get $\varkappa_1(k)=k$ and we recover the very well known formula $\operatornamewithlimits{vol}(B^n_{1,1})=2^n/n!$.
The application of the polarization identity to other values of $q\not=1$ is also possible, but one arrives at an $n$-dimensional integral,
which (in contrast to $I_p^n$) seems to be hard to compute explicitly.
\end{rem}
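Theorem \ref{thm:q1} also lends itself to direct numerical evaluation. The following minimal sketch (Python; the names are ours) implements the product formula.
\begin{verbatim}
def kappa(p, k):
    # kappa_p(k) = sum_{j=1}^k j^(1/p - 1); p may be float('inf'), then 1/p = 0.
    return sum(j ** (1.0 / p - 1.0) for j in range(1, k + 1))

def vol_B_p1(n, p):
    # vol(B^n_{p,1}) = 2^n * prod_{k=1}^n 1/kappa_p(k), cf. Theorem thm:q1
    v = 2.0 ** n
    for k in range(1, n + 1):
        v /= kappa(p, k)
    return v

# Sanity check: for p = 1 we have kappa_1(k) = k, so vol_B_p1(n, 1) = 2^n / n!.
\end{verbatim}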
\section{Asymptotic behavior}
Volumes of convex and non-convex bodies play an important role in many areas of mathematics, cf. \cite{Pisier}.
Nevertheless, for most of the applications we do not need exact information about the volume; it is often enough
to apply good lower and/or upper bounds on this quantity. For example, for use in the local theory of Banach spaces,
it is sometimes sufficient to have some asymptotic bounds on $\operatornamewithlimits{vol}(B_p^n)$ for $n$ large.
In this section, we provide two such estimates.
\subsection{Asymptotic behavior of $\operatornamewithlimits{vol}(B_{p,q}^n)^{1/n}$}
The first quantity we would like to study is the $n$-th root of $\operatornamewithlimits{vol}(B^n_{p,q})$. In the Lebesgue case $q=p$, \eqref{eq:volBp}
can be combined with Stirling's formula (cf. \cite{WiW})
\begin{equation}\label{eq:Stirling}
\Gamma(t)=(2\pi)^{1/2}t^{t-1/2}e^{-t}e^{\theta(t)/t},\quad 0<t<\infty,
\end{equation}
where $0<\theta(t)<1/12$ for all $t>0$, to show that
\begin{equation}\label{eq:asymp:p}
\operatornamewithlimits{vol}(B_p^n)^{1/n}\approx n^{-1/p},
\end{equation}
where the constants of equivalence do not depend on $n$. Combining \eqref{eq:asymp:p} with the embedding (cf. Theorem \ref{thm:emb:1})
$$
B_p^n\subset B_{p,\infty}^n\subset (1+\log(n))^{1/p}B_{p}^n,
$$
we observe that
\begin{equation}\label{eq:asymp:pq1}
n^{-1/p}\lesssim [\operatornamewithlimits{vol}(B^n_{p,\infty})]^{1/n}\lesssim \Bigl(\frac{1+\log(n)}{n}\Bigr)^{1/p}
\end{equation}
for all $0<p\le \infty.$ The aim of this section is to show that the lower bound in \eqref{eq:asymp:pq1} is sharp
and that \eqref{eq:asymp:p} generalizes to all $0<p<\infty$ and $0<q\le\infty$ without additional logarithmic factors.
If $0<p<\infty$ and $q=1$, this can be obtained as a consequence of Theorem \ref{thm:q1}. Indeed, elementary estimates give
$$
\varkappa_p(k)\approx k^{1/p}
$$
with constants independent of $k$, and Theorem \ref{thm:q1} then implies
$$
\operatornamewithlimits{vol}(B^n_{p,1})^{1/n}\approx \Bigl(\prod_{k=1}^n \varkappa_p(k)^{-1}\Bigr)^{1/n}\approx \Bigl(\prod_{k=1}^n k^{-1/p}\Bigr)^{1/n}\approx (n!)^{-\frac{1}{p}\cdot\frac{1}{n}}.
$$
The proof is then finished by another application of Stirling's formula.
To extend the result also to $q\not=1$, we apply the technique of entropy numbers together with interpolation.
\begin{thm}\label{thm:asym:1}
Let $n\in\mathbb{N}$, $0<p<\infty$ and $0<q\le\infty$. Then
\begin{equation}\label{eq:thm:asymp}
\operatornamewithlimits{vol}(B_{p,q}^n)^{1/n}\approx n^{-1/p}
\end{equation}
with the constants of equivalence independent of $n$.
\end{thm}
\begin{proof}
\emph{Step 1.:} First, we show the upper bound of $\operatornamewithlimits{vol}(B^n_{p,\infty})^{1/n}$.
For that reason, we define the entropy numbers of a bounded linear operator between two quasi-Banach spaces $X$ and $Y$
as follows
$$
e_k(T:X\to Y)=\inf\Bigl\{\varepsilon>0:\exists \{y_l\}_{l=1}^{2^{k-1}}\subset Y\ \text{such that}\ T(B_X)\subset\bigcup_{l=1}^{2^{k-1}}(y_l+\varepsilon B_Y)\Bigr\}.
$$
Here, $B_X$ and $B_Y$ stand for the unit ball of $X$ and $Y$, respectively.
We use the interpolation inequality for entropy numbers (cf. \cite[Theorem 1.3.2 (i)]{ET})
together with the interpolation property of Lorentz spaces (cf. \cite[Theorems 5.2.1 and 5.3.1]{BL}) and obtain that
$$
e_{k}(id:\ell_{p,\infty}^n\to\ell_\infty^n)\le c_p e_k(id:\ell_{p/2}^n\to\ell_{\infty}^n)^{1/2},
$$
where $c_p>0$ depends only on $p$. Together with the known estimates of entropy numbers of embeddings of Lebesgue-type sequence spaces \cite{GL,KV,K2001,S},
we obtain
$$
e_{n}(id:\ell^n_{p,\infty}\to\ell_\infty^n)\le c_p n^{-1/p}.
$$
By the definition of entropy numbers, this means that $B_{p,\infty}^n$ can be covered with $2^{n-1}$ balls in $\ell_\infty^n$ with radius
$(1+\varepsilon)c_pn^{-1/p}$ for every $\varepsilon>0$. Comparing the volumes, we obtain
$$
\operatornamewithlimits{vol}(B_{p,\infty}^n)\le 2^{n-1}[(1+\varepsilon)c_pn^{-1/p}]^n\operatornamewithlimits{vol}(B_\infty^n),
$$
i.e. $\operatornamewithlimits{vol}(B_{p,\infty}^n)^{1/n}\le c_p' n^{-1/p}.$
\emph{Step 2.:} The estimate from above for general $0<q\le \infty$ is covered by the embedding of Theorem \ref{thm:emb:1} and by the previous step.
\emph{Step 3.:} For the lower bound, we use again the interpolation
of entropy numbers leading to
$$
e_{n}(id:\ell_{p/2}^n\to \ell_{p,q}^n)\le c_{p,q} e_{n}(id:\ell_{p/2}^n\to \ell_{\infty}^n)^{1/2}.
$$
Therefore, the unit ball $B_{p/2}^n$ can be covered by $2^{n-1}$ copies of $B_{p,q}^n$ multiplied by $(1+\varepsilon)c n^{-1/p}$.
Comparing the volumes, we obtain
$$
c_1n^{-2/p}\le \operatornamewithlimits{vol}(B_{p/2}^n)^{1/n}\le c_2n^{-1/p}\operatornamewithlimits{vol}(B_{p,q}^n)^{1/n},
$$
which finishes the proof.
\end{proof}
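Theorem \ref{thm:asym:1} can be illustrated numerically for $q=\infty$ using the recursion of Theorem \ref{thm:ind:1}. The following sketch reuses \texttt{vol\_weak} from the sketch after Table \ref{table_infty} (our naming; moderate $n$ only, since the alternating sum in \eqref{eq:ind:1} suffers from cancellation for large $n$).
\begin{verbatim}
# Illustration of eq:thm:asymp for q = infinity: the normalized quantity
# vol_weak(n, p)^(1/n) * n^(1/p) should stay bounded away from 0 and infinity
# as n grows (vol_weak as in the earlier sketch).
p = 2.0
for n in [4, 8, 12, 16, 20]:
    print(n, vol_weak(n, p) ** (1.0 / n) * n ** (1.0 / p))
\end{verbatim}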
The result of Theorem \ref{thm:asym:1} seems to be a bit surprising at first look, especially in view of \eqref{eq:asymp:pq1},
which suggests that some additional logarithmic factor could appear.
That the outcome of Theorem \ref{thm:asym:1} was by no means obvious is confirmed by inspecting the case $p=\infty$,
where the behavior of the $n$-th root of the volume of the unit ball actually differs from \eqref{eq:thm:asymp}.
\begin{thm}\label{thm:asym:inf1}
Let $n\in \mathbb{N}$ be a positive integer. Then
\begin{equation}\label{eq:asym:inf1}
[\operatornamewithlimits{vol}(B^n_{\infty,1})]^{1/n}\approx (\log (n+1))^{-1}
\end{equation}
with the constants of equivalence independent of $n$.
\end{thm}
\begin{proof}
By Theorem \ref{thm:q1}, we know that
$$
\operatornamewithlimits{vol}(B^n_{\infty,1})^{1/n}\approx \Bigl(\prod_{k=1}^n \varkappa_\infty(k)^{-1}\Bigr)^{1/n},
$$
where
$$
\varkappa_\infty(k)=\sum_{j=1}^k \frac{1}{j}\approx \log(k+1)\quad \text{for any}\quad k\ge 1.
$$
Therefore,
$$
\operatornamewithlimits{vol}(B^n_{\infty,1})^{1/n}\approx \Bigl(\prod_{k=1}^n \frac{1}{\log(k+1)}\Bigr)^{1/n}.
$$
The lower bound of this quantity is straightforward
$$
\Bigl(\prod_{k=1}^n \frac{1}{\log(k+1)}\Bigr)^{1/n}\ge
\Bigl(\prod_{k=1}^n \frac{1}{\log(n+1)}\Bigr)^{1/n}=\frac{1}{\log(n+1)}.
$$
For the upper bound, we use the inequality between geometric and arithmetic mean and obtain
$$
\Bigl(\prod_{k=1}^n \frac{1}{\log(k+1)}\Bigr)^{1/n}\le \frac{1}{n} \sum_{k=1}^n \frac{1}{\log(k+1)}\le \frac{1}{n}\biggl\{\frac{1}{\log(2)}+\int_2^{n+1}\frac{1}{\log(t)}dt\biggr\}.
$$
The last integral is the (offset) logarithmic integral, which is known to be asymptotically $O(x/\log(x))$ as $x$ tends to infinity, cf. \cite[Chapter 5]{AS}.
Alternatively, the same fact can be shown easily by L'Hospital's rule.
This finishes the proof.
\end{proof}
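The logarithmic behavior of Theorem \ref{thm:asym:inf1} can also be observed numerically. The following log-space sketch (Python, reusing the illustrative \texttt{kappa} from the sketch after Theorem \ref{thm:q1}) avoids floating-point underflow for larger $n$.
\begin{verbatim}
import math

def root_vol_B_p1(n, p):
    # [vol(B^n_{p,1})]^(1/n), computed in log-space to avoid underflow
    log_v = n * math.log(2.0) - sum(math.log(kappa(p, k))
                                    for k in range(1, n + 1))
    return math.exp(log_v / n)

# For p = inf, kappa_inf(k) is the k-th harmonic number, and by Theorem
# thm:asym:inf1 the product root_vol_B_p1(n, inf) * log(n + 1) stays bounded.
for n in [10, 100, 1000]:
    print(n, root_vol_B_p1(n, math.inf) * math.log(n + 1))
\end{verbatim}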
\subsection{Ratio of volumes}
The unit balls $B^n_{p,\infty}$ of weak Lebesgue spaces
are commonly considered to be ``slightly larger'' than the unit balls of Lebesgue spaces
with the same summability parameter.
The aim of this section is to study their relation in more detail. To that end, we define for $0<p<\infty$
\begin{equation}\label{eq:ratio:1}
R_{p,n}:=\frac{\operatornamewithlimits{vol}(B_{p,\infty}^n)}{\operatornamewithlimits{vol}(B_p^n)}.
\end{equation}
By the embedding in Theorem \ref{thm:emb:1} (which we give below with a full proof for the reader's convenience, cf. \cite[Chapter 4, Proposition 4.2]{BS})
we know that this quantity is bounded from below by one.
Later on, we would like to study its behavior (i.e. growth) when $n$ tends to $\infty.$
\begin{thm}\label{thm:emb:1}
If $0<p<\infty$ and $0<q\le r\le \infty$, then
\begin{equation}\label{eq:emb:p2}
B_{p,q}^n\subset c_{p,q,r} B_{p,r}^n,
\end{equation}
where the quantity $c_{p,q,r}$ does not depend on $n$. In particular, $B_{p,q}^n\subset B_{p,r}^n$
if also $q\le p$.
\end{thm}
\begin{proof}
First, we prove the assertion with $r=\infty$. If $1\le l\le n$ is a positive integer, the result follows from
\begin{align*}
\|x\|_{p,q}^q=\sum_{k=1}^n k^{q/p-1}(x_k^*)^q\ge\sum_{k=1}^l k^{q/p-1}(x_k^*)^q \ge (x_l^*)^q\sum_{k=1}^lk^{q/p-1}
\end{align*}
and
$$
l^{q/p}(x_l^*)^q\le \|x\|_{p,q}^q\cdot l^{q/p}\cdot\Bigl(\sum_{k=1}^l k^{q/p-1}\Bigr)^{-1}.
$$
We obtain that
\begin{equation}\label{eq:emb:p1}
\|x\|_{p,\infty}=\max_{l=1,\dots,n}l^{1/p}x_l^*\le \|x\|_{p,q}\sup_{l\in\mathbb{N}}l^{1/p}\cdot\Bigl(\sum_{k=1}^l k^{q/p-1}\Bigr)^{-1/q}.
\end{equation}
For $q\le p$, it can be shown by elementary calculus that
$$
\sum_{k=1}^lk^{q/p-1}\ge l^{q/p}.
$$
Together with \eqref{eq:emb:p1}, this implies that $\|x\|_{p,\infty}\le \|x\|_{p,q}$
and we obtain $B_{p,q}^n\subset B_{p,\infty}^n,$ i.e. \eqref{eq:emb:p2} with $c_{p,q,\infty}=1.$
If, on the other hand, $q>p$ we estimate
$$
\sum_{k=1}^lk^{q/p-1}\ge\int_0^l t^{q/p-1}dt=\frac{p}{q}\cdot l^{q/p}
$$
and \eqref{eq:emb:p2} follows with $c_{p,q,\infty}=(q/p)^{1/q}$.
If $0<q< r<\infty$, we write
\begin{align*}
\|x\|_{p,r}&=\Bigl\{\sum_{k=1}^n k^{r/p-1}(x_k^*)^r\Bigr\}^{1/r}
=\Bigl\{\sum_{k=1}^n k^{q/p-1}(x_k^*)^q k^{(r-q)/p}(x_k^*)^{r-q}\Bigr\}^{1/r}\\
&\le \|x\|_{p,\infty}^{\frac{r-q}{r}}\cdot\|x\|_{p,q}^{q/r}\le [c_{p,q,\infty}\|x\|_{p,q}]^{\frac{r-q}{r}}\cdot\|x\|_{p,q}^{q/r},
\end{align*}
i.e.
$$
\|x\|_{p,r}\le c_{p,q,r}\|x\|_{p,q}\quad\text{with}\quad c_{p,q,r}=(c_{p,q,\infty})^{\frac{r-q}{r}}.
$$
\end{proof}
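The case $r=\infty$ of Theorem \ref{thm:emb:1} can also be tested numerically. The following Python sketch (an illustration only) checks the inequality $\|x\|_{p,\infty}\le c_{p,q,\infty}\|x\|_{p,q}$ with the constant $c_{p,q,\infty}=(q/p)^{1/q}$ derived above for $q>p$, on random Gaussian vectors.
\begin{verbatim}
import random

def lorentz_norm(x, p, q):
    # quasi-norm of ell^n_{p,q}: (sum_k k^{q/p-1} (x_k^*)^q)^{1/q}
    xs = sorted((abs(t) for t in x), reverse=True)
    return sum(k ** (q / p - 1) * xs[k - 1] ** q
               for k in range(1, len(xs) + 1)) ** (1 / q)

def weak_norm(x, p):
    xs = sorted((abs(t) for t in x), reverse=True)
    return max(k ** (1 / p) * xs[k - 1] for k in range(1, len(xs) + 1))

p, q = 1.0, 2.0                      # here q > p, so c_{p,q,inf} = (q/p)^{1/q}
c = (q / p) ** (1 / q)
random.seed(0)
for _ in range(1000):
    x = [random.gauss(0, 1) for _ in range(50)]
    assert weak_norm(x, p) <= c * lorentz_norm(x, p, q) + 1e-9
print("embedding constant never violated on random samples")
\end{verbatim}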
We show that the ratio $R_{p,n}$ defined in \eqref{eq:ratio:1} grows exponentially for $0<p\le 2$.
Naturally, we also conjecture that the same is true for all $0<p<\infty$, but we leave this as an open problem.
\begin{thm}\label{thm:ratio}
For every $0<p\le 2$, there is a constant $C_p>1$, such that
$$
R_{p,n}\gtrsim C_p^n
$$
with the multiplicative constant independent of $n$.
\end{thm}
\begin{proof}
We give the proof for even $n$; the proof for odd $n$ is similar, only slightly more technical.
Let ${\mathcal B^n_p}\subset \mathbb{R}^n$ be the set of vectors $x\in\mathbb{R}^n$, which satisfy
$$
x_1^*\in\Bigl[\frac{1}{2^{1/p}},1\Bigr],\quad x_2^*\in \Bigl[\frac{1}{3^{1/p}},\frac{1}{2^{1/p}}\Bigr],\dots,x^*_{n/2}\in\Bigl[\frac{1}{(n/2+1)^{1/p}},\frac{1}{(n/2)^{1/p}}\Bigr]
$$
and $\displaystyle x^*_{n/2+1},\dots,x_n^*\in\Bigl[0,\frac{1}{n^{1/p}}\Bigr]$.
Then ${\mathcal B}_p^n\subset B_{p,\infty}^n$
and the volume of ${\mathcal B}_p^n$ can be calculated by combinatorial methods. Indeed,
there are ${n\choose n/2}$ ways to choose the $n/2$ indices of the smallest coordinates. Furthermore, there are $(n/2)!$
ways to distribute the $n/2$ largest coordinates. We obtain that
\begin{align}\label{eq:ratio:proof1}
R_{p,n}&=\frac{\operatornamewithlimits{vol}(B_{p,\infty}^{n})}{\operatornamewithlimits{vol}(B_p^n)}\ge \frac{\operatornamewithlimits{vol}({\mathcal B}_p^n)}{\operatornamewithlimits{vol}(B_p^n)}\\
\notag&\ge\frac{\Gamma(1+n/p)}{\Gamma(1+1/p)^n}\cdot \binom{n}{n/2}\cdot (n/2)! \cdot\prod_{i=1}^{n/2}\frac{(i+1)^{1/p} - i^{1/p}}{i^{1/p}(i+1)^{1/p}} \cdot\left(\frac{1}{n^{1/p}}\right)^{n/2}.
\end{align}
First, we observe that, by Stirling's formula \eqref{eq:Stirling},
\begin{align}
\notag\Gamma(1+n/p)&\cdot\binom{n}{n/2}\cdot (n/2)! \cdot\prod_{i=1}^{n/2}\frac{1}{i^{1/p}(i+1)^{1/p}} \cdot\left(\frac{1}{n^{1/p}}\right)^{n/2}\\
\notag& = \frac{\Gamma(1+n/p)n!}{\left[(n/2)!\right]^{1+1/p}\left[(n/2+1)!\right]^{1/p}n^{n/(2p)}}\\
\label{eq:ratio:proof2}&\approx \frac{\sqrt{2\pi n/p}\left(\frac{n}{pe}\right)^{n/p}\sqrt{2\pi n}\left(\frac{n}{e}\right)^n}{\left(\sqrt{\pi n}\right)^{1+2/p}(n/2+1)^{1/p}\left(\frac{n}{2e}\right)^{n/2+n/p}n^{n/(2p)}}\\
\notag&\approx \left(\frac{2^{1/2+1/p}}{p^{1/p}e^{1/2}}\right)^n\cdot n^{n/2-n/(2p)+1/2-2/p}.
\end{align}
By the mean value theorem, we obtain
$$
(i+1)^{1/p} - i^{1/p}\geq \begin{cases}
\frac{i^{1/p-1}}{p},\quad 0<p\le 1,\\
\frac{(i+1)^{1/p-1}}{p},\quad 1<p\le 2.
\end{cases}
$$
We use \eqref{eq:Stirling} to estimate $\Gamma(1+1/p)$ together with \eqref{eq:ratio:proof1} and \eqref{eq:ratio:proof2} and obtain
\begin{align*}
R_{p,n} & \gtrsim \left(\frac{2^{1/2+1/p}}{\Gamma(1+1/p)p^{1/p}e^{1/2}}\right)^n\cdot \frac{n^{n/2-n/(2p)+1/2-2/p}}{p^{n/2}}
\cdot\left[(n/2)!\right]^{1/p-1}(n/2+1)^{\alpha(1/p-1)}\\
& \approx \left(\frac{2^{1+1/(2p)}}{\Gamma(1+1/p)p^{1/p+1/2}e^{1/(2p)}}\right)^n\cdot n^{-3/(2p)+\alpha(1/p-1)}
\gtrsim \left(\frac{2^{1/2+1/(2p)}}{\pi^{1/2}e^{p/12-1/(2p)}}\right)^n n^{-3/(2p)+\alpha(1/p-1)},
\end{align*}
where $\alpha=0$ for $0<p\le 1$ and $\alpha=1$ for $1<p\le 2.$
The proof is then finished by monotonicity and
$$
\frac{2^{1/2+1/(2p)}}{\pi^{1/2}e^{p/12-1/(2p)}}=\sqrt{\frac{2}{\pi}} \cdot\frac{ (2e)^{1/(2p)}}{e^{p/12}}\ge
\sqrt{\frac{2}{\pi}}\cdot \frac{(2e)^{1/4}}{e^{1/6}}>1.
$$
\end{proof}
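As a quick numerical sanity check (not needed for the proof), one may evaluate the final constant $\sqrt{2/\pi}\,(2e)^{1/(2p)}e^{-p/12}$ on a grid; it stays above $1$ on all of $(0,2]$, with the minimum, roughly $1.03$, attained at $p=2$.
\begin{verbatim}
import math

# Evaluate C(p) = sqrt(2/pi) * (2e)^(1/(2p)) / e^(p/12) on a grid in (0, 2];
# the proof of the theorem needs C(p) > 1 for all such p.
grid = [i / 1000 for i in range(1, 2001)]
worst = min(math.sqrt(2 / math.pi) * (2 * math.e) ** (1 / (2 * p))
            / math.exp(p / 12) for p in grid)
print(worst)   # approximately 1.03, attained near p = 2
\end{verbatim}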
\section{Entropy numbers}
We have already seen the close connection between estimates of volumes of unit balls of
finite-dimensional (quasi-)Banach spaces and the decay of entropy numbers of embeddings of such spaces
in the proof of Theorem \ref{thm:asym:1}.
With the same arguments as there, it is rather straightforward to prove that
\begin{equation}\label{eq:interpol:1}
e_k(id:\ell^n_{p_0,q_0}\to \ell_{p_1,q_1}^n)\approx e_k(id:\ell^n_{p_0}\to \ell_{p_1}^n)
\end{equation}
for $0<p_0,p_1<\infty$ with $p_0\not=p_1.$
On the other hand, it was shown in \cite{EN} that the entropy numbers of diagonal operators between Lorentz sequence spaces
can also exhibit a very complicated behavior. Actually, they served in \cite{EN} as the first counterexample to a commonly conjectured interpolation
inequality for entropy numbers.
We complement \eqref{eq:interpol:1} by considering the limiting case $p_0=p_1.$
As an application of our volume estimates, accompanied by further arguments, we will investigate in this section
the decay of the entropy numbers $e_k(id:\ell^n_{1,\infty}\to \ell^n_{1}).$
Before we come to our main result, we state a result from coding theory \cite{coding:1,coding:2}, which turned out to be useful
also in connection with entropy numbers \cite{EN,KV} and even in optimality of sparse recovery in compressed sensing \cite{BCKV,FPRU,FR}.
\begin{lem}\label{Lemma:coding} Let $k\le n$ be positive integers. Then there are $M$ subsets $T_1,\dots,T_M$ of $\{1,\dots,n\}$,
such that
\begin{enumerate}
\item[(i)] $\displaystyle M\ge \Bigl(\frac{n}{4k}\Bigr)^{k/2}$,
\item[(ii)] $|T_i|=k$ for all $i=1,\dots,M$,
\item[(iii)] $|T_i\cap T_j|<k/2$ for all $i\not=j.$
\end{enumerate}
\end{lem}
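The bound of Lemma \ref{Lemma:coding} is typically proved by counting arguments from coding theory; for small parameters, such a family can also be found by a simple greedy search. The following Python snippet (an illustration, not the proof) does so and compares the resulting size with the guaranteed lower bound $(n/4k)^{k/2}$.
\begin{verbatim}
import random

# Greedy illustration of the lemma for small parameters: collect
# k-subsets of {1,...,n} with pairwise intersections below k/2
# and compare the count with the guaranteed bound (n/(4k))^(k/2).
n, k = 64, 4
random.seed(1)
family = []
for _ in range(20000):
    T = frozenset(random.sample(range(n), k))
    if all(len(T & S) < k / 2 for S in family):
        family.append(T)
print(len(family), (n / (4 * k)) ** (k / 2))  # greedy count vs. bound 16
\end{verbatim}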
To keep the argument simple, we restrict ourselves to $p=1$.
\begin{thm}\label{thm:entropy} Let $k$ and $n$ be positive integers. Then
$$
e_k(id:\ell_{1,\infty}^n\to \ell_1^n)\approx\begin{cases} \log(1+n/k),\quad 1\le k\le n,\\
2^{-\frac{k-1}{n}},\quad k\ge n,
\end{cases}
$$
where the constants of equivalence do not depend on $k$ and $n$.
\end{thm}
\begin{proof}
\emph{Step 1. (lower bound for $k\ge n$):} If $B_{1,\infty}^n$ is covered by $2^{k-1}$ balls in $\ell_1^n$ with radius $\varepsilon>0$, then necessarily
$$
\operatornamewithlimits{vol}(B_{1,\infty}^n)^{1/n}\le 2^{\frac{k-1}{n}}\varepsilon\operatornamewithlimits{vol}(B_1^n)^{1/n},
$$
which (in combination with Theorem \ref{thm:asym:1}) gives the lower bound for $k\ge n$.\\
\emph{Step 2. (upper bound for $k\ge n$):} We use again volume arguments.
Let $\varepsilon>0$ be a parameter to be chosen later on.
Let $\{x_1,\dots,x_N\}\subset B_{1,\infty}^n$
be a maximal $\varepsilon$-distant set in the metric of $\ell_1^n.$ This means that
$$
B_{1,\infty}^n\subset \bigcup_{j=1}^N (x_j+\varepsilon B_1^n)
$$
and $\|x_i-x_j\|_1>\varepsilon$ for $i\not =j.$ Hence, whenever $N\le 2^{k-1}$ for some positive integer $k$, we have
$e_k(id:\ell_{1,\infty}^n\to \ell_1^n)\le \varepsilon.$ To estimate $N$ from above, let us note that $(x_j+\varepsilon B_{1}^n)\subset 2(1+\varepsilon)B_{1,\infty}^n$,
which follows by the quasi-triangle inequality for $\ell_{1,\infty}^n$. On the other hand, the triangle inequality of $\ell_1^n$ implies that
$(x_j+\frac{\varepsilon}{2}B_1^n)$ are mutually disjoint. Hence,
$$
N\Bigl(\frac{\varepsilon}{2}\Bigr)^n\operatornamewithlimits{vol}(B_1^n)\le 2^n(1+\varepsilon)^n\operatornamewithlimits{vol}(B_{1,\infty}^n),
$$
i.e.
\begin{equation}\label{eq:entropy:1}
N\le 4^n\Bigl(1+\frac{1}{\varepsilon}\Bigr)^n\frac{\operatornamewithlimits{vol}(B_{1,\infty}^n)}{\operatornamewithlimits{vol}(B_{1}^n)}\le
\frac{8^n}{\varepsilon^n}\frac{\operatornamewithlimits{vol}(B_{1,\infty}^n)}{\operatornamewithlimits{vol}(B_{1}^n)}
\end{equation}
if $0<\varepsilon<1.$ We now define the parameter $\varepsilon$ by setting the right-hand side of \eqref{eq:entropy:1} equal to $2^{k-1}$.
By Theorem \ref{thm:asym:1}, there exists an integer $\gamma\ge 1$, such that
$\varepsilon<1$ for $k\ge \gamma n$. In this way, we get $N\le 2^{k-1}$
and $\varepsilon=8 [\operatornamewithlimits{vol}(B_{1,\infty}^n)/\operatornamewithlimits{vol}(B_{1}^n)]^{1/n}\cdot 2^{-\frac{k-1}{n}}\le c\,2^{-\frac{k-1}{n}}$. This gives the result
for $k\ge \gamma n.$
\emph{Step 3. (upper bound for $k\le n$):} Let $1\le l\le n/2$ be a positive integer, which we will choose later on.
To every $x\in B_{1,\infty}^n$, we associate $S\subset\{1,\dots,n\}$ to be the indices of its $l$ largest entries (in absolute value).
Furthermore, $x_S\in\mathbb{R}^n$ denotes the restriction of $x$ to $S$.
We know that
$$
\|x-x_S\|_1\le \sum_{k=l+1}^n \frac{1}{k}\le \int_{l}^{n}\frac{dx}{x}=\log(n)-\log(l)=\log(n/l).
$$
By Step 2, there is an absolute constant $c>0$ (independent of $l$), such that
$$
e_{\gamma l}(id:\ell_{1,\infty}^l\to \ell_1^l)< c,
$$
where $\gamma\ge 1$ is the integer constant from Step 2.
Hence, there is a point set ${\mathcal N}\subset \mathbb{R}^l$, with $|{\mathcal N}|=2^{\gamma l}$,
which is a $c$-net of $B_{1,\infty}^l$ in the $\ell_1^l$-norm. For any set $S$ as above, we embed ${\mathcal N}$ into $\mathbb{R}^n$ by extending the points from
${\mathcal N}$ by zero outside of $S$ and obtain a point set ${\mathcal N}_S$, which is a $c$-net of
$\{x\in B_{1,\infty}^n:{\rm supp\ }(x)\subset S\}$. Taking the union of all these nets over all sets $S\subset \{1,\dots,n\}$ with $|S|=l$,
we get $2^{\gamma l}{n\choose l}$ points,
which can approximate any $x\in B_{1,\infty}^n$ within $c+\log(n/l)$
in the $\ell_1^n$-norm.
We use the elementary estimate ${n\choose l}\le (en/l)^l$ and assume (without loss of generality) that $\gamma\ge 2$.
Then we may conclude that whenever $2^{k-1}\ge 2^{\gamma l}{n\choose l}$, we have $e_k(id:\ell_{1,\infty}^n\to\ell_{1}^n)\le c+\log(n/l)$, i.e.
$$
k-1\ge \gamma l(1+\log(en/l))\quad\implies\quad e_k(id:\ell_{1,\infty}^n\to\ell_{1}^n)\lesssim (1+\log(n/l)).
$$
By a standard technical argument, $l$ can be chosen up to the order of $k/\log(n/k)$, which gives the result
for $n$ large enough and $k$ between $(\gamma+1)\log(en)$ and $n$. Using monotonicity of entropy numbers, the upper bound from Step 2
and the elementary bound $e_k(id:\ell_{1,\infty}^n\to\ell_{1}^n)\le \|id:\ell_{1,\infty}^n\to\ell_{1}^n\|\le 1+\log(n)$ concludes the proof of the upper bounds.
\emph{Step 4. (lower bound for $k\le n$):} Let $n$ be a sufficiently large positive integer and let $\nu\ge 1$ be the largest integer with $12\cdot 4^\nu\le n$.
Let $1\le\mu\le \nu$ be a positive integer.
We apply Lemma \ref{Lemma:coding} with $k$ replaced by $4^l$ for every integer $l$ with $\mu\le l \le \nu$.
In this way, we obtain a system of subsets $T^l_1,\dots,T^l_{M_l}$ of $\{1,\dots,n\}$, such that
$|T^l_i|=4^l$ for every $1\le i \le M_l$, $|T^l_i\cap T^l_j|<4^l/2$ for $i\not=j$
and
$$
\displaystyle M_l\ge \Bigl(\frac{n}{4^{l+1}}\Bigr)^{4^l/2}\ge M:=\Bigl(\frac{n}{4^{\mu+1}}\Bigr)^{4^\mu/2}.
$$
For $j\in\{1,\dots,M\}$, we put
\begin{align*}
\widetilde T^\mu_j&=T^\mu_j,\\
\widetilde T^{\mu+1}_j&=T^{\mu+1}_j\setminus T^{\mu}_j,\\
&\vdots\\
\widetilde T^{\nu}_j&=T^{\nu}_j\setminus (T^{\nu-1}_j\cup\dots\cup T^{\mu}_j).
\end{align*}
Observe that, by this construction, the sets $\{\widetilde T^{l}_j:\mu\le l\le \nu\}$ are mutually disjoint and $|\widetilde T^l_j|\le |T^l_j|=4^l$.
Furthermore, $|\widetilde T^\mu_j|=4^\mu$ and
\begin{align}
\notag |\widetilde T^l_j|&\ge |T^{l}_j|- [|T^{l-1}_j|+\dots+|T^{\mu}_j|]=4^l-[4^{l-1}+\dots+4^\mu]\\
\label{eq:entro:11}&\ge 4^l\Bigl(1-\sum_{s=1}^\infty\frac{1}{4^s}\Bigr)=\frac{2}{3}\cdot 4^l
\end{align}
for $\mu<l\le\nu.$
We associate to the sets $\{\widetilde T_j^l:\mu\le l\le \nu, 1\le j\le M\}$ a system of vectors $x^1,\dots,x^M\in{\mathbb R}^n$.
First, we observe that if $u\in\{1,\dots,n\}$ belongs to $\widetilde T^l_j$ for some $l\in\{\mu,\mu+1,\dots,\nu\}$, then this $l$ is unique and we put $(x^j)_u=\frac{1}{4^l}.$
Otherwise, we set $(x^j)_u=0.$ We may also express this construction by
$$
x^j=\sum_{l=\mu}^{\nu}\frac{1}{4^l}\chi_{\widetilde T_j^l},
$$
where $\chi_A$ is the indicator function of a set $A$.
Now we observe that
\begin{align*}
\|x^j\|_{1,\infty}&\le \max\Bigl\{4^\mu\cdot\frac{1}{4^\mu}, \frac{4^{\mu}+4^{\mu+1}}{4^{\mu+1}},
\dots,\frac{4^{\mu}+4^{\mu+1}+\dots+4^\nu}{4^{\nu}}\Bigr\}\\
&\le 1+\frac{1}{4}+\frac{1}{4^2}+\dots=\frac{4}{3}.
\end{align*}
Furthermore, let $i\not=j$ and let $u\in \widetilde T^l_i$ with $u\not\in\widetilde T^l_j$. Then
\begin{equation}\label{eq:entro:12}
|(x^i)_u-(x^j)_u|\ge \frac{1}{4^l}-\frac{1}{4^{l+1}}=\frac{3}{4}\cdot\frac{1}{4^l}.
\end{equation}
To estimate the $\ell_1$-distances among the points $\{x^1,\dots,x^M\}$, we combine \eqref{eq:entro:12}, \eqref{eq:entro:11},
and obtain for $i\not= j$
\begin{align*}
\|x^i-x^j\|_1&\ge\sum_{l=\mu}^\nu \sum_{u\in\widetilde T_i^l\setminus \widetilde T^l_j}|(x^i)_u-(x^j)_u|
\ge \sum_{l=\mu}^\nu |\widetilde T^l_i\setminus \widetilde T^l_j|\cdot \frac{3}{4}\cdot\frac{1}{4^l}\\
&=\frac{3}{4}\Bigl\{\sum_{l=\mu}^\nu |\widetilde T^l_i|\cdot \frac{1}{4^l}-\sum_{l=\mu}^\nu |\widetilde T^l_i \cap \widetilde T^l_j|\cdot \frac{1}{4^l}\Bigr\}\\
&\ge \frac{3}{4}\Bigl\{1+\sum_{l=\mu+1}^\nu \frac{2}{3}\cdot 4^l\cdot\frac{1}{4^l}
-\sum_{l=\mu}^\nu |T^l_i \cap T^l_j|\cdot \frac{1}{4^l}\Bigr\}\\
&\ge \frac{3}{4}\Bigl\{1+\frac{2}{3}(\nu-\mu)
-\sum_{l=\mu}^\nu \frac{4^l}{2}\cdot \frac{1}{4^l}\Bigr\}\\
&= \frac{3}{4}\Bigl\{1+\frac{2}{3}(\nu-\mu)-\frac{1}{2}(\nu-\mu+1)\Bigr\}\ge \frac{1}{8}(\nu-\mu+1).
\end{align*}
We conclude that the points $\{x^j:j=1,\dots,M\}$ satisfy
$$
\|x^j\|_{1,\infty}\le \frac{4}{3}\quad \text{and}\quad \|x^i-x^j\|_1\ge \frac{1}{8}(\nu-\mu+1)\ \text{for}\ i\not=j.
$$
It follows that if a positive integer $k$ satisfies
\begin{equation}\label{eq:entropy:2}
M=\Bigl(\frac{n}{4^{\mu+1}}\Bigr)^{4^\mu/2}\ge 2^{k-1},
\end{equation}
then
$$
e_k(id:\ell_{1,\infty}^n\to\ell_1^n)\ge c(\nu-\mu+1),
$$
where the absolute constant $c$ can be taken $c=\frac{3}{64}.$
Let now $n\ge 200$ and $1\le k\le n/200$ be positive integers. Then we choose $\nu\ge 1$ to be the largest integer with $12\cdot 4^\nu\le n$
and let $\mu\ge 1$ be the smallest integer with $k\le 4^\mu/2.$
Due to $\frac{n}{4^{\mu+1}}\ge 2$, this choice ensures \eqref{eq:entropy:2} and
$$
\nu-\mu+1\ge \log_4\Bigl(\frac{n}{48}\Bigr)-\log_4(2k)\gtrsim \log(1+n/k).
$$
The remaining pairs of $k$ and $n$ are covered by the monotonicity of entropy numbers, at the cost of the constants of equivalence.
\end{proof}
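The weak-norm bound $\|x^j\|_{1,\infty}\le 4/3$ from Step 4 is easy to reproduce numerically; the following snippet (an added illustration) treats the extreme case in which every block $\widetilde T^l_j$ has the full size $4^l$.
\begin{verbatim}
# A vector taking the value 4^(-l) on disjoint blocks of size 4^l,
# l = mu..nu, has ell_{1,infty}-norm at most 4/3 (Step 4 of the proof).
mu, nu = 1, 8
x = []
for l in range(mu, nu + 1):
    x.extend([4.0 ** (-l)] * 4 ** l)   # block of size 4^l, value 4^(-l)
xs = sorted(x, reverse=True)           # non-increasing rearrangement
weak_norm = max((k + 1) * xs[k] for k in range(len(xs)))
print(weak_norm)                       # about 1.333 <= 4/3
\end{verbatim}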
{\bf Acknowledgement:} We would like to thank Franck Barthe for proposing the problem to us
and Leonardo Colzani and Henning Kempka for valuable discussions.
\end{document}
\begin{document}
\title[Coarse Types Of Tropical Matroid Polytopes]{Coarse Types Of Tropical Matroid Polytopes}
\author[Kulas]{Katja Kulas}
\address{Fachbereich Mathematik, TU Darmstadt, 64293 Darmstadt, Germany}
\email{[email protected]}
\begin{abstract}
Describing the combinatorial structure of the tropical complex
$\mathcal{C}$ of a tropical matroid polytope, we obtain a formula for the coarse
types of the maximal cells of $\mathcal{C}$. Due to the connection between
tropical complexes and resolutions of monomial ideals, this yields the generators for
the corresponding coarse type ideal introduced in \cite{DJS09}. Furthermore, a
complete description of the minimal tropical halfspaces of the uniform tropical
matroid polytopes, i.e. the tropical hypersimplices, is given.
\end{abstract}
\maketitle
\section{Introduction}
Tropical matroid polytopes have been introduced in~\cite{DSS2005} as the
tropical convex hull of the cocircuits, or dually, of the bases of a matroid.
The arrangement of finitely many points $V$ in the
tropical torus ${\mathbb{T}}^d$ induces a natural decomposition $\mathcal{C}_V$ of ${\mathbb{T}}^d$ into (ordinary)
polytopes, the tropical complex, whose cells are equipped with a (fine) type $T$ encoding their relative position with respect to the generating
points. The \emph{coarse types} only count the cardinalities of the entries of $T$. In~\cite{DS04}, Develin and Sturmfels showed that
the bounded cells of $\mathcal{C}_V$ yield the tropical convex hull
of $V$, which is dual to the regular subdivision $\Sigma$ of a product of two
simplices (or equivalently---due to the Cayley Trick---to the regular mixed subdivisions of a dilated
simplex).
The authors of ~\cite{BlockYu} and~\cite{DJS09} use the
connection of the cellular structure of $\mathcal{C}_V$ or rather of $\Sigma$ to
minimal cellular resolutions of certain monomial ideals to provide an
algorithm for determining the facial structure of the bounded subcomplex
of $\mathcal{C}_V$. A main result of~\cite{DJS09} says that the labeled complex
$\mathcal{C}_V$ supports a minimal cellular
resolution of the ideal $I$ generated by monomials
corresponding to the set of all (coarse) types.
The main theme of this paper is the study of the tropical complex of tropical
convex polytopes associated with matroids arising from graphs---the \emph{tropical
matroid polytopes}.
Recall that a {\it matroid} $M$ is a finite collection $\mathcal{F}$ of subsets
of $[n]={1,2,\ldots,n}$, called {\it independent sets}, such that three
properties are satisfied: (i) $\emptyset\in\mathcal{F}$, (ii) if
$X\in\mathcal{F}$ and $Y\subseteq X$ then $Y\in \mathcal{F}$, (iii) if
$U,V\in\mathcal{F}$ and $\lvert U \rvert=\lvert V \rvert+1$ there exists
$x\in U\setminus V$ such that $V\cup x\in\mathcal{F}$. The last one is also
called the \emph{exchange property}. The maximal independent sets are the
\emph{bases} of $M$. A matroid can also be defined by specifying its \emph{non-bases},
i.e. the subsets of $E$ with cardinality $k$ that are not bases. For more
details on matroids see the survey of Oxley
~\cite{Oxley2003} and the books of White(\cite{White1986},~\cite{White1987},~\cite{White1992}).
An important class of matroids are the graphic (or cycle) matroids, which are known to be
regular, that is, representable over every field.
A \emph{graphic matroid} is associated with a simple undirected graph
$G$ by letting $E$ be the set of edges of $G$ and taking as the bases the edge sets of
the spanning forests.
Matroid polytopes were first studied in connection with optimization and
linear programming, introduced by Jack Edmonds~\cite{Edmonds03}.
A nice polytopal characterization for a matroid polytope was given by Gelfand
et~al.~\cite{GGMS1987} stating that each of its edges is a parallel translate of
$e_i-e_j$ for some $i$ and $j$.
In the case of tropical matroid polytopes the coarse
types display the number $b_{I,J}$ of bases $B$ of the associated matroid with subsets $I,J$,
where all elements of $I$ but none of $J$ are contained in $B$.
\filbreak
\begin{theorem}
Let $\mathcal{C}$ be the tropical complex of a tropical matroid polytope with
$d+1$ elements and rank $k$. The set of all coarse types of the maximal cells arising in $\mathcal{C}$ is
given by the tuples $(t_1,\ldots,t_{d+1})$ with
\begin{equation*} t_j \ = \
\begin{cases}
b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}} & \text{if $j=i_1$} \, ,\\
b_{\{i_l\},\{i_1,\ldots,i_{l-1}\}} & \text{if $j=i_l\in \{i_2,\ldots i_{d'+1}\}$} \, ,\\
0 & \text{otherwise} \, .
\end{cases}
\end{equation*}
where $d'\in [d-k+1]$ and $\{i_1,i_2,\ldots,i_{d'+1}\}$ is a sequence of elements such that
$[d+1]\setminus\{i_1,i_2,\ldots,i_{d'}\}$ contains a basis of the associated
matroid.
\end{theorem}
Subsequently, we relate our combinatorial result to commutative algebra. For the
coarse type $\mathbf{t}(p)$ of $p$ and
$x^{\mathbf{t}(p)}={x_1}^{{\mathbf{t}(p)}_1}{x_2}^{{\mathbf{t}(p)}_2}\cdots{x_{d+1}}^{{\mathbf{t}(p)}_{d+1}}$
the monomial ideal \[I=\langle x^{\mathbf{t}(p)}\colon
p\in{\mathbb{T}}^d\rangle\subset{\mathbb{K}}[x_1,\ldots,x_{d+1}]\]
is called the \emph{coarse type ideal}.
In~\cite{DJS09}, Corollary 3.5, it was shown that $I$ is generated by the
monomials, which are assigned to the coarse types of the inclusion-maximal cells of
the tropical complex. As a direct consequence of Theorem 3.6 in~\cite{DJS09}, we obtain the generators of $I$.
\begin{corollary}The coarse type ideal $I$ for the tropical complex of a
tropical matroid polytope with $d+1$ elements and rank $k$
is equal to \[\langle x_{i_1}^{t_{i_1}}x_{i_2}^{t_{i_2}}\cdots x_{i_{d'+1}}^{t_{i_{d'+1}}}\colon
[d+1]\setminus\{i_1,\ldots,i_{d'}\} \text{ contains a basis }\rangle\] where $(t_{i_1},t_{i_2},\ldots,t_{i_{d'+1}})=\big(b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}},b_{\{i_2\},\{i_1\}},\ldots,b_{\{i_{d'+1}\},\{i_1,\ldots,i_{d'}\}}\big)$.
\end{corollary}
Furthermore, we apply these results to the special case of uniform
matroids, introduced and studied in~\cite{Joswig05}. We close this work by
stating the minimal tropical halfspaces containing a uniform tropical matroid polytope
by using the characterization of Proposition 1 in~\cite{GaubertKatz09}.
| 2,111 | 21,062 |
en
|
train
|
0.121.1
|
\section{Basics of tropical convexity}
We start with collecting basic facts about tropical convexity
and fixing the notation. Defining \emph{tropical addition} by $x\oplus y:=\min(x,y)$ and
\emph{tropical multiplication} by $x\odot y:=x+y$ yields the
\emph{tropical semi-ring} $({\mathbb{R}},\oplus,\odot)$. Component-wise
tropical addition and \emph{tropical scalar multiplication}
\begin{equation*}
\lambda \odot (\xi_0,\dots,\xi_d) := (\lambda \odot
\xi_0,\dots,\lambda \odot \xi_d) = (\lambda+\xi_0,\dots,\lambda+\xi_d)
\end{equation*}
equips ${\mathbb{R}}^{d+1}$ with a semi-module structure. For
$x,y\in{\mathbb{R}}^{d+1}$ the set
\begin{equation*} [x,y]_\mathrm{trop} := \SetOf{(\lambda \odot x) \oplus (\mu
\odot y)}{\lambda,\mu \in {\mathbb{R}}}
\end{equation*}
defines the \emph{tropical line segment} between $x$ and $y$. A subset of
${\mathbb{R}}^{d+1}$ is \emph{tropically convex} if it contains the tropical
line segment between any two of its points. A direct computation
shows that if $S\subset{\mathbb{R}}^{d+1}$ is tropically convex then $S$ is
closed under tropical scalar multiplication. This leads to the
definition of the \emph{tropical torus} as the quotient
semi-module
\begin{equation*}
{\mathbb{T}}^d := {\mathbb{R}}^{d+1} / ({\mathbb{R}}\odot(1,\dots,1)) .
\end{equation*}
Note that ${\mathbb{T}}^d$ was called ``tropical projective space'' in
\cite{DS04}, \cite{Joswig05}, \cite{DevelinYu06}, and
\cite{JoswigSturmfelsYu07}. Tropical convexity gives rise to the hull
operator $\operatorname{tconv}$. A \emph{tropical polytope} is the tropical convex
hull of finitely many points in ${\mathbb{T}}^d$.
Like an ordinary polytope each tropical polytope $P$ has a unique set
of generators which is minimal with respect to inclusion; these are
the \emph{tropical vertices} of $P$.
There are several natural ways to choose a representative coordinate
vector for a point in ${\mathbb{T}}^d$. For instance, in the coset
$x+({\mathbb{R}}\odot(1,\dots,1))$ there is a unique vector $c(x)\in{\mathbb{R}}^{d+1}$
with non-negative coordinates such that at least one of them is zero;
we refer to $c(x)$ as the \emph{canonical coordinates} of $x\in{\mathbb{T}}^d$.
Moreover, in the same coset there is also a unique vector
$(\xi_1,\dots,\xi_{d+1})$ such that $\xi_1=0$. Hence, the map
\begin{equation*}\label{eq:c_0}
c_0 : {\mathbb{T}}^d \to {\mathbb{R}}^d , (\xi_1,\dots,\xi_{d+1}) \mapsto (\xi_2-\xi_1,\dots,\xi_{d+1}-\xi_1)
\end{equation*}
is a bijection. Often we will identify ${\mathbb{T}}^d$ with ${\mathbb{R}}^d$ via this
map.
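The following Python sketch (a toy illustration in the min-plus convention used here) implements these operations and samples points on a tropical line segment, normalized via the map $c_0$.
\begin{verbatim}
# Min-plus operations and points on the tropical segment [x, y]_trop,
# normalized so that the first coordinate is zero (the map c_0).
def t_add(x, y):              # tropical sum: componentwise minimum
    return [min(a, b) for a, b in zip(x, y)]

def t_scale(lam, x):          # tropical scalar multiple: classical shift
    return [lam + a for a in x]

def c0(x):                    # representative with first coordinate 0
    return [a - x[0] for a in x]

x, y = [0, 3, 1], [0, 1, 4]
for lam in range(-5, 6):      # sample (lam . x) + (0 . y) along the segment
    print(c0(t_add(t_scale(lam, x), y)))
\end{verbatim}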
The \emph{tropical hyperplane} ${\mathcal{H}}_a$ defined by the \emph{tropical
linear form} $a=(\alpha_1,\dots,\alpha_{d+1})\in{\mathbb{R}}^{d+1}$ is the set of
points $(\xi_1,\dots,\xi_{d+1})\in{\mathbb{T}}^d$ such that the minimum
\begin{equation*}
(\alpha_1 \odot \xi_1) \oplus \dots \oplus (\alpha_{d+1} \odot \xi_{d+1})
\end{equation*}
is attained at least twice. For $d=3$ the tropical hyperplane is shown in
Figure~\ref{fig:3hypersimplicesb}. The complement of a tropical hyperplane
in ${\mathbb{T}}^d$ has exactly $d+1$ connected components, each of which is an
\emph{open sector}. A \emph{closed sector} is the topological closure
of an open sector. The set
\begin{equation*}
S_k := \SetOfbig{(\xi_1,\dots,\xi_{d+1})\in{\mathbb{T}}^d}{\xi_k=0 \text{ and }
\xi_i>0 \text{ for } i\ne k} ,
\end{equation*}
for $1\le k \le d+1$, is the \emph{$k$-th open sector} of the tropical
hyperplane ${\mathcal{Z}}$ in ${\mathbb{T}}^d$ defined by the zero tropical linear form.
Its closure is
\begin{equation*}
\bar S_k := \SetOfbig{(\xi_1,\dots,\xi_{d+1})\in{\mathbb{T}}^d}{\xi_k=0 \text{ and
} \xi_i\ge 0 \text{ for } i\ne k} .
\end{equation*}
We also use the notation $\bar S_I:=\bigcup\SetOf{\bar S_i}{i\in I}$
for any set $I\subset[d+1]:=\{1,\dots,d+1\}$.
If $a=(\alpha_1,\dots,\alpha_{d+1})$ is an arbitrary tropical linear form
then the translates $-a+S_k$ for $1\le k\le d+1$ are the open sectors of
the tropical hyperplane ${\mathcal{H}}_a$. The point $-a$ is the unique point
contained in all closed sectors of ${\mathcal{H}}_a$, and it is called the
\emph{apex} of ${\mathcal{H}}_a$. For each $I\subset[d+1]$ with $1\le
\# I \le d$ the set $-a+\bar S_I$ is the \emph{closed tropical
halfspace} of ${\mathcal{H}}_a$ of type $I$. A tropical halfspace $H(-a,I)$ can
also be written in the form
\begin{eqnarray*}
H(-a,I)&=&\{x\in{\mathbb{T}}^d\mid\text{ the minimum of }\displaystyle\bigoplus_{i=1}^{d+1} \alpha_i\odot
\xi_i\text{ is attained}\\
&&\,\,\text{ at a coordinate }i\in I\}\\&=&\{x\in{\mathbb{T}}^d\mid \displaystyle\bigoplus_{i\in
I} (\alpha_i\odot \xi_i)\leq\bigoplus_{j\in
J} (\alpha_j\odot \xi_j)\}\end{eqnarray*}where $I$ and $J$ are disjoint subsets of
$[d+1]$ and $I\cup J= [d+1]$. The tropical polytopes in
${\mathbb{T}}^d$ are exactly the bounded intersections of finitely many closed
tropical halfspaces; see \cite{GaubertKatz09} and \cite{Joswig05}.
We concentrate on the combinatorial structure of tropical polytopes. Let
$V:=(v_1,\dots,v_n)$ be a sequence of points in ${\mathbb{T}}^d$.
The \emph{(fine) type} of $x\in{\mathbb{T}}^d$ with respect to $V$ is the ordered
$(d+1)$-tuple $\operatorname{type}_V(x):=(T_1,\dots,T_{d+1})$ where
\begin{equation*}
T_k := \SetOf{i\in\{1,\dots,n\}}{v_i\in x+\bar S_k} .
\end{equation*}
For a given type ${\mathcal{T}}$ with respect to $V$ the set
\begin{equation*}
X^{\circ}_V({\mathcal{T}}) := \SetOfbig{x\in{\mathbb{T}}^d}{\operatorname{type}_V(x)={\mathcal{T}}}
\end{equation*}
is a relatively open subset of ${\mathbb{T}}^d$ and is called the \emph{cell} of
type ${\mathcal{T}}$ with respect to $V$. The set $X^{\circ}_V({\mathcal{T}})$ as well as its
topological closure are tropically and ordinary convex; in \cite{JK08}, these
were called \emph{polytropes}. With respect
to inclusion the types with respect to $V$ form a partially ordered
set. The intersection of two cells $X_V({\mathcal{S}})$ and $X_V({\mathcal{T}})$ with
type ${\mathcal{S}}$ and ${\mathcal{T}}$ is equal to the polyhedron $X_V({\mathcal{S}}\cup{\mathcal{T}})$.
The collection of all (closed) cells induces a polyhedral subdivision
$\mathcal{C}_V$ of ${\mathbb{T}}^d$. A $\min$-tropical polytope $P=\operatorname{tconv}(V)$ is the
union of cells in the bounded subcomplex $\mathcal{B}_V$ of $\mathcal{C}_V$
induced by the arrangement $\mathcal{A}_V$ of $\max$-tropical hyperplanes with apices $v\in
V$. A cell of $\mathcal{C}_V$ is unbounded if and only if one of its type components is
the empty set. The type of $x$ equals the union of the types of the (maximal)
cells that contain $x$ in their closure. The dimension of a cell $X_T$ can be
calculated as the
number of the connected components of the undirected graph
$G=\big(\{1,2,\ldots,d+1\},\,\{(j,k)\mid T_j\cap T_k\neq\emptyset\}\big)$
minus one. The zero-dimensional cells
are called pseudovertices of $P$.
Replacing the (fine) type entries $T_k\subseteq [n]$ for $k\in [d+1]$ of a point $p\in{\mathbb{T}}^d$ by
their cardinalities $t_k:=\left|T_k\right|$ we get the \emph{coarse type}
$t_V(p)=(t_1,\ldots,t_{d+1})\in{\mathbb{N}}^{d+1}$ of $p$.
A coarse type entry $t_k$ displays how many generating points lie in the $k$-th closed sector
$p+\overline{S_k}$.
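In the min-plus convention, $v_i\in x+\bar S_k$ holds exactly when the minimum of $v_{i,j}-x_j$ over $j$ is attained at $j=k$; fine and coarse types are therefore straightforward to compute, as the following Python sketch (an illustration on toy data) shows.
\begin{verbatim}
# Fine type: i belongs to T_k iff v_i lies in the closed sector
# x + S_k-bar, i.e. iff v_{i,k} - x_k = min_j (v_{i,j} - x_j).
def fine_type(x, V):
    T = [set() for _ in x]
    for i, v in enumerate(V, start=1):
        diffs = [vj - xj for vj, xj in zip(v, x)]
        m = min(diffs)
        for k, dk in enumerate(diffs):
            if dk == m:
                T[k].add(i)
    return T

def coarse_type(x, V):
    return [len(Tk) for Tk in fine_type(x, V)]

V = [[0, 0, 1], [0, 1, 0], [0, 1, 1]]   # three toy generators in T^2
print(fine_type([0, 0, 0], V))          # [{1, 2, 3}, {1}, {2}]
print(coarse_type([0, 0, 0], V))        # [3, 1, 1]
\end{verbatim}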
In~\cite{DJS09}, the authors associate the tropical complex of a tropical
polytope with a monomial ideal, the coarse type ideal \[I:=\langle
{x_1}^{t_1}{x_2}^{t_2}\cdots{x_{d+1}}^{t_{d+1}}\colon
p\in{\mathbb{T}}^d\rangle\subset{\mathbb{K}}[x_1,\ldots,x_{d+1}].\] By Corollary 3.5 of~\cite{DJS09}, $I$ is generated by the
monomials assigned to the coarse types of the inclusion-maximal cells of
the tropical complex. The tropical complex
$\mathcal{C}_V$ gives rise to minimal cellular resolutions of $I$.
\begin{theorem}[\cite{DJS09}, Theorem 3.6]\label{thm:DJS2009} The labeled complex
$\mathcal{C}_V$ supports a
minimal cellular resolution of the ideal $I$ generated by monomials
corresponding to the set of all (coarse) types.
\end{theorem}
Cellular resolutions of monomial ideals, introduced in~\cite{BPS1998} and~\cite{BS1998}, are a natural
technique to construct resolutions of monomial ideals using labeled cellular
complexes, and they provide an important interface
between topological constructions, combinatorics and algebraic ideas. The authors
of~\cite{BlockYu} and \cite{DJS09} use this to give an algorithm for determining the facial structure of a
tropical complex. More precisely, they associate a squarefree monomial ideal $I$ with a tropical
polytope and calculate a minimal
cellular resolution of $I$, where the $i$-th syzygies of $I$ are encoded by the
$i$-dimensional faces of a polyhedral complex.
A tropical halfspace is called \emph{minimal} for a tropical polytope $P$ if
it is minimal with respect to inclusion among all tropical halfspaces containing
$P$. Consider a tropical halfspace $H(a,I)\subset {\mathbb{T}}^d$ with $I\subset[d+1]$ and apex
$a\in {\mathbb{T}}^d$, and a tropical polytope $P=\operatorname{tconv}\{v_1,\ldots,v_n\}\subseteq {\mathbb{T}}^d$.
To show that $H(a,I)$ is minimal for $P$, it suffices to prove, by Proposition 1
of~\cite{GaubertKatz09}, that the following three
criteria hold for the type $(T_1,T_2,\ldots,T_{d+1})=\operatorname{type}_V(a)$ of the apex $a$:
\begin{itemize}\item[(i)] $\displaystyle\bigcup_{i\in I}T_i=[n]$, \item[(ii)] for each
$j\in I^C$ there exists an $i\in I$ such that $T_i\cap
T_j\neq\emptyset$, \item[(iii)] for each $i\in I$ there exists $j\in I^C$ such that
$\displaystyle T_i\cap T_j\not\subset\bigcup_{k\in I\setminus\{i\}}T_k$.\end{itemize}
Here, we denote the complement of a set $I\subseteq [d+1]$ as
$I^C=[d+1]\setminus I$.
\noindent
Obvious minimal tropical halfspaces of a tropical polytope
$P=\operatorname{tconv}(V)\subseteq {\mathbb{T}}^{d}$ are its cornered halfspaces, see~\cite{Joswig08}.
The {\it $k$-th corner} of $P$ is defined as
\[c_k(V):= (-v_{1,k}) \odot v_1\oplus(-v_{2,k})\odot v_2
\oplus\cdots\oplus(-v_{n,k})\odot v_n.\]
The tropical halfspace $H_k:=c_k(V)+\overline{S_k}$ is called the
{\it $k$-th cornered tropical halfspace} of $P$ and the intersection
of all $d+1$ cornered halfspaces is the {\it cornered hull} of $P$.
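In ordinary arithmetic the $j$-th coordinate of the $k$-th corner is $\min_{v\in V}(v_j-v_k)$; the following Python sketch (an illustration on toy data) computes all corners at once.
\begin{verbatim}
# k-th corner of tconv(V): c_k(V)_j = min over v in V of (v_j - v_k),
# the min-plus formula above written in ordinary arithmetic.
def corner(V, k):
    return [min(v[j] - v[k] for v in V) for j in range(len(V[0]))]

V = [[0, 0, 1], [0, 1, 0], [0, 1, 1]]    # toy generators in T^2
print([corner(V, k) for k in range(3)])  # apices of the cornered halfspaces
\end{verbatim}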
\section{Tropical Matroid Polytopes}
The tropical matroid polytope of a matroid $\mathcal{M}$
is defined in~\cite{DSS2005} as the tropical convex hull of the negative incidence vectors
of the bases of $\mathcal{M}$. In this paper, we restrict ourselves to matroids arising
from graphs.
The \emph{graphic matroid} of a simple undirected graph $G=(V,E)$ is
$\mathcal{M}(G)=(E,\mathcal{I}=\{F\subseteq E\colon F \text{ is acyclic}\})$.
While the forests of $G$ form the system of independent sets of
$\mathcal{M}(G)$ its bases are the spanning forests. We will assume that $G$ is
connected, so the bases of $\mathcal{M}(G)$ are the spanning trees of
$G$. Furthermore, we exclude bridges, i.e. edges whose deletion increases the
number of connected components of $G$, leading to elements that are contained
in every basis.
Let $d+1$ be the number of elements and $n$ be the number of bases of
$\mathcal{M}:=\mathcal{M}(G)$ and $\mathcal{B}:=\{B_1,\ldots,B_n\}$ its bases.
It follows from the exchange property of matroids that all bases of
$\mathcal{M}$ have the same number of elements, which is called the
\emph{rank} of $\mathcal{M}$. Consider the $0/1$-matrix
$M\in{\mathbb{R}}^{(d+1)\times n}$ with rows indexed by the elements of the
ground set $E$ and columns indexed by the bases of $\mathcal{M}$,
which has a $0$ in entry $(i,j)$ if the $i$-th element is in the $j$-th
basis and a $1$ otherwise. The \emph{tropical matroid polytope} $P$ of $\mathcal{M}$ is the
tropical convex hull of the columns of $M$. Let
\begin{equation}\label{eq:generators}V=\left\{-e_B:=\sum_{i\in
B}-e_i\big|B\in \mathcal{B}\right\}\end{equation} be the set of generators of
$P$. It turns out that these are just the tropical vertices of $P$, see Lemma~\ref{lem:pseudovertices}.
If the underlying matroid has rank $k$, then the canonical coordinate
vectors of $V$ have exactly $k$ zeros and $d+1-k$ ones and
will be denoted as $v_{B_i}$ or for short $v_i$ if the corresponding basis is
$B_i\in\mathcal{B}$. Note that with $\oplus$ as $\max$ instead of $\min$ the generators
of a tropical matroid polytope are the positive incidence vectors of
the bases of the corresponding matroid. Throughout this paper we write $\mathcal{P}_{k,d}$ for the set of all tropical matroid polytopes arising from a
graphic matroid with $d+1$ elements and rank $k$.
\begin{example}
The tropical hypersimplex
$\Delta_k^d$ in ${\mathbb{T}}^d$ studied in \cite{Joswig05} is a
tropical matroid polytope of a uniform matroid of rank $k$ with
$d+1$ elements and $\binom{d+1}{k}$ bases. It is defined as the
tropical convex hull of all points $-e_I:=\displaystyle\sum_{i\in I}-e_i$
where $e_i$ is the $i$-th unit vector of ${\mathbb{R}}^{d+1}$ and $I$ is a
$k$-element subset of $[d+1]$. The tropical vertices of $\Delta_k^d$ are
\[\V(\Delta_k^d)=\left\{-e_I\big|I\in\binom{[d+1]}{k}\right\}\, \text{ for all }k>0.\]
In~\cite{Joswig05}, it is shown that
$\Delta_{k+1}^d\subsetneq\Delta_k^d$ implying that the first tropical
hypersimplex contains all other tropical hypersimplices in ${\mathbb{T}}^d$.
The first tropical hypersimplex $\Delta^d=\Delta_1^d$ in ${\mathbb{T}}^d$ is the
$d$-dimensional tropical standard simplex which is also a polytrope.
Clearly, we have for a tropical matroid polytope $P\in\mathcal{P}_{k,d}$ the
chain $P\subseteq\Delta_k^d\subsetneq\cdots\subsetneq\Delta_1^d=\Delta^d$.
For $d=3$ the three tropical hypersimplices are shown in Figure~\ref{fig:3hypersimplices}.
\begin{figure}
\caption{The three tropical hypersimplices in ${\mathbb{T}}^3$ (image not recoverable).}
\label{fig:3hypersimplices}
\label{fig:3hypersimplicesa}
\label{fig:3hypersimplicesb}
\label{fig:3hypersimplicesc}
\end{figure}
\end{example}
The origin $\mathbf{0}\in{\mathbb{T}}^{d}$ and its fine type are crucial for the
calculation of the fine and the
coarse types of the maximal cells in the cell complex of $P$.
\begin{lemma}
A tropical matroid polytope $P\in\mathcal{P}_{k,d}$ with generators $V$ contains the origin
$\mathbf{0}\in{\mathbb{T}}^{d}$. Its type is
$\operatorname{type}_{V}(\mathbf{0})=(T^{(0)}_1,T^{(0)}_2,\ldots,T^{(0)}_{d+1})$ with $T^{(0)}_i=\{j\mid i\in B_j\}$.
\end{lemma}
\begin{proof}
By Proposition 3 of \cite{DS04} about the shape of a tropical
line segment, the only
pseudovertex of the tropical line segment between two distinct $0$-$1$-vectors
$u$ and $v$ in ${\mathbb{T}}^{d}$ is the point $w$ with $w_{l}=0$ if $u_l=0$ or $v_l=0$ and
$w_{l}=1$ otherwise. Since every element of $E$ is contained in some basis of
$\mathcal{M}(G)$ (apply any greedy spanning-tree algorithm for the connected
components of $G$ starting from this element) and by using the previous argument,
the origin must be contained in $P$.
An index $j$ is contained in the $i$-th type coordinate $T^{(0)}_i$ if $v_{j,i}=\min\{v_{j,1},v_{j,2},\ldots,v_{j,{d+1}}\}$, which
holds precisely when $i\in B_j$.
\end{proof}
The $i$-th type entry $T_i^{(0)}$
of $\mathbf{0}$ contains all bases of $\mathcal{M}$ with element $i$, and $|T_i^{(0)}|$ is
the number of bases of $\mathcal{M}$ containing $i$.
Now it is time to introduce our running example.
\initfloatingfigs
\begin{example}\label{ex:RunEx1}
The graphic matroid given by the following graph $G$ has $d+1=5$ elements
(edges with bold indices), rank $k=3$, $n=8$ bases
$B_1=\{\mathbf{1},\mathbf{2},\mathbf{4}\},\,B_2=\{\mathbf{1},\mathbf{2},\mathbf{5}\},\,B_3=\{\mathbf{1},\mathbf{3},\mathbf{4}\},\,B_4=\{\mathbf{1},\mathbf{3},\mathbf{5}\},\,B_5=\{\mathbf{1},\mathbf{4},\mathbf{5}\},\,B_6=\{\mathbf{2},\mathbf{3},\mathbf{4}\},\,B_7=\{\mathbf{2},\mathbf{3},\mathbf{5}\},\,B_8=\{\mathbf{3},\mathbf{4},\mathbf{5}\}$
and the non-bases $\{\mathbf{1},\mathbf{2},\mathbf{3}\},\,\{\mathbf{2},\mathbf{4},\mathbf{5}\}$.
\vspace*{0.1cm}\\
\begin{minipage}{0.2\textwidth}
\centering
\psfrag{0}{$\mathbf{1}$}
\psfrag{G}{$G:$}
\psfrag{1}{$\mathbf{2}$}
\psfrag{2}{$\mathbf{3}$}
\psfrag{3}{$\mathbf{4}$}
\psfrag{4}{$\mathbf{5}$}
\psfrag{T0}{\!$\mathit{(12345)}$}
\psfrag{T1}{$\mathit{(1267)}$}
\psfrag{T2}{$\mathit{(34678)}$}
\psfrag{T3}{$\mathit{(13568)}$}
\psfrag{T4}{$\mathit{(24578)}$}
\includegraphics[scale=0.9]{RunningExample_G}
\end{minipage}
\begin{minipage}{0.7\textwidth}
Let $P$ be the corresponding tropical matroid polytope with its generators
{\begin{eqnarray*}V&=&\{v_{B_1},\ldots,v_{B_8}\}\\ &=&\left\{\scriptsize
\begin{pmatrix}0\\0\\1\\0\\1\end{pmatrix},
\begin{pmatrix}0\\0\\1\\1\\0\end{pmatrix},
\begin{pmatrix}0\\1\\0\\0\\1\end{pmatrix},
\begin{pmatrix}0\\1\\0\\1\\0\end{pmatrix},
\begin{pmatrix}0\\1\\1\\0\\0\end{pmatrix},
\begin{pmatrix}1\\0\\0\\0\\1\end{pmatrix},
\begin{pmatrix}1\\0\\0\\1\\0\end{pmatrix},
\begin{pmatrix}1\\1\\0\\0\\0\end{pmatrix}
\right\}.\end{eqnarray*}} The type of the origin $\mathbf{0}$ of $P$
is $(12345,1267,34678,13568,24578)$ where the $i$-th type entry contains all bases
using the edge $i$ (italic edge attributes).
\end{minipage}
\end{example}
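The origin type stated in Example \ref{ex:RunEx1} can be re-derived mechanically; the following snippet (a cross-check only) lists, for each element, the indices of the bases containing it.
\begin{verbatim}
# Cross-check of the origin type in the running example: T^(0)_i
# collects the indices of all bases containing element i.
bases = [{1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5},
         {1, 4, 5}, {2, 3, 4}, {2, 3, 5}, {3, 4, 5}]
T0 = [[j for j, B in enumerate(bases, start=1) if i in B]
      for i in range(1, 6)]
print(T0)  # [[1,2,3,4,5], [1,2,6,7], [3,4,6,7,8], [1,3,5,6,8], [2,4,5,7,8]]
\end{verbatim}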
In the next lemma we will show that the tropical standard simplex $\Delta^d$ is
the cornered hull of all tropical matroid polytopes in $\mathcal{P}_{k,d}$.
\begin{lemma}\label{lem:chull of tmp}
The cornered hull of a tropical matroid polytope $P\in\mathcal{P}_{k,d}$ with
generators $V$ is the
$d$-dimensional tropical standard simplex $\Delta^d$. The $i$-th corner
of $P$ is the vector $e_i$. The type of $e_i$ with respect to
$V$ is $\operatorname{type}_{V}(e_i)=(T_1,\ldots,T_{d+1})$ with
\begin{equation*} T_j \ = \
\begin{cases}
[n] & \text{if $j=i$} \, ,\\
\{l\mid j\in B_l \text{ and }i\notin B_l \}& \text{otherwise} \, .
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
For $B\in\mathcal{B}$ the $i$-th (canonical) coordinate of $v_B$ is \begin{equation*} v_{B,i} \ = \
\begin{cases}
0 & \text{if $i\in B$} \, ,\\
1& \text{otherwise} \, .
\end{cases}
\end{equation*}
The $j$-th coordinate of the $i$-th corner $c_i(V)$ of $P$ is
\begin{equation*}c_i(V)_j=\displaystyle\min_{J\in \mathcal{B}}(v_{J,j}-v_{J,i})\ = \
\begin{cases}
0 & \text{if $i=j$} \, ,\\
-1& \text{otherwise} \, .
\end{cases}
\end{equation*}
In canonical coordinates we get $c_i(V)=e_i$, which at the same
time is the $i$-th apex vertex of the tropical standard simplex $\Delta^d$.
The type of $e_i$ is $\operatorname{type}_{V}(e_i)=(T_1,T_2,\ldots,T_{d+1})$, where some index $l$ is contained in the $j$-th coordinate $T_j$ for $j\neq i$
if $v_{l,j}=\min\{v_{l,1},v_{l,2},\ldots,v_{l,i}-1,\ldots,v_{l,{d+1}}\}$. This is
satisfied by all bases $B_l\in\mathcal{B}$ with $j\in B_l$ and $i\notin B_l$. For $j=i$ all
indices $l\in[n]$ are contained in $T_i$ since the right-hand side of
$v_{l,i}-1=\min\{v_{l,1},v_{l,2},\ldots,v_{l,i}-1,\ldots,v_{l,{d+1}}\}$ is
less than or equal to the left-hand side in every case.
\end{proof}
Besides the point $\mathbf{0}$, the other pseudovertices of a tropical matroid
polytope correspond to unions of its bases.
\begin{lemma}\label{lem:pseudovertices}
The pseudovertices of $P\in\mathcal{P}_{k,d}$ are \[\PV(P)=\left\{-e_J\big|J
= \bigcup_{i\in I}B_i \text{ for some }I\subseteq[n]\right\}.\]
The pseudovertices of the first tropical hypersimplex are
\[\PV(\Delta^d)=\left\{-e_J\big|J\in\bigcup_{j=1}^{d}\binom{[d+1]}{j}\right\}.\]
Let $(T^{(0)}_1,\ldots,T^{(0)}_{d+1})$ be the type of the
pseudovertex $\mathbf{0}$ with respect to $V$ and consider a point $-e_J\in\PV(P)$.
If the complement $J^C$ of $J$ is equal to $\{i_1,\ldots,i_r\}$, then the type
$(T_1,\ldots,T_{d+1})$ of $-e_J$ with respect to $V$ is given by
\begin{equation*} \label{eq:typestmp}T_j \ = \
\begin{cases}
T^{(0)}_j\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_r})
& \text{if $j\in J$} \, ,\\
T^{(0)}_j\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_r}}^C)& \text{otherwise} \, .
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
Consider the point $v_J:=c(-e_J)=e_{J^C}$ with canonical coordinates \[v_{J,i}=
\begin{cases}
0 & \text{if $i\in J$} \, ,\\
1& \text{otherwise} \, .
\end{cases}\] and $\operatorname{type}_{V}(v_J)=(T_1,\ldots,T_{d+1})$.
Since the union of the elements of one or more bases of $\mathcal{M}$ consists
of at least $k$ elements, the index set $J$ has at least $k$ elements and thus
we have $r\leq d-k+1$ for the cardinality $r$ of $J^C$.
We can assume that $J^C=\{1,2,\ldots,r\}$. Then some index $l$ occurs in the
$j$-th coordinate $T_j$ if and only if
\begin{eqnarray}\label{eq:condHypersimplex:typePseudovert}
v_{l,j}-v_{J,j}&=&\min\{v_{l,1}-1,\ldots,v_{l,r}-1,v_{l,r+1},\ldots,v_{l,d+1}\}\\
&=&\min\{v_{l,1}-1,\ldots,v_{l,r}-1\}\in \{-1,0\}\nonumber.
\end{eqnarray}
For $j\in J$ the left hand side of
equation~(\ref{eq:condHypersimplex:typePseudovert}) is $v_{l,j}-0\in\{0,1\}$.
If $j\in B_l$, we get $v_{l,j}-v_{J,j}=0-0$ and
this is minimal in~(\ref{eq:condHypersimplex:typePseudovert}) if the coordinates
$v_{l,i}$ are equal to one for all $i\in J^C$,
i.e. $i\notin B_l$. If $j\notin B_l$, we get
$v_{l,j}-v_{J,j}=1\notin\{-1,0\}$. Therefore, $T_j$ is equal to $\{l\mid j\in B_l \text{ and } i\notin B_l \text{ for all } i\in J^C\}=T^{(0)}_j\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_r})$.
For $j\in J^C$ the left hand side is $v_{l,j}-1\in\{0,-1\}$.
If $j\in B_l$, we get
$v_{l,j}-v_{J,j}=-1=\min\{v_{l,1}-1,\ldots,v_{l,j}-1,\ldots,v_{l,r}-1\}$.
If $j\notin B_l$, we get $v_{l,j}-v_{J,j}=1-1=0$ and this is minimal
in~\eqref{eq:condHypersimplex:typePseudovert} if the
coordinates $v_{l,i}$ are equal to one for all $i\in J^C$, i.e. $i\notin
B_l$. Therefore, $T_j$ is equal to $\{l\mid j\in B_l \text{ or } i\notin
B_l \text{ for all } i\in J^C\}=T^{(0)}_j\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_r}}^C)$.
If $r=d-k+1$, the pseudovertex $v:=c(-e_J)$ is a generator of $P$. Each of
its type entries contains the index of the basis
$B:=J\in\mathcal{B}$. Since $B$ is the only basis with
$i_1,\ldots,i_{d-k+1}\notin B$, its index is the only element of
$T_j=T^{(0)}_j\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})$ for
$j\in B$. For this reason, the generators as defined in~(\ref{eq:generators}) are exactly the tropical vertices of $P$.
Now we consider the other points of $\PV(P)$, i.e. $r<d-k+1$.
The intersection of two type entries $T_{j_1}\cap T_{j_2}$ is equal to
\begin{equation} \label{eq:intersection}T_{j_1}\cap T_{j_2} \ = \
\begin{cases}
(T^{(0)}_{j_1}\cap T^{(0)}_{j_2})\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_r})
& \text{if $j_1,j_2\in J$} \, ,\\
(T^{(0)}_{j_1}\cap T^{(0)}_{j_2})\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_r}}^C)& \text{otherwise} \, .
\end{cases}
\end{equation}
In the first case of \eqref{eq:intersection}, $T_{j_1}\cap T_{j_2}$
consists of at least one tropical vertex $v_l$ with $v_{l,j_1}=v_{l,j_2}=0$ and
$v_{l,i}=1$ for all $i\in J^C$. In the second case there are even
more tropical vertices allowed and $T_{j_1}\cap T_{j_2}\neq \emptyset$. Hence, Proposition 17 of~\cite{DS04}
tells us that the cell $X_T$ has dimension $0$, i.e. the given points really
are pseudovertices of $P$.
For $J=\bigcup_{i\in I}B_i$ and $J'=\bigcup_{i\in
I'}B_i$ with $I\neq I'\subseteq[n]$ the tropical line segment
between $v_J$ and $v_{J'}$ is the concatenation of the two
ordinary line segments $[v_J, v_{J\cup J'}]$ and
$[v_{J\cup J'},v_{J'}]$. The point $v_{J\cup
J'}$ is again a point of $\PV(P)$. Therefore, there are no
other pseudovertices than the given points in $\PV(P)$.
Now we consider the tropical standard simplex $\Delta^d$. If the tropical vertex $v_l:=v_{B_l}$,
$B_l\in\binom{[d+1]}{1}$, of $\Delta^d$ is given by the vector $v_{B_l}=-e_l$
($l=1,\ldots,d+1$), then
the type of the origin $\mathbf{0}$ with respect to $\Delta^d$ is
$T^{\mathbf{0}}=(1,2,\ldots,d+1)$. Therefore, this is an interior point of
$\Delta^d$. Let $v_J$ with $J\in\bigcup_{j=1}^{d}\binom{[d+1]}{j}$ be any pseudovertex of
$\Delta^d$. Since for $i\in J$ and $i\notin J$, we have \mbox{$v_{i,i}-v_{J,i}=
0=\displaystyle\min\{v_{l,1}-1,\ldots,v_{l,r}-1,v_{l,r+1},\ldots,v_{l,i},\ldots,v_{l,d+1}\}$} and\\\mbox{$v_{i,i}-v_{J,i}=-1=\displaystyle\min\{v_{l,1}-1,\ldots,v_{l,i}-1,\ldots,v_{l,r}-1\}$}, respectively, it follows that the index $i$ is contained in the $i$-th entry of $T$ for all $i=1,\ldots,d+1$, i.e. $T^{\mathbf{0}}\subset T$. Hence, $\Delta^d$ is a polytrope.
\end{proof}
Let $v_J= \sum_{i\in J}-e_i=-e_J$ be a pseudovertex of $P$ with
$J=\bigcup_{i\in I}B_i$ for $I\subseteq[n]$. If the complement $J^C$ of $J$ is
equal to $\{i_1,i_2,\ldots,i_r\}$ with $r\le d-k+1$, we will denote $v_J$ as
$e_{i_1,i_2,\ldots,i_r}$ and its type with respect to $P$ as
\[\operatorname{type}_{V}(v_J)=T({v}_J)=\big(T_1({v}_J),\ldots,T_{d+1}({v}_J)\big).\]
By the previous lemma, the $i$-th entry of $T({v}_J)$ contains all bases using
edge $i\in J$ that are possible after deleting the edges of $J^C$ in the
corresponding graph $G$ or, equivalently, all bases that are possible after
(re-)inserting edge
$i\in J^C$ into $(V(G),E(G)\setminus\{J^C\})$.
We call a sequence of pseudovertices
$e_{\emptyset},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}}$, or rather the set \\\mbox{$\{i_1,\ldots,i_{d-k+1}\}\subset[d+1]$}, {\it valid} if
the edge set $E\setminus\{i_1,\ldots,i_{d-k+1}\}$ contains a spanning tree of the underlying
graph $G$.
The first point $e_{\emptyset}=\mathbf{0}$ is assigned to the total edge set $E$ of
$G$. Then we delete edge after edge such that the graph is
still connected until the edge set forms a connected graph without cycles. So
the last point of a valid sequence is the tropical vertex $v_B$ of $P$
with $B=[d+1]\setminus\{i_1,i_2,\ldots,i_{d-k+1}\}$.
It turns out that the pseudovertices of the valid
sequences and subsequences of them play a major
role in the calculation of the maximal bounded and unbounded cells of
$P$.
\begin{lemma}
The maximal bounded cells of $P\in\mathcal{P}_{k,d}$ are of dimension $d-k+1$. They form the tropical convex
hull of the pseudovertices of a valid sequence
$\mathbf{0},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}}$,
where the last
pseudovertex is a tropical vertex $v_B$ according to the basis
$B=[d+1]\setminus\{i_1,i_2,\ldots,i_{d-k+1}\}\in\mathcal{B}$ of $\mathcal{M}$.
Let $T^{(0)}=(T^{(0)}_1,\ldots,T^{(0)}_{d+1})$ be the type of the
pseudovertex $\mathbf{0}$ with respect to $P$. Then the type
$T=(T_1,\ldots,T_{d+1})$ of the interior of the bounded cell
$X_{T}=\operatorname{tconv}(\mathbf{0},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}})$ is given by
$T_{i_1}=T^{(0)}_{i_1},T_{i_2}=T^{(0)}_{i_2}\setminus
T^{(0)}_{i_1},\ldots,T_{i_{d-k+1}}=T^{(0)}_{i_{d-k+1}}\setminus (T^{(0)}_{i_1}\cup
T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k}})$ and $T_j=T^{(0)}_j\setminus (T^{(0)}_{i_1}\cup
T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})$ for all $j\in B$.
\end{lemma}
\begin{proof}
First, we will show that this sequence really defines a bounded cell
of $P$, i.e. $T_{j}\neq\emptyset$ for all $j\in [d+1]$.
So consider the type entry at some coordinate $i_j\in B^C$
\begin{eqnarray*}
T_{i_j}&=&T_{i_j}(\mathbf{0})\cap T_{i_j}(e_{i_1})\cap \ldots \cap\\ &&
T_{i_j}(e_{i_1,\ldots,i_{j-1}})\cap\\ &&
T_{i_j}(e_{i_1,\ldots,i_{j}})\cap\ldots\cap\\ &&
T_{i_j}(e_{i_1,\ldots,i_{d-k+1}})\\
&=&\{l\mid i_j\in B_l\}\cap\{l\mid i_j\in B_l \text{ and } i_1\notin
B_l\}\cap\ldots\cap\\ &&\{l\mid i_j\in B_l \text{ and }
(i_1,\ldots,i_{j-1}\notin B_l)\}\cap\\ &&
\{l\mid i_j\in B_l \text{ or } (i_1,\ldots,i_{j}\notin
B_l)\}\cap\ldots\cap\\ &&
\{l\mid i_j\in B_l \text{ or } (i_1,\ldots,i_{d-k+1}\notin B_l)\}\\
&=& \{l\mid i_j\in B_l \text{ and }(i_1,\ldots,i_{j-1}\notin B_l)\}\\
&=& T^{(0)}_{i_j}\setminus(T^{(0)}_{i_1}\cup\ldots\cup T^{(0)}_{i_{j-1}}).
\end{eqnarray*}
The cardinality of $T_{i_j}=T^{(0)}_{i_j}\cap {T^{(0)}_{i_1}}^C\cap\ldots\cap
{T^{(0)}_{i_{j-1}}}^C$ is equal to the number of tropical vertices
$v$ of $P$ with $v_{i_j}=0$ and $v_{i_1}=\ldots=v_{i_{j-1}}=1$ (in
canonical coordinates) respectively to the number of bases $B$ with $i_j\in B$ and
$i_1,\ldots,i_{j-1}\notin B$, which is greater than $0$ since we consider only
valid sequences. So every type
coordinate $T_{i_j}$ contains at least
one entry. In the case of uniform matroids we have the choice of $d+1-j$ {\it free} coordinates from which
$k-1$ must be equal to $0$, i.e. the cardinality
of $T_{i_j}$ is equal to $\binom{d+1-j}{k-1}$.
Analogously, the other type entries $T_j=T^{(0)}_j\setminus (T^{(0)}_{i_1}\cup
T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})=\{v_B\}$ for $j\in B$ and their cardinality $|T_j|=1$ can be
verified. Furthermore, we have $T_1\cup \cdots \cup T_{d+1}=[n]$,
because $T^{(0)}_1\cup \cdots \cup T^{(0)}_{d+1}=[n]$.
Since no type entry of $T$ is empty, the cell $X_{T}$ is bounded. More precisely,
$T_{i_1},\ldots,T_{i_{d-k+1}}$ is a partition of the indices of
$\V(P)\setminus\{v_B\}$, and the other type coordinates each contain the
index of the tropical vertex $v_B$; we call this a {\it pre-partition}.
By Proposition 17 in \cite{DS04}, the dimension of $X_{T}$ is $d-k+1$.
Removing one pseudovertex $e_{i_1,\ldots,i_r}$ with $r\in [d-k+1]$ from a valid sequence, we obtain
$T_{i_{r+1}}=T^{(0)}_{i_{r+1}}\setminus(T^{(0)}_{i_1}\cup\cdots\cup T^{(0)}_{i_{r-1}})$ and
$T_{i_r}\cap T_{i_{r+1}}\neq\emptyset$. This yields a bounded cell of
dimension lower than $d-k+1$.
Adding a pseudovertex $e_J$ to $X_{T}$, $J\neq B$ with $J^C=\{j_1,\ldots,j_r\}$ ($1\leq r\leq d-k+1$) and
$(j_1,\ldots,j_l)\neq (i_1,\ldots,i_l)$ for all $l=1,\ldots,r$, we
consider $T'=T\cap\operatorname{type}_{P}(e_J)$. To keep the status of a maximal bounded
cell, the type of the cell still has to be a pre-partition of $[n]$ without empty
type entries. There are three different cases (1)-(3).
(1) For $J^C\not\subseteq B^C$ and $J\cap B\neq\emptyset$, there is an
index $j\in J\cap B$. We consider the $j$-th type entry of $T'$ that is
equal to $T_{j}\cap T^{(0)}_j(e_J)=T^{(0)}_j\cap
{T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_{d-k+1}}}^C\cap {T^{(0)}_{j_1}}^C\cap\cdots\cap
{T^{(0)}_{j_r}}^C$. This is an empty set since there are no tropical vertices of
$P$ with $d-k+1+r$ entries equal to one. The cells with empty type entries are
not bounded.
(2) For $J^C\not\subseteq B^C$ and $J\cap B=\emptyset$, we consider an index
$j\in J\cap B^C$ that corresponds to a valid sequence with $i_t=j$, $t\in\{1,\ldots,d-k+1\}$. The $j$-th type entry of
$T'$ is equal to
$T^{(0)}_j(e_J)\cap T_j=T^{(0)}_j\cap
{T^{(0)}_{j_1}}^C\cap\cdots\cap {T^{(0)}_{j_r}}^C\cap
{T^{(0)}}^C_{i_1}\cap\cdots\cap {T^{(0)}}^C_{i_{t-1}}$. Since
$J^C\not\subseteq\{i_1,\ldots,i_{t-1}\}$, the cardinality of $T'_j$ is less than $|T^{(0)}_j|$,
and we get no valid partition of $[n]$.
(3) For $J^C\subset B^C$ we have $r<d+1-k$ (otherwise $J=B$). We choose the smallest index $j$ such that $i_j\in J\cap B^C$. That means
$i_1,\ldots,i_{j-1}\in J^C\subset B^C$. Since we have
$(i_1,\ldots,i_l)\neq(j_1,\ldots,j_l)$ for all
$l=1,\ldots,r$, we know that
$(i_1,\ldots,i_{j-1})\neq(j_1,\ldots,j_r)$ leading to $|T_{i_j}|=|T^{(0)}_{i_j}\cap
{T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_{j-1}}}^C|>|T'_{i_j}|=|T^{(0)}_{i_j}\cap
{T^{(0)}_{j_1}}^C\cap\cdots\cap {T^{(0)}_{j_r}}^C|$. As in the other two cases this is
no valid pre-partition of $[n]$.
In every case, adding a pseudovertex from another sequence leads to
infeasible types of bounded cells.
Similarly, it is not difficult to see that removing a pseudovertex and adding
a new one from another sequence leads to infeasible types or lower-dimensional
bounded cells, i.e. mixing of valid sequences is not possible. Altogether, we
get the desired maximal bounded cells of $P$.
\end{proof}
There are $n\cdot(d+1-k)!$ maximal bounded cells of $P$, since for each spanning tree there are
$(d+1-k)!$ possible orders in which to add the remaining edges until we obtain the whole graph.
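For the matroid of Example \ref{ex:RunEx1}, this count can be confirmed by brute force: the following snippet (a cross-check only) enumerates the valid sequences $(i_1,i_2)$, i.e. the orderings of pairs whose complement in $[5]$ is a basis.
\begin{verbatim}
from itertools import permutations

# Count the maximal bounded cells of the running example as valid
# sequences (i_1, i_2) whose complement in {1,...,5} is a basis.
bases = [{1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5},
         {1, 4, 5}, {2, 3, 4}, {2, 3, 5}, {3, 4, 5}]
valid = [seq for seq in permutations(range(1, 6), 2)
         if set(range(1, 6)) - set(seq) in bases]
print(len(valid))   # n * (d+1-k)! = 8 * 2 = 16
\end{verbatim}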
\begin{example}
The tropical matroid polytope $P$ from Example \ref{ex:RunEx1} is contained
in the $4$-dimensional tropical hyperplane with apex $\mathbf{0}$. It is shown in
Figure~\ref{fig:PVG} as the abstract graph of the vertices and edges of its bounded subcomplex. Its maximal bounded cells are ordinary
simplices of dimension $d-k+1=2$, whose pseudovertices are the tropical
vertices $V=\{v_{B_1},\ldots,v_{B_8}\}$ (dark), the origin $\mathbf{0}$ (the
centered point) and the five corners $c_i=e_i$ (light).
The four tropical vertices with indices $3$,~$4$,~$5$ and $8$ correspond to the
bases that are possible after deleting edge $1$ in the underlying graph and
therefore adjacent to the point $e_1$. One valid sequence $i_1,i_2$ leading
to a maxim bounded cell is for example the (tropical/ordinary) convex hull of
$e_{\emptyset}=(0,0,0,0,0),\,e_4=(0,0,0,0,1)$ and $e_{4,2}=v_{B_1}=(0,0,1,0,1)$,
i.e. $i_1=4$ and $i_2=2$, with interior cell type $(1,1,36,1,24578)$,
representing the basis $B_1=\{1,2,4\}$.
\begin{figure}
\caption{The abstract graph of the vertices and edges of the bounded subcomplex of $P$.}
\label{fig:PVG}
\end{figure}
\end{example}
All cells in the tropical complex $\mathcal{C}_{V}$, bounded or not, are pointed, i.e. they do not
contain an affine line. So each cell of $\mathcal{C}_{V}$ must contain
a bounded cell as an ordinary face.
We now state the main theorem about the coarse types of maximal cells in the
cell complex of a tropical matroid polytope. Let $b_{I,J}$ denote the number of bases
$B\in\mathcal{B}$ with $I\subseteq B$ and $J\subseteq B^C$.
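The numbers $b_{I,J}$ are directly computable from the list of bases. The following minimal Python sketch (the helper name \texttt{b} and the test data are ours; bases are assumed to be given as explicit subsets of the ground set) counts them, illustrated on a uniform matroid:
\begin{verbatim}
from itertools import combinations

def b(bases, I, J):
    # number of bases B with I contained in B and J contained in B^C
    I, J = set(I), set(J)
    return sum(1 for B in bases if I <= B and not (J & B))

# uniform matroid U_{k,d+1}: every k-subset of {1,...,d+1} is a basis
d, k = 4, 3
bases = [set(B) for B in combinations(range(1, d + 2), k)]
print(b(bases, {1}, set()))   # binom(d, k-1) = 6
print(b(bases, {2}, {1}))     # binom(d-1, k-1) = 3
\end{verbatim}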
\begin{theorem}\label{thm:coarse types of tmp}
Let $\mathcal{C}$ be the
tropical complex induced by the tropical vertices of a tropical matroid polytope
$P\in\mathcal{P}_{k,d}$. The set of all coarse types of the maximal cells arising in $\mathcal{C}$ is
given by those tuples $(t_1,\ldots,t_{d+1})$ with
\begin{equation}\label{eq:coarse types of tmp} t_j \ = \
\begin{cases}
b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}} & \text{if $j=i_1$} \, ,\\
b_{\{i_l\},\{i_1,\ldots,i_{l-1}\}} & \text{if $j=i_l\in \{i_2,\ldots, i_{d'+1}\}$} \, ,\\
0 & \text{otherwise} \, .
\end{cases}
\end{equation}
where $e_{i_1},\ldots,e_{i_1,i_2,\ldots,i_{d'}}$ form a subsequence of a valid
sequence of $P$.
\end{theorem}
\begin{proof}
Depending on the maximal bounded (ordinary) face in the boundary, there are
three types of maximal unbounded cells in $\mathcal{C}_{V}$.
The first one, $X_{T}$, contains a maximal bounded cell of dimension $d-k+1$, which is the tropical convex hull of the
pseudovertices of a complete valid sequence
$\mathbf{0},e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d-k+1}}$ where
$B^C=\{i_1,\ldots,i_{d-k+1}\}$ is the complement of a basis of $\mathcal{M}$. To obtain a
full-dimensional cell, we choose $k-1$ of the $k$ free directions $-e_{i}$, $i\in B$. So let
$-e^{\infty}_{j_1},\ldots,-e^{\infty}_{j_{k-1}}$ be the {\it extreme rays} of
$X_{T}$, and $(T^{(0)}_1,\ldots,T^{(0)}_{d+1})$ be the type of the pseudovertex $\mathbf{0}$
with respect to $P$.
Then the type
$T=(T_1,\ldots,T_{d+1})$ of the interior of this unbounded cell
$X_{T}$ is given by the intersection of the types of its vertices and therefore
$T_{i_1}=T^{(0)}_{i_1},T_{i_2}=T^{(0)}_{i_2}\setminus
T^{(0)}_{i_1},\ldots,T_{i_{d-k+1}}=T^{(0)}_{i_{d-k+1}}\setminus (T^{(0)}_{i_1}\cup
T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k}})$, $T_i=T^{(0)}_i\setminus (T^{(0)}_{i_1}\cup
T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k+1}})$ for $i\notin B^C\cup\{j_1,\ldots,j_{k-1}\}$ and
$T_{j_1}=\ldots=T_{j_{k-1}}=\emptyset$. Choosing $d'=d-k+1$ and $i_{d'+1}=i$, we
get the coarse type entries of equation~(\ref{eq:coarse types of tmp}).
The second type, $X_{T}$, of maximal unbounded cells contains a bounded cell of
lower dimension $d'\in\{0,\ldots,d-k\}$, which is the tropical convex hull of the
pseudovertices of some subsequence
$e_{i_1},e_{i_1,i_2},\ldots,e_{i_1,i_2,\ldots,i_{d'+1}}$.
To get full-dimensional we still need the extreme rays
$e_{i_1,i_2,\ldots,i_{d'+1}}-e^{\infty}_l$ for all directions
$l\notin\{i_1,\ldots,i_{d'+1}\}$. Then the type
$T=(T_1,\ldots,T_{d+1})$ of the interior of this unbounded cell
$X_{T}$ is given by
$T_{i_1}=T^{(0)}_{i_1}\cup({T^{(0)}_{i_1}}^C\cap\cdots\cap {T^{(0)}_{i_{d'+1}}}^C),T_{i_2}=T^{(0)}_{i_2}\setminus
T^{(0)}_{i_1},\ldots,T_{i_{d'+1}}=T^{(0)}_{i_{d'+1}}\setminus (T^{(0)}_{i_1}\cup
T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d'}})$, $T_j=\emptyset$ for $j\notin
\{i_1,\ldots,i_{d'+1}\}$ with the coarse type as given in equation~(\ref{eq:coarse
types of tmp}).
The third and last type of maximal unbounded cells contains a bounded cell of
dimension $d-k$ and is assigned to the non-bases of $\mathcal{M}$, i.e. to
the subsets of $E$ with cardinality $k$ that are not bases. Let
$i_1,\ldots,i_{d-k+1}$ be the complement of a non-basis $N$ and
$i_1,\ldots,i_{d-k}$ a valid subsequence. Then there is
an unbounded cell $X_{T}$ that is the tropical convex hull of the pseudovertices
$\mathbf{0}, e_{i_1},\ldots,e_{i_{d-k}}$ and the extreme rays $\mathbf{0}-e^{\infty}_l$ for all directions
$l\notin\{i_1,\ldots,i_{d-k+1}\}$ and with type entries
$T_{i_1}=T^{(0)}_{i_1},T_{i_2}=T^{(0)}_{i_2}\setminus
T^{(0)}_{i_1},\ldots,T_{i_{d-k+1}}=T^{(0)}_{i_{d-k+1}}\setminus (T^{(0)}_{i_1}\cup
T^{(0)}_{i_2}\cup\cdots\cup T^{(0)}_{i_{d-k}})$, $T_j=\emptyset$ for $j\notin
\{i_1,\ldots,i_{d-k+1}\}$. Choosing $d'=d-k$ and observing that
$b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}}=0$ for the non-basis
$\{i_1,i_2,\ldots,i_{d'+1}\}^C$ we get the desired result.
\end{proof}
Restricting ourselves to the uniform case, we get the following result.
\begin{corollary}
The coarse types of the maximal cells in the tropical complex induced by the
tropical vertices of the tropical hypersimplex $\Delta_k^d$ in ${\mathbb{T}}^d$ with
$2\le k< d+1$ are up to symmetry of $\operatorname{Sym}(d+1)$ given by
\[\left(\binom{d+1-\alpha}{k}+\binom{d}{k-1},\,\binom{d-1}{k-1},\ldots,\,\binom{d-(\alpha-1)}{k-1},\underbrace{0,\ldots,0}_{d+1-\alpha}\right)\]
where $0\le\alpha\le d+2-k$ corresponds to the maximal dimension of a bounded
cell of its boundary.
\end{corollary}
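For concrete parameters these tuples can be tabulated directly. The sketch below (a hypothetical helper of ours; we read $\alpha\geq 1$ as the number of nonzero entries, so that each tuple has length $d+1$) lists them for $\Delta_2^4$. Observe that the entries of each tuple sum to $\binom{d+1}{k}$, the number of tropical vertices, by the hockey-stick identity.
\begin{verbatim}
from math import comb

def coarse_type(d, k, alpha):
    # tuple from the corollary; alpha = number of nonzero entries
    assert 2 <= k < d + 1 and 1 <= alpha <= d + 2 - k
    first = comb(d + 1 - alpha, k) + comb(d, k - 1)
    rest = [comb(d - i, k - 1) for i in range(1, alpha)]
    return tuple([first] + rest + [0] * (d + 1 - alpha))

for alpha in range(1, 5):      # d = 4, k = 2
    print(alpha, coarse_type(4, 2, alpha))
# (10,0,0,0,0) (7,3,0,0,0) (5,3,2,0,0) (4,3,2,1,0)
\end{verbatim}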
Now we relate the combinatorial properties of the tropical complex
$\mathcal{C}$ of a tropical
matroid polytope to algebraic properties of a monomial ideal which is
assigned to $\mathcal{C}$.
As a
direct consequence of Theorem~\ref{thm:DJS2009} and Corollary 3.5
in~\cite{DJS09}, we can state the generators
of the coarse type ideal \[I=\langle x^{\mathbf{t}(p)}\colon
p\in{\mathbb{T}}^d\rangle\subset{\mathbb{K}}[x_1,\ldots,x_{d+1}],\] where $\mathbf{t}(p)$ is the
coarse type of $p$ and
$x^{\mathbf{t}(p)}={x_1}^{{\mathbf{t}(p)}_1}{x_2}^{{\mathbf{t}(p)}_2}\cdots{x_{d+1}}^{{\mathbf{t}(p)}_{d+1}}$.
\begin{corollary}The coarse type ideal $I$
is equal to \[\langle x_{i_1}^{t_{i_1}}x_{i_2}^{t_{i_2}}\cdots x_{i_{d'+1}}^{t_{i_{d'+1}}}\colon
[d+1]\setminus\{i_1,\ldots,i_{d'}\} \text{ contains a basis }\rangle\] with
$(t_{i_1},t_{i_2},\ldots,t_{i_{d'+1}})=\big(b_{\{i_1\},\emptyset}+b_{\,\emptyset,\{i_1,i_2,\ldots,i_{d'+1}\}},b_{\{i_2\},\{i_1\}},\ldots,b_{\{i_{d'+1}\},\{i_1,\ldots,i_{d'}\}}\big)$.
\end{corollary}
\begin{example}The tropical complex $\mathcal{C}$ of the tropical matroid
polytope of Example~\ref{ex:RunEx1} has 73 maximal cells. There are five maximal cells for the case
$d'=0$ with $t_{i_{d'+1}}=8$ and $t_j=0$ for $j\neq i_{d'+1}$, and 48 for the case
$d'=2$ according to the 8 bases. Finally, there are 20 maximal cells for the case $d'=1$, where
$[d+1]\setminus\{i_1\}$ contains a basis, but $[d+1]\setminus\{i_1,i_2\}$ does
not necessarily contain a basis.
The coarse type ideal of $\mathcal{C}$ is given by
\begin{eqnarray*}I&=\langle&
{x_1}^1{x_2}^2{x_3}^5,{x_1}^1{x_2}^5{x_3}^2,{x_1}^2{x_2}^1{x_3}^5,{x_1}^4{x_2}^1{x_3}^3,{x_1}^4{x_2}^3{x_3}^1,{x_1}^2{x_2}^5{x_3}^1,{x_2}^2{x_3}^6,{x_2}^6{x_3}^2,\\
&&{x_2}^2{x_3}^5{x_4}^1,{x_2}^5{x_3}^2{x_4}^1,{x_1}^2{x_3}^6,{x_1}^5{x_3}^3,{x_1}^2{x_3}^5{x_4}^1,{x_1}^4{x_3}^3{x_4}^1,{x_3}^8,{x_3}^5{x_4}^3,{x_1}^8,{x_1}^5{x_2}^3,\\
&&{x_1}^5{x_4}^3,{x_1}^4{x_3}^1{x_4}^3,{x_1}^4{x_2}^3{x_4}^1,{x_1}^4{x_2}^1{x_4}^3,{x_0}^2{x_4}^6,{x_4}^8,{x_0}^2{x_1}^1{x_4}^5,{x_0}^1{x_1}^2{x_4}^5,{x_1}^2{x_4}^6,\\
&&{x_0}^1{x_2}^2{x_4}^5,{x_2}^2{x_4}^6,{x_0}^2{x_2}^1{x_4}^5,{x_1}^1{x_2}^2{x_4}^5,{x_1}^2{x_2}^1{x_4}^5,{x_0}^2{x_3}^1{x_4}^5,{x_3}^3{x_4}^5,{x_2}^2{x_3}^1{x_4}^5,\\
&&{x_1}^2{x_3}^1{x_4}^5,{x_0}^1{x_2}^5{x_4}^2,{x_2}^6{x_4}^2,{x_2}^5{x_3}^1{x_4}^2,{x_1}^1{x_2}^5{x_4}^2,{x_0}^2{x_3}^6,{x_0}^1{x_2}^2{x_3}^5,{x_0}^2{x_2}^1{x_3}^5,\\
&&{x_0}^2{x_1}^1{x_3}^5,{x_0}^1{x_1}^2{x_3}^5,{x_0}^2{x_3}^5{x_4}^1,{x_0}^1{x_2}^5{x_3}^2,{x_0}^3{x_2}^5,{x_2}^8,{x_0}^1{x_1}^2{x_2}^5,{x_1}^2{x_2}^6,{x_1}^2{x_2}^5{x_4}^1,\\
&&{x_0}^1{x_1}^4{x_2}^3,{x_0}^6{x_4}^2,{x_0}^5{x_1}^1{x_4}^2,{x_0}^5{x_1}^2{x_4}^1,{x_0}^3{x_1}^4{x_4}^1,{x_0}^1{x_1}^4{x_4}^3,{x_0}^5{x_2}^1{x_4}^2,{x_0}^5{x_2}^3,\\
&&{x_0}^5{x_1}^2{x_2}^1,{x_0}^3{x_1}^4{x_2}^1,{x_0}^5{x_3}^1{x_4}^2,{x_0}^5{x_2}^1{x_3}^2,{x_0}^5{x_3}^2{x_4}^1,{x_0}^6{x_3}^2,{x_0}^5{x_1}^1{x_3}^2,{x_0}^6{x_1}^2,\\
&&{x_0}^3{x_1}^5,{x_0}^5{x_1}^2{x_3}^1,{x_0}^3{x_1}^4{x_3}^1,{x_0}^8,{x_0}^1{x_1}^4{x_3}^3\,\rangle\subseteq
R:={\mathbb{R}}[{x_0},{x_1},{x_2},{x_3},{x_4}]\end{eqnarray*}We obtain its minimal free
resolution, which is induced by $\mathcal{C}$\[\mathcal{F}_{\bullet}^{\mathcal{C}}\colon\,0\rightarrow R^{14}\rightarrow R^{78}\rightarrow
R^{172}\rightarrow R^{180}\rightarrow R^{73}\rightarrow I\rightarrow
0,\]where the exponents $i$ of the free graded $R$-modules $R^i$ correspond to
the entries of the $f$-vector $f(\mathcal{C})=(1,14,78,172,180,73)$ of $\mathcal{C}$.
\end{example}
In (ordinary) convexity, swapping between the interior and the exterior description of a
polytope is a famous problem, known as the \emph{convex hull problem}. For a
uniform matroid it is possible to give the minimal tropical halfspaces of
its tropical matroid polytope explicitly.
\begin{theorem} The tropical hypersimplex $\Delta_k^d$ in ${\mathbb{T}}^d$ is the
intersection of its cornered halfspaces and the tropical halfspaces
$H(\mathbf{0},I)$, where $I$ is a $(d-k+2)$-element subset of $[d+1]$.
\end{theorem}
\begin{proof}
For $k=1$ the tropical standard simplex is a polytrope and coincides with its
cornered hull. For $k\geq 2$ we want to verify the three conditions of Gaubert
and Katz in Proposition 1 of~\cite{GaubertKatz09}.
Let $T=(T_1,\ldots,T_{d+1})$ be the type of
the apex $\mathbf{0}$ of $H(\mathbf{0},I)$.
If a vertex $v\in\V(\Delta_k^d)$
appears in some type entry $T_i$, then the $i$-th (canonical) coordinate of $v$
is equal to zero. Hence, exactly $k$ entries of $T$ contain the index of
$v$. Since the cardinality of $I^C=[d+1]\setminus I$ is only $k-1$,
every tropical vertex of $\Delta_k^d$ is contained in some sector $\overline{S_i}$ with
$i\in I$, i.e. $\Delta_k^d\subseteq H(\mathbf{0},I)$.
Consider the complement $I^C$ of $I$. For all $i\in I^C$ there is a tropical vertex $v$ with
$v_i=0$, i.e. $v\in T_i$. Since the cardinality of $I^C$ is equal to $k-1$ and
$v$ has $k$ entries equal to zero, there must be an index $j\in I$ such that
$v_j=0$. We can conclude that $T_i\cap T_j\neq\emptyset$.
The intersection $T_i\cap T_j$ is not empty for arbitrary $i,j\in[d+1]$, because its cardinality is equal to the number
of tropical vertices $v$ with $v_i=v_j=0$, which is $\binom{d}{k-1}$ with
$k>1$.
For $i\in I$ and $j\in I^C$, the set $T_i\cap T_j$ consists of all tropical vertices
$v$ with $v_i=0$ and $v_j=1$ (in canonical coordinates).
On the other hand, the set $\bigcup_{k\in I\setminus\{i\}}T_k$ contains all
tropical vertices $v$
with $v_i=1$.
So we get $T_i\cap T_j \not\subset \bigcup_{k\in I\setminus\{i\}}T_k$.
Hence, we obtain that $H(\mathbf{0},I)$ is a minimal tropical
halfspace, and $\Delta_k^d$ is contained in the intersection of its cornered
hull $\displaystyle \bigcap_{i\in[d+1]}H(e_i,\{i\})$ with $\displaystyle
\bigcap_{I\in\binom{[d+1]}{d-k+2}}H(\mathbf{0},I)$.
We still have to prove that the intersection of the given minimal tropical
halfspaces is contained in $\Delta_k^d$. Let us assume that there is a point
$x\in{\mathbb{T}}^d\setminus\Delta_k^d$ with $\operatorname{type}_{\Delta_k^d}(x)_i=\emptyset$. Then for any tropical halfspace $H(\mathbf{0},I)$, $I\in\binom{[d+1]}{d-k+2}$, with $i\in I^C$
we obtain $x\notin H(\mathbf{0},I)$.
Consequently, the tropical hypersimplex $\Delta_k^d$ is
the set of all points $x\in{\mathbb{T}}^d$ satisfying
\begin{eqnarray*}
\displaystyle\bigoplus_{i\in I} x_i&\leq&\bigoplus_{j\in I^C} x_j \text{ for
all }I\subseteq [d+1]\text{ with }\lvert I \rvert=d-k+2\\\text{and }
(-1)\odot x_i&\leq&\bigoplus_{j\neq i} x_j \text{ for all }i\in[d+1].
\end{eqnarray*}
\end{proof}
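The halfspace description translates into a finite membership test. The minimal sketch below is ours and assumes the min-plus convention, i.e. $\oplus=\min$ and $(-1)\odot x_i=x_i-1$ (swap \texttt{min} for \texttt{max} under the max-plus convention); it checks the inequalities of the theorem and verifies that every $0/1$ point with exactly $k$ zero entries passes.
\begin{verbatim}
from itertools import combinations, product

def in_hypersimplex(x, k):
    # min-plus inequalities of the theorem for Delta_k^d
    d = len(x) - 1
    corner = all(x[i] - 1 <= min(x[j] for j in range(d + 1) if j != i)
                 for i in range(d + 1))
    apex = all(min(x[i] for i in I) <=
               min(x[j] for j in range(d + 1) if j not in I)
               for I in combinations(range(d + 1), d - k + 2))
    return corner and apex

d, k = 3, 2
for v in product([0, 1], repeat=d + 1):   # candidate tropical vertices
    if v.count(0) == k:
        assert in_hypersimplex(v, k)
print(in_hypersimplex((5, 0, 0, 0), 2))   # False: violates a corner
\end{verbatim}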
\begin{example}The second tropical hypersimplex $\Delta_2^3$ in ${\mathbb{T}}^3$ is the intersection
of the four cornered halfspaces $H(c_i,\{i\})$ for $i=1,\ldots,4$ and the tropical
halfspaces $H(\mathbf{0},\{1,2,3\})$, $H(\mathbf{0},\{1,2,4\})$, $H(\mathbf{0},\{1,3,4\})$ and
$H(\mathbf{0},\{2,3,4\})$ with apex $\mathbf{0}\in {\mathbb{T}}^3$.
The second tropical hypersimplex $\Delta_2^2$ in ${\mathbb{T}}^2$ is the intersection
of the three cornered halfspaces $H(c_i,\{i\})$ for $i=1,\ldots,3$ and the tropical
halfspaces $H(\mathbf{0},\{1,2\})$, $H(\mathbf{0},\{1,3\})$ and
$H(\mathbf{0},\{2,3\})$ with apex $\mathbf{0}\in {\mathbb{T}}^2$, see Figure \ref{fig:Delta22}.
\begin{figure}
\caption{The tropical hypersimplex $\Delta_2^2$ as the intersection of cornered halfspaces and tropical halfspaces with apex $\mathbf{0}$.}
\label{fig:Delta22}
\end{figure}
\end{example}
\noindent\emph{Acknowledgements.} I would like to thank my advisor Michael
Joswig for suggesting the problem and for supporting me in writing this article.
\end{document}
\begin{document}
\title{Algebraic duality for partially ordered sets}
\begin{center}
Department of Mathematics, SPb UEF, Griboyedova 30/32, \\
191023, St-Petersburg, Russia \\
{\em and}
\\ Division of Mathematics, Istituto per la Ricerca di Base, \\
I-86075, Monteroduni, Molise, Italy
\end{center}
\begin{abstract}
For an arbitrary partially ordered set $P$ its {\em dual} $P^*$ is
built as the collection of all monotone mappings $P\to{\bf 2}$ where
${\bf 2}=\{0,1\}$ with $0<1$. The set of mappings $P^*$ is proved to be a
complete lattice with respect to the pointwise partial order. The
{\em second dual} $P^{**}$ is built as the collection of all morphisms
of complete lattices $P^*\to{\bf 2}$ preserving universal bounds. Then it
is proved that the partially ordered sets $P$ and $P^{**}$ are
isomorphic.
\end{abstract}
\paragraph{AMS classification:} 06A06, 06A15
\section*{Introduction}
The results presented in this paper can be considered as the
algebraic counterpart of the duality in the theory of linear
spaces. The outline of the construction looks as follows.
Several categories occur in the theory of partially ordered sets.
The most general is the category ${\cal POSET}$ whose objects are
partially ordered sets and the morphisms are the monotone mappings.
Another category which will be used is
${\cal BCL}$ whose objects are (bounded) complete lattices
and the morphisms are the lattice homomorphisms preserving
universal bounds. Evidently ${\cal BCL}$ is a subcategory of ${\cal POSET}$.
To introduce the algebraic duality (I use the term `algebraic' to
avoid confusion with the traditional duality based on order reversal)
the two element partially ordered set ${\bf 2}$ is used:
\[ {\bf 2}=\{0,1\}\quad,\quad 0<1 \]
Let $P$ be an object of ${\cal POSET}$. Consider its dual $P^*$:
\begin{equation}\label{f129}
P^* = {\sf Mor}_{\scriptscriptstyle {\cal POSET}}(P,{\bf 2})
\end{equation}
The set $P^*$ has the pointwise partial order. Moreover, it is
always a complete lattice with respect to this partial order
(section \ref{s130}). Furthermore, starting from $P^* \in {\cal BCL}$
(bounded complete lattices) consider the set $P^{**}$ of all morphisms
in the appropriate category:
\begin{equation}\label{f129d}
P^{**} = {\sf Mor}_{\scriptscriptstyle {\cal BCL}}(P^*,{\bf 2})
\end{equation}
And again, the set of mappings $P^{**}$ is pointwise partially
ordered. Finally, it is proved in section \ref{s139} that $P^{**}$
is isomorphic to the initial partially ordered set $P$ (the
isomorphism lemma \ref{l141}):
\[ P^{**} \simeq P \]
The account of the results is organized as follows. First it is proved
that $P^*$ (\ref{f129}) is a complete lattice. Then the
embeddings $p\to\lambda_p$ and $p\to\upsilon_p$ of the poset $P$ into $P^*$
are built (\ref{f131d}). Then it is shown that the
principal ideals $[0,\lambda_p]$ in $P^*$ are prime for all $p\in P$
(lemma \ref{l139}). Moreover, it is shown that there are
no other principal prime ideals in $P^*$. Finally, it is observed
that the principal prime ideals of $P^*$ are in 1-1 correspondence
with the elements of $P^{**}$.
\section{The structure of the dual space} \label{s130}
First define the pointwise partial order on the elements of $P^*$
(\ref{f129}). For any $x,y\in P^*$
\begin{equation}\label{f130p}
x\le y \kern0.3em\Leftrightarrow\kern0.3em \forall p \in P \quad x(p)\le y(p)
\end{equation}
Evidently the following three statements are equivalent for $x,y\in
P^*$:
\begin{equation}\label{f130}
\left.
\begin{array}{lcl}
&x\le y & \cr
\forall p\in P \quad x(p)=1 &\Rightarrow & y(p)=1 \cr
\forall p\in P \quad y(p)=0 &\Rightarrow & x(p)=0
\end{array}
\right.
\end{equation}
To prove that $P^*$ is a complete lattice, consider an arbitrary
subset $K\subseteq P^*$ and define the following mappings
$u,v:P \to {\bf 2}$:
\begin{equation}\label{f131i}
u(p) = \left\lbrace \begin{array}{rcl}
1, & \exists k\in K & k(p)=1 \cr
0, & \forall k\in K & k(p)=0
\end{array}\right. \qquad
v(p) = \left\lbrace \begin{array}{rcl}
0, & \exists k\in K & k(p)=0 \cr
1, & \forall k\in K & k(p)=1
\end{array}\right.
\end{equation}
The direct calculations show that both $u$ and $v$ are monotone
mappings: $u,v\in P^*$ and
\[ u=\sup_{P^*}K \quad,\quad v=\inf_{P^*}K \]
which proves that $P^*$ is a complete lattice. Denote by {\bf 0},{\bf 1} the
universal bounds of the lattice $P^*$:
\[ \forall p\in P \quad {\bf 0}(p)=0 \quad, \quad {\bf 1}(p)=1 \]
Let $p$ be an element of $P$. Define the elements $\lambda_p,\upsilon_p\in
P^*$ associated with $p$: for all $q\in P$
\begin{equation}\label{f131d}
\lambda_p(q) = \left\lbrace \begin{array}{rcl}
0 &, & q\le p\cr
1 &, & \hbox{otherwise}
\end{array}\right. \qquad
\upsilon_p(q) = \left\lbrace \begin{array}{rcl}
1 &, & q\ge p \cr
0 &, & \hbox{otherwise}
\end{array}\right.
\end{equation}
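For a finite poset all of this can be checked by brute force. The following minimal Python sketch (the test poset and all helper names are ours) enumerates $P^*$ as the monotone maps $P\to{\bf 2}$ and verifies the characterization of $\lambda_p$ and $\upsilon_p$ proved in lemma \ref{l132} just below.
\begin{verbatim}
from itertools import product

# a small test poset P: a,b minimal, with a<c, b<c, b<d
P = ['a', 'b', 'c', 'd']
rel = {('a', 'c'), ('b', 'c'), ('b', 'd')}
def leq(p, q): return p == q or (p, q) in rel

# P* = all monotone maps P -> {0,1}, ordered pointwise
maps = [dict(zip(P, bits)) for bits in product([0, 1], repeat=len(P))]
dual = [x for x in maps
        if all(x[p] <= x[q] for p in P for q in P if leq(p, q))]

def lam(p): return {q: 0 if leq(q, p) else 1 for q in P}  # lambda_p
def ups(p): return {q: 1 if leq(p, q) else 0 for q in P}  # upsilon_p
def below(x, y): return all(x[q] <= y[q] for q in P)

# x(p)=0 iff x <= lambda_p, and x(p)=1 iff x >= upsilon_p
for x in dual:
    for p in P:
        assert (x[p] == 0) == below(x, lam(p))
        assert (x[p] == 1) == below(ups(p), x)
print('|P*| =', len(dual))   # 8 for this poset
\end{verbatim}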
\begin{lemma}\label{l132} For any $x\in P^*$ and $p\in P$
\begin{equation}\label{f132l}
\begin{array}{rcl}
x(p)=0 &\Leftrightarrow & x\le \lambda_p \hbox{ \rm in } P^* \cr
x(p)=1 &\Leftrightarrow & x\ge \upsilon_p \hbox{ \rm in } P^*
\end{array}
\end{equation}
\end{lemma}
\paragraph{Proof.} Rewrite the left side of the first equivalency as $\forall
q \quad q\le p \kern0.3em\Rightarrow\kern0.3em x(q)=0$, hence $\forall q \quad\lambda_p(q)=0
\kern0.3em\Rightarrow\kern0.3em x(q)=0$, therefore $x\le \lambda_p$ by virtue of (\ref{f130}).
The second equivalency is proved likewise. \hspace*{\fill}$\Box$
We shall focus on the `inner' characterization of the elements
$\lambda_p,\upsilon_p$ purely in terms of the lattice $P^*$ itself. To this end,
recall the necessary definitions.
Let $L$ be a complete lattice. An element $a\in L$ is called {\sc
join-irreducible} ({\sc meet-irreducible}) if it can not be
represented as the join (resp., meet) of a collection of elements
of $L$ different from $a$. To make this definition easier to verify,
introduce for every $a\in L$ the following elements of $L$:
\begin{equation}\label{f131}
\begin{array}{rcl}
\check{a} &=& \inf_L\{x\in L\mid\quad x>a\} \cr
\hat{a} &=& \sup_L\{y\in L\mid\quad y<a\}
\end{array}
\end{equation}
\noindent which do exist since $L$ is complete. Clearly,
$\check{a}\ge a \ge \hat{a}$ and the equivalencies
\begin{equation}\label{f132}
\begin{array}{rcl}
a\neq \check{a} &\Leftrightarrow & a\hbox{ is meet-irreducible} \cr
a\neq \hat{a} &\Leftrightarrow & a\hbox{ is join-irreducible}
\end{array}
\end{equation}
follow directly from the above definitions.
\begin{lemma} \label{l138} An element $w\in P^*$ is meet
irreducible if and only if it is equal to $\lambda_p$ for some $p\in P$.
Dually, $v\in P^*$ is join irreducible iff $v=\upsilon_p$ for some $p\in
P$.
\end{lemma}
\paragraph{Proof.} First prove that every $\lambda_p$ is meet irreducible. To do it
we shall use the criterion (\ref{f132}). Let $p\in P$. Define $u\in
P^*$ as:
\[ u(q) = \left\lbrace \begin{array}{rcl}
0 &,& q<p \cr
1 &,& \hbox{otherwise}
\end{array}
\right.
\]
then the following equivalency holds:
\begin{equation}\label{f135}
u\le x \kern0.3em\Leftrightarrow\kern0.3em ( \forall q \quad x(q)=0 \kern0.3em\Rightarrow\kern0.3em q<p )
\end{equation}
Now let $x>\lambda_p$; then $x(p)=1$ (otherwise (\ref{f132l}) would
give $x\le\lambda_p$). Moreover, $x>\lambda_p$ implies $x\ge\lambda_p$, hence
$\forall q \quad x(q)=0 \kern0.3em\Rightarrow\kern0.3em q\le p$; since $q=p$ is excluded,
we get exactly the right side of (\ref{f135}). That means
that
\[ u=\inf_{P^*}\{x\mid x>\lambda_p\} = {\check{\lambda}}_p \]
differs from $\lambda_p$, hence $\lambda_p$ is meet irreducible by virtue of
(\ref{f132}). The second dual statement is proved quite
analogously.
Conversely, suppose we have a meet irreducible $w\in P^*$; then,
according to (\ref{f132}), there exists $p\in P$ such that
$\check{w}(p)\neq 0$ while $w(p)=0$. The latter means $w\le \lambda_p$
for this $p$. To disprove $w<\lambda_p$ rewrite $\check{w}(p)\neq 0$
as \( \lnot(\inf\{x\mid w<x\}\quad \le\quad \lambda_p) \)
which is equivalent to
\[ \exists y (\forall x \quad w<x \Rightarrow y\le x) \quad \&
\quad \lnot(y\le \lambda_p) \]
In particular, it must hold for $x=\lambda_p$, thus the assumption
$w<\lambda_p$ implies \( \exists y \quad y\le \lambda_p \kern0.4em \&
\kern0.4em \lnot(y\le \lambda_p) \),
and the only remaining possibility for $w$ is to be equal to $\lambda_p$.
\hspace*{\fill}$\Box$
\paragraph{Dual statement.} The join irreducibles of $P^*$ are the
elements $\upsilon_p, p\in P$ and only they.
\section{Second dual and the isomorphism lemma} \label{s139}
Introduce the necessary definitions. Let $L$ be a lattice. An {\sc
ideal} in $L$ is a subset $K\subseteq L$ such that
\begin{itemize}
\item $k\in K, x\le k \kern0.3em\Rightarrow\kern0.3em x\in K$
\item $a,b\in K \kern0.3em\Rightarrow\kern0.3em a\lor b \in K$
\end{itemize}
Replacing $\le$ by $\ge$ and $\lor$ by $\land$ the notion of
{\sc filter} is introduced. An ideal (filter) $K\subseteq L$ is
called {\sc prime} if its set complement $L\setminus K$ is a
filter (resp., ideal) in $L$. Now return to the lattice $P^*$.
\begin{lemma}\label{l139} For any $p\in P$ both the principal ideal
$[0,\lambda_p]$ and the principal filter $[\upsilon_p,1]$ are prime in $P^*$.
Moreover,
\[ [\upsilon_p,1] = P^*\setminus [0,\lambda_p] \]
\end{lemma}
\paragraph{Proof.} Fix $p\in P$, then for any $x\in P^*$ the value $x(p)$
is either 0 (hence $x\le \lambda_p$) or 1 (and then $x\ge \upsilon_p$)
according to (\ref{f132l}). Since $\lambda_p$ never equals
$\upsilon_p$ (because their values at $p$ are different), the sets
$[\upsilon_p,1]$ and $[0,\lambda_p]$ are disjoint, which completes the proof.
\hspace*{\fill}$\Box$
The converse statement is formulated in the following lemma.
\begin{lemma} \label{l141} For any pair $u,v\in P^*$ such that
\begin{equation}\label{f141}
[0,u] = P^*\setminus [v,1]
\end{equation}
there exists an element $p\in P$ such that $u=\lambda_p$ and $v=\upsilon_p$.
\end{lemma}
\paragraph{Proof.} It follows from (\ref{f141}) that $u$ and $v$ are not
comparable, therefore $u\land v < v$. Thus there exists $p\in P$
such that $(u\land v)(p) =0$ while $v(p) = 1$. Then (\ref{f132l})
implies $u\land v \le \lambda_p$ and $v\ge \upsilon_p$. Suppose $v\neq \upsilon_p$,
then (\ref{f141}) implies $\upsilon_p\le u$, which together with $\upsilon_p\le
v$ implies $\upsilon_p \le u\land v \le \lambda_p$ which never holds since
$\upsilon_p$ and $\lambda_p$ are not comparable. So, we have to conclude that
$v=\upsilon_p$, thus $u=\lambda_p$. \hspace*{\fill}$\Box$
Now introduce the {\sc second dual} $P^{**}$ as the set of all
homomorphisms of complete lattices $P^*\to {\bf 2}$ preserving universal
bounds, that is, for any ${\bf p} \in P^{**}, K\subseteq P^*$
\[ \begin{array}{l}
{\bf p}(\sup K) = \sup_{k\in K}{\bf p}(k) \cr
{\bf p}(\inf K) = \inf_{k\in K}{\bf p}(k) \cr
{\bf p}({\bf 0}) = 0\hbox{ ; }{\bf p}({\bf 1}) = 1
\end{array} \]
with the pointwise partial order as in (\ref{f130p}).
Now we are ready to prove the following {\em isomorphism lemma}.
\begin{lemma} The partially ordered sets $P$ and $P^{**}$ are
isomorphic.
\end{lemma}
\paragraph{Proof.} Define the mapping $F:P\to P^{**}$ by putting
\[ F(p) = {\bf p}:\quad {\bf p}(x) = x(p) \qquad \forall x\in P^{*} \]
Evidently $F$ is an order preserving injection. To build the
inverse mapping $G:P^{**}\to P$, for any ${\bf p}\in P^{**}$ consider
the ideal ${\bf p}^{-1}(0)$ and the filter ${\bf p}^{-1}(1)$ in $P^*$
both being prime (see \cite{lt}, II.4). Let $u=\sup{\bf p}^{-1}(0)$
and $v=\inf{\bf p}^{-1}(1)$. Since ${\bf p}$ is the homomorphism of
complete lattices, $u\in {\bf p}^{-1}(0)$ and $v\in {\bf p}^{-1}(1)$,
hence ${\bf p}^{-1}(0) = [0,u]$ and ${\bf p}^{-1}(1) = [v,1]$. Applying
lemma \ref{l141} we see that there exists $p\in P$ such that $u=\lambda_p$
and $v=\upsilon_p$. Put $G({\bf p}) = p$. The mapping $G$ is order preserving
and injective (since the different principal ideals have different
suprema). It remains to prove that $F,G$ are mutually inverse.
Let $p\in P$ and consider $G(F(p))$. Denote ${\bf p} = F(p)$; then
${\bf p}^{-1}(0) = \{x\in P^*\mid x(p)=0\} = \{x\mid x\le \lambda_p\}$. Thus
$\sup{\bf p}^{-1}(0) = \lambda_p$, hence $G(F(p))=p$ and $G\circ F = {\rm id}_P$, which
completes the proof. \hspace*{\fill}$\Box$
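For a finite poset the whole construction can also be verified numerically: in the finite case a complete-lattice homomorphism is just a lattice homomorphism preserving the universal bounds, so $P^{**}$ can be enumerated by brute force. A minimal self-contained sketch (the test poset and all names are ours):
\begin{verbatim}
from itertools import product

P = ['a', 'b', 'c', 'd']                       # a<c, b<c, b<d
rel = {('a', 'c'), ('b', 'c'), ('b', 'd')}
def leq(p, q): return p == q or (p, q) in rel

# P*: monotone maps P -> {0,1}, encoded as bit tuples indexed like P
dual = [x for x in product([0, 1], repeat=len(P))
        if all(x[i] <= x[j] for i, p in enumerate(P)
               for j, q in enumerate(P) if leq(p, q))]
bot, top = (0,) * len(P), (1,) * len(P)
join = lambda x, y: tuple(max(a, b) for a, b in zip(x, y))
meet = lambda x, y: tuple(min(a, b) for a, b in zip(x, y))

# P**: maps P* -> {0,1} preserving joins, meets and both bounds
bidual = [h for bits in product([0, 1], repeat=len(dual))
          for h in [dict(zip(dual, bits))]
          if h[bot] == 0 and h[top] == 1
          and all(h[join(x, y)] == max(h[x], h[y]) and
                  h[meet(x, y)] == min(h[x], h[y])
                  for x in dual for y in dual)]

# F(p) is evaluation at p; it hits every element of P**
F = [{x: x[i] for x in dual} for i in range(len(P))]
assert len(bidual) == len(P) and all(f in bidual for f in F)
print('P** has', len(bidual), 'elements, as many as P')
\end{verbatim}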
\section*{Concluding remarks}
The results presented in this paper show that besides the well
known duality in partially ordered sets based on order reversal, we
can establish quite another kind of duality {\em \`a la} linear algebra.
As in the theory of linear topological spaces, we see that the
`reflexivity' expressed as $P=P^{**}$ can be achieved by appropriate
{\em definition} of dual space.
We see that a general partially ordered set has a dual
space which is a complete lattice. We also see that not every complete
lattice can play the r\^ole of dual for a poset. These complete
lattices can be characterized in terms of spaces with two closure
operations \cite{cr}. For the category of {\em ortho}posets this
construction was introduced in \cite{mayet}. Another approach to
dual spaces when they are treated as sets of two-valued measures
(in terms of this paper, as sub-posets of $P^{**}$) is in
\cite{tkadlec}. The main feature of the techniques suggested in the
present paper is that all the constructions are formulated in mere
terms of partially ordered sets and lattices.
The work was supported by the RFFI research grant (97-14.3-62).
The author acknowledges the financial support from the Soros
foundation (grant A97-996) and the research grant "Universities
of Russia".
\end{document}
\begin{document}
\title{Non-deterministic weighted automata \protect\\ evaluated over Markov chains\footnote{This paper has been published in Journal of Computer and System Sciences: \url{https://doi.org/10.1016/j.jcss.2019.10.001}}}
\begin{abstract}
We present the first study of non-deterministic weighted automata under probabilistic semantics.
In this semantics words are random events, generated by a Markov chain, and functions computed by weighted automata are random variables.
We consider the probabilistic questions of computing the expected value and the cumulative distribution for such random variables.
The exact answers to the probabilistic questions for non-deterministic automata can be irrational and are uncomputable in general.
To overcome this limitation, we propose approximation algorithms for the probabilistic questions, which work in exponential time in the size of the automaton and polynomial time in the size of the Markov chain and the given precision.
We apply this result to show that non-deterministic automata can be effectively determinised with respect to the standard deviation metric.
\end{abstract}
\newcommand{\introPara}[1]{\noindent\emph{#1}.}
\section{Introduction}
Weighted automata are (non-deterministic) finite automata in which transitions carry weights~\cite{Droste:2009:HWA:1667106}.
We study here weighted automata (on finite and infinite words) whose semantics is given by \emph{value functions} (such as the sum or the average)~\cite{quantitativelanguages}.
In such weighted automata transitions are labeled with rational numbers and hence every run yields a sequence of rationals, which the value function aggregates into a single
(real) number.
This number is the value of the run, and
the value of a word is the infimum over the values of all accepting runs on that word.
The value function approach has been introduced to express quantitative system properties (performance, energy consumption, etc.) and it serves as a foundation for \emph{quantitative verification}~\cite{quantitativelanguages,henzingerotop17}.
Basic decision questions for weighted automata are quantitative counterparts of the emptiness and universality questions obtained by imposing a threshold on the values of words.
\introPara{Probabilistic semantics}
The emptiness and the universality problems correspond to the best-case and the worst-case analysis. For the average-case analysis, weighted automata are considered under probabilistic semantics, in which words are random events
generated by a Markov chain~\cite{DBLP:conf/icalp/ChatterjeeDH09,lics16}.
In such a setting, functions from words to reals computed by deterministic weighted automata are measurable and hence can be considered as random variables.
The fundamental probabilistic questions are to compute \emph{the expected value} and \emph{the cumulative distribution} for a given automaton and a Markov chain.
\introPara{The deterministic case}
Weighted automata under probabilistic semantics have been studied only in the deterministic case.
A close relationship has been established between weighted automata under probabilistic semantics and weighted Markov chains~\cite{DBLP:conf/icalp/ChatterjeeDH09}.
For a weighted automaton $\mathcal{A}$ and a Markov chain $\mathcal{M}$ representing the distribution over words, the probabilistic problems for $\mathcal{A}$ and $\mathcal{M}$ coincide with the probabilistic problem of the weighted Markov chain $\mathcal{A} \times \mathcal{M}$.
Weighted Markov chains have been intensively studied with single and multiple quantitative objectives~\cite{BaierBook,DBLP:conf/concur/ChatterjeeRR12,filar,DBLP:conf/cav/RandourRS15}.
The above reduction does not extend to non-deterministic weighted automata \cite[Example~30]{lics16}.
\introPara{Significance of nondeterminism}
Non-deterministic weighted automata are provably more expressive than their deterministic counterpart~\cite{quantitativelanguages}.
Many important system properties
can be expressed with weighted automata only in the nondeterministic setting. This includes minimal response time, minimal number of errors and the edit distance problem~\cite{henzingerotop17}, which serves as the foundation for the \emph{specification repair} framework from~\cite{DBLP:conf/lics/BenediktPR11}.
Non-determinism can also arise as a result of abstraction. The exact systems are often too large and complex to operate on and hence they are approximated with
smaller non-deterministic models~\cite{clarke2016handbook}. The abstraction is especially important for multi-threaded programs, where the explicit model grows exponentially with the number of threads~\cite{DBLP:conf/popl/GuptaPR11}.
\paragraph*{Our contributions}
We study non-deterministic weighted automata under probabilistic semantics.
We work with weighted automata as defined in~\cite{quantitativelanguages}, where a value function $f$ is used to aggregate weights along a run, and
the value of the word is the infimum over values of all runs. (The infimum can be changed to supremum as both definitions are dual).
We primarily focus on the two most interesting value functions: the sum of weights over finite runs, and
the limit average over infinite runs.
The main results presented in this paper are as follows.
\begin{itemize}
\item We show that the answers to the probabilistic questions for weighted automata with the sum and limit-average value functions can be irrational and even transcendental (Theorem~\ref{th:irrational}) and cannot be computed by any effective representation (Theorem~\ref{th:limavg-undecidable}).
\item We establish approximation algorithms for the probabilistic questions for weighted automata with the sum and limit-average value functions.
The approximation is \textsc{\#P}-complete for (total) weighted automata with the sum value function (Theorem~\ref{th:approximation-sum}), and it
is $\PSPACE$-hard and solvable in exponential time for weighted automata with the limit-average value function (Theorem~\ref{th:approximation-limavg}).
\item We show that weighted automata with the limit-average value function can be approximately determinised (Theorem~\ref{th:approximateDeterminisation}).
Given an automaton $\mathcal{A}$
and $\epsilon >0$, we show how to compute a deterministic automaton $\mathcal{A}_D$ such that the expected difference between the values returned by both automata is at most $\epsilon$.
\end{itemize}
\paragraph*{Applications}
We briefly discuss applications of our contributions in quantitative verification.
\begin{itemize}
\item The expected-value question corresponds to the average-case analysis in quantitative verification~\cite{DBLP:conf/icalp/ChatterjeeDH09,lics16}.
Using results from this paper, we can perform the average-case analysis with respect to quantitative specifications given by non-deterministic weighted automata.
\item Some quantitative-model-checking frameworks~\cite{quantitativelanguages} are based on the universality problem for non-deterministic automata, which asks whether all words have the value below a given threshold.
Unfortunately, the universality problem is undecidable for weighted automata with the sum or the limit average values functions.
The distribution question can be considered as a computationally-attractive variant of universality, i.e., we ask whether almost all words have value below some given threshold.
We show that if the threshold can be approximated, the distribution question can be computed effectively.
\item Weighted automata have been used to formally study online algorithms~\cite{aminof2010reasoning}. Online algorithms have been modeled by deterministic weighted automata, which make choices based solely on the past, while
offline algorithms have been modeled by non-deterministic weighted automata.
Relating deterministic and non-deterministic models allowed for formal verification of the worst-case competitiveness ratio of online algorithms.
Using the result from our paper, we can extend the analysis from~\cite{aminof2010reasoning} to the average-case competitiveness.
\end{itemize}
\paragraph*{Related work} The problem considered in this paper is related to the following areas from the literature.
\noindent\emph{Probabilistic verification of qualitative properties}.
Probabilistic verification asks for the probability of the set of traces satisfying a given property.
For non-weighted automata, it has been extensively studied~\cite{DBLP:conf/focs/Vardi85,DBLP:journals/jacm/CourcoubetisY95,
BaierBook} and implemented~\cite{DBLP:journals/entcs/KwiatkowskaNP06,DBLP:conf/tacas/HintonKNP06}.
The prevalent approach in this area is to work with deterministic automata, and apply determinisation as needed.
To obtain better complexity bounds, the probabilistic verification problem has been directly studied for unambiguous B\"uchi automata in~\cite{DBLP:conf/cav/BaierK0K0W16};
the authors explain there the potential pitfalls in the probabilistic analysis of non-deterministic automata.
\noindent\emph{Weighted automata under probabilistic semantics}.
Probabilistic verification of weighted automata and their extensions has been studied in~\cite{lics16}.
All automata considered there are deterministic.
\noindent\emph{Markov Decision Processes (MDPs)}. MDPs are a classical extension of Markov chains, which models control in a stochastic environment~\cite{BaierBook,filar}.
In MDPs, probabilistic and non-deterministic transitions are interleaved;
this can be explained as a game between two players: Controller and Environment.
Given a game objective (e.g. state reachability), the goal of Controller is to maximize the probability of the objective by selecting non-deterministic transitions.
Environment is selecting probabilistic transitions at random w.r.t. a probability distribution described in the current state of the MDP.
Intuitively, the non-determinism in MDPs is resolved based on the past, i.e., each time Controller selects a non-deterministic transition, its choice is based on previously picked transitions.
Our setting can be also explained in such a game-theoretic framework: first, Environment generates a complete word, and only then non-deterministic choices are resolved by Controller, who generates a run of a given weighted automaton.
Thus, the non-deterministic choice in a run at some position $i$ may depend on letters of the input word at positions past $i$ (i.e., on future events).
Partially Observable Markov Decision Process (POMDPs)~\cite{ASTROM1965174} are an extension of MDPs, which models weaker non-determinism.
In this setting, the state space is partitioned into \emph{observations} and the non-deterministic choices have to be the same for sequences consisting of the same observations (but possibly different states).
Intuitively, Controller can make choices based only on the sequence of observations it has seen so far.
While in POMDPs Controller is restricted, in our setting Controller is stronger than in the MDPs case.
\noindent\emph{Non-deterministic probabilistic automata}.
The combination of nondeterminism with stochasticity has been recently studied in the framework of probabilistic automata~\cite{nondet-prob}.
Non-deterministic probabilistic automata (NPA) have been defined there, and two possible semantics for NPA have been proposed.
It has been shown that the equivalence problem for NPA is undecidable (under either of the two considered semantics).
Related problems, such as the threshold problem, are undecidable already for (deterministic) probabilistic automata~\cite{bertoni1977some}.
While NPA work only over finite words, the interaction between probabilistic and non-deterministic transitions is more general than in our framework.
In particular, non-determinism in NPA can influence the probability distribution, which is not possible in our framework.
\noindent\emph{Approximate determinisation}.
As weighted automata are not determinisable, Boker and Henzinger~\cite{BokerH12} studied \emph{approximate} determinisation defined as follows.
The distance $d_{\sup}$ between weighted automata $\mathcal{A}_1, \mathcal{A}_2$ is defined as $d_{\sup}(\mathcal{A}_1, \mathcal{A}_2) = \sup_{w} | \mathcal{A}_1(w) - \mathcal{A}_2(w)|$.
A nondeterministic weighted automaton $\mathcal{A}$ can be \emph{approximately} determinised if for every $\epsilon >0$
there exists a deterministic automaton $\mathcal{A}_D$ such that $d_{\sup}(\mathcal{A}, \mathcal{A}_D) \leq \epsilon$.
Unfortunately, weighted automata with the limit average value function cannot be approximately determinised~\cite{BokerH12}.
In this work we show that the approximate determinisation is possible for the standard deviation metric $d_{\mathrm{std}}$
defined as $d_{\mathrm{std}}(\mathcal{A}_1, \mathcal{A}_2) = \mathbb{E}(|\mathcal{A}_1(w) - \mathcal{A}_2(w)|)$.
This paper is an extended and corrected version of~\cite{concur2018}. It contains full proofs, an extended discussion and a stronger version of Theorem~\ref{th:irrational}.
We showed in~\cite{concur2018} that the expected values and the distribution values may be irrational.
In this paper we show that these values can even be transcendental (Theorem~\ref{th:irrational}).
We have corrected two claims from~\cite{concur2018}.
First, we have corrected the statements of Theorems~\ref{th:irrational} and~\ref{th:limavg-undecidable}. For $\textsc{LimAvg}$-automata and the distribution question $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda)$,
the values that can be irrational and uncomputable are not the values of the probability $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda) = \mathbb{P}_{\mathcal{M}}(\set{w \mid \valueL{\mathcal{A}}(w) \leq \lambda})$, but the values of the threshold $\lambda$ that correspond to mass points, i.e.,
values $\lambda$ such that $\mathbb{P}_{\mathcal{M}}(\set{w \mid \valueL{\mathcal{A}}(w) = \lambda}) > 0$.
We have also removed from Theorem~\ref{th:all-pspace-hard} the PSPACE-hardness claim for the distribution question for (non-total) $\textsc{Sum}$-automata.
We show that the (exact) distribution question for all $\textsc{Sum}$-automata is \textsc{\#P}-complete.
\section{Preliminaries}
Given a finite alphabet $\Sigma$ of letters, a \emph{word} $w$ is a finite or infinite sequence
of letters.
We denote the set of all finite words over $\Sigma$ by $\Sigma^*$, and the set of all infinite words over $\Sigma$ by $\Sigma^\omega$.
For a word $w$, we define $w[i]$ as the $i$-th letter of $w$, and we define $w[i,j]$ as the subword $w[i] w[i+1] \ldots w[j]$ of $w$.
We use the same notation for other sequences defined later on.
By $|w|$ we denote the length of $w$.
A \emph{(non-deterministic) finite automaton} (NFA) is a tuple $(\Sigma, Q, Q_0, F, \delta)$
consisting of
an input alphabet $\Sigma$,
a finite set of states $Q$,
a set of initial states $Q_0 \subseteq Q$,
a set of final states $F$, and
a finite transition relation $\delta \subseteq Q \times \Sigma \times Q$.
We define $\delta(q,a) = \set{q' \in Q \mid (q,a,q') \in \delta}$
and $\delta(S,a) = \bigcup_{q \in S} \delta(q,a)$.
We extend this to words $\widehat{\delta} \colon 2^Q \times \Sigma^* \to 2^Q$ in the following way:
$\widehat{\delta}(S,\epsilon) = S$ (where $\epsilon$ is the empty word) and
$\widehat{\delta}(S,aw) = \widehat{\delta}(\delta(S,a),w)$, i.e., $\widehat{\delta}(S,w)$ is the set of states reachable from $S$ via $\delta$ over the word $w$.
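As a quick illustration, $\widehat{\delta}$ is the usual one-step subset propagation. A minimal sketch (the toy NFA and the names are ours):
\begin{verbatim}
def delta_hat(delta, S, w):
    # states reachable from S via delta over the word w
    for a in w:
        S = {t for (q, b, t) in delta if q in S and b == a}
    return S

# toy NFA over {a, b} with a non-deterministic 'a'-transition
delta = {(0, 'a', 0), (0, 'a', 1), (1, 'b', 0)}
print(delta_hat(delta, {0}, 'aab'))   # {0}
\end{verbatim}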
\Paragraph{Weighted automata}
A \emph{weighted automaton} is a finite automaton whose transitions are labeled by rational numbers called \emph{weights}.
Formally, a weighted automaton is a tuple
$(\Sigma, Q, Q_0, F, \delta, {C})$, where the first five elements are as in the finite automata, and ${C} \colon \delta \to \mathbb{Q}$ is a function that defines \emph{weights} of transitions.
An example of a weighted automaton is depicted in Figure~\ref{fig:aut}.
The size of a weighted automaton $\mathcal{A}$, denoted by $|\mathcal{A}|$,
is $|Q| + |\delta| + \sum_{q, q', a} \mathrm{len}(C(q, a, q'))$, where $\mathrm{len}$ is the sum of the lengths of the binary representations of the numerator and the denominator of a given rational number.
A \emph{run} $\pi$ of an automaton $\mathcal{A}$ on a word $w$ is a sequence of states $\pi[0] \pi[1] \dots$ such that $\pi[0]$ is an initial state and for each $i$ we have $(\pi[i-1],w[i],\pi[i]) \in \delta$.
A finite run $\pi$ of length $k$ is \emph{accepting} if and only if the last state $\pi[k]$ belongs to the set of accepting states $F$.
As in~\cite{quantitativelanguages}, we do not consider $\omega$-accepting conditions and assume that all infinite runs are accepting.
Every run $\pi$ of an automaton $\mathcal{A}$ on a (finite or infinite) word $w$ defines a sequence of weights
of successive transitions of $\mathcal{A}$ as follows.
Let $({C}(\pi))[i]$ be the weight of the $i$-th transition,
i.e., ${C}(\pi[i-1], w[i], \pi[i])$.
Then, ${C}(\pi)=({C}(\pi)[i])_{1\leq i \leq |w|}$.
A \emph{value function} $f$ is a function that
assigns real numbers to sequences of rational numbers.
The value $f(\pi)$ of the run $\pi$ is defined as $f({C}(\pi))$.
The value of a (non-empty) word $w$ assigned by the automaton $\mathcal{A}$, denoted by $\valueL{\mathcal{A}}(w)$,
is the infimum of the set of values of all accepting runs on $w$. The value of a word that has no (accepting) runs is infinite.
To indicate a particular value function $f$ that defines the semantics,
we will call a weighted automaton $\mathcal{A}$ an $f$-automaton.
\Paragraph{Value functions}
We consider the following value functions. For finite runs, functions $\textsc{Min}$ and $\textsc{Max}$ are defined in the usual manner, and the function $\textsc{Sum}$ is defined as
\[\textsc{Sum}(\pi) = \sum\nolimits_{i=1}^{|C(\pi)|} ({C}(\pi))[i]\]
For infinite runs we consider the supremum $\textsc{Sup}$ and infimum $\textsc{Inf}$ functions (defined like $\textsc{Max}$ and $\textsc{Min}$ but on infinite runs) and the limit average function $\textsc{LimAvg}$ defined as
\[\textsc{LimAvg}(\pi) = \limsup\limits_{k \rightarrow \infty} \favg{\pi[0, k]} \]
where for finite runs $\pi$ we have \(\favg{\pi}=\frac{\textsc{Sum}(\pi)}{|C(\pi)|}\).
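On a finite prefix of a run these definitions amount to elementary arithmetic; the following small sketch (names are ours) computes $\textsc{Sum}$ and the running averages whose $\limsup$ is $\textsc{LimAvg}$:
\begin{verbatim}
def value_sum(weights):
    return sum(weights)

def prefix_averages(weights):
    # Avg(pi[0,k]) for k = 1..len(weights); LimAvg is their limsup
    total, out = 0, []
    for i, c in enumerate(weights, start=1):
        total += c
        out.append(total / i)
    return out

print(value_sum([1, -2, 3]))          # 2
print(prefix_averages([1, 0, 1, 0]))  # [1.0, 0.5, 0.666..., 0.5]
\end{verbatim}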
\subsection{Probabilistic semantics}
A (finite-state discrete-time) \emph{Markov chain} is a tuple $\tuple{\Sigma,S,s_0,E}$,
where $\Sigma$ is the alphabet of letters,
$S$ is a finite set of states, $s_0$ is an initial state,
$E \colon S \times \Sigma \times S \mapsto [0,1]$ is an edge probability function, which
for every $s \in S$ satisfies that $\sum_{a \in \Sigma, s' \in S} E(s,a,s') = 1$.
An example of a single-state Markov chain is depicted in Figure~\ref{fig:aut}.
In this paper, Markov chains serve as a mathematical model as well as the input to algorithms.
Whenever a Markov chain is the input to a problem or an algorithm, we assume that all edge probabilities are rational and the size of a Markov chain $\mathcal{M}$ is defined as
$|\mathcal{M}|=|S|+|E|+\sum_{q, q', a}\mathrm{len}(E(q, a, q'))$.
The probability of a finite word $u$ w.r.t.\ a Markov chain $\mathcal{M}$, denoted by $\mathbb{P}_{\mathcal{M}}(u)$, is the sum of probabilities of paths from $s_0$ labeled by $u$,
where the probability of a path is the product of probabilities of its edges.
For sets $u\cdot \Sigma^\omega = \{ uw \mid w \in \Sigma^{\omega} \}$, called \emph{cylinders},
we have $\mathbb{P}_{\mathcal{M}}(u\cdot \Sigma^\omega)=\mathbb{P}_{\mathcal{M}}(u)$, and then the
probability measure over infinite words defined by $\mathcal{M}$ is the unique
extension of the above measure to the $\sigma$-algebra generated by cylinders (by Carath\'{e}odory's extension theorem~\cite{feller}).
We will denote the unique probability measure defined by $\mathcal{M}$ as $\mathbb{P}_{\mathcal{M}}$.
For example, for the Markov chain $\mathcal{M}$ presented in Figure~\ref{fig:aut}, we have that $\mathbb{P}_{\mathcal{M}}(ab) =
\frac{1}{4}$, and so $\mathbb{P}_{\mathcal{M}}(\set{w \in \set{a,b}^\omega \mid w[0,1]=ab})=\frac{1}{4}$, whereas $\mathbb{P}_{\mathcal{M}}(X)=0$ for any countable set of infinite words $X$.
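The probability of a finite word can be computed by propagating a distribution over states one letter at a time. A minimal sketch (the encoding and names are ours; the single-state chain below is assumed to be uniform, consistently with the probabilities just given):
\begin{verbatim}
from collections import defaultdict

def prob_word(E, s0, u):
    # P_M(u): sum over paths from s0 labelled u of the products of
    # edge probabilities; E maps (state, letter) to [(state', prob)]
    dist = {s0: 1.0}
    for a in u:
        nxt = defaultdict(float)
        for s, p in dist.items():
            for t, q in E.get((s, a), []):
                nxt[t] += p * q
        dist = nxt
    return sum(dist.values())

M = {(0, 'a'): [(0, 0.5)], (0, 'b'): [(0, 0.5)]}
print(prob_word(M, 0, 'ab'))   # 0.25
\end{verbatim}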
A function $f \colon \Sigma^{\omega} \to \mathbb{R}$ that is measurable w.r.t. $\mathbb{P}_{\mathcal{M}}$ is called a \emph{random variable} (w.r.t. $\mathbb{P}_{\mathcal{M}}$).
A random variable $g$ is \emph{discrete} if there exists a countable set $Y \subset \mathbb{R}$ such that $g$ returns a value in $Y$ with probability $1$ ($\mathbb{P}_{\mathcal{M}}(\set{w \mid g(w) \in Y}) = 1$).
For a discrete random variable $g$, we define the \emph{expected value} $\mathbb{E}_{\mathcal{M}}(g)$ (w.r.t. the measure $\mathbb{P}_{\mathcal{M}}$) as
\[
\mathbb{E}_{\mathcal{M}}(g) = \sum_{y \in Y} y \cdot \mathbb{P}_{\mathcal{M}}(\set{w \mid g(w) = y}).
\]
Every non-negative random variable $h \colon \Sigma^{\omega} \to \mathbb{R}^+$ is a point-wise limit of some sequence of monotonically increasing discrete random variables $g_1, g_2, \ldots$ and the expected value $\mathbb{E}_{\mathcal{M}}(h)$ is the limit
of expected values $\mathbb{E}_{\mathcal{M}}(g_i)$~\cite{feller}. Finally, every random variable $f$ can be presented as the difference $h_1 - h_2$ of non-negative random variables $h_1, h_2$ and we have
$\mathbb{E}_{\mathcal{M}}(f) = \mathbb{E}_{\mathcal{M}}(h_1) - \mathbb{E}_{\mathcal{M}}(h_2)$~\cite{feller}.
A \emph{terminating} Markov chain $\mathcal{M}^T$ is
a tuple $\tuple{\Sigma,S,s_0,E, T}$,
where $\Sigma$, $S$ and $s_0$ are as usual,
$E \colon S \times (\Sigma\cup \set{\epsilon}) \times S \mapsto [0,1]$ is the edge probability function, such that if $E(s, a, t)$, then $a=\epsilon$ if and only if $t\in T$, and
for every $s \in S$ we have $\sum_{a \in \Sigma\cup \set{\epsilon}, s' \in S} E(s,a,s') = 1$,
and
$T$ is a set of terminating states such that the probability of reaching a terminating state from any state $s$ is positive.
Notice that the only $\epsilon$-transitions in a terminating Markov chain are those that lead to a terminating state.
The probability of a finite word $u$ w.r.t. $\mathcal{M}^T$, denoted $\mathbb{P}_{\mathcal{M}^T}(u)$, is the sum of probabilities of paths from $s_0$ labeled by $u$ such that the only terminating state on this path is the last one.
Notice that $\mathbb{P}_{\mathcal{M}^T}$ is a probability distribution on finite words whereas $\mathbb{P}_{\mathcal{M}}$ is not (because the sum of probabilities may exceed 1).
A function $f \colon \Sigma^{*} \to \mathbb{R}$ is called a \emph{random variable} (w.r.t. $\mathbb{P}_{\mathcal{M}^T}$).
Since words generated by $\mathcal{M}^T$ are finite, the co-domain of $f$ is countable and hence $f$ is discrete. The expected value of $f$ w.r.t. $\mathcal{M}^T$ is defined in the same way as for non-terminating Markov chains.
\Paragraph{Automata as random variables}
An infinite-word weighted automaton $\mathcal{A}$ defines the function $\valueL{\mathcal{A}}$ that assigns each word from $\Sigma^{\omega}$ its value $\valueL{\mathcal{A}}(w)$.
This function is measurable for all the automata types we consider in this paper (see Remark~\ref{rem:measurability} below).
Thus, this function can be interpreted as a random variable with respect to the probabilistic space we consider.
Hence, for a given automaton $\mathcal{A}$ (over infinite words) and a Markov chain $\mathcal{M}$, we consider the following quantities:
\noindent\fbox{\parbox{0.96\textwidth}{
$\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ --- the expected value of
the random variable $\valueL{\mathcal{A}}$ w.r.t. the measure $\mathbb{P}_{\mathcal{M}}$.
\\
$\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda) = \mathbb{P}_{\mathcal{M}}(\{w \mid \valueL{\mathcal{A}}(w) \leq \lambda \})$ --- the
(cumulative) distribution function of $\valueL{\mathcal{A}}$ w.r.t. the measure $\mathbb{P}_{\mathcal{M}}$.
}}
In the finite words case, the expected value $\mathbb{E}_{\mathcal{M}^T}$ and the distribution $\mathbb{D}_{\mathcal{M}^T, \mathcal{A}}$ are defined in the same manner.
\begin{nremark}[Bounds on the expected value and the distribution]
Both quantities can be easily bounded: the value of the distribution function $\mathbb{D}_{\mathcal{M}, \mathcal{A}}$ is always between $0$ and $1$.
For a $\textsc{LimAvg}$-automaton $\mathcal{A}$, we have $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) \in [\min_\mathcal{A}, \max_\mathcal{A}] \cup \set{\infty}$, where $\min_\mathcal{A}$ and $\max_\mathcal{A}$ denote the minimal and the maximal weight of $\mathcal{A}$ and $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \infty$ if and only if the probability of the set of words with no accepting runs in $\mathcal{A}$ is positive.
{Note that we consider no $\omega$-accepting conditions, and hence all infinite runs of $\textsc{LimAvg}$-automata are accepting, but there can be infinite words, on which a given $\textsc{LimAvg}$-automaton has no infinite runs. }
For a $\textsc{Sum}$-automaton $\mathcal{A}$, we have $\mathbb{E}_{\mathcal{M}^T}(\mathcal{A}) \in [L_{\mathcal{M}^T} \cdot \min_\mathcal{A}, L_{\mathcal{M}^T} \cdot \max_\mathcal{A}] \cup \set{\infty}$, where $L_{\mathcal{M}^T}$ is the expected length of a word generated by $\mathcal{M}^T$
(it can be computed in a standard way~\cite[Section 11.2]{Grinstead12})
and, as above, $\mathbb{E}_{\mathcal{M}^T}(\mathcal{A}) = \infty$ if and only if there is a finite word $w$ generated by $\mathcal{M}^T$ with non-zero probability such that $\mathcal{A}$ has no accepting runs on $w$.
We show in Section~\ref{sec:irrational} that the distribution and expected value may be irrational, even for integer weights and uniform distributions.
\end{nremark}
\begin{nremark}[Measurability of functions represented by automata]
\label{rem:measurability}
For automata on finite words, $\textsc{Inf}$-automata and $\textsc{Sup}$-automata, measurability of $\valueL{\mathcal{A}}$ is straightforward.
To show that $\valueL{\mathcal{A}}(w) \colon \Sigma^\omega \mapsto \mathbb{R}$ is measurable for any non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$, it suffices to show that for every $x \in \mathbb{R}$, the preimage $\valueL{\mathcal{A}}^{-1}(-\infty,x]$ is measurable.
Let $Q$ be the set of states of $\mathcal{A}$. We define $A_x \subseteq \Sigma^{\omega} \times Q^\omega$ as the set of pairs consisting of
a word and a run on it such that the value of the run is less than or equal to $x$.
We show that $A_x$ is Borel. For $p \in \mathbb{N}$, let $B_x^p$ be the subset of $\Sigma^{\omega} \times Q^\omega$
of pairs $(w, \pi)$
such that up to position $p$ the sequence $\pi$ is a run on $w$ and the average of weights up to $p$ is at most $x$.
Observe that $B_x^p$ is an open set and $A_x$ is equal to $\bigcap_{\epsilon \in \mathbb{Q}^+} \bigcup_{p_0 \in \mathbb{N}} \bigcap_{p \geq p_0} B_{x+\epsilon}^p$, i.e.,
$A_x$ consists of pairs $(w, \pi)$ satisfying that for every $\epsilon\in\mathbb{Q}^+$ there exists $p_0$ such that for all $p\geq p_0$ the average
weight of $\pi$ at $p$ does not exceed $x+\epsilon$ and $\pi$ is a run on $w$ (each finite prefix is a run).
Finally, $\valueL{\mathcal{A}}^{-1}(-\infty,x]$ is the projection of $A_x$ on the first component $\Sigma^{\omega}$.
The projection of a Borel set is an \emph{analytic set}, which is measurable~\cite{kechris}.
Thus, $\valueL{\mathcal{A}}$ defined by a non-deterministic $\textsc{LimAvg}$-automaton is measurable.
The above proof of measurability requires some knowledge of descriptive set theory.
We will give a direct proof of measurability of $\valueL{\mathcal{A}}$ in the paper (Theorem~\ref{th:approximation-limavg}).
\end{nremark}
\subsection{Computational questions} We consider the following basic computational questions:
\noindent\fbox{\parbox{0.96\textwidth}{
\emph{The expected value question}: Given an $f$-automaton $\mathcal{A}$ and a (terminating) Markov chain $\mathcal{M}$, compute $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$.
\emph{The distribution question}: Given an $f$-automaton $\mathcal{A}$, a (terminating) Markov chain $\mathcal{M}$ and a threshold $\lambda \in \mathbb{Q}$, compute $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$.
}}
Each of the above questions have its decision variant (useful for lower bounds), where instead of computing the value we ask whether the value is less than a given threshold $t$.
The above questions have their approximate variants:
\noindent\fbox{\parbox{0.96\textwidth}{
\emph{The approximate expected value question}:
Given an $f$-automaton $\mathcal{A}$, a (terminating) Markov chain $\mathcal{M}$, $\epsilon \in \mathbb{Q}^+$, compute a number $y \in \mathbb{Q}$ such that $|y - \mathbb{E}_{\mathcal{M}}(\mathcal{A})| \leq \epsilon$.
\emph{The approximate distribution question}:
Given an $f$-automaton $\mathcal{A}$, a (terminating) Markov chain $\mathcal{M}$, a threshold $\lambda \in \mathbb{Q}$ and
$\epsilon \in \mathbb{Q}^+$
compute a number $y \in \mathbb{Q}$ which belongs to $[\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda-\epsilon)-\epsilon, \mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda+\epsilon)+\epsilon]$.
}}
\begin{nremark}
The notion of approximation for the distribution question is based on the Skorokhod metric~\cite{billingsley2013convergence}.
Let us compare here this notion with two possible alternatives: the \emph{inside approximation}, where
$y$ belongs to $[\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda-\epsilon), \mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda+\epsilon)]$,
and the \emph{outside approximation}, where
$y$ belongs to $[\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)-\epsilon, \mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)+\epsilon]$.
The outside approximation is reasonable for $\textsc{Sum}$-automata, where the exact value of the probability is hard to compute, but for the $\textsc{LimAvg}$-automata its complexity is the same as computing the exact value
(because the latter is difficult already for automata which return the same value for almost all words, as shown in Remark~\ref{r:ultimatelyperiodic}).
For the inside approximation, it is the other way round: for $\textsc{Sum}$-automata it makes little sense as the problem is undecidable even for automata returning integer values, but for $\textsc{LimAvg}$-automata it is a reasonable definition as the returned values can be irrational.
We chose a definition that works for both types of automata.
However, the results we present can be easily adjusted to work in the case of the outside approximation for $\textsc{Sum}$-automata and in the case of the inside approximation for the $\textsc{LimAvg}$-automata.
\end{nremark}
\begin{figure}
\caption{The automaton $\mathcal{A}$ and the Markov chain $\mathcal{M}$.}
\label{fig:aut}
\end{figure}
\section{Basic properties}
Consider an $f$-automaton $\mathcal{A}$, a Markov chain $\mathcal{M}$ and a set of words $X$. We denote by $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid X)$ the expected value of $\mathcal{A}$ w.r.t. $\mathcal{M}$ restricted only to words in the set $X$ (see \cite{feller}).
The following says that we can disregard a set of words with probability $0$ (e.g. containing only some of the letters under uniform distribution) while computing the expected value.
\begin{fact}\label{f:equal-expected}
If $\mathbb{P}(X)=1$ then $\mathbb{E}_\mathcal{M}(\mathcal{A}) = \mathbb{E}_\mathcal{M}(\mathcal{A} \mid X)$.
\end{fact}
The proof is rather straightforward; the only interesting case is when there are some words not in $X$ with infinite values.
But for all the functions we consider, one can show that in this case there is a set of words with infinite value that has a non-zero probability, and therefore $\mathbb{E}_\mathcal{M}(\mathcal{A}) = \mathbb{E}_\mathcal{M}(\mathcal{A} \mid X)=\infty$.
One corollary of Fact \ref{f:equal-expected} is that if $\mathcal{M}$ is, for example, uniform, then, since the set $Y$ of ultimately-periodic words (i.e., words of the form $vw^\omega$) is countable and hence has probability $0$,
we have $\mathbb{E}_\mathcal{M}(\mathcal{A}) = \mathbb{E}_\mathcal{M}(\mathcal{A} \mid \Sigma^\omega \setminus Y)$.
This suggests that the values of ultimately-periodic words might not be representative for an automaton.
We exemplify this in Remark~\ref{r:ultimatelyperiodic}, where we show an automaton whose value is irrational for almost all words, yet rational for all ultimately-periodic words.
\subsection{Example of computing expected value by hand}\label{s:example}
Consider a $\textsc{LimAvg}$-automaton $\mathcal{A}$ and a Markov chain $\mathcal{M}$ depicted in Figure~\ref{fig:aut}. We encourage the reader to take a moment to study this automaton and try to figure out its expected value.
The idea behind $\mathcal{A}$ is as follows.
Assume that $\mathcal{A}$ is in a state $q_l$ for some $l \in \{a,b\}$. Then, it reads a word up to the first occurrence of a subword $ba$, where it has a possibility to go to $q_x$ and then to non-deterministically choose $q_a$ or $q_b$ as the next state.
Since going to $q_x$ and back to $q_l$ costs the same as staying in $q_l$, we will assume that the automaton always goes to $q_x$ in such a case.
When the automaton is in state $q_x$ and has to read a word $w=a^jb^k$, the average cost of a run on $w$ is $\frac{j}{j+k}$ if the run goes to $q_b$ and $\frac{k}{j+k}$ otherwise.
So the run with the lowest value is the one that goes to $q_a$ if $j>k$ and $q_b$ otherwise.
To compute the expected value of the automaton, we focus on the set $X$ of words $w$ such that for each positive $n \in \mathbb{N}$ there are only finitely many prefixes of $w$ of the form $w'a^jb^k$ such that $\frac{j+k}{|w'|+j+k} \geq \frac{1}{n}$. Notice that this means that $w$ contains infinitely many $a$ and infinitely many $b$.
It can be proved in a standard manner that $\mathbb{P}_\mathcal{M}(X)=1$.
Let $w \in X$ be a random event, which is a word generated by $\mathcal{M}$.
Since $w$ contains infinitely many letters $a$ and $b$, it can be partitioned in the following way.
Let $w=w_1w_2w_3 \dots$ be a partition of $w$ such that each $w_i$ for $i>0$ is of the form $a^jb^k$ for $j\geq 0, k>0$, and for $i>1$ we also have $j>0$.
For example, the partition of $w=baaabbbaabbbaba\dots$ is such that $w_1=b$, $w_2=aaabbb$, $w_3=aabbb$, $w_4=ab$, \dots. Let $s_i=|w_1w_2 \dots w_i|$.
We now define a run $\pi_w$ on $w$ as follows:
\[
q^w_1 \dots q^w_1 q_x q^w_2\dots q^w_2q_xq^w_3 \dots q^w_3 q_x q^w_4 \dots
\]
where the length of each block of $q^w_i$ is $|w_i|-1$, $q^w_0=q_a$, and
$q^w_i=q_a$ if $w_i=a^jb^k$ for some $j>k$, and $q^w_i=q_b$ otherwise.
It can be shown by a careful consideration of all possible runs that this run's value is the infimum of values of all the runs on this word.
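For concreteness, the partition and the choice of states can be phrased algorithmically. The following Python sketch is illustrative only: the helper names are ours, and it uses the block cost $\min(j,k)$ justified in the proof of Lemma~\ref{l:best-run} below instead of simulating the transitions of $\mathcal{A}$.
\begin{verbatim}
import re

def blocks(prefix):
    # Maximal blocks a^j b^k; the first block may have j = 0, and a
    # trailing block not yet followed by an 'a' is incomplete and dropped.
    return [(len(m.group(1)), len(m.group(2)))
            for m in re.finditer(r"(a*)(b+)(?=a)", prefix)]

def run_states(prefix):
    # q_a if j > k and q_b otherwise, as in the definition of pi_w.
    return ["q_a" if j > k else "q_b" for (j, k) in blocks(prefix)]

def run_avg(prefix):
    # Average cost of pi_w, assuming each block a^j b^k contributes
    # min(j, k) to the sum of costs.
    bs = blocks(prefix)
    total = sum(j + k for j, k in bs)
    return sum(min(j, k) for j, k in bs) / total if total else 0.0

print(blocks("baaabbbaabbbaba"))      # [(0, 1), (3, 3), (2, 3), (1, 1)]
print(run_states("baaabbbaabbbaba"))  # ['q_b', 'q_b', 'q_b', 'q_b']
\end{verbatim}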
\begin{lemma}\label{l:best-run}
For every $w \in X$ we have $\valueL{\mathcal{A}}(w) = \textsc{LimAvg}(\pi_w)$.
\end{lemma}
\begin{proof}
We show that for every accepting run $\pi$ on $w \in X$ we have $\textsc{LimAvg}(\pi_w) \leq \textsc{LimAvg}(\pi)$. It follows that $\valueL{\mathcal{A}}(w) = \textsc{LimAvg}(\pi_w)$.
Consider a run $\pi$ of $\mathcal{A}$ on $w$.
The cost of a run over $w_i=a^jb^k$ is at least $\min(j, k)-1$, which is reached by $\pi_w$, therefore
for every $i\in \mathbb{N}$ we have
\begin{equation}\label{e:property}
\favg{\pi_w[0,s_i]} \leq \favg{\pi[0,s_i]}.
\end{equation}
It may happen, however, that for some $p$, the value of $\favg{\pi[0,p]}$ is less than $\favg{\pi_w[0,p]}$; for example, for a word starting with $baaabbbb$, we have
$\pi_w[0,4]=q_aq_xq_bq_bq_b$ and $\favg{\pi_w[0,4]}$ is $\frac{1}{2}$, but for a run $\pi'=q_aq_xq_aq_aq_a\dots$ we have $\favg{\pi'[0,4]}=0$.
For arbitrary words, a run that never visits $q_b$ may have a better value.
We show, however, that for words from $X$ this is not the case.
We show that for any position $p$ such that $s_i<p<s_{i+1}$,
\begin{equation}\label{e:propertyTwo}
\favg{\pi_w[0,p]} \leq \favg{\pi[0,s_i]} + \frac{p-s_i}{p}
\end{equation}
Observe that
\[
\begin{split}
\favg{\pi_w[0,p]}
= \frac{\textsc{Sum}(\pi_w[0,p])}{p} &\leq \frac{\textsc{Sum}(\pi_w[0,s_i])}{s_i} + \frac{\textsc{Sum}(\pi_w[s_i,p])}{p} \\ &= \favg{\pi_w[0, s_i]} + \frac{\textsc{Sum}(\pi_w[s_i,p])}{p}.
\end{split}
\]
By \eqref{e:property} and the fact that the weights of the automaton do not exceed 1,
we obtain
\[
\favg{\pi_w[0, s_i]} + \frac{\textsc{Sum}(\pi_w[s_i,p])}{p} \leq
\favg{\pi[0, s_i]} + \frac{p-s_i}{p},
\] thus \eqref{e:propertyTwo}.
Assume $n \in \mathbb{N}$. By the definition of $X$, there can be only finitely many prefixes of $w$ of the form $w'a^jb^k$ where $\frac{j+k}{|w'|+j+k} \geq \frac{1}{n}$, so
\(
\favg{\pi_w[0,p]} \geq \favg{\pi[0,s_i]} + \frac{1}{n}\)
may hold only for finitely many $p$.
Therefore, $\textsc{LimAvg}(\pi_w) \leq \textsc{LimAvg}(\pi) + \frac{1}{n}$ for every $n$, so $\textsc{LimAvg}(\pi_w) \leq \textsc{LimAvg}(\pi)$.
\end{proof}
By Fact~\ref{f:equal-expected} and Lemma~\ref{l:best-run}, it remains to compute the expected value of $\textsc{LimAvg}(\set{\pi_w \mid w \in X})$.
As the expected value of the sum is the sum of expected values, we can state that
\[\mathbb{E}_\mathcal{M}(\textsc{LimAvg}(\set{\pi_w \mid w \in X}))
=
\limsup\limits_{s \rightarrow \infty} \frac{1}{s} \cdot
\sum_{i=1}^{s} \mathbb{E}_\mathcal{M}\left(\set{({C}(\pi_w))[i] \mid w \in X}\right)
\]
It remains to compute $\mathbb{E}_\mathcal{M}(({C}(\pi_w))[i])$.
If $i$ is large enough (and since the expected value does not depend on any finite number of positions, we may assume that it is),
the position $i$ falls into some block $w_s=a^jb^k$. There are $j+k$ positions in such a block, and the probability that position $i$ is any one fixed position of such a block is $2^{-(j+k+2)}$ (the ``$+2$'' accounts for the letters just before and just after the block, which are needed to make the block maximal). So the probability that a position lies in a block $a^jb^k$ is $\frac{j+k}{2^{j+k+2}}$. The average cost of such a position is $\frac{\min(j, k)}{j+k}$, as there are $j+k$ positions in the block and the block contributes $\min(j, k)$ to the sum.
It can be analytically checked that
\[
\sum_{j=1}^\infty
\sum_{k=1}^\infty
\frac{j+k}{2^{j+k+2}} \cdot \frac{\min(j, k)}{j+k}
=
\sum_{j=1}^\infty
\sum_{k=1}^\infty
\frac{\min(j, k)}{2^{j+k+2}}
= \frac{1}{3}
\]
We can conclude that \(\mathbb{E}_\mathcal{M}(\textsc{LimAvg}(\pi_w))=\frac{1}{3}\) and, by Lemma \ref{l:best-run}, $\mathbb{E}_\mathcal{M}(\mathcal{A})=\frac{1}{3}$.
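The closed form $\frac{1}{3}$ is easy to double-check numerically; a minimal sketch (the tail of the series decays geometrically, so a small cutoff suffices):
\begin{verbatim}
# Partial sum of sum_{j,k>=1} min(j,k) / 2^(j+k+2) with cutoff 60;
# the neglected tail is negligible.
total = sum(min(j, k) / 2 ** (j + k + 2)
            for j in range(1, 61) for k in range(1, 61))
print(total)  # 0.333333...  (agrees with 1/3)
\end{verbatim}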
The bottom line is that even for such a simple automaton with only one strongly connected component consisting of three states (and two of them being symmetrical), the analysis is complicated.
On the other hand, we conducted a simple Monte Carlo experiment in which we computed the value of this automaton on 10000 random words of length $2^{22}$ generated by $\mathcal{M}$, and observed that the obtained values are in the interval $[0.3283, 0.3382]$,
with the average of $0.33336$, which is a good approximation of the expected value $0.(3)$.
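The experiment is easy to reproduce in spirit; a minimal Monte Carlo sketch (with smaller parameters than above, and using the block cost $\min(j,k)$ from the analysis rather than simulating the automaton itself):
\begin{verbatim}
import random, re

def value(word):
    # Value of pi_w on a finite word: each maximal block a^j b^k
    # contributes min(j, k) to the sum of costs.
    bs = [(len(m.group(1)), len(m.group(2)))
          for m in re.finditer(r"(a*)(b+)(?=a)", word)]
    n = sum(j + k for j, k in bs)
    return sum(min(j, k) for j, k in bs) / n if n else 0.0

random.seed(0)
vals = [value("".join(random.choice("ab") for _ in range(1 << 15)))
        for _ in range(100)]
print(min(vals), sum(vals) / len(vals), max(vals))  # all close to 1/3
\end{verbatim}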
This foreshadows our results for $\textsc{LimAvg}$-automata: we show that computing the expected value is, in general, impossible, but it is possible to approximate it with arbitrary precision.
Furthermore, the small variation of the results is not accidental -- we show that for strongly-connected $\textsc{LimAvg}$-automata, almost all words have the same value (which is equal to the expected value).
\subsection{Irrationality of the distribution and the expected value}\label{sec:irrational}
We show that the exact values in the probabilistic questions for $\textsc{Sum}$-automata and $\textsc{LimAvg}$-automata may be (strongly) irrational.
More precisely, we show that for the $\textsc{Sum}$-automaton depicted in Figure~\ref{fig:irrational}, the distribution $\mathbb{D}_{\mathcal{A}}(-1)$ is transcendental, i.e., it is irrational and, unlike for instance $\sqrt{2}$, there is no
polynomial with integer coefficients that has $\mathbb{D}_{\mathcal{A}}(-1)$ as a root.
For the expected value, we construct an automaton $\mathcal{A}'$ such that $\mathbb{E}(\mathcal{A})- \mathbb{E}(\mathcal{A}') = 1 - \mathbb{D}_{\mathcal{A}}(-1)$ is transcendental.
Therefore, one of $\mathbb{E}(\mathcal{A})$, $\mathbb{E}(\mathcal{A}')$ is transcendental.
Furthermore, we modify $\mathcal{A}$ and $\mathcal{A}'$ to show that there exists a $\textsc{LimAvg}$-automaton $\mathcal{A}Inf$ whose expected value is transcendental and for which the value $\lambda$ satisfying $\mathbb{P}(\{w \mid \valueL{\mathcal{A}Inf}(w) = \lambda\}) = 1$ is transcendental.
It follows that the minimal $\lambda$ such that $\mathbb{D}_{\mathcal{A}Inf}(\lambda) = 1$ is transcendental.
\begin{theorem}[Irrational values]
\label{th:irrational}
The following conditions hold:
\begin{enumerate}
\item There exists a $\textsc{Sum}$-automaton whose
distribution and expected value w.r.t. the uniform distribution are transcendental.
\item There exists a $\textsc{LimAvg}$-automaton such that the expected value and
the value of almost all words w.r.t. the uniform distribution are transcendental.
\end{enumerate}
\end{theorem}
\begin{proof}
We assume that the distribution of words is uniform.
In the infinite case, this means that the Markov chain contains a single state where it loops over any letter with probability $\frac{1}{|\Sigma|}$, where $\Sigma$ is the alphabet.
In the finite case, this amounts to a terminating Markov chain with one regular state and one terminating state; in the non-terminating state it loops over any letter with probability $\frac{1}{|\Sigma|+1}$ or
moves to the terminating state over $\epsilon$ with probability $\frac{1}{|\Sigma|+1}$.
Below we omit the Markov chain as it is fixed (for a given alphabet).
We define a $\textsc{Sum}$-automaton $\mathcal{A}$ (Figure~\ref{fig:irrational}) over the alphabet $\Sigma = \set{a, \# }$ such that
$\mathcal{A}(w) = 0$ if $w = a \# a^4 \# \ldots \# a^{4^n}$ and $\mathcal{A}(w) \leq -1$ otherwise.
Such an automaton basically picks a block with an inconsistency and verifies it.
For example, if $w$ contains a block $\# a^i \# a^j \#$, the automaton $\mathcal{A}$ first assigns $-4$ to each letter $a$
and upon $\#$ it switches to the mode in which it assigns $1$ to each letter $a$. Then, $\mathcal{A}$ returns the value $j - 4\cdot i$. Similarly,
we can encode the run that returns the value $4\cdot i - j$. Therefore, all the runs return $0$ if and only if each block of $a$'s is four times as long as the previous block.
Finally, $\mathcal{A}$ checks whether the first block of $a$'s has length $1$ and returns $-1$ otherwise.
Let $\gamma$ be the probability that a word is of the form $a \# a^4 \# \ldots \# a^{4^n}$.
Such a word has length $l_n = \frac{4^{n+1} -1}{3}+n$ and its probability is
${ 3^{-(l_n+1)} }$ (as the probability of any given word with $m$ letters over a two-letter alphabet is $3^{-(m+1)}$).
Therefore $\gamma$ is equal to $\sum_{n=0}^{\infty} { 3^{-(l_n+1)} }$.
Observe that $\gamma$ written in base $3$ has infinitely many $1$'s separated by arbitrarily long sequences of $0$'s, and hence its representation is not eventually periodic. Thus, $\gamma$ is irrational.
Due to Roth's Theorem~\cite{roth}, if $\alpha \in \mathbb{R}$ is algebraic but irrational, then there are only finitely many pairs $(p,q)$ such that
$|\alpha - \frac{p}{q}| \leq \frac{1}{q^3}$. We show that there are infinitely many such pairs for $\gamma$ and hence it is transcendental.
Consider $i \in \mathbb{N}$ and let $p_i, q_i \in \mathbb{N}$ be such that $q_i = 3^{l_i+1}$ and
$\frac{p_i}{q_i} = \sum_{n=0}^{i} { 3^{-(l_n+1)}}$.
Then, \[ 0 < \gamma - \frac{p_i}{q_i} < 2\cdot 3^{-(l_{i+1}+1)} \]
Observe that for $i>1$ we have $l_{i+1} > 3 (l_{i}+1)$ and hence
\[
\gamma - \frac{p_i}{q_i} < 2\cdot 3^{-(l_{i+1}+1)} < \frac{2}{3} 3^{-3(l_i+1)} < \frac{1}{q_i^3}.
\]
Therefore,
$\gamma$ is transcendental.
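As a side check, the quality of these rational approximations can be verified with exact integer arithmetic; a small Python sketch (it replaces $\gamma$ by a truncation with a few extra terms of the series, which suffices because the remaining tail is vastly smaller than the gap):
\begin{verbatim}
from fractions import Fraction

def l(n):  # length of the word a # a^4 # ... # a^(4^n)
    return (4 ** (n + 1) - 1) // 3 + n

term = lambda n: Fraction(1, 3 ** (l(n) + 1))
for i in range(1, 4):
    q_i = 3 ** (l(i) + 1)
    p_over_q = sum(term(n) for n in range(i + 1))
    gamma_approx = p_over_q + sum(term(n) for n in range(i + 1, i + 4))
    gap = gamma_approx - p_over_q
    print(i, gap < Fraction(1, q_i ** 3))  # True for every i
\end{verbatim}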
Observe that $\gamma = 1 - \mathbb{D}_{\mathcal{A}}(-1)$. Therefore, $\mathbb{D}_{\mathcal{A}}(-1)$ is transcendental.
For the expected value, we construct $\mathcal{A}'$ such that for every word $w$ we have $\valueL{\mathcal{A}'}(w) = \min(\valueL{\mathcal{A}}(w),-1)$.
This can be done by adding to $\mathcal{A}$ an additional initial state $q_0$, which starts an automaton that assigns to all words value $-1$.
Observe that $\mathcal{A}$ and $\mathcal{A}'$ differ only on words $w$ of the form $a \# a^4 \# \ldots \# a^{4^n}$, where $\mathcal{A}(w) = 0$ and $\mathcal{A}'(w) = -1$.
On all other words, both automata return the same values. Therefore, $\mathbb{E}(\mathcal{A}) - \mathbb{E}(\mathcal{A}') = \gamma$.
It follows that at least one of the values $\mathbb{E}(\mathcal{A})$, $\mathbb{E}(\mathcal{A}')$ is transcendental.
The same construction works for $\textsc{LimAvg}$-automata.
We take $\mathcal{A}$ defined as above and convert it to a $\textsc{LimAvg}$-automaton $\mathcal{A}Inf$ over $\Sigma' = \Sigma \cup \{\$\}$, where the fresh letter $\$$ resets the automaton,
i.e., $\mathcal{A}Inf$ has transitions labeled by $\$$ from any final state of $\mathcal{A}$ to any of its initial states. We apply the same construction to $\mathcal{A}'$ defined as above and denote the resulting automaton by $\mathcal{A}Inf'$.
Observe that $\mathbb{E}(\mathcal{A}Inf) = \mathbb{E}(\mathcal{A})$ (resp., $\mathbb{E}(\mathcal{A}Inf') = \mathbb{E}(\mathcal{A}')$).
To see that, consider random variables $X_1, X_2, \ldots$ defined on $\Sigma^{\omega}$, where $X_i(w)$ is the average value $\frac{1}{|u|}\valueL{\mathcal{A}}(u)$ of the $i$-th block $\$u\$$ in $w$, i.e.,
$w =u_1 \$ u_2 \$ \ldots \$u_i \$ \ldots$, all $u_j$ are from $\Sigma^*$ and $u = u_i$.
Observe that $X_1, X_2, \ldots$ are independent and identically distributed random variables and hence with probability $1$ we have
\[
\liminf\limits_{s \rightarrow \infty} \frac{1}{s} (X_1 + \ldots + X_s) = \limsup\limits_{s \rightarrow \infty} \frac{1}{s} (X_1 + \ldots + X_s) = \mathbb{E}(X_i) = \mathbb{E}(\mathcal{A})
\]
Therefore, with probability $1$ over words $w$ we have $\valueL{\mathcal{A}Inf}(w) = \mathbb{E}(\mathcal{A})$. It follows that
$\mathbb{E}(\mathcal{A}Inf) = \mathbb{E}(\mathcal{A})$ and the minimal $\lambda$ such that
$\mathbb{D}_{\mathcal{A}Inf}(\lambda) = 1$ equals $\mathbb{E}(\mathcal{A})$.
Similarly, $\mathbb{E}(\mathcal{A}Inf') = \mathbb{E}(\mathcal{A}')$ and the minimal $\lambda$ such that $\mathbb{D}_{\mathcal{A}Inf'}(\lambda) = 1$ equals $\mathbb{E}(\mathcal{A}')$.
Therefore, for one of the automata $\mathcal{A}Inf, \mathcal{A}Inf'$, the value of almost all words and the expected value are transcendental.
\begin{figure}
\caption{The automaton $\mathcal{A}$.}
\label{fig:irrational}
\end{figure}
\end{proof}
\section{The exact value problems}
\label{s:exact}
In this section we consider the probabilistic questions for non-deterministic $\textsc{Sum}$-automata and $\textsc{LimAvg}$-automata, i.e., the problems of computing the exact values of the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ and the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ w.r.t. a Markov chain $\mathcal{M}$ and an $f$-automaton $\mathcal{A}$.
As shown in Theorem~\ref{th:irrational}, the answers to these problems may be irrational, but
one could argue that some representation of irrational numbers might be employed to avoid this problem.
We prove that this is not the case by showing that computing the exact value in any representation with
decidable equality of two numbers is impossible.
\begin{theorem}\label{th:limavg-undecidable}
The following conditions hold:
\begin{enumerate}
\item The expected value and the distribution of (non-deterministic) $\textsc{Sum}$-automata are uncomputable even for the uniform probability measure.
\item The expected value and the value of almost all words (if it exists) of (non-deterministic) $\textsc{LimAvg}$-automata are uncomputable even for the uniform probability measure.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is by a (Turing) reduction from the quantitative universality problem for $\textsc{Sum}$-automata, which is undecidable~\cite{Krob94,AlmagorBK11}:
\noindent\fbox{\parbox{0.96\textwidth}{\emph{The quantitative universality problem for $\textsc{Sum}$-automata}: Given a $\textsc{Sum}$-automaton with weights $-1$, $0$ and $1$, decide whether for all words $w$ we have $\valueL{\mathcal{A}}(w) \leq 0$.
}}
We first discuss reductions to the probabilistic problems for $\textsc{Sum}$-automata.
Consider an instance of the quantitative universality problem, which is a $\textsc{Sum}$-automaton $\mathcal{A}$.
If there is a word $w$ with the value greater than $0$, then due to uniformity of the probability measure we have $\mathbb{P}(w)>0$, and thus $\mathbb{D}_{\mathcal{A}}(0) < 1$.
Otherwise, clearly $\mathbb{D}_{\mathcal{A}}(0) = 1$.
Therefore, solving the universality problem amounts to deciding whether $\mathbb{D}_{\mathcal{A}}(0) = 1$, and thus the latter problem is undecidable.
For the expected value, we construct a $\textsc{Sum}$-automaton $\mathcal{A}'$
such that for every word $w$ we have $\valueL{\mathcal{A}'}(w) = \min(\valueL{\mathcal{A}}(w),0)$.
Observe that $\mathbb{E}(\mathcal{A}) = \mathbb{E}(\mathcal{A}')$ if and only if for every word $w$ we have $\valueL{\mathcal{A}}(w) \leq 0$, i.e., the answer to the universality problem is YES.
Therefore, there is no Turing machine that, given a $\textsc{Sum}$-automaton $\mathcal{A}$, computes $\mathbb{E}(\mathcal{A})$ (in any representation allowing for effective equality testing).
For the $\textsc{LimAvg}$ case, we construct a $\textsc{LimAvg}$-automaton $\mathcal{A}Inf$ from the $\textsc{Sum}$-automaton $\mathcal{A}$, by connecting all accepting states (of $\mathcal{A}$) with all initial states by transitions of weight $0$
labeled by an auxiliary letter $\#$.
We construct $\mathcal{A}Inf'$ from $\mathcal{A}'$ in the same way.
The automata $\mathcal{A}Inf, \mathcal{A}Inf'$ have been constructed from $\mathcal{A}$ and $\mathcal{A}'$, respectively, as in the proof of Theorem~\ref{th:irrational}, and virtually the same argument shows that
for almost all words $w$ (i.e., with probability $1$) we have $\valueL{\mathcal{A}Inf}(w) = \mathbb{E}(\mathcal{A})$ (resp., $\valueL{\mathcal{A}Inf'}(w) = \mathbb{E}(\mathcal{A}')$).
Therefore, $\mathbb{E}(\mathcal{A}Inf) = \mathbb{E}(\mathcal{A}Inf')$ if and only if for every finite word $u$ we have $\valueL{\mathcal{A}}(u) \leq 0$.
In consequence, there is no Turing machine computing the expected value of a given $\textsc{LimAvg}$-automaton.
Furthermore, since $\mathcal{A}Inf$ (resp., $\mathcal{A}Inf'$) returns $\mathbb{E}(\mathcal{A}Inf)$ (resp., $\mathbb{E}(\mathcal{A}Inf')$) on almost all words, there is no Turing machine computing
the value of almost all words of a given (non-deterministic) $\textsc{LimAvg}$-automaton.
\end{proof}
\subsection{Extrema automata}
We discuss the distribution problem for $\textsc{Min}$-, $\textsc{Max}$-, $\textsc{Inf}$- and $\textsc{Sup}$-automata, where
$\textsc{Min}$ and $\textsc{Max}$ return the minimal and respectively the maximal element of a finite sequence, and
$\textsc{Inf}$ and $\textsc{Sup}$ return the minimal and respectively the maximal element of an infinite sequence.
The expected value of an automaton can be easily computed based on the distribution as there are only finitely many possible values of a run (each possible value is a label of some transition).
\begin{theorem}
\label{th:extrema}
For $\textsc{Min}$-, $\textsc{Max}$-, $\textsc{Inf}$- and $\textsc{Sup}$-automata $\mathcal{A}$ and a Markov chain $\mathcal{M}$,
the expected value and
the distribution problems can be solved in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$.
\end{theorem}
\begin{proof}
We discuss the case of $f = \textsc{Inf}$ as the other cases are similar.
Consider an $\textsc{Inf}$-automaton $\mathcal{A}$.
Observe that every value returned by $\mathcal{A}$ is one of its weights.
For each weight $x$ of $\mathcal{A}$, we construct a (non-deterministic) $\omega$-automaton $\mathcal{A}_x$ that accepts only words of value greater than $x$, i.e.,
$\valueL{\mathcal{A}_x} = \{ w \mid \valueL{\mathcal{A}}(w) > x \}$. To construct $\mathcal{A}_x$, we take $\mathcal{A}$, remove the transitions of weight less or equal to $x$, and drop all the weights.
Therefore, the set of words with the value greater than $x$ is regular, and hence it is measurable and we can compute its probability $p_x$ by computing the probability of $\valueL{\mathcal{A}_x}$.
The probability of an $\omega$-regular language given by a non-deterministic $\omega$-automaton (without acceptance conditions) can be computed in exponential time in the size of the automaton and
polynomial time in the Markov chain defining the probability distribution~\cite[Chapter 10.3]{BaierBook}.
It follows that $p_x$ can be computed in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$.
Observe that $p_x = 1 - \mathbb{D}_{\mathcal{M},\mathcal{A}}(x)$ and hence we can compute the distribution question $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda)$ by computing $1 - p_x$ for the maximal weight $x$ that does not exceed $\lambda$.
For the expected value, let $x_1, \ldots, x_k$ be all weights of $\mathcal{A}$ listed in the ascending order.
Let $p_{x_0} = 1$.
Observe that for all $i \in \set{1, \ldots, k}$, the difference $p_{x_{i-1}} - p_{x_i} = \mathbb{P}_{\mathcal{M}}(\set{w \mid \valueL{\mathcal{A}}(w) = x_i})$ is the probability of the set of words of value $x_i$.
Therefore, $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \sum_{i=1}^{k} (p_{x_{i-1}} - p_{x_i}) \cdot x_i$ and hence the expected value can be computed in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$.
\end{proof}
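The final computation in the proof is a simple pass over the weights; a minimal sketch (hypothetical inputs: the weights in ascending order and the probabilities $p_x$ computed from the $\omega$-automata $\mathcal{A}_x$):
\begin{verbatim}
def expected_value(weights, p):
    # weights: ascending list x_1 < ... < x_k of the automaton's weights;
    # p[x]: probability that the value is strictly greater than x.
    prev, ev = 1.0, 0.0           # p_{x_0} = 1
    for x in weights:
        ev += (prev - p[x]) * x   # mass of the words of value exactly x
        prev = p[x]
    return ev

# Toy usage: two weights, value 1 with probability 1/4, value 0 otherwise.
print(expected_value([0, 1], {0: 0.25, 1: 0.0}))  # 0.25
\end{verbatim}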
\section{The approximation problems}
\label{s:approx}
We start the discussion on the approximation problems by showing a hardness result that holds for a wide range of value functions.
We say that a function is $0$-preserving if its value is $0$ whenever the input consists only of $0$s.
The functions $\textsc{Sum}$, $\textsc{LimAvg}$, $\textsc{Min}$, $\textsc{Max}$, $\textsc{Inf}$, $\textsc{Sup}$ and virtually all the functions from the literature~\cite{quantitativelanguages} are $0$-preserving.
The hardness result follows from the fact that accepted words have finite values, which we can force to be $0$, while words without accepting runs have infinite values.
The answers in the approximation problems are numbers and to study the lower bounds, we consider their decision variants, called the \emph{separation problems}.
The \emph{expected separation problem} is a variant of the expected value problem, in which the input is enriched with numbers $a, b$ such that $b-a>2\epsilon$,
the instance is promised to satisfy $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) \not\in [a, b]$, and the question is whether $\mathbb{E}_{\mathcal{M}}(\mathcal{A})<a$.
In the \emph{distribution separation problem}, the input is enriched with numbers $a,b,c,d$ such that $b-a >2\epsilon$ and $d-c >2\epsilon$, the instance is promised to satisfy
$\mathbb{D}_{\mathcal{M},\mathcal{A}}(\lambda) \not\in [a, b]$ for all $\lambda \in [c,d]$, and we ask whether $\mathbb{D}_{\mathcal{M},\mathcal{A}}(\frac{c+d}{2})<a$.
Note that given an algorithm for one of the approximate problems (for the distribution or the expected value), we can use it to decide the corresponding separation question.
Conversely, using the separation problem as an oracle, we can perform binary search on the domain
to solve the corresponding approximation problem in polynomial time.
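The binary search is standard; the following sketch makes it explicit (hypothetical interface: \texttt{separate(a, b)} answers the separation question under the promise that the hidden quantity lies outside $[a,b]$, and may answer arbitrarily otherwise):
\begin{verbatim}
def approximate(lo, hi, eps, separate):
    # Narrow [lo, hi] by a factor 2/3 per query. If the quantity lies
    # inside the queried [a, b], either answer keeps it in the new
    # interval, so correctness does not rely on the promise there;
    # the oracle's own precision must stay below (hi - lo) / 6.
    while hi - lo > eps:
        a = lo + (hi - lo) / 3
        b = hi - (hi - lo) / 3
        if separate(a, b):   # reported: quantity < a
            hi = b
        else:                # reported: quantity is not below a
            lo = a
    return (lo + hi) / 2
\end{verbatim}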
\begin{theorem}
\label{th:all-pspace-hard}
The following conditions hold:
\begin{enumerate}
\item For any $0$-preserving function $f$, the expected separation problem for non-deterministic $f$-automata is $\PSPACE$-hard.
\item For any $0$-preserving function $f$ over infinite words, the distribution separation problem for non-deterministic $f$-automata over infinite words is $\PSPACE$-hard.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is via reduction from the universality question for non-deterministic (unweighted) finite-word automata, which is $\PSPACE$-complete~\cite{HU79}.
\noindent\emph{The finite-word case}.
We consider the uniform distribution over finite words.
Given a non-deterministic finite-word automaton $\mathcal{A}$, we construct a finite-word $f$-automaton $\mathcal{A}_{\mathrm{fin}}$ by labeling all transitions of $\mathcal{A}$ with $0$.
Observe that if there exists a word which is not accepted by $\mathcal{A}$ then the expected value of $\mathcal{A}_{\mathrm{fin}}$ is $\infty$.
Otherwise, all words have value $0$ and hence the expected value for $\mathcal{A}_{\mathrm{fin}}$ is $0$.
The universality problem for $\mathcal{A}$ reduces to the expected separation problem for $\mathcal{A}_{\mathrm{fin}}$.
\noindent\emph{The infinite-word case}.
We consider the uniform distribution over infinite words.
Given a non-deterministic finite-word automaton $\mathcal{A}$, we construct an infinite-word $f$-automaton $\mathcal{A}Inf$ in the following way.
We start with the automaton $\mathcal{A}$.
First, we extend the input alphabet with a fresh letter $\#$, which resets the automaton.
More precisely, we add transitions labeled by $\#$ between any final state of $\mathcal{A}$ and any initial state of $\mathcal{A}$.
Finally, we label all transitions with $0$. The resulting automaton is $\mathcal{A}Inf$.
If there exists a finite word $u$ rejected by $\mathcal{A}$, then for every infinite word $w$ containing the infix $\# u \#$ the automaton $\mathcal{A}Inf$ has no infinite run and hence it assigns value $\infty$ to $w$.
Observe that the set of words containing $\# u \#$ has probability $1$ (for any finite word $u$). Therefore, if $\mathcal{A}$ rejects some word,
the expected value for $\mathcal{A}Inf$ is $\infty$ and the distribution of $\mathcal{A}Inf$ for any $\lambda \in \mathbb{R}$ is $0$.
Otherwise, if $\mathcal{A}$ accepts all words, the expected value of $\mathcal{A}Inf$ is $0$ and the distribution of $\mathcal{A}Inf$ for any $\lambda \geq 0$ is $1$.
The universality problem for $\mathcal{A}$ reduces to the separation problems for $\mathcal{A}Inf$.
\end{proof}
\Paragraph{Total automata} Theorem~\ref{th:all-pspace-hard} gives us a general hardness result, which is due to accepting conditions rather than values returned by weighted automata.
In the following, we focus on weights and we assume that weighted automata are \emph{total}, i.e., they accept all words
(resp., almost all words in the infinite-word case).
For $\textsc{Sum}$-automata under the totality assumption, the approximate probabilistic questions become \textsc{\#P}-complete.
We additionally show that the approximate distribution question for $\textsc{Sum}$-automata is in \textsc{\#P}{} regardless of the totality assumption.
\begin{theorem}
\label{th:approximation}
The following conditions hold:
\begin{enumerate}
\item The approximate expected value and the approximate distribution questions for non-deterministic total $\textsc{Sum}$-automata are \textsc{\#P}-complete.
\item The approximate distribution question for non-deterministic $\textsc{Sum}$-automata is \textsc{\#P}-complete.
\end{enumerate}
\end{theorem}
\begin{proof}
\noindent\emph{\textsc{\#P}-hardness}.
Consider the problem of counting the number of satisfying assignments of a given
propositional formula $\varphi$ in Conjunctive Normal Form (CNF)~\cite{valiant1979complexity,papadimitriou2003computational}. This problem is \textsc{\#P}-complete. We reduce it to
the problem of approximating the expected value for total $\textsc{Sum}$-automata.
Consider a formula $\varphi$ in CNF over $n$ variables.
Let $\mathcal{M}^T$ be a terminating Markov chain over $\{0,1\}$, which at each step produces $0$ or $1$, each with probability $\frac{1}{3}$, and terminates with probability $\frac{1}{3}$.
We define a total $\textsc{Sum}$-automaton $\mathcal{A}_{\varphi}$ that assigns $0$ to all words of length different from $n$. For words $u \in \set{0,1}^n$,
the automaton $\mathcal{A}_{\varphi}$ regards $u$ as an assignment for variables of $\varphi$; $\mathcal{A}_{\varphi}$ non-deterministically picks one clause of $\varphi$ and returns $1$ if that clause is satisfied and $0$ otherwise.
We can construct such $\mathcal{A}_{\varphi}$ to have polynomial size in $|\varphi|$.
Observe that $\mathcal{A}_{\varphi}(u) = 0$ if some clause of $\varphi$ is not satisfied by $u$, i.e., $\varphi$ is false under the assignment given by $u$.
Otherwise, if the assignment given by $u$ satisfies $\varphi$, then $\mathcal{A}_{\varphi}(u) = 1$.
It follows that the expected value of $\mathcal{A}_{\varphi}$ equals ${3}^{-(n+1)} \cdot C$, where ${3}^{-(n+1)}$ is the probability of generating a word of length $n$ and
$C$ is the number of variable assignments satisfying $\varphi$.
Therefore, we can compute $C$ by computing the expected value of $\mathcal{A}_{\varphi}$ with any $\epsilon$ less than $0.5 \cdot {3}^{-(n+1)}$.
Observe that the automaton $\mathcal{A}_{\varphi}$ returns values $0$ and $1$ and hence
the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A}_{\varphi}) = 1 - \mathbb{D}_{\mathcal{M}, \mathcal{A}_{\varphi}}(0)$, where $1 - \mathbb{D}_{\mathcal{M}, \mathcal{A}_{\varphi}}(0)$ is the probability that
$\mathcal{A}_{\varphi}$ returns $1$.
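The reduction can be sanity-checked by brute force on a tiny formula; a sketch (our own encoding: a CNF clause is a list of signed, $1$-based variable indices):
\begin{verbatim}
from itertools import product

phi, n = [[1, 2], [-1, 3]], 3   # (x1 or x2) and (not x1 or x3)

def aut_value(u):
    # A_phi picks the clause minimizing its payoff: value 0 iff some
    # clause is unsatisfied under the assignment u, and 1 otherwise.
    sat = lambda cl: any((u[abs(v) - 1] == 1) == (v > 0) for v in cl)
    return min(1 if sat(cl) else 0 for cl in phi)

C = sum(aut_value(u) for u in product([0, 1], repeat=n))
expected = C * 3 ** -(n + 1)    # words of other lengths contribute 0
print(C, round(expected * 3 ** (n + 1)))  # C is recovered exactly
\end{verbatim}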
\noindent\emph{Containment of the approximate distribution question in \textsc{\#P}}.
Consider a terminating Markov chain $\mathcal{M}^T$, a $\textsc{Sum}$-automaton $\mathcal{A}$, and $\epsilon \in \mathbb{Q}^+$.
Let $C$ be the smallest number such that every non-zero probability in $\mathcal{M}^T$ is at least $2^{-C}$.
Such a $C$ is polynomial in the input size. Let $N=C \cdot \mathrm{len}(\epsilon)+1$ and let
$\mathbb{D}_{\mathcal{M}^T, \mathcal{A}}(\lambda, N)$ be the distribution of $\mathcal{A}$ over words up to length $N$, i.e., $\mathbb{P}_{\mathcal{M}^T}(\{w \mid |w| \leq N\ \wedge \valueL{\mathcal{A}}(w) \leq \lambda\})$.
We show that the distribution of $\mathcal{A}$ and the distribution of $\mathcal{A}$ over words up to length $N$ differ by at most $\frac{\epsilon}{2}$, i.e., that
\[
|\mathbb{D}_{\mathcal{M}^T, \mathcal{A}}(\lambda) - \mathbb{D}_{\mathcal{M}^T, \mathcal{A}}(\lambda,N)| \leq \frac{\epsilon}{2}.
\]
To do so, let $p_n$, for $n \in \mathbb{N}$, be the probability that $\mathcal{M}^T$ emits a word of the length greater than $n$.
From any state of $\mathcal{M}^T$, the probability of moving to a terminating state is at least $2^{-C}$.
We can (very roughly) bound the probability $p_i$ of generating a word of length greater than $i$ by $(1-2^{-C})^i$.
This means that $p_n$ decreases exponentially with $n$.
Since $(1-\frac{1}{n})^n \leq \frac{1}{2}$ for all $n>1$, we obtain the desired inequality.
Let $K = (N+1)\cdot \log(|\Sigma|) \cdot \epsilon^{-1}+1$.
We build a non-deterministic Turing machine $H_1$ such that, on the input $\mathcal{M}^T$, $\mathcal{A}$, $\epsilon$, and $\lambda$, the number $c_A$ of accepting computations of $H_1$ satisfies the following:
\[
\Bigl\lvert \mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda, N) - \frac{c_A}{2^K}\Bigr\rvert \leq \frac{\epsilon}{2}.
\]
To $\epsilon$-approximate $\mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda)$, we need to compute $c_A$ and divide it by $2^K$, which can be done in polynomial time.
The machine $H_1$ works as follows.
Given the input $\mathcal{M}^T$, $\mathcal{A}$, $\epsilon$, it non-deterministically generates a string $u\alpha$, where $u \in (\Sigma \cup \{\#\})^N$ is a word and $\alpha \in \set{0, 1}^K$ is a number written in binary.
The machine rejects unless $u$ is of the form $wv$, where $w\in \Sigma^*$ and $v\in\set{\#}^*$.
Then, the machine accepts if
$\valueL{\mathcal{A}}(w) \leq \lambda$
and
$\alpha \leq 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w)$.
Therefore, for each $w$ with $\valueL{\mathcal{A}}(w) \leq \lambda$, the number $c_A^w$ of accepting computations of $H_1$ that generate $w$ equals $\lfloor 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w) \rfloor$. It follows that
$c_A^w$ divided by $2^K$ is a $2^{-K}$-approximation of $\mathbb{P}_{\mathcal{M}^T}(w)$, i.e.,
\[
\Bigl\lvert \mathbb{P}_{\mathcal{M}^T}(w) - \frac{c_A^w}{2^K} \Bigr\rvert < 2^{-K}.
\]
The total number of accepting paths of $H_1$ is given by
\[
c_A = \sum_{w \colon |w| \leq N\ \wedge \valueL{\mathcal{A}}(w) \leq \lambda} c_A^w.
\]
We estimate the difference between $\mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda, N)$ and the value $\frac{c_A}{2^K}$:
\[
\Bigl\lvert \mathbb{D}_{\mathcal{M}^T,\mathcal{A}}(\lambda, N) - \frac{c_A}{2^K} \Bigr\rvert \leq
\sum_{w \colon |w| \leq N\ \wedge \valueL{\mathcal{A}}(w) \leq \lambda} \Bigl\lvert \mathbb{P}_{\mathcal{M}^T}(w) - \frac{c_A^w}{2^K}\Bigr\rvert \leq |\Sigma|^{N+1} \cdot 2^{-K}
< \frac{\epsilon}{2}.
\]
\]
\noindent\emph{Containment of the approximate expected value question in \textsc{\#P}}.
Assume that $\mathcal{A}$ is total.
For readability we assume that $\mathcal{A}$ has only integer weights. If it does not, we can multiply all weights by the least common multiple of all denominators of weights in $\mathcal{A}$; this operation multiplies the expected value by the same factor.
Recall that $C$ is the smallest number such that every non-zero probability in $\mathcal{M}^T$ is at least $2^{-C}$.
Let $W$ be the maximal absolute value of weights in $\mathcal{A}$ and
let $M = C \cdot \mathrm{len}(\epsilon)\cdot \log(C W) +1$ and let $\mathbb{E}_{\mathcal{M}^T}(\mathcal{A}, M)$ be the expected value of $\mathcal{A}$ w.r.t. $\mathcal{M}^T$ over words of length up to $M$, i.e., computing only the finite sum from the definition of the expected value.
We show that
\[\bigl\lvert \mathbb{E}_{\mathcal{M}^T}(\mathcal{A})-\mathbb{E}_{\mathcal{M}^T}(\mathcal{A},M)\bigr\rvert \leq \frac{\epsilon}{2}.\]
Recall that $p_n$ is the probability that $\mathcal{M}^T$ emits a word of length greater than $n$, and that $p_{n} \leq (1-2^{-C})^n$.
Since $\mathcal{A}$ is total, the value of every word $w$ is finite and it belongs to the interval $[-|w| \cdot W, |w| \cdot W]$.
The value of a word of length at most $i$ is at most $i \cdot W$.
Therefore, the expected value of $\mathcal{A}$ over words of length greater than $k$ is bounded from above by
\(
\sum_{i \geq k} p_{i} \cdot i \cdot W \leq W \cdot (1- 2^{-C})^k \cdot (k+1)
\).
W.l.o.g.\ we assume that there are no transitions to the initial state in $\mathcal{A}$.
Next, we transform $\mathcal{A}$ into an automaton $\mathcal{A}'$ that returns natural numbers on all words of length at most $M$ by adding $W \cdot M$ to the weight of every transition from the initial state.
Observe that $\mathbb{E}(\mathcal{A}') = \mathbb{E}(\mathcal{A}) + W\cdot M$ and that
$D = 2\cdot W \cdot M$ is an upper bound on the values returned by the automaton $\mathcal{A}'$ on words of length at most $M$.
Finally, we construct a Turing machine $H_2$, similar to $H_1$.
Let $K = (D+1) \cdot (M+1)\cdot (|\Sigma|+1) \cdot \epsilon^{-1}+1$.
$H_2$ non-deterministically chooses a word $u\alpha$, where
$u \in (\Sigma \cup \{\#\})^M$ is a word and $\alpha \in \set{0, 1}^K$ is a number written in binary, and also non-deterministically picks a natural number $\beta \in [0, D]$.
The machine rejects unless $u$ is of the form $wv$, where $w\in \Sigma^*$ and $v\in\set{\#}^*$.
Then $H_2$ accepts if and only if $\beta < \valueL{\mathcal{A}'}(w)$
and $\alpha \leq 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w)$.
Then, provided that $H_2$ generates $w$, the number of accepting computations $c_A^w$ equals $\valueL{\mathcal{A}'}(w) \cdot \lfloor 2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w) \rfloor$, which is within $D$ of $2^K \cdot \mathbb{P}_{\mathcal{M}^T}(w) \cdot \valueL{\mathcal{A}'}(w)$.
Therefore, using estimates similar to the distribution case, we obtain the desired inequality
\[\bigl\lvert \mathbb{E}_{\mathcal{M}^T}(\mathcal{A}',M) - \frac{c_A}{2^K}\bigr\rvert \leq \frac{\epsilon}{2}.\]
Finally, we obtain that $\frac{c_A}{2^K} - W \cdot M$ is an $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}^T}(\mathcal{A})$, i.e.,
\[\bigl\lvert \mathbb{E}_{\mathcal{M}^T}(\mathcal{A}) - \bigl(\frac{c_A}{2^K} - W \cdot M\bigr)\bigr\rvert \leq {\epsilon}.\]
\end{proof}
We show that the approximation problem for $\textsc{LimAvg}$-automata is $\PSPACE$-hard over the class of total automata.
\begin{theorem}
\label{th:approximation-sum}
The separation problems for non-deterministic total $\textsc{LimAvg}$-automata are $\PSPACE$-hard.
\end{theorem}
\begin{proof}
We consider the uniform distribution over infinite words.
Given a non-deterministic finite-word automaton $\mathcal{A}$, we construct an infinite-word $\textsc{LimAvg}$-automaton $\mathcal{A}Inf$ from $\mathcal{A}$ in the following way.
We introduce an auxiliary symbol $\#$ and we add transitions labeled by $\#$ between any final state of $\mathcal{A}$ and any initial state of $\mathcal{A}$.
Then, we label all transitions of $\mathcal{A}Inf$ with $0$.
Finally, we connect all non-accepting states of $\mathcal{A}$ with an auxiliary state $q_{\mathrm{sink}}$, which is a sink state with all transitions of weight $1$.
The automaton $\mathcal{A}Inf$ is total.
Observe that if $\mathcal{A}$ is universal, then $\mathcal{A}Inf$ has a run of value $0$ on every word. Otherwise, if $\mathcal{A}$ rejects a word $w$, then
upon reading a subword $\# w \#$, the automaton $\mathcal{A}Inf$ reaches $q_{\mathrm{sink}}$, i.e., the value of the whole word is $1$.
Almost all words contain an infix $\# w \#$ and hence almost all words have value $1$.
Therefore, the universality problem for $\mathcal{A}$ reduces to the problem of deciding whether for almost all words $w$ we have $\valueL{\mathcal{A}Inf}(w) = 0$ or
for almost all words $w$ we have $\valueL{\mathcal{A}Inf}(w) = 1$. The latter problem reduces to the expected separation problem as well as the distribution separation problem for $\mathcal{A}Inf$.
\end{proof}
\section{Approximating $\textsc{LimAvg}$-automata in exponential time}
In this section we develop algorithms for the approximate expected value and approximate distribution questions
for (non-deterministic) $\textsc{LimAvg}$-automata.
The presented algorithms work in exponential time in the size of the automaton and in polynomial time in the size of the Markov chain and in the precision.
The case of $\textsc{LimAvg}$-automata is significantly more complex than the other cases and hence we present the algorithms in stages.
First, we restrict our attention to \emph{recurrent} $\textsc{LimAvg}$-automata and the uniform distribution over infinite words.
Recurrent automata are strongly connected automata with an appropriately chosen set of initial states.
We show that deterministic $\textsc{LimAvg}$-automata with bounded look-ahead approximate recurrent automata.
Next, in Section~\ref{s:non-uniform} we extend this result to non-uniform measures given by Markov chains.
Finally, in Section~\ref{s:non-recurrent} we show the approximation algorithms for all (non-deterministic) $\textsc{LimAvg}$-automata and measures given by Markov chains.
\Paragraph{Recurrent automata}
Let $\mathcal{A} = (\Sigma, Q, Q_0,\delta)$ be a non-deterministic $\textsc{LimAvg}$-automaton and $\widehat{\delta}$ be the extension of $\delta$
to all words $\Sigma^*$. The automaton $\mathcal{A}$ is \emph{recurrent} if and only if the following conditions hold:
\begin{enumerate}[(1)]
\item for every state $q \in Q$ there is a finite word $u$ such that $\widehat{\delta}(q, u) = Q_0$, and
\item for every set $S \subseteq Q$, if $\widehat{\delta}(Q_0, w) = S$ for some word $w$, then there is a finite word $u$ such that $\widehat{\delta}(S, u) = Q_0$.
\end{enumerate}
Intuitively, in recurrent automata $\mathcal{A}$, if two runs deviate at some point, with high probability it is possible to synchronize them.
More precisely, for almost all words $w$, if $\pi$ is a run on $w$, and $\rho$ is a finite run up to position $i$, then $\rho$ can be extended to an infinite run that eventually coincides with $\pi$.
Moreover, we show that with high probability they synchronize within a number of steps doubly exponential in $|\mathcal{A}|$ (Lemma~\ref{l:resetWrods}).
\begin{example}
Consider the automaton depicted in Figure~\ref{fig:aut}. This automaton is recurrent with the initial set of states $Q_0 = \set{q_x, q_a, q_b}$.
For condition (1) from the definition of recurrent automata, observe that for every state $q$ we have $\widehat{\delta}(q, abab) = Q_0$.
For condition (2), observe that $\widehat{\delta}(Q_0, b) = Q_0$, $\widehat{\delta}(Q_0, a) = \set{q_a, q_b}$ and $\widehat{\delta}(\set{q_a, q_b}, a) = \widehat{\delta}(\set{q_a, q_b}, b) = Q_0$.
The automaton would also be recurrent in the case of $Q_0=\set{q_a, q_b}$, but not in any other case.
Consider an automaton $\mathcal{A}$ depicted below:
\begin{center}
{
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.0cm,semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black,minimum size=0.4cm]
\node[draw,circle,inner ysep=+0.4em] (A) at (0,0) {$q_L$};
\node[draw,circle,inner ysep=+0.4em] (B) at (2,0) {$q_R$};
\draw (A) edge[bend left] node[above] {$a:0$} (B);
\draw (B) edge[bend left] node {$a:0$} (A);
\end{tikzpicture}
}
\end{center}
The automaton $\mathcal{A}$ is recurrent if the set of initial states is either $\{q_L\}$ or $\{q_R\}$, but not in the case of $\{q_L, q_R\}$.
Indeed, if we pick $q_L$ (resp., $q_R$) we can never reach the whole set $\{q_L, q_R\}$.
This realizes our intuition that runs that start in $q_L$ and $q_R$ will never synchronize.
\end{example}
We discuss properties of recurrent automata.
For every $\mathcal{A}$ that is strongly connected as a graph there exists a set of initial states $T$ with which it becomes recurrent.
Indeed, consider $\mathcal{A}$ as an unweighted $\omega$-automaton and construct a deterministic $\omega$-automaton $\mathcal{A}^D$ through the power-set construction applied to $\mathcal{A}$.
Observe that $\mathcal{A}^D$ has a single bottom strongly-connected component (BSCC), i.e., a strongly connected component such that there are no transitions leaving that component.
The set $Q_0$ belongs to that BSCC.
Conversely, for any strongly connected automaton $\mathcal{A}$, if $Q_0$ belongs to the BSCC of $\mathcal{A}^D$, then $\mathcal{A}$ is recurrent.
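Both conditions in the definition of recurrence are decidable by an explicit search over the power-set automaton. A compact Python sketch (our own encoding: \texttt{delta[q][a]} is the set of successors of state \texttt{q} on letter \texttt{a}):
\begin{verbatim}
def step(S, a, delta):
    return frozenset(q2 for q in S for q2 in delta[q][a])

def reachable(S0, delta, sigma):
    # All subsets reachable from S0 in the power-set automaton
    # (including S0 itself, i.e., via the empty word).
    seen, todo = {S0}, [S0]
    while todo:
        S = todo.pop()
        for a in sigma:
            T = step(S, a, delta)
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return seen

def is_recurrent(Q0, delta, sigma):
    Q0 = frozenset(Q0)
    cond1 = all(Q0 in reachable(frozenset([q]), delta, sigma)
                for q in delta)                        # condition (1)
    cond2 = all(Q0 in reachable(S, delta, sigma)
                for S in reachable(Q0, delta, sigma))  # condition (2)
    return cond1 and cond2

# The two-state example above:
delta = {"qL": {"a": {"qR"}}, "qR": {"a": {"qL"}}}
print(is_recurrent({"qL"}, delta, "a"))        # True
print(is_recurrent({"qL", "qR"}, delta, "a"))  # False
\end{verbatim}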
Observe that for a recurrent automaton $\mathcal{A}$ the probability of words accepted by $\mathcal{A}$ is either $0$ or $1$.
Now, for a word $w$ consider a sequence of reachable sets of states $\Pi_w$ defined as $Q_0, \widehat{\delta}(Q_0, w[1]), \widehat{\delta}(Q_0, w[2]), \ldots$
Since $\mathcal{A}^D$ has a single BSCC containing $Q_0$, all sets of $\Pi_w$ belong to that BSCC and hence
either for almost all words $w$, the sequence $\Pi_w$ eventually contains only empty sets or
for all words $w$, the sequence $\Pi_w$ consists of non-empty sets only.
Observe that $\mathcal{A}$ has an infinite run on $w$ if and only if $\Pi_w$ consists of non-empty sets.
It follows that the probability of the set of words having any infinite run in $\mathcal{A}$ is either $0$ or $1$.
While Markov chains generate words letter by letter, defining a run of minimal value on a word may require the completely generated word, i.e., the optimal transition at some position $i$ may depend on some positions $j>i$ in the word.
This precludes application of standard techniques for probabilistic verification, which rely on the fact that the word and the run on it are generated simultaneously~\cite{DBLP:conf/focs/Vardi85,DBLP:journals/jacm/CourcoubetisY95,BaierBook}.
\Paragraph{Key ideas} Our main idea is to change the non-determinism to \emph{bounded look-ahead}.
This must be inaccurate, as the expected value of a deterministic automaton with bounded look-ahead is always rational, whereas Theorem~\ref{th:irrational} shows that the values of non-deterministic automata may be irrational.
Nevertheless, we show that bounded look-ahead is sufficient to \emph{approximate} the probabilistic questions for recurrent automata (Lemma~\ref{l:convergence}).
Furthermore, the approximation can be done effectively (Lemma~\ref{l:jumpingRuns}), which in turn
gives us an exponential-time approximation algorithm for recurrent automata (Lemma~\ref{l:singleSCC}).
Then, we comment on the extension to all distributions given by Markov chains (Section~\ref{s:non-uniform}).
Finally, we show the proof for all $\textsc{LimAvg}$-automata over probability measures given by Markov chains
(Theorem~\ref{th:approximation-limavg}).
\subsection{Nearly-deterministic approximations}
\Paragraph{Jumping runs} Let $k>0$ and let $N_k$ be the set of natural numbers not divisible by $k$.
A \emph{$k$-jumping run} $\xi$ of $\mathcal{A}$ on a word $w$ is an infinite sequence of states such that
for every position $i \in N_k$ we have $(\xi[i-1],w[i],\xi[i]) \in \delta$.
The $i$-th \emph{block} of a $k$-jumping run is the sequence $\xi[ki, k(i+1)-1]$; within a block the sequence $\xi$ is consistent with transitions of $\mathcal{A}$. The positions $k,2k, \ldots \notin N_k$ are \emph{jump} positions, where the sequence $\xi$ need not obey the transition relation of $\mathcal{A}$.
The cost ${C}$ of a transition of a $k$-jumping run $\xi$ within a block is defined as usual, while the cost of a jump is defined as the minimal weight of $\mathcal{A}$.
The value of a $k$-jumping run $\xi$ is defined as the limit average computed for such costs.
\Paragraph{Optimal and block-deterministic jumping runs}
We say that a $k$-jumping run $\xi$ on a word $w$ is \emph{optimal} if its value is the infimum over values of all $k$-jumping runs on $w$.
We show that optimal $k$-jumping runs can be constructed nearly deterministically, i.e., only looking ahead to see the whole current block.
For every $S \subseteq Q$ and $u \in \Sigma^k$ we fix a run $\xi_{S,u}$ on $u$, starting in one of the states of $S$, which has the minimal average weight.
Then, given a word $w \in \Sigma^{\omega}$, we define a $k$-jumping run $\xi$ as follows.
We divide $w$ into $k$-letter blocks $u_1, u_2, \ldots$ and
we put $\xi = \xi_{S_0, u_1} \xi_{S_1, u_2} \ldots$, where $S_0 = \set{q_0}$ and for $i>0$, $S_i$ is the set of states reachable from $q_0$ on the word $u_1 \ldots u_i$.
The run $\xi$ is a $k$-jumping run and it is indeed optimal.
We call such runs \emph{block-deterministic}: they can be constructed based on finite memory, namely the set of reachable states $S_i$ and the current block of the input word.
Since all runs of $\mathcal{A}$ are in particular $k$-jumping runs, the value of (any) optimal $k$-jumping run on $w$ is less than or equal to $\mathcal{A}(w)$. We show that for recurrent $\textsc{LimAvg}$-automata, the values of optimal $k$-jumping runs on $w$ converge to $\mathcal{A}(w)$ as $k$ tends to infinity. To achieve this, we construct a run of $\mathcal{A}$ that tries to ``follow'' a given jumping run, i.e.,
after almost all jump positions it is able to synchronize with the jumping run quickly.
\Paragraph{Proof plan} Let $k>0$. Consider a word $w$ and some optimal $k$-jumping run $\xi_o$ on $w$.
We construct a run $\pi_f$ of $\mathcal{A}$ in the following way. Initially, both runs start in some initial state $q_0$ and coincide.
However, at the first jump position $\xi_o$ may take a move that is not a transition of $\mathcal{A}$.
The run $\pi_f$ attempts to synchronize with $\xi_o$, i.e., to be at the same position in the same state, and
then repeat transitions of $\xi_o$ until the end of the block. Then, in the next block, regardless of whether $\pi_f$ managed to synchronize with $\xi_o$ or not, we repeat the process.
We say that a run $\pi_f$ constructed in such a way is a \emph{run following} $\xi_o$.
In the following Lemma~\ref{l:resetWrods}, we show that for $m \in \mathbb{N}$ large enough, with high probability, the run $\pi_f$ synchronizes with $\xi_o$ within $m$ steps.
We then show that if $m$ is large enough and $k$ is much larger than $m$, then the values of runs $\pi_f$ and $\xi_o$ differ by less than $\epsilon$ (Lemma~\ref{l:convergence}).
Let $q$ be a state of $\mathcal{A}$ and $u$ be a finite word.
We say that a word $v$ \emph{saturates} the pair $(q, u)$, if the set of reachable states from $q$ over $v$ equals all the states reachable over $uv$ from the initial states, i.e.,
$\widehat{\delta}(Q_0, uv) = \widehat{\delta}(q, v)$.
\begin{example}
Consider the automaton from Figure~\ref{fig:aut} with $Q_0=Q$.
For any $(q, u)$, any word that contains the infix $abab$ saturates $(q, u)$, as
$\widehat{\delta}(Q_0, uv'abab) = \widehat{\delta}(q, v'abab)= Q_0$ for any $v'$.
\end{example}
Observe that in the above, the probability that a random word of length $4\ell$
does not saturate $(q,u)$ is bounded by $(1-\frac{1}{16})^\ell$. So the probability that a random word $v$ saturates $(q, u)$ quickly tends to $1$ with $|v|$. The next lemma shows that this is not a coincidence.
\begin{lemma}
\label{l:resetWrods}
Let $\mathcal{A}$ be a recurrent automaton, $u$ be a finite word, and $q \in \widehat{\delta}(Q_0, u)$.
For every $\Delta >0$ there exists a natural number $\ell=2^{2^{O(|\mathcal{A}|)}} \log (\rev{\Delta})$ such that over the uniform distribution on $\Sigma^{\ell}$ we have
$\mathbb{P}(\{v \in \Sigma^{\ell} \mid v\text{ saturates } (q,u) \}) \geq 1 - \Delta$.
\end{lemma}
\begin{proof}
First, observe that there exists a word $v$ saturating $(q,u)$.
Let $S = \widehat{\delta}(Q_0, u)$. Then, $q \in S$.
Since $\mathcal{A}$ is recurrent, there exists a word $\alpha$ such that $Q_0 = \widehat{\delta}(q, \alpha)$.
It follows that $S = \widehat{\delta}(q, \alpha u)$.
Since $q \in S$, we have $\widehat{\delta}(q, \alpha u) \subseteq \widehat{\delta}(S, \alpha u)$.
It follows that for $i\geq 0$ we have
$\widehat{\delta}(S, (\alpha u)^i ) = \widehat{\delta}(q, (\alpha u)^{i+1}) \subseteq \widehat{\delta}(S, (\alpha u)^{i+1})$.
Therefore, for some $i >0$ we have $\widehat{\delta}(q, (\alpha u)^{i}) = \widehat{\delta}(S, (\alpha u)^{i})$, i.e., the word $(\alpha u)^{i}$ saturates $(q,u)$.
Now, we observe that there exists a saturating word that is exponentially bounded in $|\mathcal{A}|$.
We start with the word $v_0$ equal $(\alpha u)^{i}$ and we pick any two positions $k < l$ such that
$\widehat{\delta}(q, v_0[1,k]) = \widehat{\delta}(q, v_0[1,l])$ and
$\widehat{\delta}(S, v_0[1,k]) = \widehat{\delta}(S, v_0[1,l])$.
Observe that for $v_1$ obtained from $v_0$ by removal of $v_0[k+1, l]$,
the reachable sets do not change, i.e., $\widehat{\delta}(q, v_0) = \widehat{\delta}(q, v_1)$ and
$\widehat{\delta}(S, v_0) = \widehat{\delta}(S, v_1)$. We iterate this process until there are no such positions.
The resulting word $v'$ satisfies
$\widehat{\delta}(S, v') = \widehat{\delta}(q, v')$.
Finally, each position $k$ of $v'$ defines a distinct pair $(\widehat{\delta}(q, v'[1,k]),
\widehat{\delta}(S, v'[1,k]))$ of subsets of $Q$. Therefore, the length of $v'$ is bounded by $2^{2\cdot|Q|}$.
We have shown above that for every pair $(q,u)$ there exists a saturating word $v_{q,u}$ of length bounded by $N = 2^{2\cdot|Q|}$.
The probability of the word $v_{q,u}$ is $p_0 = 2^{-O(N)}$.
Let $\ell = \frac{1}{p_0} \cdot \log (\rev{\Delta})$; we show that the probability that $(q, u)$ is not saturated by a word from $\Sigma^{N \cdot \ell}$
is at most $\Delta$.
Consider a word $x \in \Sigma^{N \cdot \ell}$. We can write it as $x = x_1 \ldots x_{\ell}$, where all words $x_k$ have length $N$.
If $x_k$ saturates $(q,u x_1 \ldots x_{k-1})$, then $x_1 \ldots x_k$ (as well as $x$) saturates $(q,u)$.
Therefore, the word $x$ does not saturate $(q,u)$ only if for all $1 \leq k \leq \ell$, the word $x_k$ does not saturate $(q, u x_1 \ldots x_{k-1})$.
The probability that $x \in \Sigma^{N \cdot \ell}$ does not saturate $(q,u)$ is at most $(1 - p_0)^{\ell} \leq (\frac{1}{2})^{\log (\rev{\Delta})} \leq \Delta$.
\end{proof}
Finally, we show that for almost all words the value of an optimal $k$-jumping run approximates
the value of the word.
\begin{lemma}
\label{l:convergence}
Let $\mathcal{A}$ be a recurrent $\textsc{LimAvg}$-automaton.
For every $\epsilon \in \mathbb{Q}^+$, there exists $k$ such that
for almost all words $w$, the value $\mathcal{A}(w)$ and the value of an optimal $k$-jumping run on $w$ differ by at most $\epsilon$.
The value $k$ is doubly-exponential in $|\mathcal{A}|$ and polynomial in $\rev{\epsilon}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{l:resetWrods}, for all $\Delta >0$, $\ell = 2^{2^{O(|\mathcal{A}|)}} \log (\rev{\Delta})$, and all $k > \ell$,
the probability that the run $\pi_f$ synchronizes with
an optimal $k$-jumping run $\xi_o$ within $\ell$ steps in a block is at least $1 - \Delta$.
Consider some $k>\ell$ and an optimal $k$-jumping run $\xi_o$ that is block-deterministic.
Observe that the run $\pi_f$ of $\mathcal{A}$ following $\xi_o$ is also block-deterministic.
Consider a single block $\xi_o[i, i+k-1]$. By Lemma \ref{l:resetWrods},
the probability that $\pi_f[i+\ell-1]=\xi_o[i+\ell-1]$ is at least $1-\Delta$.
In such a case, the sum of costs of $\pi_f$ on that block exceeds that of $\xi_o$ by at most $D \cdot \ell$,
where $D$ is the difference between the maximal and the minimal weight in $\mathcal{A}$.
Otherwise, if $\pi_f$ does not synchronize, we bound the difference of the sums of values on that block by
the maximal possible difference $D \cdot k$.
Since runs are block-deterministic, synchronization of $\pi_f$ and $\xi_o$ satisfies the Markov property; it
depends only on the current block and the set of states $S$ reachable on the input word until the beginning of the current block.
We observe that as $\mathcal{A}$ is recurrent, the corresponding Markov chain, whose states are reachable sets of states $S$ of $\mathcal{A}$, has only a single BSCC.
Therefore, for almost all words, the average ratio of $k$-element blocks in which
$\pi_f$ synchronizes with $\xi_o$ within $\ell$ steps is at least $1 - \Delta$.
We then conclude that for almost all words the difference between the values of $\pi_f$ and $\xi_o$ is bounded by $\gamma = \frac{(1 - \Delta)\cdot (D \cdot \ell) + \Delta \cdot (D \cdot k)}{k}$.
Observe that with $\Delta < \frac{\epsilon}{2\cdot D}$ and $k > \frac{2\cdot D \cdot \ell}{\epsilon}$, the value $\gamma$ is less than $\epsilon$.
\end{proof}
\subsection{Random variables}
Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$ and $k>0$,
we define a function $g[{k}] : \Sigma^{\omega} \to \mathbb{R}$ such that
$g[{k}](w)$ is the value of some optimal $k$-jumping run $\xi_o$ on $w$.
We can pick $\xi_o$ to be block-deterministic and
hence $g[{k}]$ corresponds to a Markov chain $M[k]$.
More precisely, we define $M[k]$ labeled by $\Sigma^{k}$ such that
for every word $w$,
the limit average of the path in $M[k]$ labeled by blocks of $w$ (i.e., blocks $w[1,k] w[k+1, 2k] \ldots$) equals $g[{k}](w)$.
Moreover, the distribution of blocks $\Sigma^k$ is uniform and hence $M[k]$ corresponds to $g[{k}]$ over the uniform distribution over $\Sigma$.
The Markov chain $M[k]$ is a labeled weighted Markov chain~\cite{filar}, such that
its states are all subsets of $Q$, the set of states of $\mathcal{A}$.
For each state $S \subseteq Q$ and $u \in \Sigma^k$, the Markov chain $M[k]$ has an edge $(S,\widehat{\delta}(S,u))$
of probability $\frac{1}{|\Sigma|^k}$.
The weight of an edge $(S,S')$ labeled by $u$ is the minimal average of weights of any run from some state of $S$ to some state of $S'$ over the word $u$.
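The construction of $M[k]$ is mechanical; the following Python sketch (our own illustration, not part of the construction above, using hypothetical names such as \texttt{delta} for the transition relation of a toy weighted automaton) builds its edges explicitly:
\begin{verbatim}
from itertools import product, chain, combinations

# Toy encoding: delta maps (state, letter) to a set of (successor, weight)
# pairs; Sigma is a list of letters.  All names here are illustrative.

def subsets(states):
    s = list(states)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def min_avg_run(S, block, delta):
    """Power-set successor of S over `block`, together with the minimal
    average weight of any run starting in S and surviving the block."""
    best = {q: 0.0 for q in S}          # cheapest total weight reaching q
    for a in block:
        nxt = {}
        for q, cost in best.items():
            for q2, w in delta.get((q, a), ()):
                nxt[q2] = min(nxt.get(q2, float("inf")), cost + w)
        best = nxt
        if not best:                    # no run survives: empty successor
            return frozenset(), None
    return frozenset(best), min(best.values()) / len(block)

def build_Mk(states, Sigma, delta, k):
    """Edges of M[k]: (S, block) -> (successor, min average weight, prob)."""
    p = 1.0 / len(Sigma) ** k           # uniform probability of each block
    return {(S, blk): (*min_avg_run(S, blk, delta), p)
            for S in subsets(states) for blk in product(Sigma, repeat=k)}
\end{verbatim}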
We have the following:
\begin{lemma}
\label{l:recurrentMeasurable}
Let $\mathcal{A}$ be a recurrent $\textsc{LimAvg}$-automaton and $k>0$.
(1)~The functions $g[{k}]$ and $\valueL{\mathcal{A}}$ are random variables.
(2)~For almost all words $w$ we have $g[{k}](w) = \mathbb{E}(g[{k}])$ and $\valueL{\mathcal{A}}(w) = \mathbb{E}(\valueL{\mathcal{A}})$.
\end{lemma}
\begin{proof}
Since $\mathcal{A}$ is recurrent, $M[k]$ has a single BSCC and hence $M[k]$ and $g[{k}]$ return the same value for almost all words~\cite{filar}.
This implies that the preimage under $g[{k}]$ of each set has measure $0$ or $1$, and hence $g[{k}]$ is measurable~\cite{feller}.
Lemma~\ref{l:convergence} implies that the (measurable) functions $g[{k}]$ converge to $\valueL{\mathcal{A}}$ with probability $1$, and hence $\valueL{\mathcal{A}}$ is measurable~\cite{feller}.
As the limit of the $g[{k}]$, $\valueL{\mathcal{A}}$ also takes the same value for almost all words.
\end{proof}
\begin{nremark}\label{r:ultimatelyperiodic}
The automaton $\mathcal{A}$ from the proof of Theorem \ref{th:irrational} is recurrent (it resets after each $\$$), so the value of $\mathcal{A}$ on almost all words is irrational. Yet, for every ultimately periodic word $vw^\omega$, the value of
$\mathcal{A}$ is rational. This means that while the expected value is realized by almost all words, it is not realized by any ultimately periodic word.
\end{nremark}
\subsection{Approximation algorithms}
We show that the expected value of $g[{k}]$ can be efficiently approximated.
The approximation is exponential in the size of $\mathcal{A}$, but only logarithmic in $k$ (which is doubly-exponential due to Lemma~\ref{l:convergence}).
\newcommand{\newh}[1]{\tilde{h}^{#1}}
\newcommand{\mfloor}[2]{\lfloor#1\rfloor_{#2}}
To approximate the expected value of $g[{k}]$ we need to compute the expected value of $\mathcal{A}$ over $k$-letter blocks.
Such blocks are finite and hence we consider $\mathcal{A}$ as a finite-word automaton with the average value function $\textsc{Avg}$.
More precisely, for $S$ a subset of states of $\mathcal{A}$, we define $\mathcal{A}_{S}^{\mathrm{fin}}$ as the $\textsc{Avg}$-automaton over finite words with the same structure as $\mathcal{A}$, with its set of initial states set to $S$ and
all states accepting. We can approximate the expected value of $\mathcal{A}_{S}^{\mathrm{fin}}$ over words from $\Sigma^{k}$ in logarithmic time in $k$.
\begin{lemma}
\label{l:approxFinExpected}
Let $\mathcal{A}$ be a recurrent $\textsc{LimAvg}$-automaton, let $S, S'$ be subsets of states of $\mathcal{A}$, and let $k > 0$.
We can approximate the expected value $\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid |w| = 2^k $ and
$ \widehat{\delta}(S,w) = S'\})$ within a given $\epsilon \in \mathbb{Q}^+$
in exponential time in $|\mathcal{A}|$ and polynomial time in $k$ and $\rev{\epsilon}$.
\end{lemma}
\begin{proof}
\newcommand{\roundup}[1]{\left[#1\right]_{\epsilon_0}}
Let $h(q, w, q')$ be the infimum average weight over runs from $q$ to $q'$ over $w$.
Consider $\epsilon_0 =\frac{\epsilon}{k+1}$.
Let $H=\set{j\cdot \epsilon_0 \mid j\in \mathbb{Z}} \cap (-|\mathcal{A}|, |\mathcal{A}|)$ be a finite set and
let $\roundup{x}$ stand for the greatest number from $H$ not exceeding $x$.
Consider $i \in \set{0, \dots, k}$ and let $N = 2^i$. We define a function $\newh{i}: Q\times \Sigma^{N} \times Q \to H$ as follows. First, we define $\newh{0}(q, w, q')=\roundup{h(q, w, q')}$.
Then, inductively, we define
\[\newh{i+1}(q,w_1w_2, q') = \roundup{\min_{q'' \in Q} \frac{\newh{i}(q, w_1, q'') + \newh{i}(q'', w_2, q')}{2}}\]
\noindent We show by induction on $i$ that for all $i$, $q$, $q'$, $N = 2^i$ and $w\in \Sigma^{N}$ we have $|h(q, w, q') -\newh{i}(q, w, q') |\leq (i+1) \epsilon_0$.
First, we comment on the deteriorating precision.
Notice that $|h(q, w, q') -\newh{i}(q, w, q') |\leq \epsilon_0$ may not hold in general.
Let us illustrate this with a simple toy example.
Consider $\epsilon_0=1$, $x \in (0, 1)$ and $y \in (1, 2)$. Then $\frac{x+y}{2}\in (\frac{1}{2}, \frac{3}{2})$, thus $\roundup{\frac{x+y}{2}}\in \set{0, 1}$.
However, knowing only $\roundup{x}$ and $\roundup{y}$, we cannot assess whether the answer should be $0$ or $1$.
Therefore, when iterating the above-described procedure, we may lose some precision (up to one $\epsilon_0$ at each step); this is why we start with $\epsilon_0$ rather than $\epsilon$.
Now, we show by induction $|h(q, w, q') -\newh{i}(q, w, q') |\leq (i+1) \epsilon_0$. More precisely, we show that
\begin{enumerate}[(1)]
\item $\newh{i}(q, w, q') \leq h(q, w, q') $ and
\item $h(q, w, q') -\newh{i}(q, w, q') \leq (i+1) \epsilon_0$.
\end{enumerate}
The case $i=0$ follows from the definition of $\newh{0}(q, w, q')$.
Consider $i >0$ and assume that for all words $w$ of length $2^i$ the induction hypothesis holds.
Consider $w = w_1 w_2$ and states $q, q'$. There exists $q''$ such that
$h(q, w, q') = \frac{h(q, w_1, q'') + h(q'', w_2, q')}{2}$. Then, due to the induction assumption of (1) we have
$\newh{i-1}(q, w_1, q'') \leq h(q, w_1, q'') $ and
$\newh{i-1}(q'',w_2, q') \leq h(q'', w_2, q') $. In consequence, we get (1).
Now, to show (2), consider a state $s$ that realizes the minimum from the definition of $\newh{i}(q, w, q')$.
There are numbers $a,b \in \mathbb{Z}$ such that $\newh{i-1}(q, w_1, s) = a\epsilon_0$,
and $\newh{i-1}(s, w_2, q') = b\epsilon_0$.
Then, $h(q,w,q') \leq \frac{h(q,w_1,s) + h(s,w_2,q')}{2}$ and we have
\[
h(q,w,q') - \newh{i}(q, w, q') \leq
\frac{h(q,w_1,s) + h(s,w_2,q')}{2} - \roundup{\frac{(a+b)\epsilon_0}{2}}
\]
Observe that $\roundup{\frac{(a+b)\epsilon_0}{2}} = \frac{(a+b)\epsilon_0}{2}$ if $a+b$ is even and
$
\roundup{\frac{(a+b)\epsilon_0}{2}} = \frac{(a+b)\epsilon_0}{2} - \frac{\epsilon_0}{2}$ otherwise.
This gives us the following inequality
\[
h(q,w,q') - \newh{i}(q, w, q') \leq \frac{(h(q,w_1,s) -a\epsilon_0)+ (h(s,w_2,q')-b\epsilon_0)}{2} + \frac{\epsilon_0}{2}
\]
Due to the induction hypothesis (2) we have $h(q, w_1, s) - a\epsilon_0 \leq i \epsilon_0$ and
$h(s, w_2, q') - b\epsilon_0 \leq i \epsilon_0$, which gives us (2).
We cannot compute the functions $\newh{i}$ directly (in reasonable time), because there are too many words to be considered.
However, we can compute them symbolically.
Define the \emph{clusterization function} $c^i$ as follows. Let $N = 2^i$. For each function $f \colon Q\times Q \to H$ we define $c^i(f) = |\set{w \in \Sigma^N \mid \forall q, q' .\ \newh{i}(q, w, q')=f(q, q')}|$.
Basically, for each function $f$, the clusterization counts the number of words $w$ that realize $f$ through the functions $\newh{i}(\cdot, w, \cdot)$.
The function $c^0$ can be computed directly. Then, $c^{i+1}(f)$ can be computed as the sum of $c^{i}(f_1) \cdot c^{i}(f_2)$ over all the functions $f_1, f_2$ such that $f=f_1 * f_2$, where
$(h_1 * h_2)(q, q'')=\roundup{\min_{q' \in Q} \frac{h_1(q,q') + h_2(q',q'')}{2}}$.
It follows that we can compute the $k$-clusterization in time exponential in $|\mathcal{A}|$, polynomial in $\rev{\epsilon}$ and $k$.
The desired expected value can be derived from the $k$-clusterization in a straightforward way.
\end{proof}
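To make the doubling step above concrete, here is a small Python sketch (ours; it enumerates words explicitly instead of clusterizing them, so it is exponential and only illustrates the recursion and the rounding):
\begin{verbatim}
import math

def round_down(x, eps0):
    """The grid operator [x]_{eps0}: greatest multiple of eps0 not above x."""
    return math.floor(x / eps0) * eps0

def double(h, eps0):
    """From rounded values h[(q, w, q2)] for words of length N,
    build rounded values for words of length 2N."""
    h2 = {}
    for (q, w1, s), v1 in h.items():
        for (s2, w2, q2), v2 in h.items():
            if s2 != s:
                continue                # runs must chain through state s
            key = (q, w1 + w2, q2)
            cand = round_down((v1 + v2) / 2, eps0)
            h2[key] = min(h2.get(key, float("inf")), cand)
    return h2
\end{verbatim}
Rounding each candidate before taking the minimum yields the same result as rounding the minimum, since the rounding operator is monotone; this matches the definition of $\newh{i+1}$.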
In consequence, we can approximate the expected value of $g[{k}]$ in exponential time in $|\mathcal{A}|$ but logarithmic in $k$, which is important as $k$ may be doubly-exponential in $|\mathcal{A}|$ (Lemma~\ref{l:convergence}).
\begin{lemma}
\label{l:jumpingRuns}
Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$, $k=2^l$ and $\epsilon \in \mathbb{Q}^+$, the expected value $\mathbb{E}(g[{k}])$ can be approximated up to $\epsilon$
in exponential time in $|\mathcal{A}|$, logarithmic time in $k$
and polynomial time in $\rev{\epsilon}$.
\end{lemma}
\begin{proof}
Recall that the expected values of $M[k]$ and $g[{k}]$ coincide.
Observe that $M[k]$ can be turned into a weighted Markov chain $N[k]$ over the same set of states with one edge between any pair of states as follows.
For an edge $(S,S')$, we define its probability as
$\frac{1}{|\Sigma|^k}$ multiplied by the number of edges from $S$ to $S'$ with positive probability in $M[k]$, and the weight of
$(S,S')$ in $N[k]$ is the average of the weights of all such edges in $M[k]$, i.e.,
the weight of $(S,S')$ is $\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid w \in \Sigma^k $ and $ \widehat{\delta}(S,w) = S'\})$ (see Lemma~\ref{l:approxFinExpected}).
Observe that the expected values of $M[k]$ and $N[k]$ coincide.
Having the Markov chain $N[k]$, we can compute its expected value in polynomial time~\cite{filar}.
Since $N[k]$ has the exponential size in $|\mathcal{A}|$, we can compute it in exponential time in $|\mathcal{A}|$.
However, we need to show how to construct $N[k]$. In particular, computing
$\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid w \in \Sigma^k $ and $ \widehat{\delta}(S,w) = S'\})$ can be computationally expensive as $k$ can be doubly-exponential in $|\mathcal{A}|$ (Lemma~\ref{l:convergence}).
Still, due to Lemma~\ref{l:approxFinExpected}, we can approximate $\mathbb{E}( \{ \mathcal{A}_{S}^{\mathrm{fin}}(w) \mid w \in \Sigma^k $ and $ \widehat{\delta}(S,w) = S'\})$ in exponential time in $|\mathcal{A}|$, logarithmic time in $k$
and polynomial time in $\rev{\epsilon}$.
Therefore, we can compute a Markov chain $\markov^{\approx}$ with the same structure as $N[k]$ and such that
for every edge $(S,S')$ the weight of $(S,S')$ in $\markov^{\approx}$ differs from the weight in $N[k]$ by at most $\epsilon$.
Therefore, the expected values of $\markov^{\approx}$ and $N[k]$ differ by at most ${\epsilon}$.
\end{proof}
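The final step, computing the expected value of a weighted Markov chain, is standard; a minimal numpy sketch (ours, assuming an irreducible chain with row-stochastic matrix \texttt{P} and edge-weight matrix \texttt{W}) reads:
\begin{verbatim}
import numpy as np

def expected_limit_average(P, W):
    """Stationary distribution of P paired with the expected one-step weight."""
    n = P.shape[0]
    # the stationary pi solves pi P = pi together with sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(pi @ (P * W).sum(axis=1))
\end{verbatim}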
Lemma~\ref{l:convergence} and Lemma~\ref{l:jumpingRuns} give us approximation algorithms for the expected value and the distribution of recurrent automata over the uniform distribution:
\begin{lemma}
\label{l:singleSCC-uniform}
Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$, $\epsilon \in \mathbb{Q}^+$ and $\lambda \in \mathbb{Q}$,
we can compute $\epsilon$-approximations
of the distribution $\mathbb{D}_{\mathcal{A}}(\lambda)$ and the expected value $\mathbb{E}(\mathcal{A})$
with respect to the uniform measure
in exponential time in $|\mathcal{A}|$ and polynomial time in $\rev{\epsilon}$.
\end{lemma}
\begin{proof}
For uniform distributions, by Lemma~\ref{l:convergence}, for every $\epsilon > 0$, there exists
$k$ such that $|\mathbb{E}(\mathcal{A}) - \mathbb{E}(g[{k}]) | \leq \frac{\epsilon}{2}$.
The value $k$ is doubly-exponential in $|\mathcal{A}|$ and polynomial in $\rev{\epsilon}$.
Then, by
Lemma~\ref{l:jumpingRuns}, we can compute $\gamma$ such that $|\gamma - \mathbb{E}(g[{k}])| \leq \frac{\epsilon}{2}$ in exponential time in $|\mathcal{A}|$ and polynomial in $\rev{\epsilon}$.
Thus, $\gamma$ differs from $\mathbb{E}(\mathcal{A})$ by at most $\epsilon$.
Since almost all words have the same value, we can approximate $\mathbb{D}_{\mathcal{A}}(\lambda)$ by comparing $\lambda$ with $\gamma$, i.e.,
$1$ is an $\epsilon$-approximation of $\mathbb{D}_{\mathcal{A}}(\lambda)$ if $\lambda \leq \gamma$, and otherwise $0$ is an $\epsilon$-approximation of $\mathbb{D}_{\mathcal{A}}(\lambda)$.
\end{proof}
\subsection{Non-uniform measures}
\label{s:non-uniform}
We briefly discuss how to adapt Lemma~\ref{l:singleSCC-uniform} to all measures given by Markov chains. We sketch the main ideas.
\noindent \emph{Key ideas}.
Assuming that (a variant of) Lemma~\ref{l:resetWrods} holds for any probability measure given by a Markov chain, the proofs of
Lemmas~\ref{l:convergence}, \ref{l:jumpingRuns} and \ref{l:singleSCC} can be easily adapted.
Therefore we focus on adjusting Lemma~\ref{l:resetWrods}.
Observe that if a Markov chain $\mathcal{M}$ produces all finite prefixes $u \in \Sigma^*$ with non-zero probability, then the proof of Lemma~\ref{l:resetWrods} can be straightforwardly adapted.
Otherwise, if some finite words cannot be produced by a Markov chain $\mathcal{M}$, then Lemma~\ref{l:resetWrods} may be false.
However, if there are words $w$ such that $\mathcal{A}$ has an infinite run on $w$, but $\mathcal{M}$ does not emit $w$, we can restrict $\mathcal{A}$ to reject such words.
Therefore, we assume that for every word $w$, if $\mathcal{A}$ has an infinite run on $w$, then $\mathcal{M}$ has an infinite path with non-zero probability on $w$ ($\mathcal{M}$ emits $w$).
Then, the current proof of Lemma~\ref{l:resetWrods} can be straightforwardly adapted to the probability measure given by $\mathcal{M}$.
In consequence, we can compute $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ and $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$
in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$.
More precisely, we first observe that we may assume $\mathcal{M}$ to be ``deterministic'', i.e., for all states $s$ and letters $a$, at most one outgoing transition labeled with $a$ has positive probability.
We can determinise $\mathcal{M}$ by extending the alphabet $\Sigma$ to $\Sigma \times S$, where $S$ is the set of states of $\mathcal{M}$.
The second component in $\Sigma \times S$ encodes the target state in the transition.
Observe that $\mathcal{A}$ can be extended to the corresponding automaton $\mathcal{A}'$ over $\Sigma \times S$ by cloning transitions, i.e., for every transition $(q,a,q')$, the automaton $\mathcal{A}'$
has transitions $(q,(a,s),q')$ for every $s \in S$ (i.e., $\mathcal{A}'$ ignores the state of $\mathcal{M}$). For such a deterministic Markov chain $\mathcal{M}'$, we define a deterministic $\omega$-automaton $\mathcal{A}_{\mathcal{M}}$ that accepts words emitted
by $\mathcal{M}'$. Finally, we consider the automaton $\mathcal{A}^R = \mathcal{A}_{\mathcal{M}} \times \mathcal{A}'$, which has infinite runs only on words that are emitted by $\mathcal{M}'$.
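The cloning step is simple enough to state as code; a hypothetical Python sketch (names ours) could read:
\begin{verbatim}
def clone_transitions(delta_A, chain_states):
    # Lift weighted transitions over Sigma to Sigma x S; the lifted
    # automaton A' ignores the Markov-chain component of the letter.
    return {(q, (a, s), q2, w)
            for (q, a, q2, w) in delta_A
            for s in chain_states}
\end{verbatim}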
Therefore, as we discussed, we can adapt the proof of Lemma~\ref{l:singleSCC-uniform} in such a case and compute $\mathbb{D}_{\mathcal{M}', \mathcal{A}^R}(\lambda)$ and
$\mathbb{E}_{\mathcal{M}'}(\mathcal{A}^R)$ (in exponential time in $|\mathcal{A}^R|$, polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$; notice that $|\mathcal{A}^R|$ is polynomial in $|\mathcal{A}|$).
Finally, observe that
$\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda) = \mathbb{D}_{\mathcal{M}', \mathcal{A}^R}(\lambda)$ and
$\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \mathbb{E}_{\mathcal{M}'}(\mathcal{A}^R)$.
In consequence, we have the following:
\begin{lemma}
\label{l:singleSCC}
Given a recurrent $\textsc{LimAvg}$-automaton $\mathcal{A}$, Markov chain $\mathcal{M}$, $\epsilon \in \mathbb{Q}^+$ and $\lambda \in \mathbb{Q}$, we can compute $\epsilon$-approximations
of the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ and the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$
in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$.
\end{lemma}
\section{Non-recurrent automata}
\label{s:non-recurrent}
We present the approximation algorithms for all non-deterministic $\textsc{LimAvg}$-automata over measures given by Markov chains.
\begin{theorem}
\label{th:approximation-limavg}
(1)~For a non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$ the function $\valueL{\mathcal{A}} : \Sigma^{\omega} \to \mathbb{R}$ is measurable.
(2)~Given a non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$, Markov chain $\mathcal{M}$, $\epsilon \in \mathbb{Q}^+$, and $\lambda \in \mathbb{Q}$,
we can $\epsilon$-approximate the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$ and the expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$
in exponential time in $|\mathcal{A}|$ and polynomial time in $|\mathcal{M}|$ and $\rev{\epsilon}$.
\end{theorem}
\begin{proof}
Consider $\mathcal{A}$ as an $\omega$-automaton.
It has no acceptance conditions and hence we can determinise it with the standard power-set construction to a deterministic automaton $\mathcal{A}^D$.
Then, we construct a Markov chain $\mathcal{M} \times \mathcal{A}^D$, compute all its BSCCs $R_1, \ldots, R_k$ along with the probabilities $p_1, \ldots, p_k$ of reaching each of these sets.
This can be done in polynomial time in $|\mathcal{M} \times \mathcal{A}^D|$~\cite{filar,BaierBook}, and hence in polynomial time in $|\mathcal{M}|$ and exponential time in $|\mathcal{A}|$.
Let $H_1, \ldots, H_k$ be sets of paths in $\mathcal{M} \times \mathcal{A}^D$ such that for each $i$, all $\rho \in H_i$ eventually reach $R_i$ and stay there forever.
Observe that each $H_i$ is a Borel set; the set $H_i^p$ of paths that stay in $R_i$ past position $p$ is closed and $H_i = \bigcup_{p\geq 0} H_i^p$.
It follows that each $H_i$ is measurable.
We show how to compute an $\epsilon$-approximation of the conditional expected value $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$.
Consider a BSCC $R_i$. The projection $R_i^1$ of $R_i$ on the first component is a BSCC of $\mathcal{M}$, and the projection $R_i^2$ on the second component is an SCC of $\mathcal{A}^D$.
Let $(s,A) \in R_i$. If we fix $s$ as the initial state of $R_i^1$ and $A$ as the initial state of $R_i^2$, then $R_i$ is the set of all states of $R_i^1 \times R_i^2$ reachable from $(s,A)$.
The set $R_i^2$ consists of states of $\mathcal{A}^D$, which are subsets of states of $\mathcal{A}$.
Therefore, the union $\bigcup R_i^2$ is a subset of states of $\mathcal{A}$ and it consists of some SCCs $S_1, \ldots, S_m$ of $\mathcal{A}$.
All these SCCs are reachable, but it does not imply that there is a run of $\mathcal{A}$ that stays in $S_j$ forever. We illustrate that in the following example.
Consider the automaton $\mathcal{A}$ presented in Figure~\ref{fig:autTwo}, where $q_I$ is the initial state, and a single-state Markov chain $\mathcal{M}$ generating uniform distribution.
\begin{figure}
\caption{An automaton with a reachable SCC $q_F$ such that almost no runs stay in $q_F$ forever}
\label{fig:autTwo}
\end{figure}
Then, all paths in $\mathcal{M} \times \mathcal{A}^D$ are eventually contained in $\mathcal{M} \times \{Q\}$, i.e., the second component consists of all states of $\mathcal{A}$.
Still, if a word $w$ has infinitely many letters $a$, then $\mathcal{A}$ has no (infinite) run on $w$ that visits the state $q_F$.
The set of infinite words that contain only finitely many letters $a$ has probability $0$.
Therefore, almost all words (i.e., all except for some set of probability $0$) have no run that visits the state $q_F$.
To avoid such pathologies, we divide SCCs into two types: \emph{permanent} and \emph{transitory}.
More precisely, for a path $\rho$ in $\mathcal{M} \times \mathcal{A}^D$ let $w_{\rho}$ be the word labeling $\rho$.
We show that for each SCC $S_j$, one of the following holds:
\begin{itemize}
\item $S_j$ is \emph{permanent}, i.e., for almost all paths $\rho \in H_i$ (i.e., the set of paths of probability $1$), the automaton $\mathcal{A}$ has a run on the word $w_{\rho}$ that eventually stays in $S_j$ forever, or
\item $S_j$ is \emph{transitory}, i.e., for almost all paths $\rho \in H_i$, the automaton $\mathcal{A}$ has no run on $w_{\rho}$ that eventually stays in $S_j$.
\end{itemize}
Consider an SCC $S_j$. If $S_j$ is permanent, then it is not transitory. We show that if $S_j$ is not permanent, then it is transitory.
Suppose that $S_j$ is not permanent and consider any $(s,A) \in R_i$.
Almost all paths in $H_i$ visit $(s,A)$, and since $S_j$ is not permanent,
there exists an infinite path $\rho$ that visits $(s,A)$ such that $\mathcal{A}$ has no run on $w_{\rho}$ that stays in $S_j$ forever.
Let $u$ be the suffix of $w_{\rho}$ that labels $\rho$ past some occurrence of $(s,A)$.
We observe that $\widehat{\delta}(A \cap S_j, u) = \emptyset$ and hence for some finite prefix $u'$ of $u$ we have $\widehat{\delta}(A \cap S_j, u') = \emptyset$.
Let $p$ be the probability that $\mathcal{M} \times \mathcal{A}^D$ in the state $(s,A)$ generates a path labeled by $u'$.
The probability that a path that visits $(s,A)$ at least $\ell$ times does not contain $(s,A)$ followed by labels $u'$ is at most $(1-p)^{\ell}$.
Observe that
for almost all paths in $H_i$, the state $(s,A)$ is visited infinitely often and hence almost all paths contain $(s,A)$ followed by labels $u'$ upon which the path leaves $S_j$.
Therefore, $S_j$ is transitory.
To check whether $S_j$ is permanent or transitory, observe that for any $(s,A) \in R_i$,
in the Markov chain $\mathcal{M} \times \mathcal{A}^D$, we can reach the set $\mathcal{M} \times \{\emptyset \}$ from $(s, A \cap S_j)$ if and only if
$S_j$ is transitory. The former condition can be checked in polynomial space.
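For illustration, an explicit breadth-first search over the product states decides this reachability question (a sketch of ours with a hypothetical successor map \texttt{succ}; the explicit search is exponential in $|\mathcal{A}|$, and the polynomial-space bound requires an on-the-fly variant):
\begin{verbatim}
from collections import deque

def is_transitory(start, succ):
    """Can the product chain reach a state whose second component is empty?
    start is a pair (s, A) with A a frozenset of automaton states."""
    seen, todo = {start}, deque([start])
    while todo:
        s, A = todo.popleft()
        if not A:                       # reached M x {emptyset}
            return True
        for nxt in succ[(s, A)]:
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return False
\end{verbatim}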
We mark each SCC $S_1, \ldots, S_m$ as permanent or transitory and
for every permanent SCC $S_j$, we compute
an $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A}[S_j] \mid H_i)$, which is the expected value of $\mathcal{A}$ under condition $H_i$ with the restriction to runs that
eventually stay in $S_j$.
Observe that an $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A}[S_j] \mid H_i)$ can be computed
using Lemma~\ref{l:singleSCC}.
Indeed, we pick $(s, A) \in R_i$ and observe that $\mathcal{A}$ restricted to the states of $S_j$ is recurrent (with appropriate initial states).
Finally, we pick the minimum $\gamma$ over the computed expected values $\mathbb{E}_{\mathcal{M}}(\mathcal{A}[S_j] \mid H_i)$ and observe that almost all words in $H_i$ have value $\gamma$.
It follows that $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i) = \gamma$.
For each BSCC $R_i$, almost all words corresponding to paths in $H_i$ have value $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$.
As we discussed earlier, each $H_i$ is measurable, and hence the function $\valueL{\mathcal{A}} : \Sigma^{\omega} \to \mathbb{R}$ is measurable.
Moreover, to approximate the distribution $\mathbb{D}_{\mathcal{M}, \mathcal{A}}(\lambda)$,
we sum the probabilities $p_i$ of reaching the BSCCs $R_i$ over those $R_i$ for which the $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$ is less than or equal to $\lambda$.
Finally, we compute an $\epsilon$-approximation of $\mathbb{E}_{\mathcal{M}}(\mathcal{A})$ from $\epsilon$-approximations of conditional expected values $\mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$ using the
identity $\mathbb{E}_{\mathcal{M}}(\mathcal{A}) = \sum_{i=1}^k p_i \cdot \mathbb{E}_{\mathcal{M}}(\mathcal{A} \mid H_i)$.
\end{proof}
\section{Determinising and approximating $\textsc{LimAvg}$-automata}
For technical simplicity, we assume that the distribution of words is uniform.
However, the results presented here extend to all distributions given by Markov chains.
Recall that for $\textsc{LimAvg}$-automata, the value of almost all words (i.e., all except for some set of words
of probability $0$) whose optimal runs end up in the same SCC, is the same.
This means that there is a finite set of values (of size not greater than the number of SCCs of the automaton) such that
almost all words have their values in this set.
$\textsc{LimAvg}$-automata are not determinisable~\cite{quantitativelanguages}.
We say that a non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$ is \emph{weakly determinisable} if there
is a deterministic $\textsc{LimAvg}$-automaton $\mathcal{B}$ such that $\mathcal{A}$ and $\mathcal{B}$ have the same value over almost all words.
From \cite{lics16} we know that deterministic automata return rational values for almost all words, so not all $\textsc{LimAvg}$-automata are weakly determinisable.
However, we can show the following.
\begin{theorem}\label{t:determinisation}
A $\textsc{LimAvg}$-automaton $\mathcal{A}$ is weakly determinisable if and only if
it returns rational values for almost all words.
\end{theorem}
\begin{proof}[Proof sketch]
Assume an automaton $\mathcal{A}$ with SCCs $C_1, \dots, C_m$.
For each $i$ let $v_i$ be defined as
the expected value of $\mathcal{A}$ when its set of initial states is $C_i$ and the run is required to stay in $C_i$.
If $\mathcal{A}$ has no such runs for some $C_i$, then $v_i = \infty$.
We now construct a deterministic automaton $\mathcal{B}$ with rational weights using the standard power-set construction.
We define the cost function such that the cost of any transition from a state $Y$ is the minimal value $v_i$ such that $v_i$ is rational and $Y$ contains a state from $C_i$.
If there are no such $v_i$, then we set the cost to the maximal cost of $\mathcal{A}$.
Roughly speaking, $\mathcal{B}$ tracks in which SCCs $\mathcal{A}$ can be, and the weight corresponds to the SCC with the lowest value.
To see that $\mathcal{B}$ weakly determinises $\mathcal{A}$, observe that for almost all words $w$, a run with the lowest value over $w$ ends in some SCC, and its value then equals the expected value of this component,
which is rational as the value of this word is rational.
\end{proof}
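The cost assignment in this construction is easy to sketch; the following Python fragment (ours, assuming the values \texttt{v[i]} have been computed beforehand and that rational ones are represented as \texttt{Fraction}s) charges a power-set state $Y$:
\begin{verbatim}
from fractions import Fraction

def state_cost(Y, sccs, v, max_cost):
    """Least rational v[i] among the SCCs C_i meeting Y; max_cost if none."""
    candidates = [v[i] for i, C in enumerate(sccs)
                  if Y & C and isinstance(v[i], Fraction)]
    return min(candidates) if candidates else max_cost
\end{verbatim}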
A straightforward corollary is that every non-deterministic $\textsc{LimAvg}$-automaton can be weakly determinised by a $\textsc{LimAvg}$-automaton with real weights.
Theorem \ref{t:determinisation} does not provide an implementable algorithm for weak-determinisation, because of the hardness of computing the values $v_i$.
It is possible, however, to approximate this automaton.
We say that a deterministic $\textsc{LimAvg}$-automaton $\mathcal{B}$ \emph{$\epsilon$-approximates} $\mathcal{A}$ if for almost every word $w$ we have $\valueL{\mathcal{B}}(w)\in[\valueL{\mathcal{A}}(w) - \epsilon, \valueL{\mathcal{A}}(w) + \epsilon]$.
\begin{theorem}
\label{th:approximateDeterminisation}
For every $\epsilon>0$ and a non-deterministic $\textsc{LimAvg}$-automaton $\mathcal{A}$, one can compute in exponential time a deterministic $\textsc{LimAvg}$-automaton that $\epsilon$-approximates $\mathcal{A}$.
\end{theorem}
The proof of this theorem is similar to the proof of Theorem~\ref{t:determinisation}, except now it is enough to approximate the values $v_i$, which can be done in exponential time.
\Paragraph{Acknowledgements}
Our special thanks go to G\"unter Rote who pointed out an error in an earlier version of our running example.
\doclicenseThis
\end{document}
\begin{document}
\title[Ideals, supports and harmonic operators]
{Ideals of the Fourier algebra, supports and harmonic operators}
\date{}
\author{M. Anoussis, A. Katavolos and I. G. Todorov}
\address{Department of Mathematics, University of the Aegean,
Samos 83 200, Greece}
\email{[email protected]}
\address{Department of Mathematics, University of Athens,
Athens 157 84, Greece}
\email{[email protected]}
\address{Pure Mathematics Research Centre, Queen's University Belfast,
Belfast BT7 1NN, United Kingdom}
\email{[email protected]}
\keywords{Fourier algebra, masa-bimodule, invariant subspaces, harmonic operators}
\begin{abstract}
We examine the common null spaces of families of Herz-Schur multipliers and
apply our results to study jointly harmonic operators and their relation with
jointly harmonic functionals. We show how an annihilation formula obtained in
\cite{akt} can be used to give a short proof {as well as a generalisation}
of a result of Neufang and Runde
concerning harmonic operators with respect to a normalised positive definite function.
We compare the two notions of support of an operator that have been
studied in the literature and show how one can be expressed in terms of the other.
\end{abstract}
\maketitle
\section{Introduction and Preliminaries}
In this paper we investigate, for a locally compact group $G$,
the common null spaces of families of Herz-Schur multipliers
(or completely bounded multipliers of the Fourier algebra $A(G)$)
and their relation to ideals of $A(G)$.
This provides a new perspective for our previous results in \cite{akt}
concerning (weak* closed) spaces of operators on $L^2(G)$ which are simultaneously
invariant under all Schur multipliers and under
{conjugation by the right regular representation}
of $G$ on $L^2(G)$ ({\em jointly invariant} subspaces -- see below
for precise definitions).
At the same time, it provides a new approach to, as well as an extension
of, a result of Neufang and Runde \cite{neurun} concerning the space
$\widetilde{\cl H}_\sigma$
of operators which are `harmonic' with respect to
a positive definite normalised function $\sigma:G\to\bb C$. The notion of $\sigma$-harmonic
operators was
introduced in \cite{neurun} as an extension of the notion of $\sigma$-harmonic
functionals on $A(G)$ as defined and studied by Chu and Lau in \cite{chulau}.
One of the main results of Neufang and Runde is that $\widetilde{\cl H}_\sigma$
is the von Neumann algebra
on $L^2(G)$ generated
by the algebra $\cl D$ of multiplication operators together
with the space ${\cl H}_\sigma$ of harmonic functionals,
considered as a subspace of the von Neumann algebra $\vn(G)$ of the group.
It will be seen that this result can be obtained as a consequence of the fact
(see Corollary \ref{c_jho})
that, for any family $\Sigma$ of completely bounded multipliers of $A(G)$,
the space $\widetilde{\cl H}_\Sigma$ of {\em jointly $\Sigma$-harmonic operators}
can be obtained as the weak* closed $\cl D$-bimodule
generated by the {\em jointly $\Sigma$-harmonic functionals} ${\cl H}_\Sigma$.
In fact, the spaces $\widetilde{\cl H}_\Sigma$ belong to the class
of jointly invariant subspaces of $\cl B(L^2(G))$ studied
in \cite[Section 4]{akt}.
The space ${\cl H}_\Sigma$ is the annihilator in $\vn(G)$ of a certain ideal of $A(G)$.
Now from any given closed ideal $J$ of the Fourier algebra $A(G)$,
there are two `canonical' ways to arrive at
a weak* closed $\cl D$-bimodule of
$\cl B(L^2(G))$. One way is to consider its annihilator $J^\perp$
in $\vn(G)$ and then take the weak* closed
$\cl D$-bimodule
generated by $J^{\perp}$. We call this bimodule $\Bim(J^\perp)$. The other way is to
take a suitable saturation $\Sat(J)$ of $J$
within the trace class operators
on $L^2(G)$ (see Theorem \ref{th_satlcg}), and then form its annihilator. This gives
a masa bimodule $(\Sat J)^{\perp}$ in $\cl B(L^2(G))$. In \cite{akt},
we proved that these
two procedures yield the same bimodule, that is, $\Bim(J^\perp) = (\Sat J)^{\perp}$.
Our proof that $\widetilde{\cl H}_\Sigma=\Bim({\cl H}_\Sigma)$
rests on this equality.
The notion of {\em support}, $\mathop{\mathrm{supp}}_G(T)$, of an element $T\in\vn(G)$ was
introduced by Eymard in \cite{eymard} by considering $T$ as a linear functional
on the function algebra $A(G)$; thus $\mathop{\mathrm{supp}}_G(T)$ is a closed subset of $G$.
This notion was extended by Neufang and Runde in \cite{neurun} to an arbitrary
$T\in\cl B(L^2(G))$
and used to describe harmonic operators. By considering joint supports,
we show that this extended notion of $G$-support for an operator $T\in\cl B(L^2(G))$
coincides
with the joint $G$-support of a family of elements of $\vn (G)$ naturally associated
to $T$ (Proposition \ref{propsame2}).
On the other hand, the notion of support of an operator $T$ acting on $L^2(G)$
was first introduced by Arveson in \cite{arv}
as a certain closed subset of $G \times G$.
This notion was used
in his study of what was later called operator synthesis.
A different but related approach appears in \cite{eks},
where the notion of
$\omega$-support, $\mathop{\mathrm{supp}}_\omega(T)$, of $T$ was introduced
and used to establish a bijective correspondence between
reflexive masa-bimodules and $\omega$-closed subsets of $G\times G$.
We show that the joint $G$-support $\mathop{\mathrm{supp}}_G(\cl A)$ of an arbitrary family
$\cl A\subseteq \cl B(L^2(G))$ can be fully described in terms of its
joint $\omega$-support $\mathop{\mathrm{supp}}_\omega(\cl A)$ (Theorem \ref{th_compsa}).
The converse does not hold in general,
as the $\omega$-support, being a subset of $G\times G$, contains in general more
information about an arbitrary operator than its $G$-support
(see Remark \ref{last});
however, in case $\cl A$ is a (weak* closed) jointly invariant subspace,
we show that its $\omega$-support can be recovered from its $G$-support
(Theorem \ref{312}).
We also show that, if a set $\Omega\subseteq G\times G$ is invariant
under all maps $(s,t)\to (sr,tr), \, r\in G$,
then $\Omega$ is marginally equivalent to an $\omega$-closed set if and only if
it is marginally equivalent to a (topologically) closed set.
This can fail for non-invariant sets (see for example \cite[p. 561]{eks}).
{For a related result, see \cite[Proposition 7.3]{stt_clos}.}
\noindent\textbf{Preliminaries and Notation }
Throughout, $G$ will denote a second countable locally compact group,
equipped with left Haar measure.
Denote by $\cl D\subseteq\cl{B}(L^2(G))$ the maximal abelian selfadjoint algebra
(masa, for short) consisting of all multiplication operators $M_f:g\to fg$, where
$f\in L^\infty(G)$.
We write $\vn (G)$ for the von Neumann algebra $\{\lambda_s : s\in G\}''$ generated by
the left regular representation $s\to \lambda_s$ of $G$ on $L^2(G)$
(here $(\lambda_sg)(t)=g(s\an t)$).
Every element of the predual of $\vn (G)$ is a vector functional,
$\omega_{\xi,\eta}: T\to (T\xi,\eta)$,
where $\xi,\eta\in L^2(G)$, and $\nor{\omega_{\xi,\eta}}$ is the infimum of the products
$\|\xi\|_2\|\eta\|_2$ over all such representations. This
predual can be identified \cite{eymard} with the set $A(G)$ of all
complex functions $u$ on $G$ of the form $s\to u(s)=\omega_{\xi,\eta}(\lambda_s)$.
With the above norm and pointwise operations, $A(G)$
is a (commutative, regular, semi-simple) Banach algebra of continuous functions
on $G$ vanishing at infinity,
called the \emph{Fourier algebra} of $G$; its Gelfand spectrum can
be identified with $G$ {\it via} point evaluations.
The set $A_c(G)$ of compactly supported
elements of $A(G)$ is dense in $A(G)$.
A function $\sigma:G\to\bb C$ is a {\em multiplier} of $A(G)$ if for all $u\in A(G)$
the pointwise product $\sigma u$ is again in $A(G)$. By duality, a multiplier $\sigma$
induces a bounded operator $T\to \sigma\cdot T$ on $\vn(G)$. We say $\sigma$
is {\em a completely bounded (or Herz-Schur) multiplier},
and write $\sigma\in M^{\cb}A(G)$,
if the latter operator is completely bounded, that is, if there exists a constant $K$
such that $\nor{[\sigma\cdot T_{ij}]}\le K \nor{[T_{ij}]}$ for all $n\in\bb N$ and all
$[T_{ij}]\in M_n(\vn (G))$ (the latter being the space of all $n$ by $n$ matrices with
entries in $\vn (G)$).
The least such constant is the \emph{cb norm} of $\sigma$.
The space $M^{\cb}A(G)$ with pointwise operations and the cb norm is
a Banach algebra into which $A(G)$ embeds contractively.
For a subset $\Sigma\subseteq M^{\cb}A(G)$, we let
$Z(\Sigma)=\{s\in G: \sigma(s)=0 \text{ for all } \sigma\in\Sigma\}$
be its \emph{zero set}.
A subset $\Omega\subseteq G\times G$ is called {\em marginally null} if
there exists a null set (with respect to Haar measure) $X\subseteq G$ such that
$\Omega\subseteq (X\times G)\cup(G\times X)$.
Two sets $\Omega,\Omega'\subseteq G\times G$ are {\em marginally equivalent}
if their symmetric difference is a marginally null set;
we write $\Omega\cong \Omega'$.
A set $\Omega\subseteq G\times G$ is said to be {\em $\omega$-open} if
it is marginally equivalent to a {\em countable} union of Borel rectangles $A\times B$;
it is called {\em $\omega$-closed} when its complement is $\omega$-open.
Given any set $\Omega\subseteq G\times G$, we denote by
$\frak M_{\max}(\Omega)$ the set of all $T\in\cl{B}(L^2(G))$ which are {\em supported}
by $\Omega$ in the sense that $M_{\chi_ B}TM_{\chi_A}=0$ whenever
$A\times B\subseteq G\times G$
is a Borel rectangle disjoint from $\Omega$
(we write $\chi_A$ for the characteristic function of a set $A$).
Given any set $\cl U\subseteq \cl{B}(L^2(G))$ there exists a smallest, up to marginal
equivalence, $\omega$-closed set $\Omega\subseteq G\times G$ supporting every element
of $\cl U$, {\it i.e.} such that $\cl U\subseteq\frak M_{\max}(\Omega)$. This set is
called {\em the $\omega$-support} of $\cl U$ and is denoted $\mathop{\mathrm{supp}}_\omega(\cl U)$ \cite{eks}.
Two functions $h_1,h_2 : G\times G\to \bb{C}$ are said to be
{\em marginally equivalent}, or equal
{\em marginally almost everywhere (m.a.e.)}, if
they differ on a marginally null set.
The predual of $\cl{B}(L^2(G))$ consists of all linear forms $\omega$ given by
$\omega(T):= \sum\limits_{i=1}^{\infty} \sca{Tf_i, g_i}$
where $f_i, g_i\in L^2(G)$ and $\sum\limits_{i=1}^{\infty}\nor{f_i}_2\nor{g_i}_2<\infty$.
Each such $\omega$ defines a trace class operator whose kernel is a function
$h = h_\omega:G\times G\to\bb C$, unique up to marginal equivalence, given by
$h(x,y)=\sum\limits_{i=1}^{\infty} f_i(x )\bar g_i(y)$.
This series converges marginally almost everywhere on $G\times G$.
We use the notation $\du{T}{h} :=\omega(T)$.
We write $T(G)$ for the Banach space of (marginal equivalence classes of) such
functions, equipped with the norm of the predual of $\cl{B}(L^2(G))$.
Let $\frak{S}(G)$ be the multiplier algebra of $T(G)$; by definition, a
measurable function $w : G\times G\rightarrow \bb{C}$ belongs to $\frak{S}(G)$ if
the map $m_w: h\to wh$ leaves $T(G)$ invariant, that is, if
$wh$ is marginally equivalent to a function from $T(G)$, for every $h\in T(G)$.
Note that the operator $m_w$ is automatically bounded.
The elements of $\frak{S}(G)$ are called \emph{(measurable) Schur multipliers}.
By duality, every Schur multiplier induces a bounded operator
$S_w$ on $\cl B(L^2(G))$, given by
\[\du{S_w(T)}{h} = \du{T}{wh}, \ \ \ h\in T(G), \; T\in \cl B(L^2(G))\, .\]
The operators of the form $S_w$, $w\in \frak{S}(G)$, are precisely the
bounded weak* continuous $\cl D$-bimodule maps on $\cl B(L^2(G))$
(see \cite{haa}, \cite{sm}, \cite{pe} and \cite{kp}).
A weak* closed subspace $\cl U$ of $\cl B(L^2(G))$ is invariant under
the maps $S_w$, $w\in \frak{S}(G)$, if and only if it is invariant under
all left and right multiplications by elements of $\cl D$,
{\it i.e.} if $M_fTM_g\in \cl U$ for all $f,g\in L^\infty(G)$ and all $T\in\cl U$, in
other words, if it is a {\em $\cl D$-bimodule}.
For any set $\cl T\subseteq \cl B(L^2(G))$
we denote by $\Bim\cl T$ the smallest weak* closed $\cl D$-bimodule containing
$\cl T$;
thus, $\mathrm{Bim}(\cl T)=\overline{[\mathfrak{S}(G)\cl T]}^{w^*}$.
We call a subspace $\cl U\subseteq \cl B(L^2(G))$ {\em invariant}
if $\rho_rT\rho_r^*\in\cl U$ for all $T\in\cl U$ and all $r\in G$; here, $r\to \rho_r$ is
the right regular representation of $G$ on $L^2(G)$.
An invariant space, which is also a $\cl D$-bimodule, will be called a
{\em jointly invariant space}.
It is not hard to see that, if $\cl A\subseteq \cl B(L^2(G))$,
the smallest weak* closed jointly invariant space
containing $\cl A$ is the weak* closed linear span of
$\{S_w(\rho_rT\rho_r^*): T\in\cl A, w\in \frak S(G), r\in G\}$.
For a complex function $u$ on $G$ we let $N(u):G\times G\to\bb C$ be
the function given by $N(u)(s,t) = u(ts^{-1})$. For any subset $E$ of $G$, we write
$E^*=\{(s,t)\in G\times G: ts^{-1}\in E\}$.
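For orientation, note that each function of the form $N(u)$ is invariant under simultaneous right translations, by a one-line computation (included here for convenience):
\[
N(u)(sr,tr) = u\big(tr(sr)^{-1}\big) = u(trr^{-1}s^{-1}) = u(ts^{-1}) = N(u)(s,t);
\]
this is precisely the invariance property appearing in the next paragraph.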
It is shown in \cite{bf} (see also \cite{j} and \cite{spronk})
that the map $u\rightarrow N(u)$ is an isometry
from $M^{\cb}A(G)$ into $ \frak{S}(G)$ and that its range consists precisely of all
{\em invariant} Schur multipliers, {\it i.e.} those
$w\in \frak{S}(G)$ for which $w(sr,tr) = w(s,t)$ for every $r\in G$ and
marginally almost all $s,t$. Note that the corresponding operators $S_{N(u)}$
are denoted $\hat\Theta(u)$ in \cite{neuruaspro}.
The following result from \cite{akt} is crucial for what follows.
\begin{theorem}\label{th_satlcg}
Let $J\subseteq A(G)$ be a closed ideal and
$\Sat(J)$ be the closed $L^\infty(G)$-bimodule of $T(G)$ generated by the set
\[
\{N(u)\chi_{L\times L}: u \in J, L\ \text{compact, } \ L\subseteq G \}.
\]
Then
$\Sat(J)^{\perp} = \Bim (J^{\perp})$.
\end{theorem}
\section{Null spaces and harmonic operators}\label{s1}
Given a subset $\Sigma\subseteq M^{\cb}A(G)$, let
\[
\frak{N}(\Sigma) = \{T\in \vn(G) : \sigma\cdot T = 0, \ \mbox{ for all } \sigma\in \Sigma\}
\]
be the {\em common null set} of the operators on $\vn(G)$ of the form
$T\to \sigma\cdot T$, with $\sigma\in \Sigma$.
Letting
\[\Sigma A \stackrel{def}{=}
\overline{\sspp}(\Sigma A(G)) = \overline{\sspp\{ \sigma u : \sigma\in \Sigma, u\in A(G)\}},\]
it is easy to verify that $\Sigma A$ is a closed ideal of $A(G)$ and that
\begin{equation}\label{eq_prean}
\frak{N}(\Sigma) = (\Sigma A)^\bot .
\end{equation}
{\remark \label{remideal}
The sets of the form $\Sigma A$ are precisely the closed ideals of $A(G)$
generated by their compactly supported elements.}
\begin{proof}
It is clear that, if $\Sigma\subseteq M^{\cb}A(G)$, the set
$\{\sigma u: \sigma\in\Sigma, u\in A_c(G)\}$ consists of compactly supported elements
and is dense in $\Sigma A$. Conversely, suppose that
$J\subseteq A(G)$ is a closed ideal such that $J\cap A_c(G)$ is dense in $J$.
For every $u\in J$ with compact support $K$,
there exists $v\in A(G)$ which equals 1 on $K$ \cite[(3.2) Lemme]{eymard},
and so $u=uv\in JA$.
Thus $J= \overline{J\cap A_c(G)}\subseteq JA\subseteq J$ and hence $J=JA$.
\end{proof}
The following Proposition shows that
it is sufficient to study sets of the form $\frak N(J)$ where $J$ is a closed ideal
of $A(G)$.
\begin{proposition}\label{p_njan} For any subset $\Sigma$ of $M^{\cb}A(G)$,
\[ \frak{N}(\Sigma)=\frak{N}(\Sigma A).\]
\end{proposition}
\proof
If $\sigma\cdot T = 0$ for all $\sigma\in\Sigma$ then {\em a fortiori}
$v\sigma\cdot T=0$, for all $v\in A(G)$ and all $\sigma\in \Sigma$.
It follows that $w\cdot T=0$ for all $w\in \Sigma A$; thus
$\frak{N}(\Sigma)\subseteq \frak{N}(\Sigma A)$.
Suppose conversely that $w\cdot T=0$ for all $w\in \Sigma A$ and fix $\sigma\in\Sigma$.
Now $u\sigma\cdot T=0$ for all $u \in A(G)$,
and so $ \du{\sigma\cdot T}{uv}=0$ when $u,v\in A(G)$.
Since the products $uv$ form a dense subset of $A(G)$, we have
$\sigma\cdot T=0$. Thus $\frak{N}(\Sigma)\supseteq \frak{N}(\Sigma A)$
since $\sigma\in\Sigma$ is arbitrary, and the proof is complete. \qed
It is not hard to see that $\lambda_s$
is in $\frak N(\Sigma)$ if and only if $s$ is in the zero set $Z(\Sigma)$ of $\Sigma$, and so
$Z(\Sigma)$ coincides with the zero set of the ideal $J=\Sigma A$.
Whether or not, for an ideal $J$, these unitaries suffice to generate $\frak N(J)$
depends on properties of the zero set.
For our purposes, a closed subset $E\subseteq G$ is a {\em set of synthesis}
if there is a unique closed ideal $J$ of $A(G)$ with $Z(J)=E$. Note
that this ideal is generated by its compactly supported elements
\cite[Theorem 5.1.6]{kaniuth}.
\begin{lemma} \label{proto}
Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is
a set of synthesis. Then
\[
\frak N(J)=J^\bot=\overline{\sspp\{\lambda_x:x\in E\}}^{w*}
\]
\end{lemma}
\proof
Since $E$ is a set of synthesis, $J=JA$ by Remark \ref{remideal};
thus $J^\bot=(JA)^\bot=\frak N(J)$ by relation (\ref{eq_prean}).
The other equality is essentially a reformulation of the fact that $E$
is a set of synthesis: a function $u\in A(G)$ is in $J$ if and only if
it vanishes at every point of $E$, that is, if and only if it annihilates every $\lambda_s$
with $s\in E$ (since $\du{\lambda_s}{u}= u(s)$). \qed
A linear space $\cl U$ of bounded operators on a Hilbert space is called
{\em a ternary ring of operators (TRO)}
if it satisfies $ST^*R\in\cl U$ whenever $S,T$ and $R$ are in $\cl U$. Note that a TRO
containing the identity operator is automatically a selfadjoint algebra.
\begin{proposition}\label{deutero}
Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is
the coset of a closed subgroup of $G$. Then
$\frak N(J)$ is a (weak* closed) TRO. In particular, if $E$ is a closed subgroup
then $\frak N(J)$ is a von Neumann subalgebra of $\vn (G)$.
\end{proposition}
\proof We may write $E=Hg$ where $H$ is a closed subgroup and $g\in G$
(the proof for the case $E=gH$ is identical).
Now $E$ is a translate of $H$ which is a set of synthesis by \cite{tatsuuma2}
and hence $E$ is a set of synthesis.
Thus Lemma \ref{proto} applies.
If $sg,tg,rg$ are in $E$ and $S=\lambda_{sg}, \, T=\lambda_{tg}$ and $R=\lambda_{rg}$, then
$ST^*R=\lambda_{st\an rg}$ is also in $\frak N(J)$ because $st\an rg\in E$.
Since $\frak N(J)$ is generated by $\{\lambda_x:x\in E\}$, it follows
that $ST^*R\in\frak N(J)$ for any
three elements $S,T,R$ of $\frak N(J)$. \qed
{\remark Special cases of the above result are proved by Chu and Lau
in \cite{chulau} (see Propositions 3.2.10 and 3.3.9).}
We now pass from $\vn(G)$ to $\cl B (L^2(G))$:
The algebra $M^{\cb}A(G)$ acts on $\cl B(L^2(G))$ {\it via} the maps
$S_{N(\sigma)},\, \sigma\in M^{\cb}A(G)$ (see \cite{bf} and \cite{j}),
and this action is an extension of the action of $M^{\cb}A(G)$ on $\vn (G)$:
when $T\in\vn (G)$ and $\sigma\in M^{\cb}A(G)$,
we have $S_{N(\sigma)}(T)=\sigma\cdot T$.
Hence, letting
\[
\tilde{\frak N}(\Sigma) =
\{T\in\cl B(L^2(G)): S_{N(\sigma)}(T)=0 , \ \mbox{ for all } \sigma\in \Sigma\},
\]
we have $\frak{N}(\Sigma)=\tilde{\frak N}(\Sigma)\cap \vn(G).$
The following is analogous to Proposition \ref{p_njan}; note, however,
that the dualities are different.
\begin{proposition}\label{new} If $\Sigma\subseteq M^{\cb}A(G)$,
\[ \tilde{\frak{N}}(\Sigma)=\tilde{\frak{N}}(\Sigma A).\]
\end{proposition}
\proof The inclusion $\tilde{\frak{N}}(\Sigma)\subseteq \tilde{\frak{N}}(\Sigma A)$
follows as in the proof of Proposition \ref{p_njan}.
To prove that
$\tilde{\frak N}(\Sigma A)\subseteq \tilde{\frak{N}}(\Sigma)$,
let $T\in \tilde{\frak{N}}(\Sigma A)$; then $S_{N(v\sigma)}(T)=0$ for all
$\sigma \in\Sigma$ and $v \in A(G)$. Thus, if $h\in T(G)$,
\[\du{S_{N(\sigma)}(T)}{N(v) h} =\du{T}{N(\sigma v) h} = \du{S_{N(v\sigma )}(T)}{ h} = 0\, .\]
Since the linear span of the set $\{N(v) h: v \in A(G), h \in T(G) \}$ is dense in $T(G)$,
it follows that $S_{N(\sigma)}(T)=0$ and so $T \in \tilde{\frak{N}}(\Sigma)$. \qed
\begin{proposition}\label{prop2}
For every closed ideal $J$ of $A(G),\quad \tilde{\frak N}(J)= \Bim(J^\bot) $.
\end{proposition}
\proof If $T\in\cl B(L^2(G)), h\in T(G)$ and $u\in A(G)$ then
\[\langle S_{N(u)}(T),h\rangle = \langle T, N(u)h\rangle .\]
By \cite[Proposition 3.1]{akt},
$\Sat(J)$ is the closed linear span of $\{N(u)h: u\in J , h\in T(G)\}$.
We conclude that $T\in (\Sat(J))^\bot$ if and only if $S_{N(u)}(T) = 0$ for all
$u\in J$, {\it i.e.} if and only if $T\in \tilde{\frak{N}}(J)$.
By Theorem \ref{th_satlcg}, $(\Sat(J))^\bot=\Bim(J^\bot)$, and the proof is complete.
$\qquad\Box$
\begin{theorem}\label{thbimn}
For any subset $\Sigma$ of $M^{\cb}A(G)$,
\[ \tilde{\frak N}(\Sigma)= \Bim(\frak{N}(\Sigma)).\]
\end{theorem}
\proof
It follows from relation (\ref{eq_prean}) that
$\Bim((\Sigma A)^\bot) = \Bim(\frak{N}(\Sigma))$.
But $\Bim((\Sigma A)^\bot)=\tilde{\frak N}(\Sigma A)$ from Proposition \ref{prop2} and
$\tilde{\frak N}(\Sigma A)=\tilde{\frak N}(\Sigma)$ from Proposition \ref{new}.
\qed
More can be said when the zero set $Z(\Sigma)$ is a subgroup (or a coset) of $G$.
\begin{lemma}\label{trito}
Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is
a set of synthesis. Then
\begin{equation}\label{eq}
\tilde{\frak N}(J)
=\overline{\sspp\{M_g\lambda_x:x\in E,g\in L^\infty(G)\}}^{w*}
\end{equation}
\end{lemma}
\proof
By Theorem \ref{thbimn}, $\tilde{\frak N}(J) = \Bim(\frak N(J))$
and thus, by Lemma \ref{proto},
$\tilde{\frak N}(J)$ is the weak* closed linear span of
the monomials of the form
$M_f\lambda_sM_g$ where $f,g\in L^\infty(G)$ and $s\in E$. But, because of the
commutation relation
$\lambda_sM_g=M_{g_s}\lambda_s \ \ (\mbox{where } g_s(t)=g(s\an t))$,
we may write $M_f\lambda_sM_g=M_\phi\lambda_s$ where $\phi=fg_s\in L^\infty(G)$.\qed
\begin{theorem}\label{tetarto}
Let $J\subseteq A(G)$ be a closed ideal. Suppose that its zero set $E=Z(J)$ is
the coset of a closed subgroup of $G$. Then
$\tilde{\frak N}(J)$ is a (weak* closed) TRO. In particular if $E$ is a closed subgroup
then $\tilde{\frak N}(J)$ is a von Neumann subalgebra of $\cl B(L^2(G))$ and
\[
\tilde{\frak N}(J)=(\cl D\cup\frak N(J))''=(\cl D\cup\{\lambda_x:x\in E\})''.
\]
\end{theorem}
\proof
As in the proof of Proposition \ref{deutero}, we may take $E=Hg$.
By Lemma \ref{trito}, it suffices to check the TRO relation for monomials
of the form $M_f\lambda_{sg}$; but, by the commutation relation,
triple products
$(M_{f_1}\lambda_{sg})(M_{f_2}\lambda_{tg})^*(M_{f_3}\lambda_{rg})$
of such monomials may be written in the form
$M_\phi\lambda_{st\an rg}$ and so belong to $\tilde{\frak N}(J)$ when $sg,tg$ and $rg$
are in the coset $E$.
Finally, when $E$ is a closed subgroup, the last equalities follow
from relation (\ref{eq}) and the bicommutant theorem.
\qed
We next extend the notions of $\sigma$-harmonic functionals \cite{chulau}
and operators \cite{neurun} to jointly harmonic functionals and operators:
\begin{definition}\label{d_jh}
Let $\Sigma\subseteq M^{\cb}A(G)$.
An element $T\in \vn(G)$ will be
called a \emph{$\Sigma$-harmonic functional } if
$\sigma\cdot T=T$ for all $\sigma\in\Sigma$. We write $\cl H_{\Sigma}$
for the set of all {$\Sigma$-harmonic} functionals.
An operator $T\in \cl B(L^2(G))$ will be called \emph{$\Sigma$-harmonic} if
$S_{N(\sigma)}(T)=T$ for all $\sigma\in\Sigma$. We write $\widetilde{\cl H}_{\Sigma}$
for the set of all {$\Sigma$-harmonic} operators.
\end{definition}
Explicitly, if $\Sigma'=\{\sigma -\mathbf 1:\sigma\in\Sigma\}$,
\begin{align*}
\cl H_{\Sigma} &=
\{T\in \vn(G) : \sigma\cdot T=T \;\text{for all }\; \sigma\in\Sigma\} = \frak N(\Sigma') \\
\text{and }\quad
\widetilde{\cl H}_{\Sigma} \ &=
\{T\in \cl B(L^2(G)) : S_{N(\sigma)}(T)=T \;\text{for all }\; \sigma\in\Sigma\} = \tilde{\frak N}(\Sigma').
\end{align*}
The following is an immediate consequence of Theorem \ref{thbimn}.
\begin{corollary}\label{c_jho}
Let $\Sigma\subseteq M^{\cb}A(G)$.
Then the weak* closed $\cl D$-bimodule $\Bim(\cl H_{\Sigma})$
generated by $\cl H_{\Sigma}$ coincides with $\widetilde{\cl H}_{\Sigma}$.
\end{corollary}
Let $\sigma$ be a positive definite normalised function and $\Sigma = \{\sigma\}$.
In \cite[Theorem 4.8]{neurun}, the authors prove,
under some restrictions on $G$ or $\sigma$ (removed in \cite{kalantar}),
that $\widetilde{\cl H}_{\Sigma}$ coincides with the von
Neumann algebra $(\cl D\cup\cl H_{\Sigma})''$.
We give a short proof of a more general result.
Denote by $P^1(G)$ the set of all positive definite normalised functions on $G$.
Note that $P^1(G)\subseteq M^{\cb}A(G)$.
\begin{theorem}
Let $\Sigma\subseteq P^1(G)$.
The space $\widetilde{\cl H}_{\Sigma}$ is a von Neumann subalgebra of $\cl B(L^2(G))$,
and $\widetilde{\cl H}_{\Sigma}=(\cl D \cup\cl H_{\Sigma})''$.
\end{theorem}
\proof Note that $\cl H_{\Sigma}=\frak N(\Sigma')=\frak N(\Sigma' A)$ and
$\widetilde{\cl H}_{\Sigma} = \tilde{\frak N}(\Sigma')= \tilde{\frak N}(\Sigma' A)$.
Since $Z(\Sigma')$ is a closed subgroup \cite[Proposition 32.6]{hr2},
it is a set of spectral synthesis \cite{tatsuuma2}. Thus the result
follows from Theorem \ref{tetarto}.
\qed
{\remark It is worth pointing out that $\widetilde{\cl H}_{\Sigma}$
has an abelian commutant,
since it contains a masa. In particular, it is a type I,
and hence an injective, von Neumann algebra.}
In \cite[Theorem 4.3]{akt} it was shown that a weak* closed subspace
$\cl U\subseteq \cl B(L^2(G))$ is jointly invariant if and only if it is of the form
$\cl U = \Bim(J^{\perp})$ for a closed ideal $J\subseteq A(G)$. By
Proposition \ref{prop2}, $\Bim(J^{\perp})=\tilde{\frak{N}}(J)$,
giving another equivalent description. In fact, the ideal $J$ may be replaced
by a subset of $M^{\cb}A(G)$:
\begin{proposition}\label{th_eqc}
Let $\cl U\subseteq \cl B(L^2(G))$ be a weak* closed subspace.
The following are equivalent:
(i) \ \ $\cl U$ is jointly invariant;
(ii) \ there exists a closed ideal $J\subseteq A(G)$ such that $\cl U = \tilde{\frak{N}}(J)$;
(iii) \ there exists a subset $\Sigma\subseteq M^{\cb}A(G)$ such that
$\cl U = \tilde{\frak{N}}(\Sigma)$.
\end{proposition}
\begin{proof}
We observed the implication (i)$\Rightarrow$(ii) above, and
(ii)$\Rightarrow$(iii) is trivial.
Finally, (iii)$\Rightarrow$(i) follows from
Theorem \ref{thbimn} and \cite[Theorem 4.3]{akt}.
\end{proof}
{\remark
It might also be observed that every weak* closed
jointly invariant subspace $\cl U$ is of the form
$\cl U = \widetilde{\cl H}_{\Sigma}$
for some $\Sigma\subseteq M^{\cb}A(G)$.}
We end this section with a discussion on the ideals of the form $\Sigma A$:
If $J$ is a closed ideal of $A(G)$, then $J A\subseteq J$; thus, by (\ref{eq_prean})
and Proposition \ref{p_njan},
$J^\bot\subseteq \frak N(J)$ and therefore
$\Bim (J^\bot)\subseteq\tilde{\frak N}(J)$,
since $\tilde{\frak N}(J)$ is a $\cl D$-bimodule and contains ${\frak N}(J)$.
The equality
$J^\bot= \frak N(J)$ holds if and only if $J$ is
generated by its compactly supported elements, equivalently if
$J=JA$ (see Remark \ref{remideal}). Indeed, by
Proposition \ref{p_njan} we have $\frak N(J)=\frak N(JA)= (JA)^\bot$ and so
the equality $J^\bot= \frak N(J)$ is equivalent to $J^\bot= (JA)^\bot$.
Interestingly, the inclusion $\Bim (J^\bot)\subseteq\tilde{\frak N}(J)$
is in fact always an equality (Proposition \ref{prop2}).
We do not know whether all closed ideals of $A(G)$ are of the form $\Sigma A$.
They certainly are when $A(G)$ satisfies {\em Ditkin's condition at infinity}
\cite[Remark 5.1.8 (2)]{kaniuth},
namely if
every $u\in A(G)$ is the limit of a sequence $(uv_n)$, with $v_n\in A_c(G)$.
Since $A_c(G)$ is dense in $A(G)$, this is equivalent to the condition that
every $u\in A(G)$ belongs to the closed ideal $\overline{uA(G)}$.
This condition has been used before (see for example \cite{kl}).
It certainly holds whenever $A(G)$ has a weak form of approximate identity;
for instance, when $G$ has the approximation property (AP) of
Haagerup and Kraus \cite{hk} and a fortiori when $G$ is amenable.
It also holds for all discrete groups.
See also the discussion in Remark 4.2 of \cite{lt} and the one
following Corollary 4.7 of \cite{akt}.
\section{Annihilators and Supports}\label{s}
In this section, given a set $\cl A$ of operators on $L^2(G)$, we study the ideal of
all $u\in A(G)$ which act trivially on $\cl A$; its zero set is the $G$-support
of $\cl A$; we relate this to the $\omega$-support of $\cl A$ defined in \cite{eks}.
In \cite{eymard}, Eymard introduced, for $T\in\vn (G)$, the ideal $I_T$ of all
$u\in A(G)$ satisfying $u\cdot T=0$. We generalise this by defining,
for a subset $\cl A$ of $\cl B(L^2(G))$,
\[
I_\cl{A} =\{u \in A(G): S_{N(u)}(\cl A)=\{0\}\}.
\]
It is easy to verify that $I_\cl{A}$ is a closed ideal of $A(G)$.
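Indeed (a sketch, assuming the multiplicativity $N(vu)=N(v)N(u)$, the relation
$S_{\varphi\psi}=S_\varphi\circ S_\psi$ and the estimate
$\|S_{N(u)}\|\leq\|u\|_{A(G)}$): if $u\in I_\cl{A}$ and $v\in A(G)$, then
\[
S_{N(vu)}(\cl A)=S_{N(v)}\bigl(S_{N(u)}(\cl A)\bigr)=S_{N(v)}(\{0\})=\{0\},
\]
so $vu\in I_\cl{A}$, while closedness follows from the norm estimate.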
Let $\cl U(\cl A)$ be the smallest weak* closed jointly invariant
subspace containing $\cl A$.
We next prove that $\cl U(\cl A)$ coincides with the set $\tilde{\frak N}(I_\cl{A})$
of all $T\in\cl B(L^2(G))$ satisfying $S_{N(u)}(T)=0$ for all $u \in I_\cl{A}$.
\begin{proposition} \label{13}
Let $\cl A\subseteq\cl B(L^2(G))$.
If $\sigma\in M^{\cb}A(G)$ then $S_{N(\sigma)}(\cl A)=\{0\}$ if and only if
$S_{N(\sigma)}(\cl U(\cl A))=\{0\}$.
Thus, $I_\cl{A}=I_\cl{U(A)}$.
\end{proposition}
\proof
Recall that
$$\cl U(\cl A) =
\overline{\sspp\{S_w(\rho_r T \rho_r^*) : T\in \cl A, w\in \frak{S}(G), r\in G\}}^{w^*}.$$
The statement now follows immediately from the facts that
$S_{N(\sigma)}\circ S_w= S_w\circ S_{N(\sigma)}$ for all $w\in \frak S(G)$
and $S_{N(\sigma)}\circ {\rm Ad}_{\rho_r}= {\rm Ad}_{\rho_r}\circ S_{N(\sigma)}$
for all $r\in G$.
The first commutation relation is obvious,
and the second one can be seen as follows:
Denoting by $\theta_r$ the predual of the
map ${\rm Ad}_{\rho_r}$, for all $h\in T(G)$ we have
$\theta_r(N(\sigma)h) = N(\sigma)\theta_r(h)$ since $N(\sigma)$ is right invariant and so
\begin{align*}
\du{S_{N(\sigma)}(\rho_rT\rho_r^*)}{h} &= \du{\rho_rT\rho_r^*}{N(\sigma)h}
= \du{T}{\theta_r(N(\sigma)h)} \\
& = \du{T}{N(\sigma)\theta_r(h)} = \du{S_{N(\sigma)}(T)}{\theta_r(h)} \\
&= \du{\rho_r(S_{N(\sigma)}(T))\rho_r^*}{h}.
\end{align*}
Thus $S_{N(\sigma)}(\rho_rT\rho_r^*)=\rho_r(S_{N(\sigma)}(T))\rho_r^*$.
\qed
\begin{theorem} \label{prop16}
Let $\cl A\subseteq\cl B(L^2(G))$. The bimodule
$\tilde{\frak N}(I_\cl{A})$ coincides with the smallest weak* closed jointly
invariant subspace $\cl U(\cl A)$ of $\cl B(L^2(G))$ containing $\cl A$.
\end{theorem}
\proof
Since $\cl{U(A)}$ is weak* closed and jointly invariant, by \cite[Theorem 4.3]{akt} it
equals $\Bim(J^\bot)$,
where $J$ is the closed ideal of $A(G)$ given by
\[J=\{u\in A(G): N(u)\chi_{L\times L} \in (\cl{U(A)})_\bot
\;\text{for all compact $L\subseteq G$}\}.\]
We show that $J\subseteq I_\cl{A}$. Suppose $u\in J$; then,
for all $w\in\frak S(G)$ and all $T\in\cl A$, since $S_w(T)$ is in $\cl{U(A)}$,
by Theorem \ref{th_satlcg}
it annihilates $ N(u)\chi_{L\times L}$ for every compact $L\subseteq G$.
It follows that
$$\du{S_{N(u)}(T)}{w\chi_{L\times L}} = \du{T}{N(u)w\chi_{L\times L}} =\du{S_w(T)}{N(u)\chi_{L\times L}} = 0$$
for all $w\in\frak S(G)$ and all compact $L\subseteq G$.
Taking $w=f\otimes\bar g$
with $f,g\in L^\infty(G)$ supported in $L$, this yields
\[
\sca{S_{N(u)}(T)f,g} =
\du{S_{N(u)}(T)}{w\chi_{L\times L}} = 0
\]
for all compactly supported $f,g\in L^\infty(G)$ and therefore $S_{N(u)}(T)=0$.
Since this holds for all $T\in\cl A$, we have shown that $u\in I_\cl{A}$.
It follows that $\cl{U(A)}=\Bim(J^\perp)\supseteq \Bim(I_\cl{A}^\perp)$.
But $\Bim(I_\cl{A}^\perp)=\tilde{\frak N}(I_\cl{A})$ by Proposition \ref{prop2},
and this space is clearly jointly invariant and weak* closed.
Since it contains $\cl A$, it also contains $\cl{U(A)}$ and so
\[\cl{U(A)}=\Bim(J^\perp)= \Bim(I_\cl{A}^\perp)=\tilde{\frak N}(I_\cl{A}). \qquad\Box\]
\noindent\textbf{Supports of functionals and operators}
In \cite{neurun}, the authors generalise the notion of support of an element of
$\vn(G)$ introduced by Eymard \cite{eymard} by defining, for an arbitrary
$T\in\cl B(L^2(G))$,
\[ \mathop{\mathrm{supp}}G T :=
\{x\in G : u(x) = 0 \;\text{for all $u\in A(G)$ with }\; S_{N(u)}(T) = 0\}.\]
Notice that $\mathop{\mathrm{supp}}G T$ coincides with the zero set of the ideal $I_T$ (see also
\cite[Proposition 3.3]{neurun}). More generally, let us define the {\em $G$-support}
of a subset $\cl A$ of $\cl B(L^2(G))$ by
\[\mathop{\mathrm{supp}}G(\cl A) = Z(I_\cl{A}).\]
When $\cl A\subseteq \vn(G)$, then $\mathop{\mathrm{supp}}G(\cl A)$ is just the support
of $\cl A$ considered as a set of functionals on $A(G)$ as in \cite{eymard}.
The following is proved in \cite{neurun} under the assumption
that $G$ has the approximation property of Haagerup and Kraus \cite{hk}:
\begin{proposition}
Let $T\in\cl B(L^2(G))$. Then $\mathop{\mathrm{supp}}G(T)=\emptyset$ if and only if $T=0$.
\end{proposition}
\proof It is clear that the empty set is the $G$-support of the zero operator.
Conversely,
suppose $\mathop{\mathrm{supp}}G(T)=\emptyset$, that is, $Z(I_T)=\emptyset$. This implies
that $I_T=A(G)$ (see \cite[Corollary 3.38]{eymard}).
Hence $S_{N(u)}(T)=0$ for all $u\in A(G)$, and so for all $h\in T(G)$ we have
\[
\du{T}{N(u)h}= \du{S_{N(u)}(T)}{h}=0.
\]
Since the linear span of $\{N(u)h:u\in A(G), h\in T(G)\}$ is dense in $T(G)$,
it follows that $T=0.$
\qed
\begin{proposition}\label{propsame}
The $G$-support of a subset $\cl A\subseteq\cl B(L^2(G))$ is the same as
the $G$-support of the smallest weak* closed jointly invariant subspace $\cl{U(A})$
containing $\cl A$.
\end{proposition}
\proof Since $I_\cl{A}=I_{\cl{U(A})}$ (Proposition \ref{13}), this is immediate. \qed
The following proposition shows that the $G$-support of a subset
$\cl A\subseteq\cl B(L^2(G))$ is in fact the support of a space of linear
functionals on $A(G)$ (as used by Eymard): it
can be obtained either by first forming
the ideal $I_\cl{A}$ of all $u\in A(G)$ `annihilating' $\cl A$
(in the sense that $S_{N(u)}(\cl A)=\{0\}$)
and then taking the support of the annihilator of $I_\cl{A}$ in $\vn(G)$; alternatively,
it can be obtained by forming the smallest weak* closed jointly invariant subspace
$\cl{U(A})$
containing $\cl A$ and then considering the support of the set of
all the functionals on $A(G)$ which are contained in $\cl{U(A})$.
\begin{proposition}\label{propsame2}
The $G$-support of a subset $\cl A\subseteq\cl B(L^2(G))$ coincides with the supports
of the following spaces of functionals on $A(G)$:
(i) \ the space $I_\cl{A}^\bot\subseteq\vn(G)$;
(ii) the space $\cl{U(A})\cap\vn(G)=\frak N(I_\cl{A})$.
\end{proposition}
\proof
By Proposition \ref{prop2} and Theorem \ref{prop16},
\[\cl{U(A})= \tilde{\frak N}(I_\cl{A})=\Bim( I_\cl{A}^\bot).\]
Since the $\cl D$-bimodule $\Bim( I_\cl{A}^\bot)$ is jointly invariant,
it coincides with $\cl U(I_\cl{A}^\bot)$.
Thus $\cl{U(A})=\cl U(I_\cl{A}^\bot)$ and so Proposition \ref{propsame} gives
$\mathop{\mathrm{supp}}G(\cl A)=\mathop{\mathrm{supp}}G( I_\cl{A}^\bot)$, proving part (i).
Note that
$\cl U(\frak N(I_\cl{A}))=\Bim(\frak N(I_\cl{A}))=\tilde{\frak N}(I_\cl{A})$ and so
$\cl U(\frak N(I_\cl{A}))=\cl{U(A})$. Thus by Proposition \ref{propsame},
$\frak N(I_\cl{A})$ and $\cl A$
have the same support. Since
$\cl{U(A})\cap\vn(G)=\tilde{\frak N}(I_\cl{A})\cap\vn(G)=\frak N(I_\cl{A})$,
part (ii) follows. \qed
We are now in a position to relate the $G$-support of a set of operators to their
$\omega$-support as introduced in \cite{eks}.
\begin{theorem}\label{312}
Let $\cl U\subseteq\cl B(L^2(G))$ be a weak* closed jointly invariant subspace. Then
\begin{align*}
\mathop{\mathrm{supp}}o(\cl U) &\cong (\mathop{\mathrm{supp}}G(\cl U))^*.
\end{align*}
In particular, the $\omega$-support of a jointly invariant subspace is marginally
equivalent to a topologically closed set.
\end{theorem}
\proof
Let $J = I_\cl{U}$. By definition, $\mathop{\mathrm{supp}}G(\cl U) = Z(J)$. By
the proof of Theorem \ref{prop16},
$\cl U = \Bim(J^\bot)$, and hence, by Theorem \ref{th_satlcg}, $\cl U = (\Sat J)^\bot$.
By \cite[Section 5]{akt},
$\mathop{\mathrm{supp}}o(\cl U)=\nul(\Sat J)=(Z(J))^*$, where
$\nul (\Sat J)$ is the largest, up to marginal equivalence,
$\omega$-closed subset $F$ of $G\times G$
such that $h|_F = 0$ for all $h\in\Sat J$ (see \cite{st1}).
The proof is complete.
\qed
\begin{corollary}\label{c_nss}
Let $\Sigma\subseteq M^{\cb}A(G)$. Then
\[
\mathop{\mathrm{supp}}o \tilde{\frak{N}}(\Sigma) \cong Z(\Sigma)^*.
\]
If $Z(\Sigma)$ satisfies
spectral synthesis, then $\tilde{\frak{N}}(\Sigma) = \frak{M}_{\max}(Z(\Sigma)^*)$.
\end{corollary}
\begin{proof} From Theorem \ref{thbimn}, we know that
$\tilde{\frak{N}}(\Sigma)=\Bim((\Sigma A)^\bot)=\tilde{\frak{N}}(\Sigma A)$
and so $\mathop{\mathrm{supp}}o \tilde{\frak{N}}(\Sigma) \cong Z(\Sigma A)^*$
by \cite[Section 5]{akt}.
But $Z(\Sigma A)=Z(\Sigma)$ as can easily be verified (if $\sigma(t)\ne 0$
there exists $u\in A(G)$ so that $(\sigma u)(t)\ne 0$; the converse is trivial).
The last claim follows from
the fact that, when $Z(\Sigma)$ satisfies
spectral synthesis, there is a unique weak* closed $\cl D$-bimodule whose
$\omega$-support is $Z(\Sigma)^*$ (see \cite[Theorem 4.11]{lt} or the proof of
\cite[Theorem 5.5]{akt}).
\end{proof}
Note that when $\Sigma\subseteq P^1(G)$, the set $Z(\Sigma)$ satisfies
spectral synthesis.
The following corollary is a direct consequence of Corollary \ref{c_nss}.
\begin{corollary}\label{corsyn}
Let $\Sigma\subseteq M^{\cb}A(G)$ and $\Sigma'=\{\mathbf 1-\sigma:\sigma\in\Sigma\}$.
If $Z(\Sigma') $
is a set of spectral synthesis, then
$\widetilde{\cl H}_{\Sigma} = \frak{M}_{\max}(Z(\Sigma')^*)$.
\end{corollary}
\begin{corollary} Let $\Omega$ be a subset of $G\times G$ which
is invariant under all maps $(s,t)\to (sr,tr), \, r\in G$.
Then $\Omega$ is marginally equivalent to an $\omega$-closed set if and only if
it is marginally equivalent to a topologically closed set.
\end{corollary}
\proof A topologically closed set is of course $\omega$-closed. For the converse, let
$\cl U=\frak{M}_{\max}(\Omega)$, so that
$\Omega\cong\mathop{\mathrm{supp}}o(\cl U)$. Note that $\cl U$ is a
weak* closed jointly invariant space.
Indeed, since $\Omega$ is invariant, for every $T\in\cl U$
the operator $T_r:=\rho_rT\rho_r^*$ is supported in $\Omega$
and hence is in $\cl U$. Of course $\cl U$ is invariant under all Schur multipliers.
By Theorem \ref{312},
$\mathop{\mathrm{supp}}o(\cl U)$ is marginally equivalent to a closed set. \qed
\begin{theorem}\label{th_compsa}
Let $\cl A\subseteq \cl B(L^2(G))$. Then $\mathop{\mathrm{supp}}G(\cl A)$ is
the smallest closed subset $E\subseteq G$ such that $E^*$ marginally contains
$\mathop{\mathrm{supp}}o(\cl A)$.
\end{theorem}
\proof
Let $\cl U=\cl{U(A})$ be the smallest
jointly invariant weak* closed subspace containing $\cl A$.
Let $Z=Z(I_\cl{A})$; by definition, $Z=\mathop{\mathrm{supp}}G\cl A$. But
$\mathop{\mathrm{supp}}G\cl A=\mathop{\mathrm{supp}}G\cl U=Z$ (Proposition \ref{propsame})
and so $\mathop{\mathrm{supp}}o\cl{U}\cong Z^*$ by Theorem \ref{312}.
Thus $Z^*$ does marginally contain $\mathop{\mathrm{supp}}o(\cl A)$.
On the other hand, let
$E\subseteq G$ be a closed set such that $E^*$ marginally contains
$\mathop{\mathrm{supp}}o(\cl A)$. Thus any operator $T\in\cl A$ is supported in
$E^*$. But since $E^*$ is invariant, $\rho_rT\rho_r^*$ is also supported in $E^*,$
for every $r\in G$. Thus $\cl U$ is supported in $E^*$.
This means that $Z^*$ is marginally contained in $E^*$;
that is, there is a null set $N\subseteq G$
such that $Z^*\setminus E^*\subseteq (N\times G)\cup (G\times N)$.
We claim that $Z\subseteq E$. To see this, assume, by way of contradiction,
that there exists $s\in Z\setminus E$.
Then the `diagonal' $\{(r,sr):r\in G\}$ is a subset of
$Z^*\setminus E^*\subseteq (N\times G)\cup (G\times N)$.
It follows that for every $r\in G$, either $r\in N$ or $sr\in N$, which means that
$r\in s\an N$. Hence $G\subseteq N\cup s\an N$, which is a null set.
This contradiction shows that $Z\subseteq E$.
\qed
We note that for subsets $\cl S$ of $\vn(G)$ the relation
$\mathop{\mathrm{supp}}o (\cl{S})\subseteq (\mathop{\mathrm{supp}}G(\cl S))^*$
is in \cite[Lemma 4.1]{lt}.
In \cite{neurun} the authors define, for a closed subset $Z$ of $G$,
the set
\[
\cl B_Z(L^2(G)) = \{T\in\cl B(L^2(G)): \mathop{\mathrm{supp}}G(T)\subseteq Z\}.
\]
\begin{corollary}\label{rem38}
If $Z\subseteq G$ is closed, the set $\cl B_Z(L^2(G))$ consists of all $T\in\cl B(L^2(G))$
which are $\omega$-supported in $Z^*$; that is, $\cl B_Z(L^2(G))=\frak{M}_{\max}(Z^*)$.
In particular, this space is a reflexive jointly invariant subspace.
\end{corollary}
\proof If $T$ is $\omega$-supported in $Z^*$, then by Theorem \ref{th_compsa},
$\mathop{\mathrm{supp}}G(T)\subseteq Z$.
Conversely if $\mathop{\mathrm{supp}}G(T)\subseteq Z$ then
$\mathop{\mathrm{supp}}G(\cl U(T))\subseteq Z$ by Proposition \ref{propsame}.
But, by Theorem \ref{312},
$\mathop{\mathrm{supp}}o(\cl U(T)) \cong (\mathop{\mathrm{supp}}G(\cl U(T)))^*\subseteq Z^*$
and so $T$ is $\omega$-supported in $Z^*$. \qed
\begin{remark} \label{last}
The $\omega$-support
$\mathop{\mathrm{supp}}o(\cl A)$ of a set $\cl A$ of operators is more \lq sensitive'
than $\mathop{\mathrm{supp}}G(\cl A)$ in that
it encodes more information about $\cl A$. Indeed, $\mathop{\mathrm{supp}}G(\cl A)$
only depends on the
(weak* closed) jointly invariant subspace generated by $\cl A$, while
$\mathop{\mathrm{supp}}o(\cl A)$ depends on the (weak* closed) masa-bimodule
generated by $\cl A$.
\end{remark}
{\example
Let $G=\bb Z$ and $\cl A=\frak{M}_{\max}\{ (i,j):i+j \in\{0,1\}\}$.
The $\omega$-support of $\cl A$ is of course the two-line set $\{ (i,j):i+j \in\{0,1\}\}$,
while its $G$-support is $\bb Z$
which gives no information about $\cl A$.}
Indeed, if $E\subseteq\bb Z$ contains $\mathop{\mathrm{supp}}G(\cl A)$, then by Theorem
\ref{th_compsa} $E^*=\{(n,m)\in\bb Z\times\bb Z:m-n\in E\}$ must contain
$\{ (i,j):i+j \in\{0,1\}\}$. Thus for all $n\in\bb Z$, since $(-n,n)$ and $(-n,n+1)$ are
in $\mathop{\mathrm{supp}}o(\cl A)$ we have $n-(-n)\in E$ and $n+1-(-n)\in E$; hence
$\bb Z\subseteq E$.
\end{document}
\begin{document}
\title[Canonical Generating Classes]{Canonical Equivariant
Cohomology Classes Generating Zeta Values of Totally Real Fields}
\author[Bannai]{Kenichi Bannai$^{*\diamond}$}
\email{[email protected]}
\address{${}^*$Department of Mathematics, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kouhoku-ku, Yokohama 223-8522, Japan}
\address{${}^\diamond$Mathematical Science Team, RIKEN Center for Advanced Intelligence Project (AIP), 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan}
\author[Hagihara]{Kei Hagihara$^{\diamond*}$}
\author[Yamada]{Kazuki Yamada$^*$}
\author[Yamamoto]{Shuji Yamamoto$^{*\diamond}$}
\date{\today \quad (Version 1.08)}
\begin{abstract}
It is known that the special values at nonpositive integers of a Dirichlet $L$-function
may be expressed using the generalized Bernoulli numbers, which are defined by a
canonical generating function.
The purpose of this article is to consider the generalization of
this classical result to the case of Hecke $L$-functions of totally real fields.
Hecke $L$-functions may be expressed canonically as a finite
sum of zeta functions of Lerch type.
By combining the non-canonical multivariable generating functions
constructed by Shintani,
we newly construct a canonical class, which we call the \textit{Shintani generating class},
in the equivariant cohomology of an algebraic torus associated to the totally real field.
Our main result states that the specializations at torsion points of the derivatives of the
Shintani generating class give values at nonpositive integers of the zeta functions of Lerch type.
This result gives the insight that the correct framework in the higher dimensional case is to consider
higher equivariant cohomology classes instead of functions.
\end{abstract}
\thanks{This research is supported by KAKENHI 18H05233.
The topic of research initiated from the KiPAS program FY2014--2018 of the Faculty of Science and Technology at Keio University.}
\subjclass[2010]{11M35 (Primary), 11R42, 14L15, 55N91 (Secondary)}
\maketitle
\setcounter{tocdepth}{1}
\section{Introduction}\label{section: introduction}
It is classically known that the special values at nonpositive integers of a Dirichlet $L$-function
may be expressed using the generalized Bernoulli numbers, which are defined by a canonical rational generating function.
This simple but significant result is the basis of the deep connection between the special values of Dirichlet $L$-functions
and important arithmetic invariants pertaining to the abelian extensions of $\bbQ$.
In his ground-breaking article \cite{Shi76}, Shintani generalized this result to the case of Hecke $L$-functions of totally real fields.
His approach consists of two steps: The decomposition of a Hecke $L$-function into a finite sum
of zeta functions -- the \textit{Shintani zeta functions} --
associated to certain cones, and the construction of a multivariable generating function
for special values of each Shintani zeta function.
Although this method attained certain success, including the construction by Barsky \cite{Bar78}
and Cassou-Nogu\`es \cite{CN79} of the
$p$-adic $L$-functions for totally real fields, the decomposition step above requires
a choice of cones, and the resulting generating function
is non-canonical. A canonical object behind these generating functions remained to be found.
The purpose of this article is to construct geometrically such a canonical object, which we call the \textit{Shintani generating class},
through the combination of the following three ideas.
We let $g$ be the degree of the totally real field.
First, the Hecke $L$-functions are expressed canonically
in terms of the \textit{zeta functions of Lerch type} (cf.\ Definition \ref{def: Lerch}),
or simply \textit{Lerch zeta functions},
which are defined for finite additive characters parameterized by torsion points of a
certain algebraic torus of dimension $g$, originally considered by Katz \cite{Katz81},
associated to the totally real field.
Second, via a \v Cech resolution, the multivariable
generating functions constructed by Shintani
for various cones may beautifully be combined
to form the Shintani generating class, a canonical
cohomology class in the $(g-1)$-st cohomology group of the algebraic torus minus the identity.
Third, the class descends into the equivariant cohomology with respect to the action of totally positive units,
which successfully allows for nontrivial specializations of the class and its derivatives at torsion points.
Our main result, Theorem \ref{theorem: main}, states that the specializations at nontrivial torsion points of the derivatives of the
Shintani generating class give values at nonpositive integers of the Lerch zeta functions associated to the totally real field.
The classical result for $\bbQ$ that we generalize, viewed through our emphasis on Lerch zeta functions, is as follows.
The Dirichlet $L$-function may canonically be expressed as a finite linear combination of the classical
\textit{Lerch zeta functions}, defined by the series
\begin{equation}\label{eq: Lerch}
\cL(\xi, s)\coloneqq\sum_{n=1}^\infty \xi(n)n^{-s}
\end{equation}
for finite characters $\xi\in\Hom_\bbZ(\bbZ,\bbC^\times)$.
The series \eqref{eq: Lerch} converges for any $s\in\bbC$ such that $\Re(s)>1$ and has an analytic continuation
to the whole complex plane, holomorphic if $\xi\neq 1$.
When $\xi=1$, the function $\cL(1, s)$ coincides with the Riemann zeta function
$\zeta(s)$, hence has a simple pole at $s=1$.
A crucial property of the Lerch zeta functions is that they admit a canonical generating function $\cG(t)$,
which single-handedly captures, for \textit{all} nontrivial finite characters $\xi$, the values of the Lerch zeta functions at nonpositive integers.
Let $\bbG_m\coloneqq\Spec\bbZ[t,t^{-1}]$ be the multiplicative group, and let $\cG(t)$ be the rational function
\[
\cG(t)\coloneqq \frac{t}{1-t} \in \Gamma\bigl(U,\sO_{\bbG_m}\bigr),
\]
where $U\coloneqq\bbG_m\setminus\{1\}$.
We denote by $\partial$ the algebraic differential operator
$\partial\coloneqq t\frac{d}{dt}$, referred to as the ``magic stick'' in \cite{Kato93}*{1.1.7}.
Note that any $\xi\in\bbG_m(\bbC)$ corresponds to an additive character $\xi\colon\bbZ\rightarrow\bbC^\times$
given by $\xi(n)\coloneqq\xi^n$ for any $n\in\bbZ$.
Then we have the following.
\begin{theorem}\label{theorem: classical generating}
For any nontrivial torsion point $\xi$ of $\bbG_m$ and $k\in\bbN$, we have
\[
\cL(\xi,-k)=\partial^k\cG(t)\big|_{t=\xi}\in \bbQ(\xi).
\]
In particular, the values $\cL(\xi,-k)$ for any $k\in\bbN$ are all algebraic.
\end{theorem}
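Heuristically, for $|t|<1$ we have $\cG(t)=\sum_{n=1}^\infty t^n$, hence
\[
\partial^k\cG(t)=\sum_{n=1}^\infty n^k t^n,
\]
which is the series \eqref{eq: Lerch} at $s=-k$; the content of
Theorem \ref{theorem: classical generating} is that the rational function
$\partial^k\cG$ realizes these values at torsion points consistently with
the analytic continuation. For example, for $\xi=-1$ we have
$\cL(-1,s)=-\eta(s)$, where $\eta$ denotes the Dirichlet eta function, and indeed
$\cG(-1)=-\tfrac{1}{2}=-\eta(0)$, while $\partial\cG(t)=t/(1-t)^2$ gives
$\partial\cG(-1)=-\tfrac{1}{4}=-\eta(-1)$.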
The purpose of this article is to generalize the above result to the case of totally real fields.
Let $F$ be a totally real field of degree $g$, and
let $\cO_F$ be its ring of integers.
We denote by $\cO_{F+}$ the set of totally positive integers and
by $\Delta\coloneqq\cO_{F+}^\times$
the set of totally positive units of $F$.
Let $\bbT\coloneqq\Hom_\bbZ(\cO_F,\bbG_m)$
be an algebraic torus defined over $\bbZ$ which represents the functor
associating to any $\bbZ$-algebra $R$ the group
$\bbT(R)=\Hom_\bbZ(\cO_F,R^\times)$.
Such a torus was used by Katz \cite{Katz81} to reinterpret the construction by Barsky \cite{Bar78}
and Cassou-Nogu\`es \cite{CN79} of the $p$-adic $L$-function of totally real fields.
For the case $F=\bbQ$, we have $\bbT = \Hom_\bbZ(\bbZ,\bbG_m)=\bbG_m$, hence $\bbT$ is a
natural generalization of the multiplicative group.
For an additive character $\xi\colon\cO_F\rightarrow R^\times$
and $\varepsilon\in\Delta$, we let $\xi^\varepsilon$ be the character defined by
$\xi^\varepsilon(\alpha)\coloneqq\xi(\varepsilon\alpha)$ for any $\alpha\in\cO_F$.
This gives an action of $\Delta$ on the set of additive characters $\bbT(R)$.
We consider the following zeta function, which we regard as
the generalization of the classical Lerch zeta function to the case of totally real fields.
\begin{definition}\label{def: Lerch}
For any torsion point $\xi\in\bbT(\bbC)=\Hom_\bbZ(\cO_F,\bbC^\times)$, we define the
\textit{zeta function of Lerch type}, or simply the
\textit{Lerch zeta function}, by
\begin{equation}\label{eq: Shintani-Lerch}
\cL(\xi\Delta, s)\coloneqq\sum_{\alpha\in\Delta_{\xi}\backslash\cO_{F+}}\xi(\alpha)N(\alpha)^{-s},
\end{equation}
where $N(\alpha)$ is the norm of $\alpha$, and
$\Delta_{\xi}\subset\Delta$ is the isotropic subgroup of $\xi$, i.e.\
the subgroup consisting of
$\varepsilon\in\Delta$ such that $\xi^\varepsilon=\xi$.
\end{definition}
The notation $\cL(\xi\Delta, s)$ is used since \eqref{eq: Shintani-Lerch} depends only on the $\Delta$-orbit of $\xi$.
This series is known to converge for $\Re(s)>1$, and may be continued analytically
to the whole complex plane.
When the narrow class number of $F$ is \textit{one},
the Hecke $L$-function of a finite Hecke character of $F$
may canonically be expressed as a finite linear sum of $\cL(\xi\Delta, s)$ for suitable finite characters $\xi$
(see Proposition \ref{prop: Hecke}).
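For orientation, when $F=\bbQ$ we have $\cO_{F+}=\{1,2,3,\dots\}$,
$\Delta=\{1\}$ and $N(\alpha)=\alpha$, so \eqref{eq: Shintani-Lerch}
reduces to the classical series \eqref{eq: Lerch}:
\[
\cL(\xi\Delta,s)=\sum_{n=1}^\infty \xi(n)n^{-s}=\cL(\xi,s).
\]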
The action of $\Delta$ on additive characters
gives a right action of $\Delta$ on $\bbT$.
The structure sheaf $\sO_\bbT$ on $\bbT$ has a natural $\Delta$-equivariant structure in the sense of Definition
\ref{def: equivariant structure}.
Let $U\coloneqq\bbT\setminus\{1\}$.
Our main results are as follows.
\begin{theorem}\label{theorem: introduction}
\begin{enumerate}
\item \mbox{(Proposition \ref{prop: Shintani generating class}) }
There exists a canonical class
\[
\cG\in H^{g-1}(U/\Delta,\sO_\bbT),
\]
where $H^{g-1}(U/\Delta,\sO_\bbT)$ is the equivariant
cohomology of $U$ with coefficients in $\sO_\bbT$
(see \S\ref{section: equivariant} for the precise definition).
\item (Theorem \ref{theorem: main})
For any nontrivial torsion point $\xi$ of $\bbT$, we have a canonical
isomorphism
\[
H^{g-1}(\xi/\Delta_\xi,\sO_\xi)\cong\bbQ(\xi).
\]
Through this isomorphism, for any integer $k\geq 0$,
we have
\[
\cL(\xi\Delta,-k)=\partial^k\cG(\xi)\in\bbQ(\xi),
\]
where
$\partial\colon H^{g-1}(U/\Delta,\sO_\bbT)\rightarrow H^{g-1}(U/\Delta,\sO_\bbT)$
is a certain differential operator given in \eqref{eq: differential}, and
$\partial^k\cG(\xi)$ is the image of $\partial^k\cG$ with respect
to the specialization map
$
H^{g-1}(U/\Delta,\sO_\bbT)\rightarrow H^{g-1}(\xi/\Delta_\xi,\sO_\xi)
$
induced by the equivariant morphism $\xi\rightarrow U$.
\end{enumerate}
\end{theorem}
We refer to the class $\cG$ as the \textit{Shintani generating class}.
If $F=\bbQ$, then we have $\Delta=\{1\}$, and the class $\cG$ is simply the rational function
$\cG(t)=t/(1-t)\in H^0(U,\sO_{\bbG_m})=\Gamma(U,\sO_{\bbG_m})$. Thus Theorem \ref{theorem: introduction} (2)
coincides with Theorem \ref{theorem: classical generating} in this case.
For the case $F=\bbQ$ and also for the case of imaginary quadratic fields (see for example \cite{CW77}\cite{CW78}),
canonical algebraic generating functions
of special values of Hecke $L$-functions play a crucial
role in relating the special values of Hecke $L$-functions
to arithmetic invariants.
However, up until now, the discovery of such a \textit{canonical}
generating function has been elusive in the higher dimensional cases.
Our result suggests that the correct framework in the higher dimensional case is to consider
equivariant cohomology classes instead of functions.
Relation of our work to the results of
Charollois, Dasgupta, and Greenberg
\cite{CDG14} was kindly pointed out to us by Peter Xu.
As a related result, the relation of special values of Hecke $L$-functions of totally
real fields to the topological polylogarithm
on a torus was studied by Be\u\i linson, Kings, and Levin in \cite{BKL18}.
The polylogarithm for general commutative group schemes was constructed by Huber and Kings \cite{HK18}. Our discovery of the Shintani generating class arose from our attempt to explicitly describe
various realizations of the polylogarithm for the algebraic torus $\bbT$.
In subsequent research, we will explore the arithmetic implications of our insight
(see for example \cite{BHY00}).
\tableofcontents
The content of this article is as follows. In \S \ref{section: Lerch Zeta}, we will introduce the
Lerch zeta function $\cL(\xi\Delta, s)$ and show that
this function may be expressed non-canonically as a
linear sum of Shintani zeta functions.
We will then review the multivariable generating function
constructed by Shintani of the special values of Shintani zeta functions.
In \S \ref{section: equivariant}, we will define the equivariant cohomology of a scheme
with an action of a group, and will construct the equivariant \v Cech complex $C^\bullet(\frU/\Delta,\sF)$ which calculates
the equivariant cohomology of $U\coloneqq\bbT\setminus\{1\}$ with coefficients in an equivariant coherent sheaf $\sF$
on $U$. In \S \ref{section: Shintani Class}, we will define in Proposition \ref{prop: Shintani generating class}
the Shintani generating class $\cG$, and in Lemma \ref{lem: differential} give the definition of the derivatives.
Finally in \S \ref{section: specialization}, we will give the proof of our main theorem, Theorem \ref{theorem: main},
which coincides with Theorem \ref{theorem: introduction} (2).